[ih] IP over wireless [was: booting linux on a 4004]
Brian E Carpenter
brian.e.carpenter at gmail.com
Tue Oct 15 13:12:25 PDT 2024
Barbara,
I've been communing with myself about how to reply to this. When I was trying to explain to students why Ethernet is like it is, of course like everybody I started with Aloha before talking about yellow cable and CSMA/CD. That is of course largely irrelevant today, but we still use the same frame format. Ethernet (partly because it descended from Aloha) natively supports broadcast.
Then there is Wi-Fi, which has the goal of emulating Ethernet, so it must support broadcast even though doing so is disastrous and has many bad consequences. Short explanation: if you run a large enough Wi-Fi network, it will end up saturated by multicast traffic.
We don't even have that properly documented in an RFC today, but it's on its way:
https://www.ietf.org/archive/id/draft-ietf-6man-ipv6-over-wireless-06.html
That said, because IPv6 was designed in the Ethernet era, IPv6 over Wi-Fi works in a relatively simple way, and 1500 byte packets are the norm. So small Wi-Fi networks are fine.
Then there is the whole topic of low-power wireless networks where 1500 bytes is certainly not the norm and some kind of adaptation layer is needed. (I'm no expert in that area, but as far as I can tell all the effort has gone into IPv6 rather than IPv4.) You'll find references to that work in the above draft, but I think the key one is RFC 4944 about IPv6 over IEEE 802.15.4. There's a whole section about the adaptation layer:
https://www.rfc-editor.org/rfc/rfc4944.html#section-5
So, the wireless people have simply accepted the magic 1280 rule and adapted to it.
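To make that adaptation-layer arithmetic concrete: RFC 4944 fragments a minimum-MTU IPv6 packet across many small IEEE 802.15.4 frames. The sketch below is my own illustration, not code from the thread; the usable link payload (102 bytes) is an assumption, since the real value depends on MAC addressing and security overhead.

```python
# Hedged sketch of RFC 4944-style fragmentation arithmetic: roughly how
# many 802.15.4 frames does one minimum-MTU (1280-byte) IPv6 packet need?
# ASSUMED: 102 bytes of usable link payload per frame (varies in practice).

FRAG1_HDR = 4   # first-fragment header, RFC 4944 section 5.3
FRAGN_HDR = 5   # subsequent-fragment header

def fragment_count(datagram_size, link_payload):
    # A datagram that fits in one frame is sent unfragmented.
    if datagram_size <= link_payload:
        return 1
    # Non-final fragment payloads must be multiples of 8 octets, because
    # the datagram_offset field counts in 8-byte units. (The last fragment
    # need not be 8-aligned; using the rounded-down capacity for it too is
    # slightly conservative, which is fine for a sketch.)
    first_cap = (link_payload - FRAG1_HDR) // 8 * 8
    rest_cap = (link_payload - FRAGN_HDR) // 8 * 8
    remaining = datagram_size - first_cap
    count = 1
    while remaining > 0:
        remaining -= rest_cap
        count += 1
    return count

print(fragment_count(1280, 102))  # one 1280-byte packet -> 14 frames
```

So a single minimum-MTU packet costs on the order of a dozen link-layer frames, which is why the low-power wireless people put so much effort into the adaptation layer (and into header compression).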
As to the origin of that magic rule, there's one view of its origin at
https://mailarchive.ietf.org/arch/msg/int-area/Wpv1jT6UQt6KlzrdIoZSSLF-7XA
(RFC 1883 specified 576 bytes in 1995. RFC 2460 raised it to 1280 in 1998.)
Regards
Brian Carpenter
On 11-Oct-24 10:09, Barbara Denny via Internet-history wrote:
> Thanks Brian. I could see how the number was perhaps derived. I don't know how much it was vetted with different types of networking folks. To me, the assumption that an adaptation layer will just handle this comes more from people who are used to wired networks than wireless ones.
>
> As I am sure you know, there were issues about IPv6 discussions when they started in the IETF. I experienced it when I had an opportunity to be at an IETF meeting at the time. For those who don't know this history and are interested, I am sure Steve Deering relayed his experience at the time.
>
> barbara
>
> On Thursday, October 10, 2024 at 01:49:41 PM PDT, Brian E Carpenter <brian.e.carpenter at gmail.com> wrote:
>
> Barbara,
>
> If you mean how the 1280-byte minimum MTU size for IPv6 was chosen, it was 1500 minus N, where N was an arbitrary choice of how many levels of IPv6-in-IPv6 encapsulation would be possible within one 1500 byte packet, plus a bit of spare. So there was some hand waving involved.
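One plausible reading of Brian's arithmetic, sketched below. The encapsulation depth N = 5 is my assumption, not a documented design choice; the point is only that 1500 minus a few fixed 40-byte IPv6 headers, minus some spare, lands at 1280.

```python
# Hedged back-of-the-envelope sketch of the 1280 derivation.
# ASSUMED: N = 5 levels of IPv6-in-IPv6 encapsulation.

IPV6_HEADER = 40      # bytes, fixed IPv6 header size
ETHERNET_MTU = 1500
LEVELS = 5            # assumed encapsulation depth

headroom = LEVELS * IPV6_HEADER              # 200 bytes for tunnel headers
spare = ETHERNET_MTU - headroom - 1280       # 20 bytes left over
print(headroom, spare)                       # 200 20
```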
>
> It was always assumed that lower layers that couldn't carry 1280 natively would provide an adaptation layer.
>
> We are still very sad that PMTUD doesn't work reliably, because that means that the lowest common denominator of 1280 is often used when we could do much better.
>
> Regards
> Brian Carpenter
>
> On 11-Oct-24 06:55, Barbara Denny via Internet-history wrote:
>> Just a FYI
>> I can see how IPv6 requirements in this area might be problematic for packet radio networks. I will admit my knowledge is old so things may have changed.
>> I also don't know how the numbers in the IPV6 specification were selected.
>>
>> barbara
>> On Thursday, October 10, 2024 at 09:56:26 AM PDT, Barbara Denny via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>> Reminds me of how much effort went into selecting a packet size and the link-layer coding in a packet radio environment. If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARS radio. I looked recently to see if I could find a writeup of this particular effort but couldn't find anything quickly (our discussion regarding MTU got me thinking on this topic). His web page at Clemson does mention that his current research is network coding for packet radio networks. :-)
>> barbara
>> On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>> Hi Greg:
>>
>> Thanks for correcting my faulty memory. As partial recompense for being
>> wrong, I'll note I have a partial set of the end2end-interest archives if
>> there are questions. As recompense for my error, I offer the following
>> tidbit:
>>
>> Posted-Date: Tue, 31 Mar 87 17:58:17 PST
>> To: Craig Partridge <craig at LOKI.BBN.COM>
>> Cc: end2end-tf at venera.isi.edu
>> Subject: Re: Thinking about Congestion
>> In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST.
>> Date: Tue, 31 Mar 87 17:58:17 PST
>> From: Van Jacobson <van at lbl-csam.ARPA>
>>
>> Craig -
>>
>> Your note pushed one of my buttons: Sending a lot of data into a congested network doesn't improve transmit efficiency any more than disconnecting the collision detect wire on your ethernet would. Either action makes everyone on the net, including you, lose.
>>
>> There is always an optimum window size but computing it requires knowing how packet loss scales with window size. To first order, the scaling will be the exponential (1 - A)^W where W is the window size and A is a network dependent constant (0 < A < 1). For a long haul net, no-loss throughput will scale with window size like W/T where T is the round trip time. The effective throughput will go like the product of these two terms. For small W the linear term dominates and you see linear throughput increase with increasing window size. For large W the loss term dominates and you see exponential throughput decrease with increasing window size. For small A (low loss rates), the optimum window size will scale like -1/log(1-A).
>>
>> It's possible to do a more exact analysis. A few years ago a friend of mine was working on a tcp/ip implementation for a well known supercomputer manufacturer. At the time there was a huge debate in the company on whether to "modify" tcp. It seems that some cretin in management had decided that the only way to get good network performance was to do huge transfers, where "huge" was much larger than the 64K allowed by the tcp window size field. I was simulating very high performance fiber optic nets at the time and found this argument to be completely at odds with my results. I was so incensed that I wrote a little 5 page paper for my friend titled "Some notes on choosing an optimum transfer size" that started out:
>>
>> "The choice of network transfer size seems to have been driven by the idea that ``bigger is better''. While this reflects a good, American upbringing, it bears only faint resemblance to reality. In the unlikely event that a future decision is made on rational grounds, this note describes the mathematical basis for choice of window and transfer size."
>>
>> I'm afraid it went on in much the same tone (I must have been drunk when I wrote it) but I did summarize how to apply Erlang's and Hill's loss functions to tcp (the same analysis would apply to rdp - the only difference is rdp gains a factor of two in throughput over tcp at very high loss rates). If you're interested in the math, I'd be glad to send you extracts from this thing or the references I used.
>>
>> - Van
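Van's first-order scaling argument above can be checked numerically. This is my own hedged sketch (using his symbols W, A, T), not code from the thread: effective throughput is (W/T)(1 - A)^W, and its maximum should sit near the closed-form optimum W* = -1/log(1 - A).

```python
import math

def effective_throughput(w, loss_scale, rtt=1.0):
    # Van's first-order model: (W / T) * (1 - A)^W
    return (w / rtt) * (1.0 - loss_scale) ** w

A = 0.01  # assumed network-dependent constant, 0 < A < 1

# Closed-form optimum from the note: W* = -1 / log(1 - A)
w_star = -1.0 / math.log(1.0 - A)

# Numerical check: scan window sizes in steps of 0.1 and pick the best.
candidates = [w / 10.0 for w in range(1, 5001)]
w_best = max(candidates, key=lambda w: effective_throughput(w, A))

print(round(w_star, 1), w_best)  # both land near 99.5
```

For A = 0.01 the optimum window is about 99.5 packets; past that point the exponential loss term overwhelms the linear W/T term, which is exactly the "bigger is better" fallacy his note was attacking.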
>>
>>
>> On Thu, Oct 10, 2024 at 12:47 AM Greg Skinner <gregskinner0 at icloud.com>
>> wrote:
>>
>>>
>>> On Oct 5, 2024, at 5:42 PM, Craig Partridge <craig at tereschau.net> wrote:
>>>
>>>
>>> As someone who was in touch with Raj/KK and Van/Mike during the
>>> development of congestion control. They were unaware of each other's work
>>> until spring of 1988, when they realized they were doing very similar
>>> stuff. I think someone (Dave Clark) in the End2End Research Group became
>>> aware of Raj & KK's work and invited them to come present to an E2E meeting
>>> in early 1988 and E2E (more than IETF) was where Van was working out the
>>> kinks in his congestion control work with Mike.
>>>
>>> Craig
>>>
>>>
>>> I looked into this a bit, and discovered that Raj/KK and Van/Mike were all
>>> at the 6th IETF, which took place in April 1987. [1] (It was a joint
>>> meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards
>>> groups.) Both teams presented their work at the meeting.
>>>
>>> On Sat, Oct 5, 2024 at 5:34 PM John Day via Internet-history <
>>> internet-history at elists.isoc.org> wrote:
>>>
>>>> The work of Jain’s DEC team existed at the same time and I believe
>>>> Jacobson’s original paper references it.
>>>>
>>>> As I said, at least it does congestion avoidance without causing
>>>> congestion (unless under extreme conditions).
>>>>
>>>> I suspect that the main reason Jacobson didn't adopt it was that they
>>>> were trying to maximize the data rate by running as close to congestion
>>>> collapse as they could, while Jain's work attempted to balance the
>>>> trade-off between throughput and response time. But that is just policy;
>>>> they still could have used ECN to keep from being predatory, marking
>>>> packets only once the queue is full. That is what TCP's use of ECN does
>>>> now. Of course, I think that is a bad choice because it generates lots
>>>> of retransmissions.
>>>>
>>>>
>>> Some of the reasons why Van/Mike took the approach they did were discussed
>>> in an email message Van sent to the tcp-ip list. It included some
>>> discussions that had taken place on the ietf and end2end-interest lists.
>>> [2] IMO, it's unfortunate that the existing archives of those lists are
>>> incomplete, because otherwise we would be able to read the points of view
>>> expressed by the list participants.
>>>
>>> When I asked Jain why his wasn’t adopted, he said he isn’t an implementor,
>>>> but an experimenter.
>>>>
>>>> But it is not uncommon to be so focused on the immediate problem as to
>>>> fail to notice the system implications.
>>>>
>>>
>>> John, what could they have done that would have met your criteria and
>>> yielded a deployable solution to the congestion problems existing at that
>>> time in the timeframe that it was needed? IMO, their paper should be
>>> assessed in that context.
>>>
>>> --gregbo
>>>
>>> [1] https://www.ietf.org/proceedings/06.pdf
>>> [2] https://ee.lbl.gov/tcp.html
>>>
>>>
>>
>