[ih] Arpanet raw messages, voice, and TCP
Michael Greenwald
mbgreen at seas.upenn.edu
Fri Nov 27 10:31:54 PST 2009
Noel,
[I don't have much time to read this list, unfortunately (seems
fascinating), but in my Thanksgiving weekend skim I came across your note].
I'm not sure if this is what you are asking about/trying to remember:
On Multics the issue with fragmentation was that other hosts did not
retransmit using the same IP ID, so we wasted "lots" of memory (relative
to those days) holding onto fragments of incomplete packets. (On the
Multics transmission side we were careful to reuse the IP ID for all the
protocols we had implemented, and the network_ DIM exposed the IP ID so
that user-implemented IP protocols could also force a retransmission to
carry the same IP ID as the original. But too many other implementations
retransmitted under new IP IDs, so on the reassembly side we could be
hurting when lots of fragments were dropped.)
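
[To make the ID-reuse point concrete: a reassembler matches fragments
purely on the (source, destination, protocol, identification) tuple, so
a retransmission sent under a fresh IP ID can never complete a partially
assembled datagram; the old fragments just sit in memory until they time
out. A minimal sketch of that matching logic in C follows; the names and
the linear scan are illustrative, not Multics code:

    #include <stdint.h>
    #include <stddef.h>

    /* Key identifying one in-progress reassembly (RFC 791 rules). */
    struct frag_key {
        uint32_t src;    /* source IP address */
        uint32_t dst;    /* destination IP address */
        uint8_t  proto;  /* IP protocol number */
        uint16_t id;     /* IP identification field */
    };

    struct reasm_ctx {
        struct frag_key   key;
        struct reasm_ctx *next;
        /* ... fragment list, bytes held, reassembly timer ... */
    };

    /* Linear scan over the open reassembly contexts.  A datagram
     * retransmitted under a *new* IP ID fails this match, so its
     * fragments open a second context instead of completing the
     * first; both sets of buffers are then held until a timeout. */
    struct reasm_ctx *reasm_find(struct reasm_ctx *head,
                                 const struct frag_key *k)
    {
        for (struct reasm_ctx *c = head; c != NULL; c = c->next)
            if (c->key.src == k->src && c->key.dst == k->dst &&
                c->key.proto == k->proto && c->key.id == k->id)
                return c;
        return NULL;
    }

The linear scan also shows why the lookup got slow once the list of
incomplete datagrams grew, as described below.]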

Also, in the face of losses, it was much more efficient for TCP to send
unfragmented packets: a retransmission resent only the lost packet,
while with a fragmented packet a retransmission resent all of the
fragments, even those that had already arrived (lose one fragment of a
six-fragment datagram and all six go over the wire again).

I also had some bad experience with highly lossy lines --- the
implementation wasn't optimized for lots of lost fragments --- so when
the number of incomplete packets started getting large, the lookup to
find a match was relatively slow (this would have been easy to fix, but
it wasn't a priority). Finally, it could happen that we took an extra
process switch per fragment, so fragmented packets were painfully
expensive. There were probably other reasons, but these stand out in my
memory as what convinced/prejudiced me at the time to believe that TCP
should try hard to avoid sending fragmented IP packets.

Finally, I don't remember whether by the time of the NCP cutoff there
really were hosts out there that didn't implement reassembly, or
routers that didn't fragment, whether or not it was a PITA. Early on,
yes (there was some bakeoff circa '79 or even '80 when, I think, only
Multics & UCLA had both fragmentation & reassembly working correctly),
but I can't recall any specific host or router that failed to
interoperate because it didn't implement reassembly or fragmentation,
respectively. Could be my memory that's incorrect, though.

On 11/26/09 10:09 AM, Noel Chiappa wrote:
> > From: Matthias Bärwolff <mbaer at cs.tu-berlin.de>
>
> >> a maximum packet size of ~120 bytes would obviously not have been
> >> that much use for TCP/IP in general.
>
> > I don't understand this. Even the 1974 Cerf/Kahn specification of
> > TCP knew of "breaking up messages into segments"
> > ...
> > While hosts were eventually expected to accept IP packets of at
> > least 576 bytes, they sure can cope with smaller packets ...
> > Why then would 126 bytes foreclose experimenting with TCP/IP?
>
> Do note that I didn't say 'would not work', I said "not .. that much use"!
>
> The reason is that fragmentation turned out to just not work very well,
> because packets which were fragmented were much less 'reliable' (in the
> sense of eventually being delivered complete, to the application). Any
> time packets were being fragmented, things seemed to just not work very
> well. It is for that reason that we eventually added Path MTU Discovery,
> so that fragmentation could be avoided. (Note that IPv6 left out end-end
> fragmentation altogether, for the same reason.)
>
> Why exactly fragmentation didn't work so well I don't recollect very well
> (if we ever knew for sure exactly why). I suspect that the network back
> then was 'lossier' (partly due to poor congestion control causing
> congestion drops, partly due to flakier transmission systems). Since
> end-end retransmission schemes don't work so well when loss rates go up
> (typically there's a 'knee' where performance goes over a cliff), with
> that many more packets involved for a given amount of data, we may have
> gone over the 'knee'.
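
[The 'knee' argument is easy to see numerically: if each fragment is
dropped independently with probability p, a datagram split into n
fragments arrives complete only with probability (1-p)^n, and without
IP ID reuse one lost fragment costs the whole datagram. A quick
back-of-the-envelope sketch, with an assumed 5% per-fragment loss rate:

    #include <stdio.h>
    #include <math.h>

    /* Probability that an n-fragment datagram arrives complete when
     * each fragment is independently dropped with probability p. */
    static double p_complete(double p, int n)
    {
        return pow(1.0 - p, n);
    }

    int main(void)
    {
        double p = 0.05;   /* assumed 5% per-fragment loss */
        for (int n = 1; n <= 6; n++)
            printf("fragments=%d  delivered=%4.1f%%\n",
                   n, 100.0 * p_complete(p, n));
        return 0;
    }

At 5% loss a six-fragment datagram gets through only about 73% of the
time, which is exactly the regime where end-to-end retransmission
schemes start to collapse.]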
>
> (Some hosts/routers didn't actually implement fragmentation and/or
> re-assembly, particularly the latter, as it was a PITA to code, so that
> was a problem too.)
>
> Of course, with the typical data packet being relatively large (I don't
> know the average packet size for FTP or email, but surely it was at least
> 576), with the ~120 byte (991 bits, to be accurate) Uncontrolled packets,
> of which 20 at least would be the IP header, you're looking at 6 fragments
> for each 576 byte data packet -> very poor performance.
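
[The arithmetic checks out once you account for the 8-byte
fragment-offset granularity: 991 bits is 123 usable bytes, 20 of them
IP header, so each non-final fragment carries only 96 data bytes (103
rounded down to a multiple of 8), and a 576-byte datagram (556 data
bytes) splits into ceil(556/96) = 6 fragments. A small sanity-check
sketch; the numbers are mine, not anything from 1822:

    #include <stdio.h>

    /* Fragments needed to carry one IP datagram over a link whose
     * frames hold at most mtu_bytes of IP datagram.  Non-final
     * fragments must carry a multiple of 8 data bytes (RFC 791). */
    static int frag_count(int datagram_bytes, int mtu_bytes)
    {
        const int iphdr = 20;                      /* no options */
        int data     = datagram_bytes - iphdr;
        int per_frag = ((mtu_bytes - iphdr) / 8) * 8;
        return (data + per_frag - 1) / per_frag;   /* ceiling */
    }

    int main(void)
    {
        /* 991-bit Uncontrolled message -> 123 bytes of datagram */
        printf("%d fragments\n", frag_count(576, 991 / 8));
        return 0;
    }

This prints 6, matching the estimate above.]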
>
>
> > unlike ordinary single-packet messages they would not be subject to
> > the source-destination-IMP ack/retransmit facility.
>
> I'm not sure there was such a thing (see previous message).
>
> Whether the frames of Uncontrolled messages were subject to the normal
> IMP-IMP retransmission I don't know, and 1822 doesn't say, but my _guess_
> is that they would have been (more work to treat them differently than
> to handle them like any other IMP-IMP frame).
>
> > surely it must have been more fun playing around with higher level
> > end-to-end reliability in a network that actually does drop a packet
> > sometimes.
>
> ROTFLMAO! As Vint indicated, not having lost packets was _not_ a problem
> we had! :-)
>
> Noel