[ih] Throwing packets away
Vint Cerf
vint at google.com
Tue Nov 3 02:27:18 PST 2009
We assumed that the transmission system(s) below the IP level would not be
guaranteed (Ethernet, packet radio, and dynamically shared satellite
channels did not have the same properties that the ARPANET had). So we
built into the TCP layer a re-transmission scheme. This led to the
need for re-sequencing and for detection and discard of duplicates.
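
To make that concrete, here is a toy Python sketch of the receiver side
of that problem: re-sequencing by sequence number and discarding
duplicates. The class name, the byte-numbered segments, and the API are
invented for illustration; this is not any historical implementation.

    # Toy receiver: re-sequences out-of-order segments and discards duplicates.
    # Sequence numbers count bytes, as in TCP, but everything else is simplified.

    class ReassemblyBuffer:
        def __init__(self):
            self.next_expected = 0      # next byte we can deliver in order
            self.out_of_order = {}      # seq -> payload held for later
            self.delivered = bytearray()

        def receive(self, seq, payload):
            if seq + len(payload) <= self.next_expected:
                return "duplicate discarded"        # already delivered: drop it
            if seq > self.next_expected:
                self.out_of_order[seq] = payload    # hole before it: hold it
                return "buffered out of order"
            # In-order (or overlapping) data: deliver the new bytes...
            self.delivered += payload[self.next_expected - seq:]
            self.next_expected = seq + len(payload)
            # ...then drain any buffered segments that are now contiguous.
            while self.next_expected in self.out_of_order:
                chunk = self.out_of_order.pop(self.next_expected)
                self.delivered += chunk
                self.next_expected += len(chunk)
            return "delivered"

    buf = ReassemblyBuffer()
    print(buf.receive(5, b"world"))   # buffered out of order
    print(buf.receive(0, b"hello"))   # delivered, drains the buffered segment
    print(buf.receive(0, b"hello"))   # duplicate discarded
    print(bytes(buf.delivered))       # b'helloworld'
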
The flow control in TCP was adapted from the CYCLADES sliding window
mechanism.
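
A minimal sketch of the sliding-window idea in general terms (the window
size and method names are invented; this is not the CYCLADES or TCP code):

    # Toy sender-side sliding window: at most `window` unacknowledged packets
    # may be outstanding at once; an ACK slides the window forward.

    class SlidingWindowSender:
        def __init__(self, window=4):
            self.window = window
            self.next_to_send = 0       # lowest sequence number not yet sent
            self.oldest_unacked = 0     # lowest sequence number not yet acked

        def can_send(self):
            return self.next_to_send - self.oldest_unacked < self.window

        def send_one(self):
            assert self.can_send()
            seq = self.next_to_send
            self.next_to_send += 1
            return seq                  # a real stack would transmit packet `seq` here

        def ack(self, ack_no):
            # Cumulative ACK: everything below ack_no is acknowledged.
            self.oldest_unacked = max(self.oldest_unacked, ack_no)

    sender = SlidingWindowSender(window=4)
    while sender.can_send():
        print("sent", sender.send_one())   # sends 0, 1, 2, 3 and then stalls
    sender.ack(2)                          # acking 0 and 1 opens the window again
    print("can send now:", sender.can_send())
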
Gateways (before they were called routers) had finite storage, and
because packet networks were NOT synchronous end-to-end, there was the
potential for congestion. You might have a 10 Mb/s interface on one end
sending to a dial-up device on the other end.
A good deal of experimentation went into the TCP round-trip time
estimates, and packet loss was assumed to be a potential hazard.
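
For the round-trip-time estimation, the classic approach is an
exponentially weighted moving average of the measured samples, with a
safety factor applied to get the retransmission timeout. A sketch (the
constants are the commonly cited textbook values, not necessarily what
any particular experiment used):

    # Smoothed RTT estimate and retransmission timeout (RTO), roughly in the
    # style of early TCP specs: an exponentially weighted moving average of
    # measured round-trip times.

    ALPHA = 0.875   # weight given to the history
    BETA = 2.0      # safety factor applied to get the timeout

    def update_rtt(srtt, sample):
        """Fold one measured round-trip time into the smoothed estimate."""
        if srtt is None:
            return sample               # first measurement seeds the estimate
        return ALPHA * srtt + (1 - ALPHA) * sample

    srtt = None
    for sample in [0.30, 0.32, 0.90, 0.31]:   # seconds; one spike from congestion
        srtt = update_rtt(srtt, sample)
        rto = BETA * srtt
        print(f"sample={sample:.2f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")
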
Van Jacobson was a pioneer in the development of mechanisms to discipline
TCP flow control, and of slow-start to detect when congestion resulted in
packet loss.
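
One way to read the slow-start remark: the congestion window starts
small, grows with each ACK, and is cut back when a loss is taken as a
congestion signal. A toy sketch (the thresholds and the reduction policy
are simplifications, not Jacobson's exact algorithms):

    # Toy congestion window: slow start doubles the window each round trip
    # (by growing one segment per ACK); a loss is treated as a congestion
    # signal and the window is cut back.

    class CongestionWindow:
        def __init__(self):
            self.cwnd = 1.0         # congestion window, in segments
            self.ssthresh = 64.0    # slow-start threshold, in segments

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1.0                # slow start: exponential per RTT
            else:
                self.cwnd += 1.0 / self.cwnd    # congestion avoidance: ~+1 per RTT

        def on_loss(self):
            # Loss interpreted as congestion: remember half the window and restart.
            self.ssthresh = max(self.cwnd / 2.0, 2.0)
            self.cwnd = 1.0

    w = CongestionWindow()
    for _ in range(6):
        w.on_ack()
    print("cwnd after 6 ACKs:", w.cwnd)     # 7.0 segments
    w.on_loss()
    print("cwnd after a loss:", w.cwnd, "ssthresh:", w.ssthresh)
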
The capacity of a dynamically shared packet net varies continuously
depending on the traffic matrix. It is not like circuit switching, where
capacity is dedicated and goes unused when no traffic is flowing between
pairs of end points. Consequently, there was a need to adapt the flow
control window on a more-or-less continuous basis, and the potential for
congestion produced a concomitant potential for packet loss.
One of the interesting statistical observations was that fair allocation
of capacity among dynamic flows could be achieved by random discard of
packets (rather than simply discarding the packet that encountered the
buffer overflow). This was referred to variously as "random early
discard" or "random early detection" (RED). Finally, a crude signal was
devised to indicate congestion back to the source when a packet was
discarded for lack of buffer space.
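
A rough sketch of both ideas together: arriving packets are dropped at
random with a probability that rises with the averaged queue depth
(rather than only dropping the packet that overflows), and each drop
triggers a crude notification back to the source, standing in for what
the description suggests was the ICMP Source Quench message. All
thresholds, constants, and the callback are invented for illustration.

    import random

    # Toy random-early-discard queue: instead of only dropping the packet that
    # overflows the buffer, arrivals are dropped at random with a probability
    # that rises as the (averaged) queue depth grows, and each drop triggers a
    # crude congestion notification back to the source.

    class REDQueue:
        def __init__(self, capacity=100, min_thresh=20, max_thresh=80, max_p=0.1):
            self.queue = []
            self.capacity = capacity
            self.min_thresh = min_thresh
            self.max_thresh = max_thresh
            self.max_p = max_p
            self.avg = 0.0              # exponentially averaged queue depth

        def enqueue(self, packet, notify_source):
            self.avg = 0.9 * self.avg + 0.1 * len(self.queue)
            if len(self.queue) >= self.capacity:
                drop = True                                   # hard overflow
            elif self.avg <= self.min_thresh:
                drop = False                                  # short queue: accept
            else:
                # Drop probability ramps from 0 to max_p between the thresholds.
                frac = min(1.0, (self.avg - self.min_thresh) /
                                (self.max_thresh - self.min_thresh))
                drop = random.random() < frac * self.max_p
            if drop:
                notify_source(packet)    # the "crude signal" back to the sender
                return False
            self.queue.append(packet)
            return True

    q = REDQueue()
    dropped = []
    for i in range(500):
        q.enqueue(("src", i), notify_source=lambda pkt: dropped.append(pkt))
        if q.queue and i % 3 == 0:
            q.queue.pop(0)               # drain slower than the arrival rate
    print(f"queued={len(q.queue)} dropped={len(dropped)}")
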
vint
On Nov 3, 2009, at 2:39 AM, John R. Levine wrote:
> [ Feel free to point me at documents or archives I should have read if
> this is a FAQ. I have at least read RFC 635. ]
>
> I'm trying to understand the origins of the TCP/IP approach to
> congestion management by throwing excess packets away, which I
> gather was a pretty radical idea.
>
> It is my impression that the ARPAnet used a reservation approach so
> that the source end wasn't supposed to send a packet until the
> destination end said it had room for that packet, with resends
> primarily for line errors; that TCP went to byte windows and congestion
> discarding, partly to make it more adaptable to varying network speeds
> and partly to unify the virtual-circuit management; and that it took a
> fair amount of twiddling of the details of TCP to get good performance
> out of it.
>
> CYCLADES had a lot of these features. Did the window and congestion
> discards come from there, or somewhere else, or some combination?
>
> Signed,
> Confused in Trumansburg