[ih] Throwing packets away

John Day jeanjour at comcast.net
Tue Nov 3 04:37:33 PST 2009


I am not the best person on this topic, but let me suggest a couple 
of things I remember.  In the 1970s, it was reasonably common 
knowledge that congestion control in datagram networks was an open 
issue.  Early CYCLADES TS and later TCP had flow control to prevent 
the sending host from overrunning the receiving host.  At the time it 
was generally believed that congestion control would go in the 
network somehow.

(BTW, I have come to distinguish flow control from congestion control 
as follows:  Flow control is a feedback mechanism that is co-located 
with the resource being controlled.  Congestion control is a feedback 
mechanism that is not co-located with the resource being controlled. 
So early TS and TCP had flow control but no congestion control.)
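
To make the distinction concrete, here is a minimal sketch in Python 
(my own illustration, not any particular protocol): the receiver's 
advertised window is feedback generated where the buffer actually 
lives, while the sender's congestion window is only a guess about 
queues the sender cannot see.

    # A minimal illustration of the distinction; not any real protocol.

    class Receiver:
        """Flow control: feedback generated where the resource lives."""
        def __init__(self, buffer_size):
            self.buffer_size = buffer_size
            self.buffered = 0

        def advertised_window(self):
            # The receiver knows its own buffer exactly; no guessing.
            return self.buffer_size - self.buffered

    class Sender:
        """Congestion control: feedback about a resource (a router
        queue) that is NOT co-located with this code; its state can
        only be inferred from indirect signals such as loss."""
        def __init__(self):
            self.cwnd = 1  # congestion window: the sender's guess

        def on_ack(self):
            self.cwnd += 1  # no loss seen: probe for more capacity

        def on_loss(self):
            self.cwnd = max(1, self.cwnd // 2)  # back off on loss

        def send_limit(self, receiver):
            # Send at most min(flow-control limit, congestion guess).
            return min(receiver.advertised_window(), self.cwnd)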

I remember an RFC (I think it was, maybe not) by Vint (and maybe 
others, do you remember what it was?) arguing that we could not bound 
the amount of buffer space required in the switches, so discarding 
was the only real option.  (If memory serves, I *saw* this just prior 
to *seeing* that first TCP spec.  IOW, I won't swear it was written 
before the TCP spec, but I think so.  ;-)  So 1974 or 1975.)  There 
was considerable early work on the problem by Gerard LeLann at Rennes 
and others at IRIA, now INRIA.

There is also the 1977 IEN#1 from UCL, in which there is some 
discussion of the problem and a conjecture that something like 
ingress flow control may be the solution.  In 1979, the INRIA guys 
held a conference on the topic, Flow Control in Computer Networks. 
About this time, you also have quite a lot of work at DEC by former 
CYCLADES people and Raj Jain.  In the early 80s, you have the work by 
Nagle on the problems they were seeing in Ford's IP network.  Ford 
was operating over lower b/w links, so it was seeing the problem earlier.

So there was a fair amount of work on, and understanding of, the 
problem prior to the 1986 episode.

I have also conjectured that the delay in seeing severe congestion in 
the Internet after the switch from NCP to IP was due to two factors: 
the higher-bandwidth links, and the fact that in 1982 hosts attached 
to switches (as opposed to LANs) were using either 1822 or X.25, both 
of which had ingress flow control.  As those hosts disappeared and 
the population of LAN-attached hosts increased, what congestion 
control there was became ineffective and disappeared.

But there was plenty of evidence that the problem was out there and a 
fair amount of work on what to do about it.  It would be interesting 
to understand better why the Internet waited until the crisis was 
upon them before acting to adopt Jacobson's stop-gap measure.  As 
Vint alluded to in his response, control theory says one should put 
such mechanisms as close as possible to the resource being 
controlled.  The further away, the more hysteresis in the control. 
Jacobson puts them as far away as possible.  Of course, at the same 
time you want to avoid putting it at every hop.  (The trick with 
reductio ad absurdum is knowing when to stop.)  ;-)  So what was the 
happy medium?  We still don't know, really.  Everything seems to have 
been one extreme (Jacobson) or the other (MPLS, ATM).  We never have 
figured out how to mix the oil and water of connection and 
connectionless.  Or did we?
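
For concreteness, a rough sketch of that endpoint mechanism: a 
simplified slow-start with multiplicative decrease.  Jacobson's 
actual 1988 algorithm has considerably more to it (RTT variance 
estimation, timeouts, etc.), so treat this as a cartoon rather than 
a specification.

    # A cartoon of Jacobson-style endpoint control: the sender adjusts
    # its window using only the loss signal that reaches it, about as
    # far from the congested resource as a control point can be.

    def window_update(cwnd, ssthresh, loss_detected):
        """One round-trip update of a simplified slow-start sender."""
        if loss_detected:
            ssthresh = max(2, cwnd // 2)  # remember half the old window
            cwnd = 1                      # restart slow-start
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow-start: exponential growth
        else:
            cwnd += 1                     # additive increase per round
        return cwnd, ssthresh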

Did we choose the host-based approach because we were so 
performance-constrained in the routers at the time that people were 
afraid solutions such as Jain's and others' would overburden the 
routers?  That was true then, but it isn't now.

Take care,
John

At 5:27 -0500 2009/11/03, Vint Cerf wrote:
>we assumed that the transmission system(s) below IP level would not 
>be guaranteed (ethernet and packet radio and dynamically shared 
>satellite channels did not have the same properties that ARPANET 
>had). So we built into the TCP layer a re-transmission scheme. This 
>led to the need for re-sequencing and detection and discard of 
>duplicates. The flow control in TCP was adapted from the CYCLADES 
>sliding window mechanism. Gateways (before they were called routers) 
>had finite storage and because packet networks were NOT synchronous 
>end-to-end, there was the potential for congestion. You might have a 
>10 Mb/s interface on one end sending to a dialup device on the other 
>end. A good deal of experimentation went into the TCP round-trip 
>time estimates and packet loss was assumed to be a potential hazard. 
>Van Jacobson was a pioneer in the development of mechanisms to 
>discipline TCP flow control, and slow-start to detect when 
>congestion resulted in packet loss. The capacity of a dynamically 
>shared packet net varies continuously depending on the traffic 
>matrix. It is not like circuit switching where capacity is dedicated 
>and unused even when no traffic is flowing between pairs of end 
>points. Consequently, there was a need to adapt the flow control 
>window on a more-or-less continuous basis, and the potential for 
>congestion produced a concomitant packet loss potential. One of the
>interesting statistical observations was that fair allocation of 
>capacity among dynamic flows could be achieved by random discard of 
>packets (rather than discarding the packet that encountered the 
>buffer overflow). This was referred to variously as "random early 
>discard." Finally, a crude signal was devised to indicate congestion 
>back to the source when a packet was discarded for lack of buffer 
>space.
>
>
>
>vint
>
>On Nov 3, 2009, at 2:39 AM, John R. Levine wrote:
>
>>[ Feel free to point me at documents or archives I should have read if
>>  this is a FAQ. I have at least read RFC 635. ]
>>
>>I'm trying to understand the origins of the TCP/IP approach to 
>>congestion management by throwing excess packets away, which I 
>>gather was a pretty radical idea.
>>
>>It is my impression that the ARPAnet used a reservation approach so 
>>that the source end wasn't supposed to send a packet until the 
>>destination end said it had room for that packet, with resends 
>>primarily for line errors. TCP went to byte windows and congestion 
>>discarding partly to make it more adaptable to varying network 
>>speeds and partly to unify the virtual circuit management, and it 
>>took a fair amount of twiddling of the details of TCP to get good 
>>performance out of it.
>>
>>CYCLADES had a lot of these features.  Did the window and 
>>congestion discards come from there, or somewhere else, or some 
>>combination?
>>
>>Signed,
>>Confused in Trumansburg
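
On the "random early discard" Vint describes above, a minimal sketch 
of the idea: drop with a probability that grows as the queue fills, 
rather than dropping only the packet that actually overflows the 
buffer.  The thresholds below are illustrative, and real RED works 
on an averaged queue length rather than the instantaneous one shown 
here.

    import random

    # A minimal sketch of random early discard: probabilistic drops as
    # the queue fills, instead of pure tail drop.  The thresholds are
    # illustrative, not parameters of any deployed implementation.
    MIN_THRESH = 50    # below this occupancy, never drop
    MAX_THRESH = 100   # at or above this occupancy, always drop

    def admit(queue_len):
        """Return True if an arriving packet should be enqueued."""
        if queue_len < MIN_THRESH:
            return True
        if queue_len >= MAX_THRESH:
            return False
        # Drop probability rises linearly between the thresholds, so
        # heavy flows statistically absorb more drops: rough fairness.
        drop_prob = (queue_len - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() >= drop_prob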



