[ih] booting linux on a 4004

Greg Skinner gregskinner0 at icloud.com
Sat Oct 5 15:16:03 PDT 2024


Unfortunately, I only have time for a quick response.  Also unfortunately, neither the ietf nor end2end-interest archives go back far enough to give more of an idea of what VJ, Mike Karels, etc. considered in their approach to congestion avoidance and control.  There was some discussion of it on the tcp-ip list, which can be accessed via Google Groups. [1] [2] I could go into more detail, but I don’t think they intended their approach to somehow set a paradigm for how congestion avoidance and control would be addressed at (modern) Internet scale.  Something to consider here is that, with the limited resources they had, they needed (and were able) to come up with something that worked on networks based on the ARPAnet IMP technology of that time, which many Internet users still relied on.

--gregbo

[1] https://groups.google.com/g/mod.protocols.tcp-ip
[2] https://groups.google.com/g/comp.protocols.tcp-ip

> On Oct 2, 2024, at 4:41 PM, John Day via Internet-history <internet-history at elists.isoc.org> wrote:
> 
> Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, and as long as the focus is on ramming as much through as possible, they are headed in the wrong direction. Also, putting congestion control in TCP maximizes time to notify, and it thwarts doing QoS. One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well before it gets bad.
> 
> We are back to the reality that congestion avoidance is needed in the internet layer, the network layer, and the data link layer (bridges are relays). And 802 realizes that: it is using an innovative modification of IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.)
> 
> The case I was trying to make was TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer.
> 
> John
> 
>> On Oct 2, 2024, at 17:51, Vint Cerf <vint at google.com> wrote:
>> 
>> John,
>> 
>> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. 
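[Editor's note: a minimal sketch, not from any particular TCP implementation, of the idea Vint describes: estimating the path's delivery rate from the spacing of acknowledgements rather than from round-trip time. The function name and the sample numbers are illustrative assumptions.]

```python
# Hedged sketch (illustrative, not any specific stack's code): estimate
# delivery rate from acknowledgement intervals rather than round-trip time.
def delivery_rate(acks):
    """acks: list of (timestamp_seconds, cumulative_bytes_acked) pairs,
    in arrival order. Returns estimated delivery rate in bytes/second."""
    (t0, b0), (t1, b1) = acks[0], acks[-1]
    if t1 <= t0:
        raise ValueError("need at least two distinct ACK times")
    return (b1 - b0) / (t1 - t0)

# 3000 bytes acknowledged over 10 ms -> 300,000 bytes/s
print(delivery_rate([(0.000, 0), (0.005, 1500), (0.010, 3000)]))  # -> 300000.0
```

A sender sampling this estimate continuously can pace transmissions to the observed delivery rate instead of reacting only to loss.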
>> 
>> v
>> 
>> 
>> On Wed, Oct 2, 2024 at 5:19 PM John Day <jeanjour at comcast.net <mailto:jeanjour at comcast.net>> wrote:
>>> Busy day. Just getting to looking at these.
>>> 
>>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions:
>>> 
>>> Flow control is a pair-wise issue; congestion management is an n-party issue.
>>> 
>>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.)
>>> 
>>> A congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.)
>>> 
>>> Congestion is a stochastic phenomenon. The cause is too many packets arriving within a given short period.
>>> 
>>> Load is not the root cause of congestion, but it does increase the probability. (This is an error I see in almost every paper I read on the topic.) Congestion has been observed on a network with 0.1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late.
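[Editor's note: a toy simulation, my illustration rather than anything from the thread, of the point above: with bursty arrivals, a queue can overflow even when average utilization is tiny. The buffer size, burst size, and 5% load figure are arbitrary assumptions.]

```python
import random

random.seed(1)

QUEUE_LIMIT = 10          # router buffer, in packets
BURST = 12                # packets arriving together in one burst
AVG_LOAD = 0.05           # 5% average utilization (service rate: 1 pkt/slot)
BURST_PROB = AVG_LOAD / BURST
SLOTS = 100_000

queue = drops = busiest = 0
for _ in range(SLOTS):
    # Bursty source: usually idle, occasionally emits a clump of packets.
    if random.random() < BURST_PROB:
        queue += BURST
    busiest = max(busiest, queue)        # peak instantaneous demand
    if queue > QUEUE_LIMIT:              # buffer overflows: congestion event
        drops += queue - QUEUE_LIMIT
        queue = QUEUE_LIMIT
    queue = max(0, queue - 1)            # serve one packet per slot

print(f"average load ~{AVG_LOAD:.0%}: peak queue {busiest}, dropped {drops}")
```

Despite 5% average load, every burst momentarily exceeds the buffer, so drops occur; a controller keyed only to average load would never react.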
>>> 
>>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify.
>>> 
>>> Something like ECN or Source Quench (if, like ECN, it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which the congestion occurred. However, neither one alone is sufficient without specifying the action to be taken in response to receiving it. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.)
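[Editor's note: a minimal sketch, my illustration only, of the "action to be taken in response" mentioned above: an AIMD sender that backs off on an explicit ECN-style mark instead of waiting for loss. The parameter values are conventional but assumed, not taken from the thread.]

```python
# Hedged sketch: additive-increase/multiplicative-decrease driven by an
# explicit congestion mark rather than by inferred packet loss.
def next_cwnd(cwnd, marked, increase=1.0, decrease=0.5):
    """Return the next congestion window (in segments, floor of 1)."""
    if marked:
        return max(1.0, cwnd * decrease)   # back off before loss occurs
    return cwnd + increase                 # probe for more bandwidth

cwnd = 10.0
for marked in [False, False, True, False]:
    cwnd = next_cwnd(cwnd, marked)
print(cwnd)  # 10 -> 11 -> 12 -> 6 -> 7
```

The explicit mark lets the sender reduce its rate while queues are merely building, keeping the reaction inside the congested layer.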
>>> 
>>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers.
>>> 
>>> Jacobson’s solution for TCP is about the worst one could expect: a congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory (implicit notification, no ECN). In a way this is not Van’s fault. It is the classic engineer’s mistake: solve the narrow problem but fail to consider the context. This solution might be acceptable for a network, but not for an Internet, where multiple layers (some of lesser scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers alongside TCP congestion control result in warring feedback loops with very different response times.
>>> 
>>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions.
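[Editor's note: a small worked illustration, mine rather than the thread's, of the knee-versus-cliff distinction using Jain's "power" metric (throughput divided by delay) on a simple M/M/1 queue. The normalized-delay model is an assumption chosen for simplicity.]

```python
# For an M/M/1 queue with utilization rho, normalized mean delay is
# 1/(1 - rho), so power = throughput/delay = rho * (1 - rho).
# Power peaks at the "knee" (rho = 0.5), far from the "cliff" near rho = 1,
# where delay explodes while throughput gains almost nothing.
def power(rho):
    throughput = rho               # normalized throughput
    delay = 1.0 / (1.0 - rho)      # normalized M/M/1 mean delay
    return throughput / delay      # = rho * (1 - rho)

knee = max((r / 100 for r in range(1, 100)), key=power)
print(f"power is maximized near rho = {knee}")  # -> rho = 0.5
```

Operating at the knee minimizes both congestion events and retransmissions; operating at the cliff, as loss-driven TCP does, trades small throughput gains for large delay and frequent drops.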
>>> 
>>> There is probably much more, but this is what comes to mind.
>>> 
>>> Take care,
>>> John
>>> 
>>> 
>>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history <internet-history at elists.isoc.org <mailto:internet-history at elists.isoc.org>> wrote:
>>>> 
>>>> One basic problem with blaming the "last packet that caused intermediate
>>>> router congestion" is that it usually blamed the wrong source, among other
>>>> problems. Van Jacobson was/is the guru of flow control (among others) who
>>>> might remember more.
>>>> 
>>>> 
>>>> v



