[ih] booting linux on a 4004

John Day jeanjour at comcast.net
Sat Oct 5 17:52:10 PDT 2024


Ah, thanks, that helps clarify my picture of it.  However, it doesn’t change my interpretation. The Raj/KK work yields a much better system solution, one that minimizes retransmissions. And there is just something about congestion avoidance that causes congestion that doesn’t sit well.  ;-)

The two Part 1s* of the 4-part DEC Report, and the Report as a whole, are among the finest pieces of computer science research I have ever read. It is laid out logically. They take the problem apart, evaluate each part thoroughly, and make reasonable decisions, while also indicating which other possibilities remain to be explored.  I really wish more of the papers I read were this good.

*There are two Part 1s: the one that is part of the DEC Report, and the version that was published. The fun thing is that neither is a subset of the other, and each contains good information that the other lacks.  ;-)

Take care,
John

> On Oct 5, 2024, at 20:42, Craig Partridge <craig at tereschau.net> wrote:
> 
> As someone who was in touch with Raj/KK and Van/Mike during the development of congestion control: they were unaware of each other's work until spring of 1988, when they realized they were doing very similar stuff.  I think someone (Dave Clark) in the End2End Research Group became aware of Raj & KK's work and invited them to come present at an E2E meeting in early 1988, and E2E (more than the IETF) was where Van was working out the kinks in his congestion control work with Mike.
> 
> Craig
> 
> On Sat, Oct 5, 2024 at 5:34 PM John Day via Internet-history <internet-history at elists.isoc.org> wrote:
>> The work of Jain’s DEC team existed at the same time and I believe Jacobson’s original paper references it.
>> 
>> As I said, at least it does congestion avoidance without causing congestion (unless under extreme conditions).
>> 
>> I suspect that the main reason Jacobson didn’t adopt it was that they were trying to maximize the data rate by running as close to congestion collapse as they could, while Jain’s work attempted to balance the trade-off between throughput and response time.  But that is just policy; they still could have used ECN to keep from being predatory, while waiting until the queue is full to mark the packets. That is what TCP's use of ECN does now. Of course, I think that is a bad choice, because it generates lots of retransmissions.
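>> 
>> To make the marking-policy difference concrete, here is a rough Python sketch (illustrative only, not code from either project) of the two rules:
>> 
>> def should_mark(queue_len, avg_queue_len, capacity, policy):
>>     """Return True if an arriving packet should carry a congestion signal."""
>>     if policy == "tail":
>>         # Signal only once the queue is already full; by then packets
>>         # are being dropped and retransmissions follow.
>>         return queue_len >= capacity
>>     if policy == "early":
>>         # Signal at low average occupancy (the DECbit scheme used an
>>         # average queue length of 1), well before any packet is lost.
>>         return avg_queue_len >= 1.0
>>     raise ValueError(policy)
>> 
>> print(should_mark(20, 12.0, 20, "tail"))  # True: queue already full
>> print(should_mark(3, 1.4, 20, "early"))   # True: signaled long before loss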
>> 
>> When I asked Jain why his scheme wasn’t adopted, he said that he isn’t an implementor but an experimenter.
>> 
>> But it is not uncommon to be so focused on the immediate problem as to fail to notice the system implications.
>> 
>> Take care,
>> John
>> 
>> > On Oct 5, 2024, at 18:16, Greg Skinner <gregskinner0 at icloud.com> wrote:
>> > 
>> > Unfortunately, I only have time for a quick response.  Also unfortunately, neither the ietf nor end2end-interest archives go back far enough to give more of an idea of what VJ, Mike Karels, etc. considered in their approach to congestion avoidance and control.  There was some discussion of it on the tcp-ip list, which can be accessed via Google Groups. [1] [2] I could go into more detail, but I don’t think they intended their approach to somehow set a paradigm for how congestion avoidance and control would be addressed on a (modern) Internet scale.  Something to consider here is that, with the limited resources they had, they needed (and were able) to come up with something that worked on networks based on the ARPAnet IMP technology of that time, which many Internet users still relied on.
>> > 
>> > --gregbo
>> > 
>> > [1] https://groups.google.com/g/mod.protocols.tcp-ip
>> > [2] https://groups.google.com/g/comp.protocols.tcp-ip
>> > 
>> >> On Oct 2, 2024, at 4:41 PM, John Day via Internet-history <internet-history at elists.isoc.org> wrote:
>> >> 
>> >> Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, and as long as the focus is on ramming as much through as possible, they are headed in the wrong direction. Also, putting congestion control in TCP maximizes the time to notify, and it thwarts doing QoS.  One needs to be able to coordinate congestion control with traffic management. Inferring congestion from ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well before it gets bad.
>> >> 
>> >> We are back to the reality that congestion avoidance is needed in the internet layer, the network layer, and the data link layer (bridges are relays). And 802 realizes that: it is using an innovative modification of IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.)
>> >> 
>> >> The case I was trying to make was that TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer.
>> >> 
>> >> John
>> >> 
>> >>> On Oct 2, 2024, at 17:51, Vint Cerf <vint at google.com> wrote:
>> >>> 
>> >>> John,
>> >>> 
>> >>> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. 
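>> >>> 
>> >>> A rough sketch of that idea (illustrative only, not any stack's actual code): estimate the path's delivery rate from the spacing of acknowledgements rather than from round-trip time.
>> >>> 
>> >>> def delivery_rate(acks):
>> >>>     """acks: list of (time_sec, cumulative_bytes_acked) pairs."""
>> >>>     (t0, b0), (t1, b1) = acks[0], acks[-1]
>> >>>     return (b1 - b0) / (t1 - t0)  # bytes/sec seen over the ack train
>> >>> 
>> >>> # e.g. three acks 10 ms apart, each acknowledging 3000 more bytes:
>> >>> print(delivery_rate([(0.000, 0), (0.010, 3000), (0.020, 6000)]))
>> >>> # -> 300000.0 bytes/sec, about 2.4 Mb/s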
>> >>> 
>> >>> v
>> >>> 
>> >>> 
>> >>> On Wed, Oct 2, 2024 at 5:19 PM John Day <jeanjour at comcast.net> wrote:
>> >>>> Busy day. Just getting to looking at these.
>> >>>> 
>> >>>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece, the epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course, there was still work to do. A few conclusions:
>> >>>> 
>> >>>> Flow control is a pair-wise issue, Congestion management is an n-party issue.
>> >>>> 
>> >>>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.)
>> >>>> 
>> >>>> A Congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.)
>> >>>> 
>> >>>> Congestion is a stochastic phenomenon. The cause is too many packets arriving within a given short period.
>> >>>> 
>> >>>> Load is not the root cause of congestion, but it does increase the probability. (This is an error I see in almost every paper I read on the topic.) Congestion has been observed on a network at 0.1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late.
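>> >>>> 
>> >>>> A back-of-the-envelope illustration of that point (invented numbers, not the report's): with Poisson arrivals, a link that can forward 4 packets per interval while averaging only 0.4 (10% load) still sees bursts it cannot absorb.
>> >>>> 
>> >>>> from math import exp, factorial
>> >>>> 
>> >>>> def p_burst_exceeds(mean_arrivals, capacity):
>> >>>>     """P[Poisson(mean_arrivals) > capacity] in one short interval."""
>> >>>>     return 1.0 - sum(exp(-mean_arrivals) * mean_arrivals**k / factorial(k)
>> >>>>                      for k in range(capacity + 1))
>> >>>> 
>> >>>> print(p_burst_exceeds(0.4, 4))  # ~6e-5: rare but nonzero. Load only
>> >>>>                                 # raises the probability; the burst
>> >>>>                                 # itself is the mechanism.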
>> >>>> 
>> >>>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify.
>> >>>> 
>> >>>> Something like ECN or Source Quench (if, like ECN, it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which the congestion occurred. However, neither one alone is sufficient without specifying the action to be taken in response to receiving it. (I would think SQ would have some advantage, in that the sender would be notified sooner than with ECN.)
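>> >>>> 
>> >>>> With invented one-way delays, just to illustrate the timing point: SQ travels straight back from the congested router, while an ECN mark rides forward to the receiver and only returns in an acknowledgement.
>> >>>> 
>> >>>> d_router_to_sender = 0.020    # sec, congested router back to sender
>> >>>> d_router_to_receiver = 0.030  # sec, router onward to the receiver
>> >>>> d_receiver_to_sender = 0.050  # sec, the ack's return path
>> >>>> 
>> >>>> t_sq = d_router_to_sender                            # 0.020 sec
>> >>>> t_ecn = d_router_to_receiver + d_receiver_to_sender  # 0.080 sec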
>> >>>> 
>> >>>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers.
>> >>>> 
>> >>>> Jacobson’s solution for TCP is about the worst one could expect: a congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory (implicit notification, no ECN). In a way this is not Van’s fault. It is the classic engineer’s mistake: solve the narrow problem but fail to consider the context. This solution might be acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers alongside TCP congestion control result in warring feedback loops with very different response times.
>> >>>> 
>> >>>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions.
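>> >>>> 
>> >>>> The knee/cliff distinction in miniature, using a textbook M/M/1 queue (an illustration, not the report's own model): delay stays small until utilization nears 1 and then blows up, and the Jain/KK "power" metric (throughput/delay) peaks at the knee, far from the cliff.
>> >>>> 
>> >>>> mu = 1.0                      # service rate, packets per unit time
>> >>>> for rho in (0.3, 0.5, 0.7, 0.9, 0.99):
>> >>>>     lam = rho * mu            # offered load
>> >>>>     delay = 1.0 / (mu - lam)  # mean M/M/1 response time
>> >>>>     power = lam / delay       # throughput/delay, maximized at the knee
>> >>>>     print(rho, round(delay, 1), round(power, 3))
>> >>>> # power peaks at rho = 0.5 (the knee); at rho = 0.99 the delay is 50x
>> >>>> # worse than at the knee for about twice the throughput (the cliff).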
>> >>>> 
>> >>>> There is probably much more, but this is what comes to mind.
>> >>>> 
>> >>>> Take care,
>> >>>> John
>> >>>> 
>> >>>> 
>> >>>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history <internet-history at elists.isoc.org> wrote:
>> >>>>> 
>> >>>>> One basic problem with blaming the "last packet that caused intermediate
>> >>>>> router congestion" is that it usually blamed the wrong source, among other
>> >>>>> problems. Van Jacobson was/is the guru of flow control (among others) who
>> >>>>> might remember more.
>> >>>>> 
>> >>>>> 
>> >>>>> v
>> > 
>> 
>> -- 
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
> 
> 
> --
> *****
> Craig Partridge's email account for professional society activities and mailing lists.



