[ih] booting linux on a 4004
John Day
jeanjour at comcast.net
Wed Oct 2 16:41:37 PDT 2024
Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, and as long as the focus is on ramming as much through as possible, they are headed in the wrong direction. Also, putting congestion control in TCP maximizes time-to-notify, and it thwarts doing QoS. One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well before it gets bad.
We are back to the reality that congestion avoidance is needed in the internet layer, the network layer, and the data link layer (bridges are relays). And 802 realizes that: it is using an innovative modification of IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.)
The case I was trying to make was that TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer.
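The gap between explicit early notification and implicit, loss-based detection can be illustrated with a toy single-queue model. Everything here is made up for illustration (buffer size, marking threshold, and arrival/departure rates are hypothetical parameters, not any real router's algorithm): an ECN-style mark fires when the queue crosses a threshold, while implicit detection only sees anything once the buffer overflows and a packet is lost.

```python
# Toy single-queue model: when does an ECN-style mark fire versus
# when does implicit (loss-based) detection fire?
# All parameters are hypothetical, chosen only for illustration.

CAPACITY = 100        # buffer size in packets (hypothetical)
MARK_THRESHOLD = 30   # ECN-style early-marking threshold (hypothetical)

def detection_times(arrivals_per_tick, departures_per_tick):
    """Return (tick of first ECN-style mark, tick of first drop) for a
    queue growing at a constant net rate each tick."""
    queue = 0
    first_mark = first_drop = None
    for tick in range(1, 1000):
        queue = max(0, queue + arrivals_per_tick - departures_per_tick)
        if first_mark is None and queue > MARK_THRESHOLD:
            first_mark = tick          # explicit notification available here
        if first_drop is None and queue >= CAPACITY:
            first_drop = tick          # implicit signal (loss) only here
            break
    return first_mark, first_drop

# Modest overload: 12 arrivals vs. 10 departures per tick
mark, drop = detection_times(arrivals_per_tick=12, departures_per_tick=10)
# mark happens at tick 16, the drop not until tick 50
```

Even in this crude model the explicit signal is available long before the loss that implicit detection must wait for, which is the point about detecting congestion well before it gets bad.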
John
> On Oct 2, 2024, at 17:51, Vint Cerf <vint at google.com> wrote:
>
> John,
>
> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric.
>
> v
>
>
> On Wed, Oct 2, 2024 at 5:19 PM John Day <jeanjour at comcast.net> wrote:
>> Busy day. Just getting to looking at these.
>>
>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions:
>>
>> Flow control is a pair-wise issue; congestion management is an n-party issue.
>>
>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.)
>>
>> A congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.)
>>
>> Congestion is a stochastic phenomenon. The cause is too many packets arriving within a given short period.
>>
>> Load is not the root cause of congestion, but it does increase the probability. (This is an error I see in almost every paper I read on the topic.) Congestion has been observed on a network with 0.1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late.
>>
>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify.
>>
>> Something like ECN or SourceQuench (if, like ECN, it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without specifying the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.)
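The time-to-notify advantage suggested for Source Quench can be sketched with back-of-the-envelope arithmetic: an SQ message travels from the congested router straight back to the sender, while an ECN mark must ride forward to the receiver and only reaches the sender in the echoed acknowledgement. The one-way delays below are hypothetical, and the return path is assumed symmetric.

```python
# Back-of-the-envelope time-to-notify comparison.
# Hypothetical one-way delays in milliseconds; symmetric paths assumed.

sender_to_router = 20.0    # ms, sender -> congested router
router_to_receiver = 30.0  # ms, congested router -> receiver

# Source Quench: the router sends directly back to the sender.
sq_notify = sender_to_router                                   # 20 ms

# ECN: mark travels router -> receiver, then the echoed ack travels
# receiver -> router -> sender before the sender learns anything.
ecn_notify = router_to_receiver + (router_to_receiver + sender_to_router)  # 80 ms
```

Under these assumptions the direct notification arrives in a fraction of the time the echoed mark takes, though SQ has its own well-known problems (extra load on an already congested path, DoS exposure) discussed elsewhere in this thread.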
>>
>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers.
>>
>> Jacobson’s solution for TCP is about the worst one could expect: a congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory (implicit notification, no ECN). In a way this is not Van’s fault. It is the classic engineer’s mistake: solve the narrow problem but fail to consider the context. This solution might be acceptable for a network, but not for an Internet, where multiple layers (some of lesser scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers alongside TCP congestion control result in warring feedback loops with very different response times.
>>
>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions.
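The knee-versus-cliff distinction can be made concrete with the classic "power" metric (throughput divided by delay) on an M/M/1 queue. This is a generic textbook sketch with a normalized service rate, not code from the DEC reports: throughput is the offered load λ, mean delay is 1/(μ − λ), so power = λ(μ − λ), which peaks at λ = μ/2 (the knee), well below the saturation point λ → μ (the cliff) where delay blows up.

```python
# Power = throughput / delay for an M/M/1 queue.
# throughput = lam, mean delay = 1 / (mu - lam), so power = lam * (mu - lam).
# The knee (max power) sits at lam = mu / 2, far from the cliff at lam -> mu.

MU = 1.0  # normalized service rate (assumption for the sketch)

def power(lam, mu=MU):
    if lam >= mu:
        return 0.0  # saturated: delay unbounded, power collapses
    delay = 1.0 / (mu - lam)
    return lam / delay  # equals lam * (mu - lam)

loads = [i / 100 for i in range(1, 100)]
knee = max(loads, key=power)   # load that maximizes power: 0.5 here
```

Operating at the knee keeps both queueing delay and loss low; pushing load toward the cliff buys almost no extra throughput while delay grows without bound, which is the trade-off Jain and KK were optimizing for.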
>>
>> There is probably much more, but this is what comes to mind.
>>
>> Take care,
>> John
>>
>>
>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history <internet-history at elists.isoc.org> wrote:
>>>
>>> One basic problem with blaming the "last packet that caused intermediate
>>> router congestion" is that it usually blamed the wrong source, among other
>>> problems. Van Jacobson was/is the guru of flow control (among others) who
>>> might remember more.
>>>
>>>
>>> v
>>>
>>>
>>> On Tue, Oct 1, 2024 at 8:50 PM Barbara Denny via Internet-history <
>>> internet-history at elists.isoc.org> wrote:
>>>
>>>> In a brief attempt to try to find some information about the early MIT
>>>> work you mentioned, I ended up tripping on this Final Report from ISI in
>>>> DTIC. It does talk a fair amount about congestion control and source
>>>> quench (plus other things that might interest people). The period of
>>>> performance is 1987 to 1990 which is much later than I was considering in
>>>> my earlier message.
>>>>
>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf
>>>>
>>>> Even though the report mentions testing on DARTnet, I don't remember
>>>> anything about this during our DARTnet meetings. I did join the project
>>>> after the start so perhaps the work was done before I began to participate.
>>>> I also couldn't easily find the journal they mention as a place for
>>>> publishing their findings. I will have more time later to see if I can find
>>>> something that covers this testing.
>>>>
>>>> barbara
>>>>
>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via
>>>> Internet-history <internet-history at elists.isoc.org> wrote:
>>>>
>>>> multicast is also an issue but I do not recall if that was one that Craig
>>>> & I talked about
>>>>
>>>> Scott
>>>>
>>>>> On Oct 1, 2024, at 7:34 PM, Scott Bradner via Internet-history <
>>>>> internet-history at elists.isoc.org> wrote:
>>>>>
>>>>> I remember talking with Craig Partridge (on a flight to somewhere) about
>>>>> source quench during the time when 1812 was being written - I do not recall
>>>>> the specific issues but I recall that there was more than one issue
>>>>>
>>>>> (if DoS was not an issue at the time, it should have been)
>>>>>
>>>>> Scott
>>>>>
>>>>>> On Oct 1, 2024, at 6:22 PM, Brian E Carpenter via Internet-history <
>>>>>> internet-history at elists.isoc.org> wrote:
>>>>>>
>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote:
>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote:
>>>>>>>> Forwarded for Barbara
>>>>>>>>
>>>>>>>> ====
>>>>>>>>
>>>>>>>> From: Barbara Denny <b_a_denny at yahoo.com>
>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT
>>>>>>>> I think congestion issues were discussed because I remember an ICMP
>>>>>>>> message type called source quench (now deprecated). It was used for
>>>>>>>> notifying a host to reduce the traffic load to a destination. I don't
>>>>>>>> remember hearing about any actual congestion experiments using this
>>>>>>>> message type.
>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an
>>>>>>> advisee of either Dave Clark or Jerry Saltzer wrote an undergraduate
>>>>>>> thesis about the use of Source Quench for congestion control. I believe
>>>>>>> it included some experiments (maybe all artificial, or only through
>>>>>>> simulation).
>>>>>>> I don't think it had much impact on the rest of the world.
>>>>>>
>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated
>>>>>> 1984). A trail of breadcrumbs tells me that he has an MSCS from Stanford,
>>>>>> so I guess he probably wasn't an MIT undergrad.
>>>>>>
>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). People
>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that basically
>>>>>> it was no use.
>>>>>>
>>>>>> A bit more Google found this, however:
>>>>>>
>>>>>> "4.3. Internet Congestion Control
>>>>>> Lixia Zhang began a study of network resource allocation techniques
>>>>>> suitable for the DARPA Internet. The Internet currently has a simple
>>>>>> technique for resource allocation, called "Source Quench."
>>>>>> Simple simulations have shown that this technique is not effective, and
>>>>>> this work has produced an alternative which seems considerably more
>>>>>> workable. Simulation of this new technique is now being performed."
>>>>>>
>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299,
>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ]
>>>>>>
>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA
>>>>>> now. If she isn't on this list, she should be!
>>>>>>
>>>>>> Brian Carpenter
>>>>
>>>>
>>>> --
>>>> Internet-history mailing list
>>>> Internet-history at elists.isoc.org
>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>>
>>>
>>>
>>> --
>>> Please send any postal/overnight deliveries to:
>>> Vint Cerf
>>> Google, LLC
>>> 1900 Reston Metro Plaza, 16th Floor
>>> Reston, VA 20190
>>> +1 (571) 213 1346
>>>
>>>
>>> until further notice
>>
>
>