[ih] bufferbloat and modern congestion control (was 4004)
Jack Haverty
jack at 3kitty.org
Sun Oct 6 14:04:16 PDT 2024
Yes, I agree, that's how it works.
But I think that the Service Model has changed over time; the original
goals in the early 80s were to provide multiple types of service, e.g.,
one for interactive needs where timeliness was most important, and
another for bulk transfers where accurate delivery of everything sent
was crucial. That's why TCP was split from IP to enable services such
as UDP.
At some point that goal was apparently abandoned. It might be of
historical interest to know when that occurred, whether it was an
explicit decision, and if so, who made it.
Another main difference now is in the Management of "The Internet". It
has also changed over the decades.
In the 80s, ARPA was in charge of the Internet. Users knew who to call
if they had problems. The first "real" User I remember was Peter
Kirstein and his crew at UCL, who relied on the Internet to do their
everyday work.
When Peter had a problem, he would call or email Vint at ARPA. If the
problem looked like it might be related to the "core gateways", I then got
a call or email from Vint. Peter figured this process out and would
then simply CC me on his first complaint.
The "Gateway Group" at BBN (usually Hinden, Brescia, and/or Sheltzer)
would get to work and figure it all out. Perhaps it was a SATNET issue,
but Dale McNeill was down the hall with the SATNET team if needed.
Same with the Arpanet.
When NSF entered the scene, I suspect Steve Wolff's phone number became
more popular. Problems probably cascaded to Dave Mills or other NSF
researchers?
In today's Internet, who is in charge? When you have a phantasmal
Internet experience, "Who Ya Gonna Call?" Where are the Internet's
Ghostbusters?
Jack
On 10/6/24 12:41, Brian E Carpenter via Internet-history wrote:
> "The ISPs involved all did their tests and
> measurements, and reported that *their* part of the Internet was working
> just fine. "
>
> Of course it was. The Internet's "service model" is a best effort to
> deliver independent datagrams. Guaranteed delivery, error-free delivery,
> prompt delivery and in-order delivery are "nice to have". That's what
> the ISPs' economic model has always been based on, because it scales.
>
> This has consequences, one of which is buffer bloat.
>
> (The Web success story is quite similar - all attempts at two-way
> hyperlink systems have failed to scale, but the HTTP/HTML model based
> on best-effort one-way hyperlinks has succeeded.)
>
> Regards
> Brian
>
> On 07-Oct-24 06:22, Jack Haverty via Internet-history wrote:
>> Yes, I agree that Bufferbloat is the most likely root cause of what I
>> saw. In fact, that testing experience is when I actually heard the term
>> "bufferbloat" for the first time and learned what it meant. I can
>> imagine how it probably happened over the years. It was undoubtedly
>> far easier to just add now-inexpensive memory to components inside the
>> network than it was to invent, and deploy, appropriate mechanisms to
>> replace the rudimentary "placeholders" of Source Quench, Type Of
>> Service, hop-based routing, et al, in all of the components and
>> organizations involved in the Internet.
>>
>> But what I also discovered was more disturbing than bufferbloat.
>>
>> Using the same tools I remembered from 40 years ago, we determined that
>> the bloated buffers were likely deep in the bowels of the Internet -
>> most likely inside a fiber carrier several ISPs away from either
>> endpoint of the test. Our ability to analyze was hindered by the lack
>> of pervasive support today for mechanisms such as pings and traceroutes
>> at various points along the route. Parts of the route through the
>> Internet were cloaked in impenetrable (to us mere Users) shields.
>>
>> But the disturbing part was the attitude of the "providers" who operated
>> the various pieces involved along the route we were trying to use. Some
>> of them, deep in the bowels of the Internet, wouldn't even talk to us
>> mere Users. Their customers were other ISPs. They don't talk to
>> retail customers. The ISPs involved all did their tests and
>> measurements, and reported that *their* part of the Internet was working
>> just fine. The vendors of software in the Users' computers similarly said
>> their technology was working as it should, nothing to be fixed.
>>
>> No one knew much about Source Quench or other congestion control issues
>> and mechanisms. Or Type of Service. I assume that the IETF had by now
>> also deprecated even the rudimentary and ineffective mechanisms of
>> Source Quench, with no replacement mechanisms defined and deployed.
>>
>> My User friend tried all sorts of possible fixes. As taught by
>> Marketing, he upgraded to higher speeds of Internet service. That was
>> supposed to fix whatever problem you were experiencing. It didn't. He
>> switched to several different ISPs, at each end of the route. No joy.
>>
>> This finger-pointing environment results in a situation where all of the
>> "operators" involved in my User's Internet communications believe that
>> everything of theirs is working fine and the problem must be somewhere
>> else. But the User believes that the Internet is broken, unsuitable for
>> what he's trying to do, and no one is working to fix it.
>>
>> That polar disagreement between the Users and Providers of the Internet
>> was a disturbing (to me at least) revelation.
>>
>> I suspect the situation will deteriorate, since I frequently see
>> articles describing plans to use the Internet for tasks involving
>> real-time remote manipulation (telemedicine, remote surgery, distant
>> control of vehicles, equipment, etc.). My experience is admittedly
>> anecdotal, but I suspect it's not unique.
>>
>> I recommended to my User friend that he might try installing ancient
>> technology - dial-up modems at each end! Amazingly, you can still
>> purchase dial-up modems, even from Amazon. But I also advised him that
>> even such old tech might not be an improvement. If his "voice call"
>> became VOIP at any point along the way, his problems might not change
>> much.
>>
>> His alternative was to forget about doing remote operations over the
>> Internet. It might be easier to simply move.
>>
>> Jack Haverty
>>
>> On 10/5/24 23:29, Vint Cerf wrote:
>>> sounds like your test discovered bufferbloat....
>>>
>>> v
>>>
>>>
>>> On Sat, Oct 5, 2024 at 6:28 PM Jack Haverty <jack at 3kitty.org> wrote:
>>>
>>> IIRC:
>>>
>>> When the internal mechanisms (such as SQ) were being debated and
>>> choices made to create TCP/IP V4 for adoption as the DoD Standard,
>>> the technology world was quite different. At the time (early
>>> 1980s), gateways had very little memory - sometimes only enough to
>>> hold one or at most a few IP datagrams. If a datagram arrived
>>> and there was no place to hold it, SQ back to the source was a way
>>> to say "Slow down. I just had to drop your last datagram".
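For concreteness, the RFC 792 Source Quench message is tiny: type 4, code 0, a checksum, four unused bytes, then the IP header plus the first 8 data bytes of the dropped datagram. Here is a minimal sketch in Python of what a gateway would have sent back; the 28 zero bytes are a fabricated stand-in for the dropped datagram's IP header and first 64 bits of data:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def source_quench(dropped_ip_header_plus_8: bytes) -> bytes:
    """Build an RFC 792 Source Quench: type 4, code 0, checksum,
    4 unused bytes, then the IP header + first 64 bits of the
    datagram that had to be dropped."""
    header = struct.pack("!BBHI", 4, 0, 0, 0)  # checksum zeroed for computation
    cksum = icmp_checksum(header + dropped_ip_header_plus_8)
    return struct.pack("!BBHI", 4, 0, cksum, 0) + dropped_ip_header_plus_8

# Fabricated example: 20-byte IP header + 8 payload bytes, all zeros.
msg = source_quench(bytes(28))
```

The whole message fits in 36 bytes here, which is part of why it was attractive when gateway memory was measured in datagrams.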
>>>
>>> Over the decades, memory became a lot more available. So gateways
>>> could easily have space to queue many datagrams. In one test I did
>>> just a few years ago, a stream of datagrams was sent from one site
>>> to another. All were received intact and in order as sent. No SQ
>>> messages were received. But latency soared. Some datagrams took
>>> more than 30 seconds to reach their destination. Memory had
>>> become cheap enough that datagrams could just be held as long as
>>> needed.
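The arithmetic behind delays like that is plain queueing delay: a FIFO buffer holding B bytes draining at link rate R bits/s adds up to 8B/R seconds of latency before anything is dropped. A back-of-envelope sketch, with illustrative numbers of my own choosing rather than measurements from that test:

```python
def max_queue_delay(buffer_bytes: int, link_bps: int) -> float:
    """Worst-case delay a full FIFO adds: bits queued / drain rate."""
    return buffer_bytes * 8 / link_bps

# Early-1980s gateway: room for a handful of ~576-byte datagrams
# draining onto a 56 kb/s line.
old = max_queue_delay(4 * 576, 56_000)          # roughly a third of a second

# Modern box: a cheap 256 MB buffer feeding a 64 Mb/s uplink.
new = max_queue_delay(256 * 2**20, 64_000_000)  # over 30 seconds of queue
```

With memory that cheap, nothing is lost and nothing is quenched; the queue just grows until latency becomes the failure mode.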
>>>
>>> For anyone involved in operating a piece of the Internet, or for
>>> diagnosing users' complaints like "it's too slow", ICMP's
>>> facilities were crucial tools. They were flawed and incomplete,
>>> but still useful as ways to figure out what was happening.
>>>
>>> When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was
>>> not included. As people involved in diagnosing operational
>>> problems, we yelled, screamed, cajoled, encouraged, lobbied, and
>>> did whatever we could to get the DoD procurement folks to add ICMP
>>> to their list of required implementations.
>>>
>>> This discussion about SQ reminded me of another "gateway issue"
>>> from the 1980s ICCB to-do list - "End-Middle Interactions". I'll
>>> write what I remember about that separately.
>>>
>>> Jack
>>>
>>>
>>>
>>> On 10/5/24 11:26, Craig Partridge via Internet-history wrote:
>>>> All sorts of goodies:
>>>>
>>>> ICMP Echo (what used to power Ping until people decided they didn't
>>>> like folks probing)
>>>>
>>>> ICMP Unreachable (port or host)
>>>>
>>>> ICMP Parameter Problem (diagnostic)
>>>>
>>>> many more.
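The messages listed above map onto the RFC 792 type numbers below; a quick reference I've added (including a couple of related types), not an exhaustive list:

```python
# RFC 792 ICMP message types mentioned in the thread, plus relatives.
ICMP_TYPES = {
    0: "Echo Reply",
    3: "Destination Unreachable",  # codes distinguish net/host/port
    4: "Source Quench",            # deprecated by RFC 6633
    8: "Echo Request",             # what ping sends
    11: "Time Exceeded",           # what traceroute relies on
    12: "Parameter Problem",
}
```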
>>>>
>>>> On Sat, Oct 5, 2024 at 10:50 AM Vint Cerf via Internet-history <
>>>> internet-history at elists.isoc.org> wrote:
>>>>
>>>>> isn't there more to ICMP than source quench? Seems wrong to
>>>>> ignore all ICMP
>>>>> messages.
>>>>>
>>>>> v
>>>>>
>>>>>
>>>>> On Sat, Oct 5, 2024 at 12:04 PM Greg Skinner via
>>>>> Internet-history <
>>>>> internet-history at elists.isoc.org> wrote:
>>>>>
>>>>>> On Oct 3, 2024, at 9:02 AM, Greg Skinner via Internet-history <
>>>>>> internet-history at elists.isoc.org> wrote:
>>>>>>> Forwarded for Barbara
>>>>>>>
>>>>>>> ====
>>>>>>>
>>>>>>> Having trouble emailing again so I did some trimming on the
>>>>>>> original message....
>>>>>>> Putting my packet radio hat back on, a source quench message could
>>>>>>> help disambiguate whether loss in the network is due to congestion
>>>>>>> or something else (like in wireless, loss due to harsh environments,
>>>>>>> jamming, mobility). I also think it is not obvious what you should
>>>>>>> do when you receive a source quench, but to me trying to understand
>>>>>>> this is just part of trying to see if we can make things work
>>>>>>> better. How about what you could do when you don't receive a source
>>>>>>> quench but have experienced loss?
>>>>>>> How is network coding coming along these days?
>>>>>>>
>>>>>>> barbara
>>>>>> Any serious attempts to reinstitute ICMP source quench would
>>>>>> have to go
>>>>>> through the IETF RFC process again because it’s been
>>>>>> deprecated for some
>>>>>> time. [1] Also, many sites block ICMP outright (even though
>>>>>> they’ve been
>>>>>> warned not to do this). [2]
>>>>>>
>>>>>> --gregbo
>>>>>>
>>>>>> [1]https://datatracker.ietf.org/doc/rfc6633/
>>>>>> [2] https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf
>>>>>> --
>>>>>> Internet-history mailing list
>>>>>> Internet-history at elists.isoc.org
>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>>>>
>>>>> --
>>>>> Please send any postal/overnight deliveries to:
>>>>> Vint Cerf
>>>>> Google, LLC
>>>>> 1900 Reston Metro Plaza, 16th Floor
>>>>> Reston, VA 20190
>>>>> +1 (571) 213 1346
>>>>>
>>>>>
>>>>> until further notice
>>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>