[ih] Installed base momentum (was Re: Design choices in SMTP)

Jack Haverty jack at 3kitty.org
Mon Feb 13 16:02:59 PST 2023


IMHO, "buffer bloat" is a consequence of the use of "congestion 
avoidance" as an interim tactic, adopted in the early Internet research 
days to buy some time to figure out how to do "congestion control" in 
the Internet, experimenting with and defining protocols and algorithms 
for inclusion in TCP/IP V4's descendant.   That was on the IAB (then 
ICCB) list of "things we need to work on" back in 1983 or so.

"Buffer bloat" is a consequence of continuing in a "congestion 
avoidance" design, made easier as fiber, memory and processor prices 
plummeted.  It became easy to avoid congestion by adding more resources, 
with the unfortunate consequences on traffic latency.
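As a rough back-of-the-envelope sketch of that latency cost (the numbers 
below are made-up illustrative values, not measurements from any real 
network): the worst-case queueing delay a buffer can add is simply its 
size divided by the bottleneck link rate, so "more resources" in the 
form of bigger buffers translates directly into more standing delay.

    # Hypothetical sketch: delay added by a full buffer at a bottleneck.
    # Buffer size and link rate are assumed values for illustration only.
    buffer_bytes = 1_000_000        # 1 MB of buffer memory
    link_rate_bps = 10_000_000      # 10 Mbit/s bottleneck link
    delay_ms = (buffer_bytes * 8) / link_rate_bps * 1000
    print(f"queueing delay when the buffer fills: {delay_ms:.0f} ms")
    # -> 800 ms of added latency, from buffering alone.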

Congestion control is a hard problem that permeates the entire network 
as a system-level issue.  In the Internet, that system includes the 
"switches" of all kinds as well as the "hosts".   The ARPANET research 
on congestion control tackled this problem when it was all contained 
within the IMPs' algorithms and mechanisms.  It seems much more 
difficult with TCP, when you have to include mechanisms in all of the 
computers attached to networks - and now there are likely billions of 
them, as well as many players involved in the design and implementation 
of the components.

Jack

On 2/13/23 13:01, Steve Crocker wrote:
> In my view, the answer to 'Has "Congestion Control" in the
> Internet been solved?' is clearly no. I view bufferbloat as one
> important part of the congestion control problem.
>
> Steve
>
>
> On Mon, Feb 13, 2023 at 3:44 PM Jack Haverty via Internet-history 
> <internet-history at elists.isoc.org> wrote:
>
>     It seems that I didn't receive some messages over the
>     weekend....sorry
>     if anyone has already noted what I say below.
>
>     Re the ARPANET and Congestion Control:   This was definitely a hot
>     topic, in particular after DCA took over operations and the
>     network grew
>     in size.   There were DCA-managed contracts to rework the internal
>     mechanisms of the ARPANET to handle the much larger and more
>     diverse networks
>     of IMPs that evolved into the multiple IMP-based networks called the
>     DDN.   Congestion control was just one issue of several that
>     interacted,
>     e.g., routing, flow control, retransmission, buffer management, etc.
>     The IMP design, although a "packet network", in effect had a "serial
>     byte stream" mechanism internally to make sure all data got from
>     source
>     host to destination.  The ARPANET had the equivalent of parts of a
>     TCP
>     built inside the IMPs to guarantee the delivery of a data stream.
>
>     I'm not sure how much historical detail you'll find in traditionally
>     published papers and journals.   Outside of academia that wasn't a
>     priority.  But there were extensive and detailed reports prepared as
>     part of the ARPANET "operations" contracts and delivered to DCA.
>     Here's
>     one 3-volume, multi-year example that discusses a lot of the work
>     in the
>     early 80s on "congestion control" and new internal IMP mechanisms in
>     general:
>
>     https://apps.dtic.mil/sti/citations/ADA053450
>     https://apps.dtic.mil/sti/citations/ADA086338
>     https://apps.dtic.mil/sti/citations/ADA121350
>
>     There's hundreds of pages of detail in those reports and there are
>     others available through DTIC.   I was listed as author on some of
>     these, because at the time that contract was one of "my" contracts --
>     which meant that I had to make sure that the report got written and
>     delivered so we would get paid.   I didn't personally work on the
>     ARPANET technical research, but I did absorb some understanding of
>     the
>     issues and details.  The "IMP Group" was literally just down the hall.
>
>     At the time (early 1980s), I was involved in the early Internet work,
>     when TCP/IP V4 was being created and the various flow and congestion
>     control mechanisms were being defined.  From the ARPANET
>     experience, it
>     was clear to me that the IMP gurus "down the hall" at BBN viewed
>     congestion control as a major issue, and that sometimes surfaced as
>     statements such as "TCP will never work".  TCP didn't address any
>     of the
>     issues of congestion, except by the rudimentary and unproven
>     mechanism
>     of "Source Quench".
>
>     The expectation was that the Internet would work if congestion was
>     avoided rather than controlled, which could be attempted by keeping
>     network capacity above traffic demands, at least long enough that
>     TCP's
>     retransmission and backoff mechanisms in the hosts would throttle
>     down
>     as expected to match what the network substrate was capable of
>     carrying
>     at the time.   Of course those mechanisms were now distributed
>     among the
>     several hosts and network switches (e.g., IMPs, Packet Radios,
>     computer
>     OS, gateways) involved, designed, built, and managed by different
>     organizations, which made it challenging to predict how it would
>     all behave.
>
>     Even today, as an end user, I can't tell if "congestion control" is
>     implemented and working well, or if congestion is just mostly being
>     avoided by deployment of lots of fiber and lots of buffer memory
>     in all
>     the switching locations where congestion might be expected. That of
>     course results in the phenomenon of "buffer bloat".   That's another
>     question for the Historians.  Has "Congestion Control" in the
>     Internet
>     been solved?  Or avoided?
>
>     Jack Haverty
>
>
>
>     On 2/13/23 08:19, Craig Partridge via Internet-history wrote:
>     > On Sat, Feb 11, 2023 at 7:48 AM Noel Chiappa via Internet-history <
>     > internet-history at elists.isoc.org> wrote:
>     >
>     >>
>     >>      > From: Craig Partridge
>     >>
>     >>      > We figured out congestion collapse well enough for the time
>     >>
>     >> It should be remembered that the ARPANET people (hi!) had
>     perhaps solved
>     >> this
>     >> problem a long time before. I'm trying to remember how
>     explicitly they saw
>     >> this as a separate problem from the issue of running out of
>     buffer space
>     >> for
>     >> message re-assembly at the destination IMP, but I seem to
>     recall that RFNMs
>     >> were seen as a needed throttle to prevent the network as a
>     whole from being
>     >> overrun (i.e. what we now think of as 'congestion', although
>     IIRC that term
>     >> wasn't used then), as well as flow control to the source host
>     (as we would
>     >> now call it).
>     >>
>     >> I don't recall exactly where I saw that, but I'd try the BBN
>     proposal to
>     >> DARPA's RFP, and the first AFIPS paper ("The interface message
>     processor
>     >> for
>     >> the ARPA computer network").
>     >>
>     > I don't recall the details either, though I remember stories of
>     Bob Kahn
>     > going to LA to beat up on the first few ARPANET nodes
>     > because he anticipated various issues, I think including
>     congestion.  And
>     > he found them and fixes were made.
>     >
>     > But remember ARPANET was homogeneous -- same speed for each link
>     and a
>     > single control mechanism.  I think John Nagle was
>     > the first to point out ("On packet switches with infinite
>     storage") that
>     > connecting very different networks had its own challenges.
>     > And to my point, not something that a person working with X.25
>     would have
>     > understood terribly well (yes X.75 gateways existed but
>     > they typically throttled the window size to 2 packets, which hid
>     a lot of
>     > issues).
>     >
>     > Craig
>
>     -- 
>     Internet-history mailing list
>     Internet-history at elists.isoc.org
>     https://elists.isoc.org/mailman/listinfo/internet-history
>


