[ih] congestion control... solved?

Dave Taht dave.taht at gmail.com
Mon Feb 13 17:48:29 PST 2023


On Mon, Feb 13, 2023 at 1:02 PM Steve Crocker via Internet-history
<internet-history at elists.isoc.org> wrote:
>
> In my view, the answer to 'Has "Congestion Control" in the Internet been
> solved?' is clearly no.  I view bufferbloat as one important part of
> the congestion control problem.

Consider me nerdsniped.

Imagine, if you will, that somehow, by persistence and quality code,
someone managed to get the flow-queuing packet scheduling mechanism
(as well as its close cousin, packet pacing) described in Toke's paper
below into billions of machines, many of the congested routers on the
internet, and, for all we know, a ton of switches.

http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1251687&dswid=-493

I would really prefer that more folk read the paper and the math than
have me describe it briefly here, but the key property is this:
roughly speaking, so long as a flow's ingress rate stays slightly
below its fair share of the egress rate, that flow observes near-zero
queuing. It was a nice advance over prior attempts at FQ, which always
put new flows at the back of the list. The game theory is good.
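
To make the mechanism concrete, here is a minimal Python sketch of the
two-tier DRR++ idea the paper analyzes (my own simplification, with
made-up names; the real implementation lives in the fq_codel qdisc):
flows hash into queues, a queue that just became active joins a "new"
list that is served ahead of the "old" list, and a per-queue deficit
enforces the fair share.

    # Sketch of DRR++-style flow queuing (new flows served first).
    # Hypothetical and simplified, not the actual fq_codel sources.
    from collections import deque

    QUANTUM = 1514  # one MTU's worth of service per round

    class FlowQueue:
        def __init__(self):
            self.packets = deque()
            self.deficit = 0

    class FQScheduler:
        def __init__(self, nqueues=1024):
            self.queues = [FlowQueue() for _ in range(nqueues)]
            self.new_flows = deque()  # just-active flows, served first
            self.old_flows = deque()  # backlogged flows, round-robined

        def enqueue(self, pkt, flow_hash):
            q = self.queues[flow_hash % len(self.queues)]
            was_idle = not q.packets
            q.packets.append(pkt)
            if was_idle and q not in self.new_flows and q not in self.old_flows:
                q.deficit = QUANTUM        # fresh flows start at the front
                self.new_flows.append(q)

        def dequeue(self):
            while self.new_flows or self.old_flows:
                lst = self.new_flows if self.new_flows else self.old_flows
                q = lst[0]
                if q.deficit <= 0:         # share used up: rotate to the back
                    q.deficit += QUANTUM
                    lst.popleft()
                    self.old_flows.append(q)
                elif not q.packets:        # drained: leave the schedule
                    lst.popleft()
                else:
                    pkt = q.packets.popleft()
                    q.deficit -= len(pkt)  # charge the flow for its bytes
                    return pkt
            return None

A flow that drains completely leaves the schedule, so its next packet
re-enters via the new list; that is exactly where the near-zero queuing
for flows sending below their fair share comes from.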

It incentivizes an application desiring low-latency behavior to use
delay and pacing to achieve that result. Other, greedier, burstier
flows only hurt themselves. Historically, congestion-controlled flows
would violently attempt to shove other packets out of the way to clear
room for themselves in slow start, and then slowly probe for more
bandwidth, in a sawtooth that was stable, but not particularly
desirable.
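
For contrast, the old sawtooth in a screenful of Python (a toy
Reno-style model of mine, not any stack's actual code): exponential
growth in slow start, a linear probe afterwards, and a halving whenever
the stand-in loss signal fires.

    def reno_cwnd(rounds, ssthresh=64.0, loss_at=100.0):
        """Toy AIMD sawtooth: cwnd in MSS units, one step per RTT."""
        cwnd, history = 1.0, []
        for _ in range(rounds):
            history.append(cwnd)
            if cwnd >= loss_at:       # stand-in for a drop or ECN mark
                ssthresh = cwnd / 2   # multiplicative decrease
                cwnd = ssthresh
            elif cwnd < ssthresh:
                cwnd *= 2             # slow start: double every RTT
            else:
                cwnd += 1             # congestion avoidance: +1 MSS per RTT
        return history

    print(reno_cwnd(40))  # 1, 2, 4, ... 64, 65, ... 100, 50, 51, ...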

Pacing is deeply embedded in QUIC, and has long been available in
Linux for CUBIC via the EDF scheduler as well; the slight differences
between going out early when well paced, seeing other traffic, and
probing for more bandwidth are detectable on different curves. There's
some nifty work on "FQuic", leveraging multipath...
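
The pacing half is simple enough to sketch too. Here's the
earliest-departure-time idea in a few lines of Python (the names and
structure are mine, not the kernel's): instead of transmitting a whole
window back to back, stamp each packet with a release time spaced to a
rate of cwnd/RTT.

    def pace_departures(packet_sizes, cwnd_bytes, srtt_s, now_s=0.0):
        """Stamp each packet with a departure time at rate cwnd/RTT."""
        rate = cwnd_bytes / srtt_s   # pacing rate, bytes per second
        t, schedule = now_s, []
        for size in packet_sizes:
            schedule.append((t, size))
            t += size / rate         # the next packet waits its turn
        return schedule

    # Ten 1500-byte packets, a 15 KB window, 100 ms RTT: one per 10 ms.
    for when, size in pace_departures([1500] * 10, 15000, 0.1):
        print(f"t={when * 1000:6.1f} ms  send {size} bytes")

The real implementations do this with per-packet timestamps and a
scheduler; the sketch only shows where the spacing comes from.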

There are, of course, other issues to solve in congestion control,
notably bursty MACs like wifi, and other promising probing techniques,
like packet chirping and "flow start", that I could talk to in a later
message... and there is still some use for AQM techniques, including
ECN... but I like to think that the future for solving congestion
control is bright, given the size of the deployment today, the
now-observable benefits, and the adoption curve.

(until I look at the mess that is 5G. I won't go there.)
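
And since AQM came up: the CoDel control law also fits in a screenful.
This is a compressed, simplified rendering of the RFC 8289 idea (my
illustration, not the reference pseudocode): mark or drop only once a
packet's sojourn time has stayed above a small target for a whole
interval, then tighten the marking interval by 1/sqrt(count) while
congestion persists.

    # Compressed CoDel-ish AQM sketch (simplified from RFC 8289).
    import math

    TARGET = 0.005    # 5 ms acceptable standing queue
    INTERVAL = 0.100  # 100 ms, on the order of a worst-case RTT

    class CoDelish:
        def __init__(self):
            self.first_above = None  # deadline set when sojourn crosses TARGET
            self.next_mark = None    # when the next mark/drop is due
            self.count = 0           # marks so far in this congestion epoch

        def on_dequeue(self, now, sojourn):
            """Return True if this packet should be ECN-marked (or dropped)."""
            if sojourn < TARGET:               # queue is fine: reset everything
                self.first_above = self.next_mark = None
                self.count = 0
                return False
            if self.first_above is None:       # first crossing: start the clock
                self.first_above = now + INTERVAL
                return False
            if self.next_mark is None:
                if now < self.first_above:
                    return False               # not yet above for a full interval
                self.count = 1
            elif now < self.next_mark:
                return False
            else:
                self.count += 1
            # control law: mark faster the longer congestion persists
            self.next_mark = now + INTERVAL / math.sqrt(self.count)
            return True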

>
> Steve
>
>
> On Mon, Feb 13, 2023 at 3:44 PM Jack Haverty via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
> > It seems that I didn't receive some messages over the weekend....sorry
> > if anyone has already noted what I say below.
> >
> > Re the ARPANET and Congestion Control:   This was definitely a hot
> > topic, in particular after DCA took over operations and the network grew
> > in size.   There were DCA-managed contracts to rework the internal
> > mechanisms of the ARPANET to handle the much larger and diverse networks
> > of IMPs that evolved into the multiple IMP-based networks called the
> > DDN.   Congestion control was just one issue of several that interacted,
> > e.g., routing, flow control, retransmission, buffer management, etc.
> > The IMP design, although a "packet network", in effect had a "serial
> > byte stream" mechanism internally to make sure all data got from source
> > host to destination.  The ARPANET had the equivalent of parts of a TCP
> > built inside the IMPs to guarantee the delivery of a data stream.
> >
> > I'm not sure how much historical detail you'll find in traditionally
> > published papers and journals.   Outside of academia that wasn't a
> > priority.  But there were extensive and detailed reports prepared as
> > part of the ARPANET "operations" contracts and delivered to DCA. Here's
> > one 3-volume, multi-year example that discusses a lot of the work in the
> > early 80s on "congestion control" and new internal IMP mechanisms in
> > general:
> >
> > https://apps.dtic.mil/sti/citations/ADA053450
> > https://apps.dtic.mil/sti/citations/ADA086338
> > https://apps.dtic.mil/sti/citations/ADA121350
> >
> > There's hundreds of pages of detail in those reports and there are
> > others available through DTiC.   I was listed as author on some of
> > these, because at the time that contract was one of "my" contracts --
> > which meant that I had to make sure that the report got written and
> > delivered so we would get paid.   I didn't personally work on the
> > ARPANET technical research, but I did absorb some understanding of the
> > issues and details.  The "IMP Group" was literally just down the hall.
> >
> > At the time (early 1980s), I was involved in the early Internet work,
> > when TCP/IP V4 was being created and the various flow and congestion
> > control mechanisms were being defined.  From the ARPANET experience, it
> > was clear to me that the IMP gurus "down the hall" at BBN viewed
> > congestion control as a major issue, and that sometimes surfaced as
> > statements such as "TCP will never work".  TCP didn't address any of the
> > issues of congestion, except by the rudimentary and unproven mechanism
> > of "Source Quench".
> >
> > The expectation was that the Internet would work if congestion was
> > avoided rather than controlled, which could be attempted by keeping
> > network capacity above traffic demands, at least long enough that TCP's
> > retransmission and backoff mechanisms in the hosts would throttle down
> > as expected to match what the network substrate was capable of carrying
> > at the time.   Of course those mechanisms were now distributed among the
> > several hosts and network switches (e.g., IMPs, Packet Radios, computer
> > OS, gateways) involved, designed, built, and managed by different
> > organizations, which made it challenging to predict how it would all behave.
> >
> > Even today, as an end user, I can't tell if "congestion control" is
> > implemented and working well, or if congestion is just mostly being
> > avoided by deployment of lots of fiber and lots of buffer memory in all
> > the switching locations where congestion might be expected. That of
> > course results in the phenomenon of "buffer bloat".   That's another
> > question for the Historians.  Has "Congestion Control" in the Internet
> > been solved?  Or avoided?
> >
> > Jack Haverty
> >
> >
> >
> > On 2/13/23 08:19, Craig Partridge via Internet-history wrote:
> > > On Sat, Feb 11, 2023 at 7:48 AM Noel Chiappa via Internet-history <
> > > internet-history at elists.isoc.org> wrote:
> > >
> > >>
> > >>      > From: Craig Partridge
> > >>
> > >>      > We figured out congestion collapse well enough for the time
> > >>
> > >> It should be remembered that the ARPANET people (hi!) had perhaps solved
> > >> this
> > >> problem a long time before. I'm trying to remember how explicitly they
> > saw
> > >> this as a separate problem from the issue of running out of buffer space
> > >> for
> > >> message re-assembly at the destination IMP, but I seem to recall that
> > RFNMs
> > >> were seen as a needed throttle to prevent the network as a whole from
> > being
> > >> overrun (i.e. what we now think of as 'congestion', although IIRC that
> > term
> > >> wasn't used then), as well as flow control to the source host (as we
> > would
> > >> now call it).
> > >>
> > >> I don't recall exactly where I saw that, but I'd try the BBN proposal to
> >> DARPA's RFP, and the first AFIPS paper ("The interface message processor
> > >> for
> > >> the ARPA computer network").
> > >>
> > > I don't recall the details either, though I remember stories of Bob Kahn
> > > going to LA to beat up on the first few ARPANET nodes
> > > because he anticipated various issues, I think including congestion.  And
> > > he found them and fixes were made.
> > >
> > > But remember ARPANET was homogeneous -- same speed for each link and a
> > > single control mechanism.  I think John Nagle was
> > > the first to point out ("On packet switches with infinite storage") that
> > > connecting very different networks had its own challenges.
> > > And to my point, not something that a person working with X.25 would have
> > > understood terribly well (yes X.75 gateways existed but
> > > they typically throttled the window size to 2 packets, which hid a lot of
> > > issues).
> > >
> > > Craig
> >
> > --
> > Internet-history mailing list
> > Internet-history at elists.isoc.org
> > https://elists.isoc.org/mailman/listinfo/internet-history
> >
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history



--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC


