[ih] low latency traffic (was UDP Length Field?)
Joseph Touch
touch at strayalpha.com
Thu Dec 3 16:17:04 PST 2020
> On Dec 3, 2020, at 3:09 PM, John Gilmore via Internet-history <internet-history at elists.isoc.org> wrote:
>
> Jack Haverty via Internet-history <internet-history at elists.isoc.org> wrote:
>> From what I can anecdotally see today, 40 years later, low-latency
>> datagram service on the Internet is not on anyone's radar. I helped a
>> friend investigate his attempts to use a gaming-type app over the
>> Internet last year, and our experiments discovered that packet loss
>> rate was surprisingly (to me) measured at 0%, latency was on average
>> in the hundreds of milliseconds, but had "tails" of data points out to
>> 30 seconds. The Internet "IP datagram service" today seems to be very
>> connection-oriented, delivering every packet but with noticeable very
>> long delays. I suspect this may be a cause of the anomalies we often
>> see today in TV interviews conducted using the Internet.
>
> Actually, there is a very active and somewhat successful effort (active
> queue management) to improve Internet latency.
...
> Given technological gains in semiconductor speed and power, and optical
> fibers, the Internet industry quickly learned that it was cheaper to buy
> higher bandwidth than it was to try to prioritize categories of traffic.
> This also helped with latency since faster networks tend to have lower
> latencies as well as higher overall thruput.
High speed doesn't reduce propagation delay - it can even increase it (fiber vs. twisted pair). It does reduce transfer delay (the time to deliver a large object).
A bigger issue for TCP is high bandwidth coupled with cheap RAM (the latter noted below).
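To make the distinction concrete, here is a back-of-the-envelope sketch in Python (all numbers are illustrative assumptions, not measurements):

    # Latency components for a single link. Propagation depends on
    # distance and medium (~2e8 m/s in fiber); serialization (transfer)
    # delay depends on link speed. All values here are illustrative.
    def one_way_delay(distance_km, link_bps, pkt_bytes, prop_speed=2e8):
        propagation = (distance_km * 1000) / prop_speed
        serialization = (pkt_bytes * 8) / link_bps
        return propagation, serialization

    # Coast-to-coast path, 1500-byte packet:
    for bps in (1.5e6, 100e6, 10e9):            # T1, Fast Ethernet, 10GigE
        prop, ser = one_way_delay(4000, bps, 1500)
        print(f"{bps/1e6:>8.1f} Mb/s: prop={prop*1e3:.1f} ms  ser={ser*1e6:.1f} us")

    # The bandwidth-delay product is how much data a router may have to
    # buffer to keep a fast path full; cheap RAM makes that affordable:
    print(f"BDP at 10 Gb/s, 40 ms RTT: {10e9 * 0.040 / 8 / 1e6:.0f} MB")

Raising the link speed shrinks only the serialization term; the 20 ms of propagation is untouched.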
> ...
> A countervailing trend was caused by TCP's insistence on increasing its
> offered load until a packet is actually dropped, combined with the
> decreasing cost of RAM buffering throughout the network. It was
> inconceivable for 1980s routers to have more than a few packets' worth
> of buffering, so back then, packets dropped immediately after a flow was
> not sustainable. TCP's flow control was designed to respond to that
> signal.
It had two possible responses to a drop - send again or back off. The former is the wrong choice for contention-based loss; the latter is wrong for error-based loss. Either way you choose, you lose; TCP chose based on the assumption of the time, that most losses were due to contention.
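A toy model of that choice, under the classic assumption that loss means contention (the function and constants here are illustrative, not from any real stack):

    # Toy model of a TCP-like sender's reaction to a lost segment.
    # The Jacobson-era assumption: loss == congestion, so retransmit
    # the data but multiplicatively back off the window. On an
    # error-prone (e.g., wireless) link this is the wrong guess: the
    # sender slows down even though the path isn't congested.
    def on_loss(cwnd, ssthresh, assume_congestion=True):
        if assume_congestion:
            ssthresh = max(cwnd // 2, 2)   # multiplicative decrease
            cwnd = ssthresh
        # either way, the lost segment itself is retransmitted
        return cwnd, ssthresh

    cwnd, ssthresh = 64, 64
    for _ in range(3):
        cwnd, ssthresh = on_loss(cwnd, ssthresh)
    print(cwnd, ssthresh)   # 8 8: three error-based losses, rate collapsed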
> Now routers and even network interface chips can have many
> seconds' worth of RAM, so 10 to 20 years of designs caused packets to be
> queued rather than dropped. The resulting interaction is called
> Bufferbloat.
The problem was known long before that (the mid-1980s) and was the basis for RED in 1993 or so, well before the phenomenon was dubbed “Bufferbloat” around 2009.
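RED's core idea, sketched (parameter values are typical textbook choices, not normative):

    import random

    # Sketch of Random Early Detection (Floyd & Jacobson, 1993).
    # The drop probability ramps up as an EWMA of the queue length
    # moves between two thresholds, signaling senders before the
    # buffer fills. Parameters below are illustrative defaults.
    MIN_TH, MAX_TH, MAX_P, WEIGHT = 5, 15, 0.1, 0.002
    avg = 0.0

    def red_should_drop(queue_len):
        global avg
        avg = (1 - WEIGHT) * avg + WEIGHT * queue_len   # smoothed occupancy
        if avg < MIN_TH:
            return False                                # queue short: accept
        if avg >= MAX_TH:
            return True                                 # persistent overload
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)  # linear ramp
        return random.random() < p

The point is to drop (or, with ECN, mark) a few packets early, so TCP backs off before the queue, and hence the delay, grows large.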
> Significant work by Van Jacobson and Kathleen Nichols produced a
> "controlled delay" algorithm, CoDel, that could generally, reliably, and
> cheaply condition a router's queues without manual tuning.
There are others that have similar properties, including BLUE.
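For contrast with RED's queue-length focus, a simplified sketch of CoDel's delay-based control law (the constants are the published defaults; the real algorithm keeps more state than this):

    import math

    # Simplified CoDel (Nichols & Jacobson): watch per-packet sojourn
    # time rather than queue length. If delay stays above TARGET for a
    # full INTERVAL, start dropping, with the drop spacing shrinking
    # as interval/sqrt(count). A sketch, not the full state machine.
    TARGET = 0.005      # 5 ms of acceptable standing delay
    INTERVAL = 0.100    # 100 ms, roughly a worst-case RTT

    class CoDel:
        def __init__(self):
            self.first_above = None   # when delay first exceeded TARGET
            self.drop_count = 0

        def on_dequeue(self, sojourn, now):
            """Return True if this packet should be dropped."""
            if sojourn < TARGET:
                self.first_above = None       # delay is fine; reset
                self.drop_count = 0
                return False
            if self.first_above is None:
                self.first_above = now        # start the INTERVAL clock
                return False
            next_drop = INTERVAL / math.sqrt(self.drop_count + 1)
            if now - self.first_above >= next_drop:
                self.first_above = now
                self.drop_count += 1
                return True
            return False

Because it keys on sojourn time, it needs no per-link queue-length tuning, which is the "without manual tuning" property mentioned above.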
> ...
> I believe the ultimate cure for high latency would be to fix endpoint
> TCP implementations to be latency-sensitive and not just packet-drop
> sensitive.
Several such TCP variants have already been developed, most notably the delay-based Vegas and, more recently, the widely deployed BBR (CUBIC, the current default in many stacks, still reacts to loss rather than delay).
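Vegas's delay-based idea in miniature (ALPHA/BETA are the usual thresholds from the paper; this ignores most of the real algorithm):

    # Sketch of TCP Vegas's delay-based window adjustment. Vegas
    # compares the throughput expected at the minimum RTT ever seen
    # with the throughput actually achieved; the gap estimates how
    # many packets are sitting in queues along the path.
    ALPHA, BETA = 2, 4    # target range, in queued packets

    def vegas_adjust(cwnd, base_rtt, current_rtt):
        expected = cwnd / base_rtt                # pkts/s with no queuing
        actual = cwnd / current_rtt               # pkts/s achieved
        queued = (expected - actual) * base_rtt   # estimated pkts in queues
        if queued < ALPHA:
            return cwnd + 1                       # path underused: speed up
        if queued > BETA:
            return cwnd - 1                       # queues building: back off
        return cwnd

    print(vegas_adjust(cwnd=20, base_rtt=0.050, current_rtt=0.075))
    # prints 19: a third of the window is queued, so Vegas slows down

Note that it reacts to rising delay directly, long before any drop would occur.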
> But that effort has not succeeded so far, because the
> resulting implementations produce lower thruput when facing competing
> traffic from the old algorithm that floods its own traffic ahead of them
> in router queues. It's a "nice guys finish last" tragedy of the commons
> problem that causes major implementers to not switch to the "slower"
> algorithm. This is why bufferbloat has had to be fixed with CoDel in
> middleboxes, rather than in TCP at the endpoints, somewhat damaging the
> "dumb network core" principle that has allowed the Internet to scale.
> There were hints of this in the early Router Requirements RFCs that said
> routers forced to drop packets should punish end nodes that were not
> backing off their offered load, such as section 5.3.6 of RFC 1812 in
> 1995, but this was still very much a research topic back then (as it is
> today).
That’s basically what ECN is supposed to do, except that rather than “punish,” it gives senders early information about congestion (a mark instead of a drop) before the buffer fills up. TCP support for ECN is widely deployed, though not enabled by default as often as it could be.
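The router side of that in a sketch (codepoint values are from RFC 3168; the helper function is illustrative):

    # Sketch of a router's ECN decision. RFC 3168 puts the ECN field
    # in the two low-order bits of the IP TOS/traffic-class byte.
    NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

    def on_early_congestion(tos):
        """Called when the AQM (e.g., RED above) signals congestion.
        Returns (new_tos, dropped)."""
        if (tos & 0b11) in (ECT_0, ECT_1):
            return (tos & ~0b11) | CE, False   # ECN-capable: mark, deliver
        return tos, True                       # not ECN-capable: drop

    # The receiver echoes CE back in TCP's ECE flag, and the sender
    # reduces its window as if the packet had been dropped: the early
    # information arrives without costing a retransmission.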
Joe