[ih] Internet/Wireless Principle of Levelness

Dave Taht dave at taht.net
Mon Nov 11 13:40:14 PST 2019


Jack Haverty via Internet-history <internet-history at elists.isoc.org>
writes:

> On 11/11/19 8:31 AM, Dave Taht via Internet-history wrote:
>
>> And - of course! it's got the "deep buffers" providers require.

Obviously I am mortally opposed to deep buffering. No more than 60 ms of
buffering is needed, especially now that modern TCPs do pacing.

Instead it's often 400+ ms in many switches, and much worse in CPE and
ISP head-ends.
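
To put rough numbers on that (assuming a 100 Mbit/s link, purely for
illustration): the worst-case delay a queue adds is its size divided by
the drain rate, so

    queue delay = bytes queued / link rate
    60 ms  at 100 Mbit/s = 0.060 s * 12.5 MB/s = 750 KB of buffer
    400 ms at 100 Mbit/s = 0.400 s * 12.5 MB/s = 5 MB of buffer

and every packet stuck behind that 5 MB - including your game or voice
traffic - eats the full 400 ms.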

> I'm just a User now.  Just last year I helped a friend, another User,
> figure out why his "gaming" app, which depends on interactive behavior
> across the net, was sometimes unusable.  I was curious, since I also
> sometimes see visual and audio artifacts on streaming TV content, making
> TV sometimes similarly unusable, even though I have 150+ Mb/sec internet
> service.   We Users tend to think "Oh, the net's broken again, they're
> probably working on fixing it".

No, it's usually caused by something else temporarily using up the
bandwidth, combined with overbuffering along the path.

Bufferbloat is still at epidemic proportions worldwide.

Users have certainly been trained to expect that, in the presence of a
big upload or download, their network will go to hell. It's totally
outside their everyday experience to expect otherwise.

You, as a relatively smart user, could apply some form of SQM (HTB
shaping with fq_codel, or cake) on any of the thousands of
consumer-grade boxes now shipping...

... except the one your ISP provides. What we usually recommend nowadays
is putting something like an EdgeRouter X, Turris Omnia, or Evenroute in
front of your <200 Mbit connection, and an APU2 for gigabit...

reflashed with OpenWrt, which also has the benefits of better security,
IPv6 support, a completely open source stack, and the latest
bufferbloat-beating algorithms. It's a matter of putting in the right up
and down bandwidth settings, setting the framing correctly for
ethernet/dsl/cable, and turning SQM on.
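
As a concrete sketch - the interface names and rates here are
placeholders, and you should set the bandwidths a bit below what your
ISP actually delivers - this is roughly what the sqm scripts do under
the hood, on any Linux recent enough to have cake:

    # egress: shape uploads to just under the provisioned uplink rate
    tc qdisc replace dev eth0 root cake bandwidth 18mbit docsis

    # ingress: redirect inbound traffic through an ifb device and shape
    # it to just under the downlink rate, so the queue builds here
    # rather than in the ISP's overbuffered head-end
    ip link add ifb4eth0 type ifb
    ip link set ifb4eth0 up
    tc qdisc replace dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol all matchall \
        action mirred egress redirect dev ifb4eth0
    tc qdisc replace dev ifb4eth0 root cake bandwidth 140mbit docsis ingress

The docsis keyword handles cable framing; substitute ethernet, or the
dsl overhead keywords, to match your link.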

With this stuff in place and configured correctly, the day-in, day-out
difference in your network is like night and day, for all forms of
traffic. Streaming - especially to two or more devices - works
brilliantly, videoconferencing and VoIP are glitch-free, and the web
stays fast, even while torrenting.

I used to go to doubters' houses to set it up and convince folk in
person - convincing anyone by email to burn the time to try it is
rather hit or miss, no matter how annoyed they are with the behavior of
their network.

... some folk like pfSense, also. I'm not a fan - we shipped "cake" 3
years back, and that seems to have solved every last problem we had and
then some, with per-host/per-flow FQ, some support for classification,
and so on:

https://arxiv.org/pdf/1804.07617.pdf
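
To give the flavor (a hypothetical one-liner - the interface name and
rate are placeholders), those features land as simple keywords on the
tc command line:

    # dual-srchost: fair share per internal host first, then per flow;
    # diffserv4: four priority tins, driven by dscp markings
    tc qdisc replace dev eth0 root cake bandwidth 18mbit dual-srchost diffserv4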

We fixed wifi 5 years back - while working at gfiber, it became obvious
with gige fiber just how horrific the bufferbloat that had shifted onto
the wifi hop was ( https://lwn.net/Articles/705884/ ) - and we fixed it.
Thoroughly:

https://www.usenix.org/system/files/conference/atc17/atc17-hoiland-jorgensen.pdf

That's shipping in about 30% of the new gear today (QCA ath9k, ath10k,
mediatek mt76, and Intel's wifi 6 chips). In Google's wifi at Starbucks. Etc.

Google's latency vs RvR (rate vs. range) plots were to die for:
http://flent-newark.bufferbloat.net/~d/Airtime%20based%20queue%20limit%20for%20FQ_CoDel%20in%20wireless%20interface.pdf

Again, it's rare as yet to see the algorithms *configured* in
ISP-provided CPE - free.fr adopted it (3+ million boxes) in 2012, and
several dozen others, mostly in Europe - Telenor, for example - have
also, but elsewhere I don't know. It's needed badly in Africa and South
America especially - and US cable CMTSes suck, in particular. It turns
out to be much easier to fix the overbuffering problem on the home
router with inbound shaping than it is to convince the ISPs to fix
their head-ends. We ended up putting a docsis mode into cake that makes
them *rock*...

Smart users can, however, fix it in 5 minutes.

>
> Using the ancient network management tools, we tracked the cause down to
> latency.  The typical latency we measured across the net was 100 msec or
> less.  But occasionally it would jump to several seconds and stay there
> for a while.   I was surprised to see that zero packets were being lost,
> but many were delayed as much as 30 seconds.  Without the ability to dig
> inside the boxes, I can only speculate that such behavior at the IP
> level was what made the gaming app unusable, and could cause those
> artifacts I see in my TV video and audio. 

I'd say it wasn't speculation but truth.

The classic TCP sawtooth is largely dead in most consumer CPE;
worldwide, TCP is now governed more by the maximum size of the send and
receive windows.

I ranted (with a ton of data) about all the causes of bufferbloat here -
not just the technology, but the "science" and politics. Actual buffer
sizes then ran to tens of seconds, especially in wifi:

https://conferences.sigcomm.org/sigcomm/2014/doc/slides/137.pdf


> My friend tried complaining to his ISPs' tech support, but they all said
> their service was working fine.  Perhaps that is a consequence of the
> "Levelness" that now makes Users' applications involve many different
> service and equipment providers?

If you are trying to bring this back into "internet history" - I'm
not sure what principle would apply. Foolish idealism, on my part? And
Jim's?

For some of the history, scroll down to the beginning of

https://gettys.wordpress.com/category/bufferbloat/

The story of the death of the paper "RED in a Different Light" is
especially ironic for a variety of reasons... one reviewer scorned it
and suggested the students involved thoroughly review Van Jacobson's
core papers (when, in that blind review, it was Van and Kathie who had
written it!)

> Is this latency how Users now see the effects of those "deep buffers"?  
> Why would providers require a feature that makes their customers
> unhappy.....?

It sells more bandwidth: deep buffers avoid drops in single-flow
throughput benchmarks, and packet loss of any kind is universally viewed
as "bad". I could go on.

>
> I'm still just being curious about the History of the Internet,
> especially how its service evolved -- as seen by the Users.

I often thought that the tale of the bufferbloat.net effort would make a
good book - about how a bunch of internet originals - Jim Gettys, Van
Jacobson, Vint Cerf, ESR, myself - banded together with the FOSS folk,
academia, and industry for one last ride at making it possible for the
internet to be a universal communications medium for voice,
videoconferencing, the web, and all the future applications we'd always
dreamed of...

I have to admit!

The early years *were fun*. I woke up every day filled with new ideas to
"fix the internet", as did my whole team. Of late... bogged down by the
latest braindead idea in the IETF (L4S vs SCE) - I'm merely happy that
*my networks* are great, as are those of my friends and of those paying
attention.

Despite essentially succeeding at eradicating the bufferbloat problem -
in theory, and with deployed code, in 2012, with fq_codel (now RFC
8290) - and despite that algorithm now being the default in over a
billion devices (Apple OSX, iOS, and Linux - and available in FreeBSD),
it's still not on (except for smart users) where it would count most:
the bottleneck routers.
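
(Checking whether your own Linux box is in that billion takes two
commands - a quick sketch, assuming a reasonably modern distro:

    sysctl net.core.default_qdisc   # prints fq_codel on most modern distros
    tc qdisc show dev eth0          # shows what's actually attached

but the box that matters is the bottleneck router, not your laptop.)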

Who knows? Maybe another couple of years?

I'd like it if I were elected (posthumously) to the Internet Hall of
Fame for the last 9 years I've spent trying to fix the internet, but
thus far I don't seem to even rate an entry in Wikipedia.

Still, I'd be happy to help y'all here get at least your own networks
working better; if you need any help getting this stuff configured,
please contact me offline.

> /Jack



