[ih] WiFi, DV, and transmission delay

Dave Taht dave at taht.net
Sat Feb 23 05:03:17 PST 2019


jnc at mercury.lcs.mit.edu (Noel Chiappa) writes:

>     > From: Dave Taht
>
>     > The queue may not have been building at the imp but in the app.
>
> What app? Sorry, I'm confused by this. The routing was a somewhat low-level
> function in the IMPs; the hosts never saw it, or knew of it.
>
> Or did you mean that user traffic queues were building in the IMP, and
> the routing updates got buried behind them? I don't recall any more, but
> routing, liveness, etc packets may have had priority over user traffic;
> 3803 might mention it.

I hadn't read that before posting that message. 3803 (
https://apps.dtic.mil/dtic/tr/fulltext/u2/a053450.pdf ) was really,
really good, and many of the problems documented there have recurred
today. As one example from that document, although it's somewhat outside
the definition of bufferbloat, the IMPs were retrying packet
transmissions an exorbitant number of times, which translates into
bufferbloat in time, if not in actual packets.

(Was there a TTL equivalent in the ARPANET, or did that only arrive later?)

so... moving forward in time...

Early 802.11b was made real by a *couple* of retries at the MAC layer.
We'd learned, via the Metricom rollout ('92-'01) and the "STRIP"
protocol, and from watching the first 802.11 implementations fail to
scale up (over IPX/SPX, not IP), that with the level of loss we were
seeing, TCP couldn't scale up without just.a.couple.retries at the MAC
layer. So the Arlan wireless cards did a couple of retries (I don't
know to this day when everyone converged on this being a "good idea",
but it was about 1998 or '99), and the 802.11b standard, which followed
802.11, took off like a rocket.

I talked about the Metricom and STRIP experiments in the second half of
this early bufferbloat-vs-wifi talk at MIT: https://www.youtube.com/watch?v=Wksh2DPHCDI&feature=youtu.be

Over time, retries in wifi got excessive. Nowadays it's somewhere
between 10 and 30 retries in most drivers and firmware, and some are
*infinite*. Seeing 30 seconds' worth of wifi retries in tropical rain
was what got me off a beach in Nicaragua and into fixing bufferbloat.
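
To put a rough number on what a retry budget costs in latency, here's a
back-of-the-envelope sketch in Python. The frame size, PHY rate, and
per-attempt overhead below are illustrative assumptions, not
measurements from any particular driver:

    # Worst-case head-of-line delay when one frame exhausts its MAC
    # retry budget.  All numbers are illustrative assumptions.
    def worst_case_hol_delay_s(frame_bytes=1500, phy_rate_mbps=1.0,
                               retries=30, overhead_us=300.0):
        # Airtime for one attempt at the given PHY rate, in microseconds.
        airtime_us = frame_bytes * 8 / phy_rate_mbps
        # Each attempt pays airtime plus (roughly) preamble, ACK timeout,
        # and backoff overhead; real 802.11 backoff grows per attempt.
        return (retries + 1) * (airtime_us + overhead_us) / 1e6

    # At a rain-faded 1 Mbps with a 30-retry budget, a single 1500-byte
    # frame can occupy the air for ~0.38 seconds -- and every packet
    # queued behind it waits.
    print(worst_case_hol_delay_s())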

Worse, those retries...

are on top of what most wifi rate control algorithms do. "Minstrel",
developed around 2008, made the observation that it was often faster to
retry two transmits at a higher rate than to fall back to a lower rate.

Minstrel was a really good addition to our knowledge of how to make
higher-speed encodings work in a noisy radio environment; it's a shame
the paper never made it past academic review. I keep a copy here:

http://blog.cerowrt.org/post/minstrel/
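
The core idea is simple enough to sketch. This is my paraphrase, not
Minstrel's actual code: the EWMA weight and the rate-times-probability
throughput estimate below are simplifications of what the real
implementation does:

    # A sketch of Minstrel's central idea: rank rates by *expected*
    # throughput (rate x smoothed delivery probability), which often
    # favors retrying at a fast rate over falling back to a slow one.
    EWMA = 0.25  # smoothing weight for new samples (illustrative)

    class RateStats:
        def __init__(self, mbps):
            self.mbps = mbps
            self.prob = 1.0  # smoothed delivery probability

        def update(self, acked, attempts):
            # Fold the latest sampling results into the running average.
            sample = acked / attempts if attempts else 0.0
            self.prob = (1 - EWMA) * self.prob + EWMA * sample

        def expected_tput(self):
            return self.mbps * self.prob

    rates = [RateStats(m) for m in (6, 12, 24, 54)]
    for r in rates:
        r.update(acked=8, attempts=10)  # pretend sampling results
    best = max(rates, key=RateStats.expected_tput)

A 54 Mbps rate that only delivers a third of the time still beats a
perfectly reliable 12 Mbps rate on this metric, which is why retrying
fast often wins.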

On top of that, radios and DSPs have gotten better since 2002, so we
needed to retry less, not more...

I'm still trying to rip out excessive retries everywhere in wifi and not making
much progress.

I don't dare think about what LTE does.

...

The other crazy thing we did in the bufferbloat effort was to add FQ
with "drop head" rather than FIFO with drop tail, at least in much of
the stack above the drivers and firmware. That definitely would not
have worked in the IMP era, when queues were too small and FQ was as
yet un-thought-of; basic packet prioritization made more sense then,
with small queues and well-defined control messages.
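
A toy contrast of the two policies, as a sketch of the general idea
assuming fixed packet budgets (the real fq_codel does far more, with
DRR across flows and CoDel's delay target):

    from collections import deque

    class FifoTailDrop:
        def __init__(self, limit):
            self.q, self.limit = deque(), limit
        def enqueue(self, pkt):
            if len(self.q) >= self.limit:
                return False            # full: drop the *newest* arrival
            self.q.append(pkt)
            return True

    class FqHeadDrop:
        # One queue per flow; on overflow, drop the *oldest* packet of
        # the longest flow.  The loss signal reaches the sender a whole
        # queue-length sooner, and sparse flows are left untouched.
        def __init__(self, limit):
            self.flows, self.limit, self.count = {}, limit, 0
        def enqueue(self, flow_id, pkt):
            self.flows.setdefault(flow_id, deque()).append(pkt)
            self.count += 1
            if self.count > self.limit:
                fat = max(self.flows, key=lambda f: len(self.flows[f]))
                self.flows[fat].popleft()  # head drop from the fat flow
                self.count -= 1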


>
>     > Didn't arpanet also have some forms of flow control?
>
> Yes, RFNM's - a network-level ACK for the delivery of a 'message' (the ARPANET
> term for a user packet) to the host at the far end. 'Messages' were broken up
> into smaller packets (I'm not positive of the term used for them, but I think
> it may - confusingly! - have been 'packet' - let me call them 'frames' here)
> inside the network, so at the destination IMP a 're-assembly buffer' had to be
> allocated for messages longer than a frame. RFNM's helped prevent the
> network from being overloaded - but see:
>
>     J.M. McQuillan, W.R. Crowther, B.P. Cosell, D.C. Walden, and F.E. Heart,
>         "Improvements in the Design and Performance of the ARPA Network",
>         Proceedings AFIPS, 1972 FJCC, Vol. 40, pp. 741-754.
>
> for a problem which causes IMPs to wedge. (I'm pretty sure that's available
> online. If not, let us know.)

I'll look.
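
In the meantime, here's how I read the RFNM mechanism from the
description above, as a sketch. The one-message-in-flight window per
destination is my simplification; the real ARPANET eventually allowed
several messages outstanding per host pair:

    from collections import defaultdict, deque

    class RfnmGate:
        # Hold further messages to a destination until the network
        # returns a RFNM for the one in flight.
        def __init__(self):
            self.in_flight = set()             # dests awaiting a RFNM
            self.pending = defaultdict(deque)  # held-back msgs per dest

        def send(self, dest, msg, transmit):
            if dest in self.in_flight:
                self.pending[dest].append(msg)  # blocked on the RFNM
            else:
                self.in_flight.add(dest)
                transmit(dest, msg)

        def on_rfnm(self, dest, transmit):
            if self.pending[dest]:
                # Release the next held message; still in flight.
                transmit(dest, self.pending[dest].popleft())
            else:
                self.in_flight.discard(dest)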

>
>
>     >> Alas, I don't remember any key names/titles to look up on that work
>     >> ... One name that keep coming up in my mind as associate with it is
>     >> Zaw-Sing Su. There's someone else who I think did more, but I just
>     >> can't remember who it was.
>
> Finally remembered who it was; Jose Garcia-Luna. He added a mechanism to DV
> that basically prevented the formation of loops. I don't recall the details of
> how it worked, but if you visualize a network as a pool of water, and a
> connectivity change is a stone dropped into the pool, then the routing updates
> are like the ripples that spread out from the point of impact.  Anyway, IIRC,
> Jose's mechanism limits changes to a single ripple, so it's even better than
> loop prevention, it bounds the time to respond to a connectivity change (I _think_ -
> it's been decades since I looked at it).

This is a really good description. I can ask Juliusz where Babel's
loop-free "feasibility condition" came from... I can't find the
relevant paper right now.

https://www.irif.fr/~jch/software/babel/babel-20140311.pdf
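
The condition itself is easy to state, though. Here's a sketch of the
metric-only version; Babel as specified also tracks sequence numbers,
which this omits:

    INF = 0xFFFF  # "infinity" metric

    class Destination:
        def __init__(self):
            # Best metric we have *ever* advertised for this destination.
            self.feasible_distance = INF

        def feasible(self, advertised_metric):
            # Accept a route only if it is strictly better than anything
            # we ever told our neighbors; such a route cannot possibly
            # pass back through us, so no loop can form.
            return advertised_metric < self.feasible_distance

        def on_advertise(self, metric):
            # Called whenever we ourselves advertise this destination.
            self.feasible_distance = min(self.feasible_distance, metric)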


> Anyway, just Google 'routing "garcia-luna"' and a bunch of his stuff will pop up.
>
> Anyone using a DV routing system without using his algorithm (or an equivalent)
> is really missing out. RIP doesn't have it because his work post-dates RIP.

I hope it's something new to me!

>    Noel


