<div dir="ltr">In the late 1980s Van Jacobson gave a great talk about the interaction between TCP windows and the ARPANET windows (controlled by RFNUM messages as I recall). What I don't remember is whether it was an IETF talk or and End-to-End Task Force talk. If it is an IETF talk, it is in the minutes.<div>Craig</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 18, 2019 at 10:41 AM Dave Taht <<a href="mailto:dave@taht.net">dave@taht.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><a href="mailto:jnc@mercury.lcs.mit.edu" target="_blank">jnc@mercury.lcs.mit.edu</a> (Noel Chiappa) writes:<br>

> > From: Dave Taht
>
> > These days I periodically work on the babeld DV routing protocol. Among
> > other things it now has an RTT based metric ... It's generally been my
> > hope, that with the addition of fair queuing ... that the early
> > experiences with DV being problematic were... early bufferbloat.
>
> I don't think so (and in any event, IMPs had such a small amount of buffering
> that they couldn't do extreme buffer-bloat if they wanted to!) Newer updates
> being in the processing queue behind older updates (which is kind of the
> key thing that's going on in buffer-bloat, IIRC) were no doubt an issue, but
> the problems were more fundamental.

The queue may not have been building at the IMP but in the app. Didn't
the ARPANET also have some form of flow control? I see that the size of
the listen queue was noted as a potential problem in one of the ARPANET
documents I've read this week, and timeouts in general seem to have been
a hard problem to conceptualize and formalize.

One big side effect of the bufferbloat effort is that we also switched
the Linux world to head-drop queuing rather than tail drop, with
suitable algorithms (RFC 8290 and RFC 8289, sch_fq) to resist
starvation. TCP RTOs due to tail drop vanished, and stale information,
such as old routing packets or late VoIP frames, rarely gets sent.
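
For the curious, a toy sketch of the difference (nothing like the real
sch_fq/fq_codel code, just the core idea that on overflow the stalest
packet, not the freshest, is the one sacrificed):

    from collections import deque

    def enqueue_head_drop(queue: deque, pkt, limit: int):
        """Head drop: on overflow, evict the OLDEST packet, so stale
        data (old routing updates, late VoIP frames) dies first and
        fresher packets still get through."""
        if len(queue) >= limit:
            queue.popleft()  # discard the stalest packet
        queue.append(pkt)

    def enqueue_tail_drop(queue: deque, pkt, limit: int):
        """Classic tail drop: the NEWEST packet is refused while the
        stale ones stay queued -- the behavior behind those TCP RTOs."""
        if len(queue) >= limit:
            return  # newest packet is lost
        queue.append(pkt)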

A lot of folk, including myself, were very nervous about the stability
of this, but so far so good.

In terms of tackling many of the problems wifi has (slow bus
arbitration, bursty transmits), I'm kind of fiercely proud of what we
did in "Ending the Anomaly":

https://arxiv.org/pdf/1703.00064.pdf

> The thing is that a DV routing architecture is fundamentally a distributed
> computation - i.e. node A does a tiny bit of the work, hands its intermediate
> data output to node B, repeat many times. This is fundamentally different
> from LS and its descendants, where everyone gets the basic data at about the
> same time, and then does all the computation _locally_, in parallel.
>
> Although both links and processors are _much_ faster than they used to be,
> the speed of light (IOW point-point transmission delays) hasn't changed.
> So in whatever equation one uses to describe the settling time (i.e. the
> amount of time needed for everyone's tables to fully update and settle down),
> a major term will not have improved.

Later in this thread there was extrapolation of wired network speeds
without enough terms to capture how weird wireless networks are in
comparison.

> Although the two different approaches are probably not _that_ far off, now,
> since real-time delay (RTD) in flooding data is also significant. The
> difference will be if the DV calculation requires multiple intermediate
> computational steps (i.e. a node having to process more than one incoming
> routing table update for a given destination), in which case the RTD will
> repeat and accumulate, so it will probably always have a higher settling
> time than Map Distribution approaches such as LS.
>
> Anyway, if one is dealing with a network in which the rate of connectivity
> change is faster, in real time, than the settling time, hilarity
> ensues.

Agreed. Add head-drop queuing, though...

> The
> ARPANET's use of a poorly smoothed delay metric (well, load-sensitive routing
> _was_ a goal), which increased the dynamicity of the inputs, made things
> considerably worse.

Yes! In the FQ world, though, the delay is a function of the number of
flows, not the number of packets. Much, much smoother.
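
(A rough back-of-the-envelope for that claim, assuming a DRR-style FQ
scheduler; the quantum and link speed below are made-up illustrative
numbers:)

    def fq_sparse_delay(active_flows: int, quantum_bytes: int,
                        link_bps: float) -> float:
        """Approximate upper bound on the queuing delay a packet from a
        new or sparse flow sees under DRR-style fair queuing: it waits
        behind at most one quantum from each other active flow, no
        matter how many packets the bulk flows have queued."""
        return active_flows * quantum_bytes * 8 / link_bps

    # 20 active flows, 1514-byte quantum, 100 Mbit/s link:
    # ~2.4 ms, independent of total queue depth.
    print(fq_sparse_delay(20, 1514, 100e6))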

>
> Now, since then work has been done on DV algorithms, to improve
> things like count-to-infinity, etc., which I didn't pay a lot of attention
> to, since for other reasons (policy routing, Byzantine robustness, etc.) I
> decided MD was the way to go. I just did not like the 'distributed
> computation' aspect (which is fundamental in DV); it just felt less robust.

In terms of a referent for DV vs LS: for the last decade or so, meshy
wireless networks were failing due to excessive (oft infinite) queuing.

Now that that's fixed, whether olsrv2 (LS), babel (DV), or something
else is the way forward is on my mind, as is trying to tease apart "gut
historical knowledge" from what we think has been fixed in succeeding
decades.

For example, packet loss is now a lousy routing metric; retries at the
MAC layer largely eliminate it. Being sensitive to RTT helps. Can we be
sensitive to variance in RTT smaller than tens of ms? Don't know. Should
we respond to ECN? Don't know. Should we go control-plane and tie things
to out-of-band data? Don't know.
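
(To make "sensitive to RTT" concrete, here is a sketch of the sort of
smoothing-plus-clamping an RTT-based metric wants; the constants and
function names are mine, not babeld's:)

    def smooth_rtt(srtt_ms: float, sample_ms: float,
                   alpha: float = 0.84) -> float:
        """EWMA smoothing, so the metric reacts to sustained RTT shifts
        rather than per-packet jitter -- one angle on the 'variance
        below tens of ms' question above."""
        return alpha * srtt_ms + (1 - alpha) * sample_ms

    def rtt_penalty(srtt_ms: float, rtt_min: float = 10.0,
                    rtt_max: float = 120.0, max_penalty: int = 96) -> int:
        """Map smoothed RTT into an additive metric penalty, clamped at
        both ends so measurement noise and tiny RTT differences don't
        cause routes to flap."""
        if srtt_ms <= rtt_min:
            return 0
        if srtt_ms >= rtt_max:
            return max_penalty
        return round(max_penalty * (srtt_ms - rtt_min) / (rtt_max - rtt_min))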

>
> Anyway, IIRC, that DV work reduced the number of multiple intermediate
> computational steps (above) in many (most? all?) cases, so that would be
> a big help on that influence on settling time.

My defense of (at least babel's implementation of) DV in this case is
that the individual computation pushes packets in more or less the right
direction incrementally, instead of a global "kerchunk" as in LS.
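
(That incremental step is more or less just this -- a toy Bellman-Ford
relaxation, not babeld's actual code:)

    def dv_update(my_routes: dict, neighbor: str,
                  neighbor_routes: dict, link_cost: int) -> bool:
        """One local step of the distributed DV computation: fold a
        neighbor's advertised distances into our own table. Each node
        does only this small step and re-advertises, so routes improve
        incrementally across the network rather than in one LS-style
        global recomputation."""
        changed = False
        for dest, dist in neighbor_routes.items():
            candidate = dist + link_cost
            if dest not in my_routes or candidate < my_routes[dest][0]:
                my_routes[dest] = (candidate, neighbor)  # (distance, next hop)
                changed = True
        return changed  # if True, advertise the updated table onward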

>
> Alas, I don't remember any key names/titles to look up on that work; maybe
> some early IGRP thing references it? One name that keeps coming up in my
> mind as associated with it is Zaw-Sing Su. There's someone else who I think
> did more, but I just can't remember who it was.
>
>
> But clearly BGP 'works', so maybe this is a case where it's 'good enough',
> and a less capable technology has won out.
>
> Anyway, you should read the 'problem analysis' section of BBN 3803;
> available here:
>
> https://apps.dtic.mil/dtic/tr/fulltext/u2/a053450.pdf
>
> and see to what degree it still applies.

thx!!!!! I wish I'd joined this list decades ago!

> Noel

-- 
*****
Craig Partridge's email account for professional society activities and
mailing lists.