[ih] The History of Protocol R&D?

Eggert, Lars lars at netapp.com
Mon May 26 23:43:56 PDT 2014


Hi,

good questions. A few quick points in reply:

On 2014-5-26, at 19:50, Jack Haverty <jack at 3kitty.org> wrote:
> I assume that the use of mathematical models, simulations, and anecdotal experiments is common and publicized in papers, theses, et al., but how are ideas subsequently validated in the broad Internet world, and how are the results of models, simulations, and lab tests verified at the scale of The Internet?

this has been getting increasingly difficult with the commercialization of the Internet. Unless you have close ties with an entity that is running production networks or datacenters, or is controlling a sizable fraction of the end systems, it is very difficult to do any realistic verification.

(I used to think the situation in the 90s was bad, when only researchers with ties to operators could really do practically meaningful routing work, but compared to the folks operating datacenters or controlling mobile platforms, operators are a pleasure to deal with.)

> Taking a specific concrete case - was our guesstimate of 1% as a "normal" packet loss rate valid?   We used to look at the counters, and if the rate was much higher than that, we took it as an indication of a problem to be investigated and addressed.
> 
> Has the packet-loss rate of The Internet been going up, or down, over the last 30 years?   Has the duplicate-packet rate improved?   (Or whatever other metrics might have surfaced as a measure of proper behavior.)
> 
> Did the metric change positively in response to deploying some new idea (e.g., a new congestion control algorithm)?

In terms of measuring various aspects of the (publicly observable) Internet, the proceedings of the ACM IMC conference have consistently carried relevant papers: http://www.sigcomm.org/events/imc-conference
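
To give a toy example of the kind of end-host measurement such studies build on (a rough sketch, not taken from any particular paper; the target host and probe count are placeholders, and it assumes a Unix-style "ping" binary):

  #!/usr/bin/env python3
  # Toy sketch: estimate packet loss on one path by counting ICMP echo
  # replies. Illustrative only; "example.org" is a placeholder target,
  # not a suggested vantage point.
  import re
  import subprocess

  def loss_rate(host, count=50):
      """Ping `host` `count` times; return the fraction of probes lost."""
      out = subprocess.run(["ping", "-c", str(count), host],
                           capture_output=True, text=True).stdout
      # Summary line looks like: "50 packets transmitted, 49 received, ..."
      # (BSD/macOS ping says "49 packets received" instead.)
      m = re.search(r"(\d+) packets transmitted, (\d+)(?: packets)? received", out)
      if not m:
          raise RuntimeError("could not parse ping output")
      sent, recv = int(m.group(1)), int(m.group(2))
      return (sent - recv) / sent

  if __name__ == "__main__":
      print("loss to example.org: {:.1%}".format(loss_rate("example.org")))

Of course, one path from one vantage point says nothing about "The Internet" as a whole - which is exactly why the large-scale, many-vantage-point studies in venues like IMC matter.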

> My TVs have TCP, and they can stream video from halfway around the world.   So can my phone(s).   So can the millions of other devices out there.   If I could look at the Wastebins of the Internet after a typical day, how big a pile of discarded packets would I find in the various hosts, routers, etc. out there?   Over the History of The Internet, how has that daily operational experience been changing?   How much observed effect have the new algorithms had on getting closer to that theoretical ideal behavior of one byte-mile per user byte-mile?
> 
> Are the new algorithms even implemented in those devices?   Is anybody watching the gauges and dials of The Internet?

In terms of TCP, the IETF's working groups around the protocol (TCPM, MPTCP, etc.) usually get a pretty good understanding of what's seeing deployment where. (Again, not so much for what happens inside datacenters, though - that's considered the secret sauce.)

Lars