From detlef.bosau at web.de Sun Jun 1 08:21:55 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 01 Jun 2014 17:21:55 +0200
Subject: [ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
In-Reply-To: <537D0639.6030604@gmail.com>
References: <20140521190817.5212618C0E4@mercury.lcs.mit.edu> <537D0639.6030604@gmail.com>
Message-ID: <538B4513.2040908@web.de>

Am 21.05.2014 22:02, schrieb Brian E Carpenter:
> and congestion-prone segments, TCP doesn't work so well. (See
> http://www.ietf.org/proceedings/87/slides/slides-87-nwcrg-4.pdf for
> example.)
>
> Brian

Oh yeah. I'm afraid this could end up in a flame war... I just remember the thread about hop-by-hop flow control, and of course this is the same discussion.

I once read in Tanenbaum's book on computer networks, on the topic of multicast, that when two professors in Berkeley chat, this is of no interest to the rest of the world. (Obviously, Andrew's system model did not consider the NSA.) (Which is not interested in any phone talk in Germany at all; but in contrast to German people, who start yawning in those cases, they are eager to listen to "Madame The Nought".)

Back to the issue. Of course it can well make sense to do error correction at the transport layer, particularly when retransmissions on demand aren't feasible or are too expensive; refer e.g. to the TETRYS work by E. Lochin et al. However, both retransmission and error correction are annoying for the rest of the world: both require resources which are then no longer available to others.

In my view, it is one of our central fallacies that we intertwined error _correction_, error _recovery_ and congestion detection. These are completely different issues and should be carefully distinguished.

In addition, we treat packet loss as an indication of congestion. Yet neither packet loss is a reliable congestion indicator (it may be due to congestion or just as well to corruption), nor is delay, which can be due to local recovery, to varying channel or line codings, or to varying access times.

When, in a chain of 100 hops and links, the last link is lossy, so that a packet cannot be delivered, it is not always the best solution
- to retransmit it locally,
- to retransmit it end to end (if the link is lossy, the packet will be corrupted anyway, no matter whether it was sent end to end or locally),
- to change the path,
...

Hence, a packet traveling from source to sink may encounter quite a lot of different scenarios. And I am, in contrast to the community, not convinced that a whole TCP flow can be controlled only by actions at the end points. I think we should seriously review our strict "end to end view" here.

Detlef

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Sun Jun 1 12:42:55 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 01 Jun 2014 21:42:55 +0200
Subject: [ih] the slides are funny. Re: Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
In-Reply-To: <537D0639.6030604@gmail.com>
References: <20140521190817.5212618C0E4@mercury.lcs.mit.edu> <537D0639.6030604@gmail.com>
Message-ID: <538B823F.6090305@web.de>

Especially page 14, the dummynet bullshit. (Oh, I have to apologize; I'm told no one would talk to me if I were that harsh....)
And then an 802.11 test bed which introduces excellence not only into the system model but anticipates problems by standard: "This case is not allowed by the standard, 802.11 doesn't work there, so it works as designed." OMG.

(Wasn't there some similar nonsense around recently? Decongestion control? Dear colleagues, please bear in mind: trees must die for this nonsense.)

(Didn't I once see an advertisement for a kind of all-American cooker? Which was ONLY and SIMPLY hot? "If the cooker or the kitchen becomes too hot, you may want to adjust the air conditioner." IIRC, that's decongestion control in a nutshell.)

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Sun Jun 1 13:10:00 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 01 Jun 2014 22:10:00 +0200
Subject: [ih] or in other words:
In-Reply-To: <537D0639.6030604@gmail.com>
References: <20140521190817.5212618C0E4@mercury.lcs.mit.edu> <537D0639.6030604@gmail.com>
Message-ID: <538B8898.7080300@web.de>

When you see results like Coded TCP or decongestion control, you should have a very careful look where the Lota Bowl is.

https://www.youtube.com/watch?v=bm7H7DMchO8

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Sun Jun 1 23:54:17 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Mon, 02 Jun 2014 08:54:17 +0200
Subject: [ih] or in other words:
In-Reply-To: <538B8898.7080300@web.de>
References: <20140521190817.5212618C0E4@mercury.lcs.mit.edu> <537D0639.6030604@gmail.com> <538B8898.7080300@web.de>
Message-ID: <538C1F99.2060603@web.de>

Interestingly, this CTCP stuff still got attention in 2013:

http://www.digitalairwireless.com/wireless-blog/recent/increasing-real-world-bandwidth.html

What is most ludicrous here is the headline: "Increasing real world bandwidth - Coded TCP". The "bandwidth" of a wireless channel is a well-defined physical constant and cannot be "increased" by channel coding. (That we "re-defined", more precisely "ill-defined", this term in the CS world is our fault and not the rest of the world's problem.)

Another basic insight in wireless networks, and it took me some time to accept this myself, is the fundamental, most basic, general theorem on wireless networking: "You cannot make a silk purse from a sow's ear." And wherever it seems we made such a silk purse, we should carefully check where we got the hidden Lota Bowl or where we pulled our own leg ;-)

Am 01.06.2014 22:10, schrieb Detlef Bosau:
> When you see those results like Coded TCP or Decongestion control, you
> should have a very careful look
> where the Lota Bowl is.
>
> https://www.youtube.com/watch?v=bm7H7DMchO8

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From dot at dotat.at Tue Jun 3 04:40:45 2014
From: dot at dotat.at (Tony Finch)
Date: Tue, 3 Jun 2014 12:40:45 +0100
Subject: [ih] the state of protocol R&D?
In-Reply-To: <53813A6B.1070500@meetinghouse.net>
References: <20140525002408.84B9728E137@aland.bbn.com> <53813A6B.1070500@meetinghouse.net>
Message-ID:

Miles Fidelman wrote:
>
> At least, to a degree, I guess I'm bemoaning what I see as a general shift in
> thinking - away from "let's solve problem with a new protocol" and towards
> "let's build a new platform, with an exposed API."

"Redecentralize" is a slogan for people working against this trend.

Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Viking: Southeasterly 4 or 5, occasionally 3 in northeast. Slight or moderate. Mainly fair. Moderate or good.

From dot at dotat.at Tue Jun 3 04:46:01 2014
From: dot at dotat.at (Tony Finch)
Date: Tue, 3 Jun 2014 12:46:01 +0100
Subject: [ih] the state of protocol R&D?
In-Reply-To: <53815F2F.4080603@redbarn.org>
References: <20140525002408.84B9728E137@aland.bbn.com> <53815300.3080700@dcrocker.net> <53815F2F.4080603@redbarn.org>
Message-ID:

Paul Vixie wrote:
>
> transport research, for practical purposes, is dead. SCTP is better in
> every way than TCP, but see "edge and middle box" comment above.

A good recent counter-example is Multipath TCP.

Tony.
-- 
f.anthony.n.finch  http://dotat.at/
Rockall, Malin: West or southwest, veering northwest for a time, 4 or 5, occasionally 6 in Rockall. Moderate or rough. Showers. Moderate or good.

From detlef.bosau at web.de Tue Jun 3 05:43:51 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Tue, 03 Jun 2014 14:43:51 +0200
Subject: [ih] A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP
In-Reply-To:
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de>
Message-ID: <538DC307.90101@web.de>

I presume that I'm allowed to forward some mail by DPR here to the list (if not, DPR may kill me...); in any case, the original mail was sent to the Internet History list and was therefore intended to reach the public.

A quick summary at the beginning: Yes, TCP does not maintain a retransmission queue with copies of the sent packets; it maintains a queue of unacknowledged data and basically does Go-Back-N. This seems to be in contrast to RFC 793, but that's life.

A much more important insight into the history of TCP is the "workload discussion" as conducted by Raj Jain and Van Jacobson. Unfortunately, both talk completely at cross purposes and have completely different goals.

Having read the congavoid paper, I noticed that VJ refers to Jain's CUTE algorithm in the context of how a flow shall reach equilibrium. Unfortunately, this doesn't really make sense, because slow start and CUTE pursue different goals:
- Van Jacobson asks how a flow should reach equilibrium,
- Raj Jain assumes a flow to be in equilibrium and asks which workload makes the flow work with optimum performance.

We often mix up "stationary" and "stable". To my understanding, for a queueing system "being stable" means "being stationary", i.e. the queueing system is positively recurrent; roughly, in plain language: no queue length grows beyond all limits for all time, but at any time there is a probability > 0 that a queue returns to a finite length.

A queueing system is stationary when its arrival rate doesn't permanently exceed its service rate; this is actually nothing else than the "self clocking mechanism" and the equilibrium VJ is talking about. (A minimal simulation sketch of this stability notion follows below.)
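To make this concrete, here is a minimal single-server simulation sketch (Python; the Poisson arrivals and exponential service times are purely illustrative assumptions of mine, not taken from any of the papers discussed). With arrival rate below service rate the system keeps returning to the empty state; above it, the sojourn times grow without bound:

    import random

    def simulate(lam, mu, n_jobs=100_000, seed=1):
        """Single-server FIFO queue: Poisson arrivals (rate lam),
        exponential service times (rate mu). Returns how often the
        system emptied and the largest sojourn time observed."""
        rng = random.Random(seed)
        t_arrive = 0.0   # arrival time of the current job
        t_free = 0.0     # time at which the server becomes free
        empties = 0
        max_sojourn = 0.0
        for _ in range(n_jobs):
            t_arrive += rng.expovariate(lam)
            if t_free <= t_arrive:
                empties += 1               # job finds the system empty
                t_free = t_arrive
            t_free += rng.expovariate(mu)  # serve this job
            max_sojourn = max(max_sojourn, t_free - t_arrive)
        return empties, max_sojourn

    print(simulate(lam=0.5, mu=1.0))  # stable: empties again and again
    print(simulate(lam=1.5, mu=1.0))  # unstable: sojourn times explode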
From RJ's papers I see a focus on the workload and the performance of queueing systems. A possible performance metric is the quotient

    p = average throughput / average sojourn time.

If the workload is too small, operators will have idle times and the system is not fully loaded (sojourn time acceptable, throughput too small). If the workload is too large, too many jobs are not being serviced but reside in queues (throughput fine, sojourn time too large).

From Jain's work we conclude that a queueing system has an optimum workload, which can be assessed by probing:
=> Set a workload, assess the system's performance, adjust the workload.

Van Jacobson will reach the equilibrium:
=> Set a workload; if we see drops, the workload is too large.

As a consequence, a system may stay perfectly in equilibrium state while seeing buffer bloat in the sense of "a packet's queueing time is more than half of the packet's sojourn time". I don't know yet, perhaps someone can comment on this one, whether buffer bloat will affect a system's performance. (My gut feeling is: "Yes, it will", because the sojourn time grows inadequately large.)

The other, more important, consequence is that probing for "drop-freeness" of a system does not necessarily mean the same as probing for optimum performance. (A back-of-envelope sketch of this difference follows below.)
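As a numeric illustration of the difference (my own sketch, assuming the simplest textbook model, an M/M/1 queue; the numbers follow from that assumption, not from any of the papers discussed): the metric p = throughput / sojourn time is essentially what Kleinrock called "power", and for M/M/1 it peaks at half load, far away from the loss point.

    # Power p = throughput / mean sojourn time for an M/M/1 queue.
    # M/M/1: throughput = lam, mean sojourn time T = 1/(mu - lam),
    # hence p(lam) = lam * (mu - lam), maximized at lam = mu/2.
    mu = 1.0  # service rate (normalized)

    def power(lam):
        if lam >= mu:
            return 0.0            # unstable: sojourn time unbounded
        sojourn = 1.0 / (mu - lam)
        return lam / sojourn      # = lam * (mu - lam)

    loads = [i / 100 for i in range(1, 100)]
    best = max(loads, key=power)
    print(f"optimum offered load: {best:.2f} of capacity")  # ~0.50
    # Probing for "drop-freeness" instead pushes the load towards 1.0,
    # where p tends to 0 because the sojourn time blows up.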
Detlef

Am 20.05.2014 16:49, schrieb David P. Reed:
> I really appreciate the work being done to reconstruct the diverse set
> of implementations of the end to end TCP flow, congestion, and
> measurement specs.
>
> This work might be a new approach to creating a history of the
> Internet... meaning a new way to do what history of technology does best.
>
> I'd argue that one could award a PhD for that contribution when it
> reaches a stage of completion such that others can use it to study the
> past. As a work of historical impact it needs citation and commentary.
> Worth thinking about how to add citation and commentary to a
> simulation - something like Knuth's literate programming but for
> protocol systems.
>
> Far better than a list of who did what when, or a set of battles. It's
> a contribution to the history of the ideas...
>
> On May 20, 2014, Detlef Bosau wrote:
>
> Am 19.05.2014 17:02, schrieb Craig Partridge:
>
> Hi Detlef: I don't keep the 4.3bsd code around anymore, but
> here's my recollection of what the code did. 4.3BSD had one
> round-trip timeout (RTO) counter per TCP connection.
>
> That's the way I find it in the NS2.
>
> On round-trip timeout, send 1 MSS of data starting at the
> lowest outstanding sequence number.
>
> Which is not yet GBN in its "pure" form, but actually it is, because
> CWND is increased with every new ack. And when you call "send_much" when
> a new ack arrives (I had a glance at the BSD code myself some years ago;
> the routines are named the same there, and as far as I've seen, the ns2 code
> and the BSD code are extremely similar), the behaviour resembles GBN very much.
>
> Set the RTO counter to the next increment. Once an ack is
> received, update the sequence numbers and begin slow start
> again. What I don't remember is whether 4.3bsd kept track of
> multiple outstanding losses and fixed all of them before slow
> start or not.
>
> OMG. ;-) Who else should remember this, if not Van himself or you?
>
> However, first of all I have to say thanks for all the answers here.
>
> Detlef
>
> -- Sent from my Android device with K-@ Mail. Please excuse my brevity.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From mfidelman at meetinghouse.net Tue Jun 3 16:48:12 2014
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Tue, 03 Jun 2014 19:48:12 -0400
Subject: [ih] the state of protocol R&D?
In-Reply-To:
References: <20140525002408.84B9728E137@aland.bbn.com> <53813A6B.1070500@meetinghouse.net>
Message-ID: <538E5EBC.6050108@meetinghouse.net>

Tony Finch wrote:
> Miles Fidelman wrote:
>> At least, to a degree, I guess I'm bemoaning what I see as a general shift in
>> thinking - away from "let's solve problem with a new protocol" and towards
>> "let's build a new platform, with an exposed API."
> "redecentralize" is a slogan for people working against this trend.

Are there many, anymore?

Miles

-- 
In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra

From jnc at mercury.lcs.mit.edu Tue Jun 3 19:41:03 2014
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Tue, 3 Jun 2014 22:41:03 -0400 (EDT)
Subject: [ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
Message-ID: <20140604024103.4DDC018C0F2@mercury.lcs.mit.edu>

> From: Detlef Bosau
>
> it can well make sense to do error correction at the transport layer,
> particularly when retransmissions on demand aren't feasible or are too
> expensive
>
> However, both retransmission and error correction are annoying for the
> rest of the world: both require resources which are then no longer
> available to others.

Well, yes and no. Here are some thoughts I've had while thinking about this.

First, start with the point that the endpoint _pretty much_ _has_ to have the mechanism to recognize that a packet has been lost, and to retransmit it - no matter what the rest of the design looks like.

Why? Because otherwise the network must never, ever lose data - because if the host, once it has sent a packet, cannot reliably notice that the packet has been lost and re-send it, then the network cannot lose that packet.

That means the network has to be a lot more complex: switches have to have a lot of state, they have to have their own mechanism for doing acknowledgements - since an upstream switch cannot discard its copy of a packet until the downstream has definitely gotten a copy - and the upstream has to hold the packet until the downstream acks, etc. etc.

(In fact, you wind up with something that looks a lot like the ARPANET.)

And even if the design has all that mechanism/state/complexity built in, it's _still_ not really guaranteed: what happens if the switch with the only copy of a packet dies? (Unless the design adopts the rule that there must always be _two_ switches with copies of a packet - even more complexity/etc.)

There are good architectural reasons why the endpoint is given the ultimate responsibility for making sure the data gets through: For one, it's really not possible to get someone else to do the job as well as the endpoint can (see above). This is fate-sharing / the end-end principle. For another, once the design does that, the switches become a _lot_ simpler - an additional benefit. When you see things start to line up that way, it's probably a sign that you have found what Erdos would have called 'the design in The Book'.
So, now that the design _has_ to have end-end retransmission, adding any other kind of re-transmission is - necessarily - just an optimization. And to decide whether an optimization is worth it, one has to look at a number of aspects: how much complexity it adds, how much it improves the performance, etc., etc.

I understand your feeling that 'doing the retransmission on an end-end basis wastes resources', but... doing local retransmission _as well as_ end-end retransmission (which one _has_ to have - see above) is going to make things more complicated - perhaps _significantly_ more complicated. Just how much more depends on exactly what is done, and how.

E.g. does the mechanism only re-send packets when a damaged packet is received at the down-stream, and the packet is not too badly damaged for the down-stream to figure out which packet was lost and ask for a re-transmission from the up-stream? This is less complex than the down-stream acking every packet, _but_... the up-stream _still_ has to keep a copy of every packet (in case it needs to re-transmit) - and how does it decide when to discard it (since there is no ack)? Add acks, and the up-stream knows for sure when it can ditch its copy - but now there's a lot more complexity, state, etc.

Doing local re-transmission is a _lot_ more complexity (and probably limits performance) - and does it buy you enough to make it worth it? Probably not...

This is, I gather, basically the line of reasoning that the original designers went through, which led them to the 'smart endpoints, stupid switches' approach. Yes, it means that sometimes you wind up using 'extra' resources, but, _overall_, the design _as a whole_ is simpler and can have much higher performance (a switch sends a packet, then _immediately_ throws away its copy), etc.

> In my view, it is one of our central fallacies that we intertwined
> error _correction_, error _recovery_ and congestion detection.
> These are completely different issues and should be carefully
> distinguished.

Perhaps. But one needs to think through all the details, and I'm still doing that. (E.g. with Source Quench, the previous discussion showed that the security/etc. issues surrounding it as a 'reliable' congestion signal are non-trivial when you start to look into it.)

> When, in a chain of 100 hops and links, the last link is lossy, so that
> a packet cannot be delivered, it is not always the best solution
> - to retransmit it locally,
> - to retransmit it end to end (if the link is lossy, the packet will be
> corrupted anyway, no matter whether it was sent end to end or locally)

Sure, but (as I showed above) doing local retransmission means a lot of extra complexity - state, buffering, acks, yadda-yadda. Doing local retransmission may make sense on _some_ links, but doing it _as a system-wide architecture_ probably doesn't make sense - because it can only be an optimization, and it's one with severe costs (complexity, etc).

Noel

From detlef.bosau at web.de Wed Jun 4 01:44:56 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 10:44:56 +0200
Subject: [ih] [e2e] A Cute Story. Or: How to talk completely at cross purposes.
 Re: When was Go Back N adopted by TCP
In-Reply-To:
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de>
Message-ID: <538EDC88.5010801@web.de>

Am 04.06.2014 02:01, schrieb Andrew Mcgregor:
> Bufferbloat definitely does impair performance: by slowing down
> feedback it increases the variance of the system workload, which
> inevitably causes either packet drops, because there is a finite buffer
> limit in reach, or such large delays that retransmission timers fire
> for packets that are still in flight. In either case, the system is
> doing excess work.

I absolutely agree with that, and I did not say anything else.

It is, however, interesting that probing schemes such as VJCC simply don't consider buffer bloat. On the contrary, they produce it, because a path is "pumped up" with workload as long as no packets are discarded. We try to alleviate the problem e.g. by ECN, where switches indicate that their buffers grow extremely large, or by intentionally discarding packets, e.g. CoDel, in order to have the senders slow down. However, the basic algorithm in VJCC is chasing congestion, and it leads the flow into a nearly congested state again and again, while Jain's approaches attempt to achieve optimum performance.

NB: I mentioned the performance metric throughput / sojourn time; AFAIK this is mentioned neither in the Bible nor in the Quran or the Talmud, and the Nobel Prize is still pending. I can well imagine that users only assess one of these two parameters: in an FTP transfer, I'm only interested in a high throughput; in an interactive ssh session, I'm primarily interested in a small sojourn time.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Wed Jun 4 01:56:44 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 10:56:44 +0200
Subject: [ih] or in other words:
In-Reply-To: <538EDC88.5010801@web.de>
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de> <538EDC88.5010801@web.de>
Message-ID: <538EDF4C.7020100@web.de>

Reading the congavoid paper and the footnote regarding CUTE, one _could_ think that VJCC and CUTE pursue the same purpose, and that the "equilibrium window" CWND and the optimum workload ("path capacity" or "path space") in CUTE are the same. No way.

And NB: As queues (in queueing theory) are often considered unlimited, a self-clocking TCP flow is necessarily in equilibrium state from its very beginning. The challenge is to maximize its performance, hence to find the right trade-off between avoiding both idle times and abundant queue lengths.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Wed Jun 4 04:58:03 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 13:58:03 +0200
Subject: [ih] A Cute Story. Or: How to talk completely at cross purposes.
 Re: When was Go Back N adopted by TCP
In-Reply-To:
References: <20140519150259.35C1E28E137@aland.bbn.com> <537B35D0.9040400@web.de> <538DC307.90101@web.de>
Message-ID: <538F09CB.6020903@web.de>

Am 04.06.2014 10:59, schrieb Jon Crowcroft:
> I dont think there's anything wrong here,

"Wrong" wouldn't be an adequate word. I think we have different goals here.

> but maybe a note on buffer bloat is in order:
>
> alongside the feedback/AIMD and ack clocking mechanisms for tcp, there
> was a long discussion on right sizing buffers in the net - since AIMD
> naively applied led to the sawtooth rate behaviour in TCP, a back of
> envelope calculation

Very appropriate for a physicist :-) Even Einstein did so :-)

> led to the notion that the bottleneck had to have a buffer to cope
> with the peak, which at worst case would be bandwidth*delay product

And exactly this might be a problem. What is the "delay" then? The more buffer space you introduce into the path, the greater the delay, and hence the product, will be.... (A small numeric sketch of this circularity follows below.)
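To put numbers on the circularity (the link figures below are made-up illustration values of mine, nothing from the thread):

    # Rule of thumb: buffer = bandwidth * round-trip delay (BDP).
    # The circularity: a full buffer adds queueing delay, which
    # inflates the very "delay" the rule is based on.
    link_rate = 10e6 / 8   # assumed 10 Mbit/s bottleneck, in bytes/s
    base_rtt = 0.100       # assumed 100 ms propagation + serialization

    bdp = link_rate * base_rtt             # "required" buffer, 1st round
    print(f"BDP buffer: {bdp / 1e3:.0f} kB")                   # 125 kB

    queueing_delay = bdp / link_rate       # delay added by a full buffer
    rtt_loaded = base_rtt + queueing_delay
    print(f"RTT with full buffer: {rtt_loaded * 1e3:.0f} ms")  # 200 ms
    # Re-applying the rule with the inflated RTT would double the
    # buffer, and so on: the "delay" in bandwidth*delay is not well
    # defined once the buffers themselves are part of the path.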
> worth of packets (basically 3/2 times the mean rate) so that when 1
> more packet was sent at that rate, one loss would be incurred,
> triggering the MD part of AIMD once every ln(W) worth of RTTs... [all
> this is academic in reality for lots of reasons, including the various
> other triggers like dupacks and the fact that this case is a corner
> one - since usually there are lots of flows multiplexed at the
> bottleneck(s) and multiple bottlenecks, so the appropriate buffer size
> could be way smaller - and of course, anyone running a virtual queue
> and rate estimator (i.e. AQM a la codel etc) and especially doing ECN
> rather than loss based feedback can avoid all this ridiculous
> provisioning of packet memory all over the net]

My concern is that I doubt this calculation should be done for the "whole path end to end". And of course you will perhaps provide sufficient buffer so that the links work at full load. Hence, extremely roughly, the delay will vary between propagation and serialization delays only (empty queues) and those delays plus queueing delays.

> but alas, the rule of thumb for a corner case became dogma for a lot
> of router vendors for way too long to get it disestablished....

Which doesn't forbid us to ask questions ;-)

> and many of the bottlenecks today are near the edge, and more common
> than not, probably in the interface between cellular data and
> backhaul, where, as you say, the radio link may not exhibit any kind
> of stationary capacity at all etc etc

Had I had the insight about non-stationary radio links fourteen years ago, I would certainly have fewer grey hairs today ;-)

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From braden at isi.edu Wed Jun 4 08:36:43 2014
From: braden at isi.edu (Bob Braden)
Date: Wed, 04 Jun 2014 08:36:43 -0700
Subject: [ih] internet-history Digest, Vol 85, Issue 2
In-Reply-To:
References:
Message-ID: <538F3D0B.4040904@meritmail.isi.edu>

Detlef,

Your recent posts to this list seem rather devoid of relevance to Internet history. Perhaps you meant to be on end2end-interest?

Bob Braden

From detlef.bosau at web.de Wed Jun 4 11:54:59 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Wed, 04 Jun 2014 20:54:59 +0200
Subject: [ih] internet-history Digest, Vol 85, Issue 2
In-Reply-To: <538F3D0B.4040904@meritmail.isi.edu>
References: <538F3D0B.4040904@meritmail.isi.edu>
Message-ID: <538F6B83.9050409@web.de>

Am 04.06.2014 17:36, schrieb Bob Braden:
>
> Detlef,
>
> Your recent posts to this list seem rather devoid of relevance to
> Internet history. Perhaps you meant to be on end2end-interest?
>
> Bob Braden

There is an overlap, certainly. See my posts from today.

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From craig at aland.bbn.com Thu Jun 5 12:03:15 2014
From: craig at aland.bbn.com (Craig Partridge)
Date: Thu, 05 Jun 2014 15:03:15 -0400
Subject: [ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
Message-ID: <20140605190315.3658528E137@aland.bbn.com>

> I understand your feeling that 'doing the retransmission on an end-end basis
> wastes resources', but... doing local retransmission _as well as_ end-end
> retransmission (which one _has_ to have - see above) is going to make things
> more complicated - perhaps _significantly_ more complicated. Just how much
> more depends on exactly what is done, and how.

There's actually an excellent thesis on this topic which shows that local retransmission can be counterproductive: Reiner Ludwig, "Eliminating Inefficient Cross-Layer Interactions in Wireless Networks", RWTH Aachen.

Craig

From detlef.bosau at web.de Thu Jun 5 13:35:04 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Thu, 05 Jun 2014 22:35:04 +0200
Subject: [ih] internet-history Digest, Vol 85, Issue 2
In-Reply-To: <538F3D0B.4040904@meritmail.isi.edu>
References: <538F3D0B.4040904@meritmail.isi.edu>
Message-ID: <5390D478.7010707@web.de>

Am 04.06.2014 17:36, schrieb Bob Braden:
>
> Detlef,
>
> Your recent posts to this list seem rather devoid of relevance to
> Internet history. Perhaps you meant to be on end2end-interest?
>
> Bob Braden

Thinking about your remark once again: when we turned to VJCC, we basically ducked out of a problem. Intuitively, the decision which flow on a sender may send next is a scheduling decision. In operating systems, we use demand schedulers to decide which process is assigned the processor and which isn't. Did the community consider doing so with TCP flows in the middle of the 80s? (A toy sketch of what such a sender-side flow scheduler could look like follows below.)

From what I see today, applications at that time put their data into sockets, sockets sent as much as fitted into a network interface's buffer, and congestion control was used to clean up the mess ;-)

Is this too harsh? Or does this describe the situation thirty years ago?
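As a toy illustration of that question (entirely hypothetical code of mine, not anything from a historical stack): a sender that treats "which flow may send" as an explicit scheduling decision, handing out one segment per backlogged flow in round-robin order, instead of letting every socket stuff the interface buffer:

    from collections import deque

    def round_robin_sender(flows, slots):
        """Toy link scheduler. flows: list of deques of segments.
        In each transmission slot, the next flow with data sends
        exactly one segment - an explicit scheduling decision,
        analogous to a CPU scheduler, instead of FIFO'ing whatever
        the sockets push down."""
        ready = deque(f for f in flows if f)
        sent = []
        for _ in range(slots):
            if not ready:
                break
            flow = ready.popleft()
            sent.append(flow.popleft())   # transmit one segment
            if flow:                      # still backlogged: requeue
                ready.append(flow)
        return sent

    a = deque(["A1", "A2", "A3", "A4"])
    b = deque(["B1"])
    print(round_robin_sender([a, b], slots=5))
    # -> ['A1', 'B1', 'A2', 'A3', 'A4']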
-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de

From detlef.bosau at web.de Thu Jun 5 18:22:41 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Fri, 06 Jun 2014 03:22:41 +0200
Subject: [ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]
In-Reply-To: <20140604024103.4DDC018C0F2@mercury.lcs.mit.edu>
References: <20140604024103.4DDC018C0F2@mercury.lcs.mit.edu>
Message-ID: <539117E1.2080109@web.de>

Am 04.06.2014 04:41, schrieb Noel Chiappa:
>
> First, start with the point that the endpoint _pretty much_ _has_ to have the
> mechanism to recognize that a packet has been lost, and retransmit it - no
> matter what the rest of the design looks like.
>
> Why? Because otherwise, the network has to never, ever, lose data - because
> if, once the host has sent a packet, it cannot reliably notice that it has
> been lost, and re-send it, the network cannot lose that packet.

Absolutely agreed.

> That means the network has to be a lot more complex: switches have to have a
> lot of state, they have to have their own mechanism for doing
> acknowledgements - since an upstream switch cannot discard its copy of a
> packet until the downstream has definitely gotten a copy - and the upstream
> has to hold the packet until the downstream acks, etc. etc.

It is a trade-off here. My own way of thinking over the last 10 years was too "binary": I considered local retransmissions XOR transport layer retransmissions. This is oversimplified. Depending on the link, there may be alternative channel codes, and in mobile networks alternative paths. The challenge is to make the right choice out of the feasible ones.

> (In fact, you wind up with something that looks a lot like the ARPANET.)

Perhaps. In Ethernet, Token Ring, FDDI, ATM etc. we hardly have a retransmission layer. In wireless networks we do, in ALL (terrestrial) technologies I know of. In satellite networks, we generally prefer retransmission-free FEC schemes.

> And even if the design has all that mechanism/state/complexity built in, it's
> _still_ not really guaranteed: what happens if the switch with the only copy
> of a packet dies? (Unless the design adopts the rule that there must always be
> _two_ switches with copies of a packet - even more complexity/etc.)

You got me wrong. I don't talk about splitting or ACK spoofing. But if e.g. a WiFi station changes from 16QAM to QPSK to accommodate noise, this choice is made locally. A KA9Q stack from 1991 will perhaps not even know the difference. (Perhaps Craig will correct me here? Or Phil Karn himself?)

> There are good architectural reasons why the endpoint is given the ultimate
> responsibility for making sure the data gets through: For one, it's really
> not possible to get someone else to do the job as well as the endpoint can
> (see above). This is fate-sharing / the end-end principle.

Again, you got me wrong. The endpoint is responsible for ensuring eventual delivery; the question is how this is achieved. Once a sender has got an ACK for a packet, it can remove the packet from the queue. NOT EARLIER!!!! (A toy sketch of this invariant follows below.) And certainly this includes a packet retransmission when a packet is not acked in time. However, it may also include e.g. rerouting; and of course a socket may cancel a flow if necessary and inform the application accordingly.
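A minimal sketch of that invariant (my own toy code, not the BSD implementation): the sender keeps a copy of every segment until its ACK arrives, and a timer, not the network, decides about retransmission. A real stack would tie the RTO to an RTT estimate; here it is just an assumed constant.

    import time

    class RetransmitQueue:
        """Toy core of end-to-end reliability: a copy of each
        segment is kept until its ACK arrives - and not a moment
        earlier may it be dropped."""

        def __init__(self, rto=1.0):
            self.rto = rto       # retransmission timeout, seconds
            self.unacked = {}    # seq -> (segment, time_sent)

        def on_send(self, seq, segment):
            self.unacked[seq] = (segment, time.monotonic())

        def on_ack(self, seq):
            # Only an ACK allows the copy to be removed. NOT EARLIER.
            self.unacked.pop(seq, None)

        def due_for_retransmit(self):
            now = time.monotonic()
            return [seq for seq, (_, sent_at) in self.unacked.items()
                    if now - sent_at > self.rto]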
> For another, once the design does that, the switches become a _lot_ simpler -
> an additional benefit. When you see things start to line up that way, it's
> probably a sign that you have found what Erdos would have called 'the design
> in The Book'.
>
> So, now that the design _has_ to have end-end retransmission, adding any other
> kind of re-transmission is - necessarily - just an optimization.

My ideas on a flow layer are still quite rough. However, in that case the discussion becomes a lot simpler, as I think with a flow layer we can avoid at least congestion-related drops. Afterwards the discussion will be a different one: at the moment we mostly deal with drops, and corruption is a rare exception, at least in wired networks. If we could avoid drops, we would deal with corruption only.

> And to decide whether an optimization is worth it, one has to look at a
> number of aspects: how much complexity it adds, how much it improves the
> performance, etc, etc.

Agreed. It is a trade-off which must be properly assessed.

> I understand your feeling that 'doing the retransmission on an end-end basis
> wastes resources', but... doing local retransmission _as well as_ end-end
> retransmission (which one _has_ to have - see above) is going to make things
> more complicated - perhaps _significantly_ more complicated. Just how much
> more depends on exactly what is done, and how.

And exactly that's the discussion from Jerry Saltzer's paper. I don't see it as an argument to make end-to-end retransmission mandatory by all means, but rather to carefully place retransmission at the correct spot.

> E.g. does the mechanism only re-send packets when a damaged packet is
> received at the down-stream, and the packet is not too badly damaged for the
> down-stream to figure out which packet was lost, and ask for a
> re-transmission from the up-stream? This is less complex than the down-stream
> acking every packet, _but_ ... the up-stream _still_ has to keep a copy of
> every packet (in case it needs to re-transmit) - and how does it decide when
> to discard it (since there is no ack)? Add acks, and the up-stream knows for
> sure when it can ditch its copy - but now there's a lot more complexity,
> state, etc.
>
> Doing local re-transmission is a _lot_ more complexity (and probably limits
> performance) - and does it buy you enough to make it worth it? Probably not...

In wireless networks, the decision is made. With an interesting consequence: the service times vary a lot. And so does the throughput. And so does the throughput-delay product, in other words: the "path capacity".

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart
Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de
http://www.detlef-bosau.de