From detlef.bosau at web.de Thu Aug 21 13:59:20 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Thu, 21 Aug 2014 22:59:20 +0200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F5E312.6060004@web.de> References: <1408565008.272912094@apps.rackspace.com> <53F513CB.5050700@isi.edu> <53F5E312.6060004@web.de> Message-ID: <53F65DA8.7000907@web.de> In this context, my question is: When were the terms congestion control and flow control coined? I intentionally don't ask for the typical definitions in lectures "flow control is between TCP sender and receiver" and "congestion control is somewhat in between". (A definition like this is vague, and in my opinion, exactly this vagueness is the very problem.) Detlef From touch at isi.edu Thu Aug 21 14:17:23 2014 From: touch at isi.edu (Joe Touch) Date: Thu, 21 Aug 2014 14:17:23 -0700 Subject: [ih] usable archive of list In-Reply-To: References: Message-ID: <53F661E3.1050408@isi.edu> Hi, all, Although this has been overtaken by various solutions, in the future please mail me (the list admin) for such questions. Joe (as list admin) On 7/15/2014 9:29 PM, Randy Bush wrote: > for a paper, i need to find some old messages i remember from this list. > the archive at isi/postel.org is the useless default mailman form, > chopped up into months. like good luck searching that. i would be > happy with a total zip, tarball, or whatever. i do not need sexy > indexing, i have a well-scaled mua. does anyone have a usable archive > of this list? > > randy > From jnc at mercury.lcs.mit.edu Thu Aug 21 14:27:25 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 21 Aug 2014 17:27:25 -0400 (EDT) Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP Message-ID: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> > From: Detlef Bosau > When were the terms congestion control and flow control coined? 'Flow control' (in networking - in communications overall, it goes back even further) is pretty old: RFC-36 (March 1970) talks about it in close to the modern sense (although at that point, it was provided by the network, not by the end-host), so its use in data networks dates back basically to the beginning. 'Congestion control' has also been around for a while - see RFC-802 (November 1981), and then Nagle's magnificent RFC-896 (January 1984), where is appears in pretty much its modern meaning. Noel From agmalis at gmail.com Thu Aug 21 15:46:32 2014 From: agmalis at gmail.com (Andrew G. Malis) Date: Thu, 21 Aug 2014 18:46:32 -0400 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> References: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> Message-ID: Noel, RFC 802 was my first RFC! The congestion being discussed was with regard to the ARPANET. Also, flow control procedures were included in X.25, which was first published in 1976, although it was obviously in progress well before then. Cheers, Andy On Thu, Aug 21, 2014 at 5:27 PM, Noel Chiappa wrote: > > From: Detlef Bosau > > > When were the terms congestion control and flow control coined? 
> > 'Flow control' (in networking - in communications overall, it goes back > even > further) is pretty old: RFC-36 (March 1970) talks about it in close to the > modern sense (although at that point, it was provided by the network, not > by > the end-host), so its use in data networks dates back basically to the > beginning. > > 'Congestion control' has also been around for a while - see RFC-802 > (November > 1981), and then Nagle's magnificent RFC-896 (January 1984), where is > appears > in pretty much its modern meaning. > > Noel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vint at google.com Thu Aug 21 16:33:08 2014 From: vint at google.com (Vint Cerf) Date: Thu, 21 Aug 2014 19:33:08 -0400 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> Message-ID: flow control was found in the ARPANET design pretty explicitly at the IMP layer. NCP needed to get a RFNM ("request for next message") before it would gate another host message into the ARPANET. v On Thu, Aug 21, 2014 at 6:46 PM, Andrew G. Malis wrote: > Noel, > > RFC 802 was my first RFC! The congestion being discussed was with regard > to the ARPANET. > > Also, flow control procedures were included in X.25, which was first > published in 1976, although it was obviously in progress well before then. > > Cheers, > Andy > > > > On Thu, Aug 21, 2014 at 5:27 PM, Noel Chiappa > wrote: > >> > From: Detlef Bosau >> >> > When were the terms congestion control and flow control coined? >> >> 'Flow control' (in networking - in communications overall, it goes back >> even >> further) is pretty old: RFC-36 (March 1970) talks about it in close to the >> modern sense (although at that point, it was provided by the network, not >> by >> the end-host), so its use in data networks dates back basically to the >> beginning. >> >> 'Congestion control' has also been around for a while - see RFC-802 >> (November >> 1981), and then Nagle's magnificent RFC-896 (January 1984), where is >> appears >> in pretty much its modern meaning. >> >> Noel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpgs at ittc.ku.edu Thu Aug 21 17:28:14 2014 From: jpgs at ittc.ku.edu (James P.G. Sterbenz) Date: Thu, 21 Aug 2014 19:28:14 -0500 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> References: <20140821212725.9ED4618C0D2@mercury.lcs.mit.edu> Message-ID: On 21 Aug 2014, at 16:27, Noel Chiappa wrote: >> From: Detlef Bosau > >> When were the terms congestion control and flow control coined? > > 'Flow control' (in networking - in communications overall, it goes back even > further) is pretty old: RFC-36 (March 1970) talks about it in close to the > modern sense (although at that point, it was provided by the network, not by > the end-host), so its use in data networks dates back basically to the > beginning. > > 'Congestion control' has also been around for a while - see RFC-802 (November > 1981), and then Nagle's magnificent RFC-896 (January 1984), where is appears > in pretty much its modern meaning. > > Noel Section 11.3 of Davies book is on congestion control: Donald W. Davies and Derek L.A. 
Barber, _Communication Networks for Computers_, Wiley, 1973 Chap. 11 Protocols, Terminals and Network Monitoring 11.2 (introduces the NPL and ARPA protocols) 11.3 Congestion in Data Networks - Congestion Control Methods - The Effects of Global Congestion - Isarithmic Congestion Control - The Permit Pool All network historians and scientists should own this book, as well as L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 in which congestion is covered in Chap. 4 on Cigale. There were likely much earlier Cyclades papers mentioning congestion before this retrospective monograph. It is also covered in Mischa Schartz' 1977 textbook. Cheers, James --------------------------------------------------------------------- James P.G. Sterbenz jpgs@{ittc|eecs}.ku.edu jpgs at comp.lancs.ac.uk www.ittc.ku.edu/~jpgs 154 Nichols ITTC|EECS InfoLab21 Lancaster U +1 508 944 3067 The University of Kansas jpgs at tik.ee.ethz.ch jpgs@{acm|ieee|comsoc|computer|m.ieice}.org jpgsterbenz at gmail.com gplus.to/jpgs www.facebook.com/jpgsterbenz jpgs at ittc.ku.edu From jnc at mercury.lcs.mit.edu Fri Aug 22 06:56:05 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 22 Aug 2014 09:56:05 -0400 (EDT) Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP Message-ID: <20140822135605.9C45318C123@mercury.lcs.mit.edu> > From: "James P.G. Sterbenz" > All network historians and scientists should own ... > L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 Indeed - it has an honoured place on my bookshelf. The importance of CYCLADES/CIGALE in the history of data network cannot be over-emphasized, IMO. > in which congestion is covered in Chap. 4 on Cigale. 4.4.6, to be exact. Looking at their congestion control mechanism, it's fairly complex - not sure if it would work in a heterogeneous network like today's Internet, though. Still, interesting... > There were likely much earlier Cyclades papers mentioning congestion > before this retrospective monograph. Yes, about the earliest appears to be: M. Irland, "Queueing analysis of a buffer allocation scheme for a packet switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 There are some slightly earlier ones by him (her?), but they appear to be progress reports on a simulation project which was part of a PhD thesis at the University of Waterloo (completed in April 1977), and not widely distributed. In looking for the references in that book to the congestion work, though, I stumbled across this one: D. W. Davies, "The Control of Congestion in Packet Switching Networks", Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, Palo Alto, Oct. 1971 I don't have access to that, but it would be interesting to see what it covers. Noel From detlef.bosau at web.de Fri Aug 22 07:03:17 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 22 Aug 2014 16:03:17 +0200 Subject: [ih] [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <1408662060.059513826@apps.rackspace.com> References: <1408565008.272912094@apps.rackspace.com> <53F513CB.5050700@isi.edu> <53F5E312.6060004@web.de> <1408662060.059513826@apps.rackspace.com> Message-ID: <53F74DA5.10501@web.de> with cc: to the IH list. Am 22.08.2014 um 01:01 schrieb dpreed at reed.com: > > Detlef - since my posts are by default blocked on e2e, I rarely > respond. 
But in the Internet, the general idea was that since an > "overlay connection" can be treated as a link from one Internet switch > to another. > This obviously the idea behind e.g. VJCC. > Generally, there is localized flow control between two successive > Internet layer switches, either using the underlying technology, or > using the "envelope" that wraps an IP datagram. This is not specified > in the Internet standards, because the Internet does not define > standards about how IP datagrams are transported, but it is always there. > That's the way it is typically done. > > > > There are rules about how those Internet hop protocols that carry IP > datagrams must work. They must not buffer excessively, they must not > try to be reliable. They must drop IP datagrams when congested. They > may drop IP datagrams if they encounter problems. That all is what > "best efforts" actually means. > That's according to RFC 791. However, what happens in reality? - in 802.11 networks, we have local retransmissions. - in mobile wireless networks, we have local retransmissions. (particularly in 802.11 networks, local retransmissions can be necessary due to packet collisions, hence we should not restrict the number of retransmissions to the same degree as it makes sense in mobile networks.) In networks with excessive bridging (ADSL in huge ATM clouds, huge enterprise networks built with Ethernet) we may have congestion "under the hood", i.e. packet loss which is taken as an indication of congestion, we may have flow control as well. I had a look at Cerf's catenet proposal yesterday, I will have another look at the IP paper by Cerf/Kahn. However, at the moment, I'm about to make a certain "bookmark" (which may become a "question mark") at the point where we separated the transport layer from the network layer, with particular respect to flow control. As far as flow control is offered by subnets (e.g. Ethernet) it is typically a "link" flow control. This may lead to head of line blocking. It would be nice to have a flow based flow control. However, this is not available in all networks. Hence, when I think in a "clean slate way", I would like to understand flow control related to adjacent switches and flow related. Where this is not available or cannot be reasonably achieved, "congestion control" in the sense "where nothing is dropped, everything will eventually pass" is a work around. But this is apparently not the road taken by the community,. > > > > > It's very confused thinking to treat the properties of underlying > networks as the provenance of the Internet design. Instead, this > definition of "best efforts" creates a modular definition of what all > possible underlying technologies must do, and what they MUST NOT do. > > > > On Thursday, August 21, 2014 8:16am, "Detlef Bosau" > said: > > > As far as I see, DPR's idea is to gather congestion information along > > the path using Bloome Filters. > > > > There is one possible problem, which also arises with hopwise credit > > based flow control: The Internet is basically an overlay network. > > So an important issue, sometimes gets a bit lost, is that "adjacent" IP > > nodes are - though being adjacent - not always connected by a point to > > point link but there may be a more or less complex infrastructure in > > between. > > > > Now, congestion may well occur on nodes BETWEEN the IP nodes. (E.g. > > Ethernet bridges, think of remote bridging as used in ADSL.) 
> > > > The IP packet's payload is not accessible for those "L2 nodes", hence > > these nodes cannot stamp packets with any actual congestion information. > > > > As a consequence, imminent congestion may not be visible for the IP > > based overlay network. > > > > Detlef > > > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From vint at google.com Fri Aug 22 07:33:25 2014 From: vint at google.com (Vint Cerf) Date: Fri, 22 Aug 2014 10:33:25 -0400 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <20140822135605.9C45318C123@mercury.lcs.mit.edu> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: Donald Davies had the idea of an isarithmic network: a fixed number of packets in the network at all times. Issues however included getting "empty packets" to places with data to send. Like the taxi problem where they end up at favored destinations but are not available without deadheading to favored origins. http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks v On Fri, Aug 22, 2014 at 9:56 AM, Noel Chiappa wrote: > > From: "James P.G. Sterbenz" > > > All network historians and scientists should own ... > > L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 > > Indeed - it has an honoured place on my bookshelf. The importance of > CYCLADES/CIGALE in the history of data network cannot be over-emphasized, > IMO. > > > in which congestion is covered in Chap. 4 on Cigale. > > 4.4.6, to be exact. Looking at their congestion control mechanism, it's > fairly complex - not sure if it would work in a heterogeneous network like > today's Internet, though. Still, interesting... > > > There were likely much earlier Cyclades papers mentioning congestion > > before this retrospective monograph. > > Yes, about the earliest appears to be: > > M. Irland, "Queueing analysis of a buffer allocation scheme for a packet > switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 > > There are some slightly earlier ones by him (her?), but they appear to be > progress reports on a simulation project which was part of a PhD thesis at > the University of Waterloo (completed in April 1977), and not widely > distributed. > > > In looking for the references in that book to the congestion work, though, > I > stumbled across this one: > > D. W. Davies, "The Control of Congestion in Packet Switching Networks", > Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, > Palo Alto, Oct. 1971 > > I don't have access to that, but it would be interesting to see what it > covers. > > Noel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.e.carpenter at gmail.com Fri Aug 22 12:34:44 2014 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sat, 23 Aug 2014 07:34:44 +1200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. 
Re: When was Go Back N adopted by TCP In-Reply-To: <20140822135605.9C45318C123@mercury.lcs.mit.edu> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F79B54.5080907@gmail.com> On 23/08/2014 01:56, Noel Chiappa wrote: ... > In looking for the references in that book to the congestion work, though, I > stumbled across this one: > > D. W. Davies, "The Control of Congestion in Packet Switching Networks", > Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, > Palo Alto, Oct. 1971 > > I don't have access to that, but it would be interesting to see what it > covers. When working on my book, I came across the fact that Davies worked on computer-based analysis of queueing systems as early as 1956, for the UK Road Research Laboratory (Proc IEE 103 Pt B Suppl, 473?475). Brian From detlef.bosau at web.de Fri Aug 22 12:48:48 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 22 Aug 2014 21:48:48 +0200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F79EA0.2070608@web.de> In a sense, VJCC is nothing else than another twist of an isarithmic network, although the number of packets is kept constant along a flow. *Expecting flames* The "bufferbloat problem" is a twist, in a sense, of the taxi problem then ;-) Am 22.08.2014 um 16:33 schrieb Vint Cerf: > Donald Davies had the idea of an isarithmic network: a fixed number of > packets in the network at all times. Issues however included getting > "empty packets" to places with data to send. Like the taxi problem > where they end up at favored destinations but are not available > without deadheading to favored origins. > > http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks > > v > > -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From detlef.bosau at web.de Fri Aug 22 13:27:51 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Fri, 22 Aug 2014 22:27:51 +0200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F7A7C7.1020804@web.de> Vint, when I may ask you directly: I frequently read your catenet model for internetworking and your paper with Bob Kahn from 74. I'm still to understand your position towards flow control between adjacent (IP-)nodes and the subnets as well. We eventually agreed that subnets must not do flow control (but discard packets, which cannot be served) in order to avoid head of line blocking. Would it make sense (though it might not be possible for practical reasons) to assume / employ a flow based flow control which would even work in and through the concatenated subnets? So we wouldn't have a best effort packet switching but (in a sense) some kind of "flow switching"? Detlef Am 22.08.2014 um 16:33 schrieb Vint Cerf: > Donald Davies had the idea of an isarithmic network: a fixed number of > packets in the network at all times. 
Issues however included getting > "empty packets" to places with data to send. Like the taxi problem > where they end up at favored destinations but are not available > without deadheading to favored origins. > > http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks > > v -------------- next part -------------- An HTML attachment was scrubbed... URL: From vint at google.com Fri Aug 22 13:37:52 2014 From: vint at google.com (Vint Cerf) Date: Fri, 22 Aug 2014 16:37:52 -0400 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F7A7C7.1020804@web.de> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F7A7C7.1020804@web.de> Message-ID: flow control was partly managed by round-trip time measurements, window size and packet length adjustments on an end/end basis. There are networks that managed flows (OpenFlow recently, Anagran and Caspian) but none that I know about that did so across network boundaries. v On Fri, Aug 22, 2014 at 4:27 PM, Detlef Bosau wrote: > > Vint, when I may ask you directly: I frequently read your catenet model > for internetworking and your paper with Bob Kahn from 74. > > I'm still to understand your position towards flow control between > adjacent (IP-)nodes and the subnets as well. We eventually agreed that > subnets must not do flow control (but discard packets, which cannot be > served) in order to avoid head of line blocking. Would it make sense > (though it might not be possible for practical reasons) to assume / employ > a flow based flow control which would even work in and through the > concatenated subnets? > > So we wouldn't have a best effort packet switching but (in a sense) some > kind of "flow switching"? > > Detlef > > Am 22.08.2014 um 16:33 schrieb Vint Cerf: > > Donald Davies had the idea of an isarithmic network: a fixed number of > packets in the network at all times. Issues however included getting "empty > packets" to places with data to send. Like the taxi problem where they end > up at favored destinations but are not available without deadheading to > favored origins. > > > http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks > > v > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at 3kitty.org Fri Aug 22 20:34:30 2014 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 22 Aug 2014 20:34:30 -0700 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F80BC6.4080707@3kitty.org> Not just taxis... It's been a looonnggg time, but I still remember studying a lot of mathematics about 50 years ago - queueing theory, graph theory, etc. Used to be able to do it too. My recollection is that terms such as "flow control" and "congestion control" were used in mathematics, well before they were used in computer networks. I suspect the answer to "when were the terms "flow control" and "congestion control" coined will be found in the history of mathematics - not computers. Such terms have been in use a long time. They were coined long before computers. 
Computer and later network people just used the terms to describe the behavior of flows of bits, just as earlier engineers and scientists used them to describe the flow of people, railroad cars, components in manufacturing lines, warehouse inventory, etc. For example, the problem of where to put railroad tracks, and where to put railroad yards (and how big) to provide "buffers" for flows of goods is fundamentally the same as where to put packet switches, memory, circuits, etc., in computer networks. The whole field of Operations Research is about that kind of math used in engineering, business, etc., long before computers did. Of course computers made it possible to actually do the calculations fast, and that changed the way the math got used. /Jack Haverty On 08/22/2014 07:33 AM, Vint Cerf wrote: > Donald Davies had the idea of an isarithmic network: a fixed number of > packets in the network at all times. Issues however included getting > "empty packets" to places with data to send. Like the taxi problem > where they end up at favored destinations but are not available > without deadheading to favored origins. > > http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks > > v > > > > On Fri, Aug 22, 2014 at 9:56 AM, Noel Chiappa > wrote: > > > From: "James P.G. Sterbenz" > > > > All network historians and scientists should own ... > > L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 > > Indeed - it has an honoured place on my bookshelf. The importance of > CYCLADES/CIGALE in the history of data network cannot be > over-emphasized, > IMO. > > > in which congestion is covered in Chap. 4 on Cigale. > > 4.4.6, to be exact. Looking at their congestion control mechanism, > it's > fairly complex - not sure if it would work in a heterogeneous > network like > today's Internet, though. Still, interesting... > > > There were likely much earlier Cyclades papers mentioning > congestion > > before this retrospective monograph. > > Yes, about the earliest appears to be: > > M. Irland, "Queueing analysis of a buffer allocation scheme for > a packet > switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 > > There are some slightly earlier ones by him (her?), but they > appear to be > progress reports on a simulation project which was part of a PhD > thesis at > the University of Waterloo (completed in April 1977), and not widely > distributed. > > > In looking for the references in that book to the congestion work, > though, I > stumbled across this one: > > D. W. Davies, "The Control of Congestion in Packet Switching > Networks", > Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, > Palo Alto, Oct. 1971 > > I don't have access to that, but it would be interesting to see > what it > covers. > > Noel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larrysheldon at cox.net Fri Aug 22 21:21:16 2014 From: larrysheldon at cox.net (Larry Sheldon) Date: Fri, 22 Aug 2014 23:21:16 -0500 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F816BC.1080407@cox.net> On 8/22/2014 22:34, Jack Haverty wrote: > Not just taxis... > > It's been a looonnggg time, but I still remember studying a lot of > mathematics about 50 years ago - queueing theory, graph theory, etc. 
> Used to be able to do it too. > > My recollection is that terms such as "flow control" and "congestion > control" were used in mathematics, well before they were used in > computer networks. > > I suspect the answer to "when were the terms "flow control" and > "congestion control" coined will be found in the history of mathematics > - not computers. Such terms have been in use a long time. They were > coined long before computers. > > Computer and later network people just used the terms to describe the > behavior of flows of bits, just as earlier engineers and scientists used > them to describe the flow of people, railroad cars, components in > manufacturing lines, warehouse inventory, etc. > > For example, the problem of where to put railroad tracks, and where to > put railroad yards (and how big) to provide "buffers" for flows of goods > is fundamentally the same as where to put packet switches, memory, > circuits, etc., in computer networks. > > The whole field of Operations Research is about that kind of math used > in engineering, business, etc., long before computers did. > > Of course computers made it possible to actually do the calculations > fast, and that changed the way the math got used. > > /Jack Haverty > > > On 08/22/2014 07:33 AM, Vint Cerf wrote: >> Donald Davies had the idea of an isarithmic network: a fixed number of >> packets in the network at all times. Issues however included getting >> "empty packets" to places with data to send. Like the taxi problem >> where they end up at favored destinations but are not available >> without deadheading to favored origins. >> >> http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks >> >> v >> >> On Fri, Aug 22, 2014 at 9:56 AM, Noel Chiappa > > wrote: >> >> > From: "James P.G. Sterbenz" > > >> >> > All network historians and scientists should own ... >> > L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 >> >> Indeed - it has an honoured place on my bookshelf. The importance of >> CYCLADES/CIGALE in the history of data network cannot be >> over-emphasized, >> IMO. >> >> > in which congestion is covered in Chap. 4 on Cigale. >> >> 4.4.6, to be exact. Looking at their congestion control mechanism, >> it's >> fairly complex - not sure if it would work in a heterogeneous >> network like >> today's Internet, though. Still, interesting... >> >> > There were likely much earlier Cyclades papers mentioning >> congestion >> > before this retrospective monograph. >> >> Yes, about the earliest appears to be: >> >> M. Irland, "Queueing analysis of a buffer allocation scheme for >> a packet >> switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 >> >> There are some slightly earlier ones by him (her?), but they >> appear to be >> progress reports on a simulation project which was part of a PhD >> thesis at >> the University of Waterloo (completed in April 1977), and not widely >> distributed. >> >> >> In looking for the references in that book to the congestion work, >> though, I >> stumbled across this one: >> >> D. W. Davies, "The Control of Congestion in Packet Switching >> Networks", >> Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, >> Palo Alto, Oct. 1971 >> >> I don't have access to that, but it would be interesting to see >> what it >> covers. >> >> Noel >> If you will forgive an intrusion from a lurking ignoramus....... 
I would be very surprised to learn that there was nothing in the Bell Labs library from the early days to Traffic Engineering in connection with the distance dialing network development. As remember as a toll craftsman in the 1960s tossing terms around (that my aging brain can't recall now) that spoke to congestion and queuing and route advancing and stuff, as if I knew what they all meant. -- The unique Characteristics of System Administrators: The fact that they are infallible; and, The fact that they learn form their mistakes. From dhc2 at dcrocker.net Fri Aug 22 21:24:24 2014 From: dhc2 at dcrocker.net (Dave Crocker) Date: Fri, 22 Aug 2014 21:24:24 -0700 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F80BC6.4080707@3kitty.org> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> Message-ID: <53F81778.809@dcrocker.net> On 8/22/2014 8:34 PM, Jack Haverty wrote: > For example, the problem of where to put railroad tracks, and where to > put railroad yards (and how big) to provide "buffers" for flows of goods > is fundamentally the same as where to put packet switches, memory, > circuits, etc., in computer networks. Yup. In fact I recall seeing an article in the 1970s (IEEE? Kleinrock?) that was about queuing theory and the cover to the periodical showed a rendering of a large railroad switching yard. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jpgs at ittc.ku.edu Fri Aug 22 21:39:21 2014 From: jpgs at ittc.ku.edu (James P.G. Sterbenz) Date: Fri, 22 Aug 2014 23:39:21 -0500 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F81778.809@dcrocker.net> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> <53F81778.809@dcrocker.net> Message-ID: <2F0478BC-CF0F-4F43-A6EF-0C30FD15C6D3@ittc.ku.edu> On 22 Aug 2014, at 23:24, Dave Crocker wrote: > On 8/22/2014 8:34 PM, Jack Haverty wrote: >> For example, the problem of where to put railroad tracks, and where to >> put railroad yards (and how big) to provide "buffers" for flows of goods >> is fundamentally the same as where to put packet switches, memory, >> circuits, etc., in computer networks. > > > Yup. > > In fact I recall seeing an article in the 1970s (IEEE? Kleinrock?) that > was about queuing theory and the cover to the periodical showed a > rendering of a large railroad switching yard. I recall a similar photo on a special issue on fast packet switching in the late 1980s (or maybe very early 1990s) on perhaps IEEE Network or IEEE Communications. Cheers, James --------------------------------------------------------------------- James P.G. Sterbenz jpgs@{ittc|eecs}.ku.edu jpgs at comp.lancs.ac.uk www.ittc.ku.edu/~jpgs 154 Nichols ITTC|EECS InfoLab21 Lancaster U +1 508 944 3067 The University of Kansas jpgs at tik.ee.ethz.ch jpgs@{acm|ieee|comsoc|computer|m.ieice}.org jpgsterbenz at gmail.com gplus.to/jpgs www.facebook.com/jpgsterbenz jpgs at ittc.ku.edu From larrysheldon at cox.net Fri Aug 22 21:51:53 2014 From: larrysheldon at cox.net (Larry Sheldon) Date: Fri, 22 Aug 2014 23:51:53 -0500 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. 
Re: When was Go Back N adopted by TCP In-Reply-To: References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <53F81DE9.8080405@cox.net> On 8/22/2014 23:21, Larry Sheldon wrote: > On 8/22/2014 22:34, Jack Haverty wrote: >> Not just taxis... >> >> It's been a looonnggg time, but I still remember studying a lot of >> mathematics about 50 years ago - queueing theory, graph theory, etc. >> Used to be able to do it too. >> >> My recollection is that terms such as "flow control" and "congestion >> control" were used in mathematics, well before they were used in >> computer networks. >> >> I suspect the answer to "when were the terms "flow control" and >> "congestion control" coined will be found in the history of mathematics >> - not computers. Such terms have been in use a long time. They were >> coined long before computers. >> >> Computer and later network people just used the terms to describe the >> behavior of flows of bits, just as earlier engineers and scientists used >> them to describe the flow of people, railroad cars, components in >> manufacturing lines, warehouse inventory, etc. >> >> For example, the problem of where to put railroad tracks, and where to >> put railroad yards (and how big) to provide "buffers" for flows of goods >> is fundamentally the same as where to put packet switches, memory, >> circuits, etc., in computer networks. >> >> The whole field of Operations Research is about that kind of math used >> in engineering, business, etc., long before computers did. >> >> Of course computers made it possible to actually do the calculations >> fast, and that changed the way the math got used. >> >> /Jack Haverty >> >> >> On 08/22/2014 07:33 AM, Vint Cerf wrote: >>> Donald Davies had the idea of an isarithmic network: a fixed number of >>> packets in the network at all times. Issues however included getting >>> "empty packets" to places with data to send. Like the taxi problem >>> where they end up at favored destinations but are not available >>> without deadheading to favored origins. >>> >>> http://www.researchgate.net/publication/224730989_The_Control_of_Congestion_in_Packet-Switching_Networks >>> >>> >>> v >>> >>> On Fri, Aug 22, 2014 at 9:56 AM, Noel Chiappa >> > wrote: >>> >>> > From: "James P.G. Sterbenz" >> > >>> >>> > All network historians and scientists should own ... >>> > L. Pouzin, _The Cyclades Computer Network_, North-Holland, >>> 1982 >>> >>> Indeed - it has an honoured place on my bookshelf. The >>> importance of >>> CYCLADES/CIGALE in the history of data network cannot be >>> over-emphasized, >>> IMO. >>> >>> > in which congestion is covered in Chap. 4 on Cigale. >>> >>> 4.4.6, to be exact. Looking at their congestion control mechanism, >>> it's >>> fairly complex - not sure if it would work in a heterogeneous >>> network like >>> today's Internet, though. Still, interesting... >>> >>> > There were likely much earlier Cyclades papers mentioning >>> congestion >>> > before this retrospective monograph. >>> >>> Yes, about the earliest appears to be: >>> >>> M. Irland, "Queueing analysis of a buffer allocation scheme for >>> a packet >>> switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 >>> >>> There are some slightly earlier ones by him (her?), but they >>> appear to be >>> progress reports on a simulation project which was part of a PhD >>> thesis at >>> the University of Waterloo (completed in April 1977), and not >>> widely >>> distributed. 
>>> >>> >>> In looking for the references in that book to the congestion work, >>> though, I >>> stumbled across this one: >>> >>> D. W. Davies, "The Control of Congestion in Packet Switching >>> Networks", >>> Proc. 2nd Symp. on Problems of Optimization of Data Comm. >>> Systems, >>> Palo Alto, Oct. 1971 >>> >>> I don't have access to that, but it would be interesting to see >>> what it >>> covers. >>> >>> Noel >>> > If you will forgive an intrusion from a lurking ignoramus....... I wish my toys would stop helping me edit this stuff..... > > I would be very surprised to learn that there was nothing in the Bell > Labs library from the early days to Traffic Engineering in connection Labs library from the early days of Traffic Engineering in connection > with the distance dialing network development. > > As remember as a toll craftsman in the 1960s tossing terms around (that > I remember as a toll craftsman in the 1960s tossing terms around > (that my aging brain can't recall now) that spoke to congestion and queuing > and route advancing and stuff, as if I knew what they all meant. > > -- The unique Characteristics of System Administrators: The fact that they are infallible; and, The fact that they learn form their mistakes. From jpgs at ittc.ku.edu Fri Aug 22 23:45:04 2014 From: jpgs at ittc.ku.edu (James P.G. Sterbenz) Date: Sat, 23 Aug 2014 01:45:04 -0500 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <20140822135605.9C45318C123@mercury.lcs.mit.edu> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> Message-ID: <0B52B580-0A4E-41EB-B3F2-0DD70B46F6E1@ittc.ku.edu> On 22 Aug 2014, at 08:56, Noel Chiappa wrote: >> From: "James P.G. Sterbenz" > >> All network historians and scientists should own ... >> L. Pouzin, _The Cyclades Computer Network_, North-Holland, 1982 > > Indeed - it has an honoured place on my bookshelf. The importance of > CYCLADES/CIGALE in the history of data network cannot be over-emphasized, > IMO. > >> in which congestion is covered in Chap. 4 on Cigale. > > 4.4.6, to be exact. Looking at their congestion control mechanism, it's > fairly complex - not sure if it would work in a heterogeneous network like > today's Internet, though. Still, interesting... > >> There were likely much earlier Cyclades papers mentioning congestion >> before this retrospective monograph. > > Yes, about the earliest appears to be: > > M. Irland, "Queueing analysis of a buffer allocation scheme for a packet > switch", Proc. IEEE-NTC '75, New Orleans, Dec. 1975 Thanks for digging up the references; I?ll try to track this one down... > There are some slightly earlier ones by him (her?), but they appear to be > progress reports on a simulation project which was part of a PhD thesis at > the University of Waterloo (completed in April 1977), and not widely > distributed. > > > In looking for the references in that book to the congestion work, though, I > stumbled across this one: > > D. W. Davies, "The Control of Congestion in Packet Switching Networks", > Proc. 2nd Symp. on Problems of Optimization of Data Comm. Systems, > Palo Alto, Oct. 1971 > > I don't have access to that, but it would be interesting to see what it > covers. 
It is probably a slightly earlier version of http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1091198 There is at least one copy on the open Web if you don?t have IEEE Xplore access, but I?m not going to link it directly in a public forum (having made that mistake once here before). I think I?ll see if my librarians can track the earlier reference down; it would be nice to have. The Davies book I referenced is still readily available on the secondary market; I got my first copy in the late 1970s and then a second one when the GTE Laboratories library (RIP) shut down: www.amazon.com/Communication-Networks-Computers-Wiley-computing/dp/0471198749 And of course there is the almost-as-important followon: Davies, Barber, Price, and Solomonides. _Computer Networks and their Protocols_, Wiley, 1979 I cover all of NPL, Cyclades, and ARPANET in my networking courses; while ARPANET won as the direct architectural predecessor to the Internet, the others were seminal research peers. Cheers, James --------------------------------------------------------------------- James P.G. Sterbenz jpgs@{ittc|eecs}.ku.edu jpgs at comp.lancs.ac.uk www.ittc.ku.edu/~jpgs 154 Nichols ITTC|EECS InfoLab21 Lancaster U +1 508 944 3067 The University of Kansas jpgs at tik.ee.ethz.ch jpgs@{acm|ieee|comsoc|computer|m.ieice}.org jpgsterbenz at gmail.com gplus.to/jpgs www.facebook.com/jpgsterbenz jpgs at ittc.ku.edu From jnc at mercury.lcs.mit.edu Sat Aug 23 05:37:19 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 23 Aug 2014 08:37:19 -0400 (EDT) Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP Message-ID: <20140823123719.AE48818C12E@mercury.lcs.mit.edu> > From: "James P.G. Sterbenz" > I cover all of NPL, Cyclades, and ARPANET in my networking courses; > while ARPANET won as the direct architectural predecessor to the > Internet, the others were seminal research peers. Actually, I would only describe the ARPANET as the _operational_ predecessor to the Internet; for the "architectural predecessor" of _all_ internetworking systems, I would look to CYCLADES/CIGALE. We did use the ARPANET in building the Internet, both as a tool (to exchange e-mail, files, etc) and also also the WAN communication substrate (a role for which it was IMO quite badly suited, for reasons I won't elaborate unless people want to hear them). And of course we took over the _applications_ (e-mail, file transfer, remote login, etc) more or less whole. But architecturally, with the move of key functionality into the end-points, the Internet took a very different path from the ARPANET - a path pioneered by CYCLADES/CIGALE. BTW, in "all internetworking systems", I would include PUP, which might also be worth covering briefly, since I gather the PUP guys did influence TCP/IP somewhat - the exact level would be an interesting historical research project for someone... Noel From jnc at mercury.lcs.mit.edu Sat Aug 23 06:35:12 2014 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 23 Aug 2014 09:35:12 -0400 (EDT) Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP Message-ID: <20140823133512.1CE1818C092@mercury.lcs.mit.edu> > From: Vint Cerf > Donald Davies had the idea of an isarithmic network: a fixed number of > packets in the network at all times. 
Issues however included getting > "empty packets" to places with data to send. Yes - an interesting approach, but I'm not sure it's workable - especially in a large internet. The paper is interesting though because it does use the terms "flow control" and "congestion control" in pretty much their modern meanings: This so-called "isarithmic" method of congestion control supplements and does not replace end-to-end flow control. And as Brian pointed out, you can clearly see that his earlier work in road networks has influenced his understanding, e.g.: By analogy with road traffic, congestion can be expected to begin at one point in the network and spread as the queues fill and links between switching centres are blocked. It's really interesting to see how many times various people looked at the congestion control issue, and their results didn't really catch on widely, until we all finally 'got the message' after the Internet congestive collapses, and Van's work. Speaking of PUP, it's interesting to see what they did for congestion control: looking in the seminal "PUP: An Internetwork Architecture" (July 1979) they have several sections on congestion control ("2.8. Flow control and congestion control", "5.2. Congestion control and utilization of low-bandwidth channels"), and clearly differentiate between flow and congestion control. However, they don't (at least, in that document) give specifics on their congestion control algorithms in the hosts, just indicate that they use their version of Source Quench as a congestion signal, with only a general gloss on how it's used: The source process can use this information to modify its transmission strategies-for example, to reduce its offered load .. and thereby help to relieve the congestion. Anyone know anything more about congestion control algorithms in PUP? Noel From detlef.bosau at web.de Sat Aug 23 06:56:45 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 23 Aug 2014 15:56:45 +0200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F80BC6.4080707@3kitty.org> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> Message-ID: <53F89D9D.1010300@web.de> Am 23.08.2014 um 05:34 schrieb Jack Haverty: > Not just taxis... > > It's been a looonnggg time, but I still remember studying a lot of > mathematics about 50 years ago - queueing theory, graph theory, etc. > Used to be able to do it too. Although these things are useful if applied correctly, they must not be applied to things where they don't apply. To put it more drastically (and perhaps enter some killfiles): To my understanding, the common denominator in buffer bloat, problems in VJCC, problems with isarithmic networks is Little's theorem. Which, and please take hammer, gouge and a plate of marble and carve it, is L I T T L E `S T H E O R E M , W H I C H D O E S N O T A P P L Y T O C O M P U T E R N E T W O R K S! (This is not a claim by me - read Little's preconditions and assumptions, they simply do not apply to packet switched networks. Period.) (I apologize for shouting.)
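For reference - this is the standard textbook statement of that result, not a quotation from anywhere in this thread:

L = \lambda W

where L is the average number of customers in the system, \lambda is the long-run average arrival rate, and W is the average time a customer spends in the system. The usual preconditions are that these long-run averages exist and that customers are conserved - every arrival eventually departs, nothing is lost or created inside the system - and the law speaks about long-run averages, not about instantaneous queue lengths. Whether those preconditions hold in a packet-switched network that drops traffic under time-varying load is exactly the question raised above.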
Neither does the whole of queueing theory as it is often employed - even by people who made great contributions to networking. When Len Kleinrock and Raj Jain talk about queueing systems, they talk about mathematical models which may well provide insight into how systems work - unfortunately, and that's really a pity, hardly any of them applies to computer networks. So a good networker should be able to deal with models in order to understand how systems work - while he is basically an engineer who must keep his feet in solid coupling to the ground. > > My recollection is that terms such as "flow control" and "congestion > control" were used in mathematics, well before they were used in > computer networks. > Where? All our queueing models deal with loss free systems. We don't even have a notion of stability here - although years ago I was asked whether we could prove the Internet to be "stable". The mere question is simply ludicrous and makes it obvious that the questioner has absolutely no idea what he is talking about. In a scientific (!) paper I think I have read the notion (my memory may cheat me here) that a queueing system is stable if the queues cannot grow beyond all limits. => Please go back to a basic lecture on stochastic processes, introductory remarks, first two weeks. When I enqueue at the cash point in my local supermarket, the queue sometimes grows beyond all limits. (Which is relative. In some cases, there are 3 customers, each one with 1 item, and the queue is beyond all limits, the cashier close to a heart attack and the customers close to insanity; in other cases, there are 50 customers, each with 150 items, and the queueing delay is not observable.) To my understanding, a queue may well grow beyond all limits, and this is perfectly acceptable, if there is a probability > 0, definitely != 0, that the queue will eventually return to a finite length or even run empty. But these are abstractions. With infinite queues, stationary processes, Poisson processes, Markov processes. A glass bead game (translated word for word); perhaps the term "Glasperlenspiel" is known even to the English-speaking world. In control theory, professors award PhD students their hat with the words: "Congratulations on your hat, but now it's time to forget everything you've learned here and to go outside - and deal with reality." So the newborn Dr.-Ing. forgets all about those Kalman Filters, Luenberger Observers and Lyapunov Equations - and starts engineering. > I suspect the answer to "when were the terms "flow control" and > "congestion control" coined will be found in the history of > mathematics - not computers. Such terms have been in use a long > time. They were coined long before computers. Do you have precise definitions? Particularly for flow control. Congestion is easy. "A queueing system is congested when at least one queue is transient." But what is flow control all about here? > > Computer and later network people just used the terms to describe the > behavior of flows of bits, just as earlier engineers and scientists > used them to describe the flow of people, railroad cars, components in > manufacturing lines, warehouse inventory, etc. And that was not always useful. In many of these scenarios, mathematical models have been thoughtlessly applied where they don't apply.
> > For example, the problem of where to put railroad tracks, and where to > put railroad yards (and how big) to provide "buffers" for flows of > goods is fundamentally the same as where to put packet switches, > memory, circuits, etc., in computer networks. > And it is - sorry for being harsh here - sometimes the same nonsense. I already said this some posts ago. To my understanding, the main reason for adopting a sliding window scheme in telecommunication is to avoid idle times. (Or deadheading.) Just another example of a wrongly applied model. When I take a taxi, I want to reach my destination ASAP. (And I have only limited compassion for the taxi driver's budget.) (To put it in extreme terms: economically, you will find me on the side of Keynes, not on the side of Hayek.) Back to networks: Networks shall convey data as soon as possible - and they shall serve the user. Not the other way round. More precisely: It is simply not the user's job to keep the lines busy. And actually, we don't keep the lines busy, we often keep the queues overcrowded. This might be a certain shift of paradigm, because in the 70s, lines were extremely expensive. Hence the intention was to fully utilize them. To my knowledge, today's Tier 1 backbone is - in contrast to the situation back in the 70s - rather a bit overprovisioned. > The whole field of Operations Research is about that kind of math used > in engineering, business, etc., long before computers did. Yes. And by now they do it with appropriate care... (Engineers are not mathematicians. There are very few people who can successfully act in both roles. Many people tend to be extreme in one direction.) > > Of course computers made it possible to actually do the calculations > fast, and that changed the way the math got used. > > /Jack Haverty > Please don't mix up mathematics with computing ;-) *SCNR* -- ------------------------------------------------------------------ Detlef Bosau Galileistraße 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at 3kitty.org Sat Aug 23 16:01:29 2014 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 23 Aug 2014 16:01:29 -0700 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <20140823123719.AE48818C12E@mercury.lcs.mit.edu> References: <20140823123719.AE48818C12E@mercury.lcs.mit.edu> Message-ID: <53F91D49.6000101@3kitty.org> On 08/23/2014 05:37 AM, Noel Chiappa wrote: > BTW, in "all internetworking systems", I would include PUP, which might also > be worth covering briefly, since I gather the PUP guys did influence TCP/IP > somewhat - the exact level would be an interesting historical research > project for someone... There was quite a lot of cross-pollination in the late 70s and early 80s between the ARPA Internet projects and the Xerox PARC work (PUP et al). Larry Stewart of PARC was a frequent attendee at Vint's periodic Internet Group meetings, as well as occasionally John Shoch. Probably others too. One of the DARPA regular meetings was hosted by PARC. I recall one session where we were all listening to someone talk about something, and being distracted by all of the Alto monitors scattered around the room. At one point in the session, Dave Clark exclaimed, quite loudly, "Get him!!!".
Apparently he was watching an intense game of Maze Wars that was on one of the monitors, and whoever was playing wasn't as good as Dave. There was a lot of such cross-pollination, and quite a few competing Internet architectures and protocol stacks. TCP/IP was only one way to do it. PUP was another at about the same time. DECNET, SPX/IPX, SNA, Appletalk, .... and of course the ISO TPn protocols. I don't remember them all, but they were all certainly part of "all internetworking systems". It may also be historically interesting to track the technical cross pollination by following the people. For example, Radia Perlman was involved in the ARPA Internet work while at BBN, and then went to work for Novell. So it should be no surprise that Novell's SPX/IPX and ARPA's TCP/IP have a lot in common. Same with PUP. I don't know how you might measure how much one project influenced another. It was a very "open" time when people talked freely about what they were doing and why it was better than the others. In my own case, when I left BBN to become "Internet Architect" at Oracle, I helped push through an internetworking technology (with the exciting name "Oracle Networking") that created internets of dissimilar internets. This was circa 1990. Essentially we created a concatenation of reliable virtual circuits so that any client could communicate with any server. E.G., a PC on a Novell LAN using only SPX/IPX could interact with a server on a DEC machine using only DECNET, possibly communicating by using TCP/IP in the middle. Or SNA, etc. When TCP/IP finally "won" the battle and emerged as the universal technology, we no longer needed to do such concatenations so that "internet of internets" could fade away. The History of The Internet is much more complex than just the History of TCP... /Jack From detlef.bosau at web.de Mon Aug 25 10:09:10 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Mon, 25 Aug 2014 19:09:10 +0200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53F80BC6.4080707@3kitty.org> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> Message-ID: <53FB6DB6.2030805@web.de> Am 23.08.2014 um 05:34 schrieb Jack Haverty: > > It's been a looonnggg time, but I still remember studying a lot of > mathematics about 50 years ago - queueing theory, graph theory, etc. > Used to be able to do it too. > > My recollection is that terms such as "flow control" and "congestion > control" were used in mathematics, well before they were used in > computer networks. Hm. I read quite a lot of mathematical models used for computer networks. However, I never happened to see, how flow control and congestion control were modelled. The models were made that abstract, that congestion control and flow control vanished. I really appreciate concrete pointers here. Detlef -------------- next part -------------- An HTML attachment was scrubbed... URL: From jack at 3kitty.org Mon Aug 25 12:44:35 2014 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 25 Aug 2014 12:44:35 -0700 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. 
Re: When was Go Back N adopted by TCP In-Reply-To: <53FB6DB6.2030805@web.de> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> <53FB6DB6.2030805@web.de> Message-ID: <53FB9223.4050804@3kitty.org> On 08/25/2014 10:09 AM, Detlef Bosau wrote: > Am 23.08.2014 um 05:34 schrieb Jack Haverty: >> >> It's been a looonnggg time, but I still remember studying a lot of >> mathematics about 50 years ago - queueing theory, graph theory, etc. >> Used to be able to do it too. >> >> My recollection is that terms such as "flow control" and "congestion >> control" were used in mathematics, well before they were used in >> computer networks. > > > Hm. I read quite a lot of mathematical models used for computer networks. > > However, I never happened to see, how flow control and congestion > control were modelled. The models were made that abstract, that > congestion control and flow control vanished. > > I really appreciate concrete pointers here. > > Detlef Detlef, I suggest you research the literature of the "Operations Research" branch of mathematics. That is where pure mathematics concepts of queueing theory et al. were applied to real-world problems. I did a quick google search of ("queueing theory" "flow control") and found this example of a mathematical paper discussing queues and "flow control" in computer networks: http://www.jstor.org/discover/10.2307/2582975?uid=2&uid=4&sid=21104558206567 Here's another one, about flow control in "supply chain networks" in the manufacturing environment: http://www.isr.umd.edu/~baras/publications/papers/2012/Ion_Asane_MTNS2012.html What we might call "packet loss" they might call "supply chain disruption". I suspect most of the material you may find online will be about the use of mathematics in computer networks. Unfortunately, most earlier work, before we had the Internet, is probably not available online - it may only be in university libraries. So that's where you may be able to find papers on flow control in pre-computer environments like railroads. Also, the basic mathematical concept which we networking people call "flow control" might have been described using different terminology in, for example, the old railroad or other business examples. I suggest researching Bell Laboratories work from before 1970. They did a lot of theoretical work modelling the telephone network, and in particular the issues of managing many simultaneous voice calls. The problem of designing the telephone network to minimize the probability of busy signals is about the management of multiple simultaneous flows and controlling congestion on circuits in the interior of the network. But they may have used different terminology. My recollection is also that, as you said, the mathematical models were so abstract that they were not very useful in the real world of computer networks. Mathematics could be used to model hypothetical cases, and was useful to see how things might behave in theory. The real-world was sufficiently chaotic and unpredictable that it was difficult to model with sufficient accuracy. That's why the Internet was built by a continuing series of experiments and refinements. At one point, someone published a mathematical paper that proved that the ARPANET would lock up and all traffic flows would stop. This cause some great concerns among the users of the ARPANET who were depending on it. 
Our analysts at BBN examined that paper in great detail and concluded that it was mathematically correct -- but one of the assumptions made in the model was that every packet switch computer was started at the same moment in time, and all those computers ran at exactly the same speed so instructions in all machines were executed in total synchrony. We advised the users that, if such a situation could be created, the ARPANET would crash, but that the likelihood of that situation of perfect synchrony was so tiny that there was no reason to worry. It was a mathematically interesting theoretical problem, but not a real-world concern. I can't recall when I first encountered the terms "flow control" or "congestion control" in computer networks, or seeing any formal definitions of those terms. My personal view is that "flow control" refers to the management of a single flow of information between two end-points. It could be a TCP connection, or a telephone call, or a stream of railroad cars between a factory and warehouse. There are mechanisms in the endpoints, as well as in the interior, to manage that flow. Conversely, "congestion control" refers to the management of a set of many flows as they compete for resources. If there are too many flows going through a bottleneck, congestion happens and may result in broken flows or "busy signals". These two phenomena interact in complex ways, so it was common in the early Internet work to discuss them both when working on any particular problem. For example, the TCP algorithms in host computers would interact with the routing algorithms in switching components, the error-control algorithms on individual circuits, the load-levelling schemes of server farms, and almost anything else you can imagine that's involved in regulating the flows through the Internet. IMHO, the Internet is way more complex than we know how to model. It's probably at the same level of complexity as other hard problems - weather, astronomy, etc. We didn't model it. We just built it. Hope this helps, /Jack Haverty -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.e.carpenter at gmail.com Mon Aug 25 13:11:17 2014 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 26 Aug 2014 08:11:17 +1200 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53FB9223.4050804@3kitty.org> References: <20140822135605.9C45318C123@mercury.lcs.mit.edu> <53F80BC6.4080707@3kitty.org> <53FB6DB6.2030805@web.de> <53FB9223.4050804@3kitty.org> Message-ID: <53FB9865.4080503@gmail.com> I suggest starting with Erlang. Since he died in 1929, you are certainly looking for papers that are mainly to be found on paper ;-). According to Wikipedia his first significant paper was published in 1909. Of course it was about circuit switching. Brian On 26/08/2014 07:44, Jack Haverty wrote: > On 08/25/2014 10:09 AM, Detlef Bosau wrote: >> Am 23.08.2014 um 05:34 schrieb Jack Haverty: >>> It's been a looonnggg time, but I still remember studying a lot of >>> mathematics about 50 years ago - queueing theory, graph theory, etc. >>> Used to be able to do it too. >>> >>> My recollection is that terms such as "flow control" and "congestion >>> control" were used in mathematics, well before they were used in >>> computer networks. >> >> Hm. I read quite a lot of mathematical models used for computer networks. 
>> >> However, I never happened to see, how flow control and congestion >> control were modelled. The models were made that abstract, that >> congestion control and flow control vanished. >> >> I really appreciate concrete pointers here. >> >> Detlef > > Detlef, > > I suggest you research the literature of the "Operations Research" > branch of mathematics. That is where pure mathematics concepts of > queueing theory et al. were applied to real-world problems. > > I did a quick google search of ("queueing theory" "flow control") and > found this example of a mathematical paper discussing queues and "flow > control" in computer networks: > > http://www.jstor.org/discover/10.2307/2582975?uid=2&uid=4&sid=21104558206567 > > Here's another one, about flow control in "supply chain networks" in the > manufacturing environment: > > http://www.isr.umd.edu/~baras/publications/papers/2012/Ion_Asane_MTNS2012.html > > What we might call "packet loss" they might call "supply chain disruption". > > I suspect most of the material you may find online will be about the use > of mathematics in computer networks. Unfortunately, most earlier work, > before we had the Internet, is probably not available online - it may > only be in university libraries. So that's where you may be able to > find papers on flow control in pre-computer environments like railroads. > > Also, the basic mathematical concept which we networking people call > "flow control" might have been described using different terminology in, > for example, the old railroad or other business examples. > > I suggest researching Bell Laboratories work from before 1970. They > did a lot of theoretical work modelling the telephone network, and in > particular the issues of managing many simultaneous voice calls. The > problem of designing the telephone network to minimize the probability > of busy signals is about the management of multiple simultaneous flows > and controlling congestion on circuits in the interior of the network. > But they may have used different terminology. > > My recollection is also that, as you said, the mathematical models were > so abstract that they were not very useful in the real world of computer > networks. Mathematics could be used to model hypothetical cases, and > was useful to see how things might behave in theory. The real-world > was sufficiently chaotic and unpredictable that it was difficult to > model with sufficient accuracy. That's why the Internet was built by a > continuing series of experiments and refinements. > > At one point, someone published a mathematical paper that proved that > the ARPANET would lock up and all traffic flows would stop. This cause > some great concerns among the users of the ARPANET who were depending on > it. Our analysts at BBN examined that paper in great detail and > concluded that it was mathematically correct -- but one of the > assumptions made in the model was that every packet switch computer was > started at the same moment in time, and all those computers ran at > exactly the same speed so instructions in all machines were executed in > total synchrony. We advised the users that, if such a situation could > be created, the ARPANET would crash, but that the likelihood of that > situation of perfect synchrony was so tiny that there was no reason to > worry. It was a mathematically interesting theoretical problem, but not > a real-world concern. 
> > I can't recall when I first encountered the terms "flow control" or > "congestion control" in computer networks, or seeing any formal > definitions of those terms. > > My personal view is that "flow control" refers to the management of a > single flow of information between two end-points. It could be a TCP > connection, or a telephone call, or a stream of railroad cars between a > factory and warehouse. There are mechanisms in the endpoints, as well > as in the interior, to manage that flow. > > Conversely, "congestion control" refers to the management of a set of > many flows as they compete for resources. If there are too many flows > going through a bottleneck, congestion happens and may result in broken > flows or "busy signals". > > These two phenomena interact in complex ways, so it was common in the > early Internet work to discuss them both when working on any particular > problem. For example, the TCP algorithms in host computers would > interact with the routing algorithms in switching components, the > error-control algorithms on individual circuits, the load-levelling > schemes of server farms, and almost anything else you can imagine that's > involved in regulating the flows through the Internet. > > IMHO, the Internet is way more complex than we know how to model. It's > probably at the same level of complexity as other hard problems - > weather, astronomy, etc. > > We didn't model it. We just built it. > > Hope this helps, > /Jack Haverty > > From johnl at iecc.com Mon Aug 25 21:39:46 2014 From: johnl at iecc.com (John Levine) Date: 26 Aug 2014 04:39:46 -0000 Subject: [ih] FC vs CC Re: [e2e] Fwd: Re: Once again buffer bloat and CC. Re: A Cute Story. Or: How to talk completely at cross purposes. Re: When was Go Back N adopted by TCP In-Reply-To: <53FB9865.4080503@gmail.com> Message-ID: <20140826043946.3432.qmail@joyce.lan> In article <53FB9865.4080503 at gmail.com> you write: >I suggest starting with Erlang. Since he died in 1929, you are certainly >looking for papers that are mainly to be found on paper ;-). Oh, ye, of little faith: http://runeberg.org/matetids/1920b/0029.html It's written in the universal mathematical language: Danish. R's, John From detlef.bosau at web.de Sat Aug 30 09:25:22 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Sat, 30 Aug 2014 18:25:22 +0200 Subject: [ih] why did CC happen at all? Message-ID: <5401FAF2.8070306@web.de> I'm yet to understand the sitch from the ARPAnet to the Internet in 1983, however, if this happened that way, that an Internet host sent a message to its peer using the "message switching system" (may I call it that way?) in the ARPAnet, CC would be an "impossible fact". (Some German readers might enjoy this little text here: http://ingeb.org/Lieder/palmstre.html) In the ARPAnet, congestion was avoided by flow control - and in fact, actually, there is nothing like "congestion" when networks are implemented correctly. To my understanding, "congestion" is an excuse for missing (or botched) flow control. So, what was the scenario, VJ describes in the congavoid paper? Up to know, I always thought, the ARPAnet infrastructure was still in use, although adopted by the Internet protocol stack, but I thought, IP datagrams were sent like ARPAnet messages? 
Detlef

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de http://www.detlef-bosau.de

From detlef.bosau at web.de Sat Aug 30 11:27:10 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sat, 30 Aug 2014 20:27:10 +0200
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <5401FAF2.8070306@web.de>
References: <5401FAF2.8070306@web.de>
Message-ID: <5402177E.5020403@web.de>

sorry, typo.

Am 30.08.2014 um 18:25 schrieb Detlef Bosau:
> I'm yet to understand the sitch from the ARPAnet to the Internet in
> 1983, however, if this happened that way, that an Internet host sent a
>
> ------------------------------------------------------------------
> Detlef Bosau
> Galileistraße 30
> 70565 Stuttgart Tel.: +49 711 5208031
> mobile: +49 172 6819937
> skype: detlef.bosau
> ICQ: 566129673
> detlef.bosau at web.de http://www.detlef-bosau.de
>

From mfidelman at meetinghouse.net Sat Aug 30 12:31:03 2014
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sat, 30 Aug 2014 15:31:03 -0400
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <5402177E.5020403@web.de>
References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de>
Message-ID: <54022677.9050702@meetinghouse.net>

Umm... now this makes even less sense. What question are you asking, or what point are you trying to make?

Detlef Bosau wrote:
> sorry, typo.
>
> Am 30.08.2014 um 18:25 schrieb Detlef Bosau:
>> I'm yet to understand the sitch from the ARPAnet to the Internet in
>> 1983, however, if this happened that way, that an Internet host sent a
>>
>> ------------------------------------------------------------------
>> Detlef Bosau
>> Galileistraße 30
>> 70565 Stuttgart Tel.: +49 711 5208031
>> mobile: +49 172 6819937
>> skype: detlef.bosau
>> ICQ: 566129673
>> detlef.bosau at web.de http://www.detlef-bosau.de
>>

-- 
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

From detlef.bosau at web.de Sat Aug 30 13:28:35 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sat, 30 Aug 2014 22:28:35 +0200
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <54022677.9050702@meetinghouse.net>
References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net>
Message-ID: <540233F3.5080403@web.de>

Am 30.08.2014 um 21:31 schrieb Miles Fidelman:
> Umm... now this makes even less sense. What question are you asking,
> or what point are you trying to make?

;-) (That isn't the correct smiley. I'm not smiling here. I'm simply frustrated.)

For nearly three decades, we have been handing out doctoral hats and PhD diplomas for, may I be honest just this once, nonsense called "congestion control".

Now, this afternoon (and not only this afternoon - I have been dealing with these things for more than ten years now), I had a look at this page:

http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/hi_flow.shtml

Now to the point.

The one and only purpose of flow control is to have a sender send not faster than a receiver can accept and process data.
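For readers following the flow-control / congestion-control distinction being argued in this thread: in TCP as deployed, the two limits end up combined by taking their minimum - the receiver's advertised window bounds the sender on behalf of the receiving host, while the congestion window bounds it on behalf of the path. The following is a minimal, illustrative Python sketch of that interplay (slow start, additive increase, halving on loss); all numbers are invented and this is not any real stack's implementation.

# Flow control: the receiver advertises how much it can accept (rwnd).
# Congestion control: the sender probes the path with a congestion window
# (cwnd), grown by slow start / additive increase and cut on loss.
# Purely illustrative; parameters are made up.

RWND = 20          # receiver-advertised window, in segments (flow control)
SSTHRESH0 = 16     # initial slow-start threshold, in segments
BOTTLENECK = 12    # the path can carry this many segments per round trip

def simulate(rounds=12):
    cwnd = 1.0
    ssthresh = SSTHRESH0
    for rtt in range(1, rounds + 1):
        # The sender may have at most min(rwnd, cwnd) segments outstanding.
        in_flight = min(RWND, int(cwnd))
        lost = in_flight > BOTTLENECK      # crude stand-in for an overflowing queue
        print(f"rtt {rtt:2d}: cwnd={cwnd:5.1f}  window={in_flight:2d}  "
              f"{'LOSS' if lost else 'ok'}")
        if lost:
            ssthresh = max(2, in_flight // 2)   # multiplicative decrease
            cwnd = float(ssthresh)
        elif cwnd < ssthresh:
            cwnd *= 2                           # slow start: double per RTT
        else:
            cwnd += 1                           # congestion avoidance: +1 per RTT

if __name__ == "__main__":
    simulate()

Note that only the rwnd limit is "flow control" in the strict sense used above; everything done to cwnd is a reaction to the state of the path, which is the part this thread is debating.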
Hence, flow control happens INEVITABLY hop to hop, otherwise it is simply not flow control. In TCP, we have a model

Sender ----------(miracle)-------------Receiver

and we use a "flow control" between receiver and sender, and all textbooks I know spread the nonsense that the receiver in a TCP session has to limit the sender via "flow control", and we assume a "miracle" as the "link model".

Actually, and that's the point, IP is an overlay network - and what I called (miracle) may be completely congested and neither the sender nor the receiver is aware of this congestion.

Now, first: If we used the ARPAnet for layer 1 and 2 in the Internet, this congestion would simply not occur, because the ARPAnet (see link above) offered the necessary means for flow-wise (!) hop-to-hop flow control.

Some day, and I don't know this exactly as I did not attend the meetings, we wrote RFC 791 - and simply left out flow control on layer 1 and layer 2. With the very foreseeable result that things would crash.

IP nodes communicated using a "cloud" with miraculous "lines", so that we could use a model

Sender ------------- (miraculous line) --------- Receiver

and this (miraculous line) has a certain "capacity" (it becomes even better: this "capacity" is even stationary....) which can be probed, assessed, modelled, and we introduced "congestion control" as a means of throttling the sender to the "rate" and "capacity" (both stationary of course) of our "(miraculous line)".

And now we complain about
- buffer bloat,
- too long round trip times,
- fairness problems,
- loss differentiation problems,
- line underutilization (i.e. lines have to carry probing packets and retransmissions, which use resources),

all of which, I am more and more convinced, result from the one (IMHO wrong) decision to omit flow control in IP and to omit a reasonable scheduling and allocation of resources, and now we are fully focused on fixing the consequences of a botch made 30 years ago.

I'm curious about the actual equipment used for "Internet connections" in the context of the congavoid paper. And perhaps I will have to read the early papers on how the change from ARPAnet to Internet was actually managed. And my conjecture is that we made some inconsiderate decisions at that time, which avenged themselves during the last decades.

If I only could, I would roll this design back and give TCP a redesign on an elaborated network (and network model) with proper
- flow-wise
- hop-by-hop
flow control.

I'm convinced this would not only spare us the aforementioned problems but would be cheaper, faster and less resource-consuming than our Internet today.

However, I feel a bit isolated here. (And, as I said, having thought about this for many years now, deeply frustrated.)

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de http://www.detlef-bosau.de

From detlef.bosau at web.de Sat Aug 30 15:58:39 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 31 Aug 2014 00:58:39 +0200
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <540233F3.5080403@web.de>
References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de>
Message-ID: <5402571F.4030901@web.de>

And to emphasize this: Have a look at everyday traffic in our streets. Or at the scheduling of processor capacity and memory in computers.
There is no single one example for this probing/drop nonsense which we do on computer networks. Frankly spoken, this botch is simle an embarressment. There are dozens of all days life examples where resources must be allocated or assigned, we have well proven algorithms for these purposes. E.g. in Germany, you can travel by car from Flensburg to F?ssen. And there is no need for probing, no need for dropped cars and car corruption is considered an accident. Obviously, we are unable to apply these concepts to networks. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From paul at redbarn.org Sat Aug 30 19:04:33 2014 From: paul at redbarn.org (P Vixie) Date: Sat, 30 Aug 2014 19:04:33 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <540233F3.5080403@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> Message-ID: <2ba2a6bb-8ed4-40d4-926d-4a2f306c155c@email.android.com> On August 30, 2014 1:28:35 PM PDT, Detlef Bosau wrote: ... >Some day, and I don't now this exactly as I did not attend the >meetings, >we wrote RFC 791 - and simply left out flow control on layer 1 and >layer 2. It was not simply left out. The omission of flow control and retransmission at L2 was deliberate, and reflects IP's need to run on links that by their nature cannot support transmission state. Had the RFC 791 authors been willing to limit themselves to a known set of link layer protocols we would not today have any Internet at all. Vixie -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. From mfidelman at meetinghouse.net Sat Aug 30 23:02:07 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 02:02:07 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <540233F3.5080403@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> Message-ID: <5402BA5F.30401@meetinghouse.net> Detlef Bosau wrote: > > For nearly three decades, we hand out doctoral hats and PhD diplomae for > a, may I be honest, if only this one time, nonsense called "congestion > control". > > Now, this afternoon (and not only this afternoon, I deal with these > things for more than ten years now), I had a look at this page: > > http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/hi_flow.shtml > > Now to the point. > > The one and only purpose of flow control is to have a sender send not > faster than a receiver can accept and process data. Right off the bat, you seem to be conflating two very different problems: - Flow control is about limiting the rate at which traffic is received. - Congestion control is about bottlenecks in intermediate resources BETWEEN the sender and receiver. > Now, first: If we used the ARPAnet for layer 1 and 2 in the Internet, > this congestion would simply not occur, because the ARPAnet (sie link > above) offered the necessary means for flowwise (!) hop top hop flow > control. First off, ARPANET was a single network, providing a reliable message service, and an end-to-end flow control mechanism. It was NOT a catenet comprised of networks with varying types of service. 
Second, while the original BBN 1822 protocol was primarily connection oriented - which allowed for management of resources from end-to-end; 1822L, and later releases of the IMP software supported a datagram service. Third, ARPANET had it's own congestion issues, particularly when datagram service started to be emphasized - nodes and links congested, packets got dropped. Fourth, congestion really became an issue when we moved into the Internet era - with ARPANET being the main "choke point." When you have 1mbps ethernets at the edges, linked by a network built with 64kbps links - you get congestion. That remains the issue today - edges are faster than the center; with the exception of the mobile world, where the edges are the chokepoints. ARPANET-style flow control is not an answer for congestion control across a catenet. Different problems, requiring different solutions. > > Some day, and I don't now this exactly as I did not attend the meetings, > we wrote RFC 791 - and simply left out flow control on layer 1 and layer 2. As Paul Vixie pointed out, "It was not simply left out. The omission of flow control and retransmission at L2 was deliberate, and reflects IP's need to run on links that by their nature cannot support transmission state. Had the RFC 791 authors been willing to limit themselves to a known set of link layer protocols we would not today have any Internet at all." > > If I only could, I would give this design a roll back and give TCP a > redesign on an elaborated network (and network model) with propper > - flow-wise > - hop by hop > flow control. Seems kind of useless, unless it's part of either: a. A connection-oriented service (a la telephony) - with end-to-end call setup and resource reservation. Leads to rather complex (and expensive) switching gear; and pretty much useless for bursty traffic. Perhaps one of the reasons that telcos are migrating to datagram fabrics. b. a store-and-forward network, where packets are re-transmitted hop-by-hop - which is what we're seeing emerge in the realm of delay/disruption-tolerant networks > I'm convinced, this wouldn't only spare us the aforementioned problems > but would be cheaper, faster and less resource consumptive than our > Internet today. You and all those who argued for connection-oriented networks, back in the day (can you say X.25?). Experience suggests that you're wrong. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Sat Aug 30 23:14:37 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 02:14:37 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <5402571F.4030901@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> Message-ID: <5402BD4D.1010309@meetinghouse.net> Detlef Bosau wrote: > And to emphasize this: Have a look at all days traffic in our streets. > Or scheduling of processor capacity and memory in computers. > > There is no single one example for this probing/drop nonsense which we > do on computer networks. Frankly spoken, this botch is simle an > embarressment. > > There are dozens of all days life examples where resources must be > allocated or assigned, we have well proven algorithms for these > purposes. E.g. in Germany, you can travel by car from Flensburg to > F?ssen. 
And there is no need for probing, no need for dropped cars and > car corruption is considered an accident. > > Obviously, we are unable to apply these concepts to networks. > So, you're saying that accidents, rush hours, and construction DON'T cause congestion on the A7/E45? Or that people don't people don't adjust their schedules or routes based on traffic reports? Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From vint at google.com Sat Aug 30 23:24:39 2014 From: vint at google.com (Vint Cerf) Date: Sun, 31 Aug 2014 02:24:39 -0400 Subject: [ih] why did CC happen at all? In-Reply-To: <5401FAF2.8070306@web.de> References: <5401FAF2.8070306@web.de> Message-ID: ARPANET used an overly constrained system called RFNM (request for next message). The mechanism was used to reserve space at the destination IMP ("get a block" "got a block"). however it was possible to send multiple messages over different "links" (logical term) and overload the network that way. It was also possible to overload an intermediate IMP simply by sending traffic between pairs (source/destination) that happened to pass through the same intermediate IMP. The Internet protocols did not use these methods and except for the "congestion encountered" signal, all flow control was end/to/end which still raised the possibility of intermediate router congestion. The TCP flow control was an attempt to adjust to signals from the receiver and signals (dropped packet, congestion encountered) from intermediate nodes. Packet loss was treated as a flow control signal leading to backoff of the retransmission mechanism of TCP. Slow start was a crude way of sensing where the limits of capacity lay. your claim that there is no congestion with "proper" implementation may result in lower resource utilization. Circuit switching dedicates capacity so there is no congestion, except for the failure to get a circuit ("all circuits busy" is a congestion signal). But dedicating capacity removes the implicit statistical multiplexing advantage of packet switching. v On Sat, Aug 30, 2014 at 12:25 PM, Detlef Bosau wrote: > I'm yet to understand the sitch from the ARPAnet to the Internet in > 1983, however, if this happened that way, that an Internet host sent a > message to its peer using the "message switching system" (may I call it > that way?) in the ARPAnet, CC would be an "impossible fact". > > (Some German readers might enjoy this little text here: > http://ingeb.org/Lieder/palmstre.html) > > In the ARPAnet, congestion was avoided by flow control - and in fact, > actually, there is nothing like "congestion" when networks are > implemented correctly. > > To my understanding, "congestion" is an excuse for missing (or botched) > flow control. > > So, what was the scenario, VJ describes in the congavoid paper? Up to > know, I always thought, the ARPAnet infrastructure was still in use, > although adopted by the Internet protocol stack, but I thought, IP > datagrams were sent like ARPAnet messages? > > Detlef > > -- > ------------------------------------------------------------------ > Detlef Bosau > Galileistra?e 30 > 70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 > detlef.bosau at web.de http://www.detlef-bosau.de > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From detlef.bosau at web.de Sun Aug 31 08:22:38 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 31 Aug 2014 17:22:38 +0200
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <5402BD4D.1010309@meetinghouse.net>
References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net>
Message-ID: <54033DBE.50803@web.de>

Am 31.08.2014 um 08:14 schrieb Miles Fidelman:
>
> So, you're saying that accidents, rush hours, and construction DON'T
> cause congestion on the A7/E45? Or that people don't
> adjust their schedules or routes based on traffic reports?
>
> Miles Fidelman
>

At least in Germany, we try (admittedly without success) to avoid traffic congestion by careful planning.

It is always a pity to visit a widow only to tell her: "Unfortunately, your husband is not coming home today, he was dropped together with his car this afternoon due to a traffic jam near Frankfurt."

Or take our German secretary of defense, Ursula von der Leyen, who, it was mentioned today, has seven children. I'm not quite sure whether she is going to solve the problem in the Ukraine by "probing". Send four children to war; if some are dropped, halve the window and send only two; the next try is stop and wait....

(Rumour says that the US Air Force actually assesses traffic control by probing and drop; I think the project is conducted near Ramstein Air Base.)

Particularly, the ARPANET in its original design offered the necessary equipment to get along without this nonsense.

Or, if I may quote a sentence which John Day wrote me in a private communication: "A congestion control scheme, that causes congestion. Funny."

When I started thinking about this issue, I was hung up on BIC and CUBIC and wondered why we do this nonsense only to tell a sender on a wireless link what he already knows, i.e. how fast he may send.

The very reason is that we neglected scheduling. And when we became aware of this fact, we replaced scheduling by
a) probing and
b) Little's law.

That's the whole story.

And now we have to overcome the consequences. And we have been doing so for about 25 years. (And we will still be doing so in 25 years if we don't attack the basic problem: the lack of proper scheduling and proper flow control.)

And as the Internet is likely to grow, we will see even more buffers and even more buffer bloat, and perhaps even more heterogeneous networks which suffer from loss differentiation problems, and even more unfairness between mice and elephants, and so on.

And as we solve buffer utilization problems by adding more and more buffers (which can then be probed and utilized), the hardware costs increase, as do the round trip times.

My only intention is to pursue a different way of thinking here. No more, no less. (However, we are so brainwashed by these nonsense PhD projects on "congestion control", which attempt to keep a dead mummy alive, that we would rather sacrifice the world than our probing/dropping nonsense.)

-- 
------------------------------------------------------------------
Detlef Bosau
Galileistraße 30
70565 Stuttgart Tel.: +49 711 5208031
mobile: +49 172 6819937
skype: detlef.bosau
ICQ: 566129673
detlef.bosau at web.de http://www.detlef-bosau.de

From detlef.bosau at web.de Sun Aug 31 08:26:02 2014
From: detlef.bosau at web.de (Detlef Bosau)
Date: Sun, 31 Aug 2014 17:26:02 +0200
Subject: [ih] Why did congestion happen at all?
Re: why did CC happen at all? In-Reply-To: <54033DBE.50803@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: <54033E8A.2060607@web.de> AFAIK, the relationship of network congestion and networking guys is subject so sociological and psychological studies. It's called "Stockholm syndrome". -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From dhc2 at dcrocker.net Sat Aug 30 20:32:07 2014 From: dhc2 at dcrocker.net (Dave Crocker) Date: Sat, 30 Aug 2014 20:32:07 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <2ba2a6bb-8ed4-40d4-926d-4a2f306c155c@email.android.com> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <2ba2a6bb-8ed4-40d4-926d-4a2f306c155c@email.android.com> Message-ID: <54029737.9060001@dcrocker.net> On 8/30/2014 7:04 PM, P Vixie wrote: > It was not simply left out. The omission of flow control and retransmission at L2 was deliberate, and reflects IP's need to run on links that by their nature cannot support transmission state. Had the RFC 791 authors been willing to limit themselves to a known set of link layer protocols we would not today have any Internet at all. Carrying this point a bit further: 1. The predecessor, the Arpanet, had /lots/ of flow control and retransmission. So the decision for IP was based on experience. 2. IP was intended to be an overlay to a wide array of different kinds of networks. (The "Inter" part of the name was/is significant.) To make operation over that much heterogeneity work, the overlay needs to impose a minimum of requirements. Flow control and retransmission are a long way from minimal. 3. In fact, sometimes flow control and retransmission are poor choices. So those were made optional, depending upon the transport layer, or are imposed locally be the layer below. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Sun Aug 31 11:21:35 2014 From: jeanjour at comcast.net (John Day) Date: Sun, 31 Aug 2014 14:21:35 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54033DBE.50803@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: > > >Or, since today it was mentioned that our German secretary of defense, >Ursula von der Leyen, has seven children. Good grief. Has she figured out what is causing it!? From mfidelman at meetinghouse.net Sun Aug 31 11:46:46 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 14:46:46 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: <54033DBE.50803@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: <54036D96.9040406@meetinghouse.net> Detlef, Detlef Bosau wrote: > Am 31.08.2014 um 08:14 schrieb Miles Fidelman: >> So, you're saying that accidents, rush hours, and construction DON'T >> cause congestion on the A7/E45? Or that people don't people don't >> adjust their schedules or routes based on traffic reports? >> >> Miles Fidelman >> >> > At least in Germany, wie try (admittedly without sucess) to avoid > traffic congestion by careful planning. So, your earlier statement: > >/ There are dozens of all days life examples where resources must be > />/ allocated or assigned, we have well proven algorithms for these > />/ purposes. E.g. in Germany, you can travel by car from Flensburg to > />/ F?ssen. And there is no need for probing, no need for dropped cars and > />/ car corruption is considered an accident./ is simply bogus. You DON'T "have well proven algorithms for these purposes." > > It is always a pity to visit a widow only to tell her: "Unfortunately, > your husband is not coming home today, he was dropped together with his > car this afternoon due to traffic jam near to Frankfurt." And how is it any better to tell her: "sorry, your husband is not coming home today, by the time the ambulance got to him, he was already dead, due to a traffic jam near to Frankfort." Either way, congestion is real - and there aren't proven algorithms to avoid/prevent it under all circumstances. Dropped packet, stuck in traffic (or a buffer), refused network entry by flow-control push-back --- same end result. > > Particularly, the ARPANET in its original design offered the necessary > equipment to get along without this nonsense. No. It didn't. As several of us who were there, have told you. The documentation is also pretty easy to find - try googling "BBN Report 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. > > And now, we are to overcome the consequences. And we do so for about 25 > years. (And we are going still to do so in 25 years, when we don't > attack the basic problem: the lack of proper scheduling and proper flow > control.) Hard problem, no particularly good solution, despite lots of trying. Partial solutions that work, each under different sets of conditions. A work in progress as things continue to change. > My only intention is to pursue a different way of thinking here. No > more, no less. (However, we are that brainwashed by these nonsense PhD > projects on "congestion control" who attempt to keep a dead mummy alive, > that we rather sacrifice the world than our probing/dropping nonsense.) You started by asking about history. Then you complain that the problem has already been solved - in the ARPANET, and by German traffic engineers; it hasn't, by either. Then you repeat the assertion that all the engineers who've worked the problem, over the years, are producing "nonsense" and are "brainwashing" people. Maligning the work of others is unbecoming and annoying. If you're so much smarter than everyone else, how about generating code, demonstrating it, and publishing some RFCs. Otherwise, perhaps you might be wise to ponder these words of H.L.Mencken: "For every complex problem there is an answer that is clear, simple, and wrong." 
(with not much respect due) Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Sun Aug 31 12:04:33 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 15:04:33 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54033DBE.50803@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: <540371C1.9010003@meetinghouse.net> Regarding algorithmic solutions: Detlef Bosau wrote (in different messages): Also, in reference to earlier comment re.: > There are dozens of all days life examples where resources must be > allocated or assigned, we have well proven algorithms for these > purposes. E.g. in Germany, you can travel by car from Flensburg to > F?ssen. And there is no need for probing, no need for dropped cars and > car corruption is considered an accident./ Not only is this not the case, the situation is simply different: Cars are self-driving, with drivers making moment to moment decisions as to their routing, potentially with input as to global traffic conditions -- last time I looked, packets can't do that (though, admittedly, routers are doing that for them, as they make next-hop decisions based on the state of routing tables). It's an engineering decision as to which is more effective and efficient - dropping packets in the face of congestion, with end-to-end retransmission, or buffering them in the switches (store-and-forward). An awful lot of experimentation and real-world practice suggests that dropping packets is a lot simpler, and uses fewer resources, than a store-and-forward or connection oriented approach. The equation changes under conditions of high-delay links, network disruption, and such - hence the use of store-and-forward in some of the protocols being developed for delay/disruption-tolerant networks - particularly where inter-planetary distances and delays are a serious consideration. > > Or, since today it was mentioned that our German secretary of defense, > Ursula von der Leyen, has seven children. I'm not quite sure whether she > is going to solve the problem in the Ukraine by "probing". Send four > children to war, if some are dropped halve the window and send only two, > now the next try is stop and wait.... > > (Rumour says, that the US Air Force actually assesses traffic control by > probing and drop, I think the project is conducted near to Ramstein > Airbase.) Ummm.... you're missing all the stuff going on behind the scenes, in the form of routing protocols. Packets are not sent off willy nilly in all directions - they're sent in the directions indicated by routing tables that are updated based on, among other things, resource congestion around the net. -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From el at lisse.na Sun Aug 31 12:13:09 2014 From: el at lisse.na (Dr Eberhard W Lisse) Date: Sun, 31 Aug 2014 20:13:09 +0100 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: Yes, being catholic, conservative and privileged :-)-O el Sent from Dr Lisse's iPad mini On Aug 31, 2014, at 19:21, John Day wrote: >> >> >> Or, since today it was mentioned that our German secretary of defense, >> Ursula von der Leyen, has seven children. > > Good grief. Has she figured out what is causing it!? From dave.walden.family at gmail.com Sun Aug 31 12:36:03 2014 From: dave.walden.family at gmail.com (dave.walden.family at gmail.com) Date: Sun, 31 Aug 2014 15:36:03 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54036D96.9040406@meetinghouse.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> Message-ID: <3CA69CD3-DDBB-4261-AE03-E4F8CE4B2A3A@gmail.com> > The documentation is also pretty easy to find - try googling "BBN Report 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. >> > There may be better lists of documentation elsewhere, but below are some websites with sone pointers to documentation relevant to the Arpanet iMP and end-to-end design. http://www.walden-family.com/bbn/#networking http://walden-family.com/impcode/ http://www.b?rwolff.de/e2e/baerwolff-matthias-2010-end-to-end-arguments-in-the-internet--principles-practices-and-theory.pdf -------------- next part -------------- An HTML attachment was scrubbed... URL: From mfidelman at meetinghouse.net Sun Aug 31 13:02:01 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 16:02:01 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <3CA69CD3-DDBB-4261-AE03-E4F8CE4B2A3A@gmail.com> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> <3CA69CD3-DDBB-4261-AE03-E4F8CE4B2A3A@gmail.com> Message-ID: <54037F39.7070808@meetinghouse.net> dave.walden.family at gmail.com wrote: >> The documentation is also pretty easy to find - try googling "BBN >> Report 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. >>> >> > There may be better lists of documentation elsewhere, but below are > some websites with sone pointers to documentation relevant to the > Arpanet iMP and end-to-end design. > > http://www.walden-family.com/bbn/#networking > > http://walden-family.com/impcode/ > > http://www.b?rwolff.de/e2e/baerwolff-matthias-2010-end-to-end-arguments-in-the-internet--principles-practices-and-theory.pdf Hi Dave, I'd forgotten about this one: http://walden-family.com/public/1970-imp-afips.pdf A fun read. Cheers, Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From tony.li at tony.li Sun Aug 31 13:09:00 2014 From: tony.li at tony.li (Tony Li) Date: Sun, 31 Aug 2014 13:09:00 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: <540371C1.9010003@meetinghouse.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> Message-ID: On Aug 31, 2014, at 12:04 PM, Miles Fidelman wrote: > Packets are not sent off willy nilly in all directions - they're sent in the directions indicated by routing tables that are updated based on, among other things, resource congestion around the net. Sorry, no. Routing protocols that react to congestion are still a research topic. Tony From jeanjour at comcast.net Sun Aug 31 13:14:03 2014 From: jeanjour at comcast.net (John Day) Date: Sun, 31 Aug 2014 16:14:03 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54033DBE.50803@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: Everyone has been pointing out the technical points of how the ARPANET worked and they are correct. There are two points that are worth taking into account: 1) No one had yet built a packet switching network, so BBN was kind of flying in the dark. It is amazing that it turned out as well as it did. 2) The ARPANET was never intended as a network for doing research on networks. It was intended as a production network to facilitate other research. BBN was very limited in how much experimentation was possible and in what it could try. CYCLADES on the other hand was built as a network for doing research on networking. It was not intended to be a production network. There was no real attempt for the network to be operational for certain hours of the day. (BBN on the other hand could only try new IMPSYSs on Monday nights.) It was partly because it was a research network that Pouzin and crew came up with datagrams. They wanted to have a very basic system that made as few assumptions about how the network should work as possible. The datagram does this. There was quite a lot of work (cited previously in this discussion) on what would be called connection-oriented flow and congestion control of the kind found in the ARPANET and the early X.25 networks. In addition, Pouzin had more minimalist ideas to test alont the lines of what we call connectionless today. And has been cited earlier, they started to look at the problems of congestion control in these minimalist architectures in the early 70s as did others on the project, such as Gerard LeLann and Erol Gelenbe. This was for them a research problem, more than an engineering problem. The decentralized, stochastic nature of a datagram-like approach and the use of an end-to-end transport protocol pioneered with CYCLADES that set the direction for most of the academic research on the 70s and early 80s. As Noel as already described, some working on the Internet were lulled by the central role the ARPANET played in the early Internet to not realizing that the congestion control problem would arise as the ARPANET's role decreased as the Internet grew. By the time this occurred, there were very few other datagram networks operating. CYCLADES had been shut down. The UK and EIN was operating primarily over X.25. This had a lot to do with how we got where we are. 
John The Internet and the ARPANET have always been much more engineering problems than a platform for network research. Remember (someone can provide the date) but by 74-75, ARPA declared the ARPANET project completed. At 5:22 PM +0200 8/31/14, Detlef Bosau wrote: >Am 31.08.2014 um 08:14 schrieb Miles Fidelman: >>> >> >> So, you're saying that accidents, rush hours, and construction DON'T >> cause congestion on the A7/E45? Or that people don't people don't >> adjust their schedules or routes based on traffic reports? >> >> Miles Fidelman >> >> > >At least in Germany, wie try (admittedly without sucess) to avoid >traffic congestion by careful planning. > >It is always a pity to visit a widow only to tell her: "Unfortunately, >your husband is not coming home today, he was dropped together with his >car this afternoon due to traffic jam near to Frankfurt." > >Or, since today it was mentioned that our German secretary of defense, >Ursula von der Leyen, has seven children. I'm not quite sure whether she >is going to solve the problem in the Ukraine by "probing". Send four >children to war, if some are dropped halve the window and send only two, >now the next try is stop and wait.... > >(Rumour says, that the US Air Force actually assesses traffic control by >probing and drop, I think the project is conducted near to Ramstein >Airbase.) > >Particularly, the ARPANET in its original design offered the necessary >equipment to get along without this nonsense. > >Or, if I may quote a sentence which John Day wrote me in a private >communication: "A congestion control scheme, that causes congestion. Funny." > >When I started thinking about this issue, I hang on BIC and CUBIC and >thought, why we do this nonsense only to tell a sender on a wireless >link what he already knows, i.e. how fast he may send? > >The very reason is that we neglected scheduling. And when we got aware >of this fact, we replaced scheduling by >a) probing and >b) Little's law. > >That's the whole story. > >And now, we are to overcome the consequences. And we do so for about 25 >years. (And we are going still to do so in 25 years, when we don't >attack the basic problem: the lack of proper scheduling and proper flow >control.) > >And as the Internet is likely to grow, we will see even more buffers and >even more buffer bloat and perhaps even more heterogeneous networks >which suffer from loss differentiation caused problems and even more >unfairness between mice and elephants and so on. > >And as we solve buffer utilization problems by adding more and more >buffer (which can be probed and utilized) the hardware costs increase as >do the round trip times as well. > >My only intention is to pursue a different way of thinking here. No >more, no less. (However, we are that brainwashed by these nonsense PhD >projects on "congestion control" who attempt to keep a dead mummy alive, >that we rather sacrifice the world than our probing/dropping nonsense.) > >-- >------------------------------------------------------------------ >Detlef Bosau >Galileistra?e 30 >70565 Stuttgart Tel.: +49 711 5208031 > mobile: +49 172 6819937 > skype: detlef.bosau > ICQ: 566129673 >detlef.bosau at web.de http://www.detlef-bosau.de From detlef.bosau at web.de Sun Aug 31 13:20:31 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 31 Aug 2014 22:20:31 +0200 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: <5403838F.9080300@web.de> Am 31.08.2014 um 20:21 schrieb John Day: >> >> >> Or, since today it was mentioned that our German secretary of defense, >> Ursula von der Leyen, has seven children. > > Good grief. Has she figured out what is causing it!? Although it is EXTREMELY off topic, but I cannot resist: Perhaps no one ever tought her (or even better her husband) how to use condoms? But I was basically more interested on your remark on probing. -- ------------------------------------------------------------------ Detlef Bosau Galileistra?e 30 70565 Stuttgart Tel.: +49 711 5208031 mobile: +49 172 6819937 skype: detlef.bosau ICQ: 566129673 detlef.bosau at web.de http://www.detlef-bosau.de From mfidelman at meetinghouse.net Sun Aug 31 13:20:35 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 16:20:35 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> Message-ID: <54038393.5020403@meetinghouse.net> Tony Li wrote: > On Aug 31, 2014, at 12:04 PM, Miles Fidelman wrote: > >> Packets are not sent off willy nilly in all directions - they're sent in the directions indicated by routing tables that are updated based on, among other things, resource congestion around the net. > > Sorry, no. Routing protocols that react to congestion are still a research topic. > Last time I looked, bandwidth and delay were part of the metrics used in at least some routing tables (e.g., Cisco EGIRP) - which are at least indirect measures of congestion. Or am I wrong here? Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From detlef.bosau at web.de Sun Aug 31 13:52:58 2014 From: detlef.bosau at web.de (Detlef Bosau) Date: Sun, 31 Aug 2014 22:52:58 +0200 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54036D96.9040406@meetinghouse.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> Message-ID: <54038B2A.5010909@web.de> Am 31.08.2014 um 20:46 schrieb Miles Fidelman: >> >/ There are dozens of all days life examples where resources must be >> />/ allocated or assigned, we have well proven algorithms for these >> />/ purposes. E.g. in Germany, you can travel by car from Flensburg to >> />/ F?ssen. And there is no need for probing, no need for dropped >> cars and >> />/ car corruption is considered an accident./ > > is simply bogus. You DON'T "have well proven algorithms for these > purposes." Neither is congestion control for the internet "proven". > > No. It didn't. As several of us who were there, have told you. The > documentation is also pretty easy to find - try googling "BBN Report > 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. 
Then documentation work like this one here
http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml
is wrong.

However, honestly, I'm a bit frustrated and a bit tired.

Both "bandwidth" and "flow control" have a proper, well-defined technical
meaning; in both cases that meaning has been distorted by our CS sophistry.

And more than once, I saw arguments replaced by loudness - and you might
believe it or not; I personally prefer a very decent tone, I want to
conduct scientific research - and we are neither at Sotheby's nor at a
"Tupperware Party".

As you see in the quoted pages (and I had a look at some of the protocol
standards, I would not write on this one if I had not reflected on those
matters during the past ten years) the ARPANET had a flow control mechanism.

And the only reason why we dropped flow control in RFC 791 (as you see,
I prefer rationales over yelling) was the possibility of head of line
blocking, when we do flow control per line.

As you see on
http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml
the ARPANET provided per-flow flow control for up to 8 flows.

Do you agree here?

So, actually, there WAS per-flow flow control - and hence one could
overcome head of line blocking problems.

So there actually WAS the possibility for a
- per flow
- per hop
flow control.

And now I would really appreciate compelling reasons why this was abandoned!

(As you might notice, I'm reviewing the related work for my own research
here. Did you?)

And if I sound harsh here: Personally, I'm a very decent person. Even
the tone in some universities is much too harsh for me; there are always
a few "loudspeakers" who shout and yell their insights everywhere and
shout down any question.

When I ask questions here, I do this because I did not find compelling
answers in more than ten years OF EXTREMELY HARD WORK.

This particularly includes not only attending lessons given by some
narcissistic professors but also reading many of the original papers
(including those by VJ, Little, Shannon) carefully, line by line and
several times.

And yes, I repeat my statement: You can travel by car from Flensburg to
Füssen, WITHOUT probing and WITHOUT congestion loss, and obviously
without even a university education.

So there must be something in our networking world, which is different
from that real world, and therefore, I ask questions.

From jeanjour at comcast.net Sun Aug 31 14:12:35 2014
From: jeanjour at comcast.net (John Day)
Date: Sun, 31 Aug 2014 17:12:35 -0400
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <54038393.5020403@meetinghouse.net>
References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net>
Message-ID:

Time scales are very different. Congestion could come and go (or
become crippling) before today's routing protocols reacted.

At 4:20 PM -0400 8/31/14, Miles Fidelman wrote:
>Tony Li wrote:
>>On Aug 31, 2014, at 12:04 PM, Miles Fidelman
>> wrote:
>>
>>>Packets are not sent off willy nilly in all directions - they're
>>>sent in the directions indicated by routing tables that are
>>>updated based on, among other things, resource congestion around
>>>the net.
>>
>>Sorry, no. Routing protocols that react to congestion are still a
>>research topic.
>> > >Last time I looked, bandwidth and delay were part of the metrics >used in at least some routing tables (e.g., Cisco EGIRP) - which >are at least indirect measures of congestion. Or am I wrong here? > >Miles > > >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Sun Aug 31 14:36:43 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 17:36:43 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54038B2A.5010909@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> <54038B2A.5010909@web.de> Message-ID: <5403956B.8050506@meetinghouse.net> Detlef Bosau wrote: > Am 31.08.2014 um 20:46 schrieb Miles Fidelman: >>>> / There are dozens of all days life examples where resources must be >>> />/ allocated or assigned, we have well proven algorithms for these >>> />/ purposes. E.g. in Germany, you can travel by car from Flensburg to >>> />/ F?ssen. And there is no need for probing, no need for dropped >>> cars and >>> />/ car corruption is considered an accident./ >> is simply bogus. You DON'T "have well proven algorithms for these >> purposes." > Neither is congestion control for the internet "proven". But nobody has said it is. You're the one who asserted that there are algorithms from other sources (ARPANET, German traffic engineering) that solve analogous problems, that all the research on congestion control is bogus, and all that we have to do is apply those algorithms. > >> No. It didn't. As several of us who were there, have told you. The >> documentation is also pretty easy to find - try googling "BBN Report >> 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. > Then documentation work like this one here > http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml > is wrong. I have no idea whether that document is accurate or not, or what it's based on. If there's a conflict with primary source material - which is what Dave Walden and I both pointed to; or the memories of folks who built the ARPANET (as several commentators here are) - then, yes, it's wrong. > > However, honestly, I'm a bit frustrated and a bit tired. As am I, and I expect others, by statements like this: > > Both, "bandwidth" and "flow control" have a propper, well defined > technical meaning, in both cases this was forged by our CS rabulistic. I expect quite a few people are also personally insulted, if not royally ticked off. > > And more than once, I saw arguments replaced by loudness - and you might > believe it or not; I personally prefer the very decent tone, I want to > conduct scientific research - and we are neither at Sotheby's nor at a > "Tupperware Party". > > As you see in the quoted pages (and I had a look at some of the protocol > standards, I would not write on this one if I had not reflected those > matters during the past ten years) the ARPAnet had a flow control mechanism. > > And the only reason why we dropped flow control in RFC 791 (as you see, > I prefer rationales over yelling) was the possibility of head of line > blocking, when we do flow control per line. 
As you see on > http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml > the ARPANET provided per flow wise flow control for up to 8 flows. > > Do you agree here? Yes - ARPANET had a flow control mechanism. It also had congestion control mechanisms. They were different. No - Neither were particularly appropriate for a datagram oriented catenet. And the design decisions that went into the current net have been stated, repeatedly, by several folks who had hands on the code. Me - I was an observer, I did system architecture for the Defense Data Network, which evolved from the ARPANET - I saw a lot of this first hand. > > So, actually, there WAS a flow wise flow control - and hence one could > overcome head of line blocking problems. > > So there actually WAS the possibilty for a > - per flow > - per hop > flow control. > > And now I would really appreciate compelling reasons why this was abandoned! See above, and previous messages. Beyond that, they weren't abandoned, they never applied in the first place. > > (As you might notice, I'm reviewing the related work for my own research > here. Did you?) > > And when I sound harsh here: Personally, I'm a very decent person. Even > the tone in some universities is much too harsh for me, there are always > some few "loudspeakers" you shout and yell there insights everywhere and > shout down any question. > > When I ask questions here, I do this because I did not find compelling > answers in more than ten years OF EXTREMELY HARD WORK. No - you're making wildly wrong and unfounded assertions, and casting character aspersions at an entire R&D community, and folks who've been working very hard at this for 45 years. > > This particularly includes not only to attend lessons given by some > narcissistic professors but reading many of the original papers > (including those by VJ, Little, Shannon) carefully, line by line and > several times. > > And yes, I repeat my statement: You can travel by car from Flensburg to > F?ssen, WITHOUT probing and WITHOUT congestion loss, and obviously > without evem a university education. I note that you're NOT saying "without CONGESTION." People get stuck in traffic, have accidents, and so forth. It's still congestion, the symptoms are different. > > So there must be something in our networking world, which is different > from that real world, and therefore, I ask questions. Yes... packets are virtual things - what matters is that a copy gets through, not the original bits. Dropping and retransmitting is a viable option for dealing with congestion. Doesn't work as well with people or vehicles. (On the other hand, with merchandise, stores are overstocked all the time, and throw stuff away. Again, an optimization strategy.). Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Sun Aug 31 14:37:05 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 17:37:05 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> Message-ID: <54039581.6080100@meetinghouse.net> Point taken. Thanks, Miles John Day wrote: > Time scales are very different. 
Congestion could come and go (or > become crippling) before today's routing protocols reacted. > > At 4:20 PM -0400 8/31/14, Miles Fidelman wrote: >> Tony Li wrote: >>> On Aug 31, 2014, at 12:04 PM, Miles Fidelman >>> wrote: >>> >>>> Packets are not sent off willy nilly in all directions - they're >>>> sent in the directions indicated by routing tables that are updated >>>> based on, among other things, resource congestion around the net. >>> >>> Sorry, no. Routing protocols that react to congestion are still a >>> research topic. >>> >> >> Last time I looked, bandwidth and delay were part of the metrics used >> in at least some routing tables (e.g., Cisco EGIRP) - which are at >> least indirect measures of congestion. Or am I wrong here? >> >> Miles >> >> >> -- >> In theory, there is no difference between theory and practice. >> In practice, there is. .... Yogi Berra -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From tony.li at tony.li Sun Aug 31 14:40:04 2014 From: tony.li at tony.li (Tony Li) Date: Sun, 31 Aug 2014 14:40:04 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54038393.5020403@meetinghouse.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> Message-ID: <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> On Aug 31, 2014, at 1:20 PM, Miles Fidelman wrote: > Last time I looked, bandwidth and delay were part of the metrics used in at least some routing tables (e.g., Cisco EGIRP) - which are at least indirect measures of congestion. Or am I wrong here? Yes, but that?s maximum bandwidth and propagation delay, not queueing delay and folks don?t actually enable that part of the metric anyway. Nothing dynamic here. Oh, and the last poor soul who did enable all of the dynamic features of (E)IGRP ended up with a violently unstable network. Tony From tony.li at tony.li Sun Aug 31 14:55:45 2014 From: tony.li at tony.li (Tony Li) Date: Sun, 31 Aug 2014 14:55:45 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <54038B2A.5010909@web.de> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> <54038B2A.5010909@web.de> Message-ID: <31117552-B448-4B6A-AAAF-1220DE8B2171@tony.li> On Aug 31, 2014, at 1:52 PM, Detlef Bosau wrote: > So there must be something in our networking world, which is different > from that real world, and therefore, I ask questions. Are we ready to talk about the real world? In the real world, the law of large numbers dominates everything else. Core routers are now pushing 100Gbps per interface and carrying millions of flows in parallel. Real world networks are engineered with substantial amounts of over-provisioning, and not unreasonable amounts of buffering. Packet drops are still (relatively) very rare. Queueing delay does occur and is quite sufficient to trigger Slow Start which pretty clearly is effective at reducing congestion. For all practical purposes, it works just fine. Bursts of traffic still happen. If they?re very bad, they cause queueing. If they?re horrible, they cause drops. 
If you?re concerned about this, the thing to look at is how to avoid a burst that comes from aggregating a very large number of pseudo-random sources with serialization during the aggregation. Short of the overhead of global admission control, and its concomitant exorbitant overhead, there?s not an obvious architectural path forward. Tony From larrysheldon at cox.net Sun Aug 31 15:46:18 2014 From: larrysheldon at cox.net (Larry Sheldon) Date: Sun, 31 Aug 2014 17:46:18 -0500 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <54036D96.9040406@meetinghouse.net> <54038B2A.5010909@web.de> Message-ID: <5403A5BA.9030209@cox.net> On 8/31/2014 16:36, Miles Fidelman wrote: > Detlef Bosau wrote: >> Am 31.08.2014 um 20:46 schrieb Miles Fidelman: >>>>> / There are dozens of all days life examples where resources must be >>>> />/ allocated or assigned, we have well proven algorithms for these >>>> />/ purposes. E.g. in Germany, you can travel by car from Flensburg to >>>> />/ F?ssen. And there is no need for probing, no need for dropped >>>> cars and >>>> />/ car corruption is considered an accident./ >>> is simply bogus. You DON'T "have well proven algorithms for these >>> purposes." >> Neither is congestion control for the internet "proven". > > But nobody has said it is. You're the one who asserted that there are > algorithms from other sources (ARPANET, German traffic engineering) that > solve analogous problems, that all the research on congestion control > is bogus, and all that we have to do is apply those algorithms. > >> >>> No. It didn't. As several of us who were there, have told you. The >>> documentation is also pretty easy to find - try googling "BBN Report >>> 1822," "imp-to-imp protocol", and, "ARPANET 1822L" for starters. >> Then documentation work like this one here >> http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml >> >> is wrong. > > I have no idea whether that document is accurate or not, or what it's > based on. If there's a conflict with primary source material - which is > what Dave Walden and I both pointed to; or the memories of folks who > built the ARPANET (as several commentators here are) - then, yes, it's > wrong. > >> >> However, honestly, I'm a bit frustrated and a bit tired. > > As am I, and I expect others, by statements like this: >> >> Both, "bandwidth" and "flow control" have a propper, well defined >> technical meaning, in both cases this was forged by our CS rabulistic. > > I expect quite a few people are also personally insulted, if not royally > ticked off. > >> >> And more than once, I saw arguments replaced by loudness - and you might >> believe it or not; I personally prefer the very decent tone, I want to >> conduct scientific research - and we are neither at Sotheby's nor at a >> "Tupperware Party". >> >> As you see in the quoted pages (and I had a look at some of the protocol >> standards, I would not write on this one if I had not reflected those >> matters during the past ten years) the ARPAnet had a flow control >> mechanism. >> >> And the only reason why we dropped flow control in RFC 791 (as you see, >> I prefer rationales over yelling) was the possibility of head of line >> blocking, when we do flow control per line. 
As you see on >> http://www.cs.utexas.edu/users/chris/think/ARPANET/Technical_Tour/sidi_flow.shtml >> >> the ARPANET provided per flow wise flow control for up to 8 flows. >> >> Do you agree here? > > Yes - ARPANET had a flow control mechanism. It also had congestion > control mechanisms. They were different. > > No - Neither were particularly appropriate for a datagram oriented > catenet. And the design decisions that went into the current net have > been stated, repeatedly, by several folks who had hands on the code. Me > - I was an observer, I did system architecture for the Defense Data > Network, which evolved from the ARPANET - I saw a lot of this first hand. > >> >> So, actually, there WAS a flow wise flow control - and hence one could >> overcome head of line blocking problems. >> >> So there actually WAS the possibilty for a >> - per flow >> - per hop >> flow control. >> >> And now I would really appreciate compelling reasons why this was >> abandoned! > > See above, and previous messages. Beyond that, they weren't abandoned, > they never applied in the first place. >> >> (As you might notice, I'm reviewing the related work for my own research >> here. Did you?) >> >> And when I sound harsh here: Personally, I'm a very decent person. Even >> the tone in some universities is much too harsh for me, there are always >> some few "loudspeakers" you shout and yell there insights everywhere and >> shout down any question. >> >> When I ask questions here, I do this because I did not find compelling >> answers in more than ten years OF EXTREMELY HARD WORK. > > No - you're making wildly wrong and unfounded assertions, and casting > character aspersions at an entire R&D community, and folks who've been > working very hard at this for 45 years. > >> >> This particularly includes not only to attend lessons given by some >> narcissistic professors but reading many of the original papers >> (including those by VJ, Little, Shannon) carefully, line by line and >> several times. >> >> And yes, I repeat my statement: You can travel by car from Flensburg to >> F?ssen, WITHOUT probing and WITHOUT congestion loss, and obviously >> without evem a university education. > > I note that you're NOT saying "without CONGESTION." People get stuck in > traffic, have accidents, and so forth. It's still congestion, the > symptoms are different. >> >> So there must be something in our networking world, which is different >> from that real world, and therefore, I ask questions. > > Yes... packets are virtual things - what matters is that a copy gets > through, not the original bits. Dropping and retransmitting is a viable > option for dealing with congestion. Doesn't work as well with people or > vehicles. (On the other hand, with merchandise, stores are overstocked > all the time, and throw stuff away. Again, an optimization strategy.). I have concluded that this thread has been highjacked by trolls (high maintenance troll, but trolls none the less. On my way to the spam-filter maintenance screen, let me just say a couple of things, for what ever they might be worth. One of the most offensive things I have heard in a while is that some would accuse me of regarding the value of my unique and irreplaceable children as equal to an intrinsically valueless and infinitely replaceable bit of somebody's Angry Birds game. 
It is amazing to me (a non-participant) that the present state of internet traffic handling was achieved (apparently) through the application of inane and irrelevant similes and metaphors; and blatant disrespect for the participants in the discussions. -- The unique Characteristics of System Administrators: The fact that they are infallible; and, The fact that they learn from their mistakes. From dhc2 at dcrocker.net Sun Aug 31 16:19:48 2014 From: dhc2 at dcrocker.net (Dave Crocker) Date: Sun, 31 Aug 2014 16:19:48 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> Message-ID: <5403AD94.6040707@dcrocker.net> > The ARPANET was never intended as a network for doing research on > networks. It was intended as a production network to facilitate other > research. BBN was very limited in how much experimentation was possible > and in what it could try. So they put the first IMP into UCLA, where the Network Measurement Center was -- Kleinrock, and all that -- on a whim? My understanding is that the primary goal was experimentation, but in the form of monitoring use and trying different algorithms, rather than by conducting artificial traffic exercises. One might think of this as networking as a very different kind of social experiment than we think of today... My other understanding is that the extent of the direct benefit to users wasn't quite anticipated, which made it increasingly difficult to make changes to the net that could bring it down. So it was a few years before they had to start explicitly scheduling time slots for experiments. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From mfidelman at meetinghouse.net Sun Aug 31 17:43:35 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 20:43:35 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> Message-ID: <5403C137.5060408@meetinghouse.net> Tony Li wrote: > On Aug 31, 2014, at 1:20 PM, Miles Fidelman wrote: > >> Last time I looked, bandwidth and delay were part of the metrics used in at least some routing tables (e.g., Cisco EGIRP) - which are at least indirect measures of congestion. Or am I wrong here? > > Yes, but that?s maximum bandwidth and propagation delay, not queueing delay and folks don?t actually enable that part of the metric anyway. Nothing dynamic here. > > Oh, and the last poor soul who did enable all of the dynamic features of (E)IGRP ended up with a violently unstable network. > > So is it really the case that there's no dynamic adaptation in the net, except if there's a major cable cut or some such? I guess I haven't been paying attention of late. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... 
Yogi Berra From mfidelman at meetinghouse.net Sun Aug 31 17:54:29 2014 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 31 Aug 2014 20:54:29 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <5403AD94.6040707@dcrocker.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <5403AD94.6040707@dcrocker.net> Message-ID: <5403C3C5.9020102@meetinghouse.net> Dave Crocker wrote: >> The ARPANET was never intended as a network for doing research on >> networks. It was intended as a production network to facilitate other >> research. BBN was very limited in how much experimentation was possible >> and in what it could try. > > So they put the first IMP into UCLA, where the Network Measurement > Center was -- Kleinrock, and all that -- on a whim? > > My understanding is that the primary goal was experimentation, but in > the form of monitoring use and trying different algorithms, rather than > by conducting artificial traffic exercises. One might think of this as > networking as a very different kind of social experiment than we think > of today... My understanding was that the primary goal was reducing the cost of researcher access to unique computers spread across academia - i.e., cutting the cost of travel dollars and leased lines. Dave Walden posted links to the original ARPA RFP and BBN's proposal - interesting reading (and historical!) - at: http://www.walden-family.com/bbn/arpanet-rfq.pdf and http://www.walden-family.com/bbn/arpanet-prop-ocr.pdf respectively. The proposal did have a section on experimentation, but it was only about 2 pages long. > > My other understanding is that the extent of the direct benefit to users > wasn't quite anticipated, which made it increasingly difficult to make > changes to the net that could bring it down. So it was a few years > before they had to start explicitly scheduling time slots for experiments. > I got to BBN a few years later, but my sense is that what was really unanticipated was the amount of operational use, by military types, which led pretty directly to the DDN. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From vint at google.com Sun Aug 31 17:56:06 2014 From: vint at google.com (Vint Cerf) Date: Sun, 31 Aug 2014 20:56:06 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <5403AD94.6040707@dcrocker.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <5403AD94.6040707@dcrocker.net> Message-ID: Dave, et al, the ARPANET project was intended to solve the problem of sharing of access to computing resources and especially research results among the institutions funded by ARPA to do research in computer science and artificial intelligence. The method chosen was a radical departure from conventional circuit switching and you are correct that the first node installed at UCLA was motivated in part by interest in the mathematical queueing models that Leonard Kleinrock had used in his dissertation research at MIT. I was the principal programmer for the Network Measurement Center. 
It was clear that ARPA wanted to have the utility of the network to solve its resource sharing problem but to also take advantage of studying its behavior. The first time I met Bob Kahn and Dave Walden was on the occasion of their visit to UCLA in late 1969 or early 1970 to conduct a series of experiments to generate traffic and observe the way in which the IMPs and their protocols and algorithms responded. Bob Kahn had concerns that under certain conditions the network would lock up and this visit was a first opportunity to use the then 4-node network to stress its capacity. In the course of a couple of weeks, Bob designed and I programmed a series of traffic generation and network measurement experiments that indeed locked the network up multiple times and in multiple ways. Reassembly lockup and store-and-forward lockup stand out in my mind in particular. Thanks to pressure from Larry Roberts and the leadership of Steve Crocker, the Network Working group developed a collection of applications and associated protocols such as TELNET, FTP, and networked electronic mail, as well as demonstrations of multi-computer computation and cooperation (e.g. distributed air traffic control) that were shown at the ICCC 1972 event in Washington, DC, that was organized by Bob Kahn at Larry's request. Very early in the ARPANET development, Larry became aware of the packet switching work at the UK National Physical Laboratory and from an interaction in 1967 with Roger Scantlebury, representing Donald Davies' team at NPL at an ACM Conference. Larry was persuaded to use the highest speeds available (then 50 kb/s) for the backbone circuits of the network. The higher the circuit speed, the lower the latency in the network and the variability of queuing delays. Bob Kahn and I learned about the CYCLADES/CIGALE network at IRIA in 1973 and visited there where we met Louis Pouzin, Hubert Zimmermann, Gerard LeLann among others, By 1974, Gerard spent a year at Stanford contributing to the development of TCP/IP. Also in 1973, the Ethernet was invented by Bob Metcalfe and David Boggs and their work on the PARC Universal Packet and related protocols also influenced the design of TCP/IP. By July 1975, ARPA concluded that the ARPANET had reached sufficient stability and it could be handed off to the then Defense Communications Agency (DCA, now Defense Information Systems Agency) for operation. During the period 1973-1982, ARPA focused on the design and implementation of the Internet and at the point where this was activated in January, 1983, the participating military sites were separated from the academic research sites and the network split into MILNET and the renewed research ARPANET, both nets becoming part of the Internet. Other agencies implemented their own pieces of the Internet. The Department of Energy developed the ESNET and NASA developed the NSINET while NSF developed the NSFNET. NSF also facilitated the interconnection of other IP-based research networks in the US and elsewhere and even the commercial X.25 networks to the growing Internet. By 1995, NSF concluded that the availability of Internet service from the commercial sector was sufficient that it could shut down the NSFNET. ARPA shut down the ARPANET in 1990, in part because the growing NSFNET had substantially more nodes and capacity than the 50 Kb/s ARPANET backbone so the research sites of the ARPANET transferred to the so-called regional NSF network or commercially provided IP networks of the time. 
I know you know all this, Dave, so this is just to try to illustrate that the ARPANET and the many other networks that followed had dual roles as objects of research and utility. I think the Packet Radio and Packet Satellite networks that shaped the Internet's design had similar roles and, in particular, the Packet Satellite network became the sole source of access to the Internet for the European groups that had been part of the extended ARPANET. Peter Kirstein's University College London group, in addition to their pioneering implementation of TCP/IP, also had to make their Packet Satellite connection work operationally to support a good deal of traffic between European and US research communities. They switched to operational use of TCP/IP during 1982, a year before the rest of the ARPANET community. Vint On Sun, Aug 31, 2014 at 7:19 PM, Dave Crocker wrote: > > > The ARPANET was never intended as a network for doing research on > > networks. It was intended as a production network to facilitate other > > research. BBN was very limited in how much experimentation was possible > > and in what it could try. > > > So they put the first IMP into UCLA, where the Network Measurement > Center was -- Kleinrock, and all that -- on a whim? > > My understanding is that the primary goal was experimentation, but in > the form of monitoring use and trying different algorithms, rather than > by conducting artificial traffic exercises. One might think of this as > networking as a very different kind of social experiment than we think > of today... > > My other understanding is that the extent of the direct benefit to users > wasn't quite anticipated, which made it increasingly difficult to make > changes to the net that could bring it down. So it was a few years > before they had to start explicitly scheduling time slots for experiments. > > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amckenzie3 at yahoo.com Sun Aug 31 18:07:28 2014 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Sun, 31 Aug 2014 18:07:28 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <5403AD94.6040707@dcrocker.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <5403AD94.6040707@dcrocker.net> Message-ID: <1409533648.5127.YahooMailNeo@web163805.mail.gq1.yahoo.com> Dave, Of course ARPA wanted to know whether the network delivered to them met the performance specs of the RFP, so they gave UCLA a contract to measure network performance and for that UCLA needed an IMP. But if you read the ARPA RFP, or Larry Roberts' first paper presented at the 1970 SJCC, you will see very little about doing research on networking and a great deal about wanting a network to support the computer science research ARPA was already funding at places like SRI, Utah, BBN, MIT, Lincoln Lab, Rand SDC, Harvard, UCSB, Carnegie, etc. As soon as the network began carrying user traffic there was considerable tension between the Network Measurement Center at UCLA, which wanted to conduct tests to see what types and levels of traffic would break the network, and the Network Operation Center at BBN, which wanted the network to be perceived by its users as being as reliable as at the electric service. 
As manager of the NOC, I was in the middle of a lot of that tension. ARPA's orders to me were generally "keep it running". Of course ARPA may have given conflicting orders to UCLA - I don't know. Cheers, Alex ________________________________ From: Dave Crocker To: internet-history at postel.org Sent: Sunday, August 31, 2014 7:19 PM Subject: Re: [ih] Why did congestion happen at all? Re: why did CC happen at all? > The ARPANET was never intended as a network for doing research on > networks. It was intended as a production network to facilitate other > research. BBN was very limited in how much experimentation was possible > and in what it could try. So they put the first IMP into UCLA, where the Network Measurement Center was -- Kleinrock, and all that -- on a whim? My understanding is that the primary goal was experimentation, but in the form of monitoring use and trying different algorithms, rather than by conducting artificial traffic exercises. One might think of this as networking as a very different kind of social experiment than we think of today... My other understanding is that the extent of the direct benefit to users wasn't quite anticipated, which made it increasingly difficult to make changes to the net that could bring it down. So it was a few years before they had to start explicitly scheduling time slots for experiments. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian.e.carpenter at gmail.com Sun Aug 31 18:30:04 2014 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 01 Sep 2014 13:30:04 +1200 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> Message-ID: <5403CC1C.70207@gmail.com> On 01/09/2014 09:40, Tony Li wrote: > On Aug 31, 2014, at 1:20 PM, Miles Fidelman wrote: > >> Last time I looked, bandwidth and delay were part of the metrics used in at least some routing tables (e.g., Cisco EGIRP) - which are at least indirect measures of congestion. Or am I wrong here? > > > Yes, but that?s maximum bandwidth and propagation delay, not queueing delay and folks don?t actually enable that part of the metric anyway. Nothing dynamic here. > > Oh, and the last poor soul who did enable all of the dynamic features of (E)IGRP ended up with a violently unstable network. I vividly recall Ross Callon speaking about why QOS routing doesn't work at an IETF meeting at least ten years ago, using the analogy of dancing in your own shadow, with a practical demonstration that it can't be done. Brian From brian.e.carpenter at gmail.com Sun Aug 31 18:40:34 2014 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 01 Sep 2014 13:40:34 +1200 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: <5403C137.5060408@meetinghouse.net> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> <5403C137.5060408@meetinghouse.net> Message-ID: <5403CE92.8030109@gmail.com> On 01/09/2014 12:43, Miles Fidelman wrote: > Tony Li wrote: >> On Aug 31, 2014, at 1:20 PM, Miles Fidelman >> wrote: >> >>> Last time I looked, bandwidth and delay were part of the metrics used >>> in at least some routing tables (e.g., Cisco EGIRP) - which are at >>> least indirect measures of congestion. Or am I wrong here? >> >> Yes, but that?s maximum bandwidth and propagation delay, not queueing >> delay and folks don?t actually enable that part of the metric anyway. >> Nothing dynamic here. >> >> Oh, and the last poor soul who did enable all of the dynamic features >> of (E)IGRP ended up with a violently unstable network. >> >> > > So is it really the case that there's no dynamic adaptation in the net, > except if there's a major cable cut or some such? I guess I haven't > been paying attention of late. "Interface down" is a much clearer signal than "Path might be congested". The former *must* be dealt with; if you shift load as a result of the latter, you shift the congestion, and the result is oscillation between one congested path and another. Some people do believe that an ECMP scenario can benefit from flow analysis: http://tools.ietf.org/html/draft-ietf-opsawg-large-flow-load-balancing but this doesn't seem prone to oscillation, doesn't fundamentally change routing topoloogy, and is a modest step. Brian From dhc2 at dcrocker.net Sun Aug 31 19:12:05 2014 From: dhc2 at dcrocker.net (Dave Crocker) Date: Sun, 31 Aug 2014 19:12:05 -0700 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? In-Reply-To: <1409533648.5127.YahooMailNeo@web163805.mail.gq1.yahoo.com> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <5403AD94.6040707@dcrocker.net> <1409533648.5127.YahooMailNeo@web163805.mail.gq1.yahoo.com> Message-ID: <5403D5F5.4000601@dcrocker.net> On 8/31/2014 6:07 PM, Alex McKenzie wrote: > But if you read the ARPA RFP, or Larry Roberts' first paper presented at > the 1970 SJCC, you will see very little about doing research on > networking and a great deal about wanting a network to support the > computer science research ARPA was already funding at places like SRI, > Utah, BBN, MIT, Lincoln Lab, Rand SDC, Harvard, UCSB, Carnegie, etc. Ack to you and Vint. Thanks. I had indeed not read those docs and as you know came in after things were already operational for a few years. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From bernie at fantasyfarm.com Sun Aug 31 19:26:27 2014 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Sun, 31 Aug 2014 22:26:27 -0400 Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all? 
In-Reply-To: <5403C3C5.9020102@meetinghouse.net>
References: <5401FAF2.8070306@web.de>, <5403AD94.6040707@dcrocker.net>, <5403C3C5.9020102@meetinghouse.net>
Message-ID: <5403D953.22155.1256539B@bernie.fantasyfarm.com>

On 31 Aug 2014 at 20:54, Miles Fidelman wrote:

> Dave Crocker wrote:
> > My other understanding is that the extent of the direct benefit to users
> > wasn't quite anticipated, which made it increasingly difficult to make
> > changes to the net that could bring it down. So it was a few years
> > before they had to start explicitly scheduling time slots for experiments.
>
> I got to BBN a few years later, but my sense is that what was really
> unanticipated was the amount of operational use, by military types,
> which led pretty directly to the DDN.

Dave's right: there were many "untested" technologies that went into the
ARPAnet and it was expected that there'd be an extended period in which
it'd be flaky, tests would crash it regularly, etc. It was a surprise
that when it was turned on it just kind of worked, which quickly changed
the focus of the work on it. There were still experiments run, but the
emphasis switched to "well, now we've got this damn thing, what are we
going to *do* with it".

I think the "operational" use came less with military types than with
"business" types -- clerks, secretaries, etc. People were doing real,
routine work over the ARPAnet and very quickly were expecting it just to
"be up". Same thing with experiments being run [distributed OS,
encrypted speech, etc]: the fact of the ARPAnet _just_working_ was
almost taken for granted.

I can't remember any more [maybe Dave does] but we had something like a
two hour slot once a week in which we could tinker with the IMP code
[something like 6-8AM on Tuesdays??]

  /Bernie\
--
Bernie Cosell                     Fantasy Farm Fibers
mailto:bernie at fantasyfarm.com     Pearisburg, VA
    --> Too many people, too few sheep <--

From jnc at mercury.lcs.mit.edu Sun Aug 31 20:02:56 2014
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 31 Aug 2014 23:02:56 -0400 (EDT)
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
Message-ID: <20140901030256.7F4A218C10D@mercury.lcs.mit.edu>

    > From: Tony Li

    > the last poor soul who did enable all of the dynamic features of
    > (E)IGRP ended up with a violently unstable network.

The so-called 'new ARPANET routing algorithm' was done in part because the
original load-dependent DV routing algorithm was not very stable. See BBN
Report 3803 for more.

    > From: Brian E Carpenter

    > if you shift load as a result of the latter, you shift the congestion,
    > and the result is oscillation between one congested path and another.

Yes, they ran into this problem with the new routing algorithm too; it had
to be damped to prevent oscillations. See:

  Atul Khanna, John Zinky, "The Revised ARPANET Routing Metric"

    Noel

From tony.li at tony.li Sun Aug 31 20:15:10 2014
From: tony.li at tony.li (Tony Li)
Date: Sun, 31 Aug 2014 20:15:10 -0700
Subject: [ih] Why did congestion happen at all? Re: why did CC happen at all?
In-Reply-To: <5403CE92.8030109@gmail.com> References: <5401FAF2.8070306@web.de> <5402177E.5020403@web.de> <54022677.9050702@meetinghouse.net> <540233F3.5080403@web.de> <5402571F.4030901@web.de> <5402BD4D.1010309@meetinghouse.net> <54033DBE.50803@web.de> <540371C1.9010003@meetinghouse.net> <54038393.5020403@meetinghouse.net> <45357437-B743-4040-ADF5-F33156C1F83C@tony.li> <5403C137.5060408@meetinghouse.net> <5403CE92.8030109@gmail.com> Message-ID: <5EE3CF6C-C102-4A27-A9ED-BC02ECAAC1E2@tony.li> On Aug 31, 2014, at 6:40 PM, Brian E Carpenter wrote: > Some people do believe that an ECMP scenario can benefit from > flow analysis: > http://tools.ietf.org/html/draft-ietf-opsawg-large-flow-load-balancing > but this doesn't seem prone to oscillation, doesn't fundamentally > change routing topoloogy, and is a modest step. Some other people believe that an appropriate application of control theory might allow us to apply damping (e.g., Kalman filters) that would give us both reasonable stability and sufficient reactivity. AFAIK, this is an open research area that no one is interested in funding. Tony
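
A minimal sketch of the kind of damping Tony and Noel are describing, assuming
a scalar Kalman filter smoothing per-link delay samples plus a simple
hysteresis rule before a new metric is advertised. The class name, constants,
and threshold here are illustrative assumptions, not the revised ARPANET
metric, (E)IGRP, or any deployed protocol; Python is used only because it is
compact.

    # Illustrative sketch only: damp noisy per-link delay samples before they
    # are allowed to influence a routing metric. Everything named here is
    # invented for this example.

    class DampedLinkMetric:
        def __init__(self, initial_delay_ms, process_var=0.5,
                     measurement_var=25.0, report_threshold=0.25):
            self.estimate = float(initial_delay_ms)   # smoothed delay estimate (ms)
            self.variance = measurement_var           # uncertainty of the estimate
            self.process_var = process_var            # how fast true delay may drift
            self.measurement_var = measurement_var    # noisiness of a single sample
            self.report_threshold = report_threshold  # fractional change before re-advertising
            self.advertised = self.estimate

        def update(self, sample_ms):
            """Fold one measured delay sample into the estimate (predict + correct)."""
            # Predict: the true delay may have drifted since the last sample.
            self.variance += self.process_var
            # Correct: blend the new sample in, weighted by the Kalman gain.
            gain = self.variance / (self.variance + self.measurement_var)
            self.estimate += gain * (sample_ms - self.estimate)
            self.variance *= (1.0 - gain)
            return self.estimate

        def metric_to_advertise(self):
            """Re-advertise only if the smoothed delay moved enough to matter."""
            change = abs(self.estimate - self.advertised) / max(self.advertised, 1e-9)
            if change >= self.report_threshold:
                self.advertised = self.estimate
            return self.advertised

    # Example: a burst of queueing delay nudges the metric gradually instead of
    # whipsawing it, which is the stability/reactivity trade-off in question.
    link = DampedLinkMetric(initial_delay_ms=10.0)
    for sample in [10, 11, 80, 75, 12, 10, 10]:
        link.update(sample)
        print(round(link.estimate, 1), round(link.metric_to_advertise(), 1))

The trade-off shows up directly in the two variance constants: a larger
process variance tracks genuine shifts in delay faster, a larger measurement
variance damps transient queueing spikes harder, and the hysteresis threshold
keeps small residual wobbles from being re-advertised at all.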