From mbaer at cs.tu-berlin.de Tue Jun 1 08:07:52 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Tue, 01 Jun 2010 17:07:52 +0200 Subject: [ih] principles of the internet Message-ID: <4C052248.2050608@cs.tu-berlin.de> Dear all, I am in the middle of an argumentative research exercise in which I try to map a set of principles that are central to the Internet (descriptive principles, as informed by practices and universality of applicability; not normative principles following purposes other than system stability and individual liberty). Since most here have plenty of hands-on experience I would be very appreciative of some feedback -- on-list, off-list; long, short; however you like it. I made up the following list: 1. original end-to-end arguments and economic efficiency concerns (speaking to completeness and efficiency of implementation) 2. modularity, minimal coupling, and layering (speaking to the general architecture) 3. least privilege, and best effort (speaking to the actual shape of the interdependencies) 4. cascadability and symmetry (speaking to the rules of efficient and flexible protocol design) 5. running code, complexity avoidance, rough consensus, and path dependence (speaking to the governance process and its stability) Thanks for all your takes. Matthias -- Matthias B?rwolff www.b?rwolff.de From dcrocker at gmail.com Tue Jun 1 08:33:39 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Tue, 01 Jun 2010 08:33:39 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C052248.2050608@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> Message-ID: <4C052853.4040601@gmail.com> On 6/1/2010 8:07 AM, Matthias B?rwolff wrote: > Dear all, > > I am in the middle of an argumentative research exercise in which I try > to map a set of principles that are central to the Internet (descriptive > principles, as informed by practices and universality of applicability; > not normative principles following purposes other than system stability > and individual liberty). Since most here have plenty of hands-on > experience I would be very appreciative of some feedback -- on-list, > off-list; long, short; however you like it. > > I made up the following list: > > 1. original end-to-end arguments and economic efficiency concerns > (speaking to completeness and efficiency of implementation) I recommend separating these. I also suggest distinguishing reliability from economics. My understanding of the original motivations for work on packet switching were both reduced communications costs /and/ robustness against failures of communication components. > 2. modularity, minimal coupling, and layering (speaking to the general > architecture) > > 3. least privilege, and best effort (speaking to the actual shape of the > interdependencies) I do not automatically see how 'least privilege' was represented in the early work on networking. I suppose that it applies for any system that is highly distributed, but I'm used to the term being applied for security concerns rather than operations. > 4. cascadability and symmetry (speaking to the rules of efficient and > flexible protocol design) What do you mean by the term "cascadability"? What do you mean by "symmetry" in this context. > 5. running code, complexity avoidance, rough consensus, and path > dependence (speaking to the governance process and its stability) This sub-list merges at least two very different areas of concern. 
One is the process for developing specification and the other pertains to technical characteristics of specifications. The latter is also reflected elsewhere in your list. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From mbaer at cs.tu-berlin.de Tue Jun 1 08:47:27 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Tue, 01 Jun 2010 17:47:27 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C052853.4040601@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> Message-ID: <4C052B8F.3030609@cs.tu-berlin.de> Dave, thanks for your response. On 06/01/2010 05:33 PM, Dave Crocker wrote: > >> 4. cascadability and symmetry (speaking to the rules of efficient and >> flexible protocol design) > > What do you mean by the term "cascadability"? What do you mean by > "symmetry" in this context. > There are ways in which you can have protocols doing similar things (e.g. file transfer) but which are not possible to concatenate (or cascade) without violating their semantics (pertaining to acknowledgments and other such control actions). Symmetry just means that neither end is conceptually master or slave, instead both being equal peers (cf. Telnet symmetry). > >> 5. running code, complexity avoidance, rough consensus, and path >> dependence (speaking to the governance process and its stability) > > This sub-list merges at least two very different areas of concern. One > is the process for developing specification and the other pertains to > technical characteristics of specifications. The latter is also > reflected elsewhere in your list. Agreed, the latter stands out somewhat. As for the separation you mention, I would think, though, that the feasibility of getting anywhere involves both the organization of the process, and the characteristics of the resultant artifacts. I will have to think more about this. Matthias > > d/ -- Matthias B?rwolff www.b?rwolff.de From dcrocker at gmail.com Tue Jun 1 09:02:23 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Tue, 01 Jun 2010 09:02:23 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C052B8F.3030609@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> Message-ID: <4C052F0F.1090201@gmail.com> On 6/1/2010 8:47 AM, Matthias B?rwolff wrote: >>> 4. cascadability and symmetry (speaking to the rules of efficient and >>> flexible protocol design) >> >> What do you mean by the term "cascadability"? What do you mean by >> "symmetry" in this context. > > There are ways in which you can have protocols doing similar things > (e.g. file transfer) but which are not possible to concatenate (or > cascade) without violating their semantics (pertaining to > acknowledgments and other such control actions). Do you have some examples of 'cascadability' for the Arpanet or Internet? I am still not quite seeing how it applies to the design or operation of the protocols. > Symmetry just means that neither end is conceptually master or slave, > instead both being equal peers (cf. Telnet symmetry). At the application level, symmetry is either absent or a myth, for almost all applications. It also is not always present at lower layers (e.g., dhcp and I believe TLS.) Hmmm. For that matter, I suspect the Arpanet NCP was not all that symmetrical, although I do not remember enough of the details. I also do not remember how symmetrical the IMP behavior was. 
(But perhaps the Arpanet is going back too far for your discussion.) The telnet reference is particularly salient. When I was running an Internet software stack development group, we were seeking authorization to sell to the US government and they sent out a person to verify our standards compliance. During this, he required that we demonstrate that a host could get the user (that is, server to client) to WILL ECHO. In other words, we had to support having the user's system echo data from the host server back to the host server. We noted that this was illogical and highly undesirable. He calmly agreed with us but then pointed to the symmetry of the protocol and said the formalities required us to support it. We hacked the change long enough to pass the test, continuously shaking our heads in disbelief. >>> 5. running code, complexity avoidance, rough consensus, and path >>> dependence (speaking to the governance process and its stability) >> >> This sub-list merges at least two very different areas of concern. One >> is the process for developing specification and the other pertains to >> technical characteristics of specifications. The latter is also >> reflected elsewhere in your list. > > Agreed, the latter stands out somewhat. As for the separation you > mention, I would think, though, that the feasibility of getting anywhere > involves both the organization of the process, and the characteristics > of the resultant artifacts. I will have to think more about this. I am not saying that they are not each important, but that combining them might confuse them as distinguishing concepts. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Tue Jun 1 10:23:17 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 13:23:17 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C052B8F.3030609@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> Message-ID: Are you talking about principles in the formal sense, i.e. deriving from fundamentals, or more in the folklore sense, i.e. those contributing to myth? I presume the latter. Strictly speaking, your definition of symmetry is correct. The other way to put it is that the protocol machines on either end of the flow are the same. I commiserate with Dave's tale of dumb conformance testers. I have seen them myself. However, I would disagree with him that all application protocols must be asymmetrical. While it represents the majority of those done so far, I would also contend that our applications so far are pretty rudimentary. I think the Telnet example is quite exemplary. The vast majority of that type of protocol at the time were asymmetrical. Their designers lacked the insight to see it as a symmetrical problem. Most thought that it was a terminal to host protocol or a remote login protocol (some textbooks still describe Telnet that way), when in fact it is a device driver protocol. But then these points are moot since there is really only one application protocol required anyway. The variety was just an indication of our lack of understanding at the time. The economic issue is a good one. I always point to the failure of the early NETRJE as an example of the obvious (elegant) solution being wrong for economic reasons. It is the case that once an asymmetrical protocol is introduced into an architecture, it makes it very difficult to build anything on top of it.
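To make the symmetry point above concrete: Telnet option negotiation (RFC 854, with ECHO defined in RFC 857) uses the same WILL/WONT/DO/DONT commands in both directions, so the "server" and "user" sides can run the same state machine and differ only in policy. The sketch below is a deliberately simplified, hypothetical illustration, not a conformant implementation; it omits the RFC 854 rules that prevent negotiation loops, and the class and policy names are invented.

    # Simplified sketch of symmetric Telnet option negotiation (after RFC 854/857).
    # Both ends run the same rules; whether to accept an option is local policy,
    # which is why the user side can, in principle, be asked to WILL ECHO too.
    # The RFC 854 loop-suppression rules are omitted for brevity.

    WILL, WONT, DO, DONT = 251, 252, 253, 254   # Telnet negotiation command codes
    ECHO = 1                                    # option code from RFC 857

    class TelnetEndpoint:
        def __init__(self, perform=(), allow_peer=()):
            self.perform = set(perform)         # options this end agrees to perform itself
            self.allow_peer = set(allow_peer)   # options this end lets the peer perform

        def receive(self, command, option):
            """Handle one negotiation command and return the reply, if any."""
            if command == WILL:                 # peer offers to perform the option
                return (DO, option) if option in self.allow_peer else (DONT, option)
            if command == DO:                   # peer asks this end to perform the option
                return (WILL, option) if option in self.perform else (WONT, option)
            return None                         # WONT / DONT: peer switched the option off

    # Conventional arrangement: the host echoes, the user side does not.
    host = TelnetEndpoint(perform={ECHO})
    user = TelnetEndpoint(allow_peer={ECHO})

    print(user.receive(WILL, ECHO))   # (253, 1): user agrees the host may echo
    print(host.receive(DO, ECHO))     # (251, 1): host confirms it WILL ECHO
    print(user.receive(DO, ECHO))     # (252, 1): user declines to echo itself; the
                                      # tester's point was that the opposite answer
                                      # had to be possible as well

Because both ends run identical rules, "supporting" the inverted arrangement the tester demanded is a policy change rather than a protocol change, which is essentially what he was pointing at.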
The running code/rough consensus is probably one of the more important aspects of driving the Internet to a artisan tradition rather than a more scientific one. This is has probably contributed most to the stagnation we currently see. An odd complement to the aversion to complexity, there seems to be an aversion to sophistication that leads to greater simplicity. The other phenomena which I have noticed but not been able to explain was that if one group did X (and didn't quite do it right), the Internet would do "not X" rather than "let us show you how to get that right." It might be worthwhile to distinguish those principles that pre-dated the Internet itself vs those that were developed in the various precursors and were or were not taken up by the Internet Take care, John At 17:47 +0200 2010/06/01, Matthias B?rwolff wrote: >Dave, thanks for your response. > >On 06/01/2010 05:33 PM, Dave Crocker wrote: >> >>> 4. cascadability and symmetry (speaking to the rules of efficient and >>> flexible protocol design) >> >> What do you mean by the term "cascadability"? What do you mean by >> "symmetry" in this context. >> > >There are ways in which you can have protocols doing similar things >(e.g. file transfer) but which are not possible to concatenate (or >cascade) without violating their semantics (pertaining to >acknowledgments and other such control actions). > >Symmetry just means that neither end is conceptually master or slave, >instead both being equal peers (cf. Telnet symmetry). > >> >>> 5. running code, complexity avoidance, rough consensus, and path >>> dependence (speaking to the governance process and its stability) >> >> This sub-list merges at least two very different areas of concern. One >> is the process for developing specification and the other pertains to >> technical characteristics of specifications. The latter is also >> reflected elsewhere in your list. > >Agreed, the latter stands out somewhat. As for the separation you >mention, I would think, though, that the feasibility of getting anywhere >involves both the organization of the process, and the characteristics >of the resultant artifacts. I will have to think more about this. > >Matthias > >> >> d/ > >-- >Matthias B?rwolff >www.b?rwolff.de From dcrocker at gmail.com Tue Jun 1 10:38:25 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Tue, 01 Jun 2010 10:38:25 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> Message-ID: <4C054591.5070708@gmail.com> On 6/1/2010 10:23 AM, John Day wrote: > However, I would disagree with him that all > application protocols must be asymmetrical. Perhaps some other Dave posted what you are responding to, and I missed it, but if you mean my note: I never made such a claim. I made a statistical assertion of what is and has been -- as you also note -- but not what must be. > It is the case that once an asymmetrical protocol is introduced into an > architecture, it makes very difficult to build anything on top of it. like HTTP? > The running code/rough consensus is probably one of the more important > aspects of driving the Internet to a artisan tradition rather than a > more scientific one. This is has probably contributed most to the > stagnation we currently see. > > An odd complement to the aversion to complexity, there seems to be an > aversion to sophistication that leads to greater simplicity. 
I assume that the goal of this exercise requires ignoring the rather remarkable complexity that has crept into much of the recent work in the IETF? d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Tue Jun 1 11:10:15 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 14:10:15 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C054591.5070708@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> <4C054591.5070708@gmail.com> Message-ID: At 10:38 -0700 2010/06/01, Dave Crocker wrote: >On 6/1/2010 10:23 AM, John Day wrote: >> However, I would disagree with him that all >>application protocols must be asymmetrical. > >Perhaps some other Dave posted what you are responding to, and I >missed it, but if you mean my note: I never made such a claim. > >I made a statistical assertion of what is and has been -- as you >also note -- but not what must be. Sorry I misinterpreted this: "At the application level, symmetry is either absent or a myth, for almost all applications." Although, I don't see any reference here that restricts it to the past. I am not ready to concede that what we have seen so far is representative of the space. As I said, everything so far is pretty rudimentary. However, it is true that for data transfer protocols all properly designed ones are both symmetrical and soft state. Of course, it is always possible to botch the job and violate that rule. > >>It is the case that once an asymmetrical protocol is introduced into an >>architecture, it makes very difficult to build anything on top of it. > >like HTTP? Yes, or X.29 or most anything else that came out of Europe. Building an asymmetrical protocol on top can sometimes work, but generally this constitutes a dead end. This is where the distinction between the application protocol and the application becomes useful. > >>The running code/rough consensus is probably one of the more important >>aspects of driving the Internet to a artisan tradition rather than a >>more scientific one. This is has probably contributed most to the >>stagnation we currently see. >> >>An odd complement to the aversion to complexity, there seems to be an >>aversion to sophistication that leads to greater simplicity. > >I assume that the goal of this exercise requires ignoring the rather >remarkable complexity that has crept into much of the recent work in >the IETF? Actually, I was referring to the choice of SNMP [sic] over HEMS. While what you say is very true, there is nothing remarkable about it. This is the normal course for a partial design that is independently patched to fix point problems. It is exactly what you would expect. Take care, John From dcrocker at gmail.com Tue Jun 1 11:28:17 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Tue, 01 Jun 2010 11:28:17 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> <4C054591.5070708@gmail.com> Message-ID: <4C055141.9090705@gmail.com> On 6/1/2010 11:10 AM, John Day wrote: > I am not ready to > concede that what we have seen so far is representative of the space. I'm pretty sure that nearly 40 years and some billions of users qualifies as 'representative'. Far from ideal and certainly with plenty of terrain yet to be covered, but still solidly qualifying for a number of definitions of representative... 
> However, it is true that for data transfer protocols all properly > designed ones are both symmetrical and soft state. Of course, it is > always possible to botch the job and violate that rule. > >> >>> It is the case that once an asymmetrical protocol is introduced into an >>> architecture, it makes very difficult to build anything on top of it. >> >> like HTTP? My point was there there is an enormous range of value-add functionality built on top of HTTP, which much of the world views as a transport protocol. >> I assume that the goal of this exercise requires ignoring the rather >> remarkable complexity that has crept into much of the recent work in >> the IETF? > > Actually, I was referring to the choice of SNMP [sic] over HEMS. Then you will be amused to spend time wandering around many of the more recent exercises. You'll probably most enjoy anything from the RAI area, but possibly also mobility and... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From richard at bennett.com Tue Jun 1 11:49:33 2010 From: richard at bennett.com (Richard Bennett) Date: Tue, 01 Jun 2010 14:49:33 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C052248.2050608@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> Message-ID: <4C05563D.6040506@bennett.com> It seems that you're not reaching back all the way to the beginning in your search for principles, and have therefore landed in some of the mythology rather than the core ideas. It seems to me that you can't describe the Internet without acknowledging two facts as primary: 1. Packet-switching. The Internet is one of a series of exercises in the practical application of packet-switching technology to computer networking that began with ARPANET and continued through SATNET, PRNET, CYCLADES, the Internet, DECNet, XNS, SNA, and ISO OSI. The Internet protocols all more or less assume a packet-switching service, and in cases seek to control this service. 2. Internetworking. The Internet assumes that there will be a service that transports IP reliably. The Internet also defers certain decisions about the management of packets - or frames, to be more precise - to a networking function that is specific to a given technology such as ARPANET, PRNET, etc. Hence the Internet specification is incomplete by design, insofar as it leaves the layer two and layer one design to the operator of the layer two networks. Many of the principles in your list - simplicity, end to end, and symmetry, for example - are actually side-effects of the project, which was inter-networking packet-based networks built on diverse technologies. The Internet protocols are agnostic about privilege and best-effort, as these are layer two functions that are simply outside the scope of a system that goes from layer three to layer seven (or whatever number you assign to applications.) So the absence of a particular function in the IP stack doesn't always mean that it's frowned upon, it can mean that it's expected to be provided somewhere else. I don't know that economics has much to do with this, beyond the assumption that packet-switching is more economical for human-computer interactions than circuit-switching is. The Internet wasn't designed by economists. You do have to beware that the Internet has developed its own hype machine that imbues it with all sorts of magical properties that probably werent' in the minds of its original designers. 
The End-to-End Arguments paper, for example, followed the design of the protocols by nearly ten years, so to the extent that it describes the Internet at all, it could only be taken as a post-hoc explanation. And it doesn't even pretend to describe the Internet really, it's a general tome on distributed systems. On 6/1/2010 11:07 AM, Matthias B?rwolff wrote: > Dear all, > > I am in the middle of an argumentative research exercise in which I try > to map a set of principles that are central to the Internet (descriptive > principles, as informed by practices and universality of applicability; > not normative principles following purposes other than system stability > and individual liberty). Since most here have plenty of hands-on > experience I would be very appreciative of some feedback -- on-list, > off-list; long, short; however you like it. > > I made up the following list: > > 1. original end-to-end arguments and economic efficiency concerns > (speaking to completeness and efficiency of implementation) > > 2. modularity, minimal coupling, and layering (speaking to the general > architecture) > > 3. least privilege, and best effort (speaking to the actual shape of the > interdependencies) > > 4. cascadability and symmetry (speaking to the rules of efficient and > flexible protocol design) > > 5. running code, complexity avoidance, rough consensus, and path > dependence (speaking to the governance process and its stability) > > Thanks for all your takes. > > Matthias > > -- Richard Bennett Research Fellow Information Technology and Innovation Foundation Washington, DC From jeanjour at comcast.net Tue Jun 1 11:56:58 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 14:56:58 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C055141.9090705@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> <4C054591.5070708@gmail.com> <4C055141.9090705@gmail.com> Message-ID: At 11:28 -0700 2010/06/01, Dave Crocker wrote: >On 6/1/2010 11:10 AM, John Day wrote: >> I am not ready to >>concede that what we have seen so far is representative of the space. > >I'm pretty sure that nearly 40 years and some billions of users >qualifies as 'representative'. Far from ideal and certainly with >plenty of terrain yet to be covered, but still solidly qualifying >for a number of definitions of representative... This is a common mistake. Billions of users doing the same thing is not an exploration of the problem space any more than lots of implementations in daily use is stress testing. > >>However, it is true that for data transfer protocols all properly >>designed ones are both symmetrical and soft state. Of course, it is >>always possible to botch the job and violate that rule. >> >>> >>>>It is the case that once an asymmetrical protocol is introduced into an >>>>architecture, it makes very difficult to build anything on top of it. >>> >>>like HTTP? > >My point was there there is an enormous range of value-add >functionality built on top of HTTP, which much of the world views as >a transport protocol. Transport protocols are characterized by feedback mechanisms requiring synchronization. I was very careful to distinguish the application protocol from the application. The fact that much of the world views it as a Transport protocol merely indicates the failure of education. It remains that building another protocol on top of an asymmetric protocol has always proved cumbersome. 
> > >>>I assume that the goal of this exercise requires ignoring the rather >>>remarkable complexity that has crept into much of the recent work in >>>the IETF? >> >>Actually, I was referring to the choice of SNMP [sic] over HEMS. > >Then you will be amused to spend time wandering around many of the >more recent exercises. You'll probably most enjoy anything from the >RAI area, but possibly also mobility and... Well, the mobility stuff is laboring under the weight of trying to do something with half an architecture. Not surprising it is such a mess. Take care, John > >d/ >-- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net From jeanjour at comcast.net Tue Jun 1 12:47:00 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 15:47:00 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C05563D.6040506@bennett.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> Message-ID: SNA was not a packet switched network, but a traditional data comm network. A packet switched network besides sending packets was a peer network and SNA was definitely not. (Although pedantically it was. Codex use to take umbrage that their stat muxes (that supported) SNA were NOT doing packet switching. By the mid-80s I finally got them to admit that the primary difference was the granularity of the packet switching.) But the point is that SNA came from entirely different paradigm than the packet switched idea. At 14:49 -0400 2010/06/01, Richard Bennett wrote: >It seems that you're not reaching back all the >way to the beginning in your search for >principles, and have therefore landed in some of >the mythology rather than the core ideas. It >seems to me that you can't describe the Internet >without acknowledging two facts as primary: > >1. Packet-switching. The Internet is one of a >series of exercises in the practical application >of packet-switching technology to computer >networking that began with ARPANET and continued >through SATNET, PRNET, CYCLADES, the Internet, >DECNet, XNS, SNA, and ISO OSI. The Internet >protocols all more or less assume a >packet-switching service, and in cases seek to >control this service. > >2. Internetworking. The Internet assumes that >there will be a service that transports IP >reliably. The Internet also defers certain >decisions about the management of packets - or >frames, to be more precise - to a networking >function that is specific to a given technology >such as ARPANET, PRNET, etc. Hence the Internet >specification is incomplete by design, insofar >as it leaves the layer two and layer one design >to the operator of the layer two networks. > >Many of the principles in your list - >simplicity, end to end, and symmetry, for >example - are actually side-effects of the >project, which was inter-networking packet-based >networks built on diverse technologies. > >The Internet protocols are agnostic about >privilege and best-effort, as these are layer >two functions that are simply outside the scope >of a system that goes from layer three to layer >seven (or whatever number you assign to >applications.) So the absence of a particular >function in the IP stack doesn't always mean >that it's frowned upon, it can mean that it's >expected to be provided somewhere else. > >I don't know that economics has much to do with >this, beyond the assumption that >packet-switching is more economical for >human-computer interactions than >circuit-switching is. The Internet wasn't >designed by economists. 
> >You do have to beware that the Internet has >developed its own hype machine that imbues it >with all sorts of magical properties that >probably werent' in the minds of its original >designers. The End-to-End Arguments paper, for >example, followed the design of the protocols by >nearly ten years, so to the extent that it >describes the Internet at all, it could only be >taken as a post-hoc explanation. And it doesn't >even pretend to describe the Internet really, >it's a general tome on distributed systems. > > >On 6/1/2010 11:07 AM, Matthias B?rwolff wrote: >>Dear all, >> >>I am in the middle of an argumentative research exercise in which I try >>to map a set of principles that are central to the Internet (descriptive >>principles, as informed by practices and universality of applicability; >>not normative principles following purposes other than system stability >>and individual liberty). Since most here have plenty of hands-on >>experience I would be very appreciative of some feedback -- on-list, >>off-list; long, short; however you like it. >> >>I made up the following list: >> >>1. original end-to-end arguments and economic efficiency concerns >>(speaking to completeness and efficiency of implementation) >> >>2. modularity, minimal coupling, and layering (speaking to the general >>architecture) >> >>3. least privilege, and best effort (speaking to the actual shape of the >>interdependencies) >> >>4. cascadability and symmetry (speaking to the rules of efficient and >>flexible protocol design) >> >>5. running code, complexity avoidance, rough consensus, and path >>dependence (speaking to the governance process and its stability) >> >>Thanks for all your takes. >> >>Matthias >> >> > >-- >Richard Bennett >Research Fellow >Information Technology and Innovation Foundation >Washington, DC From dcrocker at gmail.com Tue Jun 1 13:00:50 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Tue, 01 Jun 2010 13:00:50 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C05563D.6040506@bennett.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> Message-ID: <4C0566F2.1040605@gmail.com> On 6/1/2010 11:49 AM, Richard Bennett wrote: > The Internet protocols are agnostic about privilege and best-effort, as Absent standardized QOS, IP is best effort and the transport-level reliability mechanisms reflect this, even as weak as they were (intentionally) made to be. This was a major shift from the degree of delivery assurance attempted for the Arpanet IMP infrastructure, which was reflected in the /lack/ of host-to-host reliability mechanism in the NCP. > these are layer two functions that are simply outside the scope of a Except that layer two is not end-to-end and therefore cannot make end-to-end service assertions or enforce them. > I don't know that economics has much to do with this, beyond the > assumption that packet-switching is more economical for human-computer > interactions than circuit-switching is. The Internet wasn't designed by > economists. Cost-savings, by avoiding NxM combinatorial explosion of communications lines, was an explicit and frequently cited motivation for the work, at least in terms of what I heard when I came on board in the early 70s. Surviving a "hostile battlefield" was the other, which meant conventional, not nuclear, conditions. At the time, I believe folks didn't quite anticipate that commercial communications environments would also look pretty hostile... 
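To put rough numbers on the combinatorial point a few paragraphs up: with N user sites that each need to reach M hosts, dedicated lines grow multiplicatively while attachments to a shared packet network grow additively. A back-of-the-envelope sketch, with purely illustrative figures:

    # Back-of-the-envelope version of the N x M line-count argument: dedicated
    # circuits between every user site and every host grow multiplicatively,
    # attachments to a shared packet network grow additively. Figures are
    # illustrative only; trunks between the switches are ignored.

    def dedicated_lines(n_user_sites, m_hosts):
        return n_user_sites * m_hosts

    def shared_network_attachments(n_user_sites, m_hosts):
        return n_user_sites + m_hosts

    for n, m in ((5, 4), (50, 20), (500, 100)):
        print(f"N={n:3d}, M={m:3d}: dedicated={dedicated_lines(n, m):6d}  "
              f"shared={shared_network_attachments(n, m):4d}")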
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From mbaer at cs.tu-berlin.de Tue Jun 1 13:31:51 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Tue, 01 Jun 2010 22:31:51 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C052F0F.1090201@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> <4C052F0F.1090201@gmail.com> Message-ID: <4C056E37.2060508@cs.tu-berlin.de> On 06/01/2010 06:02 PM, Dave Crocker wrote: > > > On 6/1/2010 8:47 AM, Matthias B?rwolff wrote: >>>> 4. cascadability and symmetry (speaking to the rules of efficient and >>>> flexible protocol design) >>> >>> What do you mean by the term "cascadability"? What do you mean by >>> "symmetry" in this context. >> >> There are ways in which you can have protocols doing similar things >> (e.g. file transfer) but which are not possible to concatenate (or >> cascade) without violating their semantics (pertaining to >> acknowledgments and other such control actions). > > Do you have some examples of 'cascadability' for the Arpanet or > Internet? I am still not quite seeing how it applies to the design or > operation of the protocols. The notion of "cascadability" arose in the context of interconnection of different networks (Internet versus X.25/X.75), and has primarily been addressed in some early Pouzin INWG notes (ca. 1973) -- the basic point being that only a fairly simple service can be cascaded at all. The term "cascadable" pops up in a 1979 Gien and Zimmerman paper. Generally speaking, things like virtual circuits, end-to-end acknowledgements, and buffer allocations seem to inhibit cascadability. Examples for protocols that are not easily cascadable are Arpanet's RFNMed normal message service which made any interconnection (e.g. with Alohanet) at the packet level very awkward. Moving to the application protocol level, an extreme example would be the logical impossibility of concatenating sequential FTP with MIT's experimental Blast protocol (send the whole file at once without flow control or sequential acknowledgments, and then wait for a list of holes to resend). Examples for nicely cascadable protocols are Arpanet raw messages (which were not RFNMed) and IP. Coming to think of it, maybe the notion of cascadability is a little too obvious and trivial to actually count as a principle. (But then again, in retrospect things often look much more obvious than at the time when they were contested.) > > >> Symmetry just means that neither end is conceptually master or slave, >> instead both being equal peers (cf. Telnet symmetry). > > At the application level, symmetry is either absent or a myth, for > almost all applications. True, applying a symmetry requirement/principle to the app protocol level would probably go a little too far. For there is enough room for all sorts of protocols there without prejudicing others, so there's no need to be adamant about such things. > > It also is not always present at lower layers (e.g., dhcp and I believe > TLS.) > > Hmmm. For that matter, I suspect the Arpanet NCP was not all that > symmetrical, although I do not remember enough of the details. I also > do not remember how symmetrical the IMP behavior was. (But perhaps the > Arpanet is going back too far for your discussion.) 
As for packet forwarding, the IMPs were symmetrical as far as I can tell; there wasn't much going on other than sending individual packets either way (and waiting for ACKs, and then trow packets away; or resend them if no ACK was coming back). The Arpanet Host-Host protocol was made symmetrical at some point, I believe (it wouldn't matter who'd issue a Request for Connection first, or if both issued them ar once; and both sides could send stuff, once the connection was up). > > The telnet reference is particularly salient. When I was running an > Internet software stack development group, we were seeking authorization > to sell to the US government and they sent out a person to verify our > standards compliance. During this, he required that we demonstrate that > a host could sent the user (that is, serve to client) to WILL ECHO. In > other words, we had to support having the user's system echo data from > the host server back to the host server. > > We noted that this was illogical and highly undesireable. He calmly > agreed with us but them pointed to the symmetry of the protocol and said > the formalities required us to support it. > > We hacked the change long enough to pass the test, continuously shaking > our heads in disbelief. > > >>>> 5. running code, complexity avoidance, rough consensus, and path >>>> dependence (speaking to the governance process and its stability) >>> >>> This sub-list merges at least two very different areas of concern. One >>> is the process for developing specification and the other pertains to >>> technical characteristics of specifications. The latter is also >>> reflected elsewhere in your list. >> >> Agreed, the latter stands out somewhat. As for the separation you >> mention, I would think, though, that the feasibility of getting anywhere >> involves both the organization of the process, and the characteristics >> of the resultant artifacts. I will have to think more about this. > > I am not saying that they are not each important, but that combining > them might confuse them as distinguishing concepts. > > d/ -- Matthias B?rwolff www.b?rwolff.de From jeanjour at comcast.net Tue Jun 1 13:31:42 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 16:31:42 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C0566F2.1040605@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> Message-ID: At 13:00 -0700 2010/06/01, Dave Crocker wrote: >On 6/1/2010 11:49 AM, Richard Bennett wrote: >>The Internet protocols are agnostic about privilege and best-effort, as > >Absent standardized QOS, IP is best effort and the transport-level >reliability mechanisms reflect this, even as weak as they were >(intentionally) made to be. > >This was a major shift from the degree of delivery assurance >attempted for the Arpanet IMP infrastructure, which was reflected in >the /lack/ of host-to-host reliability mechanism in the NCP. Yes, this was the basic datagram innovation pioneered by CYCLADES, which is the fundamental shift in the thinking. I sometimes characterize the distinction as packet switching was "continental drift" but datagrams were "plate tectonics." >>these are layer two functions that are simply outside the scope of a This was the hop-by-hop error control seen in the ARPANet and later advocated by the PTTs in X.25. Pouzin's insight was that the hosts weren't going to trust the network no matter what, so it didn't have to be perfect. 
Building reliable systems from unreliable parts was in the air at the time, i.e. the von Neumann paper. >Except that layer two is not end-to-end and therefore cannot make >end-to-end service assertions or enforce them. Right, but is necessary. Layer two must provide enough error control to make end-to-end error control at layer 4 cost effective. Since most loss at layer 3 is due to congestion, that implies that layer two should not be worse than the congestion losses. If it is, layer 4 error control becomes very inefficient. > >>I don't know that economics has much to do with this, beyond the >>assumption that packet-switching is more economical for human-computer >>interactions than circuit-switching is. The Internet wasn't designed by >>economists. > >Cost-savings, by avoiding NxM combinatorial explosion of >communications lines, was an explicit and frequently cited >motivation for the work, at least in terms of what I heard when I >came on board in the early 70s. Circuit switching didn't require that. I never heard that argument. The arguments I heard (and are the arguments in Baran's report) were that circuit switches required long connection set up times and effectively statically allocated resources to flows, where datagrams required very little set up time (counting transport connect time) and pooled (or dynamic) resource allocation, which is always much more effective. Voice was characterized by long connection times and continues data flow, whereas data had short connection times and burst data flows. Where data connection times were less than the set up time for circuits. > >Surviving a "hostile battlefield" was the other, which meant >conventional, not nuclear, conditions. At the time, I believe folks >didn't quite anticipate that commercial communications environments >would also look pretty hostile... > > Indeed. > >d/ >-- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net From mbaer at cs.tu-berlin.de Tue Jun 1 13:48:33 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Tue, 01 Jun 2010 22:48:33 +0200 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C052853.4040601@gmail.com> <4C052B8F.3030609@cs.tu-berlin.de> Message-ID: <4C057221.4070508@cs.tu-berlin.de> On 06/01/2010 07:23 PM, John Day wrote: > Are you talking about principles in the formal sense, i.e. deriving from > fundamentals, or more in the folklore sense, i.e. those contributing to > myth? I presume the latter. I was thinking more about principles in the way Denning has reasoned about them in his Library of Great Computing Principles project, that is, be (1) universal, (2) recurrent, and (3) broadly influential (http://cs.gmu.edu/cne/pjd/GP/gp_criteria.html). My idea was to find those that have shaped the Internet. > > Strictly speaking, your definition of symmetry is correct. The other > way to put it is that the protocol machines on either end of the flow > are the same. I commiserate with Dave's tale of dumb conformance > testers. I have seen them myself. However, I would disagree with him > that all application protocols must be asymmetrical. While it represents > the majority of those done so far, I would also contend that our > applications so far are pretty rudimentary. I think the Telnet example > is quite exemplary. The vast majority of that type of protocol at the > time were asymmetrical. Their designers lacked the insight to see it as > a symmetrical problem. 
Most thought that it was a terminal to host > protocol or a remote login protocol (some textbooks still describe > Telnet that way), when in fact it is a device driver protocol. But then > these points are moot since there is really only one application > protocol required anyway. The variety was just an indication of our > lack of understanding at the time. > > The economic issue is a good one. I always point to the failure of the > early NETRJE as an example of the obvious (elegant) solution being wrong > for economic reasons. > > It is the case that once an asymmetrical protocol is introduced into an > architecture, it makes very difficult to build anything on top of it. > > The running code/rough consensus is probably one of the more important > aspects of driving the Internet to a artisan tradition rather than a > more scientific one. This is has probably contributed most to the > stagnation we currently see. (Off topic) I can't help but think that the running code/rough consensus tradition has helped finding common ground among the vastly different parties involved, which in turn was the basis for making progress at all. It sure might be prone to all sorts of bias and even mediocrity, but how else to do it? Can't see much of an alternative. > > An odd complement to the aversion to complexity, there seems to be an > aversion to sophistication that leads to greater simplicity. > > The other phenomena which I have noticed but not been able to explain > was that if one group did X (and didn't quite do it right), the Internet > would do "not X" rather than "let us show you how to get that right." > > It might be worthwhile to distinguish those principles that pre-dated > the Internet itself vs those that were developed in the various > precursors and were or were not taken up by the Internet That sounds like a good idea. However, it would probably take quite some effort to do it, given the world of prior experiments, experiences, many of which are poorly documented at that. Arpanet and early Internet are probably the easiest to find literature about, and follow actual stages of evolution. Matthias > > Take care, > John > > At 17:47 +0200 2010/06/01, Matthias B?rwolff wrote: >> Dave, thanks for your response. >> >> On 06/01/2010 05:33 PM, Dave Crocker wrote: >>> >>>> 4. cascadability and symmetry (speaking to the rules of efficient and >>>> flexible protocol design) >>> >>> What do you mean by the term "cascadability"? What do you mean by >>> "symmetry" in this context. >>> >> >> There are ways in which you can have protocols doing similar things >> (e.g. file transfer) but which are not possible to concatenate (or >> cascade) without violating their semantics (pertaining to >> acknowledgments and other such control actions). >> >> Symmetry just means that neither end is conceptually master or slave, >> instead both being equal peers (cf. Telnet symmetry). >> >>> >>>> 5. running code, complexity avoidance, rough consensus, and path >>>> dependence (speaking to the governance process and its stability) >>> >>> This sub-list merges at least two very different areas of concern. One >>> is the process for developing specification and the other pertains to >>> technical characteristics of specifications. The latter is also >>> reflected elsewhere in your list. >> >> Agreed, the latter stands out somewhat. 
As for the separation you >> mention, I would think, though, that the feasibility of getting anywhere >> involves both the organization of the process, and the characteristics >> of the resultant artifacts. I will have to think more about this. >> >> Matthias >> >>> >>> d/ >> >> -- >> Matthias B?rwolff >> www.b?rwolff.de > -- Matthias B?rwolff www.b?rwolff.de From dot at dotat.at Tue Jun 1 13:53:57 2010 From: dot at dotat.at (Tony Finch) Date: Tue, 1 Jun 2010 21:53:57 +0100 Subject: [ih] principles of the internet In-Reply-To: <4C05563D.6040506@bennett.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> Message-ID: On Tue, 1 Jun 2010, Richard Bennett wrote: > > I don't know that economics has much to do with this, beyond the assumption > that packet-switching is more economical for human-computer interactions than > circuit-switching is. The Internet wasn't designed by economists. What about Internet-scale topology / the EGP -> BGP transition / privatization and the removal of the NSFNET backbone. Tony. -- f.anthony.n.finch http://dotat.at/ SHANNON: SOUTHWEST BACKING SOUTHEAST 4 OR 5. ROUGH OR VERY ROUGH. MAINLY FAIR. GOOD. From mbaer at cs.tu-berlin.de Tue Jun 1 13:54:14 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Tue, 01 Jun 2010 22:54:14 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C0566F2.1040605@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> Message-ID: <4C057376.40708@cs.tu-berlin.de> On 06/01/2010 10:00 PM, Dave Crocker wrote: > > > On 6/1/2010 11:49 AM, Richard Bennett wrote: >> The Internet protocols are agnostic about privilege and best-effort, as > > Absent standardized QOS, IP is best effort and the transport-level > reliability mechanisms reflect this, even as weak as they were > (intentionally) made to be. Best effort to me seems absolutely central to the "Internet architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6 which really nicely elaborates the notion. > > This was a major shift from the degree of delivery assurance attempted > for the Arpanet IMP infrastructure, which was reflected in the /lack/ of > host-to-host reliability mechanism in the NCP. > > >> these are layer two functions that are simply outside the scope of a > > Except that layer two is not end-to-end and therefore cannot make > end-to-end service assertions or enforce them. > > >> I don't know that economics has much to do with this, beyond the >> assumption that packet-switching is more economical for human-computer >> interactions than circuit-switching is. The Internet wasn't designed by >> economists. > > Cost-savings, by avoiding NxM combinatorial explosion of communications > lines, was an explicit and frequently cited motivation for the work, at > least in terms of what I heard when I came on board in the early 70s. +1 the avoidance of the nxm problem is all over the literature from the time (also, Padlipsky's term "common intermediary representations" comes to mind) > > Surviving a "hostile battlefield" was the other, which meant > conventional, not nuclear, conditions. At the time, I believe folks > didn't quite anticipate that commercial communications environments > would also look pretty hostile... 
> > > d/ -- Matthias B?rwolff www.b?rwolff.de From jnc at mercury.lcs.mit.edu Tue Jun 1 14:31:06 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 1 Jun 2010 17:31:06 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> > From: John Day > Yes, this was the basic datagram innovation pioneered by CYCLADES, > which is the fundamental shift in the thinking. I sometimes > characterize the distinction as packet switching was "continental > drift" but datagrams were "plate tectonics." Going to disagree with you there. Don't get me wrong, CYCLADES was a _HUGE_ step forward, and considering the gap from the ARPANET to the Internet, CYCLADES is much close to the latter than the former. So it was a critical trail-breaker. Still... The ARPANet really did pretty well implement the radical Baran/etc model of the world, the model of packets. That was the really fundamental change in the world, the strata-breaking event (to continue the geological metaphor). Prior to that, circuits (with a whole range of key attributes, such as explicit setup/tear-down, fixed sharing of resources, stream service model, etc, etc). After that... And everything else since then has been adjustments to that major change in direction, IMO. The ARPANet really did expose the datagram paradigm to the users (from its perspective, the hosts): for example, there was no 'connection open' or 'connection close' _from the host to the IMP_ - the host just sent packets to whereever, and whenever, it wanted. Yes, it had to obey flow-control restrictions, or it could be blocked - but even if a host did obey flow-control, it could be blocked for reasons beyond its control/understanding. And, yes, the Host-Host protocols sort of 'made' the actual users use the network as VCs, but that was I think for other reasons (which I can only guess, but I would guess that keeping the circuit paradigm made getting into that whole new world easier). The big change going from the ARPANET (not Host-Host Protocol, see above) to the Internet (in terms of the _placement_ of function - the Internet of course added other capabilities, such as being able to use a diverse range of technologies, but that's different) was to make the hosts responsible for reliable transmission (checksums, sequence numbers, timeouts, retransmissions). Was that as big as going to packets to begin with? It was big, sure, but as big as going to packets? Noel From jeanjour at comcast.net Tue Jun 1 14:46:52 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 17:46:52 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C057376.40708@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> Message-ID: At 22:54 +0200 2010/06/01, Matthias B?rwolff wrote: >On 06/01/2010 10:00 PM, Dave Crocker wrote: >> >> >> On 6/1/2010 11:49 AM, Richard Bennett wrote: >>> The Internet protocols are agnostic about privilege and best-effort, as >> >> Absent standardized QOS, IP is best effort and the transport-level >> reliability mechanisms reflect this, even as weak as they were >> (intentionally) made to be. > >Best effort to me seems absolutely central to the "Internet >architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6 >which really nicely elaborates the notion. This is the contribution from Pouzin implemented in CYCLADES, which Metcalfe picks up on for the more limited environment of the LAN. 
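The shift Noel describes, making the hosts responsible for reliable transmission over a best-effort service, reduces in its simplest form to end systems adding sequence numbers, a checksum, timeouts and retransmission above a network that may silently drop packets. The following is a minimal stop-and-wait sketch over a simulated lossy channel; the checksum and the channel are placeholders, and this is not how TCP or the CYCLADES TS protocol were actually specified.

    import random

    # Minimal stop-and-wait sketch: the "network" below only drops packets, and
    # everything needed for reliability (sequence numbers, checksum, timeout and
    # retransmission, re-acknowledging duplicates) lives in the end hosts.
    # Sender and receiver are collapsed into one loop purely for brevity, and
    # the checksum is a placeholder rather than the Internet checksum.

    def lossy(packet, p_loss, rng):
        """Best-effort delivery: returns None when the packet is dropped."""
        return None if rng.random() < p_loss else packet

    def checksum(payload):
        return sum(payload) % 65536

    def transfer(messages, p_loss=0.2, seed=42):
        rng = random.Random(seed)
        delivered, data_transmissions = [], 0
        for seq, payload in enumerate(messages):
            acked = False
            while not acked:                                   # resend until acknowledged
                data_transmissions += 1
                pkt = lossy((seq, checksum(payload), payload), p_loss, rng)
                if pkt is None:
                    continue                                   # "timeout": retransmit
                r_seq, r_sum, r_data = pkt
                if r_sum != checksum(r_data):
                    continue                                   # corrupted: ignore, retransmit
                if r_seq == len(delivered):
                    delivered.append(r_data)                   # new in-order data
                # duplicates (a lost ACK last time) are re-acknowledged, not re-delivered
                acked = lossy(("ACK", r_seq), p_loss, rng) is not None
        return delivered, data_transmissions

    msgs = [b"datagram %d" % i for i in range(5)]
    got, sent = transfer(msgs)
    print(got == msgs, "data packets transmitted:", sent)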
> >> >> This was a major shift from the degree of delivery assurance attempted >> for the Arpanet IMP infrastructure, which was reflected in the /lack/ of >> host-to-host reliability mechanism in the NCP. >> >> >>> these are layer two functions that are simply outside the scope of a >> >> Except that layer two is not end-to-end and therefore cannot make >> end-to-end service assertions or enforce them. >> >> >>> I don't know that economics has much to do with this, beyond the >>> assumption that packet-switching is more economical for human-computer >>> interactions than circuit-switching is. The Internet wasn't designed by >>> economists. >> >> Cost-savings, by avoiding NxM combinatorial explosion of communications >> lines, was an explicit and frequently cited motivation for the work, at >> least in terms of what I heard when I came on board in the early 70s. > >+1 the avoidance of the nxm problem is all over the literature from the >time (also, Padlipsky's term "common intermediary representations" comes >to mind) This use of n x m is very different than Dave's use about connectivity. This is the concept that was called the canonical form. It was critically important in the early network, but actually proves to be a transitional concept. It is absolutely necessary when the same application is developed in isolation: terminals, file systems, etc. But once networks become common, new applications are designed from the start to be used on different systems over a network. So they are their canonical form. I always thought this was quite interesting. Since at one time, it was trying to formalize the idea of canonical form is what drove me to reading too much Frege. ;-) Then to find out, that the existence of the network makes the problem go away was amusing. > > >> Surviving a "hostile battlefield" was the other, which meant >> conventional, not nuclear, conditions. At the time, I believe folks >> didn't quite anticipate that commercial communications environments >> would also look pretty hostile... >> >> >> d/ > >-- >Matthias B?rwolff >www.b?rwolff.de From jnc at mercury.lcs.mit.edu Tue Jun 1 14:59:50 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 1 Jun 2010 17:59:50 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100601215950.A31206BE5BE@mercury.lcs.mit.edu> > From: Richard Bennett > The End-to-End Arguments paper, for example, followed the design of the > protocols by nearly ten years A cautionary note: just because something turned up in a paper in year X, that doesn't mean it wasn't thought of, and guiding things, well before that. For example, fate-sharing is first described (that I know of) in the "Design Philosophy of the DARPA Internet Protocols" paper, from 1988. However, the 'fate-sharing' ideas date back much earlier than that paper - I recall seeing that idea all laid out (including the term 'fate-sharing') on a set of slides Dave Clark had used for a presentation sometime around '77-'78, when I first joined the Internet effort. Why it didn't make it onto paper until a decade later, I have no idea... 
Noel From jeanjour at comcast.net Tue Jun 1 15:07:48 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 18:07:48 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> References: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> Message-ID: At 17:31 -0400 2010/06/01, Noel Chiappa wrote: > > From: John Day > > > Yes, this was the basic datagram innovation pioneered by CYCLADES, > > which is the fundamental shift in the thinking. I sometimes > > characterize the distinction as packet switching was "continental > > drift" but datagrams were "plate tectonics." > >Going to disagree with you there. > >Don't get me wrong, CYCLADES was a _HUGE_ step forward, and considering the >gap from the ARPANET to the Internet, CYCLADES is much close to the latter >than the former. So it was a critical trail-breaker. Still... > >The ARPANet really did pretty well implement the radical Baran/etc model of >the world, the model of packets. That was the really fundamental change in the >world, the strata-breaking event (to continue the geological metaphor). Prior >to that, circuits (with a whole range of key attributes, such as explicit >setup/tear-down, fixed sharing of resources, stream service model, etc, >etc). After that... And everything else since then has been adjustments to >that major change in direction, IMO. Sorry, but neither Baran nor the ARPANET were a datagram network. There are two aspects to being a datagram network: 1) the independent routing of the packets, and 2) the network does not try to recover all failures, but leaves most of that to the hosts. There is nothing about the IMP subnet that was "building reliable systems from unreliable parts." Also Baran's report and the ARPANET had much more in common with the virtual circuit approach to packet switching than the datagram approach. Later the ARPANET added Type 3 packets to provide a datagram service, but it was not part of the original. Also, the ARPANET was designed to be reliable. The IMP subnet was not designed to lose packets. This is not a datagram network in the sense of CYCLADES or the Internet. My geologic analogy was more to show that often when a paradigm shift occurs, it actually comes in stages with many contributors to get from the old model to the new. All of the steps are necessary and probably all of them could not have been made by one person. Actually your point about packet switching being a radical break is interesting. (I must have said this before on this list) In listening to people talk about it, a pattern emerges. If your formative years were in the world of traditional telecom, then yes, packet switching is a watershed change in your thinking. But if you are just a little bit younger (and the shift is really only a couple of years) and your formative years were more with computing, then packet switching is "obvious." (You want to send data between two computers? Okay, the data is in buffers, pick up a buffer and send it. Pretty obvious.) But the idea, that packets could move independently, that the system could be stochastic. That was mind-blowing. > >The ARPANet really did expose the datagram paradigm to the users (from its >perspective, the hosts): for example, there was no 'connection open' or >'connection close' _from the host to the IMP_ - the host just sent packets to >whereever, and whenever, it wanted. 
> >Yes, it had to obey flow-control restrictions, or it could be blocked - but >even if a host did obey flow-control, it could be blocked for reasons beyond >its control/understanding. > >And, yes, the Host-Host protocols sort of 'made' the actual users use the >network as VCs, but that was I think for other reasons (which I can only >guess, but I would guess that keeping the circuit paradigm made getting into >that whole new world easier). > > >The big change going from the ARPANET (not Host-Host Protocol, see above) to >the Internet (in terms of the _placement_ of function - the Internet of course >added other capabilities, such as being able to use a diverse range of >technologies, but that's different) was to make the hosts responsible for >reliable transmission (checksums, sequence numbers, timeouts, >retransmissions). Was that as big as going to packets to begin with? It was >big, sure, but as big as going to packets? Correct. That (i.e. make the hosts responsible for reliable transmission (checksums, sequence numbers, timeouts, retransmissions) is what the CYCLADES TS protocol (its transport protocol) did in 1972. It was not new with the Internet. This was the whole point of a datagram based network. Take care, John From bernie at fantasyfarm.com Tue Jun 1 15:21:43 2010 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Tue, 01 Jun 2010 18:21:43 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> References: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> Message-ID: <4C054FB7.10629.483BF66@bernie.fantasyfarm.com> On 1 Jun 2010 at 17:31, Noel Chiappa wrote: > The big change going from the ARPANET (not Host-Host Protocol, see above) to > the Internet (in terms of the _placement_ of function - the Internet of course > added other capabilities, such as being able to use a diverse range of > technologies, but that's different) was to make the hosts responsible for > reliable transmission (checksums, sequence numbers, timeouts, > retransmissions). I have an odd perspective on this and it seemed to me that the thrust for this kind of change was that the phone lines were *MUCH* better than we expected them to be [and the current fiberoptic links are even better]. When Bob Kahn cobbled up the checksum equation the IMPs were to use over the 50Kb circuits, it was assumed that the lines would be pretty much full all the time and that the lines would perform according to AT&T's specs. (and indeed we even calculated [but I can't remember the details any more] how often a broken-but-undetected packet would get through) It was very conservative and checksummed hop-to-hop [since if you expect a lot of errors, that's more efficient that sending it all the way to the other end only to discover that it got broken back at hop-1]. But we quickly discovered that the reality was we had almost *no* retransmissions and while that didn't cause any change right-off [since the modems were still generating and checking their 24-bit checksums and the IMPs were dutifully retransmitting hop-by-hop] it *DID* indicate that a change in the direction of end-to-end would likely result in much better throughput than hop-to-hop. 
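To make Bernie's back-of-the-envelope concrete, here is a rough sketch of my own; the bit-error rates, packet size, and hop count are made-up assumptions, not the original IMP figures, but they show why hop-by-hop retransmission only pays when lines are noisy, and how rarely anything slips past a 24-bit check.

# Illustrative only: invented numbers, independent-error assumption.

def p_packet_error(bit_error_rate, packet_bits):
    """Probability that at least one bit of a packet is corrupted on one link."""
    return 1 - (1 - bit_error_rate) ** packet_bits

def expected_link_transmissions(p_err, hops, end_to_end):
    """Expected link transmissions to move one packet across `hops` links."""
    if end_to_end:
        # No per-hop check: the whole path must succeed, else resend from the source.
        # (Charging a full path per attempt is a slight overestimate, fine for a sketch.)
        return hops / (1 - p_err) ** hops
    # Per-hop check: each link retries independently until it succeeds.
    return hops / (1 - p_err)

for ber in (1e-5, 1e-7):                      # "noisy as specified" vs. "clean as measured"
    p = p_packet_error(ber, 8 * 1000)         # ~1000-byte packet
    print(ber,
          round(expected_link_transmissions(p, 5, end_to_end=False), 2),
          round(expected_link_transmissions(p, 5, end_to_end=True), 2),
          p / 2 ** 24)                        # crude odds of an undetected error, treating
                                              # the 24-bit checksum as ideal

With the noisy line the end-to-end scheme costs visibly more link transmissions; with lines as clean as Bernie describes, the two are nearly identical, which is exactly the case for moving the retransmission machinery out to the ends.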
/Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From richard at bennett.com Tue Jun 1 15:25:36 2010 From: richard at bennett.com (Richard Bennett) Date: Tue, 01 Jun 2010 18:25:36 -0400 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> Message-ID: <4C0588E0.5040706@bennett.com> Metcalfe's Ethernet was the canonical best-effort network, with a single service level and no point to point retransmission, although it did automatically retransmit following collisions using the truncated binary exponential backoff algorithm that was later adopted in a moderately different form by Jacobson's Algorithm in TCP. Modern hub and spoke "Ethernets" don't have collisions, retransmission, or single service level, but they do have flow control. I've always thought it interesting that TCP seeks to mimic the behavior of the layer 2 protocol most commonly in use, whatever it may be at any given time, and that analysts often mistake particular forms of this mimicry for fundamental design elements when in fact the actual principle is reflection. Network engineering is too close to the process of making things work to understand all the motivating factors - it's like trying to write your own biography and expecting it to be objective. On 6/1/2010 5:46 PM, John Day wrote: > At 22:54 +0200 2010/06/01, Matthias B?rwolff wrote: >> On 06/01/2010 10:00 PM, Dave Crocker wrote: >>> >>> >>> On 6/1/2010 11:49 AM, Richard Bennett wrote: >>>> The Internet protocols are agnostic about privilege and >>>> best-effort, as >>> >>> Absent standardized QOS, IP is best effort and the transport-level >>> reliability mechanisms reflect this, even as weak as they were >>> (intentionally) made to be. >> >> Best effort to me seems absolutely central to the "Internet >> architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6 >> which really nicely elaborates the notion. > > This is the contribution from Pouzin implemented in CYCLADES, which > Metcalfe picks up on for the more limited environment of the LAN. > >> >>> >>> This was a major shift from the degree of delivery assurance attempted >>> for the Arpanet IMP infrastructure, which was reflected in the >>> /lack/ of >>> host-to-host reliability mechanism in the NCP. >>> >>> >>>> these are layer two functions that are simply outside the scope of a >>> >>> Except that layer two is not end-to-end and therefore cannot make >>> end-to-end service assertions or enforce them. >>> >>> >>>> I don't know that economics has much to do with this, beyond the >>>> assumption that packet-switching is more economical for >>>> human-computer >>>> interactions than circuit-switching is. The Internet wasn't >>>> designed by >>>> economists. >>> >>> Cost-savings, by avoiding NxM combinatorial explosion of >>> communications >>> lines, was an explicit and frequently cited motivation for the >>> work, at >>> least in terms of what I heard when I came on board in the early 70s. >> >> +1 the avoidance of the nxm problem is all over the literature from the >> time (also, Padlipsky's term "common intermediary representations" comes >> to mind) > > This use of n x m is very different than Dave's use about > connectivity. This is the concept that was called the canonical > form. 
It was critically important in the early network, but actually > proves to be a transitional concept. It is absolutely necessary when > the same application is developed in isolation: terminals, file > systems, etc. But once networks become common, new applications are > designed from the start to be used on different systems over a > network. So they are their canonical form. > > I always thought this was quite interesting. Since at one time, it was > trying to formalize the idea of canonical form is what drove me to > reading too much Frege. ;-) Then to find out, that the existence of > the network makes the problem go away was amusing. > >> > >>> Surviving a "hostile battlefield" was the other, which meant >>> conventional, not nuclear, conditions. At the time, I believe folks >>> didn't quite anticipate that commercial communications environments >>> would also look pretty hostile... >>> >>> d/ >> >> -- >> Matthias Bärwolff >> www.bärwolff.de > > -- Richard Bennett Research Fellow Information Technology and Innovation Foundation Washington, DC From amckenzie3 at yahoo.com Tue Jun 1 15:50:15 2010 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Tue, 1 Jun 2010 15:50:15 -0700 (PDT) Subject: [ih] principles of the internet In-Reply-To: <4C056E37.2060508@cs.tu-berlin.de> Message-ID: <921246.24895.qm@web30605.mail.mud.yahoo.com> Matthias, The IMP system was quite symmetrical, as you say. Also, NCP was made symmetrical from the beginning, by intention. I can't remember whose idea it was that NCP should be symmetrical, but possibly it was Steve Crocker. Regards, Alex McKenzie --- On Tue, 6/1/10, Matthias Bärwolff wrote: > > Hmmm. For that matter, I suspect the Arpanet NCP was not all that > > symmetrical, although I do not remember enough of the details. I also > > do not remember how symmetrical the IMP behavior was. (But perhaps the > > Arpanet is going back too far for your discussion.) > As for packet forwarding, the IMPs were symmetrical as far > as I can > tell; there wasn't much going on other than sending > individual packets > either way (and waiting for ACKs, and then throw packets > away; or resend > them if no ACK was coming back). The Arpanet Host-Host > protocol was made > symmetrical at some point, I believe (it wouldn't matter > who'd issue a > Request for Connection first, or if both issued them at > once; and both > sides could send stuff, once the connection was up). From jeanjour at comcast.net Tue Jun 1 16:11:25 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 19:11:25 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C054FB7.10629.483BF66@bernie.fantasyfarm.com> References: <20100601213107.02B316BE5AE@mercury.lcs.mit.edu> <4C054FB7.10629.483BF66@bernie.fantasyfarm.com> Message-ID: I remember that as well, but not from the first hand experience that you do. But I do remember seeing a paper that quoted line error rates and being struck by the fact that the longest links in the net had some of the lowest error rates while the shorter ones were much higher. In particular I remember that Illinois to Utah (one hop) had almost no errors, but Rome NY to Boston also one hop had one of the highest error rates! But I assumed that the latter went through much older equipment than the former.
At 18:21 -0400 2010/06/01, Bernie Cosell wrote: >On 1 Jun 2010 at 17:31, Noel Chiappa wrote: > >> The big change going from the ARPANET (not Host-Host Protocol, see above) to >> the Internet (in terms of the _placement_ of function - the >>Internet of course >> added other capabilities, such as being able to use a diverse range of >> technologies, but that's different) was to make the hosts responsible for >> reliable transmission (checksums, sequence numbers, timeouts, >> retransmissions). > >I have an odd perspective on this and it seemed to me that the thrust for >this kind of change was that the phone lines were *MUCH* better than we >expected them to be [and the current fiberoptic links are even better]. > >When Bob Kahn cobbled up the checksum equation the IMPs were to use over >the 50Kb circuits, it was assumed that the lines would be pretty much >full all the time and that the lines would perform according to AT&T's >specs. (and indeed we even calculated [but I can't remember the details >any more] how often a broken-but-undetected packet would get through) It >was very conservative and checksummed hop-to-hop [since if you expect a >lot of errors, that's more efficient that sending it all the way to the >other end only to discover that it got broken back at hop-1]. But we >quickly discovered that the reality was we had almost *no* >retransmissions and while that didn't cause any change right-off [since >the modems were still generating and checking their 24-bit checksums and >the IMPs were dutifully retransmitting hop-by-hop] it *DID* indicate that >a change in the direction of end-to-end would likely result in much >better throughput than hop-to-hop. > > /Bernie\ > >-- >Bernie Cosell Fantasy Farm Fibers >mailto:bernie at fantasyfarm.com Pearisburg, VA > --> Too many people, too few sheep <-- From amckenzie3 at yahoo.com Tue Jun 1 16:21:53 2010 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Tue, 1 Jun 2010 16:21:53 -0700 (PDT) Subject: [ih] principles of the internet In-Reply-To: Message-ID: <449698.76682.qm@web30604.mail.mud.yahoo.com> I disagree with John's definition of what it means to be a datagram network. In my opinion, all that is required is the independent routing of packets. In this sense, the ARPANET presented a classic datagram interface to its users (Hosts), except that the units crossing the Host-IMP interface were called "messages". Messages were truly independently routed. HOWEVER, the ARPANET did go to great lengths to insure that messages, once accepted, were correctly delivered to the recipient with high probability. ARPANET also kept messages between a given pair of Hosts in order. These two design decisions put a great deal of complication into the IMPs. It should be remembered, though, that the original concept of the ARPANET was that each Host would contain a program to do all the store-and-forward functions. It was Wes Clark's idea that a minicomputer should sit next to each Host to do all the hard jobs (routing, ordering, error detection/correction, etc) so that each Host did not have to write programs to get these jobs done. Larry Roberts was enthusiastic about this idea because it provided a more cost-effective way of getting the programming done, done on time, and done correctly. So it was a design decision that the complexity SHOULD all go in the IMPs. 
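For readers trying to keep the two senses of "datagram" straight, here is a toy contrast between the forwarding models being argued over; the tables and names are invented for illustration and say nothing about how the IMPs were actually coded.

routing_table = {"hostB": "link_east"}            # consulted afresh for every packet

def forward_datagram(packet):
    # No setup: every packet names its destination and is routed on whatever the
    # table says right now, so successive packets may take different paths.
    return routing_table[packet["dst"]]

circuit_table = {}                                 # per-connection state in the switch

def setup_circuit(circuit_id, out_link):
    circuit_table[circuit_id] = out_link           # explicit call setup before any data

def forward_on_circuit(packet):
    # Data packets carry only a circuit id and can only follow the path that setup
    # pinned down; the switch must hold that state for the lifetime of the call.
    return circuit_table[packet["circuit"]]

print(forward_datagram({"dst": "hostB", "data": "hi"}))
setup_circuit(7, "link_east")
print(forward_on_circuit({"circuit": 7, "data": "hi"}))

Roughly Alex's point: at the Host interface the ARPANET looked like the first model, even though the IMPs underneath worked hard to make delivery look reliable and ordered.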
An additional source of possible confusion is that inside the ARPANET, the IMPs broke messages into shorter units that were called "packets", and internally the network treated the packets independently (e.g., as datagrams in either John's sense or my sense). Finally, there is a very real sense in which the ARPANET was designed to build a reliable message communication system from unreliable parts. Each circuit and each IMP was an unreliable part, yet the Host messages were delivered correctly, and acknowledged by RFNMs, with very high reliability. We can all see now that the Internet we know couldn't be built out of a bunch of independently designed and implemented ARPANET-like networks and offer reasonable bandwidth and delay characteristics. We can also see that it could be (and was) built out of independently designed and implemented Cyclades-like (actually, for purists, Cigale-like) networks. But that wasn't the perspective in 1968. Regards, Alex McKenzie --- On Tue, 6/1/10, John Day wrote: > > Sorry, but neither Baran nor the ARPANET were a datagram > network. > There are two aspects to being a datagram network: 1) > the > independent routing of the packets, and 2) the network does > not try > to recover all failures, but leaves most of that to the > hosts. > There is nothing about the IMP subnet that was "building > reliable > systems from unreliable parts." Also Baran's report and the > ARPANET > had much more in common with the virtual circuit approach > to packet > switching than the datagram approach. From jnc at mercury.lcs.mit.edu Tue Jun 1 17:44:58 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 1 Jun 2010 20:44:58 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> > From: John Day > Sorry, but neither Baran nor the ARPANET were a datagram network. Well, it all depends on how you define 'datagram network', doesn't it?! :-) > There are two aspects to being a datagram network: 1) the independent > routing of the packets, and 2) the network does not try to recover all > failures, but leaves most of that to the hosts. Those are both important, but I would say that 'no call setup' is equally important. By your definition, an ATM network might be a 'datagram network' (well, maybe not, I guess it doesn't have true independent routing of packets through intermediate nodes). And if you go look at the detail in 1822, there is an error code in there for 'packet not received at the other end', with the implication that it's up to the host to retry (although as we previously discussed some months back, no host seems to have actually done so, since in practise the network was too reliable to bother). Was the ARPANET a driven-snow pure datagram network? No, it didn't fully have the 'unreliable network' thing. But it still was a huge step towards the pure datagram network of today - it had packets, it had the pooled resource allocation, it had no call setup, it had independent routing of packets, etc, etc. All it was missing was the 'hosts do reliability' thing. You may claim that this was the hard step intellectually, and that what came before (all the stuff the ARPANET did) was sort of 'engineering necessity'.
But I seem to recall at least once Vint saying that the 'hosts do reliability' thing was just unavoidable for them once they tried to hook SATNET (or a PRNET) to the ARPANET, that the ARPANET model just didn't work once you tried to hook a number of networks together; that's exactly why TCP/IP wound up looking the way it did. So there was 'engineering necessity' there too. I think what may be going on here is that a lot of ideas that are 'obvious' in retrospect aren't actually so obvious beforehand - and unless you lived through the phase-change, you don't really appreciate, at a gut level, just how 'non-obvious' they really were beforehand. (Like the WWW.... but I digress! :-) You and I never lived in a world in which the idea of a packet didn't exist, so I'm not sure we can really understand how 'non-obvious' that idea was, before Baran et al. You, I gather, did live through the idea of 'hosts do reliability', so you probably do have an idea of how 'non-obvious' it was. But perhaps that you lived through one, and not the other, is affecting your analysis of how important they were, relative to each other? Yes, CYCLADES was very important in both i) floating the idea of 'hosts do reliability', and ii) showing that a working network could actually be built out of it... but much as I want to honour it (and see that it's remembered), I still don't think it's as big a step as the step to the ARPANET. Noel From jnc at mercury.lcs.mit.edu Tue Jun 1 17:59:03 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 1 Jun 2010 20:59:03 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> > From: Alex McKenzie > I disagree with John's definition of what it means to be a datagram > network. In my opinion, all that is required is the independent routing > of packets. He'd probably call that a packet network... :-) But I do think he has a bit of a point, though, that the _service interface_ offered to the user is important. For example, you could, today, build a network that was POTS user interface, but independently routed packets inside. (Nobody would bother to do such a crazy thing, I agree, but it's technically possible! :-) But I wouldn't really call the result a 'datagram network'.... > the ARPANET did go to great lengths to insure that messages, once > accepted, were correctly delivered to the recipient with high > probability. ARPANET also kept messages between a given pair of Hosts > in order. These two design decisions put a great deal of complication > into the IMPs. > It should be remembered, though, that the original concept of the > ARPANET was that each Host would contain a program to do all the > store-and-forward functions. It was Wes Clark's idea that a > minicomputer should sit next to each Host to do all the hard jobs > ... so that each Host did not have to write programs to get these jobs > done. Larry Roberts was enthusiastic about this idea because it > provided a more cost-effective way of getting the programming done, > done on time, and done correctly. So it was a design decision that the > complexity SHOULD all go in the IMPs. Excellent point. 
Noel From bernie at fantasyfarm.com Tue Jun 1 18:35:18 2010 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Tue, 01 Jun 2010 21:35:18 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> References: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> Message-ID: <4C057D16.23135.534280B@bernie.fantasyfarm.com> On 1 Jun 2010 at 20:44, Noel Chiappa wrote: > And if you go look at the detail in 1822, there is an error code in there for > 'packet not received at the other end', with the implication that it's up to > the host to retry (although as we previously discussed some months back, no > host seems to have actually done so, since in practise the network was too > reliable to bother). I don't have a copy of 1822, but weren't there some kind of no-rfnm packets and perhaps that was an error for those? [it was what the folks doing speech over the ARPAnet were using for 'streaming audio', such as it was over 56K lines]. [gad, I'm getting really senile: I can't remember what those kinds of packets were but the setup was basically a foreshadow of the distinction later between using UDP or TCP] /Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From jeanjour at comcast.net Tue Jun 1 18:45:38 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 21:45:38 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> References: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> Message-ID: At 20:44 -0400 2010/06/01, Noel Chiappa wrote: > > From: John Day > > > Sorry, but neither Baran nor the ARPANET were a datagram network. > >Well, it all depends on how you define 'datagram network', doesn't it?! :-) > > > There are two aspects to being a datagram network: 1) the independent > > routing of the packets, and 2) the network does not try to recover all > > failures, but leaves most of that to the hosts. > >Those are both important, but I would say that 'no call setup' is equally >important. By your definition, an ATM network might be a 'datagram network' >(well, maybe not, I guess it doesn't have true independent routing of packets >through intermediate nodes). Strictly speaking there is always some form of "call setup" even if it is by "ad-hoc" means, i.e. some code it in or a management system configures it. Something must ensure there is something that expects the packet on the other end. There is no magic. The idea that connectionless has no connection set up turns out to be a problem of drawing boundaries so you can ignore it. It is there. You have to look to other characteristics, such as independence of routing, assuming higher layer reliability, etc. > >And if you go look at the detail in 1822, there is an error code in there for >'packet not received at the other end', with the implication that it's up to >the host to retry (although as we previously discussed some months back, no >host seems to have actually done so, since in practise the network was too >reliable to bother). Correct. Grossman once told me he remembered either Heart or Walden pounding a table in a meeting saying words to the effect "my network won't lose anything." ;-) > >Was the ARPANET a driven-snow pure datagram network? No, it didn't fully have >the 'unreliable network' thing. 
But it still was a huge step towards the pure >datagram network of today - it had packets, it had the pooled resource >allocation, it had no call setup, it had independent routing of packets, etc, >etc. All it was missing was the 'hosts do reliability' thing. Yes, this is my point, the paradigm shift was not a step function. Baran starts it, the ARPANET takes a few more steps, but conceptually it is CYCLADES that first puts all the elements together. You are aware that BBN guys were making monthly trips to INRIA while CYCLADES was being built to advise them and help them avoid some of the mistakes they made? (Walden told me that). I am beginning to suspect that you misunderstood my geologic analogy. It was not a question of right and wrong. Continental drift was not wrong, it was right. Continental drift got people to look at the problem which lead to further insights. Plate tectonics refined the concept One was not possible without the other. The same here. > >You may claim that this was the hard step intellectually, and that what came >before (all the stuff the ARPANET did) was sort of 'engineering necessity'. No, not a hard step. That is the whole point it wasn't. > >But I seem to recall at least once Vint saying that the 'hosts do >reliability' thing was just unavoidable for them once they tried to hook >SATNET (or a PRNET) to the ARPANET, that the ARPANET model just didn't work >once you tried to hook a number of networks together; that's exactly why >TCP/IP wound up looking the way it did. So there was 'engineering necessity' >there too. Yes, this is much later. As we moved out to encompass more things, the complexities began to arise. This is also lead to the realization later that the structure of the Data Link and Network Layers were not as simple as we first thought, nor was it always the same. > > >I think what may be going on here is that a lot of ideas that are 'obvious' >in retrospect aren't actually so obvious beforehand - and unless you lived >through the phase-change, you don't really appreciate, at a gut level, just >how 'non-obvious' they really were beforehand. > >(Like the WWW.... but I digress! :-) > >You and I never lived in a world in which the idea of a packet didn't exist, >so I'm not sure we can really understand how 'non-obvious' that idea was, >before Baran et al. You, I gather, did live through the idea of 'hosts do >reliability', so you probably do have an idea of how 'non-obvious' it was. >But perhaps that you lived through one, and not the other, is affecting your >analysis of how important they were, relative to each other? Well, actually I did live in a world with out packets (for a little bit) but as a computer person not as a telecom person. This is what I mean. As with most things in history, when talking to a participant, their view of what happened is conditioned by their history. They are interpreting events and facts in terms of their background. This is what I mean that for people of one age group whose background was telecom they had always thought communication meant one thing. For another group with an entirely different experience thinking in terms of distributed systems, it was something completely different. > >Yes, CYCLADES was very important in both i) floating the idea of 'hosts do >reliability', and ii) showing that a working network could actually be built >out of it... but much as I want to honour it (and see that it's remembered), >I still don't think it's as big a step as the step to the ARPANET. You needed both. 
Baran's ideas weren't real until there was the ARPANET. (The thing that saved our bacon was that it was the DoD who did and was willing to spend like crazy on it. Because otherwise it wouldn't have looked as good as it did. Imagine if the Net had 9.6K lines instead of 56K. It would have been seen entirely differently. But it was seeing the ARPANET that got Pouzin to thinking what the next step was. Pouzin's ideas weren't real until CYCLADES was operational. CYCLADES wouldn't have worked as well as it did without the input from BBN. You are looking at this too much as an either-or. It is the messiness that I am trying to draw attention to. And that it wasn't entirely a US effort. If you read this list one gets the idea that the only things that ever happened were either at ISI or MIT and few guys at Stanford. Also that for most of the world (of researchers) the Internet wasn't even on their screens. Perhaps it should have been. We might not be in the mess we currently find ourselves. This was a very creative period. There was huge exchange of ideas and some very insightful people. Take care, John > Noel From jeanjour at comcast.net Tue Jun 1 18:52:48 2010 From: jeanjour at comcast.net (John Day) Date: Tue, 1 Jun 2010 21:52:48 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> References: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> Message-ID: At 20:59 -0400 2010/06/01, Noel Chiappa wrote: > > From: Alex McKenzie > > > I disagree with John's definition of what it means to be a datagram > > network. In my opinion, all that is required is the independent routing > > of packets. > >He'd probably call that a packet network... :-) But I do think he has a bit >of a point, though, that the _service interface_ offered to the user is >important. > >For example, you could, today, build a network that was POTS user interface, >but independently routed packets inside. (Nobody would bother to do such a >crazy thing, I agree, but it's technically possible! :-) But I wouldn't >really call the result a 'datagram network'.... I would. Datagrams are internal function of the layer. Whether they are used has no reason to be visible over the layer boundary. One requests a certain form of service. The layer determines what it has to do to provide it. How it does it is none of your business. Hiding internal operation is what a layer is all about. > > > the ARPANET did go to great lengths to insure that messages, once > > accepted, were correctly delivered to the recipient with high > > probability. ARPANET also kept messages between a given pair of Hosts > > in order. These two design decisions put a great deal of complication > > into the IMPs. > > It should be remembered, though, that the original concept of the > > ARPANET was that each Host would contain a program to do all the > > store-and-forward functions. It was Wes Clark's idea that a > > minicomputer should sit next to each Host to do all the hard jobs > > ... so that each Host did not have to write programs to get these jobs > > done. Larry Roberts was enthusiastic about this idea because it > > provided a more cost-effective way of getting the programming done, > > done on time, and done correctly. So it was a design decision that the > > complexity SHOULD all go in the IMPs. > >Excellent point. Right. The IMP as front end. 
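John's "hiding internal operation is what a layer is all about" can be put in a few lines; this is only a schematic of the idea, with invented class names and nothing to do with any real stack: the service interface stays fixed while the mechanism behind it is swapped.

class DeliveryService:
    """What the layer promises its user: deliver(dst, data). How is hidden."""
    def deliver(self, dst, data):
        raise NotImplementedError

class OverDatagrams(DeliveryService):
    def deliver(self, dst, data):
        # internally: independently routed packets plus end-host retransmission
        return f"via datagrams -> {dst}: {data}"

class OverVirtualCircuits(DeliveryService):
    def deliver(self, dst, data):
        # internally: call setup and in-order delivery along a pinned-down path
        return f"via a circuit -> {dst}: {data}"

def user_of_the_layer(service):
    # The user asks for a form of service; it cannot tell (and need not care)
    # whether datagrams or circuits are doing the work underneath.
    return service.deliver("hostB", "hello")

print(user_of_the_layer(OverDatagrams()))
print(user_of_the_layer(OverVirtualCircuits()))

Noel's hypothetical POTS interface over independently routed packets, and the BT 21CN example later in the thread, are the same picture with different labels.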
I remember the RFCs discussing the buffering problems after the Christmas lock up and coming to the conclusion that no amount of memory in the IMP would guarantee it couldn't happen, so the switches would have to be able to throw things away and final reassembly would have to be in the hosts. > > Noel From mbaer at cs.tu-berlin.de Wed Jun 2 01:56:42 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Wed, 02 Jun 2010 10:56:42 +0200 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> Message-ID: <4C061CCA.1070304@cs.tu-berlin.de> On 06/01/2010 11:46 PM, John Day wrote: >> Best effort to me seems absolutely central to the "Internet >> architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6 >> which really nicely elaborates the notion. > > This is the contribution from Pouzin implemented in CYCLADES, which > Metcalfe picks up on for the more limited environment of the LAN. > Pouzin's contribution notwithstanding, Metcalfe's thesis' chapter 6 to me is the first proper elaboration of best effort as a philosophy; I spare you the copious quotes, it is readily available on the web. Just a brief one: \begin{quote} Imagine that we are a component process in the midst of some large system. There are two extreme attitudes we might have toward the system and toward the several component processes upon which we depend. We might believe the processes around us to be so reliable, irreplaceable, and interdependent that, if one should fail, there would be little point in trying to carry on. Or, we might believe the processes around us to be so unreliable, expendable, and independent that, if some should fail, there would be considerable potential in our being able to patch things up to struggle on, weakened, but doing our job. This second attitude is characteristic of what we call the ``best-efforts'' philosophy of interprocess communication; it is based on our desire to give the system our best efforts and, to do so, on our expecting only as much from the processes upon which we depend. (pp.\,6-25\,f.) \end{quote} To my knowledge, Pouzin has never put it that clearly in writing. Also, best effort may be argued to have been a principle that was applied to the Arpanet before Pouzin's Cyclades. Sure, there were VCs, but, in all, failure and the recovery from such was very much a default assumption in the whole system (a point that Metcalfe acknowledges, too). Matthias From mbaer at cs.tu-berlin.de Wed Jun 2 02:08:15 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Wed, 02 Jun 2010 11:08:15 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C057D16.23135.534280B@bernie.fantasyfarm.com> References: <20100602004458.8C9976BE60B@mercury.lcs.mit.edu> <4C057D16.23135.534280B@bernie.fantasyfarm.com> Message-ID: <4C061F7F.8090700@cs.tu-berlin.de> On 06/02/2010 03:35 AM, Bernie Cosell wrote: > On 1 Jun 2010 at 20:44, Noel Chiappa wrote: > >> And if you go look at the detail in 1822, there is an error code in there for >> 'packet not received at the other end', with the implication that it's up to >> the host to retry (although as we previously discussed some months back, no >> host seems to have actually done so, since in practise the network was too >> reliable to bother). > > I don't have a copy of 1822, but weren't there some kind of no-rfnm > packets and perhaps that was an error for those? 
[it was what the folks > doing speech over the ARPAnet were using for 'streaming audio', such as > it was over 56K lines]. [gad, I'm getting really senile: I can't > remember what those kinds of packets were but the setup was basically a > foreshadow of the distinction later between using UDP or TCP] Sure, the "raw messages", introduced in 1974 -- one packet messages, no flow control, no end-to-end acknowledgments, just sending it off and hoping for the best. They were used internally to get IMP statistics to the NCC, and used by a handful of sites in packet speech experiments. Matthias > > /Bernie\ > -- Matthias B?rwolff www.b?rwolff.de From johnl at iecc.com Wed Jun 2 07:14:42 2010 From: johnl at iecc.com (John Levine) Date: 2 Jun 2010 14:14:42 -0000 Subject: [ih] principles of the internet In-Reply-To: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> Message-ID: <20100602141442.39467.qmail@joyce.lan> >For example, you could, today, build a network that was POTS user interface, >but independently routed packets inside. (Nobody would bother to do such a >crazy thing, I agree, but it's technically possible! :-) But I wouldn't >really call the result a 'datagram network'.... I was under the impression that's how most current phone switches and transmission networks work. All IP on the inside, backwards compatible interfaces at the edges. R's, John From dot at dotat.at Wed Jun 2 07:25:40 2010 From: dot at dotat.at (Tony Finch) Date: Wed, 2 Jun 2010 15:25:40 +0100 Subject: [ih] principles of the internet In-Reply-To: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> References: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> Message-ID: On Tue, 1 Jun 2010, Noel Chiappa wrote: > For example, you could, today, build a network that was POTS user interface, > but independently routed packets inside. (Nobody would bother to do such a > crazy thing, I agree, but it's technically possible! :-) That's what the BT 21CN project is about: replacing the core of the telephone network with VOIP. The edges remain POTS for most subscribers. Tony. -- f.anthony.n.finch http://dotat.at/ THAMES DOVER WIGHT: MAINLY NORTHEASTERLY 3 OR 4, INCREASING 5 AT TIMES LATER. SLIGHT. FOG PATCHES. MODERATE OR GOOD, OCCASIONALLY VERY POOR. From mfidelman at meetinghouse.net Wed Jun 2 08:37:35 2010 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Wed, 02 Jun 2010 11:37:35 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100602141442.39467.qmail@joyce.lan> References: <20100602141442.39467.qmail@joyce.lan> Message-ID: <4C067ABF.8060609@meetinghouse.net> John Levine wrote: >> For example, you could, today, build a network that was POTS user interface, >> but independently routed packets inside. (Nobody would bother to do such a >> crazy thing, I agree, but it's technically possible! :-) But I wouldn't >> really call the result a 'datagram network'.... >> > I was under the impression that's how most current phone switches and > transmission networks work. All IP on the inside, backwards compatible > interfaces at the edges. > I'm not sure I'd consider IP over MPLS or ATM as "independently routed" packets. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... 
Yogi Berra From dcrocker at gmail.com Wed Jun 2 10:40:44 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Wed, 02 Jun 2010 10:40:44 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> Message-ID: <4C06979C.9080603@gmail.com> On 6/2/2010 7:25 AM, Tony Finch wrote: > On Tue, 1 Jun 2010, Noel Chiappa wrote: >> For example, you could, today, build a network that was POTS user interface, >> but independently routed packets inside. ... > That's what the BT 21CN project is about: replacing the core of the > telephone network with VOIP. The edges remain POTS for most subscribers. This model of retaining interface behaviors while replacing the internals is remarkably well-established. Netbios and Ethernet are salient examples for this group. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From craig at aland.bbn.com Wed Jun 2 11:17:10 2010 From: craig at aland.bbn.com (Craig Partridge) Date: Wed, 02 Jun 2010 14:17:10 -0400 Subject: [ih] principles of the internet Message-ID: <20100602181710.DF34728E137@aland.bbn.com> > This model of retaining interface behaviors while replacing the internals is > remarkably well-established. > > Netbios and Ethernet are salient examples for this group. > > d/ Agreed, and yet it took the technical community a long time to grasp the concept. Consider that the whole ATM debate in the 1990s, in retrospect, was a question of whether to force end systems to use an API designed to make it easy to switch data -- after some fussing, we put the switches inside routers and convert to-and-from IP packet formats inside the router at the edges of the switch. SONET, at its core, is a specification for moving groups of voice samples at 125 microsecond intervals. Enjoy! Craig From jnc at mercury.lcs.mit.edu Wed Jun 2 12:18:32 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 2 Jun 2010 15:18:32 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100602191832.6C6A36BE626@mercury.lcs.mit.edu> > From: John Day > Strictly speaking there is always some form of "call setup" even if it > is by "ad-hoc" means, i.e. some code it in or a management system > configures it. Something must ensure there is something that expects > the packet on the other end. I thought we were talking about the network, not the applications? There's clearly a big difference _in the network_ if it has some sort of call setup, or if it's pure datagram (send packets to anywhere, anytime, no prior anything). > the paradigm shift was not a step function. Baran starts it, the > ARPANET takes a few more steps, but conceptually it is CYCLADES that > first puts all the elements together. > ... > Continental drift got people to look at the problem which lead to > further insights. Plate tectonics refined the concept One was not > possible without the other. Ah, got it. Yes, I think we agree - but I think we have been all along, we've only been arguing about how big the various steps are in relationship to each other... :-) > The thing that saved our bacon was that it was the DoD who did and was > willing to spend like crazy on it. Because otherwise it wouldn't have > looked as good as it did. Not just DoD, but DARPA specifically. Remember the story about how some part of DoD was about to be dragooned into doing packets, and Baran pulled the plug because he knew they'd screw it up, and he knew that that would taint packet switching for a long time, so it was better to can the effort before it did that.
Too lazy to go look it up (don't recall exactly where I read it, so it might take a while to find), but it shows great smarts on his part, IMO. Noel From jeanjour at comcast.net Wed Jun 2 13:03:56 2010 From: jeanjour at comcast.net (John Day) Date: Wed, 2 Jun 2010 16:03:56 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100602191832.6C6A36BE626@mercury.lcs.mit.edu> References: <20100602191832.6C6A36BE626@mercury.lcs.mit.edu> Message-ID: Don't get me wrong. The ARPANET was huge in terms of proving that Baran's ideas would work and over provisioning made sure that we were able to try some pretty advanced stuff and make it work. When I tell students that we had a PC with a touch screen accessing distributed databases over the Net in 1975, they think I am pulling their leg. In terms of conceptual advances, first there was packet switching and then the refinement datagrams. CYCLADES wasn't as influential as it should have been because Louis ran afoul of the French PTT. That is something else I have thought was significant. The ARPANET had the DOD as a protector even if ATT didn't occupy the same political position as the PTT, whereas IRIA hand no where near the political clout to protect CYCLADES. We were very lucky. At 15:18 -0400 2010/06/02, Noel Chiappa wrote: > > From: John Day > > > Strictly speaking there is always some form of "call setup" even if it > > is by "ad-hoc" means, i.e. some code it in or a management system > > configures it. Something must ensure there is something that expects > > the packet on the other end. > >I thought we were talking about the network, not the applications? There's >clearly a big difference _in the network_ if it has some sort of call setup, >or if it's pure datagram (send packets to anywhere, anytime, no prior >anything). > > > the paradigm shift was not a step function. Baran starts it, the > > ARPANET takes a few more steps, but conceptually it is CYCLADES that > > first puts all the elements together. > > ... > > Continental drift got people to look at the problem which lead to > > further insights. Plate tectonics refined the concept One was not > > possible without the other. > >Ah, got it. Yes, I think we agree - but I think we have been all along, we've >only been arguing about how big the various steps are in relationship to each >other... :-) > > > > The thing that saved our bacon was that it was the DoD who did and was > > willing to spend like crazy on it. Because otherwise it wouldn't have > > looked as good as it did. > >Not just DoD, but DARPA specifically. Remember the story about how some part >of DoD was about to be dragooned into doing packets, and Baran pulled the >plug because he knew they'd screw it up, and he knew that that would taint >packet switching for a long time, so it was better to can the effort before >it did that. > >Too lazy to go look it up (don't recall exactly where I read it, so it might >take a while to find), but it shows great smarts on his part, IMO. > > Noel From richard at bennett.com Wed Jun 2 16:51:08 2010 From: richard at bennett.com (Richard Bennett) Date: Wed, 02 Jun 2010 16:51:08 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C061CCA.1070304@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> Message-ID: <4C06EE6C.3020602@bennett.com> An HTML attachment was scrubbed... 
URL: From richard at bennett.com Wed Jun 2 16:55:01 2010 From: richard at bennett.com (Richard Bennett) Date: Wed, 02 Jun 2010 16:55:01 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C06979C.9080603@gmail.com> References: <20100602005903.A01A56BE60B@mercury.lcs.mit.edu> <4C06979C.9080603@gmail.com> Message-ID: <4C06EF55.5040505@bennett.com> An HTML attachment was scrubbed... URL: From mbaer at cs.tu-berlin.de Thu Jun 3 01:08:39 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Thu, 03 Jun 2010 10:08:39 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C06EE6C.3020602@bennett.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C06EE6C.3020602@bennett.com> Message-ID: <4C076307.6050307@cs.tu-berlin.de> On 06/03/2010 01:51 AM, Richard Bennett wrote: > Um, have you ever written code, Matthias? Every programmer has had to deal with Does this make a difference to our argument? > the question of how much checking you do on the inputs to your function and how > you recover from errors returned by the functions you call. You can arrange the > choices about how to address these questions according to some continuum with > unreasonable absolutes at the boundaries and engage in pseudo-philosophical > discourse over the nature of reality and consciousness, or you can make a > pragmatic decision based on the nature of the problem you're trying to solve and > your experience with the system. I may be missing something, but I don't quite see the point you're trying to make. Of course, without more or less tacit assumptions, functional bindings on statistical grounds, and even hard bilateral state you don't get anywhere -- but how does that change an overall philosophical default assumption that one may draw? > > A good system programmer doesn't try to apply some dogmatic rule set about the > correct way to do IPC, he makes decisions grounded in a realistic assessment of > the behavior of the components in question. If you look a how the Internet > actually works - as opposed to the musings of graduate students - you'll see > that it doesn't actually implement a "best efforts" model per the definition you > cite. TCP assumes an extremely high degree of packet integrity, so much so that > it can safely ascribe packet loss to congestion rather than line noise or > wireless collisions. That's not the scenario in Metcalfe's thesis by any stretch > of the imagination. See above; again I don't see how the TCP example is changing Metcalfe's argument at all, which is not about what you make out of the components you find, but about the nature of things (if that term doesn't put you off too much) in a thin-wire setting. To quote from Metcalfe once more: "But why make an issue out of something as simple as this ``best-efforts'' idea? Why call it a philosophy? Why give it a name at all? For the simple reason that, without a conscious effort to do otherwise, computer people (especially) find it easy to neglect the potential offered by thin-wire isolation \emdash they've worked in centralized environments for so long." > > You also need to be careful about the use of the expression "best efforts" as it > means at least three different things now days. Let me know. Thanks. 
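On Richard's point that TCP can afford to read a lost packet as a congestion signal: the machinery he is alluding to looks, in caricature, like the sketch below. This is a simplified Jacobson-style AIMD update of my own, not an excerpt of any real TCP implementation.

def update_cwnd(cwnd, ssthresh, event, mss=1.0):
    """Return (cwnd, ssthresh) after one ACK or one loss indication."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += mss                      # slow start: roughly doubles per RTT
        else:
            cwnd += mss * mss / cwnd         # congestion avoidance: ~ +1 MSS per RTT
    elif event == "loss":
        ssthresh = max(cwnd / 2, 2 * mss)    # multiplicative decrease
        cwnd = ssthresh                      # (Reno-style; Tahoe would fall back to 1 MSS)
    return cwnd, ssthresh

cwnd, ssthresh = 1.0, 64.0
for event in ["ack"] * 10 + ["loss"] + ["ack"] * 5:
    cwnd, ssthresh = update_cwnd(cwnd, ssthresh, event)
print(round(cwnd, 2), ssthresh)

Nothing in it asks whether the loss came from noise or from an overflowing queue; the design only makes sense if, as Richard says, the links are clean enough that a loss almost always means congestion.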
> > RB > > On 6/2/2010 1:56 AM, Matthias B?rwolff wrote: >> >> On 06/01/2010 11:46 PM, John Day wrote: >> >>>> Best effort to me seems absolutely central to the "Internet >>>> architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6 >>>> which really nicely elaborates the notion. >>>> >>> This is the contribution from Pouzin implemented in CYCLADES, which >>> Metcalfe picks up on for the more limited environment of the LAN. >>> >>> >> Pouzin's contribution notwithstanding, Metcalfe's thesis' chapter 6 to >> me is the first proper elaboration of best effort as a philosophy; I >> spare you the copious quotes, it is readily available on the web. Just a >> brief one: >> >> \begin{quote} >> Imagine that we are a component process in the midst of some large >> system. There are two extreme attitudes we might have toward the system >> and toward the several component processes upon which we depend. We >> might believe the processes around us to be so reliable, irreplaceable, >> and interdependent that, if one should fail, there would be little point >> in trying to carry on. Or, we might believe the processes around us to >> be so unreliable, expendable, and independent that, if some should fail, >> there would be considerable potential in our being able to patch things >> up to struggle on, weakened, but doing our job. This second attitude is >> characteristic of what we call the ``best-efforts'' philosophy of >> interprocess communication; it is based on our desire to give the system >> our best efforts and, to do so, on our expecting only as much from the >> processes upon which we depend. >> (pp.\,6-25\,f.) >> \end{quote} >> >> To my knowledge, Pouzin has never put it that clearly in writing. >> >> Also, best effort may be argued to have been a principle that was >> applied to the Arpanet before Pouzin's Cyclades. Sure, there were VCs, >> but, in all, failure and the recovery from such was very much a default >> assumption in the whole system (a point that Metcalfe acknowledges, too). >> >> Matthias >> > > -- > Richard Bennett > Research Fellow > Information Technology and Innovation Foundation > Washington, DC > -- Matthias B?rwolff www.b?rwolff.de From dcrocker at gmail.com Thu Jun 3 06:17:12 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 06:17:12 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C061CCA.1070304@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> Message-ID: <4C07AB58.7090306@gmail.com> On 6/2/2010 1:56 AM, Matthias B?rwolff wrote: > Pouzin's contribution notwithstanding, Metcalfe's thesis' chapter 6 to > me is the first proper elaboration of best effort as a philosophy; ... > To my knowledge, Pouzin has never put it that clearly in writing. I think one of the other postings made a comment similar to what I'm going to say here, but just to underscore my own sense of that period: It was quite common for things to be documented very much post hoc. This gives a highly skewed view to diligent historians reading the literature, but it makes near-term efforts at oral history particularly valuable. One of the issues emerging from some of the sub-threads in this exchange is the need to be clear and precise about the application of a term or concept. Given all the layering that these systems have/had, one layer might have had a property that another did not. 
(For example, Arpanet IMP was classic stateless packet/message model, while NCP was virtual, end-to-end circuits.) This certainly means that debate, about whether a particular characteristic was present in a particular system, needs to be specific about the specific /part/ or layer of the system that did or did not have the characteristic. As for best-effort, certainly Alohanet was the epitome of the construct and, of course, that predated Alohanet. (Metcalfe's Ethernet design started from a paper he was given, describing Alohanet.) The complexity of the Arpanet design and layering might permit a bit of debate about whether it qualified as being based on best effort. Alohanet's simplicity does not (permit debate.) d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From dcrocker at gmail.com Thu Jun 3 06:51:53 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 06:51:53 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C07AB58.7090306@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> Message-ID: <4C07B379.3040806@gmail.com> On 6/3/2010 6:17 AM, Dave Crocker wrote: > As for best-effort, certainly Alohanet was the epitome of the construct > and, of course, that predated Alohanet. (Metcalfe's Ethernet design > started from a paper he was given, describing Alohanet.) I've been privately informed of a possible typographical error in this text. Having so far only had one cup of coffee, I am certainly having some difficulty with the concept of something preceding itself. But I hadn't even finished that cup when typing the previous note, which I'd love to blame for the error... Anyhow, yeah, I meant Alohanet preceded Ethernet, as I hope folks guessed from the parenthetical. But this being a history discussion, it's worth documenting the correction. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From mbaer at cs.tu-berlin.de Thu Jun 3 07:44:29 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Thu, 03 Jun 2010 16:44:29 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C07AB58.7090306@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> Message-ID: <4C07BFCD.4070904@cs.tu-berlin.de> On 06/03/2010 03:17 PM, Dave Crocker wrote: > > > On 6/2/2010 1:56 AM, Matthias B?rwolff wrote: >> Pouzin's contribution notwithstanding, Metcalfe's thesis' chapter 6 to >> me is the first proper elaboration of best effort as a philosophy; > ... >> To my knowledge, Pouzin has never put it that clearly in writing. > > > I think one of the other postings made a comment similar to what I'm > going to say here, but just to underscore my own sense of that period: > > It was quite common for things to be documented very much post hoc. > This gives a highly skewed view to diligent historians reading the > literature, but it makes near-term efforts at oral history particularly > valuable. I appreciate this point. (Which is one of the reasons I keep hitting this list with my questions.) Without heavy triangulation and extreme caution it is hard to draw meaningful conclusions. (And sure enough there have been plenty of failed attempts in the secondary literature.) 
> > One of the issues emerging from some of the sub-threads in this exchange > is the need to be clear and precise about the application of a term or > concept. Given all the layering that these systems have/had, one layer > might have had a property that another did not. (For example, Arpanet > IMP was classic stateless packet/message model, while NCP was virtual, > end-to-end circuits.) > > This certainly means that debate, about whether a particular > characteristic was present in a particular system, needs to be specific > about the specific /part/ or layer of the system that did or did not > have the characteristic. To return to my initial go with the list of principles, let me put forward a revised version applicable to the classic definition of the Internet ("roughly transitive closure of IP-speaking systems"): The classic end-to-end arguments (as exemplified by the case of file transfer to the standard of the application ends, not the more fuzzy reasonings about possibly excessive efforts in the network to the detriment of other applications) place a lower bound on the level and type of functionality that needs to sit with the application ends (which may not have to be very much). Economic efficiency concerns and the imperative of complexity avoidance further narrow the actual balance of functions (in the abstract). Plus, the concern for economy of interface between application ends and intermediary nodes severely limits the scope for having functions implemented in some cooperative way (see congestion control, two open-loop systems; routing, almost exclusively in the network; and fragmentation, completely with the end hosts), discrete parts will thus fall to either side, and the communication, if any, will be implicit rather than explicit (see the fate of IP options and ICMP messages). So far, we still have plenty of scope for functions in the network. However, once we take minimal coupling, least privilege, cascadability, and best effort into account, there is actually a fairly low upper bound on what functions the intermediary network nodes may assume without collapsing the potential scale of the Internet to trivial proportions. That sort of summarizes my current gut feeling about the reality of the Internet (and silently neglects the point that the Internet today is probably anything but IP connectivity). > > As for best-effort, certainly Alohanet was the epitome of the construct > and, of course, that predated Alohanet. (Metcalfe's Ethernet design > started from a paper he was given, describing Alohanet.) > > The complexity of the Arpanet design and layering might permit a bit of > debate about whether it qualified as being based on best effort. > Alohanet's simplicity does not (permit debate.) Why not? Alohanet (pure Aloha) capacity was at 1/(2e), and downlink data was not even acknowledged (for performance reasons). Ethernet capacity has been at some 98 percent right away, and packets hardly ever got lost (safe in screwed up installations). Best effort certainly doesn's mean no effort (as in Alohanet), but has probably always meant "reasonable", "sane" effort. (But that's just my two cents, Richard was gonna enlighten us us to the three meanings of best effort.) 
Matthias > > > d/ -- Matthias B?rwolff www.b?rwolff.de From dcrocker at gmail.com Thu Jun 3 08:10:16 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 08:10:16 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C07BFCD.4070904@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> Message-ID: <4C07C5D8.3080409@gmail.com> On 6/3/2010 7:44 AM, Matthias B?rwolff wrote: > To return to my initial go with the list of principles, let me put > forward a revised version applicable to the classic definition of the > Internet ("roughly transitive closure of IP-speaking systems"): Since you are providing a precise and, yes, popular view of the definition of the Internet, I'll offer an elaboration rather than correct. (I'm assuming that "transitive closure" is meant to imply any IP speaker with direct or routed access to the public, default-free multi-provider backbone.) The elaboration is based on the difference between focusing on layer 3 versus focusing on layer 7. For details: To Be "On" the Internet For some unknown reason, my own preference is to focus on perspective of the application layer, and not be distracted by possible limitations or variations at layer 3, unless they get in the way of application use. But that's just me... (and most users.) > So far, we still have plenty of scope for functions in the network. > However, once we take minimal coupling, least privilege, cascadability, > and best effort into account, there is actually a fairly low upper bound > on what functions the intermediary network nodes may assume without > collapsing the potential scale of the Internet to trivial proportions. The real challenge is justifying this assessment well enough to guide future designers. There is a persistent tendency to believe that changes are easy to make within the infrastructure. >> The complexity of the Arpanet design and layering might permit a bit of >> debate about whether it qualified as being based on best effort. >> Alohanet's simplicity does not (permit debate.) > > Why not? Alohanet (pure Aloha) capacity was at 1/(2e), and downlink data > was not even acknowledged (for performance reasons). Ethernet capacity > has been at some 98 percent right away, and packets hardly ever got lost > (safe in screwed up installations). Best effort certainly doesn's mean > no effort (as in Alohanet), but has probably always meant "reasonable", > "sane" effort. (But that's just my two cents, Richard was gonna > enlighten us us to the three meanings of best effort.) Alohanet was not no effort. It had retransmission. But it merely kept things extremely simple. That its version of 'best' was relatively poor and that the Ethernet's version was a lot better does not make either less an example of 'best'. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From mbaer at cs.tu-berlin.de Thu Jun 3 08:48:53 2010 From: mbaer at cs.tu-berlin.de (=?ISO-8859-1?Q?Matthias_B=E4rwolff?=) Date: Thu, 03 Jun 2010 17:48:53 +0200 Subject: [ih] principles of the internet In-Reply-To: <4C07C5D8.3080409@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07C5D8.3080409@gmail.com> Message-ID: <4C07CEE5.5080201@cs.tu-berlin.de> On 06/03/2010 05:10 PM, Dave Crocker wrote: > > >>> The complexity of the Arpanet design and layering might permit a bit of >>> debate about whether it qualified as being based on best effort. >>> Alohanet's simplicity does not (permit debate.) >> >> Why not? Alohanet (pure Aloha) capacity was at 1/(2e), and downlink data >> was not even acknowledged (for performance reasons). Ethernet capacity >> has been at some 98 percent right away, and packets hardly ever got lost >> (safe in screwed up installations). Best effort certainly doesn's mean >> no effort (as in Alohanet), but has probably always meant "reasonable", >> "sane" effort. (But that's just my two cents, Richard was gonna >> enlighten us us to the three meanings of best effort.) > > > Alohanet was not no effort. It had retransmission. > > But it merely kept things extremely simple. That its version of 'best' > was relatively poor and that the Ethernet's version was a lot better > does not make either less an example of 'best'. > Just an aside correction: Alohanet only had acknowledgments and retransmission on the uplink. The broadcast channel (from the central hub to all stations) had no retransmission as it was considered reasonably reliable (there could be no collisions, after all). Another fun fact about the Alohanet: a station would have retransmitted only 2 times, after which it would give up and leave retransmissions to the user -- the rationale being that this way the third retransmission interval would be larger and more random so as to avoid yet another collision. It rarely happened given the fairly low volume use of the network, but it was in the specs (probably written after the fact, but anyway). Matthias > d/ -- Matthias B?rwolff www.b?rwolff.de From jeanjour at comcast.net Thu Jun 3 09:48:24 2010 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Jun 2010 12:48:24 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C07BFCD.4070904@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> Message-ID: > > >Why not? Alohanet (pure Aloha) capacity was at 1/(2e), and downlink data >was not even acknowledged (for performance reasons). Ethernet capacity >has been at some 98 percent right away, and packets hardly ever got lost >(safe in screwed up installations). Best effort certainly doesn's mean >no effort (as in Alohanet), but has probably always meant "reasonable", >"sane" effort. (But that's just my two cents, Richard was gonna >enlighten us us to the three meanings of best effort.) Am I missing something here? To my mind, Alohanet and Ethernet make the same "effort." The difference is the theoretical limit of the media. Ethernet is higher because the aether has been replaced by coax. 
Not because Ethernet made more effort than Alohanet. I think I am seconding David's comment. >Matthias > >> >> >> d/ > >-- >Matthias B?rwolff >www.b?rwolff.de
From dcrocker at gmail.com Thu Jun 3 10:04:35 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 10:04:35 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> Message-ID: <4C07E0A3.3030300@gmail.com>
> Ethernet is higher because the aether has been replaced by coax. Not > because Ethernet made more effort than Alohanet. I have always understood that the difference was due to the differing particulars of problem detection and recovery mechanisms, rather than due to the medium itself. And the particular radio arrangement used by the Alohanet folk was not the only medium lacking carrier sense and/or collision detect possibilities. But exponential backoff was a fundamental improvement, which probably explains why it became the standard model for retries. And, of course, it's entirely medium independent. I vaguely recall coming across a study that evaluated the incremental benefit of each of Ethernet's features, in terms of improved channel use. But since I don't work in this space, I never retain the details of such tidbits. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net
From jnc at mercury.lcs.mit.edu Thu Jun 3 10:22:59 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 3 Jun 2010 13:22:59 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> > From: John Day > Alohanet and Ethernet make the same "effort." The difference is the > theoretical limit of the media. Ethernet is higher because the aether > has been replaced by coax. Not because Ethernet made more effort than > Alohanet. Not sure if this is what you're referring to with your reference to "aether ... replaced by coax", but... Ethernet's channel access algorithm (CSMA-CD) is slightly different from Aloha's (which was, IIRC, CSMA). (To get the -CD to work semi-reliably they had to limit the network's physical size, and increase the minimum packet size, in ways that weren't feasible with the aether.) I seem to recall that adding the -CD upped the theoretical throughput (as a %-age of channel bit rate) considerably. Noel
From jeanjour at comcast.net Thu Jun 3 10:29:45 2010 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Jun 2010 13:29:45 -0400 Subject: [ih] principles of the internet In-Reply-To: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> References: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> Message-ID: Well, yea, that too! ;-) The point is that the difference between Aloha and Ethernet is mostly physics, not effort! ;-) (more or less) Ensuring that all receivers could hear all transmitters (which also requires bounding the length) is what makes the main difference, by greatly reducing the probability of a collision. At 13:22 -0400 2010/06/03, Noel Chiappa wrote: > > From: John Day > > > Alohanet and Ethernet make the same "effort." The difference is the > > theoretical limit of the media. Ethernet is higher because the aether > > has been replaced by coax. Not because Ethernet made more effort than > > Alohanet. > >Not sure if this is what you're referring to with your reference to "aether >... replaced by coax", but...
Ethernet's channel access algorithm (CSMA-CD) is >slightly different from Aloha's (which was, IIRC, CSMA). (To get the -CD to >work semi-reliably they had to limit the network's physical size, and increase >the minimum packet size, in ways that weren't feasible with the aether.) I >seem to recall that adding the -CD upped the theoretical throughput (as a >%-age of channel bit rate) considerably. > > Noel From jeanjour at comcast.net Thu Jun 3 10:44:42 2010 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Jun 2010 13:44:42 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C07BFCD.4070904@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> Message-ID: > > >To return to my initial go with the list of principles, let me put >forward a revised version applicable to the classic definition of the >Internet ("roughly transitive closure of IP-speaking systems"): > >The classic end-to-end arguments (as exemplified by the case of file >transfer to the standard of the application ends, not the more fuzzy >reasonings about possibly excessive efforts in the network to the >detriment of other applications) place a lower bound on the level and >type of functionality that needs to sit with the application ends (which >may not have to be very much). Sorry I find the above very difficult to parse. Can you try again? What classic e2e arguments are you referring to? Those in 1972 or those in 1982? > >Economic efficiency concerns and the imperative of complexity avoidance >further narrow the actual balance of functions (in the abstract). Plus, >the concern for economy of interface between application ends and What does this mean? >intermediary nodes severely limits the scope for having functions >implemented in some cooperative way (see congestion control, two >open-loop systems; routing, almost exclusively in the network; and Probably shouldn't have been and isn't now. This was hold over beads-on-a-string think. >fragmentation, completely with the end hosts), discrete parts will thus Fragmentation is not limited to the hosts. Reassembly is, not fragmentation. >fall to either side, and the communication, if any, will be implicit >rather than explicit (see the fate of IP options and ICMP messages). > >So far, we still have plenty of scope for functions in the network. >However, once we take minimal coupling, least privilege, cascadability, >and best effort into account, there is actually a fairly low upper bound >on what functions the intermediary network nodes may assume without >collapsing the potential scale of the Internet to trivial proportions. > >That sort of summarizes my current gut feeling about the reality of the >Internet (and silently neglects the point that the Internet today is >probably anything but IP connectivity). If I can make any sense out of this at all, it does not ring true for me. In the ARPANET and other early networks, it always seemed to me that big difference was that this was being done by Operating System people, not telecom people. Hence there was an attempt to put an OS imprint on it. Make it look like it would look if they were facilities of an OS. That is why I think (and others) believed it succeeded. One of the things most perturbing in the Internet of the last 20 years is the degree to which telecom concepts have reasserted themselves. 
As for other issues, we talked of principles such as relaying at layer (N) required error control at (N+1). The media access relayed (MAC), data link any necessary error control, network relayed, transport did any necessary error control, applications could relay (and hence need routing) but mail didn't do end to end error control, but check-point recover in FTP did. I don't see the OS perspective in all of this and it was key to our thinking. At least for a lot of us. Have you looked at Distributed Systems Architecture and Implementations by Lampson, et al.? Take care, John From dcrocker at gmail.com Thu Jun 3 11:10:08 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 11:10:08 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> Message-ID: <4C07F000.80408@gmail.com> > applications could relay (and hence need > routing) but mail didn't do end to end error control, but check-point > recover in FTP did. It's worth noting that check-point restart was a later addition to FTP. My impression is that it's had limited uptake. Email later added data integrity, through MIME Content-MD5, and as a side-effect of content protection through PEM, PGP or S/MIME and, more recently, DKIM, if one is loose about the function. It also added delivery confirmation and error reporting via DSN and MDN notifications. It had non-standardized non-delivery notices pretty much from the start; these were later standardized. These all provide a pretty good platform for developing a re-transmission mechanism, if the sender wishes to pursue it. (email relaying was also a later value-added mechanism, first through Parc's internal conventions, UUCP such as at Berkeley and CSNet at Udel; later standardized through DNS MX records, but let's not go down that... path.) d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From dcrocker at gmail.com Thu Jun 3 11:18:18 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 11:18:18 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> Message-ID: <4C07F1EA.4050301@gmail.com> > The point is that difference between Aloha > and Ethernet is mostly physics not effort! ;-) (more or less) The point of my earlier posting is that my impression from the analyses I've scanned of the beneficial effects of different control components for access is that it is far more productive to analyze in terms of those mechanisms that were employed or not, rather than in terms of channel physics. Some differences due to channel physics, sure. "Main" or "Major" difference, possibly not. 
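To make the "mechanisms" point concrete, the exponential backoff mentioned a few messages back is small enough to sketch. A minimal, illustrative version of truncated binary exponential backoff in the classic Ethernet style (the constants -- 51.2 us slot time, a 10-doubling cap, 16 attempts -- are the usual 10 Mb/s figures and are assumptions for this sketch, not something stated in the thread):

    import random
    import time

    SLOT_TIME_US = 51.2    # 512 bit times at 10 Mb/s
    BACKOFF_LIMIT = 10     # stop doubling the window after 10 collisions ("truncated")
    ATTEMPT_LIMIT = 16     # give up and report excessive collisions after 16 tries

    def backoff_slots(collisions):
        # After the n-th collision, pick a delay uniformly from
        # [0, 2^min(n, BACKOFF_LIMIT) - 1] slot times.
        return random.randint(0, 2 ** min(collisions, BACKOFF_LIMIT) - 1)

    def send_frame(try_transmit):
        # try_transmit() models one attempt: True on success, False on collision.
        for collisions in range(1, ATTEMPT_LIMIT + 1):
            if try_transmit():
                return True
            time.sleep(backoff_slots(collisions) * SLOT_TIME_US / 1e6)
        return False  # excessive collisions; leave recovery to the layer above

The doubling window is the medium-independent piece; nothing in it depends on coax versus radio.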
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Thu Jun 3 12:34:47 2010 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Jun 2010 15:34:47 -0400 Subject: [ih] principles of the internet In-Reply-To: <4C07F000.80408@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: At 11:10 -0700 2010/06/03, Dave Crocker wrote: >>applications could relay (and hence need >>routing) but mail didn't do end to end error control, but check-point >>recover in FTP did. > > >It's worth noting that check-point restart was a later addition to >FTP. My impression is that it's had limited uptake. Really. Later when? It was in the 1973 version. Also in the UK color books which actually had a sliding window. > >Email later added data integrity, through MIME Content-MD5, and as a >side-effect of content protection through PEM, PGP or S/MIME and, >more recently, DKIM, if one is loose about the function. It also >added delivery confirmation and error reporting via DSN and MDN >notifications. It had non-standardized non-delivery notices pretty >much from the start; these were later standardized. These all >provide a pretty good platform for developing a re-transmission >mechanism, if the sender wishes to pursue it. Integrity, yes. But not reliability. There is no MPL for mail. > >(email relaying was also a later value-added mechanism, first >through Parc's internal conventions, UUCP such as at Berkeley and >CSNet at Udel; later standardized through DNS MX records, but let's >not go down that... path.) Correct. Mail was originally two commands in FTP, but that must have been later. Take care, John > >d/ >-- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net From dcrocker at gmail.com Thu Jun 3 12:50:54 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Thu, 03 Jun 2010 12:50:54 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: <4C08079E.6050502@gmail.com> On 6/3/2010 12:34 PM, John Day wrote: > At 11:10 -0700 2010/06/03, Dave Crocker wrote: >> It's worth noting that check-point restart was a later addition to >> FTP. My impression is that it's had limited uptake. > > Really. Later when? It was in the 1973 version. Also in the UK color > books which actually had a sliding window. ahh. forgot that. >> >> Email later added data integrity, through MIME Content-MD5, and as a >> side-effect of content protection through PEM, PGP or S/MIME and, more >> recently, DKIM, if one is loose about the function. It also added >> delivery confirmation and error reporting via DSN and MDN >> notifications. It had non-standardized non-delivery notices pretty >> much from the start; these were later standardized. These all provide >> a pretty good platform for developing a re-transmission mechanism, if >> the sender wishes to pursue it. > > Integrity, yes. But not reliability. There is no MPL for mail. Please re-read the last sentence of my paragraph. 
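Since Content-MD5 came up just above: per RFC 1864 that field is simply the base64 encoding of an MD5 digest over the message body, so a minimal sketch looks like the following (real mail software must digest the canonical CRLF form of the body; that canonicalization step is skipped here):

    import base64
    import hashlib

    def content_md5(body: bytes) -> str:
        # RFC 1864: base64 of the 128-bit MD5 digest of the (canonical) body.
        return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

    # Example header line for a toy body.
    print("Content-MD5:", content_md5(b"Hello, world.\r\n"))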
>> (email relaying was also a later value-added mechanism, first through >> Parc's internal conventions, UUCP such as at Berkeley and CSNet at >> Udel; later standardized through DNS MX records, but let's not go down >> that... path.) > > Correct. Mail was originally two commands in FTP, but that must have > been later. It was also later than initial SMTP, although domain names were built into the email work of 1982. MX wasn't developed and viable for 3-5 more years. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net
From richard at bennett.com Thu Jun 3 13:16:26 2010 From: richard at bennett.com (Richard Bennett) Date: Thu, 03 Jun 2010 13:16:26 -0700 Subject: [ih] principles of the internet In-Reply-To: <4C07CEE5.5080201@cs.tu-berlin.de> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07C5D8.3080409@gmail.com> <4C07CEE5.5080201@cs.tu-berlin.de> Message-ID: <4C080D9A.5080806@bennett.com> An HTML attachment was scrubbed... URL:
From richard at bennett.com Thu Jun 3 13:28:02 2010 From: richard at bennett.com (Richard Bennett) Date: Thu, 03 Jun 2010 13:28:02 -0700 Subject: [ih] principles of the internet In-Reply-To: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> References: <20100603172259.4C05F6BE5D8@mercury.lcs.mit.edu> Message-ID: <4C081052.9080001@bennett.com> An HTML attachment was scrubbed... URL:
From jnc at mercury.lcs.mit.edu Thu Jun 3 14:10:38 2010 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 3 Jun 2010 17:10:38 -0400 (EDT) Subject: [ih] principles of the internet Message-ID: <20100603211038.354956BE635@mercury.lcs.mit.edu> > From: John Day >> To get the -CD to work semi-reliably they had to limit the network's >> physical size, and increase the minimum packet size > Ensuring that all receivers could hear all transmitters (which also > requires bounding the length) is what makes the main difference, by > greatly reducing the probability of a collision. Maybe we're saying the same thing, but there's a 'simultaneity' issue as well. If you have three stations (A, B and C) all in a line (effectively, which is what a wire gives you), with some distance between them, then: If A transmits a relatively short message to B at the same time as C transmits a relatively short message to B, C's message doesn't start to get to A until after A is done transmitting its message (and vice versa at C). So neither A nor C sees a collision - but at B, in between them, the two messages _do_ collide. In other words, for the -CD to work 'right', a transmitter has to occupy the 'whole' shared medium for long enough that any other station trying to transmit will definitely see a collision (in cases where they start to transmit basically simultaneously). That's the reason Ethernet has restrictions on i) physical size and ii) minimum message length - messages which are too short may produce the scenario above (and if you make the network larger, you need to make the minimum message size larger too, because end-end propagation time then becomes larger).
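Noel's size-versus-length constraint can be put in rough numbers: the frame must still be on the wire when a collision from the far end gets back to the sender, so its transmission time must cover a worst-case round trip. A small illustrative calculation (the 10 Mb/s rate, 2.5 km span, and ~2/3 c propagation speed are assumptions chosen to be in the neighborhood of classic Ethernet, not figures from the thread):

    BIT_RATE = 10e6       # bits per second (10 Mb/s, classic Ethernet)
    MAX_SPAN_M = 2500.0   # assumed maximum end-to-end cable span, metres
    PROP_SPEED = 2e8      # signal speed in coax, roughly 2/3 the speed of light (m/s)

    round_trip_s = 2 * MAX_SPAN_M / PROP_SPEED   # worst-case out-and-back delay
    min_frame_bits = round_trip_s * BIT_RATE     # frame must last at least this long

    print(f"round trip ~ {round_trip_s * 1e6:.1f} microseconds")   # ~25 us here
    print(f"min frame  ~ {min_frame_bits:.0f} bits (~{min_frame_bits / 8:.0f} bytes)")
    # ~250 bits with these numbers; the real 802.3 slot time of 512 bit times also
    # budgets for repeaters, the jam signal, and margin, hence the 64-byte minimum.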
Noel From richard at bennett.com Thu Jun 3 15:14:48 2010 From: richard at bennett.com (Richard Bennett) Date: Thu, 03 Jun 2010 15:14:48 -0700 Subject: [ih] principles of the internet In-Reply-To: <20100603211038.354956BE635@mercury.lcs.mit.edu> References: <20100603211038.354956BE635@mercury.lcs.mit.edu> Message-ID: <4C082958.3030208@bennett.com> An HTML attachment was scrubbed... URL: From dot at dotat.at Fri Jun 4 10:43:56 2010 From: dot at dotat.at (Tony Finch) Date: Fri, 4 Jun 2010 18:43:56 +0100 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: On Thu, 3 Jun 2010, John Day wrote: > > There is no MPL for mail. If you mean "maximum packet lifetime" then how about RFC 5321 section 6.3: Simple counting of the number of "Received:" header fields in a message has proven to be an effective, although rarely optimal, method of detecting loops in mail systems. SMTP servers using this technique SHOULD use a large rejection threshold, normally at least 100 Received entries. Whatever mechanisms are used, servers MUST contain provisions for detecting and stopping trivial loops. Tony. -- f.anthony.n.finch http://dotat.at/ CROMARTY FORTH TYNE DOGGER: SOUTHEAST 4 OR 5, BECOMING VARIABLE 3 OR 4. SMOOTH OR SLIGHT, OCCASIONALLY MODERATE AT FIRST IN CROMARTY. FOG PATCHES FOR A TIME, SHOWERS LATER IN TYNE AND DOGGER. MODERATE OR GOOD, OCCASIONALLY VERY POOR. From dot at dotat.at Fri Jun 4 10:44:33 2010 From: dot at dotat.at (Tony Finch) Date: Fri, 4 Jun 2010 18:44:33 +0100 Subject: [ih] principles of the internet In-Reply-To: <4C07F000.80408@gmail.com> References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: On Thu, 3 Jun 2010, Dave Crocker wrote: > > Email later added data integrity, through MIME Content-MD5, and as a > side-effect of content protection through PEM, PGP or S/MIME and, more > recently, DKIM, if one is loose about the function. It also added delivery > confirmation and error reporting via DSN and MDN notifications. It had > non-standardized non-delivery notices pretty much from the start; these were > later standardized. These all provide a pretty good platform for developing a > re-transmission mechanism, if the sender wishes to pursue it. Sounds like a recipe for loops to me :-) Tony. -- f.anthony.n.finch http://dotat.at/ GERMAN BIGHT: NORTH, BECOMING VARIABLE, 3 OR 4. SMOOTH OR SLIGHT. FAIR. MODERATE OR GOOD. From dcrocker at gmail.com Fri Jun 4 11:04:41 2010 From: dcrocker at gmail.com (Dave Crocker) Date: Fri, 04 Jun 2010 11:04:41 -0700 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: <4C094039.9030308@gmail.com> On 6/4/2010 10:44 AM, Tony Finch wrote: >> These all provide a pretty good platform for developing a >> re-transmission mechanism, if the sender wishes to pursue it. 
> > Sounds like a recipe for loops to me :-) I've heard that before. Maybe from you. Maybe from you. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Fri Jun 4 11:14:13 2010 From: jeanjour at comcast.net (John Day) Date: Fri, 4 Jun 2010 14:14:13 -0400 Subject: [ih] principles of the internet In-Reply-To: References: <4C052248.2050608@cs.tu-berlin.de> <4C05563D.6040506@bennett.com> <4C0566F2.1040605@gmail.com> <4C057376.40708@cs.tu-berlin.de> <4C061CCA.1070304@cs.tu-berlin.de> <4C07AB58.7090306@gmail.com> <4C07BFCD.4070904@cs.tu-berlin.de> <4C07F000.80408@gmail.com> Message-ID: Not the same thing. One must ensure that all copies of a message are out of the network. Dick Watson proved that reliable transfer requires bounding 3 timers. Actually, it is stronger than that, bounding them is both necessary and sufficient. Applies to all protocols that do synchronization. At 18:43 +0100 2010/06/04, Tony Finch wrote: >On Thu, 3 Jun 2010, John Day wrote: >> >> There is no MPL for mail. > >If you mean "maximum packet lifetime" then how about RFC 5321 section 6.3: > > Simple counting of the number of "Received:" header fields in a > message has proven to be an effective, although rarely optimal, > method of detecting loops in mail systems. SMTP servers using this > technique SHOULD use a large rejection threshold, normally at least > 100 Received entries. Whatever mechanisms are used, servers MUST > contain provisions for detecting and stopping trivial loops. > >Tony. >-- >f.anthony.n.finch http://dotat.at/ >CROMARTY FORTH TYNE DOGGER: SOUTHEAST 4 OR 5, BECOMING VARIABLE 3 OR 4. SMOOTH >OR SLIGHT, OCCASIONALLY MODERATE AT FIRST IN CROMARTY. FOG PATCHES FOR A TIME, >SHOWERS LATER IN TYNE AND DOGGER. MODERATE OR GOOD, OCCASIONALLY VERY POOR.
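For completeness, the Received:-counting check in the RFC 5321 excerpt above amounts to a hop limit rather than a bounded packet lifetime, which is the distinction being drawn here. A minimal sketch using the standard library parser:

    from email import message_from_string

    MAX_RECEIVED = 100  # RFC 5321 suggests a large threshold, normally at least 100

    def looks_like_mail_loop(raw_message: str) -> bool:
        # Count the Received: trace fields added by each relaying MTA; an
        # absurdly long trace is treated as evidence of a routing loop.
        msg = message_from_string(raw_message)
        return len(msg.get_all("Received") or []) > MAX_RECEIVED

    sample = "Received: from a by b; Fri, 4 Jun 2010 18:43:56 +0100\r\nSubject: test\r\n\r\nbody\r\n"
    print(looks_like_mail_loop(sample))  # False for a single hop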