From eric.gade at gmail.com Tue Feb 1 08:42:14 2011 From: eric.gade at gmail.com (Eric Gade) Date: Tue, 1 Feb 2011 16:42:14 +0000 Subject: [ih] DCA, the NIC, and the Gulf War Message-ID: In the course of my research on the NIC in the 1980s, I've come across an interesting incident. It seems that on December 7th, 1990, DCA sent an email to the NIC instructing them to "roll-back" database changes that had been made in the process of implementing RFC 1174. Their argument was that such changes hadn't been paid for. This seems to have incited a good amount of anger on the NIC side of things, and the response offered the rather timely example that reverting to pre-RFC 1174 would remove important *military* hosts from the DNS, which would have drastic effects on Operation Desert Shield. This is all the more interesting because a month prior to this exchange, the NIC had removed KW from the host table at the behest of its responsible authority, which at that time was CSNET. IAB members were asked specifically whether this removal was inspired by the contemporary political situation (i.e., the Iraqi invasion of Kuwait). Their response was that the NIC had indeed followed the correct procedure in obeying CSNET's request precisely because that organization was the Administrative and Technical Contact. Such an overtly political response, in my mind, dodged the question. Or perhaps the question itself was incorrect. I suppose it should have been: why did CSNET make this request? I would like to see if anyone can comment further on either of these events, especially those who may have been involved at the time. -- Eric G From galmes at tamu.edu Mon Feb 7 20:01:04 2011 From: galmes at tamu.edu (Guy Almes) Date: Mon, 07 Feb 2011 22:01:04 -0600 Subject: [ih] Ken Olsen's impact on the Internet Message-ID: <4D50C000.3060003@tamu.edu> I just read the NYT obituary on Ken Olsen.
I know very little about Olsen's life, but his (and Digital's) particular style of computer building had several impacts on the Internet. One is the prevalence of PDP-10s as hosts on the ARPAnet. If memory serves, DEC used to run an advertisement on the back cover of the CACM, and on one, they bragged about DEC computers showing up so much as ARPAnet hosts. At least the ad writer seemed to understand the significance of this, using cartoon techniques to suggest the idea-sharing and collaborations that were happening because of this. Another, more mixed, is DEC's lukewarm support for the IP-based Internet, preferring the proprietary DECnet product line. While, technically, the DECnet work deserves much praise, the business dynamics of pushing DECnet in preference to the Internet are illustrative of blindspots that led to Digital's demise. I'd be interested in comments from others on the list, -- Guy From mfidelman at meetinghouse.net Mon Feb 7 20:38:02 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 07 Feb 2011 23:38:02 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D50C000.3060003@tamu.edu> References: <4D50C000.3060003@tamu.edu> Message-ID: <4D50C8AA.1040905@meetinghouse.net> Guy Almes wrote: > Another, more mixed, is DEC's lukewarm support for the IP-based > Internet, preferring the proprietary DECnet product line. While, > technically, the DECnet work deserves much praise, the business > dynamics of pushing DECnet in preference to the Internet are > illustrative of blindspots that led to Digital's demise. Well... DEC resisted TCP/IP a lot less than Wang; and lasted a lot longer. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... 
Yogi Berra From dave.walden.family at gmail.com Mon Feb 7 21:00:41 2011 From: dave.walden.family at gmail.com (Dave Walden) Date: Mon, 07 Feb 2011 21:00:41 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D50C8AA.1040905@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50C8AA.1040905@meetinghouse.net> Message-ID: <4d50ce0d.d44de50a.3956.ffffb283@mx.google.com> >Well... DEC resisted TCP/IP a lot less than Wang; and lasted a lot longer. My memory is that almost all the hardware vendors, still living in the world of proprietary software locked into their hardware, resisted TCP/IP. I think I remember a trend where user demand forced them to add TCP/IP in parallel with their proprietary network standard and then eventually, the bulk of the traffic went to the Internet via TCP/IP since users (e.g., big corporations) in fact didn't want to be locked into a single vendor. From dhc2 at dcrocker.net Mon Feb 7 21:42:01 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Mon, 07 Feb 2011 21:42:01 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D50C000.3060003@tamu.edu> References: <4D50C000.3060003@tamu.edu> Message-ID: <4D50D7A9.5080308@dcrocker.net> On 2/7/2011 8:01 PM, Guy Almes wrote: > Another, more mixed, is DEC's lukewarm support for the IP-based Internet, > preferring the proprietary DECnet product line. While, technically, the DECnet > work deserves much praise, the business dynamics of pushing DECnet in preference > to the Internet are illustrative of blindspots that led to Digital's demise. DEC was not lukewarm. It was actively hostile. It pressed for OSI because it thought it could control the outcome. By the time DEC finally realized that TCP/IP was going to win, DEC was very far behind the curve and never really caught up.
(The Field Service guys were closest to the customer and saw the writing on the wall the earliest, so they provided funding for an Internet tech transfer lab that I started, but there was an entire corporate culture devoted to stovepipe solutions for customer capture with private solutions.) Upper management wanted the change to IP, but there were about 110,000 other employees and middle-managers that had trouble buying in. But yeah, PDP-10/Tenex for the Arpanet and later the PDP-11/Vax/Unix were hugely popular for hosts. For Unix, you had to get the hardware from DEC and the software license from Bell Labs. In order to help the hardware sales, DEC had a special group up in New Hampshire doing Unix device drivers. At every Usenix meeting (attendance in those early days numbered around 40-100) the team leader, Armando Stettner, would give a status report on the device driver work. At the first larger meeting (300 people in Santa Monica) he got up as usual, but started by saying that he was tired of having people say they wanted to get both the hardware and the Unix software from one place, and when was DEC going to offer a Unix license? So, he said, he could finally announce that DEC was indeed going to offer a Unix license. He then bent down and held up a New Hampshire-style green automobile license plate that said UNIX, with Live Free or Die at the bottom. He had one for every attendee. I treasure mine... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From vint at google.com Tue Feb 8 00:39:20 2011 From: vint at google.com (Vint Cerf) Date: Tue, 8 Feb 2011 03:39:20 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D50D7A9.5080308@dcrocker.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> Message-ID: Ken's initiative in creating affordable, departmental scale computers was without question a major element of ARPANET and Internet growth (so was the SUN Workstation).
However, Dave Crocker is correct that DEC actively resisted TCP/IP at the Board level. Ken railed against TCP/IP. But at the DEC labs, the story was different. Similar resistance would be encountered at IBM and at HP but in those cases, too, it was their laboratories that proceeded to develop TCP/IP for their most popular operating systems. When it finally became clear that TCP/IP would be demanded by customers, the official resistance ended. UNIX and its derivatives had a great deal to do with this. ARPA funded the development of the Berkeley release of TCP/IP for UNIX and this was an important element of the adoption of TCP/IP in the academic community as well. Digital also played a big role in the development of MCI Mail and the linkage between MCI Mail and the Internet (taking place around 1988-89) broke a policy barrier that prohibited the carriage of commercial network traffic on the US Government sponsored backbones (especially NSFNET and ARPANET, ESNET and NSINET). Digital and its laboratories played a prominent role in the evolution of networked computing, but experienced its own white water problems adapting to the changing universe produced by the Internet and open source software. vint On Tue, Feb 8, 2011 at 12:42 AM, Dave CROCKER wrote: > > > On 2/7/2011 8:01 PM, Guy Almes wrote: >> >> Another, more mixed, is DEC's lukewarm support for the IP-based Internet, >> preferring the proprietary DECnet product line. While, technically, the >> DECnet >> work deserves much praise, the business dynamics of pushing DECnet in >> preference >> to the Internet are illustrative of blindspots that led to Digital's >> demise. > > > DEC was not lukewarm. It was actively hostile. It pressed for OSI because > it thought it could control the outcome. > > By the time DEC finally realized that TCP/IP was going to win, DEC was very > far behind the curve and never really caught up.
(The Field Service guys > were closest to the customer and saw the writing on the wall the earliest, > so they provided funding for an Internet tech transfer lab that I started, > but there was an entire corporate culture devoted to stovepipe solutions for > customer capture with private solutions.) Upper management wanted the > change to IP, but there were about 110,000 other employees and > middle-managers that had trouble buying in. > > But yeah, PDP-10/Tenex for the Arpanet and later the PDP-11/Vax/Unix were > hugely popular for hosts. > > For Unix, you had to get the hardware from DEC and the software license from > Bell Labs. In order to help the hardware sales, DEC had a special group up > in New Hampshire doing Unix device drivers. At every Usenix meeting > (attendance in those early days numbered around 40-100) the team leader, > Armando Stettner, would give a status report on the device driver work. > > At the first larger meeting (300 people in Santa Monica) he got up as usual, > but started by saying that he was tired of having people say they wanted to > get both the hardware and the Unix software from one place, and when was DEC > going to offer a Unix license? > > So, he said, he could finally announce that DEC was indeed going to offer a > Unix license. > > He then bent down and held up a New Hampshire-style green automobile license > plate that said UNIX, with Live Free or Die at the bottom. He had one for > every attendee. > > I treasure mine... > > d/ > > > -- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net > From jnc at mercury.lcs.mit.edu Wed Feb 9 10:40:15 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 9 Feb 2011 13:40:15 -0500 (EST) Subject: [ih] Ken Olsen's impact on the Internet Message-ID: <20110209184015.A2BEE6BE555@mercury.lcs.mit.edu> > From: Guy Almes > his (and Digital's) particular style of computer building had > several impacts on the Internet.
> One is the prevalence of PDP-10s as hosts on the ARPAnet. _All_ the early TCP/IP routers (gateways, back then) were PDP-11 based (for a variety of reasons we can explore if anyone cares). Off the top of my head, from memory: - The earliest IP routers, the BBN ELF-based boxes - The later BBN MOS-based boxes - The Fuzzballs - The SRI MOS boxes (Port Expanders, etc) - The machines at UCL - The Lincoln Labs voice thingys (although those may have been just hosts) - The first two MIT routers (the ARPANet GW; the internal inter-LAN router) - The MIT C-GW machines - The Stanford multi-protocol router (basis of the Cisco router) My sincere apologies to anyone whom I have left out! Some other early internetworking projects also used PDP-11s a lot: - The MIT CHAOSnet used PDP-11 routers very extensively Not sure about PUP - they had, IIRC, a few PDP-11 based boxes, but I think most of their routers were Alto-based. > Another, more mixed, is DEC's lukewarm support for the IP-based > Internet, preferring the proprietary DECnet product line. Like everyone - they almost all preferred their proprietary thing. So it was a mixed bag: their hardware was _really_ important, but the company itself, meh. > From: Dave Walden > I think I remember a trend where user demand forced them to add > TCP/IP in parallel with their proprietary network standard and then > eventually, the bulk of the traffic went to the Internet via TCP/IP > since users (e.g., big corporations) in fact didn't want to be > locked into a single vendor. Not so much a single vendor, as network effect, IIRC: via the Internet you could communicate with _anyone_, whereas the proprietary network let you into a much less universal set.
Noel From richard at bennett.com Wed Feb 9 13:28:41 2011 From: richard at bennett.com (Richard Bennett) Date: Wed, 09 Feb 2011 13:28:41 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D50D7A9.5080308@dcrocker.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> Message-ID: <4D530709.6020304@bennett.com> The general understanding among computer companies in the mid-80s was that TCP/IP was a fine proof-of-concept, but the real network was going to be OSI. This wasn't any sort of conspiracy as much as it was a recognition that large scale networks needed a different kind of system for addressing and routing than the one that IPv4 provided, and that TCP would have problems on fatter pipes. The OSI development process was ultimately unsuccessful for a number of reasons (too many cooks, counter-lobbying by IBM, the clamoring of the European PTTs for connection-oriented systems, the lack of any real champions, etc.) so the networking industry was left to do the best they could with TCP and IP. So here we are at the end of the road for the IPv4 addressing and routing system and nobody loves IPv6 but a handful of bald and bearded IETFers who never tire of telling the youngsters to shut up because they weren't there at the creation. RB On 2/7/2011 9:42 PM, Dave CROCKER wrote: > > > On 2/7/2011 8:01 PM, Guy Almes wrote: >> Another, more mixed, is DEC's lukewarm support for the IP-based >> Internet, >> preferring the proprietary DECnet product line. While, technically, >> the DECnet >> work deserves much praise, the business dynamics of pushing DECnet in >> preference >> to the Internet are illustrative of blindspots that led to Digital's >> demise. > > > DEC was not lukewarm. It was actively hostile. It pressed for OSI > because it thought it could control the outcome. > > By the time DEC finally realized that TCP/IP was going to win, DEC was > very far behind the curve and never really caught up.
(The Field > Service guys were closest to the customer and saw the writing on the > wall the earliest, so they provided funding for an Internet tech > transfer lab that I started, but there was an entire corporate culture > devoted to stovepipe solutions for customer capture with private > solutions.) Upper management wanted the change to IP, but there were > about 110,000 other employees and middle-managers that had trouble > buying in. > > But yeah, PDP-10/Tenex for the Arpanet and later the PDP-11/Vax/Unix > were hugely popular for hosts. > > For Unix, you had to get the hardware from DEC and the software > license from Bell Labs. In order to help the hardware sales, DEC had > a special group up in New Hampshire doing Unix device drivers. At > every Usenix meeting (attendance in those early days numbered around > 40-100) the team leader, Armando Stettner, would give a status report > on the device driver work. > > At the first larger meeting (300 people in Santa Monica) he got up as > usual, but started by saying that he was tired of having people say > they wanted to get both the hardware and the Unix software from one > place, and when was DEC going to offer a Unix license? > > So, he said, he could finally announce that DEC was indeed going to > offer a > Unix license. > > He then bent down and held up a New Hampshire-style green automobile > license plate that said UNIX, with Live Free or Die at the bottom. He > had one for every attendee. > > I treasure mine...
> > d/ > > -- Richard Bennett From jnc at mercury.lcs.mit.edu Wed Feb 9 16:25:34 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 9 Feb 2011 19:25:34 -0500 (EST) Subject: [ih] Ken Olsen's impact on the Internet Message-ID: <20110210002534.6DD8C6BE558@mercury.lcs.mit.edu> > From: Richard Bennett > The OSI development process was ultimately unsuccessful for a number of > reasons (too many cooks, counter-lobbying by IBM, the clamoring of the > European PTTs for connection-oriented systems, the lack of any real > champions, etc.) Actually, IMO the biggest reason why TCP/IP wound up on top was simple: installed base, installed base, installed base. Other factors, such as the ones you mention, along with more mature implementations, people coming out of university familiar with it, etc, etc helped, but installed base was - and remains - the "location, location, location" of networking. Noel From richard at bennett.com Wed Feb 9 16:35:03 2011 From: richard at bennett.com (Richard Bennett) Date: Wed, 09 Feb 2011 16:35:03 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <20110210002534.6DD8C6BE558@mercury.lcs.mit.edu> References: <20110210002534.6DD8C6BE558@mercury.lcs.mit.edu> Message-ID: <4D5332B7.5050108@bennett.com> Yup, first mover advantage in networks gets you a huge lead, and when there's no second mover, it becomes absolute. There really was no second mover, since the proprietary systems were never in the game by definition, and OSI never graduated from kindergarten. One major supporter's OSI implementation for the interoperability workshop was cobbled together out of pieces of code written in seven programming languages, a couple of them interpreted. The specs were impossible to decode and you couldn't begin to even think about interoperability without an agreement on subsets that was as deep as the process for writing the over-optioned specs themselves.
And the difference between a network that works and no network at all is about a gazillion times bigger than the difference between a network that works comfortably today and one that barely works today but might work better ten years from now. The Internet won by default. RB On 2/9/2011 4:25 PM, Noel Chiappa wrote: > > From: Richard Bennett > > > The OSI development process was ultimately unsuccessful for a number of > > reasons (too many cooks, counter-lobbying by IBM, the clamoring of the > > European PTTs for connection-oriented systems, the lack of any real > > champions, etc.) > > Actually, IMO the biggest reason why TCP/IP wound up on top was simple: > installed base, installed base, installed base. > > Other factors, such as the ones you mention, along with more mature > implementations, people coming out of university familiar with it, etc, etc > helped, but installed base was - and remains - the "location, location, > location" of networking. > > Noel -- Richard Bennett From tony.li at tony.li Wed Feb 9 17:02:58 2011 From: tony.li at tony.li (Tony Li) Date: Wed, 9 Feb 2011 17:02:58 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D5332B7.5050108@bennett.com> References: <20110210002534.6DD8C6BE558@mercury.lcs.mit.edu> <4D5332B7.5050108@bennett.com> Message-ID: <74E3B0F5-438E-420C-A757-18167D70E37E@tony.li> On Feb 9, 2011, at 4:35 PM, Richard Bennett wrote: > There really was no second mover, since the proprietary systems were never in the game by definition, and OSI never graduated from kindergarten. One major supporter's OSI implementation for the interoperability workshop was cobbled together out of pieces of code written in seven programming languages, a couple of them interpreted. The specs were impossible to decode and you couldn't begin to even think about interoperability without an agreement on subsets that was as deep as the process for writing the over-optioned specs themselves.
And the difference between a network that works and no network at all is about a gazillion times bigger than the difference between a network that works comfortably today and one that barely works today but might work better ten years from now. > > The Internet won by default. If I squint, I'd claim that UUCP/Usenet and BITnet might count as second movers. I think it's obvious that it wasn't even a race. Tony From craig at aland.bbn.com Thu Feb 10 06:02:41 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 10 Feb 2011 09:02:41 -0500 Subject: [ih] TCP fat pipe chronology (was Ken Olsen's impact on the Internet) Message-ID: <20110210140241.9085B28E137@aland.bbn.com> > The general understanding among computer companies in the mid-80s was > that TCP/IP was a fine proof-of-concept, but the real network was going > to be OSI. This wasn't any sort of conspiracy as much as it was a > recognition that large scale networks needed a different kind of system > for addressing and routing than the one that IPv4 provided, and that TCP > would have problems on fatter pipes. Didn't want to let the error in chronology of TCP on fatter pipes slip past. TCP fat pipe issues arose in 1988 as people were starting to envision working on substantially faster channels (about that time Ira Richer of DARPA started sprinkling a little money to look at gigabit issues in advance of Kahn's gigabit testbed effort). 1988 is almost precisely when OSI was swept from the US market and shortly before it became OBE in Europe as well. No one in the mid-80s had any clue that there was an issue. Thanks!
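Craig's fat-pipe point can be made concrete with back-of-the-envelope arithmetic: TCP's original 16-bit window field caps the data a sender may have in flight at 64 KB, so throughput is bounded by window divided by round-trip time no matter how fast the link is. The sketch below is illustrative only; the 70 ms RTT and 1 Gb/s link rate are assumed numbers, not figures from this thread.

```python
# "Fat pipe" arithmetic: with at most a 64 KB window outstanding,
# a TCP sender can push no more than window / RTT bits per second,
# regardless of link capacity.  RTT and link rate are assumptions.

def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput for a given window and RTT."""
    return window_bytes * 8 / rtt_seconds

WINDOW = 65535        # largest window expressible in 16 bits
RTT = 0.070           # assumed 70 ms cross-country round trip

limit = max_throughput_bps(WINDOW, RTT)
print(f"window-limited rate: {limit / 1e6:.1f} Mb/s")  # ~7.5 Mb/s

# Bandwidth-delay product: bytes that must be in flight to fill
# an assumed 1 Gb/s pipe at the same RTT.
needed = 1e9 / 8 * RTT
print(f"window needed: {needed / 2**20:.1f} MiB")  # ~8.3 MiB
```

With a 64 KB window and a 70 ms round trip, roughly 7.5 Mb/s is the ceiling; filling a gigabit pipe at that RTT needs more than 8 MiB in flight, which is the gap the later window-scaling work was designed to close.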
Craig From galmes at tamu.edu Thu Feb 10 07:14:55 2011 From: galmes at tamu.edu (Guy Almes) Date: Thu, 10 Feb 2011 09:14:55 -0600 Subject: [ih] TCP fat pipe chronology (was Ken Olsen's impact on the Internet) In-Reply-To: <20110210140241.9085B28E137@aland.bbn.com> References: <20110210140241.9085B28E137@aland.bbn.com> Message-ID: <4D5400EF.2090001@tamu.edu> Craig et al., With regard to TCP and fat pipes, there were (at least) two kinds of things going on: <> changes to algorithms while leaving the TCP protocol untouched (e.g., improved retransmit timers and VJ's wonderful congestion window work), and <> eventual changes in the TCP protocol (e.g., window scaling) In this context, how did DECnet Phase-IV fit in? Was it more capable, less so, or about the same as TCP? I know the HEPnet and SPAN folks were making heavy use of it, shipping (what then passed for) large files around the world. Separate question: how would OSI (=?? DECnet Phase-V??) have compared? Curious, -- Guy On 2/10/11 8:02 AM, Craig Partridge wrote: >> The general understanding among computer companies in the mid-80s was >> that TCP/IP was a fine proof-of-concept, but the real network was going >> to be OSI. This wasn't any sort of conspiracy as much as it was a >> recognition that large scale networks needed a different kind of system >> for addressing and routing than the one that IPv4 provided, and that TCP >> would have problems on fatter pipes. > > Didn't want to let the error in chronology of TCP on fatter pipes slip past. > > TCP fat pipe issues arose in 1988 as people were starting to envision > working on substantially faster channels (about that time Ira Richer of > DARPA started sprinkling a little money to look at gigabit issues in advance > of Kahn's gigabit testbed effort). 1988 is almost precisely when OSI was > swept from the US market and shortly before it became OBE in Europe as well. > > No one in the mid-80s had any clue that there was an issue. > > Thanks!
> > Craig > From craig at aland.bbn.com Thu Feb 10 07:36:29 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 10 Feb 2011 10:36:29 -0500 Subject: [ih] TCP fat pipe chronology (was Ken Olsen's impact on the Internet) Message-ID: <20110210153629.A8D4828E137@aland.bbn.com> > Craig et al., > With regard to TCP and fat pipes, there were (at least) two kinds of > things going on: > <> changes to algorithms while leaving the TCP protocol untouched (e.g., > improved retransmit timers and VJ's wonderful congestion window work), and DEC (aka Raj Jain and KK Ramakrishnan) tumbled onto the timer and congestion problems about the same time as Van, Phil Karn, Lixia Zhang, and I did. Raj and Lixia both wrote papers in 1986 describing the retransmission ambiguity problem that Phil and I solved in 1987. Raj and KK did work showing that additive increase and multiplicative decrease was the right congestion approach concurrently with and independent of Van's work. Raj and KK's paper was published in the same session at SIGCOMM '88 as Van's work (still one of my favorite SIGCOMM sessions of all time). Raj and KK's approach (the DECbit scheme) went into DECnet Phase-IV. Van was the only one to figure out that a better round-trip time estimator was required and how to do it. > <> eventual changes in the TCP protocol (e.g., window scaling) To my knowledge, only the Internet community ended up working on the window scaling and extended sequence number problem. > In this context, how did DECnet Phase-IV fit in? Was it more > capable, less so, or about the same as TCP? I know the HEPnet and SPAN > folks were making heavy use shipping (what then passed for) large files > around the world. DECnet Phase-IV was a pretty formidable protocol suite and was deployed at a scale that one could learn a lot from its protocols. Remember that the networking community first discovered the unintentional clock synchronization problem in large DECNET installations. 
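The additive-increase / multiplicative-decrease rule that the DECbit work and Van Jacobson's congestion avoidance both arrived at is compact enough to sketch. This is an illustrative toy, not DEC's or the BSD code; the segment size, starting window, and per-round-trip framing are assumptions made for the example.

```python
# Minimal AIMD sketch: grow the window by one segment per round trip,
# halve it when congestion is signaled.  MSS and the starting window
# are assumed values, not parameters from either team's work.

MSS = 1460  # assumed maximum segment size, in bytes

def aimd_update(cwnd: int, congested: bool) -> int:
    """Apply one round trip's AIMD adjustment to the congestion window."""
    if congested:
        return max(cwnd // 2, MSS)  # multiplicative decrease, floor of 1 MSS
    return cwnd + MSS               # additive increase

cwnd = 10 * MSS
for congested in (False, False, True, False):
    cwnd = aimd_update(cwnd, congested)
print(cwnd // MSS)  # 10 -> 11 -> 12 -> 6 -> 7 segments
```

The asymmetry (probe gently upward, back off sharply) is what lets competing flows converge toward a fair share of the bottleneck, the property the Jain and Ramakrishnan line of work established analytically.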
Indeed, if memory serves, DECNET was the protocol suite that forced us to really understand the challenges of the coexistence of multiple protocols on a single network (e.g. same Ethernet and same long-haul links). > Separate question: how would OSI (=?? DECnet Phase-V??) have compared? My recollection (I have no documentation to back it up and thus easily could be wrong) was that DEC Phase-V was going to be a step backward. I recall DEC folks lamenting all the work to retrofit DECNET Phase-IV features that they'd painfully learned were needed into Phase-V and worrying about how to get OSI to adopt the improvements. Thanks! Craig From dhc2 at dcrocker.net Thu Feb 10 09:33:15 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 10 Feb 2011 09:33:15 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D530709.6020304@bennett.com> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> Message-ID: <4D54215B.2060506@dcrocker.net> On 2/9/2011 1:28 PM, Richard Bennett wrote: > The general understanding among computer companies in the mid-80s was that > TCP/IP was a fine proof-of-concept, but the real network was going to be OSI. I believe that was also the expectation within the Internet technical community, even to the point of there being efforts to hand TCP/IP over to the OSI folks. (Who rather rudely rebuffed the offer.) > This wasn't any sort of conspiracy as much as it was a recognition that large > scale networks needed a different kind of system for addressing and routing than > the one that IPv4 provided, and that TCP would have problems on fatter pipes. This implies a more careful understanding of the problem and solution spaces, and a deeper understanding of TCP/IP's limitations, than actually existed. In other words, what you are citing was common rhetoric but had no substance, in my observation.
As for addressing and routing, the OSI world eventually produced something useful for /interior/ routing, but never for inter-organization routing. So whatever the claimed concerns, after 15 years of effort, the OSI world produced nothing viable for Internet scale addressing or routing. As with most OSI work, the deliverable of field utility was always two years from now. For email addressing, the OSI model chosen was actually unworkable at scale. For the same mailbox in your organization, you needed a different public address for each provider (common carrier) that you were connected to. This made for some amusing, if quite silly, business cards. > The OSI development process was ultimately unsuccessful for a number of reasons > (too many cooks, counter-lobbying by IBM, the clamoring of the European PTTs > for connection-oriented systems, the lack of any real champions, etc.) so the > networking industry was left to do the best they could with TCP and IP. There might have been efforts within the OSI world to defeat OSI, but everything I saw from the outside says quite the opposite. Industry and government commitment to OSI was massive, to the level of religion. Rather, what I saw were two core, strategic errors. The first was horrendously complex, interdependent technology components and the second was a failure to understand the need to obtain real-world operational field experience quickly and base revisions on it. (Deploy something useful as quickly as possible and grow the service technology from the experience.) The error on the technical side was pretty classic "big system syndrome" along with a failure to adequately understand end-to-end interoperability requirements. Observe, for example, the number of different and non-interoperable connection-based transport protocols seeking to provide essentially the same type of service to the client layer (TP0-TP4).
The premise of trying to optimize for different underlying network environments is quite natural but proves fatal in this service space. (As I understand it, the TCP effort had a close call with this same issue, when LANs started to be popular. I heard there was strong pressure to have a version of TCP tailored to LANs but that Vint vetoed it.) From a design standpoint, there is a classic tradeoff between universality versus (local) optimization. For an integrative, large-scale service, the former has proven far, far more important than the latter. The big system syndrome meant that it was not possible to get essential operational experience early and learn from it. (For reference, the biggest contribution to OSI field experience for applications came from the Internet, with Marshall Rose's OSI Application-over-TCP package, ISODE [RFC 1006]. So much for claims the Internet was hostile to OSI...) d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From tony.li at tony.li Thu Feb 10 10:11:17 2011 From: tony.li at tony.li (Tony Li) Date: Thu, 10 Feb 2011 10:11:17 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D54215B.2060506@dcrocker.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> Message-ID: <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> > As for addressing and routing, the OSI world eventually produced something useful for /interior/ routing, but never for inter-organization routing. So whatever the claimed concerns, after 15 years of effort, the OSI world produced nothing viable for Internet scale addressing or routing. As with most OSI work, the deliverable of field utility was always two years from now. This is absolutely correct for routing, but absolutely incorrect for addressing. OSI mandated an addressing architecture that both aggregated and was variable length.
The Internet still hasn't learned this lesson and insists on a fixed length, non-scalable addressing scheme. Tony From jnc at mercury.lcs.mit.edu Thu Feb 10 11:09:29 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 10 Feb 2011 14:09:29 -0500 (EST) Subject: [ih] Ken Olsen's impact on the Internet Message-ID: <20110210190929.A5D5B6BE550@mercury.lcs.mit.edu> > From: Dave CROCKER >> The general understanding among computer companies in the mid-80s >> was that TCP/IP was a fine proof-of-concept, but the real network >> was going to be OSI. > I believe that was also the expectation within the Internet > technical community In some quarters. Not all of us agreed. ;-) (PS: See below for more on this...) > the OSI world eventually produced something useful for /interior/ > routing, but never for inter-organization routing. There are also those of us who held (and hold) a dim view of all the routing products of the TCP/IP world... ;-) > For email addressing, the OSI model chosen was actually unworkable > .. For the same mailbox in your organization, you needed a different > public address for each provider (common carrier) that you were > connected to. Yet another design that never quite got the namespaces right for path, location and identity... ;-) > a failure to adequately understand end-to-end interoperability > requirements. Observe, for example, the number of different and > non-interoperable connection-based transport protocols seeking to > provide essentially the same type of service to the client layer > (TP0-TP4). Yet another place where the importance of network effects (i.e. maximizing the size of the pool of potential communicatees) was not understood by that community... :-( > As I understand it, the TCP effort had a close call with this same > issue, when LANs started to be popular. I heard there was strong > pressure to have a version of TCP tailored to LANs but that Vint vetoed > it. I'm not sure quite which one you are referring to?
Is this the 'trailer header' stuff from Berkeley? I'm not sure that Vint needed to (or, by then, had the capability to) stomp on any of this - it was pretty clear to most people that such things were a bad idea (and why). There were also things like that attempt to design a 'hardware-friendly' transport protocol (i.e. one optimized for implementation in hardware) - was XCP the name? - and that also went nowhere, for similar reasons. > So much for claims the Internet was hostile to OSI... But it was! I mean, we were _polite_ to the OSI people and all, but some of us (many of us?) had every intention of killing OSI stone dead. An amusing story which makes this point: I recall the first time I met Lyman Chapin, it was at ISI (don't recall which meeting, but it was very early on though - ca. mid-80s or so). In the corridor during a break, he was explaining to me how we Internet people were 'politically unsophisticated' (I think those were his words - that was the sense of them, anyway). I recall quite distinctly thinking at the time 'it is not in the interest of those who want TCP/IP to win to disabuse this person of his misconceptions'! Some years later (I think it was after the IAB/IESG blowup in '92) I told him this story, to which he smacked his head and said "Did I really say that?!" :-) Enough water had passed by then that he was amused, which I was glad of. Silly competition in some sense, really. Oh well... Noel From craig at aland.bbn.com Thu Feb 10 11:26:48 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 10 Feb 2011 14:26:48 -0500 Subject: [ih] XTP (was Re: Ken Olsen's impact on the Internet) Message-ID: <20110210192648.F054928E137@aland.bbn.com> > > As I understand it, the TCP effort had a close call with this same > > issue, when LANs started to be popular. I heard there was strong > > pressure to have a version of TCP tailored LANs but that Vint vetoed > > it. > > I'm not sure quite which one you are referring to? 
Is this the 'trailer > header' stuff from Berkeley? I'm not sure that Vint needed to (or, by > then, had the capability to) stomp on any of this - it was pretty clear to > most people that such things were a bad idea (and why). > > There were also things like that attempt to design a 'hardware-friendly' > transport protocol (i.e. one optimized for implementation in hardware) > - was XCP the name? - and that also went nowhere, for similar reasons. It was XTP -- Greg Chesson's belief that if the protocol was in hardware it would be simpler and deliver data faster than the software implementations that were always fighting with the network cards. (XTP = Xpress Transfer Protocol). It has been a while, but my recollection is that there were some interesting ideas in the resulting spec but it clearly was no simpler or faster. The basic state complexity was comparable to TCP, the impulse to optimize features to the hardware/network added wrinkles that hurt, and there was no innovation in congestion control that enabled XTP to send faster than TCP over a link with unknown properties. Thanks! Craig From dhc2 at dcrocker.net Thu Feb 10 11:39:19 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 10 Feb 2011 11:39:19 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <20110210190929.A5D5B6BE550@mercury.lcs.mit.edu> References: <20110210190929.A5D5B6BE550@mercury.lcs.mit.edu> Message-ID: <4D543EE7.6060406@dcrocker.net> On 2/10/2011 11:09 AM, Noel Chiappa wrote: > > From: Dave CROCKER > > I believe that was also the expectation within the Internet > > technical community > > In some quarters. Not all of us agreed. ;-) That's always true in our community. It's why we seek /rough/ consensus rather than complete and why we need to be clear about whether something represents an individual's view versus rough consensus. > (PS: See below for more on this...)
> > > the OSI world eventually produced something useful for /interior/ > > routing, but never for inter-organization routing. > > There are also those of us who held (and hold) a dim view of all the routing > products of the TCP/IP world... ;-) It's pretty well established that none of it really works, witnessed by how badly the Internet performs. > > As I understand it, the TCP effort had a close call with this same > > issue, when LANs started to be popular. I heard there was strong > > pressure to have a version of TCP tailored LANs but that Vint vetoed > > it. > > I'm not sure quite which one you are referring to? Is this the 'trailer > header' stuff from Berkeley? I'm not sure that Vint needed to (or, by > then, had the capability to) stomp on any of this - it was pretty clear to > most people that such things were a bad idea (and why). I'm talking about TCP, not IP. And this was a second-hand anecdote. I wasn't there. > > So much for claims the Internet was hostile to OSI... > > But it was! I mean, we were _polite_ to the OSI people and all, but some > of us (many of us?) had every intention of killing OSI stone dead. Not all of us were always polite. (But that's merely a variant of the above observation that our community is never monolithic in its views...) d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Thu Feb 10 12:12:39 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 10 Feb 2011 15:12:39 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <20110210190929.A5D5B6BE550@mercury.lcs.mit.edu> References: <20110210190929.A5D5B6BE550@mercury.lcs.mit.edu> Message-ID: I would like to thank Noel for clarifying that the whole point of this exercise was winning, not figuring out the best solution possible, which was not done by either group. OSI was not it. It had fundamental flaws that could not be fixed. (Although none of them have been noted in this discussion.)
OSI was limited in what it could do by its wider participation (sometimes called politics). In any case, a standards committee is no place to solve problems. The Internet had the opportunity to make considerable progress filling in the problems identified in the ARPANET. However, it chose to stand pat behind Moore's Law and continual patching under the rubric "small incremental change." All of this got pushed into the real world before fundamental problems had been solved, and now, when the world is relying on it, we have to figure out how to fix it -- a task we have yet to step up to. At 14:09 -0500 2011/02/10, Noel Chiappa wrote: > > From: Dave CROCKER > > >> The general understanding among computer companies in the mid-80s > >> was that TCP/IP was a fine proof-of-concept, but the real network > >> was going to be OSI. > > > I believe that was also the expectation within the Internet > > technical community > >In some quarters. Not all of us agreed. ;-) > >(PS: See below for more on this...) > > > the OSI world eventually produced something useful for /interior/ > > routing, but never for inter-organization routing. > >There are also those of us who held (and hold) a dim view of all the routing >products of the TCP/IP world... ;-) > > > For email addressing, the OSI model chosen was actually unworkable > > .. For the same mailbox in your organization, you needed a different > > public address for each provider (common carrier) that you were > > connected to. > >Yet another design that never quite got the namespaces right for path, >location and identity... ;-) > > > > a failure to adequately understand end-to-end interoperability > > requirements. Observe, for example, the number of different and > > non-interoperable connection-based transport protocols seeking to > > provide essentially the same type of service to the client layer > > (TP0-TP4). > >Yet another place where the importance of network effects (i.e.
maximizing >the size of the pool of potential communicatees) was not understood by >that community... :-( > > > As I understand it, the TCP effort had a close call with this same > > issue, when LANs started to be popular. I heard there was strong > > pressure to have a version of TCP tailored LANs but that Vint vetoed > > it. > >I'm not sure quite which one you are referring to? Is this the 'trailer >header' stuff from Berkeley? I'm not sure that Vint needed to (or, by >then, had the capability to) stomp on any of this - it was pretty clear to >most people that such things were a bad idea (and why). > >There were also things like that attempt to design a 'hardware-friendly' >transport protocol (i.e. one optimized for implementation in hardware) >- was XCP the name? - and that also went nowhere, for similar reasons. > > > > So much for claims the Internet was hostile to OSI... > >But it was! I mean, we were _polite_ to the OSI people and all, but some >of us (many of us?) had every intention of killing OSI stone dead. > >An amusing story which makes this point: I recall the first time I met Lyman >Chapin, it was at ISI (don't recall which meeting, but it was very early on >though - ca. mid-80s or so). In the corridor during a break, he was explaining >to me how we Internet people were 'politically unsophisticated' (I think those >were his words - that was the sense of them, anyway). I recall quite >distinctly thinking at the time 'it is not in the interest of those who want >TCP/IP to win to disabuse this person of his misconceptions'! Some years later >(I think it was after the IAB/IESG blowup in '92) I told him this story, to >which he smacked his head and said "Did I really say that?!" :-) Enough water >had passed by then that he was amused, which I was glad of. Silly competition >in some sense, really. Oh well... 
> > Noel From jeanjour at comcast.net Thu Feb 10 19:18:56 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 10 Feb 2011 22:18:56 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> Message-ID: At 10:11 -0800 2011/02/10, Tony Li wrote: > > As for addressing and routing, the OSI world eventually produced >something useful for /interior/ routing, but never for >inter-organization routing. So whatever the claimed concerns, after >15 years of effort, the OSI world produced nothing viable for >Internet scale addressing or routing. As with most OSI work, the >deliverable of field utility was always two years from now. > >This is absolutely correct for routing, but absolutely incorrect for >addressing. OSI mandated an addressing architecture that both >aggregated and was variable length. The Internet still hasn't >learned this lesson and insists on a fixed length, non-scalable >addressing scheme. Tony, I believe that you are wrong on the first point. There was an Inter-Domain Routing Protocol developed in OSI. From the Acknowledgements section of RFC1771 (BGP-4): "This updated version of the document is the product of the IETF IDR Working Group with Yakov Rekhter and Tony Li as editors. Certain sections of the document borrowed heavily from IDRP [7], which is the OSI counterpart of BGP. For this credit should be given to the ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles Kunzinger (IBM Corp.) who was the IDRP editor within that group." If memory serves the IETF and ISO versions were developed in parallel by the same people. Unlike the Internet, the OSI stack did have a full addressing architecture and distinguished between addressing the interface and addressing the node. 
Application Process names were location independent and not tied to any particular address, unlike domain names. I was a little confused by Dave Crocker's comment about X.400 email addresses. It doesn't sound right since they were in the Application Layer and names were location independent. However, since X.400 was primarily developed by the PTT faction of OSI, I can readily believe that their implementations were tied to providers. Although I doubt this was the general case. But I can check. Take care, John From dhc2 at dcrocker.net Thu Feb 10 19:38:12 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 10 Feb 2011 19:38:12 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> Message-ID: <4D54AF24.8040301@dcrocker.net> On 2/10/2011 7:18 PM, John Day wrote: > I was a little confused by Dave Crocker's comment about X.400 email addresses. > It doesn't sound right since they were in the Application Layer and names were > location independent. However, since X.400 was primarily developed by the PTT > faction of OSI, I can readily believe that their implementations were tied to > providers. Although I doubt this was the general case. But I can check. I have no idea what "location independent" naming this refers to, but it wasn't X.400 email addresses: Addressing was designed as a set of attribute/value pairs, including "Administrative Management Domain (ADMD)" which specified the carrier. You had a different address for each ADMD you could access.
In a coarse-grained manner, this was source routing, for a model of organization -> carrier -> organization Eventually, there were enough organizations connected to all the x.400 telco carriers -- yes, I mean all, because the total wasn't large -- for an X.400 hack to be allowed which explicitly specified a null ADMD, meaning "use whichever one you want". Separately, the human factors of a long sequence of textual attribute/value pairs were impressively unwieldy. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From tony.li at tony.li Thu Feb 10 22:57:22 2011 From: tony.li at tony.li (Tony Li) Date: Thu, 10 Feb 2011 22:57:22 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> Message-ID: <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> John, > I believe that you are wrong on the first point. There was an Inter-Domain Routing Protocol developed in OSI. > > From the Acknowledgements section of RFC1771 (BGP-4): > > "This updated version of the document is the product of the IETF IDR Working Group with Yakov Rekhter and Tony Li as editors. Certain > sections of the document borrowed heavily from IDRP [7], which is the OSI counterpart of BGP. For this credit should be given to the ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles Kunzinger (IBM Corp.) who was the IDRP editor within that group." With all due respect, BGP was first developed for IP. Yakov then ported it to OSI, where it became IDRP. > If memory serves the IETF and ISO versions were developed in parallel by the same people. I would argue that it was closer to sequentially with alternating phases, but it was certainly Yakov's doing, with Lyman's, Charlie's, and Sue's help on the OSI side, and Kirk and myself on the IP side.
When it came to BGP4, we ended up backporting much of the work on aggregation from IDRP back into BGP. Thus the quote above. Tony From bob.hinden at gmail.com Fri Feb 11 00:40:33 2011 From: bob.hinden at gmail.com (Bob Hinden) Date: Fri, 11 Feb 2011 09:40:33 +0100 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> Message-ID: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> Tony, My memory matches yours. That is, BGP was first developed for IP, a new version (IDRP) was created for OSI, and the improvements were later brought back in to BGP. Bob On Feb 11, 2011, at 7:57 AM, Tony Li wrote: > > John, > > >> I believe that you are wrong on the first point. There was an Inter-Domain Routing Protocol developed in OSI. >> >> From the Acknowledgements section of RFC1771 (BGP-4): >> >> "This updated version of the document is the product of the IETF IDR Working Group with Yakov Rekhter and Tony Li as editors. Certain >> sections of the document borrowed heavily from IDRP [7], which is the OSI counterpart of BGP. For this credit should be given to the ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles Kunzinger (IBM Corp.) who was the IDRP editor within that group." > > > With all due respect, BGP was first developed for IP. Yakov then ported it to OSI, where it became IDRP. > > >> If memory serves the IETF and ISO versions were developed in parallel by the same people. > > > I would argue that it was closer to sequentially with alternating phases, but it was certainly Yakov's doing, with Lyman, Charlie's, and Sue's help on the OSI side, and Kirk and myself on the IP side. When it came to BGP4, we ended up backporting much of the work on aggregation from IDRP back into BGP. 
Thus the quote above. > > Tony > > > From richard at bennett.com Sun Feb 13 16:58:04 2011 From: richard at bennett.com (Richard Bennett) Date: Sun, 13 Feb 2011 16:58:04 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> Message-ID: <4D587E1C.6070008@bennett.com> It seems that competition has a role to play in the development of networking standards. RB On 2/11/2011 12:40 AM, Bob Hinden wrote: > Tony, > > My memory matches yours. That is, BGP was first developed for IP, a new version (IDRP) was created for OSI, and the improvements were later brought back in to BGP. > > Bob > > > On Feb 11, 2011, at 7:57 AM, Tony Li wrote: > >> John, >> >> >>> I believe that you are wrong on the first point. There was an Inter-Domain Routing Protocol developed in OSI. >>> >>> From the Acknowledgements section of RFC1771 (BGP-4): >>> >>> "This updated version of the document is the product of the IETF IDR Working Group with Yakov Rekhter and Tony Li as editors. Certain >>> sections of the document borrowed heavily from IDRP [7], which is the OSI counterpart of BGP. For this credit should be given to the ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles Kunzinger (IBM Corp.) who was the IDRP editor within that group." >> >> With all due respect, BGP was first developed for IP. Yakov then ported it to OSI, where it became IDRP. >> >> >>> If memory serves the IETF and ISO versions were developed in parallel by the same people. >> >> I would argue that it was closer to sequentially with alternating phases, but it was certainly Yakov's doing, with Lyman, Charlie's, and Sue's help on the OSI side, and Kirk and myself on the IP side.
When it came to BGP4, we ended up backporting much of the work on aggregation from IDRP back into BGP. Thus the quote above. >> >> Tony >> >> >> > -- Richard Bennett From galmes at tamu.edu Sun Feb 13 20:13:03 2011 From: galmes at tamu.edu (Guy Almes) Date: Sun, 13 Feb 2011 22:13:03 -0600 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D587E1C.6070008@bennett.com> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> Message-ID: <4D58ABCF.4050204@tamu.edu> Richard, I'm sure that's true generally, and I suppose there were examples in the great TCP/IP-vs-OSI era. In this particular case, however, it was more cooperation than competition. The same people (Yakov and others) were involved in both and just trying to get the best technical work into both BGP and IDRP. -- Guy On 2/13/11 6:58 PM, Richard Bennett wrote: > IT seems that competition has a role to play in the development of > networking standards. > > RB > > On 2/11/2011 12:40 AM, Bob Hinden wrote: >> Tony, >> >> My memory matches yours. That is, BGP was first developed for IP, a >> new version (IDRP) was created for OSI, and the improvements were >> later brought back in to BGP. >> >> Bob >> >> >> On Feb 11, 2011, at 7:57 AM, Tony Li wrote: >> >>> John, >>> >>> >>>> I believe that you are wrong on the first point. There was an >>>> Inter-Domain Routing Protocol developed in OSI. >>>> >>>> From the Acknowledgements section of RFC1771 (BGP-4): >>>> >>>> "This updated version of the document is the product of the IETF IDR >>>> Working Group with Yakov Rekhter and Tony Li as editors. Certain >>>> sections of the document borrowed heavily from IDRP [7], which is >>>> the OSI counterpart of BGP. 
For this credit should be given to the >>>> ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles >>>> Kunzinger (IBM Corp.) who was the IDRP editor within that group." >>> >>> With all due respect, BGP was first developed for IP. Yakov then >>> ported it to OSI, where it became IDRP. >>> >>> >>>> If memory serves the IETF and ISO versions were developed in >>>> parallel by the same people. >>> >>> I would argue that it was closer to sequentially with alternating >>> phases, but it was certainly Yakov's doing, with Lyman, Charlie's, >>> and Sue's help on the OSI side, and Kirk and myself on the IP side. >>> When it came to BGP4, we ended up backporting much of the work on >>> aggregation from IDRP back into BGP. Thus the quote above. >>> >>> Tony >>> >>> >>> >> > From richard at bennett.com Sun Feb 13 20:28:07 2011 From: richard at bennett.com (Richard Bennett) Date: Sun, 13 Feb 2011 20:28:07 -0800 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D58ABCF.4050204@tamu.edu> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> Message-ID: <4D58AF57.9010006@bennett.com> Not so much: there were more people in the OSI and Internet groups in the 1980s who didn't work with the other group; it just took too much time to do both. We've all seen examples where a given standards body chose the wrong proposal, and other examples where it imported a better one than the local product. (Good) standards are apparently scarce resources. On 2/13/2011 8:13 PM, Guy Almes wrote: > Richard, > I'm sure that's true generally, and I suppose there were examples in > the great TCP/IP-vs-OSI era. > In this particular case, however, it was more cooperation than > competition.
The same people (Yakov and others) were involved in both > and just trying to get the best technical work into both BGP and IDRP. > -- Guy > > On 2/13/11 6:58 PM, Richard Bennett wrote: >> IT seems that competition has a role to play in the development of >> networking standards. >> >> RB >> >> On 2/11/2011 12:40 AM, Bob Hinden wrote: >>> Tony, >>> >>> My memory matches yours. That is, BGP was first developed for IP, a >>> new version (IDRP) was created for OSI, and the improvements were >>> later brought back in to BGP. >>> >>> Bob >>> >>> >>> On Feb 11, 2011, at 7:57 AM, Tony Li wrote: >>> >>>> John, >>>> >>>> >>>>> I believe that you are wrong on the first point. There was an >>>>> Inter-Domain Routing Protocol developed in OSI. >>>>> >>>>> From the Acknowledgements section of RFC1771 (BGP-4): >>>>> >>>>> "This updated version of the document is the product of the IETF IDR >>>>> Working Group with Yakov Rekhter and Tony Li as editors. Certain >>>>> sections of the document borrowed heavily from IDRP [7], which is >>>>> the OSI counterpart of BGP. For this credit should be given to the >>>>> ANSI X3S3.3 group chaired by Lyman Chapin (BBN) and to Charles >>>>> Kunzinger (IBM Corp.) who was the IDRP editor within that group." >>>> >>>> With all due respect, BGP was first developed for IP. Yakov then >>>> ported it to OSI, where it became IDRP. >>>> >>>> >>>>> If memory serves the IETF and ISO versions were developed in >>>>> parallel by the same people. >>>> >>>> I would argue that it was closer to sequentially with alternating >>>> phases, but it was certainly Yakov's doing, with Lyman, Charlie's, >>>> and Sue's help on the OSI side, and Kirk and myself on the IP side. >>>> When it came to BGP4, we ended up backporting much of the work on >>>> aggregation from IDRP back into BGP. Thus the quote above. 
>>>> >>>> Tony >>>> >>>> >>>> >>> >> -- Richard Bennett From amyzing at talsever.com Sun Feb 13 22:11:39 2011 From: amyzing at talsever.com (Amelia A Lewis) Date: Mon, 14 Feb 2011 01:11:39 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D58AF57.9010006@bennett.com> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> Message-ID: <20110214011139211091.14c54835@talsever.com> A Unix application reports status zero for success. Any other number (not actually an infinite choice, alas, to undermine the analogy) is failure. It's not always true that there's only one right answer, in standards work. It's not a bad approximation, though. It *is* always true, I think, that the folks who have committed their energy and reputation to a "wrong" (less-optimal) answer are unlikely to back down. Saying "oh, I was wrong, and you were right," is ... well, in specifications work, it's probably more unusual than the annual recurrence of 1 April. It's probably more common for someone to abandon a position, adopting a new one, and indignantly rejecting any suggestion that they ever advocated otherwise. Still more common, in my experience, is that the captain goes down with the ship. "What iceberg, dammit?" I don't know that OSI and TCP/IP were that much in "competition," though. The OSI stack was backed by a consortium and by governments, and everyone knew it was gonna be better. When it was finished, and worked. TCP/IP was just something that worked. Eating what's set before you, solving the problems you face without waiting to solve the problems that you think *will* be faced, seems to be a recipe for getting food on the table by dinnertime. 
It's likely that it leads to problems, but the problems are gonna be defined by the cook who produces something, not the one who plans a feast for tomorrow. Matter of having diners still living, or something. That is, while OSI may have solved a number of thorny technical problems in theory, it isn't clear that they were practical problems that anyone was ever going to face. TCP/IP addressed the practical problems. It created practical problems. I don't know that these two approaches are properly labeled as 'competition.' Amy! On Sun, 13 Feb 2011 20:28:07 -0800, Richard Bennett wrote: > Not so much, there were more people in the OSI and Internet groups in > the 1980s who didn't work with the other group, it just took too much > time to do both. We've all seen examples where a given standards > body chose the wrong proposal, and other examples where it imported a > better one than the local product. (Good) standards are apparently > scarce resources. > > On 2/13/2011 8:13 PM, Guy Almes wrote: >> Richard, >> I'm sure that's true generally, and I suppose there were examples >> in the great TCP/IP-vs-OSI era. >> In this particular case, however, it was more cooperation than >> competition. The same people (Yakov and others) were involved in >> both and just trying to get the best technical work into both BGP >> and IDRP. >> -- Guy > > -- > Richard Bennett -- Amelia A. Lewis amyzing {at} talsever.com The less I seek my source for some definitive, the closer I am to fine. 
-- Indigo Girls From mfidelman at meetinghouse.net Mon Feb 14 05:43:58 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 14 Feb 2011 08:43:58 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <20110214011139211091.14c54835@talsever.com> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> Message-ID: <4D59319E.1040700@meetinghouse.net> Amelia A Lewis wrote: > I don't know that OSI and TCP/IP were that much in "competition," > though. The OSI stack was backed by a consortium and by governments, > and everyone knew it was gonna be better. When it was finished, and > worked. TCP/IP was just something that worked. > I can relate to this comment. I watched a lot of this from the sidelines; I was at BBN from 1985-92, working on various aspects of the ARPANET transition to the Defense Data Network - and dealing with some of the hassles of reconciling policy with reality (remember the "dual stack" policy; sort of like the Ada policy). TCP/IP was developed bottom up - "rough consensus and running code" - by academics and engineers. OSI was an attempt to impose a classical, top-down, standards approach - write the standard by committee, then fix things later. As I remember it, the only justification I ever heard for OSI was a political one: "the Europeans will never accept a standard developed by the US Dept. of Defense," and there was a lot of muscle put behind OSI from the General Accounting Office (I never quite understood the politics there). 
From where I sat, the technical arguments about how OSI solved a bunch of problems came across as rationalizations from people who spent their time sitting on standards bodies, rather than building things. Anyway, as I recall, when the first European INTEROP came along, the OSI folks were still saying "just wait, it will be great when we get there," and everyone else was blown away by the fact TCP/IP was working and commercially available from multiple vendors. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From eric.gade at gmail.com Mon Feb 14 06:46:23 2011 From: eric.gade at gmail.com (Eric Gade) Date: Mon, 14 Feb 2011 14:46:23 +0000 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D59319E.1040700@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> Message-ID: On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman wrote: > OSI was an attempt to impose a classical, top-down, standards approach > It is my understanding that a top-down process is fairly uncommon as far as the formation of international technical standards is concerned, and that OSI was aberrant in this regard. -- Eric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mfidelman at meetinghouse.net Mon Feb 14 07:26:53 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 14 Feb 2011 10:26:53 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> Message-ID: <4D5949BD.4040505@meetinghouse.net> Eric Gade wrote: > On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman > > wrote: > > OSI was an attempt to impose a classical, top-down, standards approach > > It is my understanding that a top-down process is fairly uncommon as > far as the formation of international technical standards are > concerned, and that OSI was abberant in this regard. Really? With the exception of IETF standards, I've seen pretty much everything else get written by committee, then promulgated, then fixed in later revisions. As far as I can tell, the bottom-up model, based on "rough consensus and running code," as well as multiple interoperable implementations -- with a very slow progression from experimental to recommended to mandatory -- is unique to IETF. -- In theory, there is no difference between theory and practice. In practice, there is. ....
Yogi Berra From eric.gade at gmail.com Mon Feb 14 08:25:10 2011 From: eric.gade at gmail.com (Eric Gade) Date: Mon, 14 Feb 2011 16:25:10 +0000 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D5949BD.4040505@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> Message-ID: I mean in terms of ISO standards in general, not just networking or computer-based messaging or what have you. It's abnormal for them as an organization to take on a task in that way. On Mon, Feb 14, 2011 at 3:26 PM, Miles Fidelman wrote: > Eric Gade wrote: > > On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman < >> mfidelman at meetinghouse.net > wrote: >> >> OSI was an attempt to impose a classical, top-down, standards approach >> >> It is my understanding that a top-down process is fairly uncommon as far >> as the formation of international technical standards are concerned, and >> that OSI was abberant in this regard. >> > Really? With the exception of IETF standards, I've seen pretty much > everything else get written by committee, then promulgated, then fixed in > later revisions. > > As far as I can tell, the bottom-up model, based on "rough consensus and > running code," as well as multiple interoperable implementations - with a > very slow progression from experimental to recommended to mandatory - is > unique to IETF. > > > -- > In theory, there is no difference between theory and practice. > In practice, there is. .... Yogi Berra > > > -- Eric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From larrysheldon at cox.net Mon Feb 14 09:41:54 2011 From: larrysheldon at cox.net (Larry Sheldon) Date: Mon, 14 Feb 2011 11:41:54 -0600 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D5949BD.4040505@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> Message-ID: <4D596962.9010809@cox.net> On 2/14/2011 9:26 AM, Miles Fidelman wrote: > Eric Gade wrote: >> On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman >> > wrote: >> >> OSI was an attempt to impose a classical, top-down, standards approach >> >> It is my understanding that a top-down process is fairly uncommon as >> far as the formation of international technical standards are >> concerned, and that OSI was abberant in this regard. > Really? With the exception of IETF standards, I've seen pretty much > everything else get written by committee, then promulgated, then fixed > in later revisions. > > As far as I can tell, the bottom-up model, based on "rough consensus and > running code," as well as multiple interoperable implementations - with > a very slow progression from experimental to recommended to mandatory - > is unique to IETF. That would be an interesting thing to study. Seems to me, just off the top of my head, that an awful lot of the important inventions went from "wow, look at how neat this is" to "I wonder if there is a way to make use of (aka if there is a way to turn a buck or bead or clam or ...) this." Not the other way around. Who do you reckon was funding the Committee To Develop A Way To Cook Meat?
-- Superfluity does not vitiate California Civil Code quote-#3537 Life should not be a journey to the grave with the intention of arriving safely in an attractive and well preserved body, but rather to skid in sideways, your body thoroughly used up, totally worn out and screaming, "Yah hoo! What a ride!" ripped from "GM" Roper http://lwolt.wordpress.com/ http://tinyurl.com/269dspw # <-- Where I live From jeanjour at comcast.net Mon Feb 14 09:51:44 2011 From: jeanjour at comcast.net (John Day) Date: Mon, 14 Feb 2011 12:51:44 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> Message-ID: Yes, this is true. The idea was that technology was moving fast enough that one needed to standardize to a point in the future. The mistake that OSI made, which is at the root of all the others, was inviting the PTTs to participate as a joint project. Their desires were definitely rooted in maintaining the status quo. Most of the flaws in the OSI model can be traced to the PTTs, although some were just the state of understanding and are found in the Internet architecture as well. So is standardizing to a point in the future a bad idea? It is always hard to predict the future, but what the future needs is usually at a more detailed level than the standards need to address. But as with any standards process the most important thing is to get them reasonably stable before most people think they are important.
At 14:46 +0000 2011/02/14, Eric Gade wrote: >On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman ><mfidelman at meetinghouse.net> >wrote: > > > >OSI was an attempt to impose a classical, top-down, standards approach > >It is my understanding that a top-down process is fairly uncommon as >far as the formation of international technical standards are >concerned, and that OSI was abberant in this regard. > > > >-- >Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanjour at comcast.net Mon Feb 14 09:54:38 2011 From: jeanjour at comcast.net (John Day) Date: Mon, 14 Feb 2011 12:54:38 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D5949BD.4040505@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> Message-ID: Yes, I believe you are correct. Most standards committees codify current practice. As Smolin would characterize it, the IETF is the only group to base its work on a craft tradition. At 10:26 -0500 2011/02/14, Miles Fidelman wrote: >Eric Gade wrote: >>On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman >>> >>wrote: >> >> OSI was an attempt to impose a classical, top-down, standards approach >> >>It is my understanding that a top-down process is fairly uncommon >>as far as the formation of international technical standards are >>concerned, and that OSI was abberant in this regard. >Really? With the exception of IETF standards, I've seen pretty much >everything else get written by committee, then promulgated, then >fixed in later revisions. 
> >As far as I can tell, the bottom-up model, based on "rough consensus >and running code," as well as multiple interoperable implementations >- with a very slow progression from experimental to recommended to >mandatory - is unique to IETF. > >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From jeanjour at comcast.net Mon Feb 14 09:59:07 2011 From: jeanjour at comcast.net (John Day) Date: Mon, 14 Feb 2011 12:59:07 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D5949BD.4040505@meetinghouse.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> Message-ID: Yes, I believe you are correct. Most standards committees codify current practice after the science and engineering have been done. As Smolin would characterize it, the IETF is the only group to base its work on a craft tradition. At 10:26 -0500 2011/02/14, Miles Fidelman wrote: >Eric Gade wrote: >>On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman >>> >>wrote: >> >> OSI was an attempt to impose a classical, top-down, standards approach >> >>It is my understanding that a top-down process is fairly uncommon >>as far as the formation of international technical standards are >>concerned, and that OSI was abberant in this regard. >Really? With the exception of IETF standards, I've seen pretty much >everything else get written by committee, then promulgated, then >fixed in later revisions. 
> >As far as I can tell, the bottom-up model, based on "rough consensus >and running code," as well as multiple interoperable implementations >- with a very slow progression from experimental to recommended to >mandatory - is unique to IETF. > >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From jeanjour at comcast.net Mon Feb 14 10:31:23 2011 From: jeanjour at comcast.net (John Day) Date: Mon, 14 Feb 2011 13:31:23 -0500 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: <4D596962.9010809@cox.net> References: <4D50C000.3060003@tamu.edu> <4D50D7A9.5080308@dcrocker.net> <4D530709.6020304@bennett.com> <4D54215B.2060506@dcrocker.net> <049F28A7-6337-4498-97CF-2DDE482BEC77@tony.li> <5A8FD54D-1F49-4F0F-9CD6-8D7B97443696@tony.li> <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> <4D596962.9010809@cox.net> Message-ID: At 11:41 -0600 2011/02/14, Larry Sheldon wrote: >On 2/14/2011 9:26 AM, Miles Fidelman wrote: >>Eric Gade wrote: >>>On Mon, Feb 14, 2011 at 1:43 PM, Miles Fidelman >>>> wrote: >>> >>>OSI was an attempt to impose a classical, top-down, standards approach >>> >>>It is my understanding that a top-down process is fairly uncommon as >>>far as the formation of international technical standards are >>>concerned, and that OSI was abberant in this regard. >>Really? With the exception of IETF standards, I've seen pretty much >>everything else get written by committee, then promulgated, then fixed >>in later revisions. >> >>As far as I can tell, the bottom-up model, based on "rough consensus and >>running code," as well as multiple interoperable implementations - with >>a very slow progression from experimental to recommended to mandatory - >>is unique to IETF. 
> >That would be an interesting thing to study. Seems to me, just off >the top of my head, that an awful lot of the important inventions >went from "wow, look at how neat this is" to "I wonder if there is a >way to make use of (aka if there is a way to turn a buck or bead or >clam or ...) this. Not the other way around. > >Who do you reckon was funding the Committee To Develop A Way To Cook Meat? >-- You miss the point. It is not about developing technology but standards. Why would one want to standardize cooking meat? In networks, it was clear at the beginning that standards were necessary. Note that computer standards were done before networks but were not that important nor followed that closely. FORTRAN and COBOL differed immensely between systems. There was no real agreement on character set. But the ARPANET needed an NWG immediately, and what it came up with had to be followed closely or nothing worked. In general, companies detest standards and see them as a necessary evil. From p.schow at comcast.net Mon Feb 14 11:23:04 2011 From: p.schow at comcast.net (Peter Schow) Date: Mon, 14 Feb 2011 12:23:04 -0700 Subject: [ih] Ken Olsen's impact on the Internet In-Reply-To: References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> <4D596962.9010809@cox.net> Message-ID: <20110214192304.GA17762@panacea.comcast.net> On Mon, Feb 14, 2011 at 01:31:23PM -0500, John Day wrote: > In general, companies detest standards and see them as a necessary evil. Unisys viewed the OSI standards as an opportunity for system interoperability among its five core hardware/OS platforms (there were others) obtained via their "power of 2" merger between Burroughs and Sperry. Most of these systems could not talk to each other.
If OSI ever had a high point in the USA, it was probably coming off a 1988 industry conference in Baltimore where MAP/TOP, among other things, were featured prominently. I wasn't there but I joined Unisys shortly thereafter and OSI momentum was through the roof, internally. Every network engineer or manager could be seen with FTAM and MHS documents on their desk. Less than a year later, the TCP/IP reality check sank in and the OSI energy was gone. Luckily, like Vint mentions for DEC, TCP/IP development was already happening in parallel by lab-like organizations within the company, fueled somewhat by contracts with the US government. If not for standards such as TCP/IP and OSI, Unisys would have been left to invent their own glue. So in this case, the standards were welcome. In this sense, I guess TCP/IP has simplified the merger & acquisition process for systems and network companies, making it a lot easier when combining/merging product lines. From richard at bennett.com Mon Feb 14 14:00:24 2011 From: richard at bennett.com (Richard Bennett) Date: Mon, 14 Feb 2011 14:00:24 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <20110214192304.GA17762@panacea.comcast.net> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> <4D596962.9010809@cox.net> <20110214192304.GA17762@panacea.comcast.net> Message-ID: <4D59A5F8.6040408@bennett.com> Right, there's always a lot of motivation for standards, especially in networking where interoperability is job 1. My recollection of the process of networking standards back in the late 70s and early 80s was that TCP/IP was meant to be discarded in favor of a more mature approach after some experience was gained with internetworking.
Hence, shortcuts were taken with respect to things like the IPv4 address and the reliance on the WKS convention that everyone knew were sub-optimal at the time. But somehow this TCP/IP successor standard that incorporated the acquired wisdom was never developed. Why is that? RB On 2/14/2011 11:23 AM, Peter Schow wrote: > On Mon, Feb 14, 2011 at 01:31:23PM -0500, John Day wrote: > >> In general, companies detest standards and see them as a necessary evil. > Unisys viewed the OSI standards as an opportunity for system > interoperability, among its five core (there were others) hardware/OS > platforms obtained via their "power of 2" merger between Burroughs > and Sperry. Most of these systems could not talk to each other. > > If OSI ever had a high point in the USA, it was probably coming off > a 1988 industry conference in Baltimore where MAP/TOP, among other things, > were featured prominently. I wasn't there but I joined Unisys shortly > thereafter and OSI momentum was through the roof, internally. Every > network engineer or manager could be seen with FTAM and MHS documents on > their desk. Less than a year later, the TCP/IP reality check sunk > in and the OSI energy was gone. Luckily, like Vint mentions for DEC, > TCP/IP development was already happening in parallel by lab-like > organizations within the company, fueled somewhat by contracts with > the US government. > > If not for standards such as TCP/IP and OSI, Unisys would have been > left to invent their own glue. So in this case, the standards were welcome. > In this sense, I guess TCP/IP has simplified the merger& acqusition > process for systems and network companies, making it a lot easier when > combining/merging product lines. 
> -- Richard Bennett From bernie at fantasyfarm.com Mon Feb 14 15:03:53 2011 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Mon, 14 Feb 2011 18:03:53 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D59A5F8.6040408@bennett.com> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com>, <20110214192304.GA17762@panacea.comcast.net>, <4D59A5F8.6040408@bennett.com> Message-ID: <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> On 14 Feb 2011 at 14:00, Richard Bennett wrote: > My recollection of the process of networking standards back in the late > 70s and early 80s was that TCP/IP was meant to be discarded in favor of > a more mature approach after some experience was gained with > internetworking. You mean akin to how SMTP, "Simple" mail transport protocol, was supposed to be replaced by something better when we figured out what email was going to be...:o) /Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From mfidelman at meetinghouse.net Mon Feb 14 16:45:43 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 14 Feb 2011 19:45:43 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com>, <20110214192304.GA17762@panacea.comcast.net>, <4D59A5F8.6040408@bennett.com> <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> Message-ID: <4D59CCB7.8080502@meetinghouse.net> Bernie Cosell wrote: > On 14 Feb 2011 at 14:00, Richard Bennett wrote: > > >> My recollection of the process of networking standards back in the late >> 70s and early 80s was that TCP/IP was meant to be discarded in favor of >> a more mature approach after some experience was gained with >> internetworking. 
>> > You mean akin to how SMTP, "Simple" mail transport protocol, was supposed > to be replaced by something better when we figured out what email was > going to be...:o) > I'm always amused by the way X.500 disappeared in favor of LDAP ("Simple" and "Light Weight" seem to generally win out over over-engineered). :-) Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Mon Feb 14 16:47:28 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 14 Feb 2011 19:47:28 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D59A5F8.6040408@bennett.com> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <4D587E1C.6070008@bennett.com> <4D58ABCF.4050204@tamu.edu> <4D58AF57.9010006@bennett.com> <20110214011139211091.14c54835@talsever.com> <4D59319E.1040700@meetinghouse.net> <4D5949BD.4040505@meetinghouse.net> <4D596962.9010809@cox.net> <20110214192304.GA17762@panacea.comcast.net> <4D59A5F8.6040408@bennett.com> Message-ID: <4D59CD20.7000308@meetinghouse.net> Richard Bennett wrote: > Right, there's always a lot of motivation for standards, especially in > networking where interoperability is job 1. Well also in things like electrical connectors, nuts and bolts, ..... > > My recollection of the process of networking standards back in the > late 70s and early 80s was that TCP/IP was meant to be discarded in > favor of a more mature approach after some experience was gained with > internetworking. Hence, shortcuts were taken with respect to things > like the IPv4 address and the reliance on the WKS convention that > everyone knew were sub-optimal at the time. But somehow this TCP/IP > successor standard that incorporated the acquired wisdom was never > developed. > > Why is that? Perhaps because incremental evolutionary improvement tends to win out vis-a-vis over-engineering things?
Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From jack at 3kitty.org Mon Feb 14 21:37:04 2011 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 14 Feb 2011 21:37:04 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> , <20110214192304.GA17762@panacea.comcast.net> , <4D59A5F8.6040408@bennett.com> <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> Message-ID: <1297748224.2659.346.camel@localhost> This discussion reminded me of a personal experience that I had about 20 years ago. I think of it as my "Both Sides Now" experience (Google Joni Mitchell - my apologies to the Folk Song crowd). In 1990, I left the "Networking World", and moved "higher in the stack", lured to Oracle as "Internet Architect". At the time, TCP/IP had a sizable installed base, and was an official DoD standard, but was still usually characterized, outside of the academic/research community, as an interim experiment pending the deployment of the OSI global standards. Big and small technology vendors also had their own networking technologies and products - SNA, DECNET, NetWare, Vines, Appletalk, etc. All of these technologies had sizable installed bases, which were to be replaced by OSI (at least that's what they said...). Oracle's business was (and is) databases, which live at the core of most sizable corporation's business mechanisms. Those corporations all had some kind of data center(s) and network(s) and their entire business mechanisms depended on it. Some were "IBM shops", some were "DecNet shops", etc. But they also typically had one or more other technologies in place. Perhaps DecNet was in Engineering, NetWare in Administration, Appletalk in Marketing, etc. Although TCP/IP provided transport, email, and such services, other technologies provided more than TCP. 
E.G., where TCP still only had the 1970s FTP for dealing with files, others had the ability to share files in ways that made them as easy to use as local files. Where TCP had Telnet, others had the ability to use remote CRT screens. Etc. Working "higher in the stack", I encountered mostly people who were in corporations' IT departments, involved with the business' data - as opposed to people in Communications departments, involved with phone lines, and such. The corporations were also mostly not technology companies. Think finance, manufacturing, retail, commodities, shipping, government, etc. Going up the stack, I discovered I had passed through the clouds (remember we always used to draw networks as clouds?). I was now seeing the network technology from the other side - hence the "Both Sides Now" view. The multiplicity of technologies was driving IT guys everywhere crazy. With a data-centric viewpoint, it was hard to get the business data accessible to everyone in the corporation who needed it, since they were often on different technologies. So we borrowed some ideas from the Internet world, and created a product technology which was essentially transport-level routers. This enabled, for example, someone on an Appletalk machine in Marketing to access data on a database running in an SNA world, to look at data that had been just input from an engineer on a machine in DecNet. We called the transport gateways "Interchanges" - but mostly I could use the same slides I had used to talk about routers between networks to talk about Interchanges between different protocol worlds. Our products only worked for database traffic - that made it a simple enough problem to actually implement. Just imagine the problems of testing -- I remember that there were over 30 different implementations of TCP just for PCs, and we had to test all of them (this was before TCP was built-in to Windows). 
Until OSI appeared, this made things workable, although I'm sure you can imagine how complex it was to set up and operate such environments. We ran our own multiprotocol internal worldwide network, so we felt the pain too. Sometime in 1991, IIRC, we held a "Network Forum", which was basically a week-long very small conference where the attendees were several dozen of the CIOs or equivalent from a broad range of customers. Many different industries from multiple continents. As part of the agenda, each attendee described their current network environment - e.g., the SNA shop with a splash of NetWare, etc. It was a broad mix as you'd expect. But there were no "TCP shops", although almost everyone had TCP somewhere in their organization. Later on that week, we went around the room again and asked everyone to tell us what they could about their future plans - where were they heading, which technical path(s), timing, etc. The responses absolutely astounded me. Everyone was planning to go fully to TCP, as fast as possible. Everyone. Let me say that again - everyone. No one was a TCP shop then. Everyone intended to become a "TCP Shop". As fast as they could. Everyone. Many hours of discussion later, I could see the pattern: - TCP had a large installed base, and could be observed to be working - the US government had committed to TCP as a standard, and was enforcing it by procurement policy - TCP was delivering what OSI was promising - TCP was delivering functional systems, while OSI was delivering lots of paper - TCP was enough like OSI that, when/if OSI appeared it should be relatively straightforward to migrate - the new hires in IT, coming out of universities all over the world, knew all about TCP when they started work; they rarely knew anything about SNA, DecNet, OSI, etc., and they weren't very interested in learning - their own internal efforts with TCP had so far been refreshingly successful (no doubt because of those new hires...) 
- their competitors' similar efforts seemed to be successful (scary!) - they were tired of waiting, and couldn't stand, or afford, the multi-technology morass and perennial promises any longer - TCP was very likely to be a lot less expensive to procure and operate than the current systems - TCP breaks the "lock-in" of a vendor-specific technology, and puts the customer (IT) back in control There wasn't any mention of anything like "technical superiority", or comparisons of protocol features, or anything like that. In short - TCP does the job, is proven, costs less, and works now. Our "Interchange" technology turned out to be quite useful, but more as a migration tool, allowing the various IT components to be moved into the TCP world in a well-orchestrated fashion. The business functions could continue to function even as the components got moved from one technology to another. In some small way, this probably helped the TCP conversion also. So, I think that the main driving force behind TCP's explosion into the mainstream was the Users (CIOs and staffs) in all those corporations in all those industries around the world who saw it as the solution to their problems. When the Web appeared a few years later, it cast all those migration plans in concrete, by providing "the" way for all those companies to interact with their customers, suppliers, regulators, investors, etc. The Web made a wonderful GUI for database applications. But only over TCP. Hope you found this interesting, /Jack AIRI - As I Remember It From jnc at mercury.lcs.mit.edu Mon Feb 14 22:20:37 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 15 Feb 2011 01:20:37 -0500 (EST) Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet Message-ID: <20110215062037.1DE8918C096@mercury.lcs.mit.edu> > From: Richard Bennett > My recollection of the process of networking standards .. 
was that > TCP/IP was meant to be discarded in favor of a more mature approach > after some experience was gained with internetworking. Hence, shortcuts > were taken with respect to things like the IPv4 address ... that > everyone knew were sub-optimal at the time. But somehow this TCP/IP > successor standard that incorporated the acquired wisdom was never > developed. > Why is that? Economics. The added value of any/all additional features in the 'next generation' networking stuff was less than the cost to convert to it -> nobody converted. 'Network effects' (the size of the installed base _you could communicate with_, which is, after all, the whole point of a _communication_ protocol) exacerbated both the diminution of the benefit and the cost of the conversion. (Why convert to something if... you can only talk to very few people using it?) > From: Jack Haverty > - the new hires in IT, coming out of universities all over the world, > knew all about TCP when they started work This factor is often overlooked, but it's significant. It drove the spread of Unix, and it's driving Linux now. > Our "Interchange" technology turned out to be quite useful, but more as > a migration tool, allowing the various IT components to be moved into > the TCP world in a well-orchestrated fashion. The business functions > could continue to function even as the components got moved from one > technology to another. In some small way, this probably helped the TCP > conversion also. Ditto for the 'multi-protocol router/backbone' concept at the networking layer.
Noel From dhc2 at dcrocker.net Tue Feb 15 05:36:45 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Tue, 15 Feb 2011 05:36:45 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <1297748224.2659.346.camel@localhost> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> , <20110214192304.GA17762@panacea.comcast.net> , <4D59A5F8.6040408@bennett.com> <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> <1297748224.2659.346.camel@localhost> Message-ID: <4D5A816D.60507@dcrocker.net> On 2/14/2011 9:37 PM, Jack Haverty wrote: > Later on that week, we went around the room again and asked everyone to > tell us what they could about their future plans - where were they > heading, which technical path(s), timing, etc. > > The responses absolutely astounded me. In the late 80s I was running an engineering shop that did TCP stacks for a variety of platforms. (We were one of your PC choices, although the cash cow was for DEC's VMS.) In anticipation of the industry switch to OSI, we added an OSI stack as an OEM package and made a point to build a TCP-to-OSI gateway package to help folks with their transition to the newer global standard. When we went to sell it, we found that the only transition product folks were asking for was OSI-to-TCP. Seriously. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From galmes at tamu.edu Tue Feb 15 06:29:23 2011 From: galmes at tamu.edu (Guy Almes) Date: Tue, 15 Feb 2011 08:29:23 -0600 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <1297748224.2659.346.camel@localhost> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> , <20110214192304.GA17762@panacea.comcast.net> , <4D59A5F8.6040408@bennett.com> <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> <1297748224.2659.346.camel@localhost> Message-ID: <4D5A8DC3.8040402@tamu.edu> Jack, Thanks for the interesting and thoughtful comments. It definitely has the ring of truth.
Buried toward the end is one of the major (at the time) underestimated impacts of the NSFnet program. We thought that the NSFnet was for the science users (as it was!), but "letting the undergraduates in the dormitories connect too" may have been equally important. The graduating classes of '89 through about '97 were unbelievable change agents. -- Guy On 2/14/11 11:37 PM, Jack Haverty wrote: > ... > - the new hires in IT, coming out of universities all over the world, > knew all about TCP when they started work; they rarely knew anything > about SNA, DecNet, OSI, etc., and they weren't very interested in > learning > - their own internal efforts with TCP had so far been refreshingly > successful (no doubt because of those new hires...) >... From johnl at iecc.com Tue Feb 15 23:48:30 2011 From: johnl at iecc.com (John Levine) Date: 16 Feb 2011 07:48:30 -0000 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> Message-ID: <20110216074830.11070.qmail@joyce.lan> >You mean akin to how SMTP, "Simple" mail transport protocol, was supposed >to be replaced by something better when we figured out what email was >going to be...:o) It may yet happen if we ever figure it out. Regards, John Levine, johnl at iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly From dot at dotat.at Wed Feb 16 04:15:25 2011 From: dot at dotat.at (Tony Finch) Date: Wed, 16 Feb 2011 12:15:25 +0000 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <20110216074830.11070.qmail@joyce.lan> References: <20110216074830.11070.qmail@joyce.lan> Message-ID: On Wed, 16 Feb 2011, John Levine wrote: > >You mean akin to how SMTP, "Simple" mail transport protocol, was supposed > >to be replaced by something better when we figured out what email was > >going to be...:o) > > It may yet happen if we ever figure it out.
I think it already happened with MIME. It probably needs to happen again, though :-) Tony. -- f.anthony.n.finch http://dotat.at/ HUMBER THAMES DOVER WIGHT PORTLAND: NORTH BACKING WEST OR NORTHWEST, 5 TO 7, DECREASING 4 OR 5, OCCASIONALLY 6 LATER IN HUMBER AND THAMES. MODERATE OR ROUGH. RAIN THEN FAIR. GOOD. From vint at google.com Wed Feb 16 04:34:46 2011 From: vint at google.com (Vint Cerf) Date: Wed, 16 Feb 2011 07:34:46 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: the obvious next thing would be some kind of uniform agreement on strong authentication of the source of email and protecting contents. I know about PGP of course, but it's not uniformly implemented and I think we could usefully try again. Last time we tried, it was called PEM and suffered from too pure a hierarchy of certificates, I think. v On Wed, Feb 16, 2011 at 7:15 AM, Tony Finch wrote: > On Wed, 16 Feb 2011, John Levine wrote: > >> >You mean akin to how SMTP, "Simple" mail transport protocol, was supposed >> >to be replaced by something better when we figured out what email was >> >going to be...:o) >> >> It may yet happen if we ever figure it out. > > I think it already happened with MIME. It probably needs to happen again, > though :-) > > Tony. > -- > f.anthony.n.finch ? ?http://dotat.at/ > HUMBER THAMES DOVER WIGHT PORTLAND: NORTH BACKING WEST OR NORTHWEST, 5 TO 7, > DECREASING 4 OR 5, OCCASIONALLY 6 LATER IN HUMBER AND THAMES. MODERATE OR > ROUGH. RAIN THEN FAIR. GOOD. 
> From craig at aland.bbn.com Wed Feb 16 04:39:11 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Wed, 16 Feb 2011 07:39:11 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet Message-ID: <20110216123911.12C3128E137@aland.bbn.com> > On Wed, 16 Feb 2011, John Levine wrote: > > > >You mean akin to how SMTP, "Simple" mail transport protocol, was supposed > > >to be replaced by something better when we figured out what email was > > >going to be...:o) > > > > It may yet happen if we ever figure it out. > > I think it already happened with MIME. It probably needs to happen again, > though :-) Bite thy tongue :-). (I'm reminded of Mike O'Brien's injunction many years ago to never say the three letters "MTP" in his presence please). More seriously... SMTP is rev 3 of email delivery (with a 2nd system syndrome step): FTP begat MTP begat SMTP. MIME is rev N in a continuing saga of email header/body format rules: RFC 561 -> RFC 680 (which never quite launched) -> RFC 724/733 -> RFC 822 -> MIME One could derive lessons here about many things such as operational demands driving innovation (certainly true for SMTP), 2nd system syndrome (both paths had it), the benefits of actually using the system one is designing, etc... Craig From jcurran at istaff.org Wed Feb 16 05:19:48 2011 From: jcurran at istaff.org (John Curran) Date: Wed, 16 Feb 2011 08:19:48 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: On Feb 16, 2011, at 7:34 AM, Vint Cerf wrote: > the obvious next thing would be some kind of uniform agreement on > strong authentication of the source of email and protecting contents. > I know about PGP of course, but it's not uniformly implemented and I > think we could usefully try again. Last time we tried, it was called > PEM and suffered from too pure a hierarchy of certificates, I think.
That's one route to take, and has been our general approach to such problems (encryption and authentication at the application layer.) This has generally resulted in us getting application-specific encryption, and no useful authentication at all; at which point, the workarounds to no authentication appear, mostly IP address-based. On the present trajectory, these workarounds will all fail shortly, as the shortage of IPv4 address space causes black market unregistered use, and the abundance of IPv6 space makes "disposable address blocks" (similar to "saturday night special" disposable handguns) quite possible. This needs to be fixed asap via policy, or we have to completely give up on any expectations of useful identity information from the network layer and significantly improve our efforts in application-based authentication. Apologies for the digression... /John From randy at psg.com Wed Feb 16 06:39:11 2011 From: randy at psg.com (Randy Bush) Date: Wed, 16 Feb 2011 22:39:11 +0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: > On the present trajectory, these workarounds will all fail shortly, as > the shortage of IPv4 address space causes black market unregistered > use or arin could stop the canutian fantasy of holding back the tide and the ipv4 space would be registered.
randy From jcurran at istaff.org Wed Feb 16 06:58:25 2011 From: jcurran at istaff.org (John Curran) Date: Wed, 16 Feb 2011 09:58:25 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: <25BFE5A9-20F4-4D44-90AE-8D1983ECBF50@istaff.org> On Feb 16, 2011, at 9:39 AM, Randy Bush wrote: >> On the present trajectory, these workarounds will all fail shortly, as >> the shortage of IPv4 address space causes black market unregistered >> use > > or arin could stop the canutian fantasy of holding back the tide and the > ipv4 space would be registered. Randy - Those who want to engage in crime (spam, ddos, etc.) decided long, long ago that they wouldn't bother with maintaining accurate information if not required to do so. Luckily, most of them use a land line, credit card or bank account at some point, so law enforcement isn't completely helpless. /John From randy at psg.com Wed Feb 16 07:02:07 2011 From: randy at psg.com (Randy Bush) Date: Wed, 16 Feb 2011 23:02:07 +0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <25BFE5A9-20F4-4D44-90AE-8D1983ECBF50@istaff.org> References: <20110216074830.11070.qmail@joyce.lan> <25BFE5A9-20F4-4D44-90AE-8D1983ECBF50@istaff.org> Message-ID: >>> On the present trajectory, these workarounds will all fail shortly, >>> as the shortage of IPv4 address space causes black market >>> unregistered use >> or arin could stop the canutian fantasy of holding back the tide and >> the ipv4 space would be registered. > Randy - Those who want to engage in crime (spam, ddos, etc.) decided > long, long ago that they wouldn't bother with maintaining accurate > information if not required to do so. Luckily, most of them use a > land line, credit card or bank account at some point, so law enforcement > isn't completely helpless. my mistake. abject apologies.
i thought we were talking about email authentication and not child porn, terrorism, and black helicopters. randy From galvin+internet-history at elistx.com Wed Feb 16 08:13:51 2011 From: galvin+internet-history at elistx.com (James Galvin) Date: Wed, 16 Feb 2011 11:13:51 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: Actually, PEM predates PGP. PGP was followed by MOSS (MIME Object Security Services RFC 1848) within the IETF and S/MIME outside the IETF by RSA. Both appeared at the same time. This was back when the RSA algorithm patent still applied so MOSS never had a chance. Later S/MIME entered the IETF standards track. Fifteen years later it has evolved to include the same features and functionality of MOSS, as well as the certificate-based solution from PEM that it always had. PGP and S/MIME each enjoy a small amount of success but still we don't have a broadly deployed secure email service. The MOSS specification is historic now. If we are going to try again I would suggest focusing on the infrastructure, MTA-MTA communications. I think there could be real value in leveraging DNSSEC for MX and DKIM protection, but that's a discussion for a different home. Jim -- On February 16, 2011 7:34:46 AM -0500 Vint Cerf wrote regarding Re: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet -- > the obvious next thing would be some kind of uniform agreement on > strong authentication of the source of email and protecting contents. > I know about PGP of course, but it's not uniformly implemented and I > think we could usefully try again. Last time we tried, it was called > PEM and suffered from too pure a hierarchy of certificates, I think. 
> > v > > > On Wed, Feb 16, 2011 at 7:15 AM, Tony Finch wrote: > > On Wed, 16 Feb 2011, John Levine wrote: > > > >> > You mean akin to how SMTP, "Simple" mail transport protocol, was > >> > supposed to be replaced by something better when we figured out > >> > what email was going to be...:o) > >> > >> It may yet happen if we ever figure it out. > > > > I think it already happened with MIME. It probably needs to happen > > again, though :-) > > > > Tony. > > -- > > f.anthony.n.finch ? ?http://dotat.at/ > > HUMBER THAMES DOVER WIGHT PORTLAND: NORTH BACKING WEST OR > > NORTHWEST, 5 TO 7, DECREASING 4 OR 5, OCCASIONALLY 6 LATER IN > > HUMBER AND THAMES. MODERATE OR ROUGH. RAIN THEN FAIR. GOOD. > > > From johnl at iecc.com Wed Feb 16 09:13:20 2011 From: johnl at iecc.com (John R. Levine) Date: 16 Feb 2011 09:13:20 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: <20110216074830.11070.qmail@joyce.lan> Message-ID: > the obvious next thing would be some kind of uniform agreement on > strong authentication of the source of email and protecting contents. > I know about PGP of course, but it's not uniformly implemented and I > think we could usefully try again. Last time we tried, it was called > PEM and suffered from too pure a hierarchy of certificates, I think. S/MIME is implemented in all of the popular MUAs, but nobody uses it because of the key distribution problems. DKIM is a simpler signing scheme where the granularity is a domain rather than a user, and puts keys in the DNS. It seems to be reasonably successful, in large part because it's primarily MTA->MTA rather than MUA->MUA and there's a lot fewer MTAs to configure. Regards, John Levine, johnl at iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2304 bytes Desc: S/MIME Cryptographic Signature URL: From jnc at mercury.lcs.mit.edu Wed Feb 16 09:23:24 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Feb 2011 12:23:24 -0500 (EST) Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet Message-ID: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> > From: James Galvin > Actually, PEM predates PGP. I suspect Vint remembers that - and has the scars to prove it! I vaguely but distinctly recall attending an IAB? meeting in New Hampshire? at which there was a lot of discussion of PEM - I remember Steve Kent being there, and suspect Vint was too. This was years before PGP (which of course appeared in part as a counterpoint to PEM, since the top-down authentication model of PEM didn't sit well with everyone). Noel From dhc2 at dcrocker.net Wed Feb 16 09:51:32 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Wed, 16 Feb 2011 09:51:32 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> References: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> Message-ID: <4D5C0EA4.4080209@dcrocker.net> > This was years before PGP (which of course appeared in part as a counterpoint > to PEM, since the top-down authentication model of PEM didn't sit well with > everyone). FWIW, I suspect Phil did not know about PEM. A couple of histories about PGP don't mention the connection, plus Phil was not in the IETF mix: Note the 1991 date. RFC 989, defining PEM, is dated 1987. Still the odds are good that PEM did not motivate PGP. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jwise at draga.com Wed Feb 16 10:50:01 2011 From: jwise at draga.com (Jim Wise) Date: Wed, 16 Feb 2011 13:50:01 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D5C0EA4.4080209@dcrocker.net> (Dave CROCKER's message of "Wed, 16 Feb 2011 09:51:32 -0800") References: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> <4D5C0EA4.4080209@dcrocker.net> Message-ID: <87oc6byfty.fsf@gondolin.draga.com> Dave CROCKER writes: >> This was years before PGP (which of course appeared in part as a counterpoint >> to PEM, since the top-down authentication model of PEM didn't sit well with >> everyone). > > > FWIW, I suspect Phil did not know about PEM. A couple of histories about > PGP don't mention the connection, plus Phil was not in the IETF mix: > > > > Note the 1991 date. > > RFC 989, defining PEM, is dated 1987. Still the odds are good that PEM did > not motivate PGP. FWIW, the 2.3A release of PGP, from July 1, 1993, specifically mentions PEM in the documentation, commenting that: This probablistic fault-tolerant method of determining public key legitimacy is one of the principle strengths of PGP's key management architecture, as compared with PEM, for decentralized social environments. That's the oldest version of PGP that I have sources for, though, so I don't know whether this connection was made in the original 1.0 release of June 1991. -- Jim Wise jwise at draga.com -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 192 bytes Desc: not available URL: From vint at google.com Wed Feb 16 13:12:09 2011 From: vint at google.com (Vint Cerf) Date: Wed, 16 Feb 2011 16:12:09 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> Message-ID: Correct on all counts. Sorry if my first message suggested otherwise. 
V ----- Original Message ----- From: Noel Chiappa [mailto:jnc at mercury.lcs.mit.edu] Sent: Wednesday, February 16, 2011 12:23 PM To: internet-history at postel.org Cc: jnc at mercury.lcs.mit.edu Subject: Re: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet > From: James Galvin > Actually, PEM predates PGP. I suspect Vint remembers that - and has the scars to prove it! I vaguely but distinctly recall attending an IAB? meeting in New Hampshire? at which there was a lot of discussion of PEM - I remember Steve Kent being there, and suspect Vint was too. This was years before PGP (which of course appeared in part as a counterpoint to PEM, since the top-down authentication model of PEM didn't sit well with everyone). Noel From jack at 3kitty.org Wed Feb 16 14:10:13 2011 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 16 Feb 2011 14:10:13 -0800 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: References: Message-ID: <1297894213.2590.160.camel@localhost> On Wed, 2011-02-16 at 16:12 -0500, Vint Cerf wrote: Hi Vint - sorry for the broadcast, but when I try to email vint at google.com, I get: " : Sorry, no mailbox here by that name. (#5.1.1) " Now, in SillyValley that's sometimes how people find out that they've been let go -- but somehow I doubt that's the case here.... /Jack From jnc at mercury.lcs.mit.edu Wed Feb 16 14:12:38 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 16 Feb 2011 17:12:38 -0500 (EST) Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet Message-ID: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> > From: Dave CROCKER > I suspect Phil did not know about PEM. Somewhat to my surprise (I thought everyone working in the email area knew about PEM, which had been going on for years - the duration being a source of a lot of unhappiness in some quarters), this turns out to be true. 
I found the following 'PGP Marks 10th Anniversary' note (5 Jun 2001) from Phil Z: "a week before PGP's first release, I discovered the existence of another email encryption standard called Privacy Enhanced Mail (PEM)" (available at: http://www.linuxtoday.com/security/mailprint.php3?action=pv<sn=2001-06-06-004-20-SC-SW) > Still the odds are good that PEM did not motivate PGP. Not originally, no. But it sounds like Phil considered certain technical choices made in PEM to be non-optimal, and I think his feeling that he had a 'better mousetrap' was part (in addition to public response) of why he kept working on it. In addition, he _did_ know of the PEM trust model, and didn't like it, when he did the PGP trust model: "PEM used 56-bit DES to encrypt messages, which I did not regards as strong cryptography. Also, PEM absolutely required every message to be signed, and revealed the signature outside the encryption envelope ... I started designing the PGP trust model, which I did not have time to finish in the first release. Fifteen months later, in September 1992, we released PGP 2.0 ... PGP 2.0 had the now-famous PGP trust model, essentially in its present form." So I think I wasn't entirely off-base in my original comment ("PGP .. which of course appeared in part as a counterpoint to PEM, since the top-down authentication model of PEM didn't sit well with everyone")! :-) Noel From bernie at fantasyfarm.com Wed Feb 16 15:06:36 2011 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Wed, 16 Feb 2011 18:06:36 -0500 Subject: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> Message-ID: <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> I'm a little confused [which usually means I'm misunderstanding something].
the springboard for this sort-of-side topic was my comment [somewhat in jest] about SMTP still being with us and from there a discussion of what, if anything, might actually be a candidate for a successor to it. Am I correct that every proposal that's floated by so far has involved EVERY email sender and recipient having a personal public key? If not, then perhaps someone could help me understand where I got confused. If so, then is there any rational way even to consider a system that might involve allocating [and managing] several hundred million public keys? AFAIK the PKI system barely works now... if every person who wants to participate in email 2.0 has to get a personal public key, that's going to be a bit of a mess, no? /Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From galvin+internet-history at elistx.com Wed Feb 16 16:52:05 2011 From: galvin+internet-history at elistx.com (James Galvin) Date: Wed, 16 Feb 2011 19:52:05 -0500 Subject: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> Message-ID: <48D5D02947CE7BA002348FEB@James-Galvin-2.local> -- On February 16, 2011 6:06:36 PM -0500 Bernie Cosell wrote regarding [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet -- > Am I correct that every proposal > that's floated by so far has involved EVERY email sender and > recipient having a personal public key? If so, then > is there any rational way even to consider a system that might > involve allocating [and managing] several hundred million public > keys? AFAIK the PKI system barely works now... if every person who > wants to participate in email 2.0 has to get a personal public key, > that's going to be a bit of a mess, no?
I believe that DNSSEC makes this eminently doable. As a concept, change an email address to a domain name by replacing the "@" with a ".". Then lookup the public key for that user. For that matter, lookup the certificate for that user, which could even be self-signed. PKI never worked Internet-wide because there was never an effective Internet-wide distribution system. Revocation could be supported either similarly to what DNSSEC does for itself or simply by not being present in the zone. Other solutions are also possible. Next stop: world peace. Jim From randy at psg.com Wed Feb 16 17:20:54 2011 From: randy at psg.com (Randy Bush) Date: Thu, 17 Feb 2011 09:20:54 +0800 Subject: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <48D5D02947CE7BA002348FEB@James-Galvin-2.local> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> <48D5D02947CE7BA002348FEB@James-Galvin-2.local> Message-ID: > I believe that DNSSEC makes this eminently doable. i do not believe that dnssec has made anything eminently doable. it is a real mess which we are struggling to learn how to deploy. read the dns-ops lists. i would not put anything on that horse's back until it has been shown to be stable and reliable. 
randy From bernie at fantasyfarm.com Wed Feb 16 18:10:59 2011 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Wed, 16 Feb 2011 21:10:59 -0500 Subject: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <48D5D02947CE7BA002348FEB@James-Galvin-2.local> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu>, <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com>, <48D5D02947CE7BA002348FEB@James-Galvin-2.local> Message-ID: <4D5C83B3.28367.1E65BE3F@bernie.fantasyfarm.com> On 16 Feb 2011 at 19:52, James Galvin wrote: > -- On February 16, 2011 6:06:36 PM -0500 Bernie Cosell > wrote regarding [ih] secure email was The > Internet Plan; was: Ken Olsen's impact on the Internet -- > > > Am I correct that every proposal > > that's floated by so far has involved EVERY email sender and > > recipient having a personal public key? > As a concept, change an email address to a domain name by replacing the > "@" with a ".". Then lookup the public key for that user. For that > matter, lookup the certificate for that user, which could even be > self-signed. I'm still trying to catch up with this, so bear with me if you could: you're suggesting, for example, that when you signed up for the IH mailing list and decided to use "galvin+internet-history at elistx.com" as your email address for it, rather than just typing that in and starting to use it you'd first have to go to some central registry somewhere and "register" that address and have it get its own keyset, and then appropriately enter its private key into every email client you'll ever use [else you can't properly _generate_ an email from "galvin+internet-history at elistx.com", yes?]. Is this about how it'd work? Seems a bit cumbersome, but I guess it'd prevent anyone from forging email that'd look like it came from you.
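The address-to-name mapping Jim described, and that Bernie is probing at here, can be sketched in a few lines. This is only a sketch of the concept: the record table stands in for a (DNSSEC-signed) zone, no such lookup scheme is actually deployed, and local parts with dots or "+" tags would need escaping rules the sketch ignores.

```python
# Sketch of the concept from the thread: turn an email address into a
# DNS-style owner name by replacing "@" with ".", then look up a
# public key at that name.  key_records is a stand-in for signed zone
# data, not a real record type.

def email_to_owner_name(address):
    """Map user@example.com -> user.example.com."""
    local, _, domain = address.rpartition("@")
    return local + "." + domain

# Hypothetical zone data standing in for a DNSSEC-backed lookup.
key_records = {
    "user.example.com": "<public key or self-signed certificate>",
}

def lookup_key(address):
    """Return the published key for an address, or None if absent."""
    return key_records.get(email_to_owner_name(address))

print(email_to_owner_name("user@example.com"))  # user.example.com
print(lookup_key("nobody@example.org"))         # None: no key published
```

As Galvin notes later in the thread, absence from the zone could itself serve as a crude form of revocation in such a scheme.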
/Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From john.g.linn at gmail.com Thu Feb 17 03:30:57 2011 From: john.g.linn at gmail.com (John Linn) Date: Thu, 17 Feb 2011 06:30:57 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D5C0EA4.4080209@dcrocker.net> References: <20110216172324.EA45618C0C0@mercury.lcs.mit.edu> <4D5C0EA4.4080209@dcrocker.net> Message-ID: <4D5D06F1.2000208@gmail.com> I edited RFC-989 and some successor documents on the PEM side of the fence, and have no way of knowing whether/when PEM results might have influenced PGP. Note, however, that RFC-989 didn't include PEM's PKI aspect; rather, it emphasized the message processing layer (e.g., defining the Printable Encoding that was subsequently adapted as MIME's base64). That layer was intended for use either with PKI or pairwise symmetric keys, and both types of implementations were prototyped. I don't believe the certification infrastructure was documented in RFC form until RFC-1114, in August 1989. FWIW (perhaps already historic, after < 24 hours), I attempted to post the following paragraph to the list, but it seems to have dropped silently, perhaps because of a sender address mismatch: "S/MIME succeeded PEM, extending and generalizing the message-level content protection facilities, but its availability also didn't trigger deployment of broadly-adopted user certification infrastructure. Absent such infrastructure, email security suffers from a significant first phone effect; there's little incentive to begin using it until and unless your communicating peers do so as well, which isn't likely to happen unless many members of a community develop interest in parallel. This hasn't yet taken place at anywhere near a general and pervasive level, but the reason may have more to do with user demand than with availability of technology. 
S/MIME support has been widely available in email clients for many years, but may also be one of the Internet's best-deployed examples of latent, unexecuted code." --jl On 02/16/2011 12:51 PM, Dave CROCKER wrote: > >> This was years before PGP (which of course appeared in part as a >> counterpoint >> to PEM, since the top-down authentication model of PEM didn't sit >> well with >> everyone). > > > FWIW, I suspect Phil did not know about PEM. A couple of histories > about PGP don't mention the connection, plus Phil was not in the IETF > mix: > > > > Note the 1991 date. > > RFC 989, defining PEM, is dated 1987. Still the odds are good that > PEM did not motivate PGP. > > d/ From galvin+internet-history at elistx.com Thu Feb 17 06:28:56 2011 From: galvin+internet-history at elistx.com (James Galvin) Date: Thu, 17 Feb 2011 09:28:56 -0500 Subject: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <4D5C83B3.28367.1E65BE3F@bernie.fantasyfarm.com> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> <48D5D02947CE7BA002348FEB@James-Galvin-2.local> <4D5C83B3.28367.1E65BE3F@bernie.fantasyfarm.com> Message-ID: <15CBD5C57B44F6DB01BC1A83@James-Galvin-2.local> -- On February 16, 2011 9:10:59 PM -0500 Bernie Cosell wrote regarding Re: [ih] secure email was The Internet Plan; was: Ken Olsen's impact on the Internet -- > On 16 Feb 2011 at 19:52, James Galvin wrote: > > > -- On February 16, 2011 6:06:36 PM -0500 Bernie Cosell > > wrote regarding [ih] secure email was The > > Internet Plan; was: Ken Olsen's impact on the Internet -- > > > > As a concept, change an email address to a domain name by replacing > > the "@" with a ".". Then lookup the public key for that user. > > For that matter, lookup the certificate for that user, which could > > even be self-signed. 
> > you're suggesting that, for example, that when you signed up for the > IH mailing list and decided to use > "galvin+internet-history at elistx.com" as your email address for it, > rather than just typing that in and starting to use it you'd first > have to go to some central registry somewhere and "register" that > address and have it get its own keyset, and then appropriately enter > its private key into every email client you'll ever use [else you > can't properly _generate_ an email from > "galvin+internet-history at elistx.com", yes?]. Is this about how it'd > work? Seems a bit cumbersome, but I guess it'd prevent anyone from > forging email that'd look like it came from you. "Cumbersome" is just a matter of programming, right? :-) What you propose is one way it could work. I've got others (yes more than one), but this is not the appropriate forum in which to discuss email futures. I will say that we should work to protect the infrastructure first, MTA-MTA communication. We now have two important and useful pieces: DKIM and DNSSEC. We could accomplish a lot if we could bring them together. Jim From dhc2 at dcrocker.net Thu Feb 17 06:44:19 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 17 Feb 2011 06:44:19 -0800 Subject: [ih] "secure" email, take 5 or 6 or 7 (was Re: secure email was The Internet Plan; was: Ken Olsen's impact on the Internet) In-Reply-To: <48D5D02947CE7BA002348FEB@James-Galvin-2.local> References: <20110216221238.2D14018C0C6@mercury.lcs.mit.edu> <4D5C587C.28280.1DBCEDE3@bernie.fantasyfarm.com> <48D5D02947CE7BA002348FEB@James-Galvin-2.local> Message-ID: <4D5D3443.1050009@dcrocker.net> On 2/16/2011 4:52 PM, James Galvin wrote: > I believe that DNSSEC makes this eminently doable. PEM, MOSS, PGP, S/MIME and probably several more previous efforts make pretty clear that the major challenge for email security is administrative, not technical. 
Whatever is going to succeed, it is going to have to have massively better user and operations human factors, especially with respect to administration. I can imagine DNSSEC being helpful to that, although its painfully slow development and uptake do not bode well. Still, there /is/ uptake and I am /finally/ confident that a sufficient DNSSEC infrastructure will eventually arrive. However I don't have a sense of its on-going OA&M burden. The alternative is DKIM, which is already tailored to message signing and is far easier to deploy and operate. However its semantics are intentionally more modest than folks have in mind here. It does not authenticate a message, frequent statements to the contrary notwithstanding. It authenticates the presence of an identifier in the message, but that presence does not mean that the contents are valid, not even the FROM: field. Relatively small tweaks to DKIM's use could change this. It wouldn't be "DKIM" but it could re-use almost all of DKIM's details. (Note that the formal semantics of a protocol are not necessarily defined by packet and data details, but by the port number the application uses. Hence SMTP has different semantics on port 25 than on port 587, in spite of being the 'same' protocol... The equivalent to a new port number for DKIM could be a different header field from the DKIM-Signature field used to hold a DKIM signature in a message.) It happens that I've recently been working on a re-purposing of DKIM to this end. I floated a preliminary effort by the DKIM working group, but the timing was not right. So a couple of us are pursuing it separately. A draft will be available soon. This thread, as well as some market pull by a private industry activity, have escalated the priority of the effort. Watch this space. For a couple of years, there has been some background interest in finding ways for DNSSEC and DKIM to be complementary.
My current view is that this will work best by having DKIM-ish technology provide the message security services and having DNSSEC close the security hole of using the DNS for storing keys. The incentive for doing this depends on fear of a compromised DNS. With respect to email security this probably is highest when the use of message security is high-value, such as for financially-based transactional mail. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From eric.gade at gmail.com Thu Feb 17 07:40:49 2011 From: eric.gade at gmail.com (Eric Gade) Date: Thu, 17 Feb 2011 15:40:49 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: Sorry to rudely change the topic, but since people appear to be in a lively mood for debate, I wanted to ask a very broad question. To what extent can we consider the DDN NIC of the late 80s as a "model" for other similar network administrative organizations throughout the world? -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From woody at pch.net Thu Feb 17 08:08:16 2011 From: woody at pch.net (Bill Woodcock) Date: Thu, 17 Feb 2011 08:08:16 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Feb 17, 2011, at 7:40 AM, Eric Gade wrote: > Sorry to rudely change the topic, but since people appear to be in a lively mood for debate, I wanted to ask a very broad question. To what extent can we consider the DDN NIC of the late 80s as a "model" for other similar network administrative organizations throughout the world? Model at what level? As a functional diagram, requests came in, were validated, then either discarded or fulfilled. That seems to encapsulate the limited-resource-allocation function at its simplest... I assume you mean at some more complex level? 
-Bill -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (Darwin) iEYEARECAAYFAk1dR/AACgkQGvQy4xTRsBEHKACgzTuZTjrBCNt/285o+p9FwmUY w2AAn3qBpJrs2NFk93xZmQFmtAGbNhRl =nTS8 -----END PGP SIGNATURE----- From craig at aland.bbn.com Thu Feb 17 08:29:03 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 17 Feb 2011 11:29:03 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110217162903.1377D28E137@aland.bbn.com> > Sorry to rudely change the topic, but since people appear to be in a lively > mood for debate, I wanted to ask a very broad question. To what extent can > we consider the DDN NIC of the late 80s as a "model" for other similar > network administrative organizations throughout the world? Hi Eric: Could you define "network administrative organizations"? I googled and got network admins = network operations -- and the DDN NIC did not do most of the things we'd consider network operations. The prototype network operations centers were at BBN [ARPANET/MILNET/CSNET] in the early 1980s and then, once NSFNET started, at the NSFNET regional networks -- I suspect AT&T Indian Hill [ihnp4] had a pretty sophisticated operation too. The NIC did things that organizations such as ARIN do now. Thanks! Craig From eric.gade at gmail.com Thu Feb 17 09:24:25 2011 From: eric.gade at gmail.com (Eric Gade) Date: Thu, 17 Feb 2011 17:24:25 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110217162903.1377D28E137@aland.bbn.com> References: <20110217162903.1377D28E137@aland.bbn.com> Message-ID: On Thu, Feb 17, 2011 at 4:29 PM, Craig Partridge wrote: > > The NIC did things that organizations such as ARIN do now. > This is closer to what I meant. I used administrative in a broad sense -- referring to the registration of names and addresses, things of that sort.
I have documents from the late 80s that discuss Mexican and Japanese university/national public network representatives emailing and in some cases visiting the NIC, not only to see how the DDN worked but to observe the day to day operations of the NIC. It seems to me that some of the NIC's most important functions (my personal interests are with the DNS) were designed with the idea in mind that OSI would replace them *or* incorporate them into a higher-level, more global structure. If that is the case -- and I think I have pretty good evidence when it comes to the DNS side of things -- then whether or not the NIC served as a model for similar organizations around the world is important. It means that, at least in part, they would have reflected some of the teleology of OSI. In a more general sense, I bring this up because we could have a more nuanced historical discussion on the list. Don't get me wrong -- X25 v TCP/IP is definitely interesting, and discussions of the "failure" of OSI are both useful and seem to still ignite a decent emotional response. I think it could be more constructive, however, to consider the truism of "the coming of OSI" in the 80s and the effects that had on the system we have today. To deny that it had any influence on both technical and structural aspects of the ARPAnet and its children might be a little short-sighted, though I'm not suggesting that anyone has been doing that. After sifting through a lot of material, I'm ready to argue that this OSI truism had a fairly important influence on the DNS. I'm equally prepared to be verbally blindfolded, given a camel light, and put before the firing squad of criticism. -- Eric -------------- next part -------------- An HTML attachment was scrubbed...
URL: From craig at aland.bbn.com Thu Feb 17 09:47:51 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 17 Feb 2011 12:47:51 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110217174751.B2AE228E137@aland.bbn.com> > In a more general sense, I bring this up because we could have a more > nuanced historical discussion on the list. Don't get me wrong -- X25 v > TCP/IP is definitely interesting, and discussions of the "failure" of OSI > are both useful and seem to still ignite a decent emotional response. I > think it could be more constructive, however, to consider the truism of "the > coming of OSI" in the 80s and the effects that had on the system we have > today. To deny that it had no influence on both technical and structural > aspects of the ARPAnet and its children might be a little short-sighted, > though I'm not suggesting that anyone has been doing that. After sifting > through a lot of material, I'm ready to argue that this OSI truism had a > fairly important influence on the DNS. I'm equally prepared to be verbally > blindfolded, given a camel light, and put before the firing squad of > criticism. Hi Eric: Understood re: administrative and I don't have anything to add there (ex-NIC folks on this list will). Regarding OSI and the DNS. I don't know about NIC administrative procedures. But regarding other aspects of the DNS, you should understand that the DNS was used as a weapon against "the coming of OSI". So far as I can tell, the design of the DNS had zero relation to OSI/X.500 naming. It probably had some influence from Grapevine (as I recall, PVM says he didn't use Grapevine as an input, but reading the namedroppers list it is pretty clear [at least to me] that others commenting on his work were influenced by Grapevine -- things such as the initial two-level DNS naming system which reads just like Grapevine's two-level system).
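Craig's contrast between Grapevine's fixed two-level names and the DNS tree is easy to see in miniature: DNS resolution walks dotted labels right to left through a hierarchy of arbitrary depth, of which a two-level name.registry scheme is just the shallow special case. A toy sketch; the zone data is invented for illustration, not historical:

```python
# Toy illustration of tree-structured ("Tree System") name resolution.
# All names and values below are invented for the example.

TREE = {
    "edu": {"isi": {"venera": "10.1.0.52"}},  # arbitrary depth, DNS-style
    "pa": {"birrell": "registry-entry"},      # two levels, Grapevine-style
}

def resolve(name, tree=TREE):
    """Walk dotted labels right to left: venera.isi.edu -> edu -> isi -> venera."""
    node = tree
    for label in reversed(name.lower().split(".")):
        if not isinstance(node, dict) or label not in node:
            return None
        node = node[label]
    return node

# resolve("venera.isi.edu") -> "10.1.0.52"
# resolve("birrell.pa")     -> "registry-entry"
```

The same walk serves both shapes, which is why a two-level system like Grapevine's reads so naturally as a starting point for the deeper DNS tree.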
Then in January 1986, when the central question of finalizing DNS TLDs was decided, it was explicitly decided to structure .US to make it completely useless for X.500 migration. I was at the meeting and remember thinking that the decision had elements that might make Jon Postel (who made the decision) into a technological King Cnut, but subsequent events made it an effective way to thwart OSI. (Brief sketch: the TLD meeting was more than finalizing Internet TLDs, it determined the naming schemes for UUNET and CSNET and BITNET [all of whom were at the meeting]. So it created a common email addressing system that spanned the 4 biggest email networks. The biggest app was email and by keeping the naming schemes very distinct, it meant that transitioning to OSI required a painful, organization wide, change in email addresses. In contrast, when NSFNET came along and UUNET, CSNET and BITNET collapsed into the Internet, the email transition was largely seamless for users and created a single network that was too big to imagine transitioning). Thanks! Craig From eric.gade at gmail.com Thu Feb 17 10:16:59 2011 From: eric.gade at gmail.com (Eric Gade) Date: Thu, 17 Feb 2011 18:16:59 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110217174751.B2AE228E137@aland.bbn.com> References: <20110217174751.B2AE228E137@aland.bbn.com> Message-ID: On Thu, Feb 17, 2011 at 5:47 PM, Craig Partridge wrote: > Then in January 1986, when the central question of finalizing DNS TLDs > was decided, it was explicitly decided to structure .US to make it > completely useless for X.500 migration. This is really fascinating stuff. There is little documentation on Postel's perspective on such matters, aside from anecdotes and what's in public, because his archives at USC are off-limits (though they have been sorted) until an 'appropriate' amount of time has passed. Still I managed to find interesting things in the NIC collection. 
One is a paper that Postel co-authored with Mockapetris for a conference and presented in April 1985. Using IFIP and DNS propositions as examples, they go on to list the differences between "Tree Systems" and "Attribute Systems." Here is the section that interests me the most: > *One solution would be to layer an attribute system on top of a > self-sufficient tree system...* > > I assumed that this was -- at the very least -- a concession made to the OSI community, or that it was a way to somehow justify (though how necessary would justification be?) the DNS being developed. I think there may be a case for saying that the inclusion of ccTLDs in the first place was inspired -- at least in part -- by OSI advocates precisely because that's the type of organization they wanted at the top. The fact that the codes were pulled from an ISO list might not be mere coincidence either, though I'm going out on a limb with that. In terms of actual organizations and authorities, that's an interesting case too. A lot of the correspondence seems to indicate not only that the NIC didn't want the job of TLD registration and administration, but that there were expectations that international organizations would take these over, and suggestions included such orgs as ANSI (side note: when ANSI started registering OSI names in 1988 or 89, it was charging a whopping $1k per name. NIC was still doing this for free I think, but meeting notes show that early projections of pricing were around $70). -- Eric -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cos at aaaaa.org Thu Feb 17 10:21:40 2011 From: cos at aaaaa.org (Ofer Inbar) Date: Thu, 17 Feb 2011 13:21:40 -0500 Subject: [ih] The Internet Plan; was: Ken Olsen's impact on the Internet In-Reply-To: <1297748224.2659.346.camel@localhost> References: <55394E20-2EA7-4595-971A-2B74E616C492@gmail.com> <20110214192304.GA17762@panacea.comcast.net> <4D59A5F8.6040408@bennett.com> <4D59B4D9.23032.136DB9F1@bernie.fantasyfarm.com> <1297748224.2659.346.camel@localhost> Message-ID: <20110217182140.GZ13584@mip.aaaaa.org> Jack Haverty wrote: > Sometime in 1991, IIRC, we held a "Network Forum", which was > Everyone was planning to go fully to TCP, as fast as possible. > Everyone. > - TCP had a large installed base, and could be observed to be working > - TCP was delivering what OSI was promising > - TCP was delivering functional systems, while OSI was delivering lots > of paper > - TCP was enough like OSI that, when/if OSI appeared it should be > relatively straightforward to migrate This reminds me of a quotation I used to use in email and Usenet signatures in the early 90s a lot: "OSI is a beautiful dream, and TCP/IP is living it!" -- Einar Stefferud , IETF mailing list, 12 May 1992 -- Cos From jeanjour at comcast.net Thu Feb 17 10:51:24 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 13:51:24 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110217174751.B2AE228E137@aland.bbn.com> References: <20110217174751.B2AE228E137@aland.bbn.com> Message-ID: I would agree with Craig. We knew from the early 70s we needed a directory of some sort and Grapevine was the first attempt. There was little or no influence between DNS and X.500. X.500 in the hands of the PTT faction and some misguided generalists quickly got out of hand trying to be far more than simply resolving application names to network addresses. Making it so general made it useless for that purpose. The X.25, TCP/IP debate was broader than an OSI/Internet issue. 
That debate was really an "X.25 vs. any Transport Layer at all" debate, which had started around 1974, years before OSI began. The transport side of this debate was championed by INWG at the beginning. In that debate, the research networks (NPL, CYCLADES, EIN, ARPANET/Internet, and various other European research centers as well as XNS) were all advocating a transport protocol while the PTTs argued that an end-to-end Transport protocol was unnecessary. X.25 was all you needed. For the Europeans, X.25 was forced on them by their PTTs but everyone knew it was not a reliable service. (Resets lost data). So at the very least Transport would operate over X.25. I believe that the ARPANet used X.25 in place of 1822 at the edge later. (X.25 was strictly speaking an IMP-Host protocol or as they called it, a DTE-DCE interface.) This was the beginning of the connection/connectionless war that continued at its most intense within OSI once it started. In OSI it was the war between CLNP/TP4 vs X.25/TP0 also known as X.25 only. I could never figure out what the PTT position was all about since it was obvious that X.25 would not handle the bandwidth that we were using. But in those pre-deregulation days, the PTTs stunting business growth in Europe was the norm. Take care, John At 12:47 -0500 2011/02/17, Craig Partridge wrote: > > In a more general sense, I bring this up because we could have a more >> nuanced historical discussion on the list. Don't get me wrong -- X25 v >> TCP/IP is definitely interesting, and discussions of the "failure" of OSI >> are both useful and seem to still ignite a decent emotional response. I >> think it could be more constructive, however, to consider the truism of "the >> coming of OSI" in the 80s and the effects that had on the system we have >> today. To deny that it had no influence on both technical and structural >> aspects of the ARPAnet and its children might be a little short-sighted, >> though I'm not suggesting that anyone has been doing that.
After sifting >> through a lot of material, I'm ready to argue that this OSI truism had a >> fairly important influence on the DNS. I'm equally prepared to be verbally >> blindfolded, given a camel light, and put before the firing squad of >> criticism. > >Hi Eric: > >Understood re: administrative and I don't have anything to add there >(ex-NIC folks on this list will). > >Regarding OSI and the DNS. I don't know about NIC administrative procedures. >But regarding other aspects of the DNS, you should understand that the DNS >was used as a weapon against "the coming of OSI". > >So far as I can tell, the design of the DNS had zero relation to OSI/X.500 >naming. It probably had some influence from Grapevine (as I recall, PVM >says he didn't use Grapevine as an input, but reading the namedroppers >list is it pretty clear [at least to me] that others commenting on his work >were influenced by Grapevine -- things such as the initial two-level DNS >naming system which reads just like Grapevine's two-level system). > >Then in January 1986, when the central question of finalizing DNS TLDs >was decided, it was explicitly decided to structure .US to make it >completely useless for X.500 migration. I was at the meeting and >remember thinking that the decision had elements that might make Jon >Postel (who made the decision) into a technological King Cnut, but >subsequent events made it an effective way to thwart OSI. (Brief >sketch: the TLD meeting was more than finalizing Internet TLDs, it >determined the naming schemes for UUNET and CSNET and BITNET [all of >whom were at the meeting]. So it created a common email addressing >system that spanned the 4 biggest email networks. The biggest app >was email and by keeping the naming schemes very distinct, it meant >that transitioning to OSI required a painful, organization wide, change >in email addresses. 
In contrast, when NSFNET came along and UUNET, >CSNET and BITNET collapsed into the Internet, the email transition was >largely seamless for users and created a single network that was too big >to imagine transitioning). > >Thanks! > >Craig From jeanjour at comcast.net Thu Feb 17 11:01:35 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 14:01:35 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217174751.B2AE228E137@aland.bbn.com> Message-ID: The distinction that is being made here seems to be the one I was pointing out: What we thought was needed was purely something that mapped Application names to network addresses. DNS is not quite that. Grapevine was that. X.500 went way beyond that to try to include everything you might want to look up. Actually, one might think of X.500 as Google without the web! ;-) So X.500 was far too much to effectively do the simple job that was necessary. Much of this going overboard did come from the IFIP work that was going on. This was the group who thought that defining the syntax of a protocol was a formal description of the protocol. When you tried to explain to them there were actions or procedures that had to be defined to be a protocol specification, they looked at you like you were from Mars. Take care, John At 18:16 +0000 2011/02/17, Eric Gade wrote: >On Thu, Feb 17, 2011 at 5:47 PM, Craig Partridge ><craig at aland.bbn.com> wrote: > >Then in January 1986, when the central question of finalizing DNS TLDs >was decided, it was explicitly decided to structure .US to make it >completely useless for X.500 migration. > > >This is really fascinating stuff. >There is little documentation on Postel's perspective on such >matters, aside from anecdotes and what's in public, because his >archives at USC are off-limits (though they have been sorted) until >an 'appropriate' amount of time has passed. > >Still I managed to find interesting things in the NIC collection.
>One is a paper that Postel co-authored with Mockapetris for a >conference and presented in April 1985. Using IFIP and DNS >propositions as examples, they go on to list the differences between >"Tree Systems" and "Attribute Systems." Here is the section that >interests me the most: > >One solution would be to layer an attribute system on top of a >self-sufficient tree system... > >I assumed that this was -- at the very least -- a concession made to >the OSI community, or that it was a way to somehow justify (though >how necessary would justification be?) the DNS being developed. > >I think there may be a case for saying that the inclusion of ccTLDs >in the first place was inspired -- at least in part -- by OSI >advocates precisely because that's the type of organization they >wanted at the top. The fact that the codes were pulled from an ISO >list might not be mere coincidence either, though I'm going out on a >limb with that. > >In terms of actual organizations and authorities, that's an >interesting case too. A lot of the correspondence seems to indicate >not only that the NIC didn't want the job of TLD registration and >administration, but that there were expectations that international >organizations would take these over, and suggestions included such >orgs as ANSI (side note: when ANSI started registering OSI names in >1988 or 89, it was charging a whopping $1k per name. NIC was still >doing this for free I think, but meeting notes show that early >projections of pricing were around $70). > >-- >Eric > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric.gade at gmail.com Thu Feb 17 11:10:07 2011 From: eric.gade at gmail.com (Eric Gade) Date: Thu, 17 Feb 2011 19:10:07 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217174751.B2AE228E137@aland.bbn.com> Message-ID: Am I mistaken in thinking that X.400 uses attribute names as well? 
I'm trying to distinguish where all those IFIP ideas went exactly. -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From craig at aland.bbn.com Thu Feb 17 12:37:57 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 17 Feb 2011 15:37:57 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110217203757.636D928E137@aland.bbn.com> > I think there may be a case for saying that the inclusion of ccTLDs in > the first place was inspired -- at least in part -- by OSI advocates > precisely because that's the type of organization they wanted at the top. > The fact that the codes were pulled from an ISO list might not be mere > coincidence either, though I'm going out on a limb with that. Actually I think you're out on the limb... The original plan, as I recall, was to simply have gTLDs. But somewhere before the final TLD meeting at SRI in January 1986, there was a decision to allow the UK to have a TLD. Most likely this reflected a request from Peter Kirstein. At the January 1986 meeting, I think (and I'll note, it was not the central topic of the meeting and I am relying on memory, so this recollection could be faulty), we agreed to let people decide whether to register in their country or in a gTLD. I do remember an impassioned, by Postel standards, statement from Jon about why would a university register under its country when its most important attribute was that it was an educational institution. I also remember that either at the meeting or soon after, there was a brief discussion of how to vet applications for ccTLDs -- Jon did not want to be in the business of deciding who was a country and who was not. I believe Jon was aware at that time of the two Germanys problem (which was not, as you might imagine, between East and West Germany, but rather an internal fight in West Germany between the PTT and a university [Karlsruhe?] 
about who controlled the major Internet link into Germany and had pulled CSNET into a diplomatic mess [US State Dept, German embassy, etc all leaning on CSNET which correctly sussed that the PTT was incompetent and was loath to abandon the competent university which was providing free Internet email to any academic in Germany who requested it]). So Jon discovered that ISO produced a list of country abbreviations that was blessed by the UN or some such as reflecting the international consensus of who was and was not a country and said "I'll use this list". This proved wise (e.g. the Macedonia tiff a few years later in which Jon could simply say "I'm making no diplomatic decisions, I'm simply following the internationally approved list"). Thanks! Craig PS: Side note -- while there was considerable debate on namedroppers, my sense is that most of the key naming decisions were made by Ken Harrenstein and Jon and they coordinated with each other. Ken did most of the analysis and arguing of points and Jon periodically would announce a decision. From jeanjour at comcast.net Thu Feb 17 13:28:34 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 16:28:34 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110217203757.636D928E137@aland.bbn.com> References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: Yea, I told him the same thing. Only forgot to do a reply all. I don't think there was much interaction between those two groups at all. The idea of organizing by country pre-dates OSI by decades. ;-) At 15:37 -0500 2011/02/17, Craig Partridge wrote: > > I think there may be a case for saying that the inclusion of ccTLDs in >> the first place was inspired -- at least in part -- by OSI advocates >> precisely because that's the type of organization they wanted at the top. >> The fact that the codes were pulled from an ISO list might not be mere >> coincidence either, though I'm going out on a limb with that.
> >Actually I think you're out on the limb... > >The original plan, as I recall, was to simply have gTLDs. But somewhere >before the final TLD meeting at SRI in January 1986, there was a decision to >allow the UK to have a TLD. Most likely this reflected a request from >Peter Kirstein. > >At the January 1986 meeting, I think (and I'll note, it was not the >central topic of the meeting and I am relying on memory, so this recollection >could be faulty), we agreed to let people decide whether to register >in their country or in a gTLD. I do remember an impassioned, by Postel >standards, statement from Jon about why would a university register under >its country when its most important attribute was that it was an educational >institution. > >I also remember that either at the meeting or soon after, there was >a brief discussion of how to vet applications for ccTLDs -- Jon did not >want to be in the business of deciding who was a country and who was not. >I believe Jon was aware at that time of the two Germanys problem (which >was not, as you might imagine, between East and West Germany, but rather >an internal fight in West Germany between the PTT and a university >[Karlsruhe?] >about who controlled the major Internet link into Germany and had pulled >CSNET into a diplomatic mess [US State Dept, Germany embassy, etc all leaning >on CSNET which correctly sussed that the PTT was incompetent and was loath >to abandon the competent university which was providing free Internet email >to any academic in Germany who requested it]). > >So Jon discovered that ISO produced a list of country abbreviations that >was blessed by the UN or some such as reflecting the international concensus >of who was and was not a country and said "I'll use this list". This proved >wise (e.g. the Macedonia tiff a few years later in which Jon could simply >say "I'm making no diplomatic decisions, I'm simply following the >internationally approved list"). > >Thanks! 
> >Craig > >PS: Side note -- while there was considerable debate on namedroppers, my >sense is that most of the key naming decisions where made by Ken Harrenstein >and Jon and they coordinated with each other. Ken did most of the analysis >and arguing of points and Jon periodically would announce a decision. From eric.gade at gmail.com Thu Feb 17 14:53:26 2011 From: eric.gade at gmail.com (Eric Gade) Date: Thu, 17 Feb 2011 22:53:26 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: > > The original plan, as I recall, was to simply have gTLDs. But somewhere > before the final TLD meeting at SRI in January 1986, there was a decision > to > allow the UK to have a TLD. Most likely this reflected a request from > Peter Kirstein. No one from this list nor anyone else I tried to contact could give me a definitive answer on when this decision was made. I had to try and figure it out myself. It appears that something changed between May and July of 1984. In July, a draft RFC was posted that included the ISO-3166 list for the first time. Four months beforehand, Postel first announced to Namedroppers that he felt there should be countries represented somewhere in the hierarchy. This came after a fairly significant amount of lobbying by all kinds of people, but many of them had OSI sympathies. Of course the idea of organizing by countries predates OSI. The idea of organizing DNS by countries, however, doesn't. This wasn't a common-sense solution either. The biggest concern in the first few years was to find a way to quell the voices calling for naming structures that reflected network topology, and many believed that organizational (as opposed to geographic) would solve the immediate concerns, given the landscape of the connected nets (think AT&T, Xerox, etc). 
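The vetting rule Craig recounts earlier in the thread, with Jon deferring the question of "who is a country" entirely to the ISO 3166 list, amounts to a one-line policy check. A sketch, using a tiny illustrative excerpt of the alpha-2 codes rather than the full standard (note that the standard's code for the United Kingdom is GB, which is why .uk sits outside the strict rule):

```python
# Sketch of the ccTLD vetting rule: grant a country-code TLD iff the code
# appears on the ISO 3166 alpha-2 list. Excerpt only, not the full standard.

ISO_3166_ALPHA2 = {"US", "GB", "DE", "JP", "FR", "MX", "MK"}

def cctld_allowed(code: str) -> bool:
    """The registry makes no diplomatic judgment; the list decides."""
    return code.upper() in ISO_3166_ALPHA2

# cctld_allowed("de") -> True
# cctld_allowed("uk") -> False under the strict rule, since ISO assigns GB
```

The Macedonia example in Craig's message is this rule doing its work: once a code is on the internationally approved list, the registry can grant it without making any political decision of its own.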
I am not retroactively trying to politicize these issues, because in the documents people at the time explicitly describe these problems as political. Again, I'm going on what I've found, which may be an incomplete picture. But take the counterfactual: without the prominence of OSI issues in the general discourse, which itself brought at least some of the attention of Arpanauts to international geopolitics, would there have been the ccTLDs in the system? I would say no. You can argue that UK is an exception because of the UCL link and I would of course concede the point. But I don't think it's fair to argue that suddenly including UK opens up the entire ISO list, especially since they don't even follow the standard. -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From wmaton at ottix.net Thu Feb 17 15:05:57 2011 From: wmaton at ottix.net (William F. Maton) Date: Thu, 17 Feb 2011 18:05:57 -0500 (EST) Subject: [ih] Looking for DDN-NEWS 22 Message-ID: Hi all, All this talk of the DDN NIC got me to go looking for some documentation I had rescued from my Mac SE floppies (may that SE R.I.P), which led me to a tangent to encounter RFC 1032 (the Domain Administrators Guide, no less!) which led me to a tangent to seek DDN NEWS. I finally found these in the museum directory off the RFC Editor website. However, unless DDN-NEWS 22 has been renamed, anyone know where to find a copy? Or was this never published? Just curious, that's all. Thanks, wfms From jeanjour at comcast.net Thu Feb 17 15:56:55 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 18:56:55 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: The thing is I don't know what you think the "OSI issues" were? I would have to look, but I don't think in 1984 that the X.500 work had started and if it had it would have been very early.
They would have been coming up with a directory protocol and trying to throw everything in that someone might use for a naming tree. There was certainly no consideration of what sorts of naming trees would actually be created, or for that matter who was going to create them. It certainly would not have been "OSI." Lots of people had ideas but there was no OSI position on it. That I can guarantee. The date on X.500 is 1990. Generally took 4-5 years to do this and the X.500 stuff was highly controversial within OSI. The naming and addressing addendum to 7498 didn't complete until 88. It was just getting started in 84. X.500 didn't even start until Part 3 was well along, because I sent one of those guys to shepherd X.500. Again, what you are labeling OSI issues really seems to be after the fact. You appear to have fallen prey to the "effect of TS Eliot on Shakespeare" phenomenon (with apologies to David Lodge). At 22:53 +0000 2011/02/17, Eric Gade wrote: >The original plan, as I recall, was to simply have gTLDs. But somewhere >before the final TLD meeting at SRI in January 1986, there was a decision to >allow the UK to have a TLD. Most likely this reflected a request from >Peter Kirstein. > > >No one from this list nor anyone else I tried to contact could give >me a definitive answer on when this decision was made. I had to try >and figure it out myself. It appears that something changed between >May and July of 1984. In July, a draft RFC was posted that included >the ISO-3166 list for the first time. Four months beforehand, Postel >first announced to Namedroppers that he felt there should be >countries represented somewhere in the hierarchy. This came after a >fairly significant amount of lobbying by all kinds of people, but >many of them had OSI sympathies. > >Of course the idea of organizing by countries predates OSI. The idea >of organizing DNS by countries, however, doesn't. This wasn't a >common-sense solution either.
The biggest concern in the first few >years was to find a way to quell the voices calling for naming >structures that reflected network topology, and many believed that >organizational (as opposed to geographic) would solve the immediate >concerns, given the landscape of the connected nets (think AT&T, >Xerox, etc). > >I am not retroactively trying to politicize these issues, because in >the documents people at the time explicitly describe these problems >as political. Again, I'm going on what I've found, which may be an >incomplete picture. But take the counterfactual: without the >prominence of OSI issues in the general discourse, which itself >brought at least some of the attention of Arpanauts to international >geopolitics, would there have been the ccTLDs in the system? I would >say no. You can argue that UK is an exception because of the UCL >link and I would of course concede the point. But I don't think it's >fair to argue that suddenly including UK opens up the entire ISO >list, especially since they don't even follow the standard. > >-- >Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhc2 at dcrocker.net Thu Feb 17 16:08:02 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 17 Feb 2011 16:08:02 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110217203757.636D928E137@aland.bbn.com> References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: <4D5DB862.50201@dcrocker.net> On 2/17/2011 12:37 PM, Craig Partridge wrote: > PS: Side note -- while there was considerable debate on namedroppers, my > sense is that most of the key naming decisions were made by Ken Harrenstein > and Jon and they coordinated with each other. Ken did most of the analysis > and arguing of points and Jon periodically would announce a decision. I've heard Jake Feinler assert that she resolved some of these choices.
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From eric.gade at gmail.com Thu Feb 17 16:16:20 2011 From: eric.gade at gmail.com (Eric Gade) Date: Fri, 18 Feb 2011 00:16:20 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: I partially agree with you, and I didn't mean to engage in that kind of tautology. Of course, what I mean is IFIP, whose work was intended to contribute to OSI (IFIP reports describe the WG 6.5 work as pre-standards work for OSI). I should point out, however, that I never referred to X.500 and that has crept into the conversation in some other way. This also may just be a matter of dissonant worldviews. Where in OSI you see a series of discrete, technically explicit standards, I see an (overly?) ambitious, top-down standards project for computer networking that was unprecedented in international standards work at the time. It reflects a profoundly optimistic perspective that relies on a consistently global view concerning the application of these technologies. Those involved in this overall project were obviously going to bring this optimism and global perspective to whatever related projects they were involved with. IFIP people were involved with DNS, and the work of IFIP was the most closely related to the issues that DNS addressed. On Thu, Feb 17, 2011 at 11:56 PM, John Day wrote: > The thing is I don't know what you think the "OSI issues" were? > > I would have to look, but I don't think in 1984 that the X.500 work had > started and if it had it would have been very early. They would have been > coming up with a directory protocol and trying to throw everything in that > someone might use for a naming tree. There was certainly no consideration > of what sorts of naming trees would actually be created, or for that matter > who was going to create them. It certainly would not have been "OSI."
> > Lots of people had ideas but there was no OSI position on it. That I can > guarantee. The date on X.500 is 1990. Generally took 4-5 years to do this > and the X.500 stuff was highly controversial within OSI. > > The naming and addressing addendum to 7498 didn't complete until 88. It > was just getting started in 84. X.500 didn't even start until Part 3 was > well along, because I sent one of those guys to shepherd X.500. > > Again, what you are labeling OSI issues really seems to be after the fact. > You appear to have fallen prey to the "effect of TS Eliot on Shakespeare" > phenomena (with apologies to David Lodge). > > > At 22:53 +0000 2011/02/17, Eric Gade wrote: > > The original plan, as I recall, was to simply have gTLDs. But somewhere > before the final TLD meeting at SRI in January 1986, there was a decision > to > allow the UK to have a TLD. Most likely this reflected a request from > Peter Kirstein. > > > > No one from this list nor anyone else I tried to contact could give me a > definitive answer on when this decision was made. I had to try and figure it > out myself. It appears that something changed between May and July of 1984. > In July, a draft RFC was posted that included the ISO-3166 list for the > first time. Four months beforehand, Postel first announced to Namedroppers > that he felt there should be countries represented somewhere in the > hierarchy. This came after a fairly significant amount of lobbying by all > kinds of people, but many of them had OSI sympathies. > > > > Of course the idea of organizing by countries predates OSI. The idea of > organizing DNS by countries, however, doesn't. This wasn't a common-sense > solution either. 
The biggest concern in the first few years was to find a > way to quell the voices calling for naming structures that reflected network > topology, and many believed that organizational (as opposed to geographic) > would solve the immediate concerns, given the landscape of the connected > nets (think AT&T, Xerox, etc). > > > > I am not retroactively trying to politicize these issues, because in the > documents people at the time explicitly describe these problems as > political. Again, I'm going on what I've found, which may be an incomplete > picture. But take the counterfactual: without the prominence of OSI issues > in the general discourse, which itself brought at least some of the > attention of Arpanauts to international geopolitics, would there have been > the ccTLDs in the system? I would say no. You can argue that UK is an > exception because of the UCL link and I would of course concede the point. > But I don't think it's fair to argue that suddenly including UK opens up the > entire ISO list, especially since they don't even follow the standard. > > > > -- > > Eric > > > -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanjour at comcast.net Thu Feb 17 16:29:05 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 19:29:05 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: At 0:16 +0000 2011/02/18, Eric Gade wrote: >I partially agree with you, and I didn't mean to engage in that kind >of tautology. Of course, what I mean is IFIP, whose work was >intended to contribute to OSI (IFIP reports describe the WG 6.5 work >as pre-standards work for OSI). I should point out, however, that I >never referred to X.500 and that has crept into the conversation in >some other way. The only OSI directory work was X.500. So if you are talking about OSI views on directory you can only mean X.500. 
IFIP was a liaison organization to ISO, not a member body. You really need to tighten up your language. IFIP is not OSI. > >This also may just be a matter of dissonant worldviews. Where in OSI >you see a series of discrete, technically explicit standards, I see >an (overly?) ambitious, top-down standards project for computer >networking that was unprecedented in international standards work at >the time. It reflects a I know what OSI was. I was rapporteur of the Reference Model, head of US delegation to WG1. You have a third-hand view of OSI; I have a first-hand view. I am well aware of how ambitious it was and also well aware of its internal conflicts. Whether it was top down or not, there was an attempt to ensure that the various parts all fit to a common structure. Although that was easier said than done. >profoundly optimistic perspective that relies on a consistently >global view concerning the application of these technologies. Those >involved in this overall project were obviously going to bring this >optimism and global perspective to whatever related projects that >they were involved with. IFIP people were involved with DNS and the >work of IFIP was the most closely related to the same issues that DNS >addressed. Again, IFIP was not OSI. They didn't even have a vote in ISO. They didn't even constitute a majority of the X.500 subgroup. If you want to talk about the influence of IFIP on DNS, then do that. But don't tell me that it is the influence of OSI on DNS. You might even want to chronicle the influence of IFIP on OSI. But then you weren't there so I am sure you have a better perspective than I do. Take care, John From eric.gade at gmail.com Thu Feb 17 17:21:45 2011 From: eric.gade at gmail.com (Eric Gade) Date: Fri, 18 Feb 2011 01:21:45 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: On Fri, Feb 18, 2011 at 12:29 AM, John Day wrote: > Again, IFIP was not OSI.
They didn't even have a vote in ISO. They didn't > even constitute a majority of the X.500 subgroup. > This is useful, constructive information that I appreciate. > But then you weren't there so I am sure you have a better perspective than > I do. > This is not. In fact, this kind of unnecessary snarkiness and vitriol pops up frequently on this list, much to its discredit. I'm always up for a lively dialogue, and, perhaps most importantly, I'm always willing to admit when I'm wrong. You are right -- I do need to tighten up my language. A discourse like this where we try to establish historical circumstances should involve precisely these kinds of concessions. Nowhere was I making an attempt to 'tell' anyone anything. I was simply trying to elucidate some of my current findings. Knowledge, rather than ego-bruising, was and still is my intention. > You have third hand view of OSI, I have a first hand view. This attitude also frequently shows itself on the list, and is almost entirely useless to the task of history. A third-person view is exactly what is needed. I'm not doubting anyone's technical expertise, or the prominence of the role they played in all of this. That is not my concern at all. In fact, I would very much like to incorporate first hand views into my overall third hand view. That's what this is all about. Isn't it? -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From craig at aland.bbn.com Thu Feb 17 17:23:27 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 17 Feb 2011 20:23:27 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110218012328.035D928E137@aland.bbn.com> Jake certainly convened and chaired the larger meetings where decisions were finalized. She's on this list, I think, and can probably fill us in on her role in more detail. Thanks! 
Craig > > > On 2/17/2011 12:37 PM, Craig Partridge wrote: > > PS: Side note -- while there was considerable debate on namedroppers, my > > sense is that most of the key naming decisions were made by Ken Harrenstein > > and Jon and they coordinated with each other. Ken did most of the analysis > > and arguing of points and Jon periodically would announce a decision. > > > I've heard Jake Feinler assert that she resolved some of these choices. > > d/ > > > -- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net ******************** Craig Partridge Chief Scientist, BBN Technologies E-mail: craig at aland.bbn.com or craig at bbn.com Phone: +1 517 324 3425 From richard at bennett.com Thu Feb 17 17:41:56 2011 From: richard at bennett.com (Richard Bennett) Date: Thu, 17 Feb 2011 17:41:56 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: <4D5DCE64.3020805@bennett.com> In what sense was OSI top-down? The OSI process was every bit as much a bottom-up, participant-driven process as IEEE 802 is today. If there ever was a top-down standards process in the networking world directed by two or three lords of the purse, it certainly wasn't OSI. On 2/17/2011 4:16 PM, Eric Gade wrote: > > This also may just be a matter of dissonant worldviews. Where in OSI > you see a series of discrete, technically explicit standards, I see an > (overly?) ambitious, top-down standards project for computer > networking that was unprecedented in international standards work at > the time. It reflects a profoundly optimistic perspective that relies > on a consistently global view concerning the application of these > technologies. Those involved in this overall project were obviously > going to bring this optimism and global perspective to whatever > related projects that they were involved with.
IFIP people were > involved with DNS and the work of IFIP was the most closely related to the > same issues that DNS addressed. > From dhc2 at dcrocker.net Thu Feb 17 17:50:33 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Thu, 17 Feb 2011 17:50:33 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: <4D5DD069.7080906@dcrocker.net> On 2/17/2011 4:16 PM, Eric Gade wrote: > I partially agree with you, and I didn't mean to engage in that kind of > tautology. Of course, what I mean is IFIP, whose work was intended to contribute > to OSI (IFIP reports describe the WG 6.5 work as pre-standards work for OSI). I > should point out, however, that I never referred to X.500 and that has crept > into the conversation in some other way. Right. IFIP WG 6.5 was foundational for each of the X.400 and X.500 efforts. "Pre-standards" work is exactly the term that is used to describe its role for these. IFIP WG 6.5 was where the email UA/MTA model was developed. At the time, that functional split provided a fundamental improvement in thinking about the design of email services; it was based on four existing, very disparate systems. (I was in the pre-X.400 work for several years and the later pre-X.500 work for a couple of meetings.) -- Dave Crocker Brandenburg InternetWorking bbiw.net From eric.gade at gmail.com Thu Feb 17 18:04:43 2011 From: eric.gade at gmail.com (Eric Gade) Date: Fri, 18 Feb 2011 02:04:43 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5DCE64.3020805@bennett.com> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: On Fri, Feb 18, 2011 at 1:41 AM, Richard Bennett wrote: > In what sense was OSI top-down? The OSI process was every bit as much a > bottom-up, participant-driven process as IEEE 802 is today.
If there ever > was a top-down standards process in the networking world directed by two or > three lords of the purse, it certainly wasn't OSI. We sort of got into this last week, but didn't push it too far. OSI is unique from an international standards perspective because it was prescriptive. As far as I know, it was an unprecedented move for ISO (and maybe national standards orgs?) because they typically standardized existing practices. OSI was, to my knowledge, mandated in some way where it was creating practices rather than standardizing existing ones. -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard at bennett.com Thu Feb 17 18:50:13 2011 From: richard at bennett.com (Richard Bennett) Date: Thu, 17 Feb 2011 18:50:13 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: <4D5DDE65.1090207@bennett.com> An HTML attachment was scrubbed... URL: From jklensin at gmail.com Thu Feb 17 19:02:25 2011 From: jklensin at gmail.com (John Klensin) Date: Thu, 17 Feb 2011 22:02:25 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: Since I was more than a little involved in some of this (I vaguely remember Jon's telling me about conversations with Ken, but there were other conversations going on too), let me add a little calibration... On 2/17/11, Eric Gade wrote: >> >> The original plan, as I recall, was to simply have gTLDs. But somewhere >> before the final TLD meeting at SRI in January 1986, there was a decision >> to >> allow the UK to have a TLD. Most likely this reflected a request from >> Peter Kirstein. "request from Peter Kirstein" is certainly what I was told... or possibly something stronger than a request.
My recollection, again from what he told me a while later, is that Jon started worrying about names of countries, authorities for what was a country, and appropriate abbreviated names almost as soon as the idea of country TLDs came up. I don't remember how Jon found the ISO 3166 list or who pointed him to it, although I probably did know at one time. In any event, it met the criteria he was looking for -- an authoritative list, with someone else as the authority (so that IANA didn't need to decide), of both countries and territories and codes that could be used for them. At the time, ISO 3166 contained a provision strongly encouraging (nearly mandating) that anyone who wanted to use the code system contact the secretariat for the Registration Authority (I'm pretty sure it was still a Registration Authority at the time) at DIN and discuss the intended use. As I recall, Jon wrote them a letter describing the possible DNS application and got back a response that amounted to "don't do that, use OSI identifiers". Then there was a bit of a negotiation, probably in '85 but possibly earlier (I could reconstruct the dates, but it would take a lot of work), that I got involved in partially because I was back and forth to Berlin at the time, after which we agreed that we were going to use the 3166 list and they agreed that they couldn't prevent us from doing so. Parts of those discussions strongly influenced RFC 1591 in spite of the fact that Jon didn't get around to writing it until many years later. >... > Four months beforehand, Postel first announced to Namedroppers > that he felt there should be countries represented somewhere in the > hierarchy. This came after a fairly significant amount of lobbying by all > kinds of people, but many of them had OSI sympathies. I know of the UK request. I know that there were some very clear "if the UK gets one, we may want one too" indications.
And, as Craig points out, once email routing to all sorts of other networks became part of the story, country domains, even for countries with no actual TCP/IP Internet connectivity, became obvious because many of the connections used dialup phone lines (which were, of course, very much country (and national PTT)-based). I don't know about the "fairly significant lobbying" effort. As far as the "OSI sympathies" are concerned, lots of people and governments saw OSI as inevitable and others learned to speak OSI language because the terminology was handy and better-defined, even when it didn't map very well onto the Internet (as Mike Padlipsky and others pointed out quite forcefully). Whether either the sense of inevitability or the use of the vocabulary constituted "OSI sympathies" is probably in the mind of the beholder. > Of course the idea of organizing by countries predates OSI. The idea of > organizing DNS by countries, however, doesn't. This wasn't a common-sense > solution either. The biggest concern in the first few years was to find a > way to quell the voices calling for naming structures that reflected network > topology, and many believed that organizational (as opposed to geographic) > would solve the immediate concerns, given the landscape of the connected > nets (think AT&T, Xerox, etc). But, if one believes in a distributed administrative hierarchy, organizing by countries -- especially for countries with their own network plans and plans to connect them to the Internet but not necessarily to be running TCP/IP and the associated applications suites -- actually is a common-sense solution.
But take the counterfactual: without the prominence of OSI issues > in the general discourse, which itself brought at least some of the > attention of Arpanauts to international geopolitics, would there have been > the ccTLDs in the system? I would say no. You can argue that UK is an > exception because of the UCL link and I would of course concede the point. > But I don't think it's fair to argue that suddenly including UK opens up the > entire ISO list, especially since they don't even follow the standard. Well, actually they did, if one counts the time-honored practice of anticipating a standard a bit and then getting it wrong. I haven't gone back and sorted out the chronology, but 3166 itself wasn't very old when the DNS started using it. And, apparently (according to what I was told in the mid-80s and again in the late 90s -- the latter by someone who had been the BSI representative to ISO TC 46 and the 3166 Maintenance Agency at the time) the 3166 code was originally "UK" but either BSI or Her Majesty's Government changed their minds just before the standard was adopted. There was also a story for a while that UK was used for the DNS in order to avoid confusion with inevitable OSI naming, but I don't know whether that was accurate or apocryphal. In any event, what including UK opened up was not "the entire ISO list" but the quest for a list of entities and codes that would prevent Jon/IANA from getting embroiled in debates about who was eligible for a TLD and what the TLD name should be. The use of the 3166 list ("entire" or otherwise) was the result of that search. 
john From jklensin at gmail.com Thu Feb 17 19:21:08 2011 From: jklensin at gmail.com (John Klensin) Date: Thu, 17 Feb 2011 22:21:08 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: On 2/17/11, Eric Gade wrote: > We sort of got into this last week, but didn't push it too far. OSI is > unique from an international standards perspective because it was > prescriptive. As far as I know, it was an unprecedented move for ISO (and > maybe national standards orgs?) because they typically standardized existing > practices. OSI was, to my knowledge, mandated in some way where it was > creating practices rather than standardizing existing ones. Eric, I won't make any claims about cause and effect -- I don't know, I imagine those who might know would disagree, and it is probably off-topic for this list -- but there were several panics in some of the national standards bodies in the 80s about how to make or keep themselves relevant in information technology-related areas. Those concerns created a great deal of ferment, out of which came, among many other things, the notion of "anticipatory standards" as differentiated from "standards reflecting existing practice in industry". Arguably, other symptoms included the creation of ISO/IEC JTC1 in 1987 after several years of discussions and the ISO TC 97 - CCITT Joint Development Agreement (which JTC1 and ITU-T later inherited). It is not a very big step from "anticipatory standards" to "standards development bodies defining basic architectural ("reference") models and designing their own protocols". Again, no assertions about causes, but the general climate of the times may have been much more important to the unfolding of some of these developments than you seem to have inferred.
From jeanjour at comcast.net Thu Feb 17 20:01:02 2011 From: jeanjour at comcast.net (John Day) Date: Thu, 17 Feb 2011 23:01:02 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: John, All you say here about what happened in the 80s is true. The formation of JTC1 etc. But that was quite late to the game. The idea of standardizing to a point in the future was set earlier, by the first meeting of SC16 in March 1978, and the Joint Development with CCITT by 1979/80 was quite early. (The biggest mistake in the whole effort). At that time, the idea was that things were changing so fast that one had to shoot for a point in the future. The world views between the computer companies and the European PTTs were so different and the PTTs saw so much at stake, there was no way anything good could have come from it. It might have been better had the cooperation with CCITT not happened. But with no deregulation even considered in 1979, the European computer manufacturers didn't have much choice. To some degree this may well have been a strategy to get out ahead of IBM and the PTTs. Given their dominance in the markets, had they not attempted something like that and gone with standardizing current practice it would have been SNA over X.25, instead of TP4 over CLNP. At 22:21 -0500 2011/02/17, John Klensin wrote: >On 2/17/11, Eric Gade wrote: > >> We sort of got into this last week, but didn't push it too far. OSI is >> unique from an international standards perspective because it was >> prescriptive. As far as I know, it was an unprecedented move for ISO (and >> maybe national standards orgs?) because they typically standardized existing >> practices. OSI was, to my knowledge, mandated in some way where it was >> creating practices rather than standardizing existing ones.
> >Eric, > >I won't make any claims about cause and effect -- I don't know, I >imagine those who might know would disagree, it it is probably >off-topic for this list -- but there were several panics in some of >the national standards bodies in the 80s about how to make or keep >themselves relevant in information technology-related areas. Those >concerns created a great deal of ferment, out of which came, among >many other things, the notion of "anticipatory standards" as >differentiated from "standards reflecting existing practice in >industry". Arguably, other symptoms included the creation of ISO/IEC >JTC1 in 1987 after several years of discussions and the ISO TC 97 - >CCITT Joint Development Agreement (which JTC1 and ITU-T later >inherited). It is not a very big step from "anticipatory standards" >to "standards development bodies defining basic architectural >("reference") models and designing their own protocols. > >Again, no assertions about causes, but the general climate of the >times may have been much more important to the unfolding of some of >these developments than you seem to have inferred. From harald at alvestrand.no Fri Feb 18 01:35:58 2011 From: harald at alvestrand.no (Harald Alvestrand) Date: Fri, 18 Feb 2011 10:35:58 +0100 Subject: [ih] .ARPA meaning change (Re: Dot Com etc) In-Reply-To: <9C6AF440-9EF1-4D04-8188-11F302F48ADD@google.com> References: <4B577323.2040905@isi.edu> <1264027356.3494.42.camel@localhost> <7B4642EE-0BE2-4E33-8918-66188F6618C1@transsys.com> <9C6AF440-9EF1-4D04-8188-11F302F48ADD@google.com> Message-ID: <4D5E3D7E.3020400@alvestrand.no> On 01/21/10 00:27, Vint Cerf wrote: > Louis, thanks for reminding us about the interim use of .arpa until > registration of names in the other 7 TLDs occurred. I'd forgotten > about that. Later, .arpa was used for reverse lookup and other > infrastructure mechanisms. 
There's a short history of '.arpa' in RFC 3172, September 2001, which documents the change from "Arpanet" to its current meaning, dating it to "during 2000". The -00 version of draft-iab-arpa is dated April 2001; Geoff Huston edited it. I distinctly remember coming up with the "Address and Routing Parameter Area" backronym during an IESG call - must have been in my 1995-1999 tenure, or possibly my memory is faulty and it was during my 1999-2001 IAB tenure... > > v > > On Jan 20, 2010, at 6:05 PM, Louis Mamakos wrote: > >> There was, of course, the .ARPA domain that came first. One day, all >> of the hosts in the SRI-NIC's HOSTS.TXT file grew aliases with the >> .ARPA suffix. For some period of time during the transition to the >> operational DNS, the NIC continued to add hosts with domain names >> (other than .ARPA) to the HOSTS.TXT file. >> >> I suppose the real "flag day" for the DNS was when the HOSTS.TXT file >> stopped getting updated or distributed. >> >> The HOSTS.TXT file also contained (classful) network names as >> networks were allocated out of the IPv4 address space. I don't think >> this capability was really ever reimplemented in the DNS, especially >> when CIDR and classless network prefixes came on the scene and you >> couldn't obviously identify the "network" number by examination. >> Few programs really depended on this, and now we've got WHOIS and the >> like to bang against the registrars. >> >> Louis Mamakos >> >> On Jan 20, 2010, at 5:42 PM, Jack Haverty wrote: >> >>> Hi Bob! >>> >>> I also have the feeling that Jon put the list together, since as I >>> recall he was the only one of us organized enough to deal with such >>> things... >>> >>> As to *why* that initial list was chosen, my recollection is that it >>> simply reflected the demographics of the emerging "Internet community" >>> at the time. There were lots of governmental entities and lots of >>> schools. The "rest of world" were commercial, or companies.
>>> >>> Plus it was likely that someone from each TLD subgroup would step up >>> and >>> volunteer to be the coordinator/arbitrator of name etiquette within >>> that >>> subgroup. You couldn't have a TLD unless there was someone willing to >>> manage it. >>> >>> The nascent Internet was very US-centric, again reflecting the >>> demographics. Gov meant US government. Com was US companies, weighted >>> toward government contractors such as BBN or Linkabit - I can't recall >>> any non-US companies being involved until later in the game. >>> >>> I think .com originally was derived from "company" rather than >>> "commercial". The .com's weren't thought of as "businesses" in the >>> sense of places that consumers go to buy things. They were companies >>> doing government contract work. The Internet was not chartered to >>> interconnect businesses - it was a military command-and-control >>> prototype network, being built by educational, governmental, and >>> contractor organizations. If anybody had suggested that businesses were to be >>> included, it would have raised flotillas of red flags in the >>> administrative ranks of government and PTTs. Hence .com -- not .biz. >>> >>> I don't recall anybody ever thinking we were creating an organizational >>> structure to encompass hundreds of millions of entities covering the >>> entire planet in support of all human activities. And it certainly >>> wasn't supposed to last for 30+ years, even as an experiment. It just >>> happened to turn out that way. >>> >>> IIRC, there weren't any major debates or counterproposals or such about >>> TLDs. The TLD list just wasn't that big a deal (at the time). The >>> Internet was an *experiment* which, like all experiments, was supposed >>> to end. CCITT, ISO, and such organizations were inventing the official >>> technologies for the future of data communications. We know now how >>> that turned out. Whatever TLD list and such was used in the Internet >>> wasn't supposed to last long.
So a specific logistical decision like >>> the TLD list wasn't all that important - at the time. >>> >>> I agree that whatever discussion happened was almost certainly carried >>> out mostly on the email lists which served as the primary way for >>> everybody to interact between quarterly meetings, and then Jon and crew >>> most likely put the initial list together, and there wasn't any real >>> opposition so it became real. >>> >>> It's very difficult to identify who "invented" anything in those days. >>> There were lots of discussions, ideas, and strawmen passed around in >>> emails and then eventually somebody wrote the document or wrote the >>> code >>> to capture the "rough consensus" of the discussion. >>> >>> /Jack >>> >>> >>> On Wed, 2010-01-20 at 13:18 -0800, Bob Braden wrote: >>>> >>>> internet-history-request at postel.org wrote: >>>> >>>>> >>>>>> Does anyone know why .com; .edu and .gov were chosen? I know it >>>>>> seems >>>>>> simple, but why .com instead of something like .biz? >>>>> >>>> >>>> I recall seeing those TLD names on Jon's white board at the time. I >>>> feel >>>> quite certain that they came out of Jon's head, but were ratified by >>>> discussions with Paul. >>>> >>>> Bob Braden >>>> >>>> >>>> >>> >> >> > > From harald at alvestrand.no Fri Feb 18 02:12:49 2011 From: harald at alvestrand.no (Harald Alvestrand) Date: Fri, 18 Feb 2011 11:12:49 +0100 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: <4D5E4621.9080103@alvestrand.no> On 02/18/11 00:56, John Day wrote: > The thing is I don't know what you think the "OSI issues" were? > > I would have to look, but I don't think in 1984 that the X.500 work > had started and if it had it would have been very early. They would > have been coming up with a directory protocol and trying to throw > everything in that someone might use for a naming tree.
There was > certainly no consideration of what sorts of naming trees would > actually be created, or for that matter who was going to create them. > It certainly would not have been "OSI." > > Lots of people had ideas but there was no OSI position on it. That I > can guarantee. The date on X.500 is 1990. Generally took 4-5 years > to do this and the X.500 stuff was highly controversial within OSI. The first version of X.500 was in the ITU-T Blue Book, which was published in 1988. The Red Book (1984) did not have a directory service. The first "operational" X.500 RFC was RFC 1218, "Naming scheme for c=US", dated April 1991, but it's also referred to in Karen Sollins' "A Plan for Internet Directory Services", July 1989. It seems reasonable to assume that the basic design work was carried out in 1984-1988. Harald From eric.gade at gmail.com Fri Feb 18 02:53:46 2011 From: eric.gade at gmail.com (Eric Gade) Date: Fri, 18 Feb 2011 10:53:46 +0000 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) In-Reply-To: <4D5E4621.9080103@alvestrand.no> References: <20110217203757.636D928E137@aland.bbn.com> <4D5E4621.9080103@alvestrand.no> Message-ID: On Fri, Feb 18, 2011 at 10:12 AM, Harald Alvestrand wrote: > > > The first "operational" X.500 RFC was RFC 1218, "Naming scheme for c=US", > dated April 1991, but it's also referred to in Karen Sollins' "A Plan for > Internet Directory Services", July 1989. > Interesting. There is a draft of an unpublished paper written by Sollins in Jan 1983 -- titled "Naming, Conversations, and Federation" -- that is in the NIC collection (I've been informed that this went on to be included in her dissertation). Its very presence in that collection probably means that someone at the NIC was familiar with these kinds of ideas at the time, possibly Jake Feinler or others that went on to do IFIP work. -- Eric -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jaap at NLnetLabs.nl Fri Feb 18 03:07:34 2011 From: jaap at NLnetLabs.nl (Jaap Akkerhuis) Date: Fri, 18 Feb 2011 12:07:34 +0100 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> (Eric? wrote) No one from this list nor anyone else I tried to contact could give me a definitive answer on when this decision was made. I had to try and figure it out myself. It appears that something changed between May and July of 1984. In July, a draft RFC was posted that included the ISO-3166 list for the first time. Four months beforehand, Postel first announced to Namedroppers that he felt there should be countries represented somewhere in the hierarchy. This came after a fairly significant amount of lobbying by all kinds of people, but many of them had OSI sympathies. My colleague Piet Beertema was involved in the discussions about which list to use. I remember that among other possible lists were the UPU list and the list of Road Vehicle signs (or something like that). The last one has variable-length codes which didn't make it attractive. The ISO list was indeed interesting because the UN is involved. And the 2-char instead of the 3-char codes were attractive because there would be no clashes with existing domains (com, edu, net). I have always thought that the whole DNS became popular independent of the transport protocol due to the UUCP mapping project and similar stuff (the JANET gateway etc).
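The clash-avoidance point is purely a matter of label length, and can be checked mechanically. A minimal sketch (the two-letter codes shown are just a small sample for illustration, not the full ISO 3166 list):

```python
# Sketch of the length argument: two-letter ISO 3166 country codes
# cannot collide with the original TLDs, simply because every original
# TLD is three letters or longer. Sample data only, not official lists.
original_tlds = {"com", "edu", "gov", "mil", "org", "net", "arpa"}
sample_cc_tlds = {"us", "nl", "de", "fr", "jp"}  # illustrative subset

# Length alone guarantees the two sets are disjoint.
assert all(len(tld) >= 3 for tld in original_tlds)
assert all(len(cc) == 2 for cc in sample_cc_tlds)
assert original_tlds.isdisjoint(sample_cc_tlds)
print("no clashes possible")  # prints "no clashes possible"
```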
jaap From mfidelman at meetinghouse.net Fri Feb 18 04:14:43 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 07:14:43 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: <4D5E62B3.6010804@meetinghouse.net> Eric Gade wrote: > > On Fri, Feb 18, 2011 at 1:41 AM, Richard Bennett > wrote: > > In what sense was OSI top-down? The OSI process was every bit as > much a bottoms-up, participant-driven process as IEEE 802 is > today. If there ever was a top-down standards process in the > networking world directed by two or three lords of the purse, it > certainly wasn't OSI. > > We sort of got into this last week, but didn't push it too far. OSI is > unique from an international standards perspective because it was > prescriptive. As far as I know, it was an unprecedented move for ISO > (and maybe national standards orgs?) because they typically > standardized existing practices. You've said that before. Can you elaborate with some examples of where ISO has simply codified existing practice? Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Fri Feb 18 04:27:31 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 07:27:31 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5DDE65.1090207@bennett.com> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5DDE65.1090207@bennett.com> Message-ID: <4D5E65B3.3040804@meetinghouse.net> Richard Bennett wrote: > People bring proposals to standards bodies, who choose among them, > modify them, accept them and reject them. "Existing practices" in the > most interesting cases are confined to the lab or even to simulation > these days.
Which is distinctly different than bottom-up in the IETF sense. The distinction isn't bottom-up vs. top-down, it's more one of semi-collaborative, get it right ("rough consensus and running code" so to speak) vs. vendors battling out whose existing products will have to be modified when the standard gets finalized. > I think you're making a false distinction between OSI and other > networking standards. OSI's problem was mainly that it was not top-down > enough, had too many cooks, and had to offer too many options to > achieve consensus. And that comes back to the lack of a bottom-up process that emphasized running code. My impression of the OSI work was that it was way too theoretical and political. There are real lessons to be learned here. I see a lot of the same dynamics in today's geospatial standards work through the OGC - lots of theoretical wrangling, resulting in standards that sort of work, but have to be fixed in later revisions, and that largely get ignored by most of the world (take a look at how many people use ESRI's proprietary stuff, vs. Google's APIs, vs. OGC standard WMS and WFS; or maybe look at the rapid adoption of RESTful interfaces vs. W3C web service standards). Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From jeanjour at comcast.net Fri Feb 18 04:39:58 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 07:39:58 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> References: <20110217203757.636D928E137@aland.bbn.com> <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> Message-ID: Just out of curiosity could someone explain how the UN got involved with ISO 3166? The connection is not obvious to me. Thanks. At 12:07 +0100 2011/02/18, Jaap Akkerhuis wrote: > > (Eric?
wrote) > > No one from this list nor anyone else I tried to contact could give me a > definitive answer on when this decision was made. I had to try >and figure it > out myself. It appears that something changed between May and >July of 1984. > In July, a draft RFC was posted that included the ISO-3166 list for the > first time. Four months beforehand, Postel first announced to Namedroppers > that he felt there should be countries represented somewhere in the > hierarchy. This came after a fairly significant amount of lobbying by all > kinds of people, but many of them had OSI sympathies. > >My colleague Piet Beertema was involved in the discussions about >which list to use. I remember that among other possible lists were >the UPU list and the list of Road Vehicle signs (or something like >that). The last one has variable-length codes which didn't make it >attractive. The ISO list was indeed interesting because the UN is >involved. And the 2-char instead of the 3-char codes were attractive >because there would be no clashes with existing domains (com, edu, >net). > >I have always thought that the whole DNS became popular independent of >the transport protocol due to the UUCP mapping project and similar >stuff (the JANET gateway etc). > > jaap From jeanjour at comcast.net Fri Feb 18 04:41:42 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 07:41:42 -0500 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) In-Reply-To: <4D5E4621.9080103@alvestrand.no> References: <20110217203757.636D928E137@aland.bbn.com> <4D5E4621.9080103@alvestrand.no> Message-ID: Yes, Harald. What I was referring to were the dates when the New Work Item was approved in ISO and perhaps even the date of the first DP ballot. That would tell us when it started. The documents were progressed together in ISO and CCITT. I believe in those days CCITT was still on a 4 year cycle.
Colored books were published every 4 years and what was done at that point went in. ISO on the other hand published standards when they passed the IS ballot. Given the two year difference between the CCITT date and the ISO date, I wonder if the CCITT version was the ISO DIS, rather than the IS. At 11:12 +0100 2011/02/18, Harald Alvestrand wrote: >On 02/18/11 00:56, John Day wrote: >>The thing is I don't know what you think the "OSI issues" were? >> >>I would have to look, but I don't think in 1984 that the X.500 work >>had started and if it had it would have been very early. They would >>have been coming up with a directory protocol and trying to throw >>everything in that someone might use for a naming tree. There was >>certainly no consideration of what sorts of naming trees would >>actually be created, or for that matter who was going to create >>them. It certainly would not have been "OSI." >> >>Lots of people had ideas but there was no OSI position on it. That >>I can guarantee. The date on X.500 is 1990. Generally took 4-5 >>years to do this and the X.500 stuff was highly controversial >>within OSI. >The first version of X.500 was in the ITU-T Blue Book, which was >published in 1988. The Red Book (1984) did not have a directory >service. > >The first "operational" X.500 RFC was RFC 1218, "Naming scheme for >c=US", dated April 1991, but it's also referred to in Karen Sollins' >"A Plan for Internet Directory Services", July 1989. > >It seems reasonable to assume that the basic design work was carried >out in 1984-1988. 
> > Harald From jeanjour at comcast.net Fri Feb 18 04:47:28 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 07:47:28 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E62B3.6010804@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> Message-ID: Screw threads, highway signs, paper size, HDLC, Transport Layer, Session Layer, Network Layer At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >Eric Gade wrote: >> >>On Fri, Feb 18, 2011 at 1:41 AM, Richard Bennett >>> wrote: >> >> In what sense was OSI top-down? The OSI process was every bit as >> much a bottoms-up, participant-driven process as IEEE 802 is >> today. If there ever was a top-down standards process in the >> networking world directed by two or three lords of the purse, it >> certainly wasn't OSI. >>We sort of got into this last week, but didn't push it too far. OSI >>is unique from an international standards perspective because it >>was prescriptive. As far as I know, it was an unprecedented move >>for ISO (and maybe national standards orgs?) because they typically >>standardized existing practices. >You've said that before. Can you elaborate with some examples of >where ISO has simply codified existing practice? > >Miles Fidelman > >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From jaap at NLnetLabs.nl Fri Feb 18 05:31:57 2011 From: jaap at NLnetLabs.nl (Jaap Akkerhuis) Date: Fri, 18 Feb 2011 14:31:57 +0100 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> Message-ID: <201102181331.p1IDVvRB078672@bartok.nlnetlabs.nl> Just out of curiosity could someone explain how the UN got involved with ISO 3166? The connection is not obvious to me. In short, the connection is defined in ISO 3166 itself.
For a new country, the Statistical Bureau of the UN in New York asks for a code when they need it for statistical purposes. The ISO 3166 Maintenance Agency then allocates two- and three-letter alpha codes according to the rules in ISO 3166. jaap From mfidelman at meetinghouse.net Fri Feb 18 05:35:52 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 08:35:52 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> Message-ID: <4D5E75B8.1070809@meetinghouse.net> John Day wrote: > At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >> You've said that before. Can you elaborate with some examples of >> where ISO has simply codified existing practice? Screw threads, highway signs, paper size, HDLC, Transport Layer, Session Layer, Network Layer I was all set to buy "screw threads" - until I read the Wikipedia article on http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization Re. Transport, Session, Network layer: how can you say that with a straight face, after all the recent discussion here? (I don't see an ISO number stamped on TCP/IP.) Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From nigel at channelisles.net Fri Feb 18 05:47:54 2011 From: nigel at channelisles.net (Nigel Roberts) Date: Fri, 18 Feb 2011 13:47:54 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> Message-ID: <4D5E788A.5060805@channelisles.net> ISO is part of the UN, isn't it? One of the ways onto the ISO 3166-1 list is via the UN Statistics Bureau. Nigel On 02/18/2011 12:39 PM, John Day wrote: > Just out of curiosity could someone explain how the UN got involved with > ISO 3166? > > The connection is not obvious to me. > > Thanks.
> > At 12:07 +0100 2011/02/18, Jaap Akkerhuis wrote: >> (Eric? wrote) >> >> No one from this list nor anyone else I tried to contact could give me a >> definitive answer on when this decision was made. I had to try and >> figure it >> out myself. It appears that something changed between May and July of >> 1984. >> In July, a draft RFC was posted that included the ISO-3166 list for the >> first time. Four months beforehand, Postel first announced to >> Namedroppers >> that he felt there should be countries represented somewhere in the >> hierarchy. This came after a fairly significant amount of lobbying by all >> kinds of people, but many of them had OSI sympathies. >> >> My colleague Piet Beertema was involved in the discussions about >> which list to use. I remember that among other possible lists were >> the UPU list and the list of Road Vehicle signs (or something like >> that). The last one has variable-length codes which didn't make it >> attractive. The ISO list was indeed interesting because the UN is >> involved. And the 2-char instead of the 3-char codes were attractive >> because there would be no clashes with existing domains (com, edu, >> net). >> >> I have always thought that the whole DNS became popular independent of >> the transport protocol due to the UUCP mapping project and similar >> stuff (the JANET gateway etc). >> >> jaap > > From jklensin at gmail.com Fri Feb 18 05:53:32 2011 From: jklensin at gmail.com (John Klensin) Date: Fri, 18 Feb 2011 08:53:32 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: On 2/17/11, John Day wrote: > John, > > All you say here about what happened in the 80s is true. The > formation of JTC1 etc. But that was quite late to the game. Absolutely. JTC1 didn't come together until nearly the end of the decade although the work started much earlier.
If I recall, it was the brainchild of Joe DeBlasi (IBM's corporate head of standards, later ACM Exec Dir). I chaired ACM's (late and mostly unlamented) Standards Committee from about 1986 and so got to watch that part of the process from ANSI/ISSB among other places. But I was less concerned about the specific standardization events -- many of which were fairly peripheral to the OSI developments and Internet/OSI interactions -- than the degree to which they indicated that the environment was fermenting, making some adventures by standards development bodies possible that would not have been possible before and might not even be possible some years later (with the emphasis on "might" -- some of what is now going on in ITU-T may not be that much different). > The idea of standardizing to a point in the future was set by > the first meeting of SC16 in March 1978 and the Joint Development > with CCITT by 1979/80 was quite early. (The biggest mistake in the > whole effort). At that time, the idea was that things were changing > so fast that one had to shoot for a point in the future. Carl Cargill has made the claim on several occasions that he invented anticipatory standardization. I've had no reason to disbelieve him even though we disagreed (at least at the time and for some years thereafter) as to whether it was a great idea or a disaster waiting to happen. If this is important, someone might check with him on both dates and how things unfolded at levels considerably above any one CCITT / ITU-T SC or ISO WG or EG. > The world views between the computer companies and the European PTTs > were so different and the PTTs saw so much at stake, there was no way > anything good could have come from it. Yes. But I think it was actually an almost-separate problem at the standards policy level, even though I've assumed it played out most dramatically at the SG / WG one. > It might have been better had the cooperation with CCITT not > happened.
But with no deregulation even considered in 1979, the > European computer manufacturers didn't have much choice. Part of what also drove those collaborations (both TC97-CCITT and the later formation of JTC1) was a realization by both companies and governments/ PTTs that they were spending a lot of resources sending people (often the same people) to parallel meetings, often to advocate particular results in one and to provide a defensive/blocking force in the other. Joint development agreements and consolidation were supposed to fix that. With a quarter-century of hindsight, it didn't work very well and still doesn't. > To some degree this may well have been a strategy to get out ahead of > IBM and the PTTs. Given their dominance in the markets, had they not > attempted something like that and gone with standardizing current > practice it would have been SNA over X.25, instead of TP4 over CLNP. Yes. But also more complicated. If this is important, someone should try to find Joe and read him out -- that perspective would be, IMO, very useful. john From ajs at crankycanuck.ca Fri Feb 18 06:07:57 2011 From: ajs at crankycanuck.ca (Andrew Sullivan) Date: Fri, 18 Feb 2011 09:07:57 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E65B3.3040804@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5DDE65.1090207@bennett.com> <4D5E65B3.3040804@meetinghouse.net> Message-ID: <20110218140757.GE66684@shinkuro.com> On Fri, Feb 18, 2011 at 07:27:31AM -0500, Miles Fidelman wrote: > Which is distinctly different than bottom-up in the IETF sense. The > distinction isn't bottom-up vs. top-down, it's more one of > semi-collaborative, get it right ("rough consensus and running code" so > to speak) vs. vendors battling out whose existing products will have to > be modified when the standard gets finalized.
I wish I felt that the line could be drawn quite that brightly, at least in the current incarnation of the IETF. A -- Andrew Sullivan ajs at crankycanuck.ca From jeanjour at comcast.net Fri Feb 18 06:08:36 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 09:08:36 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E65B3.3040804@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5DDE65.1090207@bennett.com> <4D5E65B3.3040804@meetinghouse.net> Message-ID: At 7:27 -0500 2011/02/18, Miles Fidelman wrote: >Richard Bennett wrote: >>People bring proposals to standards bodies, who choose among them, >>modify them, accept them and reject them. "Existing practices" in >>the most interesting cases are confined to the lab or even to >>simulation these days. > >Which is distinctly different than bottom-up in the IETF sense. The >distinction isn't bottom-up vs. top-down, it's more one of >semi-collaborative, get it right ("rough consensus and running code" >so to speak) vs. vendors battling out whose existing products will >have to be modified when the standard gets finalized. That depends on how close they are when they start! ;-) It has nothing to do with the nature of the organizations. The IETF has been fortunate in that many of their projects have been with people more or less on the same page. Although in recent years that has changed considerably and I think you would find the politics within the IETF these days to come close to the level in the OSI. (We are talking OSI here and not the wider environment of ISO.) By putting the computer companies and the PTTs in the same meeting it was worse than oil and water, perhaps oil and gasoline! Also it isn't just whose product has to change. What was being proposed by OSI was completely counter to both IBM's and the PTT's business models. Needless to say, they weren't going to take that lying down.
> >>I think you're making a false distinction between OSI and other >>networking standards. OSI's problem was mainly that it was not >>top-down enough, had too many cooks, and had to offer too many >>options to achieve consensus. >And that comes back to the lack of a bottom-up process that >emphasized running code. My impression of the OSI work was that it >was way too theoretical and political. All standards are bottom up. If participants don't choose to work on it, then it doesn't happen. Since OSI participants were primarily corporations, they were hesitant to commit money to implementation until they knew there was product. They didn't have the advantage of being government subsidized as the Internet did. > >There are real lessons to be learned here. I see a lot of the same >dynamics in today's geospatial standards work through the OGC - >lots of theoretical wrangling, resulting in standards that sort of >work, but have to be fixed in later revisions, and that largely get >ignored by most of the world (take a look at how many people use >ESRI's proprietary stuff, vs. Google's APIs, vs. OGC standard WMS >and WFS; or maybe look at the rapid adoption of RESTful interfaces >vs. W3C web service standards). This is true of all standards organizations that have been around a long time. Look at all the RFCs that are not in current use. There are many lessons to be learned here. The social dynamics of these processes are more than a little fascinating. > >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From jeanjour at comcast.net Fri Feb 18 06:15:53 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 09:15:53 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: I couldn't agree more! Someone really needs to talk to DeBlasi. If I ever met him it was only once or twice.
I didn't work at those esoteric levels you did! ;-) But I was constantly coming up against his handiwork in the strategy the IBM delegates took. It sure seemed that Joe was a master of electro-political engineering! We were seldom in agreement but he was very good at what he did. ;-) At 8:53 -0500 2011/02/18, John Klensin wrote: >On 2/17/11, John Day wrote: >> John, >> >> All you say here about what happened in the 80s is true. The >> formation of JTC1 etc. But that was quite late to the game. > >Absolutely. JTC1 didn't come together until nearly the end of the >decade although the work started much earlier. If I recall, it was >the brainchild of Joe DeBlasi (IBM's corporate head of standards, >later ACM Exec Dir). I chaired ACM's (late and mostly unlamented) >Standards Committee from about 1986 and so got to watch that part of >the process from ANSI/ISSB among other places. But I was less >concerned about the specific standardization events -- many of which >were fairly peripheral to the OSI developments and Internet/OSI >interactions -- than the degree to which they indicated that the >environment was fermenting, making some adventures by standards >development bodies possible that would not have been possible before >and might not even be possible some years later (with the emphasis on >"might" -- some of what is now going on in ITU-T may not be that much >different). > >> The idea of standardizing to a point in the future was set by >> the first meeting of SC16 in March 1978 and the Joint Development >> with CCITT by 1979/80 was quite early. (The biggest mistake in the >> whole effort). At that time, the idea was that things were changing >> so fast that one had to shoot for a point in the future. > >Carl Cargill has made the claim on several occasions that he invented >anticipatory standardization.
I've had no reason to disbelieve him >even though we disagreed (at least at the time and for some years >thereafter) as to whether it was a great idea or a disaster waiting to >happen. If this is important, someone might check with him on both >dates and how things unfolded at levels considerably above any one >CCITT / ITU-T SC or ISO WG or EG. > >> The world views between the computer companies and the European PTTs >> were so different and the PTTs saw so much at stake, there was no way >> anything good could have come from it. > >Yes. But I think it was actually an almost-separate problem at the standards >policy level, even though I've assumed it played out most dramatically >at the SG / WG one. > >> It might have been better had the cooperation with CCITT not >> happened. But with no deregulation even considered in 1979, the >> European computer manufacturers didn't have much choice. > >Part of what also drove those collaborations (both TC97-CCITT and the >later formation of JTC1) was a realization by both companies and >governments/ PTTs that they were spending a lot of resources sending >people (often the same people) to parallel meetings, often to advocate >particular results in one and to provide a defensive/blocking force in >the other. Joint development agreements and consolidation were >supposed to fix that. With a quarter-century of hindsight, it didn't >work very well and still doesn't. > >> To some degree this may well have been a strategy to get out ahead of >> IBM and the PTTs. Given their dominance in the markets, had they not >> attempted something like that and gone with standardizing current >> practice it would have been SNA over X.25, instead of TP4 over CLNP. > >Yes. But also more complicated. If this is important, someone should >try to find Joe and read him out -- that perspective would be, IMO, >very useful.
> > john From jeanjour at comcast.net Fri Feb 18 06:25:13 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 09:25:13 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E75B8.1070809@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> Message-ID: At 8:35 -0500 2011/02/18, Miles Fidelman wrote: >John Day wrote: >>At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >>>You've said that before. Can you elaborate with some examples of >>>where ISO has simply codified existing practice? >Screw threads, highway signs, paper size, HDLC, Transport Layer, >Session Layer, Network Layer > >I was all set to buy "screw threads" - until I read the Wikipedia article on >http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization > >Re. Transport, Session, Network layer: how can you say that with a >straight face, after all the recent discussion here? (I don't see >an ISO number stamped on TCP/IP.) I figured you would take the bait. ;-) TP4 was INWG 96 which was CYCLADES TS which had been operational since 1972. Network: X.25 was an ISO standard that had been in use since 1976. Session: Was lifted (for better or worse, mostly worse) from SGVIII Videotex standards that were built and operating in France. No there is no ISO number stamped on TCP. That decision was worked out in an open process in IFIP WG6.1 prior to start of OSI, which chose a modified CYCLADES TS. As long as we are on the topic, all of the IEEE 802 standards are also ISO standards. Ethernet was in use for close to 10 years before it was an ISO standard. 
From craig at aland.bbn.com Fri Feb 18 06:42:56 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 18 Feb 2011 09:42:56 -0500 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) Message-ID: <20110218144256.2D71E28E137@aland.bbn.com> > > The first "operational" X.500 RFC was RFC 1218, "Naming scheme for c=US", > > dated April 1991, but it's also referred to in Karen Sollins' "A Plan for > > Internet Directory Services", July 1989. > > > > Interesting. There is a draft of an unpublished paper written by Sollins in > Jan 1983 -- titled "Naming, Conversations, and Federation" -- that is in the > NIC collection (I've been informed that this went on to be included in her > dissertation). Its very presence in that collection probably means that > someone at the NIC was familiar with these kinds of ideas at the time, > possibly Jake Feinler or others that went on to do IFIP work. Side note -- you should generally assume *everyone* in the Internet community was familiar with *everyone* else in the Internet community and the US OSI representation team (which was heavily ex-ARPANET types) until about 1986, if not later. It was a very small world -- I joined in 1983 and knew most folks by reputation if not by face by 1985 and I was fresh out of college in 1983... Thanks! Craig From craig at aland.bbn.com Fri Feb 18 06:45:37 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 18 Feb 2011 09:45:37 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110218144537.1D68D28E137@aland.bbn.com> > In what sense was OSI top-down? The OSI process was every bit as much a > bottoms-up, participant-driven process as IEEE 802 is today. If there > ever was a top-down standards process in the networking world directed > by two or three lords of the purse, it certainly wasn't OSI. If you read the original OSI layering paper by Hubert Zimmermann it is clearly a top-down management work plan.
Useful to compare it with the ARPANET layering paper of a few years later. The difference is Zim's "here's how we'll break up the problem of developing standards" vs. "here's why creating TELNET led us to a layered architecture". Thanks! Craig > > On 2/17/2011 4:16 PM, Eric Gade wrote: > > > > This also may just be a matter of dissonant worldviews. Where in OSI > > you see a series of discrete, technically explicit standards, I see an > > (overly?) ambitious, top-down standards project for computer > > networking that was unprecedented by international standards work at > > the time. It reflects a profoundly optimistic perspective that relies > > on a consistently global view concerning the application of these > > technologies. Those involved in this overall project were obviously > > going to bring this optimism and global perspective to whatever > > related projects that they were involved with. IFIP people were > > involved with DNS and the work of IFIP was the closest related to the > > same issues that DNS addressed. > > ******************** Craig Partridge Chief Scientist, BBN Technologies E-mail: craig at aland.bbn.com or craig at bbn.com Phone: +1 517 324 3425 From jaap at NLnetLabs.nl Fri Feb 18 06:46:44 2011 From: jaap at NLnetLabs.nl (Jaap Akkerhuis) Date: Fri, 18 Feb 2011 15:46:44 +0100 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E788A.5060805@channelisles.net> References: <20110217203757.636D928E137@aland.bbn.com> <201102181107.p1IB7YGL072623@bartok.nlnetlabs.nl> <4D5E788A.5060805@channelisles.net> Message-ID: <201102181446.p1IEkiLJ034785@bartok.nlnetlabs.nl> ISO is part of the UN, isn't it? No. One of the ways onto the ISO 3166-1 list is via the UN Statistics Bureau in New York, to be precise. It is the major way. There are other ways, but these are exceptions; they always involve a member of the ISO 3166/MA, often the convener. 
jaap From vint at google.com Fri Feb 18 07:15:34 2011 From: vint at google.com (Vint Cerf) Date: Fri, 18 Feb 2011 10:15:34 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> Message-ID: john, I thought INWG 96 was a compromise that was not identical to, though it drew heavily upon, the Cyclades TS protocol? v On Fri, Feb 18, 2011 at 9:25 AM, John Day wrote: > At 8:35 -0500 2011/02/18, Miles Fidelman wrote: > >> John Day wrote: >> >>> At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >>> >>>> You've said that before. Can you elaborate with some examples of where >>>> ISO has simply codified existing practice? >>>> >>> Screw threads, highway signs, paper size, HDLC, Transport Layer, Session >> Layer, Network Layer >> >> I was all set to buy "screw threads" - until I read the Wikipedia article >> on >> http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization >> >> Re. Transport, Session, Network layer: how can you say that with a >> straight face, after all the recent discussion here? (I don't see an ISO >> number stamped on TCP/IP.) >> > > I figured you would take the bait. ;-) > > TP4 was INWG 96 which was CYCLADES TS which had been operational since > 1972. > > Network: X.25 was an ISO standard that had been in use since 1976. > > Session: Was lifted (for better or worse, mostly worse) from SGVIII > Videotex standards that were built and operating in France. > > No there is no ISO number stamped on TCP. That decision was worked out in > an open process in IFIP WG6.1 prior to start of OSI, which chose a modified > CYCLADES TS. > > As long as we are on the topic, all of the IEEE 802 standards are also ISO > standards. Ethernet was in use for close to 10 years before it was an ISO > standard. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dhc2 at dcrocker.net Fri Feb 18 07:51:04 2011 From: dhc2 at dcrocker.net (Dave CROCKER) Date: Fri, 18 Feb 2011 07:51:04 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110218144537.1D68D28E137@aland.bbn.com> References: <20110218144537.1D68D28E137@aland.bbn.com> Message-ID: <4D5E9568.3020709@dcrocker.net> On 2/18/2011 6:45 AM, Craig Partridge wrote: > If you read the original OS layering paper by Hubert Zimmerman it is > clearly a top-down management work plan. Useful to compare it with > the ARPANET layering paper of a few years later. The difference is Zim's > "here's how we'll break up the problem of developing standards" vs. > "here's why creating TELNET led us to a layered architecture". As I recall, documentation of the Arpanet approach to layering occurred as a response to the OSI papers. Prior to that it was de facto but not documented. One consequence is that different folk divided things up differently. The Arpanet approach was shown as anywhere from 3 to 5 layers... In fact, I used to do presentations that mapped Arpanet and Internet details rather comfortably to the 7-layer model. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From craig at aland.bbn.com Fri Feb 18 08:02:41 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 18 Feb 2011 11:02:41 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110218160241.6209E28E137@aland.bbn.com> > > > On 2/18/2011 6:45 AM, Craig Partridge wrote: > > If you read the original OS layering paper by Hubert Zimmerman it is > > clearly a top-down management work plan. Useful to compare it with > > the ARPANET layering paper of a few years later. The difference is Zim's > > "here's how we'll break up the problem of developing standards" vs. > > "here's why creating TELNET led us to a layered architecture". > > > As I recall, documentation of the Arpanet approach to layering occurred as a > response to the OSI papers. 
Prior to that it was de facto but not documented. That was the received wisdom I got in 1983. But when I went digging in 1987 or so, I discovered it wasn't true. The ARPANET layering paper was published by Davidson et al at the 5th IEEE Data Comm Conference in 1977. Zim's paper appeared in 1980. Thanks! Craig From jeanjour at comcast.net Fri Feb 18 07:54:16 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 10:54:16 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> Message-ID: You are undoubtedly correct. After all, you are the lead author of it. BTW, could you tell me the difference between INWG 96 and INWG 96.1? At 10:15 -0500 2011/02/18, Vint Cerf wrote: >john, > >I thought INWG 96 was a compromise that was not identical to, though >it drew heavily upon, the Cyclades TS protocol? > >v > > >On Fri, Feb 18, 2011 at 9:25 AM, John Day ><jeanjour at comcast.net> wrote: > >At 8:35 -0500 2011/02/18, Miles Fidelman wrote: > >John Day wrote: > >At 7:14 -0500 2011/02/18, Miles Fidelman wrote: > >You've said that before. Can you elaborate with some examples of >where ISO has simply codified existing practice? > >Screw threads, highway signs, paper size, HDLC, Transport Layer, >Session Layer, Network Layer > >I was all set to buy "screw threads" - until I read the Wikipedia article on >http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization > >Re. Transport, Session, Network layer: how can you say that with a >straight face, after all the recent discussion here? (I don't see >an ISO number stamped on TCP/IP.) > > >I figured you would take the bait. ;-) > >TP4 was INWG 96 which was CYCLADES TS which had been operational since 1972. > >Network: X.25 was an ISO standard that had been in use since 1976. 
> >Session: Was lifted (for better or worse, mostly worse) from SGVIII >Videotex standards that were built and operating in France. > >No there is no ISO number stamped on TCP. That decision was worked >out in an open process in IFIP WG6.1 prior to start of OSI, which >chose a modified CYCLADES TS. > >As long as we are on the topic, all of the IEEE 802 standards are >also ISO standards. Ethernet was in use for close to 10 years >before it was an ISO standard. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeanjour at comcast.net Fri Feb 18 07:58:15 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 10:58:15 -0500 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) In-Reply-To: <20110218144256.2D71E28E137@aland.bbn.com> References: <20110218144256.2D71E28E137@aland.bbn.com> Message-ID: What you say about the US OSI group and the Internet group was true for the Network Layer work and may have been true for the X.500/X.400 work. I was wondering about that. However, for the Transport and above work this was not the case at all. There were very few people in SC16.21 with Internet experience. The connectionless network layer group in OSI was almost entirely Internet people. At 9:42 -0500 2011/02/18, Craig Partridge wrote: > > > The first "operational" X.500 RFC was RFC 1218, "Naming scheme for c=US", >> > dated April 1991, but it's also referred to in Karen Sollins' "A Plan for >> > Internet Directory Services", July 1989. >> > >> >> Interesting. There is a draft of an unpublished paper written by Sollins in >> Jan 1983 -- titled "Naming, Conversations, and Federation" -- that is in the >> NIC collection (I've been informed that this went on to be included in her >> dissertation). Its very presence in that collection probably means that >> someone at the NIC was familiar with these kinds of ideas at the time, >> possibly Jake Feinler or others that went on to do IFIP work. 
> >Side note -- you should generally assume *everyone* in the Internet >community was familiar with *everyone* else in the Internet community and >the US OSI representation team (which was heavily ex-ARPANET types) >until about 1986, if not later. It was a very small world -- I joined >in 1983 and knew most folks by reputation if not by face by 1985 and >I was fresh out of college in 1983... > >Thanks! > >Craig From jeanjour at comcast.net Fri Feb 18 08:02:22 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 11:02:22 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110218144537.1D68D28E137@aland.bbn.com> References: <20110218144537.1D68D28E137@aland.bbn.com> Message-ID: Not exactly an apples-to-apples comparison. When Telnet was done, no one was quite sure what the architecture was. By 1978, there was a pretty good idea of what needed to be done and how to coordinate it. The same could be said of the Internet model done a few years after Zimmermann's paper. Are you suggesting that standards should not build on current knowledge? OSI was trying to assemble a set of standards. It needed a coordinating framework. At 9:45 -0500 2011/02/18, Craig Partridge wrote: > > In what sense was OSI top-down? The OSI process was every bit as much a >> bottoms-up, participant-driven process as IEEE 802 is today. If there >> ever was a top-down standards process in the networking world directed >> by two or three lords of the purse, it certainly wasn't OSI. > >If you read the original OS layering paper by Hubert Zimmerman it is >clearly a top-down management work plan. Useful to compare it with >the ARPANET layering paper of a few years later. The difference is Zim's >"here's how we'll break up the problem of developing standards" vs. >"here's why creating TELNET led us to a layered architecture". > >Thanks! > >Craig > >> >> On 2/17/2011 4:16 PM, Eric Gade wrote: >> > >> > This also may just be a matter of dissonant worldviews. 
Where in OSI >> > you see a series of discrete, technically explicit standards, I see an >> > (overly?) ambitious, top-down standards project for computer >> > networking that was unprecedented by international standards work at >> > the time. It reflects a profoundly optimistic perspective that relies >> > on a consistently global view concerning the application of these >> > technologies. Those involved in this overall project were obviously >> > going to bring this optimism and global perspective to whatever >> > related projects that they were involved with. IFIP people were >> > involved with DNS and the work of IFIP was the closest related to the >> > same issues that DNS addressed. >> > >******************** >Craig Partridge >Chief Scientist, BBN Technologies >E-mail: craig at aland.bbn.com or craig at bbn.com >Phone: +1 517 324 3425 From mfidelman at meetinghouse.net Fri Feb 18 08:03:24 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 11:03:24 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5DDE65.1090207@bennett.com> <4D5E65B3.3040804@meetinghouse.net> Message-ID: <4D5E984C.30607@meetinghouse.net> John Day wrote: > At 7:27 -0500 2011/02/18, Miles Fidelman wrote: >> Which is distinctly different than bottom-up in the IETF sense. The >> distinction isn't bottom-up vs. top-down, it's more one of >> semi-collaborative, get it right ("rough consensus and running code" >> so to speak) vs. vendors battling out whose existing products will >> have to be modified when the standard gets finalized. > > That depends on how close they are when they start! ;-) It has > nothing to do with the nature of the organizations. The IETF has been > fortunate in that many of their projects have been with people more or > less on the same page. 
Although in recent years that has changed > considerably and I think you would find the politics within the IETF > these days to come close to the level in the OSI. (We are talking OSI > here and not the wider environment of ISO.) > All standards are bottom up. If participants don't choose to work on > it, then it doesn't happen. That's true whether something is top-down or bottom-up. From an engineering point of view, it's a question of: let's write the standard, see if people can implement it, then see if it works, and then we'll fix it; vs., let's let things evolve, and then codify things once they're working. >> >> There are real lessons to be learned here. I see a lot of the same >> dynamics in today's geospatial standards work through the OGC - lots >> of theoretical wrangling, resulting in standards that sort of work, >> but have to be fixed in later revisions, and that largely get ignored >> by most of the world (take a look at how many people use ESRI's >> proprietary stuff, vs. Google's APIs, vs. OGC standard WMS and WFS; >> or maybe look at the rapid adoption of RESTful interfaces vs. W3C web >> service standards). > This is true of all standards organizations that have been around a > long time. Look at all the RFCs that are not in current use. > > There are many lessons to be learned here. The social dynamics of > these processes are more than a little fascinating. Anybody have any good anecdotes about RSS and Atom? That seems like a particularly good example of a recent standard that started at the grass roots, went through lots of politics, and ended up as an IETF standard that's only partially adopted. -- In theory, there is no difference between theory and practice. In practice, there is. .... 
Yogi Berra From vint at google.com Fri Feb 18 08:06:04 2011 From: vint at google.com (Vint Cerf) Date: Fri, 18 Feb 2011 11:06:04 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> Message-ID: holy cow - i have no idea after all this time. I am not even sure I have copies... v On Fri, Feb 18, 2011 at 10:54 AM, John Day wrote: > You are undoubtedly correct. After all, you are the lead author of it. > > BTW, could you tell me the difference between INWG 96 and INWG 96.1? > > At 10:15 -0500 2011/02/18, Vint Cerf wrote: > > john, > > > I thought INWG 96 was a compromise that was not identical to, though it > drew heavily upon, the Cyclades TS protocol? > > > v > > > > On Fri, Feb 18, 2011 at 9:25 AM, John Day wrote: > > At 8:35 -0500 2011/02/18, Miles Fidelman wrote: > > John Day wrote: > > At 7:14 -0500 2011/02/18, Miles Fidelman wrote: > > You've said that before. Can you elaborate with some examples of where ISO > has simply codified existing practice? > > Screw threads, highway signs, paper size, HDLC, Transport Layer, Session > Layer, Network Layer > > I was all set to buy "screw threads" - until I read the Wikipedia article > on > http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization > > Re. Transport, Session, Network layer: how can you say that with a straight > face, after all the recent discussion here? (I don't see an ISO number > stamped on TCP/IP.) > > > I figured you would take the bait. ;-) > > TP4 was INWG 96 which was CYCLADES TS which had been operational since > 1972. > > Network: X.25 was an ISO standard that had been in use since 1976. > > Session: Was lifted (for better or worse, mostly worse) from SGVIII > Videotex standards that were built and operating in France. > > No there is no ISO number stamped on TCP. 
That decision was worked out in > an open process in IFIP WG6.1 prior to start of OSI, which chose a modified > CYCLADES TS. > > As long as we are on the topic, all of the IEEE 802 standards are also ISO > standards. Ethernet was in use for close to 10 years before it was an ISO > standard. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From craig at aland.bbn.com Fri Feb 18 08:08:25 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 18 Feb 2011 11:08:25 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110218160825.80CE628E137@aland.bbn.com> Hi John: Sorry if I was not clear. The issue was top down vs. bottom up. My sense is that if you read Zim's paper and think about what was around at the time, it is clear that he's creating a top down organization into which to sort proposals -- many of which he's not sure exist or what they will be. That's top down mixed with some bottom up. Thanks! Craig > Not exactly an apples-to-apples comparison. When Telnet was done, no > one was quite sure what the architecture was. By 1978, there was a > pretty good idea of what needed to be done and how to coordinate it. > > The same could be said of the Internet model done a few years after > Zimmermann's paper. > > Are you suggesting that standards should not build on current knowledge? > > OSI was trying to assemble a set of standards. It needed a > coordinating framework. > > > At 9:45 -0500 2011/02/18, Craig Partridge wrote: > > > In what sense was OSI top-down? The OSI process was every bit as much a > >> bottoms-up, participant-driven process as IEEE 802 is today. If there > >> ever was a top-down standards process in the networking world directed > >> by two or three lords of the purse, it certainly wasn't OSI. > > > >If you read the original OS layering paper by Hubert Zimmerman it is > >clearly a top-down management work plan. Useful to compare it with > >the ARPANET layering paper of a few years later. 
The difference is Zim's > >"here's how we'll break up the problem of developing standards" vs. > >"here's why creating TELNET led us to a layered architecture". > > > >Thanks! > > > >Craig > > > >> > >> On 2/17/2011 4:16 PM, Eric Gade wrote: > >> > > >> > This also may just be a matter of dissonant worldviews. Where in OSI > >> > you see a series of discrete, technically explicit standards, I see an > >> > (overly?) ambitious, top-down standards project for computer > >> > networking that was unprecedented by international standards work at > >> > the time. It reflects a profoundly optimistic perspective that relies > >> > on a consistently global view concerning the application of these > >> > technologies. Those involved in this overall project were obviously > >> > going to bring this optimism and global perspective to whatever > >> > related projects that they were involved with. IFIP people were > >> > involved with DNS and the work of IFIP was the closest related to the > >> > same issues that DNS addressed. > >> > > >******************** > >Craig Partridge > >Chief Scientist, BBN Technologies > >E-mail: craig at aland.bbn.com or craig at bbn.com > >Phone: +1 517 324 3425 ******************** Craig Partridge Chief Scientist, BBN Technologies E-mail: craig at aland.bbn.com or craig at bbn.com Phone: +1 517 324 3425 From mfidelman at meetinghouse.net Fri Feb 18 08:17:21 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 11:17:21 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> Message-ID: <4D5E9B91.8090006@meetinghouse.net> John Day wrote: > At 8:35 -0500 2011/02/18, Miles Fidelman wrote: >> John Day wrote: >>> At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >>>> You've said that before. 
Can you elaborate with some examples of >>>> where ISO has simply codified existing practice? >> Screw threads, highway signs, paper size, HDLC, Transport Layer, >> Session Layer, Network Layer >> >> I was all set to buy "screw threads" - until I read the Wikipedia >> article on >> http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization >> >> Re. Transport, Session, Network layer: how can you say that with a >> straight face, after all the recent discussion here? (I don't see an >> ISO number stamped on TCP/IP.) > > I figured you would take the bait. ;-) > > TP4 was INWG 96 which was CYCLADES TS which had been operational since > 1972. > > Network: X.25 was an ISO standard that had been in use since 1976. > > Session: Was lifted (for better or worse, mostly worse) from SGVIII > Videotex standards that were built and operating in France. Ahhh.... the pick one from column A, pick one from column B, and see if they fit together approach. Which also neglects TP0-3, and as I recall ISO-IP (excuse me, CLNP) was crammed in as an afterthought. > > No there is no ISO number stamped on TCP. That decision was worked > out in an open process in IFIP WG6.1 prior to start of OSI, which > chose a modified CYCLADES TS. Again, a political, top-down process - rather than one based on moving something from experimental->recommended->mandatory status. > > As long as we are on the topic, all of the IEEE 802 standards are also > ISO standards. Ethernet was in use for close to 10 years before it > was an ISO standard. Because IEEE is the protocol standards agent for ANSI which is the US representative to ISO (if I have the terminology correct). IEEE 802 is a pretty good example of starting with competing products, and then creating a standard that forces every vendor to modify their stuff just ever so slightly. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... 
Yogi Berra From vint at google.com Fri Feb 18 08:32:28 2011 From: vint at google.com (Vint Cerf) Date: Fri, 18 Feb 2011 11:32:28 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <20110218160241.6209E28E137@aland.bbn.com> References: <20110218160241.6209E28E137@aland.bbn.com> Message-ID: i think there was a much earlier layering paper, lead author postel, 1972? Spring Joint or Fall Joint Computer Conference. v On Fri, Feb 18, 2011 at 11:02 AM, Craig Partridge wrote: > > > > > > On 2/18/2011 6:45 AM, Craig Partridge wrote: > > > If you read the original OS layering paper by Hubert Zimmerman it is > > > clearly a top-down management work plan. Useful to compare it with > > > the ARPANET layering paper of a few years later. The difference is > Zim's > > > "here's how we'll break up the problem of developing standards" vs. > > > "here's why creating TELNET led us to a layered architecture". > > > > > > As I recall, documentation of the Arpanet approach to layering occurred > as a > > response to the OSI papers. Prior to that it was de facto but not > documented > > That was the received wisdom I got in 1983. But when I went digging in > 1987 or so, I discovered it wasn't true. > > The ARPANET layering paper was published by Davidson et al at the 5th > IEEE Data Comm Conference in 1977. Zim's paper appeared in 1980. > > Thanks! > > Craig > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jeanjour at comcast.net Fri Feb 18 09:39:06 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 12:39:06 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5E9B91.8090006@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> Message-ID: >John Day wrote: >>At 8:35 -0500 2011/02/18, Miles Fidelman wrote: >>>John Day wrote: >>>>At 7:14 -0500 2011/02/18, Miles Fidelman wrote: >>>>>You've said that before. Can you elaborate with some examples >>>>>of where ISO has simply codified existing practice? >>>Screw threads, highway signs, paper size, HDLC, Transport Layer, >>>Session Layer, Network Layer >>> >>>I was all set to buy "screw threads" - until I read the Wikipedia article on >>>http://en.wikipedia.org/wiki/Screw_thread#History_of_standardization >>> >>>Re. Transport, Session, Network layer: how can you say that with a >>>straight face, after all the recent discussion here? (I don't see >>>an ISO number stamped on TCP/IP.) >> >>I figured you would take the bait. ;-) >> >>TP4 was INWG 96 which was CYCLADES TS which had been operational since 1972. >> >>Network: X.25 was an ISO standard that had been in use since 1976. >> >>Session: Was lifted (for better or worse, mostly worse) from >>SGVIII Videotex standards that were built and operating in France. > >Ahhh.... the pick one from column A, pick one from column B, and see >if they fit together approach. Which also neglects TP0-3, and as I >recall ISO-IP (excuse me, CLNP) was crammed in as an afterthought. You are kidding! CLNP was part of the plan all the time from day one. It was the fight over CO/CL that put off starting. Luckily there isn't much to it. It also required getting the IONL in place so that the place of internetworking vs X.25 as SNAC was clear. 
The only reason the US was participating at all was to have a connectionless network layer. Good grief, what have you been smoking? I thought say that. First of all, no one really cared about anything but TP4; the others were just the PTTs' attempt to discourage the use of a Transport Layer. But if you must know: TP0 came from SGVIII. TP1 came from SGVII. TP2 was, I believe, from the UK colored books. TP3 was some weirdness from the Germans. >> >>No there is no ISO number stamped on TCP. That decision was worked >>out in an open process in IFIP WG6.1 prior to start of OSI, which >>chose a modified CYCLADES TS. >Again, a political, top-down process - rather than one based on >moving something from experimental->recommended->mandatory status. How do you figure? IFIP 6.1 was hardly a top down process. And hardly political. It was primarily the research networking people. Do you make this stuff up? What part of operational since 1972 did you not understand? CYCLADES was an experimental network doing network research. There was a lot of experimentation with it, and it was recommended. No standards are mandatory. That is why ISO is a *voluntary* standards organization and why ITU issues Recommendations not standards. There was a perceived need for an international Transport protocol. Vint chaired it, right Vint? There were a few proposals, among them TCP, which was relatively new at the time. None of them were very old, since the concept of a transport layer was pretty new at the time. I think you need to get your facts straight. >> >>As long as we are on the topic, all of the IEEE 802 standards are >>also ISO standards. Ethernet was in use for close to 10 years >>before it was an ISO standard. >Because IEEE is the protocol standards agent for ANSI which is the US representative to ISO (if I have the terminology correct). 
IEEE >802 is a pretty good example of starting with competing products, >and then creating a standard that forces every vendor to modify >their stuff just ever so slightly. Sometimes. Yes, you are correct. Although I have no idea why IEEE bothers. Ethernet is an ISO standard. What you describe is very much the case in IEEE today. It was less so at the beginning but even there one had competing products: Ethernet, token bus, token ring. It was what a lot of people wanted, but it was what the process produced. John From mfidelman at meetinghouse.net Fri Feb 18 09:55:09 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 12:55:09 -0500 Subject: [ih] X.500 document history (Re: NIC, InterNIC, and Modelling Administration) In-Reply-To: References: <20110218144256.2D71E28E137@aland.bbn.com> Message-ID: <4D5EB27D.8050002@meetinghouse.net> John Day wrote: > However, for the Transport and above work this was not the case at > all. There were very few people in SC16.21 with Internet experience. Seems to me that says a lot right there. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From arussell at jhu.edu Fri Feb 18 11:36:28 2011 From: arussell at jhu.edu (Andrew Russell) Date: Fri, 18 Feb 2011 14:36:28 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> Message-ID: A recent interview with Joe DeBlasi by Arthur Norberg is available for free at http://portal.acm.org/citation.cfm?id=1720595. DeBlasi seemed eager to talk about standards, and provides some insight on pages 6-9 or so. For example, "when you represented IBM, there were two things that really came into play. The first was to protect IBM's business interest; to make sure things were not done in a way that would adversely impact the investment we had in our major technologies. 
The second was to manage that process so there was a good feeling on both sides and we were not being obstructionists. We made major contributions to all the standards efforts. I believe these efforts were positive for us, our customers and the industry." Andy On Feb 18, 2011, at 9:15 AM, John Day wrote: > I couldn't agree more! Someone really needs to talk to DeBlasi. > > If I ever met him it was only once or twice. I didn't work at those esoteric levels you did! ;-) But I was constantly coming up against his handiwork in the strategy the IBM delegates took. It sure seemed that Joe was a master of electro-political engineering! > > We were seldom in agreement but he was very good at what he did. ;-) > > At 8:53 -0500 2011/02/18, John Klensin wrote: >> On 2/17/11, John Day wrote: >>> John, >>> >>> All you say here about what happened in the 80s is true. The >>> formation of JTC1 etc. But that was quite late to the game. >> >> Absolutely. JTC1 didn't come together until nearly the end of the >> decade although the work started much earlier. If I recall, it was >> the brainchild of Joe DeBlasi (IBM's corporate head of standards, >> later ACM Exec Dir). I chaired ACM's (late and mostly unlamented) >> Standards Committee from about 1986 and so got to watch that part of >> the process from ANSI/ISSB among other places. But I was less >> concerned about the specific standardization events -- many of which >> were fairly peripheral to the OSI developments and Internet/OSI >> interactions -- than the degree to which they indicated that the >> environment was fermenting, making some adventures by standards >> development bodies possible that would not have been possible before >> and might not even be possible some years later (with the emphasis on >> "might" -- some of what is now going on in ITU-T may not be that much >> different). 
>> >>> The idea of standardizing to a point in the future was set prior to >>> the first meeting of SC16 in March 1978, and the Joint Development >>> with CCITT by 1979/80 was quite early. (The biggest mistake in the >>> whole effort). At that time, the idea was that things were changing >>> so fast that one had to shoot for a point in the future. >> >> Carl Cargill has made the claim on several occasions that he invented >> anticipatory standardization. I've had no reason to disbelieve him >> even though we disagreed (at least at the time and for some years >> thereafter) as to whether it was a great idea or a disaster waiting to >> happen. If this is important, someone might check with him on both >> dates and how things unfolded at levels considerably above any one >> CCITT / ITU-T SC or ISO WG or EG. >> >>> The world views between the computer companies and the European PTTs >>> were so different and the PTTs saw so much at stake, there was no way >>> anything good could have come from it. >> >> Yes. But I think it was actually an almost-separate problem at the standards >> policy level, even though I've assumed it played out most dramatically >> at the SG / WG one. >> >>> It might have been better had the cooperation with CCITT not >>> happened. But with no deregulation even considered in 1979, the >>> European computer manufacturers didn't have much choice. >> >> Part of what also drove those collaborations (both TC97-CCITT and the >> later formation of JTC1) was a realization by both companies and >> governments/ PTTs that they were spending a lot of resources sending >> people (often the same people) to parallel meetings, often to advocate >> particular results in one and to provide a defensive/blocking force in >> the other. Joint development agreements and consolidation were >> supposed to fix that. With a quarter-century of hindsight, it didn't >> work very well and still doesn't. 
>> >>> To some degree this may well have been a strategy to get out ahead of >>> IBM and the PTTs. Given their dominance in the markets, had they not >>> attempted something like that and gone with standardizing current >>> practice it would have been SNA over X.25, instead of TP4 over CLNP. >> >> Yes. But also more complicated. If this is important, someone should >> try to find Joe and read him out -- that perspective would be, IMO, >> very useful. >> >> john > From mfidelman at meetinghouse.net Fri Feb 18 12:09:39 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 15:09:39 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> Message-ID: <4D5ED203.2030008@meetinghouse.net> John Day wrote: > > You are kidding! CLNP was part of the plan all the time from day > one. It was the fight over CO/CL that put off starting. Luckily > there isn't much to it. It also required getting the IONL in place so > that the place of internetworking vs X.25 as SNAC was clear. > > The only reason the US was participating at all was to have a > connectionless network layer. Good grief, what have you been smoking? As I recall it, from the grousing of our (BBN) folks who were off to the meetings: 1. Mike Corrigan at GSA was pushing very, very hard for the US Government, including DoD, to go all OSI (I never quite understood why). 2. Nobody thought it would work, but those of us in the DoD world had to live with the "dual stack" model (which never really deployed as far as I can tell). 3. Folks were off to meet about CLNP as a defensive strategy - just in case the &*&!s (choose your pejorative) really pushed OSI through. > I thought I said that. 
First of all no one really cared about anything > but TP4; the others were just the PTTs' attempt to discourage the use of > a Transport Layer. But if you must know: Does not compute..... Who needs TP4 over a connectionless network layer? And if only the US folks cared about CLNP.... Am I missing something, or wasn't the idea TP0 (or null) over X.25 vs. TP4 over CLNP? >>> >>> No there is no ISO number stamped on TCP. That decision was worked >>> out in an open process in IFIP WG6.1 prior to the start of OSI, which >>> chose a modified CYCLADES TS. >> Again, a political, top-down process - rather than one based on >> moving something from experimental->recommended->mandatory status. > > How do you figure? IFIP 6.1 was hardly a top down process. And > hardly political. It was primarily the research networking people. Do > you make this stuff up? > > What part of operational since 1972 did you not understand? > > CYCLADES was an experimental network doing network research. There > was a lot of experimentation with it, it was recommended. No > standards are mandatory. That is why ISO is a *voluntary* standards > organization and why ITU issues Recommendations not standards. There's operational and there's operational. ARPANET was carrying military traffic, and being split to form the Defense Data Network, while CYCLADES was being killed by the PTTs. >>> >>> As long as we are on the topic, all of the IEEE 802 standards are >>> also ISO standards. Ethernet was in use for close to 10 years >>> before it was an ISO standard. >> Because IEEE is the protocol standards agent for ANSI which is the US >> representative to ISO (if I have the terminology correct). IEEE 802 >> is a pretty good example of starting with competing products, and >> then creating a standard that forces every vendor to modify their >> stuff just ever so slightly. > > Sometimes. Yes, you are correct. Although I have no idea why IEEE > bothers. Ethernet is an ISO standard. 
What you describe is very much > the case in IEEE today. It was less so at the beginning but even > there one had competing products: Ethernet, token bus, token ring. > It was what a lot of people wanted but it was what the process produced. I'm not sure why IEEE bothers either, but they seem to be doing something right with the 802 line of standards. -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From dot at dotat.at Fri Feb 18 12:22:35 2011 From: dot at dotat.at (Tony Finch) Date: Fri, 18 Feb 2011 20:22:35 +0000 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> Message-ID: On Thu, 17 Feb 2011, John Klensin wrote: > > > But I don't think it's fair to argue that suddenly including UK opens > > up the entire ISO list, especially since they don't even follow the > > standard. > > Well, actually they did, if one counts the time-honored practice of > anticipating a standard a bit and then getting it wrong. I haven't > gone back and sorted out the chronology, but 3166 itself wasn't very > old when the DNS started using it. First version was 1974, apparently. > And, apparently (according to what I was told in the mid-80s and again > in the late 90s -- the latter by someone who had been the BSI > representative to ISO TC 46 and the 3166 Maintenance Agency at the time) > the 3166 code was originally "UK" but either BSI or Her Majesty's > Government changed their minds just before the standard was adopted. Their FAQ argues that "united" and "kingdom" are avoided because they are not very distinguishing terms - though that didn't stop them allocating the US code. There is also the precedent of the international car label, which dates back to the 1940s. > There was also a story for a while that UK was used for the DNS in order > to avoid confusion with inevitable OSI naming, but I don't know whether > that was accurate or apocryphal. 
I believe UK was chosen for the top level of the JANET NRS in 1982-1983, and it got grandfathered into the DNS. Tony. -- f.anthony.n.finch http://dotat.at/ Humber, Thames: Southeast 4 or 5, occasionally 6, increasing 7 in Humber later. Moderate. Rain later. Moderate or good, occasionally poor. From jeanjour at comcast.net Fri Feb 18 12:55:59 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 15:55:59 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5ED203.2030008@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> Message-ID: At 15:09 -0500 2011/02/18, Miles Fidelman wrote: >John Day wrote: >> >>You are kidding! CLNP was part of the plan all the time from day >>one. It was the fight over CO/CL that put off starting. Luckily >>there isn't much to it. It also required getting the IONL in place >>so that the place of internetworking vs X.25 as SNAC was clear. >> >>The only reason the US was participating at all was to have a >>connectionless network layer. Good grief, what have you been >>smoking? > >As I recall it, from the grousing of our (BBN) folks who were off to >the meetings: > >1. Mike Corrigan at GSA was pushing very, very hard for the US >Government, including DoD, to go all OSI (I never quite understood >why). > >2. Nobody thought it would work, but those of us in the DoD world >had to live with the "dual stack" model (which never really deployed >as far as I can tell). > >3. Folks were off to meet about CLNP as a defensive strategy - just >in case the &*&!s (choose your pejorative) really pushed OSI through. As far as working goes, in 1992 cisco's largest customer by far was a deployed CLNP network. > >>I thought I said that. 
First of all no one really cared about >>anything but TP4; the others were just the PTTs' attempt to discourage >>the use of a Transport Layer. But if you must know: >Does not compute..... Who needs TP4 over a connectionless network >layer? And if only the US folks cared about CLNP.... Am I missing >something, or wasn't the idea TP0 (or null) over X.25 vs. TP4 over >CLNP? In the US, the only thing anyone cared about was TP4 over CLNP. TP0 over X.25 was known to not be reliable. There were European PTTs who said that was what they wanted, but no one else did. As I have said, the Classes of transport were the PTTs' response to not being able to stop it. > >>>> >>>>No there is no ISO number stamped on TCP. That decision was >>>>worked out in an open process in IFIP WG6.1 prior to start of >>>>OSI, which chose a modified CYCLADES TS. >>>Again, a political, top-down process - rather than one based on >>>moving something from experimental->recommended->mandatory status. >> >>How do you figure? IFIP 6.1 was hardly a top down process. And >>hardly political. It was primarily the research networking people. >>Do you make this stuff up? >> >>What part of operational since 1972 did you not understand? >> >>CYCLADES was an experimental network doing network research. There >>was a lot of experimentation with it, it was recommended. No >>standards are mandatory. That is why ISO is a *voluntary* >>standards organization and why ITU issues Recommendations not >>standards. > >There's operational and there's operational. ARPANET was carrying >military traffic, and being split to form the Defense Data Network, >while CYCLADES was being killed by the PTTs. I don't know what this means. Yes, CYCLADES was an embarrassment to the French PTT and they were eventually able to shut it down. But it was a real network with some very good people working on it. It is unfortunate that it was shut down because they were doing good work. There was very little network research going on in the US. 
>>>> >>>>As long as we are on the topic, all of the IEEE 802 standards are >>>>also ISO standards. Ethernet was in use for close to 10 years >>>>before it was an ISO standard. >>>Because IEEE is the protocol standards agent for ANSI which is the >>>US representative to ISO (if I have the terminology correct). >>>IEEE 802 is a pretty good example of starting with competing >>>products, and then creating a standard that forces every vendor to >>>modify their stuff just ever so slightly. >> >>Sometimes. Yes, you are correct. Although I have no idea why IEEE >>bothers. Ethernet is an ISO standard. What you describe is very >>much the case in IEEE today. It was less so at the beginning but >>even there one had competing products: Ethernet, token bus, token >>ring. It was what a lot of people wanted but it was what the process >>produced. > >I'm not sure why IEEE bothers either, but they seem to be doing >something right with the 802 line of standards. Over a decade ago, I told them not to bother. IEEE has international recognition. There is no point to it. From matthias at baerwolff.de Fri Feb 18 13:33:42 2011 From: matthias at baerwolff.de (Matthias =?ISO-8859-1?Q?B=E4rwolff?=) Date: Fri, 18 Feb 2011 22:33:42 +0100 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110218160241.6209E28E137@aland.bbn.com> Message-ID: <1298064823.9729.0.camel@a22m.mbaer-home> According to my research ca. one year ago, there are a host of prior papers essentially anticipating in full the point made in the 1977 Davidson et al paper. 
- RFC 137 (April 1971) sports an explicit reference to three protocol levels (and so do various RFCs and other NWG documents from the time) - McKenzie (Host/Host Protocol for the ARPA Network, NIC 8246, January 1972) offers a good deal of elaboration on the higher layer stuff - McKenzie (Host/Host Protocol Design Considerations, INWG General Note 16, 1973) discusses some of the trade-offs in layering elegance versus the cost of passing (that is, copying) messages up and down - Walden (Host-to-Host Protocols, 1975; reprinted in a 1978 McQuillan/Cerf tutorial published by IEEE) comes up with the famous Figure with all the Arpanet protocols stacked on top of each other (and some bypassing others) -- that Figure also features in the 1977 Davidson et al paper However, the difference between protocols and interfaces was only beginning to be appreciated then, which is readily apparent e.g. from the Crocker et al 1972 paper (Function-Oriented Protocols for the ARPA Computer Network, AFIPS 1972 Spring) (the one I take it Vint was referring to below) which takes IMP-IMP protocol to be below the IMP-Host protocol which in turn is below the Host-Host protocol. Of course, IMP-IMP and IMP-Host are at the /same/ layer, upon which there is sender-IMP-to-receiver-IMP (with the error correction and flow control at that level), and then there is Host-Host, and then all the app level stuff. This take on layering has been well put in Dave Clark's 1974 PhD thesis, but can also be inferred fairly straightforwardly from the BBN reports at the time, particularly the evolution of the ever more distant host stuff that was added virtually right away in the very early 1970s. Matthias P.S. The above list is by absolutely no means to be taken as exhaustive, especially in the INWG line of documents there are bound to be useful elaborations of the layering notion, e.g. by Pouzin who in 1973 was very adamant about clean and inviolable separation of layers already. 
On Friday, 18 Feb 2011, at 11:32 -0500, Vint Cerf wrote: > i think there was a much earlier layering paper, lead author postel, > 1972? Spring Joint or Fall Joint Computer Conference. > > > v > > > On Fri, Feb 18, 2011 at 11:02 AM, Craig Partridge > wrote: > > > > > > On 2/18/2011 6:45 AM, Craig Partridge wrote: > > > If you read the original OSI layering paper by Hubert > Zimmermann it is > > > clearly a top-down management work plan. Useful to > compare it with > > > the ARPANET layering paper of a few years later. The > difference is Zim's > > > "here's how we'll break up the problem of developing > standards" vs. > > > "here's why creating TELNET led us to a layered > architecture". > > > > > > As I recall, documentation of the Arpanet approach to > layering occurred as a > > response to the OSI papers. Prior to that it was de facto > but not documented > > > That was the received wisdom I got in 1983. But when I went > digging in > 1987 or so, I discovered it wasn't true. > > The ARPANET layering paper was published by Davidson et al at > the 5th > IEEE Data Comm Conference in 1977. Zim's paper appeared in > 1980. > > Thanks! > > Craig > > From tony.li at tony.li Fri Feb 18 13:39:44 2011 From: tony.li at tony.li (Tony Li) Date: Fri, 18 Feb 2011 13:39:44 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> Message-ID: <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> >> 3. Folks were off to meet about CLNP as a defensive strategy - just in case the &*&!s (choose your pejorative) really pushed OSI through. > > As far as working goes, in 1992 cisco's largest customer by far was a deployed CLNP network. With all due respect, I beg to differ. 
It's true that cisco did have one real, deployed, sizable pure CLNP network, but they were not even close to being the largest customer. Tony From richard at bennett.com Fri Feb 18 14:04:23 2011 From: richard at bennett.com (Richard Bennett) Date: Fri, 18 Feb 2011 14:04:23 -0800 Subject: [ih] What's special about the Internet? In-Reply-To: <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> Message-ID: <4D5EECE7.6080107@bennett.com> It occurs to me that the thing that makes the Internet (and ARPANET, CYCLADES, and OSI) special as a historical artifact is that the "terminals" for Internet use existed before the network itself. In the case of all previous networks (such as telephone, telegraph, radio/TV and Telex) the terminal was created as part of the network design, but the Internet had to work with what was already there. And of course, not just with existing terminals but with existing networks. And not just existing computers, but existing computer programs. It seems to me that's more important than claims about bottom-up standards or any such dubious claims and it's so basic that people tend to overlook it. -- Richard Bennett From wmaton at ottix.net Fri Feb 18 14:26:23 2011 From: wmaton at ottix.net (William F. Maton) Date: Fri, 18 Feb 2011 17:26:23 -0500 (EST) Subject: [ih] What's special about the Internet? 
In-Reply-To: <4D5EECE7.6080107@bennett.com> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> <4D5EECE7.6080107@bennett.com> Message-ID: On Fri, 18 Feb 2011, Richard Bennett wrote: > It occurs to me that the thing that makes the Internet (and ARPANET, > CYCLADES, and OSI) special as a historical artifact is that the "terminals" > for Internet use existed before the network itself. In the case of all True in a sense. Another way to look at the telephone system analogy is to treat the human as the terminal that wanted to communicate afar, so the equipment needed (including the technological end terminal) enabled the 'human terminal' to inter-communicate. But you're right, the computer terminals came first, then the realisation to transfer data between them (hence the network was born) came next. Is a discussion on terminals within 'ih' list's scope? wfms From jeanjour at comcast.net Fri Feb 18 14:23:47 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 17:23:47 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <1298064823.9729.0.camel@a22m.mbaer-home> References: <20110218160241.6209E28E137@aland.bbn.com> <1298064823.9729.0.camel@a22m.mbaer-home> Message-ID: >Of course, IMP-IMP and IMP-Host are at the /same/ layer, upon which >there is sender-IMP-to-receiver-IMP (with the error correction and flow >control at that level), and then there is Host-Host, and then all the The proper terminology here is that they were different layers of the same rank. I don't think there was much confusion if any about interfaces and protocols in the ARPANET or the research networks in general. Interfaces were within a system, i.e. APIs and protocols were between systems. 
This was all "in the air" from Dijkstra's THE and the common practice even then of building software as black boxes. This use of interface as between software modules within a system was quite common in computing. Although not in telecom. But then we weren't doing telecom. There was confusion once we encountered the CCITT who considered interfaces as being between boxes. Especially between boxes owned by different entities. But that was the old beads-on-a-string model that was being replaced. To them interface and protocol were the same thing. >app level stuff. This take on layering has been well put in Dave Clark's >1974 PhD thesis, but can also be inferred fairly straightforwardly from >the BBN reports at the time, particularly the evolution of the ever more >distant host stuff that was added virtually right away in the very early >1970s. > >Matthias > >P.S. The above list is by absolutely no means to be taken as exhaustive, >especially in the INWG line of documents there are bound to be useful >elaborations of the layering notion, e.g. by Pouzin who in 1973 was very >adamant about clean and inviolable separation of layers already. > >On Friday, 18 Feb 2011, at 11:32 -0500, Vint Cerf wrote: >> i think there was a much earlier layering paper, lead author postel, >> 1972? Spring Joint or Fall Joint Computer Conference. >> >> >> v >> >> >> On Fri, Feb 18, 2011 at 11:02 AM, Craig Partridge >> wrote: >> > >> > >> > On 2/18/2011 6:45 AM, Craig Partridge wrote: >> > > If you read the original OSI layering paper by Hubert >> Zimmermann it is >> > > clearly a top-down management work plan. Useful to >> compare it with >> > > the ARPANET layering paper of a few years later. The >> difference is Zim's >> > > "here's how we'll break up the problem of developing >> standards" vs. >> > > "here's why creating TELNET led us to a layered >> architecture". >> > >> > >> > As I recall, documentation of the Arpanet approach to >> layering occurred as a >> > response to the OSI papers. 
Prior to that it was de facto >> but not documented >> >> >> That was the received wisdom I got in 1983. But when I went >> digging in >> 1987 or so, I discovered it wasn't true. >> >> The ARPANET layering paper was published by Davidson et al at >> the 5th >> IEEE Data Comm Conference in 1977. Zim's paper appeared in >> 1980. >> >> Thanks! >> >> Craig >> >> From mfidelman at meetinghouse.net Fri Feb 18 14:36:23 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 17:36:23 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> Message-ID: <4D5EF467.8030906@meetinghouse.net> John, Now I have to ask you what YOU've been smoking. John Day wrote: > As far as working goes, in 1992 cisco's largest customer by far was a > deployed CLNP network. Who, if anybody, was using CLNP for anything in 1992? By then the Internet had gone commercial, with about 20,000 or so nets and about a million hosts linked by IP. > I don't know what this means. Yes, CYCLADES was an embarrassment to > the French PTT and they were eventually able to shut it down. But it > was a real network and some very good people working on it. It is > unfortunate that it was shut down because they were doing good work. > > There was very little network research going on in the US. I don't believe CYCLADES ever grew beyond 20 hosts. As to network research in the US, BBN was DARPA's biggest contractor (still is, I think), and at least when I was there most of that money was going into .... network research. And then there was an awful lot of money going to a lot of universities, and a lot of corporate research going on. >>> Sometimes. Yes, you are correct. Although I have no idea why IEEE >>> bothers. Ethernet is an ISO standard. 
What you describe is very >>> much the case in IEEE today. It was less so at the beginning but >>> even there one had competing products: Ethernet, token bus, token >>> ring. It was what a lot of people wanted but it was what the process >>> produced. >> >> I'm not sure why IEEE bothers either, but they seem to be doing >> something right with the 802 line of standards. > > Over a decade ago, I told them not to bother. IEEE has international > recognition. There is no point to it. Huh? IEEE has been pretty effective as a standards body in a number of areas - 802, laboratory interconnection, Firewire, POSIX, as well as some of its more traditional electrical machinery, power, telegraph, and radio. As a standards body, its activities date back to the 1880s (AIEE which later merged with IRE to become IEEE). -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From richard at bennett.com Fri Feb 18 14:36:14 2011 From: richard at bennett.com (Richard Bennett) Date: Fri, 18 Feb 2011 14:36:14 -0800 Subject: [ih] What's special about the Internet? In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> <4D5EECE7.6080107@bennett.com> Message-ID: <4D5EF45E.1040908@bennett.com> I didn't bring this up to discuss terminals, of course. On 2/18/2011 2:26 PM, William F. Maton wrote: > On Fri, 18 Feb 2011, Richard Bennett wrote: > >> It occurs to me that the thing that makes the Internet (and ARPANET, >> CYCLADES, and OSI) special as a historical artifact is that the >> "terminals" for Internet use existed before the network itself. In >> the case of all 
Another way to look at the telephone system analogy > is to treat the human as the terminal that wanted to communicate > a-far, so the equipment needed (including the technological end > terminal) enabled the 'human terminal' inter-communicate. > > But you're right, the computer terminals came first, then the > realisation to transfer data between (hence the network was born) came > next. > > Is a discussion on terminals within 'ih' list's scope? > > > wfms -- Richard Bennett From jmamodio at gmail.com Fri Feb 18 14:51:26 2011 From: jmamodio at gmail.com (Jorge Amodio) Date: Fri, 18 Feb 2011 16:51:26 -0600 Subject: [ih] What's special about the Internet? In-Reply-To: <4D5EECE7.6080107@bennett.com> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <8C1ADDF4-7FE9-4FA3-9039-48D4CCC0215B@tony.li> <4D5EECE7.6080107@bennett.com> Message-ID: IMHO one of the most special features of the Internet was that as an "artifact" it didn't have a company logo ... My .02 Jorge From dave.walden.family at gmail.com Fri Feb 18 14:57:54 2011 From: dave.walden.family at gmail.com (Dave Walden) Date: Fri, 18 Feb 2011 14:57:54 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <1298064823.9729.0.camel@a22m.mbaer-home> References: <20110218160241.6209E28E137@aland.bbn.com> <1298064823.9729.0.camel@a22m.mbaer-home> Message-ID: <4d5ef98f.81a5e60a.0207.7c54@mx.google.com> Matthias, I'm not sure if my email address is accepted by the IH list. If not, will you please forward this message to the list. See my notes below your notes below. Dave You said: According to my research ca. one year ago, there are a host of prior papers essentially anticipating in full the point made in the 1977 Davidson et al paper. 
> - Walden (Host-to-Host Protocols, 1975; reprinted in a 1978 > McQuillan/Cerf tutorial published by IEEE) comes up with the famous > Figure with all the Arpanet protocols stacked on top of each other > (and some bypassing others) -- that Figure also features in the 1977 > Davidson et al paper The following is not intended to add substance to the current discussion of layering on the IH list -- just to give a tiny personal note. The full reference to Walden, 1975 is "Host-to-Host Protocols," International Computer State of the Art Report No. 24: Network Systems and Software, Infotech, Maidenhead, England, published 1975, pp. 287-316; reprinted in A Practical View of Computer Communication Protocols, J.M. McQuillan and V.G. Cerf, IEEE, 1978, pp. 172-204. Lyman Chapin told me at the time of his book with David Piscitello (Open Systems Networking) that my 1975 paper was the first (published?) instance he had found of that protocol layers figure. That sort of surprised me as at the time the idea of layering had been around (in my memory) since nearly the beginning, and my memory was that we used that figure rather widely. I do believe (although old memories can be faulty) that I in fact was the person who first drew that *particular version* of the layering figure to illustrate how it seemed to me that the layers went together and that there was some skipping around layers. I'm not sure if I drew it for the 1975 conference or earlier for some technical report to ARPA or other informal note, and surely it was based on common knowledge at the time. If the 1977 Davidson paper being mentioned is our six-author paper on TELNET, the history there is that I conceived of the paper and asked the other authors to participate, and Bob Thomas and I did the integration of the individual parts with the paper passed around for review as is normal for collaborative writing projects. The author order is of course alphabetical. Including the layers figure in there was surely my idea.
From tony.li at tony.li Fri Feb 18 15:18:09 2011 From: tony.li at tony.li (Tony Li) Date: Fri, 18 Feb 2011 15:18:09 -0800 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5EF467.8030906@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <4D5EF467.8030906@meetinghouse.net> Message-ID: <06126F96-4484-43BB-B7DF-9F1D41D394B2@tony.li> On Feb 18, 2011, at 2:36 PM, Miles Fidelman wrote: > John Day wrote: >> As far as working goes, in 1992 cisco's largest customer by far was a deployed CLNP network. > > Who, if anybody, was using CLNP for anything in 1992? There was a large, national government network that was purely CLNP at that time. This network drove the development of ISO-IGRP and IS-IS within IOS. I'm sorry I can't name names, but this can be corroborated with Dave Katz. In addition, there were a number of companies that were actively deploying CLNP in a pilot mode in order to understand it.
Most of this was to comply with governmental directives. AFAIK, no one else was doing it in a truly mission critical way. Tony From jeanjour at comcast.net Fri Feb 18 16:00:06 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 19:00:06 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4d5ef98f.81a5e60a.0207.7c54@mx.google.com> References: <20110218160241.6209E28E137@aland.bbn.com> <1298064823.9729.0.camel@a22m.mbaer-home> <4d5ef98f.81a5e60a.0207.7c54@mx.google.com> Message-ID: Yea, I agree with Dave. Layering was very well established by 75. There have to be papers perhaps not in conferences or journals, but it was common easily by 72, if not before. At 14:57 -0800 2011/02/18, Dave Walden wrote: >Matthias, >I'm not sure if my email address is accepted by the IH list. If >not, will you please forward this message to the list. See my notes >below your notes below. >Dave > >You said: > According to my research ca. one year ago, there are a host of prior > papers essentially anticipating in full the point made in the 1977 > Davidson et al paper. >> - Walden (Host-to-Host Protocols, 1975; reprinted in a 1978 >> McQuillan/Cerf tutorial published by IEEE) comes up with the famous >> Figure with all the Arpanet protocols stacked on top of each other >> (and some bypassing others) -- that Figure also features in the 1977 >> Davidson et al paper > >The following is not intended to add substance to the current >discussion of laying on the IH list -- just to give a tiny personal >note. The full reference to Walden, 1975 is "Host-to-Host >Protocols," International Computer State of the Art Report No. 24: >Network Systems and Software, Infotech, Maidenhead, England, >published 1975, pp. 287-316; reprinted in A Practical View of >Computer Communication Protocols, J.M. McQuillan and V.G. Cerf, >IEEE, 1978, pp. 172-204. 
> >Lyman Chapin told me at the time of his book with David Piscitello >(Open Systems Networking) that my 1975 paper was the first >(published?) instance he had found of that protocol layers figure. >That sort of surprised me as at the time the idea of layering had >been around (in my memory) since nearly the beginning, and my memory >was that we used that figure rather widely. I do believe (although >old memories can be faulty) that I in fact was the person who first >drew that *particular version* of the laying figure to illustrate >how it seemed to me that the layers went together and that there was >some skipping around layers. I'm not sure if I drew it for the 1975 >conference or earlier for some technical report to ARPA or other >informal note, and surely it was based on common knowledge at the >time. > >If the 1977 Davidson paper being mentioned is our six-author paper >on TELNET, the history there is that I conceived of the paper and >asked the other authors to participate, and Bob Thomas and I did the >integration of the individual parts with the paper passed around for >review as is normal for collaborative writing projects. The author >order is of course alphabetical. Including the layers figure in >there was surely my idea. From jeanjour at comcast.net Fri Feb 18 15:55:57 2011 From: jeanjour at comcast.net (John Day) Date: Fri, 18 Feb 2011 18:55:57 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: <4D5EF467.8030906@meetinghouse.net> References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <4D5EF467.8030906@meetinghouse.net> Message-ID: At 17:36 -0500 2011/02/18, Miles Fidelman wrote: >John, > >Now I have to ask you what YOU've been smoking. > >John Day wrote: >>As far as working goes, in 1992 cisco's largest customer by far was >>a deployed CLNP network. 
> >Who, if anybody, was using CLNP for anything in 1992? By then the >Internet had gone commercial, with about 20,000 or so nets and about >a million hosts linked by IP. See Tony's response. > >>I don't know what this means. Yes, CYCLADES was an embarrassment >>to the French PTT and they were eventually able to shut it down. >>But it was a real network and some very good people working on it. >>It is unfortunate that it was shut down because they were doing >>good work. >> >>There was very little network research going on in the US. > >I don't believe CYCLADES ever grew beyond 20 hosts. As to network >research in the US, BBN was DARPA's biggest contractor (still is, I >think), and at least when I was there most of that money was going >into .... network research. And then there was an awful lot of >money going to a lot of universities, and a lot of corporate >research going on. This is interesting and I hadn't thought about it until it was pointed out to me. But it was the case. BBN was DARPA's biggest contractor for building and operating the net. How many days a week could BBN take the net to run experiments on, say, routing or congestion control? Very quickly, the ARPANET was an operational network to support others' research. I remember people telling me that CYCLADES, on the other hand, was constantly being commandeered by the INRIA guys to run experiments and couldn't really be used the way we used the ARPANET. They were doing a lot of research on networks; we had to catch as catch can with other stuff, and with what BBN could accomplish when they were doing IMPsys loads on Monday nights.
It was less so at the >>>>beginning but even there one had competing products: Ethernet, >>>>token bus, token ring. It was what a lot of people wanted but it >>>>was the processs produced. >>> >>>I'm not sure why IEEE bothers either, but they seem to be doing >>>something right with the 802 line of standards. >> >>Over a decade ago, I told them not to bother. IEEE has >>international recognition. There is no point to it. > >Huh? IEEE has been pretty effective as a standards body in a number >of areas - 802, laboratory interconnection, Firewire, POSIX, as well >as some of its more traditional electrical machinery, power, >telegraph, and radio . As a standards body, its activities date >back to the 1880s (AIEE which later merged with IRE to become IEEE). I was agreeing with you that there was no point to IEEE sending their stuff to ISO. I never saw the point in it. >-- >In theory, there is no difference between theory and practice. >In practice, there is. .... Yogi Berra From mfidelman at meetinghouse.net Fri Feb 18 16:42:28 2011 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 18 Feb 2011 19:42:28 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration In-Reply-To: References: <20110217203757.636D928E137@aland.bbn.com> <4D5DCE64.3020805@bennett.com> <4D5E62B3.6010804@meetinghouse.net> <4D5E75B8.1070809@meetinghouse.net> <4D5E9B91.8090006@meetinghouse.net> <4D5ED203.2030008@meetinghouse.net> <4D5EF467.8030906@meetinghouse.net> Message-ID: <4D5F11F4.1080008@meetinghouse.net> John Day wrote: > At 17:36 -0500 2011/02/18, Miles Fidelman wrote: >> John Day wrote: >>> As far as working goes, in 1992 cisco's largest customer by far was >>> a deployed CLNP network. >> Who, if anybody, was using CLNP for anything in 1992? By then the >> Internet had gone commercial, with about 20,000 or so nets and about >> a million hosts linked by IP. > > See Tony's response. Which one? This one: > With all due respect, I beg to differ. 
It's true that cisco did have > one real, deployed, sizable pure CLNP network, but they were not even > close to being the largest customer. or this one: > There was a large, national government network that was purely CLNP at that time. This network drove the development of ISO-IGRP and IS-IS within IOS. I'm sorry I can't name names, but this can be corroborated with Dave Katz. > > In addition, there were a number of companies that were actively deploying CLNP in a pilot mode in order to understand it. Most of this was to comply with governmental directives. AFAIK, no one else was doing it in a truly mission critical way. > Ok... so there was one large CLNP network. > This is interesting and I hadn't thought about it until it was pointed > out to me. But it was the case. BBN was DARPA's biggest contractor > for building and operating the net. How many days a week could BBN > take the net to run experiments on say routing or congestion control > etc. Very quickly, the ARPANET was an operational network to support > others research. Actually, no. By 1992, most of the money was coming through DCA, and was going to BBN Communications - the commercial network group. Huge amounts of money continued to (and still do) flow to BBN Labs (now Raytheon) for more researchy things. Most of the time, I was in BBN Communications, focusing more on DDN deployment; I expect others could comment more on the researchy things that were going on. (Granted, some of those funds went to research in areas outside of networking.) > I remember people telling me that the CYCLADES on the other hand was > constantly being commandeered by the INRIA guys to run experiments and > couldn't really be used the way we used the ARPANET. They were doing > a lot of research on networks, we had to catch as catch can other > stuff and what BBN could accomplish when they were doing IMPsys loads > on Monday nights. 
And much of the early work on congestion control > and related performance issues did come from those researchers, > LeLann, Gelenbe, etc. Well, there were also the WIDEBAND net and SATNET, work on streaming protocols, and a lot of tasks came my way related to network management and network security. I also recall that our network analysis group kept loading funny software into the switches to do various kinds of performance analysis and tuning. There was also an awful lot of work going on around routing protocols. And then, outside of BBN, there was an awful lot of network research at places like MIT, USC, Berkeley, Xerox PARC - CYCLADES and INRIA were certainly not the only ones doing network research in the '80s. >>>>> Sometimes. Yes, you are correct. Although I have no idea why >>>>> IEEE bothers. Ethernet is an ISO standard. What you describe is >>>>> very much the case in IEEE today. It was less so at the beginning >>>>> but even there one had competing products: Ethernet, token bus, >>>>> token ring. It was what a lot of people wanted but it was the >>>>> processs produced. >>>> >>>> I'm not sure why IEEE bothers either, but they seem to be doing >>>> something right with the 802 line of standards. >>> >>> Over a decade ago, I told them not to bother. IEEE has international >>> recognition. There is no point to it. >> >> Huh? IEEE has been pretty effective as a standards body in a number >> of areas - 802, laboratory interconnection, Firewire, POSIX, as well >> as some of its more traditional electrical machinery, power, >> telegraph, and radio . As a standards body, its activities date back >> to the 1880s (AIEE which later merged with IRE to become IEEE). > > I was agreeing with you that there was no point to IEEE sending their > stuff to ISO. I never saw the point in it. Ahhh.... I think it had more to do with agreements between IEEE, ANSI, et. al., regarding who had authority to set national standards. 
There's some good history at http://www.ieeeghn.org/wiki/index.php/History_of_Institute_of_Electrical_and_Electronic_Engineers_(IEEE)_Standards Interestingly, IEEE standards activities date back to 1885 - which, I figured, had to make them the oldest standards body around, at least for technical things. But, a little research yielded the interesting factoid that the ITU dates back to 1865 - formed to support standardization and interconnection of telegraph networks. -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra From craig at aland.bbn.com Sat Feb 19 05:22:57 2011 From: craig at aland.bbn.com (Craig Partridge) Date: Sat, 19 Feb 2011 08:22:57 -0500 Subject: [ih] NIC, InterNIC, and Modelling Administration Message-ID: <20110219132257.EDEB228E137@aland.bbn.com> > >I don't believe CYCLADES ever grew beyond 20 hosts. As to network > >research in the US, BBN was DARPA's biggest contractor (still is, I > >think), and at least when I was there most of that money was going > >into .... network research. And then there was an awful lot of > >money going to a lot of universities, and a lot of corporate > >research going on. > > This is interesting and I hadn't thought about it until it was > pointed out to me. But it was the case. BBN was DARPA's biggest > contractor for building and operating the net. How many days a week > could BBN take the net to run experiments on say routing or > congestion control etc. Very quickly, the ARPANET was an operational > network to support others research. When did BBN-NET (net 8) get rolling? (For those who don't know, net 8 was originally an IMP network that was used as BBN's internal corporate LAN/WAN). By the time I showed up in 1983 it was larger (in terms of IMPs) than ARPANET and was where the testing was done. Thanks! 
Craig From eric.gade at gmail.com Tue Feb 22 10:46:24 2011 From: eric.gade at gmail.com (Eric Gade) Date: Tue, 22 Feb 2011 18:46:24 +0000 Subject: [ih] Question for IAB People Message-ID: Hi. I'm trying to work out whether references to 'InterNICs' from as early as 1987 meant something different from the 'InterNIC' (*international* NIC) that was initially proposed at an IAB meeting in April 1990. It may be the case that this former idea of InterNIC was of a 'template' that other NICs would possibly model themselves off of. It is my understanding that from around 1987 - 1989 there was an 'InterNIC' WG. My suspicion is that this actually refers to finding methods for exchanging information between different NICs in some standardized fashion. Later on, the name seems to refer to a blueprint for an actual organization. It's hard to resolve this because of *the* InterNIC that eventually came into existence. -- Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssurya at ieee.org Mon Feb 28 16:06:23 2011 From: ssurya at ieee.org (Stephen Suryaputra) Date: Mon, 28 Feb 2011 19:06:23 -0500 Subject: [ih] The origin of variable length packets Message-ID: Hi, I'm reading Paul Baran's paper: P. Baran, "On Distributed Communications Networks," Communications Systems, IEEE Transactions on, vol. 12, pp. 1-9, 1964. And I saw something interesting. The message block (aka "packet") described in this paper has a fixed size of 128 bytes. And there is even a mention saying that the fixed size enables high speed implementation of switches. Out of curiosity, I tried to trace when the packet becomes variable length. But, no success. This paper indicates that there is a maximum packet size, so it's already variable by 1974: V. Cerf and R. Kahn, "A Protocol for Packet Network Intercommunication," Communications, IEEE Transactions on, vol. 22, pp. 637-648, 1974. Any pointer or reasons why the packet becomes variable length later on?
A reference would be really appreciated. Thanks, Stephen. From jnc at mercury.lcs.mit.edu Mon Feb 28 19:50:38 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 28 Feb 2011 22:50:38 -0500 (EST) Subject: [ih] The origin of variable length packets Message-ID: <20110301035038.5971718C10B@mercury.lcs.mit.edu> > From: Stephen Suryaputra > Any pointer or reasons why the packet becomes variable length later on? I would assume/guess that the first well-known and wide-scale use was in the ARPANet. (Which was pretty much the first general packet network I know of - were there any proprietary things before that, does anyone know?) The first variable length data items transmitted between computers (although I would tend to doubt they thought of them as packets) might be hard to track down. It might have been some of the early computer-computer experiments, e.g. the kind of thing Larry Roberts did at Lincoln Labs (which definitely had variable length messages); another early system that might have had variable length data items was SAGE (since that also had computer-computer links between centers, although I don't know offhand of a source that talks about that level of detail on the communication aspects of SAGE). > A reference would be really appreciated. For Larry Roberts' work: Thomas Marill, Lawrence G. Roberts, "Toward A Cooperative Network Of Time-Shared Computers", Fall AFIPS Conference, October 1966 For the ARPANET: Frank Heart, Robert Kahn, Severo Ornstein, William Crowther, David Walden, The Interface Message Processor for the ARPA Computer Network (1970 Spring Joint Computer Conference, AFIPS Proc. Vol. 36, pp. 551-567, 1970) For SAGE, although there are a number of things about it, for instance the one listed here: http://en.wikipedia.org/wiki/Semi_Automatic_Ground_Environment#Further_reading Like I said, I don't know of anything there that goes into a lot of technical detail on the communication stuff, though.
(I looked through a couple, including the 'Annals of the History of Computing' issue.) In particular, there's a rumor that SAGE had the first email, but the communication part of the system especially is so poorly documented in the open literature I've never been able to track that down. There is a fair amount on the AN/FSQ-7 computer, and some on the programming, but the whole communication aspect (other than the early radar data transmission) is seemingly not covered anywhere. Noel From dave.walden.family at gmail.com Mon Feb 28 20:47:43 2011 From: dave.walden.family at gmail.com (Dave Walden) Date: Mon, 28 Feb 2011 20:47:43 -0800 Subject: [ih] The origin of variable length packets In-Reply-To: <20110301035038.5971718C10B@mercury.lcs.mit.edu> References: <20110301035038.5971718C10B@mercury.lcs.mit.edu> Message-ID: <4d6c7a76.8705ec0a.636a.ffffcee9@mx.google.com> At 07:50 PM 2/28/2011, Noel Chiappa wrote: >I would assume/guess that the first well-known and wide-scale use was in the >ARPANet. For the ARPANET: > Frank Heart, Robert Kahn, Severo Ornstein, William Crowther, David Walden, > The Interface Message Processor for the ARPA Computer Network (1970 Spring > Joint Computer Conference, AFIPS Proc. Vol. 36, pp. 551-567, 1970) >Hi, I'm not close to home right now, so I can't look up what ARPA's call for bids (RFQ) to develop the ARPANET IMPs specified. I sort of feel that it already specified that the host computers could send messages that were variable length up to about 8,000 bits, and the packet-subnet of IMPs broke these messages into about 1,000 bit packets with the last packet in a message possibly being shorter than a full 1,000 bits. Anyway, that's the way, as I remember, that we initially implemented it in the IMPs. I do remember that there was a preconception of bi-modal message traffic with file transfers being broken into 8,000 bit messages, and interactive terminal traffic being messages of only 10s or 100s of bits, i.e., one packet or less.
I also think I remember that the (I think 24-bit) CRC on inter-IMP packets was calculated to have the desired error detection rate based on 1,000 bit packets. Dave -- home address: 12 Linden Rd., E. Sandwich, MA 02537; home ph=508-888-7655; Portland ph = 971-279-2173; cell ph = 503-757-3137; Sara cell ph = 508-280-0446 email address: dave at walden-family.com; website(s): http://www.walden-family.com/ From vint at google.com Mon Feb 28 21:06:37 2011 From: vint at google.com (Vint Cerf) Date: Tue, 1 Mar 2011 00:06:37 -0500 Subject: [ih] The origin of variable length packets In-Reply-To: <4d6c7a76.8705ec0a.636a.ffffcee9@mx.google.com> References: <20110301035038.5971718C10B@mercury.lcs.mit.edu> <4d6c7a76.8705ec0a.636a.ffffcee9@mx.google.com> Message-ID: correct, dave On Mon, Feb 28, 2011 at 11:47 PM, Dave Walden wrote: > At 07:50 PM 2/28/2011, Noel Chiappa wrote: > >> I would assume/guess that the first well-known and wide-scale use was in >> the >> ARPANet.For the ARPANET: >> >> Frank Heart, Robert Kahn, Severo Ornstein, William Crowther, David >> Walden, >> The Interface Message Processor for the ARPA Computer Network (1970 >> Spring >> Joint Computer Conference, AFIPS Proc. Vol. 36, pp. 551.567, 1970) >> > > Hi, >> > > I'm not close to home right now, so I can't look up what ARPA's call for > bids (RFQ) to develop the ARPANET IMP's specified. I sort of feel that it > already specified that the host computers could send messages that were > variable length up to about 8,000 bits, and the packet-subnet of IMPs broke > these messages into about 1,000 bit packets with the last packet in a > message being possibly being shorter than a full 1,000 bits. Anyway, that's > the way, as I remember, that we initially implemented it in the IMPs. 
I do > remember that there was a preconception of bi-modal message traffic with > file transfers being broken into 8,000 bit messages, and interactive > terminal traffic being messages of only 10s or 100s of bits, i.e., one > packet or less. I also think I remember that the (I think 24-bit) CRC on > inter-IMP packets was calculated to have the desired error detection rate > based on 1,000 bit packets. > > Dave > > > > -- > home address: 12 Linden Rd., E. Sandwich, MA 02537; home ph=508-888-7655; > Portland ph = 971-279-2173; cell ph = 503-757-3137; Sara cell ph = 508-280-0446 > email address: dave at walden-family.com; website(s): > http://www.walden-family.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Mon Feb 28 21:23:50 2011 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 1 Mar 2011 00:23:50 -0500 (EST) Subject: [ih] The origin of variable length packets Message-ID: <20110301052350.87A4018C109@mercury.lcs.mit.edu> > From: Dave Walden > I can't look up what ARPA's call for bids (RFQ) to develop the ARPANET > IMP's specified. > ... > .. the packet-subnet of IMPs broke these messages into about 1,000 bit > packets with the last packet in a message being possibly being shorter > than a full 1,000 bits. I'm also too lazy to go check the RFQ or the BBN proposal, but I did look at the Heart et al paper, and although it doesn't _explicitly_ say that the IMPs used shorter packets, and give the details on how, there are a lot of things that implicitly say so, e.g.: "a line fully loaded with short packets will require more computation than a line with all long packets ... a line will typically carry a variety of different length packets" (pg. 564) Noel
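The message-into-packets scheme Dave Walden recalls in this thread -- host messages of up to about 8,000 bits, carried by the IMP subnet as packets of at most about 1,000 bits, with the final packet of a message allowed to be shorter -- can be sketched in a few lines. This is an illustrative sketch only, not actual IMP code; the constants follow Walden's recollection above, and the function name is invented for the example.

```python
# Illustrative sketch of ARPANET-style message fragmentation (not actual
# IMP code). Constants follow Dave Walden's recollection in this thread:
# host messages up to ~8,000 bits, subnet packets of at most ~1,000 bits,
# with the last packet of a message allowed to be shorter.

MAX_MESSAGE_BITS = 8000
MAX_PACKET_BITS = 1000

def fragment(message_bits):
    """Return the sizes, in bits, of the packets a message is split into."""
    if not 0 < message_bits <= MAX_MESSAGE_BITS:
        raise ValueError("message must be 1 to %d bits" % MAX_MESSAGE_BITS)
    full, remainder = divmod(message_bits, MAX_PACKET_BITS)
    sizes = [MAX_PACKET_BITS] * full
    if remainder:
        sizes.append(remainder)  # the final, possibly short, packet
    return sizes

# The bi-modal traffic Walden mentions: a full file-transfer message
# becomes eight full packets; a short interactive message is one packet.
print(fragment(8000))  # [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000]
print(fragment(250))   # [250]
```

Note how the scheme reconciles the two traffic modes Walden describes: bulk transfers amortize per-packet overhead over full packets, while a keystroke-sized message never pads out to a full 1,000 bits.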