From feinler at earthlink.net Mon Feb 4 14:57:27 2013 From: feinler at earthlink.net (Elizabeth Feinler) Date: Mon, 4 Feb 2013 14:57:27 -0800 Subject: [ih] Internet Hall of Fame Recommendations Needed Message-ID: The Internet Society is seeking nominations for the Internet Hall of Fame by Feb. 15th, 2013. This link has the details and form to fill out. http://internethalloffame.org/nominations Hope you will think about who should be there, and make sure they get nominated. Thanks, Jake Feinler From justine at eecs.berkeley.edu Thu Feb 7 10:23:02 2013 From: justine at eecs.berkeley.edu (Justine Sherry) Date: Thu, 7 Feb 2013 10:23:02 -0800 Subject: [ih] The story of BGP? Message-ID: Hi Folks, I was in the graduate networking class yesterday at Berkeley and we were discussing the origin of BGP for Interdomain routing, and we realized we were all a bit vague on the history of BGP and how it developed. Our (graduate students') understanding goes something like this: Pre-1994: EGP, hierarchical Internet to NSFNet Some point in 1994: "Flag Day" and everyone switches to BGP Since 1994: Minimal evolution in BGP There are two big gaps here, of course. (1) Where did BGP come from, who drafted the spec, why was it settled on as what we all switched to in 1994? Were there alternatives in mind? (2) How is the BGP we switched to in 1994 different from the BGP we use today, and who drove those changes? Does anyone have any pointers to a summary of this history or interesting experiences to share? Thank you! Cheers, Justine (& assorted networking graduate students) From craig at aland.bbn.com Thu Feb 7 10:49:19 2013 From: craig at aland.bbn.com (Craig Partridge) Date: Thu, 07 Feb 2013 13:49:19 -0500 Subject: [ih] The story of BGP? 
Message-ID: <20130207184919.D1AEB28E137@aland.bbn.com> I believe you'll find a lot of what you want to know in Yakov Rekhter's talk on BGP at 18 (http://www.youtube.com/watch?v=_Mn4kKVBdaM). > There are two big gaps here, of course. > (1) Where did BGP come from, who drafted the spec, why was it settled > on as what we all switched to in 1994? Were there alternatives in > mind? As you'll see, the story is that a couple of very frustrated engineers drafted BGP over lunch on a napkin. They were frustrated because the IETF was flailing around looking at alternatives. > (2) How is the BGP we switched to in 1994 different from the BGP we > used today, and who drove those changes? Many people can speak on this topic better than I can. Thanks! Craig From sghuter at nsrc.org Thu Feb 7 11:05:09 2013 From: sghuter at nsrc.org (Steven G. Huter) Date: Thu, 7 Feb 2013 11:05:09 -0800 (PST) Subject: [ih] The story of BGP? In-Reply-To: References: Message-ID: one good reference if you have not yet reviewed it would be IETF RFC 1105 http://www.ietf.org/rfc/rfc1105.txt and the subsequent updates to that document, RFCs 1163, 1267, 1771, etc. steve huter From jnc at mercury.lcs.mit.edu Thu Feb 7 12:05:55 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 7 Feb 2013 15:05:55 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130207200555.4D77C18C0E3@mercury.lcs.mit.edu> > From: Justine Sherry > why was it settled on as what we all switched to in 1994? Were there > alternatives in mind? > ... > Does anyone have ... interesting experiences to share? L-rd, I should be a total font of useful information here, because I was in the middle of all of it, but alas, this period is dim in my memory - very dim, sigh. I know there was an EGP++ that a number of us worked on for a while (I forget who led the effort, it must have been someone at BBN, I think). 
I don't recall anything at all about it, alas (although I probably have some old drafts in a large stack of papers upstairs if anyone _really_ cares). I assume it must have loosened the 'no-cycles' restriction on EGP2 (I _think_ that's what we called the version of EGP that was widely deployed, which had some differences from the version that Eric Rosen first proposed - but don't put much weight on that memory), but I don't recall anything about it. I think what happened next was that I became wholly dissatisfied with both the direction and pace of EGP++, and tried to get Proteon, which was then in tight with a lot of the NSF regionals, to lead an effort to do something better - I wanted to do something link-state based - the working name was FGP (in the spirit of the languages, B, C and D...). I tried to convince John Moy that he was capable of doing it, but he demurred (which was ironic, because only very slightly later he did OSPF - a link-state routing protocol). Dave Clark backed up that position to the Proteon board, so I wasn't able to convince Proteon to do it. At about that time, or shortly thereafter, IBM won the NSF backbone contract, and they needed something better than EGP, and so Yakov wound up putting together BGP. > How is the BGP we switched to in 1994 different from the BGP we used > today, and who drove those changes? Well, there were three main 'phases' (and I don't recall the order of the first two, but I'm sure the RFCs will tell). The first was that Yakov did an 'improved' BGP for use with the ISO stack, and at some point an upgrade to the IETF BGP basically took that up. The second was that the decision to do CIDR (the chief recommendation of the ROAD effort) meant we had to upgrade BGP to carry masks. (These may have been folded into one upgrade? Don't recall... Destination-Vector protocols, they're all fundamentally junk, I don't pay much attention to the details.) 
Since then, the third 'phase' is that there has been a series of improvements (I think mostly done by adding attributes), things like communities, etc. And there are things like route-reflectors. But I don't know if any of them are major protocol changes, though (although repeat comment about lack of attention to DV protocols). Noel From jnc at mercury.lcs.mit.edu Thu Feb 7 14:48:46 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 7 Feb 2013 17:48:46 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> > I probably have some old drafts in a large stack of papers upstairs So, I was unable to resist, and did a quick dip into the piles of paper, and rescued a few things, which do have the advantage of not fading like my memory! > EGP2 (I _think_ that's what we called the version of EGP that was > widely deployed That was indeed the version number of the deployed EGP - see RFC-904. The name of the proposed follow-on was probably EGP3. I did find a document from March, 1987, by Mike St. Johns and Jose Rodriguez entitled "EGP Version 2, Revisions and Extensions to EGP", which says that "While EGP2 on paper and logically is a new version of EGP, the version number ... REMAINS the same as in EGP" (i.e. 2). I think we must have decided that was confusing, because RFC-1093 (which also contains some good history of the situation prior to BGP) contains a reference to: Marianne L. Gardner and Mike Karels, "Exterior Gateway Protocol, Version 3, Revisions and Extensions", Working Notes of the IETF WG on EGP, February 1988 which matches my dim memory (that the still-born 'next version' of EGP was EGP3). I have found other references to EGP3, too, such as the "Status Report of the Open Routing Working Group" (chair Marianne Lepp), January 1989. 
> I assume it must have loosened the 'no-cycles' restriction One other problem was that the updates (EGP routing messages were single packet) were getting too big; the March '87 document describes an incremental update mechanism. > I think what happened next After looking at a few documents, I think I have a better idea of what happened. BGP came out in June 1989, but IIRC it was a pretty quick hack; e.g. I have a document from Guy Almes, March 1989 (i.e. 3 months prior) entitled "Midterm Inter-AS Routing Architecture" which makes no mention of it. I think what happened was that the EGP3 effort (which probably started in early 1987 or so) quickly metastasized (a most appropriate word) into something called the Open Routing Working Group, which started down the road to full-blown policy routing. (See the appendix to RFC-1126, "Goals and functional requirements for inter-autonomous system routing" for more.) (I have some minutes/agendas from ORWG meetings in September and November, 1988, if anyone cares.) This happened pretty quickly - I have memos written by Ross Callon from December, 1987 in which the group is moving on from 'an improvement to EGP' to 'let's do a real policy routing architecture, good for a very large network'. That eventually resulted in IDPR (RFC-1478, etc) but it took forever. So in the interim, people went off in a variety of different ways. One was Yakov: > Yakov did an 'improved' BGP for use with the ISO stack, and at some > point an upgrade to the IETF BGP basically took that up. The original BGP was, as stated, a quick hack because the NSF backbone needed something better than EGP2. 
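The update-size problem mentioned above (EGP routing messages were a single IP datagram, with an incremental-update mechanism proposed in the March '87 document) compounds with fragmentation: lose any one fragment and the whole update is lost. A back-of-the-envelope sketch with assumed loss rates (illustrative numbers, not measurements from the period):

```python
# Rough arithmetic (assumed loss rates, not period measurements):
# an EGP-style update carried in one IP datagram of n fragments is
# useless if ANY fragment is lost, so P(update arrives) = (1 - p)**n.

def update_survival(n_fragments, p_loss):
    """Probability that a whole single-datagram update is delivered."""
    return (1 - p_loss) ** n_fragments

for n in (1, 10, 30):
    print(n, round(update_survival(n, p_loss=0.01), 3))

# As the routing table (and hence the fragment count) grows, whole
# updates start failing regularly -- one motivation for moving to
# incremental and multi-packet updates.
```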
As part of the inability of the ORWG to get a policy routing design done quickly that everyone liked, Yakov later did his own policy routing thing, IDRP (note very slight acronym difference - that was deliberate), which I think was first an ISO proposal (it does not seem to have ever existed as an RFC), and was the thing that I remembered as later being taken up into BGP (not sure if it was BGP-3 or BGP-4 - maybe 4, looking at the references?). IDRP also got used in a proposal called 'Unified Routing' (see RFC-1322, "A Unified Approach to Inter-Domain Routing", May 1992). > I .. tried to get Proteon .. to lead an effort to do something better - > I wanted to do something link-state based - the working name was FGP > (in the spirit of the languages, B, C and D...). I initially tried this (it would have been circa 1987-1988 or so, I remember a meeting at Proteon with, I think, Hans-Werner Braun) to discuss it. After that blew up, I wound up doing Nimrod (which was very similar to IDPR, but was not purely an inter-AS protocol, but a 'top to bottom' routing architecture). But that's a whole 'nother story... BTW, there actually was a proposed 'DGP', too - the 'Dissimilar Gateway Protocol'. None of the three memos I have about it has a date or name, but I have this dim memory that it was David Mills, and circa 1987. Noel From louie at transsys.com Thu Feb 7 15:20:26 2013 From: louie at transsys.com (Louis Mamakos) Date: Thu, 7 Feb 2013 18:20:26 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> Message-ID: <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Some other random thoughts.. I think one of the drivers for a replacement for EGP was the arrival of the NSFNET, and the need to support a topology that wasn't the mostly-strict hierarchy that was rooted in the single set of core routers on the ARPANET. 
The NSFNET backbone along with the various NSF sponsored regional networks as well as other research networks were quite a challenge to glue together, with somewhat ill-defined borders between networks and IGP domains that spanned multiple networks and their administrators. A better tool was desperately needed. This was in addition to EGP-2 suffering under the ever increasing size of the route announcements. If I recall, there was a lack of incremental updates and EGP-2 relied on IP reassembly of very large fragmented IP datagrams. A single dropped fragment in practice rendered the entire announcement useless, and I think there were some concerns on how large a packet some operating systems were going to be willing to reassemble. The NSFNET and scores of networks would only add to the pressure of the ever-growing size of the EGP announcements. Other random thought: CIDR arrived in BGP-4. I remember the transition from BGP-3 to BGP-4 and while strictly speaking not a flag-day, the coexistence of both was intended to be limited because of the difficulty in understanding how classful and classless announcements would coexist. I'm not sure what the successor to BGP-4 will be, but it will be called BGP-4 and be backwards compatible and incrementally deployable. :-) Certainly BGP-4 has been around since the mid-1990's now, right? It has withstood the onslaught of tremendous improvements since then. Louis Mamakos From jack at 3kitty.org Thu Feb 7 15:36:17 2013 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 7 Feb 2013 15:36:17 -0800 Subject: [ih] The story of BGP? In-Reply-To: References: Message-ID: Hi Justine, I wrote up some historical recollections a few years ago - see http://mailman.postel.org/pipermail/internet-history/2010-February/001219.html and search in that rather long message for "subway strap" for the anecdote about the genesis of BGP. 
As I recall, we set up the basic architecture for a multi-system Internet and invented the concept of "Autonomous Systems" (ASes). From that, we defined the initial EGP, documented in RFC 827, as the simplest possible scheme to achieve just the most basic connectivity. To paraphrase Einstein - "as simple as possible, but not simpler". That first EGP was intended more as a "firewall" mechanism than anything else. It would enable one AS, such as the "core gateways", to be operated and managed by one group (e.g., us at BBN), to hopefully be unaffected by whatever might go on in some other AS. Extra-AS information would simply be viewed as suspect, and intra-AS information would dominate all routing decisions. We explicitly (probably with ARPA encouragement) left further evolution of EGP to Someone Else. If BBN had continued to define and evolve EGP it wouldn't have been a very good test of whether or not the architecture really allowed pieces of the Internet to be developed, managed, and evolved independently. That vacuum probably led to the engineers-and-napkins scenario somewhat later, and the definition of BGP as a replacement for the intentionally rudimentary EGP. From the perspective of the "core" system, EGP made the "core gateways" a lot less vulnerable to whatever Dave Mills, Noel Chiappa, Jim Mathis, and others did to their own Autonomous Systems....! They'd try something new, and we'd then get the complaints that the Internet was broken. This is a good example of the somewhat mundane but crucial mechanisms we had to put into the Internet to enable a single Internet to simultaneously support research and experimental work as well as reliable infrastructure-class communications. Bob Hinden may remember more about that era, since as I recall he (I think with Alan Sheltzer and Mike Brescia) was the one who had to make it actually work. 
/Jack Haverty On Thu, Feb 7, 2013 at 10:23 AM, Justine Sherry wrote: > Hi Folks, > > I was in the graduate networking class yesterday at Berkeley and we > were discussing the origin of BGP for Interdomain routing, and we > realized we were all a bit vague on the history of BGP and how it > developed. > > Our (graduate students') understanding goes something like this: > Pre-1994: EGP, hierarchical Internet to NSFNet > Some point in 1994: "Flag Day" and everyone switches to BGP > Since 1994: Minimal evolution in BGP > > There are two big gaps here, of course. > (1) Where did BGP come from, who drafted the spec, why was it settled > on as what we all switched to in 1994? Were there alternatives in > mind? > (2) How is the BGP we switched to in 1994 different from the BGP we > used today, and who drove those changes? > > Does anyone have any pointers to a summary of this history or > interesting experiences to share? Thank you! > > Cheers, > Justine (& assorted networking graduate students) From LarrySheldon at cox.net Thu Feb 7 16:42:55 2013 From: LarrySheldon at cox.net (Larry Sheldon) Date: Thu, 07 Feb 2013 18:42:55 -0600 Subject: [ih] Domains and Networks (was Re: The story of BGP?) In-Reply-To: References: Message-ID: <51144A0F.1060504@cox.net> On 2/7/2013 12:23 PM, Justine Sherry wrote: > I was in the graduate networking class yesterday at Berkeley and we > were discussing the origin of BGP for Interdomain routing, and we That introduction clangs. We all know that I have no cachet nor credential to offer in this place, but in an earlier life had occasional opportunities to ask people to use terms correctly lest they confuse or signal confusion. At that time and place routing happened between networks, oblivious to the presence or absence (or even existence in things like IPX networks) of domain boundaries. I don't think even the use of IP forces the existence of a domain structure. 
-- Requiescas in pace o email Two identifying characteristics of System Administrators: Ex turpi causa non oritur actio Infallibility, and the ability to learn from their mistakes. ICBM Data: http://g.co/maps/e5gmy (Adapted from Stephen Pinker) From jnc at mercury.lcs.mit.edu Thu Feb 7 17:48:08 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 7 Feb 2013 20:48:08 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130208014808.EC47418C0E0@mercury.lcs.mit.edu> > From: Louis Mamakos > EGP-2 suffering under the ever increasing size of the route announcements. Oi, vey! Don't remind me about them!! If you remember, the big difference between EGP-1 and EGP-2 was that in EGP-2, the route (technically, reachability) entries were stuffed into the packet in the most space-efficient way possible - which was an order which was totally unrelated to the way anyone would ever store them in a routing database! I remember writing a bunch of code to run over the routing database and build a tree (using nodes allocated from the heap) which was organized in a way which was optimal for generating the bizarre format of EGP route packets. And then when the routing table changed... The thing about that crazy packet format was that it probably only bought us a year or so over EGP-1 (since the EGP-2 updates were probably 50% of the size of the EGP-1 ones). It would have been much smarter to simply go to multi-packet updates straight off (and probably less code even, given the amount of code it took to build the tree, etc). Noel From adrian at creative.net.au Fri Feb 8 00:30:29 2013 From: adrian at creative.net.au (Adrian Chadd) Date: Fri, 8 Feb 2013 00:30:29 -0800 Subject: [ih] The story of BGP? 
In-Reply-To: <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: Hi, On 7 February 2013 15:20, Louis Mamakos wrote: > Other random thought: CIDR arrived in BGP-4. I remember the transition from BGP-3 to BGP-4 and while strictly speaking not a flag-day, the coexistence of both was intended to be limited because of the difficulty in understanding how classfull and classless announcements would coexist. I'm not sure what the successor to BGP-4 will be, but it will be called BGP-4 and be backwards compatible and incrementally deployable. :-) Certainly BGP-4 has been around since the mid-1990's now, right? It has withstood the onslaught of tremendous improvements since then. The communities allowed for a lot of extension with BGPv4. There's also the IPv6 support with BGPv4. It's worth reading up on the evolution of that, as well as the discussions that went (and are?) going on with relation to table sizes. Finally, I do remember in the late 90s and early 2000s a bunch of research into CPU and network effects of BGP, specifically: * The behaviour of BGP with non-instantaneous route updates, causing repetitive route additions/withdraws * .. and the BGP dampening stuff, for both announcements and CPU churn So a lot of BGPv4 behaviour "tweaking" went on without changing the protocol itself. 2c, Adrian From dot at dotat.at Fri Feb 8 03:51:08 2013 From: dot at dotat.at (Tony Finch) Date: Fri, 8 Feb 2013 11:51:08 +0000 Subject: [ih] Domains and Networks (was Re: The story of BGP?) In-Reply-To: <51144A0F.1060504@cox.net> References: <51144A0F.1060504@cox.net> Message-ID: Larry Sheldon wrote: > On 2/7/2013 12:23 PM, Justine Sherry wrote: > > > I was in the graduate networking class yesterday at Berkeley and we > > were discussing the origin of BGP for Interdomain routing, and we > > That introduction clangs. 
We all know that I have no cachet nor credential to > offer in this place, but in an earlier life had occasional opportunities to > ask people to us terms correctly lest the confuse or signal confusion. > > At that time and place routing happened between networks, oblivious to the > presence or absence (or even existence in thing like IPX networks) of domain > boundaries. I don't think even the use of IP forces the existence of a domain > structure. I gather from RFC 1069 that the term "inter-domain routing" comes from the OSI protocols. It says ``... the concept of "routing domains" as used in ANSI and ISO. This concept is similar to, but not identical with, the concept of "Autonomous System" used in the Internet.'' Tony. -- f.anthony.n.finch http://dotat.at/ Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first. Rough, becoming slight or moderate. Showers, rain at first. Moderate or good, occasionally poor at first. From jeanjour at comcast.net Fri Feb 8 05:17:05 2013 From: jeanjour at comcast.net (John Day) Date: Fri, 8 Feb 2013 08:17:05 -0500 Subject: [ih] Domains and Networks (was Re: The story of BGP?) In-Reply-To: References: <51144A0F.1060504@cox.net> Message-ID: Yes, both intra-domain and inter-domain routing as well as the detailed structure of the network layer, i.e. dividing it into 3 sublayers, were developed in OSI. This is what led to the idea of using link state locally (for networks) and distance vector globally (for internets). There was a good debate on intra-domain routing between a proposal by Dave Piscitello and Dave Oran. They ended up choosing Oran's, which became IS-IS. Piscitello's proposal had some interesting properties. At 11:51 AM +0000 2/8/13, Tony Finch wrote: >Larry Sheldon wrote: >> On 2/7/2013 12:23 PM, Justine Sherry wrote: >> >> > I was in the graduate networking class yesterday at Berkeley and we >> > were discussing the origin of BGP for Interdomain routing, and we >> >> That introduction clangs. 
We all know that I have no cachet nor >>credential to >> offer in this place, but in an earlier life had occasional opportunities to >> ask people to us terms correctly lest the confuse or signal confusion. >> >> At that time and place routing happened between networks, oblivious to the >> presence or absence (or even existence in thing like IPX networks) of domain >> boundaries. I don't think even the use of IP forces the existence >>of a domain >> structure. > >I gather from RFC 1069 that the term "inter-domain routing" comes from the >OSI protocols. It says ``... the concept of "routing domains" as used in >ANSI and ISO. This concept is similar to, but not identical with, the >concept of "Autonomous System" used in the Internet.'' > >Tony. >-- >f.anthony.n.finch http://dotat.at/ >Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at first. >Rough, becoming slight or moderate. Showers, rain at first. Moderate or good, >occasionally poor at first. From jnc at mercury.lcs.mit.edu Fri Feb 8 08:09:11 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 8 Feb 2013 11:09:11 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130208160911.9764118C101@mercury.lcs.mit.edu> > From: Justine Sherry Just looking back at your original message, a few comments, better answering your original questions (which didn't really get answered, I think). > Pre-1994: EGP, hierarchical Internet to NSFNet > Some point in 1994: "Flag Day" and everyone switches to BGP > Since 1994: Minimal evolution in BGP I'm not sure where 1994 comes from (that's the date on BGP-4, is that it?), but it's wrong. The conversion to BGP happened considerably before that. 
By the time of the BGP-1 spec in mid-1989, BGP was already in use; (from RFC-1105): "At the time of this writing, the Border Gateway Protocol implementations exist for cisco routers as well as for the NSFNET Nodal Switching Systems" I don't recall how long EGP2 remained in use after mid-1989, but it wasn't long, IIRC. And there was no 'flag day' - the Internet was already too big, and used too heavily, for that kind of thing. And of course BGP has continued to evolve since 1994 (communities, route reflectors, dampening, iBGP, yadda-yadda) although it's as much in operational practices as in the basic protocol. > why was it settled on as what we all switched to in 1994? Were there > alternatives in mind? There weren't any real alternatives, in the early stages (around 1989). EGP3 never happened (I think because most of the 'routing' people wanted to build something with more capabilities), and anything more sophisticated would have taken, and did take, too long. A few other things were bruited (FGP, DGP, Guy Almes' thing, whose name escapes me at the moment - maybe MIRA?) but none were more than paper - whereas BGP had implementations available. Moving off EGP2 was absolutely necessary (as Louie notes, the single-IP-packet routing updates were terminally limiting), and BGP-1 was the only thing available. Later on, IDPR made it into code (and Nimrod got half-way done), but I think the thing that sunk them was completely different: they arrived while the Internet was in a phase of extremely explosive growth, and people were running flat out just trying to keep up with the growth in traffic. Switching to a whole new routing architecture just didn't have a snowball's chance of happening. CIDR only happened because it was absolutely critical, and it involved only minimal changes. 
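The CIDR upgrade described above (BGP-4 carrying a prefix length, or mask, with every route) is what makes route aggregation possible. A minimal illustration using Python's standard ipaddress module (modern tooling and made-up prefixes, obviously not period code):

```python
# Illustration (not period code): what carrying masks buys you.
# Four contiguous classful class-C networks collapse into one
# classless BGP-4 announcement.
import ipaddress

classful = [ipaddress.ip_network(n) for n in
            ["192.24.0.0/24", "192.24.1.0/24",
             "192.24.2.0/24", "192.24.3.0/24"]]

# Classful BGP-3 had to announce all four; BGP-4 can announce one
# aggregate because the prefix length travels with the route.
aggregated = list(ipaddress.collapse_addresses(classful))
print(aggregated)  # [IPv4Network('192.24.0.0/22')]
```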
Not being involved deeply in IDPR, I can't speak for them, but I know with Nimrod a lot of people thought it was really neat, and very powerful, but it was pretty clear that people just didn't have enough spare time/energy/resources to make it happen. And then we'd missed the window - the Internet was too big to make that kind of change, and its evolution had become _completely_ driven by relatively short-term cost/benefit considerations. So unless there was an absolute necessity for something different - and there wasn't - there was no way to replace BGP. > How is the BGP we switched to in 1994 different from the BGP we used > today, and who drove those changes? BGP-1 to today, or BGP-4 circa 1994 to today? (Which was just after CIDR was taken up, which was in September 1993.) There's a big difference between those two. But someone else can answer that better. Noel From jcurran at istaff.org Fri Feb 8 08:35:19 2013 From: jcurran at istaff.org (John Curran) Date: Fri, 8 Feb 2013 11:35:19 -0500 Subject: [ih] The story of BGP? In-Reply-To: <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: On Feb 7, 2013, at 6:20 PM, Louis Mamakos wrote: > Some other random thoughts.. > > I think one of the drivers for a replacement for EGP was the arrival of the NSFNET, and the need to support a topology that wasn't the mostly-strict hierarchy that was rooted in the single set of core routers on the ARPANET. The NSFNET backbone along with the various NSF sponsored regional networks as well as other research networks were quite a challenge to glue together, with somewhat ill-defined borders between networks and IGP domains that spanned multiple networks and their administrators. A better tool was desperately needed. 
:-) One good place to find historic references to some of these challenges and changes is in the NSFNET sponsored "Internet Monthly Report" series... e.g. : "... Internet Monthly Report November 1989 ROUTING AREA REPORT Director: Bob Hinden (BBN) The major issue in this area is the topic of a standard internal gateway routing protocol (IGP). The IESG discussed this in detail at the open meeting in Hawaii. We plan to make this topic the focus of a special meeting at the next IETF meeting at Florida State University (Feb 6-9, 1990). Because of its importance and its early promise, we have also decided to form a WG to specifically examine the experimental Border Gateway Protocol (BGP). One possible outcome would be for BGP to eventually replace EGP as the exterior gateway routing protocol. Another possible outcome might be that the better parts of BGP could become a basis for a new or better EGP. Phill Gross " FYI, /John From scott.brim at gmail.com Fri Feb 8 09:26:45 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 12:26:45 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> Message-ID: On Thu, Feb 7, 2013 at 5:48 PM, Noel Chiappa wrote: > BGP came out in June 1989, but IIRC it was a pretty quick hack; e.g. I > have a document from Guy Almes, March 1989 (i.e. 3 months prior) entitled > "Midterm Inter-AS Routing Architecture" which makes no mention of it. Noel, is that the one with the path vector (domain level route segments) idea? Did it discuss "0, 1, infinity"? The development of BGP was a confluence, not an invention of one or two people. Yes there was the attempt at incremental changes in FGP, and in addition we were fooling around with precursors for loop-avoidance, e.g. "source-asserted trees". By the way iirc Jeff Honig's implementation of BGP in Gated interworked very quickly after the Austin IETF. 
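The path-vector idea mentioned here became BGP's loop-avoidance mechanism: every route carries the list of ASes it has traversed, and a router discards any route whose AS path already contains its own AS number. A minimal sketch (the function names and private ASNs are illustrative, not from Gated or any real implementation):

```python
# Sketch of path-vector loop avoidance as used in BGP: drop any route
# whose AS_PATH already contains our own AS number, and prepend our
# ASN when propagating a route onward. All names here are illustrative.

MY_ASN = 64512  # hypothetical private-use ASN

def accept_route(as_path, my_asn=MY_ASN):
    """Reject routes that have already passed through this AS (a loop)."""
    return my_asn not in as_path

def propagate(as_path, my_asn=MY_ASN):
    """Prepend our ASN before announcing the route to a neighbor."""
    return [my_asn] + as_path

print(accept_route([64513, 64514]))         # True: no loop
print(accept_route([64513, 64512, 64514]))  # False: our ASN is already in the path
print(propagate([64513]))                   # [64512, 64513]
```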
Scott From scott.brim at gmail.com Fri Feb 8 09:30:22 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 12:30:22 -0500 Subject: [ih] The story of BGP? In-Reply-To: <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: On Thu, Feb 7, 2013 at 6:20 PM, Louis Mamakos wrote: > Some other random thoughts.. > > I think one of the drivers for a replacement for EGP was the arrival of the NSFNET, and the need to support a topology that wasn't the mostly-strict hierarchy that was rooted in the single set of core routers on the ARPANET. The NSFNET backbone along with the various NSF sponsored regional networks as well as other research networks were quite a challenge to glue together, with somewhat ill-defined borders between networks and IGP domains that spanned multiple networks and their administrators. A better tool was desperately needed. One of the most entertaining moments in my history of IETF involvement was when Hans-Werner Braun and I explained NSFNet and ARPAnet routing interworking. Everything was still hierarchical so we did it all with RIP and a lot of following default routes. Dave Clark slapped his forehead. Jon Postel just shook his head. Yes we needed something like BGP but that took a few years. Scott From scott.brim at gmail.com Fri Feb 8 09:34:37 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 12:34:37 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130208160911.9764118C101@mercury.lcs.mit.edu> References: <20130208160911.9764118C101@mercury.lcs.mit.edu> Message-ID: On Fri, Feb 8, 2013 at 11:09 AM, Noel Chiappa wrote: > I don't recall how long EGP2 remained in use after mid-1989, but it wasn't > long, IIRC. And there was no 'flag day' - the Internet was already too big, > and used too heavily, for that kind of thing. Was the shutdown of the ARPAnet a big factor? 
I don't remember the order of things. Scott From jnc at mercury.lcs.mit.edu Fri Feb 8 11:04:23 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 8 Feb 2013 14:04:23 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130208190423.68A8018C10B@mercury.lcs.mit.edu> > From: Scott Brim >> a document from Guy Almes, March 1989 (i.e. 3 months prior) entitled >> "Midterm Inter-AS Routing Architecture" > Noel, is that the one with the path vector (domain level route > segments) idea? Did it discuss "0, 1, infinity"? Haven't the foggiest - I only glanced at the first page or two briefly. I've scanned it in (too lazy to OCR it), you can see it here: http://ana-3.lcs.mit.edu/~jnc/history/Almes_[1-6].jpg I don't seem to have a copy of any version of EGP3, and I couldn't find one online. If anyone has a copy, I would most appreciate it. > Was the shutdown of the ARPAnet a big factor? I don't think so; I think it was more the growth of the NSFNet, the regionals, etc (and the 'Net as a whole) which did it. I don't know if the increasing number of 'back-door' connections directly between non-core AS's was a factor: I suspect the stated inability of EGP2 to handle cycles in the topology (which I think people hacked around with EGP<->IGP metric translation tables, etc) probably wasn't as big a driver as EGP2's lack of multi-packet routing updates; as the routing table got bigger, that just wouldn't fly. But I wasn't in operations any more by then, others would know more about that. Noel From jack at 3kitty.org Fri Feb 8 11:12:41 2013 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 8 Feb 2013 11:12:41 -0800 Subject: [ih] The story of BGP? In-Reply-To: References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: Agreed, NSFNET was a major driver of the EGP->BGP evolution. 
I think to understand the bigger picture of that evolution one should also look at what was going on at the time outside the academic/research community. In particular, there was pent-up demand for solutions in the operational worlds of government and commerce. There were also companies trying to establish themselves as suppliers as the Internet technology broke out of the research world - something DARPA et al were pushing as "technology transfer" and encouraging COTS - Commercial Off The Shelf solutions. Everyone wanted products they could just buy - and it was hard to get the idea of things like Dave's "fuzzballs" through the approval chain. I still recall one of innumerable meetings, sometime in the mid/late 80s. The meeting was called by DCA, to discuss options for incorporating those newfangled LANs into the Defense Data Network. The focus was clearly on operational reliability, as opposed to research. There were a variety of government users there, but not DARPA or NSF. It was an "operations crowd". I think it was held in the bowels of the Pentagon. Maybe 20 or 30 people total. One of my roles at the time at BBN was running a "DDN System Engineering" contract, whose work involved whatever it took to get various government systems - computers, applications, etc., converted over to use the Defense Data Network. Those user systems by then all had some LANs, and anticipated more, and wanted to know how to hook that new stuff into the communications backbone. That's why I was sitting at the meeting. The question was "what's the plan!?" Another seat at the table was occupied by Len Bosack (one of the cisco founders); cisco was a startup company at the time. Len gave a presentation about cisco's product (i.e., a COTS router), and how it could be incorporated into the DDN environment to integrate LANs into the overall system. I don't recall his details, but clearly EGP had a key role in such a picture.
It would allow each governmental unit to manage and control its own environment, all of them linked together by the DDN backbone - very similar to what the NSF environment did as well. As "system engineer" contractor for DDN, I was asked "will Len's scheme work?" Literally. Since this kind of scenario was precisely what EGP was intended to enable, it didn't take long to say "Yes it could work." Everyone seemed surprised, and I remember explaining a bit, in non-technical lingo for the brass, about autonomous systems and the like. I think they liked hearing someone from BBN say that, since we were the ones who had gotten the DDN up and running. In retrospect, I think they had all expected me, as the rep from BBN, to say "Hell no, it will only work if you buy everything from BBN." What I actually explained was that they could buy routers from any vendor, including BBN, and the Internet system architecture would support such a multi-vendor implementation (as Bob Kahn had promoted - see my "subway strap" email). Now you know why I wasn't in Sales... They may have gone forward anyway with cisco (their router was *much* less expensive), but I probably made it a lot easier by supporting the "EGP/AS approach". Of course, as technologists we all know that it isn't quite that simple, and there had to be a lot of work to iron out a more robust and powerful tool than EGP, suitable for large-scale deployment in demanding situations. I don't know anything about what happened inside cisco after that DCA meeting, but I think it's no accident that the BGP spec a bit later was co-authored by cisco. Hope this helps reveal a little more of the history... /Jack Haverty Point Arena, CA Feb 8, 2013 On Fri, Feb 8, 2013 at 8:35 AM, John Curran wrote: > On Feb 7, 2013, at 6:20 PM, Louis Mamakos wrote: > >> Some other random thoughts.. 
>> >> I think one of the drivers for a replacement for EGP was the arrival of the NSFNET, and the need to support a topology that wasn't the mostly-strict hierarchy that was rooted in the single set of core routers on the ARPANET. The NSFNET backbone along with the various NSF sponsored regional networks as well as other research networks were quite a challenge to glue together, with somewhat ill-defined borders between networks and IGP domains that spanned multiple networks and their administrators. A better tool was desperately needed. > > :-) > > One good place to find historic references to some of these challenges and > changes is in the NSFNET sponsored "Internet Monthly Report" series... e.g. > : > > "... > Internet Monthly Report November 1989 > ROUTING AREA REPORT > Director: Bob Hinden (BBN) > > The major issue in this area is the topic of a standard internal > gateway routing protocol (IGP). The IESG discussed this in detail > at the open meeting in Hawaii. We plan to make this topic the > focus of a special meeting at the next IETF meeting at Florida > State University (Feb 6-9, 1990). > > Because of its importance and its early promise, we have also > decided to form a WG to specifically examine the experimental > Border Gateway Protocol (BGP). One possible outcome would be for > BGP to eventually replace EGP as the exterior gateway routing > protocol. Another possible outcome might be that the better parts > of BGP could become a basis for a new or better EGP. > > Phill Gross " > > FYI, > /John > > > > > > > > From scott.brim at gmail.com Fri Feb 8 11:21:03 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 14:21:03 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130208190423.68A8018C10B@mercury.lcs.mit.edu> References: <20130208190423.68A8018C10B@mercury.lcs.mit.edu> Message-ID: On Fri, Feb 8, 2013 at 2:04 PM, Noel Chiappa wrote: > > From: Scott Brim > > >> a document from Guy Almes, March 1989 (i.e.
3 months prior) entitled > >> "Midterm Inter-AS Routing Architecture" > > > Noel, is that the one with the path vector (domain level route > > segments) idea? Did it discuss "0, 1, infinity"? > > Haven't the foggiest - I only glanced at the first page or two briefly. > I've scanned it in (too lazy to OCR it), you can see it here: > > http://ana-3.lcs.mit.edu/~jnc/history/Almes_[1-6].jpg Yes, that's the one I was thinking about. My memory is a sieve these days but iirc in the Topology Engineering Working Group (which I chaired) we were trying to make routing work and wishing we had a real EGP. The Interconnectivity Working Group was where this path vector idea came up, and was the incubator for BGP. I say my memory is a sieve because (1) I thought for sure I remembered these discussions at the Stanford IETF, but that was after Austin, and (2) I know Guy was leading the WG but I don't know if he came up with path vector. I think he might have but there was a lot of group discussion. From jcurran at istaff.org Fri Feb 8 11:41:35 2013 From: jcurran at istaff.org (John Curran) Date: Fri, 8 Feb 2013 14:41:35 -0500 Subject: [ih] The story of BGP? In-Reply-To: References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: On Feb 8, 2013, at 12:30 PM, Scott Brim wrote: > On Thu, Feb 7, 2013 at 6:20 PM, Louis Mamakos wrote: >> Some other random thoughts.. >> >> I think one of the drivers for a replacement for EGP was the arrival of the NSFNET, and the need to support a topology that wasn't the mostly-strict hierarchy that was rooted in the single set of core routers on the ARPANET. The NSFNET backbone along with the various NSF sponsored regional networks as well as other research networks were quite a challenge to glue together, with somewhat ill-defined borders between networks and IGP domains that spanned multiple networks and their administrators. A better tool was desperately needed.
> > One of the most entertaining moments in my history of IETF involvement > was when Hans-Werner Braun and I explained NSFNet and ARPAnet routing > interworking. Everything was still hierarchical so we did it all with > RIP and a lot of following default routes. Dave Clark slapped his > forehead. Jon Postel just shook his head. Yes we needed something > like BGP but that took a few years. Yes, it was RFC 1092/1093 that nicely documented the problem with strictly hierarchical EGP routing when the topology actually wasn't hierarchical... EGP+IGRP combined with the exceptions for the interesting lateral connections often resulted in breakage for anyone at multiple NSFNET NSS connections (e.g. CSNET with NSS 8/JVNC, NSS 6/SDSC) unless some real care was taken in configs. /John From scott.brim at gmail.com Fri Feb 8 12:02:21 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 15:02:21 -0500 Subject: [ih] The story of BGP? In-Reply-To: References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: On Fri, Feb 8, 2013 at 2:41 PM, John Curran wrote: >> One of the most entertaining moments in my history of IETF involvement >> was when Hans-Werner Braun and I explained NSFNet and ARPAnet routing >> interworking. Everything was still hierarchical so we did it all with >> RIP and a lot of following default routes. Dave Clark slapped his >> forehead. Jon Postel just shook his head. Yes we needed something >> like BGP but that took a few years. > > Yes, it was RFC 1092/1093 that nicely documented the problem with strictly hierarchical > EGP routing when the topology actually wasn't hierarchical... EGP+IGRP combined with the > exceptions for the interesting lateral connections often resulted in breakage for anyone > at multiple NSFNET NSS connections (e.g. CSNET with NSS 8/JVNC, NSS 6/SDSC) unless some > real care was taken in configs. 1989 is the earliest we have for documenting that? 
We knew it at least in 1986. From jcurran at istaff.org Fri Feb 8 12:21:15 2013 From: jcurran at istaff.org (John Curran) Date: Fri, 8 Feb 2013 15:21:15 -0500 Subject: [ih] The story of BGP? In-Reply-To: References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: On Feb 8, 2013, at 3:02 PM, Scott Brim wrote: > On Fri, Feb 8, 2013 at 2:41 PM, John Curran wrote: >> Yes, it was RFC 1092/1093 that nicely documented the problem with strictly hierarchical >> EGP routing when the topology actually wasn't hierarchical... EGP+IGRP combined with the >> exceptions for the interesting lateral connections often resulted in breakage for anyone >> at multiple NSFNET NSS connections (e.g. CSNET with NSS 8/JVNC, NSS 6/SDSC) unless some >> real care was taken in configs. > > 1989 is the earliest we have for documenting that? We knew it at least in 1986. I don't know (I was mostly an SMTP/header-people person prior to coming into BBN in 1990 and getting firsthand experience with Internet-wide routing... :-) I will note RFC 975 (D. Mills) provides a similar example in the process of documenting the need for "hop count"/administrative distance in EGP. FYI, /John From craig at aland.bbn.com Fri Feb 8 12:30:16 2013 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 08 Feb 2013 15:30:16 -0500 Subject: [ih] The story of BGP? Message-ID: <20130208203016.C252628E137@aland.bbn.com> > On Fri, Feb 8, 2013 at 2:41 PM, John Curran wrote: > >> One of the most entertaining moments in my history of IETF involvement > >> was when Hans-Werner Braun and I explained NSFNet and ARPAnet routing > >> interworking. Everything was still hierarchical so we did it all with > >> RIP and a lot of following default routes. Dave Clark slapped his > >> forehead. Jon Postel just shook his head. Yes we needed something > >> like BGP but that took a few years.
> > Yes, it was RFC 1092/1093 that nicely documented the problem with strictly hierarchical > > EGP routing when the topology actually wasn't hierarchical... EGP+IGRP combined with the > > exceptions for the interesting lateral connections often resulted in breakage for anyone > > at multiple NSFNET NSS connections (e.g. CSNET with NSS 8/JVNC, NSS 6/SDSC) unless some > > real care was taken in configs. > > 1989 is the earliest we have for documenting that? We knew it at least in 1986. I think some is documented in the Mills/HWB paper at SIGCOMM '87. Craig From scott.brim at gmail.com Fri Feb 8 12:54:15 2013 From: scott.brim at gmail.com (Scott Brim) Date: Fri, 8 Feb 2013 15:54:15 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130208203016.C252628E137@aland.bbn.com> References: <20130208203016.C252628E137@aland.bbn.com> Message-ID: On Fri, Feb 8, 2013 at 3:30 PM, Craig Partridge wrote: > I think some is documented in the Mills/HWB paper at SIGCOMM '87. January 2, 1987 is when we hooked up Cornell and Columbia and created a routing loop with ARPAnet that blackholed all of HP's traffic. Dave Mills said "we need route filters" and etc. From louie at transsys.com Fri Feb 8 13:15:02 2013 From: louie at transsys.com (Louis Mamakos) Date: Fri, 8 Feb 2013 16:15:02 -0500 Subject: [ih] The story of BGP? In-Reply-To: References: <20130208203016.C252628E137@aland.bbn.com> Message-ID: <5247B7C4-E134-4228-BC46-8A0BDF37FFFE@transsys.com> On Feb 8, 2013, at 3:54 PM, Scott Brim wrote: > On Fri, Feb 8, 2013 at 3:30 PM, Craig Partridge wrote: >> I think some is documented in the Mills/HWB paper at SIGCOMM '87. > > January 2, 1987 is when we hooked up Cornell and Columbia and created > a routing loop with ARPAnet that blackholed all of HP's traffic. Dave > Mills said "we need route filters" and etc. There's all sorts of subtle behavior in there, too.
Early NSFNET days had regional networks and university networks with "backdoor" connectivity trying to share RIP to "make it work" somehow. RIP can only count so high, and 16 == infinity. If you think about the network diameter, at 16 hops you have the zone of death, where a route announcement is "poisoned" and unreachable even via a default route the long way around. This all seems so very obvious now, but we were gluing stuff together with spit, baling wire and gated. I recall there are some very carefully crafted tables of metric transformations inside of gated when injecting routes learned from, e.g., HELLO into RIP and vice versa. Louis Mamakos From LarrySheldon at cox.net Fri Feb 8 13:43:28 2013 From: LarrySheldon at cox.net (Larry Sheldon) Date: Fri, 08 Feb 2013 15:43:28 -0600 Subject: [ih] Domains and Networks (was Re: The story of BGP?) In-Reply-To: References: <51144A0F.1060504@cox.net> Message-ID: <51157180.2060307@cox.net> On 2/8/2013 7:17 AM, John Day wrote: > Yes, both intra-domain and inter-domain routing as well as the detailed > structure of the network layer, i.e. dividing it into 3 sublayers, were > developed in OSI. > > This is what led to the idea of using link state locally (for networks) > and distance vector globally (for internets). There was a good debate > on intra-domain routing between a proposal by Dave Piscatello and Dave > Oran. They ended up choosing Oran's, which became IS-IS. Piscatello's > proposal had some interesting properties. I guess I never got into those parts of the world (ANSI and OSI). A shame that technical folks re-use words and essentially destroy their meanings for people not confined to one clique. In my admin role I was constantly having to deal with people who could not separate IP addressing from domain addressing. A trivial nit, but worth raising because I was reminded that the world was bigger than my clique.
>> f.anthony.n.finch http://dotat.at/ >> Forties, Cromarty: East, veering southeast, 4 or 5, occasionally 6 at >> first. >> Rough, becoming slight or moderate. Showers, rain at first. Moderate >> or good, >> occasionally poor at first. I'll work on it. -- Requiescas in pace o email Ex turpi causa non oritur actio ICBM Data: http://g.co/maps/e5gmy Two identifying characteristics of System Administrators: Infallibility, and the ability to learn from their mistakes. (Adapted from Stephen Pinker) From jnc at mercury.lcs.mit.edu Fri Feb 8 14:29:46 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 8 Feb 2013 17:29:46 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130208222946.55C1918C125@mercury.lcs.mit.edu> > From: Louis Mamakos > This all seems so very obvious now, Well, it was obvious to _some_ of us back then, too... :-) But your basic point (about 20/20/hindsight) ties into something I've been pondering, which is 'why didn't the routing people (of whom I am one) move _promptly_ to do an EGP3, a short-term upgrade that would fix the most pressing problems - why did we all get de-railed into the whole expansive Open Routing thing'? And I think the answer is that we looked at things very differently back then than we do/would now. My sense/recollection is that we were more focused on doing a really good design - for the very long term, since I think we could all see the eventual lifetime and growth of the Internet - and not just throwing something together that would last us a couple of years. (Think DNS, which was clearly a design intended for indefinite life, and which has to some degree met those goals - although of course we've had to retrofit security.) If you look at the Requirements document for Open Routing, we clearly wanted to do something which would meet expansive long-term goals.
In particular, in the 1987-88 timeframe, I don't think any of us foresaw how rapidly the Internet would start to grow in the near future (early 90s), or how key a piece of infrastructure it would become. And we didn't foresee how quickly the pressure to provide 'something better' would become excruciating. I think we probably thought we had more time than we actually did, and were more focused on doing a really good design, a la DNS - only the technical problems of a large-scale routing architecture were much harder than the technical problems of a large-scale name resolution system. (Which is part of why things moved slowly....) Also, nowadays we have a much better understanding of how economics is key in protocol deployment, of how the short-term cost/benefit ratio is really crucial. I think we were still mentally operating in a 'we design the right thing and DARPA tells everyone to go deploy it' model. The notion that the market/users (and from the perspective of the routing designers, the operations people at the regionals were 'customers') would really drive the evolution (as opposed to simply telling us what their requirements were, and then sitting back to wait for the answer to be designed and delivered) was foreign to us at that point. And so the Internet is now stuck with this obsolete 1960s-grade routing architecture (in architectural terms, the whole BGP-4/IGP system is really not that much advanced over the routing in Baran's original design)... Noel From galmes at tamu.edu Fri Feb 8 15:42:45 2013 From: galmes at tamu.edu (Guy Almes) Date: Fri, 08 Feb 2013 17:42:45 -0600 Subject: [ih] The story of BGP? In-Reply-To: References: <20130207224846.B487318C0D8@mercury.lcs.mit.edu> <6352CB9C-9B76-4CF1-BD67-85C6A0A874B4@transsys.com> Message-ID: <51158D75.2010302@tamu.edu> Scott et al., The last few days have been busy, so I'm only now reviewing this *very interesting* thread.
I'll focus on the first of the original questions that Justine asked: Where did it come from? In the late 1980s I was leading one of the NSFnet regional networks and becoming active in the IETF. We had an Interconnectivity Working Group to sort out issues involving the interplay of the new (e.g. non-government) parts of the rapidly growing Internet. The situation in the early NSFnet days included the use of EGP2 as the common exterior gateway protocol. The motivation was captured well by several (including Yakov in his Google tech talk): <> EGP2 would exchange complete sets of routes and it was layered directly over IP, so as the number of network numbers increased, it could not keep up. <> EGP2 presupposed a very flat AS interconnectivity graph, essentially ARPAnet-centric to a shocking degree. Our little Interconnectivity Working Group was told *not* to invent a new protocol, and we were asked in 1988 to comment on a then-draft spec for a successor to EGP2, viz. EGP3. EGP3 was, in many ways, well done and it was incremental and thus might have scaled well with regard to the rapidly growing number of networks. But EGP3 remained 'flat' in the sense above. Although we could see many trends in the evolution of Internet AS topology, I was one of those who, attracted by the many advantages of the hierarchical NSFnet backbone-regional-campus structure, wanted to retain a notion of hierarchy of ASes, while recognizing that each agency (think of 'agency' in the broad sense of an organization with a 'backbone' of sorts) might have its own top-level. Thus we considered what it would take to support a kind of forest of ASes, i.e., a set of hierarchical tree-structures. Could you make a minimal change to EGP3 and capture this forest idea? Maybe, by keeping two or maybe three ASes to capture where a route came from, things would work.
At one point in a break during a meeting at NASA Ames, we discussed a computer programming idea called the "zero one infinity" concept, where you should have either zero of something, exactly one of something, or an open-ended array of many of something. If keeping more than one AS was needed, maybe we should keep the whole AS path for each route. This idea was discarded at the time, but obviously reemerged in BGP and it is one of the key good ideas of BGP. So we were reluctant to jettison the idea of hierarchy, and we were reluctant to entertain variable-length AS-paths in a protocol, and we'd been told not to do a protocol. Clearly, in hindsight, we were too timid. This was the situation in late 1988 and at the Interconnectivity Working Group session at the January 1989 IETF meeting in Austin. During a break after that working group session, Yakov Rekhter and Kirk Lougheed wrote the three napkins that made key breakthroughs: <> use of TCP to get rid of the message length issue (and solve other problems), <> incremental updates, and (to me most important) <> use of a full AS path for each route. Yakov does a great job describing this "three napkin" design and how, within a few months, there were multiple, well two at least, independent implementations. With BGP, any motivation for doing EGP3 evaporated. By the Hawaii IETF (October 1989), there was already the beginning of a version 2 BGP. Practicality, implementability, and flexibility were so evident so quickly. I hope this is useful to Justine and others. And I'll leave to others the telling of the story post-1989. -- Guy On 2/8/13 2:02 PM, Scott Brim wrote: > On Fri, Feb 8, 2013 at 2:41 PM, John Curran wrote: >>> One of the most entertaining moments in my history of IETF involvement >>> was when Hans-Werner Braun and I explained NSFNet and ARPAnet routing >>> interworking. Everything was still hierarchical so we did it all with >>> RIP and a lot of following default routes. Dave Clark slapped his >>> forehead.
Jon Postel just shook his head. Yes we needed something >>> like BGP but that took a few years. >> >> Yes, it was RFC 1092/1093 that nicely documented the problem with strictly hierarchical >> EGP routing when the topology actually wasn't hierarchical... EGP+IGRP combined with the >> exceptions for the interesting lateral connections often resulted in breakage for anyone >> at multiple NSFNET NSS connections (e.g. CSNET with NSS 8/JVNC, NSS 6/SDSC) unless some >> real care was taken in configs. > > 1989 is the earliest we have for documenting that? We knew it at least in 1986. > From craig at aland.bbn.com Fri Feb 8 16:14:59 2013 From: craig at aland.bbn.com (Craig Partridge) Date: Fri, 08 Feb 2013 19:14:59 -0500 Subject: [ih] The story of BGP? Message-ID: <20130209001459.A32DB28E138@aland.bbn.com> > But your basic point (about 20/20/hindsight) ties into something I've been > pondering, which is 'why didn't the routing people (of whom I am one) move > _promptly_ to do an EGP3, a short-term upgrade that would fix the most > pressing problems - why did we all get de-railed into the whole expansive > Open Routing thing'? > > And I think the answer is that we looked at things very differently back then > than we do/would now. > > My sense/recollection is that we were more focused on doing a really good > design - for the very long term, since I think we could all see the eventual > lifetime and growth of the Internet - and not just throwing something > together that would last us a couple of years. (Think DNS, which was clearly > a design intended for indefinite life, and which has to some degree met those > goals - although of course we've had to retrofit security.) If you look at > the Requirements document for Open Routing, we clearly wanted to do something > which would meet expansive long-term goals. Hi Noel: This doing it right long term versus doing something that solved an immediate need issue shows up repeatedly in IETF behavior in the period 1989-1994 or so.
Routing was one. Network Management was another. 8-bit Email nearly got wrapped around the axle too. Craig From jcurran at istaff.org Fri Feb 8 16:53:38 2013 From: jcurran at istaff.org (John Curran) Date: Fri, 8 Feb 2013 19:53:38 -0500 Subject: [ih] The story of BGP? In-Reply-To: <20130209001459.A32DB28E138@aland.bbn.com> References: <20130209001459.A32DB28E138@aland.bbn.com> Message-ID: <7B5ECA4D-28BC-4E20-96A3-48D096519426@istaff.org> On Feb 8, 2013, at 7:14 PM, Craig Partridge wrote: > > This doing it right long term versus doing something that solved an immediate > need issue shows up repeatedly in IETF behavior in the period 1989-1994 or so. > Routing was one. Network Management was another. 8-bit Email nearly got wrapped around the axle too. "IPv4 to IPng" has definitely earned a spot on that list; we solved the apparent immediate need, and decided not to undertake a loc-id split nor variable/path-based locators. I probably could live with this tradeoff for getting it done fast, but we actually didn't get it done, instead leaving transition out of the spec for the next generation and not even getting any actual backward compatibility with IPv4 as a result. If we're not going to "do it right for the long-term", it's kinda important that we nail getting the "immediate" solution right... /John Disclaimer: My $.02; YMMV. From paul at redbarn.org Fri Feb 8 19:08:33 2013 From: paul at redbarn.org (Paul Vixie) Date: Fri, 08 Feb 2013 19:08:33 -0800 Subject: [ih] The story of BGP? In-Reply-To: <7B5ECA4D-28BC-4E20-96A3-48D096519426@istaff.org> References: <20130209001459.A32DB28E138@aland.bbn.com> <7B5ECA4D-28BC-4E20-96A3-48D096519426@istaff.org> Message-ID: <5115BDB1.30503@redbarn.org> John Curran wrote: > On Feb 8, 2013, at 7:14 PM, Craig Partridge wrote: >> This doing it right long term versus doing something that solved an immediate >> need issue shows up repeatedly in IETF behavior in the period 1989-1994 or so. >> Routing was one. Network Management was another.
8-bit Email nearly got wrapped around the axle too. > > "IPv4 to IPng" has definitely earned a spot on that list; we solved the > apparent immediate need, and decided not to undertake a loc-id split nor > variable/path-based locators. I probably could live with this tradeoff > for getting it done fast, but we actually didn't get it done, instead > leaving transition out of the spec for the next generation and not even > getting any actual backward compatibility with IPv4 as a result. > > If we're not going to "do it right for the long-term", it's kinda important > that we nail getting the "immediate" solution right... the reason, in 1996, why we didn't put dnssec on its own port number, and fix all of the other crud that was wrong on udp/53, is that we wanted it to be done before 2000. if we'd known we had sixteen years to work with, we'd've cut deeper earlier. paul From tony.li at tony.li Fri Feb 8 21:03:14 2013 From: tony.li at tony.li (Tony Li) Date: Fri, 8 Feb 2013 21:03:14 -0800 Subject: [ih] The story of BGP? In-Reply-To: References: Message-ID: On Feb 7, 2013, at 10:23 AM, Justine Sherry wrote: > Does anyone have any pointers to a summary of this history or > interesting experiences to share? Hi Justine, I'm coming into this a bit late in the conversation, but being a first hand participant wanted to offer my $.02. Rather than pester the list with a whole lot of individual replies, I'm going to aggregate my replies to all of the comments that have been made to the list so far. Please see the original messages for correct attributions. > I believe you'll find a lot of what you want to know in Yakov Rekhter's > talk on BGP at 18 (http://www.youtube.com/watch?v=_Mn4kKVBdaM). Seconded. This is just a primer for everything else, of course. My part of the story begins in 1991, when I joined cisco and took over maintenance of EGP and BGP.
> This was in addition to EGP-2 suffering under the ever-increasing size of the route announcements. If I recall, there was a lack of incremental updates and EGP-2 relied on IP reassembly of very large fragmented IP datagrams. A single dropped fragment in practice rendered the entire announcement useless, and I think there were some concerns on how large a packet some operating systems were going to be willing to reassemble. The NSFNET and scores of networks would only add to the pressure of the ever-growing size of the EGP announcements. This is exactly correct. > Was the shutdown of the ARPAnet a big factor? Absolutely. The creation of the NSFnet regionals added thousands of prefixes to the routing tables very quickly, causing EGP updates to grow rapidly. With cisco's implementation, IP reassembly hadn't been truly stressed, and EGP uncovered several bugs, including internal buffer sizes that were simply unable to contain the reassembled packets. These buffer sizes had to be increased several times to keep up with the table growth. This pain became obvious to everyone, and was coupled with the significant pain of route filtering that had to be used to prevent the looping that has already been discussed. The operator community was very vocal in their need for something, and BGP at that point was the only real alternative. As of 1991, cisco's implementation was somewhat immature. While it largely complied with the letter of the specification, it had numerous structural issues that became apparent with even moderate usage. The operational community (much credit to smd, asp, roll, vaf, et al.) began testing the application of BGP-3 by running it in parallel with EGP, in some cases by route redistribution (aka route leaking) and in many cases by tunneling. This led to very frequent (usually daily) bug reports that caused us to generate very frequent software changes (usually daily). Many of the structural issues within the implementation were addressed in this cycle.
The hub of this activity was the isp-geeks mailing list, internal to cisco and its customers running these test images. > I'm not sure where 1994 comes from (that's the date on BGP-4, is that it?), > but it's wrong. The transition was more in the '91-'93 window. The urgency to publish the RFC was far lower than the need to have working code and a working network. As things stabilized, carriers started to phase BGP into production, usually on a peer-by-peer basis as a replacement for EGP, with redistribution still being used to interconnect with the remainder of routing. As this process continued, it rapidly encompassed the full set of Tier 1 ISPs. > Other random thought: CIDR arrived in BGP-4. I remember the transition from BGP-3 to BGP-4 and while strictly speaking not a flag-day, the coexistence of both was intended to be limited because of the difficulty in understanding how classful and classless announcements would coexist. CIDR was an outcome of the ROAD discussions. It became obvious that we needed to be classless and carry prefixes. Yakov had already worked out the mechanisms and issues with doing this within IDRP, so he dropped that into the BGP spec and we massaged that into BGP-4. Paul Traina took over the Cisco implementation and did the enhancements for BGP-4. The primary issue there was all about how to deal with aggregation. Integration with classful announcements was obviously an issue, so what we tried to do was to deploy BGP-4 with only classful prefixes at first. Once that stabilized and was pervasive, we added prefixes. > (2) How is the BGP we switched to in 1994 different from the BGP we > use today, and who drove those changes? At the bottom line, it's tough to say that we 'switched' to BGP. As we have been changing the tires on a moving car the whole time, without a single flag day, it has been more a careful process of incremental evolution. As Yakov describes, there have been extensions to BGP, primarily for 2547.
However, it's hard to say that it is very different. For better or worse, we've bolted a bag onto the side, but at the heart, BGP is fundamentally unchanged. While this may seem like we have not made forward progress, I'm actually mostly thankful that we haven't broken things. The network has become MUCH more conservative in its deployment policies since the early days, and with the increased scrutiny, the returns on major changes will be limited.

> Finally, I do remember in the late 90s and early 2000s a bunch of
> research into CPU and network effects of BGP, specifically:
>
> * The behaviour of BGP with non-instantaneous route updates, causing
>   repetitive route additions/withdraws
> * .. and the BGP dampening stuff, for both announcements and CPU churn

It's probably worth noting that not a lot of that work has made it into production and that it's very likely that we could tweak further to improve convergence and stability. As noted, this is an implementation and best-practices issue and not a protocol issue per se.

> Also, nowadays we have a much better understanding of how economics is key in
> protocol deployment, of how the short-term cost/benefit ratio is really
> crucial.

Rather than an economic viewpoint, I view it as a psychology of disaster avoidance. We (all of humanity) seem to be unwilling to make architectural changes to working systems until they are on the brink of collapse. (Ref: "Why the Internet only just works", Mark Handley) Unfortunately, that's not very good engineering, and without the DARPA mandate or similar leadership, it seems like it's impossible to do better.

> And so the Internet is now stuck with this obsolete 1960s-grade routing
> architecture (in architectural terms, the whole BGP-4/IGP system is really
> not that much advanced over the routing in Baran's original design)?

That's hardly fair. Where we are is far past what Baran originally described. Though it's true that it's far short of where we can and should be.
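[Editor's note: for readers unfamiliar with the "dampening stuff" mentioned above, here is a minimal sketch of the penalty/decay idea behind BGP route-flap damping. This is an illustration, not code from the thread; the half-life and thresholds are merely in the spirit of commonly cited router defaults, not authoritative values.]

```python
# Minimal sketch (editor's illustration, not from the thread) of the
# penalty/decay scheme behind BGP route-flap damping: each flap adds a
# fixed penalty, the penalty decays exponentially with a configured
# half-life, and the route is suppressed while the penalty sits above a
# threshold. Parameter values are illustrative.
class FlapDampener:
    def __init__(self, half_life_s=900.0, flap_penalty=1000.0,
                 suppress_at=2000.0, reuse_below=750.0):
        self.half_life_s = half_life_s
        self.flap_penalty = flap_penalty
        self.suppress_at = suppress_at
        self.reuse_below = reuse_below
        self.penalty = 0.0
        self.last_update = 0.0
        self.suppressed = False

    def _decay(self, now):
        # Exponential decay: the penalty halves every half_life_s seconds.
        self.penalty *= 0.5 ** ((now - self.last_update) / self.half_life_s)
        self.last_update = now

    def flap(self, now):
        """Record one announce/withdraw flap at time `now` (seconds)."""
        self._decay(now)
        self.penalty += self.flap_penalty
        if self.penalty >= self.suppress_at:
            self.suppressed = True

    def usable(self, now):
        """True if the route may currently be used/advertised."""
        self._decay(now)
        if self.suppressed and self.penalty < self.reuse_below:
            self.suppressed = False
        return not self.suppressed

d = FlapDampener()
for t in (0.0, 10.0, 20.0):   # three quick flaps push the penalty past 2000
    d.flap(t)
print(d.usable(21.0))         # False: the route is suppressed
print(d.usable(4 * 3600.0))   # True: the penalty has decayed below reuse_below
```

The point of the design is that a route flapping steadily stays suppressed, while one that goes quiet is automatically readmitted once its penalty decays.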
Regards,
Tony

p.s. All errors above are defects straight from my unrefreshed DRAM, which I take full responsibility for. If you have further questions, ask now, because tomorrow it might be gone.

From ian.peter at ianpeter.com Sat Feb 9 14:17:29 2013
From: ian.peter at ianpeter.com (Ian Peter)
Date: Sun, 10 Feb 2013 09:17:29 +1100
Subject: [ih] Lessons to be learnt from Internet history
In-Reply-To: References: Message-ID: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba>

John Curran's comments below are leading me to think it would be worthwhile to document some major lessons which can be learnt from Internet history. Elsewhere, there are substantial efforts underway to define Internet Rights and Principles, Internet Core Values etc, and these efforts hopefully will be useful in internet governance evolution.

So what lessons could be learnt from Internet history which could inform these efforts? I'm happy to suggest a couple to get the ball rolling, but I am sure there are many others. Here's my start.

1. Think long term.

Plenty of examples discussed here (good and bad). We need to plan for an Internet that is around forever, not for a quick fix that patches an immediate problem while giving rise to longer term problems.

2. Keep it open.

Nothing demonstrates this more to me than the difference between an open platform such as the World Wide Web and a proprietary application such as Facebook (the latter now becoming a de facto Internet to a younger generation whose only "Internet" access is via mobile apps). This proprietary ownership raises a series of issues around privacy, access, rights, and jurisdiction which are quite different on an open platform.

Anyway, that's a start. I am sure there are others and a couple are in my mind as I write. But I would be interested to hear from this list as regards the lessons which either have been learnt, or should have been learnt, from Internet history thus far.
Ian Peter

Message: 2
Date: Fri, 8 Feb 2013 19:53:38 -0500
From: John Curran
Subject: Re: [ih] The story of BGP?
To: Craig Partridge
Cc: internet-history at postel.org, Noel Chiappa
Message-ID: <7B5ECA4D-28BC-4E20-96A3-48D096519426 at istaff.org>
Content-Type: text/plain; charset=us-ascii

On Feb 8, 2013, at 7:14 PM, Craig Partridge wrote:
>
> This doing it right long term versus doing something that solved an immediate
> need issue shows up repeatedly in IETF behavior in the period 1989-1994 or so.
> Routing was one. Network Management was another. 8-bit Email nearly got
> wrapped around the axle too.

"IPv4 to IPng" has definitely earned a spot on that list; we solved the apparent immediate need, and decided not to undertake a loc-id split nor variable/path-based locators. I probably could live with this tradeoff for getting it done fast, but we actually didn't get it done, instead leaving transition out of the spec for the next generation and not even getting any actual backward compatibility with IPv4 as a result.

If we're not going to "do it right for the long-term", it's kinda important that we nail getting the "immediate" solution right...

/John

Disclaimer: My $.02; YMMV.

From jnc at mercury.lcs.mit.edu Sun Feb 10 06:07:36 2013
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 10 Feb 2013 09:07:36 -0500 (EST)
Subject: [ih] The story of BGP?
Message-ID: <20130210140736.57CAC18C12A@mercury.lcs.mit.edu>

> From: Tony Li

> I'm not sure where 1994 comes from (that's the date on BGP-4, is that
> it?), but it's wrong.

> The transition was more in the '91-'93 window. The urgency to publish
> the RFC was far lower than the need to have working code and a working
> network.

Are you talking about the transition to BGP-4, or the transition to BGP? My comment was about the transition to use of BGP, which my memory (perhaps falsely) indicated had gotten a fair amount of use quite quickly.
However, it sounds like I may have relied too heavily on the RFC dates, and that BGP came along much more slowly than that?

>> Was the shutdown of the ARPAnet a big factor?

> Absolutely.

I'm trying to see how this can be?

As far as I can work out, the need to move to something better than EGP was driven by the growth of the network (growth both in terms of number of total destinations, as well as the richness of the inter-connections), and that growth was not driven in any way by the ARPANET (or its demise).

Rather, the growth was driven by the deployment of LAN technologies (which provide very high speeds), and the increasing numbers of smaller machines (initially mini-computers, and then personal machines), which provided a need for LANs (and hence for the growth in the number of destinations). There was, slightly later on, a parallel evolution in transmission technology (i.e. fiber), which drove inter-connection richness (and higher long-distance speeds).

(Simply speeding up the ARPANET, to avoid the growth of the alternate backbones, was not an option. The ARPANET's whole architecture was just not suitable for high speeds. In particular, the 8-outstanding-packet limit would have been a killer - particularly as all IP traffic shared one 'link'.)

As far as I can see, the ARPANET was only important as the first long-distance backbone, to tie together all the disparate sites - and thus the need for a protocol family which could handle such a large network (unlike other early protocol families like PUP and CHAOS, which were clearly single-site oriented, although they eventually were used in slightly larger contexts).

As best I can recall, when the ARPANET was finally turned off (after a long process of shrinkage) it was pretty much a non-event, as was the process of shrinkage - people just signed up with regionals - which had originally been set up to serve those people who couldn't get on the ARPANET.
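[Editor's note: the claim about the 8-outstanding-packet limit can be checked with rough numbers. This is an illustration only; the message size and RTT below are assumed values, not measurements from the thread.]

```python
# Rough sketch (editor's illustration; the message size and RTT are assumed
# values). With at most W messages outstanding on a 'link', throughput is
# capped at W * size / RTT, no matter how fast the underlying lines are.
def throughput_ceiling_bps(window_msgs: int, msg_bits: float, rtt_s: float) -> float:
    """Window-limited throughput ceiling, in bits per second."""
    return window_msgs * msg_bits / rtt_s

# Assumed: ~8000-bit messages and a ~100 ms coast-to-coast round trip.
ceiling = throughput_ceiling_bps(8, 8000, 0.100)
print(f"{ceiling / 1000:.0f} kbit/s")  # prints "640 kbit/s"
```

Under those assumptions, a single 'link' tops out well under a megabit per second, which illustrates why a fixed small window was "a killer" once faster long-haul circuits became available.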
> The creation of the NSFnet regionals added thousands of prefixes to the > routing tables very quickly, causing EGP updates to grow rapidly. > ... > This pain became obvious to everyone, and was coupled with the > significant pain of route filtering that had to be used to prevent the > looping that has already been discussed. All true, but this had nothing to do with the existence, or non-existence, of the ARPANET (see above). Noel From jnc at mercury.lcs.mit.edu Sun Feb 10 06:29:39 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 10 Feb 2013 09:29:39 -0500 (EST) Subject: [ih] The story of BGP? Message-ID: <20130210142939.E0DB818C12A@mercury.lcs.mit.edu> > From: Guy Almes > we were asked in 1988 to comment on a then-draft spec for a successor > to EGP2, viz. EGP3. EGP3 was, in many ways, well done and it was > incremental and thus might have scaled well with regard to the rapidly > growing number of networks. > But EGP3 remained 'flat' in the sense above. If this is true, it probably explains why EGP3 never got much traction. I wish I had a later EGP3 draft spec to confirm this - I'm looking through my pile of paper (and actually organizing and cataloging it!), but so far, no luck. I suspect that's why I went off to do FGP - I suspect I could see the need to support cycles in the topology coming soon. (Proteon was heavily involved in selling to the regionals, and tried to sell to the NSF backbone, so we knew a lot about what they were all up to.) And speaking of FGP, while setting up to file the stuff I was organizing, I ran across a (thin!) folder labelled "FGP"! I won't bore you all with all the details, but a few useful tidbits. It does appear to date from the end of 1986; I see an email from Hans-Werner Braun from October 1986, and a Proteon memo about a potential contract from November, 1986. 
We had apparently discussed it with "Steve" at NSF (Steve Wolff, I assume), to see if we could get money out of them to support the effort, and Scott Brim had offered a chunk of money from NYSRENET.

A short (3-page) design note which I wrote indicates that it had the following goals:

- Ability to handle a larger Internet
- Cycles - no topology restrictions
- Quick adaptation to topology changes
- No counting to infinity
- Low overhead - updates not transmitted every N seconds
- Better metric than just hop-count

Actually, I guess the design note is of some interest, as it reveals how limited my/our understanding of routing was at that date. It's clearly a whole order of magnitude (or more) less advanced than Nimrod. But I digress..:-)

Anyway, one other item of interest in the file is a timeline chart for the proposed contract/effort. It shows things like 'Requirements [definition, I assume]' in December, 1986, 'Spec writing' in Feb/March 1987, implementations in May/June, 'Trials' in June/July, 'Spec update' in August, and 'Spec release' in September.

Probably a little optimistic on the schedule... :-)

Noel

From feinler at earthlink.net Sun Feb 10 16:32:32 2013
From: feinler at earthlink.net (Elizabeth Feinler)
Date: Sun, 10 Feb 2013 16:32:32 -0800
Subject: [ih] Internet History
In-Reply-To: References: Message-ID:

Dear All,

I am glad to see Ian Peter's article on "Lessons to be learned from Internet history." Just preserving it, as you all are doing so thoroughly, is an important function, not to mention lessons that might be learned. FYI, saving Internet History came up for discussion in Geneva in April this year, and is finally culminating in a History BOF at the upcoming IETF meeting in Orlando, FL in March.
The BOF is being held:

IETF 86 Meeting
Caribe Royale
8101 World Center Drive
Orlando, FL
Tel: +1 407-238-8000
Monday, March 11, 2013, 1540-1710, in room Caribbean 1
https://datatracker.ietf.org/meeting/86/agenda.html

Marc Weber of the Computer History Museum here in Mountain View, CA has agreed to serve as Coordinator. Please come by if you are attending the meeting, or pass the word to those who will be there.

As a first step we would just like to identify who is seriously collecting Internet history, the extent of their collections, where located, and how best to interact with them if one has a donation of archives or artifacts - a kind of Internet History FYI or resource document. Some of the major contributors/collectors, such as yourselves, are well known. However, this is not necessarily true in Africa, Asia, the Middle East, and South America where so much is happening. If you have contacts in these regions, please make them aware of the BOF. We welcome their participation.

As we quickly transition from a paper world to a digital one, the question is will we build the Tower of Babel or the Great Library at Alexandria, Internet style? Much work is now being done, but much is yet to come. Very exciting stuff with no end of interesting problems to solve!

Regards,

Jake Feinler
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jnc at mercury.lcs.mit.edu Sun Feb 10 18:57:35 2013
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Sun, 10 Feb 2013 21:57:35 -0500 (EST)
Subject: [ih] The story of BGP?
Message-ID: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu>

> From: Tony Li

>>> The transition was more in the '91-'93 window. The urgency to publish
>>> the RFC was far lower than the need to have working code and a
>>> working network.

>> Are you talking about the transition to BGP-4, or the transition to
>> BGP?

> The transition to BGP.

Ah, OK. But the BGP-3 spec came out in October '91, so it seems the specs didn't lag as much as you seem to be indicating it did?

(There was also a version of the BGP spec which came out in June 1990 - RFC-1163 - which doesn't have a number on it, but that must have been BGP-2, although I'm not sure that term was ever used.)

>>>> Was the shutdown of the ARPAnet a big factor?

>>> Absolutely.

>> I'm trying to see how this can be?

> Recall that with the ARPAnet, we were all directly homed to 14/8 (and
> 10/8 for MILnet).

??? The ARPANET was 10/8, and the MILNET 26/8? 14/8 was the global X.25 network?

> Shutting down the ARPAnet is what triggered the creation of NSFnet,
> which in turn resulted in the NSFnet regional networks.
> ...
> NSFnet would not have been necessary except for the planned
> decommissioning of ARPAnet.

Ah, now I see what your reasoning is. However, I'm not sure this is what happened?

The 56Kbit/s NSFNET was started in 1985 (I remember the Proteon/Cisco/Fuzzball selection meeting), and entered service early in 1986. However, the decision to shut down the ARPANET was made by Mark Pullen, who came to DARPA in 1987. Likewise, the first regionals were started before 1987. IIRC, NYSRENET and SURANET were both started before then.
Yes, the availability of the NSFNET and regionals allowed Mark to 'pull the plug' on the ARPANET - but the evolution of the Internet to a 'multi-backbone' system was already underway when he did so - and, in fact, it was inevitable.

Remember, use of the ARPANET was restricted to people with a DoD/DoE/NASA contract/connection, whereas NSF wanted to make their network accessible to everyone: that was the reason behind CSNET (in the early 1980s) and the 56K-phase NSFNET. In addition, as I indicated previously:

>> Simply speeding up the ARPANET, to avoid the growth of the alternate
>> backbones, was not an option. The ARPANET's whole architecture was
>> just not suitable for high speeds. In particular, the
>> 8-outstanding-packet limit [per 'link'] would have been a killer -
>> particularly as all IP traffic shared one 'link'.)

So, again, I'm not sure the shutdown of the ARPANET was that important an influence in the evolution of the Internet.

BTW, there's a nice site with a bunch of presentations on it:

http://www.nsfnet-legacy.org/

about the background to, and the history of, the NSFNet, and also some of the regionals. One of the sessions:

http://www.nsfnet-legacy.org/archives/04--T1.pdf

includes a presentation by Yakov giving the early history of BGP. (Maybe this is the same thing as the YouTube thingy?)

Noel

From galmes at tamu.edu Mon Feb 11 08:15:33 2013
From: galmes at tamu.edu (Guy Almes)
Date: Mon, 11 Feb 2013 10:15:33 -0600
Subject: [ih] The story of BGP?
In-Reply-To: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu>
References: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu>
Message-ID: <51191925.3020107@tamu.edu>

Noel,
Very good points. Sticking with Justine's original questions and with focus on the transition from the ARPAnet-centric EGP2 era to the early BGP era, what was important was not the ARPAnet being shut down in 1988, but the emergence of the *highly* multi-AS Internet.
And the slightly subtle thing about it was that, not only was the number of ASes growing rapidly, but that their topology became distinctly non-hierarchical.

My memory of the 1986-87 era is that all the stated intentions of official policy makers were to support the continuation of some kind of hierarchy. To speak positively, for example, the quite hierarchical NSFnet structure (with a single backbone connecting roughly two dozen regional networks, each connecting few => dozens of universities) was capable of rapid and somewhat orderly growth due to this hierarchy. And 1987-era talk of the Interagency Research Internet promoted the collegial notion of a kind of forest of such hierarchical structures and this, in principle, was sensible and important in keeping the Internet "whole" during this period.

Thus forms of EGP2 successors that would have supported a tree or a forest or a directed acyclic graph of ASes or a forest of such DAGs (with a distinct up-down aspect to each AS-to-AS connection with the exception of major inter-agency exchange points) had a certain appeal.

But evolving EGP2 to support such an "orderly" forest of DAGs was doomed, both because of the complexity of dealing with it and also because, "on the ground", cyclic/non-hierarchical interconnections of ASes were beginning to happen.

In this context, BGP made several breakthroughs:
<> use of the full AS path as the "metric"
<> cutting the Gordian knot of hierarchy by simply accepting a general topology
<> accepting variable-length fields within BGP messages (if only to support these full AS paths)
<> using TCP and thereby dramatically simplifying the protocol

We've had BGP around for so long that we are tempted to underestimate these breakthroughs. (I omit incremental updates only because it was present in the EGP3 draft and (purely personal opinion) it was less of a breakthrough than the others.) There's much to admire here.
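[Editor's note: the first two breakthroughs listed above go together, since carrying the full AS path makes loop suppression trivial in an arbitrary topology. A minimal sketch, mine rather than the thread's, with AS numbers taken from the documentation range:]

```python
# Sketch (editor's illustration; the AS numbers are from the documentation
# range, not real operators) of the loop suppression that carrying the full
# AS path buys in an arbitrary topology: an AS simply rejects any route
# whose AS_PATH already contains its own number.
def advertise(local_as: int, as_path: list) -> list:
    """Prepend our AS number before passing the route to a neighbor."""
    return [local_as] + as_path

def accept_route(local_as: int, as_path: list) -> bool:
    """Reject the route if our own AS already appears in the path."""
    return local_as not in as_path

# A route originated by AS 64500, propagating 64500 -> 64496 -> 64497:
path = advertise(64497, advertise(64496, [64500]))
print(path)                       # [64497, 64496, 64500]
print(accept_route(64500, path))  # False: the originator sees itself and discards
print(accept_route(64498, path))  # True: a fresh AS may accept it
```

No hierarchy, ordering, or topology restriction is needed for this to work, which is exactly what let BGP "cut the Gordian knot" of the orderly-DAG designs described above.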
The key aspect of BGP that, even after the acceptance of BGP as the common exterior gateway protocol, was non-obvious was the messiness of interpreting/using these full AS paths as "metrics". Yakov's emphasis was that when a given BGP-speaking router receives multiple alternate routes for a given prefix (with differing next-hop border routers and differing full AS paths), the decision of which of the alternate routes to use and propagate is a "local decision". By insisting on its being a purely local decision, BGP itself was, of course, dramatically simplified and the nature of the inter-AS topology was allowed to grow in highly dynamic (viral?) ways.

One can, however, imagine trying to work out a non-local way of interpreting these full AS paths. In hindsight, that would have been a bit doomed, but it was not totally obvious at the time. *If* this had been done, there would have been several advantages:
<> better routes (usually),
<> simpler BGP configurations (by avoiding the often-byzantine tactics to explain the logic of your "local decision" to your border router), and
<> the possibility of a link-state successor to BGP.

And, of course, key disadvantages:
<> elevating local inter-AS routing decisions, which inevitably mixed technical, operational, and business aspects, to being non-local / community / political decisions.

Instead, of course, we have very messy BGP configurations. These stemmed, in part, from the early BGP implementations using the full AS path *length* as the de-facto "metric". That was implementable, but obviously resulted in weak selections of inter-AS routes. The patches to BGP to ameliorate this weakness are perhaps both regrettable and inevitable.

-- Guy
In addition, as I indicated previously: > > >> Simply speeding up the ARPANET, to avoid the growth of the alternate > >> backbones, was not an option. The ARPANET's whole architecture was > >> just not suitable for high speeds. In particular, the > >> 8-outstanding-packet limit [per 'link'] would have been a killer - > >> particularly as all IP traffic shared one 'link'.) > > So, again, I'm not sure the shutdown of the ARPANET was that important an > influence in the evolution of the Internet. > > > BTW, there's a nice site with a bunch of presentations on it: > > http://www.nsfnet-legacy.org/ > > about the background to, and the history of, the NSFNet, and also some of the > regionals. One of the sessions: > > http://www.nsfnet-legacy.org/archives/04--T1.pdf > > includes a presentation by Yakov giving the early history of BGP. (Maybe this > is the same thing as the YouTube thingy?) > > Noel > From dhc2 at dcrocker.net Mon Feb 11 08:58:38 2013 From: dhc2 at dcrocker.net (Dave Crocker) Date: Mon, 11 Feb 2013 08:58:38 -0800 Subject: [ih] The story of BGP? In-Reply-To: <51191925.3020107@tamu.edu> References: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu> <51191925.3020107@tamu.edu> Message-ID: <5119233E.7050805@dcrocker.net> On 2/11/2013 8:15 AM, Guy Almes wrote: > And the slightly subtle thing about it was that, not only was the > number of ASes growing rapidly, but that their topology became > distinctly non-hierarchical. This highlights two different transition milestones, I think: * Moving from a network of independent hosts (machines) to an internetwork of independent networks. * Moving from a monopolistic backbone model to a competitive backbone model. I've always understood BGP to be significant for the latter. 
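[Ed. note: The "local decision" discussed earlier in the thread - each BGP speaker privately ranking alternate routes for a prefix, with AS-path *length* as the de-facto metric - can be sketched as a toy. This is illustrative only: the route attributes, the local_pref values, and the two-step tiebreak (operator preference, then shorter AS path) are assumptions standing in for the much richer real BGP decision process.]

```python
# Toy sketch of a BGP speaker's purely local best-path selection.
# Assumed attributes: a local_pref set by operator policy, and the
# full AS path carried by the route. Ranking: higher local_pref
# wins; among equals, the shorter AS path wins.

def best_path(routes):
    """Pick one route for a prefix using only locally known attributes."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

# Hypothetical alternate routes for one prefix, via three peers:
routes_for_prefix = [
    {"next_hop": "peer-A", "local_pref": 100, "as_path": [701, 1239, 3561]},
    {"next_hop": "peer-B", "local_pref": 100, "as_path": [174, 3561]},
    {"next_hop": "peer-C", "local_pref": 90,  "as_path": [7018]},
]

chosen = best_path(routes_for_prefix)
print(chosen["next_hop"])  # peer-B: equal local_pref, shorter AS path wins
```

Note how peer-C's shortest path loses to policy: the local_pref knob is exactly the kind of "purely local" override that made the outcome opaque to neighboring ASes.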
I've also understood that there were some non-BBN IP backbones, before NSFNET but that routing for them was done in a very hand-crafted manner, and that NSFNET served as a forcing function to produce a routing model that comfortably supported multiple, independent and competing backbones. The point about alternative models for (administrative? topological?) structuring of a backbone's internals (hierarchical vs. non-) is interesting. Might be worth expanding on that... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From tony.li at tony.li Mon Feb 11 09:38:54 2013 From: tony.li at tony.li (Tony Li) Date: Mon, 11 Feb 2013 09:38:54 -0800 Subject: [ih] The story of BGP? In-Reply-To: <51191925.3020107@tamu.edu> References: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu> <51191925.3020107@tamu.edu> Message-ID: Hi Guy, > And, of course, key disadvantages: > <> elevating local inter-AS routing decisions, which inevitably mixed technical, operational, and business aspects to being non-local / community / political decisions. > > Instead, of course, we have very messy BGP configurations. > These stemmed, in part, from the early BGP implementations using the full AS path *length* as the de-facto "metric". > That was implementable, but obviously resulted in weak selections of inter-AS routes. The patches to BGP to ameliorate this weakness are perhaps both regrettable and inevitable. I'll agree with the inevitable portion. As we found when working on IDPR, the need for interesting peering policies in a newly commercialized Internet was paramount. Unlike the benign dictatorship of DARPA or the more cooperative but still segregationist NSFnet, the wild and wooly commercial Internet had many ISPs who were mutually hostile. Sometimes openly so, but more frequently under the table. The ability to craft policy that optimized for local routing at the expense of one's neighbors (i.e., hot potato routing) was one expression of this. 
So, this was indeed an inevitable outcome of commercialization. However, I'm not sure that it was all that regrettable. The result is far more expressive than a centralized (and politicized) routing authority or even link-state protocol could have ever supported. As such, it has proven to be extremely flexible and allowed the Internet to grow freely. If BGP complexity is the price of freedom, it's well worth paying. Regards, Tony From galmes at tamu.edu Mon Feb 11 10:01:10 2013 From: galmes at tamu.edu (Guy Almes) Date: Mon, 11 Feb 2013 12:01:10 -0600 Subject: [ih] The story of BGP? In-Reply-To: References: <20130211025735.6E80C18C0F3@mercury.lcs.mit.edu> <51191925.3020107@tamu.edu> Message-ID: <511931E6.4020809@tamu.edu> Tony, Your points make sense to me. I'll stick with my "regrettable" word, if only in two limited senses: <> the degree to which the elegant "full AS path" metric combined with the "local decision" idea resulted in the current highly complex and somewhat fragile inter-AS routing structure was probably greater than we imagined back in 1989, <> it's just a reminder that both the engineering technology of BGP itself *and* the highly dynamic nature of the operational and business dynamics of the 1989-ish Internet combined in interesting ways. I will *not* contradict that, to the degree that this messiness bought us the wonderful scalability of the Internet, this messiness/complexity was worth it. -- Guy On 2/11/13 11:38 AM, Tony Li wrote: > > Hi Guy, > >> And, of course, key disadvantages: >> <> elevating local inter-AS routing decisions, which inevitably mixed technical, operational, and business aspects to being non-local / community / political decisions. >> >> Instead, of course, we have very messy BGP configurations. >> These stemmed, in part, from the early BGP implementations using the full AS path *length* as the de-facto "metric". >> That was implementable, but obviously resulted in weak selections of inter-AS routes. 
The patches to BGP to ameliorate this weakness are perhaps both regrettable and inevitable. > > > I'll agree with the inevitable portion. As we found when working on IDPR, the need for interesting peering policies in a newly commercialized Internet was paramount. Unlike the benign dictatorship of DARPA or the more cooperative but still segregationist NSFnet, the wild and wooly commercial Internet had many ISPs who were mutually hostile. Sometimes openly so, but more frequently under the table. > > The ability to craft policy that optimized for local routing at the expense of one's neighbors (i.e., hot potato routing) was one expression of this. > > So, this was indeed an inevitable outcome of commercialization. However, I'm not sure that it was all that regrettable. The result is far more expressive than a centralized (and politicized) routing authority or even link-state protocol could have ever supported. As such, it has proven to be extremely flexible and allowed the Internet to grow freely. > > If BGP complexity is the price of freedom, it's well worth paying. > > Regards, > Tony > > From bob.hinden at gmail.com Tue Feb 19 01:14:00 2013 From: bob.hinden at gmail.com (Bob Hinden) Date: Tue, 19 Feb 2013 11:14:00 +0200 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> Message-ID: Ian, > > 1. Think long term. > > Plenty of examples discussed here (good and bad). We need to plan for an Internet that is around forever, not for a quick fix that patches an immediate problem while giving rise to longer term problems. > While this sounds good, I think in practice it has significant problems. Many of the problems we see now were understood when the Internet was first developed, but we didn't have practical solutions to them. 
Had we insisted on solving everything, it's very likely that nothing would have been done, or it would have been impractical to deploy given the technology at the time. Just because you understand the problem, doesn't mean you can solve it. I also note that "forever" is a very long time. We aren't that good at predicting the future to know what will be needed in 10 years, much less a hundred years or more. Bob From jcurran at istaff.org Tue Feb 19 04:38:19 2013 From: jcurran at istaff.org (John Curran) Date: Tue, 19 Feb 2013 07:38:19 -0500 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> Message-ID: <09923E8B-473B-4949-AB97-09C7600C47F0@istaff.org> On Feb 19, 2013, at 4:14 AM, Bob Hinden wrote: >> We need to plan for an Internet that is around forever, not for a quick fix that patches an immediate problem while giving rise to longer term problems. > > While this sounds good, I think in practice it has significant problems. Many of the problems we see now were understood when the Internet was first developed, but we didn't have practical solutions to them. Had we insisted on solving everything, it's very likely that nothing would have been done, or it would have been impractical to deploy given the technology at the time. Just because you understand the problem, doesn't mean you can solve it. IMHO, our challenge has not been in facing problems beyond solving, but rather the tendency to skimp on fully defining problems before moving on to solution phase... Engineers tend to start imagining new fields and protocol exchanges upon hearing of any issue, and yet in many cases the problems we face in the Internet have and will continue to include economic or political aspects which dominate the available solution space. > I also note that "forever" is a very long time. We aren't that good at predicting the future to know what will be needed in 10 years, much less a hundred years or more. 
I agree on this aspect; forever is a long time and not likely something to serve as a useful planning horizon. However, planning for "the foreseeable future", i.e. for as long and as well as we can imagine, _is_ quite reasonable. /John From LarrySheldon at cox.net Tue Feb 19 13:33:12 2013 From: LarrySheldon at cox.net (Larry Sheldon) Date: Tue, 19 Feb 2013 15:33:12 -0600 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> Message-ID: <5123EF98.20905@cox.net> On 2/19/2013 3:14 AM, Bob Hinden wrote: [Ed. note: I don't know who said what is quoted immediately below--"Ian"?] >> 1. Think long term. >> >> Plenty of examples discussed here (good and bad). We need to plan >> for an Internet that is around forever, not for a quick fix that >> patches an immediate problem while giving rise to longer term >> problems. > While this sounds good, I think in practice it has significant > problems. Many of the problems we see now were understood when the > Internet was first developed, but we didn't have practical solutions > to them. Had we insisted on solving everything, it's very likely > that nothing would have been done, or it would have been impractical > to deploy given the technology at the time. Just because you > understand the problem, doesn't mean you can solve it. I think that may be the most prevalent and at the same time most poorly understood problem in all of my experience with systems development. I was a grunt in several large-scale projects to mechanize the production of telephone directories from the data on the service orders. 
The first one was to mechanize all directories--White Pages (delivered to subscribers), information reprint (delivered to the Information Operators monthly) and its supplement (delivered daily--an interesting document printed on IBM 1403's with print trains made of letters lying on their sides), Delivery Lists (arranged for the walking path of the ring-and-flingers) and labels (for the ones that were mailed), and the Yellow Pages. That one accomplished all but the last. The Yellow Pages are, it turns out, a very different product from the White Pages and have little in common with them once you get a few yards away from the presses. The second one was a Bell Labs project that I was not a part of, but since we were the only Operating Company with a functioning White Pages system, we were selected as the trial company. The third one was back with us after Bell Labs gave up. Remember that I was a pretty small gear in each of these, but from my perspective (both then and more maturely -- better aged -- now) the single thing that killed the first two projects was what we have here. In the first case, the designed system was required to be perfect in every regard--an impossible task from the get-go because the policies and practices of the company's Northern Region were very different and often contrary to those of the Southern Region. (Some of us realized that the policies and practices of the Southern Region were more like those of the "independent" company that had the franchise for most of the land in the southern region. My guess is that both we and the "independent" had to use the same printer-contractor because the PUC required that the listings be interfiled.) I think Bell Labs ran aground on a much bigger sand bar--trying to develop a system for 21 or 22 operating companies. We finally succeeded by an undocumented practice of "settling" all issues for 50% (or so) of what was wanted, then immediately filing a change request for 100% of the balance. 
> I also note that "forever" is a very long time. We aren't that good > at predicting the future to know what will be needed in 10 years, > much less a hundred years or more. Another aspect of the "forever" notion is that for any system with more than a few variables, the number of permutations and combinations quickly reaches an astronomical number. Until you get into the Real World, there is no way to know which really exist--there were uncounted "features" that cost a lot that were never exercised, and a similar number of things that had not made the cut that were frequent flyers. (I recall an incident where a program that ran daily in each of the nine computer centers failed in several of them the same night. It turns out that the problem was a coding error that had been made when the program was first coded a number of years before.) There is a technical term that used to be popular for the idea that everything has to be perfect for a thousand years before anything is implemented--analysis paralysis. -- Requiescas in pace o email Two identifying characteristics of System Administrators: Ex turpi causa non oritur actio Infallibility, and the ability to learn from their mistakes. ICBM Data: http://g.co/maps/e5gmy (Adapted from Stephen Pinker) From richard at bennett.com Tue Feb 19 15:54:47 2013 From: richard at bennett.com (Richard Bennett) Date: Tue, 19 Feb 2013 15:54:47 -0800 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: <5123EF98.20905@cox.net> References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> <5123EF98.20905@cox.net> Message-ID: <512410C7.4020003@bennett.com> Well, yeah, it's hard to design future-proof systems but it can be done by building in hooks for the substitution of newer versions of things. My understanding is that the operating consensus in the early 80s was that TCP/IP was a temporary protocol that would someday be replaced wholesale with something better, probably OSI. 
But that didn't happen for a number of reasons and as a result we have this system that's very, very difficult to upgrade. The IPv6 "transition" shows just how hard it is. Specifically, the IPv4 address could have had some sort of format/length indicator, but it doesn't because it was never meant to last. Maybe it's just as well because a length byte in the header probably would have meant that some other vital piece of information would have to go. On 2/19/2013 1:33 PM, Larry Sheldon wrote: > On 2/19/2013 3:14 AM, Bob Hinden wrote: > [Ed. note: I don't who said what is quoted immediately below--"Ian"?] > >>> 1. Think long term. >>> >>> Plenty of examples discussed here (good and bad). We need to plan >>> for an Internet that is around forever, not for a quick fix that >>> patches an immediate problem while giving rise to longer term >>> problems. > >> While this sounds good, I think in practice it has significant >> problems. Many of the problems we see now were understood when the >> Internet was first developed, but we didn't have practical solutions >> to them. Had we insisted on solving everything, it's very likely >> that nothing would have been done, or it would have been impractical >> to deploy given the technology at the time. Just because you >> understand the problem, doesn't mean you can solve it. > > I think that may be the the most prevalent and at the same most poorly > understood in all of my experience with systems development. > > I was a grunt in several large-scale projects to mechanize the > production of telephone directories from the data on the service orders. 
> > The first one was to mechanize all directories--White Pages (delivered > to subscribers), information reprint (delivered to the Information > Operators monthly) and its supplement (delivered daily--an interesting > document printed on IBM 1403's with print trains make of letters lying > on their sides), Delivery Lists (arranged for the walking path of the > ring-and-flingers) and labels (for the ones that were mailed), and the > Yellow Pages. > > That one accomplished all but the last. The Yellow Pages are, it > turns out a very different product from the White Pages and have > little in common with them once you get a few yards away from the > presses. > > The second one was a Bell Labs project that I was not a part of, but > since we were the only Operating Company with a functioning White > Pages system, we were selected as the trial company. > > The third one was back with us after Bell Labs gave up. > > Remember that I was a pretty small gear in each of these, but from my > perspective (both then and more maturely -- better aged -- now) the > single thing that killed the first two project were what we have here. > > In the first case, the designed system was required to be perfect in > every regard--an impossible task from the gitgo because the policies > and practices of the company's Northern Region were very different and > often contrary to those of the Southern Region. (Some of us realized > that the policies and practices of the Southern Region were more like > those of the "independent" company that had the franchise for most of > the land in the southern region. My guess is that both we and the > "independent" had to use the same printer-contractor because the PUC > required that the listings be interfiled.) > > I think Bell Labs ran aground on a much bigger sand bar--trying to > develop a system for 21 or 22 operating companies. 
> > We finally succeeded by an undocumented practice of "settling" all > issues for 50% (or so) of what was wanted, then immediately filing a > change request for 100% of the balance. > >> I also note that "forever" is a very long time. We aren't that good >> at predicting the future to know what will be needed in 10 years, >> much less a hundred years or more. > > Another aspect of the "forever" notion is that for any system with > more than a few variable, the number of permutations and combinations > quickly reaches an astronomical number. Until you get into the Real > World? there is know way to know which really exist--there were > uncounted "features" that cost a lot that were never exercised, and a > similar number of things that had not made the cut that were frequent > flyers. (I recall an incident where a program that ran daily in each > of the nine computer centers failed in several of them the same > night. It turns out that the problem was a coding error that had been > made when the program was first coded a number of years before. > > There is a technical term that used to be popular for the idea that > everything has to be perfect for a thousand years before anything is > implemented--analysis paralysis. > -- Richard Bennett From jack at 3kitty.org Tue Feb 19 22:39:04 2013 From: jack at 3kitty.org (Jack Haverty) Date: Tue, 19 Feb 2013 22:39:04 -0800 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: <512410C7.4020003@bennett.com> References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> <5123EF98.20905@cox.net> <512410C7.4020003@bennett.com> Message-ID: IIRC (it's been almost 35 years!), there was a *lot* of discussion about the need to deal with two semi-conflicting issues: 1) the evolution of the protocol, and 2) the need to have a stable snapshot that could be used to build early real systems. In the TCP2.x era, usage was all in the form of demos and experiments. 
To become real, the technology needed to be usable as communications infrastructure - out of the lab and into production as a service - The Internet. I was one of the first round TCP implementors, who all did a series of the early implementations, with identifiers like TCP2.5, TCP2.5+, etc. You couldn't distinguish these variants easily from packet contents - you had to know what version the other guy was using. That was OK for demos and experiments. I did the Unix TCP, Dave Clark did Multics, Bill Plummer did Tenex, Bob Braden did IBM, Jim Mathis did MOS, Dave Mills did Fuzzballs, etc. (Who did I forget...? Sorry...) That "stable snapshot" was intended to be created as TCP 3. But days after TCP 3 came out some problems were discovered, so TCP 4 was quickly created and led through a lengthy process of documentation and standardization. The concrete set very hard. BTW, TCP 3 is a good illustration of a mistake, i.e., a lesson learned. Every earlier TCP in the 2.x series was implemented before it was documented formally. The document captured what had been proven to work. TCP 3 was invented and documented before it was implemented. It didn't work. TCP 4 was implemented, and then documented and standardized. It works. There was a *lot* of discussion about what belonged in packet headers, and especially how big fields should be. We considered, and rejected, larger addresses. One major concern was efficiency. Most network packets at the time were short - often just a single byte of user data from remote-login echoing from users typing at terminals. Once you put all the headers in front of that byte, you'd end up with a *maximum* efficiency of use of the communications line of a few percent. That kind of characteristic of the technology could have killed it, if the beancounters heard that the $$$$s being spent on those expensive comm lines were 95+% overhead. 
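[Ed. note: The efficiency concern described here is easy to quantify. Assuming the familiar minimum header sizes (20 bytes IPv4 + 20 bytes TCP) and ignoring lower-layer framing, a single echoed character works out roughly as follows:]

```python
# Rough line efficiency of carrying one byte of user data, as in
# character-at-a-time remote-login echoing. Header sizes are the
# familiar IPv4/TCP minimums (20 + 20 bytes, no options); link-layer
# framing overhead is ignored, so the real figure was even worse.

ip_header = 20   # bytes, minimum IPv4 header
tcp_header = 20  # bytes, minimum TCP header
payload = 1      # one echoed character

efficiency = payload / (ip_header + tcp_header + payload)
print(f"{efficiency:.1%} useful payload")  # ~2.4%, i.e. 97%+ overhead
```

Which is exactly the "few percent maximum efficiency" figure that worried the beancounters.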
So we tried to keep fields very small, and even invented "IP options" as a way to extend a header to carry more stuff if you really needed it for some particular application. We knew that some of those header fields would be too small. But we didn't agree on which ones. So we expected V4 to be replaced, quickly, and a series of Vx++ to be the norm, as it had been in the prior year or two. We certainly didn't expect IPV4 to last for 30+ years! In order to permit evolution, and to make it possible for software to unambiguously determine which vintage of the protocol it was receiving, the IPV4 header contained 4 bits of version number, which had to be "4" for IPV4. Those 4 bits were consciously and explicitly put at the beginning of the packet header. Therefore any subsequent version, e.g., V5, V6, etc., could do whatever it wanted with the rest of the header - change address lengths, etc. Any "old" V4 implementation could reliably detect that a V5+ packet was beyond its capability, and reject it rather than doing something stupid with the rest of the header as if it were V4. (See RFC722 for the thoughts behind this). Basically, unless the first 4 bits of a packet contained the value 4, an IPV4 implementation would throw the packet away. If/when a V5, V6, etc., was defined, they could define the rest of the packet header however they liked. This was the ultimate "hook" for evolution. At every quarterly meeting, there was always a long (15-20 item) list of "things we have to deal with later", such as "Expressway Routing" or "Multi-homed Hosts". So we knew there were longer term issues to be solved, but if they didn't impact immediate needs, the current IP version could be used to implement user systems. So it was well known that IPV4 was an interim solution. In fact, given the prior experience with earlier versions of TCP/IP, we assumed the protocol (including header formats, state machine, etc.) would keep changing. 
I think we expected that the next version after IPV4 would occur after one or two more meetings, e.g., in about 6 months max. Of course it didn't happen that way - IPV4 had its 30th birthday not long ago. There *was* a general feeling that IP would be eventually superseded by OSI/CCITT/ISO efforts, once they got their act together and produced the "real" solution to all requirements. Meanwhile, we just kept writing code and deploying systems that used TCP/IP to interact. In retrospect, I think that the documentation and standardization of IPV4 was probably the critical event that caused the decades-long extension of the IPV6 evolution. V4 was a stable platform, and it worked for a lot of the things that people wanted to do. So it quickly acquired a user base, and an installed base of equipment and software. With every new installation, it became more tedious to change. Also important, there was little if any pressure to advance. Were there *any* applications that people wanted to do over the Internet that couldn't be done because of IPV4 limitations? Running out of addresses is the only showstopper I can think of, and it's just happening, probably, now. Unless someone creates some new magic like NAT. IMHO, IPV6 has been around for quite a while, but the delay in its adoption hasn't been for technical reasons. It's more that there has been little need to do so, as perceived from the users' vantage point. It's not that adoption has been too hard; it's that it hasn't been urgent enough to overcome inertia. Getting back to lessons learned... I think the primary lesson is that one should think long-term, but act short-term. Going through a series of rapid cycles of deploying and operating real systems unearthed situations and problems that would not have been anticipated by years of whiteboard-ware and thinking and meeting. You have to do both. 
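[Ed. note: The evolution hook described above - the 4-bit version number deliberately placed at the very start of the packet, so a receiver can reject unknown versions before interpreting anything else - can be sketched as follows. The function names are illustrative, not from any particular implementation.]

```python
# Sketch of the version check described above: the high nibble of the
# first header byte is the IP version. Because it comes first, a V4
# node can discard packets of any other version without misreading
# the rest of a header whose layout it doesn't know.

def ip_version(packet: bytes) -> int:
    """Top 4 bits of the first byte, per the IPv4 header layout."""
    return packet[0] >> 4

def accept_v4(packet: bytes) -> bool:
    """A V4-only node keeps a packet only if its version nibble is 4."""
    return len(packet) >= 20 and ip_version(packet) == 4

v4_packet = bytes([0x45]) + bytes(19)  # version 4, IHL 5, rest zeroed
v6_packet = bytes([0x60]) + bytes(39)  # version 6: a V4 node drops it

print(accept_v4(v4_packet), accept_v4(v6_packet))  # True False
```

Everything after that first nibble is up for grabs in a later version, which is exactly why IPv6 could change the address length without confusing V4 nodes.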
/Jack Haverty On Tue, Feb 19, 2013 at 3:54 PM, Richard Bennett wrote: > Well, yeah, it's hard to design future-proof systems but it can be done by > building in hooks for the substitution of newer versions of things. My > understanding of TCP/IP is the operating consensus in the early 80s was that > TCP/IP was a temporary protocol that would someday be replaced wholesale > with something better, probably OSI. But that didn't happen for a number of > reasons and as a result we have this system that's very, very difficult to > upgrade. The IPv6 "transition" shows just how hard it is. > > Specifically, the IPv4 address could have had some sort of format/length > indicator, but it doesn't because it was never meant to last. > > Maybe it's just as well because a length byte in the header probably would > have meant that some other vital piece of information would have to go. > > > > On 2/19/2013 1:33 PM, Larry Sheldon wrote: >> >> On 2/19/2013 3:14 AM, Bob Hinden wrote: >> [Ed. note: I don't who said what is quoted immediately below--"Ian"?] >> >>>> 1. Think long term. >>>> >>>> Plenty of examples discussed here (good and bad). We need to plan >>>> for an Internet that is around forever, not for a quick fix that >>>> patches an immediate problem while giving rise to longer term >>>> problems. >> >> >>> While this sounds good, I think in practice it has significant >>> problems. Many of the problems we see now were understood when the >>> Internet was first developed, but we didn't have practical solutions >>> to them. Had we insisted on solving everything, it's very likely >>> that nothing would have been done, or it would have been impractical >>> to deploy given the technology at the time. Just because you >>> understand the problem, doesn't mean you can solve it. >> >> >> I think that may be the the most prevalent and at the same most poorly >> understood in all of my experience with systems development. 
>> >> I was a grunt in several large-scale projects to mechanize the production >> of telephone directories from the data on the service orders. >> >> The first one was to mechanize all directories--White Pages (delivered to >> subscribers), information reprint (delivered to the Information Operators >> monthly) and its supplement (delivered daily--an interesting document >> printed on IBM 1403's with print trains make of letters lying on their >> sides), Delivery Lists (arranged for the walking path of the >> ring-and-flingers) and labels (for the ones that were mailed), and the >> Yellow Pages. >> >> That one accomplished all but the last. The Yellow Pages are, it turns >> out a very different product from the White Pages and have little in common >> with them once you get a few yards away from the presses. >> >> The second one was a Bell Labs project that I was not a part of, but since >> we were the only Operating Company with a functioning White Pages system, we >> were selected as the trial company. >> >> The third one was back with us after Bell Labs gave up. >> >> Remember that I was a pretty small gear in each of these, but from my >> perspective (both then and more maturely -- better aged -- now) the single >> thing that killed the first two project were what we have here. >> >> In the first case, the designed system was required to be perfect in every >> regard--an impossible task from the gitgo because the policies and practices >> of the company's Northern Region were very different and often contrary to >> those of the Southern Region. (Some of us realized that the policies and >> practices of the Southern Region were more like those of the "independent" >> company that had the franchise for most of the land in the southern region. >> My guess is that both we and the "independent" had to use the same >> printer-contractor because the PUC required that the listings be >> interfiled.) 
>> >> I think Bell Labs ran aground on a much bigger sand bar--trying to develop >> a system for 21 or 22 operating companies. >> >> We finally succeeded by an undocumented practice of "settling" all issues >> for 50% (or so) of what was wanted, then immediately filing a change request >> for 100% of the balance. >> >>> I also note that "forever" is a very long time. We aren't that good >>> at predicting the future to know what will be needed in 10 years, >>> much less a hundred years or more. >> >> >> Another aspect of the "forever" notion is that for any system with more >> than a few variable, the number of permutations and combinations quickly >> reaches an astronomical number. Until you get into the Real World? there is >> know way to know which really exist--there were uncounted "features" that >> cost a lot that were never exercised, and a similar number of things that >> had not made the cut that were frequent flyers. (I recall an incident where >> a program that ran daily in each of the nine computer centers failed in >> several of them the same night. It turns out that the problem was a coding >> error that had been made when the program was first coded a number of years >> before. >> >> There is a technical term that used to be popular for the idea that >> everything has to be perfect for a thousand years before anything is >> implemented--analysis paralysis. >> > > -- > Richard Bennett > From jnc at mercury.lcs.mit.edu Wed Feb 20 09:28:44 2013 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 20 Feb 2013 12:28:44 -0500 (EST) Subject: [ih] Lessons to be learnt from Internet history Message-ID: <20130220172844.25D1718C0A5@mercury.lcs.mit.edu> > From: John Curran >> Many of the problems we see now were understood when the Internet was >> first developed, but we didn't have practical solutions to them. 
> our challenge has not been in facing problems beyond solving, but rather > the tendency to skimp on fully defining problems before moving on to > solution phase... > ... in many cases the problems we face in the Internet have and will > continue to include economic or political aspects which dominate the > available solution space. My sense is that the picture is more complicated. In some cases, our understanding of the issues is indeed now a lot more complete than it was in the early days of the Internet - we didn't do better then because we couldn't. Examples of this include congestion (pre-Van), and routing. More recently, the evolution of HTML is another case where we had to learn as we went. In some of these places, we managed to include enough generality that we could deploy better stuff as our understanding increased - e.g. re-transmission and congestion. I'm not sure we had an _explicit_ goal of being able to deploy better algorithms, but given that we were trying different stuff then, I think it just naturally happened that the thing we deployed had that flexibility. In other places, we did have knowledge, but we deliberately chose to not do things: some examples are security, separation of location and identity, and addressing in general. Admittedly, security is a complex situation, because we have had some new tools become available (e.g. public keys) over time. And also I think security suffered from some of what you allude to with disparate external factors - e.g. early work on secure email proposed a model that aligned well with one group of users (military/government), but not the 'ordinary' users, leading to poor uptake. But we surely could have done better than we did (I speak of security overall, not just email). > forever is a long-time and not likely something to serve as a useful > planning horizon. However, planning for "the foreseeable future", i.e. > for as long and as well as we can imagine, _is_ quite reasonable. 
Yes, but a lot of the time I think it's pure luck whether we get something with a good lifetime or not. (I think a big part of that luck is the person who winds up doing the design for a particular newly-needed piece, to be frank. Some are much better than others.) And the choices are often driven by short-term considerations, and trying to put out fires. Take DNS for example. We were lucky there - the design had a lot of room to grow. But it could easily not have. The recent discussion of the origins of BGP shows all these factors at work. We didn't have great knowledge of routing, but we had some. Nonetheless, we didn't do something that was on the outer limits of what we could do - for reasons I won't take time to analyze in detail (basically, it was 'Pogoitis' - "We have met the enemy", etc). I suspect the BGP designers probably wouldn't have guessed that it would successfully function as well as it does for a system of this size. And later on we did have a fair amount of work go into more advanced routing architectures, but they were left to the side (again, for complex reasons I won't analyze here). Balancing 'getting it running with the resources available' and 'doing something with a long lifetime' is still a struggle. I was recently driven to desperation by the unwillingness of a key LISP protagonist to adopt a packet format (aka interface semantics) which had more flexibility and adaptability. The reason? 'It was easier/quicker to do the kludgy hack.' But I guess all human works are like this - a combination of varying levels of luck, skill, chance, people and circumstances.
Noel From jack at 3kitty.org Wed Feb 20 13:08:57 2013 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 20 Feb 2013 13:08:57 -0800 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: References: <6B7D7170BF0B4BA6924309407E8D959C@Toshiba> <5123EF98.20905@cox.net> <512410C7.4020003@bennett.com> Message-ID: On Tue, Feb 19, 2013 at 10:39 PM, Jack Haverty wrote: > I was one of the first round TCP implementors, who all did a series of > the early implementations, with identifiers like TCP2.5, TCP2.5+, etc. > You couldn't distinguish these variants easily from packet contents - > you had to know what version the other guy was using. That was OK for > demos and experiments. I did the Unix TCP, Dave Clark did Multics, > Bill Plummer did Tenex, Bob Braden did IBM, Jim Mathis did MOS, Dave > Mills did Fuzzballs, etc. (Who did I forget...? Sorry...) Several people reminded me of names that I didn't remember: - Dick Karp at Stanford on a PDP-11/40, - Mike Wingfield at BBN on a PDP-11/70 (how could I forget - he was sitting a few feet away from me in the computer room!), - Gary Grossman / John Day at Illinois on Unix I'm not sure which version of TCPIP was used in these implementations. My list of names was from my recollection of the series of meetings and trials in the 77/78 timeframe, which was when TCP2.x was evolving into TCP4, and in particular the first V4 implementations appeared that were able to interoperate. There were certainly implementations of TCP before that timeframe, leading up to 2.5, as well as many afterwards, as TCP4 got widely deployed. My list is not complete. The 77/78 timeframe seemed like an interesting milestone since that's arguably when the TCPIP V4 that we still know today came into existence - the process and progress of that "birth of The Internet" is pretty well documented in things like IENs 69, 70, and 77.
In particular, the series of "bakeoffs" where different implementations were connected for the first time was a crucial part of the process. I was recalling the people who were arrayed in offices along a hallway at ISI over a weekend, trying to talk to each other. The value of that process seems, IMHO, to be one of the lessons learned. One of Jon's comments in the minutes observed that the specification was as likely to be changed as the implementation as we tried to achieve interoperability to finalize the spec which became IPV4. The mantra "Rough Consensus and Running Code" ruled, but the bakeoffs were the mechanism for smoothing out all those rough edges to create a usable specification. Perhaps some historian will compile a timeline of all those early TCP/IP implementations? There's a lot of data in the IENs et al, but I've never encountered any place where it's all pulled together in a cohesive way to show the genesis of the Internet. /Jack From scott.brim at gmail.com Thu Feb 21 04:27:11 2013 From: scott.brim at gmail.com (Scott Brim) Date: Thu, 21 Feb 2013 07:27:11 -0500 Subject: [ih] Lessons to be learnt from Internet history In-Reply-To: <20130220172844.25D1718C0A5@mercury.lcs.mit.edu> References: <20130220172844.25D1718C0A5@mercury.lcs.mit.edu> Message-ID: I would add in that many times those with technical knowledge feel helpless (or don't even think of trying) to change the business strategy, so they work within self-imposed constraints. On Wed, Feb 20, 2013 at 12:28 PM, Noel Chiappa wrote: > > From: John Curran > > >> Many of the problems we see now were understood when the Internet was > >> first developed, but we didn't have practical solutions to them. > > > our challenge has not been in facing problems beyond solving, but rather > > the tendency to skimp on fully defining problems before moving on to > > solution phase... > > ...
in many cases the problems we face in the Internet have and will > > continue to include economic or political aspects which dominate the > > available solution space. > > My sense is that the picture is more complicated. > > In some cases, our understanding of the issues is indeed now a lot more > complete than it was in the early days of the Internet - we didn't do better > then because we couldn't. Examples of this include congestion (pre-Van), and > routing. More recently, the evolution of HTML is another case where we had to > learn as we went. > > In some of these places, we managed to include enough generality that we could > deploy better stuff as our understanding increased - e.g. re-transmission and > congestion. I'm not sure we had an _explicit_ goal of being able to deploy > better algorithms, but given that we were trying different stuff then, I think > it just naturally happened that the thing we deployed had that flexibility. > > In other places, we did have knowledge, but we deliberately chose to not do > things: some examples are security, separation of location and identity, and > addressing in general. > > Admittedly, security is a complex situation, because we have had some new > tools become available (e.g. public keys) over time. And also I think security > suffered from some of what you allude to with disparate external factors - > e.g. early work on secure email proposed a model that aligned well with one > group of users (military/government), but not the 'ordinary' users, leading to > poor uptake. But we surely could have done better than we did (I speak of > security overall, not just email). > > > > forever is a long-time and not likely something to serve as a useful > > planning horizon. However, planning for "the foreseeable future", i.e. > > for as long and as well as we can imagine, _is_ quite reasonable. > > Yes, but a lot of the time I think it's pure luck whether we get something > with a good lifetime or not.
(I think a big part of that luck is the person > who winds up doing the design for a particular newly-needed piece, to be > frank. Some are much better than others.) And the choices are often driven by > short-term considerations, and trying to put out fires. > > Take DNS for example. We were lucky there - the design had a lot of room to > grow. But it could easily not have. > > The recent discussion of the origins of BGP shows all these factors at work. > We didn't have great knowledge of routing, but we had some. Nonetheless, we > didn't do something that was on the outer limits of what we could do - for > reasons I won't take time to analyze in detail (basically, it was 'Pogoitis' - > "We have met the enemy", etc). I suspect the BGP designers probably wouldn't > have guessed that it would successfully function as well as it does for a > system of this size. And later on we did have a fair amount of work go into > more advanced routing architectures, but they were left to the side (again, > for complex reasons I won't analyze here). > > Balancing 'getting it running with the resources available' and 'doing > something with a long lifetime' is still a struggle. I was recently driven to > desperation by the unwillingness of a key LISP protagonist to adopt a packet > format (aka interface semantics) which had more flexibility and adaptability. > The reason? 'It was easier/quicker to do the kludgy hack.' > > > But I guess all human works are like this - a combination of varying levels of > luck, skill, chance, people and circumstances. > > Noel