From dan at lynch.com Wed Aug 4 14:46:33 2021
From: dan at lynch.com (Dan Lynch)
Date: Wed, 4 Aug 2021 14:46:33 -0700
Subject: [ih] distributed network control: Usenet
In-Reply-To: <20210801041536.64745256C917@ary.qy>
References: <20210801041536.64745256C917@ary.qy>
Message-ID: <88524D34-B039-456B-B843-DA5B1BF0A112@lynch.com>

Was/is alt.binaries mostly porn encoded or executable files being shared or stolen? Or something else???

Dan
Cell 650-776-7313

> On Jul 31, 2021, at 9:15 PM, John Levine via Internet-history wrote:
> 
> It appears that Greg Skinner via Internet-history said:
>> I'm surprised, given how popular the web had become by then. How was this determined?
> 
> I'm not sure I believe it, but the amount of traffic in alt.binaries.whatever was and is very large.
> There's a lot of encoded video.
> 
>>>> On Jul 25, 2021, at 2:16 PM, Bob Purvy wrote:
>>> 
>>> When I first joined Packeteer in 1998, Usenet accounted for an overwhelming percentage of the Internet traffic.
>>> 
>>> On Sun, Jul 25, 2021 at 2:08 PM Greg Skinner via Internet-history > > wrote:
>>> 
>>> On Jul 20, 2021, at 3:45 PM, John Gilmore > wrote:
>>>> The Usenet had no central point of control, and was contemporaneous with
>>>> the ARPANET and early Internet. Its software was even rewritten several
>>>> times by different parties (e.g. A News, B News, C News, Notesfiles,
>>>> NNTP). Its global discussion groups (net.foo) were evolved by mutual
>>>> agreement (comp.foo, sci.bar, etc) and then later successfully forked
>>>> (alt) when the primary sites feared hosting discussions that others
>>>> wanted to have (e.g. on sex and drugs).
>>>> 
>>>> Does anybody know the status of the Usenet today? I got off it
>>>> years ago.
>>> 
>>> BTW, it's available via Google Groups >. Some newsgroups go back
>> to the early 1980s.
>>> 
>>> --gregbo
>>> 
>>> 
>>> -- 
>>> Internet-history mailing list
>>> Internet-history at elists.isoc.org
>>> https://elists.isoc.org/mailman/listinfo/internet-history
>> 
>> -- 
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
> 
> 
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From chet.ramey at case.edu Thu Aug 5 06:33:14 2021
From: chet.ramey at case.edu (Chet Ramey)
Date: Thu, 5 Aug 2021 09:33:14 -0400
Subject: [ih] distributed network control: Usenet
In-Reply-To: <88524D34-B039-456B-B843-DA5B1BF0A112@lynch.com>
References: <20210801041536.64745256C917@ary.qy> <88524D34-B039-456B-B843-DA5B1BF0A112@lynch.com>
Message-ID: 

On 8/4/21 5:46 PM, Dan Lynch via Internet-history wrote:
> Was/is alt.binaries mostly porn encoded or executable files being shared or stolen?

Yes. My sense, back when I ran news.cwru.edu, was that the former consumed more bandwidth and disk storage, but it was both. 
-- ``The lyf so short, the craft so long to lerne.'' - Chaucer ``Ars longa, vita brevis'' - Hippocrates Chet Ramey, UTech, CWRU chet at case.edu http://tiswww.cwru.edu/~chet/ From brian at platohistory.org Fri Aug 20 13:30:23 2021 From: brian at platohistory.org (Brian Dear) Date: Fri, 20 Aug 2021 14:30:23 -0600 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> Message-ID: Bob, Might I suggest, if you?re curious about PLATO, you rely on a more in-depth history, available in my book The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) [1] which that 33-minute podcast episode seems to be a hodgepodge summary of. In the real deal, my book, you might find a more engaging exploration of the historical, technological, business, and societal influences of PLATO and why it?s important. Regarding your being at U of I at the same time: if you were an undergrad from say 64-68, there?s a very good chance you would not have come across PLATO which was still in its formative stages and not deployed widely at all on campus. Things started scaling significantly around 1972 with the launch of the CDC CYBER mainframe-based system that grew to over 1000 terminals, all over campus. However, if you were working on a Master?s degree within the ivory tower of the CS dept from 68-73, a dept that with few exceptions looked down upon PLATO as a silly toy not worthy of even brief curiosity, it?s possible you still would have overlooked it. Even though DCL was very close the CERL lab on Mathews Ave. Anyway, check out the book?it?s all about the Illinois story, as well as the influence (in both directions) between the PLATO project and the Xerox PARC Alto/SmallTalk/Dynabook projects. - Brian [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) > On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history wrote: > > I just listened to the episode > > about > PLATO on The History of Computing podcast, mostly because I'm being > interviewed for it tomorrow on my book > > . > > I know we've covered this before, but I think the "influence" of PLATO is a > bit overstated. I hesitate to be too dogmatic about that, but after all, > you would think I'd have heard more about it, being at the U of I at the > same time as he's talking about here. Maybe it had more influence at *other* > sites? > > On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Forgot reply-all. >> >>> Begin forwarded message: >>> >>> From: John Day >>> Subject: Re: [ih] How Plato Influenced the Internet >>> Date: June 10, 2021 at 14:46:35 EDT >>> To: Clem Cole >>> >>> Plato had very little if any influence on the ARPANET. I can?t say about >> the other way. We were the ARPANET node and saw very little of them. We >> were in different buildings on the engineering campus a couple of blocks >> from each other, neither of which was the CS building. This is probably a >> case of people looking at similar problems and coming to similar >> conclusions, or from the authors point of view, doing the same thing in >> totally different ways. >>> >>> I do remember once when the leader of our group, Pete Alsberg, was >> teaching an OS class and someone from Plato was taking it and brought up >> what they were doing for the next major system release. In class, they did >> a back of the envelope calculation of when the design would hit the wall. 
>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found >> himself talking to Bitzer and related the story from the class. Bitzer got >> kind of embarrassed and it turned out they had hit the wall a couple of >> days before as the class? estimate predicted. ;-) Other than having >> screens we could use, we didn?t put much stock in their work. >>> >>> (The wikipedia page on Plato says it was first used Illiac I. It may be >> true, but it must not have done much because Illiac I had 40 bit words with >> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II >> and III) were asynchronous hardware.) >>> >>> As Ryoko always said, I could be wrong, but I doubt it. >>> >>> John >>> >>>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < >> internet-history at elists.isoc.org> wrote: >>>> >>>> FWIW: Since Plato was just brought up, I'll point a vector to some >> folks. >>>> If you read Dear's book, it tends to credit the walled garden' system >>>> Plato with a lot of the things the Internet would eventually be known. >> How >>>> much truth there is, I can not say. But there is a lot of good stuff in >>>> here and it really did impact a lot of us as we certainly had seen that >>>> scheme, when we started to do things later. >>>> >>>> So ... if you have not yet read it, see if you can get a copy of Brian >>>> Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System >> and >>>> the Dawn of Cyberculture* ISBN-10 1101871555 >>>> >>>> In my own case, Plato was used for some Physics courses and I >>>> personally never was one of the 'Plato ga-ga' type folks, although I did >>>> take on course using it and thought the graphics were pretty slick. >> But, I >>>> had all the computing power I needed with full ARPANET access between >> the >>>> Computer Center and CMU's EE and CS Depts. But I do have friends that >> were >>>> Physics, Chem E, and Mat Sci that all thought it was amazing and liked >> it >>>> much better than the required FORTRAN course they had to take using TSS >> on >>>> the IBM 360/67. >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Fri Aug 20 13:34:35 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 20 Aug 2021 13:34:35 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> Message-ID: I will. I was a DCL rat, and we'd just occasionally meet people from PLATO, but that was it. I had a friend who took Latin and they used PLATO. I also used it in a Psych 100 experiment. And that's the extent of my contact. On Fri, Aug 20, 2021 at 1:30 PM Brian Dear wrote: > Bob, > > Might I suggest, if you?re curious about PLATO, you rely on a more > in-depth history, available in my book The Friendly Orange Glow: The Untold > Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) [1] > which that 33-minute podcast episode seems to be a hodgepodge summary of. > In the real deal, my book, you might find a more engaging exploration of > the historical, technological, business, and societal influences of PLATO > and why it?s important. 
> > Regarding your being at U of I at the same time: if you were an undergrad > from say 64-68, there?s a very good chance you would not have come across > PLATO which was still in its formative stages and not deployed widely at > all on campus. Things started scaling significantly around 1972 with the > launch of the CDC CYBER mainframe-based system that grew to over 1000 > terminals, all over campus. However, if you were working on a Master?s > degree within the ivory tower of the CS dept from 68-73, a dept that with > few exceptions looked down upon PLATO as a silly toy not worthy of even > brief curiosity, it?s possible you still would have overlooked it. Even > though DCL was very close the CERL lab on Mathews Ave. > > Anyway, check out the book?it?s all about the Illinois story, as well as > the influence (in both directions) between the PLATO project and the Xerox > PARC Alto/SmallTalk/Dynabook projects. > > - Brian > > [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) > > > > On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < > internet-history at elists.isoc.org> wrote: > > I just listened to the episode > < > https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 > > > about > PLATO on The History of Computing podcast, mostly because I'm being > interviewed for it tomorrow on my book > < > https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= > > > . > > I know we've covered this before, but I think the "influence" of PLATO is a > bit overstated. I hesitate to be too dogmatic about that, but after all, > you would think I'd have heard more about it, being at the U of I at the > same time as he's talking about here. Maybe it had more influence at > *other* > sites? > > On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > > Forgot reply-all. > > Begin forwarded message: > > From: John Day > Subject: Re: [ih] How Plato Influenced the Internet > Date: June 10, 2021 at 14:46:35 EDT > To: Clem Cole > > Plato had very little if any influence on the ARPANET. I can?t say about > > the other way. We were the ARPANET node and saw very little of them. We > were in different buildings on the engineering campus a couple of blocks > from each other, neither of which was the CS building. This is probably a > case of people looking at similar problems and coming to similar > conclusions, or from the authors point of view, doing the same thing in > totally different ways. > > > I do remember once when the leader of our group, Pete Alsberg, was > > teaching an OS class and someone from Plato was taking it and brought up > what they were doing for the next major system release. In class, they did > a back of the envelope calculation of when the design would hit the wall. > That weekend at a party, (Champaign-Urbana isn?t that big) Pete found > himself talking to Bitzer and related the story from the class. Bitzer got > kind of embarrassed and it turned out they had hit the wall a couple of > days before as the class? estimate predicted. ;-) Other than having > screens we could use, we didn?t put much stock in their work. > > > (The wikipedia page on Plato says it was first used Illiac I. It may be > > true, but it must not have done much because Illiac I had 40 bit words with > 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II > and III) were asynchronous hardware.) 
> > > As Ryoko always said, I could be wrong, but I doubt it. > > John > > On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > FWIW: Since Plato was just brought up, I'll point a vector to some > > folks. > > If you read Dear's book, it tends to credit the walled garden' system > Plato with a lot of the things the Internet would eventually be known. > > How > > much truth there is, I can not say. But there is a lot of good stuff in > here and it really did impact a lot of us as we certainly had seen that > scheme, when we started to do things later. > > So ... if you have not yet read it, see if you can get a copy of Brian > Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System > > and > > the Dawn of Cyberculture* ISBN-10 1101871555 > > In my own case, Plato was used for some Physics courses and I > personally never was one of the 'Plato ga-ga' type folks, although I did > take on course using it and thought the graphics were pretty slick. > > But, I > > had all the computing power I needed with full ARPANET access between > > the > > Computer Center and CMU's EE and CS Depts. But I do have friends that > > were > > Physics, Chem E, and Mat Sci that all thought it was amazing and liked > > it > > much better than the required FORTRAN course they had to take using TSS > > on > > the IBM 360/67. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > From vint at google.com Sat Aug 21 04:58:38 2021 From: vint at google.com (Vint Cerf) Date: Sat, 21 Aug 2021 07:58:38 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> Message-ID: I worked on ARPANET from 1968-1972 and then started on Internet in 1973. I joined ARPA in 1976 and stayed there working on Internet until end of 1982. I visited the campus, met with Don Bitzer in the late 1960s (hazy memory), was impressed by the display technology, but, honestly, do not recall much influence on the Internet work. It was certainly an example of time-shared application that could be expanded by way of networks like ARPANET and Internet but I don't recall direct influence on, e.g., protocol development, packet switch design, application layer protocols. v On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > I will. I was a DCL rat, and we'd just occasionally meet people from PLATO, > but that was it. > > I had a friend who took Latin and they used PLATO. I also used it in a > Psych 100 experiment. And that's the extent of my contact. > > On Fri, Aug 20, 2021 at 1:30 PM Brian Dear wrote: > > > Bob, > > > > Might I suggest, if you?re curious about PLATO, you rely on a more > > in-depth history, available in my book The Friendly Orange Glow: The > Untold > > Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) > [1] > > which that 33-minute podcast episode seems to be a hodgepodge summary of. 
> > In the real deal, my book, you might find a more engaging exploration of > > the historical, technological, business, and societal influences of PLATO > > and why it?s important. > > > > Regarding your being at U of I at the same time: if you were an undergrad > > from say 64-68, there?s a very good chance you would not have come across > > PLATO which was still in its formative stages and not deployed widely at > > all on campus. Things started scaling significantly around 1972 with the > > launch of the CDC CYBER mainframe-based system that grew to over 1000 > > terminals, all over campus. However, if you were working on a Master?s > > degree within the ivory tower of the CS dept from 68-73, a dept that with > > few exceptions looked down upon PLATO as a silly toy not worthy of even > > brief curiosity, it?s possible you still would have overlooked it. Even > > though DCL was very close the CERL lab on Mathews Ave. > > > > Anyway, check out the book?it?s all about the Illinois story, as well as > > the influence (in both directions) between the PLATO project and the > Xerox > > PARC Alto/SmallTalk/Dynabook projects. > > > > - Brian > > > > [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) > > > > > > > > On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > I just listened to the episode > > < > > > https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 > > > > > about > > PLATO on The History of Computing podcast, mostly because I'm being > > interviewed for it tomorrow on my book > > < > > > https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= > > > > > . > > > > I know we've covered this before, but I think the "influence" of PLATO > is a > > bit overstated. I hesitate to be too dogmatic about that, but after all, > > you would think I'd have heard more about it, being at the U of I at the > > same time as he's talking about here. Maybe it had more influence at > > *other* > > sites? > > > > On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > Forgot reply-all. > > > > Begin forwarded message: > > > > From: John Day > > Subject: Re: [ih] How Plato Influenced the Internet > > Date: June 10, 2021 at 14:46:35 EDT > > To: Clem Cole > > > > Plato had very little if any influence on the ARPANET. I can?t say about > > > > the other way. We were the ARPANET node and saw very little of them. We > > were in different buildings on the engineering campus a couple of blocks > > from each other, neither of which was the CS building. This is probably a > > case of people looking at similar problems and coming to similar > > conclusions, or from the authors point of view, doing the same thing in > > totally different ways. > > > > > > I do remember once when the leader of our group, Pete Alsberg, was > > > > teaching an OS class and someone from Plato was taking it and brought up > > what they were doing for the next major system release. In class, they > did > > a back of the envelope calculation of when the design would hit the wall. > > That weekend at a party, (Champaign-Urbana isn?t that big) Pete found > > himself talking to Bitzer and related the story from the class. Bitzer > got > > kind of embarrassed and it turned out they had hit the wall a couple of > > days before as the class? estimate predicted. 
;-) Other than having > > screens we could use, we didn?t put much stock in their work. > > > > > > (The wikipedia page on Plato says it was first used Illiac I. It may be > > > > true, but it must not have done much because Illiac I had 40 bit words > with > > 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II > > and III) were asynchronous hardware.) > > > > > > As Ryoko always said, I could be wrong, but I doubt it. > > > > John > > > > On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < > > > > internet-history at elists.isoc.org> wrote: > > > > > > FWIW: Since Plato was just brought up, I'll point a vector to some > > > > folks. > > > > If you read Dear's book, it tends to credit the walled garden' system > > Plato with a lot of the things the Internet would eventually be known. > > > > How > > > > much truth there is, I can not say. But there is a lot of good stuff in > > here and it really did impact a lot of us as we certainly had seen that > > scheme, when we started to do things later. > > > > So ... if you have not yet read it, see if you can get a copy of Brian > > Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System > > > > and > > > > the Dawn of Cyberculture* ISBN-10 1101871555 > > > > In my own case, Plato was used for some Physics courses and I > > personally never was one of the 'Plato ga-ga' type folks, although I did > > take on course using it and thought the graphics were pretty slick. > > > > But, I > > > > had all the computing power I needed with full ARPANET access between > > > > the > > > > Computer Center and CMU's EE and CS Depts. But I do have friends that > > > > were > > > > Physics, Chem E, and Mat Sci that all thought it was amazing and liked > > > > it > > > > much better than the required FORTRAN course they had to take using TSS > > > > on > > > > the IBM 360/67. > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From jeanjour at comcast.net Sat Aug 21 06:25:12 2021 From: jeanjour at comcast.net (John Day) Date: Sat, 21 Aug 2021 09:25:12 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> Message-ID: <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> Yea, the display technology was interesting but otherwise it was just another timesharing system, and really not all that great of one. Here is part of what I sent them last night off-list. I didn?t think others would be interested. I have said it here before: To refresh others: Our group was the ARPANET node at Illinois and developed 3 OSs from scratch. One proven secure, and one which experimented fairly deeply with the leverage of an extensible OS language, in addition to the what is recounted here. 
Pete Alsberg became the manager of the group at one point (slightly tweaked):

> Around 75, Alsberg was teaching an OS class at DCL. One of the Plato guys was taking the course. They were in the process of building the new Plato OS. They did a computation in class and predicted that the system would top out at so many terminals. (Well, below their target.) Soon after that, Alsberg was at a party with the guy in charge of the system development. (The name is on the tip of my tongue, I can see his face. Remind me of a few names -- it wasn't Bitzer. Finally came to me. (Weird) It was Paul Tenszar (sp? that doesn't look right) ). The guy got a funny look on his face. They had hit the limit that day and it was within one or two terminals of the prediction.
> 
> As I said earlier, we put the first Unix on the 'Net in the summer of '75. Got a few of the Plato terminals. Stripped down Unix to fit on an LSI-11, added touch to the screen, all on a short cabinet so you could sit at it and use it. Hooked up to NARIS, the land-use planning system for the 6 counties around Chicago, as an 'intelligent terminal.' You could do maps down to the square mile with different patterns for use, the usual graphs, etc., all without a keyboard. (It had a keyboard but it was only needed for data entry.) We also installed a few of the terminals with different applications in DC (Reston), at CINCPAC in Hawaii, and possibly a few others I don't remember. While we were doing that, we also added non-blocking IPC to UNIX. All it had was pipes. NARIS was a distributed database application that used databases on both coasts and we used the ARPANET. We did a fair amount of distributed database research in that period as well and worked out the issues of network partition, which as near as I can tell are still not clearly understood.
> 
> I am afraid we just didn't see Plato as all that interesting, a traditional mainframe and terminals system. And they pretty well kept to themselves. We were probably both wrong.

Take care,
John

The reason I remember the one at CINCPAC was that one of the stories that got back was that they had been invited to an evening at the CINC's home. The next Saturday morning our guy showed up at CINCPAC in a T-shirt and shorts to work on the installation. The guard was giving him grief when the CINC strode in, greeted our guy with a friendly remark about the gathering a few nights before, and headed on in. The following conversation ensued: The guard, a bit uncomfortable: You know the CINC? Yup. You know him pretty well? Yup. Errr, I guess it would be okay. ;-) And our guy was able to get to work.

> On Aug 21, 2021, at 07:58, Vint Cerf via Internet-history wrote:
> 
> I worked on ARPANET from 1968-1972 and then started on Internet in 1973. I
> joined ARPA in 1976 and stayed there working on Internet until end of 1982.
> I visited the campus, met with Don Bitzer in the late 1960s (hazy memory),
> was impressed by the display technology, but, honestly, do not recall much
> influence on the Internet work. It was certainly an example of time-shared
> application that could be expanded by way of networks like ARPANET and
> Internet but I don't recall direct influence on, e.g., protocol
> development, packet switch design, application layer protocols.
> 
> v
> 
> 
> On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history <
> internet-history at elists.isoc.org> wrote:
> 
>> I will. I was a DCL rat, and we'd just occasionally meet people from PLATO,
>> >> I had a friend who took Latin and they used PLATO. I also used it in a >> Psych 100 experiment. And that's the extent of my contact. >> >> On Fri, Aug 20, 2021 at 1:30 PM Brian Dear wrote: >> >>> Bob, >>> >>> Might I suggest, if you?re curious about PLATO, you rely on a more >>> in-depth history, available in my book The Friendly Orange Glow: The >> Untold >>> Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) >> [1] >>> which that 33-minute podcast episode seems to be a hodgepodge summary of. >>> In the real deal, my book, you might find a more engaging exploration of >>> the historical, technological, business, and societal influences of PLATO >>> and why it?s important. >>> >>> Regarding your being at U of I at the same time: if you were an undergrad >>> from say 64-68, there?s a very good chance you would not have come across >>> PLATO which was still in its formative stages and not deployed widely at >>> all on campus. Things started scaling significantly around 1972 with the >>> launch of the CDC CYBER mainframe-based system that grew to over 1000 >>> terminals, all over campus. However, if you were working on a Master?s >>> degree within the ivory tower of the CS dept from 68-73, a dept that with >>> few exceptions looked down upon PLATO as a silly toy not worthy of even >>> brief curiosity, it?s possible you still would have overlooked it. Even >>> though DCL was very close the CERL lab on Mathews Ave. >>> >>> Anyway, check out the book?it?s all about the Illinois story, as well as >>> the influence (in both directions) between the PLATO project and the >> Xerox >>> PARC Alto/SmallTalk/Dynabook projects. >>> >>> - Brian >>> >>> [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) >>> >>> >>> >>> On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>> I just listened to the episode >>> < >>> >> https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 >>>> >>> about >>> PLATO on The History of Computing podcast, mostly because I'm being >>> interviewed for it tomorrow on my book >>> < >>> >> https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= >>>> >>> . >>> >>> I know we've covered this before, but I think the "influence" of PLATO >> is a >>> bit overstated. I hesitate to be too dogmatic about that, but after all, >>> you would think I'd have heard more about it, being at the U of I at the >>> same time as he's talking about here. Maybe it had more influence at >>> *other* >>> sites? >>> >>> On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>> Forgot reply-all. >>> >>> Begin forwarded message: >>> >>> From: John Day >>> Subject: Re: [ih] How Plato Influenced the Internet >>> Date: June 10, 2021 at 14:46:35 EDT >>> To: Clem Cole >>> >>> Plato had very little if any influence on the ARPANET. I can?t say about >>> >>> the other way. We were the ARPANET node and saw very little of them. We >>> were in different buildings on the engineering campus a couple of blocks >>> from each other, neither of which was the CS building. This is probably a >>> case of people looking at similar problems and coming to similar >>> conclusions, or from the authors point of view, doing the same thing in >>> totally different ways. 
>>> >>> >>> I do remember once when the leader of our group, Pete Alsberg, was >>> >>> teaching an OS class and someone from Plato was taking it and brought up >>> what they were doing for the next major system release. In class, they >> did >>> a back of the envelope calculation of when the design would hit the wall. >>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found >>> himself talking to Bitzer and related the story from the class. Bitzer >> got >>> kind of embarrassed and it turned out they had hit the wall a couple of >>> days before as the class? estimate predicted. ;-) Other than having >>> screens we could use, we didn?t put much stock in their work. >>> >>> >>> (The wikipedia page on Plato says it was first used Illiac I. It may be >>> >>> true, but it must not have done much because Illiac I had 40 bit words >> with >>> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II >>> and III) were asynchronous hardware.) >>> >>> >>> As Ryoko always said, I could be wrong, but I doubt it. >>> >>> John >>> >>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < >>> >>> internet-history at elists.isoc.org> wrote: >>> >>> >>> FWIW: Since Plato was just brought up, I'll point a vector to some >>> >>> folks. >>> >>> If you read Dear's book, it tends to credit the walled garden' system >>> Plato with a lot of the things the Internet would eventually be known. >>> >>> How >>> >>> much truth there is, I can not say. But there is a lot of good stuff in >>> here and it really did impact a lot of us as we certainly had seen that >>> scheme, when we started to do things later. >>> >>> So ... if you have not yet read it, see if you can get a copy of Brian >>> Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System >>> >>> and >>> >>> the Dawn of Cyberculture* ISBN-10 1101871555 >>> >>> In my own case, Plato was used for some Physics courses and I >>> personally never was one of the 'Plato ga-ga' type folks, although I did >>> take on course using it and thought the graphics were pretty slick. >>> >>> But, I >>> >>> had all the computing power I needed with full ARPANET access between >>> >>> the >>> >>> Computer Center and CMU's EE and CS Depts. But I do have friends that >>> >>> were >>> >>> Physics, Chem E, and Mat Sci that all thought it was amazing and liked >>> >>> it >>> >>> much better than the required FORTRAN course they had to take using TSS >>> >>> on >>> >>> the IBM 360/67. 
>>> --
>>> Internet-history mailing list
>>> Internet-history at elists.isoc.org
>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>> 
>> 
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>> 
> 
> 
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> 1435 Woodhurst Blvd
> McLean, VA 22102
> 703-448-0965
> 
> until further notice
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From geoff at iconia.com Sun Aug 22 16:54:49 2021
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Sun, 22 Aug 2021 13:54:49 -1000
Subject: [ih] yahoo is selectively censoring/rejecting incoming emails
In-Reply-To: 
References: 
Message-ID: 

yours truly has a mailing list called Interesting Stuff (IS) for which a number of the recipients are xxx at yahoo.com

today, when sending out a message on IS having to do with an alternative COVID presentation of data/facts, yours truly got back SMTP level bounces for each of the yahoo.com recipients of the form:

[...]
   ----- The following addresses had permanent fatal errors -----
   <[xxx]@yahoo.com>
      (reason: 554 Message not allowed - [299])

   ----- Transcript of session follows -----
   ... while talking to mta5.am0.yahoodns.net.:
   >>> DATA
   <<< 554 Message not allowed - [299]
   554 5.0.0 Service unavailable
[...]

so in order for the original (censored/"not allowed") message to get through, yours truly had to go through a trial-and-error iterative process, several times over, of "massaging" the subject of the message to an "innocuous" point that the yahoo censorship filter could not "detect" and thus reject.

this was a new "historical/hysterical" experience: having an email summarily rejected (by a webmail provider) at the SMTP level for subject matter that does not adhere to The "Accepted" (MainStream) Narrative.

we live in crazy times!

-- 
Geoff.Goodfellow at iconia.com
living as The Truth is True

From johnl at iecc.com Sun Aug 22 19:12:28 2021
From: johnl at iecc.com (John Levine)
Date: 22 Aug 2021 22:12:28 -0400
Subject: [ih] yahoo is selectively censoring/rejecting incoming emails
In-Reply-To: 
Message-ID: <20210823021230.B137126AC0CE@ary.qy>

It appears that the keyboard of geoff goodfellow via Internet-history said:
> ----- The following addresses had permanent fatal errors -----
> <[xxx]@yahoo.com>
> (reason: 554 Message not allowed - [299])

That's probably a DMARC failure. They've been doing this since about 2014 so I don't understand why you just noticed it now. It's not personal, but it's quite annoying.

Do keep in mind that Yahoo's goal (actually Apollo Global Management, which owns the remains of what used to be Yahoo and AOL) is to accept the mail that their users want, reject the mail their users do not want, and minimize the number of complaints. They absolutely do not care at all about people sending them mail other than the extent to which delivering or not delivering the mail causes complaints from their users.

R's,
John
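As a side note for anyone who wants to check the DMARC angle: a domain's DMARC policy is published as a DNS TXT record at _dmarc.<domain>, so something like "dig +short TXT _dmarc.yahoo.com" shows whatever policy Yahoo currently publishes. A minimal Python sketch of the same lookup (illustrative only; it assumes the third-party dnspython package is installed, and the helper name here is made up for the example) might look like:

    # Illustrative sketch: look up the DMARC policy a domain publishes in DNS.
    # Assumes the third-party "dnspython" package (pip install dnspython).
    import dns.resolver

    def dmarc_record(domain):
        """Return the DMARC TXT record published for `domain`, or None if there isn't one."""
        try:
            answers = dns.resolver.resolve("_dmarc." + domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answers:
            # A single TXT record may be split into several strings; join them back up.
            txt = b"".join(rdata.strings).decode("ascii", "replace")
            if txt.lower().startswith("v=dmarc1"):
                return txt
        return None

    if __name__ == "__main__":
        for domain in ("yahoo.com", "aol.com"):
            print(domain, "->", dmarc_record(domain))

Yahoo has published p=reject since around 2014, which is what typically turns a failed alignment check on relayed list mail into the kind of 554 rejection quoted above, though receivers remain free to filter on other grounds as well.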
From brian at platohistory.org Sun Aug 22 20:35:39 2021
From: brian at platohistory.org (Brian Dear)
Date: Sun, 22 Aug 2021 21:35:39 -0600
Subject: [ih] How Plato Influenced the Internet
In-Reply-To: <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net>
References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net>
Message-ID: <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org>

No disrespect, but I can't help chuckling a tiny bit reading these assessments/dismissals of PLATO by ARPANET folks and computer science and engineering folks. (I'm reminded of the movie Amadeus, when Mozart is commenting on Salieri's music.)

Of course there's no PLATO influence on ARPANET or "the Internet" at the low levels, including protocols, etc. PLATO's utterly custom architecture was designed for a very specific turnkey education system with homogeneous hardware. That was its mission.

PLATO wasn't designed for packet networks, which had too much overhead for the fast responsiveness PLATO's creators insisted on (in my book I call it "the fast round trip" -- about 120ms character echo response time 100% of the time).

In 1973, a team at UC Santa Barbara spent about a year attempting to connect a PLATO IV terminal, with the fancy gas-plasma graphics display, to the ARPANET, in order to play the games that were quickly becoming legend on PLATO (see RFC 600).

Writing off PLATO as "just another time-sharing system" completely misses the point of PLATO's narrow, specialized purpose and mission, funded by ARPA and NSF, and more importantly misses the historical significance of the explosion, starting in 1973, of unexpected, yet incredibly clever and creative applications that were built on top of an architecture intended for delivery of interactive computer-based educational lessons, simulations, testing, and so on. This explosion of unexpected apps contributed by the user/hacker community included TERM-talk (live 2-way instant messaging in character-by-character typed chat, which Unix people later implemented as "talk" on that OS); chat rooms (akin to early IRC, but also live character-by-character typed chat, this time in multiple channels, some open, some private); monitor mode (full screen sharing while in a TERM-talk); Pnotes (PLATO's email application); TERM-consult (live tech support chat with online support teams); Notes (PLATO's message forum system); the charset editor (which inspired Xerox PARC to do a bitmap graphics editor for the Alto); and of course the countless multiplayer graphical games, including tons of MUDs, 3D space and airplane simulators, etc. There was no other time-sharing system that had the applications PLATO had, nor the platform for creative expression that PLATO had in the 70s. But it meant nothing to the ARPANET crowd and was essentially dismissed as unimportant.

PLATO's always seemed to me like a "digital Galapagos." It evolved off on its own, with few if any predators, utterly disconnected from the rest of the world. The rest of the world would choose -- wisely -- to build a heterogeneous network of protocols and standards to which all sorts of hardware and networks of networks could connect. That was a brilliant solution that enabled scale -- something that was going to be exceedingly difficult, impractical, and expensive to do with PLATO. So we wound up with the Net and you had PLATO becoming more and more foreign and impossibly exotic and detached.
However, PLATO hackers went off and worked at Apple, IBM, DEC, Data General, Sun, Atari, Google, and many other places. (PLATO Notes was the direct inspiration for Ray Ozzie, a PLATO software engineer and student at U of I, who later named his Lotus application "Notes", which about 100 million people used for a little while, but I digress.)

Years later, in a 25,000-square-foot exhibit of the first 2000 years of computing, the esteemed Computer History Museum chose not to include any PLATO hardware or software or mention any of the countless innovations done during that era by that community, and in fact the only fleeting mention of the word "PLATO" in the entire exhibit space at all was (and I don't even know if it's still there) found inside a small glass exhibit case near the end of the long IKEA-like maze through the museum rooms, one that depicted tiny screen shots from AOL, CompuServe, Prodigy . . . and PLATO . . . with a little caption dismissing the whole lot as examples of "walled gardens."

So it goes.

- Brian


> On Aug 21, 2021, at 7:25 AM, John Day via Internet-history wrote:
> 
> Yea, the display technology was interesting but otherwise it was just another timesharing system, and really not all that great of one. Here is part of what I sent them last night off-list. I didn't think others would be interested. I have said it here before:
> 
> To refresh others: Our group was the ARPANET node at Illinois and developed 3 OSs from scratch. One proven secure, and one which experimented fairly deeply with the leverage of an extensible OS language, in addition to what is recounted here. Pete Alsberg became the manager of the group at one point (slightly tweaked):
> 
>> Around 75, Alsberg was teaching an OS class at DCL. One of the Plato guys was taking the course. They were in the process of building the new Plato OS. They did a computation in class and predicted that the system would top out at so many terminals. (Well, below their target.) Soon after that, Alsberg was at a party with the guy in charge of the system development. (The name is on the tip of my tongue, I can see his face. Remind me of a few names -- it wasn't Bitzer. Finally came to me. (Weird) It was Paul Tenszar (sp? that doesn't look right) ). The guy got a funny look on his face. They had hit the limit that day and it was within one or two terminals of the prediction.
>> 
>> As I said earlier, we put the first Unix on the 'Net in the summer of '75. Got a few of the Plato terminals. Stripped down Unix to fit on an LSI-11, added touch to the screen, all on a short cabinet so you could sit at it and use it. Hooked up to NARIS, the land-use planning system for the 6 counties around Chicago, as an 'intelligent terminal.' You could do maps down to the square mile with different patterns for use, the usual graphs, etc., all without a keyboard. (It had a keyboard but it was only needed for data entry.) We also installed a few of the terminals with different applications in DC (Reston), at CINCPAC in Hawaii, and possibly a few others I don't remember. While we were doing that, we also added non-blocking IPC to UNIX. All it had was pipes. NARIS was a distributed database application that used databases on both coasts and we used the ARPANET. We did a fair amount of distributed database research in that period as well and worked out the issues of network partition, which as near as I can tell are still not clearly understood.
>> >> I am afraid we just didn?t see Plato as all that interesting, a traditional mainframe and terminals system. And they pretty well kept to themselves. We were probably both wrong. > > Take care, > John > > The reason I remember the one at CINCPAC was that one of the stories that got back was, that they had been invited to an evening at the CINC?s home. The next Saturday morning our guy showed up at CINCPAC in a T-shirt and shorts to work on the installation. The guard was giving him grief, when the CINC strode in. Greeted our guy with a friendly remark about the gathering a few nights before and headed on in. The following conversation ensued: The guard a bit uncomfortable: You know the CINC? Yup. You know him pretty well? Yup. Errr, I guess it would be okay. ;-) And our guy was able to get to work. > >> On Aug 21, 2021, at 07:58, Vint Cerf via Internet-history wrote: >> >> I worked on ARPANET from 1968-1972 and then started on Internet in 1973. I >> joined ARPA in 1976 and stayed there working on Internet until end of 1982. >> I visited the campus, met with Don Bitzer in the late 1960s (hazy memory), >> was impressed by the display technology, but, honestly, do not recall much >> influence on the Internet work. It was certainly an example of time-shared >> application that could be expanded by way of networks like ARPANET and >> Internet but I don't recall direct influence on, e.g., protocol >> development, packet switch design, application layer protocols. >> >> v >> >> >> On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> I will. I was a DCL rat, and we'd just occasionally meet people from PLATO, >>> but that was it. >>> >>> I had a friend who took Latin and they used PLATO. I also used it in a >>> Psych 100 experiment. And that's the extent of my contact. >>> >>> On Fri, Aug 20, 2021 at 1:30 PM Brian Dear wrote: >>> >>>> Bob, >>>> >>>> Might I suggest, if you?re curious about PLATO, you rely on a more >>>> in-depth history, available in my book The Friendly Orange Glow: The >>> Untold >>>> Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) >>> [1] >>>> which that 33-minute podcast episode seems to be a hodgepodge summary of. >>>> In the real deal, my book, you might find a more engaging exploration of >>>> the historical, technological, business, and societal influences of PLATO >>>> and why it?s important. >>>> >>>> Regarding your being at U of I at the same time: if you were an undergrad >>>> from say 64-68, there?s a very good chance you would not have come across >>>> PLATO which was still in its formative stages and not deployed widely at >>>> all on campus. Things started scaling significantly around 1972 with the >>>> launch of the CDC CYBER mainframe-based system that grew to over 1000 >>>> terminals, all over campus. However, if you were working on a Master?s >>>> degree within the ivory tower of the CS dept from 68-73, a dept that with >>>> few exceptions looked down upon PLATO as a silly toy not worthy of even >>>> brief curiosity, it?s possible you still would have overlooked it. Even >>>> though DCL was very close the CERL lab on Mathews Ave. >>>> >>>> Anyway, check out the book?it?s all about the Illinois story, as well as >>>> the influence (in both directions) between the PLATO project and the >>> Xerox >>>> PARC Alto/SmallTalk/Dynabook projects. 
>>>> >>>> - Brian >>>> >>>> [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) >>>> >>>> >>>> >>>> On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>> I just listened to the episode >>>> < >>>> >>> https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 >>>>> >>>> about >>>> PLATO on The History of Computing podcast, mostly because I'm being >>>> interviewed for it tomorrow on my book >>>> < >>>> >>> https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= >>>>> >>>> . >>>> >>>> I know we've covered this before, but I think the "influence" of PLATO >>> is a >>>> bit overstated. I hesitate to be too dogmatic about that, but after all, >>>> you would think I'd have heard more about it, being at the U of I at the >>>> same time as he's talking about here. Maybe it had more influence at >>>> *other* >>>> sites? >>>> >>>> On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>> Forgot reply-all. >>>> >>>> Begin forwarded message: >>>> >>>> From: John Day >>>> Subject: Re: [ih] How Plato Influenced the Internet >>>> Date: June 10, 2021 at 14:46:35 EDT >>>> To: Clem Cole >>>> >>>> Plato had very little if any influence on the ARPANET. I can?t say about >>>> >>>> the other way. We were the ARPANET node and saw very little of them. We >>>> were in different buildings on the engineering campus a couple of blocks >>>> from each other, neither of which was the CS building. This is probably a >>>> case of people looking at similar problems and coming to similar >>>> conclusions, or from the authors point of view, doing the same thing in >>>> totally different ways. >>>> >>>> >>>> I do remember once when the leader of our group, Pete Alsberg, was >>>> >>>> teaching an OS class and someone from Plato was taking it and brought up >>>> what they were doing for the next major system release. In class, they >>> did >>>> a back of the envelope calculation of when the design would hit the wall. >>>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found >>>> himself talking to Bitzer and related the story from the class. Bitzer >>> got >>>> kind of embarrassed and it turned out they had hit the wall a couple of >>>> days before as the class? estimate predicted. ;-) Other than having >>>> screens we could use, we didn?t put much stock in their work. >>>> >>>> >>>> (The wikipedia page on Plato says it was first used Illiac I. It may be >>>> >>>> true, but it must not have done much because Illiac I had 40 bit words >>> with >>>> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II >>>> and III) were asynchronous hardware.) >>>> >>>> >>>> As Ryoko always said, I could be wrong, but I doubt it. >>>> >>>> John >>>> >>>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < >>>> >>>> internet-history at elists.isoc.org> wrote: >>>> >>>> >>>> FWIW: Since Plato was just brought up, I'll point a vector to some >>>> >>>> folks. >>>> >>>> If you read Dear's book, it tends to credit the walled garden' system >>>> Plato with a lot of the things the Internet would eventually be known. >>>> >>>> How >>>> >>>> much truth there is, I can not say. But there is a lot of good stuff in >>>> here and it really did impact a lot of us as we certainly had seen that >>>> scheme, when we started to do things later. >>>> >>>> So ... 
if you have not yet read it, see if you can get a copy of Brian >>>> Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System >>>> >>>> and >>>> >>>> the Dawn of Cyberculture* ISBN-10 1101871555 >>>> >>>> In my own case, Plato was used for some Physics courses and I >>>> personally never was one of the 'Plato ga-ga' type folks, although I did >>>> take on course using it and thought the graphics were pretty slick. >>>> >>>> But, I >>>> >>>> had all the computing power I needed with full ARPANET access between >>>> >>>> the >>>> >>>> Computer Center and CMU's EE and CS Depts. But I do have friends that >>>> >>>> were >>>> >>>> Physics, Chem E, and Mat Sci that all thought it was amazing and liked >>>> >>>> it >>>> >>>> much better than the required FORTRAN course they had to take using TSS >>>> >>>> on >>>> >>>> the IBM 360/67. >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> 1435 Woodhurst Blvd >> McLean, VA 22102 >> 703-448-0965 >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Sun Aug 22 20:50:07 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 22 Aug 2021 20:50:07 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: I'll just pick up on one thing you said: *the charset editor (which inspired Xerox PARC to do a bitmap graphics editor for the Alto),* This is easily checked. Charles Simonyi is still around, as is Tom Malloy. Are they going to argue with that, or accept it? On Sun, Aug 22, 2021 at 8:35 PM Brian Dear via Internet-history < internet-history at elists.isoc.org> wrote: > No disrespect, but I can?t help chuckle a tiny bit reading these > assessments/dismissals of PLATO by ARPANET folks and computer science and > engineering folks. (I?m reminded of the movie Amadeus, when Mozart is > commenting on Salieri?s music.) > > Of course there?s no PLATO influence on ARPANET or ?the Internet? at the > low levels, including protocols, etc. PLATO?s utterly custom architecture > was designed for a very specific turnkey education system with homogenous > hardware. That was its mission. > > PLATO wasn?t designed for packet networks, which had too much overhead for > the fast responsiveness PLATO's creators insisted on (in my book I call it > ?the fast round trip??about 120ms character echo response time 100% of the > time). 
> > In 1973, a team at UC Santa Barbara spent about a year attempting to > connect a PLATO IV terminal, with the fancy gas-plasma graphics display, to > the ARPANET, in order to play the games that was quickly becoming legend on > PLATO (see RFC 600). > > Writing off PLATO as ?just another time-sharing system? completely misses > the point of PLATO?s narrow, specialized purpose and mission, funded by > ARPA and NSF, and more importantly misses the historical significance of > the explosion, starting in 1973, of unexpected, yet incredibly clever and > creative applications that were built on top of an architecture intended > for delivery of interactive computer-based educational lessons, > simulations, testing, and so on. This explosion of unexpected apps > contributed by the user/hacker community included TERM-talk (live 2-way > instant messaging in character-by-character typed chat, which Unix people > later implemented as ?talk? on that OS); chat rooms (akin to early IRC, but > also live character-by-character typed chat this time in multiple channels, > some open, some private); monitor mode (full screen sharing while in a > TERM-talk), Pnotes (PLATO?s email application), TERM-consult (live tech > support chat with online support teams), Notes (PLATO?s message forum > system), the charset editor (which inspired Xerox PARC to do a bitmap > graphics editor for the Alto), and of course the countless multiplayer > graphical games, including tons of MUDs, 3D space and airplane simulators, > etc. There was no other time-sharing system that had the applications PLATO > had, nor the platform for creative expression that PLATO had in the 70s. > But it meant nothing to the ARPANET crowd and was essentially dismissed as > unimportant. > > PLATO?s always seemed to me like a "digital Galapagos." It evolved off on > its own, with few if any predators, utterly disconnected from the rest of > the world. The rest of the world would choose?wisely?to build a > heterogenous network of protocols and standards to which all sorts of > hardware and networks of networks could connect. That was a brilliant > solution that enabled scale?something that was going to be exceedingly > difficult, impractical, and expensive to do with PLATO. So we wound up with > the Net and you had PLATO becoming more and more foreign and impossibly > exotic and detached. However, PLATO hackers went off and worked at Apple, > IBM, DEC, Data General, Sun, Atari, Google, and many other places. (PLATO > Notes was the direct inspiration for Ray Ozzie, a PLATO software engineer > and student at U of I, who later who named his Lotus application ?Notes?, > which about 100 million people used for a little while, but I digress.) > > Years later, in a 25000-square-foot exhibit of the first 2000 years of > computing, the esteemed Computer History Museum chose not to include any > PLATO hardware or software or mention any of the countless innovations done > during that era by that community, and in fact the only fleeting mention of > the word ?PLATO? in the entire exhibit space at all was (and I don?t even > know if it?s still there) found inside a small glass exhibit case near the > end of the long IKEA-like maze through the museum rooms, that depicted tiny > screen shots from AOL, CompuServe, Prodigy . . . and PLATO . . . with a > little caption dismissing the whole lot as examples of ?walled gardens." > > So it goes. 
> > - Brian > > > > > On Aug 21, 2021, at 7:25 AM, John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > Yea, the display technology was interesting but otherwise it was just > another timesharing system, and really not all that great of one. Here is > part of what I sent them last night off-list. I didn?t think others would > be interested. I have said it here before: > > > > To refresh others: Our group was the ARPANET node at Illinois and > developed 3 OSs from scratch. One proven secure, and one which experimented > fairly deeply with the leverage of an extensible OS language, in addition > to the what is recounted here. Pete Alsberg became the manager of the group > at one point (slightly tweaked): > > > >> Around 75, Alsberg was teaching an OS class at DCL. One of the Plato > guys was taking the course. They were in the process of building the new > Plato OS. They did a computation in class and predicted that the system > would top out at so many terminals. (Well, below their target.) Soon after > that, Alsberg was at a party with the guy in charge of the system > development. (The name is on the tip of my tongue, I can see his face. > Remind me of a few names, it wasn?t Bitzer, Finally came to me. (Weird) It > was Paul Tenszar (sp? that doesn?t look right) ). The guy got a funny look > on his face. They had hit the limit that day and it was within one or two > terminals of the prediction. > >> > >> As I said earlier, we put the first Unix on the ?Net in the summer of > ?75. Got a few of the Plato terminals. Stripped down Unix to fit on an > LSI-11, added touch to the screen. all on a short cabinet that so you could > sit at and use it. Hooked up to NARIS the land-use planning system for the > 6 counties around Chicago as an ?intelligent terminal.? You could do maps > down to the square mile with different patterns for use, the usual graphs, > etc. all without a keyboard. (it had a keyboard but it was only needed for > data entry.) We also installed a few of the terminals with different > application in DC (Reston), at CINCPAC in Hawaii, and possible a few others > I don?t remember. While we were doing that, we also added non-blocking IPC > to UNIX. All it had was pipes. NARIS was a distributed database application > that used databases on both coasts and we used the ARPANET. We did a fair > amount of distributed database research in that period as well and worked > out the issues of network partition, which as near as I can tell are still > not clearly understood. > >> > >> I am afraid we just didn?t see Plato as all that interesting, a > traditional mainframe and terminals system. And they pretty well kept to > themselves. We were probably both wrong. > > > > Take care, > > John > > > > The reason I remember the one at CINCPAC was that one of the stories > that got back was, that they had been invited to an evening at the CINC?s > home. The next Saturday morning our guy showed up at CINCPAC in a T-shirt > and shorts to work on the installation. The guard was giving him grief, > when the CINC strode in. Greeted our guy with a friendly remark about the > gathering a few nights before and headed on in. The following conversation > ensued: The guard a bit uncomfortable: You know the CINC? Yup. You know him > pretty well? Yup. Errr, I guess it would be okay. ;-) And our guy was able > to get to work. 
> > > >> On Aug 21, 2021, at 07:58, Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> I worked on ARPANET from 1968-1972 and then started on Internet in > 1973. I > >> joined ARPA in 1976 and stayed there working on Internet until end of > 1982. > >> I visited the campus, met with Don Bitzer in the late 1960s (hazy > memory), > >> was impressed by the display technology, but, honestly, do not recall > much > >> influence on the Internet work. It was certainly an example of > time-shared > >> application that could be expanded by way of networks like ARPANET and > >> Internet but I don't recall direct influence on, e.g., protocol > >> development, packet switch design, application layer protocols. > >> > >> v > >> > >> > >> On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history < > >> internet-history at elists.isoc.org> wrote: > >> > >>> I will. I was a DCL rat, and we'd just occasionally meet people from > PLATO, > >>> but that was it. > >>> > >>> I had a friend who took Latin and they used PLATO. I also used it in a > >>> Psych 100 experiment. And that's the extent of my contact. > >>> > >>> On Fri, Aug 20, 2021 at 1:30 PM Brian Dear > wrote: > >>> > >>>> Bob, > >>>> > >>>> Might I suggest, if you?re curious about PLATO, you rely on a more > >>>> in-depth history, available in my book The Friendly Orange Glow: The > >>> Untold > >>>> Story of the PLATO System and the Dawn of Cyberculture (Pantheon, > 2017) > >>> [1] > >>>> which that 33-minute podcast episode seems to be a hodgepodge summary > of. > >>>> In the real deal, my book, you might find a more engaging exploration > of > >>>> the historical, technological, business, and societal influences of > PLATO > >>>> and why it?s important. > >>>> > >>>> Regarding your being at U of I at the same time: if you were an > undergrad > >>>> from say 64-68, there?s a very good chance you would not have come > across > >>>> PLATO which was still in its formative stages and not deployed widely > at > >>>> all on campus. Things started scaling significantly around 1972 with > the > >>>> launch of the CDC CYBER mainframe-based system that grew to over 1000 > >>>> terminals, all over campus. However, if you were working on a Master?s > >>>> degree within the ivory tower of the CS dept from 68-73, a dept that > with > >>>> few exceptions looked down upon PLATO as a silly toy not worthy of > even > >>>> brief curiosity, it?s possible you still would have overlooked it. > Even > >>>> though DCL was very close the CERL lab on Mathews Ave. > >>>> > >>>> Anyway, check out the book?it?s all about the Illinois story, as well > as > >>>> the influence (in both directions) between the PLATO project and the > >>> Xerox > >>>> PARC Alto/SmallTalk/Dynabook projects. > >>>> > >>>> - Brian > >>>> > >>>> [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) > >>>> > >>>> > >>>> > >>>> On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < > >>>> internet-history at elists.isoc.org> wrote: > >>>> > >>>> I just listened to the episode > >>>> < > >>>> > >>> > https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 > >>>>> > >>>> about > >>>> PLATO on The History of Computing podcast, mostly because I'm being > >>>> interviewed for it tomorrow on my book > >>>> < > >>>> > >>> > https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= > >>>>> > >>>> . 
> >>>> > >>>> I know we've covered this before, but I think the "influence" of PLATO > >>> is a > >>>> bit overstated. I hesitate to be too dogmatic about that, but after > all, > >>>> you would think I'd have heard more about it, being at the U of I at > the > >>>> same time as he's talking about here. Maybe it had more influence at > >>>> *other* > >>>> sites? > >>>> > >>>> On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < > >>>> internet-history at elists.isoc.org> wrote: > >>>> > >>>> Forgot reply-all. > >>>> > >>>> Begin forwarded message: > >>>> > >>>> From: John Day > >>>> Subject: Re: [ih] How Plato Influenced the Internet > >>>> Date: June 10, 2021 at 14:46:35 EDT > >>>> To: Clem Cole > >>>> > >>>> Plato had very little if any influence on the ARPANET. I can?t say > about > >>>> > >>>> the other way. We were the ARPANET node and saw very little of them. > We > >>>> were in different buildings on the engineering campus a couple of > blocks > >>>> from each other, neither of which was the CS building. This is > probably a > >>>> case of people looking at similar problems and coming to similar > >>>> conclusions, or from the authors point of view, doing the same thing > in > >>>> totally different ways. > >>>> > >>>> > >>>> I do remember once when the leader of our group, Pete Alsberg, was > >>>> > >>>> teaching an OS class and someone from Plato was taking it and brought > up > >>>> what they were doing for the next major system release. In class, they > >>> did > >>>> a back of the envelope calculation of when the design would hit the > wall. > >>>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found > >>>> himself talking to Bitzer and related the story from the class. Bitzer > >>> got > >>>> kind of embarrassed and it turned out they had hit the wall a couple > of > >>>> days before as the class? estimate predicted. ;-) Other than having > >>>> screens we could use, we didn?t put much stock in their work. > >>>> > >>>> > >>>> (The wikipedia page on Plato says it was first used Illiac I. It may > be > >>>> > >>>> true, but it must not have done much because Illiac I had 40 bit words > >>> with > >>>> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and > II > >>>> and III) were asynchronous hardware.) > >>>> > >>>> > >>>> As Ryoko always said, I could be wrong, but I doubt it. > >>>> > >>>> John > >>>> > >>>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < > >>>> > >>>> internet-history at elists.isoc.org> wrote: > >>>> > >>>> > >>>> FWIW: Since Plato was just brought up, I'll point a vector to some > >>>> > >>>> folks. > >>>> > >>>> If you read Dear's book, it tends to credit the walled garden' system > >>>> Plato with a lot of the things the Internet would eventually be known. > >>>> > >>>> How > >>>> > >>>> much truth there is, I can not say. But there is a lot of good stuff > in > >>>> here and it really did impact a lot of us as we certainly had seen > that > >>>> scheme, when we started to do things later. > >>>> > >>>> So ... if you have not yet read it, see if you can get a copy of > Brian > >>>> Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System > >>>> > >>>> and > >>>> > >>>> the Dawn of Cyberculture* ISBN-10 1101871555 > >>>> > >>>> In my own case, Plato was used for some Physics courses and I > >>>> personally never was one of the 'Plato ga-ga' type folks, although I > did > >>>> take on course using it and thought the graphics were pretty slick. 
> >>>> But, I
> >>>> had all the computing power I needed with full ARPANET access between
> >>>> the
> >>>> Computer Center and CMU's EE and CS Depts.  But I do have friends that
> >>>> were
> >>>> Physics, Chem E, and Mat Sci that all thought it was amazing and liked
> >>>> it
> >>>> much better than the required FORTRAN course they had to take using TSS
> >>>> on
> >>>> the IBM 360/67.
> >>>> --
> >>>> Internet-history mailing list
> >>>> Internet-history at elists.isoc.org
> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
> >>>>
> >>>>
> >>>>
> >>>> --
> >>>> Internet-history mailing list
> >>>> Internet-history at elists.isoc.org
> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
> >>>>
> >>>> --
> >>>> Internet-history mailing list
> >>>> Internet-history at elists.isoc.org
> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
> >>>>
> >>>>
> >>>>
> >>> --
> >>> Internet-history mailing list
> >>> Internet-history at elists.isoc.org
> >>> https://elists.isoc.org/mailman/listinfo/internet-history
> >>>
> >>
> >>
> >> --
> >> Please send any postal/overnight deliveries to:
> >> Vint Cerf
> >> 1435 Woodhurst Blvd
> >> McLean, VA 22102
> >> 703-448-0965
> >>
> >> until further notice
> >> --
> >> Internet-history mailing list
> >> Internet-history at elists.isoc.org
> >> https://elists.isoc.org/mailman/listinfo/internet-history
> >
> > --
> > Internet-history mailing list
> > Internet-history at elists.isoc.org
> > https://elists.isoc.org/mailman/listinfo/internet-history
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>

From geoff at iconia.com  Sun Aug 22 20:52:09 2021
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Sun, 22 Aug 2021 17:52:09 -1000
Subject: [ih] yahoo is selectively censoring/rejecting incoming emails
In-Reply-To: <20210823021230.B137126AC0CE@ary.qy>
References: <20210823021230.B137126AC0CE@ary.qy>
Message-ID: 

totally incorrect, John -- just like the previous instance here on the IH
list with its filtering/deleting/censoring of any mention of a certain word
that Joe (as list admin) discovered/iteratively debugged (which won't be
mentioned since it might cause this reply to "disappear"), we now see this
at the SMTP reject-o level: a public webmail provider not "allowing"
certain subject matters to go through to its user base.

yours truly can axiomatically say with metaphysical certitude this was not
a DMARC failure.

how can yours truly make such an unabashed claim?  because, like what Joe
did here on the IH list to "discover" the forbidden word/phrase, yours
truly did similarly with yahoo by sending many iterations of a modified
message to yahoo until one finally did not get SMTP reject-o'd and got
through -- thereby revealing exactly/precisely what was "forbidden".

as a side note, Joe has informed yours truly that:

"This post does not appear to be in scope for this list.

Please limit your posts to matters of Internet history in the future.

Joe (list admin)"

this is not the "network" we all "grew up" on/with,

geoff

On Sun, Aug 22, 2021 at 4:19 PM John Levine wrote:

> It appears that the keyboard of geoff goodfellow via Internet-history <
> geoff at iconia.com> said:
> > ----- The following addresses had permanent fatal errors -----
> ><[xxx]@yahoo.com>
> >    (reason: 554 Message not allowed - [299])
>
> That's probably a DMARC failure.
> They've been doing this since about 2014,
> so I don't understand
> why you just noticed it now. It's not personal, but it's quite annoying.
>
> Do keep in mind that Yahoo's goal (actually Apollo Global Management,
> which owns the remains of what used to be Yahoo and AOL) is to accept
> the mail that their users want, reject the mail their users do not
> want, and minimize the number of complaints. They absolutely do not
> care at all about people sending them mail other than the extent to
> which delivering or not delivering the mail causes complaints from
> their users.
>
> R's,
> John
>

-- 
Geoff.Goodfellow at iconia.com
living as The Truth is True

From touch at strayalpha.com  Sun Aug 22 22:35:31 2021
From: touch at strayalpha.com (touch at strayalpha.com)
Date: Sun, 22 Aug 2021 22:35:31 -0700
Subject: [ih] yahoo is selectively censoring/rejecting incoming emails
In-Reply-To: 
References: <20210823021230.B137126AC0CE@ary.qy>
Message-ID: <4D2A4C68-33B5-425B-BDD9-4659E6583B32@strayalpha.com>

Hi, all,

> On Aug 22, 2021, at 8:52 PM, the keyboard of geoff goodfellow via Internet-history wrote:
>
> as a side note, Joe has informed yours truly that:
>
> "This post does not appear to be in scope for this list.
>
> Please limit your posts to matters of Internet history in the future.
>
> Joe (list admin)"

I did privately - though it seems not to have been heeded, so I'll do so
publicly. All parties are asked to take this discussion elsewhere, as it is
not on-topic for this list.

Joe (list admin)

From clemc at ccc.com  Mon Aug 23 06:34:03 2021
From: clemc at ccc.com (Clem Cole)
Date: Mon, 23 Aug 2021 09:34:03 -0400
Subject: [ih] How Plato Influenced the Internet
In-Reply-To: <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org>
References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net>
 <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net>
 <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org>
Message-ID: 

below...

On Sun, Aug 22, 2021 at 11:35 PM Brian Dear via Internet-history <
internet-history at elists.isoc.org> wrote:

> live 2-way instant messaging in character-by-character typed chat, which
> Unix people later implemented as 'talk'
>
I'm in an interesting position here because I started this thread and I am
the author of Unix talk [and also the person responsible for the horrid
error of sending the rendezvous information in VAX native order, not
network order].

As I said in my original email, I played with Plato, mostly games and
graphics, as an undergrad; but I had access to the PDP-10's, the GPD2 -
Graphics Wonders, the ARPAnet, and UNIX, which had a much higher influence
on me.  I think Brian is right that some people, like Ray Ozzie, have said
Plato had a profound influence on them.  I do think that people who saw
some of the features of Plato remembered them when they did other systems.

What I took from my limited Plato use was how easily simple graphics could
be integrated.  The GDPs were (are) awesome, but took a PDP-11/20 to drive
them and a lot of programming.  I was also introduced to PLOT10 on the IBM
S/360 running TSS, as it turns out, before I saw the GDPs.  Later I would
have two of the 'Killer-Bs' [Kelly Booth and John Beatty] as officemates at
Tektronix Labs, which very much polished that thinking about graphics, when
we did the Magnolia workstation.  But without a doubt, an early experience
trying to write a 'program' to draw on the screen was with Plato, which I
found easier than trying to do something similar in FORTRAN and PLOT-10.
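[Editor's aside: a minimal sketch of the byte-order issue Clem alludes to
above -- why rendezvous data has to be converted from host to network byte
order before it goes on the wire.  The struct layout, field names, and
values here are illustrative assumptions, not the actual talk(1)/talkd
wire format; only the standard BSD sockets helpers htonl()/htons() are
assumed.]

    /* Hypothetical rendezvous record -- NOT the real talk(1) format. */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>          /* htonl(), htons() */

    struct rendezvous {
        uint32_t addr;              /* IPv4 address, network byte order */
        uint16_t port;              /* UDP port, network byte order */
    };

    int main(void) {
        uint16_t host_port = 517;            /* example value, host order */
        struct rendezvous r;
        r.addr = htonl(0x7f000001u);         /* 127.0.0.1, converted */
        r.port = htons(host_port);           /* convert before sending */
        /* On a little-endian machine such as a VAX, skipping htons()
           would put the bytes 05 02 on the wire; a big-endian receiver
           would then read the port as 0x0502 (1282) instead of
           0x0205 (517). */
        printf("wire bytes: %02x %02x\n",
               ((unsigned char *)&r.port)[0],
               ((unsigned char *)&r.port)[1]);
        return 0;
    }

[As I understand it, this dependence on host byte order is part of why the
original talk protocol was machine-specific and a separate ntalk protocol
appeared later. -- Ed.]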
That said, Brian, I never saw or used the PLATO 2-way chat scheme, so it
*did not have any* effect on me when I wrote talk(1).  The UNIX program was
born from need.  Many of us hated walking up the hill from our apts more
than once a day [Cory and Evans Hall are about ? way up the Berkeley hills
-- most cheap grad apts were 'down the hill', nearer Berkeley's downtown or
Emeryville].  As grad students, we could only afford a single phone line at
home, so talk(1) was created so I could ask one of my officemates to mount
a mag tape or reboot a hung system in the UCB CAD lab, without having to
hang up the phone line.  We had the Unix write(1) that I think Ken wrote
originally.  That certainly was an influence, and I wanted something a
little more interactive.  Peter Moore suggested (and built) the split-screen
idea using the curses library, as the original version had been
line-by-line, more like write(1); the sources to that version, I think,
never left the CAD machines.  Sam Leffler got it from me for the 4.1a
release.

Talk was developed not as a social thing; it was a convenience to allow us
to do work in the evening, which I think is different from what Brian
describes in his book.  Yes, it might have later been used for that also,
but Plato did not have any influence.

Clem Cole

From brian at platohistory.org  Mon Aug 23 06:57:47 2021
From: brian at platohistory.org (Brian Dear)
Date: Mon, 23 Aug 2021 07:57:47 -0600
Subject: [ih] Fwd: How Plato Influenced the Internet
In-Reply-To: 
References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net>
 <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net>
 <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org>
Message-ID: 

To be clear, PLATO III (1963-1972) had a graphical, interactive
charset/bitmap editor (created to replace the extremely tedious exercise of
creating character sets by declaring a series of octal codes), and that
graphical editor was ported to the more sophisticated PLATO IV (1972
onwards).  The PLATO III bitmap editor was first done around 1967.

I was never able to establish a direct linkage between PARC's bitmap font
editor and PLATO's, but put 'em side by side and the resemblance is
uncanny.  Would welcome any insights from PARC alumni.

- Brian

On Sunday, August 22, 2021, Bob Purvy wrote:

> I'll just pick up on one thing you said:
>
> *the charset editor (which inspired Xerox PARC to do a bitmap graphics
> editor for the Alto),*
>
> This is easily checked.  Charles Simonyi is still around, as is Tom Malloy.
> Are they going to argue with that, or accept it?
>
> On Sun, Aug 22, 2021 at 8:35 PM Brian Dear via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> No disrespect, but I can't help chuckle a tiny bit reading these
>> assessments/dismissals of PLATO by ARPANET folks and computer science and
>> engineering folks. (I'm reminded of the movie Amadeus, when Mozart is
>> commenting on Salieri's music.)
>>
>> Of course there's no PLATO influence on ARPANET or "the Internet" at the
>> low levels, including protocols, etc. PLATO's utterly custom architecture
>> was designed for a very specific turnkey education system with homogenous
>> hardware. That was its mission.
>>
>> PLATO wasn't designed for packet networks, which had too much overhead
>> for the fast responsiveness PLATO's creators insisted on (in my book I call
>> it "the fast round trip" -- about 120ms character echo response time 100% of
>> the time).
>> >> In 1973, a team at UC Santa Barbara spent about a year attempting to >> connect a PLATO IV terminal, with the fancy gas-plasma graphics display, to >> the ARPANET, in order to play the games that was quickly becoming legend on >> PLATO (see RFC 600). >> >> Writing off PLATO as ?just another time-sharing system? completely misses >> the point of PLATO?s narrow, specialized purpose and mission, funded by >> ARPA and NSF, and more importantly misses the historical significance of >> the explosion, starting in 1973, of unexpected, yet incredibly clever and >> creative applications that were built on top of an architecture intended >> for delivery of interactive computer-based educational lessons, >> simulations, testing, and so on. This explosion of unexpected apps >> contributed by the user/hacker community included TERM-talk (live 2-way >> instant messaging in character-by-character typed chat, which Unix people >> later implemented as ?talk? on that OS); chat rooms (akin to early IRC, but >> also live character-by-character typed chat this time in multiple channels, >> some open, some private); monitor mode (full screen sharing while in a >> TERM-talk), Pnotes (PLATO?s email application), TERM-consult (live tech >> support chat with online support teams), Notes (PLATO?s message forum >> system), the charset editor (which inspired Xerox PARC to do a bitmap >> graphics editor for the Alto), and of course the countless multiplayer >> graphical games, including tons of MUDs, 3D space and airplane simulators, >> etc. There was no other time-sharing system that had the applications PLATO >> had, nor the platform for creative expression that PLATO had in the 70s. >> But it meant nothing to the ARPANET crowd and was essentially dismissed as >> unimportant. >> >> PLATO?s always seemed to me like a "digital Galapagos." It evolved off on >> its own, with few if any predators, utterly disconnected from the rest of >> the world. The rest of the world would choose?wisely?to build a >> heterogenous network of protocols and standards to which all sorts of >> hardware and networks of networks could connect. That was a brilliant >> solution that enabled scale?something that was going to be exceedingly >> difficult, impractical, and expensive to do with PLATO. So we wound up with >> the Net and you had PLATO becoming more and more foreign and impossibly >> exotic and detached. However, PLATO hackers went off and worked at Apple, >> IBM, DEC, Data General, Sun, Atari, Google, and many other places. (PLATO >> Notes was the direct inspiration for Ray Ozzie, a PLATO software engineer >> and student at U of I, who later who named his Lotus application ?Notes?, >> which about 100 million people used for a little while, but I digress.) >> >> Years later, in a 25000-square-foot exhibit of the first 2000 years of >> computing, the esteemed Computer History Museum chose not to include any >> PLATO hardware or software or mention any of the countless innovations done >> during that era by that community, and in fact the only fleeting mention of >> the word ?PLATO? in the entire exhibit space at all was (and I don?t even >> know if it?s still there) found inside a small glass exhibit case near the >> end of the long IKEA-like maze through the museum rooms, that depicted tiny >> screen shots from AOL, CompuServe, Prodigy . . . and PLATO . . . with a >> little caption dismissing the whole lot as examples of ?walled gardens." >> >> So it goes. 
>> >> - Brian >> >> >> >> > On Aug 21, 2021, at 7:25 AM, John Day via Internet-history < >> internet-history at elists.isoc.org> wrote: >> > >> > Yea, the display technology was interesting but otherwise it was just >> another timesharing system, and really not all that great of one. Here is >> part of what I sent them last night off-list. I didn?t think others would >> be interested. I have said it here before: >> > >> > To refresh others: Our group was the ARPANET node at Illinois and >> developed 3 OSs from scratch. One proven secure, and one which experimented >> fairly deeply with the leverage of an extensible OS language, in addition >> to the what is recounted here. Pete Alsberg became the manager of the group >> at one point (slightly tweaked): >> > >> >> Around 75, Alsberg was teaching an OS class at DCL. One of the Plato >> guys was taking the course. They were in the process of building the new >> Plato OS. They did a computation in class and predicted that the system >> would top out at so many terminals. (Well, below their target.) Soon after >> that, Alsberg was at a party with the guy in charge of the system >> development. (The name is on the tip of my tongue, I can see his face. >> Remind me of a few names, it wasn?t Bitzer, Finally came to me. (Weird) It >> was Paul Tenszar (sp? that doesn?t look right) ). The guy got a funny look >> on his face. They had hit the limit that day and it was within one or two >> terminals of the prediction. >> >> >> >> As I said earlier, we put the first Unix on the ?Net in the summer of >> ?75. Got a few of the Plato terminals. Stripped down Unix to fit on an >> LSI-11, added touch to the screen. all on a short cabinet that so you could >> sit at and use it. Hooked up to NARIS the land-use planning system for the >> 6 counties around Chicago as an ?intelligent terminal.? You could do maps >> down to the square mile with different patterns for use, the usual graphs, >> etc. all without a keyboard. (it had a keyboard but it was only needed for >> data entry.) We also installed a few of the terminals with different >> application in DC (Reston), at CINCPAC in Hawaii, and possible a few others >> I don?t remember. While we were doing that, we also added non-blocking IPC >> to UNIX. All it had was pipes. NARIS was a distributed database application >> that used databases on both coasts and we used the ARPANET. We did a fair >> amount of distributed database research in that period as well and worked >> out the issues of network partition, which as near as I can tell are still >> not clearly understood. >> >> >> >> I am afraid we just didn?t see Plato as all that interesting, a >> traditional mainframe and terminals system. And they pretty well kept to >> themselves. We were probably both wrong. >> > >> > Take care, >> > John >> > >> > The reason I remember the one at CINCPAC was that one of the stories >> that got back was, that they had been invited to an evening at the CINC?s >> home. The next Saturday morning our guy showed up at CINCPAC in a T-shirt >> and shorts to work on the installation. The guard was giving him grief, >> when the CINC strode in. Greeted our guy with a friendly remark about the >> gathering a few nights before and headed on in. The following conversation >> ensued: The guard a bit uncomfortable: You know the CINC? Yup. You know him >> pretty well? Yup. Errr, I guess it would be okay. ;-) And our guy was able >> to get to work. 
>> > >> >> On Aug 21, 2021, at 07:58, Vint Cerf via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> >> >> I worked on ARPANET from 1968-1972 and then started on Internet in >> 1973. I >> >> joined ARPA in 1976 and stayed there working on Internet until end of >> 1982. >> >> I visited the campus, met with Don Bitzer in the late 1960s (hazy >> memory), >> >> was impressed by the display technology, but, honestly, do not recall >> much >> >> influence on the Internet work. It was certainly an example of >> time-shared >> >> application that could be expanded by way of networks like ARPANET and >> >> Internet but I don't recall direct influence on, e.g., protocol >> >> development, packet switch design, application layer protocols. >> >> >> >> v >> >> >> >> >> >> On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history < >> >> internet-history at elists.isoc.org> wrote: >> >> >> >>> I will. I was a DCL rat, and we'd just occasionally meet people from >> PLATO, >> >>> but that was it. >> >>> >> >>> I had a friend who took Latin and they used PLATO. I also used it in a >> >>> Psych 100 experiment. And that's the extent of my contact. >> >>> >> >>> On Fri, Aug 20, 2021 at 1:30 PM Brian Dear >> wrote: >> >>> >> >>>> Bob, >> >>>> >> >>>> Might I suggest, if you?re curious about PLATO, you rely on a more >> >>>> in-depth history, available in my book The Friendly Orange Glow: The >> >>> Untold >> >>>> Story of the PLATO System and the Dawn of Cyberculture (Pantheon, >> 2017) >> >>> [1] >> >>>> which that 33-minute podcast episode seems to be a hodgepodge >> summary of. >> >>>> In the real deal, my book, you might find a more engaging >> exploration of >> >>>> the historical, technological, business, and societal influences of >> PLATO >> >>>> and why it?s important. >> >>>> >> >>>> Regarding your being at U of I at the same time: if you were an >> undergrad >> >>>> from say 64-68, there?s a very good chance you would not have come >> across >> >>>> PLATO which was still in its formative stages and not deployed >> widely at >> >>>> all on campus. Things started scaling significantly around 1972 with >> the >> >>>> launch of the CDC CYBER mainframe-based system that grew to over 1000 >> >>>> terminals, all over campus. However, if you were working on a >> Master?s >> >>>> degree within the ivory tower of the CS dept from 68-73, a dept that >> with >> >>>> few exceptions looked down upon PLATO as a silly toy not worthy of >> even >> >>>> brief curiosity, it?s possible you still would have overlooked it. >> Even >> >>>> though DCL was very close the CERL lab on Mathews Ave. >> >>>> >> >>>> Anyway, check out the book?it?s all about the Illinois story, as >> well as >> >>>> the influence (in both directions) between the PLATO project and the >> >>> Xerox >> >>>> PARC Alto/SmallTalk/Dynabook projects. 
>> >>>> >> >>>> - Brian >> >>>> >> >>>> [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) >> >>>> >> >>>> >> >>>> >> >>>> On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < >> >>>> internet-history at elists.isoc.org> wrote: >> >>>> >> >>>> I just listened to the episode >> >>>> < >> >>>> >> >>> https://podcasts.apple.com/us/podcast/the-history-of- >> computing/id1472463802?i=1000511301793 >> >>>>> >> >>>> about >> >>>> PLATO on The History of Computing podcast, mostly because I'm being >> >>>> interviewed for it tomorrow on my book >> >>>> < >> >>>> >> >>> https://www.amazon.com/Inventing-Future-Albert-Cory/ >> dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= >> >>>>> >> >>>> . >> >>>> >> >>>> I know we've covered this before, but I think the "influence" of >> PLATO >> >>> is a >> >>>> bit overstated. I hesitate to be too dogmatic about that, but after >> all, >> >>>> you would think I'd have heard more about it, being at the U of I at >> the >> >>>> same time as he's talking about here. Maybe it had more influence at >> >>>> *other* >> >>>> sites? >> >>>> >> >>>> On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < >> >>>> internet-history at elists.isoc.org> wrote: >> >>>> >> >>>> Forgot reply-all. >> >>>> >> >>>> Begin forwarded message: >> >>>> >> >>>> From: John Day >> >>>> Subject: Re: [ih] How Plato Influenced the Internet >> >>>> Date: June 10, 2021 at 14:46:35 EDT >> >>>> To: Clem Cole >> >>>> >> >>>> Plato had very little if any influence on the ARPANET. I can?t say >> about >> >>>> >> >>>> the other way. We were the ARPANET node and saw very little of >> them. We >> >>>> were in different buildings on the engineering campus a couple of >> blocks >> >>>> from each other, neither of which was the CS building. This is >> probably a >> >>>> case of people looking at similar problems and coming to similar >> >>>> conclusions, or from the authors point of view, doing the same thing >> in >> >>>> totally different ways. >> >>>> >> >>>> >> >>>> I do remember once when the leader of our group, Pete Alsberg, was >> >>>> >> >>>> teaching an OS class and someone from Plato was taking it and >> brought up >> >>>> what they were doing for the next major system release. In class, >> they >> >>> did >> >>>> a back of the envelope calculation of when the design would hit the >> wall. >> >>>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found >> >>>> himself talking to Bitzer and related the story from the class. >> Bitzer >> >>> got >> >>>> kind of embarrassed and it turned out they had hit the wall a couple >> of >> >>>> days before as the class? estimate predicted. ;-) Other than having >> >>>> screens we could use, we didn?t put much stock in their work. >> >>>> >> >>>> >> >>>> (The wikipedia page on Plato says it was first used Illiac I. It may >> be >> >>>> >> >>>> true, but it must not have done much because Illiac I had 40 bit >> words >> >>> with >> >>>> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( >> and II >> >>>> and III) were asynchronous hardware.) >> >>>> >> >>>> >> >>>> As Ryoko always said, I could be wrong, but I doubt it. >> >>>> >> >>>> John >> >>>> >> >>>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < >> >>>> >> >>>> internet-history at elists.isoc.org> wrote: >> >>>> >> >>>> >> >>>> FWIW: Since Plato was just brought up, I'll point a vector to some >> >>>> >> >>>> folks. 
>> >>>>
>> >>>> If you read Dear's book, it tends to credit the 'walled garden'
>> >>>> system Plato with a lot of the things the Internet would eventually
>> >>>> be known for.
>> >>>>
>> >>>> How
>> >>>>
>> >>>> much truth there is, I can not say.  But there is a lot of good
>> >>>> stuff in here and it really did impact a lot of us as we certainly
>> >>>> had seen that scheme, when we started to do things later.
>> >>>>
>> >>>> So ... if you have not yet read it, see if you can get a copy of
>> >>>> Brian Dear's *The Friendly Orange Glow: The Untold Story of the
>> >>>> PLATO System
>> >>>>
>> >>>> and
>> >>>>
>> >>>> the Dawn of Cyberculture* ISBN-10 1101871555
>> >>>>
>> >>>> In my own case, Plato was used for some Physics courses and I
>> >>>> personally never was one of the 'Plato ga-ga' type folks, although
>> >>>> I did take one course using it and thought the graphics were pretty
>> >>>> slick.
>> >>>>
>> >>>> But, I
>> >>>>
>> >>>> had all the computing power I needed with full ARPANET access between
>> >>>>
>> >>>> the
>> >>>>
>> >>>> Computer Center and CMU's EE and CS Depts.  But I do have friends
>> >>>> that
>> >>>>
>> >>>> were
>> >>>>
>> >>>> Physics, Chem E, and Mat Sci that all thought it was amazing and
>> >>>> liked
>> >>>>
>> >>>> it
>> >>>>
>> >>>> much better than the required FORTRAN course they had to take using
>> >>>> TSS
>> >>>>
>> >>>> on
>> >>>>
>> >>>> the IBM 360/67.
>> >>>> --
>> >>>> Internet-history mailing list
>> >>>> Internet-history at elists.isoc.org
>> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Internet-history mailing list
>> >>>> Internet-history at elists.isoc.org
>> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
>> >>>>
>> >>>> --
>> >>>> Internet-history mailing list
>> >>>> Internet-history at elists.isoc.org
>> >>>> https://elists.isoc.org/mailman/listinfo/internet-history
>> >>>>
>> >>>>
>> >>>>
>> >>> --
>> >>> Internet-history mailing list
>> >>> Internet-history at elists.isoc.org
>> >>> https://elists.isoc.org/mailman/listinfo/internet-history
>> >>>
>> >>
>> >>
>> >> --
>> >> Please send any postal/overnight deliveries to:
>> >> Vint Cerf
>> >> 1435 Woodhurst Blvd
>> >> McLean, VA 22102
>> >> 703-448-0965
>> >>
>> >> until further notice
>> >> --
>> >> Internet-history mailing list
>> >> Internet-history at elists.isoc.org
>> >> https://elists.isoc.org/mailman/listinfo/internet-history
>> >
>> > --
>> > Internet-history mailing list
>> > Internet-history at elists.isoc.org
>> > https://elists.isoc.org/mailman/listinfo/internet-history
>>
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>>

From jeanjour at comcast.net  Mon Aug 23 07:12:12 2021
From: jeanjour at comcast.net (John Day)
Date: Mon, 23 Aug 2021 10:12:12 -0400
Subject: [ih] How Plato Influenced the Internet
In-Reply-To: 
References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net>
 <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net>
 <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org>
Message-ID: 

Yea, I would say that Engelbart's NLS had more influence on us and the
people at PARC.  We had TNLS running over the 'Net at Illinois.  Many of
Engelbart's group ended up at PARC.  With graphics, Ivan Sutherland and
Newman and Sproul were probably the biggest influence.  We weren't doing a
lot of fancy stuff, although I remember Knowles coming up with a pretty
nifty hidden line algorithm.
As for chat-like programs, TENEX had the ability to share screens and in 1970 or 71, Jim Calvin, either just before he left Case-Western or just after he got to BBN, ?extended? it as a multi-user ?teleconferencing? program. That *was* a social media. Most of us had terminals at home and we would spend hours in the evenings discussing the problems of the world, collaborating on code, etc. Someone wrote articles on it and demonstrated it at ICCC ?72. Most of those applications were invented several times. It is not uncommon in the history of technology (it has been observed back several centuries) that it isn?t so much direct transfer of technology but more someone brings back a story along the lines of, ?I saw this thing that did thus and so and kind of looks like t.? Which gives someone the idea, that if it exists, then how it must work like this.? It isn?t quite independent invention, but it isn?t quite direct influence either. John > On Aug 23, 2021, at 09:34, Clem Cole wrote: > > below... > > On Sun, Aug 22, 2021 at 11:35 PM Brian Dear via Internet-history > wrote: > live 2-way instant messaging in character-by-character typed chat, which Unix people later implemented as ?talk? > I'm in an interesting position here because I started this thread and I am author the author of Unix talk [and also person responsible for the horrid error sending the rendezvous information in vax native order, not network order]. > > As I said in my original email, I played with Plato, most games and graphics as an undergrad; but I had access the PDP-10's, the GPD2 - Graphics Wonders, the ARPAnet and UNIX which had a much higher influence on me. I think Brian is right, that some people like Ray Ozzie,have said Plato had a profound influence on them. I do think that people that saw some of the features of Plato, remembered them when they did other systems. > > What I took from my limited Plato use, was how simple graphics could be more easily integrated. The GDPs were (are awesome) but took an PDP-11/20 to drive them and a lot of programming. I was also introduced to PLOT10 on the IBM S/360 running TSS, as it turns out before I saw the GDPs. Later I would have two of the 'Killer-Bs' [Kelly Booth and John Beatty] as officemates at Tektronix Labs, which very much polished that thinking about graphics, when we did the Magnolia workstation. But without a doubt, an early experience trying to write a 'program' to draw on the screen was with Plato, which I found easier than trying to do something similar in FORTRAN and PLOT-10. > > That said, Brian, I never saw or used the PLATO 2-way chat scheme, so it did not have any effect on me when I wrote talk(1). The UNIX program was born from need. Many of us hated walking up the hill from our apts more than once a day [Cory and Evan's hall are about ? way up the Berkeley hills -- most cheap grad apts were in 'down the hill' nearing Berkeley's downtown or Emeryville]. As grad students, we could only afford a single phone line at home, so talk(1) was created so I could ask one of my officemates to mount a mag tape or reboot a hung system in the UCB CAD lab, without having to hang up the phone line. We had the Unix write(1) that I think Ken wrote originally. That certainly was an influence, and I wanted something a little more interactive. Peter Moore suggested (and built) the split screen idea using the curses library, as the original version has been line-by-line, more like write(1); which the sources to it, I do not think left the CAD machines. 
Sam Leffler got it from me for the 4.1a release. > > Talk was developed not as a social thing, it was convience to allow us to do work in the evening. Which I think is different from what Brian describes in his book. Yes, it might have later been used for that also, but Plato did not have any influence. > > Clem Cole > ? From jeanjour at comcast.net Mon Aug 23 07:15:23 2021 From: jeanjour at comcast.net (John Day) Date: Mon, 23 Aug 2021 10:15:23 -0400 Subject: [ih] Fwd: How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: <8FEC94E2-254D-424C-A437-BAF59FEB6C08@comcast.net> See the email I just sent. And how many ways would an intelligent person do it. ;-) There is always that. It isn?t that surprising that they look the same. > On Aug 23, 2021, at 09:57, Brian Dear wrote: > > To be clear, PLATO III (1963-1972) had an graphical, interactive charset/bitmap editor (created to replace the extremely tedious exercise of creating character sets by declaring a series of octal codes), and that graphical editor was ported to the more sophisticated PLATO IV (1972 onwards). The PLATO III bitmap editor was first done around 1967. > > I was never able to establish a direct linkage between the PARC?s bitmap font editor and PLATO?s, but put ?em side to side and the resemblance is uncanny. Would welcome any insights from PARC alumni. > > - Brian > > On Sunday, August 22, 2021, Bob Purvy > wrote: > I'll just pick up on one thing you said: > > the charset editor (which inspired Xerox PARC to do a bitmap graphics editor for the Alto), > > This is easily checked. Charles Simonyi is still around, as is Tom Malloy. Are they going to argue with that, or accept it? > > On Sun, Aug 22, 2021 at 8:35 PM Brian Dear via Internet-history > wrote: > No disrespect, but I can?t help chuckle a tiny bit reading these assessments/dismissals of PLATO by ARPANET folks and computer science and engineering folks. (I?m reminded of the movie Amadeus, when Mozart is commenting on Salieri?s music.) > > Of course there?s no PLATO influence on ARPANET or ?the Internet? at the low levels, including protocols, etc. PLATO?s utterly custom architecture was designed for a very specific turnkey education system with homogenous hardware. That was its mission. > > PLATO wasn?t designed for packet networks, which had too much overhead for the fast responsiveness PLATO's creators insisted on (in my book I call it ?the fast round trip??about 120ms character echo response time 100% of the time). > > In 1973, a team at UC Santa Barbara spent about a year attempting to connect a PLATO IV terminal, with the fancy gas-plasma graphics display, to the ARPANET, in order to play the games that was quickly becoming legend on PLATO (see RFC 600). > > Writing off PLATO as ?just another time-sharing system? completely misses the point of PLATO?s narrow, specialized purpose and mission, funded by ARPA and NSF, and more importantly misses the historical significance of the explosion, starting in 1973, of unexpected, yet incredibly clever and creative applications that were built on top of an architecture intended for delivery of interactive computer-based educational lessons, simulations, testing, and so on. 
This explosion of unexpected apps contributed by the user/hacker community included TERM-talk (live 2-way instant messaging in character-by-character typed chat, which Unix people later implemented as ?talk? on that OS); chat rooms (akin to early IRC, but also live character-by-character typed chat this time in multiple channels, some open, some private); monitor mode (full screen sharing while in a TERM-talk), Pnotes (PLATO?s email application), TERM-consult (live tech support chat with online support teams), Notes (PLATO?s message forum system), the charset editor (which inspired Xerox PARC to do a bitmap graphics editor for the Alto), and of course the countless multiplayer graphical games, including tons of MUDs, 3D space and airplane simulators, etc. There was no other time-sharing system that had the applications PLATO had, nor the platform for creative expression that PLATO had in the 70s. But it meant nothing to the ARPANET crowd and was essentially dismissed as unimportant. > > PLATO?s always seemed to me like a "digital Galapagos." It evolved off on its own, with few if any predators, utterly disconnected from the rest of the world. The rest of the world would choose?wisely?to build a heterogenous network of protocols and standards to which all sorts of hardware and networks of networks could connect. That was a brilliant solution that enabled scale?something that was going to be exceedingly difficult, impractical, and expensive to do with PLATO. So we wound up with the Net and you had PLATO becoming more and more foreign and impossibly exotic and detached. However, PLATO hackers went off and worked at Apple, IBM, DEC, Data General, Sun, Atari, Google, and many other places. (PLATO Notes was the direct inspiration for Ray Ozzie, a PLATO software engineer and student at U of I, who later who named his Lotus application ?Notes?, which about 100 million people used for a little while, but I digress.) > > Years later, in a 25000-square-foot exhibit of the first 2000 years of computing, the esteemed Computer History Museum chose not to include any PLATO hardware or software or mention any of the countless innovations done during that era by that community, and in fact the only fleeting mention of the word ?PLATO? in the entire exhibit space at all was (and I don?t even know if it?s still there) found inside a small glass exhibit case near the end of the long IKEA-like maze through the museum rooms, that depicted tiny screen shots from AOL, CompuServe, Prodigy . . . and PLATO . . . with a little caption dismissing the whole lot as examples of ?walled gardens." > > So it goes. > > - Brian > > > > > On Aug 21, 2021, at 7:25 AM, John Day via Internet-history > wrote: > > > > Yea, the display technology was interesting but otherwise it was just another timesharing system, and really not all that great of one. Here is part of what I sent them last night off-list. I didn?t think others would be interested. I have said it here before: > > > > To refresh others: Our group was the ARPANET node at Illinois and developed 3 OSs from scratch. One proven secure, and one which experimented fairly deeply with the leverage of an extensible OS language, in addition to the what is recounted here. Pete Alsberg became the manager of the group at one point (slightly tweaked): > > > >> Around 75, Alsberg was teaching an OS class at DCL. One of the Plato guys was taking the course. They were in the process of building the new Plato OS. 
They did a computation in class and predicted that the system would top out at so many terminals. (Well, below their target.) Soon after that, Alsberg was at a party with the guy in charge of the system development. (The name is on the tip of my tongue, I can see his face. Remind me of a few names, it wasn?t Bitzer, Finally came to me. (Weird) It was Paul Tenszar (sp? that doesn?t look right) ). The guy got a funny look on his face. They had hit the limit that day and it was within one or two terminals of the prediction. > >> > >> As I said earlier, we put the first Unix on the ?Net in the summer of ?75. Got a few of the Plato terminals. Stripped down Unix to fit on an LSI-11, added touch to the screen. all on a short cabinet that so you could sit at and use it. Hooked up to NARIS the land-use planning system for the 6 counties around Chicago as an ?intelligent terminal.? You could do maps down to the square mile with different patterns for use, the usual graphs, etc. all without a keyboard. (it had a keyboard but it was only needed for data entry.) We also installed a few of the terminals with different application in DC (Reston), at CINCPAC in Hawaii, and possible a few others I don?t remember. While we were doing that, we also added non-blocking IPC to UNIX. All it had was pipes. NARIS was a distributed database application that used databases on both coasts and we used the ARPANET. We did a fair amount of distributed database research in that period as well and worked out the issues of network partition, which as near as I can tell are still not clearly understood. > >> > >> I am afraid we just didn?t see Plato as all that interesting, a traditional mainframe and terminals system. And they pretty well kept to themselves. We were probably both wrong. > > > > Take care, > > John > > > > The reason I remember the one at CINCPAC was that one of the stories that got back was, that they had been invited to an evening at the CINC?s home. The next Saturday morning our guy showed up at CINCPAC in a T-shirt and shorts to work on the installation. The guard was giving him grief, when the CINC strode in. Greeted our guy with a friendly remark about the gathering a few nights before and headed on in. The following conversation ensued: The guard a bit uncomfortable: You know the CINC? Yup. You know him pretty well? Yup. Errr, I guess it would be okay. ;-) And our guy was able to get to work. > > > >> On Aug 21, 2021, at 07:58, Vint Cerf via Internet-history > wrote: > >> > >> I worked on ARPANET from 1968-1972 and then started on Internet in 1973. I > >> joined ARPA in 1976 and stayed there working on Internet until end of 1982. > >> I visited the campus, met with Don Bitzer in the late 1960s (hazy memory), > >> was impressed by the display technology, but, honestly, do not recall much > >> influence on the Internet work. It was certainly an example of time-shared > >> application that could be expanded by way of networks like ARPANET and > >> Internet but I don't recall direct influence on, e.g., protocol > >> development, packet switch design, application layer protocols. > >> > >> v > >> > >> > >> On Fri, Aug 20, 2021 at 4:35 PM Bob Purvy via Internet-history < > >> internet-history at elists.isoc.org > wrote: > >> > >>> I will. I was a DCL rat, and we'd just occasionally meet people from PLATO, > >>> but that was it. > >>> > >>> I had a friend who took Latin and they used PLATO. I also used it in a > >>> Psych 100 experiment. And that's the extent of my contact. 
> >>> > >>> On Fri, Aug 20, 2021 at 1:30 PM Brian Dear > wrote: > >>> > >>>> Bob, > >>>> > >>>> Might I suggest, if you?re curious about PLATO, you rely on a more > >>>> in-depth history, available in my book The Friendly Orange Glow: The > >>> Untold > >>>> Story of the PLATO System and the Dawn of Cyberculture (Pantheon, 2017) > >>> [1] > >>>> which that 33-minute podcast episode seems to be a hodgepodge summary of. > >>>> In the real deal, my book, you might find a more engaging exploration of > >>>> the historical, technological, business, and societal influences of PLATO > >>>> and why it?s important. > >>>> > >>>> Regarding your being at U of I at the same time: if you were an undergrad > >>>> from say 64-68, there?s a very good chance you would not have come across > >>>> PLATO which was still in its formative stages and not deployed widely at > >>>> all on campus. Things started scaling significantly around 1972 with the > >>>> launch of the CDC CYBER mainframe-based system that grew to over 1000 > >>>> terminals, all over campus. However, if you were working on a Master?s > >>>> degree within the ivory tower of the CS dept from 68-73, a dept that with > >>>> few exceptions looked down upon PLATO as a silly toy not worthy of even > >>>> brief curiosity, it?s possible you still would have overlooked it. Even > >>>> though DCL was very close the CERL lab on Mathews Ave. > >>>> > >>>> Anyway, check out the book?it?s all about the Illinois story, as well as > >>>> the influence (in both directions) between the PLATO project and the > >>> Xerox > >>>> PARC Alto/SmallTalk/Dynabook projects. > >>>> > >>>> - Brian > >>>> > >>>> [1] http://amzn.to/2ol9Lu6 (Amazon link for the book) > >>>> > >>>> > >>>> > >>>> On Jul 5, 2021, at 6:06 PM, Bob Purvy via Internet-history < > >>>> internet-history at elists.isoc.org > wrote: > >>>> > >>>> I just listened to the episode > >>>> < > >>>> > >>> https://podcasts.apple.com/us/podcast/the-history-of-computing/id1472463802?i=1000511301793 > >>>>> > >>>> about > >>>> PLATO on The History of Computing podcast, mostly because I'm being > >>>> interviewed for it tomorrow on my book > >>>> < > >>>> > >>> https://www.amazon.com/Inventing-Future-Albert-Cory/dp/1736298615/ref=tmm_pap_swatch_0?_encoding=UTF8&qid=&sr= > >>>>> > >>>> . > >>>> > >>>> I know we've covered this before, but I think the "influence" of PLATO > >>> is a > >>>> bit overstated. I hesitate to be too dogmatic about that, but after all, > >>>> you would think I'd have heard more about it, being at the U of I at the > >>>> same time as he's talking about here. Maybe it had more influence at > >>>> *other* > >>>> sites? > >>>> > >>>> On Thu, Jun 10, 2021 at 11:48 AM John Day via Internet-history < > >>>> internet-history at elists.isoc.org > wrote: > >>>> > >>>> Forgot reply-all. > >>>> > >>>> Begin forwarded message: > >>>> > >>>> From: John Day > > >>>> Subject: Re: [ih] How Plato Influenced the Internet > >>>> Date: June 10, 2021 at 14:46:35 EDT > >>>> To: Clem Cole > > >>>> > >>>> Plato had very little if any influence on the ARPANET. I can?t say about > >>>> > >>>> the other way. We were the ARPANET node and saw very little of them. We > >>>> were in different buildings on the engineering campus a couple of blocks > >>>> from each other, neither of which was the CS building. This is probably a > >>>> case of people looking at similar problems and coming to similar > >>>> conclusions, or from the authors point of view, doing the same thing in > >>>> totally different ways. 
> >>>> > >>>> > >>>> I do remember once when the leader of our group, Pete Alsberg, was > >>>> > >>>> teaching an OS class and someone from Plato was taking it and brought up > >>>> what they were doing for the next major system release. In class, they > >>> did > >>>> a back of the envelope calculation of when the design would hit the wall. > >>>> That weekend at a party, (Champaign-Urbana isn?t that big) Pete found > >>>> himself talking to Bitzer and related the story from the class. Bitzer > >>> got > >>>> kind of embarrassed and it turned out they had hit the wall a couple of > >>>> days before as the class? estimate predicted. ;-) Other than having > >>>> screens we could use, we didn?t put much stock in their work. > >>>> > >>>> > >>>> (The wikipedia page on Plato says it was first used Illiac I. It may be > >>>> > >>>> true, but it must not have done much because Illiac I had 40 bit words > >>> with > >>>> 1K main memory on Willams tubes and about 12K on drum. Illiac I ( and II > >>>> and III) were asynchronous hardware.) > >>>> > >>>> > >>>> As Ryoko always said, I could be wrong, but I doubt it. > >>>> > >>>> John > >>>> > >>>> On Jun 10, 2021, at 11:48, Clem Cole via Internet-history < > >>>> > >>>> internet-history at elists.isoc.org > wrote: > >>>> > >>>> > >>>> FWIW: Since Plato was just brought up, I'll point a vector to some > >>>> > >>>> folks. > >>>> > >>>> If you read Dear's book, it tends to credit the walled garden' system > >>>> Plato with a lot of the things the Internet would eventually be known. > >>>> > >>>> How > >>>> > >>>> much truth there is, I can not say. But there is a lot of good stuff in > >>>> here and it really did impact a lot of us as we certainly had seen that > >>>> scheme, when we started to do things later. > >>>> > >>>> So ... if you have not yet read it, see if you can get a copy of Brian > >>>> Dear's *The Friendly Orange Glow: The Untold Story of the PLATO System > >>>> > >>>> and > >>>> > >>>> the Dawn of Cyberculture* ISBN-10 1101871555 > >>>> > >>>> In my own case, Plato was used for some Physics courses and I > >>>> personally never was one of the 'Plato ga-ga' type folks, although I did > >>>> take on course using it and thought the graphics were pretty slick. > >>>> > >>>> But, I > >>>> > >>>> had all the computing power I needed with full ARPANET access between > >>>> > >>>> the > >>>> > >>>> Computer Center and CMU's EE and CS Depts. But I do have friends that > >>>> > >>>> were > >>>> > >>>> Physics, Chem E, and Mat Sci that all thought it was amazing and liked > >>>> > >>>> it > >>>> > >>>> much better than the required FORTRAN course they had to take using TSS > >>>> > >>>> on > >>>> > >>>> the IBM 360/67. 
> >>>> -- > >>>> Internet-history mailing list > >>>> Internet-history at elists.isoc.org > >>>> https://elists.isoc.org/mailman/listinfo/internet-history > >>>> > >>>> > >>>> > >>>> -- > >>>> Internet-history mailing list > >>>> Internet-history at elists.isoc.org > >>>> https://elists.isoc.org/mailman/listinfo/internet-history > >>>> > >>>> -- > >>>> Internet-history mailing list > >>>> Internet-history at elists.isoc.org > >>>> https://elists.isoc.org/mailman/listinfo/internet-history > >>>> > >>>> > >>>> > >>> -- > >>> Internet-history mailing list > >>> Internet-history at elists.isoc.org > >>> https://elists.isoc.org/mailman/listinfo/internet-history > >>> > >> > >> > >> -- > >> Please send any postal/overnight deliveries to: > >> Vint Cerf > >> 1435 Woodhurst Blvd > >> McLean, VA 22102 > >> 703-448-0965 > >> > >> until further notice > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From dhc at dcrocker.net Mon Aug 23 07:18:05 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 23 Aug 2021 07:18:05 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: <938f5b98-491b-423e-6e8f-a44c672a3604@dcrocker.net> On 8/23/2021 7:12 AM, John Day via Internet-history wrote: > It is not uncommon in the history of technology (it has been observed back several centuries) that it isn?t so much direct transfer of technology but more someone brings back a story along the lines of, ?I saw this thing that did thus and so and kind of looks like t.? At the Munich 'IETF' meeting in 1982 - Vint announced he was leaving Arpa. I don't remember whether he said he'd be going to MCI, though that meeting was when he recruited me for the MCI Mail project. Anyhow, his announcement included the comment that technology does not transfer. People do. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From vgcerf at gmail.com Mon Aug 23 09:02:10 2021 From: vgcerf at gmail.com (vinton cerf) Date: Mon, 23 Aug 2021 12:02:10 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: i thought the hidden line algorithm was done by John Warnock while at University of Utah? v On Mon, Aug 23, 2021 at 10:12 AM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > Yea, I would say that Englebart?s NLS had more influence on us and the > people at PARC. We had TNLS running over the ?Net at Illinois. Many of > Englebart?s group ended up at PARC. With graphics, Ivan Sutherland and > Newman and Sproul were probably get biggest influence. We weren?t doing a > lot of fancy stuff, although I remember Knowles coming up with a pretty > nifty hidden line algorithm. > > As for chat-like programs, TENEX had the ability to share screens and in > 1970 or 71, Jim Calvin, either just before he left Case-Western or just > after he got to BBN, ?extended? 
it as a multi-user ?teleconferencing? > program. That *was* a social media. Most of us had terminals at home and we > would spend hours in the evenings discussing the problems of the world, > collaborating on code, etc. Someone wrote articles on it and demonstrated > it at ICCC ?72. Most of those applications were invented several times. > > It is not uncommon in the history of technology (it has been observed back > several centuries) that it isn?t so much direct transfer of technology but > more someone brings back a story along the lines of, ?I saw this thing that > did thus and so and kind of looks like t.? Which gives someone the idea, > that if it exists, then how it must work like this.? It isn?t quite > independent invention, but it isn?t quite direct influence either. > > John > > > On Aug 23, 2021, at 09:34, Clem Cole wrote: > > > > below... > > > > On Sun, Aug 22, 2021 at 11:35 PM Brian Dear via Internet-history < > internet-history at elists.isoc.org > > wrote: > > live 2-way instant messaging in character-by-character typed chat, which > Unix people later implemented as ?talk? > > I'm in an interesting position here because I started this thread and I > am author the author of Unix talk [and also person responsible for the > horrid error sending the rendezvous information in vax native order, not > network order]. > > > > As I said in my original email, I played with Plato, most games and > graphics as an undergrad; but I had access the PDP-10's, the GPD2 - > Graphics Wonders, the ARPAnet and UNIX which had a much higher influence on > me. I think Brian is right, that some people like Ray Ozzie,have said > Plato had a profound influence on them. I do think that people that saw > some of the features of Plato, remembered them when they did other systems. > > > > What I took from my limited Plato use, was how simple graphics could be > more easily integrated. The GDPs were (are awesome) but took an PDP-11/20 > to drive them and a lot of programming. I was also introduced to PLOT10 on > the IBM S/360 running TSS, as it turns out before I saw the GDPs. Later I > would have two of the 'Killer-Bs' [Kelly Booth and John Beatty] as > officemates at Tektronix Labs, which very much polished that thinking about > graphics, when we did the Magnolia < > http://bitsavers.trailing-edge.com/pdf/tektronix/magnolia/> workstation. > But without a doubt, an early experience trying to write a 'program' to > draw on the screen was with Plato, which I found easier than trying to do > something similar in FORTRAN and PLOT-10. > > > > That said, Brian, I never saw or used the PLATO 2-way chat scheme, so it > did not have any effect on me when I wrote talk(1). The UNIX program was > born from need. Many of us hated walking up the hill from our apts more > than once a day [Cory and Evan's hall are about ? way up the Berkeley hills > -- most cheap grad apts were in 'down the hill' nearing Berkeley's downtown > or Emeryville]. As grad students, we could only afford a single phone > line at home, so talk(1) was created so I could ask one of my officemates > to mount a mag tape or reboot a hung system in the UCB CAD lab, without > having to hang up the phone line. We had the Unix write(1) that I think > Ken wrote originally. That certainly was an influence, and I wanted > something a little more interactive. 
Peter Moore suggested (and built) the > split screen idea using the curses library, as the original version has > been line-by-line, more like write(1); which the sources to it, I do not > think left the CAD machines. Sam Leffler got it from me for the 4.1a > release. > > > > Talk was developed not as a social thing, it was convience to allow us > to do work in the evening. Which I think is different from what Brian > describes in his book. Yes, it might have later been used for that also, > but Plato did not have any influence. > > > > Clem Cole > > ? > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From craig at tereschau.net Mon Aug 23 11:46:01 2021 From: craig at tereschau.net (Craig Partridge) Date: Mon, 23 Aug 2021 12:46:01 -0600 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > It is not uncommon in the history of technology (it has been observed back > several centuries) that it isn?t so much direct transfer of technology but > more someone brings back a story along the lines of, ?I saw this thing that > did thus and so and kind of looks like t.? Which gives someone the idea, > that if it exists, then how it must work like this.? It isn?t quite > independent invention, but it isn?t quite direct influence either. > > > Related comment -- from my various interactions with historians about technology history. If the available technology is limited (as it was in the 1950s/60s/70s and early 1980s in many dimensions) then your solutions to certain problems are going to look rather similar. That doesn't meant that two similar solutions influenced each other... The trick in writing tech history is figuring out where there was a choice space and where there wasn't (much of) one. Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. From dhc at dcrocker.net Mon Aug 23 11:58:38 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 23 Aug 2021 11:58:38 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> On 8/23/2021 11:46 AM, Craig Partridge via Internet-history wrote: > If the available technology is limited (as it was in > the 1950s/60s/70s and early 1980s in many dimensions) then your solutions > to certain problems are going to look rather similar. That explains why ISDN looks so much like the TCP/IP? d/ ps. yes, I should apologize. no, I won't. -- Dave Crocker Brandenburg InternetWorking bbiw.net From craig at tereschau.net Mon Aug 23 12:04:46 2021 From: craig at tereschau.net (Craig Partridge) Date: Mon, 23 Aug 2021 13:04:46 -0600 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> Message-ID: I think you're making my point.... 
in communications, once one invented packet switching (which I think we all acknowledge was a big step), then the choice space exploded in communications. Craig On Mon, Aug 23, 2021 at 12:58 PM Dave Crocker wrote: > On 8/23/2021 11:46 AM, Craig Partridge via Internet-history wrote: > > If the available technology is limited (as it was in > > the 1950s/60s/70s and early 1980s in many dimensions) then your solutions > > to certain problems are going to look rather similar. > > > That explains why ISDN looks so much like the TCP/IP? > > > d/ > > ps. yes, I should apologize. no, I won't. > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From jhlowry at mac.com Mon Aug 23 12:07:59 2021 From: jhlowry at mac.com (John Lowry) Date: Mon, 23 Aug 2021 15:07:59 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> Message-ID: You mean that the medium is the message ? > On Aug 23, 2021, at 14:58, Dave Crocker via Internet-history wrote: > > On 8/23/2021 11:46 AM, Craig Partridge via Internet-history wrote: >> If the available technology is limited (as it was in >> the 1950s/60s/70s and early 1980s in many dimensions) then your solutions >> to certain problems are going to look rather similar. > > > That explains why ISDN looks so much like the TCP/IP? > > > d/ > > ps. yes, I should apologize. no, I won't. > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Mon Aug 23 12:10:31 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 23 Aug 2021 12:10:31 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> Message-ID: .. and indeed, I wrote a paper about that. Mark Lemley cited it in his *amicus* brief to SCOTUS. If you hate software patents, this is for you. For a patent application to be "non-obvious" there usually has to be an absence of references that suggest combining two or more technologies. Because everything is obvious once you see it. However, there's also the legal notion of "obvious to try" which means that there are standard techniques that anyone skilled in the art would attempt. My argument was that these obvious-to-try techniques exist in software (or networking) and one should not be granted a patent for using them. You can probably imagine that lawyers aren't thrilled with this notion. On Mon, Aug 23, 2021 at 11:59 AM Dave Crocker via Internet-history < internet-history at elists.isoc.org> wrote: > On 8/23/2021 11:46 AM, Craig Partridge via Internet-history wrote: > > If the available technology is limited (as it was in > > the 1950s/60s/70s and early 1980s in many dimensions) then your solutions > > to certain problems are going to look rather similar. > > > That explains why ISDN looks so much like the TCP/IP? > > > d/ > > ps. 
yes, I should apologize. no, I won't. > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dhc at dcrocker.net Mon Aug 23 12:11:22 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 23 Aug 2021 12:11:22 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <9f5e173c-be3b-3faa-3bea-62fa9bf6bace@dcrocker.net> Message-ID: On 8/23/2021 12:07 PM, John Lowry via Internet-history wrote: > You mean that the medium is the message ? In this case, I'm pretty sure the message is the medium. d/ ps. but thanks for the setup. -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Mon Aug 23 12:15:57 2021 From: jeanjour at comcast.net (John Day) Date: Mon, 23 Aug 2021 15:15:57 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> Agreed. There are only so many ways to do something. ;-) > On Aug 23, 2021, at 14:46, Craig Partridge wrote: > > > > On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history > wrote: > It is not uncommon in the history of technology (it has been observed back several centuries) that it isn?t so much direct transfer of technology but more someone brings back a story along the lines of, ?I saw this thing that did thus and so and kind of looks like t.? Which gives someone the idea, that if it exists, then how it must work like this.? It isn?t quite independent invention, but it isn?t quite direct influence either. > > > > Related comment -- from my various interactions with historians about technology history. If the available technology is limited (as it was in the 1950s/60s/70s and early 1980s in many dimensions) then your solutions to certain problems are going to look rather similar. That doesn't meant that two similar solutions influenced each other... The trick in writing tech history is figuring out where there was a choice space and where there wasn't (much of) one. > > Craig > > > -- > ***** > Craig Partridge's email account for professional society activities and mailing lists. From brian.e.carpenter at gmail.com Mon Aug 23 14:15:45 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 24 Aug 2021 09:15:45 +1200 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> Message-ID: <24617c87-c7b9-fb8a-b53d-d3721be3fec8@gmail.com> On 24-Aug-21 06:46, Craig Partridge via Internet-history wrote: > On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> It is not uncommon in the history of technology (it has been observed back >> several centuries) that it isn?t so much direct transfer of technology but >> more someone brings back a story along the lines of, ?I saw this thing that >> did thus and so and kind of looks like t.? Which gives someone the idea, >> that if it exists, then how it must work like this.? 
It isn't quite >> independent invention, but it isn't quite direct influence either. >> >> >> > Related comment -- from my various interactions with historians about > technology history. If the available technology is limited (as it was in > the 1950s/60s/70s and early 1980s in many dimensions) then your solutions > to certain problems are going to look rather similar. That doesn't mean > that two similar solutions influenced each other... The trick in writing > tech history is figuring out where there was a choice space and where there > wasn't (much of) one. Absolutely. It goes back to the 1940s, in fact, if not to the Jacquard loom. There's a reason I co-wrote a book chapter called "Turing's Zeitgeist." (ISBN 9780198747826, pre-print at https://www.cs.auckland.ac.nz/~brian/TuringZeitgeistPreprint.pdf .) Incidentally, re the subject line of this thread, I used to annoy Tim Berners-Lee by telling him that the web was "the fluff on top of the Internet". I think this discussion has confused matters a bit, because that's what PLATO presumably influenced, not the infrastructure. Regards Brian Carpenter From clemc at ccc.com Mon Aug 23 14:21:03 2021 From: clemc at ccc.com (Clem Cole) Date: Mon, 23 Aug 2021 17:21:03 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <24617c87-c7b9-fb8a-b53d-d3721be3fec8@gmail.com> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <24617c87-c7b9-fb8a-b53d-d3721be3fec8@gmail.com> Message-ID: Brian On Mon, Aug 23, 2021 at 5:15 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > Incidentally, re the subject line of this thread, I used to annoy Tim > Berners-Lee by telling him that the web was "the fluff on top of the > Internet". I think this discussion has confused matters a bit, because > that's what PLATO presumably influenced, not the infrastructure. > As the one that made the error starting it originally - fair enough, I could not agree more. Mea culpa. Clem From dhc at dcrocker.net Mon Aug 23 15:23:53 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 23 Aug 2021 15:23:53 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <24617c87-c7b9-fb8a-b53d-d3721be3fec8@gmail.com> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <24617c87-c7b9-fb8a-b53d-d3721be3fec8@gmail.com> Message-ID: <643407c6-ca3f-cd1f-11c8-e4c887ead3d9@dcrocker.net> On 8/23/2021 2:15 PM, Brian E Carpenter via Internet-history wrote: > Incidentally, re the subject line of this thread, I used to annoy Tim > Berners-Lee by telling him that the web was "the fluff on top of the > Internet". I think this discussion has confused matters a bit, because > that's what PLATO presumably influenced, not the infrastructure. Awareness of Plato was certainly pervasive in the Arpanet community and, I think, it was well-regarded. But I believe it had no direct influence on the development of any of the popular application protocols. That leaves what we now call UX, perhaps. That is, the fluff on top of the fluff.
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jack at 3kitty.org Mon Aug 23 19:29:35 2021 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 23 Aug 2021 19:29:35 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> Message-ID: <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Back in the 60s, a lot of computer technology was not yet cast in concrete.? There were lots of choices.? But then someone pursues one choice, and if it works reasonably well, others follow the same path.?? It doesn't take very long for the "installed base" to become so large that it's unlikely that some other initial choice could easily take over.??? Think about how long it's taken, so far, for IPV6 to supplant IPV4. Sometime around 1968, as a learning experience in some lab course at MIT, I decided to make some non-binary logic.?? At the time, analog computers were still around, and digital computers hadn't yet agreed even on how many bits were in a byte, or how to encode characters, or what order bits should be in a computer memory word.?? But bits were pretty well established. I figured there must be other choices.?? So I made some ternary logic.?? Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as the three possible states.? Electronically it translated into positive, negative, or no current.?? Using transistors and such components, I made some basic logic "gates" that operated using three states instead of two.? Was that a good idea??? Probably not, but it was a good way to learn about circuits.??? Instead of bits (binary digits), how about manipulating trits (trinary digits)? There's nothing magic about 1s and 0s. Shortly thereafter, binary took over as circuitry went into integrated circuits and a whole industry came in to being around binary computers.?? If some other kind of approach, ternary, quaternary, or whatever is better than binary, we'll probably never know.?? I suspect something might happen soon with qubits though to challenge bits supremacy.. There are lots of ways to do things, and the one that "wins" might not have been the best choice. Imagine how networking and computing might have evolved with trits instead of bits.... /Jack On 8/23/21 12:15 PM, John Day via Internet-history wrote: > Agreed. There are only so many ways to do something. ;-) > >> On Aug 23, 2021, at 14:46, Craig Partridge wrote: >> >> >> >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history > wrote: >> It is not uncommon in the history of technology (it has been observed back several centuries) that it isn?t so much direct transfer of technology but more someone brings back a story along the lines of, ?I saw this thing that did thus and so and kind of looks like t.? Which gives someone the idea, that if it exists, then how it must work like this.? It isn?t quite independent invention, but it isn?t quite direct influence either. >> >> >> >> Related comment -- from my various interactions with historians about technology history. If the available technology is limited (as it was in the 1950s/60s/70s and early 1980s in many dimensions) then your solutions to certain problems are going to look rather similar. That doesn't meant that two similar solutions influenced each other... 
The trick in writing tech history is figuring out where there was a choice space and where there wasn't (much of) one. >> >> Craig >> >> >> -- >> ***** >> Craig Partridge's email account for professional society activities and mailing lists. From steve at shinkuro.com Mon Aug 23 19:47:50 2021 From: steve at shinkuro.com (Steve Crocker) Date: Mon, 23 Aug 2021 22:47:50 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: Jack, A classic analysis of bits vs trits says trits are slightly more efficient. The analysis is based on assuming that it takes b parts to represent a digit in base b. Two parts for a bit, three parts for a trit, four parts for a quit(?!). The information content of k digits in base b is b^k. The "cost" is b*k. The optimal base is e (2.71828...). Bases 2 and 4 are equal. (The information content of 2k bits is 2^(2k). The information content of k quits is 4^k. The costs are the same, i.e. 4k.) Trinary is better but not by much. Using six parts, you can make three bits or two trits. The information content of three bits is 8. The information content of two trits is 9. A different consideration of using trinary vs binary is the representation of integers. As we all know from hard experience, twos complement representation of signed integers gives you an asymmetry with one more negative number than positive number. Switch to ones complement and you wind up with two representations of zero. Trinary gives you a naturally symmetric representation of signed integers. I think that's the end of the advantages of trinary over binary. But I'm VERY impressed you took the time and effort to actually build such circuits. Bravo! Steve On Mon, Aug 23, 2021 at 10:29 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Back in the 60s, a lot of computer technology was not yet cast in > concrete. There were lots of choices. But then someone pursues one > choice, and if it works reasonably well, others follow the same path. > It doesn't take very long for the "installed base" to become so large > that it's unlikely that some other initial choice could easily take > over. Think about how long it's taken, so far, for IPV6 to supplant > IPV4. > > Sometime around 1968, as a learning experience in some lab course at > MIT, I decided to make some non-binary logic. At the time, analog > computers were still around, and digital computers hadn't yet agreed > even on how many bits were in a byte, or how to encode characters, or > what order bits should be in a computer memory word. But bits were > pretty well established. > > I figured there must be other choices. So I made some ternary logic. > Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as the > three possible states. Electronically it translated into positive, > negative, or no current. Using transistors and such components, I made > some basic logic "gates" that operated using three states instead of > two. Was that a good idea? Probably not, but it was a good way to > learn about circuits. Instead of bits (binary digits), how about > manipulating trits (trinary digits)? There's nothing magic about 1s and 0s. 
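Steve's arithmetic above is easy to check mechanically. Below is a minimal sketch in Python, assuming the same cost model he describes (a base-b digit costs b "parts", so k digits cost b*k parts and distinguish b^k values); the function names and the balanced-ternary helper are illustrative additions, not anything from the thread:

    # Sketch of the bits-vs-trits cost model described above (illustrative only).
    # Cost model: a base-b digit takes b "parts"; k digits cost b*k parts and
    # can represent b**k distinct values.

    import math

    def values_per_part_budget(base, parts):
        """Distinct values representable with a given budget of parts."""
        digits = parts // base          # whole digits that fit in the budget
        return base ** digits

    # Six parts: three bits vs two trits (8 vs 9 distinct values).
    print(values_per_part_budget(2, 6), values_per_part_budget(3, 6))   # 8 9

    # "Efficiency" of a base under this model: information per part, ln(b)/b.
    # It peaks at b = e (about 2.718), which is why base 3 edges out bases 2 and 4,
    # and why bases 2 and 4 come out exactly equal.
    for b in (2, 3, 4):
        print(b, math.log(b) / b)

    # Balanced ternary (digits -1, 0, +1) gives a naturally symmetric signed range.
    def balanced_ternary(n, digits=5):
        """Return the balanced-ternary digits of n, least significant first."""
        out = []
        for _ in range(digits):
            r = ((n + 1) % 3) - 1       # remainder mapped into {-1, 0, +1}
            out.append(r)
            n = (n - r) // 3
        return out

    print(balanced_ternary(7))   # [1, -1, 1, 0, 0]  ->  1 - 3 + 9 = 7

Running it reproduces the 8-versus-9 comparison and base 3's small edge in information per part, along with the symmetric signed range that balanced ternary (the +1/0/-1 scheme Jack describes) provides.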
> > Shortly thereafter, binary took over as circuitry went into integrated > circuits and a whole industry came in to being around binary > computers. If some other kind of approach, ternary, quaternary, or > whatever is better than binary, we'll probably never know. I suspect > something might happen soon with qubits though to challenge bits > supremacy.. > > There are lots of ways to do things, and the one that "wins" might not > have been the best choice. > > Imagine how networking and computing might have evolved with trits > instead of bits.... > > /Jack > > > On 8/23/21 12:15 PM, John Day via Internet-history wrote: > > Agreed. There are only so many ways to do something. ;-) > > > >> On Aug 23, 2021, at 14:46, Craig Partridge wrote: > >> > >> > >> > >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history < > internet-history at elists.isoc.org > > wrote: > >> It is not uncommon in the history of technology (it has been observed > back several centuries) that it isn?t so much direct transfer of technology > but more someone brings back a story along the lines of, ?I saw this thing > that did thus and so and kind of looks like t.? Which gives someone the > idea, that if it exists, then how it must work like this.? It isn?t quite > independent invention, but it isn?t quite direct influence either. > >> > >> > >> > >> Related comment -- from my various interactions with historians about > technology history. If the available technology is limited (as it was in > the 1950s/60s/70s and early 1980s in many dimensions) then your solutions > to certain problems are going to look rather similar. That doesn't meant > that two similar solutions influenced each other... The trick in writing > tech history is figuring out where there was a choice space and where there > wasn't (much of) one. > >> > >> Craig > >> > >> > >> -- > >> ***** > >> Craig Partridge's email account for professional society activities and > mailing lists. > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jack at 3kitty.org Wed Aug 25 12:13:08 2021 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 25 Aug 2021 12:13:08 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: Hi Steve, I wasn't worrying about efficiency; I just wanted to see if I could figure out how to make some ternary logic.?? IIRC, the class was on digital design, and the lab assignment was basically "build something digital, other than the examples we used in the lectures".?? Rather than building boring flipflops and gates, maybe an adder, I decided to try to create ternary logic.?? After all, they said "digital", so it didn't have to be binary.?? I always tended to think outside the box. Your math is probably correct, but it's only a small part of making a design choice.?? At about the same time of that lab course, I had a student job using a PDP-8 to gather and analyze data from experiments being done at the "Instrumentation Lab".?? They were designing, building, testing, and deploying inertial navigation units that were use in a lot of places, including the Apollo spacecraft.? So I actually got to work with real "rocket scientists".? 
I learned as much from those engineers and scientists as I did from the classes. One thing I learned was that much of the mathematical toolbox from the courses wasn't terribly useful.? For example, there were lots of tools and techniques for minimizing Boolean logic.?? But reams of data had shown that the critical issue for reliability (it's hard to fix things in space) was the mechanical connections involved, e.g., how many pins on a PC board and corresponding socket were needed in the system design.?? More logic circuitry was OK if it meant fewer pins were needed.?? None of the tools and techniques taught in courses even mentioned the issue of pins or other such design questions, such as heat dissipation. So, it's possible that some kind of non-binary logic might have required fewer pins and resulted in more reliable hardware.? Or not.? The choice made early on meant we'd never pursue any other path. Getting back to Plato and the Internet, I can confirm that I never saw or even heard of Plato during the 60s/70s/80s.? So it probably didn't influence me, at least not directly. However, I was surprised to just read how Plato was focussed on latency as a key driver of the users' experience.?? I ran into that same issue at Licklider's MIT lab, when we were trying to bring up MazeWars on our newfangled Imlacs that were used as terminals on the PDP10.?? I spent a bit of time tweaking the RS232 TTY interfaces to get the line speed up around 100 kb/sec (typical max was 9.6 in those days), and that made the Maze game popular.?? When you "shot" an opponent they died as they should.? With higher latency, they'd often inexplicably get away. We tried to convince BBN to upgrade TIPs to run faster, but were rebuffed.? The TIPs supported the "maximum reasonable speed" of 9.6.?? Nothing faster was needed. Later on, circa 1978 while we were rearchitecting TCP to split out TCP and IP, and introduce UDP, I remembered my experience with latency, and pushed for inclusion of "Type Of Service" so that the underlying IP transport might someday be able to offer both low-latency and high-throughput services to meet different users' needs.?? And maybe a "guaranteed bandwidth" service to better mimic old physical circuits. Low latency was also important for things like conversational voice, so the "voice guys" at places like ISI and Lincoln were also interested in having such a capability in the Internet.?? I don't recall that "Plato guys" were involved, but I bet they would have been proponents as well. Sadly, although those experiences certainly "influenced the Internet" to the extent that various header fields and rudimentary mechanisms were included in the emerging TCP that we still have today, there apparently wasn't enough pressure and interest to cause low-latency service to actually get implemented.?? At least as far as I can tell.... I can't see "inside" the Internet now, just as a user today.?? But simply watching the now constant stream of live interviews on TV, and the pixelization, breaking audio, and such artifacts, makes me conclude that low-latency service isn't there yet, after 40 years of evolution. I suspect part of the cause was also the hardware availability, or lack thereof.? Like Plato, Imlacs were not common in "the network community" and neither were voice-capable terminals.? So unless you had one of those, you didn't understand why things like low-latency were needed.?? So the "rough consensus" never emerged for such mechanisms. 
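For what it's worth, the TOS bits described above are still visible at the sockets layer, whether or not anything along the path honors them. A minimal sketch in Python, assuming a Unix-like host that exposes IP_TOS; 0x10 is the RFC 791/1349 "low delay" bit, the DSCP value is the later DiffServ equivalent, and the address and port are placeholders:

    # Sketch: requesting "low delay" treatment on an IPv4 UDP socket via the
    # historical TOS bits (RFC 791 / RFC 1349). Assumes a Unix-like platform
    # that exposes IP_TOS; routers are free to ignore the marking entirely.

    import socket

    IPTOS_LOWDELAY = 0x10        # D bit: minimize delay
    DSCP_EF = 46 << 2            # modern equivalent: Expedited Forwarding code point

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)
    # or, for a DiffServ-era marking:
    # sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)

    sock.sendto(b"low-latency sample", ("192.0.2.1", 5004))   # TEST-NET address, illustrative

On most paths today the marking simply rides along unexamined, which is consistent with the observation above that the mechanism never drove a widely deployed low-latency service.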
Plato, and Maze, and Conversational Voice, and no doubt others, influenced the Internet.?? But not enough to drive the associated functionality all the way to deployment. Sometimes history is about what didn't happen.?? And why. /Jack On 8/23/21 7:47 PM, Steve Crocker wrote: > Jack, > > A classic analysis of bits vs trits says trits?are slightly more > efficient.? The analysis is based on assuming that it takes b parts to > represent a digit in base b.? Two parts for a bit, three parts for a > trit, four parts for a quit(?!).? The information content of k digits > in base b is b^k.? The "cost" is b*k.? The optimal base is e > (2.71828...). Bases 2 and 4 are equal.? (The information content of 2k > bits is 2^(2k).? The information content of k quits is 4^k.? The costs > are the same, i.e. 4k.) > > Trinary is better but not by much.? Using six parts, you can make > three bits or two trits.? The information content of three bits is 8.? > The information content of two trits?is 9. > > A different consideration of using trinary vs binary is the > representation of integers.? As we all know from hard experience, twos > complement representation of signed integers gives you an asymmetry > with one more negative number than positive number.? Switch to ones > complement and you wind up with two representations of zero.? Trinary > gives you a naturally symmetric representation of signed integers. > > I think that's the end of the advantages of trinary over binary.? But > I'm VERY impressed you?took the time and effort to actually build such > circuits.? Bravo! > > Steve > > > On Mon, Aug 23, 2021 at 10:29 PM Jack Haverty via Internet-history > > wrote: > > Back in the 60s, a lot of computer technology was not yet cast in > concrete.? There were lots of choices.? But then someone pursues one > choice, and if it works reasonably well, others follow the same path. > It doesn't take very long for the "installed base" to become so large > that it's unlikely that some other initial choice could easily take > over.??? Think about how long it's taken, so far, for IPV6 to > supplant IPV4. > > Sometime around 1968, as a learning experience in some lab course at > MIT, I decided to make some non-binary logic.?? At the time, analog > computers were still around, and digital computers hadn't yet agreed > even on how many bits were in a byte, or how to encode characters, or > what order bits should be in a computer memory word.?? But bits were > pretty well established. > > I figured there must be other choices.?? So I made some ternary > logic. > Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as > the > three possible states.? Electronically it translated into positive, > negative, or no current.?? Using transistors and such components, > I made > some basic logic "gates" that operated using three states instead of > two.? Was that a good idea??? Probably not, but it was a good way to > learn about circuits.??? Instead of bits (binary digits), how about > manipulating trits (trinary digits)? There's nothing magic about > 1s and 0s. > > Shortly thereafter, binary took over as circuitry went into > integrated > circuits and a whole industry came in to being around binary > computers.?? If some other kind of approach, ternary, quaternary, or > whatever is better than binary, we'll probably never know.?? I > suspect > something might happen soon with qubits though to challenge bits > supremacy.. > > There are lots of ways to do things, and the one that "wins" might > not > have been the best choice. 
> > Imagine how networking and computing might have evolved with trits > instead of bits.... > > /Jack > > > On 8/23/21 12:15 PM, John Day via Internet-history wrote: > > Agreed.? There are only so many ways to do something. ;-) > > > >> On Aug 23, 2021, at 14:46, Craig Partridge > wrote: > >> > >> > >> > >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history > > >> wrote: > >> It is not uncommon in the history of technology (it has been > observed back several centuries) that it isn?t so much direct > transfer of technology but more someone brings back a story along > the lines of, ?I saw this thing that did thus and so and kind of > looks like t.? Which gives someone the idea, that if it exists, > then how it must work like this.? It isn?t quite independent > invention, but it isn?t quite direct influence either. > >> > >> > >> > >> Related comment -- from my various interactions with historians > about technology history.? If the available technology is limited > (as it was in the 1950s/60s/70s and early 1980s in many > dimensions) then your solutions to certain problems are going to > look rather similar.? That doesn't meant that two similar > solutions influenced each other... The trick in writing tech > history is figuring out where there was a choice space and where > there wasn't (much of) one. > >> > >> Craig > >> > >> > >> -- > >> ***** > >> Craig Partridge's email account for professional society > activities and mailing lists. > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > From steve at shinkuro.com Wed Aug 25 12:20:15 2021 From: steve at shinkuro.com (Steve Crocker) Date: Wed, 25 Aug 2021 15:20:15 -0400 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: Thanks for the color. Lots of things were competing for attention. I worked on the original host-host protocol, later renamed NCP. One of the things I was very concerned about was how many round trips it would take to establish a connection. I wanted it to be as few as possible in order to make the system as responsive as possible. In retrospect, I wish I had considered voice and related applications in mind and not expected all applications to be built on top of a reliable stream of bits or bytes. Steve On Wed, Aug 25, 2021 at 3:13 PM Jack Haverty wrote: > Hi Steve, > > I wasn't worrying about efficiency; I just wanted to see if I could figure > out how to make some ternary logic. IIRC, the class was on digital > design, and the lab assignment was basically "build something digital, > other than the examples we used in the lectures". Rather than building > boring flipflops and gates, maybe an adder, I decided to try to create > ternary logic. After all, they said "digital", so it didn't have to be > binary. I always tended to think outside the box. > > Your math is probably correct, but it's only a small part of making a > design choice. At about the same time of that lab course, I had a student > job using a PDP-8 to gather and analyze data from experiments being done at > the "Instrumentation Lab". 
They were designing, building, testing, and > deploying inertial navigation units that were use in a lot of places, > including the Apollo spacecraft. So I actually got to work with real > "rocket scientists". I learned as much from those engineers and scientists > as I did from the classes. > > One thing I learned was that much of the mathematical toolbox from the > courses wasn't terribly useful. For example, there were lots of tools and > techniques for minimizing Boolean logic. But reams of data had shown that > the critical issue for reliability (it's hard to fix things in space) was > the mechanical connections involved, e.g., how many pins on a PC board and > corresponding socket were needed in the system design. More logic > circuitry was OK if it meant fewer pins were needed. None of the tools > and techniques taught in courses even mentioned the issue of pins or other > such design questions, such as heat dissipation. > > So, it's possible that some kind of non-binary logic might have required > fewer pins and resulted in more reliable hardware. Or not. The choice > made early on meant we'd never pursue any other path. > > Getting back to Plato and the Internet, I can confirm that I never saw or > even heard of Plato during the 60s/70s/80s. So it probably didn't > influence me, at least not directly. > > However, I was surprised to just read how Plato was focussed on latency as > a key driver of the users' experience. I ran into that same issue at > Licklider's MIT lab, when we were trying to bring up MazeWars on our > newfangled Imlacs that were used as terminals on the PDP10. I spent a bit > of time tweaking the RS232 TTY interfaces to get the line speed up around > 100 kb/sec (typical max was 9.6 in those days), and that made the Maze game > popular. When you "shot" an opponent they died as they should. With > higher latency, they'd often inexplicably get away. > > We tried to convince BBN to upgrade TIPs to run faster, but were > rebuffed. The TIPs supported the "maximum reasonable speed" of 9.6. > Nothing faster was needed. > > Later on, circa 1978 while we were rearchitecting TCP to split out TCP and > IP, and introduce UDP, I remembered my experience with latency, and pushed > for inclusion of "Type Of Service" so that the underlying IP transport > might someday be able to offer both low-latency and high-throughput > services to meet different users' needs. And maybe a "guaranteed > bandwidth" service to better mimic old physical circuits. > > Low latency was also important for things like conversational voice, so > the "voice guys" at places like ISI and Lincoln were also interested in > having such a capability in the Internet. I don't recall that "Plato > guys" were involved, but I bet they would have been proponents as well. > > Sadly, although those experiences certainly "influenced the Internet" to > the extent that various header fields and rudimentary mechanisms were > included in the emerging TCP that we still have today, there apparently > wasn't enough pressure and interest to cause low-latency service to > actually get implemented. At least as far as I can tell.... > > I can't see "inside" the Internet now, just as a user today. But simply > watching the now constant stream of live interviews on TV, and the > pixelization, breaking audio, and such artifacts, makes me conclude that > low-latency service isn't there yet, after 40 years of evolution. > > I suspect part of the cause was also the hardware availability, or lack > thereof. 
Like Plato, Imlacs were not common in "the network community" and > neither were voice-capable terminals. So unless you had one of those, you > didn't understand why things like low-latency were needed. So the "rough > consensus" never emerged for such mechanisms. > > Plato, and Maze, and Conversational Voice, and no doubt others, influenced > the Internet. But not enough to drive the associated functionality all > the way to deployment. > > Sometimes history is about what didn't happen. And why. > > /Jack > > > > > On 8/23/21 7:47 PM, Steve Crocker wrote: > > Jack, > > A classic analysis of bits vs trits says trits are slightly more > efficient. The analysis is based on assuming that it takes b parts to > represent a digit in base b. Two parts for a bit, three parts for a trit, > four parts for a quit(?!). The information content of k digits in base b > is b^k. The "cost" is b*k. The optimal base is e (2.71828...). Bases 2 > and 4 are equal. (The information content of 2k bits is 2^(2k). The > information content of k quits is 4^k. The costs are the same, i.e. 4k.) > > Trinary is better but not by much. Using six parts, you can make three > bits or two trits. The information content of three bits is 8. The > information content of two trits is 9. > > A different consideration of using trinary vs binary is the representation > of integers. As we all know from hard experience, twos complement > representation of signed integers gives you an asymmetry with one more > negative number than positive number. Switch to ones complement and you > wind up with two representations of zero. Trinary gives you a naturally > symmetric representation of signed integers. > > I think that's the end of the advantages of trinary over binary. But I'm > VERY impressed you took the time and effort to actually build such > circuits. Bravo! > > Steve > > > On Mon, Aug 23, 2021 at 10:29 PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Back in the 60s, a lot of computer technology was not yet cast in >> concrete. There were lots of choices. But then someone pursues one >> choice, and if it works reasonably well, others follow the same path. >> It doesn't take very long for the "installed base" to become so large >> that it's unlikely that some other initial choice could easily take >> over. Think about how long it's taken, so far, for IPV6 to supplant >> IPV4. >> >> Sometime around 1968, as a learning experience in some lab course at >> MIT, I decided to make some non-binary logic. At the time, analog >> computers were still around, and digital computers hadn't yet agreed >> even on how many bits were in a byte, or how to encode characters, or >> what order bits should be in a computer memory word. But bits were >> pretty well established. >> >> I figured there must be other choices. So I made some ternary logic. >> Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as the >> three possible states. Electronically it translated into positive, >> negative, or no current. Using transistors and such components, I made >> some basic logic "gates" that operated using three states instead of >> two. Was that a good idea? Probably not, but it was a good way to >> learn about circuits. Instead of bits (binary digits), how about >> manipulating trits (trinary digits)? There's nothing magic about 1s and >> 0s. >> >> Shortly thereafter, binary took over as circuitry went into integrated >> circuits and a whole industry came in to being around binary >> computers. 
If some other kind of approach, ternary, quaternary, or >> whatever is better than binary, we'll probably never know. I suspect >> something might happen soon with qubits though to challenge bits >> supremacy.. >> >> There are lots of ways to do things, and the one that "wins" might not >> have been the best choice. >> >> Imagine how networking and computing might have evolved with trits >> instead of bits.... >> >> /Jack >> >> >> On 8/23/21 12:15 PM, John Day via Internet-history wrote: >> > Agreed. There are only so many ways to do something. ;-) >> > >> >> On Aug 23, 2021, at 14:46, Craig Partridge >> wrote: >> >> >> >> >> >> >> >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history < >> internet-history at elists.isoc.org > >> wrote: >> >> It is not uncommon in the history of technology (it has been observed >> back several centuries) that it isn?t so much direct transfer of technology >> but more someone brings back a story along the lines of, ?I saw this thing >> that did thus and so and kind of looks like t.? Which gives someone the >> idea, that if it exists, then how it must work like this.? It isn?t quite >> independent invention, but it isn?t quite direct influence either. >> >> >> >> >> >> >> >> Related comment -- from my various interactions with historians about >> technology history. If the available technology is limited (as it was in >> the 1950s/60s/70s and early 1980s in many dimensions) then your solutions >> to certain problems are going to look rather similar. That doesn't meant >> that two similar solutions influenced each other... The trick in writing >> tech history is figuring out where there was a choice space and where there >> wasn't (much of) one. >> >> >> >> Craig >> >> >> >> >> >> -- >> >> ***** >> >> Craig Partridge's email account for professional society activities >> and mailing lists. >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > From bpurvy at gmail.com Wed Aug 25 12:24:07 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Wed, 25 Aug 2021 12:24:07 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: On type of service and guaranteed bandwidth: I'm so glad you didn't. It let me make money at Packeteer later! On Wed, Aug 25, 2021, 12:13 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Hi Steve, > > I wasn't worrying about efficiency; I just wanted to see if I could > figure out how to make some ternary logic. IIRC, the class was on > digital design, and the lab assignment was basically "build something > digital, other than the examples we used in the lectures". Rather than > building boring flipflops and gates, maybe an adder, I decided to try to > create ternary logic. After all, they said "digital", so it didn't > have to be binary. I always tended to think outside the box. > > Your math is probably correct, but it's only a small part of making a > design choice. At about the same time of that lab course, I had a > student job using a PDP-8 to gather and analyze data from experiments > being done at the "Instrumentation Lab". 
They were designing, > building, testing, and deploying inertial navigation units that were use > in a lot of places, including the Apollo spacecraft. So I actually got > to work with real "rocket scientists". I learned as much from those > engineers and scientists as I did from the classes. > > One thing I learned was that much of the mathematical toolbox from the > courses wasn't terribly useful. For example, there were lots of tools > and techniques for minimizing Boolean logic. But reams of data had > shown that the critical issue for reliability (it's hard to fix things > in space) was the mechanical connections involved, e.g., how many pins > on a PC board and corresponding socket were needed in the system > design. More logic circuitry was OK if it meant fewer pins were > needed. None of the tools and techniques taught in courses even > mentioned the issue of pins or other such design questions, such as heat > dissipation. > > So, it's possible that some kind of non-binary logic might have required > fewer pins and resulted in more reliable hardware. Or not. The choice > made early on meant we'd never pursue any other path. > > Getting back to Plato and the Internet, I can confirm that I never saw > or even heard of Plato during the 60s/70s/80s. So it probably didn't > influence me, at least not directly. > > However, I was surprised to just read how Plato was focussed on latency > as a key driver of the users' experience. I ran into that same issue > at Licklider's MIT lab, when we were trying to bring up MazeWars on our > newfangled Imlacs that were used as terminals on the PDP10. I spent a > bit of time tweaking the RS232 TTY interfaces to get the line speed up > around 100 kb/sec (typical max was 9.6 in those days), and that made the > Maze game popular. When you "shot" an opponent they died as they > should. With higher latency, they'd often inexplicably get away. > > We tried to convince BBN to upgrade TIPs to run faster, but were > rebuffed. The TIPs supported the "maximum reasonable speed" of 9.6. > Nothing faster was needed. > > Later on, circa 1978 while we were rearchitecting TCP to split out TCP > and IP, and introduce UDP, I remembered my experience with latency, and > pushed for inclusion of "Type Of Service" so that the underlying IP > transport might someday be able to offer both low-latency and > high-throughput services to meet different users' needs. And maybe a > "guaranteed bandwidth" service to better mimic old physical circuits. > > Low latency was also important for things like conversational voice, so > the "voice guys" at places like ISI and Lincoln were also interested in > having such a capability in the Internet. I don't recall that "Plato > guys" were involved, but I bet they would have been proponents as well. > > Sadly, although those experiences certainly "influenced the Internet" to > the extent that various header fields and rudimentary mechanisms were > included in the emerging TCP that we still have today, there apparently > wasn't enough pressure and interest to cause low-latency service to > actually get implemented. At least as far as I can tell.... > > I can't see "inside" the Internet now, just as a user today. But > simply watching the now constant stream of live interviews on TV, and > the pixelization, breaking audio, and such artifacts, makes me conclude > that low-latency service isn't there yet, after 40 years of evolution. > > I suspect part of the cause was also the hardware availability, or lack > thereof. 
Like Plato, Imlacs were not common in "the network community" > and neither were voice-capable terminals. So unless you had one of > those, you didn't understand why things like low-latency were needed. > So the "rough consensus" never emerged for such mechanisms. > > Plato, and Maze, and Conversational Voice, and no doubt others, > influenced the Internet. But not enough to drive the associated > functionality all the way to deployment. > > Sometimes history is about what didn't happen. And why. > > /Jack > > > > > On 8/23/21 7:47 PM, Steve Crocker wrote: > > Jack, > > > > A classic analysis of bits vs trits says trits are slightly more > > efficient. The analysis is based on assuming that it takes b parts to > > represent a digit in base b. Two parts for a bit, three parts for a > > trit, four parts for a quit(?!). The information content of k digits > > in base b is b^k. The "cost" is b*k. The optimal base is e > > (2.71828...). Bases 2 and 4 are equal. (The information content of 2k > > bits is 2^(2k). The information content of k quits is 4^k. The costs > > are the same, i.e. 4k.) > > > > Trinary is better but not by much. Using six parts, you can make > > three bits or two trits. The information content of three bits is 8. > > The information content of two trits is 9. > > > > A different consideration of using trinary vs binary is the > > representation of integers. As we all know from hard experience, twos > > complement representation of signed integers gives you an asymmetry > > with one more negative number than positive number. Switch to ones > > complement and you wind up with two representations of zero. Trinary > > gives you a naturally symmetric representation of signed integers. > > > > I think that's the end of the advantages of trinary over binary. But > > I'm VERY impressed you took the time and effort to actually build such > > circuits. Bravo! > > > > Steve > > > > > > On Mon, Aug 23, 2021 at 10:29 PM Jack Haverty via Internet-history > > > > wrote: > > > > Back in the 60s, a lot of computer technology was not yet cast in > > concrete. There were lots of choices. But then someone pursues one > > choice, and if it works reasonably well, others follow the same path. > > It doesn't take very long for the "installed base" to become so large > > that it's unlikely that some other initial choice could easily take > > over. Think about how long it's taken, so far, for IPV6 to > > supplant IPV4. > > > > Sometime around 1968, as a learning experience in some lab course at > > MIT, I decided to make some non-binary logic. At the time, analog > > computers were still around, and digital computers hadn't yet agreed > > even on how many bits were in a byte, or how to encode characters, or > > what order bits should be in a computer memory word. But bits were > > pretty well established. > > > > I figured there must be other choices. So I made some ternary > > logic. > > Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as > > the > > three possible states. Electronically it translated into positive, > > negative, or no current. Using transistors and such components, > > I made > > some basic logic "gates" that operated using three states instead of > > two. Was that a good idea? Probably not, but it was a good way to > > learn about circuits. Instead of bits (binary digits), how about > > manipulating trits (trinary digits)? There's nothing magic about > > 1s and 0s. 
> > > > Shortly thereafter, binary took over as circuitry went into > > integrated > > circuits and a whole industry came in to being around binary > > computers. If some other kind of approach, ternary, quaternary, or > > whatever is better than binary, we'll probably never know. I > > suspect > > something might happen soon with qubits though to challenge bits > > supremacy.. > > > > There are lots of ways to do things, and the one that "wins" might > > not > > have been the best choice. > > > > Imagine how networking and computing might have evolved with trits > > instead of bits.... > > > > /Jack > > > > > > On 8/23/21 12:15 PM, John Day via Internet-history wrote: > > > Agreed. There are only so many ways to do something. ;-) > > > > > >> On Aug 23, 2021, at 14:46, Craig Partridge > > wrote: > > >> > > >> > > >> > > >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history > > > > > > >> wrote: > > >> It is not uncommon in the history of technology (it has been > > observed back several centuries) that it isn?t so much direct > > transfer of technology but more someone brings back a story along > > the lines of, ?I saw this thing that did thus and so and kind of > > looks like t.? Which gives someone the idea, that if it exists, > > then how it must work like this.? It isn?t quite independent > > invention, but it isn?t quite direct influence either. > > >> > > >> > > >> > > >> Related comment -- from my various interactions with historians > > about technology history. If the available technology is limited > > (as it was in the 1950s/60s/70s and early 1980s in many > > dimensions) then your solutions to certain problems are going to > > look rather similar. That doesn't meant that two similar > > solutions influenced each other... The trick in writing tech > > history is figuring out where there was a choice space and where > > there wasn't (much of) one. > > >> > > >> Craig > > >> > > >> > > >> -- > > >> ***** > > >> Craig Partridge's email account for professional society > > activities and mailing lists. > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dhc at dcrocker.net Wed Aug 25 12:54:55 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 25 Aug 2021 12:54:55 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> On 8/25/2021 12:13 PM, Jack Haverty via Internet-history wrote: > Low latency was also important for things like conversational voice, Given the new ability to be interactive with a range of 'users', there was experimentation about usability issues. Lower latency has obvious benefit. But one experiment demonstrated it was not an absolute. Given an experience with significantly /variable/ latency, where the average was lower latency, versus an experience with very stable latency, but at a higher average, users preferred the latter. 
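Dave's stable-versus-variable finding is, in effect, the trade a playout (jitter) buffer makes: accept a higher but constant latency in exchange for no visible variation. A toy simulation of that trade, mine and not from the thread, with invented delay numbers:

    import random
    random.seed(1)

    # Per-packet network delay: 30-70 ms, i.e. low average but lots of jitter.
    delays = [0.030 + random.uniform(0, 0.040) for _ in range(500)]

    # A playout buffer releases each packet a fixed offset after it was sent, so
    # the user experiences constant latency; packets slower than that are lost.
    for offset in (0.050, 0.080):
        late = sum(d > offset for d in delays)
        print(f"{offset*1000:.0f} ms constant -> {late} of {len(delays)} packets too late")

The smaller offset gives lower experienced latency but many glitches; the larger one is slower on average and glitch-free, which is the preference Dave reports.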
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From dhc at dcrocker.net Wed Aug 25 12:49:42 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 25 Aug 2021 12:49:42 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> Message-ID: <173b81b2-5afb-a928-e0e6-c16d1d3fe1f2@dcrocker.net> On 8/23/2021 7:29 PM, Jack Haverty via Internet-history wrote: > Instead of bits (binary digits), how about manipulating trits (trinary > digits)? I suspect a major marketing barrier to this would have been people's not wanting their data to be treated as tryte. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jack at 3kitty.org Thu Aug 26 15:18:38 2021 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 26 Aug 2021 15:18:38 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> Message-ID: <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> I do remember that there were experiments over the years, and at least a few RFCs defining meanings for the values of that TOS field we put in the IP header.?? "Stable latency" could have been a useful type of service, making a virtual circuit look more like an old-school actual circuit.?? I'm not sure if it's in any of those RFCs, or the similar mechanisms which I gather have been defined in IPV6. Some of those RFCs (and TOS specifications) might even have made it to becoming a "standard".?? But IMHO the history of the Internet should focus on what happened "in the field" of the Internet we all use today.?? It's hard for me as a user to tell, but I personally haven't seen any evidence that any OS, or application, uses those TOS settings, or that any ISP and/or router manufacturer has equipment that behaves differently depending on the TOS settings. There have been "test the net" services around for a while, primarily measuring throughput, but recently I've seen a few that at least report on latency.?? Still haven't seen any ISP or equipment vendor touting their products' abilities to offer different types of service.?? Perhaps that's even illegal now given "net neutrality"? So it seems that the TOS functionality of IPV4 may have been evolved a bit with some experimentation that occurred, but it doesn't seem to have gotten into the live Internet. That's one thing that led me to the observation that the History of the Internet should include what didn't happen, and why. /Jack Haverty On 8/25/21 12:54 PM, Dave Crocker wrote: > On 8/25/2021 12:13 PM, Jack Haverty via Internet-history wrote: >> Low latency was also important for things like conversational voice, > > Given the new ability to be interactive with a range of 'users', there > was experimentation about usability issues.? Lower latency has obvious > benefit.? But one experiment demonstrated it was not an absolute. 
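For what it is worth, the socket-level hook for the TOS settings Jack mentions does exist today, whether or not anything downstream honors it. A minimal sketch, mine rather than anything from the thread: the old TOS byte is now the DSCP field (RFC 2474), EF (46) is the standard low-latency code point from RFC 3246, and an application can set it where the platform exposes IP_TOS (Linux and most Unixes do). The address and port below are illustrative only.

    import socket

    EF = 46                    # "Expedited Forwarding" DSCP code point, RFC 3246
    tos_byte = EF << 2         # DSCP occupies the upper six bits of the old TOS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if hasattr(socket, "IP_TOS"):          # platform-dependent constant
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    s.sendto(b"low-latency please", ("192.0.2.1", 5004))    # TEST-NET-1 address, illustrative

Whether any router along the path treats the marked packets differently is, as Jack says, another question entirely.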
> > Given an experience with significantly /variable/ latency, where the > average was lower latency, versus an experience with very stable > latency, but at a higher average, users preferred the latter. > > d/ > From dhc at dcrocker.net Thu Aug 26 15:39:54 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 26 Aug 2021 15:39:54 -0700 Subject: [ih] Better-than-Best Effort Message-ID: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> Having not followed actual QOS work over the year, my naive brain wandered oddly, today with a thought about a semi-QOS approach. The usual view is that it requires complete, end-to-end support. Massive barriers to adoption, at the least. I'm thinking that the long-haul infrastructure tends to have enough capacity that it usually isn't the source of latency. It's the beginning and ending legs that do. So what about a scheme that defines and provides QOS in those segments but not the long middle? Cheaper, more implementable, and might give usefully-better performance. Assuming that this idea is new only to me, I'm curious about reactions/history/etc. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Thu Aug 26 15:45:36 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 27 Aug 2021 10:45:36 +1200 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> Message-ID: On 27-Aug-21 10:18, Jack Haverty via Internet-history wrote: > I do remember that there were experiments over the years, and at least a > few RFCs defining meanings for the values of that TOS field we put in > the IP header.?? "Stable latency" could have been a useful type of > service, making a virtual circuit look more like an old-school actual > circuit.?? I'm not sure if it's in any of those RFCs, or the similar > mechanisms which I gather have been defined in IPV6. Given that the Internet is a great big statistical multiplexer, the ambitions of the differentiated services ("diffserv") work that repurposed the "TOS" bits were limited to *bounded* latency aimed at services such as Voice over IP. This has been very widely deployed in corporate networks, for example. Getting differentiated services to work across the open Internet is a much harder problem; it can only happen if there are adequate service level agreements at all ISP interconnections along the path. Many many RFCs have addressed this topic, and it's still being worked on in the IETF. Just one sample: https://www.rfc-editor.org/rfc/rfc8100.html A feature of diffserv is that it's defined identically for IPv4 and IPv6. (I was co-chair of the original diffserv WG that produced https://www.rfc-editor.org/rfc/rfc2474.html in 1998.) There's a current IETF effort on 'deterministic networking'. I have my doubts, but you can read all about it at https://datatracker.ietf.org/wg/detnet/documents/ Regards Brian Carpenter > > Some of those RFCs (and TOS specifications) might even have made it to > becoming a "standard".?? But IMHO the history of the Internet should > focus on what happened "in the field" of the Internet we all use > today.?? 
It's hard for me as a user to tell, but I personally haven't > seen any evidence that any OS, or application, uses those TOS settings, > or that any ISP and/or router manufacturer has equipment that behaves > differently depending on the TOS settings. There have been "test the > net" services around for a while, primarily measuring throughput, but > recently I've seen a few that at least report on latency.?? Still > haven't seen any ISP or equipment vendor touting their products' > abilities to offer different types of service.?? Perhaps that's even > illegal now given "net neutrality"? > > So it seems that the TOS functionality of IPV4 may have been evolved a > bit with some experimentation that occurred, but it doesn't seem to have > gotten into the live Internet. > > That's one thing that led me to the observation that the History of the > Internet should include what didn't happen, and why. > > /Jack Haverty > > > On 8/25/21 12:54 PM, Dave Crocker wrote: >> On 8/25/2021 12:13 PM, Jack Haverty via Internet-history wrote: >>> Low latency was also important for things like conversational voice, >> >> Given the new ability to be interactive with a range of 'users', there >> was experimentation about usability issues.? Lower latency has obvious >> benefit.? But one experiment demonstrated it was not an absolute. >> >> Given an experience with significantly /variable/ latency, where the >> average was lower latency, versus an experience with very stable >> latency, but at a higher average, users preferred the latter. >> >> d/ >> > > From bpurvy at gmail.com Thu Aug 26 15:46:55 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Thu, 26 Aug 2021 15:46:55 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> Message-ID: well, 23 years later it can be revealed Packeteer *did* break all the rules, and the customers didn't care. No, we didn't use the TOS bits. The Packet Shaper had its own set of policies that users would tweak, e.g. to guarantee bandwidth or limit recreational apps during work hours (Stanford's network used this, in particular). It did it by modifying the window size and/or delaying the acks. No queueing. One might object "that won't work" but in fact it did. On Thu, Aug 26, 2021 at 3:18 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > I do remember that there were experiments over the years, and at least a > few RFCs defining meanings for the values of that TOS field we put in > the IP header. "Stable latency" could have been a useful type of > service, making a virtual circuit look more like an old-school actual > circuit. I'm not sure if it's in any of those RFCs, or the similar > mechanisms which I gather have been defined in IPV6. > > Some of those RFCs (and TOS specifications) might even have made it to > becoming a "standard". But IMHO the history of the Internet should > focus on what happened "in the field" of the Internet we all use > today. 
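The arithmetic behind the window trick Bob describes is worth spelling out. This is my reconstruction of the idea, not Packeteer's code: a TCP sender can have at most one advertised window outstanding per round trip, so a middlebox that rewrites the receive window to roughly rate times RTT caps the flow's throughput without queueing anything.

    def clamped_window(target_bps, rtt_seconds, advertised_window):
        # Bytes the receiver should appear to offer so the flow averages <= target_bps.
        cap = int(target_bps / 8 * rtt_seconds)
        return min(advertised_window, max(cap, 1460))   # never below one segment

    # Hold a flow to ~1.5 Mbit/s on an 80 ms path: advertise 15000 bytes, not the full 64 KB.
    print(clamped_window(1_500_000, 0.080, 65535))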
It's hard for me as a user to tell, but I personally haven't > seen any evidence that any OS, or application, uses those TOS settings, > or that any ISP and/or router manufacturer has equipment that behaves > differently depending on the TOS settings. There have been "test the > net" services around for a while, primarily measuring throughput, but > recently I've seen a few that at least report on latency. Still > haven't seen any ISP or equipment vendor touting their products' > abilities to offer different types of service. Perhaps that's even > illegal now given "net neutrality"? > > So it seems that the TOS functionality of IPV4 may have been evolved a > bit with some experimentation that occurred, but it doesn't seem to have > gotten into the live Internet. > > That's one thing that led me to the observation that the History of the > Internet should include what didn't happen, and why. > > /Jack Haverty > > > On 8/25/21 12:54 PM, Dave Crocker wrote: > > On 8/25/2021 12:13 PM, Jack Haverty via Internet-history wrote: > >> Low latency was also important for things like conversational voice, > > > > Given the new ability to be interactive with a range of 'users', there > > was experimentation about usability issues. Lower latency has obvious > > benefit. But one experiment demonstrated it was not an absolute. > > > > Given an experience with significantly /variable/ latency, where the > > average was lower latency, versus an experience with very stable > > latency, but at a higher average, users preferred the latter. > > > > d/ > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Thu Aug 26 15:55:06 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 27 Aug 2021 10:55:06 +1200 Subject: [ih] Better-than-Best Effort In-Reply-To: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> Message-ID: <1b64919a-9978-0f6f-ce27-2b7c37684c0f@gmail.com> Dave, On 27-Aug-21 10:39, Dave Crocker via Internet-history wrote: > Having not followed actual QOS work over the year, my naive brain > wandered oddly, today with a thought about a semi-QOS approach. > > The usual view is that it requires complete, end-to-end support. > Massive barriers to adoption, at the least. > > I'm thinking that the long-haul infrastructure tends to have enough > capacity that it usually isn't the source of latency. It's the > beginning and ending legs that do. > > So what about a scheme that defines and provides QOS in those segments > but not the long middle? Cheaper, more implementable, and might give > usefully-better performance. > > Assuming that this idea is new only to me, I'm curious about > reactions/history/etc. That's pretty much the deployment model for diffserv. Apply the per-hop behaviour that you want locally, and hope that the WAN has enough capacity. But of course, that is not guaranteed, so everything is messed up by buffer bloat, so you get "buffering" messages during your live video. Which is why you need upstream service level agreements, but you the user have no control over that. Also, your incentives are not the same as the transit ISPs' incentives, so your local ISP is caught in the middle, between conflicting incentives. This is a really complex topic but well known in tsvwg at ietf.org. 
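A toy illustration, mine and not from the thread, of the "apply the per-hop behaviour locally" model Brian describes: a strict-priority queue for EF-marked packets (the VoIP case) in front of the default queue. Real diffserv schedulers are far richer than this, and the class names and packet labels are made up.

    from collections import deque

    EF = 46

    class TwoClassHop:
        def __init__(self):
            self.ef = deque()            # latency-sensitive, e.g. voice
            self.best_effort = deque()

        def enqueue(self, packet, dscp=0):
            (self.ef if dscp == EF else self.best_effort).append(packet)

        def dequeue(self):
            # The decision is purely local to this hop: EF goes ahead of
            # anything waiting in the default queue.
            if self.ef:
                return self.ef.popleft()
            return self.best_effort.popleft() if self.best_effort else None

    hop = TwoClassHop()
    hop.enqueue("bulk-1"); hop.enqueue("voice-1", dscp=EF); hop.enqueue("bulk-2")
    print([hop.dequeue() for _ in range(3)])   # ['voice-1', 'bulk-1', 'bulk-2']

End-to-end behaviour then depends on every operator along the path doing something compatible, which is exactly the interconnection SLA problem Brian points to.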
Brian From tte at cs.fau.de Thu Aug 26 16:27:43 2021 From: tte at cs.fau.de (Toerless Eckert) Date: Fri, 27 Aug 2021 01:27:43 +0200 Subject: [ih] Better-than-Best Effort In-Reply-To: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> Message-ID: <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> On Thu, Aug 26, 2021 at 03:39:54PM -0700, Dave Crocker via Internet-history wrote: > Having not followed actual QOS work over the year, my naive brain wandered > oddly, today with a thought about a semi-QOS approach. I think the unit of observable changes in QoS is typically greater than a decade. Then again, L4S is making slow? progress, NADA became RFC in 2020 and DetNet released its first significant round of RFC end of 2019, except that it has no good wide-area nework queuin solutions, hence i wrote draft-eckert-detnet-bounded-latency-problems and i am sure i am not tracking jus a small part of what may be going on. > The usual view is that it requires complete, end-to-end support. Massive > barriers to adoption, at the least. Depends probably on how-much-better-than-BE it intends to be. > I'm thinking that the long-haul infrastructure tends to have enough capacity > that it usually isn't the source of latency. It's the beginning and ending > legs that do. Not necessarily: There should be a good amount of OTT nowadays that will show you that they can sell you paths across the planet with better (stochastical) latency bounds than native Internet paths. Most of them do not even own any links, but just tunnel across various hops with container/VMs that they pay for. But i do agree that the mayority of future use-cases that go beyond the current application success story of the Internet (non-real-time, non-critical) will likely fail on the unreliability and imprecise latency bounds specifically in what i would call the metropolitan edge of the Internet. Aka: new applications running between edge-data-centers in a metropolitan region and subscribers in the same region. You can read my architectural view of that startin on page 103 of: https://www.itu.int/en/ITU-T/focusgroups/net2030/Documents/Network_2030_Architecture-framework.pdf > So what about a scheme that defines and provides QOS in those segments but > not the long middle? Cheaper, more implementable, and might give > usefully-better performance. PIE only made it through IETF because DOCSIS wanted it, so we do see it in that metro edge. The current/next generation of even whitebox metro switches do seem to start getting AQM of similar strength. Aka: The scheme that works is ad-hoc introducing of AQM into your very own congestion points to improve your subscribers experience. I always say BE before AQM was LE (Lousy Effort). So at least with AQM and better CC we are closer to a least getting BE ;-) I think where we hit the brick wall to adopt any more of any better _differentiated) QoS options, maybe starting with L4S, but hopefully also metro-size deterministic netwrking for remote-driving and live&death reliable services, is in any form of business models that would ask and pay for such network functionality. Pessimistic as i am, I think those business models will again, like we saw 20 years ago with MPLS/VPN evolve in isolated VPN/slices across the same infrastructure. And because they are driven by a small number of customers such as mobile operators, industrial or public services/traffic-control/power-distribution/... 
etc, we will just see a proliferation of hacked-together qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. Managemenet of Queue weights by FAX messages between customer and subscriber is my favourite common hack. As an ex-colleague-liked to say: www.showmethemoneyforqos.com > Assuming that this idea is new only to me, I'm curious about > reactions/history/etc. AFAK, IETF has not done any real QoS architecture since the 90th, except for now DetNet, which is IMHO very badly attended by the mayority of vendors or operators, and can IMHO also onlyy be the most high-end niche solution. Instead, everybody is optimizing in their own bubble where they can. Mostly with positioning of compute, a bit of AQM, better CC and increasing capacity through peering. Remember also that better QoS support in the Internet for applications immediately also get the whole brigade of general concerns about privacy and net neutrality raised against it by those that likely make the most money of better traffic experiences in their own, non-public-internet service offerings. So i fear better QoS architectures for the Internet are also a victim of commercial interests that ultimately will make the Internet and its high level of oversight get replaced by various propriety network service offerings with better QoS. Cheers Toerless > d/ > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Thu Aug 26 17:15:02 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Thu, 26 Aug 2021 17:15:02 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> Message-ID: I should note here that we were addressing a specific niche, not the whole world as the IETF must: A private WAN, from some branch of a company to a HQ network. They were paying for the WAN, so if someone was clogging it up with non-essential traffic (as they defined "non-essential" of course) that was a problem. On Thu, Aug 26, 2021 at 3:46 PM Bob Purvy wrote: > well, 23 years later it can be revealed Packeteer *did* break all the > rules, and the customers didn't care. > > No, we didn't use the TOS bits. The Packet Shaper had its own set of > policies that users would tweak, e.g. to guarantee bandwidth or limit > recreational apps during work hours (Stanford's network used this, in > particular). > > It did it by modifying the window size and/or delaying the acks. No > queueing. One might object "that won't work" but in fact it did. > > On Thu, Aug 26, 2021 at 3:18 PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> I do remember that there were experiments over the years, and at least a >> few RFCs defining meanings for the values of that TOS field we put in >> the IP header. "Stable latency" could have been a useful type of >> service, making a virtual circuit look more like an old-school actual >> circuit. I'm not sure if it's in any of those RFCs, or the similar >> mechanisms which I gather have been defined in IPV6. 
>> >> Some of those RFCs (and TOS specifications) might even have made it to >> becoming a "standard". But IMHO the history of the Internet should >> focus on what happened "in the field" of the Internet we all use >> today. It's hard for me as a user to tell, but I personally haven't >> seen any evidence that any OS, or application, uses those TOS settings, >> or that any ISP and/or router manufacturer has equipment that behaves >> differently depending on the TOS settings. There have been "test the >> net" services around for a while, primarily measuring throughput, but >> recently I've seen a few that at least report on latency. Still >> haven't seen any ISP or equipment vendor touting their products' >> abilities to offer different types of service. Perhaps that's even >> illegal now given "net neutrality"? >> >> So it seems that the TOS functionality of IPV4 may have been evolved a >> bit with some experimentation that occurred, but it doesn't seem to have >> gotten into the live Internet. >> >> That's one thing that led me to the observation that the History of the >> Internet should include what didn't happen, and why. >> >> /Jack Haverty >> >> >> On 8/25/21 12:54 PM, Dave Crocker wrote: >> > On 8/25/2021 12:13 PM, Jack Haverty via Internet-history wrote: >> >> Low latency was also important for things like conversational voice, >> > >> > Given the new ability to be interactive with a range of 'users', there >> > was experimentation about usability issues. Lower latency has obvious >> > benefit. But one experiment demonstrated it was not an absolute. >> > >> > Given an experience with significantly /variable/ latency, where the >> > average was lower latency, versus an experience with very stable >> > latency, but at a higher average, users preferred the latter. >> > >> > d/ >> > >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > From dhc at dcrocker.net Thu Aug 26 17:34:50 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 26 Aug 2021 17:34:50 -0700 Subject: [ih] How Plato Influenced the Internet In-Reply-To: <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> References: <77B70F1A-1652-422E-B1AF-766928AC8691@comcast.net> <6B50C8D7-6F4C-449A-8810-84B0E119D378@comcast.net> <7FCD2952-860F-454D-B5FE-CAA341E29970@platohistory.org> <2C5891A3-EE23-4EA4-97BA-C19C7D6BBA28@comcast.net> <056afe14-8a78-a5ab-448f-d522d3a79038@3kitty.org> <7a2373af-a123-e264-d34f-004be0060715@dcrocker.net> <4e6f7765-375b-a878-5dfb-189e9125e46e@3kitty.org> Message-ID: <83faacdd-ba89-2ca3-0156-9bd50f040611@dcrocker.net> On 8/26/2021 3:18 PM, Jack Haverty via Internet-history wrote: > I do remember that there were experiments over the years, Just in case there as any confusion, the experimentation I was referring to was in-the-lab human UX testing, not data comm-related testing. That is pure user interface activity variations. Obviously the datacomm work could be in the service of better UX, but it's more focussed (and more predictable for this group to talk about.) 
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From jack at 3kitty.org  Thu Aug 26 18:34:39 2021
From: jack at 3kitty.org (Jack Haverty)
Date: Thu, 26 Aug 2021 18:34:39 -0700
Subject: [ih] Better-than-Best Effort
In-Reply-To: <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
Message-ID: <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org>

On 8/26/21 4:27 PM, Toerless Eckert via Internet-history wrote:
> So i fear better QoS architectures for the Internet are also a victim of
> commercial interests that ultimately will make the Internet and its high
> level of oversight get replaced by various propriety network service
> offerings with better QoS.

Believable, but sad. If it comes true, the Internet will devolve into a set
of competing silos, much like other computing/applications have been doing.
Perhaps it already has.

The original vision of The Internet as a global community of computers of
any type, communicating using networks of any technology, will have proven
unattainable. Perhaps we are seeing the end of the Internet Experiment.

/Jack

From dhc at dcrocker.net  Thu Aug 26 18:40:59 2021
From: dhc at dcrocker.net (Dave Crocker)
Date: Thu, 26 Aug 2021 18:40:59 -0700
Subject: [ih] Better-than-Best Effort
In-Reply-To: <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org>
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
 <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org>
Message-ID: <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net>

On 8/26/2021 6:34 PM, Jack Haverty via Internet-history wrote:
> If it comes true, the Internet will devolve into a set of competing silos,

People keep using the future tense, for things that are already true.

Consider just how extremely siloed messaging/email now is, in spite of also
still having the lingua franca of Internet Mail.

Consider how often you are told a service runs better on this or that
browser and how often it actually isn't usable on some other one.

and so on...

d/

ps. The realities of operating an Internet Mail service have now made it
quite difficult for a small operator to run well, so even that service is
highly concentrated.

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From tte at cs.fau.de  Thu Aug 26 22:34:09 2021
From: tte at cs.fau.de (Toerless Eckert)
Date: Fri, 27 Aug 2021 07:34:09 +0200
Subject: [ih] Better-than-Best Effort
In-Reply-To: <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net>
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
 <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org>
 <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net>
Message-ID: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de>

Any other top level reason for the email issue other than DDoS ?

Wrt to DDoS i think that's the biggest issue that the original internet
architecture does not solve well. Of course you can buy it as an add-on,
but unless i am missing something, there is seemingly no desire to see
if/how we could solve it in a standard fashion at the network/transport
layer (as opposed to maybe per-application solutions).
Then again, this isn't even an Internet network layer problem alone, but one of any platform with fundamentally uncontrolled large-scale any-to-any communication that can be easily instantiated at large scale, e.g.: within large application platforms for example. Cheers Toerless On Thu, Aug 26, 2021 at 06:40:59PM -0700, Dave Crocker via Internet-history wrote: > On 8/26/2021 6:34 PM, Jack Haverty via Internet-history wrote: > > If it comes true, the Internet will devolve into a set of competing silos, > > People keep using the future tense, for things that are already true. > > Consider just how extremely siloed messaging/email now is, in spite of also > still having the lingua franca if Internet Mail. > > Consider how often you are told a service runs better on this or that > browser and how often it actually isn't usable on some other one. > > and so on... > > d/ > > ps. The realities of operating an Internet Mail service have now made is > quite difficult for a small operator to run well, so even that service is > highly concentrated. > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- --- tte at cs.fau.de From jack at 3kitty.org Fri Aug 27 10:50:46 2021 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 27 Aug 2021 10:50:46 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org> <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net> Message-ID: <5f205882-3134-e523-3405-e5fe35506f56@3kitty.org> Totally agree!? I was referring to the "plumbing" layer of The Internet, i.e., the basic IP datagram delivery service.? But certainly the "silo-ization virus" is well entrenched in application layers like messaging/email.?? Apparently? it's working its way downward from the skins we see as applications through the layers towards the bones of the IP service, metastasizing through the whole stack. An interesting question for historians might be "Why?". Email has a long history of silo-izing.??? I wonder if that occurred partly because we never proceeded very aggressively beyond the "simple" functionality of SMTP, and silos evolved to satisfy unmet needs.? So, for example, I have several "email accounts" provided by medical groups, financial groups, etc., that require me to visit their silo to read/send mail with them.?? Sometimes they send me regular SMTP email to let me know that I should log in to seen my new mail. I suspect one reason for their choice of a silo was the desire to be able to trust the identity and security of the parties involved in any conversation.? Although the header of this email says it's from "Jack Haverty", most of us know that you can't really trust such information - even when it comes from your old friend the Banker in Nigeria whom you can't remember.?? Silos provide a bit more assurance, and might even keep your email from being read by someone along the way so they can send you better targeted advertising (except of course for that particular silo's operator). Curiously, there is some technology in the SMTP-email world to address such needs.?? My email app (Thunderbird) has the ability to sign and/or encrypt my email, using apparently well-publicized standards.?? 
I don't know how well it actually works, but from observation I can tell that very few people and organizations I interact with seem to have embraced it.??? In thousands of emails, perhaps there have been a few that arrived "signed".?? But just a miniscule fraction.? I don't think I've ever seen one come encrypted. So the question is "Why not?"?? This is another example of the Internet History of what didn't happen....for historians perhaps to ponder. /Jack On 8/26/21 6:40 PM, Dave Crocker wrote: > On 8/26/2021 6:34 PM, Jack Haverty via Internet-history wrote: >> If it comes true, the Internet will devolve into a set of competing >> silos, > > People keep using the future tense, for things that are already true. > > Consider just how extremely siloed messaging/email now is, in spite of > also still having the lingua franca if Internet Mail. > > Consider how often you are told a service runs better on this or that > browser and how often it actually isn't usable on some other one. > > and so on... > > d/ > > ps. The realities of operating an Internet Mail service have now made > is quite difficult for a small operator to run well, so even that > service is highly concentrated. > From louie at transsys.com Fri Aug 27 11:02:17 2021 From: louie at transsys.com (Louis Mamakos) Date: Fri, 27 Aug 2021 14:02:17 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> Message-ID: <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: > > Pessimistic as i am, > I think those business models will again, like we saw 20 years ago > with MPLS/VPN > evolve in isolated VPN/slices across the same infrastructure. And > because they > are driven by a small number of customers such as mobile operators, > industrial or public > services/traffic-control/power-distribution/... etc, we will just see > a proliferation of hacked-together > qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. > Managemenet > of Queue weights by FAX messages between customer and subscriber is my > favourite common hack. > > As an ex-colleague-liked to say: www.showmethemoneyforqos.com Around 1999-2000 while I was at UUNET, I recall having conversations with some of the marketing people about building some sort of QoS product or feature into the Internet transit service that we sold. I asked them what their expectations (or really, what the customer's expectations) would be of such a product? Would it: - produce an obvious, demonstrable, differentiated level of performance on an on-going basis? - or, was it an insurance policy? If you're selling IP transit, the best-effort service can't suck too much because competition in the marketplace. You probably can't get by with even a 1% or 2% packet loss rate for best-effort delivery vs. a premium offering. So what would the differentiated QoS offering bring? We already sold different size bandwidth pipes.. A few percent packet loss across your backbone wasn't acceptable; it was a capacity problem to be solved. What about as an insurance policy? We already offered a 100% availability SLA to customers. Not because they wanted to collect a refund; they just wanted it to work. It was to demonstrate the confidence in the reliability of our platform. So the "insurance policy" against the thing we said wasn't going to happen? 
And then of course, as much as you'd like to believe you had all the important customers on your network, how was some sort of QoS performance commitment supposed to work over peering interconnects? We had all sort of backed into settlement-free peering interconnects and it wasn't at all clear how multiple classes of traffic was going obviously fit into that model. I'm a customer of Internet transit these days, and I have no idea how I'd buy a QoS product if the problem I'm trying to solve is reaching a segment of customers defined by "everywhere on the Internet." Louis Mamakos From dhc at dcrocker.net Fri Aug 27 11:23:15 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 27 Aug 2021 11:23:15 -0700 Subject: [ih] Siloed Email (was: Re: Better-than-Best Effort) In-Reply-To: <5f205882-3134-e523-3405-e5fe35506f56@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <391e55ed-0ef7-af53-5eea-2b696fa33f4a@3kitty.org> <6bc63fe5-775e-94d3-6e88-f8202ee1c73c@dcrocker.net> <5f205882-3134-e523-3405-e5fe35506f56@3kitty.org> Message-ID: <8b1cd9cc-cc05-8321-62b3-673cc0ad14da@dcrocker.net> On 8/27/2021 10:50 AM, Jack Haverty wrote: > Email has a long history of silo-izing.??? I wonder if that occurred > partly because we never proceeded very aggressively beyond the "simple" > functionality of SMTP, and silos evolved to satisfy unmet needs.? So, As you well know, SMTP hasn't been simple for a long time. It's grown enormously, even for just the transport mechanism. The main complexity now comes from anti-abuse work, I think. And the trouble there is that it's just plain difficult to run that capability. So, I think, siloing comes from operational skill. The other issue, of course, is free vs. paid email and people tend to opt for the former, rather than the latter, for some odd reasons. > for example, I have several "email accounts" provided by medical groups, > financial groups, etc., that require me to visit their silo to read/send That, of course, is because we have yet to figure out how to do good quality privacy/security, especially distributed and at scale. And note I didn't say that as an email issue. > mail with them.?? Sometimes they send me regular SMTP email to let me > know that I should log in to seen my new mail. That's email as a notification, rather than transaction service. Different sec/priv isues. > Curiously, there is some technology in the SMTP-email world to address > such needs.?? My email app (Thunderbird) has the ability to sign and/or > encrypt my email, using apparently well-publicized standards. This has been a long-standing example of the difference between an existence proof vs. working adequately at scale. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From johnl at iecc.com Fri Aug 27 11:27:07 2021 From: johnl at iecc.com (John Levine) Date: 27 Aug 2021 14:27:07 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de> Message-ID: <20210827182707.A1AF127067F3@ary.qy> It appears that Toerless Eckert via Internet-history said: >Any other top level reason for the email issue other than DDoS ? Spam, of course. Something like 90% of all mail is spam, more like 98% on a bad day. One might consider spam to be a DDoS against people who want to rear their mail so I suppose it's sort of the same thing. 
R's, John From tte at cs.fau.de Fri Aug 27 11:33:08 2021 From: tte at cs.fau.de (Toerless Eckert) Date: Fri, 27 Aug 2021 20:33:08 +0200 Subject: [ih] Better-than-Best Effort In-Reply-To: <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> Message-ID: <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Louis, In that FGNET2030 document where i wrote a section of, one of the core goals was to explicitly eliminate transit as an initial targe for QoS - because we have to much experience (yours included) how difficult it is to figure out not only what it could be, but then more importantly, how to finance it. To answer the question what it could be: If i was an access provider, i would like transit that can support to provide different relative bandwidths to different subscriber flows within my aggregate that i am providing to you. For example ensuring no-loss for < 10% of my aggregate, so it could carry low-loss traffic, such as traditional voice. and the rest just for example that my gold-class customers get 4 times as much bandwidth when there is contention than my lead-class customers. So i can sell more differentiated service to my customers and have this work across transit. And we always failed in the way too complicated thought process in SPs about the technologies required to monetize this. I saw this through when inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. Cheers Toerless On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: > On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: > > > > Pessimistic as i am, > > I think those business models will again, like we saw 20 years ago with > > MPLS/VPN > > evolve in isolated VPN/slices across the same infrastructure. And > > because they > > are driven by a small number of customers such as mobile operators, > > industrial or public > > services/traffic-control/power-distribution/... etc, we will just see a > > proliferation of hacked-together > > qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. > > Managemenet > > of Queue weights by FAX messages between customer and subscriber is my > > favourite common hack. > > > > As an ex-colleague-liked to say: www.showmethemoneyforqos.com > > Around 1999-2000 while I was at UUNET, I recall having conversations with > some > of the marketing people about building some sort of QoS product or feature > into the > Internet transit service that we sold. I asked them what their expectations > (or > really, what the customer's expectations) would be of such a product? Would > it: > > - produce an obvious, demonstrable, differentiated level of performance on > an > on-going basis? > - or, was it an insurance policy? > > If you're selling IP transit, the best-effort service can't suck too much > because > competition in the marketplace. You probably can't get by with even a 1% or > 2% > packet loss rate for best-effort delivery vs. a premium offering. So what > would > the differentiated QoS offering bring? We already sold different size > bandwidth > pipes.. A few percent packet loss across your backbone wasn't acceptable; > it was > a capacity problem to be solved. > > What about as an insurance policy? We already offered a 100% availability > SLA to > customers. Not because they wanted to collect a refund; they just wanted it > to work. 
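Picking up the gold/lead example from Toerless's message above: the usual way to realize a 4:1 split under contention is a weighted scheduler such as deficit round robin. A toy sketch of that mechanism, mine and not from the thread, with invented packet sizes and quanta:

    from collections import deque

    queues  = {"gold": deque([1500] * 400), "lead": deque([1500] * 400)}  # packet sizes, bytes
    quantum = {"gold": 6000, "lead": 1500}     # 4:1 service ratio
    deficit = {"gold": 0, "lead": 0}
    sent    = {"gold": 0, "lead": 0}

    for _ in range(100):                       # 100 scheduler rounds, both classes backlogged
        for cls, q in queues.items():
            deficit[cls] += quantum[cls]
            while q and q[0] <= deficit[cls]:
                pkt = q.popleft()
                deficit[cls] -= pkt
                sent[cls] += pkt

    print(sent)    # gold has moved four times the bytes of lead

When only one class has traffic it gets the whole link, which is what makes this kind of relative differentiation easier to sell across a transit boundary than an absolute guarantee.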
> It was to demonstrate the confidence in the reliability of our platform. So > the > "insurance policy" against the thing we said wasn't going to happen? > > And then of course, as much as you'd like to believe you had all the > important > customers on your network, how was some sort of QoS performance commitment > supposed > to work over peering interconnects? We had all sort of backed into > settlement-free > peering interconnects and it wasn't at all clear how multiple classes of > traffic > was going obviously fit into that model. > > I'm a customer of Internet transit these days, and I have no idea how I'd > buy a > QoS product if the problem I'm trying to solve is reaching a segment of > customers > defined by "everywhere on the Internet." > > Louis Mamakos -- --- tte at cs.fau.de From tte at cs.fau.de Fri Aug 27 12:56:34 2021 From: tte at cs.fau.de (Toerless Eckert) Date: Fri, 27 Aug 2021 21:56:34 +0200 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210827182707.A1AF127067F3@ary.qy> References: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de> <20210827182707.A1AF127067F3@ary.qy> Message-ID: <20210827195634.GZ50345@faui48f.informatik.uni-erlangen.de> Hmm... As long as you or your users get the normal level of spam, it seems that easy enough to deploy spam filters like spamassassin. At least i think most of historic operators of MTAs (universities) still run them without firewalling themselves with pricier options but directly expose their port 25 to the Internet. From my understanding, this just fails (short term) if somebody is really so annoyed with a professor that he is willing to pay $100 or more for targeted DDoS. Which admittedly is a low enough bar, but its also not resulting in persistent DDoS. In any case, maybe one additional distinguisher is that email also has a lot of commercialized attacks that you can buy at retail *sigh* Cheers toerless On Fri, Aug 27, 2021 at 02:27:07PM -0400, John Levine wrote: > It appears that Toerless Eckert via Internet-history said: > >Any other top level reason for the email issue other than DDoS ? > > Spam, of course. Something like 90% of all mail is spam, more like 98% > on a bad day. > > One might consider spam to be a DDoS against people who want to rear their > mail so I suppose it's sort of the same thing. > > R's, > John -- --- tte at cs.fau.de From dhc at dcrocker.net Fri Aug 27 13:20:34 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 27 Aug 2021 13:20:34 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210827195634.GZ50345@faui48f.informatik.uni-erlangen.de> References: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de> <20210827182707.A1AF127067F3@ary.qy> <20210827195634.GZ50345@faui48f.informatik.uni-erlangen.de> Message-ID: <396b5454-8206-1345-08c5-1a3c654b43cb@dcrocker.net> On 8/27/2021 12:56 PM, Toerless Eckert via Internet-history wrote: > Hmm... As long as you or your users get the normal level of spam, > it seems that easy enough to deploy spam filters like spamassassin. Sorry, no. incoming spam is a MUCH more complex and complincated issue than that. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From sob at sobco.com Fri Aug 27 13:24:05 2021 From: sob at sobco.com (Scott O. 
Bradner) Date: Fri, 27 Aug 2021 16:24:05 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Message-ID: when the ITU-T was starting work on NGN, which assumed QoS that users would select & pay for, I said in a panel at the ITU ?the Internet is not reliably crappy enough to drive that business plan? specifically, the Internet works for VoIP (for example) too much of the time for anyone to be willing to pay extra for QoS that would only apply a small part of the time and would not deal with many problems (like a tree falling & taking out your local access) the response from the BT person was ?we are missing a TCP settlement protocol? Scott > On Aug 27, 2021, at 2:33 PM, Toerless Eckert via Internet-history wrote: > > Louis, > > In that FGNET2030 document where i wrote a section of, one of the core > goals was to explicitly eliminate transit as an initial targe for QoS - because > we have to much experience (yours included) how difficult it is to figure > out not only what it could be, but then more importantly, how to finance it. > > To answer the question what it could be: If i was an access provider, i would > like transit that can support to provide different relative bandwidths to different > subscriber flows within my aggregate that i am providing to you. For example > ensuring no-loss for < 10% of my aggregate, so it could carry low-loss traffic, > such as traditional voice. and the rest just for example that my gold-class > customers get 4 times as much bandwidth when there is contention than my lead-class > customers. So i can sell more differentiated service to my customers and have this > work across transit. > > And we always failed in the way too complicated thought process in SPs about > the technologies required to monetize this. I saw this through when > inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. > > Cheers > Toerless > > On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: >> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: >>> >>> Pessimistic as i am, >>> I think those business models will again, like we saw 20 years ago with >>> MPLS/VPN >>> evolve in isolated VPN/slices across the same infrastructure. And >>> because they >>> are driven by a small number of customers such as mobile operators, >>> industrial or public >>> services/traffic-control/power-distribution/... etc, we will just see a >>> proliferation of hacked-together >>> qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. >>> Managemenet >>> of Queue weights by FAX messages between customer and subscriber is my >>> favourite common hack. >>> >>> As an ex-colleague-liked to say: www.showmethemoneyforqos.com >> >> Around 1999-2000 while I was at UUNET, I recall having conversations with >> some >> of the marketing people about building some sort of QoS product or feature >> into the >> Internet transit service that we sold. I asked them what their expectations >> (or >> really, what the customer's expectations) would be of such a product? Would >> it: >> >> - produce an obvious, demonstrable, differentiated level of performance on >> an >> on-going basis? >> - or, was it an insurance policy? 
>> >> If you're selling IP transit, the best-effort service can't suck too much >> because >> competition in the marketplace. You probably can't get by with even a 1% or >> 2% >> packet loss rate for best-effort delivery vs. a premium offering. So what >> would >> the differentiated QoS offering bring? We already sold different size >> bandwidth >> pipes.. A few percent packet loss across your backbone wasn't acceptable; >> it was >> a capacity problem to be solved. >> >> What about as an insurance policy? We already offered a 100% availability >> SLA to >> customers. Not because they wanted to collect a refund; they just wanted it >> to work. >> It was to demonstrate the confidence in the reliability of our platform. So >> the >> "insurance policy" against the thing we said wasn't going to happen? >> >> And then of course, as much as you'd like to believe you had all the >> important >> customers on your network, how was some sort of QoS performance commitment >> supposed >> to work over peering interconnects? We had all sort of backed into >> settlement-free >> peering interconnects and it wasn't at all clear how multiple classes of >> traffic >> was going obviously fit into that model. >> >> I'm a customer of Internet transit these days, and I have no idea how I'd >> buy a >> QoS product if the problem I'm trying to solve is reaching a segment of >> customers >> defined by "everywhere on the Internet." >> >> Louis Mamakos > > -- > --- > tte at cs.fau.de > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From dhc at dcrocker.net Fri Aug 27 13:25:48 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 27 Aug 2021 13:25:48 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Message-ID: <3ec8850c-8ddf-f571-19ac-5a10e1e6f394@dcrocker.net> On 8/27/2021 1:24 PM, Scott O. Bradner wrote: > the response from the BT person was ?we are missing a TCP settlement protocol? I hope your response was something like "well you might be, but the Internet demonstrably isn't." d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From sob at sobco.com Fri Aug 27 13:29:42 2021 From: sob at sobco.com (Scott O. Bradner) Date: Fri, 27 Aug 2021 16:29:42 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <3ec8850c-8ddf-f571-19ac-5a10e1e6f394@dcrocker.net> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <3ec8850c-8ddf-f571-19ac-5a10e1e6f394@dcrocker.net> Message-ID: that was after my speaking slot so I did not get a chance to point that out > On Aug 27, 2021, at 4:25 PM, Dave Crocker wrote: > > On 8/27/2021 1:24 PM, Scott O. Bradner wrote: >> the response from the BT person was ?we are missing a TCP settlement protocol? > > I hope your response was something like "well you might be, but the Internet demonstrably isn't." 
> > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net From dhc at dcrocker.net Fri Aug 27 13:48:11 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 27 Aug 2021 13:48:11 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de> <20210827182707.A1AF127067F3@ary.qy> <20210827195634.GZ50345@faui48f.informatik.uni-erlangen.de> <396b5454-8206-1345-08c5-1a3c654b43cb@dcrocker.net> Message-ID: On 8/27/2021 1:27 PM, the keyboard of geoff goodfellow wrote: > do you consider virus/malware/botware infected/laden email (as) spam (or > a separate issue/thing)? i meant it as a generic use for all the ugly stuff coming in via mail. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From tte at cs.fau.de Fri Aug 27 14:44:26 2021 From: tte at cs.fau.de (Toerless Eckert) Date: Fri, 27 Aug 2021 23:44:26 +0200 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Message-ID: <20210827214426.GA50345@faui48f.informatik.uni-erlangen.de> On Fri, Aug 27, 2021 at 04:24:05PM -0400, Scott O. Bradner wrote: > when the ITU-T was starting work on NGN, which assumed QoS that users would select & pay for, I said > in a panel at the ITU ?the Internet is not reliably crappy enough to drive that business plan? That IMHO very much depends on the actual application requirements > specifically, the Internet works for VoIP (for example) too much of the time > for anyone to be willing to pay extra for QoS that would only apply a small part of the time and would not deal. with many problems (like a tree falling & taking out your local access) Counterpoints: >From my experience, SPs that in the past decade migrated their own analog/digital infrastructure to VoIP do use DiffServ to protect it and to be able to provide the same flawless quality as they had in before. And of course, those SPs will never offer such a DiffServ network option to any OTT voip provideer because QoS is one of the few distinguishing aspects that an OTT can not easily clone. So with this data point i would re-emphasize that i think business models and regulations for equal access to nework services are a key challenge to enable use of better network services. Besides: 90% of all TCP/IP use is not the Internet, but in limited domain networks, and you will find a lot of QoS there, especially also when its being sold as managed services. Its the fine-grained business model of Internet subscribers where so far no business model evolved that would not compete with biger gains through siloed platforms such as SP owned VoIP service (see above). > the response from the BT person was ?we are missing a TCP settlement protocol? I thought we had that. In 2020, we had national politicans asking CEOs of content streamers like Netflix to reduce streaming rates during Corona to allow more WFH productivity, and i think tha was actually done for a few months by several countries in Network if i remember correctly. Does that count ? Ok. Fun aside. How do you call it when SPs need to do per-flow and per-subscriber policing of traffic to re-balance the unfairness introduced by a variability of aggressivness of deployed CCs, especially those like torrents that easily eat bandwidth persistently like an ideal gas ? 
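The per-flow and per-subscriber policing Toerless mentions is usually a token bucket kept per flow or per subscriber. A bare-bones sketch, mine and not from the thread, with made-up rate and burst numbers:

    class TokenBucketPolicer:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # refill, in bytes per second
            self.burst = float(burst_bytes)
            self.tokens = float(burst_bytes)
            self.last = 0.0

        def allow(self, packet_bytes, now):
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True                     # conforming: forward
            return False                        # exceeding: drop or re-mark

    # A flow blasting 1500-byte packets every millisecond (12 Mbit/s) against a
    # 2 Mbit/s contract is held to roughly the contract plus its burst allowance,
    # however aggressive its congestion control is.
    p = TokenBucketPolicer(rate_bps=2_000_000, burst_bytes=30_000)
    forwarded = sum(p.allow(1500, t / 1000.0) for t in range(1000))
    print(forwarded * 1500 * 8, "bits forwarded in the first second")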
This type of overcoming basic limitations of the end-to-end CC architecture of the Internet has been deployed by SPs for a long time and is of course becoming ever more difficult to recreate the desired fairness the more end-to-end encryption is used by application. Why do we have BOFs for something like MADINAS ? Cheers Toerless > Scott > > > On Aug 27, 2021, at 2:33 PM, Toerless Eckert via Internet-history wrote: > > > > Louis, > > > > In that FGNET2030 document where i wrote a section of, one of the core > > goals was to explicitly eliminate transit as an initial targe for QoS - because > > we have to much experience (yours included) how difficult it is to figure > > out not only what it could be, but then more importantly, how to finance it. > > > > To answer the question what it could be: If i was an access provider, i would > > like transit that can support to provide different relative bandwidths to different > > subscriber flows within my aggregate that i am providing to you. For example > > ensuring no-loss for < 10% of my aggregate, so it could carry low-loss traffic, > > such as traditional voice. and the rest just for example that my gold-class > > customers get 4 times as much bandwidth when there is contention than my lead-class > > customers. So i can sell more differentiated service to my customers and have this > > work across transit. > > > > And we always failed in the way too complicated thought process in SPs about > > the technologies required to monetize this. I saw this through when > > inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. > > > > Cheers > > Toerless > > > > On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: > >> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: > >>> > >>> Pessimistic as i am, > >>> I think those business models will again, like we saw 20 years ago with > >>> MPLS/VPN > >>> evolve in isolated VPN/slices across the same infrastructure. And > >>> because they > >>> are driven by a small number of customers such as mobile operators, > >>> industrial or public > >>> services/traffic-control/power-distribution/... etc, we will just see a > >>> proliferation of hacked-together > >>> qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. > >>> Managemenet > >>> of Queue weights by FAX messages between customer and subscriber is my > >>> favourite common hack. > >>> > >>> As an ex-colleague-liked to say: www.showmethemoneyforqos.com > >> > >> Around 1999-2000 while I was at UUNET, I recall having conversations with > >> some > >> of the marketing people about building some sort of QoS product or feature > >> into the > >> Internet transit service that we sold. I asked them what their expectations > >> (or > >> really, what the customer's expectations) would be of such a product? Would > >> it: > >> > >> - produce an obvious, demonstrable, differentiated level of performance on > >> an > >> on-going basis? > >> - or, was it an insurance policy? > >> > >> If you're selling IP transit, the best-effort service can't suck too much > >> because > >> competition in the marketplace. You probably can't get by with even a 1% or > >> 2% > >> packet loss rate for best-effort delivery vs. a premium offering. So what > >> would > >> the differentiated QoS offering bring? We already sold different size > >> bandwidth > >> pipes.. A few percent packet loss across your backbone wasn't acceptable; > >> it was > >> a capacity problem to be solved. > >> > >> What about as an insurance policy? 
We already offered a 100% availability
> >> SLA to
> >> customers. Not because they wanted to collect a refund; they just wanted it
> >> to work.
> >> It was to demonstrate the confidence in the reliability of our platform. So
> >> the
> >> "insurance policy" against the thing we said wasn't going to happen?
> >>
> >> And then of course, as much as you'd like to believe you had all the
> >> important
> >> customers on your network, how was some sort of QoS performance commitment
> >> supposed
> >> to work over peering interconnects? We had all sort of backed into
> >> settlement-free
> >> peering interconnects and it wasn't at all clear how multiple classes of
> >> traffic
> >> was going obviously fit into that model.
> >>
> >> I'm a customer of Internet transit these days, and I have no idea how I'd
> >> buy a
> >> QoS product if the problem I'm trying to solve is reaching a segment of
> >> customers
> >> defined by "everywhere on the Internet."
> >>
> >> Louis Mamakos
>
> --
> ---
> tte at cs.fau.de
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From geoff at iconia.com  Fri Aug 27 13:27:49 2021
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Fri, 27 Aug 2021 10:27:49 -1000
Subject: [ih] Better-than-Best Effort
In-Reply-To: <396b5454-8206-1345-08c5-1a3c654b43cb@dcrocker.net>
References: <20210827053409.GR50345@faui48f.informatik.uni-erlangen.de>
 <20210827182707.A1AF127067F3@ary.qy>
 <20210827195634.GZ50345@faui48f.informatik.uni-erlangen.de>
 <396b5454-8206-1345-08c5-1a3c654b43cb@dcrocker.net>
Message-ID: 

do you consider virus/malware/botware infected/laden email (as) spam (or
a separate issue/thing)?

On Fri, Aug 27, 2021 at 10:21 AM Dave Crocker via Internet-history <
internet-history at elists.isoc.org> wrote:

> On 8/27/2021 12:56 PM, Toerless Eckert via Internet-history wrote:
> > Hmm... As long as you or your users get the normal level of spam,
> > it seems that easy enough to deploy spam filters like spamassassin.
>
> Sorry, no. incoming spam is a MUCH more complex and complincated issue
> than that.
>
>
> d/
> --
> Dave Crocker
> Brandenburg InternetWorking
> bbiw.net
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>
>

-- 
Geoff.Goodfellow at iconia.com
living as The Truth is True

From touch at strayalpha.com  Fri Aug 27 15:49:04 2021
From: touch at strayalpha.com (touch at strayalpha.com)
Date: Fri, 27 Aug 2021 15:49:04 -0700
Subject: [ih] Better-than-Best Effort
In-Reply-To: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
Message-ID: 

Hi, Dave,

> On Aug 26, 2021, at 3:39 PM, Dave Crocker via Internet-history wrote:
>
> Having not followed actual QOS work over the year, my naive brain wandered oddly, today with a thought about a semi-QOS approach.
>
> The usual view is that it requires complete, end-to-end support. Massive barriers to adoption, at the least.

Absolute QoS does (ensuring 300 Mbps capacity), but relative QoS can be deployed as a layer on top of nearly anything -- i.e., run RSVP in an overlay and you don't get 300 Mbps per se, but that reservation would get twice the capacity of one reserving 150 Mbps on paths they share.

> I'm thinking that the long-haul infrastructure tends to have enough capacity that it usually isn't the source of latency.
> It's the beginning and ending legs that do.

We do have some cases where that happens in the customer upload direction (bufferbloat), but I wonder if it's more often in the aggregation network between the edge networks and the core. That's the typical case I've seen for cable Internet, where the aggregation tree was designed assuming ratios that don't match current transport protocol use. I have 200 Mbps cable over a WiFi LAN that can support 2.2Gbps, but I almost never see those capacities.

At the other side, I wonder too if there are overloads on the end systems more than the edge net.

> So what about a scheme that defines and provides QOS in those segments but not the long middle? Cheaper, more implementable, and might give usefully-better performance.

Interesting question; FWIW, I don't know if the edge is more agile than the core; AFAICT, they're both susceptible to the same inertia and lack of consolidated oversight...

> Assuming that this idea is new only to me, I'm curious about reactions/history/etc.
>
> d/
> --
> Dave Crocker
> Brandenburg InternetWorking
> bbiw.net

--
Joe Touch, temporal epistemologist
www.strayalpha.com

From brian.e.carpenter at gmail.com  Fri Aug 27 15:51:32 2021
From: brian.e.carpenter at gmail.com (Brian E Carpenter)
Date: Sat, 28 Aug 2021 10:51:32 +1200
Subject: [ih] Better-than-Best Effort
In-Reply-To: 
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
 <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com>
 <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de>
Message-ID: 

On 28-Aug-21 08:24, Scott O. Bradner via Internet-history wrote:
> when the ITU-T was starting work on NGN, which assumed QoS that users would select & pay for, I said
> in a panel at the ITU "the Internet is not reliably crappy enough to drive that business plan"
>
> specifically, the Internet works for VoIP (for example) too much of the time for anyone to be willing to pay extra
> for QoS that would only apply a small part of the time and would not deal with many problems (like
> a tree falling & taking out your local access)
>
> the response from the BT person was "we are missing a TCP settlement protocol"

And so it's been since the beginning of time. One of my favourite memories is a meeting in Brussels around 1995 (effectively pre-web) when we, the science community, met with several PTTs to discuss our need for international 34 Mb/s links. I remember one of them saying "You can't possibly need that much bandwidth for a private network [i.e. the Internet], there are no applications that could ever need that." I think it was someone from Telecom Italia but they all agreed with him. Of course these were people who made a lot of their money via settlements and had no real concept of a connectionless network.

I'm a bit shocked that BT still had people thinking that way as recently as the NGN hypefest.

As long as queuing theory holds and glass fibres are cheap, I am not sure much is going to change.

Brian

>
> Scott
>
>> On Aug 27, 2021, at 2:33 PM, Toerless Eckert via Internet-history wrote:
>>
>> Louis,
>>
>> In that FGNET2030 document where i wrote a section of, one of the core
>> goals was to explicitly eliminate transit as an initial targe for QoS - because
>> we have to much experience (yours included) how difficult it is to figure
>> out not only what it could be, but then more importantly, how to finance it.
>> >> To answer the question what it could be: If i was an access provider, i would >> like transit that can support to provide different relative bandwidths to different >> subscriber flows within my aggregate that i am providing to you. For example >> ensuring no-loss for < 10% of my aggregate, so it could carry low-loss traffic, >> such as traditional voice. and the rest just for example that my gold-class >> customers get 4 times as much bandwidth when there is contention than my lead-class >> customers. So i can sell more differentiated service to my customers and have this >> work across transit. >> >> And we always failed in the way too complicated thought process in SPs about >> the technologies required to monetize this. I saw this through when >> inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. >> >> Cheers >> Toerless >> >> On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: >>> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: >>>> >>>> Pessimistic as i am, >>>> I think those business models will again, like we saw 20 years ago with >>>> MPLS/VPN >>>> evolve in isolated VPN/slices across the same infrastructure. And >>>> because they >>>> are driven by a small number of customers such as mobile operators, >>>> industrial or public >>>> services/traffic-control/power-distribution/... etc, we will just see a >>>> proliferation of hacked-together >>>> qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. >>>> Managemenet >>>> of Queue weights by FAX messages between customer and subscriber is my >>>> favourite common hack. >>>> >>>> As an ex-colleague-liked to say: www.showmethemoneyforqos.com >>> >>> Around 1999-2000 while I was at UUNET, I recall having conversations with >>> some >>> of the marketing people about building some sort of QoS product or feature >>> into the >>> Internet transit service that we sold. I asked them what their expectations >>> (or >>> really, what the customer's expectations) would be of such a product? Would >>> it: >>> >>> - produce an obvious, demonstrable, differentiated level of performance on >>> an >>> on-going basis? >>> - or, was it an insurance policy? >>> >>> If you're selling IP transit, the best-effort service can't suck too much >>> because >>> competition in the marketplace. You probably can't get by with even a 1% or >>> 2% >>> packet loss rate for best-effort delivery vs. a premium offering. So what >>> would >>> the differentiated QoS offering bring? We already sold different size >>> bandwidth >>> pipes.. A few percent packet loss across your backbone wasn't acceptable; >>> it was >>> a capacity problem to be solved. >>> >>> What about as an insurance policy? We already offered a 100% availability >>> SLA to >>> customers. Not because they wanted to collect a refund; they just wanted it >>> to work. >>> It was to demonstrate the confidence in the reliability of our platform. So >>> the >>> "insurance policy" against the thing we said wasn't going to happen? >>> >>> And then of course, as much as you'd like to believe you had all the >>> important >>> customers on your network, how was some sort of QoS performance commitment >>> supposed >>> to work over peering interconnects? We had all sort of backed into >>> settlement-free >>> peering interconnects and it wasn't at all clear how multiple classes of >>> traffic >>> was going obviously fit into that model. 
>>> >>> I'm a customer of Internet transit these days, and I have no idea how I'd >>> buy a >>> QoS product if the problem I'm trying to solve is reaching a segment of >>> customers >>> defined by "everywhere on the Internet." >>> >>> Louis Mamakos >> >> -- >> --- >> tte at cs.fau.de >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jack at 3kitty.org Fri Aug 27 16:09:54 2021 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 27 Aug 2021 16:09:54 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Message-ID: <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> Perhaps packet switching has outlived its usefulness??? With so much bandwidth, and so much computing power, setting up a circuit might now be better than moving packets? It's somewhat akin to the advent of cheap computers killing timesharing as PCs became dominant. Surely that thought will cause a ruckus here! /Jack On 8/27/21 3:51 PM, Brian E Carpenter via Internet-history wrote: > As long as queuing theory holds and glass fibres are cheap, I am not sure > much is going to change. From vint at google.com Fri Aug 27 16:10:49 2021 From: vint at google.com (Vint Cerf) Date: Fri, 27 Aug 2021 19:10:49 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> Message-ID: time-sharing is alive and well - spelled CLOUD v On Fri, Aug 27, 2021 at 7:10 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Perhaps packet switching has outlived its usefulness? With so much > bandwidth, and so much computing power, setting up a circuit might now > be better than moving packets? > > It's somewhat akin to the advent of cheap computers killing timesharing > as PCs became dominant. > > Surely that thought will cause a ruckus here! > > /Jack > > > On 8/27/21 3:51 PM, Brian E Carpenter via Internet-history wrote: > > As long as queuing theory holds and glass fibres are cheap, I am not sure > > much is going to change. > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From bpurvy at gmail.com Fri Aug 27 16:16:36 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 27 Aug 2021 16:16:36 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> Message-ID: "Perhaps packet switching has outlived its usefulness? 
With so much bandwidth, and so much computing power, setting up a circuit might now be better than moving packets?" I could say "Vint Cerf is spinning in his grave" but last I checked he's not dead. This is an interesting idea. I don't know where you go with it, but it's interesting. On Fri, Aug 27, 2021 at 4:10 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Perhaps packet switching has outlived its usefulness? With so much > bandwidth, and so much computing power, setting up a circuit might now > be better than moving packets? > > It's somewhat akin to the advent of cheap computers killing timesharing > as PCs became dominant. > > Surely that thought will cause a ruckus here! > > /Jack > > > On 8/27/21 3:51 PM, Brian E Carpenter via Internet-history wrote: > > As long as queuing theory holds and glass fibres are cheap, I am not sure > > much is going to change. > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jack at 3kitty.org Fri Aug 27 16:24:31 2021 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 27 Aug 2021 16:24:31 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> Message-ID: <0441c3f0-218e-6ed5-89c1-f574a957478a@3kitty.org> Well, some of us actually foresaw the eventual need for management mechanisms like a "Settlement Protocol".?? For example, somewhere in the late 80s I asked Cyndi Mills to drive an IETF activity on "Usage Accounting", or something like that, so it would be possible as a first step to actually gather data on usage of the net.?? That led to things like RFC 1272 (see https://sandbox.ietf.org/doc/rfc1272/ ) But "rough consensus" was the hurdle to jump, and anything that had the taint of possible money issues involved was soundly suppressed by the research community.?? So as the Internet became a commercial infrastructure, things like a "Settlement Protocol" were missing. That's another piece of the Internet History of What Didn't Happen. /Jack On 8/27/21 1:24 PM, Scott O. Bradner via Internet-history wrote: > when the ITU-T was starting work on NGN, which assumed QoS that users would select & pay for, I said > in a panel at the ITU ?the Internet is not reliably crappy enough to drive that business plan? > > specifically, the Internet works for VoIP (for example) too much of the time for anyone to be willing to pay extra > for QoS that would only apply a small part of the time and would not deal with many problems (like > a tree falling & taking out your local access) > > the response from the BT person was ?we are missing a TCP settlement protocol? > > Scott > >> On Aug 27, 2021, at 2:33 PM, Toerless Eckert via Internet-history wrote: >> >> Louis, >> >> In that FGNET2030 document where i wrote a section of, one of the core >> goals was to explicitly eliminate transit as an initial targe for QoS - because >> we have to much experience (yours included) how difficult it is to figure >> out not only what it could be, but then more importantly, how to finance it. >> >> To answer the question what it could be: If i was an access provider, i would >> like transit that can support to provide different relative bandwidths to different >> subscriber flows within my aggregate that i am providing to you. 
For example >> ensuring no-loss for < 10% of my aggregate, so it could carry low-loss traffic, >> such as traditional voice. and the rest just for example that my gold-class >> customers get 4 times as much bandwidth when there is contention than my lead-class >> customers. So i can sell more differentiated service to my customers and have this >> work across transit. >> >> And we always failed in the way too complicated thought process in SPs about >> the technologies required to monetize this. I saw this through when >> inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. >> >> Cheers >> Toerless >> >> On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: >>> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: >>>> Pessimistic as i am, >>>> I think those business models will again, like we saw 20 years ago with >>>> MPLS/VPN >>>> evolve in isolated VPN/slices across the same infrastructure. And >>>> because they >>>> are driven by a small number of customers such as mobile operators, >>>> industrial or public >>>> services/traffic-control/power-distribution/... etc, we will just see a >>>> proliferation of hacked-together >>>> qos for one-off solutions. Like i have seen it in QoS in MPLS/VPN. >>>> Managemenet >>>> of Queue weights by FAX messages between customer and subscriber is my >>>> favourite common hack. >>>> >>>> As an ex-colleague-liked to say: www.showmethemoneyforqos.com >>> Around 1999-2000 while I was at UUNET, I recall having conversations with >>> some >>> of the marketing people about building some sort of QoS product or feature >>> into the >>> Internet transit service that we sold. I asked them what their expectations >>> (or >>> really, what the customer's expectations) would be of such a product? Would >>> it: >>> >>> - produce an obvious, demonstrable, differentiated level of performance on >>> an >>> on-going basis? >>> - or, was it an insurance policy? >>> >>> If you're selling IP transit, the best-effort service can't suck too much >>> because >>> competition in the marketplace. You probably can't get by with even a 1% or >>> 2% >>> packet loss rate for best-effort delivery vs. a premium offering. So what >>> would >>> the differentiated QoS offering bring? We already sold different size >>> bandwidth >>> pipes.. A few percent packet loss across your backbone wasn't acceptable; >>> it was >>> a capacity problem to be solved. >>> >>> What about as an insurance policy? We already offered a 100% availability >>> SLA to >>> customers. Not because they wanted to collect a refund; they just wanted it >>> to work. >>> It was to demonstrate the confidence in the reliability of our platform. So >>> the >>> "insurance policy" against the thing we said wasn't going to happen? >>> >>> And then of course, as much as you'd like to believe you had all the >>> important >>> customers on your network, how was some sort of QoS performance commitment >>> supposed >>> to work over peering interconnects? We had all sort of backed into >>> settlement-free >>> peering interconnects and it wasn't at all clear how multiple classes of >>> traffic >>> was going obviously fit into that model. >>> >>> I'm a customer of Internet transit these days, and I have no idea how I'd >>> buy a >>> QoS product if the problem I'm trying to solve is reaching a segment of >>> customers >>> defined by "everywhere on the Internet." 
>>> >>> Louis Mamakos >> -- >> --- >> tte at cs.fau.de >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From olejacobsen at me.com Fri Aug 27 16:27:43 2021 From: olejacobsen at me.com (Ole Jacobsen) Date: Fri, 27 Aug 2021 16:27:43 -0700 (PDT) Subject: [ih] Recent history: "Hardening The Internet" Message-ID: IETF 88 Technical Plenary: Hardening The Internet. This is "only" 8 years ago, but it would be interesting to discuss if we've made any real progress since then. The main presenation starts at around 25 minutes, this link should take you to that point more or less: https://youtu.be/oV71hhEpQ20?t=1409 Ole Ole J. Jacobsen Editor and Publisher The Internet Protocol Journal Office: +1 415-550-9433 Cell: +1 415-370-4628 E-mail: olejacobsen at me.com Skype: organdemo From dhc at dcrocker.net Fri Aug 27 17:24:17 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 27 Aug 2021 17:24:17 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> Message-ID: <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> On 8/27/2021 4:10 PM, Vint Cerf via Internet-history wrote: > time-sharing is alive and well - spelled CLOUD As PCs started to emerge Postel commented that that would eliminate the need for time-sharing. I suggested that even for one person, it would be good for their computer to be running multiple, simultaneous activities. So, nevermind the cloud. Look at your phone. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From louie at transsys.com Fri Aug 27 20:45:12 2021 From: louie at transsys.com (Louis Mamakos) Date: Fri, 27 Aug 2021 23:45:12 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210827214426.GA50345@faui48f.informatik.uni-erlangen.de> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <20210827214426.GA50345@faui48f.informatik.uni-erlangen.de> Message-ID: On 27 Aug 2021, at 17:44, Toerless Eckert wrote: > Counterpoints: > > From my experience, SPs that in the past decade migrated their own > analog/digital > infrastructure to VoIP do use DiffServ to protect it and to be able to > provide the > same flawless quality as they had in before. And of course, those SPs > will never > offer such a DiffServ network option to any OTT voip provideer because > QoS is one of > the few distinguishing aspects that an OTT can not easily clone. So > with this data > point i would re-emphasize that i think business models and > regulations for equal > access to nework services are a key challenge to enable use of better > network services. > > Besides: 90% of all TCP/IP use is not the Internet, but in limited > domain networks, and > you will find a lot of QoS there, especially also when its being sold > as managed services. > Its the fine-grained business model of Internet subscribers where so > far no business > model evolved that would not compete with biger gains through siloed > platforms such as > SP owned VoIP service (see above). 
I spent a number of years as CTO of a large OTT VoIP service provider delivering "landline replacement" telephony service over the public Internet. It mostly works pretty well. Where it doesn't work, I don't believe that QoS would actually "fix" the problem. A customer on the end of an ADSL circuit with uplink speed measured in kilobits, not megabits was never going to have a great experience. Those on the end of satellite ISP service were just going to by stymied by speed-of-light latency that not even the ITU can fix with QoS standards. And some ISP last mile networks are just terribly operated. When my VoIP services at home don't perform well, it's because there's physical layer problems in the outside plant, not due to congestion or competing classes of application traffic. And from my time operating a VoIP service, my ear has become pretty well attuned to VoIP CODEC artifacts due to, e.g, packet loss or excessive jitter. We did quite a lot of call quality measurements sampled by way of observed delay jitter primarily to get a sense of this. Of course, my knowledge of this was 10 years or more ago, but since then last-mile networks have only increased in performance. And I'm still a customer of that service and it continues to work well. At last with VoIP, the mobile industry has gone quite a ways in training people to expect worse quality than traditional landline telephony, so even the expectations have been lowered. Louis Mamakos From jack at 3kitty.org Sat Aug 28 11:31:04 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 28 Aug 2021 11:31:04 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> Message-ID: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Actually, I think of "The Cloud" as TimeSharing 2.0. When I look at the last 60 years or so of history with an economics lens, there's a sort of cyclic pattern. Computers in the 60s were very expensive and users were charged for use by the number of CPU seconds their programs used.? To keep those expensive computers busy, they were fed a continuous stream of jobs on punch cards, day and night.?? You submitted your "job" and often got the results the next day.?? People could wait, computers could not.?? Their time was too valuable. People could interact directly with computers, but that would mean that the expensive computer would be idle while waiting for the user to digest what it had just done and tell it what to do next.?? So TimeSharing evolved as a useful technique to keep that expensive computer busy while allowing users to get their results faster, especially for very simple (to the CPU) tasks. TimeSharing was popular for quite a while, but eventually users got frustrated by the experience of relying on "The Comp Center" to keep the machine running, and the tendency for those managers to put more and more users on a machine until it no longer felt like a User had a whole computer to work with. But the Economics had changed.? Computers had gotten much much less expensive, so much so that it became economically feasible to purchase your own computer, and not worry about keeping it busy every minute of every day.? 
Plus you controlled that computer, rather than some bureaucracy inside the glassed-off computer installation.?? Power was in the Users' hands.? Workstations and departmental minicomputers arrived.? As costs dropped even further, Personal Computers became the norm. As LANs and PCs became dominant in offices and such commercial environments, the now-less-expensive "big computers" became servers, interacting with all those PCs in computer-to-computer communications, rather than the computer-to-terminal norm of TimeSharing. With costs still dropping rapidly, and silo-ization of "networking" creating an unwieldy mix of incompatible networking technologies even inside a single organization, corporate managers noticed that the expensive part of IT had become the labor involved in keeping all that stuff running, updated to fix critical vulnerabilities, and continuously upgraded as software vendors dictated and Users demanded.?? TCP/IP was a universal replacement for that hodgepodge, and Industry "embraced the Internet".?? All of the other networking schemes withered away. With costs of computing, and of communications, still dropping fast, the labor to operate an organization's "IT Department" became the dominant cost.?? Especially in smaller organizations, it was difficult to have the right personnel skills around to handle problems and needs that arose.?? A specialist in some aspect of technology might be needed urgently, but keeping such a person on the payroll would be an unnecessary expense when that particular problem or need had been addressed. Cloud computing provided a solution.?? By concentrating the technology all in one place, somewhere "out in the cloud", the expensive resources, whether computers or people, could be kept busy.?? Cloud computing might be viewed as TimeSharing of not only computers and storage, but also of people.?? TimeSharing 2.0 is here.? But instead of Users themselves interacting with a Computer in the Cloud, a User's personal computing device is interacting on that Users' behalf with often many Computers in several Clouds. Of course, similar changes have been happening, and continue to happen, in the costs of communications.? Back in the late 60s, when TimeSharing was emerging, computers were still expensive and there was a desire to share such expensive resources to keep them busy. Computers were connected to Users by use of Terminals, sending and receiving characters. At the time, communications was expensive.?? Leased lines could be bought, but not economically justified unless they were kept busy. Dial-up access was available, also expensive and priced by distance involved, and, just like in the case of the big expensive computers, dial-up lines were "wasted" while the User was thinking about what to do next. IIRC, a major motivation for Packet Switching was to address that economic problem, by allowing multiple Users to share communications circuits.?? The circuits could be kept busy, and a mix of leased and dial-up circuits could be used to achieve the lowest-cost means to interconnect those Users and Computers. I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were terminals all over the building, pretty much all connected through the ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA.? The technology of Packet Switching made it possible to keep a PDP-10 busy servicing all those Users and minimize the costs of everything, including those expensive communications circuits.? This was circa 1980.?? 
Users could efficiently share expensive communications, and expensive and distant computers -- although I always thought ARPA's choice to use a computer 3000 miles away was probably more to demonstrate the viability of the ARPANET than because it was cheaper than using a computer somewhere near DC. Since 1980, costs of everything have continued to drop.? But of course, Users' expectations have also continued to rise.? In the 1980s, the only economic way to move things like video files was shipping magtapes on an airplane.?? Streaming such material wasn't even a dream. The economics now are also different, and it would seem that eventually the economic motivation for techniques such as Packet Switching might have disappeared.? In addition, the limitations of such technology are becoming more evident, and some silo-ization of specific solutions has been happening.?? Bob Purvy's and Louis Mamakos' descriptions strike me as two examples of innovators tackling a specific problem with a point solution that mitigates that problem for a specific User community (aka their customers). There's a lot more to the story of course, as other changes and innovations occurred.? E.g., we no longer interact much directly with remote computers, but rather with the one on our desks or in our hands.? Latency is possibly more important now than bandwidth, since while fiber can provide lots of bandwidth but no one has yet figured out how to move data faster than the speed of light. So, that's a hopefully not too long explanation of why I mused that perhaps Packet Switching is no longer the best solution, at least when viewed through my economic lens.?? The need to share expensive communications lines has apparently almost disappeared, and latency is a new hurdle.??? If someone ever figures out how to make software that "just works" and doesn't need lots of care, perhaps "The Cloud" will wither away as well.? Maybe some AI like we see in the SciFi world.? Hopefully benevolent..... Just my perspective - YMMV, /Jack Haverty On 8/27/21 5:24 PM, Dave Crocker wrote: > On 8/27/2021 4:10 PM, Vint Cerf via Internet-history wrote: >> time-sharing is alive and well - spelled CLOUD > > As PCs started to emerge Postel commented that that would eliminate > the need for time-sharing.? I suggested that even for one person, it > would be good for their computer to be running multiple, simultaneous > activities. > > So, nevermind the cloud.? Look at your phone. > > d/ > From steve at shinkuro.com Sat Aug 28 11:51:15 2021 From: steve at shinkuro.com (Steve Crocker) Date: Sat, 28 Aug 2021 14:51:15 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Message-ID: Jack, You wrote: I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were terminals all over the building, pretty much all connected through the ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA. The technology of Packet Switching made it possible to keep a PDP-10 busy servicing all those Users and minimize the costs of everything, including those expensive communications circuits. This was circa 1980. 
Users could efficiently share expensive communications, and expensive and distant computers -- although I always thought ARPA's choice to use a computer 3000 miles away was probably more to demonstrate the viability of the ARPANET than because it was cheaper than using a computer somewhere near DC. The choice of USC-ISI in Marina del Rey was due to other factors. In 1972, with ARPA/IPTO (Larry Roberts) strong support, Keith Uncapher moved his research group out of RAND. Uncapher explored a couple of possibilities and found a comfortable institutional home with the University of Southern California (USC) with the proviso the institute would be off campus. Uncapher was solidly supportive of both ARPA/IPTO and of the Arpanet project. As the Arpanet grew, Roberts needed a place to have multiple PDP-10s providing service on the Arpanet. Not just for the staff at ARPA but for many others as well. Uncapher was cooperative and the rest followed easily. The fact that it demonstrated the viability of packet-switching over that distance was perhaps a bonus, but the same would have been true almost anywhere in the continental U.S. at that time. The more important factor was the quality of the relationship. One could imagine setting up a small farm of machines at various other universities, non-profits, or selected for profit companies or even some military bases. For each of these, cost, contracting rules, the ambitions of the principal investigator, and staff skill sets would have been the dominant concerns. Steve From jack at 3kitty.org Sat Aug 28 13:15:42 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 28 Aug 2021 13:15:42 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Message-ID: <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> Thanks, Steve.? I hadn't heard the details of why ISI was selected.?? I can believe that economics was probably a factor but the people and organizational issues could have been the dominant factors. IMHO, the "internet community" seems to often ignore non-technical influences on historical events, preferring to view everything in terms of RFCs, protocols, and such.? I think the other influences are an important part of the story - hence my "economic lens".?? You just described a view through a manager's lens. /Jack PS - I always thought that the "ARPANET demo" aspect of that ARPANET timeframe was suspect, especially after I noticed that the ARPANET had been configured with a leased circuit directly between the nearby IMPs to ISI and ARPA.?? So as a demo of "packet switching", there wasn't much actual switching involved.?? The 2 IMPs were more like multiplexors. I never heard whether that configuration was mandated by ARPA, or BBN decided to put a line in as a way to keep the customer happy, or if it just happened naturally as a result of the ongoing measurement of traffic flows and reconfiguration of the topology to adapt as needed.? Or something else.?? The interactivity of the service between a terminal at ARPA and a PDP-10 at ISI was noticeably better than other users (e.g., me) experienced. 
On 8/28/21 11:51 AM, Steve Crocker wrote: > Jack, > > You wrote: > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. > There were > terminals all over the building, pretty much all connected through the > ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA.? The > technology of Packet Switching made it possible to keep a PDP-10 busy > servicing all those Users and minimize the costs of everything, > including those expensive communications circuits.? This was circa > 1980. Users could efficiently share expensive communications, and > expensive and distant computers -- although I always thought ARPA's > choice to use a computer 3000 miles away was probably more to > demonstrate the viability of the ARPANET than because it was cheaper > than using a computer somewhere near DC. > > > The choice of USC-ISI in Marina del Rey was due to other factors.? In > 1972, with ARPA/IPTO (Larry Roberts) strong support, Keith Uncapher > moved his research group out of RAND.? Uncapher explored?a couple of > possibilities and found a comfortable institutional home with the > University of Southern California (USC) with the proviso the institute > would be off campus.? Uncapher was solidly supportive of both > ARPA/IPTO and of the Arpanet project. As the Arpanet grew, Roberts > needed a place to have multiple PDP-10s providing service on the > Arpanet.? Not just for the staff at ARPA but for many others as well. > Uncapher was cooperative and the rest followed easily. > > The fact that it demonstrated the viability of packet-switching over > that distance was perhaps a bonus, but the same would have been true > almost anywhere in the continental U.S. at that time.? The more > important factor was the quality of the relationship.? One could > imagine setting up a small farm of machines at various other > universities, non-profits, or selected for profit companies or even > some military?bases.? For each of these, cost, contracting rules, the > ambitions of the principal investigator, and staff skill sets would > have been the dominant concerns. > > Steve > From vint at google.com Sat Aug 28 13:55:21 2021 From: vint at google.com (Vint Cerf) Date: Sat, 28 Aug 2021 16:55:21 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> Message-ID: Jack, the 4 node configuration had two paths between UCLA and SRI and a two hop path to University of Utah. We did a variety of tests to force alternate routing (by congesting the first path). I used traffic generators in the IMPs and in the UCLA Sigma-7 to get this effect. Of course, we also crashed the Arpanet with these early experiments. v On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty wrote: > Thanks, Steve. I hadn't heard the details of why ISI was selected. I > can believe that economics was probably a factor but the people and > organizational issues could have been the dominant factors. > > IMHO, the "internet community" seems to often ignore non-technical > influences on historical events, preferring to view everything in terms of > RFCs, protocols, and such. 
I think the other influences are an important > part of the story - hence my "economic lens". You just described a view > through a manager's lens. > > /Jack > > PS - I always thought that the "ARPANET demo" aspect of that ARPANET > timeframe was suspect, especially after I noticed that the ARPANET had been > configured with a leased circuit directly between the nearby IMPs to ISI > and ARPA. So as a demo of "packet switching", there wasn't much actual > switching involved. The 2 IMPs were more like multiplexors. > > I never heard whether that configuration was mandated by ARPA, or BBN > decided to put a line in as a way to keep the customer happy, or if it just > happened naturally as a result of the ongoing measurement of traffic flows > and reconfiguration of the topology to adapt as needed. Or something > else. The interactivity of the service between a terminal at ARPA and a > PDP-10 at ISI was noticeably better than other users (e.g., me) experienced. > > On 8/28/21 11:51 AM, Steve Crocker wrote: > > Jack, > > You wrote: > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were > terminals all over the building, pretty much all connected through the > ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA. The > technology of Packet Switching made it possible to keep a PDP-10 busy > servicing all those Users and minimize the costs of everything, > including those expensive communications circuits. This was circa > 1980. Users could efficiently share expensive communications, and > expensive and distant computers -- although I always thought ARPA's > choice to use a computer 3000 miles away was probably more to > demonstrate the viability of the ARPANET than because it was cheaper > than using a computer somewhere near DC. > > > The choice of USC-ISI in Marina del Rey was due to other factors. In > 1972, with ARPA/IPTO (Larry Roberts) strong support, Keith Uncapher moved > his research group out of RAND. Uncapher explored a couple of > possibilities and found a comfortable institutional home with the > University of Southern California (USC) with the proviso the institute > would be off campus. Uncapher was solidly supportive of both ARPA/IPTO and > of the Arpanet project. As the Arpanet grew, Roberts needed a place to > have multiple PDP-10s providing service on the Arpanet. Not just for the > staff at ARPA but for many others as well. Uncapher was cooperative and > the rest followed easily. > > The fact that it demonstrated the viability of packet-switching over that > distance was perhaps a bonus, but the same would have been true almost > anywhere in the continental U.S. at that time. The more important factor > was the quality of the relationship. One could imagine setting up a small > farm of machines at various other universities, non-profits, or selected > for profit companies or even some military bases. For each of these, cost, > contracting rules, the ambitions of the principal investigator, and staff > skill sets would have been the dominant concerns. > > Steve > > > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From johnl at iecc.com Sat Aug 28 14:02:52 2021 From: johnl at iecc.com (John Levine) Date: 28 Aug 2021 17:02:52 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> Message-ID: <20210828210252.D288127123B7@ary.qy> It appears that Jack Haverty via Internet-history said: >Thanks, Steve.? 
I hadn't heard the details of why ISI was selected.?? I >can believe that economics was probably a factor but the people and >organizational issues could have been the dominant factors. I always assumed it was so the ex-RAND people wouldn't have to move. RAND and ISI are about 20 minutes apart by bicycle. R's, John From jack at 3kitty.org Sat Aug 28 14:06:43 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 28 Aug 2021 14:06:43 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> Message-ID: Sounds right.?? My experience was well after that early experimental period.? The ARPANET was much bigger (1980ish) and the topology had evolved over the years.? There was a direct 56K line (IIRC between ARPA-TIP and ISI) at that time.? Lots of other circuits too, but in normal conditions ARPA<->ISI traffic flowed directly over that long-haul circuit. ? /Jack On 8/28/21 1:55 PM, Vint Cerf wrote: > Jack, the 4 node configuration had two paths between UCLA and SRI and > a two hop path to University of Utah. > We did a variety of tests to force alternate routing (by congesting > the first path). > I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > this effect. Of course, we also crashed the Arpanet with these early > experiments. > > v > > > On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > wrote: > > Thanks, Steve.? I hadn't heard the details of why ISI was > selected.?? I can believe that economics was probably a factor but > the people and organizational issues could have been the dominant > factors. > > IMHO, the "internet community" seems to often ignore non-technical > influences on historical events, preferring to view everything in > terms of RFCs, protocols, and such.? I think the other influences > are an important part of the story - hence my "economic lens".?? > You just described a view through a manager's lens. > > /Jack > > PS - I always thought that the "ARPANET demo" aspect of that > ARPANET timeframe was suspect, especially after I noticed that the > ARPANET had been configured with a leased circuit directly between > the nearby IMPs to ISI and ARPA.?? So as a demo of "packet > switching", there wasn't much actual switching involved.?? The 2 > IMPs were more like multiplexors. > > I never heard whether that configuration was mandated by ARPA, or > BBN decided to put a line in as a way to keep the customer happy, > or if it just happened naturally as a result of the ongoing > measurement of traffic flows and reconfiguration of the topology > to adapt as needed.? Or something else.?? The interactivity of the > service between a terminal at ARPA and a PDP-10 at ISI was > noticeably better than other users (e.g., me) experienced. > > On 8/28/21 11:51 AM, Steve Crocker wrote: >> Jack, >> >> You wrote: >> >> I recall many visits to ARPA on Wilson Blvd in Arlington, VA. >> There were >> terminals all over the building, pretty much all connected >> through the >> ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >> CA.? 
The >> technology of Packet Switching made it possible to keep a >> PDP-10 busy >> servicing all those Users and minimize the costs of everything, >> including those expensive communications circuits.? This was >> circa >> 1980. Users could efficiently share expensive communications, and >> expensive and distant computers -- although I always thought >> ARPA's >> choice to use a computer 3000 miles away was probably more to >> demonstrate the viability of the ARPANET than because it was >> cheaper >> than using a computer somewhere near DC. >> >> >> The choice of USC-ISI in Marina del Rey was due to other >> factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong support, >> Keith Uncapher moved his research group out of RAND.? Uncapher >> explored?a couple of possibilities and found a comfortable >> institutional home with the University of Southern California >> (USC) with the proviso the institute would be off campus.? >> Uncapher was solidly supportive of both ARPA/IPTO and of the >> Arpanet project.? As the Arpanet grew, Roberts needed a place to >> have multiple PDP-10s providing service on the Arpanet.? Not just >> for the staff at ARPA but for many others as well.? Uncapher was >> cooperative and the rest followed easily. >> >> The fact that it demonstrated the viability of packet-switching >> over that distance was perhaps a bonus, but the same would have >> been true almost anywhere in the continental U.S. at that time. >> The more important factor was the quality of the relationship.? >> One could imagine setting up a small farm of machines at various >> other universities, non-profits, or selected for profit companies >> or even some military?bases.? For each of these, cost, >> contracting rules, the ambitions of the principal investigator, >> and staff skill sets would have been the dominant concerns. >> >> Steve >> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > From brian.e.carpenter at gmail.com Sat Aug 28 14:17:00 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 29 Aug 2021 09:17:00 +1200 Subject: [ih] Better-than-Best Effort In-Reply-To: <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> Message-ID: On 28-Aug-21 11:09, Jack Haverty via Internet-history wrote: > Perhaps packet switching has outlived its usefulness??? With so much > bandwidth, and so much computing power, setting up a circuit might now > be better than moving packets? It's possible, once you've decided you *need* to set up a circuit. But the packet-switched network is much better for discovery, very short transactions, and deciding that you need a circuit. Today, the circuit is set up using TCP. Maybe tomorrow, it will be set up by using QUIC. Maybe the day after, by using some kind of optical circuit switch. (However, when you look at the number of "connections" involved in a modern, advertising-rich, web page, I wonder about scalability.For example, I count 34 distinct http(s) destinations when I load the front page of The Guardian newspaper. Each one adds a TCP session.) > It's somewhat akin to the advent of cheap computers killing timesharing > as PCs became dominant. 
Except that they didn't; they just changed its nature. > Surely that thought will cause a ruckus here! Actually I find more radical thinking here than on most IETF lists. Regards Brian > > /Jack > > > On 8/27/21 3:51 PM, Brian E Carpenter via Internet-history wrote: >> As long as queuing theory holds and glass fibres are cheap, I am not sure >> much is going to change. > > From dhc at dcrocker.net Sat Aug 28 14:52:58 2021 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 28 Aug 2021 14:52:58 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <20210828210252.D288127123B7@ary.qy> References: <20210828210252.D288127123B7@ary.qy> Message-ID: On 8/28/2021 2:02 PM, John Levine via Internet-history wrote: > RAND and ISI are about 20 minutes apart by bicycle. Longer by car. Of course. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Sat Aug 28 15:15:11 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 29 Aug 2021 10:15:11 +1200 Subject: [ih] Better-than-Best Effort In-Reply-To: <0441c3f0-218e-6ed5-89c1-f574a957478a@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <0441c3f0-218e-6ed5-89c1-f574a957478a@3kitty.org> Message-ID: <883a6ed6-5afb-a64b-81d8-bfa587c32c86@gmail.com> On 28-Aug-21 11:24, Jack Haverty via Internet-history wrote: > Well, some of us actually foresaw the eventual need for management > mechanisms like a "Settlement Protocol".?? For example, somewhere in the > late 80s I asked Cyndi Mills to drive an IETF activity on "Usage > Accounting", or something like that, so it would be possible as a first > step to actually gather data on usage of the net.?? That led to things > like RFC 1272 (see https://sandbox.ietf.org/doc/rfc1272/ ) > > But "rough consensus" was the hurdle to jump, and anything that had the > taint of possible money issues involved was soundly suppressed by the > research community.?? So as the Internet became a commercial > infrastructure, things like a "Settlement Protocol" were missing. > > That's another piece of the Internet History of What Didn't Happen. Interesting. I'd say that effort fell like a small rock into deep water and was never seen again. Of course, there's been a lot of work on measurement of all kinds since then, but mainly for operational purposes and traffic engineering in particular. Usage-based charging, or at least data caps, have crept in at the subscriber edge, but otherwise the industry has gone in for capacity-based charging. Settlements has always been a dirty word. After I posted https://datatracker.ietf.org/doc/html/draft-carpenter-metrics-00 in 1996, I was taken off for a quiet lunch which turned into a good roasting by a couple of *very* senior ISP folk. Why? They hated any talk of this topic because they were very concerned about the Sherman Act. Brian > > /Jack > > > On 8/27/21 1:24 PM, Scott O. Bradner via Internet-history wrote: >> when the ITU-T was starting work on NGN, which assumed QoS that users would select & pay for, I said >> in a panel at the ITU ?the Internet is not reliably crappy enough to drive that business plan? 
>> >> specifically, the Internet works for VoIP (for example) too much of the time for anyone to be willing to pay extra >> for QoS that would only apply a small part of the time and would not deal with many problems (like >> a tree falling & taking out your local access) >> >> the response from the BT person was "we are missing a TCP settlement protocol" >> >> Scott >> >>> On Aug 27, 2021, at 2:33 PM, Toerless Eckert via Internet-history wrote: >>> >>> Louis, >>> >>> In that FGNET2030 document, where I wrote a section, one of the core >>> goals was to explicitly eliminate transit as an initial target for QoS - because >>> we have too much experience (yours included) of how difficult it is to figure >>> out not only what it could be, but then more importantly, how to finance it. >>> >>> To answer the question of what it could be: If I was an access provider, I would >>> like transit that can support providing different relative bandwidths to different >>> subscriber flows within my aggregate that I am providing to you. For example, >>> ensuring no loss for < 10% of my aggregate, so it could carry low-loss traffic >>> such as traditional voice, and for the rest, for example, that my gold-class >>> customers get 4 times as much bandwidth as my lead-class customers when there is >>> contention. So I can sell more differentiated service to my customers and have this >>> work across transit. >>> >>> And we always failed in the way-too-complicated thought process in SPs about >>> the technologies required to monetize this. I saw this through when >>> inter-provider Inter-AS VPN was considered by SPs. Way too convoluted. >>> >>> Cheers >>> Toerless >>> >>> On Fri, Aug 27, 2021 at 02:02:17PM -0400, Louis Mamakos wrote: >>>> On 26 Aug 2021, at 19:27, Toerless Eckert via Internet-history wrote: >>>>> Pessimistic as I am, >>>>> I think those business models will again, like we saw 20 years ago with >>>>> MPLS/VPN, >>>>> evolve into isolated VPN/slices across the same infrastructure. And >>>>> because they >>>>> are driven by a small number of customers such as mobile operators, >>>>> industrial or public >>>>> services/traffic-control/power-distribution/... etc, we will just see a >>>>> proliferation of hacked-together >>>>> QoS for one-off solutions. Like I have seen it in QoS in MPLS/VPN. >>>>> Management >>>>> of queue weights by FAX messages between customer and subscriber is my >>>>> favourite common hack. >>>>> >>>>> As an ex-colleague liked to say: www.showmethemoneyforqos.com >>>> Around 1999-2000 while I was at UUNET, I recall having conversations with >>>> some >>>> of the marketing people about building some sort of QoS product or feature >>>> into the >>>> Internet transit service that we sold. I asked them what their expectations >>>> (or >>>> really, what the customer's expectations) would be of such a product. Would >>>> it: >>>> >>>> - produce an obvious, demonstrable, differentiated level of performance on >>>> an >>>> on-going basis? >>>> - or, was it an insurance policy? >>>> >>>> If you're selling IP transit, the best-effort service can't suck too much >>>> because of >>>> competition in the marketplace. You probably can't get by with even a 1% or >>>> 2% >>>> packet loss rate for best-effort delivery vs. a premium offering. So what >>>> would >>>> the differentiated QoS offering bring? We already sold different size >>>> bandwidth >>>> pipes. A few percent packet loss across your backbone wasn't acceptable; >>>> it was >>>> a capacity problem to be solved.
>>>> >>>> What about as an insurance policy? We already offered a 100% availability >>>> SLA to >>>> customers. Not because they wanted to collect a refund; they just wanted it >>>> to work. >>>> It was to demonstrate our confidence in the reliability of our platform. So >>>> the >>>> "insurance policy" against the thing we said wasn't going to happen? >>>> >>>> And then of course, as much as you'd like to believe you had all the >>>> important >>>> customers on your network, how was some sort of QoS performance commitment >>>> supposed >>>> to work over peering interconnects? We had all sort of backed into >>>> settlement-free >>>> peering interconnects and it wasn't at all clear how multiple classes of >>>> traffic >>>> were obviously going to fit into that model. >>>> >>>> I'm a customer of Internet transit these days, and I have no idea how I'd >>>> buy a >>>> QoS product if the problem I'm trying to solve is reaching a segment of >>>> customers >>>> defined by "everywhere on the Internet." >>>> >>>> Louis Mamakos >>> -- >>> --- >>> tte at cs.fau.de >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > > From dan at lynch.com Sat Aug 28 18:48:02 2021 From: dan at lynch.com (Dan Lynch) Date: Sat, 28 Aug 2021 18:48:02 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> References: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Message-ID: Jack, thanks for the very accurate description of what has happened in the past 60 years or so. In the early 70s I experienced something quite unique. I was the single user of an SDS Sigma 5 (IBM copycat) that was used to track missiles with a very advanced radar in New Mexico. So I wrote the software, keypunched it myself, loaded it into the card reader and ran the batch monitor to compile the program and then ran the radar with the program. I had the whole system to myself. Since this was a high-priority program for the defense department, it was deemed the best use of the money! And I had the first taste of a big machine all to myself. What fun. Oh, I also was given the source code to all the systems software and libraries because the activity was highly classified and if I ran into a bug in the software provided by the manufacturer I had to find the problem and solve it without bringing anyone else in to my location. Hence I learned a lot of stuff that way. When I found and fixed a bug I would tell them the fix I had found. Quite an education! Dan Cell 650-776-7313 > On Aug 28, 2021, at 11:31 AM, Jack Haverty via Internet-history wrote: > > Actually, I think of "The Cloud" as TimeSharing 2.0. > > When I look at the last 60 years or so of history with an economics lens, there's a sort of cyclic pattern. > > Computers in the 60s were very expensive and users were charged for use by the number of CPU seconds their programs used. To keep those expensive computers busy, they were fed a continuous stream of jobs on punch cards, day and night. You submitted your "job" and often got the results the next day. People could wait, computers could not. Their time was too valuable. > > People could interact directly with computers, but that would mean that the expensive computer would be idle while waiting for the user to digest what it had just done and tell it what to do next.
So TimeSharing evolved as a useful technique to keep that expensive computer busy while allowing users to get their results faster, especially for very simple (to the CPU) tasks. > > TimeSharing was popular for quite a while, but eventually users got frustrated by the experience of relying on "The Comp Center" to keep the machine running, and the tendency for those managers to put more and more users on a machine until it no longer felt like a User had a whole computer to work with. > > But the Economics had changed. Computers had gotten much much less expensive, so much so that it became economically feasible to purchase your own computer, and not worry about keeping it busy every minute of every day. Plus you controlled that computer, rather than some bureaucracy inside the glassed-off computer installation. Power was in the Users' hands. Workstations and departmental minicomputers arrived. As costs dropped even further, Personal Computers became the norm. > > As LANs and PCs became dominant in offices and such commercial environments, the now-less-expensive "big computers" became servers, interacting with all those PCs in computer-to-computer communications, rather than the computer-to-terminal norm of TimeSharing. > > With costs still dropping rapidly, and silo-ization of "networking" creating an unwieldy mix of incompatible networking technologies even inside a single organization, corporate managers noticed that the expensive part of IT had become the labor involved in keeping all that stuff running, updated to fix critical vulnerabilities, and continuously upgraded as software vendors dictated and Users demanded. TCP/IP was a universal replacement for that hodgepodge, and Industry "embraced the Internet". All of the other networking schemes withered away. > > With costs of computing, and of communications, still dropping fast, the labor to operate an organization's "IT Department" became the dominant cost. Especially in smaller organizations, it was difficult to have the right personnel skills around to handle problems and needs that arose. A specialist in some aspect of technology might be needed urgently, but keeping such a person on the payroll would be an unnecessary expense when that particular problem or need had been addressed. > > Cloud computing provided a solution. By concentrating the technology all in one place, somewhere "out in the cloud", the expensive resources, whether computers or people, could be kept busy. Cloud computing might be viewed as TimeSharing of not only computers and storage, but also of people. TimeSharing 2.0 is here. But instead of Users themselves interacting with a Computer in the Cloud, a User's personal computing device is interacting on that Users' behalf with often many Computers in several Clouds. > > Of course, similar changes have been happening, and continue to happen, in the costs of communications. Back in the late 60s, when TimeSharing was emerging, computers were still expensive and there was a desire to share such expensive resources to keep them busy. Computers were connected to Users by use of Terminals, sending and receiving characters. > > At the time, communications was expensive. Leased lines could be bought, but not economically justified unless they were kept busy. Dial-up access was available, also expensive and priced by distance involved, and, just like in the case of the big expensive computers, dial-up lines were "wasted" while the User was thinking about what to do next. 
> > IIRC, a major motivation for Packet Switching was to address that economic problem, by allowing multiple Users to share communications circuits. The circuits could be kept busy, and a mix of leased and dial-up circuits could be used to achieve the lowest-cost means to interconnect those Users and Computers. > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were terminals all over the building, pretty much all connected through the ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA. The technology of Packet Switching made it possible to keep a PDP-10 busy servicing all those Users and minimize the costs of everything, including those expensive communications circuits. This was circa 1980. Users could efficiently share expensive communications, and expensive and distant computers -- although I always thought ARPA's choice to use a computer 3000 miles away was probably more to demonstrate the viability of the ARPANET than because it was cheaper than using a computer somewhere near DC. > > Since 1980, costs of everything have continued to drop. But of course, Users' expectations have also continued to rise. In the 1980s, the only economic way to move things like video files was shipping magtapes on an airplane. Streaming such material wasn't even a dream. > > The economics now are also different, and it would seem that eventually the economic motivation for techniques such as Packet Switching might have disappeared. In addition, the limitations of such technology are becoming more evident, and some silo-ization of specific solutions has been happening. Bob Purvy's and Louis Mamakos' descriptions strike me as two examples of innovators tackling a specific problem with a point solution that mitigates that problem for a specific User community (aka their customers). > > There's a lot more to the story of course, as other changes and innovations occurred. E.g., we no longer interact much directly with remote computers, but rather with the one on our desks or in our hands. Latency is possibly more important now than bandwidth, since while fiber can provide lots of bandwidth but no one has yet figured out how to move data faster than the speed of light. > > So, that's a hopefully not too long explanation of why I mused that perhaps Packet Switching is no longer the best solution, at least when viewed through my economic lens. The need to share expensive communications lines has apparently almost disappeared, and latency is a new hurdle. If someone ever figures out how to make software that "just works" and doesn't need lots of care, perhaps "The Cloud" will wither away as well. Maybe some AI like we see in the SciFi world. Hopefully benevolent..... > > Just my perspective - YMMV, > /Jack Haverty > > > > > > > >> On 8/27/21 5:24 PM, Dave Crocker wrote: >>> On 8/27/2021 4:10 PM, Vint Cerf via Internet-history wrote: >>> time-sharing is alive and well - spelled CLOUD >> >> As PCs started to emerge Postel commented that that would eliminate the need for time-sharing. I suggested that even for one person, it would be good for their computer to be running multiple, simultaneous activities. >> >> So, nevermind the cloud. Look at your phone. 
>> >> d/ >> > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Sat Aug 28 18:54:23 2021 From: bpurvy at gmail.com (Bob Purvy) Date: Sat, 28 Aug 2021 18:54:23 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Message-ID: Since the Sigma was mentioned several times here, maybe some of you can msg me privately (since it's kinda off-topic and I don't want to derail the discussion): Was there third-party software for the Sigma that you had to pay for? I don't mean consulting / managing / leasing services. I'm writing a companion blog post for my book about "what Xerox should have done," and one of the things I'm addressing is Dave Liddle's contention that there really wasn't a software industry prior to the Apple II and the IBM PC. I know that there was for IBM mainframes and for DEC machines, but it would be especially ironic if it also existed for XDS machines! On Sat, Aug 28, 2021 at 6:48 PM Dan Lynch via Internet-history < internet-history at elists.isoc.org> wrote: > Jack, thanks for the very accurate description of what has happened in the > past 60 years or so. In the early 70s I experienced something quite unique. > I was the single user of a SDS Sigma 5 (IBM copycat) that was used to track > missiles with a very advanced radar in New Mexico. So I wrote the software, > keypunched it myself, loaded it into the card reader and ran the batch > monitor to compile the program and then ran the radar with the program. I > had the whole system to myself. Since this was a high priority program for > the defense department it was deemed the best use of the money! And I had > the first taste of a big machine all to myself. What fun. Oh, I also was > given the source code to all the systems software and libraries because the > activity was highly classified and if I ran into a bug in the software > provided by the manufacturer I had to find the problem and solve it without > bringing anyone else in to my location. Hence I learned a lot of stuff that > way. When I found and fixed a bug I would tell them the fix I had found. > Quite an education! > > Dan > > Cell 650-776-7313 > > > On Aug 28, 2021, at 11:31 AM, Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > ?Actually, I think of "The Cloud" as TimeSharing 2.0. > > > > When I look at the last 60 years or so of history with an economics > lens, there's a sort of cyclic pattern. > > > > Computers in the 60s were very expensive and users were charged for use > by the number of CPU seconds their programs used. To keep those expensive > computers busy, they were fed a continuous stream of jobs on punch cards, > day and night. You submitted your "job" and often got the results the > next day. People could wait, computers could not. Their time was too > valuable. > > > > People could interact directly with computers, but that would mean that > the expensive computer would be idle while waiting for the user to digest > what it had just done and tell it what to do next. So TimeSharing evolved > as a useful technique to keep that expensive computer busy while allowing > users to get their results faster, especially for very simple (to the CPU) > tasks. 
> > > > TimeSharing was popular for quite a while, but eventually users got > frustrated by the experience of relying on "The Comp Center" to keep the > machine running, and the tendency for those managers to put more and more > users on a machine until it no longer felt like a User had a whole computer > to work with. > > > > But the Economics had changed. Computers had gotten much much less > expensive, so much so that it became economically feasible to purchase your > own computer, and not worry about keeping it busy every minute of every > day. Plus you controlled that computer, rather than some bureaucracy > inside the glassed-off computer installation. Power was in the Users' > hands. Workstations and departmental minicomputers arrived. As costs > dropped even further, Personal Computers became the norm. > > > > As LANs and PCs became dominant in offices and such commercial > environments, the now-less-expensive "big computers" became servers, > interacting with all those PCs in computer-to-computer communications, > rather than the computer-to-terminal norm of TimeSharing. > > > > With costs still dropping rapidly, and silo-ization of "networking" > creating an unwieldy mix of incompatible networking technologies even > inside a single organization, corporate managers noticed that the expensive > part of IT had become the labor involved in keeping all that stuff running, > updated to fix critical vulnerabilities, and continuously upgraded as > software vendors dictated and Users demanded. TCP/IP was a universal > replacement for that hodgepodge, and Industry "embraced the Internet". > All of the other networking schemes withered away. > > > > With costs of computing, and of communications, still dropping fast, the > labor to operate an organization's "IT Department" became the dominant > cost. Especially in smaller organizations, it was difficult to have the > right personnel skills around to handle problems and needs that arose. A > specialist in some aspect of technology might be needed urgently, but > keeping such a person on the payroll would be an unnecessary expense when > that particular problem or need had been addressed. > > > > Cloud computing provided a solution. By concentrating the technology > all in one place, somewhere "out in the cloud", the expensive resources, > whether computers or people, could be kept busy. Cloud computing might be > viewed as TimeSharing of not only computers and storage, but also of > people. TimeSharing 2.0 is here. But instead of Users themselves > interacting with a Computer in the Cloud, a User's personal computing > device is interacting on that Users' behalf with often many Computers in > several Clouds. > > > > Of course, similar changes have been happening, and continue to happen, > in the costs of communications. Back in the late 60s, when TimeSharing was > emerging, computers were still expensive and there was a desire to share > such expensive resources to keep them busy. Computers were connected to > Users by use of Terminals, sending and receiving characters. > > > > At the time, communications was expensive. Leased lines could be > bought, but not economically justified unless they were kept busy. Dial-up > access was available, also expensive and priced by distance involved, and, > just like in the case of the big expensive computers, dial-up lines were > "wasted" while the User was thinking about what to do next. 
> > > > IIRC, a major motivation for Packet Switching was to address that > economic problem, by allowing multiple Users to share communications > circuits. The circuits could be kept busy, and a mix of leased and > dial-up circuits could be used to achieve the lowest-cost means to > interconnect those Users and Computers. > > > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were > terminals all over the building, pretty much all connected through the > ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA. The > technology of Packet Switching made it possible to keep a PDP-10 busy > servicing all those Users and minimize the costs of everything, including > those expensive communications circuits. This was circa 1980. Users > could efficiently share expensive communications, and expensive and distant > computers -- although I always thought ARPA's choice to use a computer 3000 > miles away was probably more to demonstrate the viability of the ARPANET > than because it was cheaper than using a computer somewhere near DC. > > > > Since 1980, costs of everything have continued to drop. But of course, > Users' expectations have also continued to rise. In the 1980s, the only > economic way to move things like video files was shipping magtapes on an > airplane. Streaming such material wasn't even a dream. > > > > The economics now are also different, and it would seem that eventually > the economic motivation for techniques such as Packet Switching might have > disappeared. In addition, the limitations of such technology are becoming > more evident, and some silo-ization of specific solutions has been > happening. Bob Purvy's and Louis Mamakos' descriptions strike me as two > examples of innovators tackling a specific problem with a point solution > that mitigates that problem for a specific User community (aka their > customers). > > > > There's a lot more to the story of course, as other changes and > innovations occurred. E.g., we no longer interact much directly with > remote computers, but rather with the one on our desks or in our hands. > Latency is possibly more important now than bandwidth, since while fiber > can provide lots of bandwidth but no one has yet figured out how to move > data faster than the speed of light. > > > > So, that's a hopefully not too long explanation of why I mused that > perhaps Packet Switching is no longer the best solution, at least when > viewed through my economic lens. The need to share expensive > communications lines has apparently almost disappeared, and latency is a > new hurdle. If someone ever figures out how to make software that "just > works" and doesn't need lots of care, perhaps "The Cloud" will wither away > as well. Maybe some AI like we see in the SciFi world. Hopefully > benevolent..... > > > > Just my perspective - YMMV, > > /Jack Haverty > > > > > > > > > > > > > > > >> On 8/27/21 5:24 PM, Dave Crocker wrote: > >>> On 8/27/2021 4:10 PM, Vint Cerf via Internet-history wrote: > >>> time-sharing is alive and well - spelled CLOUD > >> > >> As PCs started to emerge Postel commented that that would eliminate the > need for time-sharing. I suggested that even for one person, it would be > good for their computer to be running multiple, simultaneous activities. > >> > >> So, nevermind the cloud. Look at your phone. 
> >> > >> d/ > >> > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dan at lynch.com Sat Aug 28 19:10:35 2021 From: dan at lynch.com (Dan Lynch) Date: Sat, 28 Aug 2021 19:10:35 -0700 Subject: [ih] Better-than-Best Effort In-Reply-To: References: Message-ID: And in 1980 I was hired by ISI to run that farm of PDP-10s and follow on machines. For a year or so the plan was to get even bigger, but then one day Bob Kahn told me my job was changed! The era of personal computers was unfolding and my role was to buy 3 of each one and try to put them on the nascent Internet and see which ones worked and which ones didn?t. I bought the early Sun machines, from Cadlinc, not Sun yet, Perqs from CMU and BLTs from some part of Bell Labs I think, and early Vaxes from Dec and I tried to get the early Dolfins from Xerox and the first Stars. Oh, and this was to be the end of Timesharing! And eventually it was, eh? After a few years of doing this I left ISI for Silicon Valley to get in on the revolution. Eventually I realized that the Internet was taking off and I started Interop to teach the world how to make the Internet work for them. Thanks for the ride. Dan Ps. Oh, Steve Crocker was on the hiring committee that selected me??? Cell 650-776-7313 > On Aug 28, 2021, at 11:51 AM, Steve Crocker via Internet-history wrote: > > ?Jack, > > You wrote: > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. There were > terminals all over the building, pretty much all connected through the > ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, CA. The > technology of Packet Switching made it possible to keep a PDP-10 busy > servicing all those Users and minimize the costs of everything, > including those expensive communications circuits. This was circa > 1980. Users could efficiently share expensive communications, and > expensive and distant computers -- although I always thought ARPA's > choice to use a computer 3000 miles away was probably more to > demonstrate the viability of the ARPANET than because it was cheaper > than using a computer somewhere near DC. > > > The choice of USC-ISI in Marina del Rey was due to other factors. In 1972, > with ARPA/IPTO (Larry Roberts) strong support, Keith Uncapher moved his > research group out of RAND. Uncapher explored a couple of possibilities > and found a comfortable institutional home with the University of Southern > California (USC) with the proviso the institute would be off campus. > Uncapher was solidly supportive of both ARPA/IPTO and of the Arpanet > project. As the Arpanet grew, Roberts needed a place to have multiple > PDP-10s providing service on the Arpanet. Not just for the staff at ARPA > but for many others as well. Uncapher was cooperative and the rest > followed easily. > > The fact that it demonstrated the viability of packet-switching over that > distance was perhaps a bonus, but the same would have been true almost > anywhere in the continental U.S. at that time. The more important factor > was the quality of the relationship. One could imagine setting up a small > farm of machines at various other universities, non-profits, or selected > for profit companies or even some military bases. 
For each of these, cost, > contracting rules, the ambitions of the principal investigator, and staff > skill sets would have been the dominant concerns. > > Steve > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From amckenzie3 at yahoo.com Sun Aug 29 07:03:50 2021 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Sun, 29 Aug 2021 14:03:50 +0000 (UTC) Subject: [ih] Better-than-Best Effort In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> Message-ID: <86019613.672324.1630245830259@mail.yahoo.com> This is the second email from Jack mentioning a point-to-point line between the ARPA TIP and the ISI site. I don't believe that is an accurate statement of the ARPAnet topology. In January 1975 there were 5 hops between the 2 on the shortest path. In October 1975 there were 6. I don't believe it was ever one or two hops, but perhaps someone can find a network map that proves me wrong. Alex McKenzie On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via Internet-history wrote: Sounds right. My experience was well after that early experimental period. The ARPANET was much bigger (1980ish) and the topology had evolved over the years. There was a direct 56K line (IIRC between ARPA-TIP and ISI) at that time. Lots of other circuits too, but in normal conditions ARPA<->ISI traffic flowed directly over that long-haul circuit. /Jack On 8/28/21 1:55 PM, Vint Cerf wrote: > Jack, the 4 node configuration had two paths between UCLA and SRI and > a two hop path to University of Utah. > We did a variety of tests to force alternate routing (by congesting > the first path). > I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > this effect. Of course, we also crashed the Arpanet with these early > experiments. > > v > > > On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > wrote: > > Thanks, Steve. I hadn't heard the details of why ISI was > selected. I can believe that economics was probably a factor but > the people and organizational issues could have been the dominant > factors. > > IMHO, the "internet community" seems to often ignore non-technical > influences on historical events, preferring to view everything in > terms of RFCs, protocols, and such. I think the other influences > are an important part of the story - hence my "economic lens". > You just described a view through a manager's lens. > > /Jack > > PS - I always thought that the "ARPANET demo" aspect of that > ARPANET timeframe was suspect, especially after I noticed that the > ARPANET had been configured with a leased circuit directly between > the nearby IMPs to ISI and ARPA. So as a demo of "packet > switching", there wasn't much actual switching involved. The 2 > IMPs were more like multiplexors. > > I never heard whether that configuration was mandated by ARPA, or > BBN decided to put a line in as a way to keep the customer happy, > or if it just happened naturally as a result of the ongoing > measurement of traffic flows and reconfiguration of the topology >
to adapt as needed.? Or something else.?? The interactivity of the >? ? service between a terminal at ARPA and a PDP-10 at ISI was >? ? noticeably better than other users (e.g., me) experienced. > >? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >>? ? Jack, >> >>? ? You wrote: >> >>? ? ? ? I recall many visits to ARPA on Wilson Blvd in Arlington, VA. >>? ? ? ? There were >>? ? ? ? terminals all over the building, pretty much all connected >>? ? ? ? through the >>? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >>? ? ? ? CA.? The >>? ? ? ? technology of Packet Switching made it possible to keep a >>? ? ? ? PDP-10 busy >>? ? ? ? servicing all those Users and minimize the costs of everything, >>? ? ? ? including those expensive communications circuits.? This was >>? ? ? ? circa >>? ? ? ? 1980. Users could efficiently share expensive communications, and >>? ? ? ? expensive and distant computers -- although I always thought >>? ? ? ? ARPA's >>? ? ? ? choice to use a computer 3000 miles away was probably more to >>? ? ? ? demonstrate the viability of the ARPANET than because it was >>? ? ? ? cheaper >>? ? ? ? than using a computer somewhere near DC. >> >> >>? ? The choice of USC-ISI in Marina del Rey was due to other >>? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong support, >>? ? Keith Uncapher moved his research group out of RAND.? Uncapher >>? ? explored?a couple of possibilities and found a comfortable >>? ? institutional home with the University of Southern California >>? ? (USC) with the proviso the institute would be off campus.? >>? ? Uncapher was solidly supportive of both ARPA/IPTO and of the >>? ? Arpanet project.? As the Arpanet grew, Roberts needed a place to >>? ? have multiple PDP-10s providing service on the Arpanet.? Not just >>? ? for the staff at ARPA but for many others as well.? Uncapher was >>? ? cooperative and the rest followed easily. >> >>? ? The fact that it demonstrated the viability of packet-switching >>? ? over that distance was perhaps a bonus, but the same would have >>? ? been true almost anywhere in the continental U.S. at that time. >>? ? The more important factor was the quality of the relationship.? >>? ? One could imagine setting up a small farm of machines at various >>? ? other universities, non-profits, or selected for profit companies >>? ? or even some military?bases.? For each of these, cost, >>? ? contracting rules, the ambitions of the principal investigator, >>? ? and staff skill sets would have been the dominant concerns. >> >>? ? 
Steve >> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From vint at google.com Sun Aug 29 07:28:20 2021 From: vint at google.com (Vint Cerf) Date: Sun, 29 Aug 2021 10:28:20 -0400 Subject: [ih] Better-than-Best Effort In-Reply-To: <86019613.672324.1630245830259@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> Message-ID: my recollection matches Alex' in this case v On Sun, Aug 29, 2021 at 10:04 AM Alex McKenzie via Internet-history < internet-history at elists.isoc.org> wrote: > This is the second email from Jack mentioning a point-to-point line > between the ARPA TIP and the ISI site. I don't believe that is an accurate > statement of the ARPAnet topology. In January 1975 there were 5 hops > between the 2 on the shortest path. In October 1975 there were 6. I don't > believe it was ever one or two hops, but perhaps someone can find a network > map that proves me wrong. > Alex McKenzie > > On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > Internet-history wrote: > > Sounds right. My experience was well after that early experimental > period. The ARPANET was much bigger (1980ish) and the topology had > evolved over the years. There was a direct 56K line (IIRC between > ARPA-TIP and ISI) at that time. Lots of other circuits too, but in > normal conditions ARPA<->ISI traffic flowed directly over that long-haul > circuit. /Jack > > On 8/28/21 1:55 PM, Vint Cerf wrote: > > Jack, the 4 node configuration had two paths between UCLA and SRI and > > a two hop path to University of Utah. > > We did a variety of tests to force alternate routing (by congesting > > the first path). > > I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > > this effect. Of course, we also crashed the Arpanet with these early > > experiments. > > > > v > > > > > > On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > wrote: > > > > Thanks, Steve. I hadn't heard the details of why ISI was > > selected. I can believe that economics was probably a factor but > > the people and organizational issues could have been the dominant > > factors. > > > > IMHO, the "internet community" seems to often ignore non-technical > > influences on historical events, preferring to view everything in > > terms of RFCs, protocols, and such. I think the other influences > > are an important part of the story - hence my "economic lens". > > You just described a view through a manager's lens. > > > > /Jack > > > > PS - I always thought that the "ARPANET demo" aspect of that > > ARPANET timeframe was suspect, especially after I noticed that the > > ARPANET had been configured with a leased circuit directly between > > the nearby IMPs to ISI and ARPA. So as a demo of "packet > > switching", there wasn't much actual switching involved. The 2 > > IMPs were more like multiplexors. 
> > > > I never heard whether that configuration was mandated by ARPA, or > > BBN decided to put a line in as a way to keep the customer happy, > > or if it just happened naturally as a result of the ongoing > > measurement of traffic flows and reconfiguration of the topology > > to adapt as needed. Or something else. The interactivity of the > > service between a terminal at ARPA and a PDP-10 at ISI was > > noticeably better than other users (e.g., me) experienced. > > > > On 8/28/21 11:51 AM, Steve Crocker wrote: > >> Jack, > >> > >> You wrote: > >> > >> I recall many visits to ARPA on Wilson Blvd in Arlington, VA. > >> There were > >> terminals all over the building, pretty much all connected > >> through the > >> ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, > >> CA. The > >> technology of Packet Switching made it possible to keep a > >> PDP-10 busy > >> servicing all those Users and minimize the costs of everything, > >> including those expensive communications circuits. This was > >> circa > >> 1980. Users could efficiently share expensive communications, and > >> expensive and distant computers -- although I always thought > >> ARPA's > >> choice to use a computer 3000 miles away was probably more to > >> demonstrate the viability of the ARPANET than because it was > >> cheaper > >> than using a computer somewhere near DC. > >> > >> > >> The choice of USC-ISI in Marina del Rey was due to other > >> factors. In 1972, with ARPA/IPTO (Larry Roberts) strong support, > >> Keith Uncapher moved his research group out of RAND. Uncapher > >> explored a couple of possibilities and found a comfortable > >> institutional home with the University of Southern California > >> (USC) with the proviso the institute would be off campus. > >> Uncapher was solidly supportive of both ARPA/IPTO and of the > >> Arpanet project. As the Arpanet grew, Roberts needed a place to > >> have multiple PDP-10s providing service on the Arpanet. Not just > >> for the staff at ARPA but for many others as well. Uncapher was > >> cooperative and the rest followed easily. > >> > >> The fact that it demonstrated the viability of packet-switching > >> over that distance was perhaps a bonus, but the same would have > >> been true almost anywhere in the continental U.S. at that time. > >> The more important factor was the quality of the relationship. > >> One could imagine setting up a small farm of machines at various > >> other universities, non-profits, or selected for profit companies > >> or even some military bases. For each of these, cost, > >> contracting rules, the ambitions of the principal investigator, > >> and staff skill sets would have been the dominant concerns. 
> >> > >> Steve > >> > > > > > > > > -- > > Please send any postal/overnight deliveries to: > > Vint Cerf > > 1435 Woodhurst Blvd > > McLean, VA 22102 > > 703-448-0965 <(703)%20448-0965> > > > > until further notice > > > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From amckenzie3 at yahoo.com Sun Aug 29 08:02:19 2021 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Sun, 29 Aug 2021 15:02:19 +0000 (UTC) Subject: [ih] More topology In-Reply-To: <86019613.672324.1630245830259@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> Message-ID: <395932638.701311.1630249339681@mail.yahoo.com> A look at some ARPAnet maps available on the web shows that in 1982 it was four hops from ARPA to ISI, but by 1985 it was one hop. Alex McKenzie On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via Internet-history wrote: This is the second email from Jack mentioning a point-to-point line between the ARPA TIP and the ISI site.? I don't believe that is an accurate statement of the ARPAnet topology.? In January 1975 there were 5 hops between the 2 on the shortest path. In October 1975 there were 6.? I don't believe it was ever one or two hops, but perhaps someone can find a network map that proves me wrong. Alex McKenzie ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via Internet-history wrote:? Sounds right.?? My experience was well after that early experimental period.? The ARPANET was much bigger (1980ish) and the topology had evolved over the years.? There was a direct 56K line (IIRC between ARPA-TIP and ISI) at that time.? Lots of other circuits too, but in normal conditions ARPA<->ISI traffic flowed directly over that long-haul circuit. ? /Jack On 8/28/21 1:55 PM, Vint Cerf wrote: > Jack, the 4 node configuration had two paths between UCLA and SRI and > a two hop path to University of Utah. > We did a variety of tests to force alternate routing (by congesting > the first path). > I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > this effect. Of course, we also crashed the Arpanet with these early > experiments. > > v > > > On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > wrote: > >? ? Thanks, Steve.? I hadn't heard the details of why ISI was >? ? selected.?? I can believe that economics was probably a factor but >? ? the people and organizational issues could have been the dominant >? ? factors. > >? ? IMHO, the "internet community" seems to often ignore non-technical >? ? influences on historical events, preferring to view everything in >? ? terms of RFCs, protocols, and such.? I think the other influences >? ? are an important part of the story - hence my "economic lens".?? >? ? You just described a view through a manager's lens. > >? ? /Jack > >? ? 
PS - I always thought that the "ARPANET demo" aspect of that >? ? ARPANET timeframe was suspect, especially after I noticed that the >? ? ARPANET had been configured with a leased circuit directly between >? ? the nearby IMPs to ISI and ARPA.?? So as a demo of "packet >? ? switching", there wasn't much actual switching involved.?? The 2 >? ? IMPs were more like multiplexors. > >? ? I never heard whether that configuration was mandated by ARPA, or >? ? BBN decided to put a line in as a way to keep the customer happy, >? ? or if it just happened naturally as a result of the ongoing >? ? measurement of traffic flows and reconfiguration of the topology >? ? to adapt as needed.? Or something else.?? The interactivity of the >? ? service between a terminal at ARPA and a PDP-10 at ISI was >? ? noticeably better than other users (e.g., me) experienced. > >? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >>? ? Jack, >> >>? ? You wrote: >> >>? ? ? ? I recall many visits to ARPA on Wilson Blvd in Arlington, VA. >>? ? ? ? There were >>? ? ? ? terminals all over the building, pretty much all connected >>? ? ? ? through the >>? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >>? ? ? ? CA.? The >>? ? ? ? technology of Packet Switching made it possible to keep a >>? ? ? ? PDP-10 busy >>? ? ? ? servicing all those Users and minimize the costs of everything, >>? ? ? ? including those expensive communications circuits.? This was >>? ? ? ? circa >>? ? ? ? 1980. Users could efficiently share expensive communications, and >>? ? ? ? expensive and distant computers -- although I always thought >>? ? ? ? ARPA's >>? ? ? ? choice to use a computer 3000 miles away was probably more to >>? ? ? ? demonstrate the viability of the ARPANET than because it was >>? ? ? ? cheaper >>? ? ? ? than using a computer somewhere near DC. >> >> >>? ? The choice of USC-ISI in Marina del Rey was due to other >>? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong support, >>? ? Keith Uncapher moved his research group out of RAND.? Uncapher >>? ? explored?a couple of possibilities and found a comfortable >>? ? institutional home with the University of Southern California >>? ? (USC) with the proviso the institute would be off campus.? >>? ? Uncapher was solidly supportive of both ARPA/IPTO and of the >>? ? Arpanet project.? As the Arpanet grew, Roberts needed a place to >>? ? have multiple PDP-10s providing service on the Arpanet.? Not just >>? ? for the staff at ARPA but for many others as well.? Uncapher was >>? ? cooperative and the rest followed easily. >> >>? ? The fact that it demonstrated the viability of packet-switching >>? ? over that distance was perhaps a bonus, but the same would have >>? ? been true almost anywhere in the continental U.S. at that time. >>? ? The more important factor was the quality of the relationship.? >>? ? One could imagine setting up a small farm of machines at various >>? ? other universities, non-profits, or selected for profit companies >>? ? or even some military?bases.? For each of these, cost, >>? ? contracting rules, the ambitions of the principal investigator, >>? ? and staff skill sets would have been the dominant concerns. >> >>? ? Steve >> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history ? 
-- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Sun Aug 29 09:43:05 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 29 Aug 2021 09:43:05 -0700 Subject: [ih] More topology In-Reply-To: <395932638.701311.1630249339681@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> Message-ID: Actually July 1981 -- see http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, Noel!)??? The experience I recall was being in the ARPANET NOC for some reason and noticing the topology on the big map that covered one wall of the NOC.?? There were 2 ARPANET nodes at that time labelled ISI, but I'm not sure where the PDP-10s were attached. ? Still just historically curious how the decision was made to configure that topology....but we'll probably never know.? /Jack On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > A look at some ARPAnet maps available on the web shows that in 1982 it was four hops from ARPA to ISI, but by 1985 it was one hop. > Alex McKenzie > > On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via Internet-history wrote: > > This is the second email from Jack mentioning a point-to-point line between the ARPA TIP and the ISI site.? I don't believe that is an accurate statement of the ARPAnet topology.? In January 1975 there were 5 hops between the 2 on the shortest path. In October 1975 there were 6.? I don't believe it was ever one or two hops, but perhaps someone can find a network map that proves me wrong. > Alex McKenzie > > ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via Internet-history wrote: > > Sounds right.?? My experience was well after that early experimental > period.? The ARPANET was much bigger (1980ish) and the topology had > evolved over the years.? There was a direct 56K line (IIRC between > ARPA-TIP and ISI) at that time.? Lots of other circuits too, but in > normal conditions ARPA<->ISI traffic flowed directly over that long-haul > circuit. ? /Jack > > On 8/28/21 1:55 PM, Vint Cerf wrote: >> Jack, the 4 node configuration had two paths between UCLA and SRI and >> a two hop path to University of Utah. >> We did a variety of tests to force alternate routing (by congesting >> the first path). >> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get >> this effect. Of course, we also crashed the Arpanet with these early >> experiments. >> >> v >> >> >> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > wrote: >> >> ? ? Thanks, Steve.? I hadn't heard the details of why ISI was >> ? ? selected.?? I can believe that economics was probably a factor but >> ? ? the people and organizational issues could have been the dominant >> ? ? factors. >> >> ? ? IMHO, the "internet community" seems to often ignore non-technical >> ? ? influences on historical events, preferring to view everything in >> ? ? terms of RFCs, protocols, and such.? I think the other influences >> ? ? 
are an important part of the story - hence my "economic lens". >> ? ? You just described a view through a manager's lens. >> >> ? ? /Jack >> >> ? ? PS - I always thought that the "ARPANET demo" aspect of that >> ? ? ARPANET timeframe was suspect, especially after I noticed that the >> ? ? ARPANET had been configured with a leased circuit directly between >> ? ? the nearby IMPs to ISI and ARPA.?? So as a demo of "packet >> ? ? switching", there wasn't much actual switching involved.?? The 2 >> ? ? IMPs were more like multiplexors. >> >> ? ? I never heard whether that configuration was mandated by ARPA, or >> ? ? BBN decided to put a line in as a way to keep the customer happy, >> ? ? or if it just happened naturally as a result of the ongoing >> ? ? measurement of traffic flows and reconfiguration of the topology >> ? ? to adapt as needed.? Or something else.?? The interactivity of the >> ? ? service between a terminal at ARPA and a PDP-10 at ISI was >> ? ? noticeably better than other users (e.g., me) experienced. >> >> ? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >>> ? ? Jack, >>> >>> ? ? You wrote: >>> >>> ? ? ? ? I recall many visits to ARPA on Wilson Blvd in Arlington, VA. >>> ? ? ? ? There were >>> ? ? ? ? terminals all over the building, pretty much all connected >>> ? ? ? ? through the >>> ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >>> ? ? ? ? CA.? The >>> ? ? ? ? technology of Packet Switching made it possible to keep a >>> ? ? ? ? PDP-10 busy >>> ? ? ? ? servicing all those Users and minimize the costs of everything, >>> ? ? ? ? including those expensive communications circuits.? This was >>> ? ? ? ? circa >>> ? ? ? ? 1980. Users could efficiently share expensive communications, and >>> ? ? ? ? expensive and distant computers -- although I always thought >>> ? ? ? ? ARPA's >>> ? ? ? ? choice to use a computer 3000 miles away was probably more to >>> ? ? ? ? demonstrate the viability of the ARPANET than because it was >>> ? ? ? ? cheaper >>> ? ? ? ? than using a computer somewhere near DC. >>> >>> >>> ? ? The choice of USC-ISI in Marina del Rey was due to other >>> ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong support, >>> ? ? Keith Uncapher moved his research group out of RAND.? Uncapher >>> ? ? explored?a couple of possibilities and found a comfortable >>> ? ? institutional home with the University of Southern California >>> ? ? (USC) with the proviso the institute would be off campus. >>> ? ? Uncapher was solidly supportive of both ARPA/IPTO and of the >>> ? ? Arpanet project.? As the Arpanet grew, Roberts needed a place to >>> ? ? have multiple PDP-10s providing service on the Arpanet.? Not just >>> ? ? for the staff at ARPA but for many others as well.? Uncapher was >>> ? ? cooperative and the rest followed easily. >>> >>> ? ? The fact that it demonstrated the viability of packet-switching >>> ? ? over that distance was perhaps a bonus, but the same would have >>> ? ? been true almost anywhere in the continental U.S. at that time. >>> ? ? The more important factor was the quality of the relationship. >>> ? ? One could imagine setting up a small farm of machines at various >>> ? ? other universities, non-profits, or selected for profit companies >>> ? ? or even some military?bases.? For each of these, cost, >>> ? ? contracting rules, the ambitions of the principal investigator, >>> ? ? and staff skill sets would have been the dominant concerns. >>> >>> ? ? 
Steve >>> >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> 1435 Woodhurst Blvd >> McLean, VA 22102 >> 703-448-0965 >> >> until further notice >> >> >> From steve at shinkuro.com Sun Aug 29 09:55:26 2021 From: steve at shinkuro.com (Steve Crocker) Date: Sun, 29 Aug 2021 12:55:26 -0400 Subject: [ih] More topology In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> Message-ID: Each IMP had four host ports. ISI had more than four machines that needed to be on the Arpanet, hence the second IMP. Steve On Sun, Aug 29, 2021 at 12:43 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Actually July 1981 -- see > http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, > Noel!) The experience I recall was being in the ARPANET NOC for some > reason and noticing the topology on the big map that covered one wall of > the NOC. There were 2 ARPANET nodes at that time labelled ISI, but I'm > not sure where the PDP-10s were attached. Still just historically > curious how the decision was made to configure that topology....but > we'll probably never know. /Jack > > > On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > > A look at some ARPAnet maps available on the web shows that in 1982 it > was four hops from ARPA to ISI, but by 1985 it was one hop. > > Alex McKenzie > > > > On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via > Internet-history wrote: > > > > This is the second email from Jack mentioning a point-to-point line > between the ARPA TIP and the ISI site. I don't believe that is an accurate > statement of the ARPAnet topology. In January 1975 there were 5 hops > between the 2 on the shortest path. In October 1975 there were 6. I don't > believe it was ever one or two hops, but perhaps someone can find a network > map that proves me wrong. > > Alex McKenzie > > > > On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > Internet-history wrote: > > > > Sounds right. My experience was well after that early experimental > > period. The ARPANET was much bigger (1980ish) and the topology had > > evolved over the years. There was a direct 56K line (IIRC between > > ARPA-TIP and ISI) at that time. Lots of other circuits too, but in > > normal conditions ARPA<->ISI traffic flowed directly over that long-haul > > circuit. /Jack > > > > On 8/28/21 1:55 PM, Vint Cerf wrote: > >> Jack, the 4 node configuration had two paths between UCLA and SRI and > >> a two hop path to University of Utah. > >> We did a variety of tests to force alternate routing (by congesting > >> the first path). > >> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > >> this effect. Of course, we also crashed the Arpanet with these early > >> experiments. > >> > >> v > >> > >> > >> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty >> > wrote: > >> > >> Thanks, Steve. I hadn't heard the details of why ISI was > >> selected. 
I can believe that economics was probably a factor but > >> the people and organizational issues could have been the dominant > >> factors. > >> > >> IMHO, the "internet community" seems to often ignore non-technical > >> influences on historical events, preferring to view everything in > >> terms of RFCs, protocols, and such. I think the other influences > >> are an important part of the story - hence my "economic lens". > >> You just described a view through a manager's lens. > >> > >> /Jack > >> > >> PS - I always thought that the "ARPANET demo" aspect of that > >> ARPANET timeframe was suspect, especially after I noticed that the > >> ARPANET had been configured with a leased circuit directly between > >> the nearby IMPs to ISI and ARPA. So as a demo of "packet > >> switching", there wasn't much actual switching involved. The 2 > >> IMPs were more like multiplexors. > >> > >> I never heard whether that configuration was mandated by ARPA, or > >> BBN decided to put a line in as a way to keep the customer happy, > >> or if it just happened naturally as a result of the ongoing > >> measurement of traffic flows and reconfiguration of the topology > >> to adapt as needed. Or something else. The interactivity of the > >> service between a terminal at ARPA and a PDP-10 at ISI was > >> noticeably better than other users (e.g., me) experienced. > >> > >> On 8/28/21 11:51 AM, Steve Crocker wrote: > >>> Jack, > >>> > >>> You wrote: > >>> > >>> I recall many visits to ARPA on Wilson Blvd in Arlington, VA. > >>> There were > >>> terminals all over the building, pretty much all connected > >>> through the > >>> ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, > >>> CA. The > >>> technology of Packet Switching made it possible to keep a > >>> PDP-10 busy > >>> servicing all those Users and minimize the costs of > everything, > >>> including those expensive communications circuits. This was > >>> circa > >>> 1980. Users could efficiently share expensive communications, > and > >>> expensive and distant computers -- although I always thought > >>> ARPA's > >>> choice to use a computer 3000 miles away was probably more to > >>> demonstrate the viability of the ARPANET than because it was > >>> cheaper > >>> than using a computer somewhere near DC. > >>> > >>> > >>> The choice of USC-ISI in Marina del Rey was due to other > >>> factors. In 1972, with ARPA/IPTO (Larry Roberts) strong support, > >>> Keith Uncapher moved his research group out of RAND. Uncapher > >>> explored a couple of possibilities and found a comfortable > >>> institutional home with the University of Southern California > >>> (USC) with the proviso the institute would be off campus. > >>> Uncapher was solidly supportive of both ARPA/IPTO and of the > >>> Arpanet project. As the Arpanet grew, Roberts needed a place to > >>> have multiple PDP-10s providing service on the Arpanet. Not just > >>> for the staff at ARPA but for many others as well. Uncapher was > >>> cooperative and the rest followed easily. > >>> > >>> The fact that it demonstrated the viability of packet-switching > >>> over that distance was perhaps a bonus, but the same would have > >>> been true almost anywhere in the continental U.S. at that time. > >>> The more important factor was the quality of the relationship. > >>> One could imagine setting up a small farm of machines at various > >>> other universities, non-profits, or selected for profit companies > >>> or even some military bases. 
For each of these, cost, > >>> contracting rules, the ambitions of the principal investigator, > >>> and staff skill sets would have been the dominant concerns. > >>> > >>> Steve > >>> > >> > >> > >> -- > >> Please send any postal/overnight deliveries to: > >> Vint Cerf > >> 1435 Woodhurst Blvd > >> McLean, VA 22102 > >> 703-448-0965 > >> > >> until further notice > >> > >> > >> > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From casner at acm.org Sun Aug 29 10:09:37 2021 From: casner at acm.org (Stephen Casner) Date: Sun, 29 Aug 2021 10:09:37 -0700 (PDT) Subject: [ih] More topology In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> Message-ID: Jack, that map shows one hop from ARPA to USC, but the PDP10s were at ISI which is 10 miles and 2 or 3 IMPs from USC. -- Steve On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > Actually July 1981 -- see > http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, Noel!) > The experience I recall was being in the ARPANET NOC for some reason and > noticing the topology on the big map that covered one wall of the NOC. There > were 2 ARPANET nodes at that time labelled ISI, but I'm not sure where the > PDP-10s were attached. Still just historically curious how the decision was > made to configure that topology....but we'll probably never know. /Jack > > > On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > > A look at some ARPAnet maps available on the web shows that in 1982 it was > > four hops from ARPA to ISI, but by 1985 it was one hop. > > Alex McKenzie > > > > On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via > > Internet-history wrote: > > This is the second email from Jack mentioning a point-to-point line > > between the ARPA TIP and the ISI site. I don't believe that is an accurate > > statement of the ARPAnet topology. In January 1975 there were 5 hops > > between the 2 on the shortest path. In October 1975 there were 6. I don't > > believe it was ever one or two hops, but perhaps someone can find a network > > map that proves me wrong. > > Alex McKenzie > > > > On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > > Internet-history wrote: > > Sounds right. My experience was well after that early experimental > > period. The ARPANET was much bigger (1980ish) and the topology had > > evolved over the years. There was a direct 56K line (IIRC between > > ARPA-TIP and ISI) at that time. Lots of other circuits too, but in > > normal conditions ARPA<->ISI traffic flowed directly over that long-haul > > circuit. /Jack > > > > On 8/28/21 1:55 PM, Vint Cerf wrote: > > > Jack, the 4 node configuration had two paths between UCLA and SRI and > > > a two hop path to University of Utah. > > > We did a variety of tests to force alternate routing (by congesting > > > the first path). > > > I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > > > this effect. 
Of course, we also crashed the Arpanet with these early > > > experiments. > > > > > > v > > > > > > > > > On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > > wrote: > > > > > > Thanks, Steve. I hadn't heard the details of why ISI was > > > selected. I can believe that economics was probably a factor but > > > the people and organizational issues could have been the dominant > > > factors. > > > > > > IMHO, the "internet community" seems to often ignore non-technical > > > influences on historical events, preferring to view everything in > > > terms of RFCs, protocols, and such. I think the other influences > > > are an important part of the story - hence my "economic lens". > > > You just described a view through a manager's lens. > > > > > > /Jack > > > > > > PS - I always thought that the "ARPANET demo" aspect of that > > > ARPANET timeframe was suspect, especially after I noticed that the > > > ARPANET had been configured with a leased circuit directly between > > > the nearby IMPs to ISI and ARPA. So as a demo of "packet > > > switching", there wasn't much actual switching involved. The 2 > > > IMPs were more like multiplexors. > > > > > > I never heard whether that configuration was mandated by ARPA, or > > > BBN decided to put a line in as a way to keep the customer happy, > > > or if it just happened naturally as a result of the ongoing > > > measurement of traffic flows and reconfiguration of the topology > > > to adapt as needed. Or something else. The interactivity of the > > > service between a terminal at ARPA and a PDP-10 at ISI was > > > noticeably better than other users (e.g., me) experienced. > > > > > > On 8/28/21 11:51 AM, Steve Crocker wrote: > > > > Jack, > > > > > > > > You wrote: > > > > > > > > I recall many visits to ARPA on Wilson Blvd in Arlington, VA. > > > > There were > > > > terminals all over the building, pretty much all connected > > > > through the > > > > ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, > > > > CA. The > > > > technology of Packet Switching made it possible to keep a > > > > PDP-10 busy > > > > servicing all those Users and minimize the costs of everything, > > > > including those expensive communications circuits. This was > > > > circa > > > > 1980. Users could efficiently share expensive communications, > > > > and > > > > expensive and distant computers -- although I always thought > > > > ARPA's > > > > choice to use a computer 3000 miles away was probably more to > > > > demonstrate the viability of the ARPANET than because it was > > > > cheaper > > > > than using a computer somewhere near DC. > > > > > > > > > > > > The choice of USC-ISI in Marina del Rey was due to other > > > > factors. In 1972, with ARPA/IPTO (Larry Roberts) strong support, > > > > Keith Uncapher moved his research group out of RAND. Uncapher > > > > explored a couple of possibilities and found a comfortable > > > > institutional home with the University of Southern California > > > > (USC) with the proviso the institute would be off campus. > > > > Uncapher was solidly supportive of both ARPA/IPTO and of the > > > > Arpanet project. As the Arpanet grew, Roberts needed a place to > > > > have multiple PDP-10s providing service on the Arpanet. Not just > > > > for the staff at ARPA but for many others as well. Uncapher was > > > > cooperative and the rest followed easily. 
> > > >
> > > > The fact that it demonstrated the viability of packet-switching
> > > > over that distance was perhaps a bonus, but the same would have
> > > > been true almost anywhere in the continental U.S. at that time.
> > > > The more important factor was the quality of the relationship.
> > > > One could imagine setting up a small farm of machines at various
> > > > other universities, non-profits, or selected for profit companies
> > > > or even some military bases.  For each of these, cost,
> > > > contracting rules, the ambitions of the principal investigator,
> > > > and staff skill sets would have been the dominant concerns.
> > > >
> > > > Steve
> > > >
> > >
> > >
> > > --
> > > Please send any postal/overnight deliveries to:
> > > Vint Cerf
> > > 1435 Woodhurst Blvd
> > > McLean, VA 22102
> > > 703-448-0965
> > >
> > > until further notice

From jack at 3kitty.org  Sun Aug 29 12:54:27 2021
From: jack at 3kitty.org (Jack Haverty)
Date: Sun, 29 Aug 2021 12:54:27 -0700
Subject: [ih] More topology
In-Reply-To: 
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de>
 <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com>
 <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de>
 <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org>
 <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net>
 <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org>
 <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org>
 <86019613.672324.1630245830259@mail.yahoo.com>
 <395932638.701311.1630249339681@mail.yahoo.com>
Message-ID: <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org>

Thanks Steve.  I guess I was focussed only on the longhaul hops.  The
maps didn't show where host computers were attached.  At the time (1981)
the ARPANET consisted of several clusters of nodes (DC, Boston, LA, SF),
almost like an early form of Metropolitan Area Network (MAN), plus single
nodes scattered around the US and a satellite circuit to Europe.  The
"MAN" parts of the ARPANET were often richly connected, and the circuits
might have even been in the same room or building or campus.  So the
long-haul circuits were in some sense more important in their scarcity
and higher risk of problems from events such as marauding backhoes (we
called such network outages "backhoe fade").

While I still remember...here's a little Internet History.

The Internet, at the time in late 70s and early 80s, was in what I used
to call the "Fuzzy Peach" stage of its development.  In addition to
computers directly attached to an IMP, there were various kinds of
"local area networks", including things such as Packet Radio networks
and a few homegrown LANs, which provided connectivity in a small
geographical area.  Each of those was attached to an ARPANET IMP
somewhere close by, and the ARPANET provided all of the long-haul
communications.  The exception to that was the SATNET, which provided
connectivity across the Atlantic, with a US node (in West Virginia
IIRC), and a very active node in the UK.  So the ARPANET was the
"peach" and all of the local networks and computers in the US were the
"fuzz", with SATNET extending the Internet to Europe.

That topology had some implications on the early Internet behavior.

At the time, I was responsible for BBN's contract with ARPA in which one
of the tasks was "make the core Internet reliable 24x7".  That motivated
quite frequent interactions with the ARPANET NOC, especially since it
was literally right down the hall.
TCP/IP was in use at the time, but most of the long-haul traffic flows
were through the ARPANET.  With directly-connected computers at each
end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol
in use as the ARPANET TIPs became TACs.

However...  There's always a "however"...  The ARPANET itself already
implemented a lot of the functionality that TCP provided.  ARPANET
already provided reliable end-end byte streams, as well as flow control;
the IMPs would allow only 8 "messages" in transit between two endpoints,
and would physically block the computer from sending more than that.
So IP datagrams never got lost, or reordered, or duplicated, and never
had to be discarded or retransmitted.  TCP/IP could do such things too,
but in the "fuzzy peach" situation, it didn't have to do so.

The prominent exception to the "fuzzy peach" was transatlantic traffic,
which had to cross both the ARPANET and SATNET.  The gateway
interconnecting those two had to discard IP datagrams when they came in
faster than they could go out.  TCP would have to notice, retransmit,
and reorder things at the destination.

Peter Kirstein's crew at UCL were quite active in experimenting with the
early Internet, and their TCP/IP traffic had to actually do all of the
functions that the Fuzzy Peach so successfully hid from those directly
attached to it.  I think the experiences in that path motivated a lot
of the early thinking about algorithms for TCP behavior, as well as
gateway actions.

Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or
phone messages waiting for me every morning advising that "The Internet
Is Broken!", either from Europe directly or through ARPA.  One of the
first troubleshooting steps, after making sure the gateway was running,
was to see what was going on in the Fuzzy Peach which was so important
to the operation of the Internet.  Bob Hinden, Alan Sheltzer, and Mike
Brescia might remember more since they were usually on the front lines.

Much of the experimentation at the time involved interactions between
the UK crowd and some machine at ISI.  If the ARPANET was acting up,
the bandwidth and latency of those TCP/IP traffic flows could gyrate
wildly, and TCP/IP implementations didn't always respond well to such
things, especially since they didn't typically occur when you were just
using the Fuzzy Peach.

Result - "The Internet Is Broken".  That long-haul ARPA-ISI circuit was
an important part of the path from Europe to California.  If it was
"down", the path became 3 or more additional hops (IMP hops, not IP),
and became further loaded by additional traffic routing around the
break.  TCPs would timeout, retransmit, and make the problem worse
while their algorithms tried to adapt.

So that's probably what I was doing in the NOC when I noticed the
importance of that ARPA<->USC ARPANET circuit.

/Jack Haverty


On 8/29/21 10:09 AM, Stephen Casner wrote:
> Jack, that map shows one hop from ARPA to USC, but the PDP10s were at
> ISI which is 10 miles and 2 or 3 IMPs from USC.
>
> -- Steve
>
> On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote:
>
>> Actually July 1981 -- see
>> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, Noel!)
>> The experience I recall was being in the ARPANET NOC for some reason and
>> noticing the topology on the big map that covered one wall of the NOC.  There
>> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure where the
>> PDP-10s were attached.
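A minimal sketch of the contrast Jack describes above: the ARPANET's limit
of 8 messages in transit between two endpoints blocks the sending host (so
nothing is lost), while a gateway with a finite buffer simply drops excess
IP datagrams and leaves recovery to TCP.  This is not any historical code;
the window of 8 comes from Jack's description, but the gateway queue size,
the service rate, and all function names are illustrative assumptions.

from collections import deque

ARPANET_WINDOW = 8        # messages allowed in transit per host pair (from the message above)
GATEWAY_QUEUE_LIMIT = 4   # illustrative gateway buffer size, not a historical figure


def send_via_arpanet(messages):
    """Blocking model: at most ARPANET_WINDOW messages outstanding; the host
    waits for a RFNM (modeled here as delivery of the oldest outstanding
    message) before sending more.  Nothing is ever dropped or reordered."""
    in_transit = deque()
    delivered = []
    for m in messages:
        if len(in_transit) == ARPANET_WINDOW:
            delivered.append(in_transit.popleft())  # RFNM frees a slot
        in_transit.append(m)
    delivered.extend(in_transit)  # remaining messages drain normally
    return delivered


def send_via_gateway(messages, service_every=2):
    """Dropping model: datagrams arrive one per step, but the slower outgoing
    net takes only one every `service_every` steps; arriving at a full queue
    means a drop that the sending TCP must detect and retransmit."""
    queue, delivered, dropped = deque(), [], []
    for step, m in enumerate(messages):
        if len(queue) < GATEWAY_QUEUE_LIMIT:
            queue.append(m)
        else:
            dropped.append(m)  # loss: left for TCP to notice and resend
        if step % service_every == 0 and queue:
            delivered.append(queue.popleft())
    delivered.extend(queue)  # drain once the burst ends
    return delivered, dropped


if __name__ == "__main__":
    msgs = list(range(20))
    print(len(send_via_arpanet(msgs)), "of 20 delivered via the ARPANET model, none lost")
    ok, lost = send_via_gateway(msgs)
    print(len(ok), "of 20 delivered via the gateway model,", len(lost), "dropped for TCP to resend")

Running the sketch delivers all 20 messages through the blocking model but
drops several at the overloaded gateway, which is roughly the difference
between staying inside the "fuzzy peach" and crossing the ARPANET/SATNET
gateway described above.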
Still just historically curious how the decision was >> made to configure that topology....but we'll probably never know. /Jack >> >> >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: >>> A look at some ARPAnet maps available on the web shows that in 1982 it was >>> four hops from ARPA to ISI, but by 1985 it was one hop. >>> Alex McKenzie >>> >>> On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via >>> Internet-history wrote: >>> This is the second email from Jack mentioning a point-to-point line >>> between the ARPA TIP and the ISI site. I don't believe that is an accurate >>> statement of the ARPAnet topology. In January 1975 there were 5 hops >>> between the 2 on the shortest path. In October 1975 there were 6. I don't >>> believe it was ever one or two hops, but perhaps someone can find a network >>> map that proves me wrong. >>> Alex McKenzie >>> >>> On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via >>> Internet-history wrote: >>> Sounds right. My experience was well after that early experimental >>> period. The ARPANET was much bigger (1980ish) and the topology had >>> evolved over the years. There was a direct 56K line (IIRC between >>> ARPA-TIP and ISI) at that time. Lots of other circuits too, but in >>> normal conditions ARPA<->ISI traffic flowed directly over that long-haul >>> circuit. /Jack >>> >>> On 8/28/21 1:55 PM, Vint Cerf wrote: >>>> Jack, the 4 node configuration had two paths between UCLA and SRI and >>>> a two hop path to University of Utah. >>>> We did a variety of tests to force alternate routing (by congesting >>>> the first path). >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get >>>> this effect. Of course, we also crashed the Arpanet with these early >>>> experiments. >>>> >>>> v >>>> >>>> >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty >>> > wrote: >>>> >>>> Thanks, Steve. I hadn't heard the details of why ISI was >>>> selected. I can believe that economics was probably a factor but >>>> the people and organizational issues could have been the dominant >>>> factors. >>>> >>>> IMHO, the "internet community" seems to often ignore non-technical >>>> influences on historical events, preferring to view everything in >>>> terms of RFCs, protocols, and such. I think the other influences >>>> are an important part of the story - hence my "economic lens". >>>> You just described a view through a manager's lens. >>>> >>>> /Jack >>>> >>>> PS - I always thought that the "ARPANET demo" aspect of that >>>> ARPANET timeframe was suspect, especially after I noticed that the >>>> ARPANET had been configured with a leased circuit directly between >>>> the nearby IMPs to ISI and ARPA. So as a demo of "packet >>>> switching", there wasn't much actual switching involved. The 2 >>>> IMPs were more like multiplexors. >>>> >>>> I never heard whether that configuration was mandated by ARPA, or >>>> BBN decided to put a line in as a way to keep the customer happy, >>>> or if it just happened naturally as a result of the ongoing >>>> measurement of traffic flows and reconfiguration of the topology >>>> to adapt as needed. Or something else. The interactivity of the >>>> service between a terminal at ARPA and a PDP-10 at ISI was >>>> noticeably better than other users (e.g., me) experienced. >>>> >>>> On 8/28/21 11:51 AM, Steve Crocker wrote: >>>>> Jack, >>>>> >>>>> You wrote: >>>>> >>>>> I recall many visits to ARPA on Wilson Blvd in Arlington, VA. 
>>>>> There were >>>>> terminals all over the building, pretty much all connected >>>>> through the >>>>> ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >>>>> CA. The >>>>> technology of Packet Switching made it possible to keep a >>>>> PDP-10 busy >>>>> servicing all those Users and minimize the costs of everything, >>>>> including those expensive communications circuits. This was >>>>> circa >>>>> 1980. Users could efficiently share expensive communications, >>>>> and >>>>> expensive and distant computers -- although I always thought >>>>> ARPA's >>>>> choice to use a computer 3000 miles away was probably more to >>>>> demonstrate the viability of the ARPANET than because it was >>>>> cheaper >>>>> than using a computer somewhere near DC. >>>>> >>>>> >>>>> The choice of USC-ISI in Marina del Rey was due to other >>>>> factors. In 1972, with ARPA/IPTO (Larry Roberts) strong support, >>>>> Keith Uncapher moved his research group out of RAND. Uncapher >>>>> explored a couple of possibilities and found a comfortable >>>>> institutional home with the University of Southern California >>>>> (USC) with the proviso the institute would be off campus. >>>>> Uncapher was solidly supportive of both ARPA/IPTO and of the >>>>> Arpanet project. As the Arpanet grew, Roberts needed a place to >>>>> have multiple PDP-10s providing service on the Arpanet. Not just >>>>> for the staff at ARPA but for many others as well. Uncapher was >>>>> cooperative and the rest followed easily. >>>>> >>>>> The fact that it demonstrated the viability of packet-switching >>>>> over that distance was perhaps a bonus, but the same would have >>>>> been true almost anywhere in the continental U.S. at that time. >>>>> The more important factor was the quality of the relationship. >>>>> One could imagine setting up a small farm of machines at various >>>>> other universities, non-profits, or selected for profit companies >>>>> or even some military bases. For each of these, cost, >>>>> contracting rules, the ambitions of the principal investigator, >>>>> and staff skill sets would have been the dominant concerns. >>>>> >>>>> Steve >>>>> >>>> >>>> -- >>>> Please send any postal/overnight deliveries to: >>>> Vint Cerf >>>> 1435 Woodhurst Blvd >>>> McLean, VA 22102 >>>> 703-448-0965 >>>> >>>> until further notice From b_a_denny at yahoo.com Sun Aug 29 14:38:13 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sun, 29 Aug 2021 21:38:13 +0000 (UTC) Subject: [ih] More topology In-Reply-To: <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> Message-ID: <1434236203.765878.1630273093446@mail.yahoo.com> There was also SRI's port expander which increased the number of host ports available on an IMP.?? You can find the SRI technical report (1080-140-1) on the web. The title is "The Arpanet Imp Port Expander". barbara On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via Internet-history wrote: Thanks Steve.?? 
I guess I was focussed only on the longhaul hops. The maps didn't show where host computers were attached. ? At the time (1981) the ARPANET consisted of several clusters of nodes (DC, Boston, LA, SF), almost like an early form of Metropolitan Area Network (MAN), plus single nodes scattered around the US and a satellite circuit to Europe.? The "MAN" parts of the ARPANET were often richly connected, and the circuits might have even been in the same room or building or campus.?? So the long-haul circuits were in some sense more important in their scarcity and higher risk of problems from events such as marauding backhoes (we called such network outages "backhoe fade"). While I still remember...here's a little Internet History. The Internet, at the time in late 70s and early 80s, was in what I used to call the "Fuzzy Peach" stage of its development.? In addition to computers directly attached to an IMP, there were various kinds of "local area networks", including things such as Packet Radio networks and a few homegrown LANs, which provided connectivity in a small geographical area.? Each of those was attached to an ARPANET IMP somewhere close by, and the ARPANET provided all of the long-haul communications.?? The exception to that was the SATNET, which provided connectivity across the Atlantic, with a US node (in West Virginia IIRC), and a very active node in the UK.?? So the ARPANET was the "peach" and all of the local networks and computers in the US were the "fuzz", with SATNET attaching extending the Internet to Europe. That topology had some implications on the early Internet behavior. At the time, I was responsible for BBN's contract with ARPA in which one of the tasks was "make the core Internet reliable 24x7".?? That motivated quite frequent interactions with the ARPANET NOC, especially since it was literally right down the hall. TCP/IP was in use at the time, but most of the long-haul traffic flows were through the ARPANET.? With directly-connected computers at each end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol in use as the ARPANET TIPs became TACs. However... ? There's always a "however"...? The ARPANET itself already implemented a lot of the functionality that TCP provided. ARPANET already provided reliable end-end byte streams, as well as flow control; the IMPs would allow only 8 "messages" in transit between two endpoints, and would physically block the computer from sending more than that.?? So IP datagrams never got lost, or reordered, or duplicated, and never had to be discarded or retransmitted.?? TCP/IP could do such things too, but in the "fuzzy peach" situation, it didn't have to do so. The prominent exception to the "fuzzy peach" was transatlantic traffic, which had to cross both the ARPANET and SATNET.?? The gateway interconnecting those two had to discard IP datagrams when they came in faster than they could go out.?? TCP would have to notice, retransmit, and reorder things at the destination. Peter Kirstein's crew at UCL were quite active in experimenting with the early Internet, and their TCP/IP traffic had to actually do all of the functions that the Fuzzy Peach so successfully hid from those directly attached to it.?? I think the experiences in that path motivated a lot of the early thinking about algorithms for TCP behavior, as well as gateway actions. 
Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or phone messages waiting for me every morning advising that "The Internet Is Broken!", either from Europe directly or through ARPA.? One of the first troubleshooting steps, after making sure the gateway was running, was to see what was going on in the Fuzzy Peach which was so important to the operation of the Internet.?? Bob Hinden, Alan Sheltzer, and Mike Brescia might remember more since they were usually on the front lines. Much of the experimentation at the time involved interactions between the UK crowd and some machine at ISI.?? If the ARPANET was acting up, the bandwidth and latency of those TCP/IP traffic flows could gyrate wildly, and TCP/IP implementations didn't always respond well to such things, especially since they didn't typically occur when you were just using the Fuzzy Peach. Result - "The Internet Is Broken".?? That long-haul ARPA-ISI circuit was an important part of the path from Europe to California.?? If it was "down", the path became 3 or more additional hops (IMP hops, not IP), and became further loaded by additional traffic routing around the break.?? TCPs would timeout, retransmit, and make the problem worse while their algorithms tried to adapt. So that's probably what I was doing in the NOC when I noticed the importance of that ARPA<->USC ARPANET circuit. /Jack Haverty On 8/29/21 10:09 AM, Stephen Casner wrote: > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at > ISI which is 10 miles and 2 or 3 IMPs from USC. > >? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? -- Steve > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > >> Actually July 1981 -- see >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, Noel!) >> The experience I recall was being in the ARPANET NOC for some reason and >> noticing the topology on the big map that covered one wall of the NOC.? There >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure where the >> PDP-10s were attached.? Still just historically curious how the decision was >> made to configure that topology....but we'll probably never know.? /Jack >> >> >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: >>>? ? A look at some ARPAnet maps available on the web shows that in 1982 it was >>> four hops from ARPA to ISI, but by 1985 it was one hop. >>> Alex McKenzie >>> >>>? ? ? On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via >>> Internet-history wrote: >>>? ? ? This is the second email from Jack mentioning a point-to-point line >>> between the ARPA TIP and the ISI site.? I don't believe that is an accurate >>> statement of the ARPAnet topology.? In January 1975 there were 5 hops >>> between the 2 on the shortest path. In October 1975 there were 6.? I don't >>> believe it was ever one or two hops, but perhaps someone can find a network >>> map that proves me wrong. >>> Alex McKenzie >>> >>>? ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via >>> Internet-history wrote: >>>? ? ? Sounds right.? My experience was well after that early experimental >>> period.? The ARPANET was much bigger (1980ish) and the topology had >>> evolved over the years.? There was a direct 56K line (IIRC between >>> ARPA-TIP and ISI) at that time.? Lots of other circuits too, but in >>> normal conditions ARPA<->ISI traffic flowed directly over that long-haul >>> circuit.? 
/Jack >>> >>> On 8/28/21 1:55 PM, Vint Cerf wrote: >>>> Jack, the 4 node configuration had two paths between UCLA and SRI and >>>> a two hop path to University of Utah. >>>> We did a variety of tests to force alternate routing (by congesting >>>> the first path). >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get >>>> this effect. Of course, we also crashed the Arpanet with these early >>>> experiments. >>>> >>>> v >>>> >>>> >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty >>> > wrote: >>>> >>>>? ? ? Thanks, Steve.? I hadn't heard the details of why ISI was >>>>? ? ? selected.? I can believe that economics was probably a factor but >>>>? ? ? the people and organizational issues could have been the dominant >>>>? ? ? factors. >>>> >>>>? ? ? IMHO, the "internet community" seems to often ignore non-technical >>>>? ? ? influences on historical events, preferring to view everything in >>>>? ? ? terms of RFCs, protocols, and such.? I think the other influences >>>>? ? ? are an important part of the story - hence my "economic lens". >>>>? ? ? You just described a view through a manager's lens. >>>> >>>>? ? ? /Jack >>>> >>>>? ? ? PS - I always thought that the "ARPANET demo" aspect of that >>>>? ? ? ARPANET timeframe was suspect, especially after I noticed that the >>>>? ? ? ARPANET had been configured with a leased circuit directly between >>>>? ? ? the nearby IMPs to ISI and ARPA.? So as a demo of "packet >>>>? ? ? switching", there wasn't much actual switching involved.? The 2 >>>>? ? ? IMPs were more like multiplexors. >>>> >>>>? ? ? I never heard whether that configuration was mandated by ARPA, or >>>>? ? ? BBN decided to put a line in as a way to keep the customer happy, >>>>? ? ? or if it just happened naturally as a result of the ongoing >>>>? ? ? measurement of traffic flows and reconfiguration of the topology >>>>? ? ? to adapt as needed.? Or something else.? The interactivity of the >>>>? ? ? service between a terminal at ARPA and a PDP-10 at ISI was >>>>? ? ? noticeably better than other users (e.g., me) experienced. >>>> >>>>? ? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >>>>>? ? ? Jack, >>>>> >>>>>? ? ? You wrote: >>>>> >>>>>? ? ? ? ? I recall many visits to ARPA on Wilson Blvd in Arlington, VA. >>>>>? ? ? ? ? There were >>>>>? ? ? ? ? terminals all over the building, pretty much all connected >>>>>? ? ? ? ? through the >>>>>? ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine Del Rey, >>>>>? ? ? ? ? CA.? The >>>>>? ? ? ? ? technology of Packet Switching made it possible to keep a >>>>>? ? ? ? ? PDP-10 busy >>>>>? ? ? ? ? servicing all those Users and minimize the costs of everything, >>>>>? ? ? ? ? including those expensive communications circuits.? This was >>>>>? ? ? ? ? circa >>>>>? ? ? ? ? 1980. Users could efficiently share expensive communications, >>>>> and >>>>>? ? ? ? ? expensive and distant computers -- although I always thought >>>>>? ? ? ? ? ARPA's >>>>>? ? ? ? ? choice to use a computer 3000 miles away was probably more to >>>>>? ? ? ? ? demonstrate the viability of the ARPANET than because it was >>>>>? ? ? ? ? cheaper >>>>>? ? ? ? ? than using a computer somewhere near DC. >>>>> >>>>> >>>>>? ? ? The choice of USC-ISI in Marina del Rey was due to other >>>>>? ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong support, >>>>>? ? ? Keith Uncapher moved his research group out of RAND.? Uncapher >>>>>? ? ? explored a couple of possibilities and found a comfortable >>>>>? ? ? 
institutional home with the University of Southern California
>>>>>      (USC) with the proviso the institute would be off campus.
>>>>>      Uncapher was solidly supportive of both ARPA/IPTO and of the
>>>>>      Arpanet project.  As the Arpanet grew, Roberts needed a place to
>>>>>      have multiple PDP-10s providing service on the Arpanet.  Not just
>>>>>      for the staff at ARPA but for many others as well.  Uncapher was
>>>>>      cooperative and the rest followed easily.
>>>>>
>>>>>      The fact that it demonstrated the viability of packet-switching
>>>>>      over that distance was perhaps a bonus, but the same would have
>>>>>      been true almost anywhere in the continental U.S. at that time.
>>>>>      The more important factor was the quality of the relationship.
>>>>>      One could imagine setting up a small farm of machines at various
>>>>>      other universities, non-profits, or selected for profit companies
>>>>>      or even some military bases.  For each of these, cost,
>>>>>      contracting rules, the ambitions of the principal investigator,
>>>>>      and staff skill sets would have been the dominant concerns.
>>>>>
>>>>>      Steve
>>>>>
>>>>
>>>> --
>>>> Please send any postal/overnight deliveries to:
>>>> Vint Cerf
>>>> 1435 Woodhurst Blvd
>>>> McLean, VA 22102
>>>> 703-448-0965
>>>>
>>>> until further notice
-- 
Internet-history mailing list
Internet-history at elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history

From dhc at dcrocker.net  Sun Aug 29 14:46:00 2021
From: dhc at dcrocker.net (Dave Crocker)
Date: Sun, 29 Aug 2021 14:46:00 -0700
Subject: [ih] More topology
In-Reply-To: <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org>
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
 <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com>
 <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de>
 <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org>
 <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net>
 <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org>
 <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org>
 <86019613.672324.1630245830259@mail.yahoo.com>
 <395932638.701311.1630249339681@mail.yahoo.com>
 <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org>
Message-ID: <34eda7fd-e934-36ce-74ec-dc02c1958ed8@dcrocker.net>

On 8/29/2021 12:54 PM, Jack Haverty via Internet-history wrote:
> The prominent exception to the "fuzzy peach" was transatlantic traffic,
> which had to cross both the ARPANET and SATNET.  The gateway
> interconnecting those two had to discard IP datagrams when they came in
> faster than they could go out.  TCP would have to notice, retransmit,
> and reorder things at the destination.

True gatewaying -- exchanges between systems with different semantics --
seems to be quite a good way of testing assumptions and overall
robustness.

For email, getting a message from one type of mail service to a recipient
in another is not the interesting test.  Whether the recipient can
normally and successfully use their Reply command, back to the original
author is.
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jack at 3kitty.org Sun Aug 29 19:29:59 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 29 Aug 2021 19:29:59 -0700 Subject: [ih] More topology In-Reply-To: <1434236203.765878.1630273093446@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> Message-ID: <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> Thanks Barbara -- yes, the port Expander was one of the things I called "homegrown LANs".? I never did learn how the PE handled RFNMs, in particular how it interacted with its associated NCP host that it was "stealing" RFNMs from. /jack On 8/29/21 2:38 PM, Barbara Denny wrote: > There was also SRI's port expander which increased the number of host > ports available on an IMP. > > You can find the SRI technical report (1080-140-1) on the web. The > title is "The Arpanet Imp Port Expander". > > barbara > > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via > Internet-history wrote: > > > Thanks Steve.?? I guess I was focussed only on the longhaul hops. The > maps didn't show where host computers were attached. At the time > (1981) the ARPANET consisted of several clusters of nodes (DC, Boston, > LA, SF), almost like an early form of Metropolitan Area Network (MAN), > plus single nodes scattered around the US and a satellite circuit to > Europe.? The "MAN" parts of the ARPANET were often richly connected, and > the circuits might have even been in the same room or building or > campus.?? So the long-haul circuits were in some sense more important in > their scarcity and higher risk of problems from events such as marauding > backhoes (we called such network outages "backhoe fade"). > > While I still remember...here's a little Internet History. > > The Internet, at the time in late 70s and early 80s, was in what I used > to call the "Fuzzy Peach" stage of its development.? In addition to > computers directly attached to an IMP, there were various kinds of > "local area networks", including things such as Packet Radio networks > and a few homegrown LANs, which provided connectivity in a small > geographical area.? Each of those was attached to an ARPANET IMP > somewhere close by, and the ARPANET provided all of the long-haul > communications.?? The exception to that was the SATNET, which provided > connectivity across the Atlantic, with a US node (in West Virginia > IIRC), and a very active node in the UK.?? So the ARPANET was the > "peach" and all of the local networks and computers in the US were the > "fuzz", with SATNET attaching extending the Internet to Europe. > > That topology had some implications on the early Internet behavior. > > At the time, I was responsible for BBN's contract with ARPA in which one > of the tasks was "make the core Internet reliable 24x7".?? That > motivated quite frequent interactions with the ARPANET NOC, especially > since it was literally right down the hall. > > TCP/IP was in use at the time, but most of the long-haul traffic flows > were through the ARPANET.? 
With directly-connected computers at each > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol > in use as the ARPANET TIPs became TACs. > > However... ? There's always a "however"...? The ARPANET itself already > implemented a lot of the functionality that TCP provided. ARPANET > already provided reliable end-end byte streams, as well as flow control; > the IMPs would allow only 8 "messages" in transit between two endpoints, > and would physically block the computer from sending more than that. > So IP datagrams never got lost, or reordered, or duplicated, and never > had to be discarded or retransmitted.?? TCP/IP could do such things too, > but in the "fuzzy peach" situation, it didn't have to do so. > > The prominent exception to the "fuzzy peach" was transatlantic traffic, > which had to cross both the ARPANET and SATNET.?? The gateway > interconnecting those two had to discard IP datagrams when they came in > faster than they could go out.?? TCP would have to notice, retransmit, > and reorder things at the destination. > > Peter Kirstein's crew at UCL were quite active in experimenting with the > early Internet, and their TCP/IP traffic had to actually do all of the > functions that the Fuzzy Peach so successfully hid from those directly > attached to it.?? I think the experiences in that path motivated a lot > of the early thinking about algorithms for TCP behavior, as well as > gateway actions. > > Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or > phone messages waiting for me every morning advising that "The Internet > Is Broken!", either from Europe directly or through ARPA.? One of the > first troubleshooting steps, after making sure the gateway was running, > was to see what was going on in the Fuzzy Peach which was so important > to the operation of the Internet.?? Bob Hinden, Alan Sheltzer, and Mike > Brescia might remember more since they were usually on the front lines. > > Much of the experimentation at the time involved interactions between > the UK crowd and some machine at ISI.?? If the ARPANET was acting up, > the bandwidth and latency of those TCP/IP traffic flows could gyrate > wildly, and TCP/IP implementations didn't always respond well to such > things, especially since they didn't typically occur when you were just > using the Fuzzy Peach. > > Result - "The Internet Is Broken".?? That long-haul ARPA-ISI circuit was > an important part of the path from Europe to California.?? If it was > "down", the path became 3 or more additional hops (IMP hops, not IP), > and became further loaded by additional traffic routing around the > break.?? TCPs would timeout, retransmit, and make the problem worse > while their algorithms tried to adapt. > > So that's probably what I was doing in the NOC when I noticed the > importance of that ARPA<->USC ARPANET circuit. > > /Jack Haverty > > > On 8/29/21 10:09 AM, Stephen Casner wrote: > > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at > > ISI which is 10 miles and 2 or 3 IMPs from USC. > > > > ? ? ? ? -- Steve > > > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > > > >> Actually July 1981 -- see > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg > (thanks, > Noel!) > >> The experience I recall was being in the ARPANET NOC for some > reason and > >> noticing the topology on the big map that covered one wall of the > NOC.? There > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure > where the > >> PDP-10s were attached.? 
Still just historically curious how the > decision was > >> made to configure that topology....but we'll probably never know.? > /Jack > >> > >> > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > >>>? ? A look at some ARPAnet maps available on the web shows that in > 1982 it was > >>> four hops from ARPA to ISI, but by 1985 it was one hop. > >>> Alex McKenzie > >>> > >>>? ? ? On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via > >>> Internet-history > wrote: > >>>? ? ? This is the second email from Jack mentioning a > point-to-point line > >>> between the ARPA TIP and the ISI site.? I don't believe that is an > accurate > >>> statement of the ARPAnet topology.? In January 1975 there were 5 hops > >>> between the 2 on the shortest path. In October 1975 there were 6.? > I don't > >>> believe it was ever one or two hops, but perhaps someone can find > a network > >>> map that proves me wrong. > >>> Alex McKenzie > >>> > >>>? ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > >>> Internet-history > wrote: > >>>? ? ? Sounds right.? My experience was well after that early > experimental > >>> period.? The ARPANET was much bigger (1980ish) and the topology had > >>> evolved over the years.? There was a direct 56K line (IIRC between > >>> ARPA-TIP and ISI) at that time.? Lots of other circuits too, but in > >>> normal conditions ARPA<->ISI traffic flowed directly over that > long-haul > >>> circuit.? /Jack > >>> > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: > >>>> Jack, the 4 node configuration had two paths between UCLA and SRI and > >>>> a two hop path to University of Utah. > >>>> We did a variety of tests to force alternate routing (by congesting > >>>> the first path). > >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > >>>> this effect. Of course, we also crashed the Arpanet with these early > >>>> experiments. > >>>> > >>>> v > >>>> > >>>> > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > >>>> >> wrote: > >>>> > >>>>? ? ? Thanks, Steve.? I hadn't heard the details of why ISI was > >>>>? ? ? selected.? I can believe that economics was probably a > factor but > >>>>? ? ? the people and organizational issues could have been the > dominant > >>>>? ? ? factors. > >>>> > >>>>? ? ? IMHO, the "internet community" seems to often ignore > non-technical > >>>>? ? ? influences on historical events, preferring to view > everything in > >>>>? ? ? terms of RFCs, protocols, and such.? I think the other > influences > >>>>? ? ? are an important part of the story - hence my "economic lens". > >>>>? ? ? You just described a view through a manager's lens. > >>>> > >>>>? ? ? /Jack > >>>> > >>>>? ? ? PS - I always thought that the "ARPANET demo" aspect of that > >>>>? ? ? ARPANET timeframe was suspect, especially after I noticed > that the > >>>>? ? ? ARPANET had been configured with a leased circuit directly > between > >>>>? ? ? the nearby IMPs to ISI and ARPA.? So as a demo of "packet > >>>>? ? ? switching", there wasn't much actual switching involved.? The 2 > >>>>? ? ? IMPs were more like multiplexors. > >>>> > >>>>? ? ? I never heard whether that configuration was mandated by > ARPA, or > >>>>? ? ? BBN decided to put a line in as a way to keep the customer > happy, > >>>>? ? ? or if it just happened naturally as a result of the ongoing > >>>>? ? ? measurement of traffic flows and reconfiguration of the topology > >>>>? ? ? to adapt as needed.? Or something else.? The interactivity > of the > >>>>? ? ? 
service between a terminal at ARPA and a PDP-10 at ISI was > >>>>? ? ? noticeably better than other users (e.g., me) experienced. > >>>> > >>>>? ? ? On 8/28/21 11:51 AM, Steve Crocker wrote: > >>>>>? ? ? Jack, > >>>>> > >>>>>? ? ? You wrote: > >>>>> > >>>>>? ? ? ? ? I recall many visits to ARPA on Wilson Blvd in > Arlington, VA. > >>>>>? ? ? ? ? There were > >>>>>? ? ? ? ? terminals all over the building, pretty much all connected > >>>>>? ? ? ? ? through the > >>>>>? ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine > Del Rey, > >>>>>? ? ? ? ? CA.? The > >>>>>? ? ? ? ? technology of Packet Switching made it possible to keep a > >>>>>? ? ? ? ? PDP-10 busy > >>>>>? ? ? ? ? servicing all those Users and minimize the costs of > everything, > >>>>>? ? ? ? ? including those expensive communications circuits.? > This was > >>>>>? ? ? ? ? circa > >>>>>? ? ? ? ? 1980. Users could efficiently share expensive > communications, > >>>>> and > >>>>>? ? ? ? ? expensive and distant computers -- although I always > thought > >>>>>? ? ? ? ? ARPA's > >>>>>? ? ? ? ? choice to use a computer 3000 miles away was probably > more to > >>>>>? ? ? ? ? demonstrate the viability of the ARPANET than because > it was > >>>>>? ? ? ? ? cheaper > >>>>>? ? ? ? ? than using a computer somewhere near DC. > >>>>> > >>>>> > >>>>>? ? ? The choice of USC-ISI in Marina del Rey was due to other > >>>>>? ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong > support, > >>>>>? ? ? Keith Uncapher moved his research group out of RAND.? Uncapher > >>>>>? ? ? explored a couple of possibilities and found a comfortable > >>>>>? ? ? institutional home with the University of Southern California > >>>>>? ? ? (USC) with the proviso the institute would be off campus. > >>>>>? ? ? Uncapher was solidly supportive of both ARPA/IPTO and of the > >>>>>? ? ? Arpanet project.? As the Arpanet grew, Roberts needed a > place to > >>>>>? ? ? have multiple PDP-10s providing service on the Arpanet.? > Not just > >>>>>? ? ? for the staff at ARPA but for many others as well.? > Uncapher was > >>>>>? ? ? cooperative and the rest followed easily. > >>>>> > >>>>>? ? ? The fact that it demonstrated the viability of packet-switching > >>>>>? ? ? over that distance was perhaps a bonus, but the same would have > >>>>>? ? ? been true almost anywhere in the continental U.S. at that time. > >>>>>? ? ? The more important factor was the quality of the relationship. > >>>>>? ? ? One could imagine setting up a small farm of machines at > various > >>>>>? ? ? other universities, non-profits, or selected for profit > companies > >>>>>? ? ? or even some military bases.? For each of these, cost, > >>>>>? ? ? contracting rules, the ambitions of the principal investigator, > >>>>>? ? ? and staff skill sets would have been the dominant concerns. > >>>>> > >>>>>? ? ? 
Steve > >>>>> > >>>> > >>>> -- > >>>> Please send any postal/overnight deliveries to: > >>>> Vint Cerf > >>>> 1435 Woodhurst Blvd > >>>> McLean, VA 22102 > >>>> 703-448-0965 > >>>> > >>>> until further notice > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From vint at google.com Mon Aug 30 03:54:12 2021 From: vint at google.com (Vint Cerf) Date: Mon, 30 Aug 2021 06:54:12 -0400 Subject: [ih] More topology In-Reply-To: <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> Message-ID: two tcp connections could multiplex on a given IMP-IMP link - one RFNM per IP packet regardless of the TCP layer "connection" v On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Thanks Barbara -- yes, the port Expander was one of the things I called > "homegrown LANs". I never did learn how the PE handled RFNMs, in > particular how it interacted with its associated NCP host that it was > "stealing" RFNMs from. > /jack > > On 8/29/21 2:38 PM, Barbara Denny wrote: > > There was also SRI's port expander which increased the number of host > > ports available on an IMP. > > > > You can find the SRI technical report (1080-140-1) on the web. The > > title is "The Arpanet Imp Port Expander". > > > > barbara > > > > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via > > Internet-history wrote: > > > > > > Thanks Steve. I guess I was focussed only on the longhaul hops. The > > maps didn't show where host computers were attached. At the time > > (1981) the ARPANET consisted of several clusters of nodes (DC, Boston, > > LA, SF), almost like an early form of Metropolitan Area Network (MAN), > > plus single nodes scattered around the US and a satellite circuit to > > Europe. The "MAN" parts of the ARPANET were often richly connected, and > > the circuits might have even been in the same room or building or > > campus. So the long-haul circuits were in some sense more important in > > their scarcity and higher risk of problems from events such as marauding > > backhoes (we called such network outages "backhoe fade"). > > > > While I still remember...here's a little Internet History. > > > > The Internet, at the time in late 70s and early 80s, was in what I used > > to call the "Fuzzy Peach" stage of its development. In addition to > > computers directly attached to an IMP, there were various kinds of > > "local area networks", including things such as Packet Radio networks > > and a few homegrown LANs, which provided connectivity in a small > > geographical area. Each of those was attached to an ARPANET IMP > > somewhere close by, and the ARPANET provided all of the long-haul > > communications. The exception to that was the SATNET, which provided > > connectivity across the Atlantic, with a US node (in West Virginia > > IIRC), and a very active node in the UK. 
So the ARPANET was the > > "peach" and all of the local networks and computers in the US were the > > "fuzz", with SATNET attaching extending the Internet to Europe. > > > > That topology had some implications on the early Internet behavior. > > > > At the time, I was responsible for BBN's contract with ARPA in which one > > of the tasks was "make the core Internet reliable 24x7". That > > motivated quite frequent interactions with the ARPANET NOC, especially > > since it was literally right down the hall. > > > > TCP/IP was in use at the time, but most of the long-haul traffic flows > > were through the ARPANET. With directly-connected computers at each > > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol > > in use as the ARPANET TIPs became TACs. > > > > However... There's always a "however"... The ARPANET itself already > > implemented a lot of the functionality that TCP provided. ARPANET > > already provided reliable end-end byte streams, as well as flow control; > > the IMPs would allow only 8 "messages" in transit between two endpoints, > > and would physically block the computer from sending more than that. > > So IP datagrams never got lost, or reordered, or duplicated, and never > > had to be discarded or retransmitted. TCP/IP could do such things too, > > but in the "fuzzy peach" situation, it didn't have to do so. > > > > The prominent exception to the "fuzzy peach" was transatlantic traffic, > > which had to cross both the ARPANET and SATNET. The gateway > > interconnecting those two had to discard IP datagrams when they came in > > faster than they could go out. TCP would have to notice, retransmit, > > and reorder things at the destination. > > > > Peter Kirstein's crew at UCL were quite active in experimenting with the > > early Internet, and their TCP/IP traffic had to actually do all of the > > functions that the Fuzzy Peach so successfully hid from those directly > > attached to it. I think the experiences in that path motivated a lot > > of the early thinking about algorithms for TCP behavior, as well as > > gateway actions. > > > > Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or > > phone messages waiting for me every morning advising that "The Internet > > Is Broken!", either from Europe directly or through ARPA. One of the > > first troubleshooting steps, after making sure the gateway was running, > > was to see what was going on in the Fuzzy Peach which was so important > > to the operation of the Internet. Bob Hinden, Alan Sheltzer, and Mike > > Brescia might remember more since they were usually on the front lines. > > > > Much of the experimentation at the time involved interactions between > > the UK crowd and some machine at ISI. If the ARPANET was acting up, > > the bandwidth and latency of those TCP/IP traffic flows could gyrate > > wildly, and TCP/IP implementations didn't always respond well to such > > things, especially since they didn't typically occur when you were just > > using the Fuzzy Peach. > > > > Result - "The Internet Is Broken". That long-haul ARPA-ISI circuit was > > an important part of the path from Europe to California. If it was > > "down", the path became 3 or more additional hops (IMP hops, not IP), > > and became further loaded by additional traffic routing around the > > break. TCPs would timeout, retransmit, and make the problem worse > > while their algorithms tried to adapt. 
> > > > So that's probably what I was doing in the NOC when I noticed the > > importance of that ARPA<->USC ARPANET circuit. > > > > /Jack Haverty > > > > > > On 8/29/21 10:09 AM, Stephen Casner wrote: > > > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at > > > ISI which is 10 miles and 2 or 3 IMPs from USC. > > > > > > -- Steve > > > > > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > > > > > >> Actually July 1981 -- see > > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg > > (thanks, > > Noel!) > > >> The experience I recall was being in the ARPANET NOC for some > > reason and > > >> noticing the topology on the big map that covered one wall of the > > NOC. There > > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure > > where the > > >> PDP-10s were attached. Still just historically curious how the > > decision was > > >> made to configure that topology....but we'll probably never know. > > /Jack > > >> > > >> > > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > > >>> A look at some ARPAnet maps available on the web shows that in > > 1982 it was > > >>> four hops from ARPA to ISI, but by 1985 it was one hop. > > >>> Alex McKenzie > > >>> > > >>> On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via > > >>> Internet-history > > wrote: > > >>> This is the second email from Jack mentioning a > > point-to-point line > > >>> between the ARPA TIP and the ISI site. I don't believe that is an > > accurate > > >>> statement of the ARPAnet topology. In January 1975 there were 5 hops > > >>> between the 2 on the shortest path. In October 1975 there were 6. > > I don't > > >>> believe it was ever one or two hops, but perhaps someone can find > > a network > > >>> map that proves me wrong. > > >>> Alex McKenzie > > >>> > > >>> On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > > >>> Internet-history > > wrote: > > >>> Sounds right. My experience was well after that early > > experimental > > >>> period. The ARPANET was much bigger (1980ish) and the topology had > > >>> evolved over the years. There was a direct 56K line (IIRC between > > >>> ARPA-TIP and ISI) at that time. Lots of other circuits too, but in > > >>> normal conditions ARPA<->ISI traffic flowed directly over that > > long-haul > > >>> circuit. /Jack > > >>> > > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: > > >>>> Jack, the 4 node configuration had two paths between UCLA and SRI > and > > >>>> a two hop path to University of Utah. > > >>>> We did a variety of tests to force alternate routing (by congesting > > >>>> the first path). > > >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get > > >>>> this effect. Of course, we also crashed the Arpanet with these early > > >>>> experiments. > > >>>> > > >>>> v > > >>>> > > >>>> > > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > > >>>> >> wrote: > > >>>> > > >>>> Thanks, Steve. I hadn't heard the details of why ISI was > > >>>> selected. I can believe that economics was probably a > > factor but > > >>>> the people and organizational issues could have been the > > dominant > > >>>> factors. > > >>>> > > >>>> IMHO, the "internet community" seems to often ignore > > non-technical > > >>>> influences on historical events, preferring to view > > everything in > > >>>> terms of RFCs, protocols, and such. I think the other > > influences > > >>>> are an important part of the story - hence my "economic lens". 
> > >>>> You just described a view through a manager's lens. > > >>>> > > >>>> /Jack > > >>>> > > >>>> PS - I always thought that the "ARPANET demo" aspect of that > > >>>> ARPANET timeframe was suspect, especially after I noticed > > that the > > >>>> ARPANET had been configured with a leased circuit directly > > between > > >>>> the nearby IMPs to ISI and ARPA. So as a demo of "packet > > >>>> switching", there wasn't much actual switching involved. The 2 > > >>>> IMPs were more like multiplexors. > > >>>> > > >>>> I never heard whether that configuration was mandated by > > ARPA, or > > >>>> BBN decided to put a line in as a way to keep the customer > > happy, > > >>>> or if it just happened naturally as a result of the ongoing > > >>>> measurement of traffic flows and reconfiguration of the > topology > > >>>> to adapt as needed. Or something else. The interactivity > > of the > > >>>> service between a terminal at ARPA and a PDP-10 at ISI was > > >>>> noticeably better than other users (e.g., me) experienced. > > >>>> > > >>>> On 8/28/21 11:51 AM, Steve Crocker wrote: > > >>>>> Jack, > > >>>>> > > >>>>> You wrote: > > >>>>> > > >>>>> I recall many visits to ARPA on Wilson Blvd in > > Arlington, VA. > > >>>>> There were > > >>>>> terminals all over the building, pretty much all connected > > >>>>> through the > > >>>>> ARPANET to a PDP-10 3000 miles away at USC in Marine > > Del Rey, > > >>>>> CA. The > > >>>>> technology of Packet Switching made it possible to keep a > > >>>>> PDP-10 busy > > >>>>> servicing all those Users and minimize the costs of > > everything, > > >>>>> including those expensive communications circuits. > > This was > > >>>>> circa > > >>>>> 1980. Users could efficiently share expensive > > communications, > > >>>>> and > > >>>>> expensive and distant computers -- although I always > > thought > > >>>>> ARPA's > > >>>>> choice to use a computer 3000 miles away was probably > > more to > > >>>>> demonstrate the viability of the ARPANET than because > > it was > > >>>>> cheaper > > >>>>> than using a computer somewhere near DC. > > >>>>> > > >>>>> > > >>>>> The choice of USC-ISI in Marina del Rey was due to other > > >>>>> factors. In 1972, with ARPA/IPTO (Larry Roberts) strong > > support, > > >>>>> Keith Uncapher moved his research group out of RAND. Uncapher > > >>>>> explored a couple of possibilities and found a comfortable > > >>>>> institutional home with the University of Southern California > > >>>>> (USC) with the proviso the institute would be off campus. > > >>>>> Uncapher was solidly supportive of both ARPA/IPTO and of the > > >>>>> Arpanet project. As the Arpanet grew, Roberts needed a > > place to > > >>>>> have multiple PDP-10s providing service on the Arpanet. > > Not just > > >>>>> for the staff at ARPA but for many others as well. > > Uncapher was > > >>>>> cooperative and the rest followed easily. > > >>>>> > > >>>>> The fact that it demonstrated the viability of > packet-switching > > >>>>> over that distance was perhaps a bonus, but the same would > have > > >>>>> been true almost anywhere in the continental U.S. at that > time. > > >>>>> The more important factor was the quality of the relationship. > > >>>>> One could imagine setting up a small farm of machines at > > various > > >>>>> other universities, non-profits, or selected for profit > > companies > > >>>>> or even some military bases. 
For each of these, cost,
> >>>>> contracting rules, the ambitions of the principal investigator,
> >>>>> and staff skill sets would have been the dominant concerns.
> >>>>>
> >>>>> Steve
> >>>>>
> >>>>
> >>>> --
> >>>> Please send any postal/overnight deliveries to:
> >>>> Vint Cerf
> >>>> 1435 Woodhurst Blvd
> >>>> McLean, VA 22102
> >>>> 703-448-0965
> >>>>
> >>>> until further notice
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

--
Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd
McLean, VA 22102
703-448-0965

until further notice

From jack at 3kitty.org  Mon Aug 30 11:01:38 2021
From: jack at 3kitty.org (Jack Haverty)
Date: Mon, 30 Aug 2021 11:01:38 -0700
Subject: [ih] More topology
In-Reply-To:
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org>
Message-ID:

Yes, but it was more complicated than that...a little more history:

ARPANET used RFNMs (Request For Next Message) as a means of flow control.  Every message (packet/datagram/whatever) sent by a host would eventually cause a RFNM to be returned to the host.  IIRC, hosts were allowed to send up to 8 messages to any particular destination.  So there could be up to 8 pending RFNMs to come back to the host for traffic to that destination.  If the host tried to send a 9th message to a particular destination, the IMP would block all transmissions from the host until those RFNMs arrived, by shutting off the hardware interface.  So, if a host exceeded that limit of "8 in flight" to any destination, the IMP would block it, at least temporarily, from sending anything to any destination.  That would probably be A Bad Thing.

Hosts could implement a simple algorithm and simply send one message, and hold the next message until a RFNM came back.  But to increase throughput, it was advisable to implement some sort of "RFNM Counting" where the host would keep track of how many messages were "in flight", and avoid sending another message to a particular destination if that message would exceed the 8-in-flight constraint, and thereby avoid having the IMP shut off all of its traffic to all destinations.  The TCP/IP I implemented for Unix did that kind of RFNM Counting on the ARPANET interface, but I'm not sure how other implementations handled the RFNM issues.

Any "box" (such as a Port Expander) that was "spliced into" the connection between a host and an IMP had to perform two related functions.  It had to act as a host itself in interacting with the IMP.  It also had to "look like an IMP" to the host(s) that were attached to it.  It had to essentially implement "timesharing" of the IMP's interface.

The "1822 specifications" defined the interface between a Host and an IMP.
From it, engineers could build interfaces for their hosts to connect them to the ARPANET.  However (always a however...) the 1822 spec appeared to be symmetrical.  But it wasn't.  Interfaces that met the 1822 specs could successfully interact with an IMP.  Also, if you plugged two such 1822 interfaces back-to-back (as was done in connecting the 4 hosts to a Port Expander), it would often work apparently fine.  The "Host to IMP" specification wasn't quite the same as the (internal-to-BBN) "IMP To Host" specification; it was easy for people to treat it as if it was.

But in that early Internet, there were lots of "outages" to be investigated.  I remember doing a "deep dive" into one such configuration where equipment was "spliced into" a Host/IMP 1822 cable with unreliable results.  It turned out to be a hardware issue, with the root cause being the invalid assumption that any 1822-compliant interface on a host could also successfully emulate the 1822 interface on an IMP.

This was a sufficiently common problem that I wrote IEN 139 "Hosts As IMPs" to explain the situation (see https://www.rfc-editor.org/ien/scanned/ien139.pdf ), to warn anyone trying to do such things.  But that IEN only addressed the low-level issues of hardware, signals, voltages, and noise, and warned that to do such things might require more effort to actually behave as an IMP.

RFNMs, and RFNM counting, weren't specified in 1822, but to "look like an IMP", a box such as a Port Expander faced design choices for providing functionality such as RFNMs.  I never knew how it did that, and how successfully it "looked like an IMP" to all its attached hosts.  E.g., if all 4 hosts, thinking they were connected to their own dedicated IMP port, did their own RFNM Counting, how did the PE make that all work reliably?  Maybe the situation just never came up often enough in practice to motivate troubleshooting.

Not an issue now of course, but historically I wonder how much of the early reliability issues in the Internet in the Fuzzy Peach era might have been caused by such situations.

/Jack

PS - the same kind of thought has occurred to me with respect to NAT, which seems to perform a similar "look like an Internet" function.

On 8/30/21 3:54 AM, Vint Cerf wrote:
> two tcp connections could multiplex on a given IMP-IMP link - one RFNM
> per IP packet regardless of the TCP layer "connection"
> v
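The "RFNM counting" described in the message above amounts to a small per-destination window kept on the host side.  As a purely illustrative aside, here is a minimal sketch of that bookkeeping in C; the 8-message limit is the one stated above, while the function names, the table size, and the destination indexing are invented for illustration and are not taken from any historical implementation.

    /* Minimal sketch of host-side "RFNM counting" (illustrative names only). */
    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_DESTS     64   /* hypothetical table of ARPANET destinations    */
    #define MAX_IN_FLIGHT  8   /* messages allowed to await RFNMs, per the text */

    static uint8_t in_flight[MAX_DESTS];   /* sent, RFNM not yet returned */

    /* May another message for this destination be handed to the IMP now? */
    bool rfnm_can_send(int dest)
    {
        return in_flight[dest] < MAX_IN_FLIGHT;
    }

    /* Call when a message is actually passed to the IMP. */
    void rfnm_note_send(int dest)
    {
        in_flight[dest]++;
    }

    /* Call when the IMP returns a RFNM for this destination. */
    void rfnm_note_rfnm(int dest)
    {
        if (in_flight[dest] > 0)
            in_flight[dest]--;
    }

A host that checks rfnm_can_send() before queueing a message never trips the 9th-message case, so the IMP has no reason to shut off the interface and block traffic to unrelated destinations.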
From steve at shinkuro.com  Mon Aug 30 11:33:56 2021
From: steve at shinkuro.com (Steve Crocker)
Date: Mon, 30 Aug 2021 14:33:56 -0400
Subject: [ih] More topology
In-Reply-To:
References:
Message-ID:

Minor point: RFNM = Ready (not Request) for Next Message.

Sent from my iPhone

> On Aug 30, 2021, at 2:01 PM, Jack Haverty via Internet-history wrote:
>
> Yes, but it was more complicated than that...a little more history:
>
> ARPANET used RFNMs (Request For Next Message) as a means of flow control.  Every message (packet/datagram/whatever) sent by a host would eventually cause a RFNM to be returned to the host.  IIRC, hosts were allowed to send up to 8 messages to any particular destination.  So there could be up to 8 pending RFNMs to come back to the host for traffic to that destination.  If the host tried to send a 9th message to a particular destination, the IMP would block all transmissions from the host until those RFNMs arrived, by shutting off the hardware interface.
--
Internet-history mailing list
Internet-history at elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history

From jnc at mercury.lcs.mit.edu  Mon Aug 30 12:02:50 2021
From: jnc at mercury.lcs.mit.edu (Noel Chiappa)
Date: Mon, 30 Aug 2021 15:02:50 -0400 (EDT)
Subject: [ih] More topology
Message-ID: <20210830190250.199AE18C08C@mercury.lcs.mit.edu>

    > From: Jack Haverty

    > I never did learn how the PE handled RFNMs, in particular how it
    > interacted with its associated NCP host that it was "stealing" RFNMs
    > from.

I know a bit about the Port Expander; we were planning on using it at MIT at one point, since MIT had no spare IMP ports for an IP gateway (router).  (We didn't get an IMP port for the MIT gateway until MIT got its third IMP, one of the first C/30's.)  That didn't work out, as I'll explain later.

The PE didn't share the NCP 'host' among connected hosts; all NCP traffic coming in from the IMP is sent to the 'main' subsidiary host's port:

  ; WHEN A TYPE 0 OR TYPE 3 MESSAGE IS RECEIVED, FIRST CHECK THE MESSAGE'S
  ; LINK NUMBER. IF THE MSG IS NOT ON AN INTERNET LINK, THEN SEND THE MSG TO
  ; THE PORT THAT RECEIVES ALL NON-INET TRAFFIC (PORT INDEX IS IN NCPPRT)

For IP traffic, the PE acts as a gateway (i.e. router), and there's a table which says which downstream port various IP hosts are on.

The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s which record messages sent out to the IMP; when a RFNM arrives, it uses the CB database to work out the downstream host which originated the message the RFNM is for; the RFNM is then handed to it.

As the above excerpt probably made clear, I still have the PE code (it had been squirreled away on the MIT-CSR Unix - I made a full dump of that machine before it croaked, so we now have access to all that history; I guess I was concerned about history even back then).

I don't think I have the _original_, unmodified PE code; what I have is a bodged version that I hacked to act as a gateway to the MIT 1 Mbit/sec ring LAN.  I.e. it didn't have any subsidiary hosts attached to 1822 ports; just the main 1822 port (connected to the IMP) and the LAN.  I'm too lazy to see exactly what I did with RFNM's there; probably just pitched them (no RFNM's on a LAN :-).

While I was looking for that, I ran across some other old code that might be interesting:

- the TIU (kind of a predecessor to the TAC, a _very_ early implementation of TCP in Macro-11 for the PDP-11, written by Jim Mathis, which I believe was the basis for Jack's first UNIX TCP at BBN);

- a couple of modules from the BCPL gateway code from BBN (the one that ran under ELF); historically interesting, as it was the very first IP router code _ever_.
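As an illustrative aside, the connection-block bookkeeping described above can be restated as a small lookup table: record each message handed to the IMP together with the downstream port it came from, and when a RFNM comes back, find the matching entry and pass the RFNM to that port.  The sketch below is in C rather than the PE's PDP-11 assembler, and every structure, field, and function name in it is invented for illustration; it is a paraphrase of the described behavior, not the actual PE code.

    /* Sketch of Port Expander RFNM demultiplexing (illustrative, not PE code). */
    #define NCBS 16                    /* hypothetical number of connection blocks */

    struct conn_block {
        int in_use;
        int dest;                      /* ARPANET destination of the message      */
        int msg_id;                    /* message id/link seen on the IMP side    */
        int src_port;                  /* downstream host port that originated it */
    };

    static struct conn_block cb[NCBS];

    /* Record an outgoing message so the eventual RFNM can be routed back. */
    int cb_record(int dest, int msg_id, int src_port)
    {
        for (int i = 0; i < NCBS; i++) {
            if (!cb[i].in_use) {
                cb[i].in_use   = 1;
                cb[i].dest     = dest;
                cb[i].msg_id   = msg_id;
                cb[i].src_port = src_port;
                return 0;
            }
        }
        return -1;                     /* no free block: hold the message back */
    }

    /* On RFNM arrival, free the matching block and return the originating port. */
    int cb_match_rfnm(int dest, int msg_id)
    {
        for (int i = 0; i < NCBS; i++) {
            if (cb[i].in_use && cb[i].dest == dest && cb[i].msg_id == msg_id) {
                cb[i].in_use = 0;
                return cb[i].src_port; /* hand the RFNM to this host's port */
            }
        }
        return -1;                     /* unmatched RFNM */
    }

In the real PE the matching key and the number of blocks would be whatever the 1822 leader format and memory budget allowed; the point here is only the demultiplexing step.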
If anyone is interested in any of this stuff, let me know and I'll look into getting it uploaded and made available.

The reason we couldn't get the PE to work was that the SRI 1822 interface (which is what we were planning to use on our PE) didn't _exactly_ electrically duplicate the IMP 1822 interface; the latter used opto-isolators on the DH interface, and the SRI interface didn't have them.

The plan was to put the PE in front of the DM ITS machine, but when we tried it, it didn't work.  Ken Pogran looked into the issue, and discovered that the person who did DM's IMP interface (I wonder who that was :-) had done some 'trick' (the exact details of which now escape me - it was something to do with the ground he used for the DH interface signals), and without the opto-isolators the SRI 1822 interface wouldn't talk to it.

    Noel

From tte at cs.fau.de  Mon Aug 30 12:31:06 2021
From: tte at cs.fau.de (Toerless Eckert)
Date: Mon, 30 Aug 2021 21:31:06 +0200
Subject: [ih] Better-than-Best Effort
In-Reply-To:
References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net>
Message-ID: <20210830193106.GG50345@faui48f.informatik.uni-erlangen.de>

On Fri, Aug 27, 2021 at 03:49:04PM -0700, touch--- via Internet-history wrote:
> Absolute QoS does (ensuring 300 Mbps capacity), but relative QoS can be deployed as a layer on top of nearly anything - i.e., run RSVP in an overlay and you don't get 300 Mbps per se, but that reservation would get twice the capacity of one reserving 150 Mbps on paths they share.

Very much agree with the concept of relative bandwidth allocation.  RSVP is a bit of a red herring here because not only does it not have the concept of relative, it also does not have the concept of variable (we did once start multi-TSPEC in TSVWG but never finished it).

And ultimately contention-based relative weights are a matter of AQM and CC.  RFC8698 has the concept of weighted CC, allowing flows to get differentiated bandwidth.  DPS has weighting in AQM (that could be integrated with RSVP).  I am sure there are many other examples not well explored because it's such a big gap between great idea and actual adoption, especially when it's not only affecting hosts.

> > I'm thinking that the long-haul infrastructure tends to have enough capacity that it usually isn't the source of latency.  It's the beginning and ending legs that do.
>
> We do have some cases where that happens in the customer upload direction (bufferbloat), but I wonder if it's more often in the aggregation network between the edge networks and the core.  That's the typical case I've seen for cable Internet, where the aggregation tree was designed assuming ratios that don't match current transport protocol use.  I have 200 Mbps cable over a WiFi LAN that can support 2.2Gbps, but I almost never see those capacities.

The main issue is inelastic traffic, such as classical VoIP with video (not very common anymore), when it tries to get more of a link's equal share - and that link is loaded with many more TCP flows.  How common or uncommon that situation is depends highly on the deployment, so many folks may never get into it and think QoS is never required...

Cheers
    Toerless

> At the other side, I wonder too if there are overloads on the end systems more than the edge net.
>
> > So what about a scheme that defines and provides QOS in those segments but not the long middle?  Cheaper, more implementable, and might give usefully-better performance.
>
> Interesting question; FWIW, I don't know if the edge is more agile than the core; AFAICT, they're both susceptible to the same inertia and lack of consolidated oversight...
>
> > Assuming that this idea is new only to me, I'm curious about reactions/history/etc.
> >
> > d/
> > --
> > Dave Crocker
> > Brandenburg InternetWorking
> > bbiw.net
>
> Joe Touch, temporal epistemologist
> www.strayalpha.com
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

--
---
tte at cs.fau.de
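As an illustrative aside on the "relative QoS" idea in the exchange above (a reservation gets a share proportional to what it asked for, rather than an absolute guarantee), here is a small self-contained sketch in C; the reservation numbers and the 400 Mbps bottleneck are made up for the example, and this is not code from RSVP, RFC 8698, or any deployed AQM.

    /* Illustrative "relative QoS": split a bottleneck in proportion to requests. */
    #include <stdio.h>

    static void relative_shares(const double *req, double *share, int n,
                                double capacity)
    {
        double total = 0.0;
        for (int i = 0; i < n; i++)
            total += req[i];
        for (int i = 0; i < n; i++)
            share[i] = (total > 0.0) ? capacity * req[i] / total : 0.0;
    }

    int main(void)
    {
        double req[]   = {300.0, 150.0, 150.0};   /* requested Mbps (made up)   */
        double share[3];

        relative_shares(req, share, 3, 400.0);    /* 400 Mbps shared bottleneck */
        for (int i = 0; i < 3; i++)
            printf("flow %d: asked %.0f Mbps, gets %.0f Mbps\n",
                   i, req[i], share[i]);
        return 0;
    }

With these made-up numbers the 300 Mbps request ends up with twice the share of each 150 Mbps request, which is the ratio behavior described above, even though none of them gets its full requested rate.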
From b_a_denny at yahoo.com  Mon Aug 30 13:37:58 2021
From: b_a_denny at yahoo.com (Barbara Denny)
Date: Mon, 30 Aug 2021 20:37:58 +0000 (UTC)
Subject: [ih] More topology
In-Reply-To:
References:
Message-ID: <165760904.1127720.1630355878166@mail.yahoo.com>

I am pretty sure this did impact me while I was trying to get the Reconstitution Protocol to work.  I remember having trouble with trying to reliably know when the ARPANET partitioned.  I needed to get things working so I changed what was originally specified in our design.  I remember Jim Mathis explaining RFNMs and that this might be causing the problem.  I don't know if this was brought to the attention of the ARPAnet folks.

I also don't remember the testbed well enough to know if a port expander was between the RP gateway and the IMP.

barbara

On Monday, August 30, 2021, 11:34:09 AM PDT, Steve Crocker via Internet-history wrote:

Minor point: RFNM = Ready (not Request) for Next Message.

Sent from my iPhone

> On Aug 30, 2021, at 2:01 PM, Jack Haverty via Internet-history wrote:

> Yes, but it was more complicated than that...a little more history:

> ARPANET used RFNMs (Request For Next Message) as a means of flow control.  Every message (packet/datagram/whatever) sent by a host would eventually cause a RFNM to be returned to the host.  IIRC, hosts were allowed to send up to 8 messages to any particular destination.  So there could be up to 8 pending RFNMs to come back to the host for traffic to that destination.  If the host tried to send a 9th message to a particular destination, the IMP would block all transmissions from the host until those RFNMs arrived, by shutting off the hardware interface.  So, if a host exceeded that limit of "8 in flight" to any destination, the IMP would block it, at least temporarily, from sending anything to any destination.  That would probably be A Bad Thing.

> Hosts could implement a simple algorithm and simply send one message, and hold the next message until a RFNM came back.  But to increase throughput, it was advisable to implement some sort of "RFNM Counting" where the host would keep track of how many messages were "in flight", and avoid sending another message to a particular destination if that message would exceed the 8-in-flight constraint, and thereby avoid having the IMP shut off all of its traffic to all destinations.  The TCP/IP I implemented for Unix did that kind of RFNM Counting on the ARPANET interface, but I'm not sure how other implementations handled the RFNM issues.

> Any "box" (such as a Port Expander) that was "spliced into" the connection between a host and an IMP had to perform two related functions.  It had to act as a host itself in interacting with the IMP.  It also had to "look like an IMP" to the host(s) that were attached to it.  It had to essentially implement "timesharing" of the IMP's interface.

> The "1822 specifications" defined the interface between a Host and an IMP.
From it, engineers could build interfaces for their hosts to connect them to the ARPANET.? However (always a however...) the 1822 spec appeared to be symmetrical.? But it wasn't.? Interfaces that met the 1822 specs could successfully interact with an IMP. Also, if you plugged two such 1822 interfaces back-to-back (as was done in connecting the 4 host to a Port Expander), it would often work apparently fine.? The "Host to IMP" specification wasn't quite the same as the (internal-to-BBN) "IMP To Host" specification;? it was easy for people to treat it as if it was. > > But in that early Internet, there were lots of "outages" to be investigated.? I remember doing a "deep dive" into one such configuration where equipment was "spliced into" a Host/IMP 1822 cable with unreliable results.? It turned out to be a hardware issue, with the root cause being the invalid assumption that any 1822-compliant interface on a host could also successfully emulate the 1822 interface on an IMP. > > This was a sufficiently common problem that I wrote IEN 139 "Hosts As IMPs" to explain the situation (see https://www.rfc-editor.org/ien/scanned/ien139.pdf ), to warn anyone trying to do such things.? But that IEN only addressed the low-level issues of hardware, signals, voltages, and noise., and warned that to do such things might require more effort to actually behave as an IMP. > > RFNMs, and RFNM counting, weren't specified in 1822, but to "look like an IMP", a box such as a Port Expander faced design choices for providing functionality such as RFNMs.? I never knew how it did that, and how successfully it "looked like an IMP" to all its attached hosts.? E.g., if all 4 hosts, thinking they were connected to their own dedicated IMP port, did their own RFNM Counting, how did the PE make that all work reliably?? Maybe the situation just never came up often enough in practice to motivate troubleshooting. > > Not an issue now of course, but historically I wonder how much of the early reliability issues in the Internet in the Fuzzy Peach era might have been caused by such situations. > > /Jack > > PS - the same kind of thought has occurred to me with respect to NAT, which seems to perform a similar "look like an Internet" function. > > > > >> On 8/30/21 3:54 AM, Vint Cerf wrote: >> two tcp connections could multiplex on a given IMP-IMP link - one RFNM per IP packet regardless of the TCP layer "connection" >> v >> >> >> On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history > wrote: >> >>? ? Thanks Barbara -- yes, the port Expander was one of the things I >>? ? called >>? ? "homegrown LANs".? I never did learn how the PE handled RFNMs, in >>? ? particular how it interacted with its associated NCP host that it was >>? ? "stealing" RFNMs from. >>? ? /jack >> >>? ? On 8/29/21 2:38 PM, Barbara Denny wrote: >>? ? > There was also SRI's port expander which increased the number of >>? ? host >>? ? > ports available on an IMP. >>? ? > >>? ? > You can find the SRI technical report (1080-140-1) on the web. The >>? ? > title is "The Arpanet Imp Port Expander". >>? ? > >>? ? > barbara >>? ? > >>? ? > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via >>? ? > Internet-history >? ? > wrote: >>? ? > >>? ? > >>? ? > Thanks Steve.? I guess I was focussed only on the longhaul >>? ? hops. The >>? ? > maps didn't show where host computers were attached. At the time >>? ? > (1981) the ARPANET consisted of several clusters of nodes (DC, >>? ? Boston, >>? ? 
> LA, SF), almost like an early form of Metropolitan Area Network >>? ? (MAN), >>? ? > plus single nodes scattered around the US and a satellite circuit to >>? ? > Europe.? The "MAN" parts of the ARPANET were often richly >>? ? connected, and >>? ? > the circuits might have even been in the same room or building or >>? ? > campus.? So the long-haul circuits were in some sense more >>? ? important in >>? ? > their scarcity and higher risk of problems from events such as >>? ? marauding >>? ? > backhoes (we called such network outages "backhoe fade"). >>? ? > >>? ? > While I still remember...here's a little Internet History. >>? ? > >>? ? > The Internet, at the time in late 70s and early 80s, was in what >>? ? I used >>? ? > to call the "Fuzzy Peach" stage of its development.? In addition to >>? ? > computers directly attached to an IMP, there were various kinds of >>? ? > "local area networks", including things such as Packet Radio >>? ? networks >>? ? > and a few homegrown LANs, which provided connectivity in a small >>? ? > geographical area.? Each of those was attached to an ARPANET IMP >>? ? > somewhere close by, and the ARPANET provided all of the long-haul >>? ? > communications.? The exception to that was the SATNET, which >>? ? provided >>? ? > connectivity across the Atlantic, with a US node (in West Virginia >>? ? > IIRC), and a very active node in the UK.? So the ARPANET was the >>? ? > "peach" and all of the local networks and computers in the US >>? ? were the >>? ? > "fuzz", with SATNET attaching extending the Internet to Europe. >>? ? > >>? ? > That topology had some implications on the early Internet behavior. >>? ? > >>? ? > At the time, I was responsible for BBN's contract with ARPA in >>? ? which one >>? ? > of the tasks was "make the core Internet reliable 24x7".? That >>? ? > motivated quite frequent interactions with the ARPANET NOC, >>? ? especially >>? ? > since it was literally right down the hall. >>? ? > >>? ? > TCP/IP was in use at the time, but most of the long-haul traffic >>? ? flows >>? ? > were through the ARPANET.? With directly-connected computers at each >>? ? > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the >>? ? protocol >>? ? > in use as the ARPANET TIPs became TACs. >>? ? > >>? ? > However...? There's always a "however"...? The ARPANET itself >>? ? already >>? ? > implemented a lot of the functionality that TCP provided. ARPANET >>? ? > already provided reliable end-end byte streams, as well as flow >>? ? control; >>? ? > the IMPs would allow only 8 "messages" in transit between two >>? ? endpoints, >>? ? > and would physically block the computer from sending more than that. >>? ? > So IP datagrams never got lost, or reordered, or duplicated, and >>? ? never >>? ? > had to be discarded or retransmitted.? TCP/IP could do such >>? ? things too, >>? ? > but in the "fuzzy peach" situation, it didn't have to do so. >>? ? > >>? ? > The prominent exception to the "fuzzy peach" was transatlantic >>? ? traffic, >>? ? > which had to cross both the ARPANET and SATNET.? The gateway >>? ? > interconnecting those two had to discard IP datagrams when they >>? ? came in >>? ? > faster than they could go out.? TCP would have to notice, >>? ? retransmit, >>? ? > and reorder things at the destination. >>? ? > >>? ? > Peter Kirstein's crew at UCL were quite active in experimenting >>? ? with the >>? ? > early Internet, and their TCP/IP traffic had to actually do all >>? ? of the >>? ? > functions that the Fuzzy Peach so successfully hid from those >>? ? 
directly >>? ? > attached to it.? I think the experiences in that path motivated >>? ? a lot >>? ? > of the early thinking about algorithms for TCP behavior, as well as >>? ? > gateway actions. >>? ? > >>? ? > Europe is 5+ hours ahead of Boston, so I learned to expect >>? ? emails and/or >>? ? > phone messages waiting for me every morning advising that "The >>? ? Internet >>? ? > Is Broken!", either from Europe directly or through ARPA.? One >>? ? of the >>? ? > first troubleshooting steps, after making sure the gateway was >>? ? running, >>? ? > was to see what was going on in the Fuzzy Peach which was so >>? ? important >>? ? > to the operation of the Internet.? Bob Hinden, Alan Sheltzer, >>? ? and Mike >>? ? > Brescia might remember more since they were usually on the front >>? ? lines. >>? ? > >>? ? > Much of the experimentation at the time involved interactions >>? ? between >>? ? > the UK crowd and some machine at ISI.? If the ARPANET was >>? ? acting up, >>? ? > the bandwidth and latency of those TCP/IP traffic flows could gyrate >>? ? > wildly, and TCP/IP implementations didn't always respond well to >>? ? such >>? ? > things, especially since they didn't typically occur when you >>? ? were just >>? ? > using the Fuzzy Peach. >>? ? > >>? ? > Result - "The Internet Is Broken".? That long-haul ARPA-ISI >>? ? circuit was >>? ? > an important part of the path from Europe to California.? If it was >>? ? > "down", the path became 3 or more additional hops (IMP hops, not >>? ? IP), >>? ? > and became further loaded by additional traffic routing around the >>? ? > break.? TCPs would timeout, retransmit, and make the problem worse >>? ? > while their algorithms tried to adapt. >>? ? > >>? ? > So that's probably what I was doing in the NOC when I noticed the >>? ? > importance of that ARPA<->USC ARPANET circuit. >>? ? > >>? ? > /Jack Haverty >>? ? > >>? ? > >>? ? > On 8/29/21 10:09 AM, Stephen Casner wrote: >>? ? > > Jack, that map shows one hop from ARPA to USC, but the PDP10s >>? ? were at >>? ? > > ISI which is 10 miles and 2 or 3 IMPs from USC. >>? ? > > >>? ? > >? ? ? ? -- Steve >>? ? > > >>? ? > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: >>? ? > > >>? ? > >> Actually July 1981 -- see >>? ? > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg >>? ? >>? ? > >? ? >>? ? >(thanks, >>? ? > Noel!) >>? ? > >> The experience I recall was being in the ARPANET NOC for some >>? ? > reason and >>? ? > >> noticing the topology on the big map that covered one wall of >>? ? the >>? ? > NOC.? There >>? ? > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure >>? ? > where the >>? ? > >> PDP-10s were attached.? Still just historically curious how the >>? ? > decision was >>? ? > >> made to configure that topology....but we'll probably never >>? ? know. >>? ? > /Jack >>? ? > >> >>? ? > >> >>? ? > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: >>? ? > >>>? ? A look at some ARPAnet maps available on the web shows >>? ? that in >>? ? > 1982 it was >>? ? > >>> four hops from ARPA to ISI, but by 1985 it was one hop. >>? ? > >>> Alex McKenzie >>? ? > >>> >>? ? > >>>? ? ? On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex >>? ? McKenzie via >>? ? > >>> Internet-history >? ? >>? ? > >? ? >> wrote: >>? ? > >>>? ? ? This is the second email from Jack mentioning a >>? ? > point-to-point line >>? ? > >>> between the ARPA TIP and the ISI site.? I don't believe that >>? ? is an >>? ? > accurate >>? ? > >>> statement of the ARPAnet topology.? 
In January 1975 there >>? ? were 5 hops >>? ? > >>> between the 2 on the shortest path. In October 1975 there >>? ? were 6. >>? ? > I don't >>? ? > >>> believe it was ever one or two hops, but perhaps someone can >>? ? find >>? ? > a network >>? ? > >>> map that proves me wrong. >>? ? > >>> Alex McKenzie >>? ? > >>> >>? ? > >>>? ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack >>? ? Haverty via >>? ? > >>> Internet-history >? ? >>? ? > >? ? >> wrote: >>? ? > >>>? ? ? Sounds right.? My experience was well after that early >>? ? > experimental >>? ? > >>> period.? The ARPANET was much bigger (1980ish) and the >>? ? topology had >>? ? > >>> evolved over the years.? There was a direct 56K line (IIRC >>? ? between >>? ? > >>> ARPA-TIP and ISI) at that time.? Lots of other circuits too, >>? ? but in >>? ? > >>> normal conditions ARPA<->ISI traffic flowed directly over that >>? ? > long-haul >>? ? > >>> circuit.? /Jack >>? ? > >>> >>? ? > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: >>? ? > >>>> Jack, the 4 node configuration had two paths between UCLA >>? ? and SRI and >>? ? > >>>> a two hop path to University of Utah. >>? ? > >>>> We did a variety of tests to force alternate routing (by >>? ? congesting >>? ? > >>>> the first path). >>? ? > >>>> I used traffic generators in the IMPs and in the UCLA >>? ? Sigma-7 to get >>? ? > >>>> this effect. Of course, we also crashed the Arpanet with >>? ? these early >>? ? > >>>> experiments. >>? ? > >>>> >>? ? > >>>> v >>? ? > >>>> >>? ? > >>>> >>? ? > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty >>? ? >>? ? > > >>? ? > >>>> >>? ? >>> wrote: >>? ? > >>>> >>? ? > >>>>? ? ? Thanks, Steve.? I hadn't heard the details of why ISI was >>? ? > >>>>? ? ? selected.? I can believe that economics was probably a >>? ? > factor but >>? ? > >>>>? ? ? the people and organizational issues could have been the >>? ? > dominant >>? ? > >>>>? ? ? factors. >>? ? > >>>> >>? ? > >>>>? ? ? IMHO, the "internet community" seems to often ignore >>? ? > non-technical >>? ? > >>>>? ? ? influences on historical events, preferring to view >>? ? > everything in >>? ? > >>>>? ? ? terms of RFCs, protocols, and such.? I think the other >>? ? > influences >>? ? > >>>>? ? ? are an important part of the story - hence my >>? ? "economic lens". >>? ? > >>>>? ? ? You just described a view through a manager's lens. >>? ? > >>>> >>? ? > >>>>? ? ? /Jack >>? ? > >>>> >>? ? > >>>>? ? ? PS - I always thought that the "ARPANET demo" aspect >>? ? of that >>? ? > >>>>? ? ? ARPANET timeframe was suspect, especially after I noticed >>? ? > that the >>? ? > >>>>? ? ? ARPANET had been configured with a leased circuit >>? ? directly >>? ? > between >>? ? > >>>>? ? ? the nearby IMPs to ISI and ARPA. So as a demo of "packet >>? ? > >>>>? ? ? switching", there wasn't much actual switching >>? ? involved.? The 2 >>? ? > >>>>? ? ? IMPs were more like multiplexors. >>? ? > >>>> >>? ? > >>>>? ? ? I never heard whether that configuration was mandated by >>? ? > ARPA, or >>? ? > >>>>? ? ? BBN decided to put a line in as a way to keep the >>? ? customer >>? ? > happy, >>? ? > >>>>? ? ? or if it just happened naturally as a result of the >>? ? ongoing >>? ? > >>>>? ? ? measurement of traffic flows and reconfiguration of >>? ? the topology >>? ? > >>>>? ? ? to adapt as needed.? Or something else.? The >>? ? interactivity >>? ? > of the >>? ? > >>>>? ? ? service between a terminal at ARPA and a PDP-10 at ISI was >>? ? > >>>>? ? ? noticeably better than other users (e.g., me) experienced. >>? ? > >>>> >>? ? > >>>>? 
? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >>? ? > >>>>>? ? ? Jack, >>? ? > >>>>> >>? ? > >>>>>? ? ? You wrote: >>? ? > >>>>> >>? ? > >>>>>? ? ? ? ? I recall many visits to ARPA on Wilson Blvd in >>? ? > Arlington, VA. >>? ? > >>>>>? ? ? ? ? There were >>? ? > >>>>>? ? ? ? ? terminals all over the building, pretty much all >>? ? connected >>? ? > >>>>>? ? ? ? ? through the >>? ? > >>>>>? ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine >>? ? > Del Rey, >>? ? > >>>>>? ? ? ? ? CA.? The >>? ? > >>>>>? ? ? ? ? technology of Packet Switching made it possible >>? ? to keep a >>? ? > >>>>>? ? ? ? ? PDP-10 busy >>? ? > >>>>>? ? ? ? ? servicing all those Users and minimize the costs of >>? ? > everything, >>? ? > >>>>>? ? ? ? ? including those expensive communications circuits. >>? ? > This was >>? ? > >>>>>? ? ? ? ? circa >>? ? > >>>>>? ? ? ? ? 1980. Users could efficiently share expensive >>? ? > communications, >>? ? > >>>>> and >>? ? > >>>>>? ? ? ? ? expensive and distant computers -- although I always >>? ? > thought >>? ? > >>>>>? ? ? ? ? ARPA's >>? ? > >>>>>? ? ? ? ? choice to use a computer 3000 miles away was >>? ? probably >>? ? > more to >>? ? > >>>>>? ? ? ? ? demonstrate the viability of the ARPANET than >>? ? because >>? ? > it was >>? ? > >>>>>? ? ? ? ? cheaper >>? ? > >>>>>? ? ? ? ? than using a computer somewhere near DC. >>? ? > >>>>> >>? ? > >>>>> >>? ? > >>>>>? ? ? The choice of USC-ISI in Marina del Rey was due to other >>? ? > >>>>>? ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong >>? ? > support, >>? ? > >>>>>? ? ? Keith Uncapher moved his research group out of RAND. >>? ? Uncapher >>? ? > >>>>>? ? ? explored a couple of possibilities and found a >>? ? comfortable >>? ? > >>>>>? ? ? institutional home with the University of Southern >>? ? California >>? ? > >>>>>? ? ? (USC) with the proviso the institute would be off campus. >>? ? > >>>>>? ? ? Uncapher was solidly supportive of both ARPA/IPTO and >>? ? of the >>? ? > >>>>>? ? ? Arpanet project.? As the Arpanet grew, Roberts needed a >>? ? > place to >>? ? > >>>>>? ? ? have multiple PDP-10s providing service on the Arpanet. >>? ? > Not just >>? ? > >>>>>? ? ? for the staff at ARPA but for many others as well. >>? ? > Uncapher was >>? ? > >>>>>? ? ? cooperative and the rest followed easily. >>? ? > >>>>> >>? ? > >>>>>? ? ? The fact that it demonstrated the viability of >>? ? packet-switching >>? ? > >>>>>? ? ? over that distance was perhaps a bonus, but the same >>? ? would have >>? ? > >>>>>? ? ? been true almost anywhere in the continental U.S. at >>? ? that time. >>? ? > >>>>>? ? ? The more important factor was the quality of the >>? ? relationship. >>? ? > >>>>>? ? ? One could imagine setting up a small farm of machines at >>? ? > various >>? ? > >>>>>? ? ? other universities, non-profits, or selected for profit >>? ? > companies >>? ? > >>>>>? ? ? or even some military bases. For each of these, cost, >>? ? > >>>>>? ? ? contracting rules, the ambitions of the principal >>? ? investigator, >>? ? > >>>>>? ? ? and staff skill sets would have been the dominant >>? ? concerns. >>? ? > >>>>> >>? ? > >>>>>? ? ? Steve >>? ? > >>>>> >>? ? > >>>> >>? ? > >>>> -- >>? ? > >>>> Please send any postal/overnight deliveries to: >>? ? > >>>> Vint Cerf >>? ? > >>>> 1435 Woodhurst Blvd >>? ? > >>>> McLean, VA 22102 >>? ? > >>>> 703-448-0965 >>? ? > >>>> >>? ? > >>>> until further notice >>? ? > >>? ? > >>? ? > -- >>? ? > Internet-history mailing list >>? ? > Internet-history at elists.isoc.org >>? ? >>? ? >? ? > >>? 
? > https://elists.isoc.org/mailman/listinfo/internet-history >>? ? >>? ? > >? ? > >> >>? ? --? ? Internet-history mailing list >>? ? Internet-history at elists.isoc.org >>? ? >>? ? https://elists.isoc.org/mailman/listinfo/internet-history >>? ? >> >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> 1435 Woodhurst Blvd >> McLean, VA 22102 >> 703-448-0965 >> >> until further notice >> >> >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Mon Aug 30 14:04:40 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 30 Aug 2021 21:04:40 +0000 (UTC) Subject: [ih] More topology In-Reply-To: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> References: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> Message-ID: <88662475.1136461.1630357480385@mail.yahoo.com> Just a Guess. The Packet Radio station software probably made use of the router code base you mention.? The station software was written in BCPL and ELF was the operating system.? I don't know the timelines of the router development and the Packet Radio station development.? Ginny Strazisar (Travers)? probably can clarify this or perhaps Mike Beeler or Jil Westcott. barbara On Monday, August 30, 2021, 12:59:28 PM PDT, Noel Chiappa via Internet-history wrote: ? ? > From: Jack Haverty ? ? > I never did learn how the PE handled RFNMs, in particular how it ? ? > interacted with its associated NCP host that it was "stealing" RFNMs ? ? > from. I know a bit about the Port Expander; we were planning on using it at MIT at one point, since MIT had no spare IMP ports for an IP gateway (router). (We didn't get an IMP port for the MIT gateway until MIT got its third IMP, one of the first C/30's.) That didn't work out, as I'll explain later. The PE didn't share the NCP 'host' among connected hosts; all NCP traffic coming in from the IMP is sent to the 'main' subsidiary host's port: ? ; WHEN A TYPE 0 OR TYPE 3 MESSAGE IS RECEIVED, FIRST CHECK THE MESSAGE'S ? ; LINK NUMBER.? IF THE MSG IS NOT ON AN INTERNET LINK, THEN SEND THE MSG TO ? ; THE PORT THAT RECEIVES ALL NON-INET TRAFFIC (PORT INDEX IS IN NCPPRT) For IP traffic, the PE acts as a gateway (i.e. router), and there's a table which says which downstream port various IP hosts are on. The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s which record messages sent out to the IMP; when a RFNM arrives, it uses the CB database to work out the downstream host which originated the message the RFNM is for; the RFNM is then handed to it. As the above excerpt probably made clear, I still have the PE code (it had been squirreled away on the MIT-CSR Unix - I made a full dump of that machine before it croaked, so we now have access to all that history; I guess I was concerned about history even back then). I don't think I have the _original_, unmodified PE code; what I have is a bodged version that I hacked to act as a gateway to the MIT 1 Mbit/sec ring LAN. I.e. it did't have any subsidiary hosts attached to 1822 ports; just the main 1822 port (connected to the IMP) and the LAN. I'm too lazy to see exactly what I did with RFNM's there; probably just pitched them (no RFNM's on a LAN :-). 
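A rough sketch, in C, of the "connection block" bookkeeping Noel describes above: the PE records each message it forwards to the IMP, and when the RFNM comes back it looks up which downstream host port originated the message. The real Port Expander was BCPL under ELF; the field names, the matching key (destination host plus link number), and the table size here are assumptions made for illustration.

    #include <stdio.h>

    #define NCB 32    /* assumed size of the connection-block table */

    struct conn_block {
        int in_use;
        int dest_host;          /* ARPANET destination the message went to */
        int link;               /* link/message id the IMP echoes in the RFNM */
        int downstream_port;    /* which attached host handed us the message */
    };

    static struct conn_block cb[NCB];

    /* Record a message being forwarded from 'port' to the IMP.
       Returns the CB index, or -1 if the table is full. */
    int cb_record(int port, int dest_host, int link)
    {
        for (int i = 0; i < NCB; i++) {
            if (!cb[i].in_use) {
                cb[i].in_use = 1;
                cb[i].dest_host = dest_host;
                cb[i].link = link;
                cb[i].downstream_port = port;
                return i;
            }
        }
        return -1;
    }

    /* A RFNM arrived for (dest_host, link): return the downstream port that
       should receive it, or -1 if no matching connection block is found. */
    int cb_rfnm_port(int dest_host, int link)
    {
        for (int i = 0; i < NCB; i++) {
            if (cb[i].in_use && cb[i].dest_host == dest_host && cb[i].link == link) {
                cb[i].in_use = 0;
                return cb[i].downstream_port;
            }
        }
        return -1;
    }

    int main(void)
    {
        cb_record(3 /* downstream port */, 22 /* dest host */, 5 /* link */);
        printf("RFNM for host 22, link 5 goes to port %d\n", cb_rfnm_port(22, 5));
        return 0;
    }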
While I was looking for that, I ran across some other old code that might be interesing: - the TIU (kind of a predecessor to the TAC, a _very_ early implementation of ? TCP in Macro-11 for the PDP-11, written by Jim Mathis, which I believe ? was the basis for Jack's first UNIX TCP at BBN); - a couple of modules from the BCPL gateway code from BBN (the one that ? ran under ELF); historically interesting, as it was the very first ? IP router code _ever_. If anyone is interested in any of this stuff, let me know and I'll look into getting it uploaded and made available. The reason we couldn't get the PE to work was that the SRI 1822 interface (which is what were planning to use on our PE) didn't _exactly_ electrically duplicate the IMP 1822 interface; the latter used optp-isolators on the DH interface, and the SRI interface didn't have them. The plan was to put the PE in from of the DM ITS machine, but when we tried it, it didn't work. Ken Pogran looked into the issue, and discovered that the person who did DM's IMP interface (I wonder who that was :-) had done some 'trick' (the exact details of which now escape me - it was something to do with the ground he used for the DH interface signals),and without the opto-isolators the SRI 1822 interface wouldn't talk to it. ??? Noel -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From vint at google.com Mon Aug 30 14:50:26 2021 From: vint at google.com (Vint Cerf) Date: Mon, 30 Aug 2021 17:50:26 -0400 Subject: [ih] More topology In-Reply-To: <88662475.1136461.1630357480385@mail.yahoo.com> References: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> <88662475.1136461.1630357480385@mail.yahoo.com> Message-ID: Barbara, I had not realized you were involved in the partitioned network solution. Wasn't Radia Perlman also engaged on that? So ELF was the OS for the BCPL gateway. I had forgotten that. Dick Karp at Stanford did his first TCP in BCPL for our PDP-11/40 v On Mon, Aug 30, 2021 at 5:07 PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Just a Guess. The Packet Radio station software probably made use of the > router code base you mention. The station software was written in BCPL and > ELF was the operating system. I don't know the timelines of the router > development and the Packet Radio station development. Ginny Strazisar > (Travers) probably can clarify this or perhaps Mike Beeler or Jil Westcott. > barbara > On Monday, August 30, 2021, 12:59:28 PM PDT, Noel Chiappa via > Internet-history wrote: > > > From: Jack Haverty > > > I never did learn how the PE handled RFNMs, in particular how it > > interacted with its associated NCP host that it was "stealing" RFNMs > > from. > > I know a bit about the Port Expander; we were planning on using it at MIT > at > one point, since MIT had no spare IMP ports for an IP gateway (router). (We > didn't get an IMP port for the MIT gateway until MIT got its third IMP, one > of the first C/30's.) That didn't work out, as I'll explain later. > > The PE didn't share the NCP 'host' among connected hosts; all NCP traffic > coming in from the IMP is sent to the 'main' subsidiary host's port: > > ; WHEN A TYPE 0 OR TYPE 3 MESSAGE IS RECEIVED, FIRST CHECK THE MESSAGE'S > ; LINK NUMBER. IF THE MSG IS NOT ON AN INTERNET LINK, THEN SEND THE MSG > TO > ; THE PORT THAT RECEIVES ALL NON-INET TRAFFIC (PORT INDEX IS IN NCPPRT) > > For IP traffic, the PE acts as a gateway (i.e. 
router), and there's a table > which says which downstream port various IP hosts are on. > > The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s > which record messages sent out to the IMP; when a RFNM arrives, it uses the > CB database to work out the downstream host which originated the message > the > RFNM is for; the RFNM is then handed to it. > > > As the above excerpt probably made clear, I still have the PE code (it had > been squirreled away on the MIT-CSR Unix - I made a full dump of that > machine > before it croaked, so we now have access to all that history; I guess I was > concerned about history even back then). > > I don't think I have the _original_, unmodified PE code; what I have is a > bodged version that I hacked to act as a gateway to the MIT 1 Mbit/sec ring > LAN. I.e. it did't have any subsidiary hosts attached to 1822 ports; just > the > main 1822 port (connected to the IMP) and the LAN. I'm too lazy to > see exactly what I did with RFNM's there; probably just pitched them > (no RFNM's on a LAN :-). > > While I was looking for that, I ran across some other old code that > might be interesing: > > - the TIU (kind of a predecessor to the TAC, a _very_ early implementation > of > TCP in Macro-11 for the PDP-11, written by Jim Mathis, which I believe > was the basis for Jack's first UNIX TCP at BBN); > - a couple of modules from the BCPL gateway code from BBN (the one that > ran under ELF); historically interesting, as it was the very first > IP router code _ever_. > > If anyone is interested in any of this stuff, let me know and I'll look > into getting it uploaded and made available. > > > The reason we couldn't get the PE to work was that the SRI 1822 interface > (which is what were planning to use on our PE) didn't _exactly_ > electrically > duplicate the IMP 1822 interface; the latter used optp-isolators on the DH > interface, and the SRI interface didn't have them. > > The plan was to put the PE in from of the DM ITS machine, but when > we tried it, it didn't work. Ken Pogran looked into the issue, and > discovered that the person who did DM's IMP interface (I wonder who > that was :-) had done some 'trick' (the exact details of which now > escape me - it was something to do with the ground he used for the > DH interface signals),and without the opto-isolators the SRI > 1822 interface wouldn't talk to it. > > > Noel > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From jack at 3kitty.org Mon Aug 30 15:31:59 2021 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 30 Aug 2021 15:31:59 -0700 Subject: [ih] More topology In-Reply-To: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> References: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> Message-ID: <5d7b1811-898b-4bfd-39cd-ff4791e00c7c@3kitty.org> The PE behavior you describe: "The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s which record messages sent out to the IMP; when a RFNM arrives, it uses the CB database to work out the downstream host which originated the message the RFNM is for; the RFNM is then handed to it." has a problem, if the host(s) attached to it did RFNM Counting. 
Any host might try to get up to 8 "in-flight" messages. If more than one such host is sending to the same destination, each expecting to be able to keep 8 messages in flight, the IMP would block the PE as it sent the 9th message. It doesn't matter whether they were NCP or TCP hosts, just whether or not they were RFNM Counting and interacting with the same destination. Such situations might have been rare, and for TCP of course, this would be mostly invisible, mitigated by retransmissions and duplicate removal as needed. And perhaps some total pauses in traffic flow to the ARPANET if the IMP had to block the interface. How's that for a 40+ year troubleshooting session... BTW, you're right that the Unix TCP I wrote started with Jim Mathis' LSI-11 TCP. IIRC, it didn't have any 1822/IMP interface code, so I had to write that (and hence learn about RFNMs et al). Or perhaps it did but the code wasn't obviously compatible with the Unix kernel. /Jack On 8/30/21 12:02 PM, Noel Chiappa via Internet-history wrote: > > From: Jack Haverty > > > I never did learn how the PE handled RFNMs, in particular how it > > interacted with its associated NCP host that it was "stealing" RFNMs > > from. > > I know a bit about the Port Expander; we were planning on using it at MIT at > one point, since MIT had no spare IMP ports for an IP gateway (router). (We > didn't get an IMP port for the MIT gateway until MIT got its third IMP, one > of the first C/30's.) That didn't work out, as I'll explain later. > > The PE didn't share the NCP 'host' among connected hosts; all NCP traffic > coming in from the IMP is sent to the 'main' subsidiary host's port: > > ; WHEN A TYPE 0 OR TYPE 3 MESSAGE IS RECEIVED, FIRST CHECK THE MESSAGE'S > ; LINK NUMBER. IF THE MSG IS NOT ON AN INTERNET LINK, THEN SEND THE MSG TO > ; THE PORT THAT RECEIVES ALL NON-INET TRAFFIC (PORT INDEX IS IN NCPPRT) > > For IP traffic, the PE acts as a gateway (i.e. router), and there's a table > which says which downstream port various IP hosts are on. > > The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s > which record messages sent out to the IMP; when a RFNM arrives, it uses the > CB database to work out the downstream host which originated the message the > RFNM is for; the RFNM is then handed to it. > > > As the above excerpt probably made clear, I still have the PE code (it had > been squirreled away on the MIT-CSR Unix - I made a full dump of that machine > before it croaked, so we now have access to all that history; I guess I was > concerned about history even back then). > > I don't think I have the _original_, unmodified PE code; what I have is a > bodged version that I hacked to act as a gateway to the MIT 1 Mbit/sec ring > LAN. I.e. it did't have any subsidiary hosts attached to 1822 ports; just the > main 1822 port (connected to the IMP) and the LAN. I'm too lazy to > see exactly what I did with RFNM's there; probably just pitched them > (no RFNM's on a LAN :-). > > While I was looking for that, I ran across some other old code that > might be interesing: > > - the TIU (kind of a predecessor to the TAC, a _very_ early implementation of > TCP in Macro-11 for the PDP-11, written by Jim Mathis, which I believe > was the basis for Jack's first UNIX TCP at BBN); > - a couple of modules from the BCPL gateway code from BBN (the one that > ran under ELF); historically interesting, as it was the very first > IP router code _ever_. 
> > If anyone is interested in any of this stuff, let me know and I'll look > into getting it uploaded and made available. > > > The reason we couldn't get the PE to work was that the SRI 1822 interface > (which is what were planning to use on our PE) didn't _exactly_ electrically > duplicate the IMP 1822 interface; the latter used optp-isolators on the DH > interface, and the SRI interface didn't have them. > > The plan was to put the PE in from of the DM ITS machine, but when > we tried it, it didn't work. Ken Pogran looked into the issue, and > discovered that the person who did DM's IMP interface (I wonder who > that was :-) had done some 'trick' (the exact details of which now > escape me - it was something to do with the ground he used for the > DH interface signals),and without the opto-isolators the SRI > 1822 interface wouldn't talk to it. > > > Noel From b_a_denny at yahoo.com Mon Aug 30 16:36:11 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 30 Aug 2021 23:36:11 +0000 (UTC) Subject: [ih] More topology In-Reply-To: References: <20210830190250.199AE18C08C@mercury.lcs.mit.edu> <88662475.1136461.1630357480385@mail.yahoo.com> Message-ID: <2106606920.1184635.1630366571203@mail.yahoo.com> I think Radia was involved with the problem but I don't know if she influenced what SRI did.? Jim Mathis or Zaw-Sing Su or perhaps Mark Lewis might be able to remember but I would have to look at what she was thinking in that space to have any comments.? If I remember the order of my projects correctly,? I was working on the Metanet gateway when the design and most of the development of the RP gateway was done.?? barbara On Monday, August 30, 2021, 02:50:39 PM PDT, Vint Cerf wrote: Barbara,I had not realized you were involved in the partitioned network solution. Wasn't Radia Perlman also engaged on that? So ELF was the OS for the BCPL gateway. I had forgotten that. Dick Karp at Stanford did his first TCP in BCPL for our PDP-11/40 v On Mon, Aug 30, 2021 at 5:07 PM Barbara Denny via Internet-history wrote: ?Just a Guess. The Packet Radio station software probably made use of the router code base you mention.? The station software was written in BCPL and ELF was the operating system.? I don't know the timelines of the router development and the Packet Radio station development.? Ginny Strazisar (Travers)? probably can clarify this or perhaps Mike Beeler or Jil Westcott. barbara ? ? On Monday, August 30, 2021, 12:59:28 PM PDT, Noel Chiappa via Internet-history wrote:? ?? ? > From: Jack Haverty ? ? > I never did learn how the PE handled RFNMs, in particular how it ? ? > interacted with its associated NCP host that it was "stealing" RFNMs ? ? > from. I know a bit about the Port Expander; we were planning on using it at MIT at one point, since MIT had no spare IMP ports for an IP gateway (router). (We didn't get an IMP port for the MIT gateway until MIT got its third IMP, one of the first C/30's.) That didn't work out, as I'll explain later. The PE didn't share the NCP 'host' among connected hosts; all NCP traffic coming in from the IMP is sent to the 'main' subsidiary host's port: ? ; WHEN A TYPE 0 OR TYPE 3 MESSAGE IS RECEIVED, FIRST CHECK THE MESSAGE'S ? ; LINK NUMBER.? IF THE MSG IS NOT ON AN INTERNET LINK, THEN SEND THE MSG TO ? ; THE PORT THAT RECEIVES ALL NON-INET TRAFFIC (PORT INDEX IS IN NCPPRT) For IP traffic, the PE acts as a gateway (i.e. router), and there's a table which says which downstream port various IP hosts are on. 
The way it handles RFNM's is that it has a database of "CONNECTION BLOCK"s which record messages sent out to the IMP; when a RFNM arrives, it uses the CB database to work out the downstream host which originated the message the RFNM is for; the RFNM is then handed to it. As the above excerpt probably made clear, I still have the PE code (it had been squirreled away on the MIT-CSR Unix - I made a full dump of that machine before it croaked, so we now have access to all that history; I guess I was concerned about history even back then). I don't think I have the _original_, unmodified PE code; what I have is a bodged version that I hacked to act as a gateway to the MIT 1 Mbit/sec ring LAN. I.e. it did't have any subsidiary hosts attached to 1822 ports; just the main 1822 port (connected to the IMP) and the LAN. I'm too lazy to see exactly what I did with RFNM's there; probably just pitched them (no RFNM's on a LAN :-). While I was looking for that, I ran across some other old code that might be interesing: - the TIU (kind of a predecessor to the TAC, a _very_ early implementation of ? TCP in Macro-11 for the PDP-11, written by Jim Mathis, which I believe ? was the basis for Jack's first UNIX TCP at BBN); - a couple of modules from the BCPL gateway code from BBN (the one that ? ran under ELF); historically interesting, as it was the very first ? IP router code _ever_. If anyone is interested in any of this stuff, let me know and I'll look into getting it uploaded and made available. The reason we couldn't get the PE to work was that the SRI 1822 interface (which is what were planning to use on our PE) didn't _exactly_ electrically duplicate the IMP 1822 interface; the latter used optp-isolators on the DH interface, and the SRI interface didn't have them. The plan was to put the PE in from of the DM ITS machine, but when we tried it, it didn't work. Ken Pogran looked into the issue, and discovered that the person who did DM's IMP interface (I wonder who that was :-) had done some 'trick' (the exact details of which now escape me - it was something to do with the ground he used for the DH interface signals),and without the opto-isolators the SRI 1822 interface wouldn't talk to it. ??? Noel -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history -- Please send any postal/overnight deliveries to:Vint Cerf1435 Woodhurst Blvd?McLean, VA 22102703-448-0965 until further notice From gnu at toad.com Mon Aug 30 20:01:43 2021 From: gnu at toad.com (John Gilmore) Date: Mon, 30 Aug 2021 20:01:43 -0700 Subject: [ih] Better-than-Best Effort: Lower latency In-Reply-To: <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> Message-ID: <11645.1630378903@hop.toad.com> Jack Haverty via Internet-history wrote: > Latency is possibly more important now than bandwidth, since while > fiber can provide lots of bandwidth but no one has yet figured out how > to move data faster than the speed of light. 
Just when you thought some trope was likely to be true, more facts intrude... https://www.nojitter.com/enterprise-networking/hollow-fiber-new-option-low-latency Turns out that microwave links are "faster than the speed of light in fiber", but air-filled hollow fiber is apparently even faster. See also: https://en.wikipedia.org/wiki/Spread_Networks And then there's going to low-earth orbit and back on straight lines, via Starlink, rather than going "around" the globe on a great-circle fiber route. This is claimed to be able to reduce NY/London latency, even when there are no working inter-satellite laser links (and thus multiple hops between orbit and ground stations are required): https://circleid.com/posts/20191230_starlink_simulation_low_latency_without_intersatellite_laser_links/ It appears that there are lots of ways to skin the latency cat despite the speed of light. John From touch at strayalpha.com Mon Aug 30 20:25:38 2021 From: touch at strayalpha.com (touch at strayalpha.com) Date: Mon, 30 Aug 2021 20:25:38 -0700 Subject: [ih] Better-than-Best Effort: Lower latency In-Reply-To: <11645.1630378903@hop.toad.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <11645.1630378903@hop.toad.com> Message-ID: > On Aug 30, 2021, at 8:01 PM, John Gilmore via Internet-history wrote: > ?. > It appears that there are lots of ways to skin the latency cat despite > the speed of light. Except that there aren?t. There are very specific ways to get around unnecessary latency (e.g., when interaction is limited to a finite set of decisions), but truly unpredictable behavior always incurs SOL delays with everything it impacts (cone of light). I have a 180+ slide, 4-hour, 7-yr old tutorial on this issue and those ways - including hollow-core fiber. As I said back in 1988: ?Everyone talks about the speed of light, but nobody every does anything about it?. Joe ? Joe Touch, temporal epistemologist www.strayalpha.com From tte at cs.fau.de Mon Aug 30 22:15:44 2021 From: tte at cs.fau.de (Toerless Eckert) Date: Tue, 31 Aug 2021 07:15:44 +0200 Subject: [ih] Better-than-Best Effort: Lower latency In-Reply-To: References: <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <11645.1630378903@hop.toad.com> Message-ID: <20210831051544.GJ50345@faui48f.informatik.uni-erlangen.de> Seems though as if hollow fiber may be making a comeback: Latency (old): https://www.nojitter.com/enterprise-networking/hollow-fiber-new-option-low-latency Loss (this is what seems new): https://www.osa-opn.org/home/newsroom/2020/december/lower_losses_with_air-filled_fiber/ Guess it first needed HFT to pump money into radio links in the last decade before there was enough financical interest to do more research into reducing loss for hollow fiber to make it compete. Hey, finally something good coming out of HFT ;-)) Toerless On Mon, Aug 30, 2021 at 08:25:38PM -0700, touch--- via Internet-history wrote: > > On Aug 30, 2021, at 8:01 PM, John Gilmore via Internet-history wrote: > > > ?. 
> > It appears that there are lots of ways to skin the latency cat despite > > the speed of light. > > Except that there aren?t. There are very specific ways to get around unnecessary latency (e.g., when interaction is limited to a finite set of decisions), but truly unpredictable behavior always incurs SOL delays with everything it impacts (cone of light). > > I have a 180+ slide, 4-hour, 7-yr old tutorial on this issue and those ways - including hollow-core fiber. > > As I said back in 1988: > ?Everyone talks about the speed of light, but nobody every does anything about it?. > > Joe > > ? > Joe Touch, temporal epistemologist > www.strayalpha.com > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vgcerf at gmail.com Tue Aug 31 01:02:23 2021 From: vgcerf at gmail.com (vinton cerf) Date: Tue, 31 Aug 2021 04:02:23 -0400 Subject: [ih] Better-than-Best Effort: Lower latency In-Reply-To: <11645.1630378903@hop.toad.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <20210826232743.GP50345@faui48f.informatik.uni-erlangen.de> <0AE51F14-2E54-4D99-A8A1-055CBC3201A1@transsys.com> <20210827183308.GW50345@faui48f.informatik.uni-erlangen.de> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <11645.1630378903@hop.toad.com> Message-ID: Hollow fiber was pioneered by Sir David Payne at the University of Southampton. Payne brought us erbium-doped fiber to extend the length of fiber that could carry a signal farther, before requiring a repeater, to over 1500 km. v On Mon, Aug 30, 2021 at 11:02 PM John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > Jack Haverty via Internet-history > wrote: > > Latency is possibly more important now than bandwidth, since while > > fiber can provide lots of bandwidth but no one has yet figured out how > > to move data faster than the speed of light. > > Just when you thought some trope was likely to be true, more facts > intrude... > > > https://www.nojitter.com/enterprise-networking/hollow-fiber-new-option-low-latency > > Turns out that microwave links are "faster than the speed of light > in fiber", but air-filled hollow fiber is apparently even faster. See > also: > > https://en.wikipedia.org/wiki/Spread_Networks > > And then there's going to low-earth orbit and back on straight lines, > via Starlink, rather than going "around" the globe on a great-circle > fiber route. This is claimed to be able to reduce NY/London latency, > even when there are no working inter-satellite laser links (and thus > multiple hops between orbit and ground stations are required): > > > https://circleid.com/posts/20191230_starlink_simulation_low_latency_without_intersatellite_laser_links/ > > It appears that there are lots of ways to skin the latency cat despite > the speed of light. > > John > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From stewart at serissa.com Tue Aug 31 05:56:34 2021 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 31 Aug 2021 08:56:34 -0400 Subject: [ih] More Topology, Packet Radio In-Reply-To: References: Message-ID: <4C467B3F-2803-4F68-823C-8A141D9D6803@serissa.com> I can contribute a few bits of information about the Packet Radio Network. In 1978 I designed the 1822 interface for the Xerox Alto. 
It was used to connect to the Bay Area Packet Radio Network and for connecting PARC-MAXC2 to the Arpanet. The Radios used an entirely different low level protocol than the IMPs. It was called CAP, for Channel Access Protocol. CAP was notable for a very small MTU - it had an 11 (16-bit) word header and up to 116 words of data. PARC used the PRNet for a while to encapsulate PUP traffic between the PARC building and the Xerox Advanced Systems Devision (Ben Wegbreit and Charles Simonyi) building. I wrote the CAP driver in Mesa, for connection to Hal Murray?s Mesa Gateway code. It may still be around, in the files Paul McJones put up on the CHM servers at http://xeroxalto.computerhistory.org/Indigo/Alto-1822/.index.html The BCPL test software for the 1822 is definitely there. I don?t know what language the radio code used. It was written by Collins Radio and they had (from SRI accounts) a truly stone age attitude about it. The master version was kept in a box of cards in the manager?s office. I found the writeup of the Xerox work in IEN-78 at http://www.watersprings.org/pub/rfc/ien/ien78.pdf -Larry I guess I am surprised by the comments here about the subleties of the 1822 distant host signaling. I don?t think the Alto board had optoisolaters and it did work in both local and distant host modes, but was never tried with very long cables or ground problems. From b_a_denny at yahoo.com Tue Aug 31 09:01:55 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 31 Aug 2021 16:01:55 +0000 (UTC) Subject: [ih] More Topology, Packet Radio In-Reply-To: <4C467B3F-2803-4F68-823C-8A141D9D6803@serissa.com> References: <4C467B3F-2803-4F68-823C-8A141D9D6803@serissa.com> Message-ID: <1002464625.1436263.1630425715360@mail.yahoo.com> Eventually under the SURAN contract we/SRI got a version of the radio code.? What we received was probably BCPL because at this point I am thinking I got asked to do a modification because I was probably the only one around with BCPL experience from the Packet Radio station software.? There is a chance it was in C.? The big thing I remember was the code reminded me of more like something that might have been written by people used to a lower level language, like assembler.? My memory might be wrong but I seem to remember Packet Radio had 256 byte packets.?? The different CAP version numbers indicated functionality in the Packet Radio network so if I remember correctly CAP6.2 included the Packet Radio Station while CAP7 was stationless. barbara On Tuesday, August 31, 2021, 05:56:47 AM PDT, Lawrence Stewart via Internet-history wrote: I can contribute a few bits of information about the Packet Radio Network. In 1978 I designed the 1822 interface for the Xerox Alto.? It was used to connect to the Bay Area Packet Radio Network and for connecting PARC-MAXC2 to the Arpanet. The Radios used an entirely different low level protocol than the IMPs.? It was called CAP, for Channel Access Protocol.? CAP was notable for a very small MTU - it had an 11 (16-bit) word header and up to 116 words of data. PARC used the PRNet for a while to encapsulate PUP traffic between the PARC building and the Xerox Advanced Systems Devision (Ben Wegbreit and Charles Simonyi) building. I wrote the CAP driver in Mesa, for connection to Hal Murray?s Mesa Gateway code.? It may still be around, in the files Paul McJones put up on the CHM servers at http://xeroxalto.computerhistory.org/Indigo/Alto-1822/.index.html The BCPL test software for the 1822 is definitely there. 
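A back-of-the-envelope check on that "very small MTU": 11 header words plus up to 116 data words is 127 sixteen-bit words, i.e. 254 bytes per CAP frame, which squares with the roughly 256-byte packet size recalled a little later in this thread. A tiny C illustration follows; the struct layout is invented for the example, and only the word counts come from Larry's note.

    #include <stdio.h>
    #include <stdint.h>

    #define CAP_HDR_WORDS   11     /* CAP header: 11 sixteen-bit words */
    #define CAP_DATA_WORDS  116    /* payload: up to 116 sixteen-bit words */

    struct cap_frame {
        uint16_t header[CAP_HDR_WORDS];
        uint16_t data[CAP_DATA_WORDS];
    };

    int main(void)
    {
        /* 11 + 116 = 127 words, i.e. at most 254 bytes per frame. */
        printf("max CAP frame: %d words = %zu bytes\n",
               CAP_HDR_WORDS + CAP_DATA_WORDS, sizeof(struct cap_frame));
        return 0;
    }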
I don?t know what language the radio code used. It was written by Collins Radio and they had (from SRI accounts) a truly stone age attitude about it.? The master version was kept in a box of cards in the manager?s office. I found the writeup of the Xerox work in IEN-78 at http://www.watersprings.org/pub/rfc/ien/ien78.pdf -Larry I guess I am surprised by the comments here about the subleties of the 1822 distant host signaling.? I don?t think the Alto board had optoisolaters and it did work in both local and distant host modes, but was never tried with very long cables or ground problems. -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Tue Aug 31 12:07:38 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 31 Aug 2021 19:07:38 +0000 (UTC) Subject: [ih] More Topology, Packet Radio In-Reply-To: <1002464625.1436263.1630425715360@mail.yahoo.com> References: <4C467B3F-2803-4F68-823C-8A141D9D6803@serissa.com> <1002464625.1436263.1630425715360@mail.yahoo.com> Message-ID: <859772975.1507758.1630436858010@mail.yahoo.com> Upon reflection I want to mention I think 6.2 supported multiple stations in a Packet Radio network.? I believe an earlier release supported a single station. I just don't remember if that version was something like Cap5 versus Cap6.? barbara On Tuesday, August 31, 2021, 09:01:55 AM PDT, Barbara Denny wrote: Eventually under the SURAN contract we/SRI got a version of the radio code.? What we received was probably BCPL because at this point I am thinking I got asked to do a modification because I was probably the only one around with BCPL experience from the Packet Radio station software.? There is a chance it was in C.? The big thing I remember was the code reminded me of more like something that might have been written by people used to a lower level language, like assembler.? My memory might be wrong but I seem to remember Packet Radio had 256 byte packets.?? The different CAP version numbers indicated functionality in the Packet Radio network so if I remember correctly CAP6.2 included the Packet Radio Station while CAP7 was stationless. barbara On Tuesday, August 31, 2021, 05:56:47 AM PDT, Lawrence Stewart via Internet-history wrote: I can contribute a few bits of information about the Packet Radio Network. In 1978 I designed the 1822 interface for the Xerox Alto.? It was used to connect to the Bay Area Packet Radio Network and for connecting PARC-MAXC2 to the Arpanet. The Radios used an entirely different low level protocol than the IMPs.? It was called CAP, for Channel Access Protocol.? CAP was notable for a very small MTU - it had an 11 (16-bit) word header and up to 116 words of data. PARC used the PRNet for a while to encapsulate PUP traffic between the PARC building and the Xerox Advanced Systems Devision (Ben Wegbreit and Charles Simonyi) building. I wrote the CAP driver in Mesa, for connection to Hal Murray?s Mesa Gateway code.? It may still be around, in the files Paul McJones put up on the CHM servers at http://xeroxalto.computerhistory.org/Indigo/Alto-1822/.index.html The BCPL test software for the 1822 is definitely there. I don?t know what language the radio code used. It was written by Collins Radio and they had (from SRI accounts) a truly stone age attitude about it.? The master version was kept in a box of cards in the manager?s office. 
I found the writeup of the Xerox work in IEN-78 at http://www.watersprings.org/pub/rfc/ien/ien78.pdf -Larry I guess I am surprised by the comments here about the subleties of the 1822 distant host signaling.? I don?t think the Alto board had optoisolaters and it did work in both local and distant host modes, but was never tried with very long cables or ground problems. -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Tue Aug 31 12:24:41 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 31 Aug 2021 19:24:41 +0000 (UTC) Subject: [ih] More topology In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> Message-ID: <1809861388.1530809.1630437881326@mail.yahoo.com> Hi Jack, Based on what you said, during RP testing I think I remember seeing where a host on one IMP couldn't even send packets to another host on a different port of the same IMP.? Just want to double check this was possible when you say any destination. barbara?? On Monday, August 30, 2021, 11:01:44 AM PDT, Jack Haverty wrote: Yes, but it was more complicated than that...a little more history: ARPANET used RFNMs (Request For Next Message) as a means of flow control.? Every message (packet/datagram/whatever) sent by a host would eventually cause a RFNM to be returned to the host.?? IIRC, hosts were allowed to send up to 8 messages to any particular destination.?? So there could be up to 8 pending RFNMs to come back to the host for traffic to that destination.?? If the host tried to send a 9th message to a particular destination, the IMP would block all transmissions from the host until those RFNMs arrived, by shutting off the hardware interface.?? So, if a host exceeded that limit of "8 in flight" to any destination, the IMP would block it, at least temporarily, from sending anything to any destination.?? That would probably be A Bad Thing. Hosts could implement a simple algorithm and simply send one message, and hold the next message until a RFNM came back.? But to increase throughput, it was advisable to implement some sort of "RFNM Counting" where the host would keep track of how many messages were "in flight", and avoid sending another message to a particular destination if that message would exceed the 8-in-flight constraint, and thereby avoid having the IMP shut off all of its traffic to all destinations.??? The TCP/IP I implemented for Unix did that kind of RFNM Counting on the ARPANET interface, but I'm not sure how other implementations handled the RFNM issues. Any "box" (such as a Port Expander) that was "spliced into" the connection between a host and an IMP had to perform two related functions. ? It had to act as a host itself in interacting with the IMP. ? It also had to "look like an IMP" to the host(s) that were attached to it.?? It had to essentially implement "timesharing" of the IMP's interface. The "1822 specifications" defined the interface between a Host and an IMP. ?? 
As the Arpanet grew, Roberts needed a > place to > >>>>>? ? ? have multiple PDP-10s providing service on the Arpanet.? > Not just > >>>>>? ? ? for the staff at ARPA but for many others as well.? > Uncapher was > >>>>>? ? ? cooperative and the rest followed easily. > >>>>> > >>>>>? ? ? The fact that it demonstrated the viability of packet-switching > >>>>>? ? ? over that distance was perhaps a bonus, but the same would have > >>>>>? ? ? been true almost anywhere in the continental U.S. at that time. > >>>>>? ? ? The more important factor was the quality of the relationship. > >>>>>? ? ? One could imagine setting up a small farm of machines at > various > >>>>>? ? ? other universities, non-profits, or selected for profit > companies > >>>>>? ? ? or even some military bases.? For each of these, cost, > >>>>>? ? ? contracting rules, the ambitions of the principal investigator, > >>>>>? ? ? and staff skill sets would have been the dominant concerns. > >>>>> > >>>>>? ? ? Steve > >>>>> > >>>> > >>>> -- > >>>> Please send any postal/overnight deliveries to: > >>>> Vint Cerf > >>>> 1435 Woodhurst Blvd > >>>> McLean, VA 22102 > >>>> 703-448-0965 > >>>> > >>>> until further notice > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd? McLean, VA 22102 703-448-0965 until further notice From vgcerf at gmail.com Tue Aug 31 12:36:50 2021 From: vgcerf at gmail.com (vinton cerf) Date: Tue, 31 Aug 2021 15:36:50 -0400 Subject: [ih] More topology In-Reply-To: <1809861388.1530809.1630437881326@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> <1809861388.1530809.1630437881326@mail.yahoo.com> Message-ID: actually, transmissions between two hosts on the same IMP were normal and called "incestuous" traffic - we found that the bulk of UCLA traffic was between the 360/91 and the Sigma-7 on the same campus! v On Tue, Aug 31, 2021 at 3:24 PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Hi Jack, > Based on what you said, during RP testing I think I remember seeing where > a host on one IMP couldn't even send packets to another host on a different > port of the same IMP. Just want to double check this was possible when you > say any destination. > barbara > On Monday, August 30, 2021, 11:01:44 AM PDT, Jack Haverty < > jack at 3kitty.org> wrote: > > Yes, but it was more complicated than that...a little more history: > > ARPANET used RFNMs (Request For Next Message) as a means of flow > control. Every message (packet/datagram/whatever) sent by a host would > eventually cause a RFNM to be returned to the host. IIRC, hosts were > allowed to send up to 8 messages to any particular destination. So there > could be up to 8 pending RFNMs to come back to the host for traffic to that > destination. 
If the host tried to send a 9th message to a particular > destination, the IMP would block all transmissions from the host until > those RFNMs arrived, by shutting off the hardware interface. So, if a > host exceeded that limit of "8 in flight" to any destination, the IMP would > block it, at least temporarily, from sending anything to any destination. > That would probably be A Bad Thing. > > Hosts could implement a simple algorithm and simply send one message, and > hold the next message until a RFNM came back. But to increase throughput, > it was advisable to implement some sort of "RFNM Counting" where the host > would keep track of how many messages were "in flight", and avoid sending > another message to a particular destination if that message would exceed > the 8-in-flight constraint, and thereby avoid having the IMP shut off all > of its traffic to all destinations. The TCP/IP I implemented for Unix > did that kind of RFNM Counting on the ARPANET interface, but I'm not sure > how other implementations handled the RFNM issues. > > Any "box" (such as a Port Expander) that was "spliced into" the > connection between a host and an IMP had to perform two related functions. > It had to act as a host itself in interacting with the IMP. It also had > to "look like an IMP" to the host(s) that were attached to it. It had to > essentially implement "timesharing" of the IMP's interface. > > The "1822 specifications" defined the interface between a Host and an > IMP. From it, engineers could build interfaces for their hosts to > connect them to the ARPANET. However (always a however...) the 1822 spec > appeared to be symmetrical. But it wasn't. Interfaces that met the 1822 > specs could successfully interact with an IMP. Also, if you plugged two > such 1822 interfaces back-to-back (as was done in connecting the 4 host to > a Port Expander), it would often work apparently fine. The "Host to IMP" > specification wasn't quite the same as the (internal-to-BBN) "IMP To Host" > specification; it was easy for people to treat it as if it was. > > But in that early Internet, there were lots of "outages" to be > investigated. I remember doing a "deep dive" into one such configuration > where equipment was "spliced into" a Host/IMP 1822 cable with unreliable > results. It turned out to be a hardware issue, with the root cause being > the invalid assumption that any 1822-compliant interface on a host could > also successfully emulate the 1822 interface on an IMP. > > This was a sufficiently common problem that I wrote IEN 139 "Hosts As > IMPs" to explain the situation (see > https://www.rfc-editor.org/ien/scanned/ien139.pdf ), to warn anyone > trying to do such things. But that IEN only addressed the low-level issues > of hardware, signals, voltages, and noise., and warned that to do such > things might require more effort to actually behave as an IMP. > > RFNMs, and RFNM counting, weren't specified in 1822, but to "look like an > IMP", a box such as a Port Expander faced design choices for providing > functionality such as RFNMs. I never knew how it did that, and how > successfully it "looked like an IMP" to all its attached hosts. E.g., if > all 4 hosts, thinking they were connected to their own dedicated IMP port, > did their own RFNM Counting, how did the PE make that all work reliably? > Maybe the situation just never came up often enough in practice to motivate > troubleshooting. 
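To make the per-destination RFNM counting described above concrete, here is a minimal sketch in C. Only the limit of 8 messages in flight per destination comes from the discussion; the table size, function names, and overall structure are invented for illustration and are not taken from the Unix NCP/TCP code mentioned, nor from the Port Expander.

    /* Minimal sketch of per-destination "RFNM counting" as described above.
     * Illustrative only: the 8-messages-in-flight limit per destination is
     * from the discussion; everything else here is an assumption. */
    #include <stdio.h>

    #define MAX_DEST        64   /* assumed size of the destination table */
    #define MAX_IN_FLIGHT    8   /* IMP limit: 8 messages awaiting RFNMs per destination */

    static int in_flight[MAX_DEST];  /* messages sent but not yet matched by a RFNM */

    /* Offer one message for 'dest'.  Returns 1 if it may go to the IMP now,
     * 0 if the host should hold it until a RFNM frees a slot -- so the IMP
     * never sees a 9th outstanding message and never blocks the whole
     * host interface. */
    int try_send(int dest)
    {
        if (in_flight[dest] >= MAX_IN_FLIGHT)
            return 0;             /* window full: hold the message */
        in_flight[dest]++;        /* message goes out to the IMP */
        return 1;
    }

    /* Called when a RFNM for 'dest' comes back from the IMP. */
    void rfnm_received(int dest)
    {
        if (in_flight[dest] > 0)
            in_flight[dest]--;    /* one slot opens for that destination */
    }

    int main(void)
    {
        int dest = 3, sent = 0, held = 0;

        for (int i = 0; i < 10; i++)      /* offer ten messages to one destination */
            try_send(dest) ? sent++ : held++;
        printf("sent %d, held %d\n", sent, held);              /* sent 8, held 2 */

        rfnm_received(dest);                                   /* a RFNM frees one slot */
        printf("after RFNM, can send: %d\n", try_send(dest));  /* prints 1 */
        return 0;
    }

A real host, or a box like the Port Expander multiplexing several hosts onto one IMP port, would also have to queue the held messages and reset counters such as these on events like the IMP flapping the ready line, which is exactly where the design choices discussed here come in.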
> > Not an issue now of course, but historically I wonder how much of the > early reliability issues in the Internet in the Fuzzy Peach era might have > been caused by such situations. > > /Jack > > PS - the same kind of thought has occurred to me with respect to NAT, > which seems to perform a similar "look like an Internet" function. > > > > > On 8/30/21 3:54 AM, Vint Cerf wrote: > > > two tcp connections could multiplex on a given IMP-IMP link - one RFNM per > IP packet regardless of the TCP layer "connection" v > > On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > Thanks Barbara -- yes, the port Expander was one of the things I called > "homegrown LANs". I never did learn how the PE handled RFNMs, in > particular how it interacted with its associated NCP host that it was > "stealing" RFNMs from. > /jack > > On 8/29/21 2:38 PM, Barbara Denny wrote: > > There was also SRI's port expander which increased the number of host > > ports available on an IMP. > > > > You can find the SRI technical report (1080-140-1) on the web. The > > title is "The Arpanet Imp Port Expander". > > > > barbara > > > > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via > > Internet-history wrote: > > > > > > Thanks Steve. I guess I was focussed only on the longhaul hops. The > > maps didn't show where host computers were attached. At the time > > (1981) the ARPANET consisted of several clusters of nodes (DC, Boston, > > LA, SF), almost like an early form of Metropolitan Area Network (MAN), > > plus single nodes scattered around the US and a satellite circuit to > > Europe. The "MAN" parts of the ARPANET were often richly connected, and > > the circuits might have even been in the same room or building or > > campus. So the long-haul circuits were in some sense more important in > > their scarcity and higher risk of problems from events such as marauding > > backhoes (we called such network outages "backhoe fade"). > > > > While I still remember...here's a little Internet History. > > > > The Internet, at the time in late 70s and early 80s, was in what I used > > to call the "Fuzzy Peach" stage of its development. In addition to > > computers directly attached to an IMP, there were various kinds of > > "local area networks", including things such as Packet Radio networks > > and a few homegrown LANs, which provided connectivity in a small > > geographical area. Each of those was attached to an ARPANET IMP > > somewhere close by, and the ARPANET provided all of the long-haul > > communications. The exception to that was the SATNET, which provided > > connectivity across the Atlantic, with a US node (in West Virginia > > IIRC), and a very active node in the UK. So the ARPANET was the > > "peach" and all of the local networks and computers in the US were the > > "fuzz", with SATNET attaching extending the Internet to Europe. > > > > That topology had some implications on the early Internet behavior. > > > > At the time, I was responsible for BBN's contract with ARPA in which one > > of the tasks was "make the core Internet reliable 24x7". That > > motivated quite frequent interactions with the ARPANET NOC, especially > > since it was literally right down the hall. > > > > TCP/IP was in use at the time, but most of the long-haul traffic flows > > were through the ARPANET. With directly-connected computers at each > > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol > > in use as the ARPANET TIPs became TACs. 
> > > > However... There's always a "however"... The ARPANET itself already > > implemented a lot of the functionality that TCP provided. ARPANET > > already provided reliable end-end byte streams, as well as flow control; > > the IMPs would allow only 8 "messages" in transit between two endpoints, > > and would physically block the computer from sending more than that. > > So IP datagrams never got lost, or reordered, or duplicated, and never > > had to be discarded or retransmitted. TCP/IP could do such things too, > > but in the "fuzzy peach" situation, it didn't have to do so. > > > > The prominent exception to the "fuzzy peach" was transatlantic traffic, > > which had to cross both the ARPANET and SATNET. The gateway > > interconnecting those two had to discard IP datagrams when they came in > > faster than they could go out. TCP would have to notice, retransmit, > > and reorder things at the destination. > > > > Peter Kirstein's crew at UCL were quite active in experimenting with the > > early Internet, and their TCP/IP traffic had to actually do all of the > > functions that the Fuzzy Peach so successfully hid from those directly > > attached to it. I think the experiences in that path motivated a lot > > of the early thinking about algorithms for TCP behavior, as well as > > gateway actions. > > > > Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or > > phone messages waiting for me every morning advising that "The Internet > > Is Broken!", either from Europe directly or through ARPA. One of the > > first troubleshooting steps, after making sure the gateway was running, > > was to see what was going on in the Fuzzy Peach which was so important > > to the operation of the Internet. Bob Hinden, Alan Sheltzer, and Mike > > Brescia might remember more since they were usually on the front lines. > > > > Much of the experimentation at the time involved interactions between > > the UK crowd and some machine at ISI. If the ARPANET was acting up, > > the bandwidth and latency of those TCP/IP traffic flows could gyrate > > wildly, and TCP/IP implementations didn't always respond well to such > > things, especially since they didn't typically occur when you were just > > using the Fuzzy Peach. > > > > Result - "The Internet Is Broken". That long-haul ARPA-ISI circuit was > > an important part of the path from Europe to California. If it was > > "down", the path became 3 or more additional hops (IMP hops, not IP), > > and became further loaded by additional traffic routing around the > > break. TCPs would timeout, retransmit, and make the problem worse > > while their algorithms tried to adapt. > > > > So that's probably what I was doing in the NOC when I noticed the > > importance of that ARPA<->USC ARPANET circuit. > > > > /Jack Haverty > > > > > > On 8/29/21 10:09 AM, Stephen Casner wrote: > > > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at > > > ISI which is 10 miles and 2 or 3 IMPs from USC. > > > > > > -- Steve > > > > > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > > > > > >> Actually July 1981 -- see > > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg > > (thanks, > > Noel!) > > >> The experience I recall was being in the ARPANET NOC for some > > reason and > > >> noticing the topology on the big map that covered one wall of the > > NOC. There > > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure > > where the > > >> PDP-10s were attached. 
Still just historically curious how the > > decision was > > >> made to configure that topology....but we'll probably never know. > > /Jack > > >> > > >> > > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > > >>> A look at some ARPAnet maps available on the web shows that in > > 1982 it was > > >>> four hops from ARPA to ISI, but by 1985 it was one hop. > > >>> Alex McKenzie > > >>> > > >>> On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via > > >>> Internet-history > > wrote: > > >>> This is the second email from Jack mentioning a > > point-to-point line > > >>> between the ARPA TIP and the ISI site. I don't believe that is an > > accurate > > >>> statement of the ARPAnet topology. In January 1975 there were 5 > hops > > >>> between the 2 on the shortest path. In October 1975 there were 6. > > I don't > > >>> believe it was ever one or two hops, but perhaps someone can find > > a network > > >>> map that proves me wrong. > > >>> Alex McKenzie > > >>> > > >>> On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via > > >>> Internet-history > > wrote: > > >>> Sounds right. My experience was well after that early > > experimental > > >>> period. The ARPANET was much bigger (1980ish) and the topology had > > >>> evolved over the years. There was a direct 56K line (IIRC between > > >>> ARPA-TIP and ISI) at that time. Lots of other circuits too, but in > > >>> normal conditions ARPA<->ISI traffic flowed directly over that > > long-haul > > >>> circuit. /Jack > > >>> > > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: > > >>>> Jack, the 4 node configuration had two paths between UCLA and SRI > and > > >>>> a two hop path to University of Utah. > > >>>> We did a variety of tests to force alternate routing (by congesting > > >>>> the first path). > > >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to > get > > >>>> this effect. Of course, we also crashed the Arpanet with these > early > > >>>> experiments. > > >>>> > > >>>> v > > >>>> > > >>>> > > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > > >>>> >> wrote: > > >>>> > > >>>> Thanks, Steve. I hadn't heard the details of why ISI was > > >>>> selected. I can believe that economics was probably a > > factor but > > >>>> the people and organizational issues could have been the > > dominant > > >>>> factors. > > >>>> > > >>>> IMHO, the "internet community" seems to often ignore > > non-technical > > >>>> influences on historical events, preferring to view > > everything in > > >>>> terms of RFCs, protocols, and such. I think the other > > influences > > >>>> are an important part of the story - hence my "economic lens". > > >>>> You just described a view through a manager's lens. > > >>>> > > >>>> /Jack > > >>>> > > >>>> PS - I always thought that the "ARPANET demo" aspect of that > > >>>> ARPANET timeframe was suspect, especially after I noticed > > that the > > >>>> ARPANET had been configured with a leased circuit directly > > between > > >>>> the nearby IMPs to ISI and ARPA. So as a demo of "packet > > >>>> switching", there wasn't much actual switching involved. The > 2 > > >>>> IMPs were more like multiplexors. > > >>>> > > >>>> I never heard whether that configuration was mandated by > > ARPA, or > > >>>> BBN decided to put a line in as a way to keep the customer > > happy, > > >>>> or if it just happened naturally as a result of the ongoing > > >>>> measurement of traffic flows and reconfiguration of the > topology > > >>>> to adapt as needed. Or something else. 
The interactivity > > of the > > >>>> service between a terminal at ARPA and a PDP-10 at ISI was > > >>>> noticeably better than other users (e.g., me) experienced. > > >>>> > > >>>> On 8/28/21 11:51 AM, Steve Crocker wrote: > > >>>>> Jack, > > >>>>> > > >>>>> You wrote: > > >>>>> > > >>>>> I recall many visits to ARPA on Wilson Blvd in > > Arlington, VA. > > >>>>> There were > > >>>>> terminals all over the building, pretty much all > connected > > >>>>> through the > > >>>>> ARPANET to a PDP-10 3000 miles away at USC in Marine > > Del Rey, > > >>>>> CA. The > > >>>>> technology of Packet Switching made it possible to keep a > > >>>>> PDP-10 busy > > >>>>> servicing all those Users and minimize the costs of > > everything, > > >>>>> including those expensive communications circuits. > > This was > > >>>>> circa > > >>>>> 1980. Users could efficiently share expensive > > communications, > > >>>>> and > > >>>>> expensive and distant computers -- although I always > > thought > > >>>>> ARPA's > > >>>>> choice to use a computer 3000 miles away was probably > > more to > > >>>>> demonstrate the viability of the ARPANET than because > > it was > > >>>>> cheaper > > >>>>> than using a computer somewhere near DC. > > >>>>> > > >>>>> > > >>>>> The choice of USC-ISI in Marina del Rey was due to other > > >>>>> factors. In 1972, with ARPA/IPTO (Larry Roberts) strong > > support, > > >>>>> Keith Uncapher moved his research group out of RAND. > Uncapher > > >>>>> explored a couple of possibilities and found a comfortable > > >>>>> institutional home with the University of Southern California > > >>>>> (USC) with the proviso the institute would be off campus. > > >>>>> Uncapher was solidly supportive of both ARPA/IPTO and of the > > >>>>> Arpanet project. As the Arpanet grew, Roberts needed a > > place to > > >>>>> have multiple PDP-10s providing service on the Arpanet. > > Not just > > >>>>> for the staff at ARPA but for many others as well. > > Uncapher was > > >>>>> cooperative and the rest followed easily. > > >>>>> > > >>>>> The fact that it demonstrated the viability of > packet-switching > > >>>>> over that distance was perhaps a bonus, but the same would > have > > >>>>> been true almost anywhere in the continental U.S. at that > time. > > >>>>> The more important factor was the quality of the > relationship. > > >>>>> One could imagine setting up a small farm of machines at > > various > > >>>>> other universities, non-profits, or selected for profit > > companies > > >>>>> or even some military bases. For each of these, cost, > > >>>>> contracting rules, the ambitions of the principal > investigator, > > >>>>> and staff skill sets would have been the dominant concerns. 
> > >>>>> > > >>>>> Steve > > >>>>> > > >>>> > > >>>> -- > > >>>> Please send any postal/overnight deliveries to: > > >>>> Vint Cerf > > >>>> 1435 Woodhurst Blvd > > >>>> McLean, VA 22102 > > >>>> 703-448-0965 > > >>>> > > >>>> until further notice > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org Internet-history at elists.isoc.org> > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Please send any postal/overnight deliveries to: Vint Cerf 1435 > Woodhurst Blvd McLean, VA 22102 703-448-0965 > until further notice > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From vgcerf at gmail.com Tue Aug 31 12:37:57 2021 From: vgcerf at gmail.com (vinton cerf) Date: Tue, 31 Aug 2021 15:37:57 -0400 Subject: [ih] More Topology, Packet Radio In-Reply-To: <859772975.1507758.1630436858010@mail.yahoo.com> References: <4C467B3F-2803-4F68-823C-8A141D9D6803@serissa.com> <1002464625.1436263.1630425715360@mail.yahoo.com> <859772975.1507758.1630436858010@mail.yahoo.com> Message-ID: that must have been when Barry Leiner was running the PRNET/SURAN programs? v On Tue, Aug 31, 2021 at 3:07 PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Upon reflection I want to mention I think 6.2 supported multiple stations > in a Packet Radio network. I believe an earlier release supported a single > station. I just don't remember if that version was something like Cap5 > versus Cap6. > barbara > On Tuesday, August 31, 2021, 09:01:55 AM PDT, Barbara Denny < > b_a_denny at yahoo.com> wrote: > > Eventually under the SURAN contract we/SRI got a version of the radio > code. What we received was probably BCPL because at this point I am > thinking I got asked to do a modification because I was probably the only > one around with BCPL experience from the Packet Radio station software. > There is a chance it was in C. The big thing I remember was the code > reminded me of more like something that might have been written by people > used to a lower level language, like assembler. > My memory might be wrong but I seem to remember Packet Radio had 256 byte > packets. > The different CAP version numbers indicated functionality in the Packet > Radio network so if I remember correctly CAP6.2 included the Packet Radio > Station while CAP7 was stationless. > barbara > On Tuesday, August 31, 2021, 05:56:47 AM PDT, Lawrence Stewart via > Internet-history wrote: > > I can contribute a few bits of information about the Packet Radio Network. > > In 1978 I designed the 1822 interface for the Xerox Alto. It was used to > connect to the Bay Area Packet Radio Network and for connecting PARC-MAXC2 > to the Arpanet. > > The Radios used an entirely different low level protocol than the IMPs. > It was called CAP, for Channel Access Protocol. CAP was notable for a very > small MTU - it had an 11 (16-bit) word header and up to 116 words of data. > > PARC used the PRNet for a while to encapsulate PUP traffic between the > PARC building and the Xerox Advanced Systems Devision (Ben Wegbreit and > Charles Simonyi) building. > > I wrote the CAP driver in Mesa, for connection to Hal Murray?s Mesa > Gateway code. 
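A quick back-of-the-envelope on the CAP frame size Larry describes: an 11-word header plus up to 116 data words, in 16-bit words, comes to 127 words, or 254 bytes per frame, which lines up with the roughly 256-byte Packet Radio packets Barbara recalls. A tiny C sketch, with the two constants taken from the text and everything else (names, layout) assumed:

    /* Back-of-the-envelope for the CAP frame size described above.
     * The 11-word header and 116-word data limit come from the text;
     * the names are invented for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    #define CAP_HEADER_WORDS   11    /* 16-bit words of header */
    #define CAP_MAX_DATA_WORDS 116   /* 16-bit words of payload */

    int main(void)
    {
        int total_words = CAP_HEADER_WORDS + CAP_MAX_DATA_WORDS;
        int total_bytes = total_words * (int)sizeof(uint16_t);

        printf("max CAP frame: %d words = %d bytes\n", total_words, total_bytes);
        /* prints: max CAP frame: 127 words = 254 bytes -- consistent with
         * the ~256-byte Packet Radio packets recalled earlier in the thread */
        return 0;
    }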
It may still be around, in the files Paul McJones put up on > the CHM servers at > http://xeroxalto.computerhistory.org/Indigo/Alto-1822/.index.html < > http://xeroxalto.computerhistory.org/Indigo/Alto-1822/.index.html> > The BCPL test software for the 1822 is definitely there. > > I don?t know what language the radio code used. It was written by Collins > Radio and they had (from SRI accounts) a truly stone age attitude about > it. The master version was kept in a box of cards in the manager?s office. > > I found the writeup of the Xerox work in IEN-78 at > http://www.watersprings.org/pub/rfc/ien/ien78.pdf < > http://www.watersprings.org/pub/rfc/ien/ien78.pdf> > > -Larry > > I guess I am surprised by the comments here about the subleties of the > 1822 distant host signaling. I don?t think the Alto board had > optoisolaters and it did work in both local and distant host modes, but was > never tried with very long cables or ground problems. > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From steve at shinkuro.com Tue Aug 31 12:48:18 2021 From: steve at shinkuro.com (Steve Crocker) Date: Tue, 31 Aug 2021 15:48:18 -0400 Subject: [ih] Local traffic In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <956081ab-f313-453e-5e1e-02f3600a03b9@3kitty.org> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> <1809861388.1530809.1630437881326@mail.yahoo.com> Message-ID: Vint, The use of IMPs for local traffic emerged almost instantly. In visits to various sites before and during the first IMP installations, it was not uncommon for someone to show me the network project they were working on. These usually consisted of two or three machines that were not yet connected to each other, but would be "real soon." The arrival of an IMP completely solved their problems. It subdivided the work, and once each computer was connected to the IMP, they were able to talk to each other. At UCLA, it would have been impossible to get the Sigma 7 and the IBM 360/91 connected directly to each other. My favorite instance of an IMP providing local connectivity involved just one host. UCSB had an IBM 360/75. They also had a long suffering effort to provide interprocess communication between different partitions so two jobs could talk to each other. I don't believe they ever got it working. However, once the IMP was installed, their problem was instantly solved. Interprocess communication between partitions traveled into and back out of the IMP. Steve On Tue, Aug 31, 2021 at 3:37 PM vinton cerf via Internet-history < internet-history at elists.isoc.org> wrote: > actually, transmissions between two hosts on the same IMP were normal and > called "incestuous" traffic - we found that the bulk of UCLA traffic was > between the 360/91 and the Sigma-7 on the same campus! 
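The UCSB story above has a direct modern analogue: two endpoints on one machine exchanging data by pushing it into the network stack and getting it back, much as the 360/75 partitions sent traffic into the IMP and back out. Here is a minimal sketch in C using loopback UDP sockets; this is purely an analogy and says nothing about how the 360/75 or the IMP actually implemented it, and the port number is arbitrary.

    /* Loopback analogue of using the IMP for local interprocess
     * communication: "partition A" sends to "partition B" on the same
     * machine via 127.0.0.1, never touching a wire.  Illustrative only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int a = socket(AF_INET, SOCK_DGRAM, 0);   /* "partition A" */
        int b = socket(AF_INET, SOCK_DGRAM, 0);   /* "partition B" */

        struct sockaddr_in addr_b = {0};
        addr_b.sin_family      = AF_INET;
        addr_b.sin_port        = htons(40001);            /* arbitrary local port */
        addr_b.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        if (bind(b, (struct sockaddr *)&addr_b, sizeof addr_b) < 0)
            return 1;

        /* A's data goes "into the network" and comes back out at B. */
        const char *msg = "hello from partition A";
        sendto(a, msg, strlen(msg), 0, (struct sockaddr *)&addr_b, sizeof addr_b);

        char buf[64] = {0};
        recvfrom(b, buf, sizeof buf - 1, 0, NULL, NULL);
        printf("partition B received: %s\n", buf);

        close(a);
        close(b);
        return 0;
    }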
> From jack at 3kitty.org Tue Aug 31 13:09:46 2021 From: jack at 3kitty.org (Jack Haverty) Date: Tue, 31 Aug 2021 13:09:46 -0700 Subject: [ih] More topology In-Reply-To: <1809861388.1530809.1630437881326@mail.yahoo.com> References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> <1809861388.1530809.1630437881326@mail.yahoo.com> Message-ID: I never did IMP programming, but my recollection is that a "destination" was an IMP port; but I don't remember how ARPANET "links" might have been involved, or whether or not flow control within a single IMP was handled differently. So, it's possible that the blocking you saw happened because the 8-in-flight limit was hit.?? That might seem unlikely inside of a single IMP, but if the destination host wasn't taking incoming messages as fast as the sender was transmitting them, I think it could happen and cause the blocking you saw. Or, if the host that "couldn't even send packets to another host on the same IMP" had already sent 8 packets to any other IMP/port in the ARPANET, the local IMP would have blocked the sending host and therefore it wouldn't even be able to send traffic anywhere, even to another local host. Or, if that host was a PE, some other computer attached to that PE might trigger such behavior.? That's the scenario I was thinking of when I posed the question of how the PE dealt with sharing RFNMs. An ARPANET host (e.g., a PE) also had to deal with other events, such as "flapping the ready line" and reset its counters appropriately.?? If such events weren't handled, that might explain what you saw. The 1970s IMP code (Honeywell 316 assembler) is online, if any historian is curious enough to dive into it.?? It's actually quite an elegant piece of software that probably violates all principles of software engineering known today (e.g., it uses self-modifying code). /Jack On 8/31/21 12:24 PM, Barbara Denny wrote: > Hi Jack, > > Based on what you said, during RP testing I think I remember seeing > where a host on one IMP couldn't even send packets to another host on > a different port of the same IMP. Just want to double check this was > possible when you say any destination. > > barbara > > On Monday, August 30, 2021, 11:01:44 AM PDT, Jack Haverty > wrote: > > > Yes, but it was more complicated than that...a little more history: > > ARPANET used RFNMs (Request For Next Message) as a means of flow > control.? Every message (packet/datagram/whatever) sent by a host > would eventually cause a RFNM to be returned to the host. IIRC, hosts > were allowed to send up to 8 messages to any particular destination.?? > So there could be up to 8 pending RFNMs to come back to the host for > traffic to that destination.?? If the host tried to send a 9th message > to a particular destination, the IMP would block all transmissions > from the host until those RFNMs arrived, by shutting off the hardware > interface.?? So, if a host exceeded that limit of "8 in flight" to any > destination, the IMP would block it, at least temporarily, from > sending anything to any destination.?? That would probably be A Bad Thing. 
> > Hosts could implement a simple algorithm and simply send one message, > and hold the next message until a RFNM came back.? But to increase > throughput, it was advisable to implement some sort of "RFNM Counting" > where the host would keep track of how many messages were "in flight", > and avoid sending another message to a particular destination if that > message would exceed the 8-in-flight constraint, and thereby avoid > having the IMP shut off all of its traffic to all destinations.??? The > TCP/IP I implemented for Unix did that kind of RFNM Counting on the > ARPANET interface, but I'm not sure how other implementations handled > the RFNM issues. > > Any "box" (such as a Port Expander) that was "spliced into" the > connection between a host and an IMP had to perform two related > functions. ? It had to act as a host itself in interacting with the > IMP. ? It also had to "look like an IMP" to the host(s) that were > attached to it.?? It had to essentially implement "timesharing" of the > IMP's interface. > > The "1822 specifications" defined the interface between a Host and an > IMP. ?? From it, engineers could build interfaces for their hosts to > connect them to the ARPANET.? However (always a however...) the 1822 > spec appeared to be symmetrical.? But it wasn't. Interfaces that met > the 1822 specs could successfully interact with an IMP. ? Also, if you > plugged two such 1822 interfaces back-to-back (as was done in > connecting the 4 host to a Port Expander), it would often work > apparently fine.?? The "Host to IMP" specification wasn't quite the > same as the (internal-to-BBN) "IMP To Host" specification;? it was > easy for people to treat it as if it was. > > But in that early Internet, there were lots of "outages" to be > investigated.? I remember doing a "deep dive" into one such > configuration where equipment was "spliced into" a Host/IMP 1822 cable > with unreliable results.?? It turned out to be a hardware issue, with > the root cause being the invalid assumption that any 1822-compliant > interface on a host could also successfully emulate the 1822 interface > on an IMP. > > This was a sufficiently common problem that I wrote IEN 139 "Hosts As > IMPs" to explain the situation (see > https://www.rfc-editor.org/ien/scanned/ien139.pdf > ), to warn anyone > trying to do such things.? But that IEN only addressed the low-level > issues of hardware, signals, voltages, and noise., and warned that to > do such things might require more effort to actually behave as an IMP. > > RFNMs, and RFNM counting, weren't specified in 1822, but to "look like > an IMP", a box such as a Port Expander faced design choices for > providing functionality such as RFNMs.? I never knew how it did that, > and how successfully it "looked like an IMP" to all its attached > hosts.?? E.g., if all 4 hosts, thinking they were connected to their > own dedicated IMP port, did their own RFNM Counting, how did the PE > make that all work reliably??? Maybe the situation just never came up > often enough in practice to motivate troubleshooting. > > Not an issue now of course, but historically I wonder how much of the > early reliability issues in the Internet in the Fuzzy Peach era might > have been caused by such situations. > > /Jack > > PS - the same kind of thought has occurred to me with respect to NAT, > which seems to perform a similar "look like an Internet" function. 
> > > > > On 8/30/21 3:54 AM, Vint Cerf wrote: > two tcp connections could multiplex on a given IMP-IMP link - one RFNM > per IP packet regardless of the TCP layer "connection" > v > > > On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history > > wrote: > > Thanks Barbara -- yes, the port Expander was one of the things I > called > "homegrown LANs".? I never did learn how the PE handled RFNMs, in > particular how it interacted with its associated NCP host that it was > "stealing" RFNMs from. > /jack > > On 8/29/21 2:38 PM, Barbara Denny wrote: > > There was also SRI's port expander which increased the number of > host > > ports available on an IMP. > > > > You can find the SRI technical report (1080-140-1) on the web. The > > title is "The Arpanet Imp Port Expander". > > > > barbara > > > > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via > > Internet-history > wrote: > > > > > > Thanks Steve.?? I guess I was focussed only on the longhaul > hops. The > > maps didn't show where host computers were attached. At the time > > (1981) the ARPANET consisted of several clusters of nodes (DC, > Boston, > > LA, SF), almost like an early form of Metropolitan Area Network > (MAN), > > plus single nodes scattered around the US and a satellite circuit to > > Europe.? The "MAN" parts of the ARPANET were often richly > connected, and > > the circuits might have even been in the same room or building or > > campus.?? So the long-haul circuits were in some sense more > important in > > their scarcity and higher risk of problems from events such as > marauding > > backhoes (we called such network outages "backhoe fade"). > > > > While I still remember...here's a little Internet History. > > > > The Internet, at the time in late 70s and early 80s, was in what > I used > > to call the "Fuzzy Peach" stage of its development.? In addition to > > computers directly attached to an IMP, there were various kinds of > > "local area networks", including things such as Packet Radio > networks > > and a few homegrown LANs, which provided connectivity in a small > > geographical area.? Each of those was attached to an ARPANET IMP > > somewhere close by, and the ARPANET provided all of the long-haul > > communications.?? The exception to that was the SATNET, which > provided > > connectivity across the Atlantic, with a US node (in West Virginia > > IIRC), and a very active node in the UK. So the ARPANET was the > > "peach" and all of the local networks and computers in the US > were the > > "fuzz", with SATNET attaching extending the Internet to Europe. > > > > That topology had some implications on the early Internet behavior. > > > > At the time, I was responsible for BBN's contract with ARPA in > which one > > of the tasks was "make the core Internet reliable 24x7".?? That > > motivated quite frequent interactions with the ARPANET NOC, > especially > > since it was literally right down the hall. > > > > TCP/IP was in use at the time, but most of the long-haul traffic > flows > > were through the ARPANET.? With directly-connected computers at each > > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the > protocol > > in use as the ARPANET TIPs became TACs. > > > > However... ? There's always a "however"... The ARPANET itself > already > > implemented a lot of the functionality that TCP provided. 
ARPANET > > already provided reliable end-end byte streams, as well as flow > control; > > the IMPs would allow only 8 "messages" in transit between two > endpoints, > > and would physically block the computer from sending more than that. > > So IP datagrams never got lost, or reordered, or duplicated, and > never > > had to be discarded or retransmitted. TCP/IP could do such > things too, > > but in the "fuzzy peach" situation, it didn't have to do so. > > > > The prominent exception to the "fuzzy peach" was transatlantic > traffic, > > which had to cross both the ARPANET and SATNET.?? The gateway > > interconnecting those two had to discard IP datagrams when they > came in > > faster than they could go out.?? TCP would have to notice, > retransmit, > > and reorder things at the destination. > > > > Peter Kirstein's crew at UCL were quite active in experimenting > with the > > early Internet, and their TCP/IP traffic had to actually do all > of the > > functions that the Fuzzy Peach so successfully hid from those > directly > > attached to it.?? I think the experiences in that path motivated > a lot > > of the early thinking about algorithms for TCP behavior, as well as > > gateway actions. > > > > Europe is 5+ hours ahead of Boston, so I learned to expect > emails and/or > > phone messages waiting for me every morning advising that "The > Internet > > Is Broken!", either from Europe directly or through ARPA.? One > of the > > first troubleshooting steps, after making sure the gateway was > running, > > was to see what was going on in the Fuzzy Peach which was so > important > > to the operation of the Internet.?? Bob Hinden, Alan Sheltzer, > and Mike > > Brescia might remember more since they were usually on the front > lines. > > > > Much of the experimentation at the time involved interactions > between > > the UK crowd and some machine at ISI.?? If the ARPANET was > acting up, > > the bandwidth and latency of those TCP/IP traffic flows could gyrate > > wildly, and TCP/IP implementations didn't always respond well to > such > > things, especially since they didn't typically occur when you > were just > > using the Fuzzy Peach. > > > > Result - "The Internet Is Broken".?? That long-haul ARPA-ISI > circuit was > > an important part of the path from Europe to California.?? If it was > > "down", the path became 3 or more additional hops (IMP hops, not > IP), > > and became further loaded by additional traffic routing around the > > break.?? TCPs would timeout, retransmit, and make the problem worse > > while their algorithms tried to adapt. > > > > So that's probably what I was doing in the NOC when I noticed the > > importance of that ARPA<->USC ARPANET circuit. > > > > /Jack Haverty > > > > > > On 8/29/21 10:09 AM, Stephen Casner wrote: > > > Jack, that map shows one hop from ARPA to USC, but the PDP10s > were at > > > ISI which is 10 miles and 2 or 3 IMPs from USC. > > > > > > ? ? ? ? -- Steve > > > > > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: > > > > > >> Actually July 1981 -- see > > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg > > > > >(thanks, > > Noel!) > > >> The experience I recall was being in the ARPANET NOC for some > > reason and > > >> noticing the topology on the big map that covered one wall of > the > > NOC.? There > > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure > > where the > > >> PDP-10s were attached.? 
Still just historically curious how the > > decision was > > >> made to configure that topology....but we'll probably never > know. > > /Jack > > >> > > >> > > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: > > >>>? ? A look at some ARPAnet maps available on the web shows > that in > > 1982 it was > > >>> four hops from ARPA to ISI, but by 1985 it was one hop. > > >>> Alex McKenzie > > >>> > > >>>? ? ? On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex > McKenzie via > > >>> Internet-history > > >> wrote: > > >>>? ? ? This is the second email from Jack mentioning a > > point-to-point line > > >>> between the ARPA TIP and the ISI site.? I don't believe that > is an > > accurate > > >>> statement of the ARPAnet topology.? In January 1975 there > were 5 hops > > >>> between the 2 on the shortest path. In October 1975 there > were 6. > > I don't > > >>> believe it was ever one or two hops, but perhaps someone can > find > > a network > > >>> map that proves me wrong. > > >>> Alex McKenzie > > >>> > > >>>? ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack > Haverty via > > >>> Internet-history > > >> wrote: > > >>>? ? ? Sounds right.? My experience was well after that early > > experimental > > >>> period.? The ARPANET was much bigger (1980ish) and the > topology had > > >>> evolved over the years.? There was a direct 56K line (IIRC > between > > >>> ARPA-TIP and ISI) at that time.? Lots of other circuits too, > but in > > >>> normal conditions ARPA<->ISI traffic flowed directly over that > > long-haul > > >>> circuit.? /Jack > > >>> > > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: > > >>>> Jack, the 4 node configuration had two paths between UCLA > and SRI and > > >>>> a two hop path to University of Utah. > > >>>> We did a variety of tests to force alternate routing (by > congesting > > >>>> the first path). > > >>>> I used traffic generators in the IMPs and in the UCLA > Sigma-7 to get > > >>>> this effect. Of course, we also crashed the Arpanet with > these early > > >>>> experiments. > > >>>> > > >>>> v > > >>>> > > >>>> > > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty > > > > > > >>>> > >>> wrote: > > >>>> > > >>>>? ? ? Thanks, Steve.? I hadn't heard the details of why ISI was > > >>>>? ? ? selected.? I can believe that economics was probably a > > factor but > > >>>>? ? ? the people and organizational issues could have been the > > dominant > > >>>>? ? ? factors. > > >>>> > > >>>>? ? ? IMHO, the "internet community" seems to often ignore > > non-technical > > >>>>? ? ? influences on historical events, preferring to view > > everything in > > >>>>? ? ? terms of RFCs, protocols, and such.? I think the other > > influences > > >>>>? ? ? are an important part of the story - hence my > "economic lens". > > >>>>? ? ? You just described a view through a manager's lens. > > >>>> > > >>>>? ? ? /Jack > > >>>> > > >>>>? ? ? PS - I always thought that the "ARPANET demo" aspect > of that > > >>>>? ? ? ARPANET timeframe was suspect, especially after I noticed > > that the > > >>>>? ? ? ARPANET had been configured with a leased circuit > directly > > between > > >>>>? ? ? the nearby IMPs to ISI and ARPA.? So as a demo of "packet > > >>>>? ? ? switching", there wasn't much actual switching > involved.? The 2 > > >>>>? ? ? IMPs were more like multiplexors. > > >>>> > > >>>>? ? ? I never heard whether that configuration was mandated by > > ARPA, or > > >>>>? ? ? BBN decided to put a line in as a way to keep the > customer > > happy, > > >>>>? ? ? 
or if it just happened naturally as a result of the > ongoing > > >>>>? ? ? measurement of traffic flows and reconfiguration of > the topology > > >>>>? ? ? to adapt as needed. Or something else.? The interactivity > > of the > > >>>>? ? ? service between a terminal at ARPA and a PDP-10 at ISI was > > >>>>? ? ? noticeably better than other users (e.g., me) experienced. > > >>>> > > >>>>? ? ? On 8/28/21 11:51 AM, Steve Crocker wrote: > > >>>>>? ? ? Jack, > > >>>>> > > >>>>>? ? ? You wrote: > > >>>>> > > >>>>>? ? ? ? ? I recall many visits to ARPA on Wilson Blvd in > > Arlington, VA. > > >>>>>? ? ? ? ? There were > > >>>>>? ? ? ? ? terminals all over the building, pretty much all > connected > > >>>>>? ? ? ? ? through the > > >>>>>? ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine > > Del Rey, > > >>>>>? ? ? ? ? CA.? The > > >>>>>? ? ? ? ? technology of Packet Switching made it possible > to keep a > > >>>>>? ? ? ? ? PDP-10 busy > > >>>>>? ? ? ? ? servicing all those Users and minimize the costs of > > everything, > > >>>>>? ? ? ? ? including those expensive communications circuits. > > This was > > >>>>>? ? ? ? ? circa > > >>>>>? ? ? ? ? 1980. Users could efficiently share expensive > > communications, > > >>>>> and > > >>>>>? ? ? ? ? expensive and distant computers -- although I always > > thought > > >>>>>? ? ? ? ? ARPA's > > >>>>>? ? ? ? ? choice to use a computer 3000 miles away was > probably > > more to > > >>>>>? ? ? ? ? demonstrate the viability of the ARPANET than > because > > it was > > >>>>>? ? ? ? ? cheaper > > >>>>>? ? ? ? ? than using a computer somewhere near DC. > > >>>>> > > >>>>> > > >>>>>? ? ? The choice of USC-ISI in Marina del Rey was due to other > > >>>>>? ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong > > support, > > >>>>>? ? ? Keith Uncapher moved his research group out of RAND.? > Uncapher > > >>>>>? ? ? explored a couple of possibilities and found a > comfortable > > >>>>>? ? ? institutional home with the University of Southern > California > > >>>>>? ? ? (USC) with the proviso the institute would be off campus. > > >>>>>? ? ? Uncapher was solidly supportive of both ARPA/IPTO and > of the > > >>>>>? ? ? Arpanet project. As the Arpanet grew, Roberts needed a > > place to > > >>>>>? ? ? have multiple PDP-10s providing service on the Arpanet. > > Not just > > >>>>>? ? ? for the staff at ARPA but for many others as well. > > Uncapher was > > >>>>>? ? ? cooperative and the rest followed easily. > > >>>>> > > >>>>>? ? ? The fact that it demonstrated the viability of > packet-switching > > >>>>>? ? ? over that distance was perhaps a bonus, but the same > would have > > >>>>>? ? ? been true almost anywhere in the continental U.S. at > that time. > > >>>>>? ? ? The more important factor was the quality of the > relationship. > > >>>>>? ? ? One could imagine setting up a small farm of machines at > > various > > >>>>>? ? ? other universities, non-profits, or selected for profit > > companies > > >>>>>? ? ? or even some military bases.? For each of these, cost, > > >>>>>? ? ? contracting rules, the ambitions of the principal > investigator, > > >>>>>? ? ? and staff skill sets would have been the dominant > concerns. > > >>>>> > > >>>>>? ? ? 
Steve > > >>>>> > > >>>> > > >>>> -- > > >>>> Please send any postal/overnight deliveries to: > > >>>> Vint Cerf > > >>>> 1435 Woodhurst Blvd > > >>>> McLean, VA 22102 > > >>>> 703-448-0965 > > >>>> > > >>>> until further notice > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > > From amckenzie3 at yahoo.com Tue Aug 31 13:26:13 2021 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Tue, 31 Aug 2021 20:26:13 +0000 (UTC) Subject: [ih] IMP System control mechanisms In-Reply-To: References: <1aa8e320-5ab2-4e32-9596-d4451e0b936d@dcrocker.net> <4508e459-b461-77aa-48b5-147845d03ab8@dcrocker.net> <3e011c8c-0097-c0aa-9082-012315a1738c@3kitty.org> <637fd7cb-7d21-c978-6676-365a7a33439b@3kitty.org> <86019613.672324.1630245830259@mail.yahoo.com> <395932638.701311.1630249339681@mail.yahoo.com> <6302a806-12bc-7d49-9fb0-98154810461b@3kitty.org> <1434236203.765878.1630273093446@mail.yahoo.com> <56c8c108-6ea6-7843-18ee-672f5eb7e1e5@3kitty.org> <1809861388.1530809.1630437881326@mail.yahoo.com> < e80c7800-f4c4-4804-d3d3-08fe468735c6@3kitty.org> Message-ID: <391078.1538682.1630441573503@mail.yahoo.com> The IMP System's internal control mechanisms applied to all traffic.? Packets entering an IMP and being delivered to a Host on that IMP were subject to the same handling as packets to be delivered to a Host on a different IMP.? When I use the word "Host" it could be the same real Host, a different real Host, or one of the internal (to the IMP) "fake Hosts".? The IMP had only one control structure - there were no special cases. Alex On Tuesday, August 31, 2021, 04:10:05 PM EDT, Jack Haverty via Internet-history wrote: I never did IMP programming, but my recollection is that a "destination" was an IMP port; but I don't remember how ARPANET "links" might have been involved, or whether or not flow control within a single IMP was handled differently. So, it's possible that the blocking you saw happened because the 8-in-flight limit was hit.?? That might seem unlikely inside of a single IMP, but if the destination host wasn't taking incoming messages as fast as the sender was transmitting them, I think it could happen and cause the blocking you saw. Or, if the host that "couldn't even send packets to another host on the same IMP" had already sent 8 packets to any other IMP/port in the ARPANET, the local IMP would have blocked the sending host and therefore it wouldn't even be able to send traffic anywhere, even to another local host. Or, if that host was a PE, some other computer attached to that PE might trigger such behavior.? That's the scenario I was thinking of when I posed the question of how the PE dealt with sharing RFNMs. An ARPANET host (e.g., a PE) also had to deal with other events, such as "flapping the ready line" and reset its counters appropriately.?? If such events weren't handled, that might explain what you saw. The 1970s IMP code (Honeywell 316 assembler) is online, if any historian is curious enough to dive into it.?? It's actually quite an elegant piece of software that probably violates all principles of software engineering known today (e.g., it uses self-modifying code). 
/Jack On 8/31/21 12:24 PM, Barbara Denny wrote: > Hi Jack, > > Based on what you said, during RP testing I think I remember seeing > where a host on one IMP couldn't even send packets to another host on > a different port of the same IMP. Just want to double check this was > possible when you say any destination. > > barbara > > On Monday, August 30, 2021, 11:01:44 AM PDT, Jack Haverty > wrote: > > > Yes, but it was more complicated than that...a little more history: > > ARPANET used RFNMs (Request For Next Message) as a means of flow > control.? Every message (packet/datagram/whatever) sent by a host > would eventually cause a RFNM to be returned to the host. IIRC, hosts > were allowed to send up to 8 messages to any particular destination.?? > So there could be up to 8 pending RFNMs to come back to the host for > traffic to that destination.?? If the host tried to send a 9th message > to a particular destination, the IMP would block all transmissions > from the host until those RFNMs arrived, by shutting off the hardware > interface.?? So, if a host exceeded that limit of "8 in flight" to any > destination, the IMP would block it, at least temporarily, from > sending anything to any destination.?? That would probably be A Bad Thing. > > Hosts could implement a simple algorithm and simply send one message, > and hold the next message until a RFNM came back.? But to increase > throughput, it was advisable to implement some sort of "RFNM Counting" > where the host would keep track of how many messages were "in flight", > and avoid sending another message to a particular destination if that > message would exceed the 8-in-flight constraint, and thereby avoid > having the IMP shut off all of its traffic to all destinations.??? The > TCP/IP I implemented for Unix did that kind of RFNM Counting on the > ARPANET interface, but I'm not sure how other implementations handled > the RFNM issues. > > Any "box" (such as a Port Expander) that was "spliced into" the > connection between a host and an IMP had to perform two related > functions. ? It had to act as a host itself in interacting with the > IMP. ? It also had to "look like an IMP" to the host(s) that were > attached to it.?? It had to essentially implement "timesharing" of the > IMP's interface. > > The "1822 specifications" defined the interface between a Host and an > IMP. ?? From it, engineers could build interfaces for their hosts to > connect them to the ARPANET.? However (always a however...) the 1822 > spec appeared to be symmetrical.? But it wasn't. Interfaces that met > the 1822 specs could successfully interact with an IMP. ? Also, if you > plugged two such 1822 interfaces back-to-back (as was done in > connecting the 4 host to a Port Expander), it would often work > apparently fine.?? The "Host to IMP" specification wasn't quite the > same as the (internal-to-BBN) "IMP To Host" specification;? it was > easy for people to treat it as if it was. > > But in that early Internet, there were lots of "outages" to be > investigated.? I remember doing a "deep dive" into one such > configuration where equipment was "spliced into" a Host/IMP 1822 cable > with unreliable results.?? It turned out to be a hardware issue, with > the root cause being the invalid assumption that any 1822-compliant > interface on a host could also successfully emulate the 1822 interface > on an IMP. 
> > This was a sufficiently common problem that I wrote IEN 139 "Hosts As > IMPs" to explain the situation (see > https://www.rfc-editor.org/ien/scanned/ien139.pdf > ), to warn anyone > trying to do such things.? But that IEN only addressed the low-level > issues of hardware, signals, voltages, and noise., and warned that to > do such things might require more effort to actually behave as an IMP. > > RFNMs, and RFNM counting, weren't specified in 1822, but to "look like > an IMP", a box such as a Port Expander faced design choices for > providing functionality such as RFNMs.? I never knew how it did that, > and how successfully it "looked like an IMP" to all its attached > hosts.?? E.g., if all 4 hosts, thinking they were connected to their > own dedicated IMP port, did their own RFNM Counting, how did the PE > make that all work reliably??? Maybe the situation just never came up > often enough in practice to motivate troubleshooting. > > Not an issue now of course, but historically I wonder how much of the > early reliability issues in the Internet in the Fuzzy Peach era might > have been caused by such situations. > > /Jack > > PS - the same kind of thought has occurred to me with respect to NAT, > which seems to perform a similar "look like an Internet" function. > > > > > On 8/30/21 3:54 AM, Vint Cerf wrote: > two tcp connections could multiplex on a given IMP-IMP link - one RFNM > per IP packet regardless of the TCP layer "connection" > v > > > On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history > > wrote: > >? ? Thanks Barbara -- yes, the port Expander was one of the things I >? ? called >? ? "homegrown LANs".? I never did learn how the PE handled RFNMs, in >? ? particular how it interacted with its associated NCP host that it was >? ? "stealing" RFNMs from. >? ? /jack > >? ? On 8/29/21 2:38 PM, Barbara Denny wrote: >? ? > There was also SRI's port expander which increased the number of >? ? host >? ? > ports available on an IMP. >? ? > >? ? > You can find the SRI technical report (1080-140-1) on the web. The >? ? > title is "The Arpanet Imp Port Expander". >? ? > >? ? > barbara >? ? > >? ? > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via >? ? > Internet-history ? ? > wrote: >? ? > >? ? > >? ? > Thanks Steve.?? I guess I was focussed only on the longhaul >? ? hops. The >? ? > maps didn't show where host computers were attached. At the time >? ? > (1981) the ARPANET consisted of several clusters of nodes (DC, >? ? Boston, >? ? > LA, SF), almost like an early form of Metropolitan Area Network >? ? (MAN), >? ? > plus single nodes scattered around the US and a satellite circuit to >? ? > Europe.? The "MAN" parts of the ARPANET were often richly >? ? connected, and >? ? > the circuits might have even been in the same room or building or >? ? > campus.?? So the long-haul circuits were in some sense more >? ? important in >? ? > their scarcity and higher risk of problems from events such as >? ? marauding >? ? > backhoes (we called such network outages "backhoe fade"). >? ? > >? ? > While I still remember...here's a little Internet History. >? ? > >? ? > The Internet, at the time in late 70s and early 80s, was in what >? ? I used >? ? > to call the "Fuzzy Peach" stage of its development.? In addition to >? ? > computers directly attached to an IMP, there were various kinds of >? ? > "local area networks", including things such as Packet Radio >? ? networks >? ? > and a few homegrown LANs, which provided connectivity in a small >? ? > geographical area.? 
Each of those was attached to an ARPANET IMP >? ? > somewhere close by, and the ARPANET provided all of the long-haul >? ? > communications.?? The exception to that was the SATNET, which >? ? provided >? ? > connectivity across the Atlantic, with a US node (in West Virginia >? ? > IIRC), and a very active node in the UK. So the ARPANET was the >? ? > "peach" and all of the local networks and computers in the US >? ? were the >? ? > "fuzz", with SATNET attaching extending the Internet to Europe. >? ? > >? ? > That topology had some implications on the early Internet behavior. >? ? > >? ? > At the time, I was responsible for BBN's contract with ARPA in >? ? which one >? ? > of the tasks was "make the core Internet reliable 24x7".?? That >? ? > motivated quite frequent interactions with the ARPANET NOC, >? ? especially >? ? > since it was literally right down the hall. >? ? > >? ? > TCP/IP was in use at the time, but most of the long-haul traffic >? ? flows >? ? > were through the ARPANET.? With directly-connected computers at each >? ? > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the >? ? protocol >? ? > in use as the ARPANET TIPs became TACs. >? ? > >? ? > However... ? There's always a "however"... The ARPANET itself >? ? already >? ? > implemented a lot of the functionality that TCP provided. ARPANET >? ? > already provided reliable end-end byte streams, as well as flow >? ? control; >? ? > the IMPs would allow only 8 "messages" in transit between two >? ? endpoints, >? ? > and would physically block the computer from sending more than that. >? ? > So IP datagrams never got lost, or reordered, or duplicated, and >? ? never >? ? > had to be discarded or retransmitted. TCP/IP could do such >? ? things too, >? ? > but in the "fuzzy peach" situation, it didn't have to do so. >? ? > >? ? > The prominent exception to the "fuzzy peach" was transatlantic >? ? traffic, >? ? > which had to cross both the ARPANET and SATNET.?? The gateway >? ? > interconnecting those two had to discard IP datagrams when they >? ? came in >? ? > faster than they could go out.?? TCP would have to notice, >? ? retransmit, >? ? > and reorder things at the destination. >? ? > >? ? > Peter Kirstein's crew at UCL were quite active in experimenting >? ? with the >? ? > early Internet, and their TCP/IP traffic had to actually do all >? ? of the >? ? > functions that the Fuzzy Peach so successfully hid from those >? ? directly >? ? > attached to it.?? I think the experiences in that path motivated >? ? a lot >? ? > of the early thinking about algorithms for TCP behavior, as well as >? ? > gateway actions. >? ? > >? ? > Europe is 5+ hours ahead of Boston, so I learned to expect >? ? emails and/or >? ? > phone messages waiting for me every morning advising that "The >? ? Internet >? ? > Is Broken!", either from Europe directly or through ARPA.? One >? ? of the >? ? > first troubleshooting steps, after making sure the gateway was >? ? running, >? ? > was to see what was going on in the Fuzzy Peach which was so >? ? important >? ? > to the operation of the Internet.?? Bob Hinden, Alan Sheltzer, >? ? and Mike >? ? > Brescia might remember more since they were usually on the front >? ? lines. >? ? > >? ? > Much of the experimentation at the time involved interactions >? ? between >? ? > the UK crowd and some machine at ISI.?? If the ARPANET was >? ? acting up, >? ? > the bandwidth and latency of those TCP/IP traffic flows could gyrate >? ? > wildly, and TCP/IP implementations didn't always respond well to >? ? such >? 
? > things, especially since they didn't typically occur when you >? ? were just >? ? > using the Fuzzy Peach. >? ? > >? ? > Result - "The Internet Is Broken".?? That long-haul ARPA-ISI >? ? circuit was >? ? > an important part of the path from Europe to California.?? If it was >? ? > "down", the path became 3 or more additional hops (IMP hops, not >? ? IP), >? ? > and became further loaded by additional traffic routing around the >? ? > break.?? TCPs would timeout, retransmit, and make the problem worse >? ? > while their algorithms tried to adapt. >? ? > >? ? > So that's probably what I was doing in the NOC when I noticed the >? ? > importance of that ARPA<->USC ARPANET circuit. >? ? > >? ? > /Jack Haverty >? ? > >? ? > >? ? > On 8/29/21 10:09 AM, Stephen Casner wrote: >? ? > > Jack, that map shows one hop from ARPA to USC, but the PDP10s >? ? were at >? ? > > ISI which is 10 miles and 2 or 3 IMPs from USC. >? ? > > >? ? > > ? ? ? ? -- Steve >? ? > > >? ? > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote: >? ? > > >? ? > >> Actually July 1981 -- see >? ? > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg >? ? >? ? > ? ? >? ? >(thanks, >? ? > Noel!) >? ? > >> The experience I recall was being in the ARPANET NOC for some >? ? > reason and >? ? > >> noticing the topology on the big map that covered one wall of >? ? the >? ? > NOC.? There >? ? > >> were 2 ARPANET nodes at that time labelled ISI, but I'm not sure >? ? > where the >? ? > >> PDP-10s were attached.? Still just historically curious how the >? ? > decision was >? ? > >> made to configure that topology....but we'll probably never >? ? know. >? ? > /Jack >? ? > >> >? ? > >> >? ? > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote: >? ? > >>>? ? A look at some ARPAnet maps available on the web shows >? ? that in >? ? > 1982 it was >? ? > >>> four hops from ARPA to ISI, but by 1985 it was one hop. >? ? > >>> Alex McKenzie >? ? > >>> >? ? > >>>? ? ? On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex >? ? McKenzie via >? ? > >>> Internet-history ? ? >? ? > ? ? >> wrote: >? ? > >>>? ? ? This is the second email from Jack mentioning a >? ? > point-to-point line >? ? > >>> between the ARPA TIP and the ISI site.? I don't believe that >? ? is an >? ? > accurate >? ? > >>> statement of the ARPAnet topology.? In January 1975 there >? ? were 5 hops >? ? > >>> between the 2 on the shortest path. In October 1975 there >? ? were 6. >? ? > I don't >? ? > >>> believe it was ever one or two hops, but perhaps someone can >? ? find >? ? > a network >? ? > >>> map that proves me wrong. >? ? > >>> Alex McKenzie >? ? > >>> >? ? > >>>? ? ? On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack >? ? Haverty via >? ? > >>> Internet-history ? ? >? ? > ? ? >> wrote: >? ? > >>>? ? ? Sounds right.? My experience was well after that early >? ? > experimental >? ? > >>> period.? The ARPANET was much bigger (1980ish) and the >? ? topology had >? ? > >>> evolved over the years.? There was a direct 56K line (IIRC >? ? between >? ? > >>> ARPA-TIP and ISI) at that time.? Lots of other circuits too, >? ? but in >? ? > >>> normal conditions ARPA<->ISI traffic flowed directly over that >? ? > long-haul >? ? > >>> circuit.? /Jack >? ? > >>> >? ? > >>> On 8/28/21 1:55 PM, Vint Cerf wrote: >? ? > >>>> Jack, the 4 node configuration had two paths between UCLA >? ? and SRI and >? ? > >>>> a two hop path to University of Utah. >? ? > >>>> We did a variety of tests to force alternate routing (by >? ? congesting >? ? > >>>> the first path). 
>? ? > >>>> I used traffic generators in the IMPs and in the UCLA >? ? Sigma-7 to get >? ? > >>>> this effect. Of course, we also crashed the Arpanet with >? ? these early >? ? > >>>> experiments. >? ? > >>>> >? ? > >>>> v >? ? > >>>> >? ? > >>>> >? ? > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty >? ? >? ? > > >? ? > >>>> >? ? >>> wrote: >? ? > >>>> >? ? > >>>>? ? ? Thanks, Steve.? I hadn't heard the details of why ISI was >? ? > >>>>? ? ? selected.? I can believe that economics was probably a >? ? > factor but >? ? > >>>>? ? ? the people and organizational issues could have been the >? ? > dominant >? ? > >>>>? ? ? factors. >? ? > >>>> >? ? > >>>>? ? ? IMHO, the "internet community" seems to often ignore >? ? > non-technical >? ? > >>>>? ? ? influences on historical events, preferring to view >? ? > everything in >? ? > >>>>? ? ? terms of RFCs, protocols, and such.? I think the other >? ? > influences >? ? > >>>>? ? ? are an important part of the story - hence my >? ? "economic lens". >? ? > >>>>? ? ? You just described a view through a manager's lens. >? ? > >>>> >? ? > >>>>? ? ? /Jack >? ? > >>>> >? ? > >>>>? ? ? PS - I always thought that the "ARPANET demo" aspect >? ? of that >? ? > >>>>? ? ? ARPANET timeframe was suspect, especially after I noticed >? ? > that the >? ? > >>>>? ? ? ARPANET had been configured with a leased circuit >? ? directly >? ? > between >? ? > >>>>? ? ? the nearby IMPs to ISI and ARPA.? So as a demo of "packet >? ? > >>>>? ? ? switching", there wasn't much actual switching >? ? involved.? The 2 >? ? > >>>>? ? ? IMPs were more like multiplexors. >? ? > >>>> >? ? > >>>>? ? ? I never heard whether that configuration was mandated by >? ? > ARPA, or >? ? > >>>>? ? ? BBN decided to put a line in as a way to keep the >? ? customer >? ? > happy, >? ? > >>>>? ? ? or if it just happened naturally as a result of the >? ? ongoing >? ? > >>>>? ? ? measurement of traffic flows and reconfiguration of >? ? the topology >? ? > >>>>? ? ? to adapt as needed. Or something else.? The interactivity >? ? > of the >? ? > >>>>? ? ? service between a terminal at ARPA and a PDP-10 at ISI was >? ? > >>>>? ? ? noticeably better than other users (e.g., me) experienced. >? ? > >>>> >? ? > >>>>? ? ? On 8/28/21 11:51 AM, Steve Crocker wrote: >? ? > >>>>>? ? ? Jack, >? ? > >>>>> >? ? > >>>>>? ? ? You wrote: >? ? > >>>>> >? ? > >>>>>? ? ? ? ? I recall many visits to ARPA on Wilson Blvd in >? ? > Arlington, VA. >? ? > >>>>>? ? ? ? ? There were >? ? > >>>>>? ? ? ? ? terminals all over the building, pretty much all >? ? connected >? ? > >>>>>? ? ? ? ? through the >? ? > >>>>>? ? ? ? ? ARPANET to a PDP-10 3000 miles away at USC in Marine >? ? > Del Rey, >? ? > >>>>>? ? ? ? ? CA.? The >? ? > >>>>>? ? ? ? ? technology of Packet Switching made it possible >? ? to keep a >? ? > >>>>>? ? ? ? ? PDP-10 busy >? ? > >>>>>? ? ? ? ? servicing all those Users and minimize the costs of >? ? > everything, >? ? > >>>>>? ? ? ? ? including those expensive communications circuits. >? ? > This was >? ? > >>>>>? ? ? ? ? circa >? ? > >>>>>? ? ? ? ? 1980. Users could efficiently share expensive >? ? > communications, >? ? > >>>>> and >? ? > >>>>>? ? ? ? ? expensive and distant computers -- although I always >? ? > thought >? ? > >>>>>? ? ? ? ? ARPA's >? ? > >>>>>? ? ? ? ? choice to use a computer 3000 miles away was >? ? probably >? ? > more to >? ? > >>>>>? ? ? ? ? demonstrate the viability of the ARPANET than >? ? because >? ? > it was >? ? > >>>>>? ? ? ? ? cheaper >? ? > >>>>>? ? ? ? ? 
than using a computer somewhere near DC. >? ? > >>>>> >? ? > >>>>> >? ? > >>>>>? ? ? The choice of USC-ISI in Marina del Rey was due to other >? ? > >>>>>? ? ? factors.? In 1972, with ARPA/IPTO (Larry Roberts) strong >? ? > support, >? ? > >>>>>? ? ? Keith Uncapher moved his research group out of RAND.? >? ? Uncapher >? ? > >>>>>? ? ? explored a couple of possibilities and found a >? ? comfortable >? ? > >>>>>? ? ? institutional home with the University of Southern >? ? California >? ? > >>>>>? ? ? (USC) with the proviso the institute would be off campus. >? ? > >>>>>? ? ? Uncapher was solidly supportive of both ARPA/IPTO and >? ? of the >? ? > >>>>>? ? ? Arpanet project. As the Arpanet grew, Roberts needed a >? ? > place to >? ? > >>>>>? ? ? have multiple PDP-10s providing service on the Arpanet. >? ? > Not just >? ? > >>>>>? ? ? for the staff at ARPA but for many others as well. >? ? > Uncapher was >? ? > >>>>>? ? ? cooperative and the rest followed easily. >? ? > >>>>> >? ? > >>>>>? ? ? The fact that it demonstrated the viability of >? ? packet-switching >? ? > >>>>>? ? ? over that distance was perhaps a bonus, but the same >? ? would have >? ? > >>>>>? ? ? been true almost anywhere in the continental U.S. at >? ? that time. >? ? > >>>>>? ? ? The more important factor was the quality of the >? ? relationship. >? ? > >>>>>? ? ? One could imagine setting up a small farm of machines at >? ? > various >? ? > >>>>>? ? ? other universities, non-profits, or selected for profit >? ? > companies >? ? > >>>>>? ? ? or even some military bases.? For each of these, cost, >? ? > >>>>>? ? ? contracting rules, the ambitions of the principal >? ? investigator, >? ? > >>>>>? ? ? and staff skill sets would have been the dominant >? ? concerns. >? ? > >>>>> >? ? > >>>>>? ? ? Steve >? ? > >>>>> >? ? > >>>> >? ? > >>>> -- >? ? > >>>> Please send any postal/overnight deliveries to: >? ? > >>>> Vint Cerf >? ? > >>>> 1435 Woodhurst Blvd >? ? > >>>> McLean, VA 22102 >? ? > >>>> 703-448-0965 >? ? > >>>> >? ? > >>>> until further notice >? ? > >? ? > >? ? > -- >? ? > Internet-history mailing list >? ? > Internet-history at elists.isoc.org >? ? >? ? ? ? > >? ? > https://elists.isoc.org/mailman/listinfo/internet-history >? ? >? ? > ? ? > > >? ? -- >? ? Internet-history mailing list >? ? Internet-history at elists.isoc.org >? ? >? ? https://elists.isoc.org/mailman/listinfo/internet-history >? ? > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From agoldmanster at gmail.com Tue Aug 31 14:13:05 2021 From: agoldmanster at gmail.com (Alexander Goldman) Date: Tue, 31 Aug 2021 17:13:05 -0400 Subject: [ih] Why was MAE East located in Northern Virginia? Message-ID: I understand why 60 Hudson Street is important to the internet. As Hunter Newby once said, "everything is where it is now because it was there then" and 60 Hudson Street was the HQ of Western Union. 56 Marietta in Atlanta was also a Western Union building. Those buildings were connected to the telegraph lines that crossed the nation on railroad rights of way. But why Ashburn, Loudoun, etc. in Northern Virginia? 
VOA says that America Online made the investment https://www.voanews.com/usa/all-about-america/heres-where-internet-actually-lives Wikipedia says a bunch of ISP CEOs decided to interconnect there https://en.wikipedia.org/wiki/MAE-East. Another article says NSFNET chose it https://en.wikipedia.org/wiki/Internet_exchange_point#History But I always assumed there was either government influence in the choice, or that a railway line that I did not know about passed through there. Does anyone know the answer? From jnc at mercury.lcs.mit.edu Tue Aug 31 14:28:00 2021 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 31 Aug 2021 17:28:00 -0400 (EDT) Subject: [ih] More topology Message-ID: <20210831212800.D4B8218C096@mercury.lcs.mit.edu> > From: Jack Haverty > Any host might try to get up to 8 " in-flight" messages. If more than > one such host is sending to the same destination, each expecting to be > able to keep 8 messages in flight, the IMP would block the PE as it > sent the 9th message. The person who wrote the PE code (Jim Mathis?) either foresaw the possibility, or experienced it, as the PE has code to handle exactly this: ; IF THE NUMBER OF MESSAGES ; OUTSTANDING ON THE CONNECTION IS LESS THAN 8, PUT THE PORT NUMBER ; INDICATOR INTO SUB-LINK FIELD OF THE MESSAGE AND OUTPUT THE MESSAGE TO ; THE IMP. IF THE NUMBER OF MESSAGES OUTSTANDING IS GREATER THAN OR EQUAL ; TO 8, ENQUEUE THE IORB ONTO THE "BLOCKED" LINKED LIST. IT WILL BE SENT ; WHEN A RFNM IS RECEIVED AND THE OUTSTANDING COUNT IS LESS THAN 8. THE ; HOST PORT WILL BE BLOCKED UNTIL THE MESSAGE IS SENT. May I suggest that rather than idly speculate about what the PE _might_ have done, people inspect the actual code (sort of - see below), which I have put online here: http://ana-3.lcs.mit.edu/~jnc/tech/gw/pe/ It's not the _original_ PE code, this is the version that I hacked on to turn it (effectively) into a gateway to the MIT LAN. Still, I didn't chop out lots of existing functionality, just hung a bag on the side to turn it into a LAN gateway, so the original PE stuff is all there. Why didn't we just use the BBN gateway code (which we clearly had access to), instead of hack the PE? I don't recall for sure, but I suspect our thinking went something like this: we were already familiar with MOS, and had it (and the PE and the TIU) building on the local TOPS-20 (and knew it was clean and easy to work with); but didn't have ELF, or the gateway. It was probably easier to do the PE hack than get the ELF-based gateway running. > From: Steve Crocker > Minor point: RFNM = Ready (not Request) for Next Message. Ironically, the 'improved' ARPANET (post ~'72) actually did kind of act that way ('Request'). From J.M. McQuillan, W.R. Crowther, B.P. Cosell, D.C. Walden, and F.E. Heart, "Improvements in the Design and Performance of the ARPA Network": When the message itself arrives at the destination, and the destination IMP is about to return the Ready-For-Next-Message (RFNM), the destination IMP waits until it has room for an additional multipacket message. It then piggybacks a storage allocation on the RFNM. If the source Host is prompt in answering the RFNM with its next message, an allocation is ready and the message can be transmitted at once. Easily available not behind one of those irritating, annoying paywalls, here: https://walden-family.com/impcode/1972-improvements-paper.pdf so you all won't have to find your old hardcopy!
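Read as pseudocode, those comments describe a small state machine kept per (host port, destination) pair: tag each outgoing message with the port number in the sub-link field, count messages outstanding, and park anything beyond 8 on a blocked list that is drained as RFNMs come back. Purely as an illustration, a rough C rendering of that logic might look like the sketch below; the structure and function names here are made up, and this is not a transcription of the MOS/PE assembler.

#include <stddef.h>

#define MAX_OUTSTANDING 8    /* ARPANET limit per connection                    */
#define NUM_PORTS       4    /* the PE multiplexed four host ports              */
#define NUM_DEST        64   /* hypothetical destination-table size             */

struct iorb {                /* an I/O request from one of the host ports       */
    int          port;       /* which host port handed us the message           */
    int          dest;       /* destination IMP/host                            */
    struct iorb *next;
};

struct conn {                /* one (port, destination) connection              */
    int          outstanding;   /* sent, RFNM not yet seen                      */
    struct iorb *blocked;       /* FIFO of messages held back                   */
    struct iorb **blocked_tail;
};

static struct conn conns[NUM_PORTS][NUM_DEST];

static void imp_output(struct iorb *r)
{
    /* Put the port number in the sub-link field so the returning RFNM can be
       matched back to the right host port, then hand the message to the IMP.
       Device-specific details omitted. */
    (void)r;
}

void pe_send(struct iorb *r)
{
    struct conn *c = &conns[r->port][r->dest];

    if (c->outstanding < MAX_OUTSTANDING) {
        c->outstanding++;
        imp_output(r);
    } else {
        /* 8 already outstanding: enqueue the IORB instead of letting the IMP
           shut off the PE's single host interface. */
        r->next = NULL;
        if (c->blocked == NULL)
            c->blocked_tail = &c->blocked;
        *c->blocked_tail = r;
        c->blocked_tail = &r->next;
    }
}

void pe_rfnm(int port, int dest)
{
    struct conn *c = &conns[port][dest];
    struct iorb *r = c->blocked;

    if (c->outstanding > 0)
        c->outstanding--;

    if (r != NULL && c->outstanding < MAX_OUTSTANDING) {
        c->blocked = r->next;    /* dequeue the waiting message and send it */
        if (c->blocked == NULL)
            c->blocked_tail = &c->blocked;
        c->outstanding++;
        imp_output(r);
    }
}

The effect is the one Jack described earlier in the thread: a ninth message from any one host port is held inside the PE rather than forwarded, so the IMP never drops the ready line on the PE's shared interface and the other hosts behind it keep flowing.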
Noel From b_a_denny at yahoo.com Tue Aug 31 14:59:41 2021 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 31 Aug 2021 21:59:41 +0000 (UTC) Subject: [ih] More topology In-Reply-To: <20210831212800.D4B8218C096@mercury.lcs.mit.edu> References: <20210831212800.D4B8218C096@mercury.lcs.mit.edu> Message-ID: <663455324.1587123.1630447181557@mail.yahoo.com> The PE code (some or all?) could also have been written by Holly Nelson, or perhaps James Lieb since he is listed on the technical report besides Jim and Holly.? Holly left SRI shortly before I arrived (When I interviewed at SRI I thought I might end up working with her on my first project there).? I don't remember meeting James Lieb.? BTW, SRI technical reports usually included everyone who worked on an effort so I don't think the PE could have been written by a person not listed on the report. barbara On Tuesday, August 31, 2021, 02:28:12 PM PDT, Noel Chiappa via Internet-history wrote: ? ? > From: Jack Haverty ? ? > Any host might try to get up to 8 " in-flight" messages. If more than ? ? > one such host is sending to the same destination, each expecting to be ? ? > able to keep 8 messages in flight, the IMP would block the PE as it ? ? > sent the 9th message. The person who wrote the PE code (Jim Mathis?) either foresaw the possibility, or experienced it, as the PE has code to handle exactly this: ? ; IF THE NUMBER OF MESSAGES ? ; OUTSTANDING ON THE CONNECTION IS LESS THAN 8, PUT THE PORT NUMBER ? ; INDICATOR INTO SUB-LINK FIELD OF THE MESSAGE AND OUTPUT THE MESSAGE TO ? ; THE IMP.? IF THE NUMBER OF MESSAGES OUTSTANDING IS GREATER THAN OR EQUAL ? ; TO 8, ENQUEUE THE IORB ONTO THE "BLOCKED" LINKED LIST.? IT WILL BE SENT ? ; WHEN A RFNM IS RECEIVED AND THE OUTSTANDING COUNT IS LESS THAN 8.? THE ? ; HOST PORT WILL BE BLOCKED UNTIL THE MESSAGE IS SENT. May I suggest that rather than idly speculate about what the PE _might_ have done, people inspect the actual code (sort of - see below), which I have put online here: ? http://ana-3.lcs.mit.edu/~jnc/tech/gw/pe/ It's not the _original_ PE code, this is the version that I hacked on to turn it (effectively) into a gateway to the MIT LAN. Still, I didn't chop out lots of existing functionality, just hung a bag on the side to turn it into a LAN gateway, so the original PE stuff is all there. Why didn't we just use the BBN gateway code (which we clearly had access to), instead of hack the PE? I don't recall for sure, but I suspect our thinking went something like this: we were already familiar with MOS, and had it (and had it, the PE and the TIU) building on the local TOPS-20 (and knew it was clean and easy to work with); but didn't have ELF, or the gateway. It was probably easier to do the PE hack than get the ELF-based gateway running. ? ? > From: Steve Crocker ? ? > Minor point: RFNM = Ready (not Request) for Next Message. Ironically, the 'improved' ARPANET (post ~'72) actually did kind of act that way ('Request'). From J.M. McQuillan, W.R. Crowther, B.P. Cosell, D.C. Walden, and F.E. Heart, "Improvements in the Design and Performance of the ARPA Network": ? When the message itself arrives at the destination, and the destination IMP ? is about to return the Ready-For-Next-Message (RFNM), the destination IMP ? waits until it has room for an additional multipacket message. It then ? piggybacks a storage allocation on the RFNM. If the source Host is prompt in ? answering the RFNM with its next message, an allocation is ready and the ? message can be transmitted at once. 
Easily available not behind one of those irritatig, annoying paywalls, here: ? https://walden-family.com/impcode/1972-improvements-paper.pdf so you all won't have to find your old hardcopy! ? Noel -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From jnc at mercury.lcs.mit.edu Tue Aug 31 15:01:28 2021 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 31 Aug 2021 18:01:28 -0400 (EDT) Subject: [ih] More Topology, Packet Radio Message-ID: <20210831220128.86F5C18C096@mercury.lcs.mit.edu> > From: Lawrence Stewart > I guess I am surprised by the comments here about the subleties of the > 1822 distant host signaling. I dont think the Alto board had > optoisolaters and it did work in both local and distant host modes The thing is that the 1822 spec _looks_ like it's symmetrical between the Host and IMP sides, but it's not, really - not 100.000%. The IMP end of the DH interface did have optoisolators (1822, May '78 revision, pg. 4-24: "DC isolation is done at the IMP end of the cable ... This isolation is accomplished by optically isolating the signals."). So if you had i) a host 1822 interface which didn't have that (because such wasn't required for a _host_ interface), and ii) tried to use said host 1822 interface to emulate an _IMP_, to another host, and iii) the other host's 1822 interface played fast and loose, and _depended_ on there being isolation at the 'IMP' end... it wouldn't work. Now that I look at the 1822 DH stuff, it (pg. 4-26) "drives the odd-number connector pin of each pair to +0.5 volts, and the other pin to -0.5 volt". What the DM ITS' 1822 interface did, IIRC, was tied the - pin to the _host's ground_, producing 1.0V signals (from its perspective) on the other pin. The problem was that the DH interface _also_ had ground ("the cable shields should be very solidly connected to the host's signal ground"); so when the - output was tied to actual ground, then _on an interface which didn't DC isolate the + and - outputs_, one no longer got 1.0V on the + pin - and it wouldn't work. Noel From sob at sobco.com Tue Aug 31 15:12:30 2021 From: sob at sobco.com (Scott O. Bradner) Date: Tue, 31 Aug 2021 18:12:30 -0400 Subject: [ih] Why was MAE East located in Northern Virginia? In-Reply-To: References: Message-ID: <3A9BACDC-79E9-4C6B-A3A2-ACA31847051F@sobco.com> I am quite sure that MAE East predated the NSF NAPs by quite a few years NSF later designated MAE East as a NAP which may be the bases of the story I tink the Wikipedia story you point to is the correct version Scott > On Aug 31, 2021, at 5:13 PM, Alexander Goldman via Internet-history wrote: > > I understand why 60 Hudson Street is important to the internet. As Hunter > Newby once said, "everything is where it is now because it was there then" > and 60 Hudson Street was the HQ of Western Union. > > 56 Marietta in Atlanta was also a Western Union building. Those buildings > were connected to the telegraph lines that crossed the nation on railroad > rights of way. > > But why Ashburn, Loudoun, etc. in Northern Virginia? VOA says that America > Online made the investment > https://www.voanews.com/usa/all-about-america/heres-where-internet-actually-lives > > Wikipedia says a bunch of ISP CEOs decided to interconnect there > https://en.wikipedia.org/wiki/MAE-East. 
Another article says NSFNET chose > it https://en.wikipedia.org/wiki/Internet_exchange_point#History > > But I always assumed there was either government influence in the choice, > or that a railway line that I did not know about passed through there. > > Does anyone know the answer? > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From woody at pch.net Tue Aug 31 16:05:07 2021 From: woody at pch.net (Bill Woodcock) Date: Wed, 1 Sep 2021 01:05:07 +0200 Subject: [ih] Why was MAE East located in Northern Virginia? In-Reply-To: <3A9BACDC-79E9-4C6B-A3A2-ACA31847051F@sobco.com> References: <3A9BACDC-79E9-4C6B-A3A2-ACA31847051F@sobco.com> Message-ID: <4D115189-2290-4F96-BB88-5BD0AFC75F0E@pch.net> > On Sep 1, 2021, at 12:12 AM, Scott O. Bradner via Internet-history wrote: > I am quite sure that MAE East predated the NSF NAPs by quite a few years Predated, yes, but only by about a year. Steve Feldman would be the best person to recount the specifics, he was the principal engineer on the MAE. My recollection is that the MAE was stood up in 1992, while the NII was implemented in 1993. > NSF later designated MAE East as a NAP which may be the bases of the story Yes, along with the Pac Bell NAP, Ameritech, and the Sprint NAP in Pennsauken. > I tink the Wikipedia story you point to is the correct version >> Wikipedia says a bunch of ISP CEOs decided to interconnect there >> https://en.wikipedia.org/wiki/MAE-East. Yes, that looks correct to me. -Bill -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From woody at pch.net Tue Aug 31 16:45:57 2021 From: woody at pch.net (Bill Woodcock) Date: Wed, 1 Sep 2021 01:45:57 +0200 Subject: [ih] Why was MAE East located in Northern Virginia? In-Reply-To: <64E57933-2DA2-4815-BEB1-309447E14718@gmail.com> References: <3A9BACDC-79E9-4C6B-A3A2-ACA31847051F@sobco.com> <4D115189-2290-4F96-BB88-5BD0AFC75F0E@pch.net> <64E57933-2DA2-4815-BEB1-309447E14718@gmail.com> Message-ID: > On Sep 1, 2021, at 1:18 AM, Tony Li wrote: > >> On Aug 31, 2021, at 4:05 PM, Bill Woodcock via Internet-history wrote: >> >> Steve Feldman would be the best person to recount the specifics, he was the principal engineer on the MAE. > > Another reference would be Andrew Partan, then of AlterNet. And proprietor of the infamous Andrew?s Basement Exchange. :-) https://youtu.be/Od_dBpEHMuk?t=343 -Bill -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From stewart at serissa.com Tue Aug 31 16:53:07 2021 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 31 Aug 2021 19:53:07 -0400 Subject: [ih] More Topology, Packet Radio In-Reply-To: <20210831220128.86F5C18C096@mercury.lcs.mit.edu> References: <20210831220128.86F5C18C096@mercury.lcs.mit.edu> Message-ID: > On 2021, Aug 31, at 6:01 PM, Noel Chiappa wrote: > >> From: Lawrence Stewart > >> I guess I am surprised by the comments here about the subleties of the >> 1822 distant host signaling. I dont think the Alto board had >> optoisolaters and it did work in both local and distant host modes > > The thing is that the 1822 spec _looks_ like it's symmetrical between the Host > and IMP sides, but it's not, really - not 100.000%. 
The IMP end of the DH > interface did have optoisolators (1822, May '78 revision, pg. 4-24: "DC > isolation is done at the IMP end of the cable ... This isolation is > accomplished by optically isolating the signals."). > > So if you had i) a host 1822 interface which didn't have that (because such > wasn't required for a _host_ interface), and ii) tried to use said host 1822 > interface to emulate an _IMP_, to another host, and iii) the other host's 1822 > interface played fast and loose, and _depended_ on there being isolation at > the 'IMP' end... it wouldn't work. > > Now that I look at the 1822 DH stuff, it (pg. 4-26) "drives the odd-number > connector pin of each pair to +0.5 volts, and the other pin to -0.5 volt". > What the DM ITS' 1822 interface did, IIRC, was tied the - pin to the _host's > ground_, producing 1.0V signals (from its perspective) on the other pin. The > problem was that the DH interface _also_ had ground ("the cable shields > should be very solidly connected to the host's signal ground"); so when the - > output was tied to actual ground, then _on an interface which didn't DC > isolate the + and - outputs_, one no longer got 1.0V on the + pin - and it > wouldn't work. > > Noel I understand. There are some legible schematics of the Alto-1822 at the CHM although the translation from .press to .pdf is spotty. It looks like it used the TI 75115 differential receiver, which ran off 0 and +5 but would tolerate a +/- 15 v common mode. There was a 2v reference available connected to the minus inputs for the local host case. No optoisolators but it would handle a fair amount of offset. The transmit was 75114 differential drivers. Without a negative supply, the transmit signals would be differential, but not centered around ground, so that part wouldn?t look like an IMP but would work with most differential receivers or misapplied single ended receivers. I found both the 1975 and the 1978 versions of the BBN-1822 report and both say ?Ground Isolation is provided by the IMP? but the circuits in Appendix D do not have this. They must be elsewhere in the hardware. The 1975 version says the signals are transformer coupled and the 1978 version says optically isolated. Pretty sensible stuff. I wonder if the Packet Radios actually had isolation or transmit signals centered around ground. -L From andrew at blum.net Tue Aug 31 17:01:01 2021 From: andrew at blum.net (Andrew Blum) Date: Tue, 31 Aug 2021 20:01:01 -0400 Subject: [ih] Why was MAE East located in Northern Virginia? In-Reply-To: References: Message-ID: <5A491605-44EA-495A-B1AE-5D9A0F855FF5@blum.net> I asked this exact question of Steve Feldman in my book Tubes, and he tells the story pretty well. > On Aug 31, 2021, at 7:46 PM, Bill Woodcock via Internet-history wrote: > > ? > >>> On Sep 1, 2021, at 1:18 AM, Tony Li wrote: >>> >>>> On Aug 31, 2021, at 4:05 PM, Bill Woodcock via Internet-history wrote: >>> >>> Steve Feldman would be the best person to recount the specifics, he was the principal engineer on the MAE. >> >> Another reference would be Andrew Partan, then of AlterNet. > > And proprietor of the infamous Andrew?s Basement Exchange. :-) > > https://youtu.be/Od_dBpEHMuk?t=343 > > -Bill > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history