From jack at 3kitty.org Wed Mar 5 19:44:46 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Wed, 5 Mar 2025 19:44:46 -0800
Subject: [ih] Archive of internet-history email (and others)
Message-ID: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org>

I just stumbled across a site which appears to have archived this list, as well as a bunch of newsgroups (rec.xxx etc.).

Anyone know more about it?

https://internet-history.postel.narkive.com/

Jack Haverty
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature.asc
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL: 

From agmalis at gmail.com Thu Mar 6 05:59:45 2025
From: agmalis at gmail.com (Andrew G. Malis)
Date: Thu, 6 Mar 2025 08:59:45 -0500
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org>
Message-ID: 

Jack,

Check out https://narkive.com/about .

Cheers,
Andy

On Wed, Mar 5, 2025 at 10:45 PM Jack Haverty via Internet-history <internet-history at elists.isoc.org> wrote:

> I just stumbled across a site which appears to have archived this list,
> as well as a bunch of newsgroups (rec.xxx etc.).
>
> Anyone know more about it?
>
> https://internet-history.postel.narkive.com/
>
> Jack Haverty
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>

From touch at strayalpha.com Thu Mar 6 16:17:58 2025
From: touch at strayalpha.com (touch at strayalpha.com)
Date: Thu, 6 Mar 2025 16:17:58 -0800
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: 
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org>
Message-ID: <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com>

I know it isn't authorized, but then neither is the wayback machine.

Downloading stuff is one thing, but *reposting it* elsewhere is another.

IANAL, but it's times like this I wish we had one on retainer...

Joe (list admin)

--
Dr. Joe Touch, temporal epistemologist
www.strayalpha.com

> On Mar 6, 2025, at 5:59 AM, Andrew G. Malis via Internet-history wrote:
>
> Jack,
>
> Check out https://narkive.com/about .
>
> Cheers,
> Andy
>
>
> On Wed, Mar 5, 2025 at 10:45 PM Jack Haverty via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> I just stumbled across a site which appears to have archived this list,
>> as well as a bunch of newsgroups (rec.xxx etc.).
>>
>> Anyone know more about it?
>>
>> https://internet-history.postel.narkive.com/
>>
>> Jack Haverty
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From johnl at iecc.com Thu Mar 6 17:59:08 2025
From: johnl at iecc.com (John Levine)
Date: 6 Mar 2025 20:59:08 -0500
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com>
Message-ID: <20250307015909.21ABFBEF5B60@ary.qy>

It appears that touch--- via Internet-history said:
> I know it isn't authorized, but then neither is the wayback machine.

Well, someone was feeding it messages from the list's predecessor.
The archive stops six years ago, I'm guessing when it moved to ISOC.

> IANAL, but it's times like this I wish we had one on retainer...

If you really don't want a copy at narkive, write him a reasonably polite letter and I expect he'll delete it.

R's,
John

From jack at 3kitty.org Thu Mar 6 18:55:25 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Thu, 6 Mar 2025 18:55:25 -0800
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <20250307015909.21ABFBEF5B60@ary.qy>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy>
Message-ID: 

Narkive says "If you find content on Narkive that you find discriminatory against you, please send us an email and we will evaluate it to be removed."  See https://narkive.com/legalese#

OTOH, there are legal questions that I certainly don't know how to answer. E.g., who owns the material posted on the list? Who owns the messages which contain long chains of previous messages or "digests"? When we "signed up" for internet-history, what, if anything, did we agree to? Does ISOC have legal rights to the content? Same questions for the other archive content, e.g., all the newsgroups? Any intellectual property lawyers on the list - it's an international issue, not just a US one?

In any event, I've already found the Narkive repository to be much more usable than the ISOC one. Having long conversations sorted into threads is much easier to use than lots of folders organized by dates. Unfortunately there doesn't seem to be any kind of "search" or "filter" capability.

A little history -- Back in the mid-70s, Lick (Licklider) had a vision of human-human communications which included the ability for "important" content to be copied to The Datacomputer, where it could be accessible, and even searchable, for posterity. Lick thought that archives, and other such mechanisms from the non-digital world such as escrow, verified sending and delivery, trusted third-parties, distribution lists, et al were important to implement in the new digital world. I wrote the code to do that for our own email system. Such capability was deferred in the overall network until the "next" version of mail protocols, with focus shifted to a "simple" interim protocol (SMTP).

After 50 years now, I doubt such stuff will ever happen.

Jack

On 3/6/25 17:59, John Levine via Internet-history wrote:
> It appears that touch--- via Internet-history said:
>> I know it isn't authorized, but then neither is the wayback machine.
> Well, someone was feeding it messages from the list's predecessor. The
> archive stops six years ago, I'm guessing when it moved to ISOC.
>
>> IANAL, but it's times like this I wish we had one on retainer...
> If you really don't want a copy at narkive, write him a reasonably polite
> letter and I expect he'll delete it.
>
> R's,
> John
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From brian.e.carpenter at gmail.com Thu Mar 6 20:15:39 2025 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 7 Mar 2025 17:15:39 +1300 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: <20250307015909.21ABFBEF5B60@ary.qy> References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: If this list (and its predecessor) has any value, it's *only* as an archive for future historians, and IMHO we should be glad that somebody is willing to archive the old material independently. Regards Brian Carpenter On 07-Mar-25 14:59, John Levine via Internet-history wrote: > It appears that touch--- via Internet-history said: >> I know it isn?t authorized, but then neither is the wayback machine. > > Well, somewone was feeding it messages from the list's predecessor. The > archive stops six years ago, I'm guessing when it moved to ISOC. > >> IANAL, but it?s times like this I wish we had one on retainer? > > If you really don't want a copy at narkive, write him a reasonably polite > letter and I expect he'll delete it. > > R's, > John From touch at strayalpha.com Thu Mar 6 22:00:42 2025 From: touch at strayalpha.com (touch at strayalpha.com) Date: Thu, 6 Mar 2025 22:00:42 -0800 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: I don?t disagree, but IMO, *asking* first would have been appropriate. Copying something for personal use is fine, but REPUBLISHING without *explicit* permission is the problem. Joe ? Dr. Joe Touch, temporal epistemologist www.strayalpha.com > On Mar 6, 2025, at 8:15?PM, Brian E Carpenter via Internet-history wrote: > > If this list (and its predecessor) has any value, it's *only* as an > archive for future historians, and IMHO we should be glad that > somebody is willing to archive the old material independently. > > Regards > Brian Carpenter > > On 07-Mar-25 14:59, John Levine via Internet-history wrote: >> It appears that touch--- via Internet-history said: >>> I know it isn?t authorized, but then neither is the wayback machine. >> Well, somewone was feeding it messages from the list's predecessor. The >> archive stops six years ago, I'm guessing when it moved to ISOC. >>> IANAL, but it?s times like this I wish we had one on retainer? >> If you really don't want a copy at narkive, write him a reasonably polite >> letter and I expect he'll delete it. >> R's, >> John > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From touch at strayalpha.com Thu Mar 6 22:07:21 2025 From: touch at strayalpha.com (touch at strayalpha.com) Date: Thu, 6 Mar 2025 22:07:21 -0800 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: <6BD3E0A5-BF84-4798-972C-95683D79D639@strayalpha.com> Hi, all, I agree with most of the stuff below - I don?t claim to ?own? the site, but I am the one ?publishing? it, as the one who created the list and manages it, and (where possible) defends it. 
Posting your own stuff or portions seems like fair use, but publishing the whole thing ought - AFAICT - to include a request to the current publisher to do so.

Ease of use doesn't make what they did - without asking - right, in my book. Just like distributing magnified copies of a whole book isn't a justification just because it's easier to read.

Licklider had a vision for threads that connected cards in a card catalog, not one where a whole set of new cards were copied from the first to make that web.

LINKING to our site doesn't require permission, but (IMO, IANAL) *copying* it and *posting it* should.

Joe

--
Dr. Joe Touch, temporal epistemologist
www.strayalpha.com

> On Mar 6, 2025, at 6:55 PM, Jack Haverty via Internet-history wrote:
>
> Narkive says "If you find content on Narkive that you find discriminatory against you, please send us an email and we will evaluate it to be removed."  See https://narkive.com/legalese#
>
> OTOH, there are legal questions that I certainly don't know how to answer. E.g., who owns the material posted on the list? Who owns the messages which contain long chains of previous messages or "digests"? When we "signed up" for internet-history, what, if anything, did we agree to? Does ISOC have legal rights to the content? Same questions for the other archive content, e.g., all the newsgroups? Any intellectual property lawyers on the list - it's an international issue, not just a US one?
>
> In any event, I've already found the Narkive repository to be much more usable than the ISOC one. Having long conversations sorted into threads is much easier to use than lots of folders organized by dates. Unfortunately there doesn't seem to be any kind of "search" or "filter" capability.
>
> A little history -- Back in the mid-70s, Lick (Licklider) had a vision of human-human communications which included the ability for "important" content to be copied to The Datacomputer, where it could be accessible, and even searchable, for posterity. Lick thought that archives, and other such mechanisms from the non-digital world such as escrow, verified sending and delivery, trusted third-parties, distribution lists, et al were important to implement in the new digital world. I wrote the code to do that for our own email system. Such capability was deferred in the overall network until the "next" version of mail protocols, with focus shifted to a "simple" interim protocol (SMTP).
>
> After 50 years now, I doubt such stuff will ever happen.
>
> Jack
>
>
>
> On 3/6/25 17:59, John Levine via Internet-history wrote:
>> It appears that touch--- via Internet-history said:
>>> I know it isn't authorized, but then neither is the wayback machine.
>> Well, someone was feeding it messages from the list's predecessor. The
>> archive stops six years ago, I'm guessing when it moved to ISOC.
>>
>>> IANAL, but it's times like this I wish we had one on retainer...
>> If you really don't want a copy at narkive, write him a reasonably polite
>> letter and I expect he'll delete it.
>> >> R's, >> John > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From lars at nocrew.org Thu Mar 6 22:26:57 2025 From: lars at nocrew.org (Lars Brinkhoff) Date: Fri, 07 Mar 2025 06:26:57 +0000 Subject: [ih] Datacomputer In-Reply-To: (Jack Haverty via Internet-history's message of "Thu, 6 Mar 2025 18:55:25 -0800") References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: <7w34fpicem.fsf_-_@junk.nocrew.org> Jack Haverty wrote: > A little history -- Back in the mid-70s, Lick (Licklider) had a vision > of human-human communications which included the ability for > "important" content to be copied to The Datacomputer, where it could > be accessible, and even searchable, for posterity. Computer Corporation of America, with offices at Tech Square, ran the Datacomputer. It was a TENEX machine fitted with an Ampex tape-based "Terabit Memory"[*]. Indeed much data was placed there, like ARPANET survey data and backup copies of early CLU versions. * https://dl.acm.org/doi/pdf/10.1145/1476706.1476771 From brian.e.carpenter at gmail.com Thu Mar 6 22:45:57 2025 From: brian.e.carpenter at gmail.com (Brian Carpenter) Date: Fri, 7 Mar 2025 19:45:57 +1300 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: As a matter of common courtesy, I agree. (via tiny screen & keyboard) Regards, Brian Carpenter On Fri, 7 Mar 2025, 19:00 touch at strayalpha.com, wrote: > I don?t disagree, but IMO, *asking* first would have been appropriate. > > Copying something for personal use is fine, but REPUBLISHING without > *explicit* permission is the problem. > > Joe > > ? > Dr. Joe Touch, temporal epistemologist > www.strayalpha.com > > On Mar 6, 2025, at 8:15?PM, Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > > If this list (and its predecessor) has any value, it's *only* as an > archive for future historians, and IMHO we should be glad that > somebody is willing to archive the old material independently. > > Regards > Brian Carpenter > > On 07-Mar-25 14:59, John Levine via Internet-history wrote: > > It appears that touch--- via Internet-history said: > > I know it isn?t authorized, but then neither is the wayback machine. > > Well, somewone was feeding it messages from the list's predecessor. The > archive stops six years ago, I'm guessing when it moved to ISOC. > > IANAL, but it?s times like this I wish we had one on retainer? > > If you really don't want a copy at narkive, write him a reasonably polite > letter and I expect he'll delete it. > R's, > John > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > From gnu at toad.com Fri Mar 7 02:40:26 2025 From: gnu at toad.com (John Gilmore) Date: Fri, 07 Mar 2025 02:40:26 -0800 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> Message-ID: <17359.1741344026@hop.toad.com> The passage of time is not good for digital documents. 
Unless they are deliberately copied onto new media and "ported forward", they tend to become rare, and then as systems are decommissioned or former administrators die, they become very hard to access, and eventually impossible. This is true of paper records too. We get periodic requests for such things here on the list. For example, we'd naively think that all the RFC's would have been copied a bezillion times and they would be easy to access even decades later. But RFC 872 by Mike Padlipsky referenced two hand-drawn figures that you could only get by writing to Mike. The RFC administrators didn't scan them in, photograph them, xerox them, etc; they weren't ascii text so they didn't travel with the rest of the RFCs. Apparently, nobody has a copy today. Oops. Brian wrote: > If this list (and its predecessor) has any value, it's *only* as an > archive for future historians, and IMHO we should be glad that > somebody is willing to archive the old material independently. I agree. And I assume that when I write something to a large mailing list, that I'm writing it for public consumption, now and in the future; not to become somebody's private property that others aren't allowed to share. Jack wrote: > A little history -- Back in the mid-70s, Lick (Licklider) had a vision > of human-human communications which included the ability for > "important" content to be copied to The Datacomputer, where it could > be accessible, and even searchable, for posterity. Where it was never backed up on ordinary media. Its entire contents would now fit on a single thumb-sized flash memory drive, or an even smaller micro-SD card. But instead, its entire contents are now apparently either completely inaccessible, or permanently gone (see https://en.wikipedia.org/wiki/Talk:Datacomputer). Posterity will not be thanking the Datacomputer administrators for their trove of important documents carefully husbanded from all over the ARPANET. While it's fun chatting with each other on this list about our past successes and failures, the more serious purpose that I thought we were doing here was to draw out much material that was never formally captured during the creation of the Internet and its predecessors. Or was hard to find among all the possible places one might look. And to record that valuable informal information "for posterity". There is a Stanford (and many cooperating university) project making local copies of scientific journals, so that when their online publisher goes out of business, screws up, is bought by a scrooge, or falls off the Internet, the university libraries and all their researchers still have their locally stored copies of the journals that they paid dearly for. It's called LOCKSS, because: "Lots Of Copies Keeps Stuff Safe" Right here, I'd like to thank Joe Touch for managing this mailing list, because I know it's a thankless job. And yet, where will posterity find a copy of the mailing list archives? On magtapes stored in the classic offsite backup location (under Joe's sysadmin's bed)? Explicitly setting a community expectation of public access and a public right to share (e.g. a CC-BY license) would reduce the transaction costs of cooperation, encouraging the creation of Lots Of Copies. That would make it far more likely that even ONE copy survives into the distant future. 
John

From jack at 3kitty.org Fri Mar 7 10:51:25 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Fri, 7 Mar 2025 10:51:25 -0800
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <17359.1741344026@hop.toad.com>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com>
Message-ID: <997dd40e-8b03-41f0-ae4e-d223bd39e48e@3kitty.org>

My assumption has always also been that whatever gets posted on a mailing list has essentially become public. I've also hoped that someone(s) out there are capturing all the recollections and saving them away somehow. That's especially true for this mailing list.

I just checked the "sign up form" that I must have used years ago to join the list. Even today, it doesn't say anything about restrictions on use of the material, who "owns" such material, or any kind of license details. The membership list is restricted, so we can't even tell who gets these emails or how big the audience is. So it's best (for me at least) to assume it is public.

Some other isoc.org lists say that the "Internet Society Code of Conduct" applies. That code prohibits republishing without permission. I couldn't find any information about who might have gotten such permission. The sign-up for this list however doesn't reference that ISOC code.

Personally, I'd like to see the discussions on this list published more widely. It seems like it would be straightforward to "gateway" this list (or any other) to other social media sites where it might get wider distribution, perhaps as some kind of "Tech History Channel". LOCKSS is a great idea. Perhaps someone could "gateway" this list to some place like the Computer History Museum? Maybe that's been done and I just haven't stumbled on any such site until I found narkive. I found that site only as a result of a web search. Was that search engine "publishing" without permission?

One salient reason for such wider dissemination is due to our own behavior back in the early days of networking - roughly from 1970 through the emergence of the Web and the sharp drop in tech costs in the 90s. We were all enamored on the ARPANET with the new toy of email. Much discussion, debate, and planning occurred by email exchanges, with occasional RFCs, distributed by email and FTP, that have been preserved (thanks, Jake!).

In pre-network days, that history might have been preserved in journals, conference proceedings, and even letters. But much of that email historical record has been lost. Some of it was even in the Datacomputer!

Other news of that era that is now history was captured in the trade press - Data Communications, Computerworld, Network World, and many other such print media. Such material captured the character of the era better than RFCs, such as the many competitors to "The Internet". History should remember XNS, DECNET, SNA, Netware, VINES, global LANs, and other such combatants that The Internet seems to have defeated.

Most of all that has probably also been lost, as those stacks of old paper in our basements decay. Much of Internet History is only in our memories. In DRAM unfortunately.

Yes, Joe deserves a lot of thanks for his efforts to keep this list going!

Jack Haverty

On 3/7/25 02:40, John Gilmore via Internet-history wrote:
> The passage of time is not good for digital documents.
Unless they are > deliberately copied onto new media and "ported forward", they tend to > become rare, and then as systems are decommissioned or former > administrators die, they become very hard to access, and eventually > impossible. This is true of paper records too. We get periodic requests > for such things here on the list. > > For example, we'd naively think that all the RFC's would have been > copied a bezillion times and they would be easy to access even decades > later. But RFC 872 by Mike Padlipsky referenced two hand-drawn figures > that you could only get by writing to Mike. The RFC administrators > didn't scan them in, photograph them, xerox them, etc; they weren't > ascii text so they didn't travel with the rest of the RFCs. Apparently, > nobody has a copy today. Oops. > > Brian wrote: >> If this list (and its predecessor) has any value, it's *only* as an >> archive for future historians, and IMHO we should be glad that >> somebody is willing to archive the old material independently. > I agree. And I assume that when I write something to a large mailing > list, that I'm writing it for public consumption, now and in the future; > not to become somebody's private property that others aren't allowed to > share. > > Jack wrote: >> A little history -- Back in the mid-70s, Lick (Licklider) had a vision >> of human-human communications which included the ability for >> "important" content to be copied to The Datacomputer, where it could >> be accessible, and even searchable, for posterity. > Where it was never backed up on ordinary media. Its entire contents > would now fit on a single thumb-sized flash memory drive, or an even > smaller micro-SD card. But instead, its entire contents are now > apparently either completely inaccessible, or permanently gone (see > https://en.wikipedia.org/wiki/Talk:Datacomputer). Posterity will not be > thanking the Datacomputer administrators for their trove of important > documents carefully husbanded from all over the ARPANET. > > While it's fun chatting with each other on this list about our past > successes and failures, the more serious purpose that I thought we were > doing here was to draw out much material that was never formally > captured during the creation of the Internet and its predecessors. Or > was hard to find among all the possible places one might look. And to > record that valuable informal information "for posterity". > > There is a Stanford (and many cooperating university) project making > local copies of scientific journals, so that when their online publisher > goes out of business, screws up, is bought by a scrooge, or falls off > the Internet, the university libraries and all their researchers still > have their locally stored copies of the journals that they paid dearly > for. It's called LOCKSS, because: > > "Lots Of Copies Keeps Stuff Safe" > > Right here, I'd like to thank Joe Touch for managing this mailing list, > because I know it's a thankless job. And yet, where will posterity find > a copy of the mailing list archives? On magtapes stored in the classic > offsite backup location (under Joe's sysadmin's bed)? > > Explicitly setting a community expectation of public access and a public > right to share (e.g. a CC-BY license) would reduce the transaction costs > of cooperation, encouraging the creation of Lots Of Copies. That would > make it far more likely that even ONE copy survives into the distant > future. > > John > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL: 

From dhc at dcrocker.net Fri Mar 7 11:14:07 2025
From: dhc at dcrocker.net (Dave Crocker)
Date: Fri, 07 Mar 2025 19:14:07 +0000 (UTC)
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <997dd40e-8b03-41f0-ae4e-d223bd39e48e@3kitty.org>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <997dd40e-8b03-41f0-ae4e-d223bd39e48e@3kitty.org>
Message-ID: 

On 3/7/2025 10:51 AM, Jack Haverty via Internet-history wrote:
> My assumption has always also been that whatever gets posted on a
> mailing list has essentially become public.

While there are such things as 'private' mailing lists, I don't see this one as qualifying for that label. Its participation is open.

And I take a public mailing list as subject to arbitrary dissemination, including archive. In fact, for many such lists -- and this one definitely does qualify -- I think public archival is, or should be, a major goal. The recollections and citations included here are of historical import.

d/

ps. More broadly, I actually consider everything done online to be viewed as public, since the state of consumer and business industry practices, at scale, do not seem able to protect against eventual disclosure.

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
bluesky: @dcrocker.bsky.social
mast: @dcrocker at mastodon.social

From karl at iwl.com Fri Mar 7 12:07:03 2025
From: karl at iwl.com (Karl Auerbach)
Date: Fri, 7 Mar 2025 12:07:03 -0800
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: 
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy>
Message-ID: <1ee1a664-24fb-485c-b009-6508f7b46a74@iwl.com>

On 3/6/25 6:55 PM, Jack Haverty via Internet-history wrote:
> OTOH, there are legal questions that I certainly don't know how to
> answer. E.g., who owns the material posted on the list?

I've got a couple of fancy looking certificates on my wall that suggest that I may have some insight about this. (And those certificates require me to say that my comments below are merely for purposes of discussion and are not intended to be legal advice and that we are not entering into any attorney-client relationship.)

In general we are probably thinking of ownership of the various copyright rights.

Copyright is multi-dimensional - who created the work, when it was created, where it was created, how it was recorded on some medium all are factors on the creation end. And at the usage end there are factors about how much is used, where it is used, how it is conveyed, how it is used, by whom it is used, whether it is used directly or transformed in some way.

The whole thing is complicated and often contentious (especially for music, but fortunately we are not dealing with that - at least I am not aware that someone such as Tom Lehrer has done a song like "Resetting TCP connections in the park".)

The rules/laws about copyright vary from country to country. Even the foundational purpose varies. Here in the US the purpose of copyright is to promote future creativity while elsewhere the purpose is often more to reward a creator with some control over his/her work.
So let's ask your question in the context of a US based author of an email to this list:

In the US copyright rights spring into existence only when the expression is recorded on a tangible medium. To my mind that encompasses typing it onto the screen and memory of a computer. In other words, if one adopts my interpretation, the rights spring into existence, at least here in the US, at the time you compose the email on a computer - i.e. before you send it. Some may argue that the rights come into existence when you send it or when a mail server relays it or when it lands in an email archive. None of those interpretations changes the fact that the owner of the copyright is the human who typed the text. (Here in the US text created by non-human means, particularly AI, is not subject to copyright - this may change.)

So, if we conclude that the copyright rights (a whole bundle of rights) spring into existence at the time the text (or reply) is written and that the owner of those rights is the person whose fingers were on the keyboard, then your question becomes one of either transfer of those rights, dedication of those rights to the public domain, or some sort of license of those rights.

When we join a mailing-list based discussion we are entering into either an express or implied agreement about rights licensing or transfer. The totality of that agreement - a contract - is a blend of things like written terms of service of the e-mail list service system (in our case ISOC's systems), any additional terms that may have been added via negotiation (very unlikely in our context), and a rather vague but important set of ways that these things are understood and practiced either by the participants/users of the email list or by other email lists.

That last part - our individual or collective, or even general - course of performance (i.e. the way we use or our expectations) is really important here because it can fill in gaps in any written terms of service, explain ambiguities, or even supersede them, sometimes even in the presence of an "integration clause" in any written terms of service that say that any modifications must be in writing. (The laws of how this all blends together can be quite complicated and vary a lot from jurisdiction to jurisdiction.)

I would argue that we are non-exclusively licensing our rights rather than transferring them. The reason for this is that non-exclusive implicit licensing is rather common while there may be legal formalities required for an actual transfer of copyright rights.

I will make a rather bold assertion: that those of us here are proud of what many among us have built - the Internet is an impressive thing that is changing the world. I would assert that those of us here have the mental purpose of explaining that creation, the creation of the Internet, to posterity. I would further assert that as such we each have in our minds the desire to disseminate to everyone and anyone our emails that are posted to this list.

If my assertion is accurate I would argue that we each have in mind a mental purpose to license our copyright rights to all potential readers, worldwide, and forever. (Many of us may go further and have a mental purpose to transfer our rights to the public domain - but I'd then ask whether we would then feel comfortable with others changing or re-using our words without our assent, as would be possible in a fully public domain context. I know that I am personally a bit uncomfortable with that.)

But what are we licensing to those unknown others?
Is it merely a right to read and make fair use (or transformative use), or are we licensing more broadly so that our text could be used, for instance, as part of a performative work, such as a movie script or Broadway musical. My own sense is that we are licensing broadly, but that most of us have not envisioned use of our words for other than historical discussion and not for more expansive, particularly commercial, uses. (A commercial book about the history of the Internet that makes use of our emails is an interesting middle case, possibly one that falls under Fair Use.) There is a layer of fair use - I tend to look at Fair Use in the US as a legal right to use pieces of a work for the purpose of carrying out a kind of dialog with the author about the meaning or expression of the original work. (My view is at odds with the views of many who have a rather more expansive conception of what is fair use - US courts have been wrestling with this, hence the still uncertain concept of "transformative use.") On this mail list we are engaging in such a fair-use dialog, so it seems sensible to me to conclude that when we go back-and-forth we are engaging in exactly the intended core of Fair Use doctrine (and law). In the US fair (and its step-child, transformative) use can be very vague and contentious around the edges. Our emails to this list, and the collective archive, are valuable assets. We will see them used, sometimes in ways that make us uncomfortable. Here in the US we can express that discomfort via copyright litigation but there is a catch - until a work (i.e. our emails) are registered with the copyright office (at $35 a pop) we can't go into US Federal court to ask for our rights to be enforced. Parts of the open source community have recognized this and have done explicit transfers of copyright ownership rights in code updates (and have jumped through the legal hoops to make such transfers - potentially a nuisance with dollar signs attached) and have registered the whole with the copyright office. Concluding this rather long note: Here in the US those of us who create an email (even if it is a reply) own the copyright to our contribution to that email. However we are licensing that ownership to others - such as to the email archive and to those who are reading our creations. The terms of that license and full extent of its grant of rights (and any limitations on that grant) is not clear. To borrow a phrase: In these matters our mileage outside of the US may vary. --karl-- From bill.n1vux at gmail.com Fri Mar 7 12:23:52 2025 From: bill.n1vux at gmail.com (Bill Ricker) Date: Fri, 7 Mar 2025 15:23:52 -0500 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: <17359.1741344026@hop.toad.com> References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> Message-ID: On Fri, Mar 7, 2025 at 5:40?AM John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > The passage of time is not good for digital documents. Unless they are > deliberately copied onto new media and "ported forward", ... quite. For example, we'd naively think that all the RFC's would have been > copied a bezillion times and they would be easy to access even decades > later. But RFC 872 by Mike Padlipsky referenced two hand-drawn figures > that you could only get by writing to Mike. 
The RFC administrators
> didn't scan them in, photograph them, xerox them, etc; they weren't
> ascii text so they didn't travel with the rest of the RFCs.
> Apparently, nobody has a copy today. Oops.
>

Well, I have much of Mike's files, so I may very well have the hand-drawn ARM diagrams that were attached only in paper copies of both RFC 871 and 872, and three figures in RFC 875.

(*I probably have my personal copies from when we were all too briefly office colleagues in my own files as well, since I haven't moved but once since '82. Yet.*)

The *Tea-bag Papers*' figures were IIRC professionally redrawn for The Book. I should scan them from the book to make the RFCs complete, as well as search for the originals.

(IIRC the *Tea-bag Papers* were also cleaned up and published as official MITRE reports, e.g. RFC 872 = M82-48, the others being M82-47, M82-49, M82-50, and M82-51; and so *should* have been on DTIC, but I didn't find them when last I looked. They should also be in MITRE archives.)

(However, the Book did *not* contain the Quotations from Winnie The Pooh that were excerpted as Fair Use in RFC 872 and 875, as the Estate of AAMilne declined "mechanical copyright" (term of art) authorization, and the publisher didn't care to fight Fair Use. Since the original was ©1926, it is now Public Domain, so the Woozle and Heffalump passages can now be quoted freely, and even the classic illustration of walking in circles and dreams of Heffalumps.)

William Ricker, as The Literary Estate of Michael A Padlipsky
*which latter is more collegial than AAMilne's*

From dhc at dcrocker.net Fri Mar 7 12:30:18 2025
From: dhc at dcrocker.net (Dave Crocker)
Date: Fri, 07 Mar 2025 20:30:18 +0000 (UTC)
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: 
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com>
Message-ID: <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net>

On 3/7/2025 12:23 PM, Bill Ricker via Internet-history wrote:
> Well, I have much of Mike's files, so I may very well have the hand-drawn
> ARM diagrams that were attached only in paper copies of both RFC 871 and
> 872,

At this point, for each of us, we should resolve a basic question of why we haven't provided potentially historical documents to a long-term archive, such as the Computer History Museum.

d/

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
bluesky: @dcrocker.bsky.social
mast: @dcrocker at mastodon.social

From jeanjour at comcast.net Fri Mar 7 12:33:37 2025
From: jeanjour at comcast.net (John Day)
Date: Fri, 7 Mar 2025 15:33:37 -0500
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net>
Message-ID: <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>

With all due respect to CHM, CBI's archive is safer. Those caves under the library aren't going anywhere. ;-)

We lost the HP Archives a few years ago in one of the CA fires.
> On Mar 7, 2025, at 15:30, Dave Crocker via Internet-history wrote:
>
> On 3/7/2025 12:23 PM, Bill Ricker via Internet-history wrote:
>> Well, I have much of Mike's files, so I may very well have the hand-drawn
>> ARM diagrams that were attached only in paper copies of both RFC 871 and
>> 872,
>
>
> At this point, for each of us, we should resolve a basic question of why we haven't provided potentially historical documents to a long-term archive, such as the Computer History Museum.
>
> d/
>
> --
> Dave Crocker
>
> Brandenburg InternetWorking
> bbiw.net
> bluesky: @dcrocker.bsky.social
> mast: @dcrocker at mastodon.social
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From dhc at dcrocker.net Fri Mar 7 12:44:47 2025
From: dhc at dcrocker.net (Dave Crocker)
Date: Fri, 07 Mar 2025 20:44:47 +0000 (UTC)
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>
Message-ID: 

On 3/7/2025 12:33 PM, John Day wrote:
> With all due respect to CHM, CBI's archive is safer.

I apologize. Perhaps I should have typed SUCH AS more loudly?

d/

--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
bluesky: @dcrocker.bsky.social
mast: @dcrocker at mastodon.social

From ajs at crankycanuck.ca Fri Mar 7 21:55:33 2025
From: ajs at crankycanuck.ca (Andrew Sullivan)
Date: Sat, 8 Mar 2025 00:55:33 -0500
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <997dd40e-8b03-41f0-ae4e-d223bd39e48e@3kitty.org>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <997dd40e-8b03-41f0-ae4e-d223bd39e48e@3kitty.org>
Message-ID: 

Dear colleagues,

On Fri, Mar 07, 2025 at 10:51:25AM -0500, Jack Haverty via Internet-history wrote:
> Some other isoc.org lists say that the "Internet Society Code of
> Conduct" applies. That code prohibits republishing without
> permission. I couldn't find any information about who might have
> gotten such permission. The sign-up for this list however doesn't
> reference that ISOC code.

Since I had a small part in getting this list hosted at isoc.org, I can tell you why this list doesn't refer to the Internet Society code of conduct. It's because this isn't an Internet Society list.

Internet Society lists are operated by the Internet Society for the purposes of the Internet Society and its members (and anyone else who might be involved or interested, in a couple of exceptional cases I can think of[*]). When this list found it needed a home, the Internet Society was running some lists, and it seemed easy enough just to add this list to that hosting. (It turned out to be rather less easy than I imagined--which, come to think of it, may have been a slogan for my life for a little while.) But it was at least never my intention that it would somehow become part of the Internet Society's operation or under its control. It was just that we had the ability to offer a home to a resource that, in my opinion, is valuable to the Internet and its future.
As for the organization of the archive and the threaded view of presentation, there is a threaded view available in Mailman, but it is still nailed to the month in which the message arrived. As far as I am aware, this is a limitation of Mailman, at least in the 2.x series.

Best regards,

A

[*] Internet Society lists also usually have a remarkably awkward, bothersome, and unintuitive way of managing membership in them. That mechanism was a filthy hack created some time ago as an almost-reasonable workaround to a specification deficiency. It is one of the best-worst examples I can recall of the old rule that there's nothing so permanent as a temporary solution. So far as I know, it hasn't actually been killed off yet, but it's supposed to be soon. Let's all be grateful it wasn't visited upon this list.

--
Andrew Sullivan
ajs at crankycanuck.ca

From bill.n1vux at gmail.com Sat Mar 8 10:33:17 2025
From: bill.n1vux at gmail.com (Bill Ricker)
Date: Sat, 8 Mar 2025 13:33:17 -0500
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>
Message-ID: 

*1. Planning and Execution*

> On Mar 7, 2025, at 15:30, Dave Crocker via Internet-history <internet-history at elists.isoc.org> wrote:
> At this point, for each of us, we should resolve a basic question of why we haven't provided potentially historical documents to a long-term archive, ***such as*** the Computer History Museum.

(Author's ***retroactive emphasis supplied***)

Having received the Literary Estate of MAP by prior arrangement and explicit instructions in his Will, and recently assisting Mother in processing the Estate of Father, this is somewhat on my mind.

Yes, arrangements need to be made, either while we yet live & downsize, or in written instructions in / attached to a Last Will and Testament, which needs to indicate WHICH drawers/boxes and computer files and perhaps reference a CHM/CBI/... reference# as having been previously coordinated. Instructions should probably be written such that if we're incapacitated and placed in long-term care with Conservators/Power of Attorney etc, which may require downsizing, the instructions are actionable by (and mandatory upon) said agents before decease.

If the prospective permanent archive can make better use of the files TODAY than ourself, shipping sooner is better. If they're only going to accession the collection, number each box, and stick it in a climate-controlled warehouse, it's of more use _casa nostra_ where it's plausibly accessible remotely by query on this list. (Files of John Smith, "Internet Early History", 7 linear feet)

Anyone with intellectual assets (authored works and their copyrights, historical artifacts, manuscripts, rare copies of others' documents, ...) needs to provide for their "Literary Estate". This is perhaps _more_ important than providing for their real and personal property in a normal will, as the several states have a centuries-tested process for dealing with intestate inheritance, but that system handles intellectual and historical property poorly if at all.

(Obviously there are family situations that make death-intestate highly problematic and thus wills are extra important if one has such a problematic family.
Here in a Community Property state, setting joint tenancy of appropriate forms on as many things as possible keeps those assets out of Probate.)

Additionally, if one ***CHANGES*** their choice of (Literary) Executor, be sure to have the lawyer draft a codicil or latest testament recording the change, and store copies with all the copies of the prior edition.

I was uncontested Literary Executor in the Will of MAP, but my promotion to Spiritus Executor (conservator of Scotch liquid assets) was not legally recorded, or if recorded in a proper Codicil, not found. The previous designee (who may still be on the list; Hi! Thank you!) was informed by the Estate Executor, looked at what taking possession would entail, and balked/declined, so the primary Estate Executor turned that over to me also as informally intended; since I needed to fly out and ship back the Literary Estate anyway, it was a twofer. (Which made shipping cross-country quite worthwhile!)

*2. Where to Archive*

On Fri, Mar 7, 2025, 15:33 John Day wrote:
> With all due respect to CHM, CBI's archive is safer. Those caves under the
> library aren't going anywhere. ;-)
> We lost the HP Archives a few years ago in one of the CA fires.
>

Oh, thank you for mentioning CBI.

For those not on ^this^ continent (^that sometimes forgets there are others^), there's also NCHM co-located with Britain's cryptological museum at Bletchley Park, Milton Keynes, UK.

One consideration that may override all else in choosing an archival destination is WILL THEY ACCEPT YOUR FILES?

E.g., Vint's files would of course be accepted at his first choice repository; there's no question of his "notability" (to use Wikipedia's term of art). OTOH e.g. I might need to reach out to all of CBI, CHM, MIT, BU, ... in order to find one archivist who'll agree MAP was sufficiently notable to promise to accept *his* files (at the cost of having to inventory, accession, and store them in perpetuity) when I or my executor is ready to ship them. (And the sooner I record that agreement the better! Leaving that canvassing to my executor would not only be unfair to them but also inefficient; I can make that case to the archivist better than the _next_ generation, who only remember MAP as a kindly curmudgeonly retiree (as I aspire to be now).)

With *perhaps only a majority of* due respect to CHM, I'm still a little salty that after I'd TWICE bought Founding Memberships in the Marlboro and Boston instances of CHM (DCHM & BC(H)M), they followed the billionaires to Sili Valley and cleared the slate for yet another round of ^Founding^ Memberships. (NO, I'm NOT challenging that they needed to "follow the money", just disclosing that I'm a little salty. Which you'll not be surprised at, coming from MAP's mentee & literary executor.) (MAP would appreciate my quoting Woodward&Bernstein's 'Deep Throat'. And would've quoted his lewd aphorism connecting that cryptonym to Babbage, as some of you may even remember. If one needs to be reminded, ask off-list.)

Since Mike and I have each already paid shipping to move the Padlipsky archive across the continent Boston->LA->Boston (and some portions more so?), shipping it back to Californicatia isn't high on my list. Minnesota would be half the pound-miles, as well as perhaps better protected archives.

Yes, all California archives need to seriously reconsider if they are still safe archival storage, and what they can do to ensure continuity in warmer centuries with oscillatory flood-and-drought-and-fire cycles.
(It is unclear to me if the Getty survived the fire because of luck or because they have secret Bond-villain-lair grade private fire suppression?) Both Reagan and Nixon libraries are IIRC in the inflammable chaparral "golden hills". (Apparently one **OF MANY** keys in buildings' wildfire-survival is shutters or reflective curtains to prevent interior decor igniting from infrared transmission.)

*3. Returning to the original topic, archival of mailing lists and digital files ...*

One of my frustrations as combined Literary and Spiritus executor is that the original MALTS-L at BBN list archives appear not to have been saved when hosting shifted. (Does anyone remember how early Malts-L was formed? I believe S F Lovers was the first non-technical mailing-list/reflector on dARPAnet, and that Malts-L followed soon after when more than 2 scotch-hounds in the working groups discovered common interest, but how soon?)

I need to research how to extract data from MAP's old MFM drives. (Hoping there may be older emails and writings.) I didn't ship all his older computers, just the latest laptop, but I harvested all the hard-drives. Maybe I can configure a thin-net adapter into one of my old hulks (if its capacitors haven't rotted!) and use that, otherwise I may need a rarer adapter or a service?

WILLIAM D RICKER // LITERARY ESTATE OF MICHAEL A PADLIPSKY

From jack at 3kitty.org Sat Mar 8 15:21:46 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Sat, 8 Mar 2025 15:21:46 -0800
Subject: [ih] Archive of internet-history email (and others)
In-Reply-To: 
References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net>
Message-ID: 

Thanks to KarlA for the legal explanation! It reinforced my feeling that I got a decade ago when I accidentally ended up as an expert witness in a patent fight. The legal system hasn't yet adapted to the new world of digital media and networking.

Thanks also to AndrewS for his help in finding a home for this list!

All the discussion of archives, the Datacomputer, et al reminded me of Lick's vision of "human-human communications" -- including but not limited to what we now know as email. That was more than fifty years ago, part of his "Galactic Network" ideas that drove our work at MIT.

As far as I remember, that aspect of his vision was never written down anywhere, except in various email threads in the 1970s that have likely been lost. What we have today is still pretty far from what he envisioned. I'll write down what I remember, and send it in a new thread.

Jack Haverty
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From brian.e.carpenter at gmail.com Sat Mar 8 17:49:27 2025 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 9 Mar 2025 14:49:27 +1300 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> Message-ID: <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> Bill's message was very interesting. But I've curtailed it below, to comment on one point. On 09-Mar-25 07:33, Bill Ricker via Internet-history wrote: > *1. Planning and Execution* A few years ago I went through all the office files of a deceased colleague, and deposited the material with some level of historical interest in three different museums. Currently I'm the guardian of one heavy box of files from another deceased colleague that I have to advise their family about - there's a good chance that it's partly of museum quality too. Having also done archive research myself, I feel entitled to make the following statement: both of those colleagues kept too much stuff. So I would suggest that anyone who has paper archives either prunes them vigorously or (perhaps better) makes a *very* detailed list of contents. Otherwise, future users of the archives will very likely fail to find the important bits. Of course pruning an archive is a matter of judgment, but an archive of paper that's 90% dross is a problem in itself. (The same problem exists for electronic archives, but at least there we can imagine searches being automated in a way that's impossible for paper and rather unreliable for scanned and OCR'ed paper.) Regards Brian Carpenter From craig at tereschau.net Sat Mar 8 19:06:17 2025 From: craig at tereschau.net (Craig Partridge) Date: Sat, 8 Mar 2025 20:06:17 -0700 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> Message-ID: Speaking as someone who trained as a social historian as an undergraduate (before I saw the light :-)), one historian's dross is another historian's great value, so prune carefully. It is probably worth listing the kinds of things historians these days look for: - Obviously historians care about the evolution of technology. So successive draft specifications (of something notable -- aka that got used) are of great interest. - Historians also care about personalities. While we tend to think of technical decisions as being objective, they are actually typically a mix of objective engineering decisions and the personalities of the team. So material that sheds light on team dynamics and thinking are valuable. Historians and repositories typically do NOT want the 450th copy of IBM manuals (or DEC manuals or whatever). Though I note some stuff is difficult to find -- so worth doing a web search to see if the manual you have is available on-line. If not, it may be useful. 
Craig PS: I'm reminded of a story from my undergraduate days. A professor told of doing research in the archives in Florence, Italy a few years before (this would have been early 1970s). You needed a letter of reference from your university to get into the archives and the letter had to state why your research mattered. A PhD student from an unnamed university showed up with a letter, interested in doing research on architectural trends in medieval Florence, and the head archivist refused to let them in as the topic was not, in the archivist's view, respectable research! (Now, of course, it is a routine topic). On Sat, Mar 8, 2025 at 6:49?PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > Bill's message was very interesting. But I've curtailed it below, to > comment on one point. > > On 09-Mar-25 07:33, Bill Ricker via Internet-history wrote: > > *1. Planning and Execution* > > A few years ago I went through all the office files of a deceased > colleague, and deposited the material with some level of historical > interest in three different museums. Currently I'm the guardian of one > heavy box of files from another deceased colleague that I have to advise > their family about - there's a good chance that it's partly of museum > quality too. Having also done archive research myself, I feel entitled to > make the following statement: both of those colleagues kept too much stuff. > > So I would suggest that anyone who has paper archives either prunes them > vigorously or (perhaps better) makes a *very* detailed list of contents. > Otherwise, future users of the archives will very likely fail to find the > important bits. > > Of course pruning an archive is a matter of judgment, but an archive of > paper that's 90% dross is a problem in itself. > > (The same problem exists for electronic archives, but at least there we > can imagine searches being automated in a way that's impossible for paper > and rather unreliable for scanned and OCR'ed paper.) > > Regards > Brian Carpenter > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From brian.e.carpenter at gmail.com Sun Mar 9 00:03:11 2025 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 9 Mar 2025 21:03:11 +1300 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> Message-ID: <239a9231-2df6-4e92-82aa-cdb4c9118214@gmail.com> Craig, Of course, you are correct - but many if not all libraries have a physical storage crisis these days, so the situation is very different from the 1970s, and unless you are dealing with an already famous person's material, it's very hard to get stuff accepted. Regards Brian Carpenter On 09-Mar-25 16:06, Craig Partridge wrote: > Speaking as someone?who trained as a social historian as an undergraduate (before I saw the light :-)), one historian's dross is another historian's great value, so prune carefully. 
> > It is probably worth listing the kinds of things historians these days look for: > > * Obviously historians care about the evolution of technology.? So successive draft specifications (of something notable -- aka that got used) are of great interest. > * Historians also care about personalities.? While we tend to think of technical decisions as being objective, they are actually typically a mix of objective engineering decisions and the personalities of the team.? So material that sheds light on team dynamics and thinking are valuable. > > Historians and repositories typically do NOT want the 450th copy of IBM manuals (or DEC manuals or whatever).? Though I note some stuff is difficult to find -- so worth doing a web search to see if the manual you have is available on-line.? If not, it may be useful. > > Craig > > PS: I'm reminded of a story from my undergraduate days.? A professor told of doing research in the archives in Florence, Italy a few years before (this would have been early 1970s).? You needed a letter of reference from your university to get into the archives and the letter had to state why your research mattered.? A PhD student from an unnamed university showed up with a letter, interested in doing research on architectural trends in medieval Florence, and the head archivist refused to let them in as the topic was not, in the archivist's view, respectable research! ?(Now, of course, it is a routine topic). > > On Sat, Mar 8, 2025 at 6:49?PM Brian E Carpenter via Internet-history > wrote: > > Bill's message was very interesting. But I've curtailed it below, to comment on one point. > > On 09-Mar-25 07:33, Bill Ricker via Internet-history wrote: > > *1. Planning and Execution* > > A few years ago I went through all the office files of a deceased colleague, and deposited the material with some level of historical interest in three different museums. Currently I'm the guardian of one heavy box of files from another deceased colleague that I have to advise their family about - there's a good chance that it's partly of museum quality too. Having also done archive research myself, I feel entitled to make the following statement: both of those colleagues kept too much stuff. > > So I would suggest that anyone who has paper archives either prunes them vigorously or (perhaps better) makes a *very* detailed list of contents. Otherwise, future users of the archives will very likely fail to find the important bits. > > Of course pruning an archive is a matter of judgment, but an archive of paper that's 90% dross is a problem in itself. > > (The same problem exists for electronic archives, but at least there we can imagine searches being automated in a way that's impossible for paper and rather unreliable for scanned and OCR'ed paper.) > > Regards > ? ? Brian Carpenter > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > ***** > Craig Partridge's email account for professional society activities and mailing lists. 
From touch at strayalpha.com Sun Mar 9 10:12:52 2025 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sun, 9 Mar 2025 10:12:52 -0700 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> Message-ID: <56DA9B21-C374-4AE6-A6E8-F620E949FEF5@strayalpha.com> > Historians and repositories typically do NOT want the 450th copy of IBM > manuals (or DEC manuals or whatever). Though I note some stuff is > difficult to find -- so worth doing a web search to see if the manual you > have is available on-line. If not, it may be useful. Yeah - but I?ve seen this work to the detriment of those manuals. Historians say ?toss it? because the vendor probably has a copy or there are too many copies. But vendors disappear and there?s the tragedy of the commons for the latter. E.g., I was looking for manuals for the Fairchild F8 in 1990, just three years after they had been consumed by National Semiconductor, who promptly discarded Fairchild?s tech library*. I found a kind soul on a mailing list (before the web took off) who had an intact *set* of manuals on the line *and* an operational development board - which he gave me (and even paid postage). I do still have it and am now thinking it might be time to ship it off to the Computer History Museum? Joe *So that?s where ISI learned it from ;-) ? Dr. Joe Touch, temporal epistemologist www.strayalpha.com From brian.e.carpenter at gmail.com Sun Mar 9 11:27:02 2025 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 10 Mar 2025 07:27:02 +1300 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: <56DA9B21-C374-4AE6-A6E8-F620E949FEF5@strayalpha.com> References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> <56DA9B21-C374-4AE6-A6E8-F620E949FEF5@strayalpha.com> Message-ID: <383be9ff-4996-44f9-9db2-59c1b2ae36e0@gmail.com> On 10-Mar-25 06:12, touch at strayalpha.com wrote: >> Historians and repositories typically do NOT want the 450th copy of IBM >> manuals (or DEC manuals or whatever). ?Though I note some stuff is >> difficult to find -- so worth doing a web search to see if the manual you >> have is available on-line. ?If not, it may be useful. > > Yeah - but I?ve seen this work to the detriment of those manuals. Also for tech reports. The CHM was very happy to accept a bunch of Amdahl tech reports a few years ago. Brian > > Historians say ?toss it? because the vendor probably has a copy or there are too many copies. > > But vendors disappear and there?s the tragedy of the commons for the latter. E.g., I was looking for manuals for the Fairchild F8 in 1990, just three years after they had been consumed by National Semiconductor, who promptly discarded Fairchild?s tech library*. > > I found a kind soul on a mailing list (before the web took off) who had an intact *set* of manuals on the line *and* an operational development board - which he gave me (and even paid postage). 
I do still have it and am now thinking it might be time to ship it off to the Computer History Museum? > > Joe > > *So that?s where ISI learned it from ;-) > > ? > Dr. Joe Touch, temporal epistemologist > www.strayalpha.com From df at macgui.com Mon Mar 10 09:27:16 2025 From: df at macgui.com (David Finnigan) Date: Mon, 10 Mar 2025 11:27:16 -0500 Subject: [ih] Hello, Internet History group Message-ID: Hello everyone, I just joined the Internet history group today. A brief introduction: Since April 2020 I have been working on implementing the Internet protocols on the earliest models of Apple Macintosh: the Mac 128K and Mac 512K from 1984. The goal is to implement the original triad of Internetworking applications: electronic mail, FTP, and Telnet on the first models of Macintosh. I am using PPP over the serial port as the link layer. I enjoy programming in 68000 assembly language, and I also know 6502 for the Apple II. I first started programming Apple computers around 1999, and vintage computing is today one of my hobbies. While implementing TCP on the early Macintosh, I have a few questions which are mostly on the philosophy of design, evolution, and rationale behind some features or design decisions in TCP/IP, and I'll dole these out in the coming days or weeks. -David Finnigan From b_a_denny at yahoo.com Mon Mar 10 13:09:23 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 10 Mar 2025 20:09:23 +0000 (UTC) Subject: [ih] Hello, Internet History group In-Reply-To: References: Message-ID: <1778381380.2389562.1741637363483@mail.yahoo.com> You might also want to reach out to Jim Mathis.? I think he implemented the first TCP/IP for Apple.? I don't think he is on this mailing list. I am not sure if I still have his current email address but let me know if you can't find a way to reach him. barbara On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan via Internet-history wrote: Hello everyone, I just joined the Internet history group today. A brief introduction: Since April 2020 I have been working on implementing the Internet protocols on the earliest models of Apple Macintosh: the Mac 128K and Mac 512K from 1984. The goal is to implement the original triad of Internetworking applications: electronic mail, FTP, and Telnet on the first models of Macintosh. I am using PPP over the serial port as the link layer. I enjoy programming in 68000 assembly language, and I also know 6502 for the Apple II. I first started programming Apple computers around 1999, and vintage computing is today one of my hobbies. While implementing TCP on the early Macintosh, I have a few questions which are mostly on the philosophy of design, evolution, and rationale behind some features or design decisions in TCP/IP, and I'll dole these out in the coming days or weeks. -David Finnigan -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From karl at iwl.com Mon Mar 10 13:59:04 2025 From: karl at iwl.com (Karl Auerbach) Date: Mon, 10 Mar 2025 13:59:04 -0700 Subject: [ih] Hello, Internet History group In-Reply-To: <1778381380.2389562.1741637363483@mail.yahoo.com> References: <1778381380.2389562.1741637363483@mail.yahoo.com> Message-ID: By-the-way, the folks from Intercon, who did a commercial TCP/IP product for the Mac back in the 1980s, are still around even if the company is not. I think Craig Watkins would know more - I suspect he is still at crw at transcend.com ??? 
--karl-- On 3/10/25 1:09 PM, Barbara Denny via Internet-history wrote: > You might also want to reach out to Jim Mathis.? I think he implemented the first TCP/IP for Apple.? I don't think he is on this mailing list. I am not sure if I still have his current email address but let me know if you can't find a way to reach him. > barbara > On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan via Internet-history wrote: > > Hello everyone, > > I just joined the Internet history group today. A brief introduction: > Since April 2020 I have been working on implementing the Internet > protocols on the earliest models of Apple Macintosh: the Mac 128K and > Mac 512K from 1984. The goal is to implement the original triad of > Internetworking applications: electronic mail, FTP, and Telnet on the > first models of Macintosh. I am using PPP over the serial port as the > link layer. > > I enjoy programming in 68000 assembly language, and I also know 6502 for > the Apple II. I first started programming Apple computers around 1999, > and vintage computing is today one of my hobbies. > > While implementing TCP on the early Macintosh, I have a few questions > which are mostly on the philosophy of design, evolution, and rationale > behind some features or design decisions in TCP/IP, and I'll dole these > out in the coming days or weeks. > > -David Finnigan From touch at strayalpha.com Mon Mar 10 14:20:33 2025 From: touch at strayalpha.com (touch at strayalpha.com) Date: Mon, 10 Mar 2025 14:20:33 -0700 Subject: [ih] Hello, Internet History group In-Reply-To: References: <1778381380.2389562.1741637363483@mail.yahoo.com> Message-ID: <69FBE0A5-C917-4FE6-BF4A-EFFCBB597472@strayalpha.com> FWIW, those Macs did have TCP/IP - using SLIP. I think PPP came much later. But I do recall using it with Fetch (1989) Lots of us also used terminal emulators too, including Kermit - which a friend of mine was porting to the Lisa in summer 1984. That didn?t extend IP into the Mac, though, but could be used to put about 16 different terminal windows on a single Mac (helpful for remote job management on a bunch of Sun workstations that were 1 mile and 10? of snow away at Cornell). Joe > On Mar 10, 2025, at 1:59?PM, Karl Auerbach via Internet-history wrote: > > By-the-way, the folks from Intercon, who did a commercial TCP/IP product for the Mac back in the 1980s, are still around even if the company is not. > > I think Craig Watkins would know more - I suspect he is still at crw at transcend.com > > --karl-- > > On 3/10/25 1:09 PM, Barbara Denny via Internet-history wrote: >> You might also want to reach out to Jim Mathis. I think he implemented the first TCP/IP for Apple. I don't think he is on this mailing list. I am not sure if I still have his current email address but let me know if you can't find a way to reach him. >> barbara >> On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan via Internet-history wrote: >> Hello everyone, >> >> I just joined the Internet history group today. A brief introduction: >> Since April 2020 I have been working on implementing the Internet >> protocols on the earliest models of Apple Macintosh: the Mac 128K and >> Mac 512K from 1984. The goal is to implement the original triad of >> Internetworking applications: electronic mail, FTP, and Telnet on the >> first models of Macintosh. I am using PPP over the serial port as the >> link layer. >> >> I enjoy programming in 68000 assembly language, and I also know 6502 for >> the Apple II. 
I first started programming Apple computers around 1999, >> and vintage computing is today one of my hobbies. >> >> While implementing TCP on the early Macintosh, I have a few questions >> which are mostly on the philosophy of design, evolution, and rationale >> behind some features or design decisions in TCP/IP, and I'll dole these >> out in the coming days or weeks. >> >> -David Finnigan > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Mon Mar 10 15:26:10 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 10 Mar 2025 22:26:10 +0000 (UTC) Subject: [ih] Hello, Internet History group In-Reply-To: <69FBE0A5-C917-4FE6-BF4A-EFFCBB597472@strayalpha.com> References: <1778381380.2389562.1741637363483@mail.yahoo.com> <69FBE0A5-C917-4FE6-BF4A-EFFCBB597472@strayalpha.com> Message-ID: <1676975156.2448274.1741645570162@mail.yahoo.com> I also think PPP much came later.? I was kinda wondering why you chose PPP instead of SLIP given the time frame you mentioned. I was guessing it was expediency, instead of perhaps historical accuracy for the link layer,? but maybe I am wrong. I was doing stuff with SLIP, and not PPP,? in the lab in the 80s but I was using Sun workstations. barbara On Monday, March 10, 2025 at 02:20:48 PM PDT, touch at strayalpha.com wrote: FWIW, those Macs did have TCP/IP - using SLIP. I think PPP came much later. But I do recall using it with Fetch (1989) Lots of us also used terminal emulators too, including Kermit - which a friend of mine was porting to the Lisa in summer 1984. That didn?t extend IP into the Mac, though, but could be used to put about 16 different terminal windows on a single Mac (helpful for remote job management on a bunch of Sun workstations that were 1 mile and 10? of snow away at Cornell). Joe On Mar 10, 2025, at 1:59?PM, Karl Auerbach via Internet-history wrote: By-the-way, the folks from Intercon, who did a commercial TCP/IP product for the Mac back in the 1980s, are still around even if the company is not. I think Craig Watkins would know more - I suspect he is still at crw at transcend.com ??? --karl-- On 3/10/25 1:09 PM, Barbara Denny via Internet-history wrote: ?You might also want to reach out to Jim Mathis.? I think he implemented the first TCP/IP for Apple.? I don't think he is on this mailing list. I am not sure if I still have his current email address but let me know if you can't find a way to reach him. barbara ????On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan via Internet-history wrote: ???Hello everyone, I just joined the Internet history group today. A brief introduction: Since April 2020 I have been working on implementing the Internet protocols on the earliest models of Apple Macintosh: the Mac 128K and Mac 512K from 1984. The goal is to implement the original triad of Internetworking applications: electronic mail, FTP, and Telnet on the first models of Macintosh. I am using PPP over the serial port as the link layer. I enjoy programming in 68000 assembly language, and I also know 6502 for the Apple II. I first started programming Apple computers around 1999, and vintage computing is today one of my hobbies. While implementing TCP on the early Macintosh, I have a few questions which are mostly on the philosophy of design, evolution, and rationale behind some features or design decisions in TCP/IP, and I'll dole these out in the coming days or weeks. 
-David Finnigan -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From bill.n1vux at gmail.com Mon Mar 10 16:35:21 2025 From: bill.n1vux at gmail.com (Bill Ricker) Date: Mon, 10 Mar 2025 19:35:21 -0400 Subject: [ih] Archive of internet-history email (and others) In-Reply-To: <383be9ff-4996-44f9-9db2-59c1b2ae36e0@gmail.com> References: <77b6e319-aca5-40e4-a2bd-b4b3600f0552@3kitty.org> <8E98A7CC-F274-4B6E-82F9-1894E722524D@strayalpha.com> <20250307015909.21ABFBEF5B60@ary.qy> <17359.1741344026@hop.toad.com> <52e88362-0979-4159-81a2-67f8787d0130@dcrocker.net> <58E4E18C-28D2-4EB5-93E1-D0EEC3FD22F4@comcast.net> <347d2e8a-23b7-4222-a081-50d2e43d4c56@gmail.com> <56DA9B21-C374-4AE6-A6E8-F620E949FEF5@strayalpha.com> <383be9ff-4996-44f9-9db2-59c1b2ae36e0@gmail.com> Message-ID: *Good Practices for Acquiring Email Archives : A community guide* Returning to the initial topic of *archiving email lists*, for good and ill, a post in the Galleries/Libraries/Archives/Museums professional Fediverse social platform (GlammR.us a cute retronym!) somehow resurfaced for me today, announcing the November update/release of the above-titled report. Predictably, Digital Archivists have already been studying how to preserve email. https://glammr.us/@redstart_works/113522729293149411 > *Redstart Works* *@redstart_works at glammr.us * *Nov > 21, 2024, 03:03 PM* > The Email Archiving and Preservation Interest Group is excited to announce > a new resource for #archivists and others preserving email! > *Good Practices for Acquiring #Email* explores the initial stages of > acquiring and transferring email to a collecting or host repository. It > provides a series of exercises you can go through to build your email > #preservation program. > *https://osf.io/v4y9w * (1/2) The group was generously supported by funds from the Email Archiving: > Building Capacity and Community regrant program, administered by the > University of Illinois at Urbana-Champaign, and funded by the Andrew W. > Mellon Foundation. Edited by me (2/2) (*Emphasis supplied*.) The *osf.io * link is not a shortener but the actual document page for the guide in the Center for Open Science's repository. -- Bill aka @n1vux at mastodon.radio @BRicker at fosstodon.org @bill-n1vux.bsky.social *inter alia * From df at macgui.com Tue Mar 11 05:39:21 2025 From: df at macgui.com (David Finnigan) Date: Tue, 11 Mar 2025 07:39:21 -0500 Subject: [ih] Hello, Internet History group In-Reply-To: <69FBE0A5-C917-4FE6-BF4A-EFFCBB597472@strayalpha.com> References: <1778381380.2389562.1741637363483@mail.yahoo.com> <69FBE0A5-C917-4FE6-BF4A-EFFCBB597472@strayalpha.com> Message-ID: <96dbe4db42334531b9ec443bca027122@macgui.com> Indeed, PPP came a little later. I chose to implement PPP for practical reasons: it seemed to be more robust and interoperable than SLIP. But yikes! What an over-engineered protocol. Writing and debugging PPP was one of my least favorite parts of this project. -David Finnigan On 10 Mar 2025 4:20 pm, touch at strayalpha.com wrote: > FWIW, those Macs did have TCP/IP - using SLIP. I think PPP came much > later. But I do recall using it with Fetch (1989) > > Lots of us also used terminal emulators too, including Kermit - which > a friend of mine was porting to the Lisa in summer 1984. 
That didn?t > extend IP into the Mac, though, but could be used to put about 16 > different terminal windows on a single Mac (helpful for remote job > management on a bunch of Sun workstations that were 1 mile and 10? > of snow away at Cornell). > > Joe > >> On Mar 10, 2025, at 1:59?PM, Karl Auerbach via Internet-history >> wrote: >> >> By-the-way, the folks from Intercon, who did a commercial TCP/IP >> product for the Mac back in the 1980s, are still around even if the >> company is not. >> >> I think Craig Watkins would know more - I suspect he is still at >> crw at transcend.com >> >> --karl-- >> >> On 3/10/25 1:09 PM, Barbara Denny via Internet-history wrote: >> >>> You might also want to reach out to Jim Mathis. I think he >>> implemented the first TCP/IP for Apple. I don't think he is on >>> this mailing list. I am not sure if I still have his current email >>> address but let me know if you can't find a way to reach him. >>> barbara >>> On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan >>> via Internet-history wrote: >>> Hello everyone, >>> >>> I just joined the Internet history group today. A brief >>> introduction: >>> Since April 2020 I have been working on implementing the Internet >>> protocols on the earliest models of Apple Macintosh: the Mac 128K >>> and >>> Mac 512K from 1984. The goal is to implement the original triad of >>> Internetworking applications: electronic mail, FTP, and Telnet on >>> the >>> first models of Macintosh. I am using PPP over the serial port as >>> the >>> link layer. >>> >>> I enjoy programming in 68000 assembly language, and I also know >>> 6502 for >>> the Apple II. I first started programming Apple computers around >>> 1999, >>> and vintage computing is today one of my hobbies. >>> >>> While implementing TCP on the early Macintosh, I have a few >>> questions >>> which are mostly on the philosophy of design, evolution, and >>> rationale >>> behind some features or design decisions in TCP/IP, and I'll dole >>> these >>> out in the coming days or weeks. >>> >>> -David Finnigan >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From df at macgui.com Tue Mar 11 07:03:44 2025 From: df at macgui.com (David Finnigan) Date: Tue, 11 Mar 2025 09:03:44 -0500 Subject: [ih] Hello, Internet History group In-Reply-To: <002601db928d$6fd3ff10$4f7bfd30$@smith.net> References: <1778381380.2389562.1741637363483@mail.yahoo.com> <002601db928d$6fd3ff10$4f7bfd30$@smith.net> Message-ID: Gaige B. Paulsen was one of the authors of NCSA Telnet at the U of I. Are Amanda Walker or Kurt Baumann still around? -David Finnigan On 11 Mar 2025 8:56 am, ben at smith.net wrote: > Yes, we InterCon folks are still loosely tied together, even thirty > years after we were acquired by PSINet. > > Gaige Paulsen was our CTO overseeing all of our products, he would be > a fantastic resource for you (David) to contact on this question. > > I've reached out to Gaige re this thread, as I'm not sure he is > on-list. > > I was on staff at InterCon from 1993-1996, initially in product > support and later in the technology sales group. 
> > -- Ben (Fairfax County, Virginia) > > > -----Original Message----- > From: Internet-history On > Behalf Of Karl Auerbach via Internet-history > Sent: Monday, March 10, 2025 4:59 PM > To: Barbara Denny ; Internet-history > ; df at macgui.com > Subject: Re: [ih] Hello, Internet History group > > By-the-way, the folks from Intercon, who did a commercial TCP/IP > product for the Mac back in the 1980s, are still around even if the > company is not. > > I think Craig Watkins would know more - I suspect he is still at > crw at transcend.com > > --karl-- > > On 3/10/25 1:09 PM, Barbara Denny via Internet-history wrote: >> You might also want to reach out to Jim Mathis. I think he >> implemented the first TCP/IP for Apple. I don't think he is on this >> mailing list. I am not sure if I still have his current email address >> but let me know if you can't find a way to reach him. >> barbara >> On Monday, March 10, 2025 at 09:27:26 AM PDT, David Finnigan via >> Internet-history wrote: >> >> Hello everyone, >> >> I just joined the Internet history group today. A brief introduction: >> Since April 2020 I have been working on implementing the Internet >> protocols on the earliest models of Apple Macintosh: the Mac 128K and >> Mac 512K from 1984. The goal is to implement the original triad of >> Internetworking applications: electronic mail, FTP, and Telnet on the >> first models of Macintosh. I am using PPP over the serial port as the >> link layer. >> >> I enjoy programming in 68000 assembly language, and I also know 6502 >> for the Apple II. I first started programming Apple computers around >> 1999, and vintage computing is today one of my hobbies. >> >> While implementing TCP on the early Macintosh, I have a few questions >> which are mostly on the philosophy of design, evolution, and rationale >> behind some features or design decisions in TCP/IP, and I'll dole >> these out in the coming days or weeks. >> >> -David Finnigan > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From df at macgui.com Tue Mar 11 07:05:47 2025 From: df at macgui.com (David Finnigan) Date: Tue, 11 Mar 2025 09:05:47 -0500 Subject: [ih] TCP RTT Estimator Message-ID: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> About 2 weeks ago I finally began writing the Round Trip Time (RTT) estimator for my TCP on the Mac. I had previously read many many documents which described the small, evolutionary changes in this important function of TCP: SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) I was interested in knowing the reason behind why this particular algorithm was selected. I found a reference to IEN 177, "Comments on Action Items from the January [1981] Meeting" which stated: "The algorithm described by RSRE at the October 80 meeting should be implemented. It will be included in the next edition of the TCP specification. The current best procedure for retransmission timeout is to measure the time elapsed between sending a data octet with a particular sequence number and receiving an ack that covers that sequence number (thus one does not have to match sends and acks one for one)." I continued looking back in older IEN documents and found in IEN 160, 7 November 1980, it was reported that "Brian Davies discussed some suggestions for performance improvements based on the experience at RSRE. The use of an adaptive retransmission timeout seems to be very helpful. RSRE has experimented with one based on the following: 1. 
For each segment record the sequence number and time sent. 2. For each acknowledgment determine the round trip time (RTT) of the sequence number thereby acknowledged. 3. Compute an Integrated Ack Time (IAT) as follows: IAT = ( ALPHA * IAT ) + RTT 4. Compute a Retransmission Time Estimate (RTE) as follows: RTE = Min [ BOUND, ( BETA * IAT ) ] Where BOUND is an upper bound on the retransmission time and BETA is an adjustment to the IAT to account for variation in the delay. RSRE currently uses ALPHA = 31/32 and BETA = 1.33. [Dave Clark noted that MIT-MULTICS uses the same algorithm but with ALPHA = 4/5 and BETA = 1.5.]" Going still further back to IEN 134 of 29 February 1980, it was reported that "Brian discussed some measurements of TCP conducted by RSRE to various other places in the internet. The performance is regular for round trips from RSRE to various points at UCL, and is consistent with the physical facilities. Once the round trip path includes the SATNET, however, the performance becomes irregular, with a few messages subject to very high delay. Also some unnecessary retransmissions are detected in the tests form RSRE to ISIE and back, these may be due to a too low retransmission threshold." And the topic is discussed in IEN 121, 25 October 1979. It looks like staff at RSRE (Royal Signals and Radar Establishment) took the lead in experimenting with formulae and methods for dynamic estimation of round trip times in TCP. Does anyone here have any further insight or recollection into these experiments for estimating RTT, and the development of the RTT formula? -David Finnigan From mcguire at lssmuseum.org Tue Mar 11 07:26:56 2025 From: mcguire at lssmuseum.org (Dave McGuire) Date: Tue, 11 Mar 2025 10:26:56 -0400 Subject: [ih] Hello, Internet History group In-Reply-To: References: <1778381380.2389562.1741637363483@mail.yahoo.com> <002601db928d$6fd3ff10$4f7bfd30$@smith.net> Message-ID: <143594ea-c20a-4284-83e5-6727e3cd1aac@lssmuseum.org> On 3/11/25 10:03, David Finnigan via Internet-history wrote: > Gaige B. Paulsen was one of the authors of NCSA Telnet at the U of I. > Are Amanda Walker or Kurt Baumann still around? I'm in touch with several of the old Intercon crew, mostly their developers from the early 1990s. Most are doing fine. I was at Digex at the beginning; at that time there was a great deal of technical and social cross-pollination between Digex and Intercon in those days. We also did a lot of interoperability testing with them. They had talented developers and good products. Nowadays, by way of introduction as this is my first post here, I am a contract embedded systems designer and I run the Large Scale Systems Museum (LSSM) in Pittsburgh. Some here may be interested in what we do; check us out or ask me questions. -Dave -- Dave McGuire President/Curator, Large Scale Systems Museum New Kensington, PA From craig at tereschau.net Tue Mar 11 09:01:41 2025 From: craig at tereschau.net (Craig Partridge) Date: Tue, 11 Mar 2025 10:01:41 -0600 Subject: [ih] TCP RTT Estimator In-Reply-To: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> Message-ID: Yes, informally known as the RSRE algorithm. The other detail is the value for Alpha was chosen to be fast on computers of the time and so is a fraction of 1/8. Craig On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < internet-history at elists.isoc.org> wrote: > About 2 weeks ago I finally began writing the Round Trip Time (RTT) > estimator for my TCP on the Mac. 
I had previously read many many > documents which described the small, evolutionary changes in this > important function of TCP: > > SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) > > I was interested in knowing the reason behind why this particular > algorithm was selected. I found a reference to IEN 177, "Comments on > Action Items from the January [1981] Meeting" which stated: > > "The algorithm described by RSRE at the October 80 meeting should be > implemented. It will be included in the next edition of the TCP > specification. > > The current best procedure for retransmission timeout is to measure the > time elapsed between sending a data octet with a particular sequence > number and receiving an ack that covers that sequence number (thus one > does not have to match sends and acks one for one)." > > I continued looking back in older IEN documents and found in IEN 160, 7 > November 1980, it was reported that > > "Brian Davies discussed some suggestions for performance improvements > based on the experience at RSRE. > > The use of an adaptive retransmission timeout seems to be very helpful. > RSRE has experimented with one based on the following: > > 1. For each segment record the sequence number and time sent. > > 2. For each acknowledgment determine the round trip time (RTT) of the > sequence number thereby acknowledged. > > 3. Compute an Integrated Ack Time (IAT) as follows: > > IAT = ( ALPHA * IAT ) + RTT > > 4. Compute a Retransmission Time Estimate (RTE) as follows: > > RTE = Min [ BOUND, ( BETA * IAT ) ] > > Where BOUND is an upper bound on the retransmission time and BETA is an > adjustment to the IAT to account for variation in the delay. > > RSRE currently uses ALPHA = 31/32 and BETA = 1.33. > > [Dave Clark noted that MIT-MULTICS uses the same algorithm but with > ALPHA = 4/5 and BETA = 1.5.]" > > > > Going still further back to IEN 134 of 29 February 1980, it was reported > that > "Brian discussed some measurements of TCP conducted by RSRE to various > other places in the internet. The performance is regular for round > trips from RSRE to various points at UCL, and is consistent with the > physical facilities. Once the round trip path includes the SATNET, > however, the performance becomes irregular, with a few messages subject > to very high delay. Also some unnecessary retransmissions are detected > in the tests form RSRE to ISIE and back, these may be due to a too low > retransmission threshold." > > And the topic is discussed in IEN 121, 25 October 1979. > > It looks like staff at RSRE (Royal Signals and Radar Establishment) took > the lead in experimenting with formulae and methods for dynamic > estimation of round trip times in TCP. Does anyone here have any further > insight or recollection into these experiments for estimating RTT, and > the development of the RTT formula? > > -David Finnigan > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. 
From gregskinner0 at icloud.com Tue Mar 11 09:32:51 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Tue, 11 Mar 2025 09:32:51 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> Message-ID: <82234F6D-F836-4216-9575-6E9565E12DF4@icloud.com> On Mar 11, 2025, at 7:05 AM, David Finnigan via Internet-history wrote: > > It looks like staff at RSRE (Royal Signals and Radar Establishment) took > the lead in experimenting with formulae and methods for dynamic > estimation of round trip times in TCP. Does anyone here have any further > insight or recollection into these experiments for estimating RTT, and > the development of the RTT formula? > > -David Finnigan > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history During the standardization process of RFC 9293, I learned from Wes Eddy, the RFC 9293 editor, about an adaptive retransmission timeout algorithm paper written by Stephen Edge of UCL. [1] While this paper was published after the IENs you mentioned were written, the principles discussed in it may have influenced the RSRE work that eventually led to the RTT estimation strategy and formula that was incorporated into RFC 793. You might also consider Dave Mills' remarks in Section 3 of RFC 889. Although it was also published after the IENs you mentioned were written, he may have had some insight into why additional work was needed to generally improve TCP performance. --gregbo [1] https://dl.acm.org/doi/pdf/10.1145/800056.802085 From df at macgui.com Tue Mar 11 09:34:19 2025 From: df at macgui.com (David Finnigan) Date: Tue, 11 Mar 2025 11:34:19 -0500 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> Message-ID: Craig, I know that you did a lot of important work and (co)authored some papers on this subject. I am mostly interested in why that smoothing algorithm was initially chosen, and what consideration (if any) had been given to other methods of estimating round trip time for the purpose of computing an RTO. -David Finnigan On 11 Mar 2025 11:01 am, Craig Partridge wrote: > Yes, informally known as the RSRE algorithm. > > The other detail is the value for Alpha was chosen to be fast on > computers of the time and so is a fraction of 1/8. > > Craig > From jeanjour at comcast.net Tue Mar 11 10:51:34 2025 From: jeanjour at comcast.net (John Day) Date: Tue, 11 Mar 2025 13:51:34 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> Message-ID: <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> This algorithm and variations of it are used throughout science in many different places, sometimes called convolution to compute a moving average. It is far from unique to TCP or even networking. Take care, John > On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history wrote: > > Yes, informally known as the RSRE algorithm. > > The other detail is the value for Alpha was chosen to be fast on computers > of the time and so is a fraction of 1/8. > > Craig > > On Tue, Mar 11, 2025 at 8:05 AM David Finnigan via Internet-history < > internet-history at elists.isoc.org> wrote: > >> About 2 weeks ago I finally began writing the Round Trip Time (RTT) >> estimator for my TCP on the Mac. 
I had previously read many many >> documents which described the small, evolutionary changes in this >> important function of TCP: >> >> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) >> >> I was interested in knowing the reason behind why this particular >> algorithm was selected. I found a reference to IEN 177, "Comments on >> Action Items from the January [1981] Meeting" which stated: >> >> "The algorithm described by RSRE at the October 80 meeting should be >> implemented. It will be included in the next edition of the TCP >> specification. >> >> The current best procedure for retransmission timeout is to measure the >> time elapsed between sending a data octet with a particular sequence >> number and receiving an ack that covers that sequence number (thus one >> does not have to match sends and acks one for one)." >> >> I continued looking back in older IEN documents and found in IEN 160, 7 >> November 1980, it was reported that >> >> "Brian Davies discussed some suggestions for performance improvements >> based on the experience at RSRE. >> >> The use of an adaptive retransmission timeout seems to be very helpful. >> RSRE has experimented with one based on the following: >> >> 1. For each segment record the sequence number and time sent. >> >> 2. For each acknowledgment determine the round trip time (RTT) of the >> sequence number thereby acknowledged. >> >> 3. Compute an Integrated Ack Time (IAT) as follows: >> >> IAT = ( ALPHA * IAT ) + RTT >> >> 4. Compute a Retransmission Time Estimate (RTE) as follows: >> >> RTE = Min [ BOUND, ( BETA * IAT ) ] >> >> Where BOUND is an upper bound on the retransmission time and BETA is an >> adjustment to the IAT to account for variation in the delay. >> >> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. >> >> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with >> ALPHA = 4/5 and BETA = 1.5.]" >> >> >> >> Going still further back to IEN 134 of 29 February 1980, it was reported >> that >> "Brian discussed some measurements of TCP conducted by RSRE to various >> other places in the internet. The performance is regular for round >> trips from RSRE to various points at UCL, and is consistent with the >> physical facilities. Once the round trip path includes the SATNET, >> however, the performance becomes irregular, with a few messages subject >> to very high delay. Also some unnecessary retransmissions are detected >> in the tests form RSRE to ISIE and back, these may be due to a too low >> retransmission threshold." >> >> And the topic is discussed in IEN 121, 25 October 1979. >> >> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >> the lead in experimenting with formulae and methods for dynamic >> estimation of round trip times in TCP. Does anyone here have any further >> insight or recollection into these experiments for estimating RTT, and >> the development of the RTT formula? >> >> -David Finnigan >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. 
> -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From michaelgreenwald58 at gmail.com Tue Mar 11 11:07:15 2025 From: michaelgreenwald58 at gmail.com (Michael Greenwald) Date: Tue, 11 Mar 2025 11:07:15 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> Message-ID: <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> If I am remembering correctly (I'm on the move, and don't have access to anything other than email to checkup anything at the moment), the RTT estimator is not exactly the standard algorithm for computing moving average. I think it separately estimated RTT average and RTT variance, and added the 2 together (which meant the estimate actually increased, briefly, if the RTT was dropping rapidly.) Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) to eliminate individual round-trips that may have been computed by a duplicate or missing packet? On 3/11/25 10:51 AM, John Day via Internet-history wrote: > This algorithm and variations of it are used throughout science in many different places, sometimes called convolution to compute a moving average. > > It is far from unique to TCP or even networking. > > Take care, > John > >> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history wrote: >> >> Yes, informally known as the RSRE algorithm. >> >> The other detail is the value for Alpha was chosen to be fast on computers >> of the time and so is a fraction of 1/8. >> >> Craig >> >> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) >>> estimator for my TCP on the Mac. I had previously read many many >>> documents which described the small, evolutionary changes in this >>> important function of TCP: >>> >>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) >>> >>> I was interested in knowing the reason behind why this particular >>> algorithm was selected. I found a reference to IEN 177, "Comments on >>> Action Items from the January [1981] Meeting" which stated: >>> >>> "The algorithm described by RSRE at the October 80 meeting should be >>> implemented. It will be included in the next edition of the TCP >>> specification. >>> >>> The current best procedure for retransmission timeout is to measure the >>> time elapsed between sending a data octet with a particular sequence >>> number and receiving an ack that covers that sequence number (thus one >>> does not have to match sends and acks one for one)." >>> >>> I continued looking back in older IEN documents and found in IEN 160, 7 >>> November 1980, it was reported that >>> >>> "Brian Davies discussed some suggestions for performance improvements >>> based on the experience at RSRE. >>> >>> The use of an adaptive retransmission timeout seems to be very helpful. >>> RSRE has experimented with one based on the following: >>> >>> 1. For each segment record the sequence number and time sent. >>> >>> 2. For each acknowledgment determine the round trip time (RTT) of the >>> sequence number thereby acknowledged. >>> >>> 3. Compute an Integrated Ack Time (IAT) as follows: >>> >>> IAT = ( ALPHA * IAT ) + RTT >>> >>> 4. 
Compute a Retransmission Time Estimate (RTE) as follows: >>> >>> RTE = Min [ BOUND, ( BETA * IAT ) ] >>> >>> Where BOUND is an upper bound on the retransmission time and BETA is an >>> adjustment to the IAT to account for variation in the delay. >>> >>> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. >>> >>> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with >>> ALPHA = 4/5 and BETA = 1.5.]" >>> >>> >>> >>> Going still further back to IEN 134 of 29 February 1980, it was reported >>> that >>> "Brian discussed some measurements of TCP conducted by RSRE to various >>> other places in the internet. The performance is regular for round >>> trips from RSRE to various points at UCL, and is consistent with the >>> physical facilities. Once the round trip path includes the SATNET, >>> however, the performance becomes irregular, with a few messages subject >>> to very high delay. Also some unnecessary retransmissions are detected >>> in the tests form RSRE to ISIE and back, these may be due to a too low >>> retransmission threshold." >>> >>> And the topic is discussed in IEN 121, 25 October 1979. >>> >>> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >>> the lead in experimenting with formulae and methods for dynamic >>> estimation of round trip times in TCP. Does anyone here have any further >>> insight or recollection into these experiments for estimating RTT, and >>> the development of the RTT formula? >>> >>> -David Finnigan >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ >>> >> >> -- >> ***** >> Craig Partridge's email account for professional society activities and >> mailing lists. >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ From vgcerf at gmail.com Tue Mar 11 11:22:54 2025 From: vgcerf at gmail.com (vinton cerf) Date: Tue, 11 Mar 2025 14:22:54 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> Message-ID: one of the major participants in the Internet work in the UK in early days was John Laws at RSRE. I have cc'd him on this note. John, any comments on RSRE development of RTT computations? vint On Tue, Mar 11, 2025 at 2:07?PM Michael Greenwald via Internet-history < internet-history at elists.isoc.org> wrote: > If I am remembering correctly (I'm on the move, and don't have access to > anything other than email to checkup anything at the moment), the RTT > estimator is not exactly the standard algorithm for computing moving > average. I think it separately estimated RTT average and RTT variance, > and added the 2 together (which meant the estimate actually increased, > briefly, if the RTT was dropping rapidly.) > Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) > to eliminate individual round-trips that may have been computed by a > duplicate or missing packet? 
> > On 3/11/25 10:51 AM, John Day via Internet-history wrote: > > This algorithm and variations of it are used throughout science in many > different places, sometimes called convolution to compute a moving average. > > > > It is far from unique to TCP or even networking. > > > > Take care, > > John > > > >> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Yes, informally known as the RSRE algorithm. > >> > >> The other detail is the value for Alpha was chosen to be fast on > computers > >> of the time and so is a fraction of 1/8. > >> > >> Craig > >> > >> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < > >> internet-history at elists.isoc.org> wrote: > >> > >>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) > >>> estimator for my TCP on the Mac. I had previously read many many > >>> documents which described the small, evolutionary changes in this > >>> important function of TCP: > >>> > >>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) > >>> > >>> I was interested in knowing the reason behind why this particular > >>> algorithm was selected. I found a reference to IEN 177, "Comments on > >>> Action Items from the January [1981] Meeting" which stated: > >>> > >>> "The algorithm described by RSRE at the October 80 meeting should be > >>> implemented. It will be included in the next edition of the TCP > >>> specification. > >>> > >>> The current best procedure for retransmission timeout is to measure the > >>> time elapsed between sending a data octet with a particular sequence > >>> number and receiving an ack that covers that sequence number (thus one > >>> does not have to match sends and acks one for one)." > >>> > >>> I continued looking back in older IEN documents and found in IEN 160, 7 > >>> November 1980, it was reported that > >>> > >>> "Brian Davies discussed some suggestions for performance improvements > >>> based on the experience at RSRE. > >>> > >>> The use of an adaptive retransmission timeout seems to be very helpful. > >>> RSRE has experimented with one based on the following: > >>> > >>> 1. For each segment record the sequence number and time sent. > >>> > >>> 2. For each acknowledgment determine the round trip time (RTT) of the > >>> sequence number thereby acknowledged. > >>> > >>> 3. Compute an Integrated Ack Time (IAT) as follows: > >>> > >>> IAT = ( ALPHA * IAT ) + RTT > >>> > >>> 4. Compute a Retransmission Time Estimate (RTE) as follows: > >>> > >>> RTE = Min [ BOUND, ( BETA * IAT ) ] > >>> > >>> Where BOUND is an upper bound on the retransmission time and BETA is an > >>> adjustment to the IAT to account for variation in the delay. > >>> > >>> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. > >>> > >>> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with > >>> ALPHA = 4/5 and BETA = 1.5.]" > >>> > >>> > >>> > >>> Going still further back to IEN 134 of 29 February 1980, it was > reported > >>> that > >>> "Brian discussed some measurements of TCP conducted by RSRE to various > >>> other places in the internet. The performance is regular for round > >>> trips from RSRE to various points at UCL, and is consistent with the > >>> physical facilities. Once the round trip path includes the SATNET, > >>> however, the performance becomes irregular, with a few messages subject > >>> to very high delay. 
Also some unnecessary retransmissions are detected > >>> in the tests form RSRE to ISIE and back, these may be due to a too low > >>> retransmission threshold." > >>> > >>> And the topic is discussed in IEN 121, 25 October 1979. > >>> > >>> It looks like staff at RSRE (Royal Signals and Radar Establishment) > took > >>> the lead in experimenting with formulae and methods for dynamic > >>> estimation of round trip times in TCP. Does anyone here have any > further > >>> insight or recollection into these experiments for estimating RTT, and > >>> the development of the RTT formula? > >>> > >>> -David Finnigan > >>> -- > >>> Internet-history mailing list > >>> Internet-history at elists.isoc.org > >>> > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ > >>> > >> > >> -- > >> ***** > >> Craig Partridge's email account for professional society activities and > >> mailing lists. > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Tue Mar 11 11:25:57 2025 From: jeanjour at comcast.net (John Day) Date: Tue, 11 Mar 2025 14:25:57 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> Message-ID: That could be. I have encountered it in other places (outside computing) prior to its use in TCP. Are you speaking of estimating RTT or the value of the retransmission timer? They aren?t the same. The variance is a mean variance, rather than a standard deviation for obvious reasons. John > On Mar 11, 2025, at 14:07, Michael Greenwald via Internet-history wrote: > > If I am remembering correctly (I'm on the move, and don't have access to anything other than email to checkup anything at the moment), the RTT estimator is not exactly the standard algorithm for computing moving average. I think it separately estimated RTT average and RTT variance, and added the 2 together (which meant the estimate actually increased, briefly, if the RTT was dropping rapidly.) > Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) to eliminate individual round-trips that may have been computed by a duplicate or missing packet? > > On 3/11/25 10:51 AM, John Day via Internet-history wrote: >> This algorithm and variations of it are used throughout science in many different places, sometimes called convolution to compute a moving average. >> >> It is far from unique to TCP or even networking. >> >> Take care, >> John >> >>> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history wrote: >>> >>> Yes, informally known as the RSRE algorithm. >>> >>> The other detail is the value for Alpha was chosen to be fast on computers >>> of the time and so is a fraction of 1/8. 
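The variation asked about just above is most likely Karn's algorithm (Phil Karn and Craig Partridge, SIGCOMM '87): RTT samples from segments that were retransmitted are discarded, because the ACK cannot be matched unambiguously to a particular transmission. The separate average-plus-variation estimate is Van Jacobson's 1988 refinement, in which the "variance" is really a smoothed mean deviation and the timeout is SRTT plus a multiple of it; that scheme is what RFC 6298 later standardized. A rough sketch in C under those assumptions, using the usual gains of 1/8 and 1/4 and a 4x deviation term; the units, floor, backoff note, and names are illustrative, not taken from this thread.

/* Sketch of a mean-plus-deviation RTO estimator (Jacobson 1988 style, as in
 * RFC 6298). Karn's rule is applied by the caller: only segments that were
 * transmitted exactly once contribute a sample. Times are in clock ticks.
 */
#include <stdlib.h>   /* labs() */

struct rto_est {
    long srtt;        /* smoothed round-trip time */
    long rttvar;      /* smoothed mean deviation (not a standard deviation) */
    long rto;         /* retransmission timeout */
    int  have_sample; /* nonzero once the first measurement has arrived */
};

void rto_sample(struct rto_est *e, long r)
{
    if (!e->have_sample) {            /* first-sample rule, per RFC 6298 */
        e->srtt   = r;
        e->rttvar = r / 2;
        e->have_sample = 1;
    } else {
        /* Deviation is measured against the old SRTT, then both terms are
         * smoothed with power-of-two gains (1/4 and 1/8). */
        long dev = labs(r - e->srtt);
        e->rttvar += (dev - e->rttvar) / 4;
        e->srtt   += (r - e->srtt) / 8;
    }
    /* The deviation term keeps the timeout above the average when delay is
     * jittery, which is why the estimate can rise briefly even while the
     * measured RTT is dropping. */
    e->rto = e->srtt + 4 * e->rttvar;
    if (e->rto < 1) e->rto = 1;       /* floor of one tick (assumption) */
}

/* On a retransmission timeout, typical practice is to back the timer off
 * (for example, doubling e->rto) rather than feed the ambiguous sample in. */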
>>> >>> Craig >>> >>> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) >>>> estimator for my TCP on the Mac. I had previously read many many >>>> documents which described the small, evolutionary changes in this >>>> important function of TCP: >>>> >>>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) >>>> >>>> I was interested in knowing the reason behind why this particular >>>> algorithm was selected. I found a reference to IEN 177, "Comments on >>>> Action Items from the January [1981] Meeting" which stated: >>>> >>>> "The algorithm described by RSRE at the October 80 meeting should be >>>> implemented. It will be included in the next edition of the TCP >>>> specification. >>>> >>>> The current best procedure for retransmission timeout is to measure the >>>> time elapsed between sending a data octet with a particular sequence >>>> number and receiving an ack that covers that sequence number (thus one >>>> does not have to match sends and acks one for one)." >>>> >>>> I continued looking back in older IEN documents and found in IEN 160, 7 >>>> November 1980, it was reported that >>>> >>>> "Brian Davies discussed some suggestions for performance improvements >>>> based on the experience at RSRE. >>>> >>>> The use of an adaptive retransmission timeout seems to be very helpful. >>>> RSRE has experimented with one based on the following: >>>> >>>> 1. For each segment record the sequence number and time sent. >>>> >>>> 2. For each acknowledgment determine the round trip time (RTT) of the >>>> sequence number thereby acknowledged. >>>> >>>> 3. Compute an Integrated Ack Time (IAT) as follows: >>>> >>>> IAT = ( ALPHA * IAT ) + RTT >>>> >>>> 4. Compute a Retransmission Time Estimate (RTE) as follows: >>>> >>>> RTE = Min [ BOUND, ( BETA * IAT ) ] >>>> >>>> Where BOUND is an upper bound on the retransmission time and BETA is an >>>> adjustment to the IAT to account for variation in the delay. >>>> >>>> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. >>>> >>>> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with >>>> ALPHA = 4/5 and BETA = 1.5.]" >>>> >>>> >>>> >>>> Going still further back to IEN 134 of 29 February 1980, it was reported >>>> that >>>> "Brian discussed some measurements of TCP conducted by RSRE to various >>>> other places in the internet. The performance is regular for round >>>> trips from RSRE to various points at UCL, and is consistent with the >>>> physical facilities. Once the round trip path includes the SATNET, >>>> however, the performance becomes irregular, with a few messages subject >>>> to very high delay. Also some unnecessary retransmissions are detected >>>> in the tests form RSRE to ISIE and back, these may be due to a too low >>>> retransmission threshold." >>>> >>>> And the topic is discussed in IEN 121, 25 October 1979. >>>> >>>> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >>>> the lead in experimenting with formulae and methods for dynamic >>>> estimation of round trip times in TCP. Does anyone here have any further >>>> insight or recollection into these experiments for estimating RTT, and >>>> the development of the RTT formula? 
>>>> >>>> -David Finnigan >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ >>>> >>> >>> -- >>> ***** >>> Craig Partridge's email account for professional society activities and >>> mailing lists. >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Tue Mar 11 12:23:33 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 11 Mar 2025 19:23:33 +0000 (UTC) Subject: [ih] TCP RTT Estimator References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> Message-ID: <1676873250.2847239.1741721013305@mail.yahoo.com> (Having trouble getting this posted so shortening the message. Hope I don't create duplicates.) While we are on the topic.. I have always been curious how packet radio may,? or may not, have impacted the calculations.? If anyone finds any info while looking into this,? I would appreciate it if you passed on where to find this discussion. barbara On Tuesday, March 11, 2025 at 11:23:21 AM PDT, vinton cerf via Internet-history wrote: one of the major participants in the Internet work in the UK in early days was John Laws at RSRE. I have cc'd him on this note. John, any comments on RSRE development of RTT computations? vint On Tue, Mar 11, 2025 at 2:07?PM Michael Greenwald via Internet-history < internet-history at elists.isoc.org> wrote: > If I am remembering correctly (I'm on the move, and don't have access to > anything other than email to checkup anything at the moment), the RTT > estimator is not exactly the standard algorithm for computing moving > average. I think it separately estimated RTT average and RTT variance, > and added the 2 together (which meant the estimate actually increased, > briefly, if the RTT was dropping rapidly.) > Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) > to eliminate individual round-trips that may have been computed by a > duplicate or missing packet? > > On 3/11/25 10:51 AM, John Day via Internet-history wrote: > > This algorithm and variations of it are used throughout science in many > different places, sometimes called convolution to compute a moving average. > > > > It is far from unique to TCP or even networking. > > > > Take care, > > John > > > >> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Yes, informally known as the RSRE algorithm. > >> > >> The other detail is the value for Alpha was chosen to be fast on > computers > >> of the time and so is a fraction of 1/8. > >> > >> Craig > >> > >> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < > >> internet-history at elists.isoc.org> wrote: > >> > >>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) > >>> estimator for my TCP on the Mac. 
I had previously read many many > >>> documents which described the small, evolutionary changes in this > >>> important function of TCP: > >>> > >>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) > >>> > >>> I was interested in knowing the reason behind why this particular > >>> algorithm was selected. I found a reference to IEN 177, "Comments on > >>> Action Items from the January [1981] Meeting" which stated: > >>> > >>> "The algorithm described by RSRE at the October 80 meeting should be > >>> implemented.? It will be included in the next edition of the TCP > >>> specification. > >>> > >>> The current best procedure for retransmission timeout is to measure the > >>> time elapsed between sending a data octet with a particular sequence > >>> number and receiving an ack that covers that sequence number (thus one > >>> does not have to match sends and acks one for one)." > >>> > >>> I continued looking back in older IEN documents and found in IEN 160, 7 > >>> November 1980, it was reported that > >>> > >>> "Brian Davies discussed some suggestions for performance improvements > >>> based on the experience at RSRE ??Rest of message deleted From jack at 3kitty.org Tue Mar 11 13:42:48 2025 From: jack at 3kitty.org (Jack Haverty) Date: Tue, 11 Mar 2025 13:42:48 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> Message-ID: <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> On 3/11/25 07:05, David Finnigan via Internet-history wrote: > It looks like staff at RSRE (Royal Signals and Radar Establishment) took > the lead in experimenting with formulae and methods for dynamic > estimation of round trip times in TCP. Does anyone here have any further > insight or recollection into these experiments for estimating RTT, and > the development of the RTT formula? > IMHO the key factor was the state of the Internet at that time (1980ish).? The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution.?? The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE.?? They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. The ARPANET was a packet network, but it presented a virtual circuit service to its users.? Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact.?? So RTT values could be set very high - I recall one common choice was 3 seconds. For the UK users however, things were quite different.? The "core gateways" at the time were very limited by their hardware configurations.? They didn't have much buffering space.?? So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection.? IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. 
I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement.? They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest.?? But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times".? That made little sense of course. After digging around with XNET, SNMP, etc., we discovered the cause.? That ARPANET/SATNET gateway had very few buffers.? The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them.? So the buffers filled up and datagrams were discarded -- just as expected. During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size).?? Buffers in the gateway would overflow and some of those datagrams were lost.? The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. Conversely, during "busy times", the traffic through the ARPANET would be spread out in time.?? With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead.? Result - faster FTP transfers. AFAIK, none of this behavior was ever analyzed mathematically.? The mathematical model of an Internet seemed beyond the capability of queuing theory et al.? Progress was very much driven by experimentation and "let's try this" activity. The solution, or actually workaround, was to improve the gateway's hardware.? More memory meant more buffering was available.?? That principle seems to have continued even today, but has caused other problems.? Google "buffer bloat" if you're curious. As far as I remember, there weren't any such problems reported with the various Packet Radio networks.?? They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. Jack Haverty -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From dhc at dcrocker.net Tue Mar 11 13:46:59 2025 From: dhc at dcrocker.net (Dave Crocker) Date: Tue, 11 Mar 2025 20:46:59 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: <1676873250.2847239.1741721013305@mail.yahoo.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> Message-ID: > I have always been curious how packet radio may,? or may not, have impacted the calculations. At the IETF, the presentation about an actual implementation of IP over Avian Carrier noted that it provided an excellent test of this algorithm. 
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
bluesky: @dcrocker.bsky.social
mast: @dcrocker at mastodon.social

From b_a_denny at yahoo.com  Tue Mar 11 14:02:00 2025
From: b_a_denny at yahoo.com (Barbara Denny)
Date: Tue, 11 Mar 2025 21:02:00 +0000 (UTC)
Subject: [ih] TCP RTT Estimator
In-Reply-To:
References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com>
 <1676873250.2847239.1741721013305@mail.yahoo.com>
Message-ID: <865890042.2905597.1741726920385@mail.yahoo.com>

I do view packet radio as a stress test for the protocol(s). I think it is
important to consider all the different dynamics that might come into play
with the networks.

I still need to really read Jack's message but there were also military
testbeds that had packet radio networks. I don't know what these users were
trying to do. I was only involved if they experienced problems involving the
network. My role was to figure out why and then get it fixed (with whatever
contractor that was working that part of the system, including BBN).

barbara

On Tuesday, March 11, 2025 at 01:49:25 PM PDT, Dave Crocker wrote:

I have always been curious how packet radio may, or may not, have impacted
the calculations.

At the IETF, the presentation about an actual implementation of IP over
Avian Carrier noted that it provided an excellent test of this algorithm.

d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net
bluesky: @dcrocker.bsky.social
mast: @dcrocker at mastodon.social

From touch at strayalpha.com  Tue Mar 11 14:05:46 2025
From: touch at strayalpha.com (touch at strayalpha.com)
Date: Tue, 11 Mar 2025 14:05:46 -0700
Subject: [ih] TCP RTT Estimator
In-Reply-To: <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu>
References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com>
 <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net>
 <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu>
Message-ID: <25DB4858-43CC-4916-AC8F-1D32F2ABD8E0@strayalpha.com>

> On Mar 11, 2025, at 11:07 AM, Michael Greenwald via Internet-history wrote:
>
> If I am remembering correctly (I'm on the move, and don't have access to anything other than email to checkup anything at the moment), the RTT estimator is not exactly the standard algorithm for computing moving average. I think it separately estimated RTT average and RTT variance, and added the 2 together (which meant the estimate actually increased, briefly, if the RTT was dropping rapidly.)
> Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) to eliminate individual round-trips that may have been computed by a duplicate or missing packet?

It's an inverse exponential decay, similar to how a capacitor discharges over
time. Each time measurement contributes (exponentially) less over time.

A true moving average typically has a finite window and computes a mean over
that window, in which each element counts the same until it doesn't count at
all.

Joe

--
Dr. Joe Touch, temporal epistemologist
www.strayalpha.com

From vgcerf at gmail.com  Tue Mar 11 14:10:45 2025
From: vgcerf at gmail.com (vinton cerf)
Date: Tue, 11 Mar 2025 17:10:45 -0400
Subject: [ih] TCP RTT Estimator
In-Reply-To: <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org>
References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com>
 <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org>
Message-ID:

Jack's recollections mirror my own.
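A small numerical illustration of the distinction Joe draws above (illustrative code only, not from any TCP): with gain g = 1 - ALPHA, the exponentially weighted estimator gives a sample taken k round trips ago weight g*(1-g)^k, so old samples fade geometrically but never vanish, while an N-sample moving average weights the last N samples equally and ignores everything older.

    # Weight of a sample k RTTs in the past: EWMA versus a 4-sample window.
    GAIN = 1.0 / 8.0    # 1 - ALPHA, the weight given to the newest sample
    WINDOW = 4          # length of the sliding window used for comparison

    for k in range(8):
        ewma_weight = GAIN * (1.0 - GAIN) ** k               # geometric decay, never zero
        window_weight = 1.0 / WINDOW if k < WINDOW else 0.0  # equal weight, then nothing
        print(k, round(ewma_weight, 4), window_weight)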
v On Tue, Mar 11, 2025 at 4:43?PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > On 3/11/25 07:05, David Finnigan via Internet-history wrote: > > It looks like staff at RSRE (Royal Signals and Radar Establishment) took > > the lead in experimenting with formulae and methods for dynamic > > estimation of round trip times in TCP. Does anyone here have any further > > insight or recollection into these experiments for estimating RTT, and > > the development of the RTT formula? > > > > IMHO the key factor was the state of the Internet at that time > (1980ish). The ARPANET was the primary "backbone" of The Internet in > what I think of as the "fuzzy peach" stage of Internet evolution. The > ARPANET was the peach, and sites on the ARPANET were adding LANs of some > type and connecting them with some kind of gateway to the ARPANET IMP. > > The exception to that structure was Europe, especially Peter Kirstein's > group at UCL and John Laws group at RSRE. They were interconnected > somehow in the UK, but their access to the Internet was through a > connection to a SATNET node (aka SIMP) at Goonhilly Downs. > > SATNET was connected to the ARPANET through one of the "core gateways" > that we at BBN were responsible to run as a 24x7 operational network. > > The ARPANET was a packet network, but it presented a virtual circuit > service to its users. Everything that went in one end came out the > other end, in order, with nothing missing, and nothing duplicated. TCPs > at a US site talking to TCPs at another US site didn't have much work to > do, since everything they sent would be received intact. So RTT values > could be set very high - I recall one common choice was 3 seconds. > > For the UK users however, things were quite different. The "core > gateways" at the time were very limited by their hardware > configurations. They didn't have much buffering space. So they did > drop datagrams, which of course had to be retransmitted by the host at > the end of the TCP connection. IIRC, at one point the ARPANET/SATNET > gateway had exactly one datagram of buffer space. > > I don't recall anyone ever saying it, but I suspect that situation > caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, > and try to figure out how best to deal with their skinny pipe across the > Atlantic. > > At one point, someone (from UCL or RSRE, can't remember) reported an > unexpected measurement. They did frequent file transfers, often trying > to "time" their transfers to happen at a time of day when UK and US > traffic flows would be lowest. But they observed that their transfers > during "busy times" went much faster than similar transfers during > "quiet times". That made little sense of course. > > After digging around with XNET, SNMP, etc., we discovered the cause. > That ARPANET/SATNET gateway had very few buffers. The LANs at users' > sites and the ARPANET path could deliver datagrams to that gateway > faster than SATNET could take them. So the buffers filled up and > datagrams were discarded -- just as expected. > > During "quiet times", the TCP connection would deliver datagrams to the > gateway in bursts (whatever the TCPs negotiated as a Window size). > Buffers in the gateway would overflow and some of those datagrams were > lost. The sending TCP would retransmit, but only after the RTT timer > expired, which was often set to 3 seconds. Result - slow FTPs. > > Conversely, during "busy times", the traffic through the ARPANET would > be spread out in time. 
With other users' traffic flows present, > chances were better that someone else's datagram would be dropped > instead. Result - faster FTP transfers. > > AFAIK, none of this behavior was ever analyzed mathematically. The > mathematical model of an Internet seemed beyond the capability of > queuing theory et al. Progress was very much driven by experimentation > and "let's try this" activity. > > The solution, or actually workaround, was to improve the gateway's > hardware. More memory meant more buffering was available. That > principle seems to have continued even today, but has caused other > problems. Google "buffer bloat" if you're curious. > > As far as I remember, there weren't any such problems reported with the > various Packet Radio networks. They tended to be used only > occasionally, for tests and demos, where the SATNET linkage was used > almost daily. > > The Laws and Kirstein groups in the UK were, IMHO, the first "real" > users of TCP on The Internet, exploring paths not protected by ARPANET > mechanisms. > > Jack Haverty > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From b_a_denny at yahoo.com Tue Mar 11 14:48:53 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 11 Mar 2025 21:48:53 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> Message-ID: <1333609295.2931332.1741729733743@mail.yahoo.com> I don't recall ever hearing, or reading, about TCP transport requirements from the underlying network but I wasn't there in the early days of TCP (70s).? I have trouble thinking the problem with the congestion assumption? wasn't brought up early but I certainly don't know. barbara On Tuesday, March 11, 2025 at 02:10:26 PM PDT, John Day wrote: I would disagree. The Transport Layer assumes a minimal service from the layers below (actually all layers do). If the underlying layer doesn?t meet that normally, then measures are needed to bring the service up to the expected level.? Given that the diameter of the net now is about 20 or so and probably back then 5 or 6. Packet radio constituted a small fraction of the lower layers that the packet had to cross. Assuming packet radio didn?t have to do anything had the tail wagging the dog. Of course the example some would point to was TCP congestion control assuming lost packets were due to congestion. That was a dumb assumption and didn?t take a systems view of the problem. (Of course, it wasn?t the only dumb thing in that design, it also maximized retransmissions.) Take care, John Day > On Mar 11, 2025, at 17:02, Barbara Denny via Internet-history wrote: > > I do view packet radio as a stress test for the protocol(s).? I think it is important to consider all the different dynamics that might come into play with the networks. > I still need to really read Jack's message but there were also military testbeds that had packet radio networks.? I don't know what these users were trying to do. I was only involved if they experienced problems involving the network. My role was? to figure out why and then get it fixed (with whatever contractor that was working that part of the system, including BBN). > barbara > > > > >? ? 
On Tuesday, March 11, 2025 at 01:49:25 PM PDT, Dave Crocker wrote:? > > > I have always been curious how packet radio may,? or may not, have impacted the calculations.? > > > > > At the IETF, the presentation about an actual implementation of IP over Avian Carrier noted that it provided an excellent test of this algorithm. > > > d/ > -- > Dave Crocker > > Brandenburg InternetWorking > bbiw.net > bluesky: @dcrocker.bsky.social > mast: @dcrocker at mastodon.social? > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From michaelgreenwald58 at gmail.com Tue Mar 11 15:06:10 2025 From: michaelgreenwald58 at gmail.com (Michael Greenwald) Date: Tue, 11 Mar 2025 15:06:10 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> Message-ID: "Are you speaking of estimating RTT or the value of the retransmission timer? They aren?t the same." You are, of course, right that they are different. And it is totally possible that I was/am conflating the two in my recollection. And, I agree that I too had seen and used exponential decay to approximate a rolling average rate before I used it for RTT estimation in TCP. That was before the changes referred to here (which is why/how I remember that it differed from "standard" exponential decay -- I changed my code). My recollection, for what it's worth, was that this (separate mean and variance estimates, combined by weighted addition) was used for an estimation of round-trip-time, and a simple multiple of that estimate was used for the retransmission timer. But your question makes me seriously reconsider whether my recollection is accurate. On 3/11/25 11:25 AM, John Day wrote: > That could be. I have encountered it in other places (outside computing) prior to its use in TCP. > > Are you speaking of estimating RTT or the value of the retransmission timer? They aren?t the same. > > The variance is a mean variance, rather than a standard deviation for obvious reasons. > > John > >> On Mar 11, 2025, at 14:07, Michael Greenwald via Internet-history wrote: >> >> If I am remembering correctly (I'm on the move, and don't have access to anything other than email to checkup anything at the moment), the RTT estimator is not exactly the standard algorithm for computing moving average. I think it separately estimated RTT average and RTT variance, and added the 2 together (which meant the estimate actually increased, briefly, if the RTT was dropping rapidly.) >> Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) to eliminate individual round-trips that may have been computed by a duplicate or missing packet? >> >> On 3/11/25 10:51 AM, John Day via Internet-history wrote: >>> This algorithm and variations of it are used throughout science in many different places, sometimes called convolution to compute a moving average. >>> >>> It is far from unique to TCP or even networking. >>> >>> Take care, >>> John >>> >>>> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history wrote: >>>> >>>> Yes, informally known as the RSRE algorithm. >>>> >>>> The other detail is the value for Alpha was chosen to be fast on computers >>>> of the time and so is a fraction of 1/8. 
>>>> >>>> Craig >>>> >>>> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) >>>>> estimator for my TCP on the Mac. I had previously read many many >>>>> documents which described the small, evolutionary changes in this >>>>> important function of TCP: >>>>> >>>>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) >>>>> >>>>> I was interested in knowing the reason behind why this particular >>>>> algorithm was selected. I found a reference to IEN 177, "Comments on >>>>> Action Items from the January [1981] Meeting" which stated: >>>>> >>>>> "The algorithm described by RSRE at the October 80 meeting should be >>>>> implemented. It will be included in the next edition of the TCP >>>>> specification. >>>>> >>>>> The current best procedure for retransmission timeout is to measure the >>>>> time elapsed between sending a data octet with a particular sequence >>>>> number and receiving an ack that covers that sequence number (thus one >>>>> does not have to match sends and acks one for one)." >>>>> >>>>> I continued looking back in older IEN documents and found in IEN 160, 7 >>>>> November 1980, it was reported that >>>>> >>>>> "Brian Davies discussed some suggestions for performance improvements >>>>> based on the experience at RSRE. >>>>> >>>>> The use of an adaptive retransmission timeout seems to be very helpful. >>>>> RSRE has experimented with one based on the following: >>>>> >>>>> 1. For each segment record the sequence number and time sent. >>>>> >>>>> 2. For each acknowledgment determine the round trip time (RTT) of the >>>>> sequence number thereby acknowledged. >>>>> >>>>> 3. Compute an Integrated Ack Time (IAT) as follows: >>>>> >>>>> IAT = ( ALPHA * IAT ) + RTT >>>>> >>>>> 4. Compute a Retransmission Time Estimate (RTE) as follows: >>>>> >>>>> RTE = Min [ BOUND, ( BETA * IAT ) ] >>>>> >>>>> Where BOUND is an upper bound on the retransmission time and BETA is an >>>>> adjustment to the IAT to account for variation in the delay. >>>>> >>>>> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. >>>>> >>>>> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with >>>>> ALPHA = 4/5 and BETA = 1.5.]" >>>>> >>>>> >>>>> >>>>> Going still further back to IEN 134 of 29 February 1980, it was reported >>>>> that >>>>> "Brian discussed some measurements of TCP conducted by RSRE to various >>>>> other places in the internet. The performance is regular for round >>>>> trips from RSRE to various points at UCL, and is consistent with the >>>>> physical facilities. Once the round trip path includes the SATNET, >>>>> however, the performance becomes irregular, with a few messages subject >>>>> to very high delay. Also some unnecessary retransmissions are detected >>>>> in the tests form RSRE to ISIE and back, these may be due to a too low >>>>> retransmission threshold." >>>>> >>>>> And the topic is discussed in IEN 121, 25 October 1979. >>>>> >>>>> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >>>>> the lead in experimenting with formulae and methods for dynamic >>>>> estimation of round trip times in TCP. Does anyone here have any further >>>>> insight or recollection into these experiments for estimating RTT, and >>>>> the development of the RTT formula? 
>>>>>
>>>>> -David Finnigan
>>>>> --
>>>>> Internet-history mailing list
>>>>> Internet-history at elists.isoc.org
>>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>>>
>>>> --
>>>> *****
>>>> Craig Partridge's email account for professional society activities and
>>>> mailing lists.
>>>> --
>>>> Internet-history mailing list
>>>> Internet-history at elists.isoc.org
>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history

From jack at 3kitty.org  Tue Mar 11 16:02:11 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Tue, 11 Mar 2025 16:02:11 -0700
Subject: [ih] TCP RTT Estimator
In-Reply-To: <1333609295.2931332.1741729733743@mail.yahoo.com>
References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com>
 <1676873250.2847239.1741721013305@mail.yahoo.com>
 <865890042.2905597.1741726920385@mail.yahoo.com>
 <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net>
 <1333609295.2931332.1741729733743@mail.yahoo.com>
Message-ID: <09871cd3-737c-4342-be32-348b94d8b6db@3kitty.org>

Congestion control was a major issue in the ARPANET as it got larger, and
especially as it morphed into the Defense Data Network. A lot of effort was
put into analyzing, simulating, and implementing changes to the internal
mechanisms and algorithms that implemented the ARPANET's "virtual circuit"
service analogous to TCP's role in The Internet.

IMHO there's a difference between designing an algorithm (such as aspects of
TCP) and designing a Network. The ARPANET and its clones used pretty much the
same algorithms, but there was a lot of effort put into designing each
particular network, and evolving it as user needs changed. There was a large
group at BBN called Network Analysis that did much of that work.

Each network was designed to reflect the traffic requirements of the users.
Nodes were interconnected based on analysis of traffic patterns and
historical data. One ARPANET-clone, for a credit card processor, was designed
for one particular day - Black Friday. If it worked then, it would work all
year.

Circuit sizes were selected based on traffic flow peaks, with an assumption
that at any point in time some circuit might be out of service. So circuits
were somewhat "over-provisioned" in order to keep traffic flowing even when
some circuit was out of service. The network knew how to divert traffic
around failures.

Queuing theory indicated that delays were highly coupled to line utilization.
I don't recall the exact numbers, but if a circuit was used more than about
75% it would result in occasional long delays. So networks were designed to
keep all the circuits below that level during peak usage.

That design principle caused some problems with the bean counters of the
world. To them, 75% utilization meant that 25% of those expensive circuit
charges was being wasted. So we redefined "utilization" -- 100% utilization
was reached at 75% load. That meant occasionally a network circuit could
achieve 110% utilization, which made the bean counters especially happy.
Win-win.

As far as I know, The Internet has never been designed in the same way the
ARPANET was.
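Jack does not name a particular model, but the classic single-server queue gives the flavor of that 75% rule of thumb: in an M/M/1 queue the mean time in the system is the service time divided by (1 - utilization), so average delay doubles between 50% and 75% load and blows up as load approaches 100%. The numbers below are made up purely for illustration.

    # Illustrative only: mean M/M/1 delay versus line utilization.
    SERVICE_TIME = 0.02   # assumed mean time to send one packet, in seconds

    def mm1_mean_delay(utilization):
        # Mean time in system for an M/M/1 queue: T = S / (1 - rho).
        return SERVICE_TIME / (1.0 - utilization)

    for rho in (0.50, 0.75, 0.90, 0.95, 0.99):
        print(f"{int(rho * 100)}% load -> {1000 * mm1_mean_delay(rho):.0f} ms average delay")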
TCP and other protocols were designed, algorithms for retransmission et al were tested experimentally and documented.?? But the Internet itself - the connectivity graph and the interconnection capacities - were (and are?) decided by local operators of pieces of The Internet.?? I don't know anything about how they make decisions of network topology and such, or how that's changed over the decades of Internet operation.?? Anyone else? At one point I remember a meeting, sometime in the early 1980s, where some bunch of us discussed "design" of the Internet.? Most of the ARPANET techniques weren't applicable -- how do you specify the "size" and delay of the network paths that interconnect gateways? Telco circuits were stable and predictable, and could be analyzed mathematically.? Analogous Internet connections were unpredictable and mathematically intractable. The conclusion at that meeting was that, while research continued to find appropriate mechanisms, the Internet would operate acceptably if it was always kept well below any kind of "saturation point". With enough processing power, memory for buffers, and alternate paths, everything would likely be mostly fine. Someone asked what the performance specs of The Internet should be - e.g., what packet drop rate would be "normal". ? After a little discussion, someone said "How about 1%?" and that became the consensus for "normal" behavior of the Internet.? I remember changing my TCP to report a network problem if a connection's drop rate (wildly guesstimated as the retransmission rate) hit 1%. Experiments could continue, seeking the "right answer" for Internet algorithms and developing principles for Internet design.?? Are we there yet...? Jack Haverty On 3/11/25 14:48, Barbara Denny via Internet-history wrote: > I don't recall ever hearing, or reading, about TCP transport requirements from the underlying network but I wasn't there in the early days of TCP (70s). > I have trouble thinking the problem with the congestion assumption? wasn't brought up early but I certainly don't know. > barbara > On Tuesday, March 11, 2025 at 02:10:26 PM PDT, John Day wrote: > > I would disagree. The Transport Layer assumes a minimal service from the layers below (actually all layers do). If the underlying layer doesn?t meet that normally, then measures are needed to bring the service up to the expected level.? Given that the diameter of the net now is about 20 or so and probably back then 5 or 6. Packet radio constituted a small fraction of the lower layers that the packet had to cross. Assuming packet radio didn?t have to do anything had the tail wagging the dog. > > Of course the example some would point to was TCP congestion control assuming lost packets were due to congestion. That was a dumb assumption and didn?t take a systems view of the problem. (Of course, it wasn?t the only dumb thing in that design, it also maximized retransmissions.) > > Take care, > John Day > >> On Mar 11, 2025, at 17:02, Barbara Denny via Internet-history wrote: >> >> I do view packet radio as a stress test for the protocol(s).? I think it is important to consider all the different dynamics that might come into play with the networks. >> I still need to really read Jack's message but there were also military testbeds that had packet radio networks.? I don't know what these users were trying to do. I was only involved if they experienced problems involving the network. My role was? 
to figure out why and then get it fixed (with whatever contractor that was working that part of the system, including BBN). >> barbara >> >> >> >> >> ? ? On Tuesday, March 11, 2025 at 01:49:25 PM PDT, Dave Crocker wrote: >> >> >> I have always been curious how packet radio may,? or may not, have impacted the calculations. >> >> >> >> >> At the IETF, the presentation about an actual implementation of IP over Avian Carrier noted that it provided an excellent test of this algorithm. >> >> >> d/ >> -- >> Dave Crocker >> >> Brandenburg InternetWorking >> bbiw.net >> bluesky: @dcrocker.bsky.social >> mast: @dcrocker at mastodon.social >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From craig at tereschau.net Tue Mar 11 16:46:22 2025 From: craig at tereschau.net (Craig Partridge) Date: Tue, 11 Mar 2025 17:46:22 -0600 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <777E0BE1-843A-4FC6-97C4-C50F179ECD19@comcast.net> <869923e8-5b1e-45b7-a287-7aa3fc6000be@cis.upenn.edu> Message-ID: Michael is right about capturing round-trip time measurements for segments that were retransmitted -- that was Karn's algorithm. The measurements go into the SRTT, which is in turn used to calculate the RTO. Jacobson created the updated algorithm for computing the RTO from the SRTT. Craig On Tue, Mar 11, 2025 at 4:06?PM Michael Greenwald via Internet-history < internet-history at elists.isoc.org> wrote: > "Are you speaking of estimating RTT or the value of the retransmission > timer? They aren?t the same." > > You are, of course, right that they are different. And it is totally > possible that I was/am conflating the two in my recollection. And, I > agree that I too had seen and used exponential decay to approximate a > rolling average rate before I used it for RTT estimation in TCP. That > was before the changes referred to here (which is why/how I remember > that it differed from "standard" exponential decay -- I changed my > code). My recollection, for what it's worth, was that this (separate > mean and variance estimates, combined by weighted addition) was used for > an estimation of round-trip-time, and a simple multiple of that estimate > was used for the retransmission timer. But your question makes me > seriously reconsider whether my recollection is accurate. > > On 3/11/25 11:25 AM, John Day wrote: > > That could be. I have encountered it in other places (outside computing) > prior to its use in TCP. > > > > Are you speaking of estimating RTT or the value of the retransmission > timer? They aren?t the same. > > > > The variance is a mean variance, rather than a standard deviation for > obvious reasons. > > > > John > > > >> On Mar 11, 2025, at 14:07, Michael Greenwald via Internet-history< > internet-history at elists.isoc.org> wrote: > >> > >> If I am remembering correctly (I'm on the move, and don't have access > to anything other than email to checkup anything at the moment), the RTT > estimator is not exactly the standard algorithm for computing moving > average. I think it separately estimated RTT average and RTT variance, and > added the 2 together (which meant the estimate actually increased, briefly, > if the RTT was dropping rapidly.) 
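Neither algorithm is written out in the thread, so purely as a sketch in the form they are usually documented (Jacobson's mean-plus-deviation timer, later codified in RFC 6298, and the Karn/Partridge rule of not timing retransmitted segments): the constants, names, and structure below are the textbook ones and my own assumptions, not transcribed from any implementation discussed above.

    # Sketch of a mean-plus-deviation RTO with Karn's rule (illustrative Python).
    class RttEstimator:
        ALPHA = 1.0 / 8.0   # gain for the smoothed RTT (SRTT)
        BETA = 1.0 / 4.0    # gain for the smoothed mean deviation (RTTVAR)
        K = 4               # deviation multiplier in the timeout

        def __init__(self):
            self.srtt = None
            self.rttvar = None

        def sample(self, rtt):
            # Update SRTT and RTTVAR, then return RTO = SRTT + K * RTTVAR.
            if self.srtt is None:            # first sample seeds both estimators
                self.srtt = rtt
                self.rttvar = rtt / 2.0
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
            return self.srtt + self.K * self.rttvar

        def on_ack(self, send_time, ack_time, was_retransmitted):
            # Karn's rule: only segments sent exactly once contribute a sample,
            # so an ack can never be credited to the wrong transmission.
            if not was_retransmitted:
                return self.sample(ack_time - send_time)
            return None   # keep (and back off) the previous timeout instead

Because RTTVAR tracks |SRTT - sample|, a sudden drop in measured RTT briefly inflates the deviation term and hence the timeout, which matches the behavior Michael remembers.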
> >> Also, wasn't there some variation (Craig and MRose? or Van? or Lixia?) > to eliminate individual round-trips that may have been computed by a > duplicate or missing packet? > >> > >> On 3/11/25 10:51 AM, John Day via Internet-history wrote: > >>> This algorithm and variations of it are used throughout science in > many different places, sometimes called convolution to compute a moving > average. > >>> > >>> It is far from unique to TCP or even networking. > >>> > >>> Take care, > >>> John > >>> > >>>> On Mar 11, 2025, at 12:01, Craig Partridge via Internet-history< > internet-history at elists.isoc.org> wrote: > >>>> > >>>> Yes, informally known as the RSRE algorithm. > >>>> > >>>> The other detail is the value for Alpha was chosen to be fast on > computers > >>>> of the time and so is a fraction of 1/8. > >>>> > >>>> Craig > >>>> > >>>> On Tue, Mar 11, 2025 at 8:05?AM David Finnigan via Internet-history < > >>>> internet-history at elists.isoc.org> wrote: > >>>> > >>>>> About 2 weeks ago I finally began writing the Round Trip Time (RTT) > >>>>> estimator for my TCP on the Mac. I had previously read many many > >>>>> documents which described the small, evolutionary changes in this > >>>>> important function of TCP: > >>>>> > >>>>> SRTT = ( ALPHA * SRTT ) + ( (1-ALPHA) * RTT ) > >>>>> > >>>>> I was interested in knowing the reason behind why this particular > >>>>> algorithm was selected. I found a reference to IEN 177, "Comments on > >>>>> Action Items from the January [1981] Meeting" which stated: > >>>>> > >>>>> "The algorithm described by RSRE at the October 80 meeting should be > >>>>> implemented. It will be included in the next edition of the TCP > >>>>> specification. > >>>>> > >>>>> The current best procedure for retransmission timeout is to measure > the > >>>>> time elapsed between sending a data octet with a particular sequence > >>>>> number and receiving an ack that covers that sequence number (thus > one > >>>>> does not have to match sends and acks one for one)." > >>>>> > >>>>> I continued looking back in older IEN documents and found in IEN > 160, 7 > >>>>> November 1980, it was reported that > >>>>> > >>>>> "Brian Davies discussed some suggestions for performance improvements > >>>>> based on the experience at RSRE. > >>>>> > >>>>> The use of an adaptive retransmission timeout seems to be very > helpful. > >>>>> RSRE has experimented with one based on the following: > >>>>> > >>>>> 1. For each segment record the sequence number and time sent. > >>>>> > >>>>> 2. For each acknowledgment determine the round trip time (RTT) of > the > >>>>> sequence number thereby acknowledged. > >>>>> > >>>>> 3. Compute an Integrated Ack Time (IAT) as follows: > >>>>> > >>>>> IAT = ( ALPHA * IAT ) + RTT > >>>>> > >>>>> 4. Compute a Retransmission Time Estimate (RTE) as follows: > >>>>> > >>>>> RTE = Min [ BOUND, ( BETA * IAT ) ] > >>>>> > >>>>> Where BOUND is an upper bound on the retransmission time and BETA is > an > >>>>> adjustment to the IAT to account for variation in the delay. > >>>>> > >>>>> RSRE currently uses ALPHA = 31/32 and BETA = 1.33. > >>>>> > >>>>> [Dave Clark noted that MIT-MULTICS uses the same algorithm but with > >>>>> ALPHA = 4/5 and BETA = 1.5.]" > >>>>> > >>>>> > >>>>> > >>>>> Going still further back to IEN 134 of 29 February 1980, it was > reported > >>>>> that > >>>>> "Brian discussed some measurements of TCP conducted by RSRE to > various > >>>>> other places in the internet. 
The performance is regular for round > >>>>> trips from RSRE to various points at UCL, and is consistent with the > >>>>> physical facilities. Once the round trip path includes the SATNET, > >>>>> however, the performance becomes irregular, with a few messages > subject > >>>>> to very high delay. Also some unnecessary retransmissions are > detected > >>>>> in the tests form RSRE to ISIE and back, these may be due to a too > low > >>>>> retransmission threshold." > >>>>> > >>>>> And the topic is discussed in IEN 121, 25 October 1979. > >>>>> > >>>>> It looks like staff at RSRE (Royal Signals and Radar Establishment) > took > >>>>> the lead in experimenting with formulae and methods for dynamic > >>>>> estimation of round trip times in TCP. Does anyone here have any > further > >>>>> insight or recollection into these experiments for estimating RTT, > and > >>>>> the development of the RTT formula? > >>>>> > >>>>> -David Finnigan > >>>>> -- > >>>>> Internet-history mailing list > >>>>> Internet-history at elists.isoc.org > >>>>> > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ > >>>>> > >>>> -- > >>>> ***** > >>>> Craig Partridge's email account for professional society activities > and > >>>> mailing lists. > >>>> -- > >>>> Internet-history mailing list > >>>> Internet-history at elists.isoc.org > >>>> > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!IBzWLUs!Qx0EPP3x4ywRB5-XWkp_6aP7r-BhO-BE73RdziAIZDXr0Nih4_6iMBhKd4qXnPQOeBDL7BYXBXqb1lrq54b1GG5X5Vfp1vceEvud$ > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From gregskinner0 at icloud.com Sun Mar 16 23:06:34 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Sun, 16 Mar 2025 23:06:34 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> Message-ID: <1B63DB11-FFF9-4EDF-85C2-AC347C8AC4AF@icloud.com> On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history wrote: > > IMHO the key factor was the state of the Internet at that time (1980ish). The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution. The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. > > The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE. They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. > > SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. > > The ARPANET was a packet network, but it presented a virtual circuit service to its users. Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. 
TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact. So RTT values could be set very high - I recall one common choice was 3 seconds. > > For the UK users however, things were quite different. The "core gateways" at the time were very limited by their hardware configurations. They didn't have much buffering space. So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection. IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. > > I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. > > At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement. They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest. But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times". That made little sense of course. > > After digging around with XNET, SNMP, etc., we discovered the cause. That ARPANET/SATNET gateway had very few buffers. The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them. So the buffers filled up and datagrams were discarded -- just as expected. > > During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size). Buffers in the gateway would overflow and some of those datagrams were lost. The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. > > Conversely, during "busy times", the traffic through the ARPANET would be spread out in time. With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead. Result - faster FTP transfers. > > AFAIK, none of this behavior was ever analyzed mathematically. The mathematical model of an Internet seemed beyond the capability of queuing theory et al. Progress was very much driven by experimentation and "let's try this" activity. > > The solution, or actually workaround, was to improve the gateway's hardware. More memory meant more buffering was available. That principle seems to have continued even today, but has caused other problems. Google "buffer bloat" if you're curious. > > As far as I remember, there weren't any such problems reported with the various Packet Radio networks. They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. > > The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. > > Jack Haverty > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history There was a packet radio network at the Ft. Bragg, North Carolina site, with an SRI field office supporting the XVIII Airborne Corps. [1] The TIU implementation run at that site was written by Jim Mathis. (IEN 98 and RFC 801 provide more information. The latter notes that the TIU was used daily for communications between Ft. Bragg users and ISID. 
A test program called PTIME used to determine ?user-visible throughput? also appears in [1].) Some ISI staff supported this testbed also. [2] By 1986, the database applications such as the Tactical Reporting System were running on Sun workstations, and mostly used Sun RPC over TCP. --gregbo [1] https://pdos.csail.mit.edu/archive/decouto/papers/frankel82.pdf [2] https://apps.dtic.mil/sti/tr/pdf/ADA157991.pdf From b_a_denny at yahoo.com Mon Mar 17 13:10:32 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 17 Mar 2025 20:10:32 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: <1B63DB11-FFF9-4EDF-85C2-AC347C8AC4AF@icloud.com> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> <1B63DB11-FFF9-4EDF-85C2-AC347C8AC4AF@icloud.com> Message-ID: <506833157.5125533.1742242232103@mail.yahoo.com> People at SRI might be able to reveal more about TCP problems encountered in the military packet radio testbeds.? I was at BBN working on the packet radio station at the time.? I usually would get a call from Don Cone (SRI) telling me that something happened. I would get the breadcrumbs, usually core dumps from the station if I remember correctly.? ?I would put all the info in a notebook(s) and eventually I could piece together what might have gone wrong, often when other crashes occurred.? Because Jim was at SRI, I think he might have gotten involved with any TCP TIU problems before I was called. barbara On Sunday, March 16, 2025 at 11:06:59 PM PDT, Greg Skinner via Internet-history wrote: On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history wrote: > > IMHO the key factor was the state of the Internet at that time (1980ish).? The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution.? The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. > > The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE.? They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. > > SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. > > The ARPANET was a packet network, but it presented a virtual circuit service to its users.? Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact.? So RTT values could be set very high - I recall one common choice was 3 seconds. > > For the UK users however, things were quite different.? The "core gateways" at the time were very limited by their hardware configurations.? They didn't have much buffering space.? So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection.? IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. > > I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. > > At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement.? 
They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest.? But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times".? That made little sense of course. > > After digging around with XNET, SNMP, etc., we discovered the cause.? That ARPANET/SATNET gateway had very few buffers.? The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them.? So the buffers filled up and datagrams were discarded -- just as expected. > > During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size).? Buffers in the gateway would overflow and some of those datagrams were lost.? The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. > > Conversely, during "busy times", the traffic through the ARPANET would be spread out in time.? With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead.? Result - faster FTP transfers. > > AFAIK, none of this behavior was ever analyzed mathematically.? The mathematical model of an Internet seemed beyond the capability of queuing theory et al.? Progress was very much driven by experimentation and "let's try this" activity. > > The solution, or actually workaround, was to improve the gateway's hardware.? More memory meant more buffering was available.? That principle seems to have continued even today, but has caused other problems.? Google "buffer bloat" if you're curious. > > As far as I remember, there weren't any such problems reported with the various Packet Radio networks.? They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. > > The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. > > Jack Haverty > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history There was a packet radio network at the Ft. Bragg, North Carolina site, with an SRI field office supporting the XVIII Airborne Corps. [1]? The TIU implementation run at that site was written by Jim Mathis.? (IEN 98 and RFC 801 provide more information.? The latter notes that the TIU was used daily for communications between Ft. Bragg users and ISID.? A test program called PTIME used to determine ?user-visible throughput? also appears in [1].)? Some ISI staff supported this testbed also. [2]? By 1986, the database applications such as the Tactical Reporting System were running on Sun workstations, and mostly used Sun RPC over TCP. 
--gregbo [1] https://pdos.csail.mit.edu/archive/decouto/papers/frankel82.pdf [2] https://apps.dtic.mil/sti/tr/pdf/ADA157991.pdf -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Mon Mar 17 14:04:02 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 17 Mar 2025 21:04:02 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: <506833157.5125533.1742242232103@mail.yahoo.com> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> <1B63DB11-FFF9-4EDF-85C2-AC347C8AC4AF@icloud.com> <506833157.5125533.1742242232103@mail.yahoo.com> Message-ID: <461434100.5175306.1742245442979@mail.yahoo.com> I should add that other members of the BBN packet radio group ( Mike Beeler and Charlie Lynn) were great in helping me to resolve problems at the time in addition to the folks at Rockwell Collins (John Jubin and Neil Gower stand out in my memory). barbara On Monday, March 17, 2025 at 01:11:20 PM PDT, Barbara Denny via Internet-history wrote: People at SRI might be able to reveal more about TCP problems encountered in the military packet radio testbeds.? I was at BBN working on the packet radio station at the time.? I usually would get a call from Don Cone (SRI) telling me that something happened. I would get the breadcrumbs, usually core dumps from the station if I remember correctly.? ?I would put all the info in a notebook(s) and eventually I could piece together what might have gone wrong, often when other crashes occurred.? Because Jim was at SRI, I think he might have gotten involved with any TCP TIU problems before I was called. barbara ? ? On Sunday, March 16, 2025 at 11:06:59 PM PDT, Greg Skinner via Internet-history wrote:? On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history wrote: > > IMHO the key factor was the state of the Internet at that time (1980ish).? The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution.? The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. > > The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE.? They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. > > SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. > > The ARPANET was a packet network, but it presented a virtual circuit service to its users.? Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact.? So RTT values could be set very high - I recall one common choice was 3 seconds. > > For the UK users however, things were quite different.? The "core gateways" at the time were very limited by their hardware configurations.? They didn't have much buffering space.? So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection.? IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. 
> > I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. > > At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement.? They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest.? But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times".? That made little sense of course. > > After digging around with XNET, SNMP, etc., we discovered the cause.? That ARPANET/SATNET gateway had very few buffers.? The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them.? So the buffers filled up and datagrams were discarded -- just as expected. > > During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size).? Buffers in the gateway would overflow and some of those datagrams were lost.? The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. > > Conversely, during "busy times", the traffic through the ARPANET would be spread out in time.? With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead.? Result - faster FTP transfers. > > AFAIK, none of this behavior was ever analyzed mathematically.? The mathematical model of an Internet seemed beyond the capability of queuing theory et al.? Progress was very much driven by experimentation and "let's try this" activity. > > The solution, or actually workaround, was to improve the gateway's hardware.? More memory meant more buffering was available.? That principle seems to have continued even today, but has caused other problems.? Google "buffer bloat" if you're curious. > > As far as I remember, there weren't any such problems reported with the various Packet Radio networks.? They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. > > The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. > > Jack Haverty > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history There was a packet radio network at the Ft. Bragg, North Carolina site, with an SRI field office supporting the XVIII Airborne Corps. [1]? The TIU implementation run at that site was written by Jim Mathis.? (IEN 98 and RFC 801 provide more information.? The latter notes that the TIU was used daily for communications between Ft. Bragg users and ISID.? A test program called PTIME used to determine ?user-visible throughput? also appears in [1].)? Some ISI staff supported this testbed also. [2]? By 1986, the database applications such as the Tactical Reporting System were running on Sun workstations, and mostly used Sun RPC over TCP. --gregbo [1] https://pdos.csail.mit.edu/archive/decouto/papers/frankel82.pdf [2] https://apps.dtic.mil/sti/tr/pdf/ADA157991.pdf -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history ? 
-- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From lk at cs.ucla.edu Mon Mar 17 21:46:08 2025 From: lk at cs.ucla.edu (Leonard Kleinrock) Date: Mon, 17 Mar 2025 21:46:08 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> Message-ID: <0A5978A4-91E0-4AE1-B085-9C0891961128@cs.ucla.edu> Hi Jack, There were some queueing theory papers in those early days that did indeed shed some light on the phenomena and performance of the Arpanet and of Satnet. Here are a couple of references where analysis and measurement were both of value in providing understanding: https://www.lk.cs.ucla.edu/data/files/Naylor/On%20Measured%20Behavior%20of%20the%20ARPA%20Network.pdf and https://www.lk.cs.ucla.edu/data/files/Kleinrock/packet_satellite_multiple_access.pdf and this last paper even showed the ?capture" effect with the SIMPs. In particular, one phenomenon was that if site A at one end of the Satnet was sending traffic to site B at the other end, then the fact that a message traveling from A to B forced a RFNM reply from B to A and this prevented B from sending its own messages to A since the RFNMs hogged the B to A channel. Lots more was observed and these are just some of the performance papers that used measurement and queueing models in those early days. Len > On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history wrote: > > On 3/11/25 07:05, David Finnigan via Internet-history wrote: >> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >> the lead in experimenting with formulae and methods for dynamic >> estimation of round trip times in TCP. Does anyone here have any further >> insight or recollection into these experiments for estimating RTT, and >> the development of the RTT formula? >> > > IMHO the key factor was the state of the Internet at that time (1980ish). The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution. The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. > > The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE. They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. > > SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. > > The ARPANET was a packet network, but it presented a virtual circuit service to its users. Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact. So RTT values could be set very high - I recall one common choice was 3 seconds. > > For the UK users however, things were quite different. The "core gateways" at the time were very limited by their hardware configurations. They didn't have much buffering space. So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection. IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. 
> > I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. > > At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement. They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest. But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times". That made little sense of course. > > After digging around with XNET, SNMP, etc., we discovered the cause. That ARPANET/SATNET gateway had very few buffers. The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them. So the buffers filled up and datagrams were discarded -- just as expected. > > During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size). Buffers in the gateway would overflow and some of those datagrams were lost. The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. > > Conversely, during "busy times", the traffic through the ARPANET would be spread out in time. With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead. Result - faster FTP transfers. > > AFAIK, none of this behavior was ever analyzed mathematically. The mathematical model of an Internet seemed beyond the capability of queuing theory et al. Progress was very much driven by experimentation and "let's try this" activity. > > The solution, or actually workaround, was to improve the gateway's hardware. More memory meant more buffering was available. That principle seems to have continued even today, but has caused other problems. Google "buffer bloat" if you're curious. > > As far as I remember, there weren't any such problems reported with the various Packet Radio networks. They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. > > The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. > > Jack Haverty > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gaige at gbpsw.com Tue Mar 18 10:18:24 2025 From: gaige at gbpsw.com (Gaige B. Paulsen) Date: Tue, 18 Mar 2025 19:18:24 +0200 Subject: [ih] Hello, Internet History group Message-ID: <3356AA07-83B9-4A6A-9E3D-173A2803D79B@gbpsw.com> ?Yep, InterCon?s products were built from the original work that Tim Krauskopf and I did on the TCP/IP stack for NCSA Telnet. The original versions of NCSA Telnet (1985/1986) used an AppleTalk (LocalTalk) gateway that came out of Stanford (SEAGATE, the Stanford Ethernet-AppleTalk Gateway) and commercialized by Kinetics. Our SLIP implementation came quite a bit later, and the PPP implementation a bit later than that. I honestly don?t think we ever released a version of SLIP for the internal stack, only with MacTCP. I?m happy to discuss any of the particulars of what we did back in the day. I?m not sure if Kurt or Amanda are on the list here, but they?re definitely still around. -Gaige > Gaige B. Paulsen was one of the authors of NCSA Telnet at the U of I. 
> Are Amanda Walker or Kurt Baumann still around? > > -David Finnigan From df at macgui.com Tue Mar 18 11:09:06 2025 From: df at macgui.com (David Finnigan) Date: Tue, 18 Mar 2025 13:09:06 -0500 Subject: [ih] NCSA & TCP on Mac, was Re: Hello, Internet History group In-Reply-To: <3356AA07-83B9-4A6A-9E3D-173A2803D79B@gbpsw.com> References: <3356AA07-83B9-4A6A-9E3D-173A2803D79B@gbpsw.com> Message-ID: <6292f016a804b848e7abd1a42f5e774a@macgui.com> On 18 Mar 2025 12:18 pm, Gaige B. Paulsen via Internet-history wrote: > ?Yep, InterCon?s products were built from the original work that Tim > Krauskopf and I did on the TCP/IP stack for NCSA Telnet. > The original versions of NCSA Telnet (1985/1986) used an AppleTalk > (LocalTalk) gateway that came out of Stanford (SEAGATE, the Stanford > Ethernet-AppleTalk Gateway) and commercialized by Kinetics. Our SLIP > implementation came quite a bit later, and the PPP implementation a > bit later than that. I honestly don?t think we ever released a version > of SLIP for the internal stack, only with MacTCP. > I?m happy to discuss any of the particulars of what we did back in the > day. > I?m not sure if Kurt or Amanda are on the list here, but they?re > definitely still around. > -Gaige As a side project, I began researching other TCP implementations for Macintosh, and this is the web page that I put together: https://macgui.com/sabina/other_tcp.html An unanswered question was whether Apple's MacTCP was an original implementation, or a port or adaptation of some existing code. -David Finnigan From woody at pch.net Tue Mar 18 12:05:05 2025 From: woody at pch.net (Bill Woodcock) Date: Tue, 18 Mar 2025 20:05:05 +0100 Subject: [ih] NCSA & TCP on Mac, was Re: Hello, Internet History group In-Reply-To: <6292f016a804b848e7abd1a42f5e774a@macgui.com> References: <6292f016a804b848e7abd1a42f5e774a@macgui.com> Message-ID: <4BC5BBB9-ABC0-4E13-B09F-3ED0499E8A63@pch.net> Richard Ford or Garry Hornbuckle could answer that, they?re both still around. -Bill > On Mar 18, 2025, at 19:09, David Finnigan via Internet-history wrote: > > ?On 18 Mar 2025 12:18 pm, Gaige B. Paulsen via Internet-history wrote: >> ?Yep, InterCon?s products were built from the original work that Tim >> Krauskopf and I did on the TCP/IP stack for NCSA Telnet. >> The original versions of NCSA Telnet (1985/1986) used an AppleTalk >> (LocalTalk) gateway that came out of Stanford (SEAGATE, the Stanford >> Ethernet-AppleTalk Gateway) and commercialized by Kinetics. Our SLIP >> implementation came quite a bit later, and the PPP implementation a >> bit later than that. I honestly don?t think we ever released a version >> of SLIP for the internal stack, only with MacTCP. >> I?m happy to discuss any of the particulars of what we did back in the day. >> I?m not sure if Kurt or Amanda are on the list here, but they?re >> definitely still around. >> -Gaige > > As a side project, I began researching other TCP implementations for Macintosh, and this is the web page that I put together: > https://macgui.com/sabina/other_tcp.html > > An unanswered question was whether Apple's MacTCP was an original implementation, or a port or adaptation of some existing code. > > -David Finnigan > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gaige at gbpsw.com Tue Mar 18 12:26:17 2025 From: gaige at gbpsw.com (Gaige B. 
Paulsen) Date: Tue, 18 Mar 2025 21:26:17 +0200 Subject: [ih] NCSA & TCP on Mac, was Re: Hello, Internet History group In-Reply-To: <4BC5BBB9-ABC0-4E13-B09F-3ED0499E8A63@pch.net> References: <4BC5BBB9-ABC0-4E13-B09F-3ED0499E8A63@pch.net> Message-ID: <1FD2C28C-A95C-46E1-94E9-766D6A072AD9@gbpsw.com> I?d agree with Woody here. Open Transport was definitely based on Streams (licensed from Mentat, IIRC); but I?m not sure of the provenance of MacTCP. -Gaige > On Mar 18, 2025, at 21:05, Bill Woodcock wrote: > > ?Richard Ford or Garry Hornbuckle could answer that, they?re both still around. > > -Bill > > >>> On Mar 18, 2025, at 19:09, David Finnigan via Internet-history wrote: >>> >>> ?On 18 Mar 2025 12:18 pm, Gaige B. Paulsen via Internet-history wrote: >>> ?Yep, InterCon?s products were built from the original work that Tim >>> Krauskopf and I did on the TCP/IP stack for NCSA Telnet. >>> The original versions of NCSA Telnet (1985/1986) used an AppleTalk >>> (LocalTalk) gateway that came out of Stanford (SEAGATE, the Stanford >>> Ethernet-AppleTalk Gateway) and commercialized by Kinetics. Our SLIP >>> implementation came quite a bit later, and the PPP implementation a >>> bit later than that. I honestly don?t think we ever released a version >>> of SLIP for the internal stack, only with MacTCP. >>> I?m happy to discuss any of the particulars of what we did back in the day. >>> I?m not sure if Kurt or Amanda are on the list here, but they?re >>> definitely still around. >>> -Gaige >> >> As a side project, I began researching other TCP implementations for Macintosh, and this is the web page that I put together: >> https://macgui.com/sabina/other_tcp.html >> >> An unanswered question was whether Apple's MacTCP was an original implementation, or a port or adaptation of some existing code. >> >> -David Finnigan >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > From jack at 3kitty.org Tue Mar 18 16:16:42 2025 From: jack at 3kitty.org (Jack Haverty) Date: Tue, 18 Mar 2025 16:16:42 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <0A5978A4-91E0-4AE1-B085-9C0891961128@cs.ucla.edu> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> <0A5978A4-91E0-4AE1-B085-9C0891961128@cs.ucla.edu> Message-ID: <4c256f97-dc28-474b-beff-bc9a757748a6@3kitty.org> Hi Len, Thanks for the pointers.? They fill in a bit more of the History. In particular I've seen little written about the early days of SATNET, AlohaNet, and such.? Also, in those days? ( 1970s+- ) there was no Web, no Internet, no search engines, and no easy way to access such papers except by attending the conferences. I wasn't involved with SATNET in its early days.?? It came onto my radar when Vint put "make the core gateways a 24x7 operational service" onto an ARPA contract I was managing.? I think it was fall 1978.? By that time, SATNET was running CPODA and was in "operation" mode, monitored by the BBN NOC which also similarly managed the ARPANET.? The technology was pretty stable by then.? MATNET had also been deployed, as a clone of SATNET, with installations on Navy sites including the USS Carl Vinson.?? It was the next step in the progression from research to operational "technology transfer" into the "real world" of DoD. From the papers you highlighted, it seems that the experiments were carried out before the CPODA introduction.? I'm a bit confused about exactly what was involved.?? 
There was SATNET with sites in West Virginia US and Goonhilly Downs UK.?? There was also an ARPANET IMP (actually UCL-TIP IIRC) linked to IMPs in the US by satellite.? I always thought those were two separate networks, but maybe somehow the ARPANET IMP-IMP "circuit" used the SATNET satellite channel? The paper references RFNMs on SATNET.? But I don't remember if those were part of the SATNET mechanisms (CPODA?) or somehow part of the ARPANET internal mechanisms.? I don't recall ever hearing anything about RFNMs being part of SATNET's mechanisms while I was responsible for it. In any event, I studied quite a bit of queueing theory and other branches of mathematics (e.g., statistics, operations research, etc.) while a student at MIT.?? It was all very enlightening to understand how things work, and to be able to use the techniques to compare possible internal algorithms. But I also learned that there can be large differences between theory and practice. One example was while I had a student job programming a PDP-8 for data collection in a lab where inertial navigation equipment was developed, used in Apollo, Minuteman, and such systems.? I had studied lots of mathematical techniques for engineering design, e.g., use of Karnaugh Maps to minimize logic circuit components. My desk happened to be next to one of the career engineer's desk (an actual "rocket scientist").?? So I asked him what kinds of tools he had found were most useful for his work.? His answer -- none of them.? By analyzing enormous amounts of data, they had discovered that almost all failures were caused by some kind of metal-metal connector problem.? So their engineering principle was to minimize the number of such connections in a design.?? There were no tools for that. Another example occurred at BBN, when the ARPANET was being transformed into the Defense Data Network, to become a DoD-wide operational infrastructure.? Someone (can't remember who) had produced a scientific paper proving that the ARPANET algorithms would "lock up" and the entire network would crash.? That understandably caused significant concern in the DoD.?? The DDN couldn't be allowed to crash. After BBN investigated, we discovered that the research was true. But there were assumptions made in order for the analysis to be tractable.? In particular, the analysis assumed that every IMP in the network ran at exactly the same speed, and was started at exactly the same time, so that all the programs were running in perfect synchrony, with instructions being executed simultaneously in every IMP.? That assumption made the analysis mathematically feasible. Without that assumption, the analysis was still accurate, but became irrelevant.? We advised the DoD not to worry, explaining that the probability of such an occurrence was infinitesimal.? If we had to make that behavior happen, we didn't know how to do so.? They agreed.? DDN continued to be deployed. So my personal conclusion has been that scientific analysis is important and useful, but has to be viewed in the context of real-world conditions.? The Internet in particular is a real-world environment that seems, to me at least, to be mathematically intractable.? There are many components in use, even within a single TCP connection, where some of the mechanisms (retransmissions, error detection, queue management, timing, etc.) are in the switches, some are in the hosts' implementations of TCP, and some are in the particular operating systems involved. 
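For concreteness, the example retransmission-timeout procedure in RFC 793 replaces a fixed timer (such as the 3-second value mentioned earlier in this thread) with an exponentially smoothed round-trip estimate. The sketch below is illustrative only, not any particular historical implementation: the constants sit in RFC 793's suggested ranges (ALPHA 0.8-0.9, BETA 1.3-2.0, bounds of 1 and 60 seconds) and the sample RTT measurements are invented.

ALPHA = 0.875   # smoothing gain on the previous estimate (RFC 793 suggests .8-.9)
BETA = 2.0      # delay variance factor (RFC 793 suggests 1.3-2.0)
UBOUND = 60.0   # upper bound on the timeout, seconds
LBOUND = 1.0    # lower bound on the timeout, seconds

def update_rto(srtt, measured_rtt):
    """Return (new_srtt, new_rto) after one round-trip time measurement."""
    srtt = ALPHA * srtt + (1.0 - ALPHA) * measured_rtt
    rto = min(UBOUND, max(LBOUND, BETA * srtt))
    return srtt, rto

if __name__ == "__main__":
    srtt = 3.0  # start from the fixed 3-second setting described above
    for sample in (0.6, 0.7, 2.5, 0.8):  # hypothetical RTT samples, in seconds
        srtt, rto = update_rto(srtt, sample)
        print(f"sample={sample:.1f}s  srtt={srtt:.2f}s  rto={rto:.2f}s")

Run as-is, the timeout adapts downward toward roughly twice the smoothed round-trip time as measurements accumulate, which is the kind of adaptation the dynamic-RTT experiments discussed at the start of this thread were after.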
There is a quote, attributed to Yogi Berra, which captures the situation: "In theory, there is no difference between theory and practice.?? In practice, there is." While I was involved in designing internals of The Internet, generally between 1972 and 1997, I don't recall much if any "analysis" of the Internet as a whole communications system, including TCP, IP, UDP, as well as mechanisms in each of the underlying network technologies.? Mostly design decisions were driven by intuition and/or experience.?? Perhaps there was some comprehensive analysis, but I missed it. Perhaps The Internet as a whole is just too complex for the existing capabilities of mathematical tools? Jack On 3/17/25 21:46, Leonard Kleinrock wrote: > Hi Jack, > > There were some queueing theory papers in those early days that did > indeed shed some light on the phenomena and performance of the Arpanet > and of Satnet. ?Here are a couple of references where analysis and > measurement were both of value in providing understanding: > > https://www.lk.cs.ucla.edu/data/files/Naylor/On%20Measured%20Behavior%20of%20the%20ARPA%20Network.pdf > > and > > https://www.lk.cs.ucla.edu/data/files/Kleinrock/packet_satellite_multiple_access.pdf > > and this last paper even showed the ?capture" effect with the SIMPs. > ?In particular, one phenomenon was that if site A at one end of the > Satnet was sending traffic to site B at the other end, then the fact > that a message traveling from A to B forced a RFNM reply from B to A > and this prevented B from sending its own messages to A since the > RFNMs hogged the B to A channel. ?Lots more was observed and these are > just some of the performance papers that used measurement and queueing > models in those early days. > > Len > > > >> On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history >> wrote: >> >> On 3/11/25 07:05, David Finnigan via Internet-history wrote: >>> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >>> the lead in experimenting with formulae and methods for dynamic >>> estimation of round trip times in TCP. Does anyone here have any further >>> insight or recollection into these experiments for estimating RTT, and >>> the development of the RTT formula? >>> >> >> IMHO the key factor was the state of the Internet at that time >> (1980ish).? The ARPANET was the primary "backbone" of The Internet in >> what I think of as the "fuzzy peach" stage of Internet evolution.?? >> The ARPANET was the peach, and sites on the ARPANET were adding LANs >> of some type and connecting them with some kind of gateway to the >> ARPANET IMP. >> >> The exception to that structure was Europe, especially Peter >> Kirstein's group at UCL and John Laws group at RSRE.?? They were >> interconnected somehow in the UK, but their access to the Internet >> was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. >> >> SATNET was connected to the ARPANET through one of the "core >> gateways" that we at BBN were responsible to run as a 24x7 >> operational network. >> >> The ARPANET was a packet network, but it presented a virtual circuit >> service to its users.? Everything that went in one end came out the >> other end, in order, with nothing missing, and nothing duplicated. >> TCPs at a US site talking to TCPs at another US site didn't have much >> work to do, since everything they sent would be received intact.?? So >> RTT values could be set very high - I recall one common choice was 3 >> seconds. >> >> For the UK users however, things were quite different. 
The "core >> gateways" at the time were very limited by their hardware >> configurations.? They didn't have much buffering space.?? So they did >> drop datagrams, which of course had to be retransmitted by the host >> at the end of the TCP connection.? IIRC, at one point the >> ARPANET/SATNET gateway had exactly one datagram of buffer space. >> >> I don't recall anyone ever saying it, but I suspect that situation >> caused the UCL and RSRE crews to pay a lot of attention to TCP >> behavior, and try to figure out how best to deal with their skinny >> pipe across the Atlantic. >> >> At one point, someone (from UCL or RSRE, can't remember) reported an >> unexpected measurement.? They did frequent file transfers, often >> trying to "time" their transfers to happen at a time of day when UK >> and US traffic flows would be lowest.?? But they observed that their >> transfers during "busy times" went much faster than similar transfers >> during "quiet times".? That made little sense of course. >> >> After digging around with XNET, SNMP, etc., we discovered the cause.? >> That ARPANET/SATNET gateway had very few buffers.? The LANs at users' >> sites and the ARPANET path could deliver datagrams to that gateway >> faster than SATNET could take them.? So the buffers filled up and >> datagrams were discarded -- just as expected. >> >> During "quiet times", the TCP connection would deliver datagrams to >> the gateway in bursts (whatever the TCPs negotiated as a Window >> size).?? Buffers in the gateway would overflow and some of those >> datagrams were lost. The sending TCP would retransmit, but only after >> the RTT timer expired, which was often set to 3 seconds. Result - >> slow FTPs. >> >> Conversely, during "busy times", the traffic through the ARPANET >> would be spread out in time.?? With other users' traffic flows >> present, chances were better that someone else's datagram would be >> dropped instead.? Result - faster FTP transfers. >> >> AFAIK, none of this behavior was ever analyzed mathematically.? The >> mathematical model of an Internet seemed beyond the capability of >> queuing theory et al. Progress was very much driven by >> experimentation and "let's try this" activity. >> >> The solution, or actually workaround, was to improve the gateway's >> hardware.? More memory meant more buffering was available.?? That >> principle seems to have continued even today, but has caused other >> problems.? Google "buffer bloat" if you're curious. >> >> As far as I remember, there weren't any such problems reported with >> the various Packet Radio networks.?? They tended to be used only >> occasionally, for tests and demos, where the SATNET linkage was used >> almost daily. >> >> The Laws and Kirstein groups in the UK were, IMHO, the first "real" >> users of TCP on The Internet, exploring paths not protected by >> ARPANET mechanisms. >> >> Jack Haverty >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From poslfit at gmail.com Tue Mar 18 19:32:13 2025 From: poslfit at gmail.com (John Chew) Date: Tue, 18 Mar 2025 22:32:13 -0400 Subject: [ih] Hello, Internet History group In-Reply-To: <3356AA07-83B9-4A6A-9E3D-173A2803D79B@gbpsw.com> References: <3356AA07-83B9-4A6A-9E3D-173A2803D79B@gbpsw.com> Message-ID: I remember submitting a patch to NCSA Telnet (I don't think it was ever accepted) that we used extensively where I was working in 1989 at AIS/Berger-Levrault in France. It added an ANSI escape sequence that took as a string argument the name of a sound to play, so that our Unix server could send different audible alerts to our Macs. John On Tue, Mar 18, 2025 at 1:18?PM Gaige B. Paulsen via Internet-history < internet-history at elists.isoc.org> wrote: > Yep, InterCon?s products were built from the original work that Tim > Krauskopf and I did on the TCP/IP stack for NCSA Telnet. > The original versions of NCSA Telnet (1985/1986) used an AppleTalk > (LocalTalk) gateway that came out of Stanford (SEAGATE, the Stanford > Ethernet-AppleTalk Gateway) and commercialized by Kinetics. Our SLIP > implementation came quite a bit later, and the PPP implementation a bit > later than that. I honestly don?t think we ever released a version of SLIP > for the internal stack, only with MacTCP. > I?m happy to discuss any of the particulars of what we did back in the day. > I?m not sure if Kurt or Amanda are on the list here, but they?re > definitely still around. > -Gaige > > > Gaige B. Paulsen was one of the authors of NCSA Telnet at the U of I. > > Are Amanda Walker or Kurt Baumann still around? > > > > -David Finnigan > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- John Chew * +1 416 876 7675 * http://www.poslfit.com poslfit at gmail.com (personal/general correspondence) info at canadianenglishdictionary.ca (as editor-in-chief of the CED) info at scrabbleplayers.org (as CEO of NASPA) jjchew at math.utoronto.ca (as a mathematical researcher) From beebe at math.utah.edu Wed Mar 19 16:53:59 2025 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Wed, 19 Mar 2025 17:53:59 -0600 Subject: [ih] new history of Netnews Message-ID: List members may be interested in this new article published today: Steven M. Bellovin Netnews: The Origin Story IEEE Annals of the History of Computing 47(1) 7--21 Jan/Mar 2025 https://doi.org/10.1109/MAHC.2024.3420896 ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: https://www.math.utah.edu/~beebe - ------------------------------------------------------------------------------- From gregskinner0 at icloud.com Fri Mar 21 13:52:48 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Fri, 21 Mar 2025 13:52:48 -0700 Subject: [ih] Fwd: Packet Radio and Internet Documents References: <285931365.402191.1742582668223@mail.yahoo.com> Message-ID: <620754FC-68DF-43CF-8586-9D4D19ECD029@icloud.com> Forwarded for Barbara. I would also like to add that PRTN 268 is a simulation study by Zaw-Sing Su of the Ft. 
Bragg packet radio testbed described in the advanced technology testbed paper authored by Mike Frankel that I posted some time ago. The simulation study also includes some correspondence between Zaw-Sing and MF. Some of those PRTNs, such as PRTN 292 (Radia Perlman?s ?Flying Packet Radios and Network Partitions") became IENs (IEN 146 in this case). https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf > Begin forwarded message: > > From: Barbara Denny > Subject: Fw: Packet Radio and Internet Documents > Date: March 21, 2025 at 11:44:28?AM PDT > > ----- Forwarded Message ----- > From: Barbara Denny > To: Internet-history > Sent: Friday, March 21, 2025 at 10:59:51 AM PDT > Subject: Packet Radio and Internet Documents > > I was having a side discussion with Greg Skinner and he found a document that is an index of the PRTNs (Packet Radio Temporary Notes). Packet Radio and SURAN had its own set of documents. Since I think the gateway was originally part of the packet radio station, I thought people might like to see the list. A few look directly related to the Internet, including a couple by Vint and one by Danny Cohen. I haven't tried yet to see if the DTIC website also has the Internet documents. The PRTN number might help you find them. > > https://apps.dtic.mil/sti/tr/pdf/ADA141528.pdf > > barbara > From jack at 3kitty.org Fri Mar 21 16:09:40 2025 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 21 Mar 2025 16:09:40 -0700 Subject: [ih] Fwd: Packet Radio and Internet Documents In-Reply-To: <620754FC-68DF-43CF-8586-9D4D19ECD029@icloud.com> References: <285931365.402191.1742582668223@mail.yahoo.com> <620754FC-68DF-43CF-8586-9D4D19ECD029@icloud.com> Message-ID: Thanks Barbara (& Greg!).?? I can't recall that I ever saw a PRTN until today, except for a few that were also issued as IENs.? I wonder if that was because of limited distribution or just being too busy with other stuff (like writing code, etc.). ? Maybe there was a lot more on the NIC server than I ever ran across. I suspect other people working on The Internet back in the 80s didn't see many of the reports from the various "other" projects that were part of The Internet - e.g., all the work on AlohaNet, ARPANET internals, SATNET, WBNET, et al.?? It's sure a lot easier now to find such things, now that we have search engines and archives like DTIC. By the way, that PRTN identifies the contract number that funded the work - MDA 903-77-C-0272?? I've found such numbers to be excellent "search terms" for finding other old related technical info.? For example, if you go to discover.dtic.mil and type that number into the search box, a lot of other packet radio literature will show up. Jack On 3/21/25 13:52, Greg Skinner wrote: > Forwarded for Barbara. ?I would also like to add that PRTN 268 is a > simulation study by Zaw-Sing Su of the Ft. Bragg packet radio testbed > described in the advanced technology testbed paper authored by Mike > Frankel that I posted some time ago. ?The simulation study also > includes some correspondence between Zaw-Sing and MF. ?Some of those > PRTNs, such as PRTN 292 (Radia Perlman?s ?Flying Packet Radios and > Network Partitions") became IENs (IEN 146 in this case). 
> > https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf > >> Begin forwarded message: >> >> *From: *Barbara Denny >> *Subject: **Fw: Packet Radio and Internet Documents* >> *Date: *March 21, 2025 at 11:44:28?AM PDT >> >> ----- Forwarded Message ----- >> *From:* Barbara Denny >> *To:* Internet-history >> *Sent:* Friday, March 21, 2025 at 10:59:51 AM PDT >> *Subject:* Packet Radio and Internet Documents >> >> I was having a side discussion with Greg Skinner and he found a >> document that is an index of the PRTNs (Packet Radio Temporary >> Notes).? Packet Radio and SURAN had its own set of documents. Since I >> think the gateway was originally part of the packet radio station, I >> thought people might like to see the list. A few look directly >> related to the Internet,? including a couple by Vint and one by Danny >> Cohen.? I haven't tried yet to see if the DTIC website also has the >> Internet documents. The PRTN number might help you find them. >> >> https://apps.dtic.mil/sti/tr/pdf/ADA141528.pdf >> >> barbara >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From jim at deitygraveyard.com Fri Mar 21 21:25:14 2025 From: jim at deitygraveyard.com (Jim Carpenter) Date: Sat, 22 Mar 2025 00:25:14 -0400 Subject: [ih] new history of Netnews In-Reply-To: References: Message-ID: On Wed, Mar 19, 2025 at 7:54?PM Nelson H. F. Beebe via Internet-history wrote: > > List members may be interested in this new article published today: > > Steven M. Bellovin > Netnews: The Origin Story > IEEE Annals of the History of Computing 47(1) 7--21 Jan/Mar 2025 > https://doi.org/10.1109/MAHC.2024.3420896 > You can read it at https://www.cs.columbia.edu/~smb/papers/netnews-hist.pdf . Jim From lk at cs.ucla.edu Sat Mar 22 15:27:04 2025 From: lk at cs.ucla.edu (Leonard Kleinrock) Date: Sat, 22 Mar 2025 15:27:04 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <4c256f97-dc28-474b-beff-bc9a757748a6@3kitty.org> References: <364ae96367fe7da8d14deb4e915ffdb5@macgui.com> <4f5b7616-8326-45eb-89d8-2bede9a77b0d@3kitty.org> <0A5978A4-91E0-4AE1-B085-9C0891961128@cs.ucla.edu> <4c256f97-dc28-474b-beff-bc9a757748a6@3kitty.org> Message-ID: <74E304C1-72DE-4533-B625-CB70A40A14CA@cs.ucla.edu> Hi Jack, Thanks for your additional data on the early networks and the ongoing discussion re such topics as ?TCP RTT Estimator? and network congestion. Regarding your comment below, "So my personal conclusion has been that scientific analysis is important and useful, but has to be viewed in the context of real-world conditions. The Internet in particular is a real-world environment that seems, to me at least, to be mathematically intractable.?. I agree with most of that, but want to address the ?mathematically intractable? tone. I think you would agree that we should be sure that we continue to realize that mathematical analysis, although with its simplifying assumptions, has a valuable role in that a combination of mathematical models, analysis, optimization, simulation, measurement, experiments, and testing should all work together and iteratively to provide useful results, understanding, principles, intuition, judgment, and guidelines etc, all of which help enable us to deal with the intricacies and behavior of such complex systems as the Internet. 
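Even the simplest queueing model already captures the qualitative behavior behind these anecdotes: the M/M/1 mean-delay formula T = 1/(mu - lambda) shows delay growing without bound as offered load approaches a link's capacity. The sketch below is a toy illustration of that textbook formula only; the service rate and loads are invented numbers, not measurements from the ARPANET or SATNET papers cited in this thread.

def mm1_mean_delay(arrival_rate, service_rate):
    """Mean time in an M/M/1 system, in seconds; unbounded if overloaded."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

if __name__ == "__main__":
    service_rate = 100.0  # packets per second the link can serve (illustrative)
    for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
        delay = mm1_mean_delay(utilization * service_rate, service_rate)
        print(f"utilization={utilization:.2f}  mean delay={delay * 1000:.1f} ms")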
For example, even in those very early Arpanet days, by taking a system viewpoint, we were able to anticipate and/or measure the deadlocks and degradations due to the ad-hoc flow control measures that were introduced with incomplete understanding of their interaction. Another example in recent discussions in this mailing group was the emergence of Buffer Bloat that could have been anticipated had there been a proper analysis and understanding of the source of network congestion (but that?s a whole other discussion). Let me offer a quote due, not to Yogi Berra, but rather to Einstein ?Make everything as simple as possible, but not simpler?. Best, Len > On Mar 18, 2025, at 4:16?PM, Jack Haverty wrote: > > Hi Len, > > Thanks for the pointers. They fill in a bit more of the History. In particular I've seen little written about the early days of SATNET, AlohaNet, and such. Also, in those days ( 1970s+- ) there was no Web, no Internet, no search engines, and no easy way to access such papers except by attending the conferences. > > I wasn't involved with SATNET in its early days. It came onto my radar when Vint put "make the core gateways a 24x7 operational service" onto an ARPA contract I was managing. I think it was fall 1978. By that time, SATNET was running CPODA and was in "operation" mode, monitored by the BBN NOC which also similarly managed the ARPANET. The technology was pretty stable by then. MATNET had also been deployed, as a clone of SATNET, with installations on Navy sites including the USS Carl Vinson. It was the next step in the progression from research to operational "technology transfer" into the "real world" of DoD. > > From the papers you highlighted, it seems that the experiments were carried out before the CPODA introduction. I'm a bit confused about exactly what was involved. There was SATNET with sites in West Virginia US and Goonhilly Downs UK. There was also an ARPANET IMP (actually UCL-TIP IIRC) linked to IMPs in the US by satellite. I always thought those were two separate networks, but maybe somehow the ARPANET IMP-IMP "circuit" used the SATNET satellite channel? The paper references RFNMs on SATNET. But I don't remember if those were part of the SATNET mechanisms (CPODA?) or somehow part of the ARPANET internal mechanisms. I don't recall ever hearing anything about RFNMs being part of SATNET's mechanisms while I was responsible for it. > > In any event, I studied quite a bit of queueing theory and other branches of mathematics (e.g., statistics, operations research, etc.) while a student at MIT. It was all very enlightening to understand how things work, and to be able to use the techniques to compare possible internal algorithms. > > But I also learned that there can be large differences between theory and practice. > > One example was while I had a student job programming a PDP-8 for data collection in a lab where inertial navigation equipment was developed, used in Apollo, Minuteman, and such systems. I had studied lots of mathematical techniques for engineering design, e.g., use of Karnaugh Maps to minimize logic circuit components. > > My desk happened to be next to one of the career engineer's desk (an actual "rocket scientist"). So I asked him what kinds of tools he had found were most useful for his work. His answer -- none of them. By analyzing enormous amounts of data, they had discovered that almost all failures were caused by some kind of metal-metal connector problem. 
So their engineering principle was to minimize the number of such connections in a design. There were no tools for that. > > Another example occurred at BBN, when the ARPANET was being transformed into the Defense Data Network, to become a DoD-wide operational infrastructure. Someone (can't remember who) had produced a scientific paper proving that the ARPANET algorithms would "lock up" and the entire network would crash. That understandably caused significant concern in the DoD. The DDN couldn't be allowed to crash. > > After BBN investigated, we discovered that the research was true. But there were assumptions made in order for the analysis to be tractable. In particular, the analysis assumed that every IMP in the network ran at exactly the same speed, and was started at exactly the same time, so that all the programs were running in perfect synchrony, with instructions being executed simultaneously in every IMP. That assumption made the analysis mathematically feasible. > > Without that assumption, the analysis was still accurate, but became irrelevant. We advised the DoD not to worry, explaining that the probability of such an occurrence was infinitesimal. If we had to make that behavior happen, we didn't know how to do so. They agreed. DDN continued to be deployed. > > So my personal conclusion has been that scientific analysis is important and useful, but has to be viewed in the context of real-world conditions. The Internet in particular is a real-world environment that seems, to me at least, to be mathematically intractable. There are many components in use, even within a single TCP connection, where some of the mechanisms (retransmissions, error detection, queue management, timing, etc.) are in the switches, some are in the hosts' implementations of TCP, and some are in the particular operating systems involved. > > There is a quote, attributed to Yogi Berra, which captures the situation: > > "In theory, there is no difference between theory and practice. In practice, there is." > > While I was involved in designing internals of The Internet, generally between 1972 and 1997, I don't recall much if any "analysis" of the Internet as a whole communications system, including TCP, IP, UDP, as well as mechanisms in each of the underlying network technologies. Mostly design decisions were driven by intuition and/or experience. Perhaps there was some comprehensive analysis, but I missed it. > > Perhaps The Internet as a whole is just too complex for the existing capabilities of mathematical tools? > > Jack > > > > > > On 3/17/25 21:46, Leonard Kleinrock wrote: >> Hi Jack, >> >> There were some queueing theory papers in those early days that did indeed shed some light on the phenomena and performance of the Arpanet and of Satnet. Here are a couple of references where analysis and measurement were both of value in providing understanding: >> >> https://www.lk.cs.ucla.edu/data/files/Naylor/On%20Measured%20Behavior%20of%20the%20ARPA%20Network.pdf >> >> and >> >> https://www.lk.cs.ucla.edu/data/files/Kleinrock/packet_satellite_multiple_access.pdf >> >> and this last paper even showed the ?capture" effect with the SIMPs. In particular, one phenomenon was that if site A at one end of the Satnet was sending traffic to site B at the other end, then the fact that a message traveling from A to B forced a RFNM reply from B to A and this prevented B from sending its own messages to A since the RFNMs hogged the B to A channel. 
Lots more was observed and these are just some of the performance papers that used measurement and queueing models in those early days. >> >> Len >> >> >> >>> On Mar 11, 2025, at 1:42?PM, Jack Haverty via Internet-history wrote: >>> >>> On 3/11/25 07:05, David Finnigan via Internet-history wrote: >>>> It looks like staff at RSRE (Royal Signals and Radar Establishment) took >>>> the lead in experimenting with formulae and methods for dynamic >>>> estimation of round trip times in TCP. Does anyone here have any further >>>> insight or recollection into these experiments for estimating RTT, and >>>> the development of the RTT formula? >>>> >>> >>> IMHO the key factor was the state of the Internet at that time (1980ish). The ARPANET was the primary "backbone" of The Internet in what I think of as the "fuzzy peach" stage of Internet evolution. The ARPANET was the peach, and sites on the ARPANET were adding LANs of some type and connecting them with some kind of gateway to the ARPANET IMP. >>> >>> The exception to that structure was Europe, especially Peter Kirstein's group at UCL and John Laws group at RSRE. They were interconnected somehow in the UK, but their access to the Internet was through a connection to a SATNET node (aka SIMP) at Goonhilly Downs. >>> >>> SATNET was connected to the ARPANET through one of the "core gateways" that we at BBN were responsible to run as a 24x7 operational network. >>> >>> The ARPANET was a packet network, but it presented a virtual circuit service to its users. Everything that went in one end came out the other end, in order, with nothing missing, and nothing duplicated. TCPs at a US site talking to TCPs at another US site didn't have much work to do, since everything they sent would be received intact. So RTT values could be set very high - I recall one common choice was 3 seconds. >>> >>> For the UK users however, things were quite different. The "core gateways" at the time were very limited by their hardware configurations. They didn't have much buffering space. So they did drop datagrams, which of course had to be retransmitted by the host at the end of the TCP connection. IIRC, at one point the ARPANET/SATNET gateway had exactly one datagram of buffer space. >>> >>> I don't recall anyone ever saying it, but I suspect that situation caused the UCL and RSRE crews to pay a lot of attention to TCP behavior, and try to figure out how best to deal with their skinny pipe across the Atlantic. >>> >>> At one point, someone (from UCL or RSRE, can't remember) reported an unexpected measurement. They did frequent file transfers, often trying to "time" their transfers to happen at a time of day when UK and US traffic flows would be lowest. But they observed that their transfers during "busy times" went much faster than similar transfers during "quiet times". That made little sense of course. >>> >>> After digging around with XNET, SNMP, etc., we discovered the cause. That ARPANET/SATNET gateway had very few buffers. The LANs at users' sites and the ARPANET path could deliver datagrams to that gateway faster than SATNET could take them. So the buffers filled up and datagrams were discarded -- just as expected. >>> >>> During "quiet times", the TCP connection would deliver datagrams to the gateway in bursts (whatever the TCPs negotiated as a Window size). Buffers in the gateway would overflow and some of those datagrams were lost. The sending TCP would retransmit, but only after the RTT timer expired, which was often set to 3 seconds. Result - slow FTPs. 
>>> >>> Conversely, during "busy times", the traffic through the ARPANET would be spread out in time. With other users' traffic flows present, chances were better that someone else's datagram would be dropped instead. Result - faster FTP transfers. >>> >>> AFAIK, none of this behavior was ever analyzed mathematically. The mathematical model of an Internet seemed beyond the capability of queuing theory et al. Progress was very much driven by experimentation and "let's try this" activity. >>> >>> The solution, or actually workaround, was to improve the gateway's hardware. More memory meant more buffering was available. That principle seems to have continued even today, but has caused other problems. Google "buffer bloat" if you're curious. >>> >>> As far as I remember, there weren't any such problems reported with the various Packet Radio networks. They tended to be used only occasionally, for tests and demos, where the SATNET linkage was used almost daily. >>> >>> The Laws and Kirstein groups in the UK were, IMHO, the first "real" users of TCP on The Internet, exploring paths not protected by ARPANET mechanisms. >>> >>> Jack Haverty >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> > ? From gregskinner0 at icloud.com Sun Mar 23 01:19:13 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Sun, 23 Mar 2025 01:19:13 -0700 Subject: [ih] Fwd: Possible source of additional info References: <1261866728.733598.1742689373251@mail.yahoo.com> Message-ID: Forwarded for Barbara > Begin forwarded message: > > From: Barbara Denny > Subject: Re: Possible source of additional info > Date: March 22, 2025 at 5:22:53?PM PDT > > On Saturday, March 22, 2025 at 05:20:01 PM PDT, Barbara Denny wrote: > > > I tripped on this website. It looks like it could be interesting to folks. > > https://historyofcomputercommunications.info/ > > barbara From jeanjour at comcast.net Sun Mar 23 03:27:07 2025 From: jeanjour at comcast.net (John Day) Date: Sun, 23 Mar 2025 06:27:07 -0400 Subject: [ih] Possible source of additional info In-Reply-To: References: <1261866728.733598.1742689373251@mail.yahoo.com> Message-ID: <28D2AAC1-A8C2-4B1C-A39E-17BCD87E6FD7@comcast.net> Yes, this one is interesting and is the primary source for the book, Circuits, Packets, and Protocols published just before Jim passed away. The interviews in the 1980s are a very interesting window into what people were thinking before the Internet was generally known by the public in the 1990s. Take care, John > On Mar 23, 2025, at 04:19, Greg Skinner via Internet-history wrote: > > Forwarded for Barbara > >> Begin forwarded message: >> >> From: Barbara Denny >> Subject: Re: Possible source of additional info >> Date: March 22, 2025 at 5:22:53?PM PDT >> >> On Saturday, March 22, 2025 at 05:20:01 PM PDT, Barbara Denny wrote: >> >> >> I tripped on this website. It looks like it could be interesting to folks. 
>> >> https://historyofcomputercommunications.info/ >> >> barbara > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From j at shoch.com Sun Mar 23 12:50:59 2025 From: j at shoch.com (John Shoch) Date: Sun, 23 Mar 2025 12:50:59 -0700 Subject: [ih] Internet-history Digest, Vol 64, Issue 24 In-Reply-To: References: Message-ID: For those who may not already be familiar with Jim Pelkey, his web site, and his book, allow me to add a bit more to John Day's comments: --Jim Pelkey was a fascinating guy, spending time in the world of finance, venture capital, and startups. --In the 1980's he went on a quest to interview people in the world of data- and computer-communications (starting, I think, with a list of introductions from Paul Baran). --The result of that effort was a massive collection of interviews, and a comprehensive web site describing the history based on the interviews. --For some, like me, the web site was a great reference to dip into. But the flexibility of a massive hypertext site did not necessarily make it easy to understand the story, from beginning to end. --Years later Jim joined with two others to take all this material and produce a new book (over 500 pages), which John Day has referenced; it was published in 2022 by the ACM. --All the original transcripts were donated to the Computer History Museum in Mountain View. --And the CHM web site is, in turn, derived from the book (sort of coming full circle). The book is not particularly technical, but it helps frame the technical issues; it is filled with wonderful stories and context. [We know, however, that people's memories are not perfect; so there are some errors I noticed, and the occasional....reimagining....of history.] For those who would enjoy the more linear treatment, it is a great read -- got me through several transcontinental flights. John Shoch > ------------------------------ > > Message: 3 > Date: Sun, 23 Mar 2025 06:27:07 -0400 > From: John Day > To: Greg Skinner , Greg Skinner via > Internet-history > Subject: Re: [ih] Possible source of additional info > Message-ID: <28D2AAC1-A8C2-4B1C-A39E-17BCD87E6FD7 at comcast.net> > Content-Type: text/plain; charset=utf-8 > > Yes, this one is interesting and is the primary source for the book, > Circuits, Packets, and Protocols published just before Jim passed away. > > The interviews in the 1980s are a very interesting window into what people > were thinking before the Internet was generally known by the public in the > 1990s. > > Take care, > John > > From vint at google.com Sun Mar 23 18:42:21 2025 From: vint at google.com (Vint Cerf) Date: Sun, 23 Mar 2025 21:42:21 -0400 Subject: [ih] Internet-history Digest, Vol 64, Issue 24 In-Reply-To: References: Message-ID: Pelkey's book is indeed a good glimpse into that time period - like Samuel Pepys diary. The other massive work is Andreu Vea's 300 interview book about the Internet. https://www.inmesol.com/blog/history-internet-told-creators-single-book/#:~:text=A%20book%20has%20recently%20been,the%20fathers%20of%20the%20web. v v On Sun, Mar 23, 2025 at 3:51?PM John Shoch via Internet-history < internet-history at elists.isoc.org> wrote: > For those who may not already be familiar with Jim Pelkey, his web site, > and his book, allow me to add a bit more to John Day's comments: > > --Jim Pelkey was a fascinating guy, spending time in the world of finance, > venture capital, and startups. 
> --In the 1980's he went on a quest to interview people in the world of > data- and computer-communications (starting, I think, with a list of > introductions from Paul Baran). > --The result of that effort was a massive collection of interviews, and a > comprehensive web site describing the history based on the interviews. > --For some, like me, the web site was a great reference to dip into. But > the flexibility of a massive hypertext site did not necessarily make it > easy to understand the story, from beginning to end. > --Years later Jim joined with two others to take all this material and > produce a new book (over 500 pages), which John Day has referenced; it was > published in 2022 by the ACM. > --All the original transcripts were donated to the Computer History Museum > in Mountain View. > --And the CHM web site is, in turn, derived from the book (sort of coming > full circle). > > The book is not particularly technical, but it helps frame the technical > issues; it is filled with wonderful stories and context. > [We know, however, that people's memories are not perfect; so there are > some errors I noticed, and the occasional....reimagining....of history.] > > For those who would enjoy the more linear treatment, it is a great read -- > got me through several transcontinental flights. > > John Shoch > > > > ------------------------------ > > > > Message: 3 > > Date: Sun, 23 Mar 2025 06:27:07 -0400 > > From: John Day > > To: Greg Skinner , Greg Skinner via > > Internet-history > > Subject: Re: [ih] Possible source of additional info > > Message-ID: <28D2AAC1-A8C2-4B1C-A39E-17BCD87E6FD7 at comcast.net> > > Content-Type: text/plain; charset=utf-8 > > > > Yes, this one is interesting and is the primary source for the book, > > Circuits, Packets, and Protocols published just before Jim passed away. > > > > The interviews in the 1980s are a very interesting window into what > people > > were thinking before the Internet was generally known by the public in > the > > 1990s. > > > > Take care, > > John > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From gregskinner0 at icloud.com Mon Mar 24 17:16:06 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Mon, 24 Mar 2025 17:16:06 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <1333609295.2931332.1741729733743@mail.yahoo.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> Message-ID: <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> On Mar 11, 2025, at 2:48?PM, Barbara Denny wrote: > > I don't recall ever hearing, or reading, about TCP transport requirements from the underlying network but I wasn't there in the early days of TCP (70s).? > I have trouble thinking the problem with the congestion assumption? wasn't brought up early but I certainly don't know. > barbara > On Tuesday, March 11, 2025 at 02:10:26 PM PDT, John Day wrote: > > I would disagree. The Transport Layer assumes a minimal service from the layers below (actually all layers do). 
If the underlying layer doesn't meet that normally, then measures are needed to bring the service up to the expected level. Given that the diameter of the net now is about 20 or so, and probably 5 or 6 back then, packet radio constituted a small fraction of the lower layers that the packet had to cross. Assuming packet radio didn't have to do anything had the tail wagging the dog. > > Of course the example some would point to was TCP congestion control assuming lost packets were due to congestion. That was a dumb assumption and didn't take a systems view of the problem. (Of course, it wasn't the only dumb thing in that design; it also maximized retransmissions.) > > Take care, > John Day > >> On Mar 11, 2025, at 17:02, Barbara Denny via Internet-history wrote: >> >> I do view packet radio as a stress test for the protocol(s). I think it is important to consider all the different dynamics that might come into play with the networks. >> I still need to really read Jack's message, but there were also military testbeds that had packet radio networks. I don't know what these users were trying to do. I was only involved if they experienced problems involving the network. My role was
For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the Link layers had to be reliable, but have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective. There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ?atomic? and if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates of increased this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven?t encountered them.) ;-) One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances would be quite slow and inefficient. Then since they use WiFi everyday is it slow? No. Then why not? ;-) It leads to a nice principle of protocol design. Take care, John > On Mar 24, 2025, at 20:16, Greg Skinner via Internet-history wrote: > > > On Mar 11, 2025, at 2:48?PM, Barbara Denny wrote: >> >> I don't recall ever hearing, or reading, about TCP transport requirements from the underlying network but I wasn't there in the early days of TCP (70s).? >> I have trouble thinking the problem with the congestion assumption? wasn't brought up early but I certainly don't know. >> barbara >> On Tuesday, March 11, 2025 at 02:10:26 PM PDT, John Day wrote: >> >> I would disagree. The Transport Layer assumes a minimal service from the layers below (actually all layers do). If the underlying layer doesn?t meet that normally, then measures are needed to bring the service up to the expected level.? Given that the diameter of the net now is about 20 or so and probably back then 5 or 6. Packet radio constituted a small fraction of the lower layers that the packet had to cross. Assuming packet radio didn?t have to do anything had the tail wagging the dog. >> >> Of course the example some would point to was TCP congestion control assuming lost packets were due to congestion. That was a dumb assumption and didn?t take a systems view of the problem. (Of course, it wasn?t the only dumb thing in that design, it also maximized retransmissions.) >> >> Take care, >> John Day >> >>> On Mar 11, 2025, at 17:02, Barbara Denny via Internet-history wrote: >>> >>> I do view packet radio as a stress test for the protocol(s).? I think it is important to consider all the different dynamics that might come into play with the networks. >>> I still need to really read Jack's message but there were also military testbeds that had packet radio networks.? I don't know what these users were trying to do. I was only involved if they experienced problems involving the network. My role was? 
to figure out why and then get it fixed (with whatever contractor that was working that part of the system, including BBN). >>> barbara >>> > > For what it's worth, there was a discussion on this list back in 2016 about early TCP work and how packet radio networks were involved. [1] [2] Based on what I?ve been able to find, I would agree with John that most Internet traffic didn?t cross packet radio networks back then. The Ft. Bragg testbed was one of the most used, as far as I can tell. > > As for people who tried to deal with how inherent loss problems of packet radio networks affected TCP, I did find a comp.protocols.tcp-ip thread from 1987 on that subject. [3] Jil Westcott, who contributed to the thread, also had email correspondence with Zaw-Sing Su in the Ft. Bragg PRNET simulation paper he wrote. > > --gregbo > > [1] https://elists.isoc.org/pipermail/internet-history/2016-August/thread.html > [2] https://elists.isoc.org/pipermail/internet-history/2016-September/thread.html > [3] https://groups.google.com/g/comp.protocols.tcp-ip/c/vuPqc12SLis/m/1GoOsEem954J > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gregskinner0 at icloud.com Wed Mar 26 10:10:27 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Wed, 26 Mar 2025 10:10:27 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> Message-ID: <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> On Mar 24, 2025, at 5:53?PM, John Day wrote: > > I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important. > > The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway and the results were expected to keep the frequency of lost packets fairly low. Thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient. > > For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the Link layers had to be reliable, but have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective. > > There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ?atomic? and if the Ack is not received, it is assumed that the packet was not delivered. 
(As WiFi data rates of increased this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven?t encountered them.) ;-) > > One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances would be quite slow and inefficient. Then since they use WiFi everyday is it slow? No. Then why not? ;-) > > It leads to a nice principle of protocol design. > > Take care, > John > Looking at this from another direction, there are several specialized versions of TCP, [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how if circumstances permitted, something that today we might call ?TCP Menlo? or ?TCP Alpine? might have been created that would have addressed the lossy networks problem more directly. [2] --gregbo [1] https://en.wikipedia.org/wiki/TCP_congestion_control [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ From jeanjour at comcast.net Wed Mar 26 11:56:25 2025 From: jeanjour at comcast.net (John Day) Date: Wed, 26 Mar 2025 14:56:25 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> Message-ID: I don?t quite understand about using TCP (or variants) here. The PRNET consisted of the van and two repeaters to a gateway to the ARPANET. The repeaters were physical layer relays I assume that did not interpret the packets. I presume that the PRNET had a link layer that did some error control. The van to gateway is generating TCP datagrams over its ?link layer' protocol. (IP had not yet been created, or had it?.) I presume that the ARPANET relayed the TCP packets as Type 3 packets, i.e., datagrams. The PRNET-Gateway would have looked like a host to the IMP it was connected to. The IMPs had their own hop-by-hop error control over the physical lines. (There weren?t really layers in the IMPs. At least that is what Dave Walden told me. But we can assume that this error control was sort of like a link layer.) The error characteristics of the Van-Gateway link layer were very different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of the IMP-IMP lines had very low error rates.) There was one Van-Gateway link and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been meeting the requirements for a network layer as described in my email. The only major difference in error rate was the van-gateway. It would make more sense (and consistent with what is described in my email) to provide a more robust to enhance the van-gateway link protocol to be more robust to met the error characteristics. Using TCP would be in some sense trying to use 'the tail to wag the dog,? 
i.e., using an end-to-end transport protocol to compensate for 1 link that was not meeting the requirements of the network layer. This would have been much less effective. It is easy to see that errors in a smaller scope (the link layer) should not be propagated to layers of a greater scope for recovery. (Unless their frequency is very low as described previously, which this isn?t.) This what the architecture model requires. Not sure what congestion control has to do with this. The TCP congestion solution is pretty awful solution. The implicit notification makes it predatory and assumes that lost messages are due to congestion, which they aren?t. (Is that the connection?) It works by causing congestion (some congestion avoidance strategy!) which generates many more retransmissions. A scheme that minimizes congestion events and retransmissions would be much preferred. (And one existed at the time.) Take care, John > On Mar 26, 2025, at 13:10, Greg Skinner wrote: > > > On Mar 24, 2025, at 5:53?PM, John Day wrote: >> >> I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important. >> >> The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway and the results were expected to keep the frequency of lost packets fairly low. Thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient. >> >> For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the Link layers had to be reliable, but have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective. >> >> There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ?atomic? and if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates of increased this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven?t encountered them.) ;-) >> >> One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances would be quite slow and inefficient. Then since they use WiFi everyday is it slow? No. Then why not? ;-) >> >> It leads to a nice principle of protocol design. 
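To put a rough number on the point about recovering errors within the smaller scope: repairing a loss at the link costs on the order of one link round trip, while leaving it to the end-to-end transport costs at least a path round trip plus the retransmission timeout. The figures below are assumptions chosen only to make the comparison concrete; they are not measurements from PRNET or the ARPANET.

# Back-of-the-envelope only: expected extra delay per packet when losses are
# repaired at the link versus end to end, assuming each loss costs one
# recovery round trip (plus a timeout in the end-to-end case).

def expected_recovery_delay(loss_rate, recovery_rtt, timeout=0.0):
    """Mean added delay per packet under the one-recovery-per-loss assumption."""
    return loss_rate * (recovery_rtt + timeout)

if __name__ == "__main__":
    loss = 0.05        # assumed 5% loss on one radio hop
    link_rtt = 0.02    # assumed 20 ms hop round trip
    path_rtt = 0.60    # assumed 600 ms end-to-end round trip
    rto = 1.0          # assumed 1 s transport retransmission timeout

    print("link-level repair:  %.3f s added per packet on average"
          % expected_recovery_delay(loss, link_rtt))
    print("end-to-end repair:  %.3f s added per packet on average"
          % expected_recovery_delay(loss, path_rtt, rto))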
>> >> Take care, >> John >> > > Looking at this from another direction, there are several specialized versions of TCP, [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how if circumstances permitted, something that today we might call ?TCP Menlo? or ?TCP Alpine? might have been created that would have addressed the lossy networks problem more directly. [2] > > --gregbo > > [1] https://en.wikipedia.org/wiki/TCP_congestion_control > [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ From vgcerf at gmail.com Wed Mar 26 12:23:09 2025 From: vgcerf at gmail.com (vinton cerf) Date: Wed, 26 Mar 2025 15:23:09 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> Message-ID: see inline, adding don nielson On Wed, Mar 26, 2025 at 2:56?PM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > I don?t quite understand about using TCP (or variants) here. > > The PRNET consisted of the van and two repeaters to a gateway to the > ARPANET. The repeaters were physical layer relays I assume that did not > interpret the packets. no, they were full up packet radios as I remember it. > I presume that the PRNET had a link layer that did some error control. yes > The van to gateway is generating TCP datagrams over its ?link layer' > protocol. (IP had not yet been created, or had it?.) IP came about 1977, the first tests in 1976, TCP only. The Nov 1977 tests were full up TCP/IP > I presume that the ARPANET relayed the TCP packets as Type 3 packets, > i.e., datagrams. Well, not necessarily. We used Type 3 for voice comms but not necessarily for TCP traffic > The PRNET-Gateway would have looked like a host to the IMP it was > connected to. yes > The IMPs had their own hop-by-hop error control over the physical lines. > (There weren?t really layers in the IMPs. At least that is what Dave Walden > told me. But we can assume that this error control was sort of like a link > layer.) > the IMPs carried Arpanet packets (TCP/IP packets were "messages" to the IMP and broken into Arpanet packets for transport. The IMPs had sequenced delivery, message reassembly and RFNM flow control except that was not the case for Type 3 "uncontrolled" IMP packets) > > The error characteristics of the Van-Gateway link layer were very > different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of > the IMP-IMP lines had very low error rates.) There was one Van-Gateway link > and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been > meeting the requirements for a network layer as described in my email. The > only major difference in error rate was the van-gateway. It would make more > sense (and consistent with what is described in my email) to provide a more > robust to enhance the van-gateway link protocol to be more robust to met > the error characteristics. > > Using TCP would be in some sense trying to use 'the tail to wag the dog,? > i.e., using an end-to-end transport protocol to compensate for 1 link that > was not meeting the requirements of the network layer. This would have been > much less effective. 
It is easy to see that errors in a smaller scope (the > link layer) should not be propagated to layers of a greater scope for > recovery. (Unless their frequency is very low as described previously, > which this isn?t.) This what the architecture model requires. > Generally that's a fair line of argument (ie, try to bring links up to better quality by forward error correction, link level retransmission, ARQ) for each of the "networks' in the Internet. The end/end TCP was mostly to deal with packets lost in the PRNET, in this case, because the PRNET was potentially multihop and packets could be lost due to lack of connectivity, timeouts, loss of a packet radio. There is a 1978 IEEE Proceedings with a lot of the details of that time period for PRNET, SATNET, etc. > > Not sure what congestion control has to do with this. The TCP congestion > solution is pretty awful solution. The implicit notification makes it > predatory and assumes that lost messages are due to congestion, which they > aren?t. (Is that the connection?) It works by causing congestion (some > congestion avoidance strategy!) which generates many more retransmissions. > A scheme that minimizes congestion events and retransmissions would be much > preferred. (And one existed at the time.) > Many congestion control methods have since been introduced (think Sally Floyd, Van Jacobson) since the early and relatively naive TCP days. > > Take care, > John > > > On Mar 26, 2025, at 13:10, Greg Skinner wrote: > > > > > > On Mar 24, 2025, at 5:53?PM, John Day wrote: > >> > >> I would go further and say that this is a general property of layers. > We tend to focus on the service provided by a layer, but the minimal > service the layer expects from supporting layers is just as important. > >> > >> The original concept of best-effort and end-to-end transport (circa > 1972) was that errors in the network layer were from congestion and rare > memory errors during relaying. Congestion research was already underway and > the results were expected to keep the frequency of lost packets fairly low. > Thus keeping retransmissions at the transport layer relatively low and > allowing the transport layer to be reasonably efficient. > >> > >> For those early networks, the link layer was something HDLC-like and so > reliable. However, it was recognized that this did not imply that the Link > layers had to be reliable, but have an error rate well below the error rate > created by the network layer. Given that there would be n link layers > contributing to the additional error rate, where n is the diameter of the > network, it is possible to estimate an upper bound that each link layer > must meet to keep the error rate at the network layer low enough for the > transport layer to be effective. > >> > >> There were soon examples of link layers that were datagram services but > sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to > take up the packet radio topic, 802.11 is another good example, where what > happens during the NAV (RTS, CTS, send data, get an Ack) is considered > ?atomic? and if the Ack is not received, it is assumed that the packet was > not delivered. (As WiFi data rates of increased this has been modified.) > This seems to have the same property of providing a sufficiently low error > rate to the network layer that transport remains effective. (Although I > have to admit I have never come across typical goodput measurements for > 802.11. They must exist, I just haven?t encountered them.) 
;-) > >> > >> One fun thing to do with students and WiFi is to point out that the > original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks > use stop-and-wait as a simple introductory protocol and show that under > most circumstances would be quite slow and inefficient. Then since they use > WiFi everyday is it slow? No. Then why not? ;-) > >> > >> It leads to a nice principle of protocol design. > >> > >> Take care, > >> John > >> > > > > Looking at this from another direction, there are several specialized > versions of TCP, [1] Given the conditions experienced in the SF Bay Area > PRNET, I can see how if circumstances permitted, something that today we > might call ?TCP Menlo? or ?TCP Alpine? might have been created that would > have addressed the lossy networks problem more directly. [2] > > > > --gregbo > > > > [1] https://en.wikipedia.org/wiki/TCP_congestion_control > > [2] > https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Wed Mar 26 13:30:19 2025 From: jeanjour at comcast.net (John Day) Date: Wed, 26 Mar 2025 16:30:19 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> Message-ID: > On Mar 26, 2025, at 15:23, vinton cerf wrote: > > see inline, adding don nielson > > On Wed, Mar 26, 2025 at 2:56?PM John Day via Internet-history > wrote: >> I don?t quite understand about using TCP (or variants) here. >> >> The PRNET consisted of the van and two repeaters to a gateway to the ARPANET. The repeaters were physical layer relays I assume that did not interpret the packets. > no, they were full up packet radios as I remember it. Right, I was assuming that the ?station? in the van was a full packet radio with some sort of link layer and then TCP over that to the gateway. That comment was that the repeaters were repeaters not what Ethernet would call a ?bridge.? Did IP exist at that point? I was assuming IP was ?78, which is close, so I wasn?t sure. >> I presume that the PRNET had a link layer that did some error control. > yes Makes sense. >> The van to gateway is generating TCP datagrams over its ?link layer' protocol. (IP had not yet been created, or had it?.) > IP came about 1977, the first tests in 1976, TCP only. The Nov 1977 tests were full up TCP/IP >> I presume that the ARPANET relayed the TCP packets as Type 3 packets, i.e., datagrams. > Well, not necessarily. We used Type 3 for voice comms but not necessarily for TCP traffic Okay. minor difference. So an ARPANET ?network layer?. >> The PRNET-Gateway would have looked like a host to the IMP it was connected to. > yes >> The IMPs had their own hop-by-hop error control over the physical lines. (There weren?t really layers in the IMPs. At least that is what Dave Walden told me. But we can assume that this error control was sort of like a link layer.) > the IMPs carried Arpanet packets (TCP/IP packets were "messages" to the IMP and broken into Arpanet packets for transport. 
The IMPs had sequenced delivery, message reassembly and RFNM flow control except that was not the case for Type 3 "uncontrolled" IMP packets) Right. That was what I was thinking. >> >> The error characteristics of the Van-Gateway link layer were very different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of the IMP-IMP lines had very low error rates.) There was one Van-Gateway link and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been meeting the requirements for a network layer as described in my email. The only major difference in error rate was the van-gateway. It would make more sense (and consistent with what is described in my email) to provide a more robust to enhance the van-gateway link protocol to be more robust to met the error characteristics. >> >> Using TCP would be in some sense trying to use 'the tail to wag the dog,? i.e., using an end-to-end transport protocol to compensate for 1 link that was not meeting the requirements of the network layer. This would have been much less effective. It is easy to see that errors in a smaller scope (the link layer) should not be propagated to layers of a greater scope for recovery. (Unless their frequency is very low as described previously, which this isn?t.) This what the architecture model requires. > Generally that's a fair line of argument (ie, try to bring links up to better quality by forward error correction, link level retransmission, ARQ) for each of the "networks' in the Internet. The end/end TCP was mostly to deal with packets lost in the PRNET, in this case, because the PRNET was potentially multihop and packets could be lost due to lack of connectivity, timeouts, loss of a packet radio. There is a 1978 IEEE Proceedings with a lot of the details of that time period for PRNET, SATNET, etc. Okay, that makes sense. >> >> Not sure what congestion control has to do with this. The TCP congestion solution is pretty awful solution. The implicit notification makes it predatory and assumes that lost messages are due to congestion, which they aren?t. (Is that the connection?) It works by causing congestion (some congestion avoidance strategy!) which generates many more retransmissions. A scheme that minimizes congestion events and retransmissions would be much preferred. (And one existed at the time.) > Many congestion control methods have since been introduced (think Sally Floyd, Van Jacobson) since the early and relatively naive TCP days. Well, I think Jain?s group at DEC nailed the problem, at least as a first approximation. (We might extend it today based on what we have learned.) First was recognizing the need for ECN. That ensures the response is to congestion and not something else and it limits the response to events in THAT layer. ECN is essential. The Jacobson approach is more a network solution than an internet solution. The other thing Jain?s group did was show that notification should begin when the average queue length was greater than or equal to 1. That is very early and would really reduce the probability of retransmissions, rather creating congestion and causing more retransmissions. The Floyd/Jacobson hung on to the basic implicit notification, cause congestion model and tried to tweak that which was a dead end. I should also mention that Jain?s group optimized for the knee of the curve, while Jacobson?s solution optimized for the edge of the cliff where congestion collapse started. Hence more retransmissions. 
The other thing that Jain?s group showed was that congestion is a stochastic phenomena and many ?congestion? events actually clear on their own. (which is why it is the average queue length, a filter for those sorts of events.) The other mistake they all made was putting it in Transport. We know that ALL congestion control strategies deteriorate with increasing time-to-notify. Transport maximizes time-to-notify. (It has the largest scope.) Everyone always thought that congestion would go in the network layers where scope was bounded supporting the Internet Transport Layer. Take care, John >> >> Take care, >> John >> >> > On Mar 26, 2025, at 13:10, Greg Skinner > wrote: >> > >> > >> > On Mar 24, 2025, at 5:53?PM, John Day > wrote: >> >> >> >> I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important. >> >> >> >> The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway and the results were expected to keep the frequency of lost packets fairly low. Thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient. >> >> >> >> For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the Link layers had to be reliable, but have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective. >> >> >> >> There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ?atomic? and if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates of increased this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven?t encountered them.) ;-) >> >> >> >> One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances would be quite slow and inefficient. Then since they use WiFi everyday is it slow? No. Then why not? ;-) >> >> >> >> It leads to a nice principle of protocol design. >> >> >> >> Take care, >> >> John >> >> >> > >> > Looking at this from another direction, there are several specialized versions of TCP, [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how if circumstances permitted, something that today we might call ?TCP Menlo? or ?TCP Alpine? might have been created that would have addressed the lossy networks problem more directly. 
[2] >> > >> > --gregbo >> > >> > [1] https://en.wikipedia.org/wiki/TCP_congestion_control >> > [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Wed Mar 26 13:46:14 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Wed, 26 Mar 2025 20:46:14 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> Message-ID: <665693090.2324601.1743021974531@mail.yahoo.com> Saw Vint's message after I started this one so adding Don Nielson to this thread too.?? I would like to mention the PRnet in the Bay Area was larger than 2 nodes.? ?I am guessing you are referring to the diagram I sent out for the 1976 demo/test.? That diagram shows the path the packets took to reach SRI from Rissotti's.? I am trying to find out if the rest of the network wasn't deployed in 1976 but I haven't been able to track it down.? If you look at the?DTIC reference Greg Skinner provided?previously (https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf),? there are diagrams starting at page 244 that show more of the Bay Area PR network in that report .? It includes sites at Grizzly Peak, Mission Peak, Mt. San Bruno, etc.? I am not sure I ever got a copy when I was at BBN so I don't feel I can comment if some of the node locations would change based on what connectivity was needed. BTW,? was the repeater marked Eichler in the 1976 demo diagram perhaps near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize the Dish belongs to SRI and not Stanford. barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via Internet-history wrote: I don?t quite understand about using TCP (or variants) here. The PRNET consisted of the van and two repeaters to a gateway to the ARPANET. The repeaters were physical layer relays I assume that did not interpret the packets. I presume that the PRNET had a link layer that did some error control. The van to gateway is generating TCP datagrams over its ?link layer' protocol. (IP had not yet been created, or had it?.) I presume that the ARPANET relayed the TCP packets as Type 3 packets, i.e., datagrams. The PRNET-Gateway would have looked like a host to the IMP it was connected to.? The IMPs had their own hop-by-hop error control over the physical lines. (There weren?t really layers in the IMPs. At least that is what Dave Walden told me. But we can assume that this error control was sort of like a link layer.) The error characteristics of the Van-Gateway link layer were very different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of the IMP-IMP lines had very low error rates.) There was one Van-Gateway link and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been meeting the requirements for a network layer as described in my email. The only major difference in error rate was the van-gateway. 
It would make more sense (and consistent with what is described in my email) to provide a more robust to enhance the van-gateway link protocol to be more robust to met the error characteristics. Using TCP would be in some sense trying to use 'the tail to wag the dog,? i.e., using an end-to-end transport protocol to compensate for 1 link that was not meeting the requirements of the network layer. This would have been much less effective. It is easy to see that errors in a smaller scope (the link layer) should not be propagated to layers of a greater scope for recovery. (Unless their frequency is very low as described previously, which this isn?t.) This what the architecture model requires. Not sure what congestion control has to do with this. The TCP congestion solution is pretty awful solution. The implicit notification makes it predatory and assumes that lost messages are due to congestion, which they aren?t. (Is that the connection?)? It works by causing congestion (some congestion avoidance strategy!) which generates many more retransmissions. A scheme that minimizes congestion events and retransmissions would be much preferred. (And one existed at the time.) Take care, John > On Mar 26, 2025, at 13:10, Greg Skinner wrote: > > > On Mar 24, 2025, at 5:53?PM, John Day wrote: >> >> I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important. >> >> The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway and the results were expected to keep the frequency of lost packets fairly low. Thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient. >> >> For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the Link layers had to be reliable, but have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective. >> >> There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ?atomic? and if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates of increased this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven?t encountered them.)? ;-) >> >> One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances would be quite slow and inefficient. Then since they use WiFi everyday is it slow? No. 
Then why not? ;-) >> >> It leads to a nice principle of protocol design. >> >> Take care, >> John >> > > Looking at this from another direction, there are several specialized versions of TCP, [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how if circumstances permitted, something that today we might call ?TCP Menlo? or ?TCP Alpine? might have been created that would have addressed the lossy networks problem more directly. [2] > > --gregbo > > [1] https://en.wikipedia.org/wiki/TCP_congestion_control > [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ From vint at google.com Wed Mar 26 13:57:30 2025 From: vint at google.com (Vint Cerf) Date: Wed, 26 Mar 2025 16:57:30 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <665693090.2324601.1743021974531@mail.yahoo.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> Message-ID: I think we had a fair number of nodes - at least a half dozen, possibly more? Don would know, if you don't Barbara. Yes to multiple mountain sites. Eichler - sounds like somebody's house! I used to live in an Eichler in Palo Alto but never had a packet radio installed. Xerox PARC had one (fixed location) though. v On Wed, Mar 26, 2025 at 4:46?PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > > Saw Vint's message after I started this one so adding Don Nielson to this > thread too. > I would like to mention the PRnet in the Bay Area was larger than 2 > nodes. I am guessing you are referring to the diagram I sent out for the > 1976 demo/test. That diagram shows the path the packets took to reach SRI > from Rissotti's. I am trying to find out if the rest of the network wasn't > deployed in 1976 but I haven't been able to track it down. If you look at > the DTIC reference Greg Skinner provided previously ( > https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams > starting at page 244 that show more of the Bay Area PR network in that > report . It includes sites at Grizzly Peak, Mission Peak, Mt. San Bruno, > etc. I am not sure I ever got a copy when I was at BBN so I don't feel I > can comment if some of the node locations would change based on what > connectivity was needed. > BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps > near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize the > Dish belongs to SRI and not Stanford. > barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via > Internet-history wrote: > > I don?t quite understand about using TCP (or variants) here. > > The PRNET consisted of the van and two repeaters to a gateway to the > ARPANET. The repeaters were physical layer relays I assume that did not > interpret the packets. I presume that the PRNET had a link layer that did > some error control. The van to gateway is generating TCP datagrams over its > ?link layer' protocol. (IP had not yet been created, or had it?.) I presume > that the ARPANET relayed the TCP packets as Type 3 packets, i.e., > datagrams. The PRNET-Gateway would have looked like a host to the IMP it > was connected to. 
The IMPs had their own hop-by-hop error control over the > physical lines. (There weren?t really layers in the IMPs. At least that is > what Dave Walden told me. But we can assume that this error control was > sort of like a link layer.) > > The error characteristics of the Van-Gateway link layer were very > different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of > the IMP-IMP lines had very low error rates.) There was one Van-Gateway link > and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been > meeting the requirements for a network layer as described in my email. The > only major difference in error rate was the van-gateway. It would make more > sense (and consistent with what is described in my email) to provide a more > robust to enhance the van-gateway link protocol to be more robust to met > the error characteristics. > > Using TCP would be in some sense trying to use 'the tail to wag the dog,? > i.e., using an end-to-end transport protocol to compensate for 1 link that > was not meeting the requirements of the network layer. This would have been > much less effective. It is easy to see that errors in a smaller scope (the > link layer) should not be propagated to layers of a greater scope for > recovery. (Unless their frequency is very low as described previously, > which this isn?t.) This what the architecture model requires. > > Not sure what congestion control has to do with this. The TCP congestion > solution is pretty awful solution. The implicit notification makes it > predatory and assumes that lost messages are due to congestion, which they > aren?t. (Is that the connection?) It works by causing congestion (some > congestion avoidance strategy!) which generates many more retransmissions. > A scheme that minimizes congestion events and retransmissions would be much > preferred. (And one existed at the time.) > > Take care, > John > > > On Mar 26, 2025, at 13:10, Greg Skinner wrote: > > > > > > On Mar 24, 2025, at 5:53?PM, John Day wrote: > >> > >> I would go further and say that this is a general property of layers. > We tend to focus on the service provided by a layer, but the minimal > service the layer expects from supporting layers is just as important. > >> > >> The original concept of best-effort and end-to-end transport (circa > 1972) was that errors in the network layer were from congestion and rare > memory errors during relaying. Congestion research was already underway and > the results were expected to keep the frequency of lost packets fairly low. > Thus keeping retransmissions at the transport layer relatively low and > allowing the transport layer to be reasonably efficient. > >> > >> For those early networks, the link layer was something HDLC-like and so > reliable. However, it was recognized that this did not imply that the Link > layers had to be reliable, but have an error rate well below the error rate > created by the network layer. Given that there would be n link layers > contributing to the additional error rate, where n is the diameter of the > network, it is possible to estimate an upper bound that each link layer > must meet to keep the error rate at the network layer low enough for the > transport layer to be effective. > >> > >> There were soon examples of link layers that were datagram services but > sufficiently reliable to meet those conditions, e.g., Ethernet. 
Later, to > take up the packet radio topic, 802.11 is another good example, where what > happens during the NAV (RTS, CTS, send data, get an Ack) is considered > ?atomic? and if the Ack is not received, it is assumed that the packet was > not delivered. (As WiFi data rates of increased this has been modified.) > This seems to have the same property of providing a sufficiently low error > rate to the network layer that transport remains effective. (Although I > have to admit I have never come across typical goodput measurements for > 802.11. They must exist, I just haven?t encountered them.) ;-) > >> > >> One fun thing to do with students and WiFi is to point out that the > original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks > use stop-and-wait as a simple introductory protocol and show that under > most circumstances would be quite slow and inefficient. Then since they use > WiFi everyday is it slow? No. Then why not? ;-) > >> > >> It leads to a nice principle of protocol design. > >> > >> Take care, > >> John > >> > > > > Looking at this from another direction, there are several specialized > versions of TCP, [1] Given the conditions experienced in the SF Bay Area > PRNET, I can see how if circumstances permitted, something that today we > might call ?TCP Menlo? or ?TCP Alpine? might have been created that would > have addressed the lossy networks problem more directly. [2] > > > > --gregbo > > > > [1] https://en.wikipedia.org/wiki/TCP_congestion_control > > [2] > https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jeanjour at comcast.net Wed Mar 26 14:08:24 2025 From: jeanjour at comcast.net (John Day) Date: Wed, 26 Mar 2025 17:08:24 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> Message-ID: <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> And those nodes relayed among themselves as well as with the gateway? IOW, PRNET wasn?t a star network with the gateway as the center, like a WIFI access point. So there would have been TCP connections between PRNET nodes as well as TCP connections potentially relayed by other PRNET nodes through the gateway to ARPANET hosts. Right? Take care, John > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history wrote: > > I think we had a fair number of nodes - at least a half dozen, possibly > more? Don would know, if you don't Barbara. > Yes to multiple mountain sites. Eichler - sounds like somebody's house! I > used to live in an Eichler in Palo Alto but never had a packet radio > installed. Xerox PARC had one (fixed location) though. 
> > v > > > On Wed, Mar 26, 2025 at 4:46?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > >> >> Saw Vint's message after I started this one so adding Don Nielson to this >> thread too. >> I would like to mention the PRnet in the Bay Area was larger than 2 >> nodes. I am guessing you are referring to the diagram I sent out for the >> 1976 demo/test. That diagram shows the path the packets took to reach SRI >> from Rissotti's. I am trying to find out if the rest of the network wasn't >> deployed in 1976 but I haven't been able to track it down. If you look at >> the DTIC reference Greg Skinner provided previously ( >> https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams >> starting at page 244 that show more of the Bay Area PR network in that >> report . It includes sites at Grizzly Peak, Mission Peak, Mt. San Bruno, >> etc. I am not sure I ever got a copy when I was at BBN so I don't feel I >> can comment if some of the node locations would change based on what >> connectivity was needed. >> BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps >> near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize the >> Dish belongs to SRI and not Stanford. >> barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via >> Internet-history wrote: >> >> I don?t quite understand about using TCP (or variants) here. >> >> The PRNET consisted of the van and two repeaters to a gateway to the >> ARPANET. The repeaters were physical layer relays I assume that did not >> interpret the packets. I presume that the PRNET had a link layer that did >> some error control. The van to gateway is generating TCP datagrams over its >> ?link layer' protocol. (IP had not yet been created, or had it?.) I presume >> that the ARPANET relayed the TCP packets as Type 3 packets, i.e., >> datagrams. The PRNET-Gateway would have looked like a host to the IMP it >> was connected to. The IMPs had their own hop-by-hop error control over the >> physical lines. (There weren?t really layers in the IMPs. At least that is >> what Dave Walden told me. But we can assume that this error control was >> sort of like a link layer.) >> >> The error characteristics of the Van-Gateway link layer were very >> different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of >> the IMP-IMP lines had very low error rates.) There was one Van-Gateway link >> and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been >> meeting the requirements for a network layer as described in my email. The >> only major difference in error rate was the van-gateway. It would make more >> sense (and consistent with what is described in my email) to provide a more >> robust to enhance the van-gateway link protocol to be more robust to met >> the error characteristics. >> >> Using TCP would be in some sense trying to use 'the tail to wag the dog,? >> i.e., using an end-to-end transport protocol to compensate for 1 link that >> was not meeting the requirements of the network layer. This would have been >> much less effective. It is easy to see that errors in a smaller scope (the >> link layer) should not be propagated to layers of a greater scope for >> recovery. (Unless their frequency is very low as described previously, >> which this isn?t.) This what the architecture model requires. >> >> Not sure what congestion control has to do with this. The TCP congestion >> solution is pretty awful solution. 
The implicit notification makes it >> predatory and assumes that lost messages are due to congestion, which they >> aren?t. (Is that the connection?) It works by causing congestion (some >> congestion avoidance strategy!) which generates many more retransmissions. >> A scheme that minimizes congestion events and retransmissions would be much >> preferred. (And one existed at the time.) >> >> Take care, >> John >> >>> On Mar 26, 2025, at 13:10, Greg Skinner wrote: >>> >>> >>> On Mar 24, 2025, at 5:53?PM, John Day wrote: >>>> >>>> I would go further and say that this is a general property of layers. >> We tend to focus on the service provided by a layer, but the minimal >> service the layer expects from supporting layers is just as important. >>>> >>>> The original concept of best-effort and end-to-end transport (circa >> 1972) was that errors in the network layer were from congestion and rare >> memory errors during relaying. Congestion research was already underway and >> the results were expected to keep the frequency of lost packets fairly low. >> Thus keeping retransmissions at the transport layer relatively low and >> allowing the transport layer to be reasonably efficient. >>>> >>>> For those early networks, the link layer was something HDLC-like and so >> reliable. However, it was recognized that this did not imply that the Link >> layers had to be reliable, but have an error rate well below the error rate >> created by the network layer. Given that there would be n link layers >> contributing to the additional error rate, where n is the diameter of the >> network, it is possible to estimate an upper bound that each link layer >> must meet to keep the error rate at the network layer low enough for the >> transport layer to be effective. >>>> >>>> There were soon examples of link layers that were datagram services but >> sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to >> take up the packet radio topic, 802.11 is another good example, where what >> happens during the NAV (RTS, CTS, send data, get an Ack) is considered >> ?atomic? and if the Ack is not received, it is assumed that the packet was >> not delivered. (As WiFi data rates of increased this has been modified.) >> This seems to have the same property of providing a sufficiently low error >> rate to the network layer that transport remains effective. (Although I >> have to admit I have never come across typical goodput measurements for >> 802.11. They must exist, I just haven?t encountered them.) ;-) >>>> >>>> One fun thing to do with students and WiFi is to point out that the >> original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks >> use stop-and-wait as a simple introductory protocol and show that under >> most circumstances would be quite slow and inefficient. Then since they use >> WiFi everyday is it slow? No. Then why not? ;-) >>>> >>>> It leads to a nice principle of protocol design. >>>> >>>> Take care, >>>> John >>>> >>> >>> Looking at this from another direction, there are several specialized >> versions of TCP, [1] Given the conditions experienced in the SF Bay Area >> PRNET, I can see how if circumstances permitted, something that today we >> might call ?TCP Menlo? or ?TCP Alpine? might have been created that would >> have addressed the lossy networks problem more directly. 
[2] >>> >>> --gregbo >>> >>> [1] https://en.wikipedia.org/wiki/TCP_congestion_control >>> [2] >> https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ >> >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vint at google.com Wed Mar 26 14:17:15 2025 From: vint at google.com (Vint Cerf) Date: Wed, 26 Mar 2025 17:17:15 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> Message-ID: yes, the gateway was colocated with the Station (on the same computer). The Station managed the Packet Radio network, maintained information about connectivity among the radio relays. PRNET was not a star network. Topology changes were tracked by the mobile nodes periodically reporting to the station which other Packet Radios they could reach. Hosts on the PRNET nodes could communicate with each other and, through the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run TCP, that was running on the hosts like the LSI-11/23's or the Station or.... v On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: > And those nodes relayed among themselves as well as with the gateway? > > IOW, PRNET wasn?t a star network with the gateway as the center, like a > WIFI access point. > > So there would have been TCP connections between PRNET nodes as well as > TCP connections potentially relayed by other PRNET nodes through the > gateway to ARPANET hosts. Right? > > Take care, > John > > > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I think we had a fair number of nodes - at least a half dozen, possibly > > more? Don would know, if you don't Barbara. > > Yes to multiple mountain sites. Eichler - sounds like somebody's house! I > > used to live in an Eichler in Palo Alto but never had a packet radio > > installed. Xerox PARC had one (fixed location) though. > > > > v > > > > > > On Wed, Mar 26, 2025 at 4:46?PM Barbara Denny via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> > >> Saw Vint's message after I started this one so adding Don Nielson to > this > >> thread too. > >> I would like to mention the PRnet in the Bay Area was larger than 2 > >> nodes. I am guessing you are referring to the diagram I sent out for > the > >> 1976 demo/test. That diagram shows the path the packets took to reach > SRI > >> from Rissotti's. I am trying to find out if the rest of the network > wasn't > >> deployed in 1976 but I haven't been able to track it down. 
If you look > at > >> the DTIC reference Greg Skinner provided previously ( > >> https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams > >> starting at page 244 that show more of the Bay Area PR network in that > >> report . It includes sites at Grizzly Peak, Mission Peak, Mt. San > Bruno, > >> etc. I am not sure I ever got a copy when I was at BBN so I don't feel > I > >> can comment if some of the node locations would change based on what > >> connectivity was needed. > >> BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps > >> near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize > the > >> Dish belongs to SRI and not Stanford. > >> barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via > >> Internet-history wrote: > >> > >> I don?t quite understand about using TCP (or variants) here. > >> > >> The PRNET consisted of the van and two repeaters to a gateway to the > >> ARPANET. The repeaters were physical layer relays I assume that did not > >> interpret the packets. I presume that the PRNET had a link layer that > did > >> some error control. The van to gateway is generating TCP datagrams over > its > >> ?link layer' protocol. (IP had not yet been created, or had it?.) I > presume > >> that the ARPANET relayed the TCP packets as Type 3 packets, i.e., > >> datagrams. The PRNET-Gateway would have looked like a host to the IMP it > >> was connected to. The IMPs had their own hop-by-hop error control over > the > >> physical lines. (There weren?t really layers in the IMPs. At least that > is > >> what Dave Walden told me. But we can assume that this error control was > >> sort of like a link layer.) > >> > >> The error characteristics of the Van-Gateway link layer were very > >> different and much more lossy than the Host-IMP or IMP-IMP lines. (Some > of > >> the IMP-IMP lines had very low error rates.) There was one Van-Gateway > link > >> and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been > >> meeting the requirements for a network layer as described in my email. > The > >> only major difference in error rate was the van-gateway. It would make > more > >> sense (and consistent with what is described in my email) to provide a > more > >> robust to enhance the van-gateway link protocol to be more robust to met > >> the error characteristics. > >> > >> Using TCP would be in some sense trying to use 'the tail to wag the > dog,? > >> i.e., using an end-to-end transport protocol to compensate for 1 link > that > >> was not meeting the requirements of the network layer. This would have > been > >> much less effective. It is easy to see that errors in a smaller scope > (the > >> link layer) should not be propagated to layers of a greater scope for > >> recovery. (Unless their frequency is very low as described previously, > >> which this isn?t.) This what the architecture model requires. > >> > >> Not sure what congestion control has to do with this. The TCP congestion > >> solution is pretty awful solution. The implicit notification makes it > >> predatory and assumes that lost messages are due to congestion, which > they > >> aren?t. (Is that the connection?) It works by causing congestion (some > >> congestion avoidance strategy!) which generates many more > retransmissions. > >> A scheme that minimizes congestion events and retransmissions would be > much > >> preferred. (And one existed at the time.) 
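Since the thread's subject is the TCP RTT estimator, it may help to sketch the two published estimators that decide when the retransmissions discussed above actually fire: the smoothed RTT of RFC 793, and the mean-plus-deviation form from Jacobson's 1988 work as standardized in RFC 6298. The sample RTT values below are invented purely for illustration:

    # Minimal sketch of the classic TCP retransmission-timeout estimators.
    # The RTT samples are invented for illustration only.

    def rto_rfc793(samples, alpha=0.875, beta=2.0):
        """RFC 793: SRTT = alpha*SRTT + (1 - alpha)*sample; RTO = beta*SRTT."""
        srtt = None
        for r in samples:
            srtt = r if srtt is None else alpha * srtt + (1 - alpha) * r
        return beta * srtt

    def rto_jacobson(samples, g=0.125, h=0.25):
        """Jacobson/Karels (1988), as in RFC 6298: track mean and mean deviation."""
        srtt = rttvar = None
        for r in samples:
            if srtt is None:
                srtt, rttvar = r, r / 2        # RFC 6298 initialization
            else:
                err = r - srtt
                srtt += g * err
                rttvar += h * (abs(err) - rttvar)
        return srtt + 4 * rttvar

    rtts = [0.30, 0.32, 0.31, 1.10, 0.33, 0.34]    # seconds, with one outlier
    print("RFC 793 RTO :", round(rto_rfc793(rtts), 3))
    print("RFC 6298 RTO:", round(rto_jacobson(rtts), 3))

On a path with one noisy hop, the deviation term widens the timeout after an outlier instead of relying on RFC 793's fixed multiplier.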
> >> > >> Take care, > >> John > >> > >>> On Mar 26, 2025, at 13:10, Greg Skinner > wrote: > >>> > >>> > >>> On Mar 24, 2025, at 5:53?PM, John Day wrote: > >>>> > >>>> I would go further and say that this is a general property of layers. > >> We tend to focus on the service provided by a layer, but the minimal > >> service the layer expects from supporting layers is just as important. > >>>> > >>>> The original concept of best-effort and end-to-end transport (circa > >> 1972) was that errors in the network layer were from congestion and rare > >> memory errors during relaying. Congestion research was already underway > and > >> the results were expected to keep the frequency of lost packets fairly > low. > >> Thus keeping retransmissions at the transport layer relatively low and > >> allowing the transport layer to be reasonably efficient. > >>>> > >>>> For those early networks, the link layer was something HDLC-like and > so > >> reliable. However, it was recognized that this did not imply that the > Link > >> layers had to be reliable, but have an error rate well below the error > rate > >> created by the network layer. Given that there would be n link layers > >> contributing to the additional error rate, where n is the diameter of > the > >> network, it is possible to estimate an upper bound that each link layer > >> must meet to keep the error rate at the network layer low enough for the > >> transport layer to be effective. > >>>> > >>>> There were soon examples of link layers that were datagram services > but > >> sufficiently reliable to meet those conditions, e.g., Ethernet. Later, > to > >> take up the packet radio topic, 802.11 is another good example, where > what > >> happens during the NAV (RTS, CTS, send data, get an Ack) is considered > >> ?atomic? and if the Ack is not received, it is assumed that the packet > was > >> not delivered. (As WiFi data rates of increased this has been modified.) > >> This seems to have the same property of providing a sufficiently low > error > >> rate to the network layer that transport remains effective. (Although I > >> have to admit I have never come across typical goodput measurements for > >> 802.11. They must exist, I just haven?t encountered them.) ;-) > >>>> > >>>> One fun thing to do with students and WiFi is to point out that the > >> original use of the NAV makes WiFi a stop-and-wait protocol. Most > textbooks > >> use stop-and-wait as a simple introductory protocol and show that under > >> most circumstances would be quite slow and inefficient. Then since they > use > >> WiFi everyday is it slow? No. Then why not? ;-) > >>>> > >>>> It leads to a nice principle of protocol design. > >>>> > >>>> Take care, > >>>> John > >>>> > >>> > >>> Looking at this from another direction, there are several specialized > >> versions of TCP, [1] Given the conditions experienced in the SF Bay Area > >> PRNET, I can see how if circumstances permitted, something that today we > >> might call ?TCP Menlo? or ?TCP Alpine? might have been created that > would > >> have addressed the lossy networks problem more directly. 
[2] > >>> > >>> --gregbo > >>> > >>> [1] https://en.wikipedia.org/wiki/TCP_congestion_control > >>> [2] > >> > https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ > >> > >> > >> > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > > > > > > -- > > Please send any postal/overnight deliveries to: > > Vint Cerf > > Google, LLC > > 1900 Reston Metro Plaza, 16th Floor > > Reston, VA 20190 > > +1 (571) 213 1346 <(571)%20213-1346> > > > > > > until further notice > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jeanjour at comcast.net Wed Mar 26 14:26:29 2025 From: jeanjour at comcast.net (John Day) Date: Wed, 26 Mar 2025 17:26:29 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> Message-ID: <66F8F99B-C244-499E-86C5-0B0D333BB757@comcast.net> > On Mar 26, 2025, at 17:17, Vint Cerf wrote: > > yes, the gateway was colocated with the Station (on the same computer).h I missed something. What is the Station? > The Station managed the Packet Radio network, maintained information about connectivity among the radio relays. PRNET was not a star network. That is what I was assuming. > Topology changes were tracked by the mobile nodes periodically reporting to the station which other Packet Radios they could reach. So a sort of centralized routing on the Station. An early ad hoc network. > Hosts on the PRNET nodes could communicate with each other and, through the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run TCP, So there were distinct machines acting as 'PRNET routers? and PRNET hosts. > that was running on the hosts like the LSI-11/23's or the Station or.... ;-) an LSI-11/23 wasn?t a lot of machine. ;-) We had a strip down Unix running on one the year before but as a terminal connected to our Unix on an 11/45 but it was running NCP. Thanks, John > > v > > > On Wed, Mar 26, 2025 at 5:08?PM John Day > wrote: >> And those nodes relayed among themselves as well as with the gateway? >> >> IOW, PRNET wasn?t a star network with the gateway as the center, like a WIFI access point. >> >> So there would have been TCP connections between PRNET nodes as well as TCP connections potentially relayed by other PRNET nodes through the gateway to ARPANET hosts. Right? >> >> Take care, >> John >> >> > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history > wrote: >> > >> > I think we had a fair number of nodes - at least a half dozen, possibly >> > more? Don would know, if you don't Barbara. >> > Yes to multiple mountain sites. Eichler - sounds like somebody's house! I >> > used to live in an Eichler in Palo Alto but never had a packet radio >> > installed. 
Xerox PARC had one (fixed location) though. >> > >> > v >> > >> > >> > On Wed, Mar 26, 2025 at 4:46?PM Barbara Denny via Internet-history < >> > internet-history at elists.isoc.org > wrote: >> > >> >> >> >> Saw Vint's message after I started this one so adding Don Nielson to this >> >> thread too. >> >> I would like to mention the PRnet in the Bay Area was larger than 2 >> >> nodes. I am guessing you are referring to the diagram I sent out for the >> >> 1976 demo/test. That diagram shows the path the packets took to reach SRI >> >> from Rissotti's. I am trying to find out if the rest of the network wasn't >> >> deployed in 1976 but I haven't been able to track it down. If you look at >> >> the DTIC reference Greg Skinner provided previously ( >> >> https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams >> >> starting at page 244 that show more of the Bay Area PR network in that >> >> report . It includes sites at Grizzly Peak, Mission Peak, Mt. San Bruno, >> >> etc. I am not sure I ever got a copy when I was at BBN so I don't feel I >> >> can comment if some of the node locations would change based on what >> >> connectivity was needed. >> >> BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps >> >> near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize the >> >> Dish belongs to SRI and not Stanford. >> >> barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via >> >> Internet-history > wrote: >> >> >> >> I don?t quite understand about using TCP (or variants) here. >> >> >> >> The PRNET consisted of the van and two repeaters to a gateway to the >> >> ARPANET. The repeaters were physical layer relays I assume that did not >> >> interpret the packets. I presume that the PRNET had a link layer that did >> >> some error control. The van to gateway is generating TCP datagrams over its >> >> ?link layer' protocol. (IP had not yet been created, or had it?.) I presume >> >> that the ARPANET relayed the TCP packets as Type 3 packets, i.e., >> >> datagrams. The PRNET-Gateway would have looked like a host to the IMP it >> >> was connected to. The IMPs had their own hop-by-hop error control over the >> >> physical lines. (There weren?t really layers in the IMPs. At least that is >> >> what Dave Walden told me. But we can assume that this error control was >> >> sort of like a link layer.) >> >> >> >> The error characteristics of the Van-Gateway link layer were very >> >> different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of >> >> the IMP-IMP lines had very low error rates.) There was one Van-Gateway link >> >> and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been >> >> meeting the requirements for a network layer as described in my email. The >> >> only major difference in error rate was the van-gateway. It would make more >> >> sense (and consistent with what is described in my email) to provide a more >> >> robust to enhance the van-gateway link protocol to be more robust to met >> >> the error characteristics. >> >> >> >> Using TCP would be in some sense trying to use 'the tail to wag the dog,? >> >> i.e., using an end-to-end transport protocol to compensate for 1 link that >> >> was not meeting the requirements of the network layer. This would have been >> >> much less effective. It is easy to see that errors in a smaller scope (the >> >> link layer) should not be propagated to layers of a greater scope for >> >> recovery. 
(Unless their frequency is very low as described previously, >> >> which this isn?t.) This what the architecture model requires. >> >> >> >> Not sure what congestion control has to do with this. The TCP congestion >> >> solution is pretty awful solution. The implicit notification makes it >> >> predatory and assumes that lost messages are due to congestion, which they >> >> aren?t. (Is that the connection?) It works by causing congestion (some >> >> congestion avoidance strategy!) which generates many more retransmissions. >> >> A scheme that minimizes congestion events and retransmissions would be much >> >> preferred. (And one existed at the time.) >> >> >> >> Take care, >> >> John >> >> >> >>> On Mar 26, 2025, at 13:10, Greg Skinner > wrote: >> >>> >> >>> >> >>> On Mar 24, 2025, at 5:53?PM, John Day > wrote: >> >>>> >> >>>> I would go further and say that this is a general property of layers. >> >> We tend to focus on the service provided by a layer, but the minimal >> >> service the layer expects from supporting layers is just as important. >> >>>> >> >>>> The original concept of best-effort and end-to-end transport (circa >> >> 1972) was that errors in the network layer were from congestion and rare >> >> memory errors during relaying. Congestion research was already underway and >> >> the results were expected to keep the frequency of lost packets fairly low. >> >> Thus keeping retransmissions at the transport layer relatively low and >> >> allowing the transport layer to be reasonably efficient. >> >>>> >> >>>> For those early networks, the link layer was something HDLC-like and so >> >> reliable. However, it was recognized that this did not imply that the Link >> >> layers had to be reliable, but have an error rate well below the error rate >> >> created by the network layer. Given that there would be n link layers >> >> contributing to the additional error rate, where n is the diameter of the >> >> network, it is possible to estimate an upper bound that each link layer >> >> must meet to keep the error rate at the network layer low enough for the >> >> transport layer to be effective. >> >>>> >> >>>> There were soon examples of link layers that were datagram services but >> >> sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to >> >> take up the packet radio topic, 802.11 is another good example, where what >> >> happens during the NAV (RTS, CTS, send data, get an Ack) is considered >> >> ?atomic? and if the Ack is not received, it is assumed that the packet was >> >> not delivered. (As WiFi data rates of increased this has been modified.) >> >> This seems to have the same property of providing a sufficiently low error >> >> rate to the network layer that transport remains effective. (Although I >> >> have to admit I have never come across typical goodput measurements for >> >> 802.11. They must exist, I just haven?t encountered them.) ;-) >> >>>> >> >>>> One fun thing to do with students and WiFi is to point out that the >> >> original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks >> >> use stop-and-wait as a simple introductory protocol and show that under >> >> most circumstances would be quite slow and inefficient. Then since they use >> >> WiFi everyday is it slow? No. Then why not? ;-) >> >>>> >> >>>> It leads to a nice principle of protocol design. 
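The "upper bound that each link layer must meet" argument quoted above reduces to a short calculation: if transport stays efficient as long as end-to-end loss is below some budget, and a path crosses n links, each link gets roughly budget/n. The 1% budget and 10-hop diameter below are illustrative assumptions, not figures from any of the networks discussed here:

    # If transport stays efficient while end-to-end loss <= p_budget, and a
    # path crosses n independent links, each link may lose at most about
    # p_budget / n (exactly 1 - (1 - p_budget)**(1/n)).
    # The budget and diameter are illustrative assumptions.
    p_budget = 0.01     # assumed tolerable end-to-end loss rate (1%)
    n = 10              # assumed network diameter in hops

    p_link_exact = 1 - (1 - p_budget) ** (1 / n)
    print(f"per-link bound (exact) : {p_link_exact:.5f}")
    print(f"per-link bound (approx): {p_budget / n:.5f}")

By the same arithmetic, a single hop losing several percent of its frames consumes the entire budget on its own, which is the case for handling a very lossy radio link at the link layer rather than end to end.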
>> >>>> >> >>>> Take care, >> >>>> John >> >>>> >> >>> >> >>> Looking at this from another direction, there are several specialized >> >> versions of TCP, [1] Given the conditions experienced in the SF Bay Area >> >> PRNET, I can see how if circumstances permitted, something that today we >> >> might call ?TCP Menlo? or ?TCP Alpine? might have been created that would >> >> have addressed the lossy networks problem more directly. [2] >> >>> >> >>> --gregbo >> >>> >> >>> [1] https://en.wikipedia.org/wiki/TCP_congestion_control >> >>> [2] >> >> https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ >> >> >> >> >> >> >> >> -- >> >> Internet-history mailing list >> >> Internet-history at elists.isoc.org >> >> https://elists.isoc.org/mailman/listinfo/internet-history >> >> >> > >> > >> > -- >> > Please send any postal/overnight deliveries to: >> > Vint Cerf >> > Google, LLC >> > 1900 Reston Metro Plaza, 16th Floor >> > Reston, VA 20190 >> > +1 (571) 213 1346 >> > >> > >> > until further notice >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > > From vint at google.com Wed Mar 26 16:07:19 2025 From: vint at google.com (Vint Cerf) Date: Wed, 26 Mar 2025 19:07:19 -0400 Subject: [ih] TCP RTT Estimator In-Reply-To: <66F8F99B-C244-499E-86C5-0B0D333BB757@comcast.net> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> <66F8F99B-C244-499E-86C5-0B0D333BB757@comcast.net> Message-ID: Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. v On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: > > > On Mar 26, 2025, at 17:17, Vint Cerf wrote: > > yes, the gateway was colocated with the Station (on the same computer).h > > > I missed something. What is the Station? > > The Station managed the Packet Radio network, maintained information about > connectivity among the radio relays. PRNET was not a star network. > > > That is what I was assuming. > > Topology changes were tracked by the mobile nodes periodically reporting > to the station which other Packet Radios they could reach. > > > So a sort of centralized routing on the Station. An early ad hoc network. > > Hosts on the PRNET nodes could communicate with each other and, through > the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run > TCP, > > > So there were distinct machines acting as 'PRNET routers? and PRNET hosts. > > that was running on the hosts like the LSI-11/23's or the Station or.... > > > ;-) an LSI-11/23 wasn?t a lot of machine. ;-) We had a strip down Unix > running on one the year before but as a terminal connected to our Unix on > an 11/45 but it was running NCP. > > Thanks, > John > > > v > > > On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: > >> And those nodes relayed among themselves as well as with the gateway? 
>> >> IOW, PRNET wasn?t a star network with the gateway as the center, like a >> WIFI access point. >> >> So there would have been TCP connections between PRNET nodes as well as >> TCP connections potentially relayed by other PRNET nodes through the >> gateway to ARPANET hosts. Right? >> >> Take care, >> John >> >> > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < >> internet-history at elists.isoc.org> wrote: >> > >> > I think we had a fair number of nodes - at least a half dozen, possibly >> > more? Don would know, if you don't Barbara. >> > Yes to multiple mountain sites. Eichler - sounds like somebody's house! >> I >> > used to live in an Eichler in Palo Alto but never had a packet radio >> > installed. Xerox PARC had one (fixed location) though. >> > >> > v >> > >> > >> > On Wed, Mar 26, 2025 at 4:46?PM Barbara Denny via Internet-history < >> > internet-history at elists.isoc.org> wrote: >> > >> >> >> >> Saw Vint's message after I started this one so adding Don Nielson to >> this >> >> thread too. >> >> I would like to mention the PRnet in the Bay Area was larger than 2 >> >> nodes. I am guessing you are referring to the diagram I sent out for >> the >> >> 1976 demo/test. That diagram shows the path the packets took to reach >> SRI >> >> from Rissotti's. I am trying to find out if the rest of the network >> wasn't >> >> deployed in 1976 but I haven't been able to track it down. If you >> look at >> >> the DTIC reference Greg Skinner provided previously ( >> >> https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams >> >> starting at page 244 that show more of the Bay Area PR network in that >> >> report . It includes sites at Grizzly Peak, Mission Peak, Mt. San >> Bruno, >> >> etc. I am not sure I ever got a copy when I was at BBN so I don't >> feel I >> >> can comment if some of the node locations would change based on what >> >> connectivity was needed. >> >> BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps >> >> near/at Stanford's Dish (Don?/Vint?)? I think most people don't >> realize the >> >> Dish belongs to SRI and not Stanford. >> >> barbara On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day >> via >> >> Internet-history wrote: >> >> >> >> I don?t quite understand about using TCP (or variants) here. >> >> >> >> The PRNET consisted of the van and two repeaters to a gateway to the >> >> ARPANET. The repeaters were physical layer relays I assume that did not >> >> interpret the packets. I presume that the PRNET had a link layer that >> did >> >> some error control. The van to gateway is generating TCP datagrams >> over its >> >> ?link layer' protocol. (IP had not yet been created, or had it?.) I >> presume >> >> that the ARPANET relayed the TCP packets as Type 3 packets, i.e., >> >> datagrams. The PRNET-Gateway would have looked like a host to the IMP >> it >> >> was connected to. The IMPs had their own hop-by-hop error control >> over the >> >> physical lines. (There weren?t really layers in the IMPs. At least >> that is >> >> what Dave Walden told me. But we can assume that this error control was >> >> sort of like a link layer.) >> >> >> >> The error characteristics of the Van-Gateway link layer were very >> >> different and much more lossy than the Host-IMP or IMP-IMP lines. >> (Some of >> >> the IMP-IMP lines had very low error rates.) There was one Van-Gateway >> link >> >> and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been >> >> meeting the requirements for a network layer as described in my email. 
>> The >> >> only major difference in error rate was the van-gateway. It would make >> more >> >> sense (and consistent with what is described in my email) to provide a >> more >> >> robust to enhance the van-gateway link protocol to be more robust to >> met >> >> the error characteristics. >> >> >> >> Using TCP would be in some sense trying to use 'the tail to wag the >> dog,? >> >> i.e., using an end-to-end transport protocol to compensate for 1 link >> that >> >> was not meeting the requirements of the network layer. This would have >> been >> >> much less effective. It is easy to see that errors in a smaller scope >> (the >> >> link layer) should not be propagated to layers of a greater scope for >> >> recovery. (Unless their frequency is very low as described previously, >> >> which this isn?t.) This what the architecture model requires. >> >> >> >> Not sure what congestion control has to do with this. The TCP >> congestion >> >> solution is pretty awful solution. The implicit notification makes it >> >> predatory and assumes that lost messages are due to congestion, which >> they >> >> aren?t. (Is that the connection?) It works by causing congestion (some >> >> congestion avoidance strategy!) which generates many more >> retransmissions. >> >> A scheme that minimizes congestion events and retransmissions would be >> much >> >> preferred. (And one existed at the time.) >> >> >> >> Take care, >> >> John >> >> >> >>> On Mar 26, 2025, at 13:10, Greg Skinner >> wrote: >> >>> >> >>> >> >>> On Mar 24, 2025, at 5:53?PM, John Day wrote: >> >>>> >> >>>> I would go further and say that this is a general property of layers. >> >> We tend to focus on the service provided by a layer, but the minimal >> >> service the layer expects from supporting layers is just as important. >> >>>> >> >>>> The original concept of best-effort and end-to-end transport (circa >> >> 1972) was that errors in the network layer were from congestion and >> rare >> >> memory errors during relaying. Congestion research was already >> underway and >> >> the results were expected to keep the frequency of lost packets fairly >> low. >> >> Thus keeping retransmissions at the transport layer relatively low and >> >> allowing the transport layer to be reasonably efficient. >> >>>> >> >>>> For those early networks, the link layer was something HDLC-like and >> so >> >> reliable. However, it was recognized that this did not imply that the >> Link >> >> layers had to be reliable, but have an error rate well below the error >> rate >> >> created by the network layer. Given that there would be n link layers >> >> contributing to the additional error rate, where n is the diameter of >> the >> >> network, it is possible to estimate an upper bound that each link layer >> >> must meet to keep the error rate at the network layer low enough for >> the >> >> transport layer to be effective. >> >>>> >> >>>> There were soon examples of link layers that were datagram services >> but >> >> sufficiently reliable to meet those conditions, e.g., Ethernet. Later, >> to >> >> take up the packet radio topic, 802.11 is another good example, where >> what >> >> happens during the NAV (RTS, CTS, send data, get an Ack) is considered >> >> ?atomic? and if the Ack is not received, it is assumed that the packet >> was >> >> not delivered. (As WiFi data rates of increased this has been >> modified.) >> >> This seems to have the same property of providing a sufficiently low >> error >> >> rate to the network layer that transport remains effective. 
(Although I >> >> have to admit I have never come across typical goodput measurements for >> >> 802.11. They must exist, I just haven?t encountered them.) ;-) >> >>>> >> >>>> One fun thing to do with students and WiFi is to point out that the >> >> original use of the NAV makes WiFi a stop-and-wait protocol. Most >> textbooks >> >> use stop-and-wait as a simple introductory protocol and show that under >> >> most circumstances would be quite slow and inefficient. Then since >> they use >> >> WiFi everyday is it slow? No. Then why not? ;-) >> >>>> >> >>>> It leads to a nice principle of protocol design. >> >>>> >> >>>> Take care, >> >>>> John >> >>>> >> >>> >> >>> Looking at this from another direction, there are several specialized >> >> versions of TCP, [1] Given the conditions experienced in the SF Bay >> Area >> >> PRNET, I can see how if circumstances permitted, something that today >> we >> >> might call ?TCP Menlo? or ?TCP Alpine? might have been created that >> would >> >> have addressed the lossy networks problem more directly. [2] >> >>> >> >>> --gregbo >> >>> >> >>> [1] https://en.wikipedia.org/wiki/TCP_congestion_control >> >>> [2] >> >> >> https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/ >> >> >> >> >> >> >> >> -- >> >> Internet-history mailing list >> >> Internet-history at elists.isoc.org >> >> https://elists.isoc.org/mailman/listinfo/internet-history >> >> >> > >> > >> > -- >> > Please send any postal/overnight deliveries to: >> > Vint Cerf >> > Google, LLC >> > 1900 Reston Metro Plaza, 16th Floor >> > Reston, VA 20190 >> > +1 (571) 213 1346 <(571)%20213-1346> >> > >> > >> > until further notice >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> >> > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 <(571)%20213-1346> > > > until further notice > > > > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From b_a_denny at yahoo.com Wed Mar 26 17:30:06 2025 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 27 Mar 2025 00:30:06 +0000 (UTC) Subject: [ih] TCP RTT Estimator In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> <3AED089A-517F-48C6-AFE8-90A1F455DB19@comcast.net> <66F8F99B-C244-499E-86C5-0B0D333BB757@comcast.net> Message-ID: <454122141.2387791.1743035406231@mail.yahoo.com> Having trouble sending to the email list again so I shortened the original thread. Hope no duplicates. ****I might be repeating but I will add a few comments. Hope my memory is pretty good. Packet Radio nodes could act as sink, source, or repeater/relay for data.? They could also have an attached device (like a station, end user host, tiu, etc).? I think the packet radio addressing space was broken up so you could determine the type of entity by the ID (need to double check this). 
The station provided routes to packet radios when the packet radio didn't know how to reach a destination. Any packet radio could be mobile.? I don't remember if there was a limit initially on how many neighbors a packet radio could have.? Packet radio nodes did not use IP related protocols but could handle IP traffic generated by other entities. Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, VPR, and also? the LPR which was actually done under a follow-on? program called SURAN) .? There were also multiple versions of the radio software known as CAPX where X was a number.? ?I think the earliest version I encountered was CAP5 so I have no knowledge of the protocol implementation used in the simulation Greg Skinner presented in his email message. In the early 1980s packet radio was implementing multi-station so you could have more than one station in a packet radio network. I think this was known as CAP 6.2 (6.4???).? There was also a stationless design being discussed at the close of the packet radio program (CAP7). barbara On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf wrote: Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. v On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: On Mar 26, 2025, at 17:17, Vint Cerf wrote: yes, the gateway was colocated with the Station (on the same computer).h I missed something. What is the Station? The Station managed the Packet Radio network, maintained information about connectivity among the radio relays. PRNET was not a star network. That is what I was assuming. Topology changes were tracked by the mobile nodes periodically reporting to the station which other Packet Radios they could reach. So a sort of centralized routing on the Station. An early ad hoc network. Hosts on the PRNET nodes could communicate with each other and, through the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run TCP, So there were distinct machines acting as 'PRNET routers? and PRNET hosts. that was running on the hosts like the LSI-11/23's or the Station or.... ;-) an LSI-11/23 wasn?t a lot of machine. ;-) ?We had a strip down Unix running on one the year before but as a terminal connected to our Unix on an 11/45 but it was running NCP. Thanks,John v On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: And those nodes relayed among themselves as well as with the gateway? IOW, PRNET wasn?t a star network with the gateway as the center, like a WIFI access point. So there would have been TCP connections between PRNET nodes as well as TCP connections potentially relayed by other PRNET nodes through the gateway to ARPANET hosts.? Right? Take care, John From j at shoch.com Wed Mar 26 21:24:27 2025 From: j at shoch.com (John Shoch) Date: Wed, 26 Mar 2025 22:24:27 -0600 Subject: [ih] Internet-history Digest, Vol 64, Issue 30 In-Reply-To: References: Message-ID: >> > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < internet-history at elists.isoc.org > wrote: >> > I think we had a fair number of nodes - at least a half dozen, possibly >> > more? Don would know, if you don't Barbara. >> > Yes to multiple mountain sites. Eichler - sounds like somebody's house! I >> > used to live in an Eichler in Palo Alto but never had a packet radio >> > installed. Xerox PARC had one (fixed location) though. Vint is understating his generosity and support: --Vint (at Arpa) and Don Nielson and team (at SRI) supported us with TWO Packet Radio Units in Palo Alto. 
--They were stationary installations, at the main PARC building and another about a mile away. --The PRUs had an 1822 interface, and we had built an 1822 interface for the Alto (to connect to an Imp). --So we built 2 more interfaces, and had an Alto at each PRU -- which ran our standard internet gateway, and could also connect to an Ethernet, and then on to the rest of our internet. --We did not modify the PRU code. A network driver was written to encapsulate internet packets for transmission through the PRNet, so it became a transit network between two Ethernets (and packets coming off the PRNet could be routed on through other gateways to machines elsewhere in the country). --The PRNet and an Ethernet differed in throughput by maybe 2 decimal orders of magnitude -- so it taught us all a lot about flow and congestion control, retransmission algorithms, lossy sub-neworks, delayed duplicates, intra-network fragmentation, and more.. --It was a great experiment. Could not have done it without Vint, Don, et al. (I helped organize the project, but real kudos go to Larry Stewart, who made it all happen!) John PS: I sometimes give a talk that includes a picture of a rack of equipment holding the PRU and the Alto gateway -- and then quip, "If you squint real hard, and apply pressure from 20-30 years of Moore's law, out pops a WiFi access point!" From gregskinner0 at icloud.com Wed Mar 26 22:22:46 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Wed, 26 Mar 2025 22:22:46 -0700 Subject: [ih] TCP RTT Estimator In-Reply-To: <665693090.2324601.1743021974531@mail.yahoo.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> Message-ID: On Mar 26, 2025, at 1:46?PM, Barbara Denny via Internet-history wrote: > > Saw Vint's message after I started this one so adding Don Nielson to this thread too. > I would like to mention the PRnet in the Bay Area was larger than 2 nodes. I am guessing you are referring to the diagram I sent out for the 1976 demo/test. That diagram shows the path the packets took to reach SRI from Rissotti's. I am trying to find out if the rest of the network wasn't deployed in 1976 but I haven't been able to track it down. If you look at the DTIC reference Greg Skinner provided previously (https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are diagrams starting at page 244 that show more of the Bay Area PR network in that report . It includes sites at Grizzly Peak, Mission Peak, Mt. San Bruno, etc. I am not sure I ever got a copy when I was at BBN so I don't feel I can comment if some of the node locations would change based on what connectivity was needed. > BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize the Dish belongs to SRI and not Stanford. > barbara A map on the ''40th anniversary of the Internet page'' shows the Eichler site as 6 km (roughly) south of SRI. According to Google Maps, the Stanford Dish is about 3.27 miles (roughly) south of SRI HQ. BTW, Jim Mathis? TIU implementation is available on Noel Chiappa?s ana-3.lcs.mit.edu site. 
[1] The documentation is dated April 1979. --gregbo [1] http://ana-3.lcs.mit.edu/~jnc/tech/mos/ From vint at google.com Thu Mar 27 02:30:23 2025 From: vint at google.com (Vint Cerf) Date: Thu, 27 Mar 2025 05:30:23 -0400 Subject: [ih] Comments on Packet Radio In-Reply-To: <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net> <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> Message-ID: I was not sure whether Don's note got to the internet-history list, so apologies if this is a duplicate. I went back and re-read the long paper on Packet Radio in Proceedings of the IEEE Special Issue published November 1978. Don is correct that there was a reliable Station-PRU protocol (called SPP) but I believe this was only used for Station/PRU communication. Not all traffic was carried that way and this, in part, motivated the development of the end/end TCP/IP protocol. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1455409 vint On Thu, Mar 27, 2025 at 2:46 AM Don Nielson wrote: > Sorry, didn't pay attention to limited addressees below. > Also a few typos corrected below. Don > > > -------- Forwarded Message -------- > Subject: Re: [ih] TCP RTT Estimator > Date: Wed, 26 Mar 2025 23:36:21 -0700 > From: Don Nielson > To: Barbara Denny , > Internet-history > > > Hi, All, > The threads here are a bit random so I think I'll just try to give > a few comments that hopefully will answer much of what is at issue. > > 1. The Packet Radio Network (PRNET) was, from the outset, to be a self- > forming network with a central controlling node called a station, but with > the ongoing potential of stationless operation. That station had > an ARPANET interface for maintaining its software. The PRNET was > dynamic in the sense of self-restructuring with the addition or loss of > nodes. As early as 1974-5 its interconnection to the ARPANET was planned. > > 2. The packet radio units (PRUs) were at once a network repeater > and an entrance node to the network. Some were placed at > promontories for area coverage and some were sited at user sites > as nodes for network access. They were half digital/half radio and > sophisticated for their time. For example, any PRU could be > software-maintained (debugged) remotely. This obviously required > reliable PRNET protocols, which I think we called SPP and NCP; they rode atop a > channel access layer that resided only in the PRUs. That PRU layer > faced all the early issues of contention, routing, and efficiency, and > the PRU radio section was designed as best it could be to deal with multipath, etc. > > 3. The PRUs required interfaces to the terminals/hosts to which > they were connected. (Exceptions to that were some traffic > generators built for early testing and some IMP interfaces to the PDP-11 > station computer.) SRI built the terminal interface units and it was > in those that TCP was eventually placed. > > 4. SRI was testing PRNET configurations in 1974-5 and in doing so had > a number of PRUs (and one station computer) available. By the end > of 1975 we had at least a half dozen in use and probably more in > backup. By the end of 1976 about 14 were on site. > > 5. Before mentioning TCP it needs to be said that PRNET intranet > protocols were end-to-end reliable, handling all the problems of > flow control, duplicate detection, sequencing, and retransmission. > > 6. TCP implementation was anticipated in 1975, and preparations were made for > a station gateway that arrived in early 1976.
TCP for the SRI > TIUs was, according to one report, based on Stanford Tech Note 68 > by Vint dated Nov 1975. As Vint said, Jim Mathis led that implementation. > In early 1975 the BBN-provided gateway from Ginny Strazisar was > first tested without PRUs and early problems were resolved. > > 7. So, with the gateway operating, it was time to take TCP to the field, > and after some brief testing it was decided to have a little celebration > in that regard. Ron Kunzelman of SRI suggested a nice, accessible spot > for the SRI van that was at least one PRNET hop from the station/gateway: > Rossotti's. (I don't recall, nor will anyone ever know, whether other > PRNET repeaters that day were passing this traffic. Given the absence > of other PRNET traffic, it would have been improbable.) > > Several SRI participants were there, one Army visitor, and I took > the pictures. Please recall that the PRNET protocol was reliable, > so testing TCP exclusively on it wouldn't have made sense. > While the PRNET could halt under environmental issues, that didn't mean > it was lossy. While the demo at Rossotti's was not mobile, we had > countless mobile demos of numeric patterns in which transmission > often would be interrupted, but we never saw errors. We would even > disable the PRU radio unit to halt transmission to show no errors. > > 8. TCP reliability was, now that I think about it, at that point mainly a > test of the > new gateway, and possibly of whether the ARPANET routing was somehow lossy. > If you saw the very lengthy weekly report entered manually from Rossotti's > you would see how well it all worked end-to-end. > > If I haven't bent your ears enough, I could try and answer anything the > above doesn't mention, or correct errors in my memory. I did look back at some > of the packet radio notes for the dates and numbers. Don > > > > > > On 3/26/25 5:30 PM, Barbara Denny wrote: > > Having trouble sending to the email list again so I shortened the original > thread. Hope no duplicates. > > **** > I might be repeating but I will add a few comments. Hope my memory is > pretty good. > > Packet Radio nodes could act as sink, source, or repeater/relay for data. > They could also have an attached device (like a station, end user host, > tiu, etc). I think the packet radio addressing space was broken up so you > could determine the type of entity by the ID (need to double check this). > The station provided routes to packet radios when the packet radio didn't > know how to reach a destination. Any packet radio could be mobile. I don't > remember if there was a limit initially on how many neighbors a packet > radio could have. Packet radio nodes did not use IP related protocols but > could handle IP traffic generated by other entities. > > Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, > VPR, and also the LPR which was actually done under a follow-on program > called SURAN) . There were also multiple versions of the radio software > known as CAPX where X was a number. I think the earliest version I > encountered was CAP5 so I have no knowledge of the protocol implementation > used in the simulation Greg Skinner presented in his email message. > > In the early 1980s packet radio was implementing multi-station so you > could have more than one station in a packet radio network. I think this > was known as CAP 6.2 (6.4???). There was also a stationless design being > discussed at the close of the packet radio program (CAP7).
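Several of the messages above describe the same division of labor: packet radios periodically report which neighbors they can hear, and the Station hands back a route when a PRU does not know how to reach a destination. The sketch below is only an illustration of that general scheme (a neighbor table plus a fewest-hops search); it is not the CAP station code, and the node names are invented:

    # Illustrative sketch of station-style centralized routing: packet radios
    # report their neighbors, the station answers route queries with a hop list.
    # Node names and topology are invented; this is not the CAP implementation.
    from collections import deque

    class Station:
        def __init__(self):
            self.neighbors = {}          # node id -> set of nodes it can hear

        def report(self, node, heard):
            """A packet radio periodically reports which PRUs it can hear."""
            self.neighbors[node] = set(heard)

        def route(self, src, dst):
            """Breadth-first search over reported connectivity (fewest hops)."""
            frontier, seen = deque([[src]]), {src}
            while frontier:
                path = frontier.popleft()
                if path[-1] == dst:
                    return path
                for nxt in self.neighbors.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(path + [nxt])
            return None                  # no known route yet

    station = Station()
    station.report("van", ["mtn-repeater"])
    station.report("mtn-repeater", ["van", "gateway"])
    station.report("gateway", ["mtn-repeater"])
    print(station.route("van", "gateway"))   # ['van', 'mtn-repeater', 'gateway']

A stationless design, as mentioned for CAP7, would amount to distributing this computation into the PRUs themselves.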
> > barbara > On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf > wrote: > > > Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. > > v > > > On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: > > > > On Mar 26, 2025, at 17:17, Vint Cerf wrote: > > yes, the gateway was colocated with the Station (on the same computer).h > > > I missed something. What is the Station? > > The Station managed the Packet Radio network, maintained information about > connectivity among the radio relays. PRNET was not a star network. > > > That is what I was assuming. > > Topology changes were tracked by the mobile nodes periodically reporting > to the station which other Packet Radios they could reach. > > > So a sort of centralized routing on the Station. An early ad hoc network. > > Hosts on the PRNET nodes could communicate with each other and, through > the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run > TCP, > > > So there were distinct machines acting as 'PRNET routers? and PRNET hosts. > > that was running on the hosts like the LSI-11/23's or the Station or.... > > > ;-) an LSI-11/23 wasn?t a lot of machine. ;-) We had a strip down Unix > running on one the year before but as a terminal connected to our Unix on > an 11/45 but it was running NCP. > > Thanks, > John > > > v > > > On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: > > And those nodes relayed among themselves as well as with the gateway? > > IOW, PRNET wasn?t a star network with the gateway as the center, like a > WIFI access point. > > So there would have been TCP connections between PRNET nodes as well as > TCP connections potentially relayed by other PRNET nodes through the > gateway to ARPANET hosts. Right? > > Take care, > John > > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From vint at google.com Thu Mar 27 02:52:45 2025 From: vint at google.com (Vint Cerf) Date: Thu, 27 Mar 2025 05:52:45 -0400 Subject: [ih] Internet-history Digest, Vol 64, Issue 30 In-Reply-To: References: Message-ID: love the quip!!!! v On Thu, Mar 27, 2025 at 12:24?AM John Shoch via Internet-history < internet-history at elists.isoc.org> wrote: > >> > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < > internet-history at elists.isoc.org >> > wrote: > > >> > I think we had a fair number of nodes - at least a half dozen, > possibly > >> > more? Don would know, if you don't Barbara. > >> > Yes to multiple mountain sites. Eichler - sounds like somebody's > house! I > >> > used to live in an Eichler in Palo Alto but never had a packet radio > >> > installed. Xerox PARC had one (fixed location) though. > > Vint is understating his generosity and support: > > --Vint (at Arpa) and Don Nielson and team (at SRI) supported us with TWO > Packet Radio Units in Palo Alto. > --They were stationary installations, at the main PARC building and another > about a mile away. > --The PRUs had an 1822 interface, and we had built an 1822 interface for > the Alto (to connect to an Imp). > --So we built 2 more interfaces, and had an Alto at each PRU -- which ran > our standard internet gateway, and could also connect to an Ethernet, and > then on to the rest of our internet. > --We did not modify the PRU code. 
A network driver was written to > encapsulate internet packets for transmission through the PRNet, so it > became a transit network between two Ethernets (and packets coming off the > PRNet could be routed on through other gateways to machines elsewhere in > the country). > --The PRNet and an Ethernet differed in throughput by maybe 2 decimal > orders of magnitude -- so it taught us all a lot about flow and congestion > control, retransmission algorithms, lossy sub-neworks, delayed duplicates, > intra-network fragmentation, and more.. > --It was a great experiment. > > Could not have done it without Vint, Don, et al. > (I helped organize the project, but real kudos go to Larry Stewart, who > made it all happen!) > > John > > PS: I sometimes give a talk that includes a picture of a rack of equipment > holding the PRU and the Alto gateway -- and then quip, "If you squint real > hard, and apply pressure from 20-30 years of Moore's law, out pops a WiFi > access point!" > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jeanjour at comcast.net Thu Mar 27 04:18:57 2025 From: jeanjour at comcast.net (John Day) Date: Thu, 27 Mar 2025 07:18:57 -0400 Subject: [ih] Internet-history Digest, Vol 64, Issue 30 In-Reply-To: References: Message-ID: <415AF486-7699-4228-8537-D574D86F7834@comcast.net> It is a great quip! There are several things that people did early on that were pushing the hardware to the limits and sometimes beyond and we had to wait for the hardware to catch up. ;-) I have had young profs ask if I wasn?t blown away by some event in the 90s and having to say, that I was just glad to see us getting back to where we were in the 70s. ;-) It doesn?t make them happy. > On Mar 27, 2025, at 05:52, Vint Cerf via Internet-history wrote: > > love the quip!!!! > > v > > > On Thu, Mar 27, 2025 at 12:24?AM John Shoch via Internet-history < > internet-history at elists.isoc.org> wrote: > >>>>> On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < >> internet-history at elists.isoc.org >>> >> wrote: >> >>>>> I think we had a fair number of nodes - at least a half dozen, >> possibly >>>>> more? Don would know, if you don't Barbara. >>>>> Yes to multiple mountain sites. Eichler - sounds like somebody's >> house! I >>>>> used to live in an Eichler in Palo Alto but never had a packet radio >>>>> installed. Xerox PARC had one (fixed location) though. >> >> Vint is understating his generosity and support: >> >> --Vint (at Arpa) and Don Nielson and team (at SRI) supported us with TWO >> Packet Radio Units in Palo Alto. >> --They were stationary installations, at the main PARC building and another >> about a mile away. >> --The PRUs had an 1822 interface, and we had built an 1822 interface for >> the Alto (to connect to an Imp). >> --So we built 2 more interfaces, and had an Alto at each PRU -- which ran >> our standard internet gateway, and could also connect to an Ethernet, and >> then on to the rest of our internet. >> --We did not modify the PRU code. A network driver was written to >> encapsulate internet packets for transmission through the PRNet, so it >> became a transit network between two Ethernets (and packets coming off the >> PRNet could be routed on through other gateways to machines elsewhere in >> the country). 
>> --The PRNet and an Ethernet differed in throughput by maybe 2 decimal >> orders of magnitude -- so it taught us all a lot about flow and congestion >> control, retransmission algorithms, lossy sub-neworks, delayed duplicates, >> intra-network fragmentation, and more.. >> --It was a great experiment. >> >> Could not have done it without Vint, Don, et al. >> (I helped organize the project, but real kudos go to Larry Stewart, who >> made it all happen!) >> >> John >> >> PS: I sometimes give a talk that includes a picture of a rack of equipment >> holding the PRU and the Alto gateway -- and then quip, "If you squint real >> hard, and apply pressure from 20-30 years of Moore's law, out pops a WiFi >> access point!" >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vgcerf at gmail.com Thu Mar 27 04:50:03 2025 From: vgcerf at gmail.com (vinton cerf) Date: Thu, 27 Mar 2025 07:50:03 -0400 Subject: [ih] weight of the internet Message-ID: funny! https://www.wired.com/story/weight-of-the-internet/#:~:text=At%20room%20temperature%2C%20the%20entirety,53%20quadrillionths%20of%20a%20gram. hope this isn't paywalled v From steve at shinkuro.com Thu Mar 27 05:10:29 2025 From: steve at shinkuro.com (Steve Crocker) Date: Thu, 27 Mar 2025 08:10:29 -0400 Subject: [ih] weight of the internet In-Reply-To: References: Message-ID: Reminiscent of Norm Augustine's classic essay examining the trends for fighter aircraft. He observed that over many years the cost kept going up but the weight remained about the same. He pondered how it would be possible to continue this trend, which would obviously require finding something increasingly expensive but also weightless. Software! Estimating the weight of the Internet might be amusing on its own, but it gets more interesting when combined with the value of the Internet. Steve On Thu, Mar 27, 2025 at 7:50?AM vinton cerf via Internet-history < internet-history at elists.isoc.org> wrote: > funny! > > > https://www.wired.com/story/weight-of-the-internet/#:~:text=At%20room%20temperature%2C%20the%20entirety,53%20quadrillionths%20of%20a%20gram > . > > hope this isn't paywalled > v > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Sent by a Verified sender From stewart at serissa.com Thu Mar 27 06:15:45 2025 From: stewart at serissa.com (Lawrence Stewart) Date: Thu, 27 Mar 2025 09:15:45 -0400 Subject: [ih] Internet-history Digest, Vol 64, Issue 31 In-Reply-To: References: Message-ID: <31C2BB9D-9D8B-4AD3-9D87-51159378F61C@serissa.com> I?ll add a little bit more to John Shoch?s story about the PARC involvement in PRNet. Credit to Dave Boggs for teaching me the correct way to design Alto hardware, which explains why the Alto-1822 is as small as it is. There?s a photo in ien78. Credit to Hal Murray, whose Mesa Gateway software was used for the PRNet work, rather than the BCPL version. There?s a technical report, see https://www.rfc-editor.org/ien/ien78.pdf > 2. 
Re: Internet-history Digest, Vol 64, Issue 30 (John Shoch) > Message: 2 > Date: Wed, 26 Mar 2025 22:24:27 -0600 > From: John Shoch > To: internet-history at elists.isoc.org > Subject: Re: [ih] Internet-history Digest, Vol 64, Issue 30 > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > >>>> On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history < > internet-history at elists.isoc.org > > wrote: > >>>> I think we had a fair number of nodes - at least a half dozen, possibly >>>> more? Don would know, if you don't Barbara. >>>> Yes to multiple mountain sites. Eichler - sounds like somebody's > house! I >>>> used to live in an Eichler in Palo Alto but never had a packet radio >>>> installed. Xerox PARC had one (fixed location) though. > > Vint is understating his generosity and support: > > --Vint (at Arpa) and Don Nielson and team (at SRI) supported us with TWO > Packet Radio Units in Palo Alto. > --They were stationary installations, at the main PARC building and another > about a mile away. > --The PRUs had an 1822 interface, and we had built an 1822 interface for > the Alto (to connect to an Imp). > --So we built 2 more interfaces, and had an Alto at each PRU -- which ran > our standard internet gateway, and could also connect to an Ethernet, and > then on to the rest of our internet. > --We did not modify the PRU code. A network driver was written to > encapsulate internet packets for transmission through the PRNet, so it > became a transit network between two Ethernets (and packets coming off the > PRNet could be routed on through other gateways to machines elsewhere in > the country). > --The PRNet and an Ethernet differed in throughput by maybe 2 decimal > orders of magnitude -- so it taught us all a lot about flow and congestion > control, retransmission algorithms, lossy sub-neworks, delayed duplicates, > intra-network fragmentation, and more.. > --It was a great experiment. > > Could not have done it without Vint, Don, et al. > (I helped organize the project, but real kudos go to Larry Stewart, who > made it all happen!) > > John > > PS: I sometimes give a talk that includes a picture of a rack of equipment > holding the PRU and the Alto gateway -- and then quip, "If you squint real > hard, and apply pressure from 20-30 years of Moore's law, out pops a WiFi > access point!" > > From jeanjour at comcast.net Thu Mar 27 08:19:37 2025 From: jeanjour at comcast.net (John Day) Date: Thu, 27 Mar 2025 11:19:37 -0400 Subject: [ih] weight of the internet In-Reply-To: References: Message-ID: <480C3894-5092-4ED6-8602-969EFA81B159@comcast.net> To be a purist, isn?t it even less than that? The web is an application, not the Internet. So wouldn?t it be what is in TCP and below? ;-) Amusing. > On Mar 27, 2025, at 07:50, vinton cerf via Internet-history wrote: > > funny! > > https://www.wired.com/story/weight-of-the-internet/#:~:text=At%20room%20temperature%2C%20the%20entirety,53%20quadrillionths%20of%20a%20gram. > > hope this isn't paywalled > v > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From df at macgui.com Thu Mar 27 10:03:55 2025 From: df at macgui.com (David Finnigan) Date: Thu, 27 Mar 2025 12:03:55 -0500 Subject: [ih] What is the Web, was Re: weight of the internet In-Reply-To: <480C3894-5092-4ED6-8602-969EFA81B159@comcast.net> References: <480C3894-5092-4ED6-8602-969EFA81B159@comcast.net> Message-ID: It might be something different. 
My impression is that the Web is a concept which uses in part HTTP and the URL as just two of its vehicles. But the Web as a concept also unified Gopher, FTP, Netnews, and other protocols of the day under a single user interface as an information retrieval tool. In short: the Web is a concept of unified information systems. -David L. Finnigan On 27 Mar 2025 10:19 am, John Day via Internet-history wrote: > To be a purist, isn?t it even less than that? The web is an > application, not the Internet. > > So wouldn?t it be what is in TCP and below? ;-) > > Amusing. > >> On Mar 27, 2025, at 07:50, vinton cerf via Internet-history >> wrote: >> >> funny! >> >> https://www.wired.com/story/weight-of-the-internet/#:~:text=At%20room%20temperature%2C%20the%20entirety,53%20quadrillionths%20of%20a%20gram. >> >> hope this isn't paywalled >> v >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From gnu at toad.com Thu Mar 27 12:41:06 2025 From: gnu at toad.com (John Gilmore) Date: Thu, 27 Mar 2025 12:41:06 -0700 Subject: [ih] TCP RTT Estimator (JNC history site contents) In-Reply-To: References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> Message-ID: <3463.1743104466@hop.toad.com> Greg Skinner via Internet-history wrote: > BTW, Jim Mathis' TIU implementation is available on Noel Chiappa's ana-3.lcs.mit.edu site. [1] > The documentation is dated April 1979. > --gregbo > [1] http://ana-3.lcs.mit.edu/~jnc/tech/mos/ Noel's ana-3 site is dead now, but it has apparently moved to: http://mercury.lcs.mit.edu/~jnc/ However, http://mercury.lcs.mit.edu/~jnc/tech/mos is inaccessible or nonexistent. However, I saved a copy of ana-3's 2023 contents in the Internet Archive here: https://web.archive.org/web/20230315052958/http://ana-3.lcs.mit.edu/~jnc/tech/mos/docs/tiunv1.lpt https://web.archive.org/web/20230315053015/http://ana-3.lcs.mit.edu/~jnc/tech/mos/tiu/telnet-1.mac (PS: the whole HTTPS site mercury.lcs.mit.edu site's is inaccessible from modern Firefox, because the server uses a TLS version lower than 1.2; one must use original http.) John From gregskinner0 at icloud.com Thu Mar 27 13:28:11 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Thu, 27 Mar 2025 13:28:11 -0700 Subject: [ih] TCP RTT Estimator (JNC history site contents) In-Reply-To: <3463.1743104466@hop.toad.com> References: <1676873250.2847239.1741721013305.ref@mail.yahoo.com> <1676873250.2847239.1741721013305@mail.yahoo.com> <865890042.2905597.1741726920385@mail.yahoo.com> <4A604A2D-5CA1-4869-91EA-DC8BD9DBD254@comcast.net> <1333609295.2931332.1741729733743@mail.yahoo.com> <6D902D9F-6DDC-41A9-8E9A-21AA49C6FD35@icloud.com> <8441175E-BFC2-49D2-9C75-5F0C0F96F52B@comcast.net> <0BE57BC0-CF5E-4983-970B-6CA186397FC5@icloud.com> <665693090.2324601.1743021974531@mail.yahoo.com> <3463.1743104466@hop.toad.com> Message-ID: <2456A6B9-16B5-48A5-91A1-E4A7F42D225B@icloud.com> On Mar 27, 2025, at 12:41?PM, John Gilmore wrote: > > Greg Skinner via Internet-history wrote: >> BTW, Jim Mathis' TIU implementation is available on Noel Chiappa's ana-3.lcs.mit.edu site. 
[1] >> The documentation is dated April 1979. >> --gregbo >> [1] http://ana-3.lcs.mit.edu/~jnc/tech/mos/ > > Noel's ana-3 site is dead now, but it has apparently moved to: > > http://mercury.lcs.mit.edu/~jnc/ Odd, because I have been able to access it. Here is part of the documentation on TCP retransmission, from http://ana-3.lcs.mit.edu/~jnc/tech/mos/docs/tiunv2.lpt:

2. Output Packet Processing [...] d. Retransmission of Unacknowledged Text or Control

At present, the TCP process gets signaled every second and counts down the signals by using the WAKEUP counter in the TCB. When this counter reaches zero, the retransmission is triggered; a new value for the retransmission interval (RTXDLY) is calculated; and WAKEUP is initialized to that value. Currently, the new value for RTXDLY can either be constant or increase linearly or exponentially with the number of unsuccessful retransmissions of a packet. When the retransmission time-out expires, the TCP determines whether the head of the retransmission queue has been retransmitted the maximum number of times; if so, the remote TCP is declared "not responding," and the connection is marked as suspended. A count is kept in the RETRY field of the TCB, is incremented on every retransmission, and is reset to zero whenever a new packet has been received. If the connection is not in a usable state (i.e., if it is attempting to open or close), the connection is aborted. After deciding to retransmit, the TCP checks the head of the retransmission buffer to see whether it contains control or data; these two cases are handled separately. If it contains control, a packet containing this control function is constructed and transmitted. If it contains data, a packet is constructed that contains all the data from the head of the retransmission buffer to its end or to the first control function indicator, with the amount of data not exceeding the maximum allowed in a single packet. Thus the data packet boundaries are not preserved.

From gregskinner0 at icloud.com Thu Mar 27 14:20:09 2025 From: gregskinner0 at icloud.com (Greg Skinner) Date: Thu, 27 Mar 2025 14:20:09 -0700 Subject: [ih] Fwd: Comments on Packet Radio References: <200792132.2691016.1743096520159@mail.yahoo.com> Message-ID: Forwarded for Barbara > Begin forwarded message: > > From: Barbara Denny > Subject: Fw: [ih] Comments on Packet Radio > Date: March 27, 2025 at 10:28:40 AM PDT > > ****** > > More details than I was expecting. I would add a couple things that I think might be interesting to some people and reflect the multi hop broadcast nature of the network. > > These features were available when I worked on packet radio. I don't think pacing was available during the first demos. I don't know when alternate routing was included. > > Packet radios also did something called pacing to try to eliminate the hidden terminal problem: clobbering the receipt of a packet at the next node because you didn't hear a transmission of another radio two hops away from you. Rather than relying on my memory to recreate the details, I found a paper by the Rockwell folks if you are interested. > > N. Gower and J. Jubin, "Congestion Control using Pacing in a Packet Radio Network," MILCOM 1982 - IEEE Military Communications Conference - Progress in Spread Spectrum Communications, Boston, MA, USA, 1982, pp. 23.1-1-23.1-6, doi: 10.1109/MILCOM.1982.4805945.
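
To make the countdown scheme in the tiunv2.lpt excerpt quoted above concrete, here is a minimal sketch of the per-connection retransmission logic it describes: a once-per-second signal decrements WAKEUP, a zero count triggers retransmission of the head of the queue, RETRY is incremented, and a new RTXDLY is chosen as a constant, linear, or exponential function of the retry count. The field names WAKEUP, RTXDLY, and RETRY come from the excerpt; the Python rendering, the MAX_RETRY and BASE_DELAY values, and the policy selector are illustrative assumptions, not the original TIU code.

# Illustrative sketch only -- the TIU itself was not written in Python,
# and MAX_RETRY / BASE_DELAY / "policy" are assumed values for the example.
MAX_RETRY = 5        # assumed limit before declaring the remote TCP "not responding"
BASE_DELAY = 3       # assumed base retransmission interval, in seconds

class TCB:
    def __init__(self, policy="exponential"):
        self.retry = 0               # RETRY: unsuccessful retransmissions so far
        self.wakeup = BASE_DELAY     # WAKEUP: seconds until the next retransmission
        self.policy = policy
        self.suspended = False

    def next_rtxdly(self):
        """Recompute RTXDLY: constant, linear, or exponential in RETRY."""
        if self.policy == "constant":
            return BASE_DELAY
        if self.policy == "linear":
            return BASE_DELAY * (self.retry + 1)
        return BASE_DELAY * (2 ** self.retry)    # exponential growth

    def on_new_packet_received(self):
        """The excerpt's rule: RETRY resets whenever a new packet arrives."""
        self.retry = 0

    def on_one_second_tick(self, retransmit_head):
        """Called once per second by the TCP process for this connection."""
        if self.suspended:
            return
        self.wakeup -= 1
        if self.wakeup > 0:
            return
        if self.retry >= MAX_RETRY:
            # head retransmitted the maximum number of times:
            # remote TCP declared "not responding", connection suspended
            self.suspended = True
            return
        retransmit_head()            # resend the head of the retransmission queue
        self.retry += 1
        self.wakeup = self.next_rtxdly()

In use, each connection's TCB would receive on_one_second_tick() from the periodic signal, with retransmit_head supplied as whatever routine rebuilds and sends the packet at the head of the retransmission buffer; on_new_packet_received() models the reset of RETRY described in the excerpt.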
> > Packet radios also performed alt (alternate) routing. If a packet radio did not hear the follow on transmission of its packet to the next hop, or the explicit ack if the next hop was the destination, after three attempts, the sending radio could request help from a neighboring packet radio to transmit the packet if it had a route whose tier level (think hop count) to the destination was equal or less than the tier level in the packet header. An alt route request bit? was used for this help request. If the new next hop radio was equal to tier level in the packet header, this was known as lateral alternate routing and a flag was set so no other radio at the same tier level would try to forward the packet if help was needed using this new next hop. Hope I got that description right. > > BTW. I never got written information about the protocols in the radios. My knowledge is from meetings or asking Rockwell packet radio folks about a problem I was looking into. > > Packet radios did use omnidirectional antennas so it was a broadcast network. Challenges presented to later DARPA packet radio projects included the use of unidirectional antennas. > > barbara From jnc at mercury.lcs.mit.edu Thu Mar 27 14:27:45 2025 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 27 Mar 2025 17:27:45 -0400 (EDT) Subject: [ih] TCP RTT Estimator (JNC history site contents) Message-ID: <20250327212745.8186718C08F@mercury.lcs.mit.edu> > From: John Gilmore > Noel's ana-3 site is dead now, but it has apparently moved to: > http://mercury.lcs.mit.edu/~jnc/ > However, http://mercury.lcs.mit.edu/~jnc/tech/mos is inaccessible or > nonexistent. These days, mercury.lcs.mit.edu and ana-3.lcs.mit.edu are the same machine (~jnc at each goes to different places in my home dir on that machine), so one shouldn't be up and the other down - unless someone has broken something in a config somewhere (which seems to happen on a semi-regular basis). At the moment, they seem to both be up; if you notice one down, ding me. Noel From vint at google.com Thu Mar 27 15:21:05 2025 From: vint at google.com (Vint Cerf) Date: Thu, 27 Mar 2025 18:21:05 -0400 Subject: [ih] Comments on Packet Radio In-Reply-To: <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net> References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net> <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net> Message-ID: thanks Don! v On Thu, Mar 27, 2025 at 6:09?PM Don Nielson wrote: > I don't understand this internet history routing but I need to > reply to Vint's observation about ETE reliability in the PRNET. > > In the early days of the PRNET it was sometimes envisaged to > be a stand alone net to serve Army and other needs. To make it > reliable to users, some intranet ETE protocol was needed. (Beyond > the reliable SPP used to maintain the packet radio units themselves.) > I thought I recalled some pre-TCP effort but I may well be mistaken. > For as early as 1977 TCP was suggested for PRNET *intranet* use > as well as an overlay for internet traffic through the PRNET. (Attached > is an excerpt from a Jan 1978 final report to DARPA indicating that use.) > In essence TCP emerged to fill that stand alone need. > > A feature of the PRNET in dealing with its uncertain mobile environment > led to its reliability even though packets were lost. One reason for loss > was a error checking technique in each PRU that would immediately discard > bad packets. The other was simply channel failure. 
To help, a hop-by-hop > retransmission/ack arrangement was used with limited repeats and a > concurrent local detection scheme to discard duplicates. While still > retained, the scheme was able to be simplified once TCP came into use. > Don > > On 3/27/25 2:30 AM, Vint Cerf wrote: > > I was not sure whether Don's note got to the internet-history list, so > apologies if this is a duplicate. > > I went back and re-read the long paper on Packet Radio in Proceedings of > the IEEE Special Issue published November 1978. Don is correct that there > was a reliable Station-PRU protocol (called SPP) but I believe this was > only used for Station/PRU communication. Not all traffic was carried that > way and this, in part, motivated the development of the end/end TCP/IP > protocol. > > https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1455409 > > vint > > On Thu, Mar 27, 2025 at 2:46?AM Don Nielson wrote: > >> Sorry, didn't pay attention to limited addressees below. >> Also a few typos corrected below. Don >> >> >> -------- Forwarded Message -------- >> Subject: Re: [ih] TCP RTT Estimator >> Date: Wed, 26 Mar 2025 23:36:21 -0700 >> From: Don Nielson >> To: Barbara Denny , >> Internet-history >> >> >> Hi, All, >> The threads here are a bit random so I think I'll just try to give >> a few comments that hopefully will answer much at issue. >> >> 1. The Packet Radio Network (PRNET) was, from the outset, to be a self- >> forming network with a central controlling node called a station, but with >> the ongoing potential of stationless operation. That station had >> an ARPANET interface for maintaining its software. The PRNET was >> dynamic in the sense of self-restructuring with the addition or loss of >> nodes. As early as 1974-5 it interconnection to the ARPANET was planned. >> >> 2. The packet radio units (PRUs) were at once a network repeater >> and an entrance node to the network. Some were placed at >> promontories for area coverage and some were sited at user sites >> as nodes for network access. They were half digital/half radio and >> sophisticated for their time. For example, any PRU could be >> software-maintained (debugged) remotely. This obviously required >> reliable PRNET protocols I think we called SPP and NCP they rode atop a >> channel access layer that resided only in the PRUs. That PRU layer >> faced all the early issues of contention, routing, and efficiency and >> the PRU radio section was designed as best to deal with mulitpath, etc. >> >> 3. The PRUs required interfaces to the terminals/hosts to which >> they were connected. (Exceptions to that were some traffic >> generators built for early testing and some IMP interfaces to the PDP-11 >> station computer.). SRI built the terminal interface units and it was >> in those that TCP was eventually placed. >> >> 4. SRI was testing PRNET configurations in 1974-5 and doing so had >> a number of PRUs (and one station computer) available. By the end >> of 1975 we had at least a half dozen in use and probably more in >> backup. By the end of 1976 about 14 were on site. >> >> 5. Before mentioning TCP it needs to be said that PRNET intranet >> protocols were end-to-end reliable, handling all the problems of >> flow control, duplicate detection, sequencing, and retransmission. >> >> 6. TCP implementation was anticipated in 1975 and preparations for >> a station gateway that arrived in early 1976. TCP for the SRI >> TIUs was, according to one report, based on Stanford Tech Note 68 >> by Vint dated Nov 1975. 
As Vint said, Jim Mathis lead that >> implementation. >> In early 1975 the BBN-provided gateway from Ginny Strazisar was >> first tested without PRUs and early problems resolved. >> >> 7. So, with the gateway operating, it was time to take TCP to the field >> and after some brief testing it was decided to have a little celebration >> in that regard. Ron Kunzelman of SRI suggested a nice accessible spot >> for the SRI van and was at least one PRNET hop from the station/gateway >> was Rossotti's. (I don't recall or if anyone with ever know whether other >> PRNET repeaters that day were passing this traffic. Given the absence >> of other PRNET traffic, it would have been improbable.) >> >> Several SRI participants were there, one Army visitor, and I took >> the pictures. Please recall that the PRNET protocol was reliable, >> so testing TCP exclusively on it wouldn't have made sense. >> While the PRNET could halt under environmental issues, that didn't mean >> it was lossy. While the demo at Rossotti's was not mobile, we had >> countless mobile demos of numeric patterns in which transmission >> often would be interrupted, but we never saw errors. We even would >> disable the PRU radio unit to halt transmission to show no errors. >> >> 8. TCP reliability was, now that I think about it, at that point mainly a >> test of the >> new gateway and possibly if the ARPANET routing was somehow lossy. >> If you saw the very lengthy weekly report entered manually from Rossotti's >> you would see how well it all worked end-to-end. >> >> If I haven't bent your ears enough, I could try and answer anything the >> above doesn't mention or errors in my memory. I did look back at some >> of the packet radio notes for the dates and numbers. Don >> >> >> >> >> >> On 3/26/25 5:30 PM, Barbara Denny wrote: >> >> Having trouble sending to the email list again so I shortened the >> original thread. Hope no duplicates. >> >> **** >> I might be repeating but I will add a few comments. Hope my memory is >> pretty good. >> >> Packet Radio nodes could act as sink, source, or repeater/relay for >> data. They could also have an attached device (like a station, end user >> host, tiu, etc). I think the packet radio addressing space was broken up >> so you could determine the type of entity by the ID (need to double check >> this). The station provided routes to packet radios when the packet radio >> didn't know how to reach a destination. Any packet radio could be mobile. >> I don't remember if there was a limit initially on how many neighbors a >> packet radio could have. Packet radio nodes did not use IP related >> protocols but could handle IP traffic generated by other entities. >> >> Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, >> VPR, and also the LPR which was actually done under a follow-on program >> called SURAN) . There were also multiple versions of the radio software >> known as CAPX where X was a number. I think the earliest version I >> encountered was CAP5 so I have no knowledge of the protocol implementation >> used in the simulation Greg Skinner presented in his email message. >> >> In the early 1980s packet radio was implementing multi-station so you >> could have more than one station in a packet radio network. I think this >> was known as CAP 6.2 (6.4???). There was also a stationless design being >> discussed at the close of the packet radio program (CAP7). 
>> >> barbara >> On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf >> wrote: >> >> >> Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. >> >> v >> >> >> On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: >> >> >> >> On Mar 26, 2025, at 17:17, Vint Cerf wrote: >> >> yes, the gateway was colocated with the Station (on the same computer).h >> >> >> I missed something. What is the Station? >> >> The Station managed the Packet Radio network, maintained information >> about connectivity among the radio relays. PRNET was not a star network. >> >> >> That is what I was assuming. >> >> Topology changes were tracked by the mobile nodes periodically reporting >> to the station which other Packet Radios they could reach. >> >> >> So a sort of centralized routing on the Station. An early ad hoc network. >> >> Hosts on the PRNET nodes could communicate with each other and, through >> the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run >> TCP, >> >> >> So there were distinct machines acting as 'PRNET routers? and PRNET hosts. >> >> that was running on the hosts like the LSI-11/23's or the Station or.... >> >> >> ;-) an LSI-11/23 wasn?t a lot of machine. ;-) We had a strip down Unix >> running on one the year before but as a terminal connected to our Unix on >> an 11/45 but it was running NCP. >> >> Thanks, >> John >> >> >> v >> >> >> On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: >> >> And those nodes relayed among themselves as well as with the gateway? >> >> IOW, PRNET wasn?t a star network with the gateway as the center, like a >> WIFI access point. >> >> So there would have been TCP connections between PRNET nodes as well as >> TCP connections potentially relayed by other PRNET nodes through the >> gateway to ARPANET hosts. Right? >> >> Take care, >> John >> >> >> > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 <(571)%20213-1346> > > > until further notice > > > > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jack at 3kitty.org Thu Mar 27 16:29:15 2025 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 27 Mar 2025 16:29:15 -0700 Subject: [ih] Comments on Packet Radio In-Reply-To: References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net> <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net> Message-ID: <265fd9bc-7dfa-44c3-8a4e-64621ace0999@3kitty.org> There may be a larger historical milestone revealed in these discussions. In the early 1970s, networks were traditionally "closed" technologies, with all internal mechanisms designed, built, and operated by a single group.?? Such networks also presented a "virtual circuit" service to their customers.? Both Packet Radio and ARPANET networks had internal mechanisms to enforce a reliable byte-stream behavior, and were managed uniformly by a single team. I think other contemporary networks were similar, e.g. IBM's SNA (or whatever it was called then). The Internet, and TCP in particular, broke with that tradition, by moving the "virtual circuit" mechanisms out of the underlying switching systems and into the masses of "host" computers.? Each users' computer was now responsible for providing a "virtual circuit" service to its own users, as well as offering a new unguaranteed "datagram" service for users who needed minimal latency. 
To me, that was always the root theory to be tested in the "Internet Experiment" -- to answer the question "Is it possible to create a wide area network in which the wildly diverse and numerous users' computers provide much of the required mechanisms and the many components inside The Internet are not designed, installed, managed, or operated by a single owner or company?" It still seems to me like such a system shouldn't work very well. But it does..... Jack On 3/27/25 15:21, Vint Cerf via Internet-history wrote: > thanks Don! > v > > > On Thu, Mar 27, 2025 at 6:09?PM Don Nielson wrote: > >> I don't understand this internet history routing but I need to >> reply to Vint's observation about ETE reliability in the PRNET. >> >> In the early days of the PRNET it was sometimes envisaged to >> be a stand alone net to serve Army and other needs. To make it >> reliable to users, some intranet ETE protocol was needed. (Beyond >> the reliable SPP used to maintain the packet radio units themselves.) >> I thought I recalled some pre-TCP effort but I may well be mistaken. >> For as early as 1977 TCP was suggested for PRNET *intranet* use >> as well as an overlay for internet traffic through the PRNET. (Attached >> is an excerpt from a Jan 1978 final report to DARPA indicating that use.) >> In essence TCP emerged to fill that stand alone need. >> >> A feature of the PRNET in dealing with its uncertain mobile environment >> led to its reliability even though packets were lost. One reason for loss >> was a error checking technique in each PRU that would immediately discard >> bad packets. The other was simply channel failure. To help, a hop-by-hop >> retransmission/ack arrangement was used with limited repeats and a >> concurrent local detection scheme to discard duplicates. While still >> retained, the scheme was able to be simplified once TCP came into use. >> Don >> >> On 3/27/25 2:30 AM, Vint Cerf wrote: >> >> I was not sure whether Don's note got to the internet-history list, so >> apologies if this is a duplicate. >> >> I went back and re-read the long paper on Packet Radio in Proceedings of >> the IEEE Special Issue published November 1978. Don is correct that there >> was a reliable Station-PRU protocol (called SPP) but I believe this was >> only used for Station/PRU communication. Not all traffic was carried that >> way and this, in part, motivated the development of the end/end TCP/IP >> protocol. >> >> https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1455409 >> >> vint >> >> On Thu, Mar 27, 2025 at 2:46?AM Don Nielson wrote: >> >>> Sorry, didn't pay attention to limited addressees below. >>> Also a few typos corrected below. Don >>> >>> >>> -------- Forwarded Message -------- >>> Subject: Re: [ih] TCP RTT Estimator >>> Date: Wed, 26 Mar 2025 23:36:21 -0700 >>> From: Don Nielson >>> To: Barbara Denny , >>> Internet-history >>> >>> >>> Hi, All, >>> The threads here are a bit random so I think I'll just try to give >>> a few comments that hopefully will answer much at issue. >>> >>> 1. The Packet Radio Network (PRNET) was, from the outset, to be a self- >>> forming network with a central controlling node called a station, but with >>> the ongoing potential of stationless operation. That station had >>> an ARPANET interface for maintaining its software. The PRNET was >>> dynamic in the sense of self-restructuring with the addition or loss of >>> nodes. As early as 1974-5 it interconnection to the ARPANET was planned. >>> >>> 2. 
The packet radio units (PRUs) were at once a network repeater >>> and an entrance node to the network. Some were placed at >>> promontories for area coverage and some were sited at user sites >>> as nodes for network access. They were half digital/half radio and >>> sophisticated for their time. For example, any PRU could be >>> software-maintained (debugged) remotely. This obviously required >>> reliable PRNET protocols I think we called SPP and NCP they rode atop a >>> channel access layer that resided only in the PRUs. That PRU layer >>> faced all the early issues of contention, routing, and efficiency and >>> the PRU radio section was designed as best to deal with mulitpath, etc. >>> >>> 3. The PRUs required interfaces to the terminals/hosts to which >>> they were connected. (Exceptions to that were some traffic >>> generators built for early testing and some IMP interfaces to the PDP-11 >>> station computer.). SRI built the terminal interface units and it was >>> in those that TCP was eventually placed. >>> >>> 4. SRI was testing PRNET configurations in 1974-5 and doing so had >>> a number of PRUs (and one station computer) available. By the end >>> of 1975 we had at least a half dozen in use and probably more in >>> backup. By the end of 1976 about 14 were on site. >>> >>> 5. Before mentioning TCP it needs to be said that PRNET intranet >>> protocols were end-to-end reliable, handling all the problems of >>> flow control, duplicate detection, sequencing, and retransmission. >>> >>> 6. TCP implementation was anticipated in 1975 and preparations for >>> a station gateway that arrived in early 1976. TCP for the SRI >>> TIUs was, according to one report, based on Stanford Tech Note 68 >>> by Vint dated Nov 1975. As Vint said, Jim Mathis lead that >>> implementation. >>> In early 1975 the BBN-provided gateway from Ginny Strazisar was >>> first tested without PRUs and early problems resolved. >>> >>> 7. So, with the gateway operating, it was time to take TCP to the field >>> and after some brief testing it was decided to have a little celebration >>> in that regard. Ron Kunzelman of SRI suggested a nice accessible spot >>> for the SRI van and was at least one PRNET hop from the station/gateway >>> was Rossotti's. (I don't recall or if anyone with ever know whether other >>> PRNET repeaters that day were passing this traffic. Given the absence >>> of other PRNET traffic, it would have been improbable.) >>> >>> Several SRI participants were there, one Army visitor, and I took >>> the pictures. Please recall that the PRNET protocol was reliable, >>> so testing TCP exclusively on it wouldn't have made sense. >>> While the PRNET could halt under environmental issues, that didn't mean >>> it was lossy. While the demo at Rossotti's was not mobile, we had >>> countless mobile demos of numeric patterns in which transmission >>> often would be interrupted, but we never saw errors. We even would >>> disable the PRU radio unit to halt transmission to show no errors. >>> >>> 8. TCP reliability was, now that I think about it, at that point mainly a >>> test of the >>> new gateway and possibly if the ARPANET routing was somehow lossy. >>> If you saw the very lengthy weekly report entered manually from Rossotti's >>> you would see how well it all worked end-to-end. >>> >>> If I haven't bent your ears enough, I could try and answer anything the >>> above doesn't mention or errors in my memory. I did look back at some >>> of the packet radio notes for the dates and numbers. 
Don >>> >>> >>> >>> >>> >>> On 3/26/25 5:30 PM, Barbara Denny wrote: >>> >>> Having trouble sending to the email list again so I shortened the >>> original thread. Hope no duplicates. >>> >>> **** >>> I might be repeating but I will add a few comments. Hope my memory is >>> pretty good. >>> >>> Packet Radio nodes could act as sink, source, or repeater/relay for >>> data. They could also have an attached device (like a station, end user >>> host, tiu, etc). I think the packet radio addressing space was broken up >>> so you could determine the type of entity by the ID (need to double check >>> this). The station provided routes to packet radios when the packet radio >>> didn't know how to reach a destination. Any packet radio could be mobile. >>> I don't remember if there was a limit initially on how many neighbors a >>> packet radio could have. Packet radio nodes did not use IP related >>> protocols but could handle IP traffic generated by other entities. >>> >>> Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, >>> VPR, and also the LPR which was actually done under a follow-on program >>> called SURAN) . There were also multiple versions of the radio software >>> known as CAPX where X was a number. I think the earliest version I >>> encountered was CAP5 so I have no knowledge of the protocol implementation >>> used in the simulation Greg Skinner presented in his email message. >>> >>> In the early 1980s packet radio was implementing multi-station so you >>> could have more than one station in a packet radio network. I think this >>> was known as CAP 6.2 (6.4???). There was also a stationless design being >>> discussed at the close of the packet radio program (CAP7). >>> >>> barbara >>> On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf >>> wrote: >>> >>> >>> Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. >>> >>> v >>> >>> >>> On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: >>> >>> >>> >>> On Mar 26, 2025, at 17:17, Vint Cerf wrote: >>> >>> yes, the gateway was colocated with the Station (on the same computer).h >>> >>> >>> I missed something. What is the Station? >>> >>> The Station managed the Packet Radio network, maintained information >>> about connectivity among the radio relays. PRNET was not a star network. >>> >>> >>> That is what I was assuming. >>> >>> Topology changes were tracked by the mobile nodes periodically reporting >>> to the station which other Packet Radios they could reach. >>> >>> >>> So a sort of centralized routing on the Station. An early ad hoc network. >>> >>> Hosts on the PRNET nodes could communicate with each other and, through >>> the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run >>> TCP, >>> >>> >>> So there were distinct machines acting as 'PRNET routers? and PRNET hosts. >>> >>> that was running on the hosts like the LSI-11/23's or the Station or.... >>> >>> >>> ;-) an LSI-11/23 wasn?t a lot of machine. ;-) We had a strip down Unix >>> running on one the year before but as a terminal connected to our Unix on >>> an 11/45 but it was running NCP. >>> >>> Thanks, >>> John >>> >>> >>> v >>> >>> >>> On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: >>> >>> And those nodes relayed among themselves as well as with the gateway? >>> >>> IOW, PRNET wasn?t a star network with the gateway as the center, like a >>> WIFI access point. 
>>> >>> So there would have been TCP connections between PRNET nodes as well as >>> TCP connections potentially relayed by other PRNET nodes through the >>> gateway to ARPANET hosts. Right? >>> >>> Take care, >>> John >>> >>> >>> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 <(571)%20213-1346> >> >> >> until further notice >> >> >> >> >> -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From jeanjour at comcast.net Thu Mar 27 17:12:36 2025 From: jeanjour at comcast.net (John Day) Date: Thu, 27 Mar 2025 20:12:36 -0400 Subject: [ih] Comments on Packet Radio In-Reply-To: <265fd9bc-7dfa-44c3-8a4e-64621ace0999@3kitty.org> References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net> <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net> <265fd9bc-7dfa-44c3-8a4e-64621ace0999@3kitty.org> Message-ID: <8A388E13-726B-49C3-890D-B9C0C790A287@comcast.net> Jack, The difference between virtual-circuit (VC) and datagrams is the difference between static and dynamic resource allocation. Dynamic resource allocation is much more efficient, by orders of magnitude. The earliest result on this that I have found is a paper by Peter Denning from 1968 on supporting terminals for a time-sharing system, where the difference is so stark that it is obvious it is a general result. I know of at least 3 other times this result was rediscovered in networking. The last time in 2004 where it was found that 90% of the buffers in core routers were unnecessary. The first datagram network that broke with VC was CYCLADES and the first datagram network was their CIGALE network that became operational in mid-1973. It was Louis Pouzin of CYCLADES that led the controversy with the PTTs over VC vs datagrams. His efforts caused the CYCLADES project to be shut down prematurely. Actually Transport protocols are not VC. This was an argument introduced by the phone companies to confuse matters. It was not effectively countered at the time and largely distracted the focus on connection establishment, when the actual difference between is wholly within the data transfer phase. In a VC technology, the path of the packets is fixed and while the buffering in the switches didn?t have to be static, it often was. But VC does require the allocation of capacity to be static (the fixed path). The ?connection? nature of Transport-like protocols is totally different. It is created by the feedback mechanisms for retransmission and flow control, not by making the path or the buffering static. Hence, the real debate is between VC and datagram, which to my mind there is no debate. I hope this clarifies matters. Take care, John Day > On Mar 27, 2025, at 19:29, Jack Haverty via Internet-history wrote: > > There may be a larger historical milestone revealed in these discussions. > > In the early 1970s, networks were traditionally "closed" technologies, with all internal mechanisms designed, built, and operated by a single group. Such networks also presented a "virtual circuit" service to their customers. Both Packet Radio and ARPANET networks had internal mechanisms to enforce a reliable byte-stream behavior, and were managed uniformly by a single team. I think other contemporary networks were similar, e.g. IBM's SNA (or whatever it was called then). 
> > The Internet, and TCP in particular, broke with that tradition, by moving the "virtual circuit" mechanisms out of the underlying switching systems and into the masses of "host" computers. Each users' computer was now responsible for providing a "virtual circuit" service to its own users, as well as offering a new unguaranteed "datagram" service for users who needed minimal latency. > > To me, that was always the root theory to be tested in the "Internet Experiment" -- to answer the question "Is it possible to create a wide area network in which the wildly diverse and numerous users' computers provide much of the required mechanisms and the many components inside The Internet are not designed, installed, managed, or operated by a single owner or company?" > > It still seems to me like such a system shouldn't work very well. But it does..... > > Jack > > > On 3/27/25 15:21, Vint Cerf via Internet-history wrote: >> thanks Don! >> v >> >> >> On Thu, Mar 27, 2025 at 6:09?PM Don Nielson wrote: >> >>> I don't understand this internet history routing but I need to >>> reply to Vint's observation about ETE reliability in the PRNET. >>> >>> In the early days of the PRNET it was sometimes envisaged to >>> be a stand alone net to serve Army and other needs. To make it >>> reliable to users, some intranet ETE protocol was needed. (Beyond >>> the reliable SPP used to maintain the packet radio units themselves.) >>> I thought I recalled some pre-TCP effort but I may well be mistaken. >>> For as early as 1977 TCP was suggested for PRNET *intranet* use >>> as well as an overlay for internet traffic through the PRNET. (Attached >>> is an excerpt from a Jan 1978 final report to DARPA indicating that use.) >>> In essence TCP emerged to fill that stand alone need. >>> >>> A feature of the PRNET in dealing with its uncertain mobile environment >>> led to its reliability even though packets were lost. One reason for loss >>> was a error checking technique in each PRU that would immediately discard >>> bad packets. The other was simply channel failure. To help, a hop-by-hop >>> retransmission/ack arrangement was used with limited repeats and a >>> concurrent local detection scheme to discard duplicates. While still >>> retained, the scheme was able to be simplified once TCP came into use. >>> Don >>> >>> On 3/27/25 2:30 AM, Vint Cerf wrote: >>> >>> I was not sure whether Don's note got to the internet-history list, so >>> apologies if this is a duplicate. >>> >>> I went back and re-read the long paper on Packet Radio in Proceedings of >>> the IEEE Special Issue published November 1978. Don is correct that there >>> was a reliable Station-PRU protocol (called SPP) but I believe this was >>> only used for Station/PRU communication. Not all traffic was carried that >>> way and this, in part, motivated the development of the end/end TCP/IP >>> protocol. >>> >>> https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1455409 >>> >>> vint >>> >>> On Thu, Mar 27, 2025 at 2:46?AM Don Nielson wrote: >>> >>>> Sorry, didn't pay attention to limited addressees below. >>>> Also a few typos corrected below. Don >>>> >>>> >>>> -------- Forwarded Message -------- >>>> Subject: Re: [ih] TCP RTT Estimator >>>> Date: Wed, 26 Mar 2025 23:36:21 -0700 >>>> From: Don Nielson >>>> To: Barbara Denny , >>>> Internet-history >>>> >>>> >>>> Hi, All, >>>> The threads here are a bit random so I think I'll just try to give >>>> a few comments that hopefully will answer much at issue. >>>> >>>> 1. 
The Packet Radio Network (PRNET) was, from the outset, to be a self- >>>> forming network with a central controlling node called a station, but with >>>> the ongoing potential of stationless operation. That station had >>>> an ARPANET interface for maintaining its software. The PRNET was >>>> dynamic in the sense of self-restructuring with the addition or loss of >>>> nodes. As early as 1974-5 it interconnection to the ARPANET was planned. >>>> >>>> 2. The packet radio units (PRUs) were at once a network repeater >>>> and an entrance node to the network. Some were placed at >>>> promontories for area coverage and some were sited at user sites >>>> as nodes for network access. They were half digital/half radio and >>>> sophisticated for their time. For example, any PRU could be >>>> software-maintained (debugged) remotely. This obviously required >>>> reliable PRNET protocols I think we called SPP and NCP they rode atop a >>>> channel access layer that resided only in the PRUs. That PRU layer >>>> faced all the early issues of contention, routing, and efficiency and >>>> the PRU radio section was designed as best to deal with mulitpath, etc. >>>> >>>> 3. The PRUs required interfaces to the terminals/hosts to which >>>> they were connected. (Exceptions to that were some traffic >>>> generators built for early testing and some IMP interfaces to the PDP-11 >>>> station computer.). SRI built the terminal interface units and it was >>>> in those that TCP was eventually placed. >>>> >>>> 4. SRI was testing PRNET configurations in 1974-5 and doing so had >>>> a number of PRUs (and one station computer) available. By the end >>>> of 1975 we had at least a half dozen in use and probably more in >>>> backup. By the end of 1976 about 14 were on site. >>>> >>>> 5. Before mentioning TCP it needs to be said that PRNET intranet >>>> protocols were end-to-end reliable, handling all the problems of >>>> flow control, duplicate detection, sequencing, and retransmission. >>>> >>>> 6. TCP implementation was anticipated in 1975 and preparations for >>>> a station gateway that arrived in early 1976. TCP for the SRI >>>> TIUs was, according to one report, based on Stanford Tech Note 68 >>>> by Vint dated Nov 1975. As Vint said, Jim Mathis lead that >>>> implementation. >>>> In early 1975 the BBN-provided gateway from Ginny Strazisar was >>>> first tested without PRUs and early problems resolved. >>>> >>>> 7. So, with the gateway operating, it was time to take TCP to the field >>>> and after some brief testing it was decided to have a little celebration >>>> in that regard. Ron Kunzelman of SRI suggested a nice accessible spot >>>> for the SRI van and was at least one PRNET hop from the station/gateway >>>> was Rossotti's. (I don't recall or if anyone with ever know whether other >>>> PRNET repeaters that day were passing this traffic. Given the absence >>>> of other PRNET traffic, it would have been improbable.) >>>> >>>> Several SRI participants were there, one Army visitor, and I took >>>> the pictures. Please recall that the PRNET protocol was reliable, >>>> so testing TCP exclusively on it wouldn't have made sense. >>>> While the PRNET could halt under environmental issues, that didn't mean >>>> it was lossy. While the demo at Rossotti's was not mobile, we had >>>> countless mobile demos of numeric patterns in which transmission >>>> often would be interrupted, but we never saw errors. We even would >>>> disable the PRU radio unit to halt transmission to show no errors. >>>> >>>> 8. 
TCP reliability was, now that I think about it, at that point mainly a >>>> test of the >>>> new gateway and possibly if the ARPANET routing was somehow lossy. >>>> If you saw the very lengthy weekly report entered manually from Rossotti's >>>> you would see how well it all worked end-to-end. >>>> >>>> If I haven't bent your ears enough, I could try and answer anything the >>>> above doesn't mention or errors in my memory. I did look back at some >>>> of the packet radio notes for the dates and numbers. Don >>>> >>>> >>>> >>>> >>>> >>>> On 3/26/25 5:30 PM, Barbara Denny wrote: >>>> >>>> Having trouble sending to the email list again so I shortened the >>>> original thread. Hope no duplicates. >>>> >>>> **** >>>> I might be repeating but I will add a few comments. Hope my memory is >>>> pretty good. >>>> >>>> Packet Radio nodes could act as sink, source, or repeater/relay for >>>> data. They could also have an attached device (like a station, end user >>>> host, tiu, etc). I think the packet radio addressing space was broken up >>>> so you could determine the type of entity by the ID (need to double check >>>> this). The station provided routes to packet radios when the packet radio >>>> didn't know how to reach a destination. Any packet radio could be mobile. >>>> I don't remember if there was a limit initially on how many neighbors a >>>> packet radio could have. Packet radio nodes did not use IP related >>>> protocols but could handle IP traffic generated by other entities. >>>> >>>> Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, >>>> VPR, and also the LPR which was actually done under a follow-on program >>>> called SURAN) . There were also multiple versions of the radio software >>>> known as CAPX where X was a number. I think the earliest version I >>>> encountered was CAP5 so I have no knowledge of the protocol implementation >>>> used in the simulation Greg Skinner presented in his email message. >>>> >>>> In the early 1980s packet radio was implementing multi-station so you >>>> could have more than one station in a packet radio network. I think this >>>> was known as CAP 6.2 (6.4???). There was also a stationless design being >>>> discussed at the close of the packet radio program (CAP7). >>>> >>>> barbara >>>> On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf >>>> wrote: >>>> >>>> >>>> Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work. >>>> >>>> v >>>> >>>> >>>> On Wed, Mar 26, 2025 at 5:26?PM John Day wrote: >>>> >>>> >>>> >>>> On Mar 26, 2025, at 17:17, Vint Cerf wrote: >>>> >>>> yes, the gateway was colocated with the Station (on the same computer).h >>>> >>>> >>>> I missed something. What is the Station? >>>> >>>> The Station managed the Packet Radio network, maintained information >>>> about connectivity among the radio relays. PRNET was not a star network. >>>> >>>> >>>> That is what I was assuming. >>>> >>>> Topology changes were tracked by the mobile nodes periodically reporting >>>> to the station which other Packet Radios they could reach. >>>> >>>> >>>> So a sort of centralized routing on the Station. An early ad hoc network. >>>> >>>> Hosts on the PRNET nodes could communicate with each other and, through >>>> the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run >>>> TCP, >>>> >>>> >>>> So there were distinct machines acting as 'PRNET routers? and PRNET hosts. >>>> >>>> that was running on the hosts like the LSI-11/23's or the Station or.... >>>> >>>> >>>> ;-) an LSI-11/23 wasn?t a lot of machine. 
;-) We had a strip down Unix >>>> running on one the year before but as a terminal connected to our Unix on >>>> an 11/45 but it was running NCP. >>>> >>>> Thanks, >>>> John >>>> >>>> >>>> v >>>> >>>> >>>> On Wed, Mar 26, 2025 at 5:08?PM John Day wrote: >>>> >>>> And those nodes relayed among themselves as well as with the gateway? >>>> >>>> IOW, PRNET wasn?t a star network with the gateway as the center, like a >>>> WIFI access point. >>>> >>>> So there would have been TCP connections between PRNET nodes as well as >>>> TCP connections potentially relayed by other PRNET nodes through the >>>> gateway to ARPANET hosts. Right? >>>> >>>> Take care, >>>> John >>>> >>>> >>>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 <(571)%20213-1346> >>> >>> >>> until further notice >>> >>> >>> >>> >>> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vgcerf at gmail.com Thu Mar 27 17:16:03 2025 From: vgcerf at gmail.com (vinton cerf) Date: Thu, 27 Mar 2025 20:16:03 -0400 Subject: [ih] Comments on Packet Radio In-Reply-To: <8A388E13-726B-49C3-890D-B9C0C790A287@comcast.net> References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net> <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net> <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net> <265fd9bc-7dfa-44c3-8a4e-64621ace0999@3kitty.org> <8A388E13-726B-49C3-890D-B9C0C790A287@comcast.net> Message-ID: ironically, larry roberts rejected TCP/IP and developed X.25/X.75 because he said he could not sell TCP/IP and datagrams but he thought he could sell "virtual" circuits. v On Thu, Mar 27, 2025 at 8:12?PM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > Jack, > > The difference between virtual-circuit (VC) and datagrams is the > difference between static and dynamic resource allocation. Dynamic resource > allocation is much more efficient, by orders of magnitude. The earliest > result on this that I have found is a paper by Peter Denning from 1968 on > supporting terminals for a time-sharing system, where the difference is so > stark that it is obvious it is a general result. I know of at least 3 other > times this result was rediscovered in networking. The last time in 2004 > where it was found that 90% of the buffers in core routers were unnecessary. > > The first datagram network that broke with VC was CYCLADES and the first > datagram network was their CIGALE network that became operational in > mid-1973. It was Louis Pouzin of CYCLADES that led the controversy with the > PTTs over VC vs datagrams. His efforts caused the CYCLADES project to be > shut down prematurely. > > Actually Transport protocols are not VC. This was an argument introduced > by the phone companies to confuse matters. It was not effectively countered > at the time and largely distracted the focus on connection establishment, > when the actual difference between is wholly within the data transfer phase. > > In a VC technology, the path of the packets is fixed and while the > buffering in the switches didn?t have to be static, it often was. But VC > does require the allocation of capacity to be static (the fixed path). The > ?connection? nature of Transport-like protocols is totally different. It is > created by the feedback mechanisms for retransmission and flow control, not > by making the path or the buffering static. 
Hence, the real debate is > between VC and datagram, which to my mind there is no debate. > > I hope this clarifies matters. > > Take care, > John Day > > > On Mar 27, 2025, at 19:29, Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > There may be a larger historical milestone revealed in these discussions. > > > > In the early 1970s, networks were traditionally "closed" technologies, > with all internal mechanisms designed, built, and operated by a single > group. Such networks also presented a "virtual circuit" service to their > customers. Both Packet Radio and ARPANET networks had internal mechanisms > to enforce a reliable byte-stream behavior, and were managed uniformly by a > single team. I think other contemporary networks were similar, e.g. IBM's > SNA (or whatever it was called then). > > > > The Internet, and TCP in particular, broke with that tradition, by > moving the "virtual circuit" mechanisms out of the underlying switching > systems and into the masses of "host" computers. Each users' computer was > now responsible for providing a "virtual circuit" service to its own users, > as well as offering a new unguaranteed "datagram" service for users who > needed minimal latency. > > > > To me, that was always the root theory to be tested in the "Internet > Experiment" -- to answer the question "Is it possible to create a wide area > network in which the wildly diverse and numerous users' computers provide > much of the required mechanisms and the many components inside The Internet > are not designed, installed, managed, or operated by a single owner or > company?" > > > > It still seems to me like such a system shouldn't work very well. But it > does..... > > > > Jack > > > > > > On 3/27/25 15:21, Vint Cerf via Internet-history wrote: > >> thanks Don! > >> v > >> > >> > >> On Thu, Mar 27, 2025 at 6:09?PM Don Nielson > wrote: > >> > >>> I don't understand this internet history routing but I need to > >>> reply to Vint's observation about ETE reliability in the PRNET. > >>> > >>> In the early days of the PRNET it was sometimes envisaged to > >>> be a stand alone net to serve Army and other needs. To make it > >>> reliable to users, some intranet ETE protocol was needed. (Beyond > >>> the reliable SPP used to maintain the packet radio units themselves.) > >>> I thought I recalled some pre-TCP effort but I may well be mistaken. > >>> For as early as 1977 TCP was suggested for PRNET *intranet* use > >>> as well as an overlay for internet traffic through the PRNET. > (Attached > >>> is an excerpt from a Jan 1978 final report to DARPA indicating that > use.) > >>> In essence TCP emerged to fill that stand alone need. > >>> > >>> A feature of the PRNET in dealing with its uncertain mobile environment > >>> led to its reliability even though packets were lost. One reason for > loss > >>> was a error checking technique in each PRU that would immediately > discard > >>> bad packets. The other was simply channel failure. To help, a > hop-by-hop > >>> retransmission/ack arrangement was used with limited repeats and a > >>> concurrent local detection scheme to discard duplicates. While still > >>> retained, the scheme was able to be simplified once TCP came into use. > >>> Don > >>> > >>> On 3/27/25 2:30 AM, Vint Cerf wrote: > >>> > >>> I was not sure whether Don's note got to the internet-history list, so > >>> apologies if this is a duplicate. 
> >>> On 3/27/25 2:30 AM, Vint Cerf wrote:
> >>>
> >>> I was not sure whether Don's note got to the internet-history list, so apologies if this is a duplicate.
> >>>
> >>> I went back and re-read the long paper on Packet Radio in Proceedings of the IEEE Special Issue published November 1978. Don is correct that there was a reliable Station-PRU protocol (called SPP) but I believe this was only used for Station/PRU communication. Not all traffic was carried that way and this, in part, motivated the development of the end/end TCP/IP protocol.
> >>>
> >>> https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1455409
> >>>
> >>> vint
> >>>
> >>> On Thu, Mar 27, 2025 at 2:46 AM Don Nielson wrote:
> >>>
> >>>> Sorry, didn't pay attention to limited addressees below.
> >>>> Also a few typos corrected below. Don
> >>>>
> >>>> -------- Forwarded Message --------
> >>>> Subject: Re: [ih] TCP RTT Estimator
> >>>> Date: Wed, 26 Mar 2025 23:36:21 -0700
> >>>> From: Don Nielson
> >>>> To: Barbara Denny , Internet-history
> >>>>
> >>>> Hi, All,
> >>>> The threads here are a bit random so I think I'll just try to give a few comments that hopefully will answer much at issue.
> >>>>
> >>>> 1. The Packet Radio Network (PRNET) was, from the outset, to be a self-forming network with a central controlling node called a station, but with the ongoing potential of stationless operation. That station had an ARPANET interface for maintaining its software. The PRNET was dynamic in the sense of self-restructuring with the addition or loss of nodes. As early as 1974-5 its interconnection to the ARPANET was planned.
> >>>>
> >>>> 2. The packet radio units (PRUs) were at once a network repeater and an entrance node to the network. Some were placed at promontories for area coverage and some were sited at user sites as nodes for network access. They were half digital/half radio and sophisticated for their time. For example, any PRU could be software-maintained (debugged) remotely. This obviously required reliable PRNET protocols, which I think we called SPP and NCP; they rode atop a channel access layer that resided only in the PRUs. That PRU layer faced all the early issues of contention, routing, and efficiency and the PRU radio section was designed as best to deal with multipath, etc.
> >>>>
> >>>> 3. The PRUs required interfaces to the terminals/hosts to which they were connected. (Exceptions to that were some traffic generators built for early testing and some IMP interfaces to the PDP-11 station computer.) SRI built the terminal interface units and it was in those that TCP was eventually placed.
> >>>>
> >>>> 4. SRI was testing PRNET configurations in 1974-5 and in doing so had a number of PRUs (and one station computer) available. By the end of 1975 we had at least a half dozen in use and probably more in backup. By the end of 1976 about 14 were on site.
> >>>>
> >>>> 5. Before mentioning TCP it needs to be said that PRNET intranet protocols were end-to-end reliable, handling all the problems of flow control, duplicate detection, sequencing, and retransmission.
> >>>>
> >>>> 6. TCP implementation was anticipated in 1975, and preparations were made for a station gateway that arrived in early 1976. TCP for the SRI TIUs was, according to one report, based on Stanford Tech Note 68 by Vint dated Nov 1975. As Vint said, Jim Mathis led that implementation.
> >>>> In early 1975 the BBN-provided gateway from Ginny Strazisar was first tested without PRUs and early problems resolved.
> >>>>
> >>>> 7. So, with the gateway operating, it was time to take TCP to the field, and after some brief testing it was decided to have a little celebration in that regard. Ron Kunzelman of SRI suggested a nice accessible spot for the SRI van that was at least one PRNET hop from the station/gateway: Rossotti's. (I don't recall, nor will anyone ever know, whether other PRNET repeaters that day were passing this traffic. Given the absence of other PRNET traffic, it would have been improbable.)
> >>>>
> >>>> Several SRI participants were there, one Army visitor, and I took the pictures. Please recall that the PRNET protocol was reliable, so testing TCP exclusively on it wouldn't have made sense. While the PRNET could halt under environmental issues, that didn't mean it was lossy. While the demo at Rossotti's was not mobile, we had countless mobile demos of numeric patterns in which transmission often would be interrupted, but we never saw errors. We even would disable the PRU radio unit to halt transmission to show no errors.
> >>>>
> >>>> 8. TCP reliability was, now that I think about it, at that point mainly a test of the new gateway and possibly of whether the ARPANET routing was somehow lossy. If you saw the very lengthy weekly report entered manually from Rossotti's you would see how well it all worked end-to-end.
> >>>>
> >>>> If I haven't bent your ears enough, I could try to answer anything the above doesn't mention or correct any errors in my memory. I did look back at some of the packet radio notes for the dates and numbers. Don
> >>>>
> >>>> On 3/26/25 5:30 PM, Barbara Denny wrote:
> >>>>
> >>>> Having trouble sending to the email list again so I shortened the original thread. Hope no duplicates.
> >>>>
> >>>> ****
> >>>> I might be repeating but I will add a few comments. Hope my memory is pretty good.
> >>>>
> >>>> Packet Radio nodes could act as sink, source, or repeater/relay for data. They could also have an attached device (like a station, end user host, tiu, etc). I think the packet radio addressing space was broken up so you could determine the type of entity by the ID (need to double check this). The station provided routes to packet radios when the packet radio didn't know how to reach a destination. Any packet radio could be mobile. I don't remember if there was a limit initially on how many neighbors a packet radio could have. Packet radio nodes did not use IP related protocols but could handle IP traffic generated by other entities.
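A toy sketch of the centralized route service described just above and in the Station discussion that follows: packet radios report which neighbors they can hear, and the Station answers route queries from that reported connectivity. It is an illustration only, not the actual Station/CAP software; the node names and report format are invented.

    # Toy sketch of a central "Station" route service: radios report their
    # current neighbors; the Station answers route queries with a path
    # computed over the reported connectivity graph.
    from collections import deque

    class Station:
        def __init__(self):
            self.neighbors = {}                # node -> set of reported neighbors

        def report(self, node, heard):
            """A packet radio periodically reports which radios it can hear."""
            self.neighbors[node] = set(heard)

        def route(self, src, dst):
            """Breadth-first search over the reported connectivity graph."""
            frontier, prev = deque([src]), {src: None}
            while frontier:
                n = frontier.popleft()
                if n == dst:
                    path = []
                    while n is not None:
                        path.append(n)
                        n = prev[n]
                    return list(reversed(path))
                for m in self.neighbors.get(n, ()):
                    if m not in prev:
                        prev[m] = n
                        frontier.append(m)
            return None                        # no known route right now

    st = Station()
    st.report("PRU-A", ["PRU-B"])              # hypothetical topology
    st.report("PRU-B", ["PRU-A", "PRU-C"])
    st.report("PRU-C", ["PRU-B", "GATEWAY"])
    print(st.route("PRU-A", "GATEWAY"))        # ['PRU-A', 'PRU-B', 'PRU-C', 'GATEWAY']

Breadth-first search is used here only because the invented reports carry no link quality; any shortest-path computation over the reported graph illustrates the same point, that the routing intelligence sits in one place while the radios just report and forward.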
> >>>> Packet Radio nodes also had multiple hardware generations (EPR, UBR, IPR, VPR, and also the LPR which was actually done under a follow-on program called SURAN). There were also multiple versions of the radio software known as CAPX where X was a number. I think the earliest version I encountered was CAP5 so I have no knowledge of the protocol implementation used in the simulation Greg Skinner presented in his email message.
> >>>>
> >>>> In the early 1980s packet radio was implementing multi-station so you could have more than one station in a packet radio network. I think this was known as CAP 6.2 (6.4???). There was also a stationless design being discussed at the close of the packet radio program (CAP7).
> >>>>
> >>>> barbara
> >>>> On Wednesday, March 26, 2025 at 04:07:34 PM PDT, Vint Cerf wrote:
> >>>>
> >>>> Jim Mathis wrote TCP/IP for the LSI-11/23. Nice piece of work.
> >>>>
> >>>> v
> >>>>
> >>>> On Wed, Mar 26, 2025 at 5:26 PM John Day wrote:
> >>>>
> >>>> On Mar 26, 2025, at 17:17, Vint Cerf wrote:
> >>>>
> >>>> yes, the gateway was colocated with the Station (on the same computer).
> >>>>
> >>>> I missed something. What is the Station?
> >>>>
> >>>> The Station managed the Packet Radio network, maintained information about connectivity among the radio relays. PRNET was not a star network.
> >>>>
> >>>> That is what I was assuming.
> >>>>
> >>>> Topology changes were tracked by the mobile nodes periodically reporting to the station which other Packet Radios they could reach.
> >>>>
> >>>> So a sort of centralized routing on the Station. An early ad hoc network.
> >>>>
> >>>> Hosts on the PRNET nodes could communicate with each other and, through the gateway, with Arpanet and SATNET hosts. The PRNET nodes did NOT run TCP,
> >>>>
> >>>> So there were distinct machines acting as 'PRNET routers' and PRNET hosts.
> >>>>
> >>>> that was running on the hosts like the LSI-11/23's or the Station or....
> >>>>
> >>>> ;-) an LSI-11/23 wasn't a lot of machine. ;-) We had a strip down Unix running on one the year before but as a terminal connected to our Unix on an 11/45 but it was running NCP.
> >>>>
> >>>> Thanks,
> >>>> John
> >>>>
> >>>> v
> >>>>
> >>>> On Wed, Mar 26, 2025 at 5:08 PM John Day wrote:
> >>>>
> >>>> And those nodes relayed among themselves as well as with the gateway?
> >>>>
> >>>> IOW, PRNET wasn't a star network with the gateway as the center, like a WIFI access point.
> >>>>
> >>>> So there would have been TCP connections between PRNET nodes as well as TCP connections potentially relayed by other PRNET nodes through the gateway to ARPANET hosts. Right?
> >>>>
> >>>> Take care,
> >>>> John
> >>>>
> >>> --
> >>> Please send any postal/overnight deliveries to:
> >>> Vint Cerf
> >>> Google, LLC
> >>> 1900 Reston Metro Plaza, 16th Floor
> >>> Reston, VA 20190
> >>> +1 (571) 213 1346 <(571)%20213-1346>
> >>>
> >>> until further notice
> >>>
> > --
> > Internet-history mailing list
> > Internet-history at elists.isoc.org
> > https://elists.isoc.org/mailman/listinfo/internet-history
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From jack at 3kitty.org  Thu Mar 27 19:43:07 2025
From: jack at 3kitty.org (Jack Haverty)
Date: Thu, 27 Mar 2025 19:43:07 -0700
Subject: [ih] Comments on Packet Radio
In-Reply-To:
References: <6bca78b5-8f64-4612-9f9a-c62e5d80d61b@pacbell.net>
 <3627a202-500f-42fe-953b-acf963b2cb8d@pacbell.net>
 <40eb730e-66e9-48da-9f57-e32df9a53339@pacbell.net>
 <265fd9bc-7dfa-44c3-8a4e-64621ace0999@3kitty.org>
 <8A388E13-726B-49C3-890D-B9C0C790A287@comcast.net>
Message-ID: <18a808a5-e1ba-4a0e-8cdd-ce9446e22473@3kitty.org>

That's actually the tectonic shift I was trying to describe.
At the time, "everyone knew" that datagrams wouldn't work with the virtual circuit mechanisms migrated out of the various network switches and into the end users' computers. Tradition and experience made it so. But some people didn't believe it was impossible, and deemed it worth trying experimentally to see if such a hare-brained scheme could work. TCP was defined by a pair of out-of-the-box visionaries, and a gaggle of implementors collected who also were unaware that their task was hopeless. Together, they made The Internet work, and proved the collective traditional wisdom wrong.

Jack

On 3/27/25 17:16, vinton cerf wrote:
> ironically, larry roberts rejected TCP/IP and developed X.25/X.75 because he said he could not sell TCP/IP and datagrams but he thought he could sell "virtual" circuits.
>
> v
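Jack's point about the "virtual circuit" moving into the hosts can be seen directly in today's sockets API, where every host offers both services side by side. A minimal sketch follows; the loopback address, port number, and payloads are arbitrary choices for the demo, not anything from the original implementations.

    # The two services Jack describes living side by side in the host:
    # a reliable byte stream (TCP) and an unguaranteed datagram service (UDP).
    import socket, threading

    PORT = 9999  # arbitrary

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)

    def echo_once():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))    # echo back over the byte stream

    threading.Thread(target=echo_once, daemon=True).start()

    # "Virtual circuit" in the host: setup, ordering, and retransmission are
    # all handled by the endpoints' TCP implementations.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect(("127.0.0.1", PORT))
        c.sendall(b"reliable byte stream")
        print("TCP echoed:", c.recv(1024))
    srv.close()

    # Datagram service: one best-effort packet, no connection, no delivery
    # promise; the application must cope with loss, duplication, reordering.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as d:
        d.sendto(b"best-effort datagram", ("127.0.0.1", PORT))

The reliability behind the first exchange lives entirely in the two endpoints' TCP code; the datagram send makes no delivery promise at all, which is exactly the split between the two services Jack describes.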
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature.asc
Type: application/pgp-signature
Size: 665 bytes
Desc: OpenPGP digital signature
URL: