From gregskinner0 at icloud.com Tue Oct 1 13:11:27 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Tue, 1 Oct 2024 13:11:27 -0700 Subject: [ih] booting linux on a 4004 Message-ID: Forwarded for Barbara ==== From: Barbara Denny To: Internet-history Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT Subject: Re: [ih] booting linux on a 4004 I am having trouble with the mailing list again so tried some editing on the thread. I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. FYI, I think the reconstitution protocol borrowed ICMP source quench. We used it for telling a host which gateway to use if a packet arrived at a gateway/interface when the packet should have been sent to a different gateway/interface. Reminder, a RP gateway used gateway centric addressing, not network. barbara From michaelgreenwald58 at gmail.com Tue Oct 1 14:19:09 2024 From: michaelgreenwald58 at gmail.com (Michael Greenwald) Date: Tue, 1 Oct 2024 14:19:09 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: References: Message-ID: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > Forwarded for Barbara > > ==== > > From: Barbara Denny > Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate thesis about the use of Source Quench for congestion control. I believe it included some experiments (maybe all artificial, or only through simulation). I don't think it had much impact on the rest of the world. From brian.e.carpenter at gmail.com Tue Oct 1 15:22:32 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 2 Oct 2024 11:22:32 +1300 Subject: [ih] booting linux on a 4004 In-Reply-To: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> Message-ID: <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > > On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >> Forwarded for Barbara >> >> ==== >> >> From: Barbara Denny >> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. > Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an > advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > thesis about the use of Source Quench for congestion control. I believe > it included some experiments (maybe all artificial, or only through > simulation). > I don't think it had much impact on the rest of the world. Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). 
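The message under discussion here is ICMP type 4, "Source Quench," from RFC 792: a gateway (or host) that discards a datagram may send type 4, code 0 back to the datagram's source, carrying the offending IP header plus the first 64 bits of its data so the source can tell which connection to slow down. A minimal sketch of that layout follows; the Python and the function names are illustrative only, not code from any implementation mentioned in this thread.

import struct

def internet_checksum(data: bytes) -> int:
    # Standard one's-complement Internet checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_source_quench(dropped_ip_header: bytes, dropped_data: bytes) -> bytes:
    # Type 4, code 0, checksum, 32-bit unused field, then the dropped
    # datagram's IP header and first 8 data bytes, per RFC 792.
    body = dropped_ip_header + dropped_data[:8]
    draft = struct.pack("!BBHI", 4, 0, 0, 0) + body
    return struct.pack("!BBHI", 4, 0, internet_checksum(draft), 0) + body

The ICMP message is then sent in an IP packet addressed back to the source of the dropped datagram, which is exactly the step questioned later in this thread: the source whose packet happened to overflow the queue is not necessarily the source causing the congestion.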
A trail of breadcrumbs tells me that he has an MSCS from Stanford, so I guess he probably wasn't an MIT undergrad. Source quench was effectively deprecated by RFC 1812 (dated 1995). People had played around with ideas (e.g. RFC 1016) but it seems that basically it was no use. A bit more Google found this, however: "4.3. Internet Congestion Control Lixia Zhang began a study of network resource allocation techniques suitable for the DARPA Internet. The Internet currently has a simple technique for resource allocation, called "Source Quench." Simple simulations have shown that this technique is not effective, and this work has produced an alternative which seems considerably more workable. Simulation of this new technique is now being performed." [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! Brian Carpenter From michaelgreenwald58 at gmail.com Tue Oct 1 15:53:31 2024 From: michaelgreenwald58 at gmail.com (Michael Greenwald) Date: Tue, 1 Oct 2024 15:53:31 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> Message-ID: <6f92fbaf-7c12-4b2d-86e7-340df85f7070@cis.upenn.edu> It wasn't Lixia (I believe it predated her arrival at MIT). It wasn't John Nagle, who wasn't (to my knowledge) at MIT at the time. As I said it had little impact, and is primarily of historical interest. But we can ask Dave Clark if anyone cares. On 10/1/24 3:22 PM, Brian E Carpenter wrote: > On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >> >> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>> Forwarded for Barbara >>> >>> ==== >>> >>> From: Barbara Denny >>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>> I think congestion issues were discussed because I remember an ICMP >>> message type called source quench (now deprecated). It was used for >>> notifying a host to reduce the traffic load to a destination.? I >>> don't remember hearing about any actual congestion experiments using >>> this message type. >> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >> thesis about the use of Source Quench for congestion control. I believe >> it included some experiments (maybe all artificial, or only through >> simulation). >> I don't think it had much impact on the rest of the world. > > Source quench is discussed in detail in John Nagle's RFC 896 (dated > 1984). > A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > I guess he probably wasn't an MIT undergrad. > > Source quench was effectively deprecated by RFC 1812 (dated 1995). People > had played around with ideas (e.g. RFC 1016) but it seems that basically > it was no use. > > A bit more Google found this, however: > > "4.3. Internet Congestion Control > Lixia Zhang began a study of network resource allocation techniques > suitable for > the DARPA Internet. The Internet currently has a simple technique for > resource > allocation, called "Source Quench." > Simple simulations have shown that this technique is not effective, > and this work > has produced an alternative which seems considerably more workable. > Simulation > of this new technique is now being performed." 
> > [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > > Lixia was then a grad student under Dave Clark. Of course she's at > UCLA now. If she isn't on this list, she should be! > > ?? Brian Carpenter > > From sob at sobco.com Tue Oct 1 16:34:38 2024 From: sob at sobco.com (Scott Bradner) Date: Tue, 1 Oct 2024 19:34:38 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> Message-ID: <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> I remember talking with Craig Partridge (on a flight to somewhere) about source quench during the time when 1812 was being written - I do not recall the specific issues but I recall that there were more than one issue (if DoS was not an issue at the time, it should have been) Scott > On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history wrote: > > On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>> Forwarded for Barbara >>> >>> ==== >>> >>> From: Barbara Denny >>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >> thesis about the use of Source Quench for congestion control. I believe >> it included some experiments (maybe all artificial, or only through >> simulation). >> I don't think it had much impact on the rest of the world. > > Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). > A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > I guess he probably wasn't an MIT undergrad. > > Source quench was effectively deprecated by RFC 1812 (dated 1995). People > had played around with ideas (e.g. RFC 1016) but it seems that basically > it was no use. > > A bit more Google found this, however: > > "4.3. Internet Congestion Control > Lixia Zhang began a study of network resource allocation techniques suitable for > the DARPA Internet. The Internet currently has a simple technique for resource > allocation, called "Source Quench." > Simple simulations have shown that this technique is not effective, and this work > has produced an alternative which seems considerably more workable. Simulation > of this new technique is now being performed." > > [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > > Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! 
> > Brian Carpenter > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From sob at sobco.com Tue Oct 1 16:37:23 2024 From: sob at sobco.com (Scott Bradner) Date: Tue, 1 Oct 2024 19:37:23 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> Message-ID: <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> multicast is also an issue but I do not recall if that was one that Craig & I talked about Scott > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history wrote: > > I remember talking with Craig Partridge (on a flight to somewhere) about source quench > during the time when 1812 was being written - I do not recall > the specific issues but I recall that there were more than one issue > > (if DoS was not an issue at the time, it should have been) > > Scott > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history wrote: >> >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> From: Barbara Denny >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>> thesis about the use of Source Quench for congestion control. I believe >>> it included some experiments (maybe all artificial, or only through >>> simulation). >>> I don't think it had much impact on the rest of the world. >> >> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >> I guess he probably wasn't an MIT undergrad. >> >> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >> had played around with ideas (e.g. RFC 1016) but it seems that basically >> it was no use. >> >> A bit more Google found this, however: >> >> "4.3. Internet Congestion Control >> Lixia Zhang began a study of network resource allocation techniques suitable for >> the DARPA Internet. The Internet currently has a simple technique for resource >> allocation, called "Source Quench." >> Simple simulations have shown that this technique is not effective, and this work >> has produced an alternative which seems considerably more workable. Simulation >> of this new technique is now being performed." >> >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >> >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! 
>> >> Brian Carpenter >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From karl at iwl.com Tue Oct 1 16:53:35 2024 From: karl at iwl.com (Karl Auerbach) Date: Tue, 1 Oct 2024 16:53:35 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> Message-ID: <4941b30d-a6cb-48a4-870b-74e7eef8a43d@iwl.com> Not quite apropos Source Quench, but the MIT PC/IP project (John Romkey and Dave Bridgham under Saltzer and Clark) wrote a TCP stack that ignored outbound (transmit) TCP window sizes. The reason was that the TCP was presumed to be used only under a Telnet client operated by a human typing on a keyboard. It was expected that the amount of data would be so small - typing keystrokes - that there would never be an outbound window size issue. So the code simply transmitted whenever it had data. As the PC/IP project morphed into FTP Software and its real TCP stack, actual honoring of flow control windows was added. --karl-- On 10/1/24 2:19 PM, Michael Greenwald via Internet-history wrote: > > On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >> Forwarded for Barbara >> >> ==== >> >> From: Barbara Denny >> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >> I think congestion issues were discussed because I remember an ICMP >> message type called source quench (now deprecated). It was used for >> notifying a host to reduce the traffic load to a destination. I >> don't remember hearing about any actual congestion experiments using >> this message type. > Of only academic interest: I believe that, circa 1980 +/- 1-2 years, > an advisee of either Dave Clark or Jerry Saltzer, wrote an > undergraduate thesis about the use of Source Quench for congestion > control. I believe it included some experiments (maybe all artificial, > or only through simulation). > I don't think it had much impact on the rest of the world. From b_a_denny at yahoo.com Tue Oct 1 17:50:23 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Wed, 2 Oct 2024 00:50:23 +0000 (UTC) Subject: [ih] booting linux on a 4004 In-Reply-To: <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> Message-ID: <1191418713.15094180.1727830223232@mail.yahoo.com> In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people). The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings. I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can find something that covers this testing.
barbara On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history wrote: multicast is also an issue but I do not recall if that was one that Craig & I talked about Scott > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history wrote: > > I remember talking with Craig Partridge (on a flight to somewhere) about source quench > during the time when 1812 was being written - I do not recall > the specific issues but I recall that there were more than one issue > > (if DoS was not an issue at the time, it should have been) > > Scott > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history wrote: >> >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> From: Barbara Denny >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination.? I don't remember hearing about any actual congestion experiments using this message type. >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>> thesis about the use of Source Quench for congestion control. I believe >>> it included some experiments (maybe all artificial, or only through >>> simulation). >>> I don't think it had much impact on the rest of the world. >> >> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >> I guess he probably wasn't an MIT undergrad. >> >> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >> had played around with ideas (e.g. RFC 1016) but it seems that basically >> it was no use. >> >> A bit more Google found this, however: >> >> "4.3. Internet Congestion Control >> Lixia Zhang began a study of network resource allocation techniques suitable for >> the DARPA Internet. The Internet currently has a simple technique for resource >> allocation, called "Source Quench." >> Simple simulations have shown that this technique is not effective, and this work >> has produced an alternative which seems considerably more workable. Simulation >> of this new technique is now being performed." >> >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >> >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! >> >>? Brian Carpenter From vint at google.com Tue Oct 1 19:10:10 2024 From: vint at google.com (Vint Cerf) Date: Tue, 1 Oct 2024 22:10:10 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <1191418713.15094180.1727830223232@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> Message-ID: One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. 
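A toy illustration of the failure mode Vint describes, with invented names and numbers rather than anything measured on a real router: under drop-tail queuing, the quench goes to whichever source happens to arrive once the FIFO is already full, not to the source occupying it.

from collections import Counter, deque

QUEUE_LIMIT = 8
fifo = deque()

def arrive(source, seq):
    # Drop-tail: if the queue is full, drop the arriving packet and
    # quench the source of that packet (the RFC 792 behaviour).
    if len(fifo) >= QUEUE_LIMIT:
        return source
    fifo.append((source, seq))
    return None

for i in range(QUEUE_LIMIT):            # a heavy flow fills the queue first
    arrive("elephant", i)
print(arrive("mouse", 0))               # -> 'mouse' is the one quenched
print(Counter(src for src, _ in fifo))  # Counter({'elephant': 8})

In this toy case the heavy flow never sees a quench as long as its own packets are the ones that keep fitting; Craig's later note about winning and losing TCPs is the same effect observed on real links.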
v On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > In a brief attempt to try to find some information about the early MIT > work you mentioned, I ended up tripping on this Final Report from ISI in > DTIC. It does talk a fair amount about congestion control and source > quench (plus other things that might interest people). The period of > performance is 1987 to 1990 which is much later than I was considering in > my earlier message. > > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > Even though the report mentions testing on DARTnet, I don't remember > anything about this during our DARTnet meetings. I did join the project > after the start so perhaps the work was done before I began to participate. > I also couldn't easily find the journal they mention as a place for > publishing their findings. I will have more time later to see if I can > something that covers this testing. > > barbara > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via > Internet-history wrote: > > multicast is also an issue but I do not recall if that was one that Craig > & I talked about > > Scott > > > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I remember talking with Craig Partridge (on a flight to somewhere) about > source quench > > during the time when 1812 was being written - I do not recall > > the specific issues but I recall that there were more than one issue > > > > (if DoS was not an issue at the time, it should have been) > > > > Scott > > > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > >>>> Forwarded for Barbara > >>>> > >>>> ==== > >>>> > >>>> From: Barbara Denny > >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > >>>> I think congestion issues were discussed because I remember an ICMP > message type called source quench (now deprecated). It was used for > notifying a host to reduce the traffic load to a destination. I don't > remember hearing about any actual congestion experiments using this message > type. > >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an > >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > >>> thesis about the use of Source Quench for congestion control. I believe > >>> it included some experiments (maybe all artificial, or only through > >>> simulation). > >>> I don't think it had much impact on the rest of the world. > >> > >> Source quench is discussed in detail in John Nagle's RFC 896 (dated > 1984). > >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > >> I guess he probably wasn't an MIT undergrad. > >> > >> Source quench was effectively deprecated by RFC 1812 (dated 1995). > People > >> had played around with ideas (e.g. RFC 1016) but it seems that basically > >> it was no use. > >> > >> A bit more Google found this, however: > >> > >> "4.3. Internet Congestion Control > >> Lixia Zhang began a study of network resource allocation techniques > suitable for > >> the DARPA Internet. The Internet currently has a simple technique for > resource > >> allocation, called "Source Quench." 
> >> Simple simulations have shown that this technique is not effective, and > this work > >> has produced an alternative which seems considerably more workable. > Simulation > >> of this new technique is now being performed." > >> > >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > >> > >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA > now. If she isn't on this list, she should be! > >> > >> Brian Carpenter > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From b_a_denny at yahoo.com Tue Oct 1 22:48:18 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Wed, 2 Oct 2024 05:48:18 +0000 (UTC) Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> Message-ID: <1416965914.15145634.1727848098390@mail.yahoo.com> Just throwing some thoughts out here ...... I can see how this happens in a FIFO queuing world. However a lot of work has gone into fair queuing starting in the late 80s. Just wondering if anyone has done work utilizing fair queuing and source quench. For example, I think I can see how to use fair queuing information to better select who to send a source quench to. At least I can see how to do it with Stochastic Fairness Queueing since I worked on it and I remember a fair amount about how it was implemented. I wouldn't be able to provide a guarantee that the wrong host would never receive a source quench but the likelihood should be much lower. Considering whether the use of NAT creates undesirable behavior is also important and I am sure there are probably other cases that need to be checked. Hum, it might also be interesting to speculate whether this could have any effect on bufferbloat but I fess up I need to learn more about the work done in the area of bufferbloat. I was involved with other things when this started to appear on my radar screen as a hot topic. I will admit I wish I had done more work on possible buffering effects from implementation choices at the time I did work on SFQ but there were contractual obligations that restricted how much time I could devote to the SFQ part of the project. Just curious, ECN (Explicit Congestion Notification) is optional. Does anyone have any idea about its use in the Internet? barbara On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf wrote: One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. v On Tue, Oct 1, 2024 at 8:50 PM Barbara Denny via Internet-history wrote: In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people).
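A rough sketch of the selection Barbara suggests above, assuming an SFQ-style queue: flows are hashed into a small set of per-flow queues, and when buffers run out the quench (or ECN mark) is aimed at the source feeding the longest queue rather than at whoever sent the most recent packet. The bucket count, the hash, and all names are illustrative; this is not SFQ as actually implemented.

import zlib
from collections import deque

N_BUCKETS = 16
PERTURB = 0x5BD1          # SFQ re-keys its hash periodically; a fixed key stands in here
buckets = [deque() for _ in range(N_BUCKETS)]

def bucket_for(src, dst, sport, dport):
    key = f"{src}|{dst}|{sport}|{dport}|{PERTURB}".encode()
    return zlib.crc32(key) % N_BUCKETS

def enqueue(pkt):
    buckets[bucket_for(pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"])].append(pkt)

def quench_target():
    # Blame the flow with the most packets queued, not the latest arrival.
    longest = max(buckets, key=len)
    return longest[0]["src"] if longest else None

Because flows can still collide in the hash, this cannot guarantee the wrong host is never quenched, as Barbara notes; it only makes it much less likely.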
The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings.? I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can something that covers this testing. barbara ? ? On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history wrote:? ?multicast is also an issue but I do not recall if that was one that Craig & I talked about Scott > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history wrote: > > I remember talking with Craig Partridge (on a flight to somewhere) about source quench > during the time when 1812 was being written - I do not recall > the specific issues but I recall that there were more than one issue > > (if DoS was not an issue at the time, it should have been) > > Scott > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history wrote: >> >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> From: Barbara Denny >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination.? I don't remember hearing about any actual congestion experiments using this message type. >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>> thesis about the use of Source Quench for congestion control. I believe >>> it included some experiments (maybe all artificial, or only through >>> simulation). >>> I don't think it had much impact on the rest of the world. >> >> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >> I guess he probably wasn't an MIT undergrad. >> >> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >> had played around with ideas (e.g. RFC 1016) but it seems that basically >> it was no use. >> >> A bit more Google found this, however: >> >> "4.3. Internet Congestion Control >> Lixia Zhang began a study of network resource allocation techniques suitable for >> the DARPA Internet. The Internet currently has a simple technique for resource >> allocation, called "Source Quench." >> Simple simulations have shown that this technique is not effective, and this work >> has produced an alternative which seems considerably more workable. Simulation >> of this new technique is now being performed." >> >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >> >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! >> >>? 
Brian Carpenter -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history -- Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 until further notice From vgcerf at gmail.com Tue Oct 1 23:39:50 2024 From: vgcerf at gmail.com (vinton cerf) Date: Wed, 2 Oct 2024 02:39:50 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <1416965914.15145634.1727848098390@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: ECN seems to be helpful but congestion and flow control have not been my expertise as should be obvious by now!! V On Wed, Oct 2, 2024, 01:49 Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Just throwing some thoughts out here ...... > I can see how this happens in a FIFO queuing world. However a lot of > work has gone into fair queuing starting in the late 80s. Just wondering > if anyone has done work utilizing fair queuing and source quench. For > example, I think I can see how to use fair queuing information to better > select who to send a source quench to. At least I can see how to do it with > Stochastic Fairness Queueing since I worked on it and I remember a fair > amount about how it was implemented. I wouldn't be able to provide a > guarantee that the wrong host would never receive a source quench but the > likelihood should be much lower. Considering whether the use of NAT > creates undesirable behavior is also important and I am sure there are > probably other cases that need to be checked. > Hum, it might also be interesting to speculate whether this could have > any effect on bufferbloat but I fess up I need to learn more about the work > done in the area of bufferbloat. I was involved with other things when > this started to appear on my radar screen as a hot topic. I will admit I > wish I had done more work on possible buffering effects from implementation > choices at the time I did work on SFQ but there were contractual > obligations that restricted how much time I could devote to the SFQ part of > the project. > Just curious, ECN (Explicit Congestion Notification) is optional . Does > anyone have any idea about its use in the Internet? > barbara > > On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < > vint at google.com> wrote: > > One basic problem with blaming the "last packet that caused intermediate > router congestion" is that it usually blamed the wrong source, among other > problems. Van Jacobson was/is the guru of flow control (among others) who > might remember more. > > v > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > In a brief attempt to try to find some information about the early MIT > work you mentioned, I ended up tripping on this Final Report from ISI in > DTIC. It does talk a fair amount about congestion control and source > quench (plus other things that might interest people). The period of > performance is 1987 to 1990 which is much later than I was considering in > my earlier message. 
> > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > Even though the report mentions testing on DARTnet, I don't remember > anything about this during our DARTnet meetings. I did join the project > after the start so perhaps the work was done before I began to participate. > I also couldn't easily find the journal they mention as a place for > publishing their findings. I will have more time later to see if I can > something that covers this testing. > > barbara > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via > Internet-history wrote: > > multicast is also an issue but I do not recall if that was one that Craig > & I talked about > > Scott > > > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I remember talking with Craig Partridge (on a flight to somewhere) about > source quench > > during the time when 1812 was being written - I do not recall > > the specific issues but I recall that there were more than one issue > > > > (if DoS was not an issue at the time, it should have been) > > > > Scott > > > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > >>>> Forwarded for Barbara > >>>> > >>>> ==== > >>>> > >>>> From: Barbara Denny > >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > >>>> I think congestion issues were discussed because I remember an ICMP > message type called source quench (now deprecated). It was used for > notifying a host to reduce the traffic load to a destination. I don't > remember hearing about any actual congestion experiments using this message > type. > >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an > >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > >>> thesis about the use of Source Quench for congestion control. I believe > >>> it included some experiments (maybe all artificial, or only through > >>> simulation). > >>> I don't think it had much impact on the rest of the world. > >> > >> Source quench is discussed in detail in John Nagle's RFC 896 (dated > 1984). > >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > >> I guess he probably wasn't an MIT undergrad. > >> > >> Source quench was effectively deprecated by RFC 1812 (dated 1995). > People > >> had played around with ideas (e.g. RFC 1016) but it seems that basically > >> it was no use. > >> > >> A bit more Google found this, however: > >> > >> "4.3. Internet Congestion Control > >> Lixia Zhang began a study of network resource allocation techniques > suitable for > >> the DARPA Internet. The Internet currently has a simple technique for > resource > >> allocation, called "Source Quench." > >> Simple simulations have shown that this technique is not effective, and > this work > >> has produced an alternative which seems considerably more workable. > Simulation > >> of this new technique is now being performed." > >> > >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > >> > >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA > now. If she isn't on this list, she should be! 
> >> > >> Brian Carpenter > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 > Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 > > > until further notice > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From craig at tereschau.net Wed Oct 2 02:50:29 2024 From: craig at tereschau.net (Craig Partridge) Date: Wed, 2 Oct 2024 03:50:29 -0600 Subject: [ih] booting linux on a 4004 In-Reply-To: <1416965914.15145634.1727848098390@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: It has been a long time since I thought about many of these questions, but here are some recollections. Per Barbara's point, in a FIFO world, Van showed c. 1990 that the dominant traffic flows tended to synchronize their traffic to moments there was space in the queue, with the result that the non-dominant flows were the ones that had packets arrive when the queue was full, and thus had their packets dropped. So the wrong flows got penalized. It may not be the same work, but Van also had graphs showing that when multiple TCPs flowed through a bottleneck, there tended to be winning and losing TCPs (as I recall, often one TCP getting a trickle of bandwidth while others shared roughly equally). I believe Van's graphs motivated Sally Floyd to look at the traffic dynamics in much greater depth and figure out the dynamics of ECN (with various co-contributors). Further pushing my memory, I believe Jon Crowcroft did some studies of heavily overloaded links (where # TCP connections exceeded the # of packets in the delay*bandwidth product) that took Van's graphs to the extreme and showed links behaving like a token-sharing network. Craig On Tue, Oct 1, 2024 at 11:49?PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Just throwing some thoughts out here ...... > I can see how this happens in a FIFO queuing world. However a lot of > work has gone into fair queuing starting in the late 80s. Just wondering > if anyone has done work utilizing fair queuing and source quench. For > example, I think I can see how to use fair queuing information to better > select who to send a source quench to. At least I can see how to do it with > Stochastic Fairness Queueing since I worked on it and I remember a fair > amount about how it was implemented. I wouldn't be able to provide a > guarantee that the wrong host would never receive a source quench but the > likelihood should be much lower. Considering whether the use of NAT > creates undesirable behavior is also important and I am sure there are > probably other cases that need to be checked. > Hum, it might also be interesting to speculate whether this could have > any effect on bufferbloat but I fess up I need to learn more about the work > done in the area of bufferbloat. I was involved with other things when > this started to appear on my radar screen as a hot topic. 
I will admit I > wish I had done more work on possible buffering effects from implementation > choices at the time I did work on SFQ but there were contractual > obligations that restricted how much time I could devote to the SFQ part of > the project. > Just curious, ECN (Explicit Congestion Notification) is optional . Does > anyone have any idea about its use in the Internet? > barbara > > On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < > vint at google.com> wrote: > > One basic problem with blaming the "last packet that caused intermediate > router congestion" is that it usually blamed the wrong source, among other > problems. Van Jacobson was/is the guru of flow control (among others) who > might remember more. > > v > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > In a brief attempt to try to find some information about the early MIT > work you mentioned, I ended up tripping on this Final Report from ISI in > DTIC. It does talk a fair amount about congestion control and source > quench (plus other things that might interest people). The period of > performance is 1987 to 1990 which is much later than I was considering in > my earlier message. > > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > Even though the report mentions testing on DARTnet, I don't remember > anything about this during our DARTnet meetings. I did join the project > after the start so perhaps the work was done before I began to participate. > I also couldn't easily find the journal they mention as a place for > publishing their findings. I will have more time later to see if I can > something that covers this testing. > > barbara > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via > Internet-history wrote: > > multicast is also an issue but I do not recall if that was one that Craig > & I talked about > > Scott > > > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I remember talking with Craig Partridge (on a flight to somewhere) about > source quench > > during the time when 1812 was being written - I do not recall > > the specific issues but I recall that there were more than one issue > > > > (if DoS was not an issue at the time, it should have been) > > > > Scott > > > >> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > >>>> Forwarded for Barbara > >>>> > >>>> ==== > >>>> > >>>> From: Barbara Denny > >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > >>>> I think congestion issues were discussed because I remember an ICMP > message type called source quench (now deprecated). It was used for > notifying a host to reduce the traffic load to a destination. I don't > remember hearing about any actual congestion experiments using this message > type. > >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an > >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > >>> thesis about the use of Source Quench for congestion control. I believe > >>> it included some experiments (maybe all artificial, or only through > >>> simulation). > >>> I don't think it had much impact on the rest of the world. > >> > >> Source quench is discussed in detail in John Nagle's RFC 896 (dated > 1984). 
> >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > >> I guess he probably wasn't an MIT undergrad. > >> > >> Source quench was effectively deprecated by RFC 1812 (dated 1995). > People > >> had played around with ideas (e.g. RFC 1016) but it seems that basically > >> it was no use. > >> > >> A bit more Google found this, however: > >> > >> "4.3. Internet Congestion Control > >> Lixia Zhang began a study of network resource allocation techniques > suitable for > >> the DARPA Internet. The Internet currently has a simple technique for > resource > >> allocation, called "Source Quench." > >> Simple simulations have shown that this technique is not effective, and > this work > >> has produced an alternative which seems considerably more workable. > Simulation > >> of this new technique is now being performed." > >> > >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > >> > >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA > now. If she isn't on this list, she should be! > >> > >> Brian Carpenter > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 > Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 > > > until further notice > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From michaelgreenwald58 at gmail.com Wed Oct 2 11:18:37 2024 From: michaelgreenwald58 at gmail.com (Michael Greenwald) Date: Wed, 2 Oct 2024 11:18:37 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <32293b4f-ce9b-4bae-958b-fc9bdba4b773@gmail.com> References: <32293b4f-ce9b-4bae-958b-fc9bdba4b773@gmail.com> Message-ID: <338b9d0b-6a6f-408c-b178-4cbf8210ada0@cis.upenn.edu> Hi Lixia! (Yes, it's been a long time. At least a decade or 2) My memory is (no surprise) cloudy. For some reason I thought you arrived a year or two later than '81. I have no memory (again, no surprise) of what, specifically, your promising "alternative" was, but an early reference to (virtual)clock based control of packets sounds plausible. I'm not shocked by the difference between Van's comment back then and BBR now. There's never a guarantee that claims that something "simply cannot work" or "will obviously help" back then (or even now) will hold up over long time periods, unless there is a formal proof or a lot of experimental validation. (And even then conditions change). About the work that I thought predated you: Dave Clark remembers that there was an undergraduate who did work on Source Quench, but not his name, nor details of what he did. So at least I didn't completely hallucinate.... On 10/2/24 7:46 AM, Lixia Zhang wrote: > Hi Michael, > > A very long time no "see"! (a few decades now?) > I'm not on this internet-history list, Len forwarded the following msg > to me. Just to add a bit fact: I started at MIT September 1981, the > same month of RFC 791-793 publication (IP/ICMP/TCP specifications). I > still recall that Dave (Clark) handed me a printed copy and said > "these are hot off the press, you read them". 
> > In my early years, after I deciphered Noel's C-Gateway implementation, > I did look into using IP source quench for congestion control (I guess > that's why the mentioning in the report Brian pointed to). I can't > recall I got any simulation results, but do remember the problem I ran > into: IP forwarding was stateless, a gateway either had to send ICMP > source quench for every dropped IP packet, or otherwise maintain some > state... > > The "alternative which seems considerably more workable" might be > referring to an idea for having the sending host pace out packets, and > I recall Van claimed at the time that rate-based control "simply wont > work" > (now BBR does exactly that, calculating packet pacing rate:-) > > Lixia > >> *From: *Michael Greenwald via Internet-history >> >> *Subject: **Re: [ih] booting linux on a 4004* >> *Date: *October 1, 2024 at 3:53:31?PM PDT >> *To: *Brian E Carpenter , >> internet-history at elists.isoc.org >> *Cc: *Michael Greenwald , >> "greenwald at cs.stanford.edu" >> *Reply-To: *greenwald at cis.upenn.edu >> >> It wasn't Lixia (I believe it predated her arrival at MIT). >> It wasn't John Nagle, who wasn't (to my knowledge) at MIT at the time. >> As I said it had little impact, and is primarily of historical >> interest. But we can ask Dave Clark if anyone cares. >> >> On 10/1/24 3:22 PM, Brian E Carpenter wrote: >>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>> >>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>> Forwarded for Barbara >>>>> >>>>> ==== >>>>> >>>>> From: Barbara Denny >>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>> I think congestion issues were discussed because I remember an >>>>> ICMP message type called source quench (now deprecated). It was >>>>> used for notifying a host to reduce the traffic load to a >>>>> destination.? I don't remember hearing about any actual congestion >>>>> experiments using this message type. >>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>> thesis about the use of Source Quench for congestion control. I believe >>>> it included some experiments (maybe all artificial, or only through >>>> simulation). >>>> I don't think it had much impact on the rest of the world. >>> >>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>> 1984). >>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>> I guess he probably wasn't an MIT undergrad. >>> >>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>> People >>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>> it was no use. >>> >>> A bit more Google found this, however: >>> >>> "4.3. Internet Congestion Control >>> Lixia Zhang began a study of network resource allocation techniques >>> suitable for >>> the DARPA Internet. The Internet currently has a simple technique >>> for resource >>> allocation, called "Source Quench." >>> Simple simulations have shown that this technique is not effective, >>> and this work >>> has produced an alternative which seems considerably more workable. >>> Simulation >>> of this new technique is now being performed." >>> >>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>> >>> Lixia was then a grad student under Dave Clark. Of course she's at >>> UCLA now. If she isn't on this list, she should be! >>> >>> ?? 
Brian Carpenter >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Wed Oct 2 14:18:41 2024 From: jeanjour at comcast.net (John Day) Date: Wed, 2 Oct 2024 17:18:41 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> Message-ID: <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> Busy day. Just getting to looking at these. AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions: Flow control is a pair-wise issue; congestion management is an n-party issue. Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.) A congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.) Congestion is a stochastic phenomenon. The cause is too many packets arriving within a given short period. Load is not the root cause of congestion but does increase the probability. (This is an error I see in most every paper I read on the topic.) Congestion has been observed on a network with a 0.1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late. The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify. Something like ECN or Source Quench (if like ECN it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.) Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers. Jacobson's solution for TCP is about the worst one could expect: a congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory (implicit notification, no ECN). In a way this is not Van's fault. It is the classic engineer's mistake: solve the narrow problem but fail to consider the context. This solution might be acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers with TCP congestion control result in warring feedback loops with very different response times. As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions. There is probably much more, but this is what comes to mind.
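One common formalization of the knee John refers to, sketched here with illustrative numbers and not necessarily the exact definition in the Jain/Ramakrishnan DEC reports, is the "power" metric: throughput divided by delay, which peaks at the knee of the throughput/delay curve well before the cliff where delay blows up.

def power(load, mu=1.0):
    # Idealized M/M/1 bottleneck: delay ~ 1/(mu - load), so
    # power = load * (mu - load), which peaks at half utilization.
    if load >= mu:
        return 0.0                     # past the cliff: delay grows without bound
    delay = 1.0 / (mu - load)
    return load / delay

loads = [i / 100 for i in range(1, 100)]
print(max(loads, key=power))           # ~0.5: the knee sits near 50% of the bottleneck rate

Operating near that point keeps queues short (few congestion events, few retransmissions), whereas driving the link toward the cliff buys throughput only at the cost of standing queues and loss.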
Take care, John > On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history wrote: > > One basic problem with blaming the "last packet that caused intermediate > router congestion" is that it usually blamed the wrong source, among other > problems. Van Jacobson was/is the guru of flow control (among others) who > might remember more. > > > v > > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > >> In a brief attempt to try to find some information about the early MIT >> work you mentioned, I ended up tripping on this Final Report from ISI in >> DTIC. It does talk a fair amount about congestion control and source >> quench (plus other things that might interest people). The period of >> performance is 1987 to 1990 which is much later than I was considering in >> my earlier message. >> >> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >> >> Even though the report mentions testing on DARTnet, I don't remember >> anything about this during our DARTnet meetings. I did join the project >> after the start so perhaps the work was done before I began to participate. >> I also couldn't easily find the journal they mention as a place for >> publishing their findings. I will have more time later to see if I can >> something that covers this testing. >> >> barbara >> >> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >> Internet-history wrote: >> >> multicast is also an issue but I do not recall if that was one that Craig >> & I talked about >> >> Scott >> >>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> >>> I remember talking with Craig Partridge (on a flight to somewhere) about >> source quench >>> during the time when 1812 was being written - I do not recall >>> the specific issues but I recall that there were more than one issue >>> >>> (if DoS was not an issue at the time, it should have been) >>> >>> Scott >>> >>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >> internet-history at elists.isoc.org> wrote: >>>> >>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>> Forwarded for Barbara >>>>>> >>>>>> ==== >>>>>> >>>>>> From: Barbara Denny >>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>> I think congestion issues were discussed because I remember an ICMP >> message type called source quench (now deprecated). It was used for >> notifying a host to reduce the traffic load to a destination. I don't >> remember hearing about any actual congestion experiments using this message >> type. >>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>> thesis about the use of Source Quench for congestion control. I believe >>>>> it included some experiments (maybe all artificial, or only through >>>>> simulation). >>>>> I don't think it had much impact on the rest of the world. >>>> >>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >> 1984). >>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>> I guess he probably wasn't an MIT undergrad. >>>> >>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >> People >>>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>>> it was no use. >>>> >>>> A bit more Google found this, however: >>>> >>>> "4.3. 
Internet Congestion Control >>>> Lixia Zhang began a study of network resource allocation techniques >> suitable for >>>> the DARPA Internet. The Internet currently has a simple technique for >> resource >>>> allocation, called "Source Quench." >>>> Simple simulations have shown that this technique is not effective, and >> this work >>>> has produced an alternative which seems considerably more workable. >> Simulation >>>> of this new technique is now being performed." >>>> >>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>>> >>>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA >> now. If she isn't on this list, she should be! >>>> >>>> Brian Carpenter >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Wed Oct 2 14:30:42 2024 From: jeanjour at comcast.net (John Day) Date: Wed, 2 Oct 2024 17:30:42 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <1416965914.15145634.1727848098390@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: The response to bufferbloat has always struck me as looking for your keys under a street light when that wasn't where you dropped them but there is light there. Initially, bufferbloat was not a problem because memory was expensive and when TCP ran out of buffers (or got low), the connection simply blocked the sending application until buffers were available. This was still true with the advent of NIC cards. Memory was still tight. However, as memory got cheap and NIC cards had oceans of memory, TCP never got low on buffers and no one told the application to slow down or wait, so there was local congestion collapse: bufferbloat. One part of the solution would be interface flow control between the sending application and TCP (you would have thought that would have occurred to implementers anyway, it is obvious) and/or simply restrict the amount of buffers TCP has available so that it runs out and blocks the sending application before things get bad and opens up when buffers are available. But virtually all of the papers I see are on different drop-strategies, and oddly enough they never find their keys. Take care, John > On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history wrote: > > Just throwing some thoughts out here ...... > I can see how this happens in a FIFO queuing world. However a lot of work has gone into fair queuing starting in the late 80s. Just wondering if anyone has done work utilizing fair queuing and source quench. For example, I think I can see how to use fair queuing information to better select who to send a source quench to. At least I can see how to do it with Stochastic Fairness Queueing since I worked on it and I remember a fair amount about how it was implemented.
I wouldn't be able to provide a guarantee that the wrong host would never receive a source quench but the likelihood should be much lower. Considering whether the use of NAT creates undesirable behavior is also important and I am sure there are probably other cases that need to be checked. > Hum, it might also be interesting to speculate whether this could have any effect on bufferbloat but I fess up I need to learn more about the work done in the area of bufferbloat. I was involved with other things when this started to appear on my radar screen as a hot topic. I will admit I wish I had done more work on possible buffering effects from implementation choices at the time I did work on SFQ but there were contractual obligations that restricted how much time I could devote to the SFQ part of the project. > Just curious, ECN (Explicit Congestion Notification) is optional . Does anyone have any idea about its use in the Internet? > barbara > > On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf wrote: > > One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. > > v > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history wrote: > > In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people). The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. > > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings. I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can something that covers this testing. > > barbara > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history wrote: > > multicast is also an issue but I do not recall if that was one that Craig & I talked about > > Scott > >> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history wrote: >> >> I remember talking with Craig Partridge (on a flight to somewhere) about source quench >> during the time when 1812 was being written - I do not recall >> the specific issues but I recall that there were more than one issue >> >> (if DoS was not an issue at the time, it should have been) >> >> Scott >> >>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history wrote: >>> >>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>> Forwarded for Barbara >>>>> >>>>> ==== >>>>> >>>>> From: Barbara Denny >>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. 
>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>> thesis about the use of Source Quench for congestion control. I believe >>>> it included some experiments (maybe all artificial, or only through >>>> simulation). >>>> I don't think it had much impact on the rest of the world. >>> >>> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>> I guess he probably wasn't an MIT undergrad. >>> >>> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>> it was no use. >>> >>> A bit more Google found this, however: >>> >>> "4.3. Internet Congestion Control >>> Lixia Zhang began a study of network resource allocation techniques suitable for >>> the DARPA Internet. The Internet currently has a simple technique for resource >>> allocation, called "Source Quench." >>> Simple simulations have shown that this technique is not effective, and this work >>> has produced an alternative which seems considerably more workable. Simulation >>> of this new technique is now being performed." >>> >>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>> >>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! >>> >>> Brian Carpenter > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 > > > until further notice > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vint at google.com Wed Oct 2 14:51:00 2024 From: vint at google.com (Vint Cerf) Date: Wed, 2 Oct 2024 17:51:00 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> Message-ID: John, you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. v On Wed, Oct 2, 2024 at 5:19?PM John Day wrote: > Busy day. Just getting to looking at these. > > AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC > Report is a masterpiece. The epitome of what good computer research should > be. Their initial work really nailed the problem. It is unfortunate that it > appears to have been totally forgotten. Of course there was still work to > do. A few conclusions: > > Flow control is a pair-wise issue, Congestion management is an n-party > issue. > > Any layer that relays will exhibit congestion. (Contention for > multi-access media is a form of congestion.) > > A Congestion solution should minimize congestion events and > retransmissions. (TCP maximizes both.) 
> > Congestion is a stochastic phenomena. The cause is too many packets > arriving with a given short period. > > Load is not the root cause of congestion but does increase the > probability. (This is an error I see in most every paper I read on the > topic.) Congestion has been observed on a network with a .1% loading. Often > congestion will clear on its own. Waiting for load to be the condition for > a response makes the response late. > > The effectiveness of any congestion avoidance solution will deteriorate > with increasing time-to-notify. > > Something like ECN or SourceQuench (if like ECN it is sent to all sources > of the congested router) is absolutely required to ensure that the effects > of congestion management remain localized to the layer in which it > occurred. However, neither one alone is sufficient without the action to be > taken in response to receiving them. (I would think SQ would have some > advantage in that the sender would be notified sooner than with ECN.) > > Without ECN, the congestion scheme is predatory and will interact badly > with congestion solutions in lower layers. > > > Jacobson?s solution for TCP is about the worst, one could expect: A > congestion *avoidance* solution that works by causing congestion? It has > potentially done irreparable damage to the Internet, because it is > predatory. (implicit notification, no ECN) In a way this is not Van?s > fault. It is the classic engineer?s mistake: Solve the narrow problem but > fail to consider the context. This solution might acceptable for a network, > but not for an Internet, where multiple layers (some of less scope) relay > and are thus subject to congestion. Attempts to do congestion control in > lower layers with TCP congestion control results in warring feedback loops > with very different response times. > > As Jain and KK point out, TCP optimizes for the edge of the cliff of > congestion collapse, while they propose optimizing for the knee of the > throughput/delay curve to minimize both congestion events and > retransmissions. > > There is probably much more, but this is what comes to mind. > > Take care, > John > > > On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > > One basic problem with blaming the "last packet that caused intermediate > router congestion" is that it usually blamed the wrong source, among other > problems. Van Jacobson was/is the guru of flow control (among others) who > might remember more. > > > v > > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > In a brief attempt to try to find some information about the early MIT > work you mentioned, I ended up tripping on this Final Report from ISI in > DTIC. It does talk a fair amount about congestion control and source > quench (plus other things that might interest people). The period of > performance is 1987 to 1990 which is much later than I was considering in > my earlier message. > > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > Even though the report mentions testing on DARTnet, I don't remember > anything about this during our DARTnet meetings. I did join the project > after the start so perhaps the work was done before I began to participate. > I also couldn't easily find the journal they mention as a place for > publishing their findings. I will have more time later to see if I can > something that covers this testing. 
> > barbara > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via > Internet-history wrote: > > multicast is also an issue but I do not recall if that was one that Craig > & I talked about > > Scott > > On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > I remember talking with Craig Partridge (on a flight to somewhere) about > > source quench > > during the time when 1812 was being written - I do not recall > the specific issues but I recall that there were more than one issue > > (if DoS was not an issue at the time, it should have been) > > Scott > > On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > > On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > > Forwarded for Barbara > > ==== > > From: Barbara Denny > Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > I think congestion issues were discussed because I remember an ICMP > > message type called source quench (now deprecated). It was used for > notifying a host to reduce the traffic load to a destination. I don't > remember hearing about any actual congestion experiments using this message > type. > > Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an > advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > thesis about the use of Source Quench for congestion control. I believe > it included some experiments (maybe all artificial, or only through > simulation). > I don't think it had much impact on the rest of the world. > > > Source quench is discussed in detail in John Nagle's RFC 896 (dated > > 1984). > > A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > I guess he probably wasn't an MIT undergrad. > > Source quench was effectively deprecated by RFC 1812 (dated 1995). > > People > > had played around with ideas (e.g. RFC 1016) but it seems that basically > it was no use. > > A bit more Google found this, however: > > "4.3. Internet Congestion Control > Lixia Zhang began a study of network resource allocation techniques > > suitable for > > the DARPA Internet. The Internet currently has a simple technique for > > resource > > allocation, called "Source Quench." > Simple simulations have shown that this technique is not effective, and > > this work > > has produced an alternative which seems considerably more workable. > > Simulation > > of this new technique is now being performed." > > [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] > > Lixia was then a grad student under Dave Clark. Of course she's at UCLA > > now. If she isn't on this list, she should be! 
> > > Brian Carpenter > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jack at 3kitty.org Wed Oct 2 15:02:03 2024 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 2 Oct 2024 15:02:03 -0700 Subject: [ih] Fuzzballs and the Neonatal Internet (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> Message-ID: [Changed the subject since we've drifted far away from linux and Intel - such wandering is an Internet tradition] Dave Mills and his crew were one of the most prolific, and adventurous, sources of ideas and running code in the era of 1980 +- a few years. Dave was an avid experimenter, trying out his ideas in code rather than equations and diagrams on whiteboards. Experiments require measurements and instrumentation, which were sorely lacking in the Internet design at the time. So Dave and crew created NTP, somehow got it embraced by NIST, and as a result all of our devices today know what time it is. With amazing accuracy. Fuzzballs were Dave's medium for trying out ideas about networking. They existed in the Internet well before they were chosen for use in NSF's network. My experience with Dave's "Fuzzies" occurred in the late 1970s and early 1980s. Dave was also a member of Vint's ICCB at that time, where his zeal at experimentation was legendary. My own task from Vint was to make the "core gateways" highly reliable and operational as a 24x7 service. That was especially important for the research community in Europe, who had to rely on the Internet for connectivity, while researchers in the US were able to simply use the Arpanet. When the core gateways weren't working, they couldn't access any US resources. They were not shy about complaining, and had a 5+ hour head start on seeing problems due to the time zones involved. Research priorities often conflict with operational ones. Our goal was to keep the "core" running. Dave's goal was to try out new ideas and see how well they worked. Sometimes the new ideas broke the Internet. Maybe such outages were caused by a simple bug, or possibly an unforeseen consequence of the new idea. Whatever the cause, my phone rang -- "The core gateways aren't working!". We had encountered such conflicts during the previous 10 years of Arpanet, and used traditional solutions. New code and algorithms were developed on a separate "clone" network. They were extensively tested, usually for many months. The test environments were highly instrumented, data collected, and analyzed in depth. Eventually, the new code was then carefully introduced into the operational Arpanet itself, with provisions for "backing out" to the old system if necessary.
Such rigor is suitable for an operational network.?? But it is too limiting for a research environment, especially the Internet where lots of people had lots of ideas about techniques to be tried. Creating a separate "Test Internet" was pragmatically unrealistic. After some ICCB meeting, back at BBN, I corralled one of the Arpanet "thinkers" for an afternoon to brainstorm how to keep the research activities somehow "insulated" from the operational users of the Internet.?? That led to the creation of the notion of "Autonomous Systems", and the Exterior Gateway Protocol (see RFC 827, published in 1982). EGP enabled the "core gateways" to be isolated into their own "Autonomous System".? Research could continue until the "right" architecture, algorithms, and protocols were proven in trials on the Internet.?? Then the new system could be deployed and the need for Autonomous Systems and EGP would disappear. How naive we were...! With a bit of extra code, the "core gateways" could simply ignore any information from "outside" that didn't seem reasonable, e.g., any routing information pertaining to networks that were already connected directly to a core gateway.? With EGP in place, the core gateways were able to protect themselves from whatever any researcher system did, so that "operational" and "research" activities could co-exist on the Internet. Fuzzies still attacked the "core".? But it was protected by the EGP wall.?? Research and Operations could coexist on The Internet. All of the above happened in the 1979-1983 timeframe.? Fuzzballs continued on and were used in a variety of places including NSF's projects.?? But I wasn't involved then - someone else will have to explain that part of their history. Hope this helps, Jack Haverty On 9/30/24 18:28, Kyle Duren wrote: > Where did the Fuzzball routers fit into this timeline/architecture? > > On Mon, Sep 30, 2024 at 5:45?PM Jack Haverty via Internet-history > wrote: > > I'm not sure I remember all of the "gateway issues" but here's > some of > them... > > Circa 1978/9, it wasn't clear what "gateways" were in an > architectural > sense.? TCP version 2.5 had not yet evolved into TCP/IP version 4, > which > split the "TCP" and "IP" functions more cleanly, and also enabled the > introduction of additional functionality as an alternative to > TCP.? In > particular, this enabled the definition of UDP, which was deemed > necessary for experimentation with real-time interactive voice.? Some > usage required a reliable byte-stream; other usage required > getting as > much as possible as fast as possible. > > I was one of Vint's "ICCB" members, and we had lots of discussions > about > the role of "gateways", even after TCP and IP were split in > Version 4. > Vint had moved the "gateway project" to my group at BBN, so I was > tasked > to "make the Internet a 24x7 operational service".? Or something like > that.? Gateways had become my problem. > > Gateways were characterized by the fact that they connected to > more than > one network.?? When they connected to three or more they had to make > routing decisions, and thus participate in some kind of routing > algorithm and information exchanges with other gateways. > > However, we also realized that, in some cases, "host" computers > also had > to perform gateway functions.?? In particular, if a host computer > (e.g., > your favorite PDP-10) was connected to more than one network, it > had to > make a routing decision about where to send each datagram. To do so, > the host needed some "routing information". 
This led to the notion > of a > "half-gateway" inside a host TCP/IP implementation.?? A > multi-connected > "host" could also possibly pass transit traffic from one network to > another, essentially acting as a "full gateway".?? With the advent of > LANs and Workstations, the quantity of "hosts" was expected to > explode. > > Additionally, different kinds of user applications might need > different > network service.? Interactive voice might desire low-latency service. > Large file transfers might prefer a high-capacity service. Some > networks would only carry traffic from "approved (by the network > owner) > users".?? Some networks charged by amount of traffic you sent over > them. > > The approach to these needs, purely as an experiment (we didn't know > exactly how it would work), was to have multiple routing mechanisms > running in parallel and coordinated somehow.?? Each mechanism would > capture its own data to use in routing decisions.? Each datagram > would > have a "Type Of Service" designator in the IP header, that would > indicate what network behavior that datagram desired.? The separate > routing mechanisms would (somehow) coordinate their decisions to > try to > allocate the available network resources in a "fair" way. Lots of > ideas flew around.?? Lots of experimentation to be done. > > Pragmatically, we had an experimental environment suitable for such > exploration.?? The Arpanet was the main long-haul US backbone, > extending > across the Atlantic to parts of Europe.?? However, the WideBandNet > (WBNet) also provided connectivity across the US, using a satellite > channel.? The Arpanet was primarily a terrestrial network of circuits > running at 56 kilobits/second; the WBNet had a 3 megabits/second > satellite channel, and of course had much higher latency than the > Arpanet but could carry much more traffic. SATNET, also satellite > based, > covered parts of the US and Europe; MATNET was a clone of SATNET, > installed on Navy ships.?? Packet Radio networks existed in > testbed use > at various military sites. Since these were funded by ARPA, use was > restricted to users associated with ARPA projects.? The public > X.25/X.75 > network also provided connectivity between the US and Europe.?? They > were available for any use, but incurred costs based on "calls" > like the > rest of the telephony system.? NSF (and NSFNet) had not yet > appeared on > the Internet; Al Gore did however speak at one of our meetings. > > All of these networks were in place and connected by gateways to form > the Internet of the early 1980s.? The user scenarios we used to drive > technical discussions included one where a teleconference is being > held, > with participants scattered across the Internet, some connected by > Arpanet, some on ships connected by satellite, some in motion > connected > by Packet Radio, etc.? The teleconference was multimedia, involving > spoken conversations, interactive graphics, shared displays, and > viewing > documents.?? We didn't even imagine video (well, maybe some > did...) with > the technology of the day -- but if you use Zoom/Skype/etc today, > you'll > get the idea. > > Somehow, the Internet was supposed to make all of that "routing" > work, > enabling the use of such scenarios where different "types of service" > were handled by the net to get maximal use of the limited resources. > Traffic needing low latency should use terrestrial paths. Large > volumes > of time-insensitive traffic should go by satellite.?? 
Networks with > rules about who could use them would be happy. > > In addition, there were other "gateway issues" that needed > experimentation. > > One was called "Expressway Routing". ? The name was derived from an > analogy to the highway system. ? Many cities have grids of streets > that > can extend for miles.? They may also have an "Expressway" (Autobahn, > etc.) that is some distance away but parallels a particular > street. ? As > you leave your building, you make a "routing decision" to select a > route > to your destination.? In some cities, that destination might be on > the > same street you are on now, but many blocks away. ? So you might make > the decision to use the local Expressway instead of? just driving > up the > street you are already on.?? That might involve going "the wrong > way" to > get to an Expressway on-ramp.? People know how to make such > decisions; > gateways didn't. > > That particular situation was endemic to the WBNet at the time. There > were no "hosts" connected to the WBNet; only gateways were directly > connected, between the WBNet and Arpanet at various locations.? > With the > standard routing mechanisms of the time, traffic would never use the > WBNet.? Since both source and destination were on the Arpanet (or > a LAN > connected to it), traffic would naturally just use the Arpanet. > > Another "gateway issue" was "Multi-Homed Hosts" (MHH).?? These are > simply host (users') computers that are somehow connected to more > than > one network.? That was rare at the time.? Network connections were > quite > expensive.? But we envisioned that such connectivity would become > more > available.? For example, a "host computer" in a military vehicle > might > be connected to a Packet Radio network while in motion, but might be > able to "plug in" to a terrestrial network (such as Arpanet) when > it was > back "at base". > > In addition to improving reliability by such redundancy, MHH could > take > advantage of multiple connections -- if the networking technology > knew > how to do so.? One basic advantage would be increased throughput by > using the capacity of both connections.? But there were problems > to be > addressed.? Each connection would have a unique IP address - how > do you > get that to be useful for a single TCP connection? > > That may sound like an ancient problem....? But my cell phone > today has > both "cell data" and "Wifi" capability.?? It can only use one at a > time > however.?? It also has a different IP address for each > connection.?? At > best it's a MHH with just a backup capability.? We thought we > could do > better... > > I'm sure there were other "gateway issues".? But we recognized the > limits of the technology of the day.? The gateways were severely > limited > in memory and computing power.? The network speeds would be > considered > unusable today.? To make routing decisions such as choosing a > low-latency path for interactive usage required some way to measure > datagram transit time.?? But the gateway hardware had no ability to > measure time. > > In the interim, the only viable approach was to base routing on "hop > counts" while the hardware was improved and the experimentation > hopefully revealed a viable algorithm to use within the Internet -- > including "gateways" and "half-gateways".??? We introduced various > kinds > of "source routing" so that experimenters could forec traffic to > follow > routes that the primitive existing routing mechanisms would reject. 
> The "next release" after TCP/IP version 4 would hopefully address > some > of the issues.?? I lost track after that; another reorganization > moved > the project elsewhere. > > All of the above occurred about ~45 years ago.? AFAIK, the > specifications for "half" and "full" gateways were never created. > And it > seems we're still using hop counts?? Perhaps computing and > communications technology just exploded fast enough so it no > longer matters. > > Except for latency.? Physics still rules.? The speed of light, and > digital signals, is still the Law. > > Hope this helps, > Jack Haverty > > > > > On 9/30/24 12:43, John Day via Internet-history wrote: > > I am confused.? Could someone clarify for me what all of these > gateway issues were? Why gateways were such a big deal? > > > > Thanks, > > John > > > >> On Sep 30, 2024, at 13:06, Barbara Denny via > Internet-history wrote: > >> > >> I have been trying to remember some things surrounding this > topic so I did some poking as my knowledge/memory is hazy. I found > some documents on DTIC which may be of interest to people. It > seems not all documents in DTIC provide useable links so use the > document IDs in the search bar on their website. > >> ADA093135 > >> > >> This one confirms a long suspicion of mine regarding gateways. > The gateway functionality/software originally resided in the > packet radio station. It also mentions getting TCP from SRI and > porting it? to ELF (The packet radio station was an LSI-11 if I > remember correctly and ELF was the operating system). > >> You might also be interested in the following report for the > discussion of Internet and gateway issues. It mentions removing > support for versions of IP that weren't v4 for example. > >>? ?ADA099617 > >> > >> I also remember Jim talking about PMOS which I think stood for > Portable MOS ( Micro Operating System aka Mathis's Operating > System).? ?I think Jim's TCP code also ran on the TIU (Terminal > Interface Unit) using PMOS which was a PDP-11 and was part of the > packet radio architecture.? Not sure how many people used the term > PMOS though. > >> For more info see > >> https://gunkies.org/wiki/MOS_operating_system > >> > >> BYW, I have never heard of this website before.? It might be a > little buggy but it certainly strikes familiar chords in my > memory.? BTW? the NIU (Network Interface Unit) was a 68000 and ran > PMOS. This was used for the SURAN project which was a follow on to > packet radio. > >> Finally i also found a description of the IPR (Improved Packet > Radio) in DTIC. It covers the hardware and the operating system.? > This version of packet radio hardware used 2 processors.? I think > this was due to performance problems with the previous generation > of? packet radio. > >> https://apps.dtic.mil/sti/citations/ADB075938 > >> > >> barbara > >> > >>? ? ?On Sunday, September 29, 2024 at 01:33:14 PM PDT, Jack > Haverty via Internet-history wrote: > >> > >> Yeah, the "Stone Age of Computing" was quite different from today. > >> > >> The Unix (lack of) IPC was a serious obstacle.? ?I struggled > with it in > >> the late 70s when I got the assignment to implement some new thing > >> called "TCP" for ARPA.? ?I used Jim Mathis implementation for the > >> LSI-11s being used in Packet Radio, and shoehorned it into Unix. > >> Several of us even went to Bell Labs and spent an afternoon > discussing > >> networking with Ritchie.? ?All part of all of us learning about > networking. 
> >> > >> More info on what the "underlying architectures" were like back > then, > >> including details of the experience of creating TCP > implementations for > >> various Unices: > >> > >> http://exbbn.weebly.com/note-47.html > >> > https://www.sophiehonerkamp.com/othersite/isoc-internet-history/2016/oct/msg00000.html > >> > >> There was a paper ("Interprocess Communications for a Server in > Unix") > >> for some IEEE conference in 1978 where we described the > additions to > >> Unix to make it possible to write TCP.? ?But I can't find it > online - > >> probably the Conference Proceedings are behind a paywall > somewhere though. > >> > >> Jack > >> > >> > >> On 9/29/24 10:42, John Day wrote: > >>> Good point, Jack. Dave did a lot of good work. I always liked > his comment when I asked him about his collaboration with > CYCLADES. He said, it was ?so they wouldn?t make the same mistakes > we did.? ;-) Everyone was learning back then. > >>> > >>> Perhaps more relevant is that the first Unix system was > brought up on the ?Net? at UIUC in the summer of 1975 on a > PDP-11/45.? It was then stripped down and by the Spring of 1976 > ported to an LSI-11 (a single board PDP-11) for a ?terminal? with > a plasma screen and touch. That was fielded as part of a land-use > management system for the 6 counties around Chicago and for the > DoD at various places including CINCPAC. > >>> > >>> Unix didn?t have a real IPC facility then. (Pipes were > blocking and not at all suitable.) Once the first version was up > and running with NCP in the kernel and Telnet, etc in user mode, a > true IPC was implemented. (To do Telnet in that early version > without IPC, there were two processes, one, in-bound and one > out-bound and stty and gtty were hacked to coordinate them.)? > file_io was hacked for the API, so that to open a connection, it > was simply ?open(ucsd/telnet)?. > >>> > >>> Years later there was an attempt to convince Bill Joy to do > something similar for Berkley Unix but he was too enamored with > his Sockets idea. It is too bad because with the original API, the > Internet could have seamless moved away from well-known ports and > to application-names and no one would have noticed. As it was > domain names were nothing more than automating downloading the > host file from the NIC. > >>> > >>> Take care, > >>> John Day > >>> > >>>> On Sep 29, 2024, at 13:16, Jack Haverty via > Internet-history ?wrote: > >>>> > >>>> On 9/29/24 08:58, Dave Taht via Internet-history wrote: > >>>>> See: > >>>>> > >>>>> https://dmitry.gr/?r=05.Projects&proj=35.%20Linux4004 > > >>>>> > >>>>> While a neat hack and not directly relevant to ih, it > sparked curiosity in > >>>>> me as to the characteristics of the underlying architectures > arpanet was > >>>>> implemented on. > >>>>> > >>>>> > >>>> For anyone interested in the "underlying architectures > arpanet was implemented on", I suggest looking at: > >>>> > >>>> https://walden-family.com/bbn/imp-code.pdf > >>>> > >>>> Dave Walden was one of the original Arpanet programmers.? He > literally wrote the code.? This paper describes how the Arpanet > software and hardware were created. Part 2 of his paper describes > more recent (2010s) work to resurrect the original IMP code and > get it running again to create the original 4-node Arpanet network > as it was in 1970. The code is publicly available - so anyone can > look at it, and even get it running again on your own modern > hardware. Check out the rest of the walden-family website. 
> >>>> > >>>> When Arpanet was being constructed, microprocessors such as > the Intel 4004 did not yet exist. Neither did Unix, the precursor > to Linux.? Computers were quite different - only one processor, no > cores, threads, or such.? Lots of boards, each containing a few > logic gates, interconnected by wires.? Logic operated at speeds of > perhaps a Megahertz, rather than Gigahertz.? Memory was scarce, > measured in Kilobytes, rather than Gigabytes.? Communication > circuits came in Kilobits per second, not Gigabits. Persistent > storage (disks, drums) were acquired in Megabytes, not Terabytes. > Everything also cost a lot more than today. > >>>> > >>>> Computing engineering was quite different in 1969 from > today.? Every resource was scarce and expensive. Much effort went > towards efficiency, getting every bit of work out of the available > hardware.? As technology advanced and the Arpanet evolved into the > Internet, I often wonder how the attitudes and approaches to > computing implementations changed over that history.? We now have > the luxury of much more powerful hardware, costing a tiny fraction > of what a similar system might have cost in the Arpanet era.? How > did hardware and software engineering change over that time? > >>>> > >>>> Curiously, my multi-core desktop machine today, with its > gigabytes of memory, terabytes of storage, and gigabits/second > network, running the Ubuntu version of Linux, takes longer to > "boot up" and be ready to work for me than the PDP-10 did, back > when I used that machine on the Arpanet in the 1970s.? I sometimes > wonder what it's doing while executing those trillions of > instructions to boot up. > >>>> > >>>> Jack Haverty > >>>> > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From dave.taht at gmail.com Wed Oct 2 15:21:28 2024 From: dave.taht at gmail.com (Dave Taht) Date: Wed, 2 Oct 2024 15:21:28 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: I wish I had had the time and resources to (help) write more papers. (For example there isn't much on "drop head queueing") fq_codel is now a linux-wide default and has the following unique properties: codel queue management, which measure the time a packet spends in a queue and gradually attempts to find an optimum point for queue length, which is 5ms by default. (it has been tested in software below 250us in the DC). There is another subsystem, called BQL, which attempts to limit bytes on the device txring to one interrupt's worth. (a pretty good explanation of modern layers here) [2] It drops from the head, not the tail of the queue, with a small (BQL or HTB) FIFO in front of the lowest bits of the hardware to account for interrupt latency. 
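A rough sketch of those two ideas -- judging the queue by how long packets sit in it rather than by how full it is, and dropping from the head while the sojourn time stays above target -- might look like the following Python. This is illustrative only, not the Linux fq_codel code: it skips the real algorithm's rule that sojourn time must stay above target for a full interval before the first drop, and the constants are just the defaults mentioned above.

    import time
    from collections import deque
    from math import sqrt

    TARGET = 0.005     # 5 ms target sojourn time
    INTERVAL = 0.100   # window used to pace head drops

    class CodelQueue:
        def __init__(self):
            self.q = deque()        # entries are (enqueue_time, packet)
            self.dropping = False
            self.count = 0          # drops in the current dropping episode
            self.drop_next = 0.0

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()   # head of queue, oldest packet first
                sojourn = time.monotonic() - enq_time
                if sojourn < TARGET or not self.q:
                    # Queue is draining fast enough (or is nearly empty): deliver and relax.
                    self.dropping = False
                    self.count = 0
                    return packet
                now = time.monotonic()
                if not self.dropping:
                    # Standing queue detected: enter the dropping state, but give the
                    # sender one interval to react before the first head drop.
                    self.dropping = True
                    self.count = 1
                    self.drop_next = now + INTERVAL
                    return packet
                if now < self.drop_next:
                    return packet
                # Still above target at the scheduled time: drop this head packet and
                # schedule the next drop sooner (interval / sqrt(count)).
                self.count += 1
                self.drop_next = now + INTERVAL / sqrt(self.count)
            return None

The point of dropping at the head is that the loss signal reaches the sender one full queue earlier than a tail drop would, which matters when the queue is the thing you are trying to shrink.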
(I am kind of curious if a txring existed back in the day and how close an application sat to the hardware) Anecdote: when van and kathy were working on what became codel (january 2012), she rang me up one day and asked me just how much overhead there was in getting a timestamp from the hardware nowadays. And I explained that it was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs and so on and so forth, and she said thanks, and hung up. Getting a timestamp must have been mighty hard back in the day! The "flow queueing" mechanism sends packets that have an arrival rate of less than the departure rate of all the other flows, out first.[1] This is an improvement over prior FQ mechanisms like SFQ and DRR, which always put a new flow at the tail of the flow list. It is pretty amazing how often this works on real traffic. Also it automatically puts flows that build a queue into a queue that is managed by codel. One (eventual) benefit of these approaches, combined, is it makes delay based congestion control more feasible (indeed, BBR spends most of its time in this mode), but the flow isolation makes for most interactive traffic never being queued at all. IMHO the edges of the internet at least, would have been much better were some form of FQ always in it (which we kind of got from switched networks naturally) but the idea of FQ was roundly rejected in the first ietf meeting in 1989, and it's been uphill ever since. Just to touch upon pacing a bit - pacing is the default for the linux stack no matter the overlying qdisc or congestion control algorithm. I don't know if anyone has ever attempted to compare pacing w/cubic vs pacing w/bbr, and very few, until recently, have attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] [1] https://ieeexplore.ieee.org/document/8469111 [2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 [3] https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable Varying the packet pacing to get a pre-congestion notification is a paper I'd like more to pursue. https://www.usenix.org/system/files/atc24-han.pdf (I so want to believe this paper) A tiny bit more below.... On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > The response to bufferbloat has always struck me as looking for your keys > under a street light when that wasn?t where you dropped them but there is > light there. > > Initially, bufferbloat was not a problem because memory was expensive and > when TCP ran out of buffers (or got low), the connection simply blocked the > sending application until buffers were available. This was still true with > the advent of NIC cards. Memory was still tight. However, as memory got > cheap and NIC cards had oceans of memory, TCP never got low on buffers and > no one told the application to slow down or wait, so there was local > congestion collapse: bufferbloat. > > One part of the solution would be interface flow control between the > sending application and TCP (you would have thought that would have > occurred to implementers any way, it is obvious) and/or simply restrict the > amount of buffers TCP has available so that it runs out and blocks the > sending the application before things get bad and opens up when buffers are > available. But virtually all of the papers I see are on different > drop-strategies, and oddly enough they never find their keys. > don't have a lot of time for papers! 
The most modern stuff for tcp is using EDF (earliest deadline first) to manage the packet pacing. There are virtual and actual physical devices nowadays that take a "time to be sent" and packet. This paper was highly influential: https://saeed.github.io/files/carousel-sigcomm17.pdf the latest commit to the linux kernel about it: https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ PS IMHO eric dumazet belongs a spot in the internet hall of fame for so many things... > Take care, > John > > > On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > Just throwing some thoughts out here ...... > > I can see how this happens in a FIFO queuing world. However a lot of > work has gone into fair queuing starting in the late 80s. Just wondering > if anyone has done work utilizing fair queuing and source quench. For > example, I think I can see how to use fair queuing information to better > select who to send a source quench to. At least I can see how to do it with > Stochastic Fairness Queueing since I worked on it and I remember a fair > amount about how it was implemented. I wouldn't be able to provide a > guarantee that the wrong host would never receive a source quench but the > likelihood should be much lower. Considering whether the use of NAT > creates undesirable behavior is also important and I am sure there are > probably other cases that need to be checked. > > Hum, it might also be interesting to speculate whether this could have > any effect on bufferbloat but I fess up I need to learn more about the work > done in the area of bufferbloat. I was involved with other things when > this started to appear on my radar screen as a hot topic. I will admit I > wish I had done more work on possible buffering effects from implementation > choices at the time I did work on SFQ but there were contractual > obligations that restricted how much time I could devote to the SFQ part of > the project. > > Just curious, ECN (Explicit Congestion Notification) is optional . Does > anyone have any idea about its use in the Internet? > > barbara > > > > On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < > vint at google.com> wrote: > > > > One basic problem with blaming the "last packet that caused intermediate > router congestion" is that it usually blamed the wrong source, among other > problems. Van Jacobson was/is the guru of flow control (among others) who > might remember more. > > > > v > > > > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > In a brief attempt to try to find some information about the early MIT > work you mentioned, I ended up tripping on this Final Report from ISI in > DTIC. It does talk a fair amount about congestion control and source > quench (plus other things that might interest people). The period of > performance is 1987 to 1990 which is much later than I was considering in > my earlier message. > > > > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf > > > > > Even though the report mentions testing on DARTnet, I don't remember > anything about this during our DARTnet meetings. I did join the project > after the start so perhaps the work was done before I began to participate. > I also couldn't easily find the journal they mention as a place for > publishing their findings. I will have more time later to see if I can > something that covers this testing. 
> > > > barbara > > > > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via > Internet-history wrote: > > > > multicast is also an issue but I do not recall if that was one that > Craig & I talked about > > > > Scott > > > >> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> I remember talking with Craig Partridge (on a flight to somewhere) > about source quench > >> during the time when 1812 was being written - I do not recall > >> the specific issues but I recall that there were more than one issue > >> > >> (if DoS was not an issue at the time, it should have been) > >> > >> Scott > >> > >>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > >>> > >>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: > >>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: > >>>>> Forwarded for Barbara > >>>>> > >>>>> ==== > >>>>> > >>>>> From: Barbara Denny > >>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT > >>>>> I think congestion issues were discussed because I remember an ICMP > message type called source quench (now deprecated). It was used for > notifying a host to reduce the traffic load to a destination. I don't > remember hearing about any actual congestion experiments using this message > type. > >>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, > an > >>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate > >>>> thesis about the use of Source Quench for congestion control. I > believe > >>>> it included some experiments (maybe all artificial, or only through > >>>> simulation). > >>>> I don't think it had much impact on the rest of the world. > >>> > >>> Source quench is discussed in detail in John Nagle's RFC 896 (dated > 1984). > >>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so > >>> I guess he probably wasn't an MIT undergrad. > >>> > >>> Source quench was effectively deprecated by RFC 1812 (dated 1995). > People > >>> had played around with ideas (e.g. RFC 1016) but it seems that > basically > >>> it was no use. > >>> > >>> A bit more Google found this, however: > >>> > >>> "4.3. Internet Congestion Control > >>> Lixia Zhang began a study of network resource allocation techniques > suitable for > >>> the DARPA Internet. The Internet currently has a simple technique for > resource > >>> allocation, called "Source Quench." > >>> Simple simulations have shown that this technique is not effective, > and this work > >>> has produced an alternative which seems considerably more workable. > Simulation > >>> of this new technique is now being performed." > >>> > >>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, > >>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf > ] > >>> > >>> Lixia was then a grad student under Dave Clark. Of course she's at > UCLA now. If she isn't on this list, she should be! 
> >>> > >>> Brian Carpenter > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > > > -- > > Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 > Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 > > > > > > until further notice > > > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Dave T?ht CSO, LibreQos From jeanjour at comcast.net Wed Oct 2 16:41:37 2024 From: jeanjour at comcast.net (John Day) Date: Wed, 2 Oct 2024 19:41:37 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> Message-ID: Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, as long as the focus is on ramming as much through as possible. They are headed in the wrong direction. Also putting congestion control in TCP maximizes time to notify. But it also thwarts doing QoS. One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well-before it gets bad. We are back to the reality that congestion avoidance is needed in the internet layer, the network layer and the data link layer (bridges are relays). And 802 realizes that is using an innovative modification IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.) The case I was trying to make was TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer. John > On Oct 2, 2024, at 17:51, Vint Cerf wrote: > > John, > > you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. > > v > > > On Wed, Oct 2, 2024 at 5:19?PM John Day > wrote: >> Busy day. Just getting to looking at these. >> >> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions: >> >> Flow control is a pair-wise issue, Congestion management is an n-party issue. >> >> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.) >> >> A Congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.) >> >> Congestion is a stochastic phenomena. The cause is too many packets arriving with a given short period. >> >> Load is not the root cause of congestion but does increase the probability. 
(This is an error I see in most every paper I read on the topic.) Congestion has been observed on a network with a .1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late. >> >> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify. >> >> Something like ECN or SourceQuench (if like ECN it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.) >> >> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers. >> >> Jacobson?s solution for TCP is about the worst, one could expect: A congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory. (implicit notification, no ECN) In a way this is not Van?s fault. It is the classic engineer?s mistake: Solve the narrow problem but fail to consider the context. This solution might acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers with TCP congestion control results in warring feedback loops with very different response times. >> >> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions. >> >> There is probably much more, but this is what comes to mind. >> >> Take care, >> John >> >> >>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history > wrote: >>> >>> One basic problem with blaming the "last packet that caused intermediate >>> router congestion" is that it usually blamed the wrong source, among other >>> problems. Van Jacobson was/is the guru of flow control (among others) who >>> might remember more. >>> >>> >>> v >>> >>> >>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >>> internet-history at elists.isoc.org > wrote: >>> >>>> In a brief attempt to try to find some information about the early MIT >>>> work you mentioned, I ended up tripping on this Final Report from ISI in >>>> DTIC. It does talk a fair amount about congestion control and source >>>> quench (plus other things that might interest people). The period of >>>> performance is 1987 to 1990 which is much later than I was considering in >>>> my earlier message. >>>> >>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>> >>>> Even though the report mentions testing on DARTnet, I don't remember >>>> anything about this during our DARTnet meetings. I did join the project >>>> after the start so perhaps the work was done before I began to participate. >>>> I also couldn't easily find the journal they mention as a place for >>>> publishing their findings. I will have more time later to see if I can >>>> something that covers this testing. 
>>>> >>>> barbara >>>> >>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>>> Internet-history > wrote: >>>> >>>> multicast is also an issue but I do not recall if that was one that Craig >>>> & I talked about >>>> >>>> Scott >>>> >>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>>> internet-history at elists.isoc.org > wrote: >>>>> >>>>> I remember talking with Craig Partridge (on a flight to somewhere) about >>>> source quench >>>>> during the time when 1812 was being written - I do not recall >>>>> the specific issues but I recall that there were more than one issue >>>>> >>>>> (if DoS was not an issue at the time, it should have been) >>>>> >>>>> Scott >>>>> >>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>>> internet-history at elists.isoc.org > wrote: >>>>>> >>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>> Forwarded for Barbara >>>>>>>> >>>>>>>> ==== >>>>>>>> >>>>>>>> From: Barbara Denny > >>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>> I think congestion issues were discussed because I remember an ICMP >>>> message type called source quench (now deprecated). It was used for >>>> notifying a host to reduce the traffic load to a destination. I don't >>>> remember hearing about any actual congestion experiments using this message >>>> type. >>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>> thesis about the use of Source Quench for congestion control. I believe >>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>> simulation). >>>>>>> I don't think it had much impact on the rest of the world. >>>>>> >>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>>> 1984). >>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>> I guess he probably wasn't an MIT undergrad. >>>>>> >>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>>> People >>>>>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>>>>> it was no use. >>>>>> >>>>>> A bit more Google found this, however: >>>>>> >>>>>> "4.3. Internet Congestion Control >>>>>> Lixia Zhang began a study of network resource allocation techniques >>>> suitable for >>>>>> the DARPA Internet. The Internet currently has a simple technique for >>>> resource >>>>>> allocation, called "Source Quench." >>>>>> Simple simulations have shown that this technique is not effective, and >>>> this work >>>>>> has produced an alternative which seems considerably more workable. >>>> Simulation >>>>>> of this new technique is now being performed." >>>>>> >>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>>>>> >>>>>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA >>>> now. If she isn't on this list, she should be! 
>>>>>> >>>>>> Brian Carpenter >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> >>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 >>> >>> >>> until further notice >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > > From jack at 3kitty.org Wed Oct 2 17:08:41 2024 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 2 Oct 2024 17:08:41 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> Re: Source Quench... It's been 40+ years, but I remember meetings where Source Quench was first discussed.? My reaction was that it was too simplistic and wouldn't be effective.?? At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40.? When I asked what a TCP should do when it received a SQ, no one could provide much of an answer.? If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow?? Other implementors had different ideas about how to handle an incoming SQ.? One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent.?? So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire.?? You knew it had been discarded, so you should retransmit it immediately. In my TCP, I think I just incremented a counter when I received a SQ.? Could always change it later.... At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue.? There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner working as issues wee identified during operations - see, for example, DTIC reports accessible as ADA086338, and ADA086340.? There are many others describing the Arpanet experience.? In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design". There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs.? Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer.?? The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served. 
Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them.? Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP.? Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels.? "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data. The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs.? That would effectively block all communications from the blocked host to anywhere else on the Arpanet.? By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter.? Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it.?? I have wondered how many implementations actually did. All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail.??? The ancient IMP code itself is even available online today.?? As always, the ultimate documentation is the code itself.?? But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s.? It's not easy to figure out how it worked. The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes.? In the earliest gateways, "getting a timestamp" from the processor wasn't hard.? It was impossible.? The gateway hardware simply didn't have any way to measure time. IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back.? They could calculate the delay along a route. Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit.?? So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available.?? That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet.? But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports.? The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. ?? I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones.? If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. Hope this helps explain how we got from there to here... 
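For readers who never wrote 1822-era host code, the "RFNM counting" mentioned above amounts to very little code. A minimal sketch follows; the structure, the function names, and the per-destination limit of eight outstanding messages are assumptions for illustration, not text from the 1822 specification.

/* Minimal sketch of host-side RFNM counting (illustrative only).
 * A message may be handed to the IMP only while fewer than
 * MAX_OUTSTANDING messages to that destination are awaiting a RFNM. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_OUTSTANDING 8            /* assumed per-destination limit */

struct dest_state {
    uint8_t outstanding;             /* messages sent, RFNM not yet seen */
};

bool may_send(const struct dest_state *d)   /* gate before giving a message to the IMP */
{
    return d->outstanding < MAX_OUTSTANDING;
}

void on_send(struct dest_state *d)          /* message handed to the IMP */
{
    d->outstanding++;
}

void on_rfnm(struct dest_state *d)          /* Ready For Next Message received */
{
    if (d->outstanding > 0)
        d->outstanding--;
}

A host that kept its counter under the limit never gave the IMP a reason to stop the interface clock.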
Jack Haverty On 10/2/24 15:21, Dave Taht via Internet-history wrote: > I wish I had had the time and resources to (help) write more papers. (For > example there isn't much on "drop head queueing") > > fq_codel is now a linux-wide default and has the following unique > properties: > > codel queue management, which measure the time a packet spends in a queue > and gradually attempts to find an optimum point for queue length, which is > 5ms by default. (it has been tested in software below 250us in the DC). > There is another subsystem, called BQL, which attempts to limit bytes on > the device txring to one interrupt's worth. (a pretty good explanation of > modern layers here) [2] > > It drops from the head, not the tail of the queue, with a small (BQL or > HTB) FIFO in front of the lowest bits of the hardware to account > for interrupt latency. > > (I am kind of curious if a txring existed back in the day and how close an > application sat to the hardware) > > Anecdote: when van and kathy were working on what became codel (january > 2012), she rang me up one day and asked me just how much overhead there was > in getting a timestamp from the hardware nowadays. And I explained that it > was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs > and so on and so forth, and she said thanks, and hung up. Getting a > timestamp must have been mighty hard back in the day! > > The "flow queueing" mechanism sends packets that have an arrival rate of > less than the departure rate of all the other flows, out first.[1] This is > an improvement over prior FQ mechanisms like SFQ and DRR, which always put > a new flow at the tail of the flow list. It is pretty amazing how often > this works on real traffic. Also it automatically puts flows that build a > queue into a queue that is managed by codel. > > One (eventual) benefit of these approaches, combined, is it makes delay > based congestion control more feasible (indeed, > BBR spends most of its time in this mode), but the flow isolation makes for > most interactive traffic never being queued at all. > > IMHO the edges of the internet at least, would have been much better were > some form of FQ always in it (which we kind of got from switched networks > naturally) but the idea of FQ was roundly rejected in the first ietf > meeting in 1989, and it's been uphill ever since. > > Just to touch upon pacing a bit - pacing is the default for the linux stack > no matter the overlying qdisc or congestion control algorithm. > I don't know if anyone has ever attempted to compare pacing w/cubic vs > pacing w/bbr, and very few, until recently, have > attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] > > [1]https://ieeexplore.ieee.org/document/8469111 > [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 > [3] > https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable > > Varying the packet pacing to get a pre-congestion notification is a paper > I'd like more to pursue. > https://www.usenix.org/system/files/atc24-han.pdf > (I so want to believe this paper) > > A tiny bit more below.... > > On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> The response to bufferbloat has always struck me as looking for your keys >> under a street light when that wasn?t where you dropped them but there is >> light there. 
>> >> Initially, bufferbloat was not a problem because memory was expensive and >> when TCP ran out of buffers (or got low), the connection simply blocked the >> sending application until buffers were available. This was still true with >> the advent of NIC cards. Memory was still tight. However, as memory got >> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >> no one told the application to slow down or wait, so there was local >> congestion collapse: bufferbloat. >> >> One part of the solution would be interface flow control between the >> sending application and TCP (you would have thought that would have >> occurred to implementers any way, it is obvious) and/or simply restrict the >> amount of buffers TCP has available so that it runs out and blocks the >> sending the application before things get bad and opens up when buffers are >> available. But virtually all of the papers I see are on different >> drop-strategies, and oddly enough they never find their keys. >> > don't have a lot of time for papers! The most modern stuff for tcp is > using EDF (earliest deadline first) to manage the packet pacing. > There are virtual and actual physical devices nowadays that take a "time to > be sent" and packet. This paper was highly influential: > > https://saeed.github.io/files/carousel-sigcomm17.pdf > > the latest commit to the linux kernel about it: > > https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ > > PS IMHO eric dumazet belongs a spot in the internet hall of fame for so > many things... > > >> Take care, >> John >> >>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> Just throwing some thoughts out here ...... >>> I can see how this happens in a FIFO queuing world. However a lot of >> work has gone into fair queuing starting in the late 80s. Just wondering >> if anyone has done work utilizing fair queuing and source quench. For >> example, I think I can see how to use fair queuing information to better >> select who to send a source quench to. At least I can see how to do it with >> Stochastic Fairness Queueing since I worked on it and I remember a fair >> amount about how it was implemented. I wouldn't be able to provide a >> guarantee that the wrong host would never receive a source quench but the >> likelihood should be much lower. Considering whether the use of NAT >> creates undesirable behavior is also important and I am sure there are >> probably other cases that need to be checked. >>> Hum, it might also be interesting to speculate whether this could have >> any effect on bufferbloat but I fess up I need to learn more about the work >> done in the area of bufferbloat. I was involved with other things when >> this started to appear on my radar screen as a hot topic. I will admit I >> wish I had done more work on possible buffering effects from implementation >> choices at the time I did work on SFQ but there were contractual >> obligations that restricted how much time I could devote to the SFQ part of >> the project. >>> Just curious, ECN (Explicit Congestion Notification) is optional . Does >> anyone have any idea about its use in the Internet? >>> barbara >>> >>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >> vint at google.com> wrote: >>> One basic problem with blaming the "last packet that caused intermediate >> router congestion" is that it usually blamed the wrong source, among other >> problems. 
Van Jacobson was/is the guru of flow control (among others) who >> might remember more. >>> v >>> >>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> In a brief attempt to try to find some information about the early MIT >> work you mentioned, I ended up tripping on this Final Report from ISI in >> DTIC. It does talk a fair amount about congestion control and source >> quench (plus other things that might interest people). The period of >> performance is 1987 to 1990 which is much later than I was considering in >> my earlier message. >>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>> Even though the report mentions testing on DARTnet, I don't remember >> anything about this during our DARTnet meetings. I did join the project >> after the start so perhaps the work was done before I began to participate. >> I also couldn't easily find the journal they mention as a place for >> publishing their findings. I will have more time later to see if I can >> something that covers this testing. >>> barbara >>> >>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >> Internet-history wrote: >>> multicast is also an issue but I do not recall if that was one that >> Craig & I talked about >>> Scott >>> >>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >> internet-history at elists.isoc.org> wrote: >>>> I remember talking with Craig Partridge (on a flight to somewhere) >> about source quench >>>> during the time when 1812 was being written - I do not recall >>>> the specific issues but I recall that there were more than one issue >>>> >>>> (if DoS was not an issue at the time, it should have been) >>>> >>>> Scott >>>> >>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >> internet-history at elists.isoc.org> wrote: >>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>> Forwarded for Barbara >>>>>>> >>>>>>> ==== >>>>>>> >>>>>>> From: Barbara Denny >>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>> I think congestion issues were discussed because I remember an ICMP >> message type called source quench (now deprecated). It was used for >> notifying a host to reduce the traffic load to a destination. I don't >> remember hearing about any actual congestion experiments using this message >> type. >>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >> an >>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>> thesis about the use of Source Quench for congestion control. I >> believe >>>>>> it included some experiments (maybe all artificial, or only through >>>>>> simulation). >>>>>> I don't think it had much impact on the rest of the world. >>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >> 1984). >>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>> I guess he probably wasn't an MIT undergrad. >>>>> >>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >> People >>>>> had played around with ideas (e.g. RFC 1016) but it seems that >> basically >>>>> it was no use. >>>>> >>>>> A bit more Google found this, however: >>>>> >>>>> "4.3. Internet Congestion Control >>>>> Lixia Zhang began a study of network resource allocation techniques >> suitable for >>>>> the DARPA Internet. 
The Internet currently has a simple technique for >> resource >>>>> allocation, called "Source Quench." >>>>> Simple simulations have shown that this technique is not effective, >> and this work >>>>> has produced an alternative which seems considerably more workable. >> Simulation >>>>> of this new technique is now being performed." >>>>> >>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >> ] >>>>> Lixia was then a grad student under Dave Clark. Of course she's at >> UCLA now. If she isn't on this list, she should be! >>>>> Brian Carpenter >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> >>> >>> -- >>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>> >>> until further notice >>> >>> >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From gregskinner0 at icloud.com Wed Oct 2 19:18:22 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Wed, 2 Oct 2024 19:18:22 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <338b9d0b-6a6f-408c-b178-4cbf8210ada0@cis.upenn.edu> References: <32293b4f-ce9b-4bae-958b-fc9bdba4b773@gmail.com> <338b9d0b-6a6f-408c-b178-4cbf8210ada0@cis.upenn.edu> Message-ID: On Oct 2, 2024, at 11:18?AM, Michael Greenwald via Internet-history wrote: > > Hi Lixia! (Yes, it's been a long time. At least a decade or 2) > My memory is (no surprise) cloudy. For some reason I thought you arrived a year or two later than '81. > I have no memory (again, no surprise) of what, specifically, your promising "alternative" was, but an early reference to (virtual)clock based control of packets sounds plausible. > I'm not shocked by the difference between Van's comment back then and BBR now. There's never a guarantee that claims that something "simply cannot work" or "will obviously help" back then (or even now) will hold up over long time periods, unless there is a formal proof or a lot of experimental validation. (And even then conditions change). > > About the work that I thought predated you: Dave Clark remembers that there was an undergraduate who did work on Source Quench, but not his name, nor details of what he did. So at least I didn't completely hallucinate?. Offhand, I don?t know who it was, but it occurred to me that it could be someone on this list of Multics Technical Bulletin authors. [1] A few of them were MIT undergrads during the early 1980s. 
--gregbo [1] https://www.multicians.org/mtb-index.html From jack at 3kitty.org Wed Oct 2 22:18:17 2024 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 2 Oct 2024 22:18:17 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> Message-ID: <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> [changed the subject to reflect the content...] The recent discussions about SQ and Congestion Control reminded me of another "gateway issue" from the 80s. Gateway Issue -- Certification In the early 1980s, TCP/IP were being established as "DoD Standards".?? Part of that process involved creating Specifications, a task which Jon Postel took on to create the RFCs defining TCP and IP in the early 1980s. However, other actions also occurred.?? Procurement regulations for the US DoD were changed.? Systems procured by DoD and containing computers that might have to communicate with other computers were required to implement TCP and IP. NIST (perhaps still called NBS then) produced a testing program which could verify that a system being procured did in fact correctly implement TCP and IP.?? The testing program included a procedure by which laboratories became accredited to perform the tests and then issue certificates of compliance to the owner of the TCP/IP implementation.? Such certificates were required for sales to DoD customers. Although the tests involved were very basic, they did establish a mechanism whereby customers could become more confident that a system they were procuring would actually work when attached to the Internet. No such tests or testing program was defined for Gateways, or anything else AFAIK - such as DNS servers, mail, file, telnet servers and clients, etc.? TCP and IP were subject to testing, but other important technology, such as ICMP, was not. The need for such tests and certifications was noted on the ICCB list of "things we still need to do", circa 1983. -------------- Today, there are thousands of RFCs defining protocols, algorithms, and standards for the Internet.? AFAIK, there is no way to confirm that any particular product implements any of these mechanisms, or if they do, that the implementation is correct.?? This is an issue for all components of the Internet today, including but not limited to gateways. Jack Haverty On 9/30/24 18:00, John Day wrote: > Thanks, Jack. This is very helpful. It really explains what was and wasn?t understood at the time. > > A few comments inline below. > >> On Sep 30, 2024, at 17:44, Jack Haverty via Internet-history wrote: >> >> I'm not sure I remember all of the "gateway issues" but here's some of them... >> >> Circa 1978/9, it wasn't clear what "gateways" were in an architectural sense. TCP version 2.5 had not yet evolved into TCP/IP version 4, which split the "TCP" and "IP" functions more cleanly, and also enabled the introduction of additional functionality as an alternative to TCP. In particular, this enabled the definition of UDP, which was deemed necessary for experimentation with real-time interactive voice. Some usage required a reliable byte-stream; other usage required getting as much as possible as fast as possible. 
>> >> I was one of Vint's "ICCB" members, and we had lots of discussions about the role of "gateways", even after TCP and IP were split in Version 4. Vint had moved the "gateway project" to my group at BBN, so I was tasked to "make the Internet a 24x7 operational service". Or something like that. Gateways had become my problem. >> >> Gateways were characterized by the fact that they connected to more than one network. When they connected to three or more they had to make routing decisions, and thus participate in some kind of routing algorithm and information exchanges with other gateways. > Yes, this was understood in 1973. The primary problem being a routing scheme that scaled to the needed size. It was eventually provided (for better or worse) by BGP. >> However, we also realized that, in some cases, "host" computers also had to perform gateway functions. In particular, if a host computer (e.g., your favorite PDP-10) was connected to more than one network, it had to make a routing decision about where to send each datagram. To do so, the host needed some "routing information". This led to the notion of a "half-gateway" inside a host TCP/IP implementation. A multi-connected "host" could also possibly pass transit traffic from one network to another, essentially acting as a "full gateway". With the advent of LANs and Workstations, the quantity of "hosts" was expected to explode. > This is to some extent an artifact of the ARPANET initial conditions. IMPs were both a front end and a router. This meant that Hosts were not part of the network. After 1973 and the more or less adoption of the CYCLADES model, routers were just routers and hosts were part of the network. This is the shift from the ITU model of networking to the new Layered Model (for want of a better term). Layers have become resource allocators. >> Additionally, different kinds of user applications might need different network service. Interactive voice might desire low-latency service. Large file transfers might prefer a high-capacity service. Some networks would only carry traffic from "approved (by the network owner) users". Some networks charged by amount of traffic you sent over them. > This is QoS and should be passed as parameters to the Internet Layer. It is tthen up to the layer to figure out how to provide that level of QoS (probably within a range as these systems aren?t very sensitive) using the QoS provided by the layer below. (Since traffic is being multiplexed on the lower layer, the lower layer QoS ranges will be different and an aggregate of the QoS of the QoS flow multiplexed on to it. >> The approach to these needs, purely as an experiment (we didn't know exactly how it would work), was to have multiple routing mechanisms running in parallel and coordinated somehow. Each mechanism would capture its own data to use in routing decisions. Each datagram would have a "Type Of Service" designator in the IP header, that would indicate what network behavior that datagram desired. The separate routing mechanisms would (somehow) coordinate their decisions to try to allocate the available network resources in a "fair" way. Lots of ideas flew around. Lots of experimentation to be done. > Good start you were on the right track. Not sure different routing strategies were necessary so much as different metrics. The real isn?t so much QoS per se but the trade-offs between QoS-classes. >> Pragmatically, we had an experimental environment suitable for such exploration. 
The Arpanet was the main long-haul US backbone, extending across the Atlantic to parts of Europe. However, the WideBandNet (WBNet) also provided connectivity across the US, using a satellite channel. The Arpanet was primarily a terrestrial network of circuits running at 56 kilobits/second; the WBNet had a 3 megabits/second satellite channel, and of course had much higher latency than the Arpanet but could carry much more traffic. SATNET, also satellite based, covered parts of the US and Europe; MATNET was a clone of SATNET, installed on Navy ships. Packet Radio networks existed in testbed use at various military sites. Since these were funded by ARPA, use was restricted to users associated with ARPA projects. The public X.25/X.75 network also provided connectivity between the US and Europe. They were available for any use, but incurred costs based on "calls" like the rest of the telephony system. NSF (and NSFNet) had not yet appeared on the Internet; Al Gore did however speak at one of our meetings. > See below. There are multiple aspects to the solution to this. Part of it is QoS, but there are other parts. >> All of these networks were in place and connected by gateways to form the Internet of the early 1980s. The user scenarios we used to drive technical discussions included one where a teleconference is being held, with participants scattered across the Internet, some connected by Arpanet, some on ships connected by satellite, some in motion connected by Packet Radio, etc. The teleconference was multimedia, involving spoken conversations, interactive graphics, shared displays, and viewing documents. We didn't even imagine video (well, maybe some did...) with the technology of the day -- but if you use Zoom/Skype/etc today, you'll get the idea. > Good example. See above and below. >> Somehow, the Internet was supposed to make all of that "routing" work, enabling the use of such scenarios where different "types of service" were handled by the net to get maximal use of the limited resources. Traffic needing low latency should use terrestrial paths. Large volumes of time-insensitive traffic should go by satellite. Networks with rules about who could use them would be happy. > This is primarily a QoS problem noted above using the QoS of the network layers supporting the Internet Layer. It was less important whether the lines were terrestrial or not but more important what their measured latency and probably RTT were. >> In addition, there were other "gateway issues" that needed experimentation. >> >> One was called "Expressway Routing". The name was derived from an analogy to the highway system. Many cities have grids of streets that can extend for miles. They may also have an "Expressway" (Autobahn, etc.) that is some distance away but parallels a particular street. As you leave your building, you make a "routing decision" to select a route to your destination. In some cities, that destination might be on the same street you are on now, but many blocks away. So you might make the decision to use the local Expressway instead of just driving up the street you are already on. That might involve going "the wrong way" to get to an Expressway on-ramp. People know how to make such decisions; gateways didn't. > Expressways were lower layers perhaps supported by other networks that would provide ?bulk? traffic between major regions. This is something MPLS might have done if the developers had had sufficient imagination. It is likely these Expressways would have been virtual-circuit. 
(When there is lots of traffic all going to the same place, why look at every packet, just move the darn stuff! I always found it amazing how the vc-advocates never proposed equipment for what vc?s were good for and only for what they were not good for.) ;-) Also in this environment one wants to be relaying more stuff less often not less stuff more often.) The example I always give is there may not be constant traffic between Lake Forest, IL and Lexington, MA (use datagrams) but there will be constant traffic between the Boston and Chicago regions (use vc?s). Make these ?Expressways? a different layer under the Internet Layer optimized for this. > >> That particular situation was endemic to the WBNet at the time. There were no "hosts" connected to the WBNet; only gateways were directly connected, between the WBNet and Arpanet at various locations. With the standard routing mechanisms of the time, traffic would never use the WBNet. Since both source and destination were on the Arpanet (or a LAN connected to it), traffic would naturally just use the Arpanet. > The WBNet was a perfect example of what I described above. Sounds like Flat Earth thinking, >> Another "gateway issue" was "Multi-Homed Hosts" (MHH). These are simply host (users') computers that are somehow connected to more than one network. That was rare at the time. Network connections were quite expensive. But we envisioned that such connectivity would become more available. For example, a "host computer" in a military vehicle might be connected to a Packet Radio network while in motion, but might be able to "plug in" to a terrestrial network (such as Arpanet) when it was back "at base". > The solution to multihoming has been known since 1972. CYCLADES solved by making an inherent part of the model. Yea, I know there are still a lot of people in the Internet today who refuse to believe that. They are simply wrong. O, and to those who complain that not all multi homed hosts want to act as a transit node, the answer is simple: Don?t advertise that it is a route to anything but itself. Sheesh! >> In addition to improving reliability by such redundancy, MHH could take advantage of multiple connections -- if the networking technology knew how to do so. One basic advantage would be increased throughput by using the capacity of both connections. But there were problems to be addressed. Each connection would have a unique IP address - how do you get that to be useful for a single TCP connection?si > It is not relevant to a single TCP connection. It is a different layer and it is that layer?s task to do the resource allocation to use multiple paths. One of the things that has always bugged me is the idea that one should be able to identify a single host-to-host flow in the middle of the network. Good grief! This isn?t the PSTN!! The IP addresses are irrelevant. Each layer has its own addresses. (Remember addresses belong to layers not to protocols >> That may sound like an ancient problem.... But my cell phone today has both "cell data" and "Wifi" capability. It can only use one at a time however. It also has a different IP address for each connection. At best it's a MHH with just a backup capability. We thought we could do better... > Mobility is still in the ITU model as is the Internet and is totally screwed up as it is in the Internet. In the early days, that was understandable. But it hasn?t been the case 30 years or more. 
This is just a variation of the multihoming problem, which as I said the solution had been known for almost a decade at that point. In a well-formed architecture, mobility doesn?t require anything new. Certainly doesn?t need any foreign agents or home agents or tunnels or new protocols. > >> I'm sure there were other "gateway issues". But we recognized the limits of the technology of the day. The gateways were severely limited in memory and computing power. The network speeds would be considered unusable today. To make routing decisions such as choosing a low-latency path for interactive usage required some way to measure datagram transit time. But the gateway hardware had no ability to measure time. > Yes, there would have been a congestion issue in both the Internet Layer and the Network Layer, or for that matter any layer that relays. >> In the interim, the only viable approach was to base routing on "hop counts" while the hardware was improved and the experimentation hopefully revealed a viable algorithm to use within the Internet -- including "gateways" and "half-gateways". We introduced various kinds of "source routing" so that experimenters could forec traffic to follow routes that the primitive existing routing mechanisms would reject. The "next release" after TCP/IP version 4 would hopefully address some of the issues. I lost track after that; another reorganization moved the project elsewhere. > Never been a fan of source routing. It always seemed like virtual circuit by another name. >> All of the above occurred about ~45 years ago. AFAIK, the specifications for "half" and "full" gateways were never created. And it seems we're still using hop counts? Perhaps computing and communications technology just exploded fast enough so it no longer matters. >> >> Except for latency. Physics still rules. The speed of light, and digital signals, is still the Law. > Yep, you can?t fool mother nature. > > Take care, > John >> Hope this helps, >> Jack Haverty >> >> >> >> >> On 9/30/24 12:43, John Day via Internet-history wrote: >>> I am confused. Could someone clarify for me what all of these gateway issues were? Why gateways were such a big deal? >>> >>> Thanks, >>> John >>> >>>> On Sep 30, 2024, at 13:06, Barbara Denny via Internet-history wrote: >>>> >>>> I have been trying to remember some things surrounding this topic so I did some poking as my knowledge/memory is hazy. I found some documents on DTIC which may be of interest to people. It seems not all documents in DTIC provide useable links so use the document IDs in the search bar on their website. >>>> ADA093135 >>>> >>>> This one confirms a long suspicion of mine regarding gateways. The gateway functionality/software originally resided in the packet radio station. It also mentions getting TCP from SRI and porting it to ELF (The packet radio station was an LSI-11 if I remember correctly and ELF was the operating system). >>>> You might also be interested in the following report for the discussion of Internet and gateway issues. It mentions removing support for versions of IP that weren't v4 for example. >>>> ADA099617 >>>> >>>> I also remember Jim talking about PMOS which I think stood for Portable MOS ( Micro Operating System aka Mathis's Operating System). I think Jim's TCP code also ran on the TIU (Terminal Interface Unit) using PMOS which was a PDP-11 and was part of the packet radio architecture. Not sure how many people used the term PMOS though. 
>>>> For more info see >>>> https://gunkies.org/wiki/MOS_operating_system >>>> >>>> BYW, I have never heard of this website before. It might be a little buggy but it certainly strikes familiar chords in my memory. BTW the NIU (Network Interface Unit) was a 68000 and ran PMOS. This was used for the SURAN project which was a follow on to packet radio. >>>> Finally i also found a description of the IPR (Improved Packet Radio) in DTIC. It covers the hardware and the operating system. This version of packet radio hardware used 2 processors. I think this was due to performance problems with the previous generation of packet radio. >>>> https://apps.dtic.mil/sti/citations/ADB075938 >>>> >>>> barbara >>>> >>>> On Sunday, September 29, 2024 at 01:33:14 PM PDT, Jack Haverty via Internet-history wrote: >>>> >>>> Yeah, the "Stone Age of Computing" was quite different from today. >>>> >>>> The Unix (lack of) IPC was a serious obstacle. I struggled with it in >>>> the late 70s when I got the assignment to implement some new thing >>>> called "TCP" for ARPA. I used Jim Mathis implementation for the >>>> LSI-11s being used in Packet Radio, and shoehorned it into Unix. >>>> Several of us even went to Bell Labs and spent an afternoon discussing >>>> networking with Ritchie. All part of all of us learning about networking. >>>> >>>> More info on what the "underlying architectures" were like back then, >>>> including details of the experience of creating TCP implementations for >>>> various Unices: >>>> >>>> http://exbbn.weebly.com/note-47.html >>>> https://www.sophiehonerkamp.com/othersite/isoc-internet-history/2016/oct/msg00000.html >>>> >>>> There was a paper ("Interprocess Communications for a Server in Unix") >>>> for some IEEE conference in 1978 where we described the additions to >>>> Unix to make it possible to write TCP. But I can't find it online - >>>> probably the Conference Proceedings are behind a paywall somewhere though. >>>> >>>> Jack >>>> >>>> >>>> On 9/29/24 10:42, John Day wrote: >>>>> Good point, Jack. Dave did a lot of good work. I always liked his comment when I asked him about his collaboration with CYCLADES. He said, it was ?so they wouldn?t make the same mistakes we did.? ;-) Everyone was learning back then. >>>>> >>>>> Perhaps more relevant is that the first Unix system was brought up on the ?Net at UIUC in the summer of 1975 on a PDP-11/45. It was then stripped down and by the Spring of 1976 ported to an LSI-11 (a single board PDP-11) for a ?terminal? with a plasma screen and touch. That was fielded as part of a land-use management system for the 6 counties around Chicago and for the DoD at various places including CINCPAC. >>>>> >>>>> Unix didn?t have a real IPC facility then. (Pipes were blocking and not at all suitable.) Once the first version was up and running with NCP in the kernel and Telnet, etc in user mode, a true IPC was implemented. (To do Telnet in that early version without IPC, there were two processes, one, in-bound and one out-bound and stty and gtty were hacked to coordinate them.) file_io was hacked for the API, so that to open a connection, it was simply ?open(ucsd/telnet)?. >>>>> >>>>> Years later there was an attempt to convince Bill Joy to do something similar for Berkley Unix but he was too enamored with his Sockets idea. It is too bad because with the original API, the Internet could have seamless moved away from well-known ports and to application-names and no one would have noticed. 
As it was domain names were nothing more than automating downloading the host file from the NIC. >>>>> >>>>> Take care, >>>>> John Day >>>>> >>>>>> On Sep 29, 2024, at 13:16, Jack Haverty via Internet-history wrote: >>>>>> >>>>>> On 9/29/24 08:58, Dave Taht via Internet-history wrote: >>>>>>> See: >>>>>>> >>>>>>> https://dmitry.gr/?r=05.Projects&proj=35.%20Linux4004 >>>>>>> >>>>>>> While a neat hack and not directly relevant to ih, it sparked curiosity in >>>>>>> me as to the characteristics of the underlying architectures arpanet was >>>>>>> implemented on. >>>>>>> >>>>>>> >>>>>> For anyone interested in the "underlying architectures arpanet was implemented on", I suggest looking at: >>>>>> >>>>>> https://walden-family.com/bbn/imp-code.pdf >>>>>> >>>>>> Dave Walden was one of the original Arpanet programmers. He literally wrote the code. This paper describes how the Arpanet software and hardware were created. Part 2 of his paper describes more recent (2010s) work to resurrect the original IMP code and get it running again to create the original 4-node Arpanet network as it was in 1970. The code is publicly available - so anyone can look at it, and even get it running again on your own modern hardware. Check out the rest of the walden-family website. >>>>>> >>>>>> When Arpanet was being constructed, microprocessors such as the Intel 4004 did not yet exist. Neither did Unix, the precursor to Linux. Computers were quite different - only one processor, no cores, threads, or such. Lots of boards, each containing a few logic gates, interconnected by wires. Logic operated at speeds of perhaps a Megahertz, rather than Gigahertz. Memory was scarce, measured in Kilobytes, rather than Gigabytes. Communication circuits came in Kilobits per second, not Gigabits. Persistent storage (disks, drums) were acquired in Megabytes, not Terabytes. Everything also cost a lot more than today. >>>>>> >>>>>> Computing engineering was quite different in 1969 from today. Every resource was scarce and expensive. Much effort went towards efficiency, getting every bit of work out of the available hardware. As technology advanced and the Arpanet evolved into the Internet, I often wonder how the attitudes and approaches to computing implementations changed over that history. We now have the luxury of much more powerful hardware, costing a tiny fraction of what a similar system might have cost in the Arpanet era. How did hardware and software engineering change over that time? >>>>>> >>>>>> Curiously, my multi-core desktop machine today, with its gigabytes of memory, terabytes of storage, and gigabits/second network, running the Ubuntu version of Linux, takes longer to "boot up" and be ready to work for me than the PDP-10 did, back when I used that machine on the Arpanet in the 1970s. I sometimes wonder what it's doing while executing those trillions of instructions to boot up. >>>>>> >>>>>> Jack Haverty >>>>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From craig at tereschau.net Thu Oct 3 03:15:55 2024 From: craig at tereschau.net (Craig Partridge) Date: Thu, 3 Oct 2024 04:15:55 -0600 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: On Wed, Oct 2, 2024 at 11:18?PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > [changed the subject to reflect the content...] > > No such tests or testing program was defined for Gateways, or anything > else AFAIK - such as DNS servers, mail, file, telnet servers and > clients, etc. TCP and IP were subject to testing, but other important > technology, such as ICMP, was not. > > The need for such tests and certifications was noted on the ICCB list of > "things we still need to do", circa 1983. > > The issue was picked up again in the late 1980s as NSF was working to make NSFNET happen. The first realization was that there was not a list of "these are the RFCs [and their modifying RFCs/best practices/whatever]" that a router must implement. So NSFNET participants had trouble specifying what their router needs were to vendors. Bob Braden and Jon Postel were tasked with creating a router/gateway profile, RFC 1009, which notably still uses the term router and gateway semi-interchangeably. RFC 1009 was a big step forward, but (quietly) a number of folks also reacted it was well short of what was required. It was a tutorial of about 50 pages, rather than a firm specification of "do this", "don't do this". It was an awkward document to use in a procurement. So when NSF encouraged the IETF to create a similar host requirements, a bunch of the not quite happy with RFC 1009 folks joined together to work with Bob Braden to try to do a better requirements document. And mostly, in my biased view (I was a participant), did a pretty good job -- 200 pages of dense requirements split over two RFCs (1122 and 1123). The group also developed the now familiar "MUST", "SHOULD", "MAY" terminology that defined conformance with the requirements. Bob deserves huge credit for stewarding the effort. Based on the success of Host Requirements, folks turned around to look at router requirements again -- it took years until finally, RFC 1812 appeared (c. 175 pages). And, I think (not sure), RFC 1812 was only that short because people went back and updated RFCs (a chunk of Host Requirements was text saying "oh, by the way, you MUST NOT do X and MUST do Y as documented in paper Z"). Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. From lixia at cs.ucla.edu Wed Oct 2 07:46:30 2024 From: lixia at cs.ucla.edu (Lixia Zhang) Date: Wed, 2 Oct 2024 07:46:30 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <14583F4D-543F-4F12-B234-C0627C393538@cs.ucla.edu> References: <6f92fbaf-7c12-4b2d-86e7-340df85f7070@cis.upenn.edu> <14583F4D-543F-4F12-B234-C0627C393538@cs.ucla.edu> Message-ID: <0E120A8F-116E-4368-949D-E85CC4B658DE@cs.ucla.edu> Hi Michael, A very long time no "see"! 
(a few decades now?) I'm not on this internet-history list, Len forwarded the following msg to me. Just to add a bit fact: I started at MIT September 1981, the same month of RFC 791-793 publication (IP/ICMP/TCP specifications). I still recall that Dave (Clark) handed me a printed copy and said "these are hot off the press, you read them". In my early years, after I deciphered Noel's C-Gateway implementation, I did look into using IP source quench for congestion control (I guess that's why the mentioning in the report Brian pointed to). I can't recall I got any simulation results, but do remember the problem I ran into: IP forwarding was stateless, a gateway either had to send ICMP source quench for every dropped IP packet, or otherwise maintain some state... The "alternative which seems considerably more workable" might be referring to an idea for having the sending host pace out packets, and I recall Van claimed at the time that rate-based control "simply wont work" (now BBR does exactly that, calculating packet pacing rate:-) Lixia > From: Michael Greenwald via Internet-history > Subject: Re: [ih] booting linux on a 4004 > Date: October 1, 2024 at 3:53:31?PM PDT > To: Brian E Carpenter , internet-history at elists.isoc.org > Cc: Michael Greenwald , "greenwald at cs.stanford.edu" > Reply-To: greenwald at cis.upenn.edu > > It wasn't Lixia (I believe it predated her arrival at MIT). > It wasn't John Nagle, who wasn't (to my knowledge) at MIT at the time. > As I said it had little impact, and is primarily of historical interest. But we can ask Dave Clark if anyone cares. > > On 10/1/24 3:22 PM, Brian E Carpenter wrote: >> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>> >>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> From: Barbara Denny >>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>> thesis about the use of Source Quench for congestion control. I believe >>> it included some experiments (maybe all artificial, or only through >>> simulation). >>> I don't think it had much impact on the rest of the world. >> >> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >> I guess he probably wasn't an MIT undergrad. >> >> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >> had played around with ideas (e.g. RFC 1016) but it seems that basically >> it was no use. >> >> A bit more Google found this, however: >> >> "4.3. Internet Congestion Control >> Lixia Zhang began a study of network resource allocation techniques suitable for >> the DARPA Internet. The Internet currently has a simple technique for resource >> allocation, called "Source Quench." >> Simple simulations have shown that this technique is not effective, and this work >> has produced an alternative which seems considerably more workable. Simulation >> of this new technique is now being performed." 
>> >> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >> >> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! >> >> Brian Carpenter > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From lixia at cs.ucla.edu Wed Oct 2 22:10:07 2024 From: lixia at cs.ucla.edu (Lixia Zhang) Date: Wed, 2 Oct 2024 22:10:07 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: <32293b4f-ce9b-4bae-958b-fc9bdba4b773@gmail.com> References: <6f92fbaf-7c12-4b2d-86e7-340df85f7070@cis.upenn.edu> <14583F4D-543F-4F12-B234-C0627C393538@cs.ucla.edu> <0E120A8F-116E-4368-949D-E85CC4B658DE@cs.ucla.edu> <32293b4f-ce9b-4bae-958b-fc9bdba4b773@gmail.com> Message-ID: > On Oct 2, 2024, at 9:25?AM, Michael Greenwald wrote: > Hi Lixia! (Yes, it's been a long time. At least a decade or 2) > My memory is (no surprise) cloudy. For some reason I thought you arrived a year or two later than '81. > I have no memory (again, no surprise) of what, specifically, your promising "alternative" was, but an early reference to (virtual)clock based control of packets sounds plausible. > Thanks for remembering the Virtual Clock work! That's actually about packet queuing order at routers. The host packet pacing idea was sender control. I believe that Van said rate-control wouldn't work but window-control would was because window control has built-in feedback loop (the ACKs); he might have thought rate-control as open-loop (which was not the case, though no one figured out how to use source quench feedback). > I'm not shocked by the difference between Van's comment back then and BBR now. There's never a guarantee that claims that something "simply cannot work" or "will obviously help" back then (or even now) will hold up over long time periods, unless there is a formal proof or a lot of experimental validation. (And even then conditions change). > > About the work that I thought predated you: Dave Clark remembers that there was an undergraduate who did work on Source Quench, but not his name, nor details of what he did. So at least I didn't completely hallucinate.... > I had some vague memory about this too, and the problem was unsure the right thing to do when receiving source quenches. Lixia From jeanjour at comcast.net Thu Oct 3 08:50:35 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 11:50:35 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> Message-ID: Jack, Good stuff. I agree and as I said before SQ alone is not sufficient unless the action to be taken is also defined, both when to send it and what to do when it arrives. Raj and KK said that ECN should be sent when the average queue length was greater than or equal to 1. This is very early and gives the senders time to back off before packets are dropped (hopefully) and retransmissions are generated. 
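To make that rule concrete, here is a toy sketch of the signal as a router might generate it. The exponentially weighted average is a simplification of the regeneration-cycle average in the DEC reports, and the names and smoothing gain are illustrative, not taken from any real implementation.

/* Toy sketch of the DECbit-style rule: signal congestion (set an
 * ECN-like bit, or address a quench to the source) whenever the
 * average queue length reaches 1 packet. */
#include <stdbool.h>

struct out_queue {
    unsigned len;        /* instantaneous length, in packets */
    double   avg_len;    /* smoothed length */
};

#define GAIN 0.02        /* smoothing gain; illustrative value */

bool should_signal(struct out_queue *q)
{
    q->avg_len += GAIN * ((double)q->len - q->avg_len);
    return q->avg_len >= 1.0;    /* signal at the knee, well before loss */
}

/* per packet, on enqueue:
 *     if (should_signal(q))
 *         mark_or_quench(pkt);      hypothetical notification hook
 */

The point of the >= 1 threshold is exactly the one made above: the notification goes out while the queue is still short, rather than after it has overflowed and retransmissions are already on the way.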
TCP by using implicit notification waits until the queue is full and packets are being dropped, and will continue to be dropped (it would seem) until the senders notice the lost Acks. This would appear to generate a lot of retransmissions.

A question for you: It has been my impression that with the RFNMs between IMPs, congestion would not have occurred within the IMP subnet (or very rarely).* However, there would have been congestion at the gateways as you describe. Is that correct?

Take care,
John

* Early on there were some deadlock conditions caused by the fact that a message could be 8 packets and was reassembled in the IMP before being delivered to the host, but that isn't congestion.

> On Oct 2, 2024, at 20:08, Jack Haverty via Internet-history wrote:
>
> Re: Source Quench...
>
> It's been 40+ years, but I remember meetings where Source Quench was first discussed. My reaction was that it was too simplistic and wouldn't be effective. At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40. When I asked what a TCP should do when it received a SQ, no one could provide much of an answer. If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow?
>
> Other implementors had different ideas about how to handle an incoming SQ. One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent. So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire. You knew it had been discarded, so you should retransmit it immediately.
>
> In my TCP, I think I just incremented a counter when I received a SQ. Could always change it later....
>
> At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue. There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner workings as issues were identified during operations - see, for example, DTIC reports accessible as ADA086338, and ADA086340. There are many others describing the Arpanet experience. In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design".
>
> There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs. Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer. The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served.
>
> Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them. Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP. Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels. "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data.
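As an aside, a toy Python sketch of the host-side discipline this implies (what is called "RFNM counting" below): hold back new messages to a destination while too many are still awaiting their RFNMs. The limit of eight outstanding messages and the data structures are assumptions for illustration, not a description of any real 1822 implementation.

    class RfnmCountingSender:
        """Illustrative only: hold traffic until RFNMs come back."""

        MAX_OUTSTANDING = 8   # assumed per-destination limit, for illustration

        def __init__(self, transmit):
            self.transmit = transmit   # callable that hands a message to the IMP
            self.outstanding = {}      # destination -> messages awaiting a RFNM
            self.backlog = {}          # destination -> messages held back locally

        def send(self, dest, msg):
            if self.outstanding.get(dest, 0) < self.MAX_OUTSTANDING:
                self.outstanding[dest] = self.outstanding.get(dest, 0) + 1
                self.transmit(dest, msg)
            else:
                # Wait rather than risk the IMP blocking the host interface.
                self.backlog.setdefault(dest, []).append(msg)

        def on_rfnm(self, dest):
            self.outstanding[dest] -= 1
            held = self.backlog.get(dest, [])
            if held:
                self.send(dest, held.pop(0))

The point of the sketch is only that the sender's own bookkeeping, driven by RFNMs, is what keeps it from triggering the harsher flow control described next.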
> > The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs. That would effectively block all communications from the blocked host to anywhere else on the Arpanet. By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter. Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it. I have wondered how many implementations actually did. > > All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail. The ancient IMP code itself is even available online today. As always, the ultimate documentation is the code itself. But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s. It's not easy to figure out how it worked. > > The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes. In the earliest gateways, "getting a timestamp" from the processor wasn't hard. It was impossible. The gateway hardware simply didn't have any way to measure time. > > IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back. They could calculate the delay along a route. > > Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit. So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available. That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. > > Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet. But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. > > Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports. The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". > > In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones. If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. > > Hope this helps explain how we got from there to here... > Jack Haverty > > > > > > On 10/2/24 15:21, Dave Taht via Internet-history wrote: >> I wish I had had the time and resources to (help) write more papers. (For >> example there isn't much on "drop head queueing") >> >> fq_codel is now a linux-wide default and has the following unique >> properties: >> >> codel queue management, which measure the time a packet spends in a queue >> and gradually attempts to find an optimum point for queue length, which is >> 5ms by default. (it has been tested in software below 250us in the DC). 
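For concreteness, a much-simplified Python sketch of that idea: judge the queue by how long packets sit in it, and drop from the head once the sojourn time has stayed above a target for a whole interval. The 5 ms and 100 ms constants echo the defaults mentioned above; the real CoDel control law (drop spacing shrinking with the inverse square root of the drop count) is deliberately omitted.

    import time
    from collections import deque

    TARGET = 0.005     # 5 ms: acceptable time for a packet to sit in the queue
    INTERVAL = 0.100   # 100 ms: how long sojourn must stay high before dropping

    class CodelishQueue:
        """Illustrative only: time-in-queue management with head drop."""

        def __init__(self):
            self.q = deque()          # entries are (enqueue_time, packet)
            self.above_since = None   # when sojourn time first exceeded TARGET

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()   # serve or drop from the head
                sojourn = time.monotonic() - enq_time
                if sojourn < TARGET or not self.q:
                    self.above_since = None           # no standing queue
                    return packet
                if self.above_since is None:
                    self.above_since = time.monotonic()
                    return packet
                if time.monotonic() - self.above_since < INTERVAL:
                    return packet
                # Sojourn has stayed above TARGET for a full INTERVAL: treat this
                # as a standing queue, drop the packet, and look at the next one.
            return None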
>> There is another subsystem, called BQL, which attempts to limit bytes on >> the device txring to one interrupt's worth. (a pretty good explanation of >> modern layers here) [2] >> >> It drops from the head, not the tail of the queue, with a small (BQL or >> HTB) FIFO in front of the lowest bits of the hardware to account >> for interrupt latency. >> >> (I am kind of curious if a txring existed back in the day and how close an >> application sat to the hardware) >> >> Anecdote: when van and kathy were working on what became codel (january >> 2012), she rang me up one day and asked me just how much overhead there was >> in getting a timestamp from the hardware nowadays. And I explained that it >> was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs >> and so on and so forth, and she said thanks, and hung up. Getting a >> timestamp must have been mighty hard back in the day! >> >> The "flow queueing" mechanism sends packets that have an arrival rate of >> less than the departure rate of all the other flows, out first.[1] This is >> an improvement over prior FQ mechanisms like SFQ and DRR, which always put >> a new flow at the tail of the flow list. It is pretty amazing how often >> this works on real traffic. Also it automatically puts flows that build a >> queue into a queue that is managed by codel. >> >> One (eventual) benefit of these approaches, combined, is it makes delay >> based congestion control more feasible (indeed, >> BBR spends most of its time in this mode), but the flow isolation makes for >> most interactive traffic never being queued at all. >> >> IMHO the edges of the internet at least, would have been much better were >> some form of FQ always in it (which we kind of got from switched networks >> naturally) but the idea of FQ was roundly rejected in the first ietf >> meeting in 1989, and it's been uphill ever since. >> >> Just to touch upon pacing a bit - pacing is the default for the linux stack >> no matter the overlying qdisc or congestion control algorithm. >> I don't know if anyone has ever attempted to compare pacing w/cubic vs >> pacing w/bbr, and very few, until recently, have >> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >> >> [1]https://ieeexplore.ieee.org/document/8469111 >> [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >> [3] >> https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >> >> Varying the packet pacing to get a pre-congestion notification is a paper >> I'd like more to pursue. >> https://www.usenix.org/system/files/atc24-han.pdf >> (I so want to believe this paper) >> >> A tiny bit more below.... >> >> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> The response to bufferbloat has always struck me as looking for your keys >>> under a street light when that wasn?t where you dropped them but there is >>> light there. >>> >>> Initially, bufferbloat was not a problem because memory was expensive and >>> when TCP ran out of buffers (or got low), the connection simply blocked the >>> sending application until buffers were available. This was still true with >>> the advent of NIC cards. Memory was still tight. However, as memory got >>> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >>> no one told the application to slow down or wait, so there was local >>> congestion collapse: bufferbloat. 
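One concrete way to recover the old "block the sender" behaviour, sketched in Python: cap the socket send buffer so a blocking send stalls the application once that small buffer fills, instead of letting data pile up below TCP. The host, port, and 16 KB figure are placeholders for illustration.

    import socket

    def open_backpressure_socket(host="example.net", port=9999):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        # Keep the kernel send buffer small (the kernel may round the value),
        # so the writer feels back-pressure instead of filling a huge buffer.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)
        s.connect((host, port))
        return s

With the socket left in blocking mode, sendall() on a large buffer now returns only as the receiver and the network actually drain the data, so the application is told to wait, which is exactly the signal described above as missing.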
>>> >>> One part of the solution would be interface flow control between the >>> sending application and TCP (you would have thought that would have >>> occurred to implementers any way, it is obvious) and/or simply restrict the >>> amount of buffers TCP has available so that it runs out and blocks the >>> sending the application before things get bad and opens up when buffers are >>> available. But virtually all of the papers I see are on different >>> drop-strategies, and oddly enough they never find their keys. >>> >> don't have a lot of time for papers! The most modern stuff for tcp is >> using EDF (earliest deadline first) to manage the packet pacing. >> There are virtual and actual physical devices nowadays that take a "time to >> be sent" and packet. This paper was highly influential: >> >> https://saeed.github.io/files/carousel-sigcomm17.pdf >> >> the latest commit to the linux kernel about it: >> >> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >> >> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so >> many things... >> >> >>> Take care, >>> John >>> >>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>> Just throwing some thoughts out here ...... >>>> I can see how this happens in a FIFO queuing world. However a lot of >>> work has gone into fair queuing starting in the late 80s. Just wondering >>> if anyone has done work utilizing fair queuing and source quench. For >>> example, I think I can see how to use fair queuing information to better >>> select who to send a source quench to. At least I can see how to do it with >>> Stochastic Fairness Queueing since I worked on it and I remember a fair >>> amount about how it was implemented. I wouldn't be able to provide a >>> guarantee that the wrong host would never receive a source quench but the >>> likelihood should be much lower. Considering whether the use of NAT >>> creates undesirable behavior is also important and I am sure there are >>> probably other cases that need to be checked. >>>> Hum, it might also be interesting to speculate whether this could have >>> any effect on bufferbloat but I fess up I need to learn more about the work >>> done in the area of bufferbloat. I was involved with other things when >>> this started to appear on my radar screen as a hot topic. I will admit I >>> wish I had done more work on possible buffering effects from implementation >>> choices at the time I did work on SFQ but there were contractual >>> obligations that restricted how much time I could devote to the SFQ part of >>> the project. >>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does >>> anyone have any idea about its use in the Internet? >>>> barbara >>>> >>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >>> vint at google.com> wrote: >>>> One basic problem with blaming the "last packet that caused intermediate >>> router congestion" is that it usually blamed the wrong source, among other >>> problems. Van Jacobson was/is the guru of flow control (among others) who >>> might remember more. >>>> v >>>> >>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>> In a brief attempt to try to find some information about the early MIT >>> work you mentioned, I ended up tripping on this Final Report from ISI in >>> DTIC. 
It does talk a fair amount about congestion control and source >>> quench (plus other things that might interest people). The period of >>> performance is 1987 to 1990 which is much later than I was considering in >>> my earlier message. >>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>> Even though the report mentions testing on DARTnet, I don't remember >>> anything about this during our DARTnet meetings. I did join the project >>> after the start so perhaps the work was done before I began to participate. >>> I also couldn't easily find the journal they mention as a place for >>> publishing their findings. I will have more time later to see if I can >>> something that covers this testing. >>>> barbara >>>> >>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>> Internet-history wrote: >>>> multicast is also an issue but I do not recall if that was one that >>> Craig & I talked about >>>> Scott >>>> >>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>>> I remember talking with Craig Partridge (on a flight to somewhere) >>> about source quench >>>>> during the time when 1812 was being written - I do not recall >>>>> the specific issues but I recall that there were more than one issue >>>>> >>>>> (if DoS was not an issue at the time, it should have been) >>>>> >>>>> Scott >>>>> >>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>> Forwarded for Barbara >>>>>>>> >>>>>>>> ==== >>>>>>>> >>>>>>>> From: Barbara Denny >>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>> I think congestion issues were discussed because I remember an ICMP >>> message type called source quench (now deprecated). It was used for >>> notifying a host to reduce the traffic load to a destination. I don't >>> remember hearing about any actual congestion experiments using this message >>> type. >>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >>> an >>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>> thesis about the use of Source Quench for congestion control. I >>> believe >>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>> simulation). >>>>>>> I don't think it had much impact on the rest of the world. >>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>> 1984). >>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>> I guess he probably wasn't an MIT undergrad. >>>>>> >>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>> People >>>>>> had played around with ideas (e.g. RFC 1016) but it seems that >>> basically >>>>>> it was no use. >>>>>> >>>>>> A bit more Google found this, however: >>>>>> >>>>>> "4.3. Internet Congestion Control >>>>>> Lixia Zhang began a study of network resource allocation techniques >>> suitable for >>>>>> the DARPA Internet. The Internet currently has a simple technique for >>> resource >>>>>> allocation, called "Source Quench." >>>>>> Simple simulations have shown that this technique is not effective, >>> and this work >>>>>> has produced an alternative which seems considerably more workable. >>> Simulation >>>>>> of this new technique is now being performed." 
>>>>>> >>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >>> ] >>>>>> Lixia was then a grad student under Dave Clark. Of course she's at >>> UCLA now. If she isn't on this list, she should be! >>>>>> Brian Carpenter >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> >>>> >>>> -- >>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >>> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>> >>>> until further notice >>>> >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gregskinner0 at icloud.com Thu Oct 3 09:02:08 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Thu, 3 Oct 2024 09:02:08 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004 Message-ID: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> Forwarded for Barbara ==== Having trouble emailing again so i did some trimming on the original message.... Putting my packet radio hat back on, a source quench message could help disambiguate whether loss in the network is due to congestion or something else (like in wireless, loss due to harsh environments, jamming, mobility). I also think it is not obvious what you should do when you receive a source quench, but to me trying to understand this is just part of trying to see if we can make things work better. How about what you could do when you don't receive a source quench but have experienced loss? How is network coding coming along these days? barbara From dhc at dcrocker.net Thu Oct 3 10:10:16 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 03 Oct 2024 17:10:16 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: <047fb33d-cd94-4bde-8678-f7662ab26b89@dcrocker.net> On 10/2/2024 10:18 PM, Jack Haverty via Internet-history wrote: > Today, there are thousands of RFCs defining protocols, algorithms, and > standards for the Internet.? AFAIK, there is no way to confirm that > any particular product implements any of these mechanisms, or if they > do, that the implementation is correct. Worse, there is no obvious and reliable way for a product developer to know exactly which RFCs they need to implement. Some years ago, I explored an approach that might have helped, but it never got any traction: Navigating RFCs:? Internet Technology Document Suites https://bbiw.net/clusters/ I still think something like this is needed. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From jeanjour at comcast.net Thu Oct 3 10:38:11 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 13:38:11 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> Message-ID: This is all well and good and actually quite interesting; but it doesn?t address the systems problems. As long as detection is implicit, it is predatory. The senders will react to conditions that did not occur in this layer and mechanisms in lower layers will have already reacted (having smaller scope), so there are conflicting reactions. Delay is going to be a noisy signal, which will result in false positives. Packets may have been discarded by a lower layer. Of course this falls prey to the ridiculous complaint we have seen that ?TCP doesn?t support wireless,? but it isn?t suppose to wireless is suppose to support TCP. However, if there is congestion in the lower layers (doing what is necessary to support TCP), you don?t want TCP reacting to it. Congestion in TCP is difficult if not impossible to coordinate with QoS. QoS mechanisms are primarily in the layer below. As you describe there is some loose coordination but it is far from sufficient. As I said, I have not been impressed by these implicit indirect signals of congestion. Take care, John > On Oct 2, 2024, at 18:21, Dave Taht wrote: > > I wish I had had the time and resources to (help) write more papers. (For example there isn't much on "drop head queueing") > > fq_codel is now a linux-wide default and has the following unique properties: > > codel queue management, which measure the time a packet spends in a queue and gradually attempts to find an optimum point for queue length, which is 5ms by default. (it has been tested in software below 250us in the DC). There is another subsystem, called BQL, which attempts to limit bytes on the device txring to one interrupt's worth. (a pretty good explanation of modern layers here) [2] > > It drops from the head, not the tail of the queue, with a small (BQL or HTB) FIFO in front of the lowest bits of the hardware to account > for interrupt latency. > > (I am kind of curious if a txring existed back in the day and how close an application sat to the hardware) > > Anecdote: when van and kathy were working on what became codel (january 2012), she rang me up one day and asked me just how much overhead there was in getting a timestamp from the hardware nowadays. And I explained that it was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs and so on and so forth, and she said thanks, and hung up. Getting a timestamp must have been mighty hard back in the day! > > The "flow queueing" mechanism sends packets that have an arrival rate of less than the departure rate of all the other flows, out first.[1] This is an improvement over prior FQ mechanisms like SFQ and DRR, which always put a new flow at the tail of the flow list. It is pretty amazing how often this works on real traffic. Also it automatically puts flows that build a queue into a queue that is managed by codel. 
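A compact Python sketch of that "new flows first" idea, stripped to its essentials: hash arriving packets into per-flow queues, serve flows that were idle (sparse flows) ahead of flows with a standing backlog, and demote a flow to the old list once it builds one. The per-flow CoDel instances and byte quanta of the real fq_codel are deliberately omitted here.

    from collections import deque, defaultdict

    class FlowQueueSketch:
        """Illustrative only: sparse flows are served before backlogged ones."""

        def __init__(self):
            self.flows = defaultdict(deque)   # flow id -> queued packets
            self.new_flows = deque()          # flows that arrived while idle
            self.old_flows = deque()          # flows with a standing backlog

        def enqueue(self, flow_id, packet):
            q = self.flows[flow_id]
            if not q and flow_id not in self.new_flows and flow_id not in self.old_flows:
                self.new_flows.append(flow_id)   # a sparse flow jumps ahead
            q.append(packet)

        def dequeue(self):
            for schedule in (self.new_flows, self.old_flows):
                while schedule:
                    flow_id = schedule.popleft()
                    q = self.flows[flow_id]
                    if not q:
                        continue                 # flow went idle; forget it
                    packet = q.popleft()
                    if q:                        # still backlogged: old list
                        self.old_flows.append(flow_id)
                    return packet
            return None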
> > One (eventual) benefit of these approaches, combined, is it makes delay based congestion control more feasible (indeed, > BBR spends most of its time in this mode), but the flow isolation makes for most interactive traffic never being queued at all. > > IMHO the edges of the internet at least, would have been much better were some form of FQ always in it (which we kind of got from switched networks naturally) but the idea of FQ was roundly rejected in the first ietf meeting in 1989, and it's been uphill ever since. > > Just to touch upon pacing a bit - pacing is the default for the linux stack no matter the overlying qdisc or congestion control algorithm. > I don't know if anyone has ever attempted to compare pacing w/cubic vs pacing w/bbr, and very few, until recently, have > attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] > > [1] https://ieeexplore.ieee.org/document/8469111 > [2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 > [3] https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable > > Varying the packet pacing to get a pre-congestion notification is a paper I'd like more to pursue. > https://www.usenix.org/system/files/atc24-han.pdf > (I so want to believe this paper) > > A tiny bit more below.... > > On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history > wrote: >> The response to bufferbloat has always struck me as looking for your keys under a street light when that wasn?t where you dropped them but there is light there. >> >> Initially, bufferbloat was not a problem because memory was expensive and when TCP ran out of buffers (or got low), the connection simply blocked the sending application until buffers were available. This was still true with the advent of NIC cards. Memory was still tight. However, as memory got cheap and NIC cards had oceans of memory, TCP never got low on buffers and no one told the application to slow down or wait, so there was local congestion collapse: bufferbloat. >> >> One part of the solution would be interface flow control between the sending application and TCP (you would have thought that would have occurred to implementers any way, it is obvious) and/or simply restrict the amount of buffers TCP has available so that it runs out and blocks the sending the application before things get bad and opens up when buffers are available. But virtually all of the papers I see are on different drop-strategies, and oddly enough they never find their keys. > > don't have a lot of time for papers! The most modern stuff for tcp is using EDF (earliest deadline first) to manage the packet pacing. > There are virtual and actual physical devices nowadays that take a "time to be sent" and packet. This paper was highly influential: > > https://saeed.github.io/files/carousel-sigcomm17.pdf > > the latest commit to the linux kernel about it: > > https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ > > PS IMHO eric dumazet belongs a spot in the internet hall of fame for so many things... > >> >> Take care, >> John >> >> > On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history > wrote: >> > >> > Just throwing some thoughts out here ...... >> > I can see how this happens in a FIFO queuing world. However a lot of work has gone into fair queuing starting in the late 80s. Just wondering if anyone has done work utilizing fair queuing and source quench. 
For example, I think I can see how to use fair queuing information to better select who to send a source quench to. At least I can see how to do it with Stochastic Fairness Queueing since I worked on it and I remember a fair amount about how it was implemented. I wouldn't be able to provide a guarantee that the wrong host would never receive a source quench but the likelihood should be much lower. Considering whether the use of NAT creates undesirable behavior is also important and I am sure there are probably other cases that need to be checked. >> > Hum, it might also be interesting to speculate whether this could have any effect on bufferbloat but I fess up I need to learn more about the work done in the area of bufferbloat. I was involved with other things when this started to appear on my radar screen as a hot topic. I will admit I wish I had done more work on possible buffering effects from implementation choices at the time I did work on SFQ but there were contractual obligations that restricted how much time I could devote to the SFQ part of the project. >> > Just curious, ECN (Explicit Congestion Notification) is optional . Does anyone have any idea about its use in the Internet? >> > barbara >> > >> > On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf > wrote: >> > >> > One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. >> > >> > v >> > >> > On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history > wrote: >> > >> > In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people). The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. >> > >> > https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >> > >> > Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings. I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can something that covers this testing. 
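Barbara's suggestion above, using fair-queueing state to pick the source-quench target, can be sketched briefly: if the router already keeps per-flow queues (SFQ-style), it can aim the ICMP source quench at the source holding the largest backlog rather than at whoever sent the packet that happened to overflow the buffer. The flow-table shape below is an assumption for illustration only.

    def pick_quench_target(flow_queues):
        """flow_queues maps (src_addr, dst_addr) -> list of queued packets.
        Returns the source address with the largest backlog, or None."""
        if not flow_queues:
            return None
        (src, _dst), _queue = max(flow_queues.items(), key=lambda kv: len(kv[1]))
        return src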
>> > >> > barbara >> > >> > On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history > wrote: >> > >> > multicast is also an issue but I do not recall if that was one that Craig & I talked about >> > >> > Scott >> > >> >> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history > wrote: >> >> >> >> I remember talking with Craig Partridge (on a flight to somewhere) about source quench >> >> during the time when 1812 was being written - I do not recall >> >> the specific issues but I recall that there were more than one issue >> >> >> >> (if DoS was not an issue at the time, it should have been) >> >> >> >> Scott >> >> >> >>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history > wrote: >> >>> >> >>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >> >>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >> >>>>> Forwarded for Barbara >> >>>>> >> >>>>> ==== >> >>>>> >> >>>>> From: Barbara Denny > >> >>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >> >>>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >> >>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >> >>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >> >>>> thesis about the use of Source Quench for congestion control. I believe >> >>>> it included some experiments (maybe all artificial, or only through >> >>>> simulation). >> >>>> I don't think it had much impact on the rest of the world. >> >>> >> >>> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >> >>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >> >>> I guess he probably wasn't an MIT undergrad. >> >>> >> >>> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >> >>> had played around with ideas (e.g. RFC 1016) but it seems that basically >> >>> it was no use. >> >>> >> >>> A bit more Google found this, however: >> >>> >> >>> "4.3. Internet Congestion Control >> >>> Lixia Zhang began a study of network resource allocation techniques suitable for >> >>> the DARPA Internet. The Internet currently has a simple technique for resource >> >>> allocation, called "Source Quench." >> >>> Simple simulations have shown that this technique is not effective, and this work >> >>> has produced an alternative which seems considerably more workable. Simulation >> >>> of this new technique is now being performed." >> >>> >> >>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >> >>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >> >>> >> >>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! 
>> >>> >> >>> Brian Carpenter >> > >> > >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> > >> > >> > -- >> > Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >> > >> > >> > until further notice >> > >> > >> > >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Dave T?ht CSO, LibreQos From jack at 3kitty.org Thu Oct 3 10:43:25 2024 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 3 Oct 2024 10:43:25 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Hi Craig, Thanks for the history; it helped me fill in the gaps of what happened after I was directly involved. But... (there's always a but) Those RFCs are not part of the issue I was remembering.? That issue was "Certification", which is distinct from "Specification". Specifications are documents which delineate what an implementation MUST, SHOULD, MAY, or MAY NOT do.? Certification is a process whereby a specific implementation is tested, often by an independent third party, to see if it actually does what the Specifications dictate. Around the time that TCP/IP became a DoD Standard in the 1980s, NIST/NBS also created a Certification methodology.?? DoD altered its procurement regulations to require such Certifications for everything it purchased.? I've never learned who specifically made either of those things happen.? But someone did.? Perhaps Vint remembers more? The RFCs you mention are a follow-on to the Specifications that Jon orchestrated.? Bob Braden was also on the ICCB during that time, so it was probably natural for him to champion subsequent rounds of Specifications for NSF. But, AFAIK, no one continued the work that NIST/NBS had started, to further evolve Certification for the Internet.? I also never heard that DoD's procurement regulations were changed to require compliance with additional RFCs.?? Maybe it happened, but I suspect they couldn't do that unless there was some well-defined way to Certify that a product met those Specifications. It's curious to me that such mechanisms have not been created for the Internet Industry.??? Other computing technologies did develop such mechanisms.? For example, in the Database Industry where I worked in the 1990s, there were concepts like "Transactions", and testing procedures to see how a particular software/hardware combination actually worked in standard tests.? For example, vendors touted their particular hardware and software products as Certified to achieve some number of TPS (Transactions Per Second). Similarly, even today there are lots of "Benchmarks" that are used today to evaluate computers and their component software and hardware.?? 
Magazines and websites compare products and show graphs indicating how their test results compare, so that customers can make informed purchase decisions based on independent test results. Most devices we can now buy contain hardware and software that enables them to interact on the Internet.? But, other than raw speed, I've never seen any of such test results that even mention conformance with any RFC Specifications. Why not? IMHO, such testing and certification is more important in a networked environment than in a single computer.? In network environments, there are at least two, and probably many more computers involved in anything a user does.? Some of them are servers, some are clients, some are routers, modems, switches, etc. etc.? All of these affect the users' experience, as well as affecting the network and the experience of others using it. The ongoing discussions about source quench, congestion, queue management, et al made we wonder.? My home LAN has more than 50 "devices" attached to it, and contains a bunch of switches, modems, routers, cables, and other such stuff we all grew up with. How can I tell if they all implement ??? Or if any of them do? Jack Haverty On 10/3/24 03:15, Craig Partridge wrote: > > > On Wed, Oct 2, 2024 at 11:18?PM Jack Haverty via Internet-history > wrote: > > [changed the subject to reflect the content...] > > No such tests or testing program was defined for Gateways, or > anything > else AFAIK - such as DNS servers, mail, file, telnet servers and > clients, etc.? TCP and IP were subject to testing, but other > important > technology, such as ICMP, was not. > > The need for such tests and certifications was noted on the ICCB > list of > "things we still need to do", circa 1983. > > > The issue was picked up again in the late 1980s as NSF was working to > make NSFNET happen. > > The first realization was that there was not a list of "these are the > RFCs [and their modifying RFCs/best practices/whatever]" that a router > must implement.? So NSFNET participants had trouble specifying what > their router needs were to vendors. Bob Braden and Jon Postel were > tasked with creating a router/gateway profile, RFC 1009, which notably > still uses the term router and gateway semi-interchangeably. > > RFC 1009 was a big step forward, but (quietly) a number of folks also > reacted it was well short of what was required.? It was a tutorial?of > about?50 pages, rather than a firm specification of "do this", "don't > do this".? It was an awkward document to use in a procurement. > > So when NSF encouraged the IETF to create a similar host requirements, > a bunch of the not quite happy with RFC 1009 folks joined together to > work with Bob Braden to try to do a better requirements document.? And > mostly, in my biased view (I was a participant), did a pretty good job > -- 200 pages of dense requirements split over two RFCs (1122 and > 1123).? The group also developed the now familiar "MUST", "SHOULD", > "MAY" terminology that?defined?conformance with the requirements.? Bob > deserves huge credit for stewarding the effort. > > Based on the success of Host Requirements, folks turned around to look > at router requirements again -- it took years until finally, RFC 1812 > appeared (c. 175 pages).? And, I think (not sure), RFC 1812 was only > that short because people went back and updated RFCs (a chunk of Host > Requirements was text saying "oh, by the way, you MUST?NOT do X and > MUST do Y as documented in paper Z"). 
> > Craig > -- > ***** > Craig Partridge's email account for professional society activities > and mailing lists. -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From steve at shinkuro.com Thu Oct 3 10:45:09 2024 From: steve at shinkuro.com (Steve Crocker) Date: Thu, 3 Oct 2024 13:45:09 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: Message-ID: <7C44799A-0A3E-424B-8B90-578373A4AB39@shinkuro.com> John, The RFNMs restricted flow on a per "connection" basis. As I accidentally demonstrated when I suggested to the folks at Tinker that they use eight parallel connections to maximize throughput, the early Arpanet could be brought down nearly instantly. The reassembly problem was separate and at least partly related to focusing on eight packet messages instead of two packet messages. Steve Sent from my iPhone > On Oct 3, 2024, at 1:38?PM, John Day via Internet-history wrote: > > ?This is all well and good and actually quite interesting; but it doesn?t address the systems problems. > > As long as detection is implicit, it is predatory. The senders will react to conditions that did not occur in this layer and mechanisms in lower layers will have already reacted (having smaller scope), so there are conflicting reactions. > > Delay is going to be a noisy signal, which will result in false positives. Packets may have been discarded by a lower layer. Of course this falls prey to the ridiculous complaint we have seen that ?TCP doesn?t support wireless,? but it isn?t suppose to wireless is suppose to support TCP. However, if there is congestion in the lower layers (doing what is necessary to support TCP), you don?t want TCP reacting to it. > > Congestion in TCP is difficult if not impossible to coordinate with QoS. QoS mechanisms are primarily in the layer below. As you describe there is some loose coordination but it is far from sufficient. > > As I said, I have not been impressed by these implicit indirect signals of congestion. > > Take care, > John > > >> On Oct 2, 2024, at 18:21, Dave Taht wrote: >> >> I wish I had had the time and resources to (help) write more papers. (For example there isn't much on "drop head queueing") >> >> fq_codel is now a linux-wide default and has the following unique properties: >> >> codel queue management, which measure the time a packet spends in a queue and gradually attempts to find an optimum point for queue length, which is 5ms by default. (it has been tested in software below 250us in the DC). There is another subsystem, called BQL, which attempts to limit bytes on the device txring to one interrupt's worth. (a pretty good explanation of modern layers here) [2] >> >> It drops from the head, not the tail of the queue, with a small (BQL or HTB) FIFO in front of the lowest bits of the hardware to account >> for interrupt latency. >> >> (I am kind of curious if a txring existed back in the day and how close an application sat to the hardware) >> >> Anecdote: when van and kathy were working on what became codel (january 2012), she rang me up one day and asked me just how much overhead there was in getting a timestamp from the hardware nowadays. And I explained that it was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs and so on and so forth, and she said thanks, and hung up. Getting a timestamp must have been mighty hard back in the day! 
>> >> The "flow queueing" mechanism sends packets that have an arrival rate of less than the departure rate of all the other flows, out first.[1] This is an improvement over prior FQ mechanisms like SFQ and DRR, which always put a new flow at the tail of the flow list. It is pretty amazing how often this works on real traffic. Also it automatically puts flows that build a queue into a queue that is managed by codel. >> >> One (eventual) benefit of these approaches, combined, is it makes delay based congestion control more feasible (indeed, >> BBR spends most of its time in this mode), but the flow isolation makes for most interactive traffic never being queued at all. >> >> IMHO the edges of the internet at least, would have been much better were some form of FQ always in it (which we kind of got from switched networks naturally) but the idea of FQ was roundly rejected in the first ietf meeting in 1989, and it's been uphill ever since. >> >> Just to touch upon pacing a bit - pacing is the default for the linux stack no matter the overlying qdisc or congestion control algorithm. >> I don't know if anyone has ever attempted to compare pacing w/cubic vs pacing w/bbr, and very few, until recently, have >> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >> >> [1] https://ieeexplore.ieee.org/document/8469111 >> [2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >> [3] https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >> >> Varying the packet pacing to get a pre-congestion notification is a paper I'd like more to pursue. >> https://www.usenix.org/system/files/atc24-han.pdf >> (I so want to believe this paper) >> >> A tiny bit more below.... >> >>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history > wrote: >>> The response to bufferbloat has always struck me as looking for your keys under a street light when that wasn?t where you dropped them but there is light there. >>> >>> Initially, bufferbloat was not a problem because memory was expensive and when TCP ran out of buffers (or got low), the connection simply blocked the sending application until buffers were available. This was still true with the advent of NIC cards. Memory was still tight. However, as memory got cheap and NIC cards had oceans of memory, TCP never got low on buffers and no one told the application to slow down or wait, so there was local congestion collapse: bufferbloat. >>> >>> One part of the solution would be interface flow control between the sending application and TCP (you would have thought that would have occurred to implementers any way, it is obvious) and/or simply restrict the amount of buffers TCP has available so that it runs out and blocks the sending the application before things get bad and opens up when buffers are available. But virtually all of the papers I see are on different drop-strategies, and oddly enough they never find their keys. >> >> don't have a lot of time for papers! The most modern stuff for tcp is using EDF (earliest deadline first) to manage the packet pacing. >> There are virtual and actual physical devices nowadays that take a "time to be sent" and packet. This paper was highly influential: >> >> https://saeed.github.io/files/carousel-sigcomm17.pdf >> >> the latest commit to the linux kernel about it: >> >> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >> >> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so many things... 
>> >>> >>> Take care, >>> John >>> >>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history > wrote: >>>> >>>> Just throwing some thoughts out here ...... >>>> I can see how this happens in a FIFO queuing world. However a lot of work has gone into fair queuing starting in the late 80s. Just wondering if anyone has done work utilizing fair queuing and source quench. For example, I think I can see how to use fair queuing information to better select who to send a source quench to. At least I can see how to do it with Stochastic Fairness Queueing since I worked on it and I remember a fair amount about how it was implemented. I wouldn't be able to provide a guarantee that the wrong host would never receive a source quench but the likelihood should be much lower. Considering whether the use of NAT creates undesirable behavior is also important and I am sure there are probably other cases that need to be checked. >>>> Hum, it might also be interesting to speculate whether this could have any effect on bufferbloat but I fess up I need to learn more about the work done in the area of bufferbloat. I was involved with other things when this started to appear on my radar screen as a hot topic. I will admit I wish I had done more work on possible buffering effects from implementation choices at the time I did work on SFQ but there were contractual obligations that restricted how much time I could devote to the SFQ part of the project. >>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does anyone have any idea about its use in the Internet? >>>> barbara >>>> >>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf > wrote: >>>> >>>> One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. >>>> >>>> v >>>> >>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history > wrote: >>>> >>>> In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people). The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. >>>> >>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>> >>>> Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings. I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can something that covers this testing. 
>>>> >>>> barbara >>>> >>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history > wrote: >>>> >>>> multicast is also an issue but I do not recall if that was one that Craig & I talked about >>>> >>>> Scott >>>> >>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history > wrote: >>>>> >>>>> I remember talking with Craig Partridge (on a flight to somewhere) about source quench >>>>> during the time when 1812 was being written - I do not recall >>>>> the specific issues but I recall that there were more than one issue >>>>> >>>>> (if DoS was not an issue at the time, it should have been) >>>>> >>>>> Scott >>>>> >>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history > wrote: >>>>>> >>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>> Forwarded for Barbara >>>>>>>> >>>>>>>> ==== >>>>>>>> >>>>>>>> From: Barbara Denny > >>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>> thesis about the use of Source Quench for congestion control. I believe >>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>> simulation). >>>>>>> I don't think it had much impact on the rest of the world. >>>>>> >>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>> I guess he probably wasn't an MIT undergrad. >>>>>> >>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >>>>>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>>>>> it was no use. >>>>>> >>>>>> A bit more Google found this, however: >>>>>> >>>>>> "4.3. Internet Congestion Control >>>>>> Lixia Zhang began a study of network resource allocation techniques suitable for >>>>>> the DARPA Internet. The Internet currently has a simple technique for resource >>>>>> allocation, called "Source Quench." >>>>>> Simple simulations have shown that this technique is not effective, and this work >>>>>> has produced an alternative which seems considerably more workable. Simulation >>>>>> of this new technique is now being performed." >>>>>> >>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>>>>> >>>>>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! 
>>>>>> >>>>>> Brian Carpenter >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> >>>> >>>> -- >>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>> >>>> >>>> until further notice >>>> >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> >> -- >> Dave T?ht CSO, LibreQos > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From craig at tereschau.net Thu Oct 3 10:45:57 2024 From: craig at tereschau.net (Craig Partridge) Date: Thu, 3 Oct 2024 11:45:57 -0600 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: Hi Jack: Complete agreement. My poorly expressed point in the larger history was it gets hard to certify what isn't specified and the Internet community had struggled to make clear what was specified. Craig On Thu, Oct 3, 2024 at 11:43?AM Jack Haverty wrote: > Hi Craig, > > Thanks for the history; it helped me fill in the gaps of what happened > after I was directly involved. > > But... (there's always a but) > > Those RFCs are not part of the issue I was remembering. That issue was > "Certification", which is distinct from "Specification". Specifications > are documents which delineate what an implementation MUST, SHOULD, MAY, or > MAY NOT do. Certification is a process whereby a specific implementation > is tested, often by an independent third party, to see if it actually does > what the Specifications dictate. > > Around the time that TCP/IP became a DoD Standard in the 1980s, NIST/NBS > also created a Certification methodology. DoD altered its procurement > regulations to require such Certifications for everything it purchased. > I've never learned who specifically made either of those things happen. > But someone did. Perhaps Vint remembers more? > > The RFCs you mention are a follow-on to the Specifications that Jon > orchestrated. Bob Braden was also on the ICCB during that time, so it was > probably natural for him to champion subsequent rounds of Specifications > for NSF. > > But, AFAIK, no one continued the work that NIST/NBS had started, to > further evolve Certification for the Internet. I also never heard that > DoD's procurement regulations were changed to require compliance with > additional RFCs. Maybe it happened, but I suspect they couldn't do that > unless there was some well-defined way to Certify that a product met those > Specifications. > > It's curious to me that such mechanisms have not been created for the > Internet Industry. Other computing technologies did develop such > mechanisms. 
For example, in the Database Industry where I worked in the > 1990s, there were concepts like "Transactions", and testing procedures to > see how a particular software/hardware combination actually worked in > standard tests. For example, vendors touted their particular hardware and > software products as Certified to achieve some number of TPS (Transactions > Per Second). > > Similarly, even today there are lots of "Benchmarks" that are used today > to evaluate computers and their component software and hardware. > Magazines and websites compare products and show graphs indicating how > their test results compare, so that customers can make informed purchase > decisions based on independent test results. > > Most devices we can now buy contain hardware and software that enables > them to interact on the Internet. But, other than raw speed, I've never > seen any of such test results that even mention conformance with any RFC > Specifications. > > Why not? > > IMHO, such testing and certification is more important in a networked > environment than in a single computer. In network environments, there are > at least two, and probably many more computers involved in anything a user > does. Some of them are servers, some are clients, some are routers, > modems, switches, etc. etc. All of these affect the users' experience, as > well as affecting the network and the experience of others using it. > > The ongoing discussions about source quench, congestion, queue management, > et al made we wonder. My home LAN has more than 50 "devices" attached to > it, and contains a bunch of switches, modems, routers, cables, and other > such stuff we all grew up with. > > How can I tell if they all implement ? Or if any of > them do? > > Jack Haverty > > > On 10/3/24 03:15, Craig Partridge wrote: > > > > On Wed, Oct 2, 2024 at 11:18?PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> [changed the subject to reflect the content...] >> >> No such tests or testing program was defined for Gateways, or anything >> else AFAIK - such as DNS servers, mail, file, telnet servers and >> clients, etc. TCP and IP were subject to testing, but other important >> technology, such as ICMP, was not. >> >> The need for such tests and certifications was noted on the ICCB list of >> "things we still need to do", circa 1983. >> >> > The issue was picked up again in the late 1980s as NSF was working to make > NSFNET happen. > > The first realization was that there was not a list of "these are the RFCs > [and their modifying RFCs/best practices/whatever]" that a router must > implement. So NSFNET participants had trouble specifying what their router > needs were to vendors. Bob Braden and Jon Postel were tasked with creating > a router/gateway profile, RFC 1009, which notably still uses the term > router and gateway semi-interchangeably. > > RFC 1009 was a big step forward, but (quietly) a number of folks also > reacted it was well short of what was required. It was a tutorial of > about 50 pages, rather than a firm specification of "do this", "don't do > this". It was an awkward document to use in a procurement. > > So when NSF encouraged the IETF to create a similar host requirements, a > bunch of the not quite happy with RFC 1009 folks joined together to work > with Bob Braden to try to do a better requirements document. And mostly, > in my biased view (I was a participant), did a pretty good job -- 200 pages > of dense requirements split over two RFCs (1122 and 1123). 
The group also > developed the now familiar "MUST", "SHOULD", "MAY" terminology > that defined conformance with the requirements. Bob deserves huge credit > for stewarding the effort. > > Based on the success of Host Requirements, folks turned around to look at > router requirements again -- it took years until finally, RFC 1812 appeared > (c. 175 pages). And, I think (not sure), RFC 1812 was only that short > because people went back and updated RFCs (a chunk of Host Requirements was > text saying "oh, by the way, you MUST NOT do X and MUST do Y as documented > in paper Z"). > > Craig > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From jeanjour at comcast.net Thu Oct 3 10:50:35 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 13:50:35 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <7C44799A-0A3E-424B-8B90-578373A4AB39@shinkuro.com> References: <7C44799A-0A3E-424B-8B90-578373A4AB39@shinkuro.com> Message-ID: Right. That was the deadlock condition I was thinking of. ;-) I didn?t know you had caused one. The one I knew about was the Xmas Eve one. John > On Oct 3, 2024, at 13:45, Steve Crocker wrote: > > John, > > The RFNMs restricted flow on a per "connection" basis. As I accidentally demonstrated when I suggested to the folks at Tinker that they use eight parallel connections to maximize throughput, the early Arpanet could be brought down nearly instantly. > > The reassembly problem was separate and at least partly related to focusing on eight packet messages instead of two packet messages. > > Steve > > Sent from my iPhone > >> On Oct 3, 2024, at 1:38?PM, John Day via Internet-history wrote: >> >> ?This is all well and good and actually quite interesting; but it doesn?t address the systems problems. >> >> As long as detection is implicit, it is predatory. The senders will react to conditions that did not occur in this layer and mechanisms in lower layers will have already reacted (having smaller scope), so there are conflicting reactions. >> >> Delay is going to be a noisy signal, which will result in false positives. Packets may have been discarded by a lower layer. Of course this falls prey to the ridiculous complaint we have seen that ?TCP doesn?t support wireless,? but it isn?t suppose to wireless is suppose to support TCP. However, if there is congestion in the lower layers (doing what is necessary to support TCP), you don?t want TCP reacting to it. >> >> Congestion in TCP is difficult if not impossible to coordinate with QoS. QoS mechanisms are primarily in the layer below. As you describe there is some loose coordination but it is far from sufficient. >> >> As I said, I have not been impressed by these implicit indirect signals of congestion. >> >> Take care, >> John >> >> >>> On Oct 2, 2024, at 18:21, Dave Taht wrote: >>> >>> I wish I had had the time and resources to (help) write more papers. (For example there isn't much on "drop head queueing") >>> >>> fq_codel is now a linux-wide default and has the following unique properties: >>> >>> codel queue management, which measure the time a packet spends in a queue and gradually attempts to find an optimum point for queue length, which is 5ms by default. (it has been tested in software below 250us in the DC). 
There is another subsystem, called BQL, which attempts to limit bytes on the device txring to one interrupt's worth. (a pretty good explanation of modern layers here) [2] >>> >>> It drops from the head, not the tail of the queue, with a small (BQL or HTB) FIFO in front of the lowest bits of the hardware to account >>> for interrupt latency. >>> >>> (I am kind of curious if a txring existed back in the day and how close an application sat to the hardware) >>> >>> Anecdote: when van and kathy were working on what became codel (january 2012), she rang me up one day and asked me just how much overhead there was in getting a timestamp from the hardware nowadays. And I explained that it was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs and so on and so forth, and she said thanks, and hung up. Getting a timestamp must have been mighty hard back in the day! >>> >>> The "flow queueing" mechanism sends packets that have an arrival rate of less than the departure rate of all the other flows, out first.[1] This is an improvement over prior FQ mechanisms like SFQ and DRR, which always put a new flow at the tail of the flow list. It is pretty amazing how often this works on real traffic. Also it automatically puts flows that build a queue into a queue that is managed by codel. >>> >>> One (eventual) benefit of these approaches, combined, is it makes delay based congestion control more feasible (indeed, >>> BBR spends most of its time in this mode), but the flow isolation makes for most interactive traffic never being queued at all. >>> >>> IMHO the edges of the internet at least, would have been much better were some form of FQ always in it (which we kind of got from switched networks naturally) but the idea of FQ was roundly rejected in the first ietf meeting in 1989, and it's been uphill ever since. >>> >>> Just to touch upon pacing a bit - pacing is the default for the linux stack no matter the overlying qdisc or congestion control algorithm. >>> I don't know if anyone has ever attempted to compare pacing w/cubic vs pacing w/bbr, and very few, until recently, have >>> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >>> >>> [1] https://ieeexplore.ieee.org/document/8469111 >>> [2] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >>> [3] https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >>> >>> Varying the packet pacing to get a pre-congestion notification is a paper I'd like more to pursue. >>> https://www.usenix.org/system/files/atc24-han.pdf >>> (I so want to believe this paper) >>> >>> A tiny bit more below.... >>> >>>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history > wrote: >>>> The response to bufferbloat has always struck me as looking for your keys under a street light when that wasn?t where you dropped them but there is light there. >>>> >>>> Initially, bufferbloat was not a problem because memory was expensive and when TCP ran out of buffers (or got low), the connection simply blocked the sending application until buffers were available. This was still true with the advent of NIC cards. Memory was still tight. However, as memory got cheap and NIC cards had oceans of memory, TCP never got low on buffers and no one told the application to slow down or wait, so there was local congestion collapse: bufferbloat. 
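In rough outline, the codel behavior described above can be sketched in a few lines. This is an illustration only (Python), condensed from the idea in RFC 8289 rather than from the Linux fq_codel code; in particular the control law that spaces successive drops by interval/sqrt(drop_count) is omitted, and the names and constants here are invented for the sketch.

    from collections import deque
    import time

    TARGET = 0.005      # 5 ms acceptable standing queue (codel's default target)
    INTERVAL = 0.100    # 100 ms window over which the queue must recover

    class CodelQueue:
        def __init__(self):
            self.q = deque()              # entries are (enqueue_time, packet)
            self.first_above_time = 0.0   # deadline by which sojourn must fall below TARGET

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()    # head of queue: the oldest packet
                sojourn = time.monotonic() - enq_time  # time this packet sat in the queue
                if sojourn < TARGET or not self.q:
                    self.first_above_time = 0.0        # queue is draining acceptably
                    return packet
                if self.first_above_time == 0.0:
                    # Sojourn time just went above TARGET; give the queue one
                    # INTERVAL to recover before dropping anything.
                    self.first_above_time = time.monotonic() + INTERVAL
                    return packet
                if time.monotonic() < self.first_above_time:
                    return packet
                # Still above TARGET after a full INTERVAL: drop this packet
                # from the head and look at the next one.  (Real CoDel spaces
                # further drops by INTERVAL/sqrt(count); omitted here.)
            return None

The point of dropping from the head rather than the tail is that the congestion signal reaches the sender a full queue-length sooner; fq_codel then applies this per flow, with the new-flow priority described above.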
>>>> >>>> One part of the solution would be interface flow control between the sending application and TCP (you would have thought that would have occurred to implementers any way, it is obvious) and/or simply restrict the amount of buffers TCP has available so that it runs out and blocks the sending the application before things get bad and opens up when buffers are available. But virtually all of the papers I see are on different drop-strategies, and oddly enough they never find their keys. >>> >>> don't have a lot of time for papers! The most modern stuff for tcp is using EDF (earliest deadline first) to manage the packet pacing. >>> There are virtual and actual physical devices nowadays that take a "time to be sent" and packet. This paper was highly influential: >>> >>> https://saeed.github.io/files/carousel-sigcomm17.pdf >>> >>> the latest commit to the linux kernel about it: >>> >>> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >>> >>> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so many things... >>> >>>> >>>> Take care, >>>> John >>>> >>>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history > wrote: >>>>> >>>>> Just throwing some thoughts out here ...... >>>>> I can see how this happens in a FIFO queuing world. However a lot of work has gone into fair queuing starting in the late 80s. Just wondering if anyone has done work utilizing fair queuing and source quench. For example, I think I can see how to use fair queuing information to better select who to send a source quench to. At least I can see how to do it with Stochastic Fairness Queueing since I worked on it and I remember a fair amount about how it was implemented. I wouldn't be able to provide a guarantee that the wrong host would never receive a source quench but the likelihood should be much lower. Considering whether the use of NAT creates undesirable behavior is also important and I am sure there are probably other cases that need to be checked. >>>>> Hum, it might also be interesting to speculate whether this could have any effect on bufferbloat but I fess up I need to learn more about the work done in the area of bufferbloat. I was involved with other things when this started to appear on my radar screen as a hot topic. I will admit I wish I had done more work on possible buffering effects from implementation choices at the time I did work on SFQ but there were contractual obligations that restricted how much time I could devote to the SFQ part of the project. >>>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does anyone have any idea about its use in the Internet? >>>>> barbara >>>>> >>>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf > wrote: >>>>> >>>>> One basic problem with blaming the "last packet that caused intermediate router congestion" is that it usually blamed the wrong source, among other problems. Van Jacobson was/is the guru of flow control (among others) who might remember more. >>>>> >>>>> v >>>>> >>>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history > wrote: >>>>> >>>>> In a brief attempt to try to find some information about the early MIT work you mentioned, I ended up tripping on this Final Report from ISI in DTIC. It does talk a fair amount about congestion control and source quench (plus other things that might interest people). The period of performance is 1987 to 1990 which is much later than I was considering in my earlier message. 
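Barbara's suggestion above is easy to picture: a fair-queueing discipline already keeps per-flow state, so a congested gateway could aim the Source Quench at the source of the flow with the deepest backlog instead of at whoever happened to send the packet that overflowed the queue (the mis-blame problem Vint mentions). A toy sketch, with invented names and thresholds, borrowing only SFQ's idea of hashing flows into a fixed set of buckets with a perturbable hash:

    import hashlib

    NUM_BUCKETS = 128               # SFQ-style fixed number of flow buckets
    CONGESTION_THRESHOLD = 100      # total queued packets before we react

    buckets = [[] for _ in range(NUM_BUCKETS)]   # each entry: (src, dst, length)

    def bucket_for(src, dst, perturbation=b"salt"):
        # Hash the flow identifiers plus a periodically changed perturbation,
        # so that hash collisions between flows do not persist forever.
        digest = hashlib.sha256(src.encode() + dst.encode() + perturbation).digest()
        return digest[0] % NUM_BUCKETS

    def on_packet(src, dst, length):
        buckets[bucket_for(src, dst)].append((src, dst, length))
        if sum(len(b) for b in buckets) > CONGESTION_THRESHOLD:
            send_source_quench(pick_quench_target())

    def pick_quench_target():
        # Blame the flow building the deepest queue, not the most recent arrival.
        deepest = max(buckets, key=len)
        src, _dst, _length = deepest[0]
        return src

    def send_source_quench(host):
        print(f"would send ICMP Source Quench (type 4) to {host}")

A hash collision could still quench the wrong host occasionally, as Barbara notes, but the likelihood is much lower than when the last arrival is blamed.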
>>>>> >>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>>> >>>>> Even though the report mentions testing on DARTnet, I don't remember anything about this during our DARTnet meetings. I did join the project after the start so perhaps the work was done before I began to participate. I also couldn't easily find the journal they mention as a place for publishing their findings. I will have more time later to see if I can something that covers this testing. >>>>> >>>>> barbara >>>>> >>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via Internet-history > wrote: >>>>> >>>>> multicast is also an issue but I do not recall if that was one that Craig & I talked about >>>>> >>>>> Scott >>>>> >>>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history > wrote: >>>>>> >>>>>> I remember talking with Craig Partridge (on a flight to somewhere) about source quench >>>>>> during the time when 1812 was being written - I do not recall >>>>>> the specific issues but I recall that there were more than one issue >>>>>> >>>>>> (if DoS was not an issue at the time, it should have been) >>>>>> >>>>>> Scott >>>>>> >>>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history > wrote: >>>>>>> >>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>>> Forwarded for Barbara >>>>>>>>> >>>>>>>>> ==== >>>>>>>>> >>>>>>>>> From: Barbara Denny > >>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>>> I think congestion issues were discussed because I remember an ICMP message type called source quench (now deprecated). It was used for notifying a host to reduce the traffic load to a destination. I don't remember hearing about any actual congestion experiments using this message type. >>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, an >>>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>>> thesis about the use of Source Quench for congestion control. I believe >>>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>>> simulation). >>>>>>>> I don't think it had much impact on the rest of the world. >>>>>>> >>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated 1984). >>>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>>> I guess he probably wasn't an MIT undergrad. >>>>>>> >>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). People >>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that basically >>>>>>> it was no use. >>>>>>> >>>>>>> A bit more Google found this, however: >>>>>>> >>>>>>> "4.3. Internet Congestion Control >>>>>>> Lixia Zhang began a study of network resource allocation techniques suitable for >>>>>>> the DARPA Internet. The Internet currently has a simple technique for resource >>>>>>> allocation, called "Source Quench." >>>>>>> Simple simulations have shown that this technique is not effective, and this work >>>>>>> has produced an alternative which seems considerably more workable. Simulation >>>>>>> of this new technique is now being performed." >>>>>>> >>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf ] >>>>>>> >>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at UCLA now. If she isn't on this list, she should be! 
>>>>>>> >>>>>>> Brian Carpenter >>>>> >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>>> >>>>> >>>>> -- >>>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>>> >>>>> >>>>> until further notice >>>>> >>>>> >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> >>> -- >>> Dave T?ht CSO, LibreQos >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From dhc at dcrocker.net Thu Oct 3 10:54:32 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 03 Oct 2024 17:54:32 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: > It's curious to me that such mechanisms have not been created for the > Internet Industry. Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete.? (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From karl at iwl.com Thu Oct 3 11:31:57 2024 From: karl at iwl.com (Karl Auerbach) Date: Thu, 3 Oct 2024 11:31:57 -0700 Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: My grandfather was a radio repair guy, my father repaired TV's that other repair people could not fix.? So I grew up with my hands inside electronics learning how to figure out what was going wrong and what to do about it.? (I also learned a lot about keeping my fingers clear of high voltages - some day ask me about how the phrase "beating the bounds" [with regard to land titles] came about, and yes, there is an analogy to high voltage shocks.) I've carried that family history (of repairing, not shocking) into the land of networks. I am extremely concerned, and I mean *extremely* concerned, that our race to lock and secure things is slowly making it increasingly difficult for us to monitor, diagnose, and repair the Internet (and the increasing number of other important infrastructures that have become intermeshed with the net.) I wrote a note about this issue: Is The Internet At Risk From Too Much Security https://www.cavebear.com/cavebear-blog/netsecurity/ My experience with designing, deploying, and running the Interop show networks informed me that we have few decent tools.? I looked in awe with the collection of well designed tools that AT&T guys (they were always guys in that era) had dangling from their tool belts.? So I designed and sold the first Internet buttset - a tool to get one up and running within seconds to do testing and evaluation of a IP (and Netware) network.? (The tool was "Dr. Watson, The Network Detective's Assistant" - https://www.cavebear.com/archive/dwtnda/ .? However, I was learning about how to run a company at that time and I didn't watch, much less control, what my marketing group was spending - so we went under.? I then helped Fluke pick up some of the remnant ideas for their products.) Anyway, I have been bothered at how few test points we build into network software.? Even one of the most fundamental - remote loopback - is barely present in network equipment (yes, we have ICMP Echo/ping) but that's rather primitive. And I've long worked with SNMP and MIBs.? (I wrote and implemented an alternative to SNMP and Netconf that I though was much more useful then either: KNMP at https://www.iwl.com/idocs/knmp-overview ) My wife (Chris Wellens) and wrote up a paper in 1996 titled "Towards Useful Management" in which we made several proposals to improve our means to monitor and test networks. https://www.cavebear.com/docs/simple-times-vol4-num3.pdf In the meantime Marshall Rose and my? wife spun a new company, Interworking Labs, out from the Interop company.? The initial purpose was to develop test suits for network protocols.? (These suites still exist and often reveal mistakes in network code.? One of my favorite is to repackage Ethnernet frames that have short IP packets inside those Ethernet frames.? 
The IP packet is put into an Ethernet frame that is larger than it needs to be to hold that IP packet. (Some vendors have used that space to do things like announcing license identifiers in the unused space in an Ethernet frame after an ARP packet.)? Far too much code uses the ethernet frame length rather than properly using the IP length fields - bad things can happen as a result.? And there is still code out there that uses signed integer math on unsigned integer packet fields - so a lot of code still wobbles if one tickles packets with numbers just below or just above the point where that high order bit toggles.) Jon Postel came up with a testing idea for the bakeoff test events we had at places like FTP Software and ISI - a router that does things wrong in a controlled way.? A few years later Steve Casner and I were working to develop a portable RTP/RTCP engine for entertainment grade audio/video (on IP multicast); we longed for a device such as Jon's "flakeway" because of the need to evaluate all of the potential race conditions that can happen when running several related media streams in real time. So a few years later at Interworking labs we started to develop Jon's flakeway into a real tool.? We called the line "Maxwell" after James Clerk Maxwell's thought experiment about a daemon that could select and control the flow of hot and cold particles, seemingly violating the laws of Thermodynamics.? It is still rather surprising how much code out there wobbles (or worse) when faced with simple network behaviour such as packet order resequencing (such as can happen when there are parallel/load balanced/bound) network paths, or when packets are accumulated for a short while and then suddenly released (as if a dam, holding back a lake of packets, suddenly bursts.) I have seen many network test suites that check that a protocol implementation complies with the mandatory or suggested parts of RFCs.? Those are nice.? But my concern is on the other side of the RFCs - what about the DO NOT cases or undefined cases, what happens when those situations happen. For instance, I remember Dave Bridgham (FTP Software) one afternoon saying "You know, if I received the last IP fragment first I would have information that let me do better receive buffer allocation."? So he changed the FTP Software IP stack to send last fragment first.? It worked.? That is it worked until an FTP Software based machine was added to a network running competitor Netmanage TCP/IP code.? That latter code simply up and died when it got the last fragment first. And at a TCP bakeoff I had a tool to test ARP, a protocol that has many knobs and levers that are rarely used.? I managed to generate a broadcast ARP packet that used some of those knobs and levers.? That ARP hit the router between our test networks and the host company's main network - that router crashed, but before it did it (for some reason) propagated that ARP further along, causing every other (I believe Proteon) router in the company to also crash. We found a lot of things like that on the Interop show network. (I usually got blamed because I was usually near, if not operating, the device that triggered the flaws.)? One of the worst was a difference in opinion between Cisco and Wellfleet routers about what to do with expansion of IP multicast packets into Ethernet frames (in particular what group MAC addresses to use) resulting in infinite IP multicast routing across the show net - every load LED on every one of our hundreds of routers and switches turned red.? 
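Karl's warning above about code trusting the Ethernet frame length instead of the IP Total Length field (and about doing signed arithmetic on unsigned header fields) translates into a very small amount of defensive code. A minimal sketch, not taken from any real stack:

    import struct

    def extract_ipv4_payload(frame: bytes) -> bytes:
        # An Ethernet frame can legitimately be longer than the datagram it
        # carries (minimum-frame padding, or vendor junk after a short packet),
        # so the receiver must trust the IP header, not the frame length.
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x0800:
            raise ValueError("not an IPv4 frame")
        ip = frame[14:]                                   # skip the Ethernet header
        ihl = (ip[0] & 0x0F) * 4                          # IP header length in bytes
        total_length = struct.unpack("!H", ip[2:4])[0]    # unsigned 16-bit field
        if total_length < ihl or total_length > len(ip):
            raise ValueError("bad IP Total Length")
        return ip[ihl:total_length]   # padding beyond total_length is ignored

Unpacking with "!H" keeps the field unsigned; treating it as a signed 16-bit value is exactly the wobble Karl describes when the high-order bit of a length field is set.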
(And, of course, all fingers pointed at me. ;-) The Interop show net was a wonderful place to discover flaws in protocol standards and implementations. One of our team members (who I believe is on this list) found a flaw in the FDDI standard. I have a memory of companies reworking their code and blasting new firmware overnight in their hotel rooms. The point of this long note is that the state of the art of testing Internet protocol implementations is weak. It's not an exciting field, QA people are not honored employees, and as more and more people believe (often quite wrongly) that they can write code we are actually moving backwards in some regards. In addition, we do not adequately consider monitoring, testing, and repair in our work defining protocols. In 2003 I gave a long talk with a title that is now a bit misleading: From Barnstorming to Boeing - Transforming the Internet Into a Lifeline Utility. (The slides are at https://www.cavebear.com/archive/rw/Barnstorming-to-Boeing-slides.pdf and the speaker notes at https://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.pdf ) (One of my suggestions was the imposition of legal, civil tort, liability for network design, implementation, and operational errors - using a negligence standard so that simple mistakes would not suffer liability. Wow, the groans from the audience were quite loud.) I had other suggestions as well - such as design rules and operational practices that must be followed unless the person looking to deviate could express a compelling, cogent argument why deviation is appropriate. This is the norm in many engineering disciplines, but not for software, where we are largely still in the anything-goes wild west. By-the-way, I have over the years been working on ideas to advance our testing/repair capabilities. One piece that we are missing is a database of network pathology. I am thinking here of a database of symptoms that are tied to possible causes and tests to distinguish among those causes. (Yes, I am taking a cue from the practice of medicine.) Once we have such a database one could build tools to do symptom-to-cause reasoning, including running of diagnostic tests to work through the branches of the possible causation tree. To do this right one needs trusted test agents disseminated throughout the network - the word "trusted" is important because network tests can be intrusive, sharp, and dangerous, like a surgeon's scalpel. (Imagine a world where surgeons were required to use dull but safe plastic butter knives rather than sharp scalpels.) Baseline records are important - and we do gather some of that, but we always want more detail. But the amount of data to be collected is voluminous and is subject to concerns about how it could be used competitively. (This is why in our Interworking Labs test contracts we prohibit the publishing of results to the public - we want to encourage correction for the benefit of us all rather than creation of competitive cudgels.) (One element that I've slowly been working on in my zero free time is a precisely timed beacon and precisely timed listeners - all tightly synchronized to GPS time. The idea is for beacons to take subscriptions from listeners and then to emit highly predictable patterns of packets of various sizes and timings. I've been meaning to corner some of my astrophysicist friends to adopt some of their methods of using that kind of predictable behaviour, observed at a distance, to evaluate what lies between the beacon's hither and the listener's yon.
[And yes, I did pick up some ideas from Van J's pathchar and Bruce Mah's re-implementation as pchar.) I am also thinking that we need some legal and accounting rule changes so that vendors are more able to share improvements and tests without running afoul of restraint of trade laws or damaging their balance sheets and that ever present, false fable of "shareholder value".) ??? ??? --karl-- From jack at 3kitty.org Thu Oct 3 12:25:58 2024 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 3 Oct 2024 12:25:58 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> Message-ID: John, RFNMs were messages sent from an IMP to one of its attached Host computers - literally "Request For Next Message".?? AFAIK, RFNMs were not sent internally between IMPs.?? But there was a lot of internal mechanism to exchange information between IMPs.?? An IMP could send a RFNM to a Host to indicate that it was OK for that Host to send more data. If your Host didn't obey the RFNM rules, as a last measure to protect the network, the IMP could shut off that Host by turning off the hardware clock on the cable connecting the Host and IMP.?? For a while, that Host wouldn't be able to talk to any other Host.?? But service would be restored when the traffic jam cleared. But I'm not the right person to ask about IMP internals.? When I joined BBN, the Arpanet had already been running for 8 years and had transitioned into "operational" mode, with DCA rather than ARPA in charge.? I was in the "research" area associated with then-current ARPA projects such as TCP.? But the "IMP Guys" were close by.? When I was implementing TCP for Unix, I suspect I learned about RFNMs from one of the IMP programmers, and in particular learned what my TCP had to do in order to avoid ever being blocked by the IMP. I'm not sure if other TCP implementers knew much about how the Arpanet worked, or what they should do in their TCP implementations.? Such stuff was probably described somewhere in reports, but I had the advantage of being able to just go ask an IMP programmer.? That was even preferable to looking at the ultimate documentation -- the code itself, which was not easy to understand. One of the features of the "Arpanet Architecture" was that the internal mechanisms were insulated from the world outside, which was defined by the "1822" interface specifications.?? So the internal mechanisms could be changed without any need for Host computers to modify their hardware or software.? The Host-IMP interface did change sometimes, but very rarely, e.g,. to introduce "96-bit leaders".? The internal IMP mechanisms could (and did) change with every release of the IMP software.? They also changed with everyday "patches" that addressed some current operational problem. As the Arpanet grew during the 80s, lots of traffic, lots of new applications, and lots of new users surfaced a lot of issues.? The Arpanet internal mechanisms were studied as they were in use, simulations were performed, analyses were done, and new mechanisms were implemented and carefully introduced into the active network, instrumented to see how well the theory matched the results in practice. 
Congestion control was one such issue. I recall others, e.g., "MultiPath Routing". This was surfaced by the observation that, at the time, there were 3 separate paths through the Arpanet mesh to get data from the East Coast to the West Coast. The "routing algorithm" always had an idea of the "best path", and sent all data along that route. Each route involved 56 kilobits/second circuits. But since all data flowed on the current "best route", it was not possible to attain more than 56 kb/s throughput between any two hosts, even though the cross-country capacity was available for more. Personally, I learned about these kinds of Arpanet issues mostly from proximity to the Arpanet NOC and IMP Guys. There were lots of reports documenting Arpanet behavior, but they may not have been readily available (no Web yet) or widely distributed, or even of much interest to the researchers pursuing Internet Projects. The DTIC documents I mentioned earlier are some of those reports that may be not only of historical interest, but also relate to current issues which exist in today's Internet. For example, "MultiPath Routing" is an issue in today's Internet. My cell phone has two paths available to it for using the Internet (Wifi and Cell). But it can only use one at any time. Flow Control was handled by the internal IMP mechanisms such as RFNMs. But it didn't prevent Congestion. Congestion Control was another hot topic back in the 1980s Arpanet. The "Arpanet Architecture" put mechanisms for congestion control, routing, flow control, instrumentation, et al as internal mechanisms. The "Internet Architecture" places some of those functions into the "switching" fabric of routers, switches, and modems. It places other functions into the "Host" devices where TCP et al are implemented. Both Hosts and Switching Fabric contain products developed by many different, and often competing, manufacturers. IMHO, those two architectures are quite different, yet reflect approaches to the same problem of building a distributed computing infrastructure (Licklider's "Galactic Network" vision). I don't recall much about the changes that were made to the Arpanet for things like congestion control. That's why I mentioned those reports saved by DTIC (Defense Technical Information Center). There may be some gems of experience still in those reports that might apply in today's world. I'm even listed as the Author of some of them; but that just reflects that, at the time, I was the designated manager of the associated contract. That doesn't mean that I knew anything about the work; I was just responsible for getting the report submitted so the customer would pay the bill. Andy Malis, who worked with IMPs, may remember more. Also Bob Hinden, who we recruited into the Internet world from the Arpanet group. Andy? Bob? Jack Haverty On 10/3/24 08:50, John Day wrote: > Jack, > > Good stuff. I agree and as I said before SQ alone is not sufficient unless the action to be taken is also defined, both when to send it and what to do when it arrives. Raj and KK said that ECN should be sent when the average queue length was greater than or equal to 1. This is very early and gives the senders time to back off before packets are dropped (hopefully) and retransmissions are generated. TCP by using implicit notification waits until the queue is full and packets are being dropped, and will continue to be dropped (it would seem) until the senders notice the lost Acks. This would appear to generate a lot of retransmissions.
> > A question for you: It has been my impression that with the RFNMs between IMPs, congestion would not have occurred within the IMP subnet (or very rarely).* However, there would have been congestion at the gateways as you describe. Is that correct? > > Take care, > John > > * Early on there were some deadlock conditions caused by the fact that a message could be 8 packets and was reassembled in the IMP before being delivered to the host, but that isn't congestion. > >> On Oct 2, 2024, at 20:08, Jack Haverty via Internet-history wrote: >> >> Re: Source Quench... >> >> It's been 40+ years, but I remember meetings where Source Quench was first discussed. My reaction was that it was too simplistic and wouldn't be effective. At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40. When I asked what a TCP should do when it received a SQ, no one could provide much of an answer. If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow? >> >> Other implementors had different ideas about how to handle an incoming SQ. One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent. So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire. You knew it had been discarded, so you should retransmit it immediately. >> >> In my TCP, I think I just incremented a counter when I received a SQ. Could always change it later.... >> >> At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue. There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner workings as issues were identified during operations - see, for example, DTIC reports accessible as ADA086338 and ADA086340. There are many others describing the Arpanet experience. In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design". >> >> There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs. Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer. The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served. >> >> Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them. Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP. Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels. "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data. >> >> The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs. That would effectively block all communications from the blocked host to anywhere else on the Arpanet.
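Jack's point that nobody could say what a TCP should do with a Source Quench is easy to see in code: the plausible reactions are completely different programs. A hypothetical host-side sketch of the three readings mentioned above (count it and move on, retransmit immediately on the Dave Mills interpretation, or back the sender off for a while):

    import time

    class SourceQuenchHandler:
        def __init__(self, policy="count"):
            self.policy = policy
            self.sq_counter = 0
            self.backoff_until = 0.0

        def on_source_quench(self, quoted_datagram, retransmit):
            # quoted_datagram is the offending datagram echoed back in the ICMP body.
            if self.policy == "count":
                self.sq_counter += 1              # note it, decide what to do later
            elif self.policy == "retransmit":
                retransmit(quoted_datagram)       # "it was dropped, so resend it now"
            elif self.policy == "slow_down":
                self.backoff_until = time.monotonic() + 1.0   # crude one-second pause

        def may_send(self):
            return time.monotonic() >= self.backoff_until

Three implementations, all arguably conforming, and only the third even tries to reduce the load on the network - which is roughly why the mechanism was eventually abandoned in RFC 1812.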
By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter. Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it. I have wondered how many implementations actually did. >> >> All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail. The ancient IMP code itself is even available online today. As always, the ultimate documentation is the code itself. But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s. It's not easy to figure out how it worked. >> >> The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes. In the earliest gateways, "getting a timestamp" from the processor wasn't hard. It was impossible. The gateway hardware simply didn't have any way to measure time. >> >> IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back. They could calculate the delay along a route. >> >> Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit. So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available. That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. >> >> Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet. But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. >> >> Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports. The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". >> >> In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones. If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. >> >> Hope this helps explain how we got from there to here... >> Jack Haverty >> >> >> >> >> >> On 10/2/24 15:21, Dave Taht via Internet-history wrote: >>> I wish I had had the time and resources to (help) write more papers. (For >>> example there isn't much on "drop head queueing") >>> >>> fq_codel is now a linux-wide default and has the following unique >>> properties: >>> >>> codel queue management, which measure the time a packet spends in a queue >>> and gradually attempts to find an optimum point for queue length, which is >>> 5ms by default. (it has been tested in software below 250us in the DC). >>> There is another subsystem, called BQL, which attempts to limit bytes on >>> the device txring to one interrupt's worth. 
(a pretty good explanation of >>> modern layers here) [2] >>> >>> It drops from the head, not the tail of the queue, with a small (BQL or >>> HTB) FIFO in front of the lowest bits of the hardware to account >>> for interrupt latency. >>> >>> (I am kind of curious if a txring existed back in the day and how close an >>> application sat to the hardware) >>> >>> Anecdote: when van and kathy were working on what became codel (january >>> 2012), she rang me up one day and asked me just how much overhead there was >>> in getting a timestamp from the hardware nowadays. And I explained that it >>> was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs >>> and so on and so forth, and she said thanks, and hung up. Getting a >>> timestamp must have been mighty hard back in the day! >>> >>> The "flow queueing" mechanism sends packets that have an arrival rate of >>> less than the departure rate of all the other flows, out first.[1] This is >>> an improvement over prior FQ mechanisms like SFQ and DRR, which always put >>> a new flow at the tail of the flow list. It is pretty amazing how often >>> this works on real traffic. Also it automatically puts flows that build a >>> queue into a queue that is managed by codel. >>> >>> One (eventual) benefit of these approaches, combined, is it makes delay >>> based congestion control more feasible (indeed, >>> BBR spends most of its time in this mode), but the flow isolation makes for >>> most interactive traffic never being queued at all. >>> >>> IMHO the edges of the internet at least, would have been much better were >>> some form of FQ always in it (which we kind of got from switched networks >>> naturally) but the idea of FQ was roundly rejected in the first ietf >>> meeting in 1989, and it's been uphill ever since. >>> >>> Just to touch upon pacing a bit - pacing is the default for the linux stack >>> no matter the overlying qdisc or congestion control algorithm. >>> I don't know if anyone has ever attempted to compare pacing w/cubic vs >>> pacing w/bbr, and very few, until recently, have >>> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >>> >>> [1]https://ieeexplore.ieee.org/document/8469111 >>> [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >>> [3] >>> https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >>> >>> Varying the packet pacing to get a pre-congestion notification is a paper >>> I'd like more to pursue. >>> https://www.usenix.org/system/files/atc24-han.pdf >>> (I so want to believe this paper) >>> >>> A tiny bit more below.... >>> >>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> The response to bufferbloat has always struck me as looking for your keys >>>> under a street light when that wasn?t where you dropped them but there is >>>> light there. >>>> >>>> Initially, bufferbloat was not a problem because memory was expensive and >>>> when TCP ran out of buffers (or got low), the connection simply blocked the >>>> sending application until buffers were available. This was still true with >>>> the advent of NIC cards. Memory was still tight. However, as memory got >>>> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >>>> no one told the application to slow down or wait, so there was local >>>> congestion collapse: bufferbloat. 
>>>> >>>> One part of the solution would be interface flow control between the >>>> sending application and TCP (you would have thought that would have >>>> occurred to implementers any way, it is obvious) and/or simply restrict the >>>> amount of buffers TCP has available so that it runs out and blocks the >>>> sending the application before things get bad and opens up when buffers are >>>> available. But virtually all of the papers I see are on different >>>> drop-strategies, and oddly enough they never find their keys. >>>> >>> don't have a lot of time for papers! The most modern stuff for tcp is >>> using EDF (earliest deadline first) to manage the packet pacing. >>> There are virtual and actual physical devices nowadays that take a "time to >>> be sent" and packet. This paper was highly influential: >>> >>> https://saeed.github.io/files/carousel-sigcomm17.pdf >>> >>> the latest commit to the linux kernel about it: >>> >>> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >>> >>> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so >>> many things... >>> >>> >>>> Take care, >>>> John >>>> >>>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>> Just throwing some thoughts out here ...... >>>>> I can see how this happens in a FIFO queuing world. However a lot of >>>> work has gone into fair queuing starting in the late 80s. Just wondering >>>> if anyone has done work utilizing fair queuing and source quench. For >>>> example, I think I can see how to use fair queuing information to better >>>> select who to send a source quench to. At least I can see how to do it with >>>> Stochastic Fairness Queueing since I worked on it and I remember a fair >>>> amount about how it was implemented. I wouldn't be able to provide a >>>> guarantee that the wrong host would never receive a source quench but the >>>> likelihood should be much lower. Considering whether the use of NAT >>>> creates undesirable behavior is also important and I am sure there are >>>> probably other cases that need to be checked. >>>>> Hum, it might also be interesting to speculate whether this could have >>>> any effect on bufferbloat but I fess up I need to learn more about the work >>>> done in the area of bufferbloat. I was involved with other things when >>>> this started to appear on my radar screen as a hot topic. I will admit I >>>> wish I had done more work on possible buffering effects from implementation >>>> choices at the time I did work on SFQ but there were contractual >>>> obligations that restricted how much time I could devote to the SFQ part of >>>> the project. >>>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does >>>> anyone have any idea about its use in the Internet? >>>>> barbara >>>>> >>>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >>>> vint at google.com> wrote: >>>>> One basic problem with blaming the "last packet that caused intermediate >>>> router congestion" is that it usually blamed the wrong source, among other >>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>> might remember more. >>>>> v >>>>> >>>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>> In a brief attempt to try to find some information about the early MIT >>>> work you mentioned, I ended up tripping on this Final Report from ISI in >>>> DTIC. 
It does talk a fair amount about congestion control and source >>>> quench (plus other things that might interest people). The period of >>>> performance is 1987 to 1990 which is much later than I was considering in >>>> my earlier message. >>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>>> Even though the report mentions testing on DARTnet, I don't remember >>>> anything about this during our DARTnet meetings. I did join the project >>>> after the start so perhaps the work was done before I began to participate. >>>> I also couldn't easily find the journal they mention as a place for >>>> publishing their findings. I will have more time later to see if I can >>>> something that covers this testing. >>>>> barbara >>>>> >>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>>> Internet-history wrote: >>>>> multicast is also an issue but I do not recall if that was one that >>>> Craig & I talked about >>>>> Scott >>>>> >>>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>>> I remember talking with Craig Partridge (on a flight to somewhere) >>>> about source quench >>>>>> during the time when 1812 was being written - I do not recall >>>>>> the specific issues but I recall that there were more than one issue >>>>>> >>>>>> (if DoS was not an issue at the time, it should have been) >>>>>> >>>>>> Scott >>>>>> >>>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>>> Forwarded for Barbara >>>>>>>>> >>>>>>>>> ==== >>>>>>>>> >>>>>>>>> From: Barbara Denny >>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>>> I think congestion issues were discussed because I remember an ICMP >>>> message type called source quench (now deprecated). It was used for >>>> notifying a host to reduce the traffic load to a destination. I don't >>>> remember hearing about any actual congestion experiments using this message >>>> type. >>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >>>> an >>>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>>> thesis about the use of Source Quench for congestion control. I >>>> believe >>>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>>> simulation). >>>>>>>> I don't think it had much impact on the rest of the world. >>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>>> 1984). >>>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>>> I guess he probably wasn't an MIT undergrad. >>>>>>> >>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>>> People >>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that >>>> basically >>>>>>> it was no use. >>>>>>> >>>>>>> A bit more Google found this, however: >>>>>>> >>>>>>> "4.3. Internet Congestion Control >>>>>>> Lixia Zhang began a study of network resource allocation techniques >>>> suitable for >>>>>>> the DARPA Internet. The Internet currently has a simple technique for >>>> resource >>>>>>> allocation, called "Source Quench." >>>>>>> Simple simulations have shown that this technique is not effective, >>>> and this work >>>>>>> has produced an alternative which seems considerably more workable. 
>>>> Simulation >>>>>>> of this new technique is now being performed." >>>>>>> >>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >>>> ] >>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at >>>> UCLA now. If she isn't on this list, she should be! >>>>>>> Brian Carpenter >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>>> >>>>> >>>>> -- >>>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >>>> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>>> until further notice >>>>> >>>>> >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From jeanjour at comcast.net Thu Oct 3 12:42:22 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 15:42:22 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: Just out of curiosity, what other standards organizations do compliance testing? John > On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history wrote: > > On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >> It's curious to me that such mechanisms have not been created for the Internet Industry. > > > Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) > > There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. > > Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. > > Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. > > It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
> > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > mast:@dcrocker at mastodon.social > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From vint at google.com Thu Oct 3 12:43:35 2024 From: vint at google.com (Vint Cerf) Date: Thu, 3 Oct 2024 15:43:35 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: ISO? v On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > Just out of curiosity, what other standards organizations do compliance > testing? > > John > > > On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: > >> It's curious to me that such mechanisms have not been created for the > Internet Industry. > > > > > > Informal interoperability testing, versus formal compliance testing, was > and remains a key distinction between the Internet's culture and the > cultures of various other standards organization. Compliance testing is > typically expensive and incomplete. (As a tool for initial code debugging, > tests like that can be efficient; as a guarantee of field interoperability, > not so much.) > > > > There was a wonderful panel that Vint was on, circa 1990, along with a > number of other folk, including a vigorous OSI proponent from Boeing. > > > > Vint made his comments about experiences with the Internet's technology > and specifically noted the reliance on interoperability testing rather than > (bench) compliance testing. > > > > Other panelists made various comments and then the Boeing person made > theirs, vigorously asserting that it is not possible to get widespread > interoperability without formal compliance testing. > > > > It was fun to watch Vint slowly lean slightly forward and then slowly > turn his head toward the Boeing person. 
> > > > d/ > > > > -- > > Dave Crocker > > Brandenburg InternetWorking > > bbiw.net > > mast:@dcrocker at mastodon.social > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From dhc at dcrocker.net Thu Oct 3 13:02:47 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 03 Oct 2024 20:02:47 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: <1c52c51f-7847-41fb-8b69-d97334071fe8@dcrocker.net> On 10/3/2024 12:42 PM, John Day wrote: > Just out of curiosity, what other standards organizations do compliance testing? Hmmm.? I guess my phrasing did attribute the requirement to the SDOs.? Didn't phrase that carefully enough. The ecosystems using those SDOs' specs typically embodied the requirement. I don't hink the SDOs, themselves, did.? Certainly the OSI world, as typical for ITU and ISO standards. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From brian.e.carpenter at gmail.com Thu Oct 3 13:03:30 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Oct 2024 09:03:30 +1300 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: <1daef726-deb0-4ffc-8e36-41fd53f930c9@gmail.com> There's a variant of Godwin's law something like: "As an online discussion of Internet history grows longer, the probability of a comparison involving OSI approaches 1." OSI people were very keen on formal conformance testing and certification. It was supposed to be one of the great benefits of formal specifications, state diagrams, and the like. Consultants made good money out of it. Over here, we had rough consensus and running code. We know what happened. But it's true that the IETF has repeatedly failed to solve the problem that Dave identified ("know exactly which RFCs they need to implement"). See the following URL to appreciate the problem: https://github.com/becarpenter/book6/blob/main/20.%20Further%20Reading/RFC%20bibliography.md Brian Regards Brian Carpenter On 04-Oct-24 06:54, Dave Crocker via Internet-history wrote: > On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >> It's curious to me that such mechanisms have not been created for the >> Internet Industry. 
> > > Informal interoperability testing, versus formal compliance testing, was > and remains a key distinction between the Internet's culture and the > cultures of various other standards organization. Compliance testing is > typically expensive and incomplete.? (As a tool for initial code > debugging, tests like that can be efficient; as a guarantee of field > interoperability, not so much.) > > There was a wonderful panel that Vint was on, circa 1990, along with a > number of other folk, including a vigorous OSI proponent from Boeing. > > Vint made his comments about experiences with the Internet's technology > and specifically noted the reliance on interoperability testing rather > than (bench) compliance testing. > > Other panelists made various comments and then the Boeing person made > theirs, vigorously asserting that it is not possible to get widespread > interoperability without formal compliance testing. > > It was fun to watch Vint slowly lean slightly forward and then slowly > turn his head toward the Boeing person. > > d/ > From jeanjour at comcast.net Thu Oct 3 13:04:03 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 16:04:03 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> Message-ID: <7E5F8742-9C2C-463A-81EC-E9F1D2BC56CA@comcast.net> Okay, thanks. That clarifies the RFNM issue. What were ?all of those applications in the 80s? you were alluding to? Well, some of them. In the 80s, was the ARPANET becoming a smaller and smaller part of the Internet? Your last comment: The ARPANET did process-to-process flow control in NCP and I would venture a guess, hop-by-hop flow control host-to-host through IMP-Host to IMP subnet to IMP-Host. I need to dig int the nature of the IMP-IMP flow and congestion control. Thanks, John > On Oct 3, 2024, at 15:25, Jack Haverty wrote: > > John, > > RFNMs were messages sent from an IMP to one of its attached Host computers - literally "Request For Next Message". AFAIK, RFNMs were not sent internally between IMPs. But there was a lot of internal mechanism to exchange information between IMPs. An IMP could send a RFNM to a Host to indicate that it was OK for that Host to send more data. > > If your Host didn't obey the RFNM rules, as a last measure to protect the network, the IMP could shut off that Host by turning off the hardware clock on the cable connecting the Host and IMP. For a while, that Host wouldn't be able to talk to any other Host. But service would be restored when the traffic jam cleared. > > But I'm not the right person to ask about IMP internals. When I joined BBN, the Arpanet had already been running for 8 years and had transitioned into "operational" mode, with DCA rather than ARPA in charge. I was in the "research" area associated with then-current ARPA projects such as TCP. But the "IMP Guys" were close by. When I was implementing TCP for Unix, I suspect I learned about RFNMs from one of the IMP programmers, and in particular learned what my TCP had to do in order to avoid ever being blocked by the IMP. 
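(In concrete terms, "what the TCP had to do" was keep a per-destination count of messages outstanding at the IMP and hold back when that count hit the limit, decrementing it as RFNMs came back. Below is a minimal Python sketch of that RFNM counting; the limit of eight outstanding messages per destination, and all the names, are assumptions for illustration rather than anything taken from the 1822 specification.)

    from collections import defaultdict

    MAX_OUTSTANDING = 8   # assumed per-destination limit, not the 1822 spec

    class RfnmCounter:
        """Host-side bookkeeping: one counter per destination host."""
        def __init__(self):
            self.outstanding = defaultdict(int)

        def can_send(self, dest):
            # True if the host may hand the IMP another message for dest.
            return self.outstanding[dest] < MAX_OUTSTANDING

        def sent(self, dest):
            # Call when a message is handed to the IMP.
            self.outstanding[dest] += 1

        def rfnm_received(self, dest):
            # Call when the IMP returns a RFNM for dest.
            if self.outstanding[dest] > 0:
                self.outstanding[dest] -= 1

(A host-resident TCP or a gateway would consult can_send() before handing the IMP another message for that destination, which is the discipline needed to avoid having the IMP shut off the host's clock.)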
> > I'm not sure if other TCP implementers knew much about how the Arpanet worked, or what they should do in their TCP implementations. Such stuff was probably described somewhere in reports, but I had the advantage of being able to just go ask an IMP programmer. That was even preferable to looking at the ultimate documentation -- the code itself, which was not easy to understand. > > One of the features of the "Arpanet Architecture" was that the internal mechanisms were insulated from the world outside, which was defined by the "1822" interface specifications. So the internal mechanisms could be changed without any need for Host computers to modify their hardware or software. The Host-IMP interface did change sometimes, but very rarely, e.g,. to introduce "96-bit leaders". The internal IMP mechanisms could (and did) change with every release of the IMP software. They also changed with everyday "patches" that addressed some current operational problem. > > As the Arpanet grew during the 80s, lots of traffic, lots of new applications, and lots of new users surfaced a lot of issues. The Arpanet internal mechanisms were studied as they were in use, simulations were performed, analyses were done, and new mechanisms were implemented and carefully introduced into the active network, instrumented to see how well the theory matched the results in practice. > > Congestion control was one such issue. I recall others, e.g., "MultiPath Routing". This was surfaced by the observation that, at the time, there were 3 separate paths through the Arpanet mesh to get data from the East Coast to the West Coast. The "routing algorithm" always had an idea of the "best path", and sent all data along that route. Each route involved 56 kilobits/second circuits. But since all data flowed on the current "best route", it was not possible to attain more than 56 kb/s throughput between any two hosts, even though the cross-country capacity was available for more. > > Personally, I learned about these kinds of Arpanet issues mostly from proximity to the Arpanet NOC and IMP Guys. There were lots of reports documenting Arpanet behavior, but they may not have been readily available (no Web yet) or widely distributed, or even of much interest to the researchers pursuing Internet Projects. > > The DTIC documents I mentioned earlier are some of those reports that may be not only of historical interest, but also relate to current issues which exist in today's Internet. For example, "MultiPath Routing" is an issue in today's Internet. My cell phone has two paths available to it for using the Internet (Wifi and Cell). But it can only use one at any time. Flow Control was handled by the internal IMP mechanisms such as RFNMs. But it didn't prevent Congestion. Congestion Control was another hot topic back in the 1980s Arpanet. > > The "Arpanet Architecture" put mechanisms for congestion control, routing, flow control, instrumentation, et al as internal mechanisms. The "Internet Architecture" places some of those functions into the "switching" fabric of routers, switches, and modems. It places other functions into the "Host" devices where TCP et al are implemented. > Both Hosts and Switching Fabric contain products developed by many different, and often competing, manufacturers. > > IMHO, those two architectures are quite different, yet reflect approaches to the same problem of building a distributed computing infrastructure (Licklider's "Galactic Network" vision). 
> > I don't recall much about the changes that were made to the Arpanet for things like congestion control. That's why I mentioned those reports saved by DTIC (Defense Technical Information Center). There may be some gems of experience still in those reports that might apply in today's world. I'm even listed as the Author of some of them; but that just reflects that, at the time, I was the designated manager of the associated contract. That doesn't mean that I knew anything about the work; I was just responsible for getting the report submitted so the customer would pay the bill. > > Andy Malis, who worked with IMPs, may remember more. Also Bob Hinden, who we recruited into the Internet world from the Arpanet group. > > Andy? Bob? > > Jack Haverty > > > On 10/3/24 08:50, John Day wrote: >> Jack, >> >> Good stuff. I agree and as I said before SQ alone is not sufficient unless the action to be taken is also defined, both when to send it and what to do when it arrives. Raj and KK said that ECN should be sent when the average queue length was greater than or equal to 1. This is very early and gives the senders time to back off before packets are dropped (hopefully) and retransmissions are generated. TCP by using implicit notification waits until the queue is full and packets are being dropped, and will continue to be dropped (it would seem) until the senders notice the lost Acks. This would appear to generate a lot of retransmissions. >> >> A question for you: It has been my impression that with the RFNMs between IMPs, congestion would not have occurred within the IMP subnet (or very rarely).* However, there would have been congestion at the gateways as you describe. Is that correct? >> >> Take care, >> John >> >> * Early on there were some deadlock conditions caused by the fact that a message could be 8 packets and was reassembled in the IMP before being delivered to the host, but that isn?t congestion. >> >>> On Oct 2, 2024, at 20:08, Jack Haverty via Internet-history wrote: >>> >>> Re: Source Quench... >>> >>> It's been 40+ years, but I remember meetings where Source Quench was first discussed. My reaction was that it was too simplistic and wouldn't be effective. At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40. When I asked what a TCP should do when it received a SQ, no one could provide much of an answer. If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow?? >>> >>> Other implementors had different ideas about how to handle an incoming SQ. One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent. So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire. You knew it had been discarded, so you should retransmit it immediately. >>> >>> In my TCP, I think I just incremented a counter when I received a SQ. Could always change it later.... >>> >>> At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue. There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner working as issues wee identified during operations - see, for example, DTIC reports accessible as ADA086338, and ADA086340. 
There are many others describing the Arpanet experience. In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design". >>> >>> There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs. Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer. The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served. >>> >>> Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them. Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP. Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels. "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data. >>> >>> The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs. That would effectively block all communications from the blocked host to anywhere else on the Arpanet. By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter. Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it. I have wondered how many implementations actually did. >>> >>> All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail. The ancient IMP code itself is even available online today. As always, the ultimate documentation is the code itself. But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s. It's not easy to figure out how it worked. >>> >>> The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes. In the earliest gateways, "getting a timestamp" from the processor wasn't hard. It was impossible. The gateway hardware simply didn't have any way to measure time. >>> >>> IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back. They could calculate the delay along a route. >>> >>> Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit. So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available. That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. >>> >>> Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet. But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. 
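(For contrast with Source Quench as a place-holder, the rule John Day cites earlier in this thread -- Jain and Ramakrishnan's advice to signal congestion once the average queue length reaches one -- can be sketched in a few lines. The exponentially weighted average below stands in for their original averaging interval, and the weight, names, and packet format are assumptions for illustration only.)

    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str
        ce: bool = False          # congestion-experienced bit

    class MarkingQueue:
        """Mark arriving packets when the average queue length is >= 1."""
        def __init__(self, weight=0.002, threshold=1.0):
            self.q = []           # packets currently queued
            self.avg = 0.0        # smoothed queue length (EWMA stand-in)
            self.weight = weight  # assumed averaging gain
            self.threshold = threshold

        def enqueue(self, pkt):
            self.avg = (1.0 - self.weight) * self.avg + self.weight * len(self.q)
            if self.avg >= self.threshold:
                pkt.ce = True     # signal congestion early, before any loss
            self.q.append(pkt)

        def dequeue(self):
            return self.q.pop(0) if self.q else None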
>>> >>> Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports. The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". >>> >>> In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones. If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. >>> >>> Hope this helps explain how we got from there to here... >>> Jack Haverty >>> >>> >>> >>> >>> >>> On 10/2/24 15:21, Dave Taht via Internet-history wrote: >>>> I wish I had had the time and resources to (help) write more papers. (For >>>> example there isn't much on "drop head queueing") >>>> >>>> fq_codel is now a linux-wide default and has the following unique >>>> properties: >>>> >>>> codel queue management, which measure the time a packet spends in a queue >>>> and gradually attempts to find an optimum point for queue length, which is >>>> 5ms by default. (it has been tested in software below 250us in the DC). >>>> There is another subsystem, called BQL, which attempts to limit bytes on >>>> the device txring to one interrupt's worth. (a pretty good explanation of >>>> modern layers here) [2] >>>> >>>> It drops from the head, not the tail of the queue, with a small (BQL or >>>> HTB) FIFO in front of the lowest bits of the hardware to account >>>> for interrupt latency. >>>> >>>> (I am kind of curious if a txring existed back in the day and how close an >>>> application sat to the hardware) >>>> >>>> Anecdote: when van and kathy were working on what became codel (january >>>> 2012), she rang me up one day and asked me just how much overhead there was >>>> in getting a timestamp from the hardware nowadays. And I explained that it >>>> was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs >>>> and so on and so forth, and she said thanks, and hung up. Getting a >>>> timestamp must have been mighty hard back in the day! >>>> >>>> The "flow queueing" mechanism sends packets that have an arrival rate of >>>> less than the departure rate of all the other flows, out first.[1] This is >>>> an improvement over prior FQ mechanisms like SFQ and DRR, which always put >>>> a new flow at the tail of the flow list. It is pretty amazing how often >>>> this works on real traffic. Also it automatically puts flows that build a >>>> queue into a queue that is managed by codel. >>>> >>>> One (eventual) benefit of these approaches, combined, is it makes delay >>>> based congestion control more feasible (indeed, >>>> BBR spends most of its time in this mode), but the flow isolation makes for >>>> most interactive traffic never being queued at all. >>>> >>>> IMHO the edges of the internet at least, would have been much better were >>>> some form of FQ always in it (which we kind of got from switched networks >>>> naturally) but the idea of FQ was roundly rejected in the first ietf >>>> meeting in 1989, and it's been uphill ever since. >>>> >>>> Just to touch upon pacing a bit - pacing is the default for the linux stack >>>> no matter the overlying qdisc or congestion control algorithm. 
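(To make the two ideas above concrete -- timestamp each packet on enqueue, and drop from the head of the queue based on how long packets have sat in it -- here is a toy Python sketch. It is not the real CoDel control law; there is no interval tracking or square-root drop schedule, and the 5 ms figure is just fq_codel's default target quoted above.)

    import time
    from collections import deque

    TARGET = 0.005                # 5 ms target time-in-queue

    class ToyHeadDropQueue:
        def __init__(self):
            self.q = deque()      # entries are (enqueue_timestamp, packet)

        def enqueue(self, pkt):
            self.q.append((time.monotonic(), pkt))

        def dequeue(self):
            # Serve from the head; discard packets that sat longer than TARGET.
            while self.q:
                ts, pkt = self.q.popleft()
                if time.monotonic() - ts > TARGET:
                    continue      # head drop: the oldest packet is the one shed
                return pkt
            return None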
>>>> I don't know if anyone has ever attempted to compare pacing w/cubic vs >>>> pacing w/bbr, and very few, until recently, have >>>> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >>>> >>>> [1]https://ieeexplore.ieee.org/document/8469111 >>>> [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >>>> [3] >>>> https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >>>> >>>> Varying the packet pacing to get a pre-congestion notification is a paper >>>> I'd like more to pursue. >>>> https://www.usenix.org/system/files/atc24-han.pdf >>>> (I so want to believe this paper) >>>> >>>> A tiny bit more below.... >>>> >>>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < >>>> internet-history at elists.isoc.org > wrote: >>>> >>>>> The response to bufferbloat has always struck me as looking for your keys >>>>> under a street light when that wasn?t where you dropped them but there is >>>>> light there. >>>>> >>>>> Initially, bufferbloat was not a problem because memory was expensive and >>>>> when TCP ran out of buffers (or got low), the connection simply blocked the >>>>> sending application until buffers were available. This was still true with >>>>> the advent of NIC cards. Memory was still tight. However, as memory got >>>>> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >>>>> no one told the application to slow down or wait, so there was local >>>>> congestion collapse: bufferbloat. >>>>> >>>>> One part of the solution would be interface flow control between the >>>>> sending application and TCP (you would have thought that would have >>>>> occurred to implementers any way, it is obvious) and/or simply restrict the >>>>> amount of buffers TCP has available so that it runs out and blocks the >>>>> sending the application before things get bad and opens up when buffers are >>>>> available. But virtually all of the papers I see are on different >>>>> drop-strategies, and oddly enough they never find their keys. >>>>> >>>> don't have a lot of time for papers! The most modern stuff for tcp is >>>> using EDF (earliest deadline first) to manage the packet pacing. >>>> There are virtual and actual physical devices nowadays that take a "time to >>>> be sent" and packet. This paper was highly influential: >>>> >>>> https://saeed.github.io/files/carousel-sigcomm17.pdf >>>> >>>> the latest commit to the linux kernel about it: >>>> >>>> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >>>> >>>> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so >>>> many things... >>>> >>>> >>>>> Take care, >>>>> John >>>>> >>>>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >>>>> internet-history at elists.isoc.org > wrote: >>>>>> Just throwing some thoughts out here ...... >>>>>> I can see how this happens in a FIFO queuing world. However a lot of >>>>> work has gone into fair queuing starting in the late 80s. Just wondering >>>>> if anyone has done work utilizing fair queuing and source quench. For >>>>> example, I think I can see how to use fair queuing information to better >>>>> select who to send a source quench to. At least I can see how to do it with >>>>> Stochastic Fairness Queueing since I worked on it and I remember a fair >>>>> amount about how it was implemented. I wouldn't be able to provide a >>>>> guarantee that the wrong host would never receive a source quench but the >>>>> likelihood should be much lower. 
Considering whether the use of NAT >>>>> creates undesirable behavior is also important and I am sure there are >>>>> probably other cases that need to be checked. >>>>>> Hum, it might also be interesting to speculate whether this could have >>>>> any effect on bufferbloat but I fess up I need to learn more about the work >>>>> done in the area of bufferbloat. I was involved with other things when >>>>> this started to appear on my radar screen as a hot topic. I will admit I >>>>> wish I had done more work on possible buffering effects from implementation >>>>> choices at the time I did work on SFQ but there were contractual >>>>> obligations that restricted how much time I could devote to the SFQ part of >>>>> the project. >>>>>> Just curious, ECN (Explicit Congestion Notification) is optional. Does >>>>> anyone have any idea about its use in the Internet? >>>>>> barbara >>>>>> >>>>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >>>>> vint at google.com > wrote: >>>>>> One basic problem with blaming the "last packet that caused intermediate >>>>> router congestion" is that it usually blamed the wrong source, among other >>>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>>> might remember more. >>>>>> v >>>>>> >>>>>> On Tue, Oct 1, 2024 at 8:50 PM Barbara Denny via Internet-history < >>>>> internet-history at elists.isoc.org > wrote: >>>>>> In a brief attempt to try to find some information about the early MIT >>>>> work you mentioned, I ended up tripping on this Final Report from ISI in >>>>> DTIC. It does talk a fair amount about congestion control and source >>>>> quench (plus other things that might interest people). The period of >>>>> performance is 1987 to 1990 which is much later than I was considering in >>>>> my earlier message. >>>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>>>> Even though the report mentions testing on DARTnet, I don't remember >>>>> anything about this during our DARTnet meetings. I did join the project >>>>> after the start so perhaps the work was done before I began to participate. >>>>> I also couldn't easily find the journal they mention as a place for >>>>> publishing their findings. I will have more time later to see if I can find >>>>> something that covers this testing.
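(One way to read Barbara's earlier suggestion about combining fair queueing with Source Quench: when the gateway is congested, quench the source feeding the longest per-flow backlog rather than whichever source happened to send the most recently arrived packet. The sketch below is only an illustration of that idea; the hash, bucket count, and packet fields are assumptions, not the original SFQ code.)

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src: str                  # source address, where a quench would be sent
        dst: str

    class SfqQuenchPicker:
        def __init__(self, nbuckets=1024):
            self.nbuckets = nbuckets
            self.backlog = defaultdict(list)      # bucket -> queued packets

        def bucket(self, pkt):
            # Illustrative flow hash; real SFQ perturbs its hash periodically.
            return hash((pkt.src, pkt.dst)) % self.nbuckets

        def enqueue(self, pkt):
            self.backlog[self.bucket(pkt)].append(pkt)

        def pick_quench_target(self):
            # Quench the source behind the longest per-flow backlog,
            # not whoever sent the packet that arrived last.
            longest = max(self.backlog.values(), key=len, default=None)
            return longest[-1].src if longest else None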
>>>>>> barbara >>>>>> >>>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>>>> Internet-history wrote: >>>>>> multicast is also an issue but I do not recall if that was one that >>>>> Craig & I talked about >>>>>> Scott >>>>>> >>>>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>>>> internet-history at elists.isoc.org > wrote: >>>>>>> I remember talking with Craig Partridge (on a flight to somewhere) >>>>> about source quench >>>>>>> during the time when 1812 was being written - I do not recall >>>>>>> the specific issues but I recall that there were more than one issue >>>>>>> >>>>>>> (if DoS was not an issue at the time, it should have been) >>>>>>> >>>>>>> Scott >>>>>>> >>>>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>>>> internet-history at elists.isoc.org > wrote: >>>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>>>> Forwarded for Barbara >>>>>>>>>> >>>>>>>>>> ==== >>>>>>>>>> >>>>>>>>>> From: Barbara Denny >>>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>>>> I think congestion issues were discussed because I remember an ICMP >>>>> message type called source quench (now deprecated). It was used for >>>>> notifying a host to reduce the traffic load to a destination. I don't >>>>> remember hearing about any actual congestion experiments using this message >>>>> type. >>>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >>>>> an >>>>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>>>> thesis about the use of Source Quench for congestion control. I >>>>> believe >>>>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>>>> simulation). >>>>>>>>> I don't think it had much impact on the rest of the world. >>>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>>>> 1984). >>>>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>>>> I guess he probably wasn't an MIT undergrad. >>>>>>>> >>>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>>>> People >>>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that >>>>> basically >>>>>>>> it was no use. >>>>>>>> >>>>>>>> A bit more Google found this, however: >>>>>>>> >>>>>>>> "4.3. Internet Congestion Control >>>>>>>> Lixia Zhang began a study of network resource allocation techniques >>>>> suitable for >>>>>>>> the DARPA Internet. The Internet currently has a simple technique for >>>>> resource >>>>>>>> allocation, called "Source Quench." >>>>>>>> Simple simulations have shown that this technique is not effective, >>>>> and this work >>>>>>>> has produced an alternative which seems considerably more workable. >>>>> Simulation >>>>>>>> of this new technique is now being performed." >>>>>>>> >>>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >>>>> ] >>>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at >>>>> UCLA now. If she isn't on this list, she should be! 
>>>>>>>> Brian Carpenter >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >>>>> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>>>> until further notice >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > > From jeanjour at comcast.net Thu Oct 3 13:06:26 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 16:06:26 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: No, never has on anything I know of. The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. John > On Oct 3, 2024, at 15:43, Vint Cerf wrote: > > ISO? > v > > > On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >> Just out of curiosity, what other standards organizations do compliance testing? >> >> John >> >> > On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >> > >> > On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >> >> It's curious to me that such mechanisms have not been created for the Internet Industry. >> > >> > >> > Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >> > >> > There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >> > >> > Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >> > >> > Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >> > >> > It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
>> > >> > d/ >> > -- >> > Dave Crocker >> > Brandenburg InternetWorking >> > bbiw.net >> > mast:@dcrocker at mastodon.social >> > >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > > From jeanjour at comcast.net Thu Oct 3 13:14:39 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 16:14:39 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <1c52c51f-7847-41fb-8b69-d97334071fe8@dcrocker.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <1c52c51f-7847-41fb-8b69-d97334071fe8@dcrocker.net> Message-ID: <993EACE0-B035-44B2-BC11-B8FC077C8ABA@comcast.net> As I just replied to Vint, so far as I know ISO did no certification testing. There was a lot of complaining by traditional standards people that standards were only supposed to contain requirements (think screw threads) and not how to do it. They didn't like how much of the 'how' to build it was in the specifications, like when certain packets were sent or what to do with them when they were received. They were more or less beaten into submission. The only thing that was done was Formal Description Techniques to try to avoid the ambiguity of prose. Two methods were developed and were used to find bugs during development of some of the protocols. But I don't know of any standards organization that does certification. There was a demand for it, and a group did try to specify some conformance tests for some protocols, but it was never very thorough, and a far cry from certification, nor used by anyone but possibly those DoD guys in AZ. John > On Oct 3, 2024, at 16:02, Dave Crocker wrote: > > On 10/3/2024 12:42 PM, John Day wrote: >> Just out of curiosity, what other standards organizations do compliance testing? > > Hmmm. I guess my phrasing did attribute the requirement to the SDOs. Didn't phrase that carefully enough. > > The ecosystems using those SDOs' specs typically embodied the requirement. I don't think the SDOs, themselves, did. Certainly the OSI world, as typical for ITU and ISO standards. > > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > mast:@dcrocker at mastodon.social From sob at sobco.com Thu Oct 3 13:15:31 2024 From: sob at sobco.com (Scott O.
Bradner) Date: Thu, 3 Oct 2024 16:15:31 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: relevant to the question of OSI testing An Overview of OSI Conformance Testing https://www.site.uottawa.ca/~bochmann/CSI5174/CourseNotes/Literature/Tretmans%20-%20Overview%20of%20OSI%20conformance%20testing%20-%20iso9646.pdf > On Oct 3, 2024, at 4:06?PM, John Day via Internet-history wrote: > > No, never has on anything I know of. > > The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. > > John > >> On Oct 3, 2024, at 15:43, Vint Cerf wrote: >> >> ISO? >> v >> >> >> On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >>> Just out of curiosity, what other standards organizations do compliance testing? >>> >>> John >>> >>>> On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >>>> >>>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>>> It's curious to me that such mechanisms have not been created for the Internet Industry. >>>> >>>> >>>> Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >>>> >>>> There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >>>> >>>> Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >>>> >>>> Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >>>> >>>> It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
>>>> >>>> d/ >>>> >>>> -- >>>> Dave Crocker >>>> Brandenburg InternetWorking >>>> bbiw.net >>>> mast:@dcrocker at mastodon.social >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 >> >> >> until further notice >> >> >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Thu Oct 3 13:30:06 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 16:30:06 -0400 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: <1AA92124-7A80-41EE-8DFD-CA6F3E90A6B2@comcast.net> Ahh, yes, one of Chris Vissers students. Vissers developed a Temporal Ordering approach to Formal Description. It was interesting in that it said a minimal amount about the implementation (leaving greater leeway to the implementor, which seemed important then) but I never found anyone who could design in it. From the URL, this appears to come from Gregor Bochmann?s course who worked on a formal description for CCITT. I forget now what it was but SDL had some feature that they insisted on that made the description ambiguous. Typical ITU. As I said, there was a lot of push by the traditionalists for it, but certainly there was nothing in ISO to actually do it. I don?t know if Europe did but as I said the only group in the US was the DoD group in AZ and they were very plodding. Take care, John > On Oct 3, 2024, at 16:15, Scott O. Bradner via Internet-history wrote: > > relevant to the question of OSI testing > > An Overview of OSI Conformance Testing > https://www.site.uottawa.ca/~bochmann/CSI5174/CourseNotes/Literature/Tretmans%20-%20Overview%20of%20OSI%20conformance%20testing%20-%20iso9646.pdf > > >> On Oct 3, 2024, at 4:06?PM, John Day via Internet-history wrote: >> >> No, never has on anything I know of. >> >> The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. >> >> John >> >>> On Oct 3, 2024, at 15:43, Vint Cerf wrote: >>> >>> ISO? >>> v >>> >>> >>> On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >>>> Just out of curiosity, what other standards organizations do compliance testing? >>>> >>>> John >>>> >>>>> On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >>>>> >>>>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>>>> It's curious to me that such mechanisms have not been created for the Internet Industry. >>>>> >>>>> >>>>> Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. 
Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >>>>> >>>>> There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >>>>> >>>>> Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >>>>> >>>>> Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >>>>> >>>>> It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. >>>>> >>>>> d/ >>>>> >>>>> -- >>>>> Dave Crocker >>>>> Brandenburg InternetWorking >>>>> bbiw.net >>>>> mast:@dcrocker at mastodon.social >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 >>> >>> >>> until further notice >>> >>> >>> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Thu Oct 3 13:52:42 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Oct 2024 09:52:42 +1300 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <1AA92124-7A80-41EE-8DFD-CA6F3E90A6B2@comcast.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <1AA92124-7A80-41EE-8DFD-CA6F3E90A6B2@comcast.net> Message-ID: In Europe it was a whole industry in itself. Just two examples(Google finds many): https://shop.elsevier.com/books/osi-conformance-testing-methodology-and-ttcn/baumgarten/978-0-444-89712-1 https://link.springer.com/chapter/10.1007/BFb0026973 Derek Rayner at NPL was the high priest; the OSI promoters in Brussels were worshippers. Regards Brian On 04-Oct-24 09:30, John Day via Internet-history wrote: > Ahh, yes, one of Chris Vissers students. Vissers developed a Temporal Ordering approach to Formal Description. It was interesting in that it said a minimal amount about the implementation (leaving greater leeway to the implementor, which seemed important then) but I never found anyone who could design in it. From the URL, this appears to come from Gregor Bochmann?s course who worked on a formal description for CCITT. I forget now what it was but SDL had some feature that they insisted on that made the description ambiguous. Typical ITU. 
> > As I said, there was a lot of push by the traditionalists for it, but certainly there was nothing in ISO to actually do it. I don?t know if Europe did but as I said the only group in the US was the DoD group in AZ and they were very plodding. > > Take care, > John > >> On Oct 3, 2024, at 16:15, Scott O. Bradner via Internet-history wrote: >> >> relevant to the question of OSI testing >> >> An Overview of OSI Conformance Testing >> https://www.site.uottawa.ca/~bochmann/CSI5174/CourseNotes/Literature/Tretmans%20-%20Overview%20of%20OSI%20conformance%20testing%20-%20iso9646.pdf >> >> >>> On Oct 3, 2024, at 4:06?PM, John Day via Internet-history wrote: >>> >>> No, never has on anything I know of. >>> >>> The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. >>> >>> John >>> >>>> On Oct 3, 2024, at 15:43, Vint Cerf wrote: >>>> >>>> ISO? >>>> v >>>> >>>> >>>> On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >>>>> Just out of curiosity, what other standards organizations do compliance testing? >>>>> >>>>> John >>>>> >>>>>> On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >>>>>> >>>>>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>>>>> It's curious to me that such mechanisms have not been created for the Internet Industry. >>>>>> >>>>>> >>>>>> Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >>>>>> >>>>>> There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >>>>>> >>>>>> Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >>>>>> >>>>>> Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >>>>>> >>>>>> It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
>>>>>> >>>>>> d/ >>>>>> >>>>>> -- >>>>>> Dave Crocker >>>>>> Brandenburg InternetWorking >>>>>> bbiw.net >>>>>> mast:@dcrocker at mastodon.social >>>>>> >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> >>>> -- >>>> Please send any postal/overnight deliveries to: >>>> Vint Cerf >>>> Google, LLC >>>> 1900 Reston Metro Plaza, 16th Floor >>>> Reston, VA 20190 >>>> +1 (571) 213 1346 >>>> >>>> >>>> until further notice >>>> >>>> >>>> >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From davida at pobox.com Thu Oct 3 14:17:20 2024 From: davida at pobox.com (David Arnold) Date: Fri, 4 Oct 2024 07:17:20 +1000 Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: References: Message-ID: > On 4 Oct 2024, at 04:32, Karl Auerbach via Internet-history wrote: > (One of my suggestions was the imposition of legal, civil tort, liability for network design, implementation, and operational errors - using a negligence standard so that simple mistakes would not suffer liability. Wow, the groans from the audience were quite loud.) As a generalization, software and network engineers test until it works. Proper engineers test until it breaks. The results are obvious. d From jeanjour at comcast.net Thu Oct 3 14:26:52 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 17:26:52 -0400 Subject: [ih] Fwd: Gateway Issue: Certification (was Re: booting linux on a 4004) References: Message-ID: <573BD16A-B7F4-493F-9ACA-743C6AA7099A@comcast.net> Forgot reply-all > Begin forwarded message: > > From: John Day > Subject: Re: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) > Date: October 3, 2024 at 17:23:56 EDT > To: Brian E Carpenter > > Figures. ;-) > > Name doesn't ring a bell. You sure it wasn't Dave Rayner, but I could be wrong on that too. > > With the governments in Europe funding things all sorts of stuff happened that probably shouldn't have. > > That stuff was under the committee I chaired in the US and we pretty much ignored it. Do you remember COS? What a waste of time that was. > > The group in AZ was a piece of work: The committee responsible for something would approve updates, but they wouldn't adopt them and then refuse to certify implementations that did it right. Or one company implemented a support tool that used part of a protocol but they wouldn't certify it because it didn't use the whole protocol, even though it never needed or used it. As I said, it was totally stupid. > > As you seem to allude, they were more infatuated with themselves than what made sense. > > There were some major advances in OSI, but the PTTs made pretty sure they were unusable or sufficiently obscure that no one knew. > TP4 for example was a major advance over TCP. (But then when they decided to cooperate with CCITT in 82, I knew the jig was up.) > > John > >> On Oct 3, 2024, at 16:52, Brian E Carpenter via Internet-history wrote: >> >> In Europe it was a whole industry in itself.
Just two examples(Google finds many): >> >> https://shop.elsevier.com/books/osi-conformance-testing-methodology-and-ttcn/baumgarten/978-0-444-89712-1 >> >> https://link.springer.com/chapter/10.1007/BFb0026973 >> >> Derek Rayner at NPL was the high priest; the OSI promoters in Brussels were worshippers. >> >> Regards >> Brian >> On 04-Oct-24 09:30, John Day via Internet-history wrote: >>> Ahh, yes, one of Chris Vissers students. Vissers developed a Temporal Ordering approach to Formal Description. It was interesting in that it said a minimal amount about the implementation (leaving greater leeway to the implementor, which seemed important then) but I never found anyone who could design in it. From the URL, this appears to come from Gregor Bochmann?s course who worked on a formal description for CCITT. I forget now what it was but SDL had some feature that they insisted on that made the description ambiguous. Typical ITU. >>> As I said, there was a lot of push by the traditionalists for it, but certainly there was nothing in ISO to actually do it. I don?t know if Europe did but as I said the only group in the US was the DoD group in AZ and they were very plodding. >>> Take care, >>> John >>>> On Oct 3, 2024, at 16:15, Scott O. Bradner via Internet-history wrote: >>>> >>>> relevant to the question of OSI testing >>>> >>>> An Overview of OSI Conformance Testing >>>> https://www.site.uottawa.ca/~bochmann/CSI5174/CourseNotes/Literature/Tretmans%20-%20Overview%20of%20OSI%20conformance%20testing%20-%20iso9646.pdf >>>> >>>> >>>>> On Oct 3, 2024, at 4:06?PM, John Day via Internet-history wrote: >>>>> >>>>> No, never has on anything I know of. >>>>> >>>>> The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. >>>>> >>>>> John >>>>> >>>>>> On Oct 3, 2024, at 15:43, Vint Cerf wrote: >>>>>> >>>>>> ISO? >>>>>> v >>>>>> >>>>>> >>>>>> On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >>>>>>> Just out of curiosity, what other standards organizations do compliance testing? >>>>>>> >>>>>>> John >>>>>>> >>>>>>>> On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >>>>>>>> >>>>>>>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>>>>>>> It's curious to me that such mechanisms have not been created for the Internet Industry. >>>>>>>> >>>>>>>> >>>>>>>> Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >>>>>>>> >>>>>>>> There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >>>>>>>> >>>>>>>> Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >>>>>>>> >>>>>>>> Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >>>>>>>> >>>>>>>> It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. 
>>>>>>>> >>>>>>>> d/ >>>>>>>> >>>>>>>> -- >>>>>>>> Dave Crocker >>>>>>>> Brandenburg InternetWorking >>>>>>>> bbiw.net >>>>>>>> mast:@dcrocker at mastodon.social >>>>>>>> >>>>>>>> -- >>>>>>>> Internet-history mailing list >>>>>>>> Internet-history at elists.isoc.org >>>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>>> >>>>>>> -- >>>>>>> Internet-history mailing list >>>>>>> Internet-history at elists.isoc.org >>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>>>> >>>>>> -- >>>>>> Please send any postal/overnight deliveries to: >>>>>> Vint Cerf >>>>>> Google, LLC >>>>>> 1900 Reston Metro Plaza, 16th Floor >>>>>> Reston, VA 20190 >>>>>> +1 (571) 213 1346 >>>>>> >>>>>> >>>>>> until further notice >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Thu Oct 3 14:27:23 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 17:27:23 -0400 Subject: [ih] Fwd: Gateway Issue: Certification (was Re: booting linux on a 4004) References: <583671EE-DC37-43E4-BBAF-5E5EC763505F@comcast.net> Message-ID: > Begin forwarded message: > > From: John Day > Subject: Re: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) > Date: October 3, 2024 at 16:20:46 EDT > To: Brian E Carpenter > > You give the OSI effort far too much credit. There were a lot of things that OSI created opportunities for consultants. But then isn?t the motto of consultants, ?There is one born every minute?? > > No one I knew took it seriously and as I said, the conformance stuff was pretty lousy/goosy. It would immediately get back into the debates between the PTTs and everyone else. > > The attempts at formal description were more useful. But even there, the view was that they were ultimately no more correct than the prose specification. As I said, they were useful in finding bugs during the development process. > > John > >> On Oct 3, 2024, at 16:03, Brian E Carpenter via Internet-history wrote: >> >> There's a variant of Godwin's law something like: "As an online >> discussion of Internet history grows longer, the probability of a >> comparison involving OSI approaches 1." >> >> OSI people were very keen on formal conformance testing and >> certification. It was supposed to be one of the great benefits >> of formal specifications, state diagrams, and the like. Consultants >> made good money out of it. >> >> Over here, we had rough consensus and running code. >> >> We know what happened. >> >> But it's true that the IETF has repeatedly failed to solve the >> problem that Dave identified ("know exactly which RFCs they need >> to implement"). See the following URL to appreciate the problem: >> https://github.com/becarpenter/book6/blob/main/20.%20Further%20Reading/RFC%20bibliography.md >> >> Brian >> >> Regards >> Brian Carpenter >> >> On 04-Oct-24 06:54, Dave Crocker via Internet-history wrote: >>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>> It's curious to me that such mechanisms have not been created for the >>>> Internet Industry. 
>>> Informal interoperability testing, versus formal compliance testing, was >>> and remains a key distinction between the Internet's culture and the >>> cultures of various other standards organization. Compliance testing is >>> typically expensive and incomplete. (As a tool for initial code >>> debugging, tests like that can be efficient; as a guarantee of field >>> interoperability, not so much.) >>> There was a wonderful panel that Vint was on, circa 1990, along with a >>> number of other folk, including a vigorous OSI proponent from Boeing. >>> Vint made his comments about experiences with the Internet's technology >>> and specifically noted the reliance on interoperability testing rather >>> than (bench) compliance testing. >>> Other panelists made various comments and then the Boeing person made >>> theirs, vigorously asserting that it is not possible to get widespread >>> interoperability without formal compliance testing. >>> It was fun to watch Vint slowly lean slightly forward and then slowly >>> turn his head toward the Boeing person. >>> d/ >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >
From jack at 3kitty.org Thu Oct 3 14:33:40 2024
From: jack at 3kitty.org (Jack Haverty)
Date: Thu, 3 Oct 2024 14:33:40 -0700
Subject: [ih] bufferbloat and modern congestion control (was 4004)
In-Reply-To: <7E5F8742-9C2C-463A-81EC-E9F1D2BC56CA@comcast.net>
References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> <7E5F8742-9C2C-463A-81EC-E9F1D2BC56CA@comcast.net>
Message-ID:

The Arpanet was shrinking through the 80s and was eventually decommissioned. But at the same time, the Defense Data Network (aka DDN) was growing explosively. Other governmental entities outside of DDN were also deploying their own networks using the then-proven DDN technology.

For example, Customs had a network which was used by all those agents at kiosks checking your passport when you tried to enter the US. The conversion of DDN from 1822 to X.25 as the Host/IMP interface specification made it much easier to deploy such networks and connect existing or new host computers. There were a bunch of other parts of the US government that had their own such networks, but many of them did not allow that fact to be publicized. Forty years later, I wonder if any of that world is still running.

All of those networks used the same Arpanet internals (IMP code) as was used in the Arpanet and evolved over time. I don't remember if all, or any, of them implemented TCP. It was required to be present in DoD procurements, but not others that I know of. Even in DoD systems, it was required to be present but not necessarily to be used. Applications in DoD systems could, for example, use X.25 PADs to connect terminals to computers, with no TCP involved.

In addition, IMPs were being used in commercial environments. Some of them were even deploying their own private "intranets", with LANs connected by gateways (usually Cisco) to IMPs. Such systems were rarely integrated into "The Internet", partly because there was no need and mostly because of security concerns.
The corporate intranet I was involved with during the 1990s was one such example. It was a worldwide intranet connecting about 100 countries using Cisco routers and multiple protocols. No IMPs involved, but still a nightmare to operate!

All of those applications provided new loads, and new usage patterns, affecting the internal operations of the IMP mechanisms. This was especially evident as PCs proliferated and client/server interactions slowly replaced the terminal traffic caused by many users typing.

Another effect was the change from research to operations. Operational users have different priorities from Researchers. We learned that as the Arpanet evolved from research into operational environments.

Researchers like to dive into a crashed system, do a full autopsy on the corpse, figure out what went wrong, and contemplate what to do to fix it and avoid future occurrences. Meanwhile, the Users wait.

Operational users want to get the system back up and running ASAP. It's costing them money, losing sales, and disrupting their business (or military, whatever) activities. Get the system up now! Go figure out what happened and how to fix it on your own time!

We had one network that was used to process credit card transactions for a major player in the Finance industry. When the network was down, customers would simply use a different card in their wallet. Network downtime was directly translatable into dollars per minute of lost revenue. Many dollars.

Those shifts in traffic patterns motivated changes to the mechanisms within IMPs for managing flows, congestion, fairness, and anything else needed to keep the Users happy. At one point I recall someone at BBN who was a frequent traveller commenting that he had learned to avoid carrying a briefcase that identified him as a BBN employee, because he suspected from experience that it made him more likely to be hassled at the Immigration desk. Any problem with the computer, application, server, or anything else tended to be blamed on "the network", and everyone using the system knew that BBN had supplied the network.

One of the DDN applications I remember was the Army's Payroll system - the one which caused Army personnel to get their paychecks. You didn't want to piss them off. They could potentially do some serious "hassling". I remember talking with someone from the Army BRL (Ballistics Research Lab) and joking about the BSQ variant of Source Quench. BSQ (Ballistic Source Quench) involved ordnance, which the Army of course had readily available and they knew how to use it.

It was an interesting time...

Jack Haverty

On 10/3/24 13:04, John Day wrote: > Okay, thanks. That clarifies the RFNM issue. > > What were 'all of those applications in the 80s' you were alluding to? > Well, some of them. In the 80s, was the ARPANET becoming a smaller > and smaller part of the Internet? > > Your last comment: The ARPANET did process-to-process flow control in > NCP and I would venture a guess, hop-by-hop flow control host-to-host > through IMP-Host to IMP subnet to IMP-Host. I need to dig into the > nature of the IMP-IMP flow and congestion control. > > Thanks, > John > >> On Oct 3, 2024, at 15:25, Jack Haverty wrote: >> >> John, >> >> RFNMs were messages sent from an IMP to one of its attached Host >> computers - literally "Request For Next Message". AFAIK, RFNMs were >> not sent internally between IMPs. But there was a lot of internal >> mechanism to exchange information between IMPs.
An IMP could send a >> RFNM to a Host to indicate that it was OK for that Host to send more >> data. >> >> If your Host didn't obey the RFNM rules, as a last measure to protect >> the network, the IMP could shut off that Host by turning off the >> hardware clock on the cable connecting the Host and IMP.?? For a >> while, that Host wouldn't be able to talk to any other Host.?? But >> service would be restored when the traffic jam cleared. >> >> But I'm not the right person to ask about IMP internals.? When I >> joined BBN, the Arpanet had already been running for 8 years and had >> transitioned into "operational" mode, with DCA rather than ARPA in >> charge.? I was in the "research" area associated with then-current >> ARPA projects such as TCP.? But the "IMP Guys" were close by.? When I >> was implementing TCP for Unix, I suspect I learned about RFNMs from >> one of the IMP programmers, and in particular learned what my TCP had >> to do in order to avoid ever being blocked by the IMP. >> >> I'm not sure if other TCP implementers knew much about how the >> Arpanet worked, or what they should do in their TCP implementations.? >> Such stuff was probably described somewhere in reports, but I had the >> advantage of being able to just go ask an IMP programmer.? That was >> even preferable to looking at the ultimate documentation -- the code >> itself, which was not easy to understand. >> >> One of the features of the "Arpanet Architecture" was that the >> internal mechanisms were insulated from the world outside, which was >> defined by the "1822" interface specifications.?? So the internal >> mechanisms could be changed without any need for Host computers to >> modify their hardware or software.? The Host-IMP interface did change >> sometimes, but very rarely, e.g,. to introduce "96-bit leaders".? The >> internal IMP mechanisms could (and did) change with every release of >> the IMP software.? They also changed with everyday "patches" that >> addressed some current operational problem. >> >> As the Arpanet grew during the 80s, lots of traffic, lots of new >> applications, and lots of new users surfaced a lot of issues.? The >> Arpanet internal mechanisms were studied as they were in use, >> simulations were performed, analyses were done, and new mechanisms >> were implemented and carefully introduced into the active network, >> instrumented to see how well the theory matched the results in practice. >> >> Congestion control was one such issue.?? I recall others, e.g., >> "MultiPath Routing".? This was surfaced by the observation that, at >> the time, there were 3 separate paths through the Arpanet mesh to get >> data from the East Coast to the West Coast.?? The "routing algorithm" >> always had an idea of the "best path", and sent all data along that >> route.? Each route involved 56 kilobits/second circuits.?? But since >> all data flowed on the current "best route", it was not possible to >> attain more than 56 kb/s throughput between any two hosts, even >> though the cross-country capacity was available for more. >> >> Personally, I learned about these kinds of Arpanet issues mostly from >> proximity to the Arpanet NOC and IMP Guys.? There were lots of >> reports documenting Arpanet behavior, but they may not have been >> readily available (no Web yet) or widely distributed, or even of much >> interest to the researchers pursuing Internet Projects. 
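For readers who want the flavor of it: below is a minimal host-side sketch of obeying the RFNM rules Jack describes, assuming a per-destination window of 8 outstanding messages. The window size is an illustrative figure, every name is hypothetical, and the real bookkeeping would have lived in each host's own 1822 interface code, not in anything like this.

    # Toy sketch of host-side "RFNM counting" (illustrative only).
    import threading
    from collections import defaultdict

    MAX_OUTSTANDING = 8  # assumed per-destination window, not taken from the IMP code

    class RfnmCounter:
        """Block the sender itself, before the IMP has to block the whole host."""

        def __init__(self, limit=MAX_OUTSTANDING):
            self.limit = limit
            self.outstanding = defaultdict(int)  # destination -> messages awaiting a RFNM
            self.cond = threading.Condition()

        def send(self, destination, message, transmit):
            # Wait until we are below the window for this destination, then
            # hand the message to the (hypothetical) transmit function.
            with self.cond:
                while self.outstanding[destination] >= self.limit:
                    self.cond.wait()
                self.outstanding[destination] += 1
            transmit(destination, message)

        def rfnm_received(self, destination):
            # The IMP says "Request For Next Message": open the window again.
            with self.cond:
                if self.outstanding[destination] > 0:
                    self.outstanding[destination] -= 1
                self.cond.notify_all()

The only point of the sketch is where the check happens: the sender pauses itself, rather than forcing the IMP to its last-resort remedy of stopping the host's clock.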
>> >> The DTIC documents I mentioned earlier are some of those reports that >> may be not only of historical interest, but also relate to current >> issues which exist in today's Internet.? For example, "MultiPath >> Routing" is an issue in today's Internet.? My cell phone has two >> paths available to it for using the Internet (Wifi and Cell). But it >> can only use one at any time.?? Flow Control was handled by the >> internal IMP mechanisms such as RFNMs. But it didn't prevent >> Congestion.? Congestion Control was another hot topic back in the >> 1980s Arpanet. >> >> The "Arpanet Architecture" put mechanisms for congestion control, >> routing, flow control, instrumentation, et al as internal >> mechanisms.? The "Internet Architecture" places some of those >> functions into the "switching" fabric of routers, switches, and >> modems.?? It places other functions into the "Host" devices where TCP >> et al are implemented. >> Both Hosts and Switching Fabric contain products developed by many >> different, and often competing, manufacturers. >> >> IMHO, those two architectures are quite different, yet reflect >> approaches to the same problem of building a distributed computing >> infrastructure (Licklider's "Galactic Network" vision). >> >> I don't recall much about the changes that were made to the Arpanet >> for things like congestion control.? That's why I mentioned those >> reports saved by DTIC (Defense Technical Information Center).? There >> may be some gems of experience still in those reports that might >> apply in today's world.?? I'm even listed as the Author of some of >> them; but that just reflects that, at the time, I was the designated >> manager of the associated contract. That doesn't mean that I knew >> anything about the work; I was just responsible for getting the >> report submitted so the customer would pay the bill. >> >> Andy Malis, who worked with IMPs, may remember more. Also Bob Hinden, >> who we recruited into the Internet world from the Arpanet group. >> >> Andy??? Bob? >> >> Jack Haverty >> >> >> On 10/3/24 08:50, John Day wrote: >>> Jack, >>> >>> Good stuff. I agree and as I said before SQ alone is not sufficient unless the action to be taken is also defined, both when to send it and what to do when it arrives. Raj and KK said that ECN should be sent when the average queue length was greater than or equal to 1. This is very early and gives the senders time to back off before packets are dropped (hopefully) and retransmissions are generated. TCP by using implicit notification waits until the queue is full and packets are being dropped, and will continue to be dropped (it would seem) until the senders notice the lost Acks. This would appear to generate a lot of retransmissions. >>> >>> A question for you: It has been my impression that with the RFNMs between IMPs, congestion would not have occurred within the IMP subnet (or very rarely).* However, there would have been congestion at the gateways as you describe. Is that correct? >>> >>> Take care, >>> John >>> >>> * Early on there were some deadlock conditions caused by the fact that a message could be 8 packets and was reassembled in the IMP before being delivered to the host, but that isn?t congestion. >>> >>>> On Oct 2, 2024, at 20:08, Jack Haverty via Internet-history wrote: >>>> >>>> Re: Source Quench... >>>> >>>> It's been 40+ years, but I remember meetings where Source Quench was first discussed. My reaction was that it was too simplistic and wouldn't be effective. 
At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40. When I asked what a TCP should do when it received a SQ, no one could provide much of an answer. If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow?? >>>> >>>> Other implementors had different ideas about how to handle an incoming SQ. One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent. So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire. You knew it had been discarded, so you should retransmit it immediately. >>>> >>>> In my TCP, I think I just incremented a counter when I received a SQ. Could always change it later.... >>>> >>>> At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue. There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner working as issues wee identified during operations - see, for example, DTIC reports accessible as ADA086338, and ADA086340. There are many others describing the Arpanet experience. In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design". >>>> >>>> There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs. Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer. The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served. >>>> >>>> Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them. Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP. Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels. "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data. >>>> >>>> The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs. That would effectively block all communications from the blocked host to anywhere else on the Arpanet. By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter. Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it. I have wondered how many implementations actually did. >>>> >>>> All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail. The ancient IMP code itself is even available online today. As always, the ultimate documentation is the code itself. 
But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s. It's not easy to figure out how it worked. >>>> >>>> The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes. In the earliest gateways, "getting a timestamp" from the processor wasn't hard. It was impossible. The gateway hardware simply didn't have any way to measure time. >>>> >>>> IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back. They could calculate the delay along a route. >>>> >>>> Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit. So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available. That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. >>>> >>>> Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet. But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. >>>> >>>> Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports. The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". >>>> >>>> In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones. If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. >>>> >>>> Hope this helps explain how we got from there to here... >>>> Jack Haverty >>>> >>>> >>>> >>>> >>>> >>>> On 10/2/24 15:21, Dave Taht via Internet-history wrote: >>>>> I wish I had had the time and resources to (help) write more papers. (For >>>>> example there isn't much on "drop head queueing") >>>>> >>>>> fq_codel is now a linux-wide default and has the following unique >>>>> properties: >>>>> >>>>> codel queue management, which measure the time a packet spends in a queue >>>>> and gradually attempts to find an optimum point for queue length, which is >>>>> 5ms by default. (it has been tested in software below 250us in the DC). >>>>> There is another subsystem, called BQL, which attempts to limit bytes on >>>>> the device txring to one interrupt's worth. (a pretty good explanation of >>>>> modern layers here) [2] >>>>> >>>>> It drops from the head, not the tail of the queue, with a small (BQL or >>>>> HTB) FIFO in front of the lowest bits of the hardware to account >>>>> for interrupt latency. >>>>> >>>>> (I am kind of curious if a txring existed back in the day and how close an >>>>> application sat to the hardware) >>>>> >>>>> Anecdote: when van and kathy were working on what became codel (january >>>>> 2012), she rang me up one day and asked me just how much overhead there was >>>>> in getting a timestamp from the hardware nowadays. 
And I explained that it >>>>> was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs >>>>> and so on and so forth, and she said thanks, and hung up. Getting a >>>>> timestamp must have been mighty hard back in the day! >>>>> >>>>> The "flow queueing" mechanism sends packets that have an arrival rate of >>>>> less than the departure rate of all the other flows, out first.[1] This is >>>>> an improvement over prior FQ mechanisms like SFQ and DRR, which always put >>>>> a new flow at the tail of the flow list. It is pretty amazing how often >>>>> this works on real traffic. Also it automatically puts flows that build a >>>>> queue into a queue that is managed by codel. >>>>> >>>>> One (eventual) benefit of these approaches, combined, is it makes delay >>>>> based congestion control more feasible (indeed, >>>>> BBR spends most of its time in this mode), but the flow isolation makes for >>>>> most interactive traffic never being queued at all. >>>>> >>>>> IMHO the edges of the internet at least, would have been much better were >>>>> some form of FQ always in it (which we kind of got from switched networks >>>>> naturally) but the idea of FQ was roundly rejected in the first ietf >>>>> meeting in 1989, and it's been uphill ever since. >>>>> >>>>> Just to touch upon pacing a bit - pacing is the default for the linux stack >>>>> no matter the overlying qdisc or congestion control algorithm. >>>>> I don't know if anyone has ever attempted to compare pacing w/cubic vs >>>>> pacing w/bbr, and very few, until recently, have >>>>> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >>>>> >>>>> [1]https://ieeexplore.ieee.org/document/8469111 >>>>> [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >>>>> [3] >>>>> https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >>>>> >>>>> Varying the packet pacing to get a pre-congestion notification is a paper >>>>> I'd like more to pursue. >>>>> https://www.usenix.org/system/files/atc24-han.pdf >>>>> (I so want to believe this paper) >>>>> >>>>> A tiny bit more below.... >>>>> >>>>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < >>>>> internet-history at elists.isoc.org> wrote: >>>>> >>>>>> The response to bufferbloat has always struck me as looking for your keys >>>>>> under a street light when that wasn?t where you dropped them but there is >>>>>> light there. >>>>>> >>>>>> Initially, bufferbloat was not a problem because memory was expensive and >>>>>> when TCP ran out of buffers (or got low), the connection simply blocked the >>>>>> sending application until buffers were available. This was still true with >>>>>> the advent of NIC cards. Memory was still tight. However, as memory got >>>>>> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >>>>>> no one told the application to slow down or wait, so there was local >>>>>> congestion collapse: bufferbloat. >>>>>> >>>>>> One part of the solution would be interface flow control between the >>>>>> sending application and TCP (you would have thought that would have >>>>>> occurred to implementers any way, it is obvious) and/or simply restrict the >>>>>> amount of buffers TCP has available so that it runs out and blocks the >>>>>> sending the application before things get bad and opens up when buffers are >>>>>> available. But virtually all of the papers I see are on different >>>>>> drop-strategies, and oddly enough they never find their keys. 
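John's "block the sending application" remedy is easy to make concrete. Here is a minimal sketch, assuming a deliberately small send buffer sitting between the application and the transport; the buffer size, pacing rate, and names are assumptions made for the example, not taken from any real stack.

    # Toy sketch of interface flow control between an application and the
    # transport: a small, bounded send buffer that blocks the producer
    # instead of absorbing an unbounded backlog (the bufferbloat failure mode).
    import queue
    import threading
    import time

    SEND_BUFFER_PACKETS = 32  # deliberately small; an assumed figure, not a recommendation

    send_buffer = queue.Queue(maxsize=SEND_BUFFER_PACKETS)

    def application():
        for i in range(1000):
            # put() blocks when the buffer is full: the application is told
            # to wait instead of the host growing a standing queue.
            send_buffer.put(f"segment-{i}")

    def transport_sender(rate_pps=100):
        # Drain the buffer at roughly the rate the path can actually carry.
        while True:
            segment = send_buffer.get()
            time.sleep(1.0 / rate_pps)  # stand-in for acks/pacing clocking out data
            send_buffer.task_done()

    threading.Thread(target=transport_sender, daemon=True).start()
    application()
    send_buffer.join()

The drop-strategy papers he mentions operate on a queue after it has already formed; bounding the buffer as above simply keeps a standing queue from forming on the sending host in the first place.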
>>>>>> >>>>> don't have a lot of time for papers! The most modern stuff for tcp is >>>>> using EDF (earliest deadline first) to manage the packet pacing. >>>>> There are virtual and actual physical devices nowadays that take a "time to >>>>> be sent" and packet. This paper was highly influential: >>>>> >>>>> https://saeed.github.io/files/carousel-sigcomm17.pdf >>>>> >>>>> the latest commit to the linux kernel about it: >>>>> >>>>> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >>>>> >>>>> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so >>>>> many things... >>>>> >>>>> >>>>>> Take care, >>>>>> John >>>>>> >>>>>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >>>>>> internet-history at elists.isoc.org> wrote: >>>>>>> Just throwing some thoughts out here ...... >>>>>>> I can see how this happens in a FIFO queuing world. However a lot of >>>>>> work has gone into fair queuing starting in the late 80s. Just wondering >>>>>> if anyone has done work utilizing fair queuing and source quench. For >>>>>> example, I think I can see how to use fair queuing information to better >>>>>> select who to send a source quench to. At least I can see how to do it with >>>>>> Stochastic Fairness Queueing since I worked on it and I remember a fair >>>>>> amount about how it was implemented. I wouldn't be able to provide a >>>>>> guarantee that the wrong host would never receive a source quench but the >>>>>> likelihood should be much lower. Considering whether the use of NAT >>>>>> creates undesirable behavior is also important and I am sure there are >>>>>> probably other cases that need to be checked. >>>>>>> Hum, it might also be interesting to speculate whether this could have >>>>>> any effect on bufferbloat but I fess up I need to learn more about the work >>>>>> done in the area of bufferbloat. I was involved with other things when >>>>>> this started to appear on my radar screen as a hot topic. I will admit I >>>>>> wish I had done more work on possible buffering effects from implementation >>>>>> choices at the time I did work on SFQ but there were contractual >>>>>> obligations that restricted how much time I could devote to the SFQ part of >>>>>> the project. >>>>>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does >>>>>> anyone have any idea about its use in the Internet? >>>>>>> barbara >>>>>>> >>>>>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >>>>>> vint at google.com> wrote: >>>>>>> One basic problem with blaming the "last packet that caused intermediate >>>>>> router congestion" is that it usually blamed the wrong source, among other >>>>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>>>> might remember more. >>>>>>> v >>>>>>> >>>>>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >>>>>> internet-history at elists.isoc.org> wrote: >>>>>>> In a brief attempt to try to find some information about the early MIT >>>>>> work you mentioned, I ended up tripping on this Final Report from ISI in >>>>>> DTIC. It does talk a fair amount about congestion control and source >>>>>> quench (plus other things that might interest people). The period of >>>>>> performance is 1987 to 1990 which is much later than I was considering in >>>>>> my earlier message. >>>>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>>>>> Even though the report mentions testing on DARTnet, I don't remember >>>>>> anything about this during our DARTnet meetings. 
I did join the project >>>>>> after the start so perhaps the work was done before I began to participate. >>>>>> I also couldn't easily find the journal they mention as a place for >>>>>> publishing their findings. I will have more time later to see if I can >>>>>> something that covers this testing. >>>>>>> barbara >>>>>>> >>>>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>>>>> Internet-history wrote: >>>>>>> multicast is also an issue but I do not recall if that was one that >>>>>> Craig & I talked about >>>>>>> Scott >>>>>>> >>>>>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>>>>> internet-history at elists.isoc.org> wrote: >>>>>>>> I remember talking with Craig Partridge (on a flight to somewhere) >>>>>> about source quench >>>>>>>> during the time when 1812 was being written - I do not recall >>>>>>>> the specific issues but I recall that there were more than one issue >>>>>>>> >>>>>>>> (if DoS was not an issue at the time, it should have been) >>>>>>>> >>>>>>>> Scott >>>>>>>> >>>>>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>>>>> internet-history at elists.isoc.org> wrote: >>>>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>>>>> Forwarded for Barbara >>>>>>>>>>> >>>>>>>>>>> ==== >>>>>>>>>>> >>>>>>>>>>> From: Barbara Denny >>>>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>>>>> I think congestion issues were discussed because I remember an ICMP >>>>>> message type called source quench (now deprecated). It was used for >>>>>> notifying a host to reduce the traffic load to a destination. I don't >>>>>> remember hearing about any actual congestion experiments using this message >>>>>> type. >>>>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >>>>>> an >>>>>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>>>>> thesis about the use of Source Quench for congestion control. I >>>>>> believe >>>>>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>>>>> simulation). >>>>>>>>>> I don't think it had much impact on the rest of the world. >>>>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>>>>> 1984). >>>>>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>>>>> I guess he probably wasn't an MIT undergrad. >>>>>>>>> >>>>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>>>>> People >>>>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that >>>>>> basically >>>>>>>>> it was no use. >>>>>>>>> >>>>>>>>> A bit more Google found this, however: >>>>>>>>> >>>>>>>>> "4.3. Internet Congestion Control >>>>>>>>> Lixia Zhang began a study of network resource allocation techniques >>>>>> suitable for >>>>>>>>> the DARPA Internet. The Internet currently has a simple technique for >>>>>> resource >>>>>>>>> allocation, called "Source Quench." >>>>>>>>> Simple simulations have shown that this technique is not effective, >>>>>> and this work >>>>>>>>> has produced an alternative which seems considerably more workable. >>>>>> Simulation >>>>>>>>> of this new technique is now being performed." >>>>>>>>> >>>>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >>>>>> ] >>>>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at >>>>>> UCLA now. 
If she isn't on this list, she should be! >>>>>>>>> Brian Carpenter >>>>>>> -- >>>>>>> Internet-history mailing list >>>>>>> Internet-history at elists.isoc.org >>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >>>>>> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>>>>> until further notice >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Internet-history mailing list >>>>>>> Internet-history at elists.isoc.org >>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From karl at iwl.com Thu Oct 3 14:59:38 2024 From: karl at iwl.com (Karl Auerbach) Date: Thu, 3 Oct 2024 14:59:38 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: <8517ba7d-78bc-41dc-bca1-83cf0b1d3499@iwl.com> On 10/3/24 10:43 AM, Jack Haverty via Internet-history wrote: > Around the time that TCP/IP became a DoD Standard in the 1980s, > NIST/NBS also created a Certification methodology.?? DoD altered its > procurement regulations to require such Certifications for everything > it purchased.? I've never learned who specifically made either of > those things happen.? But someone did. My memory is vague on this, but I believe that some DoD formal documents were done by Dave Kaufman (SDC) and Carl Sunshine.? But I could be completely wrong about this. ??? --karl-- From b_a_denny at yahoo.com Thu Oct 3 15:22:55 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 3 Oct 2024 22:22:55 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <8517ba7d-78bc-41dc-bca1-83cf0b1d3499@iwl.com> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <8517ba7d-78bc-41dc-bca1-83cf0b1d3499@iwl.com> Message-ID: <530950442.12557.1727994176002@mail.yahoo.com> I just remembered this activity but I don't know anything about it. 
https://www.iol.unh.edu/testing/ipv6 barbara On Thursday, October 3, 2024 at 02:59:52 PM PDT, Karl Auerbach via Internet-history wrote: On 10/3/24 10:43 AM, Jack Haverty via Internet-history wrote: > Around the time that TCP/IP became a DoD Standard in the 1980s, > NIST/NBS also created a Certification methodology.?? DoD altered its > procurement regulations to require such Certifications for everything > it purchased.? I've never learned who specifically made either of > those things happen.? But someone did. My memory is vague on this, but I believe that some DoD formal documents were done by Dave Kaufman (SDC) and Carl Sunshine.? But I could be completely wrong about this. ??? --karl- From jeanjour at comcast.net Thu Oct 3 16:31:33 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 19:31:33 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <1416965914.15145634.1727848098390@mail.yahoo.com> <403b1d6d-a2b3-4255-a85a-d8fe8ef2f0f1@3kitty.org> <7E5F8742-9C2C-463A-81EC-E9F1D2BC56CA@comcast.net> Message-ID: <15B7348C-19B8-4377-AEA3-CB5793A9D311@comcast.net> Great stuff Jack. Thanks, John > On Oct 3, 2024, at 17:33, Jack Haverty wrote: > > The Arpanet was shrinking through the 80s and was eventually decommissioned. But at the same time, the Defense Data Network (aka DDN) was growing explosively. Other governmental entities outside of DDN were also deploying their own networks using the then-proven DDN technology. > > For example, Customs had a network which was used by all those agents at kiosks checking your passport when you tried to enter the US. The conversion of DDN from 1822 to X.25 as the Host/IMP interface specification made it much easier to deploy such networks and connect existing or new host computers. There were a bunch of other parts of the US government that had their own such networks, but many of them did not allow that fact to be publicized. Forty years later, I wonder if any of that world is still running. > > All of those networks used the same Arpanet internals (IMP code) as was used in the Arpanet and evolved over time. I don't remember if all, or any, of them implemented TCP. It was required to be present in DoD procurements, but not others that I know of. Even in DoD systems, it was required to be present but not necessarily to be used. Applications in DoD systems could, for example, use X.25 PADs to connect terminals to computers, with no TCP involved. > > In addition, IMPs were being used in commercial environments. Some of them were even deploying their own private "intranets", with LANs connected by gateways (usually Cisco) to IMPs. Such systems were rarely integrated into "The Internet", partly because there was no need and mostly because of security concerns. The corporate intranet I was involved with during the 1990s was one such example. It was a worldwide intranet connecting about 100 countries using Cisco routers and multiple protocols. No IMPs involved, but still a nightmare to operate! > > All of those applications provided new loads, and new usage patterns, affecting the internal operations of the IMP mechanisms. 
This was especially evident as PCs proliferated and client/server interactions slowly replaced the terminal traffic caused by from many users typing. > > Another effect was the change from research to operations. Operational users have different priorities from Researchers. We learned that as the Arpanet evolved from research into operational environments. > > Researchers like to dive into a crashed system, do a full autopsy in the corpse, figure out what went wrong, and contemplate what to do to fix it and avoid future occurrences. Meanwhile, the Users wait. > > Operational users want to get the system back up and running ASAP. It's costing them money, losing sales, and disrupting their business (or military, whatever) activities. Get the system up now! Go figure out what happened and how to fix it on your own time! > > We had one network that was used to process credit card transactions for a major player in the Finance industry. When the network was down, customers would simply use a different card in their wallet. Network downtime was directly translatable into dollars per minute of lost revenue. Many dollars. > > Those shifts in traffic patterns motivated changes to the mechanisms within IMPs for managing flows, congestion, fairness, and anything else needed to keep the Users happy. At one point I recall someone at BBN who was a frequent traveller commenting that he had learned to avoid carrying a briefcase that identified him as a BBN employee, because he suspected from experiences that made him more likely to be hassled at the Immigration desk. Any problem with the computer, application, server, or anything else tended to be blamed on "the network", and everyone using the system knew that BBN had supplied the network. > > One of the DDN applications I remember was the Army's Payroll system - the one which caused Army personnel to get their paychecks. You didn't want to piss them off. They could potentially do some serious "hassling". I remember talking with someone from the Army BRL (Ballistics Research Lab) and joking about the BSQ variant of Source Quench. BSQ (Ballistic Source Quench) involved ordnance, which the Army of course had readily available and they knew how to use it. > > It was an interesting time... > > Jack Haverty > > > On 10/3/24 13:04, John Day wrote: >> Okay, thanks. That clarifies the RFNM issue. >> >> What were ?all of those applications in the 80s? you were alluding to? Well, some of them. In the 80s, was the ARPANET becoming a smaller and smaller part of the Internet? >> >> Your last comment: The ARPANET did process-to-process flow control in NCP and I would venture a guess, hop-by-hop flow control host-to-host through IMP-Host to IMP subnet to IMP-Host. I need to dig int the nature of the IMP-IMP flow and congestion control. >> >> Thanks, >> John >> >>> On Oct 3, 2024, at 15:25, Jack Haverty wrote: >>> >>> John, >>> >>> RFNMs were messages sent from an IMP to one of its attached Host computers - literally "Request For Next Message". AFAIK, RFNMs were not sent internally between IMPs. But there was a lot of internal mechanism to exchange information between IMPs. An IMP could send a RFNM to a Host to indicate that it was OK for that Host to send more data. >>> >>> If your Host didn't obey the RFNM rules, as a last measure to protect the network, the IMP could shut off that Host by turning off the hardware clock on the cable connecting the Host and IMP. For a while, that Host wouldn't be able to talk to any other Host. 
But service would be restored when the traffic jam cleared. >>> >>> But I'm not the right person to ask about IMP internals. When I joined BBN, the Arpanet had already been running for 8 years and had transitioned into "operational" mode, with DCA rather than ARPA in charge. I was in the "research" area associated with then-current ARPA projects such as TCP. But the "IMP Guys" were close by. When I was implementing TCP for Unix, I suspect I learned about RFNMs from one of the IMP programmers, and in particular learned what my TCP had to do in order to avoid ever being blocked by the IMP. >>> >>> I'm not sure if other TCP implementers knew much about how the Arpanet worked, or what they should do in their TCP implementations. Such stuff was probably described somewhere in reports, but I had the advantage of being able to just go ask an IMP programmer. That was even preferable to looking at the ultimate documentation -- the code itself, which was not easy to understand. >>> >>> One of the features of the "Arpanet Architecture" was that the internal mechanisms were insulated from the world outside, which was defined by the "1822" interface specifications. So the internal mechanisms could be changed without any need for Host computers to modify their hardware or software. The Host-IMP interface did change sometimes, but very rarely, e.g,. to introduce "96-bit leaders". The internal IMP mechanisms could (and did) change with every release of the IMP software. They also changed with everyday "patches" that addressed some current operational problem. >>> >>> As the Arpanet grew during the 80s, lots of traffic, lots of new applications, and lots of new users surfaced a lot of issues. The Arpanet internal mechanisms were studied as they were in use, simulations were performed, analyses were done, and new mechanisms were implemented and carefully introduced into the active network, instrumented to see how well the theory matched the results in practice. >>> >>> Congestion control was one such issue. I recall others, e.g., "MultiPath Routing". This was surfaced by the observation that, at the time, there were 3 separate paths through the Arpanet mesh to get data from the East Coast to the West Coast. The "routing algorithm" always had an idea of the "best path", and sent all data along that route. Each route involved 56 kilobits/second circuits. But since all data flowed on the current "best route", it was not possible to attain more than 56 kb/s throughput between any two hosts, even though the cross-country capacity was available for more. >>> >>> Personally, I learned about these kinds of Arpanet issues mostly from proximity to the Arpanet NOC and IMP Guys. There were lots of reports documenting Arpanet behavior, but they may not have been readily available (no Web yet) or widely distributed, or even of much interest to the researchers pursuing Internet Projects. >>> >>> The DTIC documents I mentioned earlier are some of those reports that may be not only of historical interest, but also relate to current issues which exist in today's Internet. For example, "MultiPath Routing" is an issue in today's Internet. My cell phone has two paths available to it for using the Internet (Wifi and Cell). But it can only use one at any time. Flow Control was handled by the internal IMP mechanisms such as RFNMs. But it didn't prevent Congestion. Congestion Control was another hot topic back in the 1980s Arpanet. 
>>> >>> The "Arpanet Architecture" put mechanisms for congestion control, routing, flow control, instrumentation, et al as internal mechanisms. The "Internet Architecture" places some of those functions into the "switching" fabric of routers, switches, and modems. It places other functions into the "Host" devices where TCP et al are implemented. >>> Both Hosts and Switching Fabric contain products developed by many different, and often competing, manufacturers. >>> >>> IMHO, those two architectures are quite different, yet reflect approaches to the same problem of building a distributed computing infrastructure (Licklider's "Galactic Network" vision). >>> >>> I don't recall much about the changes that were made to the Arpanet for things like congestion control. That's why I mentioned those reports saved by DTIC (Defense Technical Information Center). There may be some gems of experience still in those reports that might apply in today's world. I'm even listed as the Author of some of them; but that just reflects that, at the time, I was the designated manager of the associated contract. That doesn't mean that I knew anything about the work; I was just responsible for getting the report submitted so the customer would pay the bill. >>> >>> Andy Malis, who worked with IMPs, may remember more. Also Bob Hinden, who we recruited into the Internet world from the Arpanet group. >>> >>> Andy? Bob? >>> >>> Jack Haverty >>> >>> >>> On 10/3/24 08:50, John Day wrote: >>>> Jack, >>>> >>>> Good stuff. I agree and as I said before SQ alone is not sufficient unless the action to be taken is also defined, both when to send it and what to do when it arrives. Raj and KK said that ECN should be sent when the average queue length was greater than or equal to 1. This is very early and gives the senders time to back off before packets are dropped (hopefully) and retransmissions are generated. TCP by using implicit notification waits until the queue is full and packets are being dropped, and will continue to be dropped (it would seem) until the senders notice the lost Acks. This would appear to generate a lot of retransmissions. >>>> >>>> A question for you: It has been my impression that with the RFNMs between IMPs, congestion would not have occurred within the IMP subnet (or very rarely).* However, there would have been congestion at the gateways as you describe. Is that correct? >>>> >>>> Take care, >>>> John >>>> >>>> * Early on there were some deadlock conditions caused by the fact that a message could be 8 packets and was reassembled in the IMP before being delivered to the host, but that isn?t congestion. >>>> >>>>> On Oct 2, 2024, at 20:08, Jack Haverty via Internet-history wrote: >>>>> >>>>> Re: Source Quench... >>>>> >>>>> It's been 40+ years, but I remember meetings where Source Quench was first discussed. My reaction was that it was too simplistic and wouldn't be effective. At the time, I was the programmer responsible for the Unix TCP I had written for the PDP-11/40. When I asked what a TCP should do when it received a SQ, no one could provide much of an answer. If the initial datagram you sent out to open a TCP connection resulted in an incoming SQ, exactly how would you "slow down" that connection flow?? >>>>> >>>>> Other implementors had different ideas about how to handle an incoming SQ. One (Dave Mills IIRC) opined that receiving an SQ meant that a gateway somewhere in the path had discarded the datagram you had sent. 
So the obvious response by the TCP should be to simply retransmit the datagram without waiting for any "retransmission timer" to fire. You knew it had been discarded, so you should retransmit it immediately. >>>>> >>>>> In my TCP, I think I just incremented a counter when I received a SQ. Could always change it later.... >>>>> >>>>> At the time, there had been a decade's worth of experience in running the Arpanet, and "congestion control" was a well-known, if not well-understood, issue. There's a bunch of old reports available in DTIC that captured a lot of the analysis and experimentation that was done on the Arpanet to change its inner working as issues wee identified during operations - see, for example, DTIC reports accessible as ADA086338, and ADA086340. There are many others describing the Arpanet experience. In particular ADA121350 contains discussions of topics such as "Congestion Control" and "Issues in Internet Gateway Design". >>>>> >>>>> There were internal mechanisms within the Arpanet that enabled it to provide a "virtual circuit" service to host computers attached to IMPs. Although individual packets were routed and handled separately, they were "reassembled" at the destination IMP before delivering them to the attached computer. The Arpanet was widely characterized as a "packet network", but it had elaborate internal mechanisms to deliver a virtual circuit service to the computers it served. >>>>> >>>>> Essentially, packets in the Arpanet didn't start travelling toward their destination until the destination confirmed that there was buffer space reserved for them. Internal messages were exchanged to manage buffer allocations - e.g., the "ALLO" message (ALLOcate) was used to reserve space at a destination IMP. Packets would then traverse each circuit between pairs of IMPs, with error-checking and retransmission as needed to keep it intact in its travels. "RFNM" messages were used to indicate, to the sending host computer, that it was OK to send more data. >>>>> >>>>> The ultimate flow control was available to the IMP as a hardware capability, which could simply stop the clock that controlled the flow of data between Hosts and IMPs. That would effectively block all communications from the blocked host to anywhere else on the Arpanet. By "counting RFNMs", a host could avoid such drastic flow control by not sending any data that would violate the RFNM counter. Any TCP or gateway implementation attached to the Arpanet was subject to such control, and had to implement RFNM counting to avoid it. I have wondered how many implementations actually did. >>>>> >>>>> All of these mechanisms were well-documented in the technical reports, often in excruciating (and likely boring) detail. The ancient IMP code itself is even available online today. As always, the ultimate documentation is the code itself. But it's written in assembly language, and used every programming trick imaginable to make it fast, efficient, and functional in the minicomputer technology of the 1960s. It's not easy to figure out how it worked. >>>>> >>>>> The IMPs had the hardware necessary to measure time, so routing was based on finding lowest delay routes. In the earliest gateways, "getting a timestamp" from the processor wasn't hard. It was impossible. The gateway hardware simply didn't have any way to measure time. >>>>> >>>>> IMPs had clocks, and were interconnected by circuits, so the IMPs could "loop back" any circuit and measure the time to send data and get it back. 
They could calculate the delay along a route. >>>>> >>>>> Gateways were interconnected by networks, which were much less stable and variable than a terrestrial or satellite circuit. So Gateway routing was based on "hops" rather than time - as an interim mechanism until a time-based approach was available. That would then enable handling datagrams which needed "low latency" TOS by sending them on a low-delay route. >>>>> >>>>> Based on what I knew about the Arpanet, gleaned by osmosis from the activity at the Arpanet NOC down the hall and the Arpanet Group around the corner, I didn't think the Source Quench mechanism would work in the Internet. But it also made a good place-holder, to be replaced someday when the research community figured out what mechanism would actually work for congestion control. >>>>> >>>>> Much of what I knew about the internal structure of the Arpanet was available, but I think it's likely that few of the Internet researchers ever even saw the Arpanet reports. The reports were sent to DoD and ARPA, but AFAIK never released as IENs or RFCs, or otherwise distributed within the "research community". >>>>> >>>>> In addition, there was a prevailing policy from ARPA to avoid using old ideas and prefer trying new concepts. I recall being told by someone at ARPA that they needed to promote trying new ideas rather than replicating old ones. If you don't have enough failures, you're not following the "Advanced" part of the ARPA name. >>>>> >>>>> Hope this helps explain how we got from there to here... >>>>> Jack Haverty >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On 10/2/24 15:21, Dave Taht via Internet-history wrote: >>>>>> I wish I had had the time and resources to (help) write more papers. (For >>>>>> example there isn't much on "drop head queueing") >>>>>> >>>>>> fq_codel is now a linux-wide default and has the following unique >>>>>> properties: >>>>>> >>>>>> codel queue management, which measure the time a packet spends in a queue >>>>>> and gradually attempts to find an optimum point for queue length, which is >>>>>> 5ms by default. (it has been tested in software below 250us in the DC). >>>>>> There is another subsystem, called BQL, which attempts to limit bytes on >>>>>> the device txring to one interrupt's worth. (a pretty good explanation of >>>>>> modern layers here) [2] >>>>>> >>>>>> It drops from the head, not the tail of the queue, with a small (BQL or >>>>>> HTB) FIFO in front of the lowest bits of the hardware to account >>>>>> for interrupt latency. >>>>>> >>>>>> (I am kind of curious if a txring existed back in the day and how close an >>>>>> application sat to the hardware) >>>>>> >>>>>> Anecdote: when van and kathy were working on what became codel (january >>>>>> 2012), she rang me up one day and asked me just how much overhead there was >>>>>> in getting a timestamp from the hardware nowadays. And I explained that it >>>>>> was only a few cycles and a pipeline bubble, and the cost of unsynced TSQs >>>>>> and so on and so forth, and she said thanks, and hung up. Getting a >>>>>> timestamp must have been mighty hard back in the day! >>>>>> >>>>>> The "flow queueing" mechanism sends packets that have an arrival rate of >>>>>> less than the departure rate of all the other flows, out first.[1] This is >>>>>> an improvement over prior FQ mechanisms like SFQ and DRR, which always put >>>>>> a new flow at the tail of the flow list. It is pretty amazing how often >>>>>> this works on real traffic. 
Also it automatically puts flows that build a >>>>>> queue into a queue that is managed by codel. >>>>>> >>>>>> One (eventual) benefit of these approaches, combined, is it makes delay >>>>>> based congestion control more feasible (indeed, >>>>>> BBR spends most of its time in this mode), but the flow isolation makes for >>>>>> most interactive traffic never being queued at all. >>>>>> >>>>>> IMHO the edges of the internet at least, would have been much better were >>>>>> some form of FQ always in it (which we kind of got from switched networks >>>>>> naturally) but the idea of FQ was roundly rejected in the first ietf >>>>>> meeting in 1989, and it's been uphill ever since. >>>>>> >>>>>> Just to touch upon pacing a bit - pacing is the default for the linux stack >>>>>> no matter the overlying qdisc or congestion control algorithm. >>>>>> I don't know if anyone has ever attempted to compare pacing w/cubic vs >>>>>> pacing w/bbr, and very few, until recently, have >>>>>> attempted to also compare the cc-of-the-day vs fq_codel or cake. [3] >>>>>> >>>>>> [1]https://ieeexplore.ieee.org/document/8469111 >>>>>> [2]https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9541151 >>>>>> [3] >>>>>> https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0304609&type=printable >>>>>> >>>>>> Varying the packet pacing to get a pre-congestion notification is a paper >>>>>> I'd like more to pursue. >>>>>> https://www.usenix.org/system/files/atc24-han.pdf >>>>>> (I so want to believe this paper) >>>>>> >>>>>> A tiny bit more below.... >>>>>> >>>>>> On Wed, Oct 2, 2024 at 2:31?PM John Day via Internet-history < >>>>>> internet-history at elists.isoc.org > wrote: >>>>>> >>>>>>> The response to bufferbloat has always struck me as looking for your keys >>>>>>> under a street light when that wasn?t where you dropped them but there is >>>>>>> light there. >>>>>>> >>>>>>> Initially, bufferbloat was not a problem because memory was expensive and >>>>>>> when TCP ran out of buffers (or got low), the connection simply blocked the >>>>>>> sending application until buffers were available. This was still true with >>>>>>> the advent of NIC cards. Memory was still tight. However, as memory got >>>>>>> cheap and NIC cards had oceans of memory, TCP never got low on buffers and >>>>>>> no one told the application to slow down or wait, so there was local >>>>>>> congestion collapse: bufferbloat. >>>>>>> >>>>>>> One part of the solution would be interface flow control between the >>>>>>> sending application and TCP (you would have thought that would have >>>>>>> occurred to implementers any way, it is obvious) and/or simply restrict the >>>>>>> amount of buffers TCP has available so that it runs out and blocks the >>>>>>> sending the application before things get bad and opens up when buffers are >>>>>>> available. But virtually all of the papers I see are on different >>>>>>> drop-strategies, and oddly enough they never find their keys. >>>>>>> >>>>>> don't have a lot of time for papers! The most modern stuff for tcp is >>>>>> using EDF (earliest deadline first) to manage the packet pacing. >>>>>> There are virtual and actual physical devices nowadays that take a "time to >>>>>> be sent" and packet. 
This paper was highly influential: >>>>>> >>>>>> https://saeed.github.io/files/carousel-sigcomm17.pdf >>>>>> >>>>>> the latest commit to the linux kernel about it: >>>>>> >>>>>> https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet at google.com/T/ >>>>>> >>>>>> PS IMHO eric dumazet belongs a spot in the internet hall of fame for so >>>>>> many things... >>>>>> >>>>>> >>>>>>> Take care, >>>>>>> John >>>>>>> >>>>>>>> On Oct 2, 2024, at 01:48, Barbara Denny via Internet-history < >>>>>>> internet-history at elists.isoc.org > wrote: >>>>>>>> Just throwing some thoughts out here ...... >>>>>>>> I can see how this happens in a FIFO queuing world. However a lot of >>>>>>> work has gone into fair queuing starting in the late 80s. Just wondering >>>>>>> if anyone has done work utilizing fair queuing and source quench. For >>>>>>> example, I think I can see how to use fair queuing information to better >>>>>>> select who to send a source quench to. At least I can see how to do it with >>>>>>> Stochastic Fairness Queueing since I worked on it and I remember a fair >>>>>>> amount about how it was implemented. I wouldn't be able to provide a >>>>>>> guarantee that the wrong host would never receive a source quench but the >>>>>>> likelihood should be much lower. Considering whether the use of NAT >>>>>>> creates undesirable behavior is also important and I am sure there are >>>>>>> probably other cases that need to be checked. >>>>>>>> Hum, it might also be interesting to speculate whether this could have >>>>>>> any effect on bufferbloat but I fess up I need to learn more about the work >>>>>>> done in the area of bufferbloat. I was involved with other things when >>>>>>> this started to appear on my radar screen as a hot topic. I will admit I >>>>>>> wish I had done more work on possible buffering effects from implementation >>>>>>> choices at the time I did work on SFQ but there were contractual >>>>>>> obligations that restricted how much time I could devote to the SFQ part of >>>>>>> the project. >>>>>>>> Just curious, ECN (Explicit Congestion Notification) is optional . Does >>>>>>> anyone have any idea about its use in the Internet? >>>>>>>> barbara >>>>>>>> >>>>>>>> On Tuesday, October 1, 2024 at 07:10:25 PM PDT, Vint Cerf < >>>>>>> vint at google.com > wrote: >>>>>>>> One basic problem with blaming the "last packet that caused intermediate >>>>>>> router congestion" is that it usually blamed the wrong source, among other >>>>>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>>>>> might remember more. >>>>>>>> v >>>>>>>> >>>>>>>> On Tue, Oct 1, 2024 at 8:50?PM Barbara Denny via Internet-history < >>>>>>> internet-history at elists.isoc.org > wrote: >>>>>>>> In a brief attempt to try to find some information about the early MIT >>>>>>> work you mentioned, I ended up tripping on this Final Report from ISI in >>>>>>> DTIC. It does talk a fair amount about congestion control and source >>>>>>> quench (plus other things that might interest people). The period of >>>>>>> performance is 1987 to 1990 which is much later than I was considering in >>>>>>> my earlier message. >>>>>>>> https://apps.dtic.mil/sti/tr/pdf/ADA236542.pdf >>>>>>>> Even though the report mentions testing on DARTnet, I don't remember >>>>>>> anything about this during our DARTnet meetings. I did join the project >>>>>>> after the start so perhaps the work was done before I began to participate. 
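(A concrete reading of the fair-queuing suggestion earlier in this thread: if the per-flow queues of an SFQ-like scheduler are visible, a quench can be aimed at the source actually building a backlog rather than at whoever sent the packet that happened to overflow the buffer. The bucket count, threshold, and send_source_quench() below are invented for illustration only.)

    import hashlib
    from collections import defaultdict, deque

    QUENCH_THRESHOLD = 20    # per-flow backlog, in packets, that triggers a quench

    def send_source_quench(src_addr):
        print(f"would send ICMP Source Quench (type 4) to {src_addr}")

    class ToySFQ:
        """Hash each flow to a bucket; use bucket depth to pick the quench target."""
        def __init__(self, nbuckets=1024, perturbation=b"epoch-0"):
            self.nbuckets = nbuckets
            self.perturbation = perturbation       # SFQ re-keys its hash periodically
            self.buckets = defaultdict(deque)

        def _bucket(self, pkt):
            key = f"{pkt['src']}|{pkt['dst']}|{pkt['proto']}".encode() + self.perturbation
            return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % self.nbuckets

        def enqueue(self, pkt):
            q = self.buckets[self._bucket(pkt)]
            q.append(pkt)
            if len(q) > QUENCH_THRESHOLD:
                # The source at the head of the deepest queue is the likely offender,
                # so the quench is much less likely to hit an innocent sender.
                send_source_quench(q[0]["src"])

This does not remove the hash-collision caveat mentioned above: two flows sharing a bucket can still cause the wrong host to be quenched, just far less often than with a single FIFO.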
>>>>>>> I also couldn't easily find the journal they mention as a place for >>>>>>> publishing their findings. I will have more time later to see if I can >>>>>>> something that covers this testing. >>>>>>>> barbara >>>>>>>> >>>>>>>> On Tuesday, October 1, 2024 at 04:37:47 PM PDT, Scott Bradner via >>>>>>> Internet-history wrote: >>>>>>>> multicast is also an issue but I do not recall if that was one that >>>>>>> Craig & I talked about >>>>>>>> Scott >>>>>>>> >>>>>>>>> On Oct 1, 2024, at 7:34?PM, Scott Bradner via Internet-history < >>>>>>> internet-history at elists.isoc.org > wrote: >>>>>>>>> I remember talking with Craig Partridge (on a flight to somewhere) >>>>>>> about source quench >>>>>>>>> during the time when 1812 was being written - I do not recall >>>>>>>>> the specific issues but I recall that there were more than one issue >>>>>>>>> >>>>>>>>> (if DoS was not an issue at the time, it should have been) >>>>>>>>> >>>>>>>>> Scott >>>>>>>>> >>>>>>>>>> On Oct 1, 2024, at 6:22?PM, Brian E Carpenter via Internet-history < >>>>>>> internet-history at elists.isoc.org > wrote: >>>>>>>>>> On 02-Oct-24 10:19, Michael Greenwald via Internet-history wrote: >>>>>>>>>>> On 10/1/24 1:11 PM, Greg Skinner via Internet-history wrote: >>>>>>>>>>>> Forwarded for Barbara >>>>>>>>>>>> >>>>>>>>>>>> ==== >>>>>>>>>>>> >>>>>>>>>>>> From: Barbara Denny >>>>>>>>>>>> Sent: Tuesday, October 1, 2024 at 10:26:16 AM PDT >>>>>>>>>>>> I think congestion issues were discussed because I remember an ICMP >>>>>>> message type called source quench (now deprecated). It was used for >>>>>>> notifying a host to reduce the traffic load to a destination. I don't >>>>>>> remember hearing about any actual congestion experiments using this message >>>>>>> type. >>>>>>>>>>> Of only academic interest: I believe that, circa 1980 +/- 1-2 years, >>>>>>> an >>>>>>>>>>> advisee of either Dave Clark or Jerry Saltzer, wrote an undergraduate >>>>>>>>>>> thesis about the use of Source Quench for congestion control. I >>>>>>> believe >>>>>>>>>>> it included some experiments (maybe all artificial, or only through >>>>>>>>>>> simulation). >>>>>>>>>>> I don't think it had much impact on the rest of the world. >>>>>>>>>> Source quench is discussed in detail in John Nagle's RFC 896 (dated >>>>>>> 1984). >>>>>>>>>> A trail of breadcrumbs tells me that he has an MSCS from Stanford, so >>>>>>>>>> I guess he probably wasn't an MIT undergrad. >>>>>>>>>> >>>>>>>>>> Source quench was effectively deprecated by RFC 1812 (dated 1995). >>>>>>> People >>>>>>>>>> had played around with ideas (e.g. RFC 1016) but it seems that >>>>>>> basically >>>>>>>>>> it was no use. >>>>>>>>>> >>>>>>>>>> A bit more Google found this, however: >>>>>>>>>> >>>>>>>>>> "4.3. Internet Congestion Control >>>>>>>>>> Lixia Zhang began a study of network resource allocation techniques >>>>>>> suitable for >>>>>>>>>> the DARPA Internet. The Internet currently has a simple technique for >>>>>>> resource >>>>>>>>>> allocation, called "Source Quench." >>>>>>>>>> Simple simulations have shown that this technique is not effective, >>>>>>> and this work >>>>>>>>>> has produced an alternative which seems considerably more workable. >>>>>>> Simulation >>>>>>>>>> of this new technique is now being performed." >>>>>>>>>> >>>>>>>>>> [MIT LCS Progress Report to DARPA, July 1983 - June 1984, AD-A158299, >>>>>>>>>> https://apps.dtic.mil/sti/pdfs/ADA158299.pdf >>>>>>> ] >>>>>>>>>> Lixia was then a grad student under Dave Clark. Of course she's at >>>>>>> UCLA now. 
If she isn't on this list, she should be! >>>>>>>>>> Brian Carpenter >>>>>>>> -- >>>>>>>> Internet-history mailing list >>>>>>>> Internet-history at elists.isoc.org >>>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Please send any postal/overnight deliveries to:Vint CerfGoogle, LLC1900 >>>>>>> Reston Metro Plaza, 16th FloorReston, VA 20190+1 (571) 213 1346 >>>>>>>> until further notice >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Internet-history mailing list >>>>>>>> Internet-history at elists.isoc.org >>>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>>> -- >>>>>>> Internet-history mailing list >>>>>>> Internet-history at elists.isoc.org >>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> >> > > From brian.e.carpenter at gmail.com Thu Oct 3 16:35:37 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Oct 2024 12:35:37 +1300 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <1AA92124-7A80-41EE-8DFD-CA6F3E90A6B2@comcast.net> Message-ID: <5587ab03-a5b2-4378-bea5-67c0244ec205@gmail.com> Correction, it was *David* Rayner at NPL. Regards Brian On 04-Oct-24 09:52, Brian E Carpenter wrote: > In Europe it was a whole industry in itself. Just two examples(Google finds many): > > https://shop.elsevier.com/books/osi-conformance-testing-methodology-and-ttcn/baumgarten/978-0-444-89712-1 > > https://link.springer.com/chapter/10.1007/BFb0026973 > > Derek Rayner at NPL was the high priest; the OSI promoters in Brussels were worshippers. > > Regards > Brian > On 04-Oct-24 09:30, John Day via Internet-history wrote: >> Ahh, yes, one of Chris Vissers students. Vissers developed a Temporal Ordering approach to Formal Description. It was interesting in that it said a minimal amount about the implementation (leaving greater leeway to the implementor, which seemed important then) but I never found anyone who could design in it. From the URL, this appears to come from Gregor Bochmann?s course who worked on a formal description for CCITT. I forget now what it was but SDL had some feature that they insisted on that made the description ambiguous. Typical ITU. >> >> As I said, there was a lot of push by the traditionalists for it, but certainly there was nothing in ISO to actually do it. I don?t know if Europe did but as I said the only group in the US was the DoD group in AZ and they were very plodding. >> >> Take care, >> John >> >>> On Oct 3, 2024, at 16:15, Scott O. Bradner via Internet-history wrote: >>> >>> relevant to the question of OSI testing >>> >>> An Overview of OSI Conformance Testing >>> https://www.site.uottawa.ca/~bochmann/CSI5174/CourseNotes/Literature/Tretmans%20-%20Overview%20of%20OSI%20conformance%20testing%20-%20iso9646.pdf >>> >>> >>>> On Oct 3, 2024, at 4:06?PM, John Day via Internet-history wrote: >>>> >>>> No, never has on anything I know of. 
>>>> >>>> The only certification done on OSI was done by the DoD at some place in Arizona and it was a complete fiasco. >>>> >>>> John >>>> >>>>> On Oct 3, 2024, at 15:43, Vint Cerf wrote: >>>>> >>>>> ISO? >>>>> v >>>>> >>>>> >>>>> On Thu, Oct 3, 2024 at 3:42?PM John Day via Internet-history > wrote: >>>>>> Just out of curiosity, what other standards organizations do compliance testing? >>>>>> >>>>>> John >>>>>> >>>>>>> On Oct 3, 2024, at 13:54, Dave Crocker via Internet-history > wrote: >>>>>>> >>>>>>> On 10/3/2024 10:43 AM, Jack Haverty via Internet-history wrote: >>>>>>>> It's curious to me that such mechanisms have not been created for the Internet Industry. >>>>>>> >>>>>>> >>>>>>> Informal interoperability testing, versus formal compliance testing, was and remains a key distinction between the Internet's culture and the cultures of various other standards organization. Compliance testing is typically expensive and incomplete. (As a tool for initial code debugging, tests like that can be efficient; as a guarantee of field interoperability, not so much.) >>>>>>> >>>>>>> There was a wonderful panel that Vint was on, circa 1990, along with a number of other folk, including a vigorous OSI proponent from Boeing. >>>>>>> >>>>>>> Vint made his comments about experiences with the Internet's technology and specifically noted the reliance on interoperability testing rather than (bench) compliance testing. >>>>>>> >>>>>>> Other panelists made various comments and then the Boeing person made theirs, vigorously asserting that it is not possible to get widespread interoperability without formal compliance testing. >>>>>>> >>>>>>> It was fun to watch Vint slowly lean slightly forward and then slowly turn his head toward the Boeing person. >>>>>>> >>>>>>> d/ >>>>>>> >>>>>>> -- >>>>>>> Dave Crocker >>>>>>> Brandenburg InternetWorking >>>>>>> bbiw.net >>>>>>> mast:@dcrocker at mastodon.social >>>>>>> >>>>>>> -- >>>>>>> Internet-history mailing list >>>>>>> Internet-history at elists.isoc.org >>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>>> >>>>> -- >>>>> Please send any postal/overnight deliveries to: >>>>> Vint Cerf >>>>> Google, LLC >>>>> 1900 Reston Metro Plaza, 16th Floor >>>>> Reston, VA 20190 >>>>> +1 (571) 213 1346 >>>>> >>>>> >>>>> until further notice >>>>> >>>>> >>>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> From brian.e.carpenter at gmail.com Thu Oct 3 16:58:18 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Oct 2024 12:58:18 +1300 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <530950442.12557.1727994176002@mail.yahoo.com> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <8517ba7d-78bc-41dc-bca1-83cf0b1d3499@iwl.com> 
<530950442.12557.1727994176002@mail.yahoo.com> Message-ID: <60e42858-0026-4264-94a7-7a0e6992b4a6@gmail.com> On 04-Oct-24 11:22, Barbara Denny via Internet-history wrote: > I just remembered this activity but I don't know anything about it. > https://www.iol.unh.edu/testing/ipv6 It's serious. Closely connected to ongoing work such as draft-winters-v6ops-rfc7084bis, "Basic Requirements for IPv6 Customer Edge Routers" and draft-clw-6man-rfc8504-bis "IPv6 Node Requirements", so directly impacting both standards development and running code. Brian > barbara > On Thursday, October 3, 2024 at 02:59:52 PM PDT, Karl Auerbach via Internet-history wrote: > > On 10/3/24 10:43 AM, Jack Haverty via Internet-history wrote: > >> Around the time that TCP/IP became a DoD Standard in the 1980s, >> NIST/NBS also created a Certification methodology. DoD altered its >> procurement regulations to require such Certifications for everything >> it purchased. I've never learned who specifically made either of >> those things happen. But someone did. > > My memory is vague on this, but I believe that some DoD formal documents > were done by Dave Kaufman (SDC) and Carl Sunshine. But I could be > completely wrong about this. > > --karl- >
From jack at 3kitty.org Thu Oct 3 18:21:45 2024 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 3 Oct 2024 18:21:45 -0700 Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: Hi Karl, You made me drag out my ancient notebooks to look at the "Problem List" that I wrote down from the first ICCB meeting back on September 21, 1981. The list included "test and verification of components" and "instrumentation and operational support". So yes, there should also be "Gateway Issue: Operations Tools". Perhaps it should be "Internet Issue:...", since Gateways (routers) are just one component involved. You and I at least are on the same page, extremely concerned about tools and techniques for operating networks. When Vint gave me the assignment to "make the core gateways an operational 24x7 service" (also September 1981 but I had been informed earlier that year), there were virtually no "operations tools" available. IIRC, the original 1974 paper defining TCP didn't address the issue at all. It focussed on how the system would work when everything was implemented and operating correctly. As the first TCP/IP software was implemented, few if any mechanisms such as "test points" or "loopbacks" had been included - e.g., the implementation I did for Unix had no such features. It simply wasn't a priority for a research environment, especially when the computer involved was sitting in front of you and all of its regular debugging tools were readily usable. IIRC, none of the people implementing those first TCPs had ever been involved in any network operations. I didn't have any experience either in operating a 24x7 service. But the Arpanet NOC was literally down the hall, and by that time the Arpanet had been operating for over a decade. Many tools, procedures, and mechanisms had been created over that time.
The research community didn't seem to have any ideas about operating a network, or have much interest in researching that area.?? So, ... the obvious way to operate the Internet as a 24x7 service was to simply steal the mechanisms that the Arpanet was using successfully to operate the Arpanet as a 24x7 service.?? So that's what we did. For example, the Arpanet had an internal mechanism called "Traps", whereby IMPs scattered around the network reported anomalous events, traffic statistics, and other such data back to the NOC at BBN.? All such Traps were printed out (on a Mod33 TTY IIRC) and eventually ended up in a large and ever-growing stack of paper off in a corner of the NOC.?? But an Operator, or IMP programmer, could look back at the paper logs and often discover an imminent problem, or see the events which led up to a reported problem to be fixed.?? The log was the first stop for anyone called in to fix a problem. IMPs had "Fake Hosts", which were simply hosts implemented inside the IMP software, but able to do things that a normal host might do.?? One example was simply generating, or sinking, a flow of traffic.? Another "Fake Host" contained a built-in DDT (a common debugger program of the era).?? By connecting to the DDT Fake Host, an IMP programmer could examine the IMP memory, make patches or changes, load new software, or do whatever else you could normally do with DDT and a local machine.? But the IMP might be many miles away. Within the Internet, we lobbied, cajoled, encouraged, and implemented similar tools to replicate the Arpanet operations functionality.? Arpanet Traps evolved into Internet SNMP mechanisms, and were extended to end-user computers, to access functionality (flow control, retransmissions, et al) that had been moved to the Hosts by the TCP architecture.? Fake Hosts such as DDT evolved into XNet, which IIRC was a Ray Tomlinson project.? I ended up documenting after it was updated for TCP/IP Version 4 (RFC 643 and IEN 158).? As such tools were created, operating Gateways became very similar to operating IMPs.?? At some point (can't remember exactly when), the Arpanet NOC began also operating the "core gateways", able to perform simple tasks like reloading software, and able to call the programmers when a more serious situation was detected. At the time, IIRC the "Gateway Group" was Bob Hinden, Mike Brescia, and Alan Sheltzer, all of whom interacted with the NOC and kept the Internet running.?? David Floodpage built a system called the CMCC (Catenet Monitoring and Control Center) as a tool analogous to the NOC "U" program (Utilities) that was used to do maintenance activities. When TCP/IP was standardized by DoD, Jon prepared the RFCs.?? But he forgot to include some of the pieces that we, as operators, considered mandatory.? In particular, ICMP was absent as part of the Specification.?? So government contractors felt no need to implement it.?? That meant that tools, such as ECHO and SQ, wouldn't be available for use in operating and debugging.? We always thought that such mechanisms were just a part of IP. ? After much grousing and complaining, ICMP was documented in an RFC and IIRC Dod contracts started requiring it. We developed other tools as the need and inspiration allowed.? For example, we used the "hooks" still present in the IMP code which had permitted the NMC (Network Monitoring Center) at UCLA to collect performance data about the infant Arpanet.? 
That code had long been unused, but we noticed that we could easily point it to another network address and send reports wherever we liked. That enabled the creation of a "Remote Datascope" (RD) tool, which (IIRC) was a program running on a Sun Sparc.? A remote IMP could be patched (using DDT in the IMP) to send reports to the RD computer. One very valuable use of that was to capture the beginning of an Arpanet "message", of sufficient size to contain an IP and TCP header.? Thus a "Internet Engineer" debugging some problem at a remote site could "hook a datascope" to that host's traffic flow and see exactly what was going on. Of course, such a Datascope would also be a great Spy tool.? But we avoided mentioning that.... Another tool was the "Flakeway".? I don't know the timing or if the flakeway idea from Jon Postel was independent or not.? It's likely that I at some point told the ICCB and Jon about our Flakeway and how we were using it in operating the core part of the Internet. Our "Flakeway" was built by Dan Wood on a Sun Sparc, which by then had become pretty common.? The problem we were facing was that the Arpanet was too reliable.? It never dropped, reordered, delayed, duplicated, or corrupted anything that one Host sent to another.? So the Arpanet was a poor testbed for TCP implementations. The Flakeway that Dan built was a weekend project.? We noticed a quirk in the Internet protocols that made it possible.? Flakeway took advantage of a probably serious vulnerability in the Internet protocols and their implementations.? On a LAN (almost always Ethernet at the time), IP addresses were converted into LAN addresses using ARP.? Basically, a Host needing to send a packet to another computer on the LAN would broadcast a query saying "Does anyone know where x.x.x.x is?"?? The host which believed it was x.x.x.x would reply and say "It's me!? I'm at Ethernet address XX:XX:XX:XX:XX:XX. The Flakeway inserted itself into traffic flows by watching for ARP exchanges, and then immediately contradicting the "It's me!" message with another of its own - "No, it's ME!? I'm really at YY:YY:YY:YY:YY:YY"? That would effectively direct all subsequent traffic to the Flakeway.? By exercising the same exchange with a user computer on the LAN, and a gateway on the LAN, the Flakeway could insert itself into the bidirectional flow of IP datagrams. Nothing needed to be changed on either the Hosts or Gateways to accomplish this. Flakeway could then do whatever it wanted with the datagram flows. Reorder, duplicate, modify, delay, etc., were all easy.??? Delays in particular were far more feasible than traditional network methods - which typically involved a *huge* roll of cable to create in a lab the delays that would normally be seen in a trans-continental circuit. ?It was even possible for Flakeway to alter where a new TCP connection went - so that when a user tried to connect to some particular IP address, the connection would instead go somewhere else.? Flakeway would modify the IP addresses in the headers as needed to make it all work.? I suspect it's similar to how NAT operates. I don't know what the Specifications for ARP said about how ARPs SHOULD, MUST, or MAY be handled.? But in practice, the Flakeway worked for all the hosts we tried.?? They all seemed to simply believe whoever most recently answered their ARP query. When I migrated to the West Coast and "up the stack" to Webs and Databases, I took the Flakeway idea with me.? 
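(The ARP exchange just described is easy to reproduce on a lab LAN you control; a bare-bones sketch follows. Scapy is an assumption here, since the historical Flakeway was custom code on a Sun, and the interface name and addresses are examples only.)

    from scapy.all import ARP, Ether, sendp, get_if_hwaddr

    IFACE = "eth0"                                            # hypothetical lab interface
    HOST_IP, HOST_MAC = "192.0.2.10", "aa:bb:cc:00:00:10"     # example lab addresses
    GW_IP, GW_MAC = "192.0.2.1", "aa:bb:cc:00:00:01"
    MY_MAC = get_if_hwaddr(IFACE)

    def claim(target_ip, target_mac, spoofed_ip):
        """Tell the target that spoofed_ip lives at our MAC -- the "No, it's ME!" reply."""
        reply = Ether(dst=target_mac) / ARP(
            op=2,                                 # op=2 is an ARP reply ("is-at")
            psrc=spoofed_ip, hwsrc=MY_MAC,
            pdst=target_ip, hwdst=target_mac)
        sendp(reply, iface=IFACE, verbose=False)

    # Insert this machine into both directions of the host <-> gateway flow,
    # the bidirectional insertion described above.  IP forwarding must be
    # enabled locally, or the intercepted flows simply stop.
    claim(HOST_IP, HOST_MAC, GW_IP)    # host now sends "gateway" traffic to us
    claim(GW_IP, GW_MAC, HOST_IP)      # gateway now sends "host" traffic to us

As the message notes, nothing on the hosts or gateways has to change; the hosts simply believe whoever answered their ARP query most recently, which is exactly why the same trick is both a test tool and a vulnerability.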
We used it extensively in operating our own intranet, to be able to see what all the different computers and their TCP/IP implementations were actually doing, without changing a thing on any of those computers.?? It was a valuable tool. We didn't talk about the Flakeway a lot, since it seemed like a dangerous tool to have around in a Hackers' Toolbox.? I do remember that we reported it to someone in the IETF world as a serious vulnerability.? But I don't know if anything every changed.?? It also seemed to be becoming less useful as LANs became switched and it was hard to find a place to plug in a Flakeway so that it could do its thing.?? But that was before Wifi became dominant.? Computers now are often on a broadcast channel.? Perhaps a Flakeway would still work. Thanks for "dropping the dime" on the culprit of the great Reverse Packet Caper.? I recall one operational incident in the core Internet when a gateway suddenly started reporting all sorts of IP errors.? Investigation revealed that some computer out there was sending out IP datagrams in some kind of reverse order.? I thought we traced it to a BSD machine somewhere, but it may have been FTP Software. Some years ago, one of our government clients struggling with operating their own (inter)network asked me to write up a report outlining how to do it.? I remember writing a report, called something like "Managing Large Network Systems", but it was delivered and disappeared into the bowels of the government. Haven't found it since. The report outlined basic elements of a operators' toolbox.? For example, one component was an "Anchor Host", which was simply a computer installed at each remote location and accessible to the operators from their remote location.? Anchor Hosts were to be well-understood by the operators, so they could run familiar tests, tools such as Flakeway, or whatever else was needed remotely to debug a problem.?? Similarly, some kind of database would be used to collect data about performance, not only during problem conditions but also during normal everyday operation.? Being able to compare what's happening when it's broken to what happens when it's working was a very useful tool - very similar to your "pathology database".?? It also included things like conformance tests, not to verify that the implementation met a specification, but rather to collect measurements characterizing how the system behaved when it was officially "working", for use later when it wasn't. All of the ideas contained in that report stemmed from years of watching network problems and observing how people (including myself) attacked such problems. IMHO, researchers, and anyone creating protocols, algorithms, and implementations, should spend some time involved in actually operating one of these beasts, taking user complaints on the "network help line", and figuring out what the problem is, and what components of hardware and software have to be changed.?? With no finger-pointing allowed. But I agree, it's not easy to do that, and it's probably getting harder.?? Today's systems' complexity seems to offer an increasing opportunity for finger-pointing, in addition to the security efforts you mentioned. In addition to talking with vendors, who are often competing, I suggest talking to Users, who buy all those vendors' products and are faced with somehow getting it all to work.? Is there an "Internet Users Community"??? Almost every organization, company, government and even individual on the planet might be a member. 
Jack Haverty On 10/3/24 11:31, Karl Auerbach wrote: > My grandfather was a radio repair guy, my father repaired TV's that > other repair people could not fix.? So I grew up with my hands inside > electronics learning how to figure out what was going wrong and what > to do about it.? (I also learned a lot about keeping my fingers clear > of high voltages - some day ask me about how the phrase "beating the > bounds" [with regard to land titles] came about, and yes, there is an > analogy to high voltage shocks.) > > I've carried that family history (of repairing, not shocking) into the > land of networks. > > I am extremely concerned, and I mean *extremely* concerned, that our > race to lock and secure things is slowly making it increasingly > difficult for us to monitor, diagnose, and repair the Internet (and > the increasing number of other important infrastructures that have > become intermeshed with the net.) > > I wrote a note about this issue: > > Is The Internet At Risk From Too Much Security > https://www.cavebear.com/cavebear-blog/netsecurity/ > > My experience with designing, deploying, and running the Interop show > networks informed me that we have few decent tools.? I looked in awe > with the collection of well designed tools that AT&T guys (they were > always guys in that era) had dangling from their tool belts.? So I > designed and sold the first Internet buttset - a tool to get one up > and running within seconds to do testing and evaluation of a IP (and > Netware) network.? (The tool was "Dr. Watson, The Network Detective's > Assistant" - https://www.cavebear.com/archive/dwtnda/ .? However, I > was learning about how to run a company at that time and I didn't > watch, much less control, what my marketing group was spending - so we > went under.? I then helped Fluke pick up some of the remnant ideas for > their products.) > > Anyway, I have been bothered at how few test points we build into > network software.? Even one of the most fundamental - remote loopback > - is barely present in network equipment (yes, we have ICMP Echo/ping) > but that's rather primitive. And I've long worked with SNMP and MIBs.? > (I wrote and implemented an alternative to SNMP and Netconf that I > though was much more useful then either: KNMP at > https://www.iwl.com/idocs/knmp-overview ) > > My wife (Chris Wellens) and wrote up a paper in 1996 titled "Towards > Useful Management" in which we made several proposals to improve our > means to monitor and test networks. > https://www.cavebear.com/docs/simple-times-vol4-num3.pdf > > In the meantime Marshall Rose and my? wife spun a new company, > Interworking Labs, out from the Interop company.? The initial purpose > was to develop test suits for network protocols.? (These suites still > exist and often reveal mistakes in network code.? One of my favorite > is to repackage Ethnernet frames that have short IP packets inside > those Ethernet frames.? The IP packet is put into an Ethernet frame > that is larger than it needs to be to hold that IP packet. (Some > vendors have used that space to do things like announcing license > identifiers in the unused space in an Ethernet frame after an ARP > packet.)? Far too much code uses the ethernet frame length rather than > properly using the IP length fields - bad things can happen as a > result.? 
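(The Ethernet-frame-length point above is easy to show: a minimum-size Ethernet frame is padded to 60 bytes, so a short IP datagram arrives with trailing bytes that are not part of the datagram at all. A small defensive parse, with field offsets per RFC 791; the function name is just for illustration.)

    import struct

    def ipv4_payload(eth_frame: bytes) -> bytes:
        """Trim a received Ethernet frame to the datagram the IP header declares."""
        if struct.unpack("!H", eth_frame[12:14])[0] != 0x0800:
            raise ValueError("not an IPv4 frame")
        ip = eth_frame[14:]                            # strip the 14-byte Ethernet header
        ihl = (ip[0] & 0x0F) * 4                       # IP header length, in bytes
        total_len = struct.unpack("!H", ip[2:4])[0]    # RFC 791 Total Length field
        if total_len < ihl or total_len > len(ip):
            raise ValueError("inconsistent IP length")
        # Using len(ip) instead of total_len is the mistake described above:
        # minimum-frame padding (or whatever a vendor stashed after the datagram)
        # silently becomes part of the "payload".
        return ip[ihl:total_len]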
And there is still code out there that uses signed integer > math on unsigned integer packet fields - so a lot of code still > wobbles if one tickles packets with numbers just below or just above > the point where that high order bit toggles.) > > Jon Postel came up with a testing idea for the bakeoff test events we > had at places like FTP Software and ISI - a router that does things > wrong in a controlled way.? A few years later Steve Casner and I were > working to develop a portable RTP/RTCP engine for entertainment grade > audio/video (on IP multicast); we longed for a device such as Jon's > "flakeway" because of the need to evaluate all of the potential race > conditions that can happen when running several related media streams > in real time. > > So a few years later at Interworking labs we started to develop Jon's > flakeway into a real tool.? We called the line "Maxwell" after James > Clerk Maxwell's thought experiment about a daemon that could select > and control the flow of hot and cold particles, seemingly violating > the laws of Thermodynamics.? It is still rather surprising how much > code out there wobbles (or worse) when faced with simple network > behaviour such as packet order resequencing (such as can happen when > there are parallel/load balanced/bound) network paths, or when packets > are accumulated for a short while and then suddenly released (as if a > dam, holding back a lake of packets, suddenly bursts.) > > I have seen many network test suites that check that a protocol > implementation complies with the mandatory or suggested parts of > RFCs.? Those are nice.? But my concern is on the other side of the > RFCs - what about the DO NOT cases or undefined cases, what happens > when those situations happen. > > For instance, I remember Dave Bridgham (FTP Software) one afternoon > saying "You know, if I received the last IP fragment first I would > have information that let me do better receive buffer allocation."? So > he changed the FTP Software IP stack to send last fragment first.? It > worked.? That is it worked until an FTP Software based machine was > added to a network running competitor Netmanage TCP/IP code.? That > latter code simply up and died when it got the last fragment first. > > And at a TCP bakeoff I had a tool to test ARP, a protocol that has > many knobs and levers that are rarely used.? I managed to generate a > broadcast ARP packet that used some of those knobs and levers. That > ARP hit the router between our test networks and the host company's > main network - that router crashed, but before it did it (for some > reason) propagated that ARP further along, causing every other (I > believe Proteon) router in the company to also crash. > > We found a lot of things like that on the Interop show network. (I > usually got blamed because I was usually near, if not operating, the > device that triggered the flaws.)? One of the worst was a difference > in opinion between Cisco and Wellfleet routers about what to do with > expansion of IP multicast packets into Ethernet frames (in particular > what group MAC addresses to use) resulting in infinite IP multicast > routing across the show net - every load LED on every one of our > hundreds of routers and switches turned red.? (And, of course, all > fingers pointed at me. ;-) > > The Interop show net was a wonderful place to discover flaws in > protocol standards and implementations.? One of our team members (who > I believe is on this list) found a flaw the FDDI standard.? 
I have a > memory of companies reworking their code and blasting new firmware > overnight in their hotel rooms. > > The point of this long note is that the state of the art of testing > Internet protocol implementation is weak.? It's not an exciting field, > QA people are not honored employees, and as more and more people > believe (often quite wrongly) that they can write code we are actually > moving backwards in some regards. > > In addition, we do not adequately consider monitoring, testing, and > repair in our work defining protocols. > > In 2003 I gave a long talk with a title that is now a bit > misleading:?? From Barnstorming to Boeing ? > Transforming the Internet Into a Lifeline Utility. > > (The slides are at > https://www.cavebear.com/archive/rw/Barnstorming-to-Boeing-slides.pdf > and the speaker notes at > https://www.cavebear.com/archive/rw/Barnstorming-to-Boeing.pdf ) > > (One of my suggestions was the imposition of legal, civil tort, > liability for network design, implementation, and operational errors - > using a negligence standard so that simple mistakes would not suffer > liability.? Wow, the groans from the audience were quite loud.) > > I had other suggestions as well - such as design rules and operational > practices that must be followed unless the person looking to deviate > could express a compelling, cogent, argument why deviation is > appropriate.? This is the norm in many engineering disciplines, but > not for software where we are largely still in the anything goes, wild > west.) > > By-the-way, I have over the years been working on ideas to advance our > testing/repair capabilities. > > One piece that we are missing is a database of network pathology. I am > thinking here of a database of symptoms that are tied to possible > causes and tests to distinguish among those causes. (Yes, I am taking > a cue from the practice of medicine.) Once we have such a database one > could build tools to do symptom-to-cause reasoning, including running > of diagnostic tests to work through the branches of the possible > causation tree.? To do this right one needs trusted test agents > disseminated throughout the network - the word "trusted" is important > because network tests can be intrusive, sharp, and dangerous, like a > surgeon's scalpel. (Imagine a world where surgeons were required to > use dull, but safe plastic butter knives rather than sharp scalpels.) > > Baseline records are important - and we do gather some of that, but we > always want more detail.? But the amount of data to be collected is > voluminous and is subject to concerns about how it could be used > competitively.? (This is why in our Interworking Labs test contracts > we prohibit the publishing of results to the public - we want to > encourage correction for the benefit of us all rather than creation of > competitive cudgels.) > > (One element that I've slowly been working on in my zero free time is > a precisely timed beacon and precisely timed listeners - all tightly > synchronized to GPS time.? The idea is for beacons to take > subscriptions from listeners and then to emit highly predictable > patterns of packets of various sizes and timings. I've been meaning to > corner some of my astrophysicist friends to adopt some of their > methods of using that kind of predictable behaviour, observed at a > distance, to evaluate what lies between the beacon's hither and the > listerner's yon.? [And yes, I did pick up some ideas from Van J's > pathchar and Bruce Mah's re-implementation as pchar.) 
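(The beacon/listener idea above can be prototyped crudely over UDP; the port, period, and packet format below are invented, and the wall-clock timestamps only yield meaningful one-way delays if both ends are disciplined by GPS or PTP, which is exactly the hard part being described.)

    import socket, struct, time

    PORT, PERIOD = 9999, 0.010          # invented: one probe every 10 ms

    def beacon(dest_ip):
        """Emit a predictable, sequenced, timestamped packet train."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq, next_send = 0, time.time()
        while True:
            s.sendto(struct.pack("!Id", seq, time.time()), (dest_ip, PORT))
            seq += 1
            next_send += PERIOD
            time.sleep(max(0.0, next_send - time.time()))

    def listener():
        """Log apparent one-way delay and loss, packet by packet."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        expected = 0
        while True:
            data, addr = s.recvfrom(64)
            seq, sent = struct.unpack("!Id", data)
            owd_ms = (time.time() - sent) * 1000.0
            print(f"{addr[0]} seq={seq} owd={owd_ms:.2f} ms lost={max(seq - expected, 0)}")
            expected = seq + 1

Because the emission pattern is known in advance, a listener can attribute any deviation in spacing, ordering, or loss to the path between beacon and listener rather than to the endpoints.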
> > I am also thinking that we need some legal and accounting rule changes > so that vendors are more able to share improvements and tests without > running afoul of restraint of trade laws or damaging their balance > sheets and that ever present, false fable of "shareholder value".) > > ??? ??? --karl-- > > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From dhc at dcrocker.net Thu Oct 3 18:28:24 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 04 Oct 2024 01:28:24 +0000 (UTC) Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> Message-ID: <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> On 10/3/2024 6:21 PM, Jack Haverty via Internet-history wrote: > But an Operator, or IMP programmer, could look back at the paper logs > and often discover an imminent problem, or see the events which led up > to a reported problem to be fixed. I heard a story that in the early days, NOC folk would see an increasing rate of errors on a link, contact the relevant phone company -- not all were AT&T -- and tell them that that line was about to fail.? The phone folk reportedly had no experience with a customer's offering such predictions and were initially dismissive.? I was told that that attitude changed quickly... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From b_a_denny at yahoo.com Thu Oct 3 18:39:26 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Fri, 4 Oct 2024 01:39:26 +0000 (UTC) Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> Message-ID: <1549678057.16041987.1728005966497@mail.yahoo.com> The minilogger (hope my memory is correct for the name) was invaluable to me for understanding? what might be going on in the packet radio network. It was basically packet dumping to a printer. Unfortunately it couldn't run all the time because of performance issues but at least it gave me clue when I tried replicate problems after a station crash or try to understand what was causing a problem. barbara On Thursday, October 3, 2024 at 06:28:54 PM PDT, Dave Crocker via Internet-history wrote: On 10/3/2024 6:21 PM, Jack Haverty via Internet-history wrote: > But an Operator, or IMP programmer, could look back at the paper logs > and often discover an imminent problem, or see the events which led up > to a reported problem to be fixed. 
I heard a story that in the early days, NOC folk would see an increasing rate of errors on a link, contact the relevant phone company -- not all were AT&T -- and tell them that that line was about to fail.? The phone folk reportedly had no experience with a customer's offering such predictions and were initially dismissive.? I was told that that attitude changed quickly... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Thu Oct 3 18:47:20 2024 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Oct 2024 21:47:20 -0400 Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> Message-ID: <675DF4FB-93FE-4A23-97BE-0D9FCD4C6475@comcast.net> I have heard the same story, but there was a bit more detail. As I remember it, the line from Santa Barbara to Menlo Park (that could be wrong) was flaky and it acted a peculiar way when it was about to go down. The NOC saw it acting that way, and called PacBell. Told them their line such and such from Santa Barbara to Menlo Park was about to go down. The conversation went like this: PacBell: You're in Santa Barbara? NOC: No. PacBell: You?re in Menlo Park? NOC: No. PacBell: Then where ARE you?!! (there may have been an expletive in there.) ;-) NOC: Cambridge, Massachusetts. PacBell: Yea, right!! and hung up. then the line went down. ;-) > On Oct 3, 2024, at 21:28, Dave Crocker via Internet-history wrote: > > On 10/3/2024 6:21 PM, Jack Haverty via Internet-history wrote: >> But an Operator, or IMP programmer, could look back at the paper logs and often discover an imminent problem, or see the events which led up to a reported problem to be fixed. > > I heard a story that in the early days, NOC folk would see an increasing rate of errors on a link, contact the relevant phone company -- not all were AT&T -- and tell them that that line was about to fail. The phone folk reportedly had no experience with a customer's offering such predictions and were initially dismissive. I was told that that attitude changed quickly... 
> > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > mast:@dcrocker at mastodon.social > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Thu Oct 3 18:51:38 2024 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 3 Oct 2024 18:51:38 -0700 Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> Message-ID: <8729643a-0b97-4c67-a0bf-7ba035da8ee1@3kitty.org> Yes, I heard the same story.? There was a significant advantage in having a computer at each end of some telecommunications circuit, watching every hiccup of that line and raising an alarm if things weren't normal. IIRC, there was a similar experience with Satnet, which spanned the Atlantic using a channel that ARPA leased from Intelsat on the Intelsat IV-A "bird".? Satnet had SIMPs attached to the satellite channel.?? SIMPs were "Satellite IMPs", using the same hardware and much of the software as in the Arpanet. At the time, most of the satellite channels were used for things like television feeds and telephone calls.? So impending circuit problems would likely impact users as just some static in the audio or video, easily ignored at least at first. Computers were much more critical of circuit problems, and the Arpanet/Satnet NOC would be informed.? Similarly, it took a while for Intelsat to believe that some little company in Massachusetts could be telling them about an actual imminent failure of their bird's system. Jack On 10/3/24 18:28, Dave Crocker wrote: > On 10/3/2024 6:21 PM, Jack Haverty via Internet-history wrote: >> But an Operator, or IMP programmer, could look back at the paper logs >> and often discover an imminent problem, or see the events which led >> up to a reported problem to be fixed. > > I heard a story that in the early days, NOC folk would see an > increasing rate of errors on a link, contact the relevant phone > company -- not all were AT&T -- and tell them that that line was about > to fail.? The phone folk reportedly had no experience with a > customer's offering such predictions and were initially dismissive.? I > was told that that attitude changed quickly... > > d/ > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From gregskinner0 at icloud.com Thu Oct 3 19:13:19 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Thu, 3 Oct 2024 19:13:19 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: On Oct 3, 2024, at 10:43?AM, Jack Haverty via Internet-history wrote: > > Hi Craig, > > Thanks for the history; it helped me fill in the gaps of what happened after I was directly involved. > > But... (there's always a but) > > Those RFCs are not part of the issue I was remembering. That issue was "Certification", which is distinct from "Specification". Specifications are documents which delineate what an implementation MUST, SHOULD, MAY, or MAY NOT do. Certification is a process whereby a specific implementation is tested, often by an independent third party, to see if it actually does what the Specifications dictate. > > Around the time that TCP/IP became a DoD Standard in the 1980s, NIST/NBS also created a Certification methodology. DoD altered its procurement regulations to require such Certifications for everything it purchased. I've never learned who specifically made either of those things happen. But someone did. Perhaps Vint remembers more? > > The RFCs you mention are a follow-on to the Specifications that Jon orchestrated. Bob Braden was also on the ICCB during that time, so it was probably natural for him to champion subsequent rounds of Specifications for NSF. > > But, AFAIK, no one continued the work that NIST/NBS had started, to further evolve Certification for the Internet. I also never heard that DoD's procurement regulations were changed to require compliance with additional RFCs. Maybe it happened, but I suspect they couldn't do that unless there was some well-defined way to Certify that a product met those Specifications. > > It's curious to me that such mechanisms have not been created for the Internet Industry. Other computing technologies did develop such mechanisms. For example, in the Database Industry where I worked in the 1990s, there were concepts like "Transactions", and testing procedures to see how a particular software/hardware combination actually worked in standard tests. For example, vendors touted their particular hardware and software products as Certified to achieve some number of TPS (Transactions Per Second). > > Similarly, even today there are lots of "Benchmarks" that are used today to evaluate computers and their component software and hardware. Magazines and websites compare products and show graphs indicating how their test results compare, so that customers can make informed purchase decisions based on independent test results. > > Most devices we can now buy contain hardware and software that enables them to interact on the Internet. But, other than raw speed, I've never seen any of such test results that even mention conformance with any RFC Specifications. > > Why not? 
> > IMHO, such testing and certification is more important in a networked environment than in a single computer. In network environments, there are at least two, and probably many more computers involved in anything a user does. Some of them are servers, some are clients, some are routers, modems, switches, etc. etc. All of these affect the users' experience, as well as affecting the network and the experience of others using it. > > The ongoing discussions about source quench, congestion, queue management, et al made we wonder. My home LAN has more than 50 "devices" attached to it, and contains a bunch of switches, modems, routers, cables, and other such stuff we all grew up with. > > How can I tell if they all implement ? Or if any of them do? > > Jack Haverty > I decided to apply your questions to the device I?m currently typing on (a MacBook Pro running macOS Sonoma), pretending that I didn?t know anything about any IETF standards that are supported. After a bit of googling, I found an Apple Support page detailing which TCP and UDP ports are used by Apple software products. [1] But that doesn?t tell the entire story. So I cheated a bit, taking into account that I know that Darwin is the OS that supports macOS, and via its Wikipedia page [2], found the Apple open source page. [3] From there, it is possible to visit several GitHub repos that have more details about which IETF standards are supported. [1] https://support.apple.com/en-us/103229 [2] https://en.wikipedia.org/wiki/Darwin_(operating_system) [3] https://opensource.apple.com You might be asking if someone should have to go to all of this trouble to find out which IETF specifications are supported on a MacBook Pro running macOS Sonoma. Let?s see what Bing Copilot has to say: ==== macOS Sonoma supports a wide range of IETF (Internet Engineering Task Force) specifications, particularly those related to networking, security, and internet protocols. Here are some key IETF specifications that are typically supported: 1. **IPv6**: The latest version of the Internet Protocol, which provides an identification and location system for computers on networks and routes traffic across the Internet. 2. **TLS (Transport Layer Security)**: Protocols for secure communication over a computer network. 3. **HTTP/2 and HTTP/3**: Versions of the Hypertext Transfer Protocol used for transmitting hypermedia documents, such as HTML. 4. **DNS (Domain Name System)**: Protocols for translating domain names into IP addresses. 5. **SMTP, IMAP, and POP3**: Protocols for email transmission and retrieval. These specifications ensure that macOS Sonoma can handle modern networking and security requirements efficiently. If you have specific IETF specifications in mind, feel free to ask! ? Source: Conversation with Copilot, 10/3/2024 (1) macOS Sonoma is compatible with these computers - Apple Support. https://support.apple.com/en-us/105113. (2) macOS Compatibility Checker: What macOS Can My Mac Run? - MacBook Journal. https://macbookjournal.com/macos-compatibility-checker-all-macos-updates/. (3) macOS Sonoma (macOS 14) Q&A - EveryMac.com. https://everymac.com/mac-answers/macos-14-sonoma-faq/macos-sonoma-macos-14-compatbility-list-system-requirements.html. (4) macOS Sonoma compatibility: The Macs that run macOS 14. https://www.macworld.com/article/1942421/macos-14-sonoma-compatibility-features.html. (5) macOS Sonoma System Requirements - iTechScreen. https://www.itechscreen.com/blogs/macos-sonoma-system-requirements/. 
==== I don?t know if this meets your criteria for ?findability", but it seems that the information is available for those willing and able to spend time looking for it. The other thing (which is a valid concern) is because much of it is open source, it is not ?supported?, per se, in the same sense the DoD Standard RFCs were. Depending upon how carefully people read the EULAs, they may understand the difference between what is and isn?t open source, and adjust their expectations. --gregbo From dhc at dcrocker.net Thu Oct 3 19:15:37 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 04 Oct 2024 02:15:37 +0000 (UTC) Subject: [ih] Yet another subject change: Testing (Was Re: Gateway Issue: Certification (was Re: booting linux on a 4004)) In-Reply-To: <675DF4FB-93FE-4A23-97BE-0D9FCD4C6475@comcast.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <2dc2882f-c23e-4a51-815a-039a0bf7a07c@dcrocker.net> <675DF4FB-93FE-4A23-97BE-0D9FCD4C6475@comcast.net> Message-ID: <2a6cb94f-3502-4902-99ad-9c3ec7b021ac@dcrocker.net> > The conversation went like this: > PacBell: You're in Santa Barbara? > NOC: No. > PacBell: You?re in Menlo Park? > NOC: No. > PacBell: Then where ARE you?!! (there may have been an expletive in there.) ? > NOC: Cambridge, Massachusetts. > PacBell: Yea, right!! > and hung up. > then the line went down. ? sorry, yes.? that was in the story I heard too. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From jack at 3kitty.org Thu Oct 3 21:44:03 2024 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 3 Oct 2024 21:44:03 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> Message-ID: <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> On 10/3/24 19:13, Greg Skinner wrote: > Here are some key IETF specifications that are typically supported: Bing says macOS implements a "wide range" of specifications are "typically" supported??? Does that mean they are or aren't implemented in the code?? Who says so - the programmer, the Marketing literature, or an independent trusted third party??? What document did they use as specifications for all those protocols and mechanisms?? What does "support" mean?? There are also hordes of ancillary RFCs defining methods for things like queue management, flow control, retransmission, etc.?? Which, if any, of them are implemented??? Did anybody test the code to see if it correctly follows the specs??? Who? Sorry, such documentation tells me very little. Jack -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From gregskinner0 at icloud.com Fri Oct 4 00:18:51 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Fri, 4 Oct 2024 00:18:51 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> Message-ID: <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> On Oct 3, 2024, at 9:44?PM, Jack Haverty wrote: > > On 10/3/24 19:13, Greg Skinner wrote: >> Here are some key IETF specifications that are typically supported: > Bing says macOS implements a "wide range" of specifications are "typically" supported? Does that mean they are or aren't implemented in the code? Who says so - the programmer, the Marketing literature, or an independent trusted third party? What document did they use as specifications for all those protocols and mechanisms? What does "support" mean? There are also hordes of ancillary RFCs defining methods for things like queue management, flow control, retransmission, etc. Which, if any, of them are implemented? Did anybody test the code to see if it correctly follows the specs? Who? > > Sorry, such documentation tells me very little. > > Jack > OK, so I decided to make my question to Bing Copilot a bit more specific, just to see what came up. ==== Is there an independent trusted third party that I can use to verify that BGP implementations conform to IETF BGP standards? Yes, there are independent third parties that can verify BGP implementations for conformance to IETF standards. One notable example is the **University of New Hampshire InterOperability Laboratory (UNH-IOL)**. They offer testing services for various networking protocols, including BGP, to ensure compliance with industry standards?. Would you like more details on how these verification processes work or information on a specific aspect of BGP conformance? Source: Conversation with Copilot, 10/3/2024 (1) RFC 4271 - A Border Gateway Protocol 4 (BGP-4) - IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc4271. (2) Updated BGP Operations and Security - Internet Engineering Task Force. https://www.ietf.org/archive/id/draft-fiebig-grow-bgpopsecupd-00.html. (3) RFC 7353 - Security Requirements for BGP Path Validation - IETF Datatracker. https://datatracker.ietf.org/doc/rfc7353/. (4) undefined. https://datatracker.ietf.org/drafts/current/. (5) undefined. https://trustee.ietf.org/license-info%29. (6) RFC 9552: Distribution of Link-State and Traffic Engineering .... https://www.ietf.org/rfc/rfc9552.html. (7) RFC 7454 - BGP Operations and Security - IETF Datatracker. https://datatracker.ietf.org/doc/rfc7454/. ==== It is able to give the UNH Interoperability Lab (which Barbara mentioned earlier) as a resource. (I have heard of them, but have never interacted with them, as far as I can remember.) Visiting a few of their pages reveals that they (claim to) do testing of several IETF protocols. 
[1] I?m not trying to claim that they should be held in the same regard as the NIST, but I haven?t seen anything yet that would cause me to distrust their claims. (But I am reading this as someone who has a few decades of experience with the IETF.) I don?t know how someone who, for example, is fairly new to IETF protocols, but very knowledgeable about IEEE 802 protocols would feel. --gregbo [1] https://www.iol.unh.edu/solutions/test-tools/intact From b_a_denny at yahoo.com Fri Oct 4 10:40:04 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Fri, 4 Oct 2024 17:40:04 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> Message-ID: <806618676.16280720.1728063604562@mail.yahoo.com> Since you mentioned NIST. Just a FYI pulled from the UNH site. 'We offer ISO/IEC 17025 accredited testing designed specifically for the USGv6 Test Program. USGv6 is a test program designated by NIST that provides a proof of compliance to IPv6 specifications outlined in current industry standards for common network products. It is meant as a strategic planning guide for USG (United States Government) IT acquisitions to help ensure the completeness, correctness, interoperability and security of early IPv6 product offerings so as to protect early USG investments in the technology." Also more on USGv6 https://www.nist.gov/programs-projects/usgv6-program/usgv6-revision-1 It is rather interesting (too mild a word?) about their potential IETF involvement. As an example from the site "To remove specifications for IPv6 capabilities included in the first version of the profile, but that have since failed to achieve significant support in commercial products and network deployments." barbara On Friday, October 4, 2024 at 12:19:16 AM PDT, Greg Skinner via Internet-history wrote: On Oct 3, 2024, at 9:44?PM, Jack Haverty wrote: > > On 10/3/24 19:13, Greg Skinner wrote: >> Here are some key IETF specifications that are typically supported: > Bing says macOS implements a "wide range" of specifications are "typically" supported?? Does that mean they are or aren't implemented in the code?? Who says so - the programmer, the Marketing literature, or an independent trusted third party?? What document did they use as specifications for all those protocols and mechanisms?? What does "support" mean?? There are also hordes of ancillary RFCs defining methods for things like queue management, flow control, retransmission, etc.? Which, if any, of them are implemented?? Did anybody test the code to see if it correctly follows the specs?? Who?? > > Sorry, such documentation tells me very little. > > Jack > OK, so I decided to make my question to Bing Copilot a bit more specific, just to see what came up. ==== Is there an independent trusted third party that I can use to verify that BGP implementations conform to IETF BGP standards? 
Yes, there are independent third parties that can verify BGP implementations for conformance to IETF standards. One notable example is the **University of New Hampshire InterOperability Laboratory (UNH-IOL)**. They offer testing services for various networking protocols, including BGP, to ensure compliance with industry standards?. Would you like more details on how these verification processes work or information on a specific aspect of BGP conformance? Source: Conversation with Copilot, 10/3/2024 (1) RFC 4271 - A Border Gateway Protocol 4 (BGP-4) - IETF Datatracker. https://datatracker.ietf.org/doc/html/rfc4271. (2) Updated BGP Operations and Security - Internet Engineering Task Force. https://www.ietf.org/archive/id/draft-fiebig-grow-bgpopsecupd-00.html. (3) RFC 7353 - Security Requirements for BGP Path Validation - IETF Datatracker. https://datatracker.ietf.org/doc/rfc7353/. (4) undefined. https://datatracker.ietf.org/drafts/current/. (5) undefined. https://trustee.ietf.org/license-info%29. (6) RFC 9552: Distribution of Link-State and Traffic Engineering .... https://www.ietf.org/rfc/rfc9552.html. (7) RFC 7454 - BGP Operations and Security - IETF Datatracker. https://datatracker.ietf.org/doc/rfc7454/. ==== It is able to give the UNH Interoperability Lab (which Barbara mentioned earlier) as a resource.? (I have heard of them, but have never interacted with them, as far as I can remember.)? Visiting a few of their pages reveals that they (claim to) do testing of several IETF protocols. [1] I?m not trying to claim that they should be held in the same regard as the NIST, but I haven?t seen anything yet that would cause me to distrust their claims.? (But I am reading this as someone who has a few decades of experience with the IETF.)? I don?t know how someone who, for example, is fairly new to IETF protocols, but very knowledgeable about IEEE 802 protocols would feel. --gregbo [1] https://www.iol.unh.edu/solutions/test-tools/intact From jack at 3kitty.org Fri Oct 4 13:37:58 2024 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 4 Oct 2024 13:37:58 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <806618676.16280720.1728063604562@mail.yahoo.com> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> Message-ID: <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> Thanks, Greg and Barbara.?? I hadn't heard of UNH's lab or NIST's test suites.? It's good to see that somebody's doing something. I think a real measure of such testing and certification is the extent to which organizations, or even individuals, are using such tests and certificates to influence their purchasing decisions.??? I don't recall ever seeing a NIST or UNH logo or other indication that a product passed their tests or that an ISP I might sign up with provides some specific certified "capability".?? But of course I wasn't really looking for such labelling either.? I wonder if Copilot has any info on that...? NIST's states that certain "IPV6 capabilities ... 
have since failed to achieve significant support ..." ? That statement implies that NIST has data about what "capabilities" are in fact present in products and/or actually used in operational networks. ? I wonder what they know, how they learned it, and whether or not they certified it by testing. I haven't used Copilot, but I've been playing a bit with GPT4All running on my desktop.? So far, I'm not impressed much.? I fed it a large folder of "network stuff" I've collected over the decades - emails, papers, reports, documents, and anything else I've gleaned from the email forums or the Web. ? I then asked it when was the first gateway on the Internet.? It reported back that the first gateway on the Internet was the Arpanet circa 1970. ? Not quite what I expected. Jack On 10/4/24 10:40, Barbara Denny via Internet-history wrote: > Since you mentioned NIST. Just a FYI pulled from the UNH site. > 'We offer ISO/IEC 17025 accredited testing designed specifically for the USGv6 Test Program. USGv6 is a test program designated by NIST that provides a proof of compliance to IPv6 specifications outlined in current industry standards for common network products. It is meant as a strategic planning guide for USG (United States Government) IT acquisitions to help ensure the completeness, correctness, interoperability and security of early IPv6 product offerings so as to protect early USG investments in the technology." > > Also more on USGv6 > https://www.nist.gov/programs-projects/usgv6-program/usgv6-revision-1 > > It is rather interesting (too mild a word?) about their potential IETF involvement. > As an example from the site > "To remove specifications for IPv6 capabilities included in the first version of the profile, but that have since failed to achieve significant support in commercial products and network deployments." > barbara > On Friday, October 4, 2024 at 12:19:16 AM PDT, Greg Skinner via Internet-history wrote: > > On Oct 3, 2024, at 9:44?PM, Jack Haverty wrote: >> On 10/3/24 19:13, Greg Skinner wrote: >>> Here are some key IETF specifications that are typically supported: >> Bing says macOS implements a "wide range" of specifications are "typically" supported?? Does that mean they are or aren't implemented in the code?? Who says so - the programmer, the Marketing literature, or an independent trusted third party?? What document did they use as specifications for all those protocols and mechanisms?? What does "support" mean?? There are also hordes of ancillary RFCs defining methods for things like queue management, flow control, retransmission, etc.? Which, if any, of them are implemented?? Did anybody test the code to see if it correctly follows the specs?? Who? >> >> Sorry, such documentation tells me very little. >> >> Jack >> > OK, so I decided to make my question to Bing Copilot a bit more specific, just to see what came up. > > ==== > > Is there an independent trusted third party that I can use to verify that BGP implementations conform to IETF BGP standards? > > Yes, there are independent third parties that can verify BGP implementations for conformance to IETF standards. One notable example is the **University of New Hampshire InterOperability Laboratory (UNH-IOL)**. They offer testing services for various networking protocols, including BGP, to ensure compliance with industry standards?. > > Would you like more details on how these verification processes work or information on a specific aspect of BGP conformance? 
> > Source: Conversation with Copilot, 10/3/2024 > (1) RFC 4271 - A Border Gateway Protocol 4 (BGP-4) - IETF Datatracker.https://datatracker.ietf.org/doc/html/rfc4271. > (2) Updated BGP Operations and Security - Internet Engineering Task Force.https://www.ietf.org/archive/id/draft-fiebig-grow-bgpopsecupd-00.html. > (3) RFC 7353 - Security Requirements for BGP Path Validation - IETF Datatracker.https://datatracker.ietf.org/doc/rfc7353/. > (4) undefined.https://datatracker.ietf.org/drafts/current/. > (5) undefined.https://trustee.ietf.org/license-info%29. > (6) RFC 9552: Distribution of Link-State and Traffic Engineering ....https://www.ietf.org/rfc/rfc9552.html. > (7) RFC 7454 - BGP Operations and Security - IETF Datatracker.https://datatracker.ietf.org/doc/rfc7454/. > > ==== > > It is able to give the UNH Interoperability Lab (which Barbara mentioned earlier) as a resource.? (I have heard of them, but have never interacted with them, as far as I can remember.)? Visiting a few of their pages reveals that they (claim to) do testing of several IETF protocols. [1] > > I?m not trying to claim that they should be held in the same regard as the NIST, but I haven?t seen anything yet that would cause me to distrust their claims.? (But I am reading this as someone who has a few decades of experience with the IETF.)? I don?t know how someone who, for example, is fairly new to IETF protocols, but very knowledgeable about IEEE 802 protocols would feel. > > --gregbo > > [1]https://www.iol.unh.edu/solutions/test-tools/intact > > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From gregskinner0 at icloud.com Sat Oct 5 09:03:43 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Sat, 5 Oct 2024 09:03:43 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> Message-ID: <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > wrote: > > Forwarded for Barbara > > ==== > > Having trouble emailing again so i did some trimming on the original message.... > > Putting my packet radio hat back on, a source quench message could help disambiguate whether loss in the network is due to congestion or something else (like in wireless, loss due to harsh environments, jamming, mobility). I also think it is not obvious what you should do when you receive a source quench, but to me trying to understand this is just part of trying to see if we can make things work better. How about what you could do when you don't receive a source quench but have experienced loss? > > How is network coding coming along these days? > > barbara Any serious attempts to reinstitute ICMP source quench would have to go through the IETF RFC process again because it?s been deprecated for some time. [1] Also, many sites block ICMP outright (even though they?ve been warned not to do this). 
[2] --gregbo [1] https://datatracker.ietf.org/doc/rfc6633/ [2] https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf From vint at google.com Sat Oct 5 09:50:13 2024 From: vint at google.com (Vint Cerf) Date: Sat, 5 Oct 2024 12:50:13 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP messages. v On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < internet-history at elists.isoc.org> wrote: > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > internet-history at elists.isoc.org > > wrote: > > > > Forwarded for Barbara > > > > ==== > > > > Having trouble emailing again so i did some trimming on the original > message.... > > > > Putting my packet radio hat back on, a source quench message could help > disambiguate whether loss in the network is due to congestion or something > else (like in wireless, loss due to harsh environments, jamming, > mobility). I also think it is not obvious what you should do when you > receive a source quench, but to me trying to understand this is just part > of trying to see if we can make things work better. How about what you > could do when you don't receive a source quench but have experienced loss? > > > > How is network coding coming along these days? > > > > barbara > > Any serious attempts to reinstitute ICMP source quench would have to go > through the IETF RFC process again because it?s been deprecated for some > time. [1] Also, many sites block ICMP outright (even though they?ve been > warned not to do this). [2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jeanjour at comcast.net Sat Oct 5 10:17:47 2024 From: jeanjour at comcast.net (John Day) Date: Sat, 5 Oct 2024 13:17:47 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: Vint is right. But also, SQ isn?t sufficient without when to generated it (Jain said originally when the average queue length is greater than or equal to 1) and what to do when it is received (how to back off, there is also data that shows degrees of response are useful and which flows to affect is useful, such as jitter-insensitive flows, but only more sensitive ones if it gets worse, etc.). That can be done and coordinated with something like DiffServ. SQ was avoided because it generates another packet when the problem is too many already. However, if the goal is congestion avoidance which implies early notification (as Jain proposed), then it might be the case that a SQ packet would be ahead of spreading congestion. Of course one problem is that it is often the case that the reverse path is not the same as the forward path. 
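As a concrete illustration of the two pieces John says a quench scheme needs (a rule for when the router generates the signal, and a rule for what the sender does on receiving it), here is a minimal discrete-time sketch in Python. The trigger is the rule attributed to Jain above, namely signal when the average queue length reaches one; the halving back-off, the additive recovery, the service rate, and the smoothing constant are all assumptions made for the toy, not anything specified by ICMP Source Quench or ECN.

import random

# Toy model of the two missing pieces: (a) when the router generates a
# quench/ECN-style signal, and (b) how the sender responds.  The trigger
# (average queue length >= 1) is the rule attributed to Jain above; the
# back-off and recovery policies and all constants are illustrative
# assumptions only.

class Router:
    def __init__(self):
        self.queue = 0          # packets currently queued
        self.avg_queue = 0.0    # smoothed average queue length

    def tick(self, arrivals, service_rate=2):
        self.queue = max(0, self.queue + arrivals - service_rate)
        # exponentially weighted average of the instantaneous queue length
        self.avg_queue = 0.9 * self.avg_queue + 0.1 * self.queue
        return self.avg_queue >= 1.0   # True => send a quench signal

class Sender:
    def __init__(self):
        self.window = 8         # packets offered per tick

    def on_quench(self):
        self.window = max(1, self.window // 2)   # assumed multiplicative back-off

    def on_quiet_period(self):
        self.window += 1        # assumed additive recovery

router, sender = Router(), Sender()
for t in range(30):
    offered = random.randint(0, sender.window)
    quench = router.tick(offered)
    if quench:
        sender.on_quench()
    else:
        sender.on_quiet_period()
    print(f"t={t:2d} offered={offered:2d} queue={router.queue:2d} "
          f"avg={router.avg_queue:4.2f} window={sender.window:2d}")

In this toy the window ends up oscillating around whatever offered load keeps the average queue near one, which is the knee-of-the-curve operating point Jain aimed for rather than the cliff.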
> On Oct 5, 2024, at 12:03, Greg Skinner via Internet-history wrote: > > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > wrote: >> >> Forwarded for Barbara >> >> ==== >> >> Having trouble emailing again so i did some trimming on the original message.... >> >> Putting my packet radio hat back on, a source quench message could help disambiguate whether loss in the network is due to congestion or something else (like in wireless, loss due to harsh environments, jamming, mobility). I also think it is not obvious what you should do when you receive a source quench, but to me trying to understand this is just part of trying to see if we can make things work better. How about what you could do when you don't receive a source quench but have experienced loss? >> >> How is network coding coming along these days? >> >> barbara > > Any serious attempts to reinstitute ICMP source quench would have to go through the IETF RFC process again because it?s been deprecated for some time. [1] Also, many sites block ICMP outright (even though they?ve been warned not to do this). [2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Sat Oct 5 10:30:09 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sat, 5 Oct 2024 17:30:09 +0000 (UTC) Subject: [ih] Fw: bufferbloat and modern congestion control (was 4004) In-Reply-To: <50017755.16554900.1728149243132@mail.yahoo.com> References: <50017755.16554900.1728149243132.ref@mail.yahoo.com> <50017755.16554900.1728149243132@mail.yahoo.com> Message-ID: <620505255.16547971.1728149409979@mail.yahoo.com> Yes there are more ICMP messages. Redirect popped into my head after my initial message about the Reconstitution Protocol gateway and Source Quench. It was like oops maybe I a remembered wrong. I haven't had a chance to check out the RP document yet. :-( Doesn't ping use icmp request and reply? I have been wondering if a dislike for ping started this we should just drop ICMP messages. barbara On Saturday, October 5, 2024 at 09:50:38 AM PDT, Vint Cerf via Internet-history wrote: isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP messages. v On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < internet-history at elists.isoc.org> wrote: > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > internet-history at elists.isoc.org > > wrote: > > > > Forwarded for Barbara > > > > ==== > > > > Having trouble emailing again so i did some trimming on the original > message.... > > > > Putting my packet radio hat back on, a source quench message could help > disambiguate whether loss in the network is due to congestion or something > else (like in wireless, loss due to harsh environments, jamming, > mobility).? I also think it is not obvious what you should do when you > receive a source quench, but to me trying to understand this is just part > of trying to see if we can make things work better.? How about what you > could do when you don't receive a source quench but have experienced loss? > > > > How is network coding coming along these days? > > > > barbara > > Any serious attempts to reinstitute ICMP source quench would have to go > through the IETF RFC process again because it?s been deprecated for some > time. [1]? 
Also, many sites block ICMP outright (even though they?ve been > warned not to do this). [2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From b_a_denny at yahoo.com Sat Oct 5 10:33:43 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sat, 5 Oct 2024 17:33:43 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: <1088205864.16560089.1728149623908@mail.yahoo.com> Yes there are more ICMP messages. Redirect popped into my head after my initial message about the Reconstitution Protocol gateway and Source Quench. It was like oops maybe remembered wrong. I haven't had a chance to check out the RP document yet. :-( Doesn't ping use icmp request and reply? In the past I wondered if a dislike for ping started this we should just drop ICMP messages. barbara On Saturday, October 5, 2024 at 09:50:38 AM PDT, Vint Cerf via Internet-history wrote: isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP messages. v On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < internet-history at elists.isoc.org> wrote: > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > internet-history at elists.isoc.org > > wrote: > > > > Forwarded for Barbara > > > > ==== > > > > Having trouble emailing again so i did some trimming on the original > message.... > > > > Putting my packet radio hat back on, a source quench message could help > disambiguate whether loss in the network is due to congestion or something > else (like in wireless, loss due to harsh environments, jamming, > mobility).? I also think it is not obvious what you should do when you > receive a source quench, but to me trying to understand this is just part > of trying to see if we can make things work better.? How about what you > could do when you don't receive a source quench but have experienced loss? > > > > How is network coding coming along these days? > > > > barbara > > Any serious attempts to reinstitute ICMP source quench would have to go > through the IETF RFC process again because it?s been deprecated for some > time. [1]? Also, many sites block ICMP outright (even though they?ve been > warned not to do this). 
[2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From b_a_denny at yahoo.com Sat Oct 5 10:38:31 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sat, 5 Oct 2024 17:38:31 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <1088205864.16560089.1728149623908@mail.yahoo.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <1088205864.16560089.1728149623908@mail.yahoo.com> Message-ID: <473060439.16548561.1728149911120@mail.yahoo.com> Sorry about the two messages.? I just thought I was in the black hole again and decided might as well do some minor editing barbara On Saturday, October 5, 2024 at 10:34:00 AM PDT, Barbara Denny via Internet-history wrote: Yes there are more ICMP messages. Redirect popped into my head after my initial message about the Reconstitution Protocol gateway and Source Quench. It was like oops maybe remembered wrong. I haven't had a chance to check out the RP document yet. :-( Doesn't ping use icmp request and reply? In the past I wondered if a dislike for ping started this we should just drop ICMP messages. barbara ? ? On Saturday, October 5, 2024 at 09:50:38 AM PDT, Vint Cerf via Internet-history wrote:? isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP messages. v On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < internet-history at elists.isoc.org> wrote: > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > internet-history at elists.isoc.org > > wrote: > > > > Forwarded for Barbara > > > > ==== > > > > Having trouble emailing again so i did some trimming on the original > message.... > > > > Putting my packet radio hat back on, a source quench message could help > disambiguate whether loss in the network is due to congestion or something > else (like in wireless, loss due to harsh environments, jamming, > mobility).? I also think it is not obvious what you should do when you > receive a source quench, but to me trying to understand this is just part > of trying to see if we can make things work better.? How about what you > could do when you don't receive a source quench but have experienced loss? > > > > How is network coding coming along these days? > > > > barbara > > Any serious attempts to reinstitute ICMP source quench would have to go > through the IETF RFC process again because it?s been deprecated for some > time. [1]? Also, many sites block ICMP outright (even though they?ve been > warned not to do this). [2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice ? 
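On Barbara's question: yes, ping is ICMP Echo Request (type 8) going out and Echo Reply (type 0) coming back. Below is a minimal sketch, in Python, of the ICMP message ping builds; the identifier, sequence number, and payload are arbitrary, and actually putting it on the wire needs a raw socket (root or CAP_NET_RAW), which is deliberately left out here.

import struct

# Build the 8-byte ICMP Echo Request header plus payload that ping sends.
# Type 8 is Echo Request; the reply that comes back is type 0.

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

ICMP_ECHO_REQUEST = 8

def build_echo_request(ident=0x1234, seq=1, payload=b"internet-history"):
    # header fields: type, code, checksum, identifier, sequence number
    header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, csum, ident, seq) + payload

print(build_echo_request().hex())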
From jeanjour at comcast.net Sat Oct 5 10:46:58 2024 From: jeanjour at comcast.net (John Day) Date: Sat, 5 Oct 2024 13:46:58 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004)a In-Reply-To: <620505255.16547971.1728149409979@mail.yahoo.com> References: <50017755.16554900.1728149243132.ref@mail.yahoo.com> <50017755.16554900.1728149243132@mail.yahoo.com> <620505255.16547971.1728149409979@mail.yahoo.com> Message-ID: Yes, there are lots of other ICMP messages. Ping, destination not available, etc. etc. I still need to go through that Reconstitution Protocol, but from what I have read, someone didn?t understand something. Either it has nothing to do with network partitions (one can?t tell if it is partition or not until it is over, maybe those hosts were all down), if that wasn?t it, then it is just internet layer routing (BGP, inter-domain) instead of network layer routing (IS-IS/OSPF, intra-domain). > On Oct 5, 2024, at 13:30, Barbara Denny via Internet-history wrote: > > > > Yes there are more ICMP messages. Redirect popped into my head after my initial message about the Reconstitution Protocol gateway and Source Quench. It was like oops maybe I a remembered wrong. I haven't had a chance to check out the RP document yet. :-( > Doesn't ping use icmp request and reply? I have been wondering if a dislike for ping started this we should just drop ICMP messages. > barbara > > On Saturday, October 5, 2024 at 09:50:38 AM PDT, Vint Cerf via Internet-history wrote: > > isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP > messages. > > v > > > On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < > internet-history at elists.isoc.org> wrote: > >> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >> internet-history at elists.isoc.org > >> wrote: >>> >>> Forwarded for Barbara >>> >>> ==== >>> >>> Having trouble emailing again so i did some trimming on the original >> message.... >>> >>> Putting my packet radio hat back on, a source quench message could help >> disambiguate whether loss in the network is due to congestion or something >> else (like in wireless, loss due to harsh environments, jamming, >> mobility). I also think it is not obvious what you should do when you >> receive a source quench, but to me trying to understand this is just part >> of trying to see if we can make things work better. How about what you >> could do when you don't receive a source quench but have experienced loss? >>> >>> How is network coding coming along these days? >>> >>> barbara >> >> Any serious attempts to reinstitute ICMP source quench would have to go >> through the IETF RFC process again because it?s been deprecated for some >> time. [1] Also, many sites block ICMP outright (even though they?ve been >> warned not to do this). 
[2] >> >> --gregbo >> >> [1] https://datatracker.ietf.org/doc/rfc6633/ >> [2] >> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From craig at tereschau.net Sat Oct 5 11:26:08 2024 From: craig at tereschau.net (Craig Partridge) Date: Sat, 5 Oct 2024 12:26:08 -0600 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: All sorts of goodies: ICMP Echo (what used to power Ping until people decided they didn't like folks probing) ICMP Unreachable (port or host) ICMP Problem Param (diagnostic) many more. On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < internet-history at elists.isoc.org> wrote: > isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP > messages. > > v > > > On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > > internet-history at elists.isoc.org internet-history at elists.isoc.org>> > > wrote: > > > > > > Forwarded for Barbara > > > > > > ==== > > > > > > Having trouble emailing again so i did some trimming on the original > > message.... > > > > > > Putting my packet radio hat back on, a source quench message could help > > disambiguate whether loss in the network is due to congestion or > something > > else (like in wireless, loss due to harsh environments, jamming, > > mobility). I also think it is not obvious what you should do when you > > receive a source quench, but to me trying to understand this is just part > > of trying to see if we can make things work better. How about what you > > could do when you don't receive a source quench but have experienced > loss? > > > > > > How is network coding coming along these days? > > > > > > barbara > > > > Any serious attempts to reinstitute ICMP source quench would have to go > > through the IETF RFC process again because it?s been deprecated for some > > time. [1] Also, many sites block ICMP outright (even though they?ve been > > warned not to do this). [2] > > > > --gregbo > > > > [1] https://datatracker.ietf.org/doc/rfc6633/ > > [2] > > > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. 
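For reference, a few of the RFC 792 "goodies" Craig lists, keyed by their type numbers, together with a minimal decoder for the fixed part of an ICMP header. The type numbers come from RFC 792; everything else is only an illustrative sketch.

import struct

# Selected ICMP message types from RFC 792.
ICMP_TYPES = {
    0: "Echo Reply",
    3: "Destination Unreachable",   # codes distinguish net/host/port, frag needed, ...
    4: "Source Quench (deprecated by RFC 6633)",
    5: "Redirect",
    8: "Echo Request",
    11: "Time Exceeded",
    12: "Parameter Problem",
    13: "Timestamp",
    14: "Timestamp Reply",
}

def decode_icmp(packet: bytes) -> str:
    """Decode type, code, and checksum from the first 4 bytes of an ICMP message."""
    icmp_type, code, checksum = struct.unpack("!BBH", packet[:4])
    name = ICMP_TYPES.get(icmp_type, f"type {icmp_type}")
    return f"{name} (code {code}, checksum 0x{checksum:04x})"

# Example: header of a Destination Unreachable, code 4 ("fragmentation
# needed and DF set"), the message path MTU discovery depends on.
print(decode_icmp(bytes([3, 4, 0x12, 0x34])))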
From b_a_denny at yahoo.com Sat Oct 5 11:28:49 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sat, 5 Oct 2024 18:28:49 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004)a In-Reply-To: References: <50017755.16554900.1728149243132.ref@mail.yahoo.com> <50017755.16554900.1728149243132@mail.yahoo.com> <620505255.16547971.1728149409979@mail.yahoo.com> Message-ID: <1922259201.2080481.1728152929927@mail.yahoo.com> I should check the ICMP RFC? but? that may be where it says you aren't supposed to send an error in response to an error.? ?I also think ICMP messages were considered more in the error camp (Another example, don't fragment flag is set in a packet so an ICMP destination unreachable should be sent when the packet requires fragmentation). So no using source quench to get rid of pings that are bothering you :-). Btw CMU has some info about :-).?? barbara On Saturday, October 5, 2024 at 10:47:22 AM PDT, John Day wrote: Yes, there are lots of other ICMP messages. Ping, destination not available, etc. etc. I still need to go through that Reconstitution Protocol, but from what I have read, someone didn?t understand something. Either it has nothing to do with network partitions (one can?t tell if it is partition or not until it is over, maybe those hosts were all down), if that wasn?t it, then it is just internet layer routing (BGP, inter-domain) instead of network layer routing (IS-IS/OSPF, intra-domain). > On Oct 5, 2024, at 13:30, Barbara Denny via Internet-history wrote: > > > >? Yes there are more ICMP messages. Redirect popped into my head after my initial message about the Reconstitution Protocol gateway and Source Quench. It was like oops maybe I a remembered wrong. I haven't had a chance to check out the RP document yet. :-( > Doesn't ping use icmp request and reply? I have been wondering if a dislike for ping started this we should just drop ICMP messages. > barbara > >? ? On Saturday, October 5, 2024 at 09:50:38 AM PDT, Vint Cerf via Internet-history wrote:? > > isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP > messages. > > v > > > On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < > internet-history at elists.isoc.org> wrote: > >> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >> internet-history at elists.isoc.org > >> wrote: >>> >>> Forwarded for Barbara >>> >>> ==== >>> >>> Having trouble emailing again so i did some trimming on the original >> message.... >>> >>> Putting my packet radio hat back on, a source quench message could help >> disambiguate whether loss in the network is due to congestion or something >> else (like in wireless, loss due to harsh environments, jamming, >> mobility).? I also think it is not obvious what you should do when you >> receive a source quench, but to me trying to understand this is just part >> of trying to see if we can make things work better.? How about what you >> could do when you don't receive a source quench but have experienced loss? >>> >>> How is network coding coming along these days? >>> >>> barbara >> >> Any serious attempts to reinstitute ICMP source quench would have to go >> through the IETF RFC process again because it?s been deprecated for some >> time. [1]? Also, many sites block ICMP outright (even though they?ve been >> warned not to do this). 
[2] >> >> --gregbo >> >> [1] https://datatracker.ietf.org/doc/rfc6633/ >> [2] >> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Sat Oct 5 12:37:35 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 6 Oct 2024 08:37:35 +1300 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: On 06-Oct-24 05:03, Greg Skinner via Internet-history wrote: > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > wrote: >> >> Forwarded for Barbara ... >> How is network coding coming along these days? I don't believe it has seen much deployment. A colleague of mine has worked on its applicability to mitigating problems with satellite-based connectivity for remote Pacific islands. [3] > Any serious attempts to reinstitute ICMP source quench would have to go through the IETF RFC process again because it?s been deprecated for some time. [1] Also, many sites block ICMP outright (even though they?ve been warned not to do this). [2] The main operational problem this causes is that it breaks path MTU discovery, which damages performance. Applies to ICMPv6 too, of course. However, ICMP is viewed as a security threat, including as a DOS vector, so blocking it in firewalls is pretty common. There are recommendations in [4] and [5] for ICMPv6. Brian > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf [3] U. Speidel, "Improving goodput on shared satellite links with coded tunnels," 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 2021, pp. 1-6, doi: 10.1109/GLOBECOM46510.2021.9686021. [4] https://www.rfc-editor.org/info/rfc4890 [5] https://www.rfc-editor.org/info/rfc9099 From als at thangorodrim.ch Sat Oct 5 12:59:52 2024 From: als at thangorodrim.ch (Alexander Schreiber) Date: Sat, 5 Oct 2024 21:59:52 +0200 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: On Sat, Oct 05, 2024 at 12:50:13PM -0400, Vint Cerf via Internet-history wrote: > isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP > messages. Yes. A lot more, after all, it's called Internet _Control_ Messages Protocol for a reason. One of the more obvious signs of "some clown along the packet path drops all ICMP" is path MTU being broken, because the ICMP Fragmentation Needed packets never make it back to you. It also (obviously) breaks ping, which is annoying, but not as much as PMTU discovery breakage. And yes, there are still plenty of people on links with "MTU < Ethernet packet size" due to e.g. VPNs and such. 
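A small Linux-specific sketch of the machinery Alex is describing: with strict path MTU discovery the kernel sets DF on outgoing packets and shrinks its cached path MTU when ICMP Fragmentation Needed (or the ICMPv6 Packet Too Big equivalent) comes back; if those messages are filtered, the cached value never shrinks and oversized packets just vanish. The socket option constants below are the Linux values, used as fallbacks in case a given Python build does not export them, and the address and port are placeholders.

import socket

# Ask the kernel what it currently believes the path MTU toward a host is.
# If Fragmentation Needed never arrives, this value stays at the route MTU
# and large DF packets disappear into the black hole described above.

IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux value
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)     # always set DF
IP_MTU = getattr(socket, "IP_MTU", 14)                    # Linux value

def cached_path_mtu(host: str, port: int = 33434) -> int:
    """Report the kernel's current idea of the path MTU toward host (Linux only)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
        s.connect((host, port))   # UDP connect sends nothing; it just picks a route
        return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    finally:
        s.close()

if __name__ == "__main__":
    print(cached_path_mtu("192.0.2.1"))   # TEST-NET address, purely illustrative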
The Linkedin article in the previous mail has a good summary of the things blocking ICMP breaks. We've made fun of [censored] blocking ICMP "for security reasons" already back in the 1990s when I first got on the Internet. I'm sure those on this list older than me made fun of such people long before. I'm disappointed, but not surprised, that there still seem to be people who think "ICMP evil, only for h4x0rs, drop all". After all, there is the evil bit (RFC3514) for easily blocking evil traffic. ;-) On the other hand, networks blocking ICMP also helpfully announce to the world "The [censored] in charge of our network have absolutely no idea what they are doing and should not be in charge of anything more complex than a simple broom". And one (of many) ways to rile up competent networking folks is to ask them (ideally looking all innocent and sincere if you can pull that off) "So, blocking ICMP is just basic good network security policy, right?" - be prepared to duck & run, though ;-) Kind regards, Alex. -- "Opportunity is missed by most people because it is dressed in overalls and looks like work." -- Thomas A. Edison From gregskinner0 at icloud.com Sat Oct 5 15:16:03 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Sat, 5 Oct 2024 15:16:03 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> Message-ID: <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> Unfortunately, I only have time for a quick response. Also, unfortunately, neither the ietf nor end2end-interest archives go back far enough to get more of an idea of what VJ, Mike Karels, etc. considered in their approach to congestion avoidance and control. There was some discussion of it on the tcp-ip list, which can be accessed via Google Groups. [1] [2] I could go into more detail, but I don?t think they intended their approach to somehow set a paradigm for how congestion avoidance and control would be addressed on a (modern) Internet scale. Something to consider here is with the limited resources they had, they needed (and were able) to come up with something that worked on networks based on the ARPAnet IMP technology of that time, which many Internet users still relied on. --gregbo [1] https://groups.google.com/g/mod.protocols.tcp-ip [2] https://groups.google.com/g/comp.protocols.tcp-ip > On Oct 2, 2024, at 4:41?PM, John Day via Internet-history wrote: > > Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, as long as the focus is on ramming as much through as possible. They are headed in the wrong direction. Also putting congestion control in TCP maximizes time to notify. But it also thwarts doing QoS. One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well-before it gets bad. > > We are back to the reality that congestion avoidance is needed in the internet layer, the network layer and the data link layer (bridges are relays). And 802 realizes that is using an innovative modification IS-IS and a congestion control scheme for bridged networks. 
(Not sure it is the greatest.) > > The case I was trying to make was TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer. > > John > >> On Oct 2, 2024, at 17:51, Vint Cerf wrote: >> >> John, >> >> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. >> >> v >> >> >> On Wed, Oct 2, 2024 at 5:19?PM John Day > wrote: >>> Busy day. Just getting to looking at these. >>> >>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions: >>> >>> Flow control is a pair-wise issue, Congestion management is an n-party issue. >>> >>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.) >>> >>> A Congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.) >>> >>> Congestion is a stochastic phenomena. The cause is too many packets arriving with a given short period. >>> >>> Load is not the root cause of congestion but does increase the probability. (This is an error I see in most every paper I read on the topic.) Congestion has been observed on a network with a .1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late. >>> >>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify. >>> >>> Something like ECN or SourceQuench (if like ECN it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.) >>> >>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers. >>> >>> Jacobson?s solution for TCP is about the worst, one could expect: A congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory. (implicit notification, no ECN) In a way this is not Van?s fault. It is the classic engineer?s mistake: Solve the narrow problem but fail to consider the context. This solution might acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers with TCP congestion control results in warring feedback loops with very different response times. >>> >>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions. >>> >>> There is probably much more, but this is what comes to mind. 
>>> >>> Take care, >>> John >>> >>> >>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history > wrote: >>>> >>>> One basic problem with blaming the "last packet that caused intermediate >>>> router congestion" is that it usually blamed the wrong source, among other >>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>> might remember more. >>>> >>>> >>>> v From jack at 3kitty.org Sat Oct 5 15:28:07 2024 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 5 Oct 2024 15:28:07 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> IIRC: When the internal mechanisms (such as SQ) were being debated and choices made to create TCP/IP V4 for adoption as the DoD Standard, the technology world was quite different.?? At the time (early 1980s), gateways had very little memory - sometimes only enough to hold one or at most a few IP datagrams.?? If a datagram arrived and there was no place to hold it, SQ back to the source was a way to say "Slow down.? I just had to drop your last datagram". Over the decades, memory became a lot more available.? So gateways could easily have space to queue many datagrams.? In one test I did just a few years ago, a stream of datagrams was sent from one site to another.? All were received intact and in order as sent.? No SQ messages were received.?? But latency soared.? Some datagrams took more than 30 seconds to reach their destination.?? Memory had become cheap enough that datagrams could just be held as long as needed. For anyone involved in operating a piece of the Internet, or for diagnosing users' complaints like "it's too slow", ICMP's facilities were crucial tools.?? They were flawed and incomplete, but still useful as ways to figure out what was happening. When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was not included.? As someone involved in diagnosing operational problems, we yelled, screamed, cajoled, encouraged, lobbied, and did whatever we could to get the DoD procurement folks to add ICMP to their list of required implementations. This discussion about SQ reminded me of another "gateway issue" from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll write what I remember about that separately. Jack On 10/5/24 11:26, Craig Partridge via Internet-history wrote: > All sorts of goodies: > > ICMP Echo (what used to power Ping until people decided they didn't like > folks probing) > > ICMP Unreachable (port or host) > > ICMP Problem Param (diagnostic) > > many more. > > On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >> messages. >> >> v >> >> >> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>> internet-history at elists.isoc.org > internet-history at elists.isoc.org>> >>> wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> Having trouble emailing again so i did some trimming on the original >>> message.... 
>>>> Putting my packet radio hat back on, a source quench message could help >>> disambiguate whether loss in the network is due to congestion or >> something >>> else (like in wireless, loss due to harsh environments, jamming, >>> mobility). I also think it is not obvious what you should do when you >>> receive a source quench, but to me trying to understand this is just part >>> of trying to see if we can make things work better. How about what you >>> could do when you don't receive a source quench but have experienced >> loss? >>>> How is network coding coming along these days? >>>> >>>> barbara >>> Any serious attempts to reinstitute ICMP source quench would have to go >>> through the IETF RFC process again because it?s been deprecated for some >>> time. [1] Also, many sites block ICMP outright (even though they?ve been >>> warned not to do this). [2] >>> >>> --gregbo >>> >>> [1]https://datatracker.ietf.org/doc/rfc6633/ >>> [2] >>> >> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 >> >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From jeanjour at comcast.net Sat Oct 5 16:33:34 2024 From: jeanjour at comcast.net (John Day) Date: Sat, 5 Oct 2024 19:33:34 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> Message-ID: The work of Jain?s DEC team existed at the same time and I believe Jacobson?s original paper references it. As I said, at least it does congestion avoidance without causing congestion (unless under extreme conditions). I suspect that the main reason Jacobson didn?t adopt it was that they were trying to maximize the data rate by running as close to congestion collapse as they could. While Jain?s work attempted to balance the trade-off between throughput and response time. But that is just policy they still could have used ECN to keep from being predatory and used ECN while waiting until the queue is full to mark the packets. That is what TCP use of ECN does now. Of course, I think that is bad choice because it generates lots of retransmissions. When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, but an experimenter. But it is not uncommon to be so focused on the immediate problem to fail to notice the system implications. Take care, John > On Oct 5, 2024, at 18:16, Greg Skinner wrote: > > Unfortunately, I only have time for a quick response. 
Also, unfortunately, neither the ietf nor end2end-interest archives go back far enough to get more of an idea of what VJ, Mike Karels, etc. considered in their approach to congestion avoidance and control. There was some discussion of it on the tcp-ip list, which can be accessed via Google Groups. [1] [2] I could go into more detail, but I don?t think they intended their approach to somehow set a paradigm for how congestion avoidance and control would be addressed on a (modern) Internet scale. Something to consider here is with the limited resources they had, they needed (and were able) to come up with something that worked on networks based on the ARPAnet IMP technology of that time, which many Internet users still relied on. > > --gregbo > > [1] https://groups.google.com/g/mod.protocols.tcp-ip > [2] https://groups.google.com/g/comp.protocols.tcp-ip > >> On Oct 2, 2024, at 4:41?PM, John Day via Internet-history wrote: >> >> Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, as long as the focus is on ramming as much through as possible. They are headed in the wrong direction. Also putting congestion control in TCP maximizes time to notify. But it also thwarts doing QoS. One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well-before it gets bad. >> >> We are back to the reality that congestion avoidance is needed in the internet layer, the network layer and the data link layer (bridges are relays). And 802 realizes that is using an innovative modification IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.) >> >> The case I was trying to make was TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer. >> >> John >> >>> On Oct 2, 2024, at 17:51, Vint Cerf wrote: >>> >>> John, >>> >>> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. >>> >>> v >>> >>> >>> On Wed, Oct 2, 2024 at 5:19?PM John Day > wrote: >>>> Busy day. Just getting to looking at these. >>>> >>>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions: >>>> >>>> Flow control is a pair-wise issue, Congestion management is an n-party issue. >>>> >>>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.) >>>> >>>> A Congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.) >>>> >>>> Congestion is a stochastic phenomena. The cause is too many packets arriving with a given short period. >>>> >>>> Load is not the root cause of congestion but does increase the probability. (This is an error I see in most every paper I read on the topic.) Congestion has been observed on a network with a .1% loading. Often congestion will clear on its own. 
Waiting for load to be the condition for a response makes the response late. >>>> >>>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify. >>>> >>>> Something like ECN or SourceQuench (if like ECN it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.) >>>> >>>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers. >>>> >>>> Jacobson?s solution for TCP is about the worst, one could expect: A congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory. (implicit notification, no ECN) In a way this is not Van?s fault. It is the classic engineer?s mistake: Solve the narrow problem but fail to consider the context. This solution might acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers with TCP congestion control results in warring feedback loops with very different response times. >>>> >>>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions. >>>> >>>> There is probably much more, but this is what comes to mind. >>>> >>>> Take care, >>>> John >>>> >>>> >>>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history > wrote: >>>>> >>>>> One basic problem with blaming the "last packet that caused intermediate >>>>> router congestion" is that it usually blamed the wrong source, among other >>>>> problems. Van Jacobson was/is the guru of flow control (among others) who >>>>> might remember more. >>>>> >>>>> >>>>> v > From b_a_denny at yahoo.com Sat Oct 5 17:00:45 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sun, 6 Oct 2024 00:00:45 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> Message-ID: <35157553.16622819.1728172845829@mail.yahoo.com> I noticed ICMP wasn't included at the time but did they provide any justification? I always wondered why did this happen. Ping has been in existence for a very long time.? As far as the timeline in my brain, I think? it was around before any kind of significant work in network management. I know I relied on it quite a bit in trying to understand what was happening in testbeds. Life would have been miserable without it as I was trying to get things up and running. Same is true for the field demos. I was working in isolated islands. barbara On Saturday, October 5, 2024 at 03:28:22 PM PDT, Jack Haverty via Internet-history wrote: IIRC: When the internal mechanisms (such as SQ) were being debated and choices made to create TCP/IP V4 for adoption as the DoD Standard, the technology world was quite different.?? 
At the time (early 1980s), gateways had very little memory - sometimes only enough to hold one or at most a few IP datagrams.?? If a datagram arrived and there was no place to hold it, SQ back to the source was a way to say "Slow down.? I just had to drop your last datagram". Over the decades, memory became a lot more available.? So gateways could easily have space to queue many datagrams.? In one test I did just a few years ago, a stream of datagrams was sent from one site to another.? All were received intact and in order as sent.? No SQ messages were received.?? But latency soared.? Some datagrams took more than 30 seconds to reach their destination.?? Memory had become cheap enough that datagrams could just be held as long as needed. For anyone involved in operating a piece of the Internet, or for diagnosing users' complaints like "it's too slow", ICMP's facilities were crucial tools.?? They were flawed and incomplete, but still useful as ways to figure out what was happening. When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was not included.? As someone involved in diagnosing operational problems, we yelled, screamed, cajoled, encouraged, lobbied, and did whatever we could to get the DoD procurement folks to add ICMP to their list of required implementations. This discussion about SQ reminded me of another "gateway issue" from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll write what I remember about that separately. Jack On 10/5/24 11:26, Craig Partridge via Internet-history wrote: > All sorts of goodies: > > ICMP Echo (what used to power Ping until people decided they didn't like > folks probing) > > ICMP Unreachable (port or host) > > ICMP Problem Param (diagnostic) > > many more. > > On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >> messages. >> >> v >> >> >> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>> internet-history at elists.isoc.org? > internet-history at elists.isoc.org>> >>> wrote: >>>> Forwarded for Barbara >>>> >>>> ==== >>>> >>>> Having trouble emailing again so i did some trimming on the original >>> message.... >>>> Putting my packet radio hat back on, a source quench message could help >>> disambiguate whether loss in the network is due to congestion or >> something >>> else (like in wireless, loss due to harsh environments, jamming, >>> mobility).? I also think it is not obvious what you should do when you >>> receive a source quench, but to me trying to understand this is just part >>> of trying to see if we can make things work better.? How about what you >>> could do when you don't receive a source quench but have experienced >> loss? >>>> How is network coding coming along these days? >>>> >>>> barbara >>> Any serious attempts to reinstitute ICMP source quench would have to go >>> through the IETF RFC process again because it?s been deprecated for some >>> time. [1]? Also, many sites block ICMP outright (even though they?ve been >>> warned not to do this). 
[2] >>> >>> --gregbo >>> >>> [1]https://datatracker.ietf.org/doc/rfc6633/ >>> [2] >>> >> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 >> >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> From craig at tereschau.net Sat Oct 5 17:42:32 2024 From: craig at tereschau.net (Craig Partridge) Date: Sat, 5 Oct 2024 18:42:32 -0600 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> Message-ID: As someone who was in touch with Raj/KK and Van/Mike during the development of congestion control. They were unaware of each other's work until spring of 1988, when they realized they were doing very similar stuff. I think, someone (Dave Clark) in the End2End Research Group became aware of Raj & KK's work and invited them to come present to an E2E meeting in early 1988 and E2E (more than IETF) was where Van was working out the kinks in his congestion control work with Mike. Craig On Sat, Oct 5, 2024 at 5:34?PM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > The work of Jain?s DEC team existed at the same time and I believe > Jacobson?s original paper references it. > > As I said, at least it does congestion avoidance without causing > congestion (unless under extreme conditions). > > I suspect that the main reason Jacobson didn?t adopt it was that they were > trying to maximize the data rate by running as close to congestion collapse > as they could. While Jain?s work attempted to balance the trade-off between > throughput and response time. But that is just policy they still could > have used ECN to keep from being predatory and used ECN while waiting until > the queue is full to mark the packets. That is what TCP use of ECN does > now. Of course, I think that is bad choice because it generates lots of > retransmissions. > > When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, > but an experimenter. > > But it is not uncommon to be so focused on the immediate problem to fail > to notice the system implications. > > Take care, > John > > > On Oct 5, 2024, at 18:16, Greg Skinner wrote: > > > > Unfortunately, I only have time for a quick response. Also, > unfortunately, neither the ietf nor end2end-interest archives go back far > enough to get more of an idea of what VJ, Mike Karels, etc. considered in > their approach to congestion avoidance and control. There was some > discussion of it on the tcp-ip list, which can be accessed via Google > Groups. [1] [2] I could go into more detail, but I don?t think they > intended their approach to somehow set a paradigm for how congestion > avoidance and control would be addressed on a (modern) Internet scale. 
> Something to consider here is with the limited resources they had, they > needed (and were able) to come up with something that worked on networks > based on the ARPAnet IMP technology of that time, which many Internet users > still relied on. > > > > --gregbo > > > > [1] https://groups.google.com/g/mod.protocols.tcp-ip > > [2] https://groups.google.com/g/comp.protocols.tcp-ip > > > >> On Oct 2, 2024, at 4:41?PM, John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Yes, I have been reading the recent work. But as long as there is no > ECN (or something like it), as long as the concentration is on load, as > long as the focus is on ramming as much through as possible. They are > headed in the wrong direction. Also putting congestion control in TCP > maximizes time to notify. But it also thwarts doing QoS. One needs to be > able to coordinate congestion control with traffic management. Inferring > congestion from Ack timing is very imprecise. Given the stochastic nature > of congestion, it is important to detect congestion well-before it gets > bad. > >> > >> We are back to the reality that congestion avoidance is needed in the > internet layer, the network layer and the data link layer (bridges are > relays). And 802 realizes that is using an innovative modification IS-IS > and a congestion control scheme for bridged networks. (Not sure it is the > greatest.) > >> > >> The case I was trying to make was TCP congestion control got off on the > wrong foot. It established the box that people are still in. As long as > detection is implicit, it will be predatory. Congestion is not limited to > one layer. > >> > >> John > >> > >>> On Oct 2, 2024, at 17:51, Vint Cerf wrote: > >>> > >>> John, > >>> > >>> you may be referring to an early Van Jacobson idea, "slow start" - > things have gone well beyond that, I believe, with mechanisms that use the > acknowledgement intervals to assess/control flow. Round-trip time is no > longer a key metric. > >>> > >>> v > >>> > >>> > >>> On Wed, Oct 2, 2024 at 5:19?PM John Day jeanjour at comcast.net>> wrote: > >>>> Busy day. Just getting to looking at these. > >>>> > >>>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC > Report is a masterpiece. The epitome of what good computer research should > be. Their initial work really nailed the problem. It is unfortunate that it > appears to have been totally forgotten. Of course there was still work to > do. A few conclusions: > >>>> > >>>> Flow control is a pair-wise issue, Congestion management is an > n-party issue. > >>>> > >>>> Any layer that relays will exhibit congestion. (Contention for > multi-access media is a form of congestion.) > >>>> > >>>> A Congestion solution should minimize congestion events and > retransmissions. (TCP maximizes both.) > >>>> > >>>> Congestion is a stochastic phenomena. The cause is too many packets > arriving with a given short period. > >>>> > >>>> Load is not the root cause of congestion but does increase the > probability. (This is an error I see in most every paper I read on the > topic.) Congestion has been observed on a network with a .1% loading. Often > congestion will clear on its own. Waiting for load to be the condition for > a response makes the response late. > >>>> > >>>> The effectiveness of any congestion avoidance solution will > deteriorate with increasing time-to-notify. 
> >>>> > >>>> Something like ECN or SourceQuench (if like ECN it is sent to all > sources of the congested router) is absolutely required to ensure that the > effects of congestion management remain localized to the layer in which it > occurred. However, neither one alone is sufficient without the action to be > taken in response to receiving them. (I would think SQ would have some > advantage in that the sender would be notified sooner than with ECN.) > >>>> > >>>> Without ECN, the congestion scheme is predatory and will interact > badly with congestion solutions in lower layers. > >>>> > >>>> Jacobson?s solution for TCP is about the worst, one could expect: A > congestion *avoidance* solution that works by causing congestion? It has > potentially done irreparable damage to the Internet, because it is > predatory. (implicit notification, no ECN) In a way this is not Van?s > fault. It is the classic engineer?s mistake: Solve the narrow problem but > fail to consider the context. This solution might acceptable for a network, > but not for an Internet, where multiple layers (some of less scope) relay > and are thus subject to congestion. Attempts to do congestion control in > lower layers with TCP congestion control results in warring feedback loops > with very different response times. > >>>> > >>>> As Jain and KK point out, TCP optimizes for the edge of the cliff of > congestion collapse, while they propose optimizing for the knee of the > throughput/delay curve to minimize both congestion events and > retransmissions. > >>>> > >>>> There is probably much more, but this is what comes to mind. > >>>> > >>>> Take care, > >>>> John > >>>> > >>>> > >>>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history < > internet-history at elists.isoc.org > > wrote: > >>>>> > >>>>> One basic problem with blaming the "last packet that caused > intermediate > >>>>> router congestion" is that it usually blamed the wrong source, among > other > >>>>> problems. Van Jacobson was/is the guru of flow control (among > others) who > >>>>> might remember more. > >>>>> > >>>>> > >>>>> v > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From jeanjour at comcast.net Sat Oct 5 17:52:10 2024 From: jeanjour at comcast.net (John Day) Date: Sat, 5 Oct 2024 20:52:10 -0400 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> Message-ID: <709F8112-8285-4366-B867-69C9D268AF28@comcast.net> Ah, Thanks, that helps clarify my picture of it. However, it doesn?t change my interpretation. The Raj/KK work is yields a much better system solution that minimizes retransmissions. And there is just something about congestion avoidance that causes congestion that doesn?t sit well. ;-) The two Part 1s* of the 4 part DEC Report and the Report as a whole is one of the finest pieces of computer science research I have ever read. It is laid out logically. 
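For anyone who has not read the DEC report being discussed here, the congestion-avoidance scheme it describes is small enough to sketch in a few lines. The toy below is my own paraphrase, not Jain and Ramakrishnan's code: the router sets a congestion-indication bit whenever its average queue length reaches about one packet, and the sender reacts with additive increase / multiplicative decrease (the report used a decrease factor of roughly 7/8). The fixed 50-sample averaging window and the traffic numbers are made-up simplifications; the report averages over the last busy+idle cycle.

import random

class Router:
    def __init__(self):
        self.queue = 0
        self.samples = []                   # recent queue lengths, for the average

    def forward(self, arriving, capacity):
        self.queue = max(0, self.queue + arriving - capacity)
        self.samples.append(self.queue)
        window = self.samples[-50:]         # simplification: fixed averaging window
        return sum(window) / len(window) >= 1.0   # congestion-indication bit

class Sender:
    def __init__(self):
        self.window = 1.0

    def on_feedback(self, marked):
        if marked:
            self.window = max(1.0, self.window * 0.875)   # multiplicative decrease
        else:
            self.window += 1.0                            # additive increase

router, sender = Router(), Sender()
CAPACITY = 10                               # packets drained per interval (made up)
for t in range(200):
    arriving = int(sender.window) + random.randint(0, 2)  # plus some cross traffic
    sender.on_feedback(router.forward(arriving, CAPACITY))
print("final window:", round(sender.window, 1), " queue:", router.queue)

Run as-is, the window settles into a sawtooth around the router's capacity while the average queue stays near one packet, which is the knee-of-the-curve operating point the report argues for.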
They take the problem apart, evaluate each part thoroughly and make reasonable decisions while also indicating that other possibilities are to be explored. I really wish more of the papers I read were this good. *There are two Part 1s. The one that is part of the DEC Report and the version that was published. The fun thing is one is not a subset of the other and both have good information in them that is not in the other one. ;-) Take care, John > On Oct 5, 2024, at 20:42, Craig Partridge wrote: > > As someone who was in touch with Raj/KK and Van/Mike during the development of congestion control. They were unaware of each other's work until spring of 1988, when they realized they were doing very similar stuff. I think, someone (Dave Clark) in the End2End Research Group became aware of Raj & KK's work and invited them to come present to an E2E meeting in early 1988 and E2E (more than IETF) was where Van was working out the kinks in his congestion control work with Mike. > > Craig > > On Sat, Oct 5, 2024 at 5:34?PM John Day via Internet-history > wrote: >> The work of Jain?s DEC team existed at the same time and I believe Jacobson?s original paper references it. >> >> As I said, at least it does congestion avoidance without causing congestion (unless under extreme conditions). >> >> I suspect that the main reason Jacobson didn?t adopt it was that they were trying to maximize the data rate by running as close to congestion collapse as they could. While Jain?s work attempted to balance the trade-off between throughput and response time. But that is just policy they still could have used ECN to keep from being predatory and used ECN while waiting until the queue is full to mark the packets. That is what TCP use of ECN does now. Of course, I think that is bad choice because it generates lots of retransmissions. >> >> When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, but an experimenter. >> >> But it is not uncommon to be so focused on the immediate problem to fail to notice the system implications. >> >> Take care, >> John >> >> > On Oct 5, 2024, at 18:16, Greg Skinner > wrote: >> > >> > Unfortunately, I only have time for a quick response. Also, unfortunately, neither the ietf nor end2end-interest archives go back far enough to get more of an idea of what VJ, Mike Karels, etc. considered in their approach to congestion avoidance and control. There was some discussion of it on the tcp-ip list, which can be accessed via Google Groups. [1] [2] I could go into more detail, but I don?t think they intended their approach to somehow set a paradigm for how congestion avoidance and control would be addressed on a (modern) Internet scale. Something to consider here is with the limited resources they had, they needed (and were able) to come up with something that worked on networks based on the ARPAnet IMP technology of that time, which many Internet users still relied on. >> > >> > --gregbo >> > >> > [1] https://groups.google.com/g/mod.protocols.tcp-ip >> > [2] https://groups.google.com/g/comp.protocols.tcp-ip >> > >> >> On Oct 2, 2024, at 4:41?PM, John Day via Internet-history > wrote: >> >> >> >> Yes, I have been reading the recent work. But as long as there is no ECN (or something like it), as long as the concentration is on load, as long as the focus is on ramming as much through as possible. They are headed in the wrong direction. Also putting congestion control in TCP maximizes time to notify. But it also thwarts doing QoS. 
One needs to be able to coordinate congestion control with traffic management. Inferring congestion from Ack timing is very imprecise. Given the stochastic nature of congestion, it is important to detect congestion well-before it gets bad. >> >> >> >> We are back to the reality that congestion avoidance is needed in the internet layer, the network layer and the data link layer (bridges are relays). And 802 realizes that is using an innovative modification IS-IS and a congestion control scheme for bridged networks. (Not sure it is the greatest.) >> >> >> >> The case I was trying to make was TCP congestion control got off on the wrong foot. It established the box that people are still in. As long as detection is implicit, it will be predatory. Congestion is not limited to one layer. >> >> >> >> John >> >> >> >>> On Oct 2, 2024, at 17:51, Vint Cerf > wrote: >> >>> >> >>> John, >> >>> >> >>> you may be referring to an early Van Jacobson idea, "slow start" - things have gone well beyond that, I believe, with mechanisms that use the acknowledgement intervals to assess/control flow. Round-trip time is no longer a key metric. >> >>> >> >>> v >> >>> >> >>> >> >>> On Wed, Oct 2, 2024 at 5:19?PM John Day >> wrote: >> >>>> Busy day. Just getting to looking at these. >> >>>> >> >>>> AFAIK, Raj and KK are the real experts on this topic. The 4-part DEC Report is a masterpiece. The epitome of what good computer research should be. Their initial work really nailed the problem. It is unfortunate that it appears to have been totally forgotten. Of course there was still work to do. A few conclusions: >> >>>> >> >>>> Flow control is a pair-wise issue, Congestion management is an n-party issue. >> >>>> >> >>>> Any layer that relays will exhibit congestion. (Contention for multi-access media is a form of congestion.) >> >>>> >> >>>> A Congestion solution should minimize congestion events and retransmissions. (TCP maximizes both.) >> >>>> >> >>>> Congestion is a stochastic phenomena. The cause is too many packets arriving with a given short period. >> >>>> >> >>>> Load is not the root cause of congestion but does increase the probability. (This is an error I see in most every paper I read on the topic.) Congestion has been observed on a network with a .1% loading. Often congestion will clear on its own. Waiting for load to be the condition for a response makes the response late. >> >>>> >> >>>> The effectiveness of any congestion avoidance solution will deteriorate with increasing time-to-notify. >> >>>> >> >>>> Something like ECN or SourceQuench (if like ECN it is sent to all sources of the congested router) is absolutely required to ensure that the effects of congestion management remain localized to the layer in which it occurred. However, neither one alone is sufficient without the action to be taken in response to receiving them. (I would think SQ would have some advantage in that the sender would be notified sooner than with ECN.) >> >>>> >> >>>> Without ECN, the congestion scheme is predatory and will interact badly with congestion solutions in lower layers. >> >>>> >> >>>> Jacobson?s solution for TCP is about the worst, one could expect: A congestion *avoidance* solution that works by causing congestion? It has potentially done irreparable damage to the Internet, because it is predatory. (implicit notification, no ECN) In a way this is not Van?s fault. It is the classic engineer?s mistake: Solve the narrow problem but fail to consider the context. 
This solution might acceptable for a network, but not for an Internet, where multiple layers (some of less scope) relay and are thus subject to congestion. Attempts to do congestion control in lower layers with TCP congestion control results in warring feedback loops with very different response times. >> >>>> >> >>>> As Jain and KK point out, TCP optimizes for the edge of the cliff of congestion collapse, while they propose optimizing for the knee of the throughput/delay curve to minimize both congestion events and retransmissions. >> >>>> >> >>>> There is probably much more, but this is what comes to mind. >> >>>> >> >>>> Take care, >> >>>> John >> >>>> >> >>>> >> >>>>> On Oct 1, 2024, at 22:10, Vint Cerf via Internet-history >> wrote: >> >>>>> >> >>>>> One basic problem with blaming the "last packet that caused intermediate >> >>>>> router congestion" is that it usually blamed the wrong source, among other >> >>>>> problems. Van Jacobson was/is the guru of flow control (among others) who >> >>>>> might remember more. >> >>>>> >> >>>>> >> >>>>> v >> > >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > ***** > Craig Partridge's email account for professional society activities and mailing lists. From stewart at serissa.com Sat Oct 5 18:25:46 2024 From: stewart at serissa.com (Serissa) Date: Sat, 5 Oct 2024 21:25:46 -0400 Subject: [ih] Internet-history Digest, Vol 59, Issue 29 In-Reply-To: References: Message-ID: <75C3B2E8-A456-4A8F-9794-299552BECFD5@serissa.com> > > Message: 1 > Date: Sun, 6 Oct 2024 08:37:35 +1300 > From: Brian E Carpenter > To: internet-history at elists.isoc.org > Subject: Re: [ih] bufferbloat and modern congestion control (was 4004) > Message-ID: > >> On 06-Oct-24 05:03, Greg Skinner via Internet-history wrote: >>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > wrote: >>> >>> Forwarded for Barbara > ... >>> How is network coding coming along these days? > > I don't believe it has seen much deployment. A colleague of mine has worked on its applicability to mitigating problems with satellite-based connectivity for remote Pacific islands. [3] > >> Any serious attempts to reinstitute ICMP source quench would have to go through the IETF RFC process again because it?s been deprecated for some time. [1] Also, many sites block ICMP outright (even though they?ve been warned not to do this). [2] > > The main operational problem this causes is that it breaks path MTU discovery, which damages performance. Applies to ICMPv6 too, of course. However, ICMP is viewed as a security threat, including as a DOS vector, so blocking it in firewalls is pretty common. There are recommendations in [4] and [5] for ICMPv6. > > Brian > >> >> --gregbo >> >> [1] https://datatracker.ietf.org/doc/rfc6633/ >> [2] https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > > [3] U. Speidel, "Improving goodput on shared satellite links with coded tunnels," 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 2021, pp. 1-6, doi: 10.1109/GLOBECOM46510.2021.9686021. > > [4] https://www.rfc-editor.org/info/rfc4890 > > [5] https://www.rfc-editor.org/info/rfc9099 > > ------------------------------ > Regarding network coding, I am aware of a couple of other experiments. Microsoft used network coding for a tunnel between a commuter bus for employees and their home base, to solve problems of excessive packet loss on a cellular hop. 
Matteo Frigo and I, at Quanta Research around 2014, experimented with end-to-end network coding to make real time video work better on lossy links. Just in case not everyone is familiar with network coding, there are (at least) two use cases. In one, recoding packets in the interior of a network can improve throughput for everyone. In the other, network coding is used as a form of rateless erasure coding for point to point streams, so rather than retransmit lost packets, you smear out the information across many packets, so that once you receive "enough" of them, you can reconstruct the original message. Roughly speaking, if you imagine multiplying the packets of a message times a random matrix of 1's and 0's, what you transmit are random linear combinations of packets. Once you have enough of them so that the received random matrix is not singular, you can reconstruct the original. In practice, the random linear combinations are according to a PRNG known to the sender and receiver, so from an id in the packet you know what input packets contributed to it. Approximately, once you receive N good packets, you can reconstruct the original N packets, it doesn't really matter which N you receive, because they are randomized anyway. There are no retransmissions, and out of order doesn't matter. Pretty useful stuff. Within a flow, no packet is more important than any other. -Larry From vint at google.com Sat Oct 5 23:29:57 2024 From: vint at google.com (Vint Cerf) Date: Sun, 6 Oct 2024 02:29:57 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> Message-ID: sounds like your test discovered bufferbloat.... v On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: > IIRC: > > When the internal mechanisms (such as SQ) were being debated and choices > made to create TCP/IP V4 for adoption as the DoD Standard, the technology > world was quite different. At the time (early 1980s), gateways had very > little memory - sometimes only enough to hold one or at most a few IP > datagrams. If a datagram arrived and there was no place to hold it, SQ > back to the source was a way to say "Slow down. I just had to drop your > last datagram". > > Over the decades, memory became a lot more available. So gateways could > easily have space to queue many datagrams. In one test I did just a few > years ago, a stream of datagrams was sent from one site to another. All > were received intact and in order as sent. No SQ messages were received. > But latency soared. Some datagrams took more than 30 seconds to reach > their destination. Memory had become cheap enough that datagrams could > just be held as long as needed. > > For anyone involved in operating a piece of the Internet, or for > diagnosing users' complaints like "it's too slow", ICMP's facilities were > crucial tools. They were flawed and incomplete, but still useful as ways > to figure out what was happening. > > When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was not > included. As someone involved in diagnosing operational problems, we > yelled, screamed, cajoled, encouraged, lobbied, and did whatever we could > to get the DoD procurement folks to add ICMP to their list of required > implementations. 
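Larry's description above of the rateless-erasure use of network coding is easy to make concrete. The sketch below is a toy over GF(2) (plain XOR), not the Quanta or Microsoft code: each coded packet is a random XOR of the K originals, the coefficient vector is regenerated from the packet's sequence number, and once K independent combinations arrive a Gaussian elimination recovers the originals, regardless of which packets were lost or reordered. Packet size, count, and the loss pattern are invented for the example.

import random

K, PLEN = 4, 8                              # number of originals and their size (toy values)
originals = [bytes(random.randrange(256) for _ in range(PLEN)) for _ in range(K)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(seq):
    # Coefficients come from a PRNG seeded with the packet's sequence number,
    # so a real receiver could regenerate them from the id alone and only
    # (seq, payload) would need to travel.
    rng = random.Random(seq)
    coeffs = [rng.randrange(2) for _ in range(K)]
    if not any(coeffs):
        coeffs[0] = 1
    payload = bytes(PLEN)
    for c, p in zip(coeffs, originals):
        if c:
            payload = xor(payload, p)
    return coeffs, payload

arrivals = [encode(seq) for seq in range(3 * K)]
random.shuffle(arrivals)                    # arbitrary reordering...
arrivals = arrivals[:2 * K]                 # ...and loss: only some packets arrive

# Incremental Gaussian elimination over GF(2): keep one reduced row per pivot.
basis = {}
for coeffs, payload in arrivals:
    coeffs = coeffs[:]
    for col in range(K):
        if coeffs[col] and col in basis:
            bc, bp = basis[col]
            coeffs = [a ^ b for a, b in zip(coeffs, bc)]
            payload = xor(payload, bp)
    lead = next((c for c in range(K) if coeffs[c]), None)
    if lead is not None:
        basis[lead] = (coeffs, payload)
    if len(basis) == K:
        break

if len(basis) < K:
    raise SystemExit("not enough independent combinations received")

# Back-substitute so every row ends up with a single coefficient set.
for col in sorted(basis, reverse=True):
    coeffs, payload = basis[col]
    for other in range(col):
        oc, op = basis[other]
        if oc[col]:
            basis[other] = ([a ^ b for a, b in zip(oc, coeffs)], xor(op, payload))

print("recovered:", [basis[c][1] for c in range(K)] == originals)

As Larry says, no retransmissions and no ordering: any K independent coded packets suffice, so within a flow no packet matters more than any other.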
> > This discussion about SQ reminded me of another "gateway issue" from the > 1980s ICCB to-do list - "End-Middle Interactions". I'll write what I > remember about that separately. > > Jack > > > > On 10/5/24 11:26, Craig Partridge via Internet-history wrote: > > All sorts of goodies: > > ICMP Echo (what used to power Ping until people decided they didn't like > folks probing) > > ICMP Unreachable (port or host) > > ICMP Problem Param (diagnostic) > > many more. > > On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history wrote: > > > isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP > messages. > > v > > > On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history wrote: > > > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > internet-history at elists.isoc.org>> > > wrote: > > Forwarded for Barbara > > ==== > > Having trouble emailing again so i did some trimming on the original > > message.... > > Putting my packet radio hat back on, a source quench message could help > > disambiguate whether loss in the network is due to congestion or > > something > > else (like in wireless, loss due to harsh environments, jamming, > mobility). I also think it is not obvious what you should do when you > receive a source quench, but to me trying to understand this is just part > of trying to see if we can make things work better. How about what you > could do when you don't receive a source quench but have experienced > > loss? > > How is network coding coming along these days? > > barbara > > Any serious attempts to reinstitute ICMP source quench would have to go > through the IETF RFC process again because it?s been deprecated for some > time. [1] Also, many sites block ICMP outright (even though they?ve been > warned not to do this). [2] > > --gregbo > > [1] https://datatracker.ietf.org/doc/rfc6633/ > [2] > > > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > > -- > Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190+1 (571) 213 1346 <(571)%20213-1346> > > > until further notice > -- > Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history > > > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jack at 3kitty.org Sun Oct 6 10:22:32 2024 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 6 Oct 2024 10:22:32 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> Message-ID: <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> Yes, I agree that Bufferbloat is the most likely root cause of what I saw.? In fact, that testing experience is when I actually heard the term "bufferbloat" for the first time and learned what it meant.?? I can imagine how it probably happened over the years.?? 
It was undoubtedly far easier to just add now-inexpensive memory to components inside the network than it was to invent, and deploy, appropriate mechanisms to replace the rudimentary "placeholders" of Source Quench, Type Of Service, hop-based routing, et al, in all of the components and organizations involved in the Internet. But what I also discovered was more disturbing than bufferbloat. Using the same tools I remembered from 40 years ago, we determined that the bloated buffers were likely deep in the bowels of the Internet - most likely inside a fiber carrier several ISPs away from either endpoint of the test.? Our ability to analyze was hindered by the lack of pervasive support today for mechanisms such as pings and traceroutes at various points along the route.?? Parts of the route through the Internet were cloaked in impenetrable (to us mere Users) shields. But the disturbing part was the attitude of the "providers" who operated the various pieces involved along the route we were trying to use.? Some of them, deep in the bowels of the Internet, wouldn't even talk to us mere Users.?? Their customers were other ISPs.? They don't talk to retail customers.? The ISPs involved all did their tests and measurements, and reported that *their* part of the Internet was working just fine.?? The software vendors in the Users' computers similarly said their technology was working as it should, nothing to be fixed. No one knew much about Source Quench or other congestion control issues and mechanisms.? Or Type of Service.? I assume that the IETF had by now also deprecated even the rudimentary and ineffective mechanisms of Source Quench, with no replacement mechanisms defined and deployed. My User friend tried all sorts of possible fixes.? As taught by Marketing, he upgraded to higher speeds of Internet service.? That was supposed to fix whatever problem you were experiencing.? It didn't.? He switched to several different ISPs, at each end of the route.? No joy. This finger-pointing environment results in a situation where all of the "operators" involved in my User's Internet communications believe that everything of theirs is working fine and the problem must be somewhere else.? But the User believes that the Internet is broken, unsuitable for what he's trying to do, and no one is working to fix it. That polar disagreement between the Users and Providers of the Internet was a disturbing (to me at least) revelation. I suspect the situation will deteriorate, since I frequently see articles describing plans to use the Internet for tasks involving real-time remote manipulation (telemedicine, remote surgery, distant control of vehicles, equipment, etc.).?? My experience is admittedly anecdotal, but I suspect it's not unique. I recommended to my User friend that he might try installing ancient technology - dial-up modems at each end!?? Amazingly, you can still purchase dial-up modems, even from Amazon.?? But I also advised him that even such old tech might not be an improvement.?? If his "voice call" became VOIP at any point along the way, his problems might not change much. His alternative was to forget about doing remote operations over the Internet.?? It might be easier to simply move. Jack Haverty On 10/5/24 23:29, Vint Cerf wrote: > sounds like your test discovered bufferbloat.... 
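The test Jack describes is easy to approximate from an ordinary host today: time small probes to a nearby server while a bulk transfer keeps the bottleneck queue full, and watch the probe latency balloon even though nothing is lost. The sketch below uses TCP connect time as the probe, since ICMP echo needs raw-socket privileges and is often filtered anyway; the host names and ports are placeholders, not real services.

import socket, threading, time

PROBE_HOST = ("example.net", 443)     # placeholder: any reachable TCP service
BULK_HOST  = ("example.org", 9000)    # placeholder: a sink that accepts bytes

def probe_rtt():
    # TCP handshake time stands in for ping.
    start = time.monotonic()
    with socket.create_connection(PROBE_HOST, timeout=10):
        pass
    return (time.monotonic() - start) * 1000.0

def flood(stop):
    # Keep the uplink queue occupied for the duration of the test.
    with socket.create_connection(BULK_HOST) as s:
        chunk = b"\0" * 65536
        while not stop.is_set():
            s.sendall(chunk)

idle = [probe_rtt() for _ in range(5)]
stop = threading.Event()
threading.Thread(target=flood, args=(stop,), daemon=True).start()
time.sleep(2)                          # let the queue build
loaded = [probe_rtt() for _ in range(5)]
stop.set()

print("idle   RTT (ms):", [round(x, 1) for x in idle])
print("loaded RTT (ms):", [round(x, 1) for x in loaded])

A large gap between the idle and loaded numbers, with no loss, is the bufferbloat signature: the delay is sitting in someone's oversized queue rather than being signalled back to the sender.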
> > v > > > On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: > > IIRC: > > When the internal mechanisms (such as SQ) were being debated and > choices made to create TCP/IP V4 for adoption as the DoD Standard, > the technology world was quite different.?? At the time (early > 1980s), gateways had very little memory - sometimes only enough to > hold one or at most a few IP datagrams.?? If a datagram arrived > and there was no place to hold it, SQ back to the source was a way > to say "Slow down. I just had to drop your last datagram". > > Over the decades, memory became a lot more available.? So gateways > could easily have space to queue many datagrams. In one test I did > just a few years ago, a stream of datagrams was sent from one site > to another.? All were received intact and in order as sent.? No SQ > messages were received.?? But latency soared.? Some datagrams took > more than 30 seconds to reach their destination.?? Memory had > become cheap enough that datagrams could just be held as long as > needed. > > For anyone involved in operating a piece of the Internet, or for > diagnosing users' complaints like "it's too slow", ICMP's > facilities were crucial tools.?? They were flawed and incomplete, > but still useful as ways to figure out what was happening. > > When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was > not included.? As someone involved in diagnosing operational > problems, we yelled, screamed, cajoled, encouraged, lobbied, and > did whatever we could to get the DoD procurement folks to add ICMP > to their list of required implementations. > > This discussion about SQ reminded me of another "gateway issue" > from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll > write what I remember about that separately. > > Jack > > > > On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >> All sorts of goodies: >> >> ICMP Echo (what used to power Ping until people decided they didn't like >> folks probing) >> >> ICMP Unreachable (port or host) >> >> ICMP Problem Param (diagnostic) >> >> many more. >> >> On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >>> messages. >>> >>> v >>> >>> >>> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>>> internet-history at elists.isoc.org >> internet-history at elists.isoc.org>> >>>> wrote: >>>>> Forwarded for Barbara >>>>> >>>>> ==== >>>>> >>>>> Having trouble emailing again so i did some trimming on the original >>>> message.... >>>>> Putting my packet radio hat back on, a source quench message could help >>>> disambiguate whether loss in the network is due to congestion or >>> something >>>> else (like in wireless, loss due to harsh environments, jamming, >>>> mobility). I also think it is not obvious what you should do when you >>>> receive a source quench, but to me trying to understand this is just part >>>> of trying to see if we can make things work better. How about what you >>>> could do when you don't receive a source quench but have experienced >>> loss? >>>>> How is network coding coming along these days? >>>>> >>>>> barbara >>>> Any serious attempts to reinstitute ICMP source quench would have to go >>>> through the IETF RFC process again because it?s been deprecated for some >>>> time. 
[1] Also, many sites block ICMP outright (even though they?ve been >>>> warned not to do this). [2] >>>> >>>> --gregbo >>>> >>>> [1]https://datatracker.ietf.org/doc/rfc6633/ >>>> [2] >>>> >>> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 >>> >>> >>> until further notice >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From jeanjour at comcast.net Sun Oct 6 11:01:42 2024 From: jeanjour at comcast.net (John Day) Date: Sun, 6 Oct 2024 14:01:42 -0400 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> Message-ID: <2A11EFAF-3553-484A-B8D1-F1BAD530B987@comcast.net> This is one of the things that really bothers me. When buffer bloat first became a big thing, several people recounted papers reporting seeing it as far back as the mid-90s, Jack?s may be even earlier. Yet no one took notice. We can conjecture why not, but it would only be conjecture. Similarly in 2004, Appenzeller then at Stanford reported on the advantage of pooled vs static buffers in routers and that 90% of the memory in high routers was unnecessary. That fundamental result had been reported by Peter Denning in 1968 in timesharing systems. But the differences were so stark that it was obvious that it applied in general. (I have been using the result since I first read the paper in the early 70s.) It has been rediscovered at least two other times (probably more) on about a 10 year cycle. (There is a very good paper on DCTCP that stumbled on to it and was surprised by the result but didn?t seem to realize why it happened.) This discussion of congestion suffers from the same thing. I have not seen a paper in 20 years that cites the Raj/KK?s work or that they found that ECN notification should begin when the average queue length is less than or equal to 1. All of these have different degrees of being non-intuitive and should be pointed out. Who knows how many other major results are so poorly known. They certainly aren?t covered in the networking textbooks. I cover them in my course along with other unreported principles. This is the behavior of a craft, not a science. Take care, John > On Oct 6, 2024, at 02:29, Vint Cerf via Internet-history wrote: > > sounds like your test discovered bufferbloat.... > > v > > > On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: > >> IIRC: >> >> When the internal mechanisms (such as SQ) were being debated and choices >> made to create TCP/IP V4 for adoption as the DoD Standard, the technology >> world was quite different. 
At the time (early 1980s), gateways had very >> little memory - sometimes only enough to hold one or at most a few IP >> datagrams. If a datagram arrived and there was no place to hold it, SQ >> back to the source was a way to say "Slow down. I just had to drop your >> last datagram". >> >> Over the decades, memory became a lot more available. So gateways could >> easily have space to queue many datagrams. In one test I did just a few >> years ago, a stream of datagrams was sent from one site to another. All >> were received intact and in order as sent. No SQ messages were received. >> But latency soared. Some datagrams took more than 30 seconds to reach >> their destination. Memory had become cheap enough that datagrams could >> just be held as long as needed. >> >> For anyone involved in operating a piece of the Internet, or for >> diagnosing users' complaints like "it's too slow", ICMP's facilities were >> crucial tools. They were flawed and incomplete, but still useful as ways >> to figure out what was happening. >> >> When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was not >> included. As someone involved in diagnosing operational problems, we >> yelled, screamed, cajoled, encouraged, lobbied, and did whatever we could >> to get the DoD procurement folks to add ICMP to their list of required >> implementations. >> >> This discussion about SQ reminded me of another "gateway issue" from the >> 1980s ICCB to-do list - "End-Middle Interactions". I'll write what I >> remember about that separately. >> >> Jack >> >> >> >> On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >> >> All sorts of goodies: >> >> ICMP Echo (what used to power Ping until people decided they didn't like >> folks probing) >> >> ICMP Unreachable (port or host) >> >> ICMP Problem Param (diagnostic) >> >> many more. >> >> On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history wrote: >> >> >> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >> messages. >> >> v >> >> >> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history wrote: >> >> >> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history > >> internet-history at elists.isoc.org>> >> >> wrote: >> >> Forwarded for Barbara >> >> ==== >> >> Having trouble emailing again so i did some trimming on the original >> >> message.... >> >> Putting my packet radio hat back on, a source quench message could help >> >> disambiguate whether loss in the network is due to congestion or >> >> something >> >> else (like in wireless, loss due to harsh environments, jamming, >> mobility). I also think it is not obvious what you should do when you >> receive a source quench, but to me trying to understand this is just part >> of trying to see if we can make things work better. How about what you >> could do when you don't receive a source quench but have experienced >> >> loss? >> >> How is network coding coming along these days? >> >> barbara >> >> Any serious attempts to reinstitute ICMP source quench would have to go >> through the IETF RFC process again because it?s been deprecated for some >> time. [1] Also, many sites block ICMP outright (even though they?ve been >> warned not to do this). 
[2] >> >> --gregbo >> >> [1] https://datatracker.ietf.org/doc/rfc6633/ >> [2] >> >> >> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >> >> -- >> Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190+1 (571) 213 1346 <(571)%20213-1346> >> >> >> until further notice >> -- >> Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history >> >> >> > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Sun Oct 6 11:10:15 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sun, 6 Oct 2024 18:10:15 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> Message-ID: <1699812208.16759485.1728238215209@mail.yahoo.com> Yes providers don't seem to be very good at doing anything for users. In my residence I kept a 2wire gateway for an extremely long time because I could actually look at stats, etc. Whenever I had a problem my provider's? support people would look at? the age of the equipment and immediately declare that was a problem and I needed to get a new router.? Of course I didn't believe them and never replaced it.? In a while whatever problem I was trying to get them to look at disappeared.? Once they even sent me what they called a commercial router for free after x calls. I never took the time to get it out of the box.? I feel sorry for people. barbara On Sunday, October 6, 2024 at 10:22:46 AM PDT, Jack Haverty via Internet-history wrote: Yes, I agree that Bufferbloat is the most likely root cause of what I saw.? In fact, that testing experience is when I actually heard the term "bufferbloat" for the first time and learned what it meant.?? I can imagine how it probably happened over the years.?? It was undoubtedly far easier to just add now-inexpensive memory to components inside the network than it was to invent, and deploy, appropriate mechanisms to replace the rudimentary "placeholders" of Source Quench, Type Of Service, hop-based routing, et al, in all of the components and organizations involved in the Internet. But what I also discovered was more disturbing than bufferbloat. Using the same tools I remembered from 40 years ago, we determined that the bloated buffers were likely deep in the bowels of the Internet - most likely inside a fiber carrier several ISPs away from either endpoint of the test.? Our ability to analyze was hindered by the lack of pervasive support today for mechanisms such as pings and traceroutes at various points along the route.?? Parts of the route through the Internet were cloaked in impenetrable (to us mere Users) shields. 
But the disturbing part was the attitude of the "providers" who operated the various pieces involved along the route we were trying to use.? Some of them, deep in the bowels of the Internet, wouldn't even talk to us mere Users.?? Their customers were other ISPs.? They don't talk to retail customers.? The ISPs involved all did their tests and measurements, and reported that *their* part of the Internet was working just fine.?? The software vendors in the Users' computers similarly said their technology was working as it should, nothing to be fixed. No one knew much about Source Quench or other congestion control issues and mechanisms.? Or Type of Service.? I assume that the IETF had by now also deprecated even the rudimentary and ineffective mechanisms of Source Quench, with no replacement mechanisms defined and deployed. My User friend tried all sorts of possible fixes.? As taught by Marketing, he upgraded to higher speeds of Internet service.? That was supposed to fix whatever problem you were experiencing.? It didn't.? He switched to several different ISPs, at each end of the route.? No joy. This finger-pointing environment results in a situation where all of the "operators" involved in my User's Internet communications believe that everything of theirs is working fine and the problem must be somewhere else.? But the User believes that the Internet is broken, unsuitable for what he's trying to do, and no one is working to fix it. That polar disagreement between the Users and Providers of the Internet was a disturbing (to me at least) revelation. I suspect the situation will deteriorate, since I frequently see articles describing plans to use the Internet for tasks involving real-time remote manipulation (telemedicine, remote surgery, distant control of vehicles, equipment, etc.).?? My experience is admittedly anecdotal, but I suspect it's not unique. I recommended to my User friend that he might try installing ancient technology - dial-up modems at each end!?? Amazingly, you can still purchase dial-up modems, even from Amazon.?? But I also advised him that even such old tech might not be an improvement.?? If his "voice call" became VOIP at any point along the way, his problems might not change much. His alternative was to forget about doing remote operations over the Internet.?? It might be easier to simply move. Jack Haverty On 10/5/24 23:29, Vint Cerf wrote: > sounds like your test discovered bufferbloat.... > > v > > > On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: > >? ? IIRC: > >? ? When the internal mechanisms (such as SQ) were being debated and >? ? choices made to create TCP/IP V4 for adoption as the DoD Standard, >? ? the technology world was quite different.?? At the time (early >? ? 1980s), gateways had very little memory - sometimes only enough to >? ? hold one or at most a few IP datagrams.?? If a datagram arrived >? ? and there was no place to hold it, SQ back to the source was a way >? ? to say "Slow down. I just had to drop your last datagram". > >? ? Over the decades, memory became a lot more available.? So gateways >? ? could easily have space to queue many datagrams. In one test I did >? ? just a few years ago, a stream of datagrams was sent from one site >? ? to another.? All were received intact and in order as sent.? No SQ >? ? messages were received.?? But latency soared.? Some datagrams took >? ? more than 30 seconds to reach their destination.?? Memory had >? ? become cheap enough that datagrams could just be held as long as >? ? needed. > >? ? 
For anyone involved in operating a piece of the Internet, or for >? ? diagnosing users' complaints like "it's too slow", ICMP's >? ? facilities were crucial tools.?? They were flawed and incomplete, >? ? but still useful as ways to figure out what was happening. > >? ? When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was >? ? not included.? As someone involved in diagnosing operational >? ? problems, we yelled, screamed, cajoled, encouraged, lobbied, and >? ? did whatever we could to get the DoD procurement folks to add ICMP >? ? to their list of required implementations. > >? ? This discussion about SQ reminded me of another "gateway issue" >? ? from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll >? ? write what I remember about that separately. > >? ? Jack > > > >? ? On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >>? ? All sorts of goodies: >> >>? ? ICMP Echo (what used to power Ping until people decided they didn't like >>? ? folks probing) >> >>? ? ICMP Unreachable (port or host) >> >>? ? ICMP Problem Param (diagnostic) >> >>? ? many more. >> >>? ? On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < >>? ? internet-history at elists.isoc.org> wrote: >> >>>? ? isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >>>? ? messages. >>> >>>? ? v >>> >>> >>>? ? On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < >>>? ? internet-history at elists.isoc.org> wrote: >>> >>>>? ? On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>>>? ? internet-history at elists.isoc.org? >>? ? internet-history at elists.isoc.org>> >>>>? ? wrote: >>>>>? ? Forwarded for Barbara >>>>> >>>>>? ? ==== >>>>> >>>>>? ? Having trouble emailing again so i did some trimming on the original >>>>? ? message.... >>>>>? ? Putting my packet radio hat back on, a source quench message could help >>>>? ? disambiguate whether loss in the network is due to congestion or >>>? ? something >>>>? ? else (like in wireless, loss due to harsh environments, jamming, >>>>? ? mobility).? I also think it is not obvious what you should do when you >>>>? ? receive a source quench, but to me trying to understand this is just part >>>>? ? of trying to see if we can make things work better.? How about what you >>>>? ? could do when you don't receive a source quench but have experienced >>>? ? loss? >>>>>? ? How is network coding coming along these days? >>>>> >>>>>? ? barbara >>>>? ? Any serious attempts to reinstitute ICMP source quench would have to go >>>>? ? through the IETF RFC process again because it?s been deprecated for some >>>>? ? time. [1]? Also, many sites block ICMP outright (even though they?ve been >>>>? ? warned not to do this). [2] >>>> >>>>? ? --gregbo >>>> >>>>? ? [1]https://datatracker.ietf.org/doc/rfc6633/ >>>>? ? [2] >>>> >>>? ? https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>>>? ? -- >>>>? ? Internet-history mailing list >>>>? ? Internet-history at elists.isoc.org >>>>? ? https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>? ? -- >>>? ? Please send any postal/overnight deliveries to: >>>? ? Vint Cerf >>>? ? Google, LLC >>>? ? 1900 Reston Metro Plaza, 16th Floor >>>? ? Reston, VA 20190 >>>? ? +1 (571) 213 1346? >>> >>> >>>? ? until further notice >>>? ? -- >>>? ? Internet-history mailing list >>>? ? Internet-history at elists.isoc.org >>>? ? 
https://elists.isoc.org/mailman/listinfo/internet-history >>> > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > > > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Sun Oct 6 11:55:42 2024 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 6 Oct 2024 11:55:42 -0700 Subject: [ih] "Gateway Issue": End-Middle Interactions Message-ID: Recent discussions on this list reminded me of another issue that was on the ICCB's "things we need to do" list in the early 1980s. My ancient notebook just lists something like "control mechanisms". Here's what I remember about what that meant on the 1980s ICCB to-do list as I feverishly worked to copy it off some whiteboard: --------------------- Issue: End-Middle Interactions Within the Internet, activity generally involves some kind of interaction between computers over a TCP connection.? At each end, a program is running under the control of one of that computer's Users, who has been identified by some scheme involving a password and then verified as authorized to use the Internet.?? Some users access remote computers by a connection from their terminal to a TAC (Terminal Access Controller), which also requires password authentication from each User.? Additionally, various projects are creating mechanisms to permit such TCP connections to be protected by encryption techniques.?? The Internet architecture provides mechanisms whereby Users can be reliably identified and authorized to use the Internet, with their interactions protected against interference or disruption by outsiders. However, such "End-to-end" communications also involves considerable "End-to-Middle" ancillary, but critical, communications.?? One example is some of the ICMP interactions, e.g., between a gateway somewhere in the Internet and a TCP running in a Host computer. Another is the interactions performed by the ARP mechanisms, which enable an IP address to be associated with the appropriate LAN address. Such interactions are examples of End-to-Middle communications. They are necessary elements involved in setting up and operating every TCP connection.? As such, each End-to-Middle communications pathway should be identified, and suitable mechanisms be defined and implemented for each End-to-Middle information flow to provide adequate protection appropriate to the protection methods used for TCP connections. --------------------- Since the early 1980s, many more End-to-Middle mechanisms have been created.? Some of them seem to have been examined and protection mechanisms defined (e.g., DNSSEC, HTTPS, SSL, TLS, etc.).? But I don't know how well they've been designed and implemented or how extensively they've been actually deployed throughout the Internet. Also, usage patterns of the Internet have changed significantly.? We no longer just type at terminals and transfer files.? Humans are no longer the only Users.? Perhaps they're not even the dominant User community.?? There's a lot of Servers out there, madly and constantly talking to each other for their own missions.?? Servers are Users of the Internet too. Some End-to-Middle mechanisms don't seem to have changed much.? The ARP interactions which we exploited in the 80s to create the debugging tool "Flakeway" seems to still exist.? 
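To make the ARP point concrete: the exchange carries no authentication at all, so whichever "is-at" answer arrives first wins, which is exactly the property a Flakeway-style tool relies on. A minimal passive sketch, assuming the third-party scapy library and capture privileges on a Linux host, that simply watches the who-has/is-at traffic:

#!/usr/bin/env python3
# Passive ARP watcher (a sketch; needs root and the third-party scapy package).
# ARP requests and replies are unauthenticated, so any host on the LAN can
# answer on another's behalf; this just prints what it sees, it changes nothing.

from scapy.all import ARP, sniff

def show(pkt):
    if ARP not in pkt:
        return
    a = pkt[ARP]
    if a.op == 1:      # who-has (request)
        print(f"who-has {a.pdst}?  asked by {a.psrc} ({a.hwsrc})")
    elif a.op == 2:    # is-at (reply)
        print(f"{a.psrc} is-at {a.hwsrc}  (claimed, not verified)")

sniff(filter="arp", prn=show, store=False)
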
I've been told it has even become a critical element of NAT implementations. With the explosion of "applications" that use TCP, new End-to-Middle interactions have been introduced.? An example is in the Email Service, where Headers now contain all sorts of information, placed there by all sorts of actors along the way, presumably for use by all sorts of other actors in other places.?? Whether and how all these information flows are authenticated and protected is unclear to me at least. But all such End-to-Middle information flows are important to us Users, even the Servers.?? Even if we don't know it. Jack Haverty -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From brian.e.carpenter at gmail.com Sun Oct 6 12:29:45 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 7 Oct 2024 08:29:45 +1300 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <2A11EFAF-3553-484A-B8D1-F1BAD530B987@comcast.net> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <2A11EFAF-3553-484A-B8D1-F1BAD530B987@comcast.net> Message-ID: <014cacec-f39b-4fed-b3dc-745b57f316b3@gmail.com> Thank you for the analogy with working sets, I hadn't thought of that. "Resource allocation is a tricky business." - Denning, "The Working Set Model for Program Behavior", 1968. (I still think that was one of the last great discoveries in CS, in the same year as the Dijkstra letter.) I just found this wonderful page: http://denninginstitute.com/pjd/PUBS/Workingsets.html Regards Brian On 07-Oct-24 07:01, John Day via Internet-history wrote: > This is one of the things that really bothers me. When buffer bloat first became a big thing, several people recounted papers reporting seeing it as far back as the mid-90s, Jack?s may be even earlier. Yet no one took notice. We can conjecture why not, but it would only be conjecture. > > Similarly in 2004, Appenzeller then at Stanford reported on the advantage of pooled vs static buffers in routers and that 90% of the memory in high routers was unnecessary. That fundamental result had been reported by Peter Denning in 1968 in timesharing systems. But the differences were so stark that it was obvious that it applied in general. (I have been using the result since I first read the paper in the early 70s.) It has been rediscovered at least two other times (probably more) on about a 10 year cycle. (There is a very good paper on DCTCP that stumbled on to it and was surprised by the result but didn?t seem to realize why it happened.) > > This discussion of congestion suffers from the same thing. I have not seen a paper in 20 years that cites the Raj/KK?s work or that they found that ECN notification should begin when the average queue length is less than or equal to 1. > > All of these have different degrees of being non-intuitive and should be pointed out. Who knows how many other major results are so poorly known. They certainly aren?t covered in the networking textbooks. I cover them in my course along with other unreported principles. > > This is the behavior of a craft, not a science. > > Take care, > John > >> On Oct 6, 2024, at 02:29, Vint Cerf via Internet-history wrote: >> >> sounds like your test discovered bufferbloat.... 
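The pooled-versus-static buffer effect John describes is easy to reproduce in a toy model. A sketch with entirely made-up numbers, assuming nothing beyond the Python standard library: the same total memory is either carved into fixed per-port buffers or kept as one shared pool, and bursty, uneven arrivals are fed through both.

#!/usr/bin/env python3
# Toy illustration of the pooled-vs-static buffer effect mentioned above.
# The same total memory is split into fixed per-port buffers or kept as one
# shared pool; bursty arrivals all land on one randomly chosen port per slot.
# Parameters are invented for illustration, not taken from any real router.

import random

PORTS, SLOTS, PER_PORT = 8, 200_000, 16
TOTAL = PORTS * PER_PORT

def run(shared, seed=42):
    rng = random.Random(seed)
    q = [0] * PORTS
    offered = dropped = 0
    for _ in range(SLOTS):
        port = rng.randrange(PORTS)
        # geometric burst to one port, mean about 4 packets per slot
        while rng.random() < 0.8:
            offered += 1
            room = (sum(q) < TOTAL) if shared else (q[port] < PER_PORT)
            if room:
                q[port] += 1
            else:
                dropped += 1
        # every port drains at most one packet per slot
        for p in range(PORTS):
            if q[p]:
                q[p] -= 1
    return dropped / offered

print(f"static per-port buffers: {run(shared=False):.4%} dropped")
print(f"one shared pool:         {run(shared=True):.4%} dropped")

With these made-up parameters the per-port arrangement drops a noticeable fraction of the offered load while the shared pool drops close to nothing, which is the effect described above in miniature.
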
>> >> v >> >> >> On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: >> >>> IIRC: >>> >>> When the internal mechanisms (such as SQ) were being debated and choices >>> made to create TCP/IP V4 for adoption as the DoD Standard, the technology >>> world was quite different. At the time (early 1980s), gateways had very >>> little memory - sometimes only enough to hold one or at most a few IP >>> datagrams. If a datagram arrived and there was no place to hold it, SQ >>> back to the source was a way to say "Slow down. I just had to drop your >>> last datagram". >>> >>> Over the decades, memory became a lot more available. So gateways could >>> easily have space to queue many datagrams. In one test I did just a few >>> years ago, a stream of datagrams was sent from one site to another. All >>> were received intact and in order as sent. No SQ messages were received. >>> But latency soared. Some datagrams took more than 30 seconds to reach >>> their destination. Memory had become cheap enough that datagrams could >>> just be held as long as needed. >>> >>> For anyone involved in operating a piece of the Internet, or for >>> diagnosing users' complaints like "it's too slow", ICMP's facilities were >>> crucial tools. They were flawed and incomplete, but still useful as ways >>> to figure out what was happening. >>> >>> When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was not >>> included. As someone involved in diagnosing operational problems, we >>> yelled, screamed, cajoled, encouraged, lobbied, and did whatever we could >>> to get the DoD procurement folks to add ICMP to their list of required >>> implementations. >>> >>> This discussion about SQ reminded me of another "gateway issue" from the >>> 1980s ICCB to-do list - "End-Middle Interactions". I'll write what I >>> remember about that separately. >>> >>> Jack >>> >>> >>> >>> On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >>> >>> All sorts of goodies: >>> >>> ICMP Echo (what used to power Ping until people decided they didn't like >>> folks probing) >>> >>> ICMP Unreachable (port or host) >>> >>> ICMP Problem Param (diagnostic) >>> >>> many more. >>> >>> On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history wrote: >>> >>> >>> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >>> messages. >>> >>> v >>> >>> >>> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history wrote: >>> >>> >>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history >> >>> internet-history at elists.isoc.org>> >>> >>> wrote: >>> >>> Forwarded for Barbara >>> >>> ==== >>> >>> Having trouble emailing again so i did some trimming on the original >>> >>> message.... >>> >>> Putting my packet radio hat back on, a source quench message could help >>> >>> disambiguate whether loss in the network is due to congestion or >>> >>> something >>> >>> else (like in wireless, loss due to harsh environments, jamming, >>> mobility). I also think it is not obvious what you should do when you >>> receive a source quench, but to me trying to understand this is just part >>> of trying to see if we can make things work better. How about what you >>> could do when you don't receive a source quench but have experienced >>> >>> loss? >>> >>> How is network coding coming along these days? >>> >>> barbara >>> >>> Any serious attempts to reinstitute ICMP source quench would have to go >>> through the IETF RFC process again because it?s been deprecated for some >>> time. 
[1] Also, many sites block ICMP outright (even though they?ve been >>> warned not to do this). [2] >>> >>> --gregbo >>> >>> [1] https://datatracker.ietf.org/doc/rfc6633/ >>> [2] >>> >>> >>> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>> >>> -- >>> Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190+1 (571) 213 1346 <(571)%20213-1346> >>> >>> >>> until further notice >>> -- >>> Internet-history mailing listInternet-history at elists.isoc.orghttps://elists.isoc.org/mailman/listinfo/internet-history >>> >>> >>> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 >> >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Sun Oct 6 12:41:52 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 7 Oct 2024 08:41:52 +1300 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> Message-ID: "The ISPs involved all did their tests and measurements, and reported that *their* part of the Internet was working just fine. " Of course it was. The Internet's "service model" is a best effort to deliver independent datagrams. Guaranteed delivery, error-free delivery, prompt delivery and in-order delivery are "nice to have". That's what the ISPs' economic model have always been based on, because it scales. This has consequences, one of which is buffer bloat. (The Web success story is quite similar - all attempts at two-way hyperlink systems have failed to scale, but the HTTP/HTML model based on best-effort one-way hyperlinks has succeeded.) Regards Brian On 07-Oct-24 06:22, Jack Haverty via Internet-history wrote: > Yes, I agree that Bufferbloat is the most likely root cause of what I > saw.? In fact, that testing experience is when I actually heard the term > "bufferbloat" for the first time and learned what it meant.?? I can > imagine how it probably happened over the years.?? It was undoubtedly > far easier to just add now-inexpensive memory to components inside the > network than it was to invent, and deploy, appropriate mechanisms to > replace the rudimentary "placeholders" of Source Quench, Type Of > Service, hop-based routing, et al, in all of the components and > organizations involved in the Internet. > > But what I also discovered was more disturbing than bufferbloat. > > Using the same tools I remembered from 40 years ago, we determined that > the bloated buffers were likely deep in the bowels of the Internet - > most likely inside a fiber carrier several ISPs away from either > endpoint of the test.? Our ability to analyze was hindered by the lack > of pervasive support today for mechanisms such as pings and traceroutes > at various points along the route.?? Parts of the route through the > Internet were cloaked in impenetrable (to us mere Users) shields. 
> > But the disturbing part was the attitude of the "providers" who operated > the various pieces involved along the route we were trying to use.? Some > of them, deep in the bowels of the Internet, wouldn't even talk to us > mere Users.?? Their customers were other ISPs.? They don't talk to > retail customers.? The ISPs involved all did their tests and > measurements, and reported that *their* part of the Internet was working > just fine.?? The software vendors in the Users' computers similarly said > their technology was working as it should, nothing to be fixed. > > No one knew much about Source Quench or other congestion control issues > and mechanisms.? Or Type of Service.? I assume that the IETF had by now > also deprecated even the rudimentary and ineffective mechanisms of > Source Quench, with no replacement mechanisms defined and deployed. > > My User friend tried all sorts of possible fixes.? As taught by > Marketing, he upgraded to higher speeds of Internet service.? That was > supposed to fix whatever problem you were experiencing.? It didn't.? He > switched to several different ISPs, at each end of the route.? No joy. > > This finger-pointing environment results in a situation where all of the > "operators" involved in my User's Internet communications believe that > everything of theirs is working fine and the problem must be somewhere > else.? But the User believes that the Internet is broken, unsuitable for > what he's trying to do, and no one is working to fix it. > > That polar disagreement between the Users and Providers of the Internet > was a disturbing (to me at least) revelation. > > I suspect the situation will deteriorate, since I frequently see > articles describing plans to use the Internet for tasks involving > real-time remote manipulation (telemedicine, remote surgery, distant > control of vehicles, equipment, etc.).?? My experience is admittedly > anecdotal, but I suspect it's not unique. > > I recommended to my User friend that he might try installing ancient > technology - dial-up modems at each end!?? Amazingly, you can still > purchase dial-up modems, even from Amazon.?? But I also advised him that > even such old tech might not be an improvement.?? If his "voice call" > became VOIP at any point along the way, his problems might not change much. > > His alternative was to forget about doing remote operations over the > Internet.?? It might be easier to simply move. > > Jack Haverty > > On 10/5/24 23:29, Vint Cerf wrote: >> sounds like your test discovered bufferbloat.... >> >> v >> >> >> On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: >> >> IIRC: >> >> When the internal mechanisms (such as SQ) were being debated and >> choices made to create TCP/IP V4 for adoption as the DoD Standard, >> the technology world was quite different.?? At the time (early >> 1980s), gateways had very little memory - sometimes only enough to >> hold one or at most a few IP datagrams.?? If a datagram arrived >> and there was no place to hold it, SQ back to the source was a way >> to say "Slow down. I just had to drop your last datagram". >> >> Over the decades, memory became a lot more available.? So gateways >> could easily have space to queue many datagrams. In one test I did >> just a few years ago, a stream of datagrams was sent from one site >> to another.? All were received intact and in order as sent.? No SQ >> messages were received.?? But latency soared.? Some datagrams took >> more than 30 seconds to reach their destination.?? 
Memory had >> become cheap enough that datagrams could just be held as long as >> needed. >> >> For anyone involved in operating a piece of the Internet, or for >> diagnosing users' complaints like "it's too slow", ICMP's >> facilities were crucial tools.?? They were flawed and incomplete, >> but still useful as ways to figure out what was happening. >> >> When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was >> not included.? As someone involved in diagnosing operational >> problems, we yelled, screamed, cajoled, encouraged, lobbied, and >> did whatever we could to get the DoD procurement folks to add ICMP >> to their list of required implementations. >> >> This discussion about SQ reminded me of another "gateway issue" >> from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll >> write what I remember about that separately. >> >> Jack >> >> >> >> On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >>> All sorts of goodies: >>> >>> ICMP Echo (what used to power Ping until people decided they didn't like >>> folks probing) >>> >>> ICMP Unreachable (port or host) >>> >>> ICMP Problem Param (diagnostic) >>> >>> many more. >>> >>> On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> isn't there more to ICMP than source quench? Seems wrong to ignore all ICMP >>>> messages. >>>> >>>> v >>>> >>>> >>>> On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>>> On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>>>> internet-history at elists.isoc.org >>> internet-history at elists.isoc.org>> >>>>> wrote: >>>>>> Forwarded for Barbara >>>>>> >>>>>> ==== >>>>>> >>>>>> Having trouble emailing again so i did some trimming on the original >>>>> message.... >>>>>> Putting my packet radio hat back on, a source quench message could help >>>>> disambiguate whether loss in the network is due to congestion or >>>> something >>>>> else (like in wireless, loss due to harsh environments, jamming, >>>>> mobility). I also think it is not obvious what you should do when you >>>>> receive a source quench, but to me trying to understand this is just part >>>>> of trying to see if we can make things work better. How about what you >>>>> could do when you don't receive a source quench but have experienced >>>> loss? >>>>>> How is network coding coming along these days? >>>>>> >>>>>> barbara >>>>> Any serious attempts to reinstitute ICMP source quench would have to go >>>>> through the IETF RFC process again because it?s been deprecated for some >>>>> time. [1] Also, many sites block ICMP outright (even though they?ve been >>>>> warned not to do this). 
[2] >>>>> >>>>> --gregbo >>>>> >>>>> [1]https://datatracker.ietf.org/doc/rfc6633/ >>>>> [2] >>>>> >>>> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>> -- >>>> Please send any postal/overnight deliveries to: >>>> Vint Cerf >>>> Google, LLC >>>> 1900 Reston Metro Plaza, 16th Floor >>>> Reston, VA 20190 >>>> +1 (571) 213 1346 >>>> >>>> >>>> until further notice >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >> >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 >> >> >> until further notice >> >> >> > > From karl at iwl.com Sun Oct 6 13:04:53 2024 From: karl at iwl.com (Karl Auerbach) Date: Sun, 6 Oct 2024 13:04:53 -0700 Subject: [ih] "Gateway Issue": End-Middle Interactions In-Reply-To: References: Message-ID: <4c7c397b-7d8e-4efe-b2f3-cef424132583@iwl.com> On 10/6/24 11:55 AM, Jack Haverty via Internet-history wrote: > --------------------- > Issue: End-Middle Interactions > In the mid/late 1990's Fred Baker and I worked on the router and client halves of the path resource reservation protocol, RSVP. That was a kind of end-to-middle interaction.? RSVP never caught on. But I do think that the idea ought to be resurrected in a modern form, but I have a nagging feel that even if we got the tech to work it would not be something that would be practical given the competitive nature of today's providers. A bit later when I was trying to figure out how to do very fast and cheap discovery and binding of video clients to video services I came up with the beginnings of a protocol to evaluate hither-to-yon paths (including routing branches) in a bit more than one round trip time and with low priority processing in the routers along those paths.? I called it a "Fast Path Characterization Protocol" or FPCP.? (This was part of a contest I had with Bruce Mah - I wanted to see what information I could squeeze out of a position inside the routers and switches along the way while Bruce was seeing what he could get from the outside.? He was more successful than I was, partially because I tend to over-design and end up sinking myself into a swamp of code.) The very sketchy draft of FPCP is at the URL below.? Although this work was done at Cisco I got permission to publish it. https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html ??? ??? --karl-- From brian.e.carpenter at gmail.com Sun Oct 6 14:00:42 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 7 Oct 2024 10:00:42 +1300 Subject: [ih] "Gateway Issue": End-Middle Interactions In-Reply-To: <4c7c397b-7d8e-4efe-b2f3-cef424132583@iwl.com> References: <4c7c397b-7d8e-4efe-b2f3-cef424132583@iwl.com> Message-ID: <8ceba83e-9bd7-4334-a003-a5841931c67c@gmail.com> Karl, There's inevitably a lot of end-to-middle in the ongoing work on deterministic networking [1]. As far as I can tell from a quick glance, they will likely use RSVP-TE [2] for signaling. However, at Internet scale, I doubt we'll ever see any descendants of RSVP. It would rely on stateful collaboration between ISPs along the path, and that basically doesn't happen. 
The diffserv model has some success across some ISP-ISP boundaries, but that's stateless. [1] https://datatracker.ietf.org/wg/detnet/documents/ [2] https://www.rfc-editor.org/rfc/rfc3209.html Regards Brian Carpenter On 07-Oct-24 09:04, Karl Auerbach via Internet-history wrote: > On 10/6/24 11:55 AM, Jack Haverty via Internet-history wrote: > >> --------------------- >> Issue: End-Middle Interactions >> > In the mid/late 1990's Fred Baker and I worked on the router and client > halves of the path resource reservation protocol, RSVP. That was a kind > of end-to-middle interaction.? RSVP never caught on. But I do think that > the idea ought to be resurrected in a modern form, but I have a nagging > feel that even if we got the tech to work it would not be something that > would be practical given the competitive nature of today's providers. > > A bit later when I was trying to figure out how to do very fast and > cheap discovery and binding of video clients to video services I came up > with the beginnings of a protocol to evaluate hither-to-yon paths > (including routing branches) in a bit more than one round trip time and > with low priority processing in the routers along those paths.? I called > it a "Fast Path Characterization Protocol" or FPCP.? (This was part of a > contest I had with Bruce Mah - I wanted to see what information I could > squeeze out of a position inside the routers and switches along the way > while Bruce was seeing what he could get from the outside.? He was more > successful than I was, partially because I tend to over-design and end > up sinking myself into a swamp of code.) > > The very sketchy draft of FPCP is at the URL below.? Although this work > was done at Cisco I got permission to publish it. > > https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html > > ??? ??? --karl-- > From jack at 3kitty.org Sun Oct 6 14:04:16 2024 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 6 Oct 2024 14:04:16 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> Message-ID: Yes, I agree, that's how it works. But I think that the Service Model has changed over time; the original goals in the early 80s were to provide multiple types of service, e.g., one for interactive needs where timeliness was most important, and another for bulk transfers where accurate delivery of everything sent was crucial.? That's why TCP was split from IP to enable services such as UDP. ?At some point that goal was apparently abandoned.? It might be of historical interest to know when that occurred and if it was an explicit decision and if so who made it. Another main difference now is in the Management of "The Internet".?? It has also changed over the decades. In the 80s, ARPA was in charge of the Internet.? Users knew who to call if they had problems.? The first "real" User I remember was Peter Kirstein and his crew at UCL, who relied on the Internet to do their everyday work. When Peter had a problem, he would call or email Vint at ARPA.? If the problem look like it might be related to the "core gateways", I then got a call or email from Vint.? Peter figured this process out and would then simply CC me on his first complaint. The "Gateway Group" at BBN (usually Hinden, Brescia, and/or Sheltzer) would get to work and figure it all out.? 
Perhaps it was a SATNET issue, but Dale McNeill was down the hall with the SATNET team if needed.?? Same with the Arpanet. When NSF entered the scene, I suspect Steve Wolff's phone number became more popular.?? Problems probably cascaded to Dave Mills or other NSF researchers? In today's Internet, who is in charge????? When you have a phantasmal Internet experience, "Who Ya Gonna Call?"? Where are the Internet's Ghostbusters? Jack On 10/6/24 12:41, Brian E Carpenter via Internet-history wrote: > "The ISPs involved all did their tests and > measurements, and reported that *their* part of the Internet was working > just fine. " > > Of course it was. The Internet's "service model" is a best effort to > deliver independent datagrams. Guaranteed delivery, error-free delivery, > prompt delivery and in-order delivery are "nice to have". That's what > the ISPs' economic model have always been based on, because it scales. > > This has consequences, one of which is buffer bloat. > > (The Web success story is quite similar - all attempts at two-way > hyperlink systems have failed to scale, but the HTTP/HTML model based > on best-effort one-way hyperlinks has succeeded.) > > Regards > ?? Brian > > On 07-Oct-24 06:22, Jack Haverty via Internet-history wrote: >> Yes, I agree that Bufferbloat is the most likely root cause of what I >> saw.? In fact, that testing experience is when I actually heard the term >> "bufferbloat" for the first time and learned what it meant.?? I can >> imagine how it probably happened over the years.?? It was undoubtedly >> far easier to just add now-inexpensive memory to components inside the >> network than it was to invent, and deploy, appropriate mechanisms to >> replace the rudimentary "placeholders" of Source Quench, Type Of >> Service, hop-based routing, et al, in all of the components and >> organizations involved in the Internet. >> >> But what I also discovered was more disturbing than bufferbloat. >> >> Using the same tools I remembered from 40 years ago, we determined that >> the bloated buffers were likely deep in the bowels of the Internet - >> most likely inside a fiber carrier several ISPs away from either >> endpoint of the test.? Our ability to analyze was hindered by the lack >> of pervasive support today for mechanisms such as pings and traceroutes >> at various points along the route.?? Parts of the route through the >> Internet were cloaked in impenetrable (to us mere Users) shields. >> >> But the disturbing part was the attitude of the "providers" who operated >> the various pieces involved along the route we were trying to use.? Some >> of them, deep in the bowels of the Internet, wouldn't even talk to us >> mere Users.?? Their customers were other ISPs.? They don't talk to >> retail customers.? The ISPs involved all did their tests and >> measurements, and reported that *their* part of the Internet was working >> just fine.?? The software vendors in the Users' computers similarly said >> their technology was working as it should, nothing to be fixed. >> >> No one knew much about Source Quench or other congestion control issues >> and mechanisms.? Or Type of Service.? I assume that the IETF had by now >> also deprecated even the rudimentary and ineffective mechanisms of >> Source Quench, with no replacement mechanisms defined and deployed. >> >> My User friend tried all sorts of possible fixes.? As taught by >> Marketing, he upgraded to higher speeds of Internet service. That was >> supposed to fix whatever problem you were experiencing.? It didn't.? 
He >> switched to several different ISPs, at each end of the route. No joy. >> >> This finger-pointing environment results in a situation where all of the >> "operators" involved in my User's Internet communications believe that >> everything of theirs is working fine and the problem must be somewhere >> else.? But the User believes that the Internet is broken, unsuitable for >> what he's trying to do, and no one is working to fix it. >> >> That polar disagreement between the Users and Providers of the Internet >> was a disturbing (to me at least) revelation. >> >> I suspect the situation will deteriorate, since I frequently see >> articles describing plans to use the Internet for tasks involving >> real-time remote manipulation (telemedicine, remote surgery, distant >> control of vehicles, equipment, etc.).?? My experience is admittedly >> anecdotal, but I suspect it's not unique. >> >> I recommended to my User friend that he might try installing ancient >> technology - dial-up modems at each end!?? Amazingly, you can still >> purchase dial-up modems, even from Amazon.?? But I also advised him that >> even such old tech might not be an improvement.?? If his "voice call" >> became VOIP at any point along the way, his problems might not change >> much. >> >> His alternative was to forget about doing remote operations over the >> Internet.?? It might be easier to simply move. >> >> Jack Haverty >> >> On 10/5/24 23:29, Vint Cerf wrote: >>> sounds like your test discovered bufferbloat.... >>> >>> v >>> >>> >>> On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: >>> >>> ???? IIRC: >>> >>> ???? When the internal mechanisms (such as SQ) were being debated and >>> ???? choices made to create TCP/IP V4 for adoption as the DoD Standard, >>> ???? the technology world was quite different.?? At the time (early >>> ???? 1980s), gateways had very little memory - sometimes only enough to >>> ???? hold one or at most a few IP datagrams.?? If a datagram arrived >>> ???? and there was no place to hold it, SQ back to the source was a way >>> ???? to say "Slow down. I just had to drop your last datagram". >>> >>> ???? Over the decades, memory became a lot more available.? So gateways >>> ???? could easily have space to queue many datagrams. In one test I did >>> ???? just a few years ago, a stream of datagrams was sent from one site >>> ???? to another.? All were received intact and in order as sent.? No SQ >>> ???? messages were received.?? But latency soared.? Some datagrams took >>> ???? more than 30 seconds to reach their destination.?? Memory had >>> ???? become cheap enough that datagrams could just be held as long as >>> ???? needed. >>> >>> ???? For anyone involved in operating a piece of the Internet, or for >>> ???? diagnosing users' complaints like "it's too slow", ICMP's >>> ???? facilities were crucial tools.?? They were flawed and incomplete, >>> ???? but still useful as ways to figure out what was happening. >>> >>> ???? When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was >>> ???? not included.? As someone involved in diagnosing operational >>> ???? problems, we yelled, screamed, cajoled, encouraged, lobbied, and >>> ???? did whatever we could to get the DoD procurement folks to add ICMP >>> ???? to their list of required implementations. >>> >>> ???? This discussion about SQ reminded me of another "gateway issue" >>> ???? from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll >>> ???? write what I remember about that separately. >>> >>> ???? Jack >>> >>> >>> >>> ???? 
On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >>>> ???? All sorts of goodies: >>>> >>>> ???? ICMP Echo (what used to power Ping until people decided they >>>> didn't like >>>> ???? folks probing) >>>> >>>> ???? ICMP Unreachable (port or host) >>>> >>>> ???? ICMP Problem Param (diagnostic) >>>> >>>> ???? many more. >>>> >>>> ???? On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>>> ???? isn't there more to ICMP than source quench? Seems wrong to >>>>> ignore all ICMP >>>>> ???? messages. >>>>> >>>>> ???? v >>>>> >>>>> >>>>> ???? On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via >>>>> Internet-history < >>>>> internet-history at elists.isoc.org> wrote: >>>>> >>>>>> ???? On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>>>>> internet-history at elists.isoc.org? >>>> internet-history at elists.isoc.org>> >>>>>> ???? wrote: >>>>>>> ???? Forwarded for Barbara >>>>>>> >>>>>>> ???? ==== >>>>>>> >>>>>>> ???? Having trouble emailing again so i did some trimming on the >>>>>>> original >>>>>> ???? message.... >>>>>>> ???? Putting my packet radio hat back on, a source quench >>>>>>> message could help >>>>>> ???? disambiguate whether loss in the network is due to >>>>>> congestion or >>>>> ???? something >>>>>> ???? else (like in wireless, loss due to harsh environments, >>>>>> jamming, >>>>>> ???? mobility).?? I also think it is not obvious what you should >>>>>> do when you >>>>>> ???? receive a source quench, but to me trying to understand this >>>>>> is just part >>>>>> ???? of trying to see if we can make things work better.? How >>>>>> about what you >>>>>> ???? could do when you don't receive a source quench but have >>>>>> experienced >>>>> ???? loss? >>>>>>> ???? How is network coding coming along these days? >>>>>>> >>>>>>> ???? barbara >>>>>> ???? Any serious attempts to reinstitute ICMP source quench would >>>>>> have to go >>>>>> ???? through the IETF RFC process again because it?s been >>>>>> deprecated for some >>>>>> ???? time. [1]? Also, many sites block ICMP outright (even though >>>>>> they?ve been >>>>>> ???? warned not to do this). [2] >>>>>> >>>>>> ???? --gregbo >>>>>> >>>>>> ???? [1]https://datatracker.ietf.org/doc/rfc6633/ >>>>>> ???? [2] >>>>>> >>>>> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>>>>> ???? -- >>>>>> ???? Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>>> ???? -- >>>>> ???? Please send any postal/overnight deliveries to: >>>>> ???? Vint Cerf >>>>> ???? Google, LLC >>>>> ???? 1900 Reston Metro Plaza, 16th Floor >>>>> ???? Reston, VA 20190 >>>>> ???? +1 (571) 213 1346 >>>>> >>>>> >>>>> ???? until further notice >>>>> ???? -- >>>>> ???? Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>> >>> >>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 >>> >>> >>> until further notice >>> >>> >>> >> >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From b_a_denny at yahoo.com Sun Oct 6 14:36:23 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Sun, 6 Oct 2024 21:36:23 +0000 (UTC) Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> Message-ID: <1947865845.16791597.1728250583990@mail.yahoo.com> There is also?P?l Spilling. I would think his presence probably included real users but I am sure Vint can fill us in. https://www.internetsociety.org/blog/2018/01/remembering-pal-spilling/ barbara On Sunday, October 6, 2024 at 02:04:31 PM PDT, Jack Haverty via Internet-history wrote: Yes, I agree, that's how it works. But I think that the Service Model has changed over time; the original goals in the early 80s were to provide multiple types of service, e.g., one for interactive needs where timeliness was most important, and another for bulk transfers where accurate delivery of everything sent was crucial.? That's why TCP was split from IP to enable services such as UDP. ?At some point that goal was apparently abandoned.? It might be of historical interest to know when that occurred and if it was an explicit decision and if so who made it. Another main difference now is in the Management of "The Internet".?? It has also changed over the decades. In the 80s, ARPA was in charge of the Internet.? Users knew who to call if they had problems.? The first "real" User I remember was Peter Kirstein and his crew at UCL, who relied on the Internet to do their everyday work. When Peter had a problem, he would call or email Vint at ARPA.? If the problem look like it might be related to the "core gateways", I then got a call or email from Vint.? Peter figured this process out and would then simply CC me on his first complaint. The "Gateway Group" at BBN (usually Hinden, Brescia, and/or Sheltzer) would get to work and figure it all out.? Perhaps it was a SATNET issue, but Dale McNeill was down the hall with the SATNET team if needed.?? Same with the Arpanet. When NSF entered the scene, I suspect Steve Wolff's phone number became more popular.?? Problems probably cascaded to Dave Mills or other NSF researchers? In today's Internet, who is in charge????? When you have a phantasmal Internet experience, "Who Ya Gonna Call?"? Where are the Internet's Ghostbusters? Jack On 10/6/24 12:41, Brian E Carpenter via Internet-history wrote: > "The ISPs involved all did their tests and > measurements, and reported that *their* part of the Internet was working > just fine. " > > Of course it was. The Internet's "service model" is a best effort to > deliver independent datagrams. Guaranteed delivery, error-free delivery, > prompt delivery and in-order delivery are "nice to have". That's what > the ISPs' economic model have always been based on, because it scales. > > This has consequences, one of which is buffer bloat. > > (The Web success story is quite similar - all attempts at two-way > hyperlink systems have failed to scale, but the HTTP/HTML model based > on best-effort one-way hyperlinks has succeeded.) > > Regards > ?? Brian > > On 07-Oct-24 06:22, Jack Haverty via Internet-history wrote: >> Yes, I agree that Bufferbloat is the most likely root cause of what I >> saw.? 
In fact, that testing experience is when I actually heard the term >> "bufferbloat" for the first time and learned what it meant.?? I can >> imagine how it probably happened over the years.?? It was undoubtedly >> far easier to just add now-inexpensive memory to components inside the >> network than it was to invent, and deploy, appropriate mechanisms to >> replace the rudimentary "placeholders" of Source Quench, Type Of >> Service, hop-based routing, et al, in all of the components and >> organizations involved in the Internet. >> >> But what I also discovered was more disturbing than bufferbloat. >> >> Using the same tools I remembered from 40 years ago, we determined that >> the bloated buffers were likely deep in the bowels of the Internet - >> most likely inside a fiber carrier several ISPs away from either >> endpoint of the test.? Our ability to analyze was hindered by the lack >> of pervasive support today for mechanisms such as pings and traceroutes >> at various points along the route.?? Parts of the route through the >> Internet were cloaked in impenetrable (to us mere Users) shields. >> >> But the disturbing part was the attitude of the "providers" who operated >> the various pieces involved along the route we were trying to use.? Some >> of them, deep in the bowels of the Internet, wouldn't even talk to us >> mere Users.?? Their customers were other ISPs.? They don't talk to >> retail customers.? The ISPs involved all did their tests and >> measurements, and reported that *their* part of the Internet was working >> just fine.?? The software vendors in the Users' computers similarly said >> their technology was working as it should, nothing to be fixed. >> >> No one knew much about Source Quench or other congestion control issues >> and mechanisms.? Or Type of Service.? I assume that the IETF had by now >> also deprecated even the rudimentary and ineffective mechanisms of >> Source Quench, with no replacement mechanisms defined and deployed. >> >> My User friend tried all sorts of possible fixes.? As taught by >> Marketing, he upgraded to higher speeds of Internet service. That was >> supposed to fix whatever problem you were experiencing.? It didn't.? He >> switched to several different ISPs, at each end of the route. No joy. >> >> This finger-pointing environment results in a situation where all of the >> "operators" involved in my User's Internet communications believe that >> everything of theirs is working fine and the problem must be somewhere >> else.? But the User believes that the Internet is broken, unsuitable for >> what he's trying to do, and no one is working to fix it. >> >> That polar disagreement between the Users and Providers of the Internet >> was a disturbing (to me at least) revelation. >> >> I suspect the situation will deteriorate, since I frequently see >> articles describing plans to use the Internet for tasks involving >> real-time remote manipulation (telemedicine, remote surgery, distant >> control of vehicles, equipment, etc.).?? My experience is admittedly >> anecdotal, but I suspect it's not unique. >> >> I recommended to my User friend that he might try installing ancient >> technology - dial-up modems at each end!?? Amazingly, you can still >> purchase dial-up modems, even from Amazon.?? But I also advised him that >> even such old tech might not be an improvement.?? If his "voice call" >> became VOIP at any point along the way, his problems might not change >> much. 
>> >> His alternative was to forget about doing remote operations over the >> Internet.?? It might be easier to simply move. >> >> Jack Haverty >> >> On 10/5/24 23:29, Vint Cerf wrote: >>> sounds like your test discovered bufferbloat.... >>> >>> v >>> >>> >>> On Sat, Oct 5, 2024 at 6:28?PM Jack Haverty wrote: >>> >>> ???? IIRC: >>> >>> ???? When the internal mechanisms (such as SQ) were being debated and >>> ???? choices made to create TCP/IP V4 for adoption as the DoD Standard, >>> ???? the technology world was quite different.?? At the time (early >>> ???? 1980s), gateways had very little memory - sometimes only enough to >>> ???? hold one or at most a few IP datagrams.?? If a datagram arrived >>> ???? and there was no place to hold it, SQ back to the source was a way >>> ???? to say "Slow down. I just had to drop your last datagram". >>> >>> ???? Over the decades, memory became a lot more available.? So gateways >>> ???? could easily have space to queue many datagrams. In one test I did >>> ???? just a few years ago, a stream of datagrams was sent from one site >>> ???? to another.? All were received intact and in order as sent.? No SQ >>> ???? messages were received.?? But latency soared.? Some datagrams took >>> ???? more than 30 seconds to reach their destination.?? Memory had >>> ???? become cheap enough that datagrams could just be held as long as >>> ???? needed. >>> >>> ???? For anyone involved in operating a piece of the Internet, or for >>> ???? diagnosing users' complaints like "it's too slow", ICMP's >>> ???? facilities were crucial tools.?? They were flawed and incomplete, >>> ???? but still useful as ways to figure out what was happening. >>> >>> ???? When the TCP/IP RFCs were adopted as the DoD Standard, ICMP was >>> ???? not included.? As someone involved in diagnosing operational >>> ???? problems, we yelled, screamed, cajoled, encouraged, lobbied, and >>> ???? did whatever we could to get the DoD procurement folks to add ICMP >>> ???? to their list of required implementations. >>> >>> ???? This discussion about SQ reminded me of another "gateway issue" >>> ???? from the 1980s ICCB to-do list - "End-Middle Interactions". ? I'll >>> ???? write what I remember about that separately. >>> >>> ???? Jack >>> >>> >>> >>> ???? On 10/5/24 11:26, Craig Partridge via Internet-history wrote: >>>> ???? All sorts of goodies: >>>> >>>> ???? ICMP Echo (what used to power Ping until people decided they >>>> didn't like >>>> ???? folks probing) >>>> >>>> ???? ICMP Unreachable (port or host) >>>> >>>> ???? ICMP Problem Param (diagnostic) >>>> >>>> ???? many more. >>>> >>>> ???? On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>>> ???? isn't there more to ICMP than source quench? Seems wrong to >>>>> ignore all ICMP >>>>> ???? messages. >>>>> >>>>> ???? v >>>>> >>>>> >>>>> ???? On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via >>>>> Internet-history < >>>>> internet-history at elists.isoc.org> wrote: >>>>> >>>>>> ???? On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < >>>>>> internet-history at elists.isoc.org? >>>> internet-history at elists.isoc.org>> >>>>>> ???? wrote: >>>>>>> ???? Forwarded for Barbara >>>>>>> >>>>>>> ???? ==== >>>>>>> >>>>>>> ???? Having trouble emailing again so i did some trimming on the >>>>>>> original >>>>>> ???? message.... >>>>>>> ???? Putting my packet radio hat back on, a source quench >>>>>>> message could help >>>>>> ???? 
disambiguate whether loss in the network is due to >>>>>> congestion or >>>>> ???? something >>>>>> ???? else (like in wireless, loss due to harsh environments, >>>>>> jamming, >>>>>> ???? mobility).?? I also think it is not obvious what you should >>>>>> do when you >>>>>> ???? receive a source quench, but to me trying to understand this >>>>>> is just part >>>>>> ???? of trying to see if we can make things work better.? How >>>>>> about what you >>>>>> ???? could do when you don't receive a source quench but have >>>>>> experienced >>>>> ???? loss? >>>>>>> ???? How is network coding coming along these days? >>>>>>> >>>>>>> ???? barbara >>>>>> ???? Any serious attempts to reinstitute ICMP source quench would >>>>>> have to go >>>>>> ???? through the IETF RFC process again because it?s been >>>>>> deprecated for some >>>>>> ???? time. [1]? Also, many sites block ICMP outright (even though >>>>>> they?ve been >>>>>> ???? warned not to do this). [2] >>>>>> >>>>>> ???? --gregbo >>>>>> >>>>>> ???? [1]https://datatracker.ietf.org/doc/rfc6633/ >>>>>> ???? [2] >>>>>> >>>>> https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf >>>>>> ???? -- >>>>>> ???? Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>>> >>>>> ???? -- >>>>> ???? Please send any postal/overnight deliveries to: >>>>> ???? Vint Cerf >>>>> ???? Google, LLC >>>>> ???? 1900 Reston Metro Plaza, 16th Floor >>>>> ???? Reston, VA 20190 >>>>> ???? +1 (571) 213 1346 >>>>> >>>>> >>>>> ???? until further notice >>>>> ???? -- >>>>> ???? Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>> >>> >>> >>> -- >>> Please send any postal/overnight deliveries to: >>> Vint Cerf >>> Google, LLC >>> 1900 Reston Metro Plaza, 16th Floor >>> Reston, VA 20190 >>> +1 (571) 213 1346 >>> >>> >>> until further notice >>> >>> >>> >> >> -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From dave.taht at gmail.com Sun Oct 6 14:50:55 2024 From: dave.taht at gmail.com (Dave Taht) Date: Sun, 6 Oct 2024 14:50:55 -0700 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> Message-ID: the cake-autorate project has made use of icmp type 13 and 14, which includes timestamps that let you calculate one way delay. It's been proving useful with coping with 5g. These guys are crazy... but my kind of crazy. https://github.com/lynxthecat/cake-autorate It turns out that there are many possible type 13/14 reflectors in the world but only on ipv4. https://github.com/lynxthecat/cake-autorate/blob/master/ANALYSIS.md On Sat, Oct 5, 2024 at 11:26?AM Craig Partridge via Internet-history < internet-history at elists.isoc.org> wrote: > All sorts of goodies: > > ICMP Echo (what used to power Ping until people decided they didn't like > folks probing) > > ICMP Unreachable (port or host) > > ICMP Problem Param (diagnostic) > > many more. > > On Sat, Oct 5, 2024 at 10:50?AM Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > > > isn't there more to ICMP than source quench? Seems wrong to ignore all > ICMP > > messages. 
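For anyone curious what the type 13/14 trick Dave mentions looks like in practice, here is a rough sketch, assuming the third-party scapy library, root privileges, and a reflector that actually answers timestamp requests (the address below is only a placeholder). The split into outbound and return estimates is only as good as the clock agreement between the two ends; cake-autorate tracks that drift over time, this sketch does not.

#!/usr/bin/env python3
# Rough sketch of an ICMP Timestamp (type 13) probe.  The type 14 reply
# carries the reflector's receive and transmit clocks (ms since midnight UTC),
# so the round trip can be split into outbound and return estimates, subject
# to the offset between the two clocks.  Needs root and scapy; IPv4 only.

from datetime import datetime, timezone
from scapy.all import IP, ICMP, sr1

REFLECTOR = "192.0.2.1"   # placeholder; use a host known to answer type 13

def ms_since_midnight_utc():
    now = datetime.now(timezone.utc)
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return int((now - midnight).total_seconds() * 1000)

t_orig = ms_since_midnight_utc()
reply = sr1(IP(dst=REFLECTOR) / ICMP(type=13, ts_ori=t_orig, ts_rx=0, ts_tx=0),
            timeout=2, verbose=False)
if reply is not None and reply[ICMP].type == 14:
    t_back = ms_since_midnight_utc()
    outbound = reply[ICMP].ts_rx - t_orig   # includes any clock offset
    inbound = t_back - reply[ICMP].ts_tx    # includes the opposite offset
    print(f"outbound ~{outbound} ms, return ~{inbound} ms, rtt {t_back - t_orig} ms")
else:
    print("no timestamp reply (many hosts ignore or filter type 13)")
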
> > > > v > > > > > > On Sat, Oct 5, 2024 at 12:04?PM Greg Skinner via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > On Oct 3, 2024, at 9:02?AM, Greg Skinner via Internet-history < > > > internet-history at elists.isoc.org > internet-history at elists.isoc.org>> > > > wrote: > > > > > > > > Forwarded for Barbara > > > > > > > > ==== > > > > > > > > Having trouble emailing again so i did some trimming on the original > > > message.... > > > > > > > > Putting my packet radio hat back on, a source quench message could > help > > > disambiguate whether loss in the network is due to congestion or > > something > > > else (like in wireless, loss due to harsh environments, jamming, > > > mobility). I also think it is not obvious what you should do when you > > > receive a source quench, but to me trying to understand this is just > part > > > of trying to see if we can make things work better. How about what you > > > could do when you don't receive a source quench but have experienced > > loss? > > > > > > > > How is network coding coming along these days? > > > > > > > > barbara > > > > > > Any serious attempts to reinstitute ICMP source quench would have to go > > > through the IETF RFC process again because it?s been deprecated for > some > > > time. [1] Also, many sites block ICMP outright (even though they?ve > been > > > warned not to do this). [2] > > > > > > --gregbo > > > > > > [1] https://datatracker.ietf.org/doc/rfc6633/ > > > [2] > > > > > > https://www.linkedin.com/pulse/icmp-dilemma-why-blocking-makes-you-networking-noob-ronald-bartels-ikvnf > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > > > > -- > > Please send any postal/overnight deliveries to: > > Vint Cerf > > Google, LLC > > 1900 Reston Metro Plaza, 16th Floor > > Reston, VA 20190 > > +1 (571) 213 1346 > > > > > > until further notice > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Dave T?ht CSO, LibreQos From salo at saloits.com Sun Oct 6 16:39:41 2024 From: salo at saloits.com (Timothy J. Salo) Date: Sun, 6 Oct 2024 18:39:41 -0500 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: <1699812208.16759485.1728238215209@mail.yahoo.com> References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> <1699812208.16759485.1728238215209@mail.yahoo.com> Message-ID: On 10/6/2024 1:10 PM, Barbara Denny via Internet-history wrote: > In a while whatever problem I was trying to get them to look at disappeared. "Cleared while testing." Telcos, in particular, seemed inclined to never actually find anything wrong with a circuit, but the problem would magically go away after you called. Not surprising that ISPs learned this bellhead behavior. 
-tjs From gregskinner0 at icloud.com Sun Oct 6 19:20:35 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Sun, 6 Oct 2024 19:20:35 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> Message-ID: <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> On Oct 4, 2024, at 1:37?PM, Jack Haverty via Internet-history wrote: > > Thanks, Greg and Barbara. I hadn't heard of UNH's lab or NIST's test suites. It's good to see that somebody's doing something. > > I think a real measure of such testing and certification is the extent to which organizations, or even individuals, are using such tests and certificates to influence their purchasing decisions. I don't recall ever seeing a NIST or UNH logo or other indication that a product passed their tests or that an ISP I might sign up with provides some specific certified "capability". But of course I wasn't really looking for such labelling either. I wonder if Copilot has any info on that...? > > NIST's states that certain "IPV6 capabilities ... have since failed to achieve significant support ..." That statement implies that NIST has data about what "capabilities" are in fact present in products and/or actually used in operational networks. I wonder what they know, how they learned it, and whether or not they certified it by testing. > > I haven't used Copilot, but I've been playing a bit with GPT4All running on my desktop. So far, I'm not impressed much. I fed it a large folder of "network stuff" I've collected over the decades - emails, papers, reports, documents, and anything else I've gleaned from the email forums or the Web. I then asked it when was the first gateway on the Internet. It reported back that the first gateway on the Internet was the Arpanet circa 1970. Not quite what I expected. > > Jack > I put the question ?Which networking equipment bears NIST logos?? to Bing Copilot, and here?s what it had to say: ==== Networking equipment itself typically doesn't bear NIST logos. However, NIST (National Institute of Standards and Technology) provides guidelines and standards for securing network equipment and infrastructure. For example, NIST's Special Publication 800-215 offers guidance on securing enterprise network landscapes, covering technologies like firewalls, VPNs, and secure access service edge (SASE) solutions??. If you're looking for equipment that complies with NIST standards, you might want to check if the equipment meets specific NIST guidelines or certifications rather than looking for a NIST logo on the hardware itself. Is there a particular type of networking equipment you're interested in? Source: Conversation with Copilot, 10/6/2024 (1) Guide to a Secure Enterprise Network Landscape - NIST Computer Security .... https://csrc.nist.gov/pubs/sp/800/215/final. (2) NIST Publishes SP 800-215: Guide to a Secure Enterprise Network .... 
https://csrc.nist.gov/News/2022/sp-800-215-secure-enterprise-network-landscape. (3) Networking - NIST. https://www.nist.gov/networking. ==== For some of the things we?ve been discussing, such as ECN, I don?t think a certification logo would be of much use to end users. I could make an argument for certification logos for standards that are more visible to end users, such as HTTPS, SSL/TLS, and a few others. --gregbo From jack at 3kitty.org Sun Oct 6 19:28:30 2024 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 6 Oct 2024 19:28:30 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> Message-ID: <93b22137-c5bb-4b70-98e9-135e8dd609ba@3kitty.org> Several years ago, I suggested to someone on the ISOC Board that ISOC consider developing some kind of "ISOC Approved" logo, and of course associated testing mechanisms and bureaucracy.?? I'm not sure what if anything happened afterwards though. Jack On 10/6/24 19:20, Greg Skinner wrote: > On Oct 4, 2024, at 1:37?PM, Jack Haverty via Internet-history wrote: >> Thanks, Greg and Barbara. I hadn't heard of UNH's lab or NIST's test suites. It's good to see that somebody's doing something. >> >> I think a real measure of such testing and certification is the extent to which organizations, or even individuals, are using such tests and certificates to influence their purchasing decisions. I don't recall ever seeing a NIST or UNH logo or other indication that a product passed their tests or that an ISP I might sign up with provides some specific certified "capability". But of course I wasn't really looking for such labelling either. I wonder if Copilot has any info on that...? >> >> NIST's states that certain "IPV6 capabilities ... have since failed to achieve significant support ..." That statement implies that NIST has data about what "capabilities" are in fact present in products and/or actually used in operational networks. I wonder what they know, how they learned it, and whether or not they certified it by testing. >> >> I haven't used Copilot, but I've been playing a bit with GPT4All running on my desktop. So far, I'm not impressed much. I fed it a large folder of "network stuff" I've collected over the decades - emails, papers, reports, documents, and anything else I've gleaned from the email forums or the Web. I then asked it when was the first gateway on the Internet. It reported back that the first gateway on the Internet was the Arpanet circa 1970. Not quite what I expected. >> >> Jack >> > I put the question ?Which networking equipment bears NIST logos?? to Bing Copilot, and here?s what it had to say: > > ==== > > Networking equipment itself typically doesn't bear NIST logos. 
However, NIST (National Institute of Standards and Technology) provides guidelines and standards for securing network equipment and infrastructure. For example, NIST's Special Publication 800-215 offers guidance on securing enterprise network landscapes, covering technologies like firewalls, VPNs, and secure access service edge (SASE) solutions??. > > If you're looking for equipment that complies with NIST standards, you might want to check if the equipment meets specific NIST guidelines or certifications rather than looking for a NIST logo on the hardware itself. Is there a particular type of networking equipment you're interested in? > > Source: Conversation with Copilot, 10/6/2024 > (1) Guide to a Secure Enterprise Network Landscape - NIST Computer Security ....https://csrc.nist.gov/pubs/sp/800/215/final. > (2) NIST Publishes SP 800-215: Guide to a Secure Enterprise Network ....https://csrc.nist.gov/News/2022/sp-800-215-secure-enterprise-network-landscape. > (3) Networking - NIST.https://www.nist.gov/networking. > > ==== > > For some of the things we?ve been discussing, such as ECN, I don?t think a certification logo would be of much use to end users. I could make an argument for certification logos for standards that are more visible to end users, such as HTTPS, SSL/TLS, and a few others. > > --gregbo -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature.asc Type: application/pgp-signature Size: 665 bytes Desc: OpenPGP digital signature URL: From tony1athome at gmail.com Sun Oct 6 19:45:04 2024 From: tony1athome at gmail.com (Tony Li) Date: Mon, 7 Oct 2024 04:45:04 +0200 Subject: [ih] "Gateway Issue": End-Middle Interactions In-Reply-To: <8ceba83e-9bd7-4334-a003-a5841931c67c@gmail.com> References: <8ceba83e-9bd7-4334-a003-a5841931c67c@gmail.com> Message-ID: If it?s any consolation, RSVP is alive and well. It?s been repurposed for traffic engineering and is a mainstay in the backbone. Regards, T > On Oct 6, 2024, at 11:00 PM, Brian E Carpenter via Internet-history wrote: > > ?Karl, > > There's inevitably a lot of end-to-middle in the ongoing work > on deterministic networking [1]. As far as I can tell from a > quick glance, they will likely use RSVP-TE [2] for signaling. > > However, at Internet scale, I doubt we'll ever see any descendants > of RSVP. It would rely on stateful collaboration between ISPs > along the path, and that basically doesn't happen. The diffserv > model has some success across some ISP-ISP boundaries, but that's > stateless. > > [1] https://datatracker.ietf.org/wg/detnet/documents/ > [2] https://www.rfc-editor.org/rfc/rfc3209.html > > Regards > Brian Carpenter > >> On 07-Oct-24 09:04, Karl Auerbach via Internet-history wrote: >>> On 10/6/24 11:55 AM, Jack Haverty via Internet-history wrote: >>> --------------------- >>> Issue: End-Middle Interactions >>> >> In the mid/late 1990's Fred Baker and I worked on the router and client >> halves of the path resource reservation protocol, RSVP. That was a kind >> of end-to-middle interaction. RSVP never caught on. But I do think that >> the idea ought to be resurrected in a modern form, but I have a nagging >> feel that even if we got the tech to work it would not be something that >> would be practical given the competitive nature of today's providers. 
>> A bit later when I was trying to figure out how to do very fast and >> cheap discovery and binding of video clients to video services I came up >> with the beginnings of a protocol to evaluate hither-to-yon paths >> (including routing branches) in a bit more than one round trip time and >> with low priority processing in the routers along those paths. I called >> it a "Fast Path Characterization Protocol" or FPCP. (This was part of a >> contest I had with Bruce Mah - I wanted to see what information I could >> squeeze out of a position inside the routers and switches along the way >> while Bruce was seeing what he could get from the outside. He was more >> successful than I was, partially because I tend to over-design and end >> up sinking myself into a swamp of code.) >> The very sketchy draft of FPCP is at the URL below. Although this work >> was done at Cisco I got permission to publish it. >> https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html >> --karl-- > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From elmi at 4ever.de Mon Oct 7 00:51:40 2024 From: elmi at 4ever.de (Elmar K. Bins) Date: Mon, 7 Oct 2024 08:51:40 +0100 Subject: [ih] bufferbloat and modern congestion control (was 4004) In-Reply-To: References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org> <1699812208.16759485.1728238215209@mail.yahoo.com> Message-ID: internet-history at elists.isoc.org (Timothy J. Salo via Internet-history) wrote: > Telcos, in particular, seemed inclined to never actually find anything > wrong with a circuit, but the problem would magically go away after you > called. This has changed with most of the current operators, big and small. I generally get useful summaries after things have resolved. Surprised me when it started, feels like 5-6 years ago. Except for Western/Southern European Incumbents. You know who you are, packet droppers! - Elmar From dhc at dcrocker.net Mon Oct 7 19:36:57 2024 From: dhc at dcrocker.net (Dave Crocker) Date: Tue, 08 Oct 2024 02:36:57 +0000 (UTC) Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <93b22137-c5bb-4b70-98e9-135e8dd609ba@3kitty.org> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> <93b22137-c5bb-4b70-98e9-135e8dd609ba@3kitty.org> Message-ID: <63ac751b-3ced-4aa1-a522-ae192d0dcdce@dcrocker.net> On 10/6/2024 7:28 PM, Jack Haverty via Internet-history wrote: > Several years ago, I suggested to someone on the ISOC Board that ISOC > consider developing some kind of "ISOC Approved" logo, and of course > associated testing mechanisms and bureaucracy.?? I'm not sure what if > anything happened afterwards though. 
That is an operational effort and since it delivers success/failure assessments, is likely to be controversial. Perhaps I've missed bit, but I believe ISOC has neither of these in its DNA. Perhaps a semi-independent certification effort, with ISOC oversight -- which it does have in its DNA. But ultimately the question is not just what the value add will be, over long-standing processes, but how will that value be enough to justify the considerable effort and expense?? Oh, and how will industry be convinced to engage with it, given it hasn't had to, for 40 years? d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From brian.e.carpenter at gmail.com Mon Oct 7 21:14:39 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 8 Oct 2024 17:14:39 +1300 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: <63ac751b-3ced-4aa1-a522-ae192d0dcdce@dcrocker.net> References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> <93b22137-c5bb-4b70-98e9-135e8dd609ba@3kitty.org> <63ac751b-3ced-4aa1-a522-ae192d0dcdce@dcrocker.net> Message-ID: The IPv6 Forum "IPv6 Ready Logo" was an attempt at something like this. Being a member of the Forum's very populous Hall of Fame, it's not for me to judge its effectiveness. Regards Brian On 08-Oct-24 15:36, Dave Crocker via Internet-history wrote: > On 10/6/2024 7:28 PM, Jack Haverty via Internet-history wrote: >> Several years ago, I suggested to someone on the ISOC Board that ISOC >> consider developing some kind of "ISOC Approved" logo, and of course >> associated testing mechanisms and bureaucracy.?? I'm not sure what if >> anything happened afterwards though. > > That is an operational effort and since it delivers success/failure > assessments, is likely to be controversial. Perhaps I've missed bit, but > I believe ISOC has neither of these in its DNA. > > Perhaps a semi-independent certification effort, with ISOC oversight -- > which it does have in its DNA. > > But ultimately the question is not just what the value add will be, over > long-standing processes, but how will that value be enough to justify > the considerable effort and expense?? Oh, and how will industry be > convinced to engage with it, given it hasn't had to, for 40 years? 
> > d/ > From gregskinner0 at icloud.com Tue Oct 8 00:00:29 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Tue, 8 Oct 2024 00:00:29 -0700 Subject: [ih] Gateway Issue: Certification (was Re: booting linux on a 4004) In-Reply-To: References: <3CA5EE85-FCCD-4AD8-A36E-56CC6002BCFC@comcast.net> <4606fbda-08c8-4f7a-8729-2c5d3ea46546@3kitty.org> <150932309.14410485.1727715976915@mail.yahoo.com> <068B1F96-0BFA-4698-A2BD-E020947837A7@comcast.net> <48eda1ad-6cff-40b0-823c-ef692eb2c774@3kitty.org> <6A2500BD-68C4-4AF2-9553-FF0EAC06C176@comcast.net> <4764dbe6-76e7-4a1a-a1aa-ad766ff21b0e@3kitty.org> <7153c300-a079-4d67-9f80-f3876408e5e3@3kitty.org> <2f49205b-361b-48b6-b193-23e461122792@3kitty.org> <5DE42C80-B9A3-437D-9BDC-00CD654D017A@icloud.com> <806618676.16280720.1728063604562@mail.yahoo.com> <7b7464f9-8eb6-4b07-9552-eb7ca017f442@3kitty.org> <2A4A18DD-37D4-4BF1-982E-515F60DFDC31@icloud.com> <93b22137-c5bb-4b70-98e9-135e8dd609ba@3kitty.org> <63ac751b-3ced-4aa1-a522-ae192d0dcdce@dcrocker.net> Message-ID: I just paid a visit to the IPv6 Ready Logo Program site . Interesting. IMO, it has an impressive Approved Products list. For what it's worth, here?s what Bing Copilot returns when asked to compare it with NIST certifications: ==== How effective is the IPv6 Ready Logo Program compared to other certification programs such as those provided by NIST? The IPv6 Ready Logo Program and NIST certification programs both aim to ensure the reliability and interoperability of technology, but they have different focuses and methodologies. Here's a comparison of their effectiveness: ### IPv6 Ready Logo Program 1. **Focus**: Primarily targets IPv6 conformance and interoperability. It ensures that products can operate seamlessly in an IPv6 environment?. 2. **Global Reach**: Has a worldwide presence with multiple testing laboratories, which helps maintain consistent standards internationally?. 3. **User Confidence**: By obtaining the IPv6 Ready Logo, products demonstrate their readiness for IPv6, increasing user confidence in their deployment?. 4. **Comprehensive Testing**: Includes extensive test specifications covering core IPv6 protocols and additional extended test categories?. ### NIST Certification Programs 1. **Broad Scope**: NIST provides a wide range of certification programs covering various aspects of technology, including cybersecurity, cryptography, and more?. 2. **Rigorous Standards**: Known for its stringent testing and certification processes, ensuring high levels of security and reliability. 3. **Government and Industry Trust**: NIST certifications are highly regarded and often required for government and industry compliance. 4. **Comprehensive Guidelines**: Provides detailed guidelines and frameworks that help organizations implement best practices. ### Effectiveness Comparison - **Specialization vs. Generalization**: The IPv6 Ready Logo Program is highly specialized, focusing solely on IPv6, which makes it very effective for ensuring IPv6 readiness. NIST, on the other hand, covers a broader range of technologies, providing comprehensive security and reliability certifications. - **Global vs. National**: The IPv6 Ready Logo Program has a more global approach, while NIST certifications are primarily recognized and required within the United States, though they are respected internationally. - **User Confidence**: Both programs enhance user confidence, but in different domains. 
IPv6 Ready Logo is crucial for IPv6 deployment, while NIST certifications are essential for cybersecurity and compliance. Both programs are effective in their respective areas, and the choice between them depends on the specific needs and focus of the organization. Is there a particular aspect of these programs you are more interested in? Source: Conversation with Copilot, 10/7/2024 (1) IPv6 Ready Logo Program Whitepaper. https://www.ipv6ready.org/docs/IPv6_Ready_Logo_White_Paper_Final.pdf. (2) IPv6 Ready Logo Site | Home. https://www.ipv6ready.org/. (3) IPV6 Ready | IPV6 Ready Logo | IPV6 Ready Logo Test Lab. https://cnlabsglobal.com/ipv6-ready-logo-program/. (4) IPV6 Ready Logo| IPV6 Certification | IPV6 Test Programs | IPV6 Testing. https://cnlabsglobal.com/ipv6-certification/. ==== > On Oct 7, 2024, at 9:14?PM, Brian E Carpenter via Internet-history wrote: > > The IPv6 Forum "IPv6 Ready Logo" was an attempt at something like this. > Being a member of the Forum's very populous Hall of Fame, it's not for > me to judge its effectiveness. > > Regards > Brian > > On 08-Oct-24 15:36, Dave Crocker via Internet-history wrote: >> On 10/6/2024 7:28 PM, Jack Haverty via Internet-history wrote: >>> Several years ago, I suggested to someone on the ISOC Board that ISOC >>> consider developing some kind of "ISOC Approved" logo, and of course >>> associated testing mechanisms and bureaucracy. I'm not sure what if >>> anything happened afterwards though. >> That is an operational effort and since it delivers success/failure >> assessments, is likely to be controversial. Perhaps I've missed bit, but >> I believe ISOC has neither of these in its DNA. >> Perhaps a semi-independent certification effort, with ISOC oversight -- >> which it does have in its DNA. >> But ultimately the question is not just what the value add will be, over >> long-standing processes, but how will that value be enough to justify >> the considerable effort and expense? Oh, and how will industry be >> convinced to engage with it, given it hasn't had to, for 40 years? >> d/ > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gregskinner0 at icloud.com Wed Oct 9 23:47:19 2024 From: gregskinner0 at icloud.com (Greg Skinner) Date: Wed, 9 Oct 2024 23:47:19 -0700 Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> Message-ID: <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> On Oct 5, 2024, at 5:42?PM, Craig Partridge wrote: > > As someone who was in touch with Raj/KK and Van/Mike during the development of congestion control. They were unaware of each other's work until spring of 1988, when they realized they were doing very similar stuff. I think, someone (Dave Clark) in the End2End Research Group became aware of Raj & KK's work and invited them to come present to an E2E meeting in early 1988 and E2E (more than IETF) was where Van was working out the kinks in his congestion control work with Mike. > > Craig > I looked into this a bit, and discovered that Raj/KK and Van/Mike were all at the 6th IETF, which took place in April 1987. 
[1] (It was a joint meeting of the IETF and ANSI X3S3.3 Network and Transport
Layer standards groups.) Both teams presented their work at the meeting.

> On Sat, Oct 5, 2024 at 5:34 PM John Day via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> The work of Jain's DEC team existed at the same time and I believe
>> Jacobson's original paper references it.
>>
>> As I said, at least it does congestion avoidance without causing
>> congestion (unless under extreme conditions).
>>
>> I suspect that the main reason Jacobson didn't adopt it was that they
>> were trying to maximize the data rate by running as close to congestion
>> collapse as they could. While Jain's work attempted to balance the
>> trade-off between throughput and response time. But that is just policy
>> they still could have used ECN to keep from being predatory and used ECN
>> while waiting until the queue is full to mark the packets. That is what TCP
>> use of ECN does now. Of course, I think that is bad choice because it
>> generates lots of retransmissions.

Some of the reasons why Van/Mike took the approach they did were discussed
in an email message Van sent to the tcp-ip list. It included some
discussions that had taken place on the ietf and end2end-interest lists. [2]
IMO, it's unfortunate that the existing archives of those lists are
incomplete, because we would otherwise be able to read the points of view
expressed by the list participants.

>> When I asked Jain why his wasn't adopted, he said he isn't an implementor,
>> but an experimenter.
>>
>> But it is not uncommon to be so focused on the immediate problem to fail
>> to notice the system implications.

John, what could they have done that would have met your criteria and
yielded a deployable solution to the congestion problems existing at that
time in the timeframe that it was needed? IMO, their paper should be
assessed in that context.

--gregbo

[1] https://www.ietf.org/proceedings/06.pdf
[2] https://ee.lbl.gov/tcp.html

From craig at tereschau.net  Thu Oct 10 05:53:11 2024
From: craig at tereschau.net (Craig Partridge)
Date: Thu, 10 Oct 2024 06:53:11 -0600
Subject: [ih] booting linux on a 4004
In-Reply-To: <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com>
References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu>
 <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com>
 <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com>
 <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com>
 <1191418713.15094180.1727830223232@mail.yahoo.com>
 <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net>
 <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com>
 <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com>
Message-ID:

Hi Greg:

Thanks for correcting my faulty memory. As partial recompense for being
wrong, I'll note I have a partial set of the end2end-interest archives if
there are questions. As recompense for my error, I offer the following
tidbit:

Posted-Date: Tue, 31 Mar 87 17:58:17 PST
To: Craig Partridge
Cc: end2end-tf at venera.isi.edu
Subject: Re: Thinking about Congestion
In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST.
Date: Tue, 31 Mar 87 17:58:17 PST
From: Van Jacobson

Craig -

Your note pushed one of my buttons: Sending a lot of data
into a congested network doesn't improve transmit efficiency
any more than disconnecting the collision detect wire on
your ethernet would. Either action makes everyone on the net,
including you, lose.

There is always an optimum window size but computing it requires
knowing how packet loss scales with window size.
To first order, the scaling will be the exponential (1 - A)^W where W is the
window size and A is a network dependent constant (0 < A < 1).
For a long haul net, no-loss throughput will scale with window
size like W/T where T is the round trip time. The effective
throughput will go like the product of these two terms. For
small W the linear term dominates and you see linear throughput
increase with increasing window size. For large W the loss term
dominates and you see exponential throughput decrease with
increasing window size. For small A (low loss rates), the
optimum window size will scale like -1/log(1-a).

It's possible to do a more exact analysis. A few years ago a
friend of mine was working on a tcp/ip implementation for a well
known supercomputer manufacturer. At the time there was a huge
debate in the company on whether to "modify" tcp. It seems that
some cretin in management had decided that the only way to get
good network performance was to do huge transfers, where "huge"
was much larger than the 64K allowed by the tcp window size
field. I was simulating very high performance fiber optic nets
at the time and found this argument to be completely at odds with
my results. I was so incensed that I wrote a little 5 page paper
for my friend titled "Some notes on choosing an optimum transfer
size" that started out:

    "The choice of network transfer size seems to have been
    driven by the idea that ``bigger is better''. While this
    reflects a good, American upbringing, it bears only faint
    resemblance to reality. In the unlikely event that a future
    decision is made on rational grounds, this note describes the
    mathematical basis for choice of window and transfer size."

I'm afraid it went on in much the same tone (I must have been
drunk when I wrote it) but I did summarize how to apply Erlang's
and Hill's loss functions to tcp (the same analysis would apply
to rdp - the only difference is rdp gains a factor of two in
throughput over tcp at very high loss rates). If you're
interested in the math, I'd be glad to send you extracts from
this thing or the references I used.

  - Van

On Thu, Oct 10, 2024 at 12:47 AM Greg Skinner wrote:

>
> On Oct 5, 2024, at 5:42 PM, Craig Partridge wrote:
>
> As someone who was in touch with Raj/KK and Van/Mike during the
> development of congestion control. They were unaware of each other's work
> until spring of 1988, when they realized they were doing very similar
> stuff. I think, someone (Dave Clark) in the End2End Research Group became
> aware of Raj & KK's work and invited them to come present to an E2E meeting
> in early 1988 and E2E (more than IETF) was where Van was working out the
> kinks in his congestion control work with Mike.
>
> Craig
>
>
> I looked into this a bit, and discovered that Raj/KK and Van/Mike were all
> at the 6th IETF, which took place in April 1987. [1] (It was a joint
> meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards
> groups.) Both teams presented their work at the meeting.
>
> On Sat, Oct 5, 2024 at 5:34 PM John Day via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> The work of Jain's DEC team existed at the same time and I believe
>> Jacobson's original paper references it.
>>
>> As I said, at least it does congestion avoidance without causing
>> congestion (unless under extreme conditions).
>>
>> I suspect that the main reason Jacobson didn't adopt it was that they
>> were trying to maximize the data rate by running as close to congestion
>> collapse as they could.
While Jain?s work attempted to balance the >> trade-off between throughput and response time. But that is just policy >> they still could have used ECN to keep from being predatory and used ECN >> while waiting until the queue is full to mark the packets. That is what TCP >> use of ECN does now. Of course, I think that is bad choice because it >> generates lots of retransmissions. >> >> > Some of the reasons why Van/Mike took the approach they did were discussed > in a email message Van sent to the tcp-ip list. It included some > discussions that had taken place on the ietf and end2end-interest lists. > [2] IMO, it?s unfortunate that the existing archives of those lists, > because we would be able to read the points of view expressed by the list > participants. > > When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, >> but an experimenter. >> >> But it is not uncommon to be so focused on the immediate problem to fail >> to notice the system implications. >> > > John, what could they have done that would have met your criteria and > yielded a deployable solution to the congestion problems existing at that > time in the timeframe that it was needed? IMO, their paper should be > assessed in that context. > > --gregbo > > [1] https://www.ietf.org/proceedings/06.pdf > [2] https://ee.lbl.gov/tcp.html > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From b_a_denny at yahoo.com Thu Oct 10 09:56:00 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 10 Oct 2024 16:56:00 +0000 (UTC) Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> Message-ID: <248054650.362568.1728579360999@mail.yahoo.com> Reminds me of how much effort went into selecting a packet size and the coding at the link layer in a packet radio environment.? If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARs radio.? I looked recently to see if I could? find a writeup but couldn't find anything quickly regarding this particular effort (Our discussion regarding MTU got me thinking on this topic).? The web page for him at Clemson does mention his current research is network coding for packet radio networks. :-) barbara On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history wrote: Hi Greg: Thanks for correcting my faulty memory.? As partial recompense for being wrong, I'll note I have a partial set of the end2end-interest archives if there are questions.? As recompense for my error, I offer the following tidbit: Posted-Date: Tue, 31 Mar 87 17:58:17 PST To: Craig Partridge Cc: end2end-tf at venera.isi.edu Subject: Re: Thinking about Congestion In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST. Date: Tue, 31 Mar 87 17:58:17 PST From: Van Jacobson Craig - Your note pushed one of my buttons:? Sending a lot of data into a congested network doesn't improve transmit efficiency any more than disconnecting the collision detect wire on your ethernet would.? 
Either action makes everyone on the net, including you, lose. There is always an optimum window size but computing it requires knowing how packet loss scales with window size.? To first order, the scaling will be the exponential (1 - A)^W where W is the window size and A is a network dependent constant (0 < A < 1). For a long haul net, no-loss throughput will scale with window size like W/T where T is the round trip time.? The effective throughput will go like the product of these two terms.? For small W the linear term dominates and you see linear throughput increase with increasing window size.? For large W the loss term dominates and you see exponential throughput decrease with increasing window size.? For small A (low loss rates), the optimum window size will scale like -1/log(1-a). It's possible to do a more exact analysis.? A few years ago a friend of mine was working on a tcp/ip implementation for a well known supercomputer manufacturer.? At the time there was a huge debate in the company on whether to "modify" tcp.? It seems that some cretin in management had decided that the only way to get good network performance was to do huge transfers, where "huge" was much larger than the 64K allowed by the tcp window size field.? I was simulating very high performance fiber optic nets at the time and found this argument to be completely at odds with my results.? I was so incensed that I wrote a little 5 page paper for my friend titled "Some notes on choosing an optimum transfer size" that started out: ? ? "The choice of network transfer size seems to have been ? ? driven by the idea that ``bigger is better''.? While this ? ? reflects a good, American upbringing, it bears only faint ? ? resemblance to reality.? In the unlikely event that a future ? ? decision is made on rational grounds, this note describes the ? ? mathematical basis for choice of window and transfer size." I'm afraid it went on in much the same tone (I must have been drunk when I wrote it) but I did summarize how to apply Erlang's and Hill's loss functions to tcp (the same analysis would apply to rdp - the only difference is rdp gains a factor of two in throughput over tcp at very high loss rates).? If you're interested in the math, I'd be glad to send you extracts from this thing or the references I used. ? - Van On Thu, Oct 10, 2024 at 12:47?AM Greg Skinner wrote: > > On Oct 5, 2024, at 5:42?PM, Craig Partridge wrote: > > > As someone who was in touch with Raj/KK and Van/Mike during the > development of congestion control.? They were unaware of each other's work > until spring of 1988, when they realized they were doing very similar > stuff.? I think, someone (Dave Clark) in the End2End Research Group became > aware of Raj & KK's work and invited them to come present to an E2E meeting > in early 1988 and E2E (more than IETF) was where Van was working out the > kinks in his congestion control work with Mike. > > Craig > > > I looked into this a bit, and discovered that Raj/KK and Van/Mike were all > at the 6th IETF, which took place in April 1987. [1] (It was a joint > meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards > groups.)? Both teams presented their work at the meeting. > > On Sat, Oct 5, 2024 at 5:34?PM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> The work of Jain?s DEC team existed at the same time and I believe >> Jacobson?s original paper references it. 
>> >> As I said, at least it does congestion avoidance without causing >> congestion (unless under extreme conditions). >> >> I suspect that the main reason Jacobson didn?t adopt it was that they >> were trying to maximize the data rate by running as close to congestion >> collapse as they could. While Jain?s work attempted to balance the >> trade-off between throughput and response time.? But that is just policy >> they still could have used ECN to keep from being predatory and used ECN >> while waiting until the queue is full to mark the packets. That is what TCP >> use of ECN does now. Of course, I think that is bad choice because it >> generates lots of retransmissions. >> >> > Some of the reasons why Van/Mike took the approach they did were discussed > in a email message Van sent to the tcp-ip list.? It included some > discussions that had taken place on the ietf and end2end-interest lists. > [2] IMO, it?s unfortunate that the existing archives of those lists, > because we would be able to read the points of view expressed by the list > participants. > > When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, >> but an experimenter. >> >> But it is not uncommon to be so focused on the immediate problem to fail >> to notice the system implications. >> > > John, what could they have done that would have met your criteria and > yielded a deployable solution to the congestion problems existing at that > time in the timeframe that it was needed?? IMO, their paper should be > assessed in that context. > > --gregbo > > [1] https://www.ietf.org/proceedings/06.pdf > [2] https://ee.lbl.gov/tcp.html > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Thu Oct 10 10:55:19 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 10 Oct 2024 17:55:19 +0000 (UTC) Subject: [ih] booting linux on a 4004 In-Reply-To: <248054650.362568.1728579360999@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> <248054650.362568.1728579360999@mail.yahoo.com> Message-ID: <1329760811.380312.1728582919550@mail.yahoo.com> Just a FYI I can see how IPv6 requirements in this area might be problematic for packet radio networks. I will admit my knowledge is old so? things may have changed.? ?I also don't know how the numbers in the IPV6 specification were selected. barbara On Thursday, October 10, 2024 at 09:56:26 AM PDT, Barbara Denny via Internet-history wrote: Reminds me of how much effort went into selecting a packet size and the coding at the link layer in a packet radio environment.? If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARs radio.? I looked recently to see if I could? find a writeup but couldn't find anything quickly regarding this particular effort (Our discussion regarding MTU got me thinking on this topic).? 
The web page for him at Clemson does mention his current research is network coding for packet radio networks. :-) barbara ? ? On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history wrote:? Hi Greg: Thanks for correcting my faulty memory.? As partial recompense for being wrong, I'll note I have a partial set of the end2end-interest archives if there are questions.? As recompense for my error, I offer the following tidbit: Posted-Date: Tue, 31 Mar 87 17:58:17 PST To: Craig Partridge Cc: end2end-tf at venera.isi.edu Subject: Re: Thinking about Congestion In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST. Date: Tue, 31 Mar 87 17:58:17 PST From: Van Jacobson Craig - Your note pushed one of my buttons:? Sending a lot of data into a congested network doesn't improve transmit efficiency any more than disconnecting the collision detect wire on your ethernet would.? Either action makes everyone on the net, including you, lose. There is always an optimum window size but computing it requires knowing how packet loss scales with window size.? To first order, the scaling will be the exponential (1 - A)^W where W is the window size and A is a network dependent constant (0 < A < 1). For a long haul net, no-loss throughput will scale with window size like W/T where T is the round trip time.? The effective throughput will go like the product of these two terms.? For small W the linear term dominates and you see linear throughput increase with increasing window size.? For large W the loss term dominates and you see exponential throughput decrease with increasing window size.? For small A (low loss rates), the optimum window size will scale like -1/log(1-a). It's possible to do a more exact analysis.? A few years ago a friend of mine was working on a tcp/ip implementation for a well known supercomputer manufacturer.? At the time there was a huge debate in the company on whether to "modify" tcp.? It seems that some cretin in management had decided that the only way to get good network performance was to do huge transfers, where "huge" was much larger than the 64K allowed by the tcp window size field.? I was simulating very high performance fiber optic nets at the time and found this argument to be completely at odds with my results.? I was so incensed that I wrote a little 5 page paper for my friend titled "Some notes on choosing an optimum transfer size" that started out: ? ? "The choice of network transfer size seems to have been ? ? driven by the idea that ``bigger is better''.? While this ? ? reflects a good, American upbringing, it bears only faint ? ? resemblance to reality.? In the unlikely event that a future ? ? decision is made on rational grounds, this note describes the ? ? mathematical basis for choice of window and transfer size." I'm afraid it went on in much the same tone (I must have been drunk when I wrote it) but I did summarize how to apply Erlang's and Hill's loss functions to tcp (the same analysis would apply to rdp - the only difference is rdp gains a factor of two in throughput over tcp at very high loss rates).? If you're interested in the math, I'd be glad to send you extracts from this thing or the references I used. ? - Van On Thu, Oct 10, 2024 at 12:47?AM Greg Skinner wrote: > > On Oct 5, 2024, at 5:42?PM, Craig Partridge wrote: > > > As someone who was in touch with Raj/KK and Van/Mike during the > development of congestion control.? 
They were unaware of each other's work > until spring of 1988, when they realized they were doing very similar > stuff.? I think, someone (Dave Clark) in the End2End Research Group became > aware of Raj & KK's work and invited them to come present to an E2E meeting > in early 1988 and E2E (more than IETF) was where Van was working out the > kinks in his congestion control work with Mike. > > Craig > > > I looked into this a bit, and discovered that Raj/KK and Van/Mike were all > at the 6th IETF, which took place in April 1987. [1] (It was a joint > meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards > groups.)? Both teams presented their work at the meeting. > > On Sat, Oct 5, 2024 at 5:34?PM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> The work of Jain?s DEC team existed at the same time and I believe >> Jacobson?s original paper references it. >> >> As I said, at least it does congestion avoidance without causing >> congestion (unless under extreme conditions). >> >> I suspect that the main reason Jacobson didn?t adopt it was that they >> were trying to maximize the data rate by running as close to congestion >> collapse as they could. While Jain?s work attempted to balance the >> trade-off between throughput and response time.? But that is just policy >> they still could have used ECN to keep from being predatory and used ECN >> while waiting until the queue is full to mark the packets. That is what TCP >> use of ECN does now. Of course, I think that is bad choice because it >> generates lots of retransmissions. >> >> > Some of the reasons why Van/Mike took the approach they did were discussed > in a email message Van sent to the tcp-ip list.? It included some > discussions that had taken place on the ietf and end2end-interest lists. > [2] IMO, it?s unfortunate that the existing archives of those lists, > because we would be able to read the points of view expressed by the list > participants. > > When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, >> but an experimenter. >> >> But it is not uncommon to be so focused on the immediate problem to fail >> to notice the system implications. >> > > John, what could they have done that would have met your criteria and > yielded a deployable solution to the congestion problems existing at that > time in the timeframe that it was needed?? IMO, their paper should be > assessed in that context. > > --gregbo > > [1] https://www.ietf.org/proceedings/06.pdf > [2] https://ee.lbl.gov/tcp.html > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history ? 
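Van's first-order model in the 1987 note quoted above fits in a few lines. A minimal sketch in Python; the loss constant A and the round-trip time are invented for illustration and are not taken from his note:

# Sketch of the first-order model in Van Jacobson's 1987 note:
# effective throughput ~ (W / T) * (1 - A)**W, so the optimum window
# is near W = -1 / ln(1 - A). Both parameter values below are assumed.
import math

def effective_throughput(window, rtt, loss_const):
    return (window / rtt) * (1.0 - loss_const) ** window

A = 0.02    # assumed network-dependent loss constant
RTT = 0.1   # assumed 100 ms round trip

print("analytic optimum near W =", round(-1.0 / math.log(1.0 - A), 1))
best = max(range(1, 201), key=lambda w: effective_throughput(w, RTT, A))
print("best integer window by search:", best)
# Past the optimum the (1 - A)**W term takes over and throughput falls,
# which is the "bigger isn't better" point the note goes on to make.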
-- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Thu Oct 10 13:49:32 2024 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 11 Oct 2024 09:49:32 +1300 Subject: [ih] booting linux on a 4004 In-Reply-To: <1329760811.380312.1728582919550@mail.yahoo.com> References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> <248054650.362568.1728579360999@mail.yahoo.com> <1329760811.380312.1728582919550@mail.yahoo.com> Message-ID: Barbara, If you mean how the 1280 bytes minimum MTU size for IPv6 was chosen, it was 1500 minus N, where N was an abitrary choice of how many levels of IPv6-in-IPv6 encapsulation would be possible within one 1500 byte packet, plus a bit of spare. So there was some hand waving involved. It was always assumed that lower layers that couldn't carry 1280 natively would provide an adaptation layer. We are still very sad that PMTUD doesn't work reliably, because that means that the lowest common denominator of 1280 is often used when we could do much better. Regards Brian Carpenter On 11-Oct-24 06:55, Barbara Denny via Internet-history wrote: > Just a FYI > I can see how IPv6 requirements in this area might be problematic for packet radio networks. I will admit my knowledge is old so? things may have changed. > ?I also don't know how the numbers in the IPV6 specification were selected. > > barbara > On Thursday, October 10, 2024 at 09:56:26 AM PDT, Barbara Denny via Internet-history wrote: > > Reminds me of how much effort went into selecting a packet size and the coding at the link layer in a packet radio environment.? If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARs radio.? I looked recently to see if I could? find a writeup but couldn't find anything quickly regarding this particular effort (Our discussion regarding MTU got me thinking on this topic).? The web page for him at Clemson does mention his current research is network coding for packet radio networks. :-) > barbara > ? ? On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history wrote: > > Hi Greg: > > Thanks for correcting my faulty memory.? As partial recompense for being > wrong, I'll note I have a partial set of the end2end-interest archives if > there are questions.? As recompense for my error, I offer the following > tidbit: > > Posted-Date: Tue, 31 Mar 87 17:58:17 PST > > To: Craig Partridge > > Cc: end2end-tf at venera.isi.edu > > Subject: Re: Thinking about Congestion > > In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST. > > Date: Tue, 31 Mar 87 17:58:17 PST > > From: Van Jacobson > > > Craig - > > > Your note pushed one of my buttons:? Sending a lot of data > > into a congested network doesn't improve transmit efficiency > > any more than disconnecting the collision detect wire on > > your ethernet would.? Either action makes everyone on the net, > > including you, lose. 
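Brian's account of where 1280 came from ("1500 minus headroom for some number of IPv6-in-IPv6 encapsulations, plus a bit of spare") turns into simple arithmetic once a nesting depth is picked. A minimal sketch; the five-level depth is an assumption made here for illustration, not the recorded design rationale:

# Illustrative arithmetic for an IPv6 minimum-MTU choice along the lines
# Brian describes. The fixed IPv6 header is 40 bytes; the nesting depth
# used below is an assumption for illustration only.

ETHERNET_MTU = 1500
IPV6_HEADER = 40

def headroom_after_tunnels(levels):
    return ETHERNET_MTU - levels * IPV6_HEADER

for levels in range(7):
    print(levels, "levels of encapsulation ->", headroom_after_tunnels(levels), "bytes left")
# Five levels leaves 1500 - 5*40 = 1300 bytes, which still clears a
# 1280-byte minimum with 20 bytes of spare.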
> > > There is always an optimum window size but computing it requires > > knowing how packet loss scales with window size.? To first order, > > the scaling will be the exponential (1 - A)^W where W is the > > window size and A is a network dependent constant (0 < A < 1). > > For a long haul net, no-loss throughput will scale with window > > size like W/T where T is the round trip time.? The effective > > throughput will go like the product of these two terms.? For > > small W the linear term dominates and you see linear throughput > > increase with increasing window size.? For large W the loss term > > dominates and you see exponential throughput decrease with > > increasing window size.? For small A (low loss rates), the > > optimum window size will scale like -1/log(1-a). > > > It's possible to do a more exact analysis.? A few years ago a > > friend of mine was working on a tcp/ip implementation for a well > > known supercomputer manufacturer.? At the time there was a huge > > debate in the company on whether to "modify" tcp.? It seems that > > some cretin in management had decided that the only way to get > > good network performance was to do huge transfers, where "huge" > > was much larger than the 64K allowed by the tcp window size > > field.? I was simulating very high performance fiber optic nets > > at the time and found this argument to be completely at odds with > > my results.? I was so incensed that I wrote a little 5 page paper > > for my friend titled "Some notes on choosing an optimum transfer > > size" that started out: > > > ? ? "The choice of network transfer size seems to have been > > ? ? driven by the idea that ``bigger is better''.? While this > > ? ? reflects a good, American upbringing, it bears only faint > > ? ? resemblance to reality.? In the unlikely event that a future > > ? ? decision is made on rational grounds, this note describes the > > ? ? mathematical basis for choice of window and transfer size." > > > I'm afraid it went on in much the same tone (I must have been > > drunk when I wrote it) but I did summarize how to apply Erlang's > > and Hill's loss functions to tcp (the same analysis would apply > > to rdp - the only difference is rdp gains a factor of two in > > throughput over tcp at very high loss rates).? If you're > > interested in the math, I'd be glad to send you extracts from > > this thing or the references I used. > > > ? - Van > > > On Thu, Oct 10, 2024 at 12:47?AM Greg Skinner > wrote: > >> >> On Oct 5, 2024, at 5:42?PM, Craig Partridge wrote: >> >> >> As someone who was in touch with Raj/KK and Van/Mike during the >> development of congestion control.? They were unaware of each other's work >> until spring of 1988, when they realized they were doing very similar >> stuff.? I think, someone (Dave Clark) in the End2End Research Group became >> aware of Raj & KK's work and invited them to come present to an E2E meeting >> in early 1988 and E2E (more than IETF) was where Van was working out the >> kinks in his congestion control work with Mike. >> >> Craig >> >> >> I looked into this a bit, and discovered that Raj/KK and Van/Mike were all >> at the 6th IETF, which took place in April 1987. [1] (It was a joint >> meeting of the IETF and ANSI X3S3.3 Network and Transport Layer standards >> groups.)? Both teams presented their work at the meeting. 
>> >> On Sat, Oct 5, 2024 at 5:34?PM John Day via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> The work of Jain?s DEC team existed at the same time and I believe >>> Jacobson?s original paper references it. >>> >>> As I said, at least it does congestion avoidance without causing >>> congestion (unless under extreme conditions). >>> >>> I suspect that the main reason Jacobson didn?t adopt it was that they >>> were trying to maximize the data rate by running as close to congestion >>> collapse as they could. While Jain?s work attempted to balance the >>> trade-off between throughput and response time.? But that is just policy >>> they still could have used ECN to keep from being predatory and used ECN >>> while waiting until the queue is full to mark the packets. That is what TCP >>> use of ECN does now. Of course, I think that is bad choice because it >>> generates lots of retransmissions. >>> >>> >> Some of the reasons why Van/Mike took the approach they did were discussed >> in a email message Van sent to the tcp-ip list.? It included some >> discussions that had taken place on the ietf and end2end-interest lists. >> [2] IMO, it?s unfortunate that the existing archives of those lists, >> because we would be able to read the points of view expressed by the list >> participants. >> >> When I asked Jain why his wasn?t adopted, he said he isn?t an implementor, >>> but an experimenter. >>> >>> But it is not uncommon to be so focused on the immediate problem to fail >>> to notice the system implications. >>> >> >> John, what could they have done that would have met your criteria and >> yielded a deployable solution to the congestion problems existing at that >> time in the timeframe that it was needed?? IMO, their paper should be >> assessed in that context. >> >> --gregbo >> >> [1] https://www.ietf.org/proceedings/06.pdf >> [2] https://ee.lbl.gov/tcp.html >> >> > From b_a_denny at yahoo.com Thu Oct 10 14:09:08 2024 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 10 Oct 2024 21:09:08 +0000 (UTC) Subject: [ih] booting linux on a 4004 In-Reply-To: References: <295fe064-7128-403f-9cf8-f7a76720f90f@cis.upenn.edu> <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> <248054650.362568.1728579360999@mail.yahoo.com> <1329760811.380312.1728582919550@mail.yahoo.com> Message-ID: <1260425281.490712.1728594548537@mail.yahoo.com> Thanks Brian.? I could see how the number was perhaps derived. I don't know how much it was? vetted with different types of networking folks.?? To me the assumption of an adaptation layer just handling this is something that comes more from people who are used to wired networks than wireless.? As I am sure you know,? there were issues about IPv6? discussions when they started in the IETF.? I experienced it when I had an opportunity to be at an IETF meeting at the time.? For those who don't know this history and are interested, I am sure Steve Deering relayed his experience at the time. 
barbara

On Thursday, October 10, 2024 at 01:49:41 PM PDT, Brian E Carpenter wrote:

Barbara,

If you mean how the 1280 bytes minimum MTU size for IPv6 was chosen, it was 1500 minus N, where N was an arbitrary choice of how many levels of IPv6-in-IPv6 encapsulation would be possible within one 1500 byte packet, plus a bit of spare. So there was some hand waving involved.

It was always assumed that lower layers that couldn't carry 1280 natively would provide an adaptation layer.

We are still very sad that PMTUD doesn't work reliably, because that means that the lowest common denominator of 1280 is often used when we could do much better.

Regards
    Brian Carpenter

On 11-Oct-24 06:55, Barbara Denny via Internet-history wrote:
> Just a FYI
> I can see how IPv6 requirements in this area might be problematic for packet radio networks. I will admit my knowledge is old so things may have changed. I also don't know how the numbers in the IPv6 specification were selected.
>
> barbara
>
> On Thursday, October 10, 2024 at 09:56:26 AM PDT, Barbara Denny via Internet-history wrote:
>
> Reminds me of how much effort went into selecting a packet size and the coding at the link layer in a packet radio environment. If you are interested, I think Mike Pursley (Clemson) might have done the analysis for us (SRI) when we were working on porting the Packet Radio protocols to the SINCGARS radio. I looked recently to see if I could find a writeup but couldn't find anything quickly regarding this particular effort (our discussion regarding MTU got me thinking on this topic). The web page for him at Clemson does mention that his current research is network coding for packet radio networks. :-)
> barbara
>
> On Thursday, October 10, 2024 at 05:53:34 AM PDT, Craig Partridge via Internet-history wrote:
>
> Hi Greg:
>
> Thanks for correcting my faulty memory. As partial recompense for being wrong, I'll note I have a partial set of the end2end-interest archives if there are questions. As recompense for my error, I offer the following tidbit:
>
> Posted-Date: Tue, 31 Mar 87 17:58:17 PST
> To: Craig Partridge
> Cc: end2end-tf at venera.isi.edu
> Subject: Re: Thinking about Congestion
> In-Reply-To: Your message of Fri, 27 Mar 87 08:43:19 EST.
> Date: Tue, 31 Mar 87 17:58:17 PST
> From: Van Jacobson
>
> > Craig -
> >
> > Your note pushed one of my buttons: Sending a lot of data into a congested network doesn't improve transmit efficiency any more than disconnecting the collision detect wire on your ethernet would. Either action makes everyone on the net, including you, lose.
> >
> > [...]
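A minimal numerical sketch of the window/throughput trade-off in Van's note (quoted in full earlier in this thread): the loss rate and round-trip time below are illustrative assumptions, not values from any of the messages.

    # Effective throughput ~ (W/T) * (1 - A)**W: linear growth in the window W
    # fights the exponentially shrinking chance that the whole window arrives intact.
    import math

    def effective_throughput(W, A, T):
        # W: window size in packets, A: per-packet loss probability, T: round-trip time
        return (W / T) * (1.0 - A) ** W

    A = 0.01   # assumed per-packet loss rate, purely for illustration
    T = 0.1    # assumed round-trip time in seconds, purely for illustration

    best_W = max(range(1, 500), key=lambda W: effective_throughput(W, A, T))
    approx_W = -1.0 / math.log(1.0 - A)   # the small-A approximation in Van's note
    print(best_W, round(approx_W, 1))     # both land near 1/A = 100 for these numbers

For these numbers the throughput climbs roughly linearly up to a window of about 100 packets and falls away beyond that, which is the behaviour the note describes.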
From gregskinner0 at icloud.com Thu Oct 10 20:49:25 2024
From: gregskinner0 at icloud.com (Greg Skinner)
Date: Thu, 10 Oct 2024 20:49:25 -0700
Subject: [ih] bufferbloat and modern congestion control (was 4004)
In-Reply-To:
References: <56EA7444-CE69-4888-9D74-F7DC46D95B25@icloud.com> <200BBBC0-239B-4D85-9BEF-97667CEF26C2@icloud.com> <70425da8-a6b0-4d84-84d5-b186c66c16f5@3kitty.org> <651bcb46-23ea-40c1-b238-6c97dd59ff11@3kitty.org>
Message-ID: <55162275-5A5A-4BF5-AB98-D4101A31564E@icloud.com>

On Oct 6, 2024, at 2:04 PM, Jack Haverty via Internet-history wrote:
>
> Yes, I agree, that's how it works.
>
> But I think that the Service Model has changed over time; the original goals in the early 80s were to provide multiple types of service, e.g., one for interactive needs where timeliness was most important, and another for bulk transfers where accurate delivery of everything sent was crucial. That's why TCP was split from IP to enable services such as UDP.
>
> At some point that goal was apparently abandoned. It might be of historical interest to know when that occurred and if it was an explicit decision and if so who made it.
>
> Another main difference now is in the Management of "The Internet". It has also changed over the decades.
>
> In the 80s, ARPA was in charge of the Internet. Users knew who to call if they had problems. The first "real" User I remember was Peter Kirstein and his crew at UCL, who relied on the Internet to do their everyday work.
>
> When Peter had a problem, he would call or email Vint at ARPA. If the problem looked like it might be related to the "core gateways", I then got a call or email from Vint. Peter figured this process out and would then simply CC me on his first complaint.
>
> The "Gateway Group" at BBN (usually Hinden, Brescia, and/or Sheltzer) would get to work and figure it all out. Perhaps it was a SATNET issue, but Dale McNeill was down the hall with the SATNET team if needed. Same with the Arpanet.
>
> When NSF entered the scene, I suspect Steve Wolff's phone number became more popular. Problems probably cascaded to Dave Mills or other NSF researchers?
>
> In today's Internet, who is in charge? When you have a phantasmal Internet experience, "Who Ya Gonna Call?" Where are the Internet's Ghostbusters?
>
> Jack

On some of the NOG (network operators group) mailing lists I follow, people who are having problems and can't reach the NOC of the provider they're having problems with post requests to the lists for provider contacts.
Major problems are reported on lists such as the outages list (with a companion list, outages-discussion). If it's determined that some protocol needs to be fixed, vendors are contacted, and instructions are made available on how to obtain the update. There are companies such as Kentik and ThousandEyes (part of Cisco) that monitor various providers and services.

As far as NSFnet goes, there are people on the NANOG list who were around back in the Merit/NSFnet days. You might pose a question on that list to find out how escalation responsibilities transitioned from the researchers to the NOCs, etc.

--gregbo

From brian.e.carpenter at gmail.com Tue Oct 15 13:12:25 2024
From: brian.e.carpenter at gmail.com (Brian E Carpenter)
Date: Wed, 16 Oct 2024 09:12:25 +1300
Subject: [ih] IP over wireless [was: booting linux on a 4004]
In-Reply-To: <1260425281.490712.1728594548537@mail.yahoo.com>
References: <45b1484f-ff71-401d-a5a8-e56f4f7ea08f@gmail.com> <133D21A5-4992-4446-BF3A-D613B4F6F3F1@sobco.com> <9507C0B2-CF24-48A3-8B3B-BA764A3E58D3@sobco.com> <1191418713.15094180.1727830223232@mail.yahoo.com> <29623329-6B8A-4DA5-A30B-83EEE15C7558@comcast.net> <645A35FD-23F6-4E43-9489-59DF8CE74285@icloud.com> <6B0500A3-414E-4A68-B528-822BB528F372@icloud.com> <248054650.362568.1728579360999@mail.yahoo.com> <1329760811.380312.1728582919550@mail.yahoo.com> <1260425281.490712.1728594548537@mail.yahoo.com>
Message-ID: <293f7cf4-3efe-402e-a67b-2fc0baca9a43@gmail.com>

Barbara,

I've been communing with myself about how to reply to this.

When I was trying to explain to students why Ethernet is like it is, of course like everybody I started with Aloha before talking about yellow cable and CSMA/CD. That is of course largely irrelevant today, but we still use the same frame format. Ethernet (partly because it descended from Aloha) natively supports broadcast.

Then there is Wi-Fi, which has the goal of emulating Ethernet, so it must support broadcast even though it's disastrous to do so and has many bad consequences. Short explanation: if you run a large enough Wi-Fi network, it will end up saturated by multicast traffic. We don't even have that properly documented in an RFC today, but it's on its way:
https://www.ietf.org/archive/id/draft-ietf-6man-ipv6-over-wireless-06.html

That said, because IPv6 was designed in the Ethernet era, IPv6 over Wi-Fi works in a relatively simple way, and 1500 byte packets are the norm. So small Wi-Fi networks are fine.

Then there is the whole topic of low-power wireless networks where 1500 bytes is certainly not the norm and some kind of adaptation layer is needed. (I'm no expert in that area, but as far as I can tell all the effort has gone into IPv6 rather than IPv4.) You'll find references to that work in the above draft, but I think the key one is RFC 4944 about IPv6 over IEEE 802.15.4. There's a whole section about the adaptation layer:
https://www.rfc-editor.org/rfc/rfc4944.html#section-5

So, the wireless people have simply accepted the magic 1280 rule and adapted to it. As to the origin of that magic rule, there's one view at
https://mailarchive.ietf.org/arch/msg/int-area/Wpv1jT6UQt6KlzrdIoZSSLF-7XA
(RFC 1883 specified 576 bytes in 1995. RFC 2460 raised it to 1280 in 1998.)

Regards
    Brian Carpenter

On 11-Oct-24 10:09, Barbara Denny via Internet-history wrote:
> [...]
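To put rough numbers on the two points above: each IPv6-in-IPv6 encapsulation adds a 40-byte IPv6 header, so the 1500 - 1280 = 220 bytes of headroom would cover, say, five nested tunnels (5 x 40 = 200 bytes) with 20 bytes to spare. The figure of five levels is an illustrative assumption, not a documented rationale, but it matches the "hand waving" described earlier in the thread. At the other extreme, on a link like IEEE 802.15.4, whose frames are at most 127 octets, the only way to honour the 1280-byte minimum is adaptation-layer fragmentation of the kind RFC 4944 defines. A minimal sketch of that arithmetic, with the per-frame overhead figure an assumption for illustration only:

    # Rough estimate of the adaptation-layer fragmentation needed to carry a
    # minimum-size IPv6 packet over IEEE 802.15.4; the overhead figure is an
    # assumption, since the real value depends on MAC addressing and security.
    import math

    IPV6_MIN_MTU = 1280                        # every IPv6 link must carry this
    MAX_802154_FRAME = 127                     # maximum IEEE 802.15.4 PHY frame, octets
    payload_per_frame = MAX_802154_FRAME - 31  # assumed 31 octets of MAC + 6LoWPAN overhead

    frames = math.ceil(IPV6_MIN_MTU / payload_per_frame)
    print(frames)  # on the order of a dozen or more link frames per 1280-byte packet

With more realistic overheads the count only grows, which is one reason those specifications put so much effort into header compression.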