From jack at 3kitty.org Thu Dec 1 10:03:20 2022 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 1 Dec 2022 10:03:20 -0800 Subject: [ih] History of duplicate address tests In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> Message-ID: Hi Grant, Thanks for the update. I remember reporting the issue to IETF back in the mid 80s and again in the early 90s. I guess it still hasn't risen to the top of anybody's priority list. FYI, Flakeway was a user-level program that ran on a Sparc. We didn't have access to the Sun kernel code so it didn't involve any kernel mods. Essentially the Flakeway was a simplified clone of a regular gateway, with the features added for mangling the datagram stream. Doing everything at user level made the programming much easier, and worked fine for a debugging tool. Also FYI, there was a similar mechanism we put into the IMPs, called a "Remote Datascope". This took advantage of some ancient code that was originally put in the IMP to gather data for the UCLA measurement work on the neonatal ARPANET. The Datascope simply told the IMP to copy the first N bytes of every packet (filtered to only those of interest) transiting the IMP, and send that data in a separate packet to the Datascope address. This allowed us to "hang a datascope" on a network connection and view computer-computer interactions from somewhere else. No probe hanging on to a wire needed. By setting the "N" value large enough we could capture TCP and IP headers. This proved very useful in debugging problems as hosts were trying to convert to the DDN and get TCP running. I don't recall if we ever put similar functionality into the "core gateways", since they were all connected to some IMP, but it would have been easy to do so. One of the other related but larger issues on the 80s list of "things we need to do" was something I called the "end-middle" scenario. 
It's a generalization of the ARP issue that Flakeway exploited. The basic notion is that if you look at a chunk of data passing through the network, at any level of protocol, it contains a lot of pieces of information other than the actual user data, such as the various fields in the different headers at different layers. There was a lot of interest in the early days in end-to-end mechanisms, which eventually led to things like SSL, PGP, et al. The "end-middle" scenario broadened that view of the Internet architecture. Basically, each such "chunk" of data in any header in any protocol level is produced somewhere, and consumed somewhere else, possibly in multiple places. So a "consumer" of some piece of data typically needs to have some mechanism for assurance that the piece of data was actually produced by who you think produced it, and has not been altered or mangled ("Flaked?") along the way to the consumer. Essentially, there are a lot more "ends" involved in Internet communication than most people think. Many of the "ends" are actually somewhere in the "middle". So, for example, in the context of the IP header, when some box along the path receives an IP datagram, the IP source address should be assured to be the actual source address of the computer you think it is. Similarly, an ARP message conveying an address mapping should be assured to have come from a source that is authorized to report such information. Every field, of every header, that is being used to make programming decisions, no matter where it is produced or consumed, potentially needs to be protected to assure its authenticity. The "end-middle" issue appears at all levels of protocols. Violating it at level 2 made Flakeway possible. But the issue exists even at "app" levels -- e.g., all the "header fields" that you see today in email. IIRC, a similar violation made NAT possible. 
We recognized in the 80s that such protection mechanisms would be difficult, and consume a lot of computing power that we didn't have, and require protocols and mechanisms that did not yet exist. Also, they weren't really needed for an experimental research network, as long as the architecture permitted them to be added later for operational use. So the "end-middle issue" was on the list of "things we need to do". I guess it's still there. Jack On 11/30/22 18:40, Grant Taylor via Internet-history wrote: > On 11/29/22 2:51 PM, Jack Haverty via Internet-history wrote: >> I'm surprised (and disappointed) if this "feature" still exists. > > Well, I think you should be surprised and disappointed as this feature > seems to still be very much a thing. > >> Back in the mid 80s, we created a simple software tool we called a >> "Flakeway". The purpose was to simulate network problems in order to >> test out a TCP's ability to deal with IP datagrams that were actually >> dropped, reordered, duplicated, or otherwise mangled in transit. > > Interesting. > > As I read your message I was wondering how many of those things could > be implemented with a properly configured Linux kernel / system. > Dropped, delayed, and mangled can probably easily be done with > firewalling and / or QoS features. > > Reordering might be somewhat more problematic and require changing the > path that frames / packets take through the network such that one path > has a delay. Thus you send some frames / packets through the delay > and others more direct thereby inducing an out of order condition. > > Duplication is a tough nut to crack. Or at least I'm not aware how to > go about doing that with firewalling and / or QoS. There might be a > way to punt the frame / packet to user space where a daemon would > duplicate it and re-inject it into the network. 
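[Editor's note: for what it's worth, all of the behaviors Flakeway simulated, including duplication, can nowadays be produced in-kernel on Linux by the netem queueing discipline that ships with the iproute2 `tc` tool. A sketch, assuming root on a Linux box; the interface name and the percentages are purely illustrative:]

```shell
# netem can drop, delay, reorder, duplicate, and corrupt in one qdisc.
# "eth0" and all values below are illustrative.
tc qdisc add dev eth0 root netem \
    loss 1% \
    delay 100ms 20ms \
    reorder 25% 50% \
    duplicate 1% \
    corrupt 0.1%

# Inspect the configuration, then remove it when testing is done:
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```

Note that netem's `reorder` only takes effect in combination with `delay`: the un-delayed fraction of packets jumps the queue, which is exactly the "some packets through the delay, others more direct" trick described above.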
> >> Computers directly connected to the ARPANET never encountered such >> conditions, so we looked for a way to create them in order to see if >> a TCP implementation really worked. > > The more that I learn about the ARPANET and NCP the more impressed I am. > > I also see why you needed to test TCP stacks against new problems that > didn't exist in the ARPANET. > >> Such a tool would be most useful if it didn't require changes >> anywhere in the existing equipment. Ideally it would work by somehow >> inserting itself into the normal path for traffic between two >> computers interacting using TCP, without those computers knowing that >> it was there. > > I see two older methods and one newer method to do this. The first > older method is effectively monkeying with the ARP cache like > Flakeway did. Many grey / black hat tools still do this. The second > method is to get in path / route between source and destination > ostensibly on different subnets passing through a router. The third, > newer, method is to leverage something like SDN / OpenFlow to receive > a frame / packet, ship it off to a controller to fold, spindle, or > mutilate it before sending it back to the egress device. > >> By exploiting the behavior of ARP and common host implementations >> that Joe mentioned, we created "Flakeway" software. > > I instantly knew /how/ you did -- what I'll call -- the /interception/ > bit based on the description. I say this because I know how this > works from contemporary network problems / vulnerabilities that -- to > the best of my knowledge -- still exist at the end of 2022. > > Does the name / phrase "Firesheep" mean anything to you? > > According to Wikipedia, Firesheep came out in late 2010. Firesheep was > targeted at WiFi networks which were inherently broadcast. 
But my > understanding is that it was possible to make it work on traditional > LANs, even switched LANs, via ARP techniques that you and Joe referred > to, specifically the LAN's default gateway. > >> Typically we would run it on a machine with an Ethernet interface >> that could deal with lots of traffic, and which could be put in >> "promiscuous mode" so that the Flakeway would receive all Ethernet >> traffic regardless of its addresses. The constraint for using the >> tool was that at least one of the computers under test had to be >> connected to the same Ethernet segment, so that all its traffic would >> be visible at the Flakeway's Ethernet interface. At the time, Sun >> workstations ("SPARCs") were typically available and met the >> requirements. > > Needing to be on the same network segment as the source and / or > destination means that you were completely playing at layer 2. > > Newer tools allow attacking systems on different network segments as > long as you are in the routed path between said systems. E.g. > > [A]---(1)---[B]---(2)---[C] > > B can run contemporary counterparts to Flakeway which would target R1 > & R2's L2 MAC addresses and selectively target A's & C's L3 IP > address. }:-) > >> Flakeway would start watching all Ethernet traffic (much like >> Wireshark today), and then issue an ARP request for the computer >> being tested. On receiving the ARP reply, it would immediately send >> its own ARP reply for that same IP address, but specifying itself as >> the appropriate Ethernet target address. > > *nod*nod* > >> As Joe noted, computers typically flush their address caches on >> seeing new ARP information, and use the most recent response for any >> subsequent traffic they send to that IP address. 
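[Editor's note: for readers curious what such a spoofed ARP reply actually looks like on the wire, here is a minimal, self-contained sketch that merely builds the 42-byte Ethernet + ARP frame a Flakeway-style tool would emit. All names and addresses are invented for illustration; actually transmitting the frame would require a raw socket and root privileges, which is deliberately not shown.]

```python
import socket
import struct

def make_spoofed_arp_reply(victim_mac, victim_ip, spoofed_ip, attacker_mac):
    """Build a raw Ethernet frame carrying an ARP reply that claims
    spoofed_ip lives at attacker_mac, addressed to the victim.
    MACs are 6-byte strings; IPs are dotted-quad strings."""
    # 14-byte Ethernet header: dst MAC, src MAC, EtherType 0x0806 (ARP)
    eth = victim_mac + attacker_mac + struct.pack("!H", 0x0806)
    # 28-byte ARP payload (RFC 826 layout)
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,             # hardware type: Ethernet
        0x0800,        # protocol type: IPv4
        6, 4,          # hardware / protocol address lengths
        2,             # opcode 2: reply
        attacker_mac,  socket.inet_aton(spoofed_ip),  # "spoofed_ip is at ME"
        victim_mac,    socket.inet_aton(victim_ip),   # target: the victim
    )
    return eth + arp
```

On seeing this frame, a host that blindly updates its cache (as described above) starts sending traffic for spoofed_ip to the attacker's MAC address.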
> >> After completing this setup, all the interesting traffic now flows to >> the Flakeway machine, which can simply retransmit it to the real >> Ethernet address of the computer being tested. > > Yep. That's how things still work today. > >> Other than a slight increase in delay, nothing much changes. > > Agreed. > > Really, the only way for clients on the same LAN to have any idea that > this is happening is to check the MAC address that each has associated > with the other's IP address. > > Incidentally, this is why it's a good idea to check the ARP cache when > weird things are happening. Make sure that each machine has the > correct MAC address for the other machine. > >> By performing the same procedure for a (real) gateway's Ethernet >> address, the Flakeway could insert itself into the datagram pathway >> for traffic in both directions. To do the testing, we then set it >> up so it could delay, duplicate, reorder, discard, and otherwise >> mangle the IP traffic flow to see how the TCPs dealt with such >> problems (as they were designed to do). > > That makes perfect sense to me. > >> This tool was very useful. > > Today, Flakeway would be considered a grey hat hacking tool. Just one > of many that fall into the category. > >> Programming it took only a day or two. But it did seem to reveal a >> vulnerability in the protocols. > This vulnerability still very much so exists today. > > The vast majority of people assume ~> believe that the L2 LAN > broadcast domain is a safe place. That is a VERY dangerous > assumption. I'd say that greater than 95% of the time it is safe. > But when it's not safe, it's really not safe. > >> E.g., you could easily pretend to be any other Internet computer and >> possibly convince some user to enter sensitive information or do >> other nasty things. > > Yep. > >> This was all reported, quietly, to IETF, so it could be fixed. > > Chuckle. 
> > Based on the following comment, I'm guessing this was reported in the > early '80s, correct? > >> Meanwhile, it was a useful tool. > > I'll argue that it still is a useful tool. At least to some people. > >> About ten years later, in the early 90s, I tried using the tool again >> and discovered that it still worked. > > I'd bet a lunch that it still would work in many situations in late 2022. > >> I don't know if that was because the protocols hadn't been improved, >> or because the computers involved weren't up to date. > > I suspect it's more the former, the protocols hadn't been improved. > > Stop and think for a moment, what has changed in networking protocols > since the time you were messing with them? Not much. Most of the > changes happened at higher layers that still depend on the protocols > that operate at the lower layers that you're talking about. > >> Curiously, as it became more common to use "switched" Ethernet, the >> tool required more careful configuration of the machines involved to >> make sure the traffic flows were visible to the Flakeway. > > Ya. With switches, you either MUST use ARP techniques /or/ you need > to continually flood the switch and bust it down to a hub. > > Promiscuous mode doesn't mean nearly as much as it used to. It used > to mean everything on / passing through the LAN segment. Now it only > means everything on the host. Different implementations will see > different types of traffic, direct or direct + broadcast. > >> Now, with the pervasive use of Wifi, and the broadcast nature of >> radio, maybe the pendulum has swung back. > > The pendulum never really swung completely away. But yes, older WiFi > very much so did swing it back. > > I say older WiFi, as in not fairly new WiFi. This is because fairly > new (maybe just before contemporary) WiFi was going to start using > different encryption between each client and the AP. 
Thus clients > couldn't decrypt RF broadcasted traffic that was unicast IP traffic to > another machine on the LAN. > > According to Wikipedia, IPsec pre-dates SSL. IPsec being early '90s > and SSL being mid to late '90s. Then in the late 2000s MACsec started > to become a thing. > > Both IPsec and SSL move the authentication / integrity / > confidentiality higher up the stack than the layer 2 ARP attacks being > discussed. MACsec is the first thing that I'm aware of that actually > addresses the crux of the layer 2 ARP attack. > >> So, a little piece of history of address idiosyncrasies.... and use >> of duplicate addresses. > > It is history. But I think the underlying layer 2 ARP problem is > still very much with us today in late 2022. > > I was recently listening to the episode 27 of APNIC's PING podcast -- > Wither RPKI -- wherein one of the hosts mentioned that people have > tended to prefer to address the authentication / integrity / > confidentiality at the top layer of the OSI stack as this tends to > work across more things in an end to end manner. MACsec is L2 and > inherently local to the broadcast domain. IPsec is L3 but is limited > to IP addresses. Whereas TLS is L7 and can actually ride across > multiple underlying transports, not all of which involve IP. What's > more is that it's relatively easy to co-mingle multiple TLS (L7) > endpoints on a single IP address via things like (e)SNI. > > Some later parts of the podcast even went so far as to say that IP > addresses aren't important any more. Their justification is that with > things like (e)SNI and CDNs, IP addresses aren't as important as they > once were. -- I've got to say, the comments make some sense. Not > that I like what they are saying. 
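[Editor's note: the (e)SNI point is easy to see for yourself with OpenSSL's s_client, which can ask a single IP address for different TLS endpoints by name. The IP and hostnames below are placeholders, not real servers:]

```shell
# The -servername flag sets the TLS SNI extension, so the same IP can
# return a different certificate for each name it hosts.
openssl s_client -connect 203.0.113.10:443 -servername www.example.com </dev/null
openssl s_client -connect 203.0.113.10:443 -servername blog.example.net </dev/null
```

Comparing the certificates returned by the two invocations shows how many L7 endpoints can share one L3 address.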
> > > From bob.hinden at gmail.com Thu Dec 1 10:55:56 2022 From: bob.hinden at gmail.com (Bob Hinden) Date: Thu, 1 Dec 2022 10:55:56 -0800 Subject: [ih] History of duplicate address tests In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> Message-ID: <8F4A6932-82A4-4B3D-99CA-4AC1BD71C202@gmail.com> Hi Jack, > On Dec 1, 2022, at 10:03 AM, Jack Haverty via Internet-history wrote: > > Hi Grant, > > Thanks for the update. I remember reporting the issue to IETF back in the mid 80s and again in the early 90s. I guess it still hasn't risen to the top of anybody's priority list. As was noted earlier, Duplicate Address Detection (DAD) was included in IPv6. Our thinking at the time was to detect broken hardware NICs (all with the same mac address) as we were initially using MAC addresses to form Interface IDs. We have moved away from that, see RFC 8064. I think DAD is still useful to detect misconfigured manually assigned addresses. Bob > > FYI, Flakeway was a user-level program that ran on a Sparc. We didn't have access to the Sun kernel code so it didn't involve any kernel mods. Essentially the Flakeway was a simplified clone of a regular gateway, with the features added for mangling the datagram stream. Doing everything at user level made the programming much easier, and worked fine for a debugging tool. > > Also FYI, there was a similar mechanism we put into the IMPs, called a "Remote Datascope". This took advantage of some ancient code that was originally put in the IMP to gather data for the UCLA measurement work on the neonatal ARPANET. The Datascope simply told the IMP to copy the first N bytes of every packet (filtered to only those of interest) transiting the IMP, and send that data in a separate packet to the Datascope address. This allowed us to "hang a datascope" on a network connection and view computer-computer interactions from somewhere else. No probe hanging on to a wire needed. 
By setting the "N" value large enough we could capture TCP and IP headers. This proved very useful in debugging problems as hosts were trying to convert to the DDN and get TCP running. I don't recall if we ever put similar functionality into the "core gateways", since they were all connected to some IMP, but it would have been easy to do so. > > One of the other related but larger issues on the 80s list of "things we need to do" was something I called the "end-middle" scenario. It's a generalization of the ARP issue that Flakeway exploited. > > The basic notion is that if you look at a chunk of data passing through the network, at any level of protocol, it contains a lot of pieces of information other than the actual user data, such as the various fields in the different headers at different layers. > > There was a lot of interest in the early days in end-to-end mechanisms, which eventually led to things like SSL, PGP, et al. The "end-middle" scenario broadened that view of the Internet architecture. > > Basically, each such "chunk" of data in any header in any protocol level is produced somewhere, and consumed somewhere else, possibly in multiple places. So a "consumer" of some piece of data typically needs to have some mechanism for assurance that the piece of data was actually produced by who you think produced it, and has not been altered or mangled ("Flaked?") along the way to the consumer. > > Essentially, there are a lot more "ends" involved in Internet communication than most people think. Many of the "ends" are actually somewhere in the "middle". > > So, for example, in the context of the IP header, when some box along the path receives an IP datagram, the IP source address should be assured to be the actual source address of the computer you think it is. Similarly, an ARP message conveying an address mapping should be assured to have come from a source that is authorized to report such information. 
> > Every field, of every header, that is being used to make programming decisions, no matter where it is produced or consumed, potentially needs to be protected to assure its authenticity. > > The "end-middle" issue appears at all levels of protocols. Violating it at level 2 made Flakeway possible. But the issue exists even at "app" levels -- e.g., all the "header fields" that you see today in email. IIRC, a similar violation made NAT possible. > > We recognized in the 80s that such protection mechanisms would be difficult, and consume a lot of computing power that we didn't have, and require protocols and mechanisms that did not yet exist. Also, they weren't really needed for an experimental research network, as long as the architecture permitted them to be added later for operational use. > > So the "end-middle issue" was on the list of "things we need to do". > > I guess it's still there. > > Jack > > > > On 11/30/22 18:40, Grant Taylor via Internet-history wrote: >> On 11/29/22 2:51 PM, Jack Haverty via Internet-history wrote: >>> I'm surprised (and disappointed) if this "feature" still exists. >> >> Well, I think you should be surprised and disappointed as this feature seems to still be very much a thing. >> >>> Back in the mid 80s, we created a simple software tool we called a "Flakeway". The purpose was to simulate network problems in order to test out a TCP's ability to deal with IP datagrams that were actually dropped, reordered, duplicated, or otherwise mangled in transit. >> >> Interesting. >> >> As I read your message I was wondering how many of those things could be implemented with a properly configured Linux kernel / system. Dropped, delayed, and mangled can probably easily be done with firewalling and / or QoS features. >> >> Reordering might be somewhat more problematic and require changing the path that frames / packets take through the network such that one path has a delay. 
Thus you send some frames / packets through the delay and others more direct thereby inducing an out of order condition. >> >> Duplication is a tough nut to crack. Or at least I'm not aware how to go about doing that with firewalling and / or QoS. There might be a way to punt the frame / packet to user space where a daemon would duplicate it and re-inject it into the network. >> >>> Computers directly connected to the ARPANET never encountered such conditions, so we looked for a way to create them in order to see if a TCP implementation really worked. >> >> The more that I learn about the ARPANET and NCP the more impressed I am. >> >> I also see why you needed to test TCP stacks against new problems that didn't exist in the ARPANET. >> >>> Such a tool would be most useful if it didn't require changes anywhere in the existing equipment. Ideally it would work by somehow inserting itself into the normal path for traffic between two computers interacting using TCP, without those computers knowing that it was there. >> >> I see two older methods and one newer method to do this. The first older method is effectively monkeying with the ARP cache like Flakeway did. Many grey / black hat tools still do this. The second method is to get in path / route between source and destination ostensibly on different subnets passing through a router. The third, newer, method is to leverage something like SDN / OpenFlow to receive a frame / packet, ship it off to a controller to fold, spindle, or mutilate it before sending it back to the egress device. >> >>> By exploiting the behavior of ARP and common host implementations that Joe mentioned, we created "Flakeway" software. >> >> I instantly knew /how/ you did -- what I'll call -- the /interception/ bit based on the description. I say this because I know how this works from contemporary network problems / vulnerabilities that -- to the best of my knowledge -- still exist at the end of 2022. 
>> >> Does the name / phrase "Firesheep" mean anything to you? >> >> According to Wikipedia, Firesheep came out in late 2010. Firesheep was targeted at WiFi networks which were inherently broadcast. But my understanding is that it was possible to make it work on traditional LANs, even switched LANs, via ARP techniques that you and Joe referred to, specifically the LAN's default gateway. >> >>> Typically we would run it on a machine with an Ethernet interface that could deal with lots of traffic, and which could be put in "promiscuous mode" so that the Flakeway would receive all Ethernet traffic regardless of its addresses. The constraint for using the tool was that at least one of the computers under test had to be connected to the same Ethernet segment, so that all its traffic would be visible at the Flakeway's Ethernet interface. At the time, Sun workstations ("SPARCs") were typically available and met the requirements. >> >> Needing to be on the same network segment as the source and / or destination means that you were completely playing at layer 2. >> >> Newer tools allow attacking systems on different network segments as long as you are in the routed path between said systems. E.g. >> >> [A]---(1)---[B]---(2)---[C] >> >> B can run contemporary counterparts to Flakeway which would target R1 & R2's L2 MAC addresses and selectively target A's & C's L3 IP address. }:-) >> >>> Flakeway would start watching all Ethernet traffic (much like Wireshark today), and then issue an ARP request for the computer being tested. On receiving the ARP reply, it would immediately send its own ARP reply for that same IP address, but specifying itself as the appropriate Ethernet target address. >> >> *nod*nod* >> >>> As Joe noted, computers typically flush their address caches on seeing new ARP information, and use the most recent response for any subsequent traffic they send to that IP address. 
>> >> I would think that the computers would /update/ their ARP cache for the IP to MAC mapping for the specific victim and not actually /flush/ all of the ARP cache. >> >>> After completing this setup, all the interesting traffic now flows to the Flakeway machine, which can simply retransmit it to the real Ethernet address of the computer being tested. >> >> Yep. That's how things still work today. >> >>> Other than a slight increase in delay, nothing much changes. >> >> Agreed. >> >> Really, the only way for clients on the same LAN to have any idea that this is happening is to check the MAC address that each has associated with the other's IP address. >> >> Incidentally, this is why it's a good idea to check the ARP cache when weird things are happening. Make sure that each machine has the correct MAC address for the other machine. >> >>> By performing the same procedure for a (real) gateway's Ethernet address, the Flakeway could insert itself into the datagram pathway for traffic in both directions. To do the testing, we then set it up so it could delay, duplicate, reorder, discard, and otherwise mangle the IP traffic flow to see how the TCPs dealt with such problems (as they were designed to do). >> >> That makes perfect sense to me. >> >>> This tool was very useful. >> >> Today, Flakeway would be considered a grey hat hacking tool. Just one of many that fall into the category. >> >>> Programming it took only a day or two. But it did seem to reveal a vulnerability in the protocols. >> This vulnerability still very much so exists today. >> >> The vast majority of people assume ~> believe that the L2 LAN broadcast domain is a safe place. That is a VERY dangerous assumption. I'd say that greater than 95% of the time it is safe. But when it's not safe, it's really not safe. >> >>> E.g., you could easily pretend to be any other Internet computer and possibly convince some user to enter sensitive information or do other nasty things. >> >> Yep. 
>> >>> This was all reported, quietly, to IETF, so it could be fixed. >> >> Chuckle. >> >> Based on the following comment, I'm guessing this was reported in the early '80s, correct? >> >>> Meanwhile, it was a useful tool. >> >> I'll argue that it still is a useful tool. At least to some people. >> >>> About ten years later, in the early 90s, I tried using the tool again and discovered that it still worked. >> >> I'd bet a lunch that it still would work in many situations in late 2022. >> >>> I don't know if that was because the protocols hadn't been improved, or because the computers involved weren't up to date. >> >> I suspect it's more the former, the protocols hadn't been improved. >> >> Stop and think for a moment, what has changed in networking protocols since the time you were messing with them? Not much. Most of the changes happened at higher layers that still depend on the protocols that operate at the lower layers that you're talking about. >> >>> Curiously, as it became more common to use "switched" Ethernet, the tool required more careful configuration of the machines involved to make sure the traffic flows were visible to the Flakeway. >> >> Ya. With switches, you either MUST use ARP techniques /or/ you need to continually flood the switch and bust it down to a hub. >> >> Promiscuous mode doesn't mean nearly as much as it used to. It used to mean everything on / passing through the LAN segment. Now it only means everything on the host. Different implementations will see different types of traffic, direct or direct + broadcast. >> >>> Now, with the pervasive use of Wifi, and the broadcast nature of radio, maybe the pendulum has swung back. >> >> The pendulum never really swung completely away. But yes, older WiFi very much so did swing it back. >> >> I say older WiFi, as in not fairly new WiFi. This is because fairly new (maybe just before contemporary) WiFi was going to start using different encryption between each client and the AP. 
Thus clients couldn't decrypt RF broadcasted traffic that was unicast IP traffic to another machine on the LAN. >> >> According to Wikipedia, IPsec pre-dates SSL. IPsec being early '90s and SSL being mid to late '90s. Then in the late 2000s MACsec started to become a thing. >> >> Both IPsec and SSL move the authentication / integrity / confidentiality higher up the stack than the layer 2 ARP attacks being discussed. MACsec is the first thing that I'm aware of that actually addresses the crux of the layer 2 ARP attack. >> >>> So, a little piece of history of address idiosyncrasies.... and use of duplicate addresses. >> >> It is history. But I think the underlying layer 2 ARP problem is still very much with us today in late 2022. >> >> I was recently listening to the episode 27 of APNIC's PING podcast -- Wither RPKI -- wherein one of the hosts mentioned that people have tended to prefer to address the authentication / integrity / confidentiality at the top layer of the OSI stack as this tends to work across more things in an end to end manner. MACsec is L2 and inherently local to the broadcast domain. IPsec is L3 but is limited to IP addresses. Whereas TLS is L7 and can actually ride across multiple underlying transports, not all of which involve IP. What's more is that it's relatively easy to co-mingle multiple TLS (L7) endpoints on a single IP address via things like (e)SNI. >> >> Some later parts of the podcast even went so far as to say that IP addresses aren't important any more. Their justification is that with things like (e)SNI and CDNs, IP addresses aren't as important as they once were. -- I've got to say, the comments make some sense. Not that I like what they are saying. >> >> >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From brian.e.carpenter at gmail.com Thu Dec 1 11:47:19 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 2 Dec 2022 08:47:19 +1300 Subject: [ih] History of duplicate address tests In-Reply-To: <8F4A6932-82A4-4B3D-99CA-4AC1BD71C202@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <8F4A6932-82A4-4B3D-99CA-4AC1BD71C202@gmail.com> Message-ID: <7f330b01-d5be-2f51-c437-ae051dcb0a45@gmail.com> On 02-Dec-22 07:55, Bob Hinden via Internet-history wrote: > Hi Jack, > >> On Dec 1, 2022, at 10:03 AM, Jack Haverty via Internet-history wrote: >> >> Hi Grant, >> >> Thanks for the update. I remember reporting the issue to IETF back in the mid 80s and again in the early 90s. I guess it still hasn't risen to the top of anybody's priority list. > > As was noted earlier, Duplicate Address Detection (DAD) was included in IPv6. Our thinking at the time was to detect broken hardware NICs (all with the same mac address) as we were initially using MAC addresses to form Interface IDs. We have moved away from that, see RFC 8064. I think DAD is still useful to detect misconfigured manually assigned addresses. And you cannot 100% discount a collision between pseudorandom IIDs. Yes, very very unlikely, unless you have lousy random number generators, which we'll never have, right? Brian > > Bob > > >> >> FYI, Flakeway was a user-level program that ran on a Sparc. We didn't have access to the Sun kernel code so it didn't involve any kernel mods. Essentially the Flakeway was a simplified clone of a regular gateway, with the features added for mangling the datagram stream. Doing everything at user level made the programming much easier, and worked fine for a debugging tool. >> >> Also FYI, there was a similar mechanism we put into the IMPs, called a "Remote Datascope". 
This took advantage of some ancient code that was originally put in the IMP to gather data for the UCLA measurement work on the neonatal ARPANET. The Datascope simply told the IMP to copy the first N bytes of every packet (filtered to only those of interest) transiting the IMP, and send that data in a separate packet to the Datascope address. This allowed us to "hang a datascope" on a network connection and view computer-computer interactions from somewhere else. No probe hanging on to a wire needed. By setting the "N" value large enough we could capture TCP and IP headers. This proved very useful in debugging problems as hosts were trying to convert to the DDN and get TCP running. I don't recall if we ever put similar functionality into the "core gateways", since they were all connected to some IMP, but it would have been easy to do so. >> >> One of the other related but larger issues on the 80s list of "things we need to do" was something I called the "end-middle" scenario. It's a generalization of the ARP issue that Flakeway exploited. >> >> The basic notion is that if you look at a chunk of data passing through the network, at any level of protocol, it contains a lot of pieces of information other than the actual user data, such as the various fields in the different headers at different layers. >> >> There was a lot of interest in the early days in end-to-end mechanisms, which eventually led to things like SSL, PGP, et al. The "end-middle" scenario broadened that view of the Internet architecture. >> >> Basically, each such "chunk" of data in any header in any protocol level is produced somewhere, and consumed somewhere else, possibly in multiple places. So a "consumer" of some piece of data typically needs to have some mechanism for assurance that the piece of data was actually produced by who you think produced it, and has not been altered or mangled ("Flaked?") along the way to the consumer. 
>> >> Essentially, there are a lot more "ends" involved in Internet communication than most people think. Many of the "ends" are actually somewhere in the "middle". >> >> So, for example, in the context of the IP header, when some box along the path receives an IP datagram, the IP source address should be assured to be the actual source address of the computer you think it is. Similarly, an ARP message conveying an address mapping should be assured to have come from a source that is authorized to report such information. >> >> Every field, of every header, that is being used to make programming decisions, no matter where it is produced or consumed, potentially needs to be protected to assure its authenticity. >> >> The "end-middle" issue appears at all levels of protocols. Violating it at level 2 made Flakeway possible. But the issue exists even at "app" levels -- e.g., all the "header fields" that you see today in email. IIRC, a similar violation made NAT possible. >> >> We recognized in the 80s that such protection mechanisms would be difficult, and consume a lot of computing power that we didn't have, and require protocols and mechanisms that did not yet exist. Also, they weren't really needed for an experimental research network, as long as the architecture permitted them to be added later for operational use. >> >> So the "end-middle issue" was on the list of "things we need to do". >> >> I guess it's still there. >> >> Jack >> >> >> >> On 11/30/22 18:40, Grant Taylor via Internet-history wrote: >>> On 11/29/22 2:51 PM, Jack Haverty via Internet-history wrote: >>>> I'm surprised (and disappointed) if this "feature" still exists. >>> >>> Well, I think you should be surprised and disappointed as this feature seems to still be very much a thing. >>> >>>> Back in the mid 80s, we created a simple software tool we called a "Flakeway". 
The purpose was to simulate network problems in order to test out a TCP's ability to deal with IP datagrams that were actually dropped, reordered, duplicated, or otherwise mangled in transit. >>> >>> Interesting. >>> >>> As I read your message I was wondering how many of those things could be implemented with a properly configured Linux kernel / system. Dropped, delayed, and mangled can probably easily be done with firewalling and / or QoS features. >>> >>> Reordering might be somewhat more problematic and require changing the path that frames / packets take through the network such that one path has a delay. Thus you send some frames / packets through the delay and others more direct thereby inducing an out of order condition. >>> >>> Duplication is a tough nut to crack. Or at least I'm not aware how to go about doing that with firewalling and / or QoS. There might be a way to punt the frame / packet to user space where a daemon would duplicate it and re-inject it into the network. >>> >>>> Computers directly connected to the ARPANET never encountered such conditions, so we looked for a way to create them in order to see if a TCP implementation really worked. >>> >>> The more that I learn about the ARPANET and NCP the more impressed I am. >>> >>> I also see why you needed to test TCP stacks against new problems that didn't exist in the ARPANET. >>> >>>> Such a tool would be most useful if it didn't require changes anywhere in the existing equipment. Ideally it would work by somehow inserting itself into the normal path for traffic between two computers interacting using TCP, without those computers knowing that it was there. >>> >>> I see two older methods and one newer method to do this. The first older method is effectively monkeying with the ARP cache like Flakeway did. Many grey / black hat tools still do this. The second method is to get in path / route between source and destination ostensibly on different subnets passing through a router. 
The third, newer, method is to leverage something like SDN / OpenFlow to receive a frame / packet, ship it off to a controller to fold, spindle, or mutilate it before sending it back to the egress device. >>> >>>> By exploiting the behavior of ARP and common host implementations that Joe mentioned, we created "Flakeway" software. >>> >>> I instantly knew /how/ you did -- what I'll call -- the /interception/ bit based on the description. I say this because I know how this works from contemporary network problems / vulnerabilities that -- to the best of my knowledge -- still exist at the end of 2022. >>> >>> Does the name / phrase "Firesheep" mean anything to you? >>> >>> According to Wikipedia, Firesheep came out in late 2010. Firesheep was targeted at WiFi networks which were inherently broadcast. But my understanding is that it was possible to make it work on traditional LANs, even switched LANs, via ARP techniques that you and Joe referred to, specifically the LAN's default gateway. >>> >>>> Typically we would run it on a machine with an Ethernet interface that could deal with lots of traffic, and which could be put in "promiscuous mode" so that the Flakeway would receive all Ethernet traffic regardless of its addresses. The constraint for using the tool was that at least one of the computers under test had to be connected to the same Ethernet segment, so that all its traffic would be visible at the Flakeway's Ethernet interface. At the time, Sun workstations ("SPARCs") were typically available and met the requirements. >>> >>> Needing to be on the same network segment as the source and / or destination means that you were completely playing at layer 2. >>> >>> Newer tools allow attacking systems on different network segments as long as you are in the routed path between said systems. E.g. 
>>> >>> [A]---(1)---[B]---(2)---[C] >>> >>> B can run contemporary counterparts to Flakeway which would target R1 & R2's L2 MAC addresses and selectively target A's & C's L3 IP address. }:-) >>> >>>> Flakeway would start watching all Ethernet traffic (much like Wireshark today), and then issue an ARP request for the computer being tested. On receiving the ARP reply, it would immediately send its own ARP reply for that same IP address, but specifying itself as the appropriate Ethernet target address. >>> >>> *nod*nod* >>> >>>> As Joe noted, computers typically flush their address caches on seeing new ARP information, and use the most recent response for any subsequent traffic they send to that IP address. >>> >>> I would think that the computers would /update/ their ARP cache for the IP to MAC mapping for the specific victim and not actually /flush/ all of the ARP cache. >>> >>>> After completing this setup, all the interesting traffic now flows to the Flakeway machine, which can simply retransmit it to the real Ethernet address of the computer being tested. >>> >>> Yep. That's how things still work today. >>> >>>> Other than a slight increase in delay, nothing much changes. >>> >>> Agreed. >>> >>> Really, the only way for clients on the same LAN to have any idea that this is happening is to check the MAC address that each has associated with the other's IP address. >>> >>> Incidentally, this is why it's a good idea to check the ARP cache when weird things are happening. Make sure that each machine has the correct MAC address for the other machine. >>> >>>> By performing the same procedure for a (real) gateway's Ethernet address, the Flakeway could insert itself into the datagram pathway for traffic in both directions. To do the testing, we then set it up so it could delay, duplicate, reorder, discard, and otherwise mangle the IP traffic flow to see how the TCPs dealt with such problems (as they were designed to do). >>> >>> That makes perfect sense to me. 
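[The spoofed ARP reply at the heart of the Flakeway technique described above is easy to picture at the byte level. Below is a minimal, illustrative Python sketch -- not Flakeway's actual code; all MAC and IP values are made-up documentation-range examples -- of the frame such a tool would emit to poison the target's ARP cache:]

```python
import struct
import socket

def spoofed_arp_reply(victim_ip, attacker_mac, target_ip, target_mac):
    """Build an ARP 'reply' claiming victim_ip lives at attacker_mac.

    This is the frame a Flakeway-style tool sends so that the target
    host updates its ARP cache and sends the victim's traffic to the
    interceptor instead. Sending it would require a raw socket.
    """
    eth = struct.pack("!6s6sH",
                      target_mac,     # destination: the host being misled
                      attacker_mac,   # source: the interceptor
                      0x0806)         # EtherType: ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,              # hardware type: Ethernet
                      0x0800,         # protocol type: IPv4
                      6, 4,           # hardware / protocol address lengths
                      2,              # opcode 2 = ARP reply
                      attacker_mac,                   # sender MAC: the lie
                      socket.inet_aton(victim_ip),    # sender IP: the victim's
                      target_mac,
                      socket.inet_aton(target_ip))
    return eth + arp

frame = spoofed_arp_reply("192.0.2.10", b"\x02\x00\x00\x00\x00\x01",
                          "192.0.2.20", b"\x02\x00\x00\x00\x00\x02")
assert len(frame) == 14 + 28   # Ethernet header + ARP payload
```

[A real tool would write this frame out a raw layer-2 socket and then forward the victim's traffic onward, exactly as the Flakeway description above says.]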
>>> >>>> This tool was very useful. >>> >>> Today, Flakeway would be considered a grey hat hacking tool. Just one of many that fall into the category. >>> >>>> Programming it took only a day or two. But it did seem to reveal a vulnerability in the protocols. >>> This vulnerability still very much so exists today. >>> >>> The vast majority of people assume ~> believe that the L2 LAN broadcast domain is a safe place. That is a VERY dangerous assumption. I'd say that greater than 95% of the time it is safe. But when it's not safe, it's really not safe. >>> >>>> E.g., you could easily pretend to be any other Internet computer and possibly convince some user to enter sensitive information or do other nasty things. >>> >>> Yep. >>> >>>> This was all reported, quietly, to IETF, so it could be fixed. >>> >>> Chuckle. >>> >>> Based on the following comment, I'm guessing this was reported in the early '80s, correct? >>> >>>> Meanwhile, it was a useful tool. >>> >>> I'll argue that it still is a useful tool. At least to some people. >>> >>>> About ten years later, in the early 90s, I tried using the tool again and discovered that it still worked. >>> >>> I'd bet a lunch that it still would work in many situations in late 2022. >>> >>>> I don't know if that was because the protocols hadn't been improved, or because the computers involved weren't up to date. >>> >>> I suspect it's more the former, the protocols hadn't been improved. >>> >>> Stop and think for a moment, what has changed in networking protocols since the time you were messing with them? Not much. Most of the changes happened at higher layers that still depend on the protocols that operate at the lower layers that you're talking about. >>> >>>> Curiously, as it became more common to use "switched" Ethernet, the tool required more careful configuration of the machines involved to make sure the traffic flows were visible to the Flakeway. >>> >>> Ya. 
With switches, you either MUST use ARP techniques /or/ you need to continually flood the switch and bust it down to a hub. >>> >>> Promiscuous mode doesn't mean nearly as much as it used to. It used to mean everything on / passing through the LAN segment. Now it only means everything on the host. Different implementations will see different types of traffic, direct or direct + broadcast. >>> >>>> Now, with the pervasive use of Wifi, and the broadcast nature of radio, maybe the pendulum has swung back. >>> >>> The pendulum never really swung completely away. But yes, older WiFi very much so did swing it back. >>> >>> I say older WiFi, as in not fairly new WiFi. This is because fairly new (maybe just before contemporary) WiFi was going to start using different encryption between each client and the AP. Thus clients couldn't decrypt RF broadcasted traffic that was unicast IP traffic to another machine on the LAN. >>> >>> According to Wikipedia, IPsec pre-dates SSL. IPsec being early '90s and SSL being mid to late '90s. Then in the late 2000s MACsec started to become a thing. >>> >>> Both IPsec and SSL move the authentication / integrity / confidentiality higher up the stack than the layer 2 ARP attacks being discussed. MACsec is the first thing that I'm aware of that actually addresses the crux of the layer 2 ARP attack. >>> >>>> So, a little piece of history of address idiosyncrasies.... and use of duplicate addresses. >>> >>> It is history. But I think the underlying layer 2 ARP problem is still very much with us today in late 2022. >>> >>> I was recently listening to episode 27 of APNIC's PING podcast -- Wither RPKI -- wherein one of the hosts mentioned that people have tended to prefer to address the authentication / integrity / confidentiality at the top layer of the OSI stack as this tends to work across more things in an end to end manner. MACsec is L2 and inherently local to the broadcast domain. IPsec is L3 but is limited to IP addresses. 
Whereas TLS is L7 and can actually ride across multiple underlying transports, not all of which involve IP. What's more is that it's relatively easy to co-mingle multiple TLS (L7) endpoints on a single IP address via things like (e)SNI. >>> >>> Some later parts of the podcast even went so far as to say that IP addresses aren't important any more. Their justification is that with things like (e)SNI and CDNs, IP addresses aren't as important as they once were. -- I've got to say, the comments make some sense. Not that I like what they are saying. >>> >>> >>> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > From touch at strayalpha.com Thu Dec 1 12:52:50 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Thu, 1 Dec 2022 12:52:50 -0800 Subject: [ih] History of duplicate address tests In-Reply-To: <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> Message-ID: WARNING: banning rules detected suspect part(s), do not open unless you know what you are doing -------------- next part -------------- An embedded message was scrubbed... From: "touch at strayalpha.com" Subject: Re: [ih] History of duplicate address tests Date: Thu, 1 Dec 2022 12:52:50 -0800 Size: 4633 URL: From j at shoch.com Thu Dec 1 13:19:50 2022 From: j at shoch.com (John Shoch) Date: Thu, 1 Dec 2022 13:19:50 -0800 Subject: [ih] History of duplicate address tests In-Reply-To: References: Message-ID: I've always thought that in networking there is a place for absolutes, perfection, and "error free." But a really important part of system engineering is about cost/benefit tradeoff, probabilities, and error recovery. 
When considering going from manual assignment of Experimental Ethernet addresses to a semi-automatic generation of 48-bit addresses to be blown into a PROM on a board: --We thought, "It may not be perfect, but it's certainly more reliable than trying to scale the manual process!" --"But what are the odds a bit blown into a PROM will heal, and produce a duplicate ID?" "Let's just make sure the odds of that are LESS than the odds of your machine catching fire or dying from a power surge; or your building being destroyed by lightning, flood, or earthquake; or someone typing Delete *.*" --"Have a backup and recovery plan!" "And if you don't have a recovery plan for fire, lightning, flood, earthquake, or fumble-fingers, don't complain about lower-probability network events....." John PS: In a similar vein, I would sometimes field provocative questions, "Why don't you have encryption on the Ethernet?" I would merely observe, "We do have a project to build a crypto box in front of the Ethernet transceiver, for serious government customers with Tempest needs. But do you shred all your letters and print-outs before they go in the dumpster? If not, you have worse problems than your Ethernet....." On Thu, Dec 1, 2022 at 12:00 PM wrote: > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 2 Dec 2022 08:47:19 +1300 > From: Brian E Carpenter > To: Bob Hinden , Jack Haverty > Cc: internet-history at elists.isoc.org > Subject: Re: [ih] History of duplicate address tests > Message-ID: <7f330b01-d5be-2f51-c437-ae051dcb0a45 at gmail.com> > Content-Type: text/plain; charset=UTF-8; format=flowed > > On 02-Dec-22 07:55, Bob Hinden via Internet-history wrote: > > Hi Jack, > > > >> On Dec 1, 2022, at 10:03 AM, Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Hi Grant, > >> > >> Thanks for the update. I remember reporting the issue to IETF back in > the mid 80s and again in the early 90s. 
I guess it still hasn't risen to > the top of anybody's priority list. > > As was noted earlier, Duplicate Address Detection (DAD) was included in > IPv6. Our thinking at the time was to detect broken hardware NICs (all > with the same mac address) as we were initially using MAC addresses to form > Interface IDs. We have moved away from that, see RFC 8064. I think DAD > is still useful to detect misconfigured manually assigned addresses. > > And you cannot 100% discount a collision between pseudorandom IIDs. Yes, > very very unlikely, unless you have lousy random number generators, which > we'll never have, right? > > Brian > > > > From winowicki at yahoo.com Thu Dec 1 14:03:22 2022 From: winowicki at yahoo.com (Bill Nowicki) Date: Thu, 1 Dec 2022 22:03:22 +0000 (UTC) Subject: [ih] History of duplicate address tests In-Reply-To: References: Message-ID: <920199317.45796.1669932202587@mail.yahoo.com> Reminds me of this anecdote about Sun's early days (maybe people have already heard it). The rule for the commercial Ethernet was 24 bits of organization number assigned by IEEE, and 24 bits assigned by vendor. At some point it was realized that Sun's addresses had wrapped around. The quick hack that assigned the addresses in manufacturing was lazy and used 16-bit arithmetic instead of 24 bits for the vendor bits. The person who wrote it (I remember who, but will not name names) assumed that Sun would never sell 65,000 machines, but they did. Do not think any customer ever found out, let alone had any problems with duplicates, since generally they bought machines in batches which were sequentially numbered in those low bits. No idea if they ran out of 24 bits. Some vendors presumably did. Bill On Thursday, December 1, 2022 at 01:20:12 PM PST, John Shoch via Internet-history wrote: I've always thought that in networking there is a place for absolutes, perfection, and "error free." 
But a really important part of system engineering is about cost/benefit tradeoff, probabilities, and error recovery. When considering going from manual assignment of Experimental Ethernet addresses to a semi-automatic generation of 48-bit addresses to be blown into a PROM on a board: --We thought, "It may not be perfect, but it's certainly more reliable than trying to scale the manual process!" --"But what are the odds a bit blown into a PROM will heal, and produce a duplicate ID?" "Let's just make sure the odds of that are LESS than the odds of your machine catching fire or dying from a power surge; or your building being destroyed by lightning, flood, or earthquake; or someone typing Delete *.*" --"Have a backup and recovery plan!" "And if you don't have a recovery plan for fire, lightning, flood, earthquake, or fumble-fingers, don't complain about lower-probability network events....." John PS: In a similar vein, I would sometimes field provocative questions, "Why don't you have encryption on the Ethernet?" I would merely observe, "We do have a project to build a crypto box in front of the Ethernet transceiver, for serious government customers with Tempest needs. But do you shred all your letters and print-outs before they go in the dumpster? If not, you have worse problems than your Ethernet....." 
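[The wrap-around Bill Nowicki describes above is easy to reproduce in a few lines. The sketch below is illustrative, not the actual manufacturing code; it just shows how 16-bit arithmetic on the 24-bit vendor field makes board #65,536 collide with board #0:]

```python
def mac_for_serial(oui, serial, bits=16):
    """Form a 48-bit MAC from a 24-bit OUI plus a vendor serial number.

    The bug: masking the serial to 16 bits (as the lazy assignment tool
    effectively did) instead of the full 24 makes addresses repeat
    after 65,536 boards.
    """
    vendor_part = serial & ((1 << bits) - 1)   # wraps here when bits=16
    return (oui << 24) | vendor_part

OUI = 0x080020  # Sun's classic 08:00:20 OUI

first   = mac_for_serial(OUI, 0)
wrapped = mac_for_serial(OUI, 65536)           # 16-bit arithmetic: duplicate!
correct = mac_for_serial(OUI, 65536, bits=24)  # 24-bit arithmetic: unique

assert wrapped == first    # machine #65,536 gets machine #0's address
assert correct != first
```

[As the anecdote notes, batch purchasing hid the duplicates: two colliding boards 65,536 serials apart rarely ended up on the same LAN.]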
On Thu, Dec 1, 2022 at 12:00 PM wrote: > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 2 Dec 2022 08:47:19 +1300 > From: Brian E Carpenter > To: Bob Hinden , Jack Haverty > Cc: internet-history at elists.isoc.org > Subject: Re: [ih] History of duplicate address tests > Message-ID: <7f330b01-d5be-2f51-c437-ae051dcb0a45 at gmail.com> > Content-Type: text/plain; charset=UTF-8; format=flowed > > On 02-Dec-22 07:55, Bob Hinden via Internet-history wrote: > > Hi Jack, > > > >> On Dec 1, 2022, at 10:03 AM, Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Hi Grant, > >> > >> Thanks for the update. I remember reporting the issue to IETF back in > the mid 80s and again in the early 90s. I guess it still hasn't risen to > the top of anybody's priority list. > > > > As was noted earlier, Duplicate Address Detection (DAD) was included in > IPv6. Our thinking at the time was to detect broken hardware NICs (all > with the same mac address) as we were initially using MAC addresses to form > Interface IDs. We have moved away from that, see RFC 8064. I think DAD > is still useful to detect misconfigured manually assigned addresses. > > And you cannot 100% discount a collision between pseudorandom IIDs. Yes, > very very unlikely, unless you have lousy random number generators, which > we'll never have, right? > > 
Brian > > > > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From bob.hinden at gmail.com Thu Dec 1 14:24:07 2022 From: bob.hinden at gmail.com (Bob Hinden) Date: Thu, 1 Dec 2022 14:24:07 -0800 Subject: [ih] History of duplicate address tests In-Reply-To: <7f330b01-d5be-2f51-c437-ae051dcb0a45@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <8F4A6932-82A4-4B3D-99CA-4AC1BD71C202@gmail.com> <7f330b01-d5be-2f51-c437-ae051dcb0a45@gmail.com> Message-ID: <2FBB20B3-61C9-4B4B-9B90-84D145FDEF2E@gmail.com> > On Dec 1, 2022, at 11:47 AM, Brian E Carpenter wrote: > > On 02-Dec-22 07:55, Bob Hinden via Internet-history wrote: >> Hi Jack, >>> On Dec 1, 2022, at 10:03 AM, Jack Haverty via Internet-history wrote: >>> >>> Hi Grant, >>> >>> Thanks for the update. I remember reporting the issue to IETF back in the mid 80s and again in the early 90s. I guess it still hasn't risen to the top of anybody's priority list. >> As was noted earlier, Duplicate Address Detection (DAD) was included in IPv6. Our thinking at the time was to detect broken hardware NICs (all with the same mac address) as we were initially using MAC addresses to form Interface IDs. We have moved away from that, see RFC 8064. I think DAD is still useful to detect misconfigured manually assigned addresses. > > And you cannot 100% discount a collision between pseudorandom IIDs. Yes, very very unlikely, unless you have lousy random number generators, which we'll never have, right? Right, or software bugs, but we never have the latter either :-) Bob > > Brian > >> Bob >>> >>> FYI, Flakeway was a user-level program that ran on a Sparc. We didn't have access to the Sun kernel code so it didn't involve any kernel mods. Essentially the Flakeway was a simplified clone of a regular gateway, with the features added for mangling the datagram stream. 
Doing everything at user level made the programming much easier, and worked fine for a debugging tool. >>> >>> Also FYI, there was a similar mechanism we put into the IMPs, called a "Remote Datascope". This took advantage of some ancient code that was originally put in the IMP to gather data for the UCLA measurement work on the neonatal ARPANET. The Datascope simply told the IMP to copy the first N bytes of every packet (filtered to only those of interest) transiting the IMP, and send that data in a separate packet to the Datascope address. This allowed us to "hang a datascope" on a network connection and view computer-computer interactions from somewhere else. No probe hanging on to a wire needed. By setting the "N" value large enough we could capture TCP and IP headers. This proved very useful in debugging problems as hosts were trying to convert to the DDN and get TCP running. I don't recall if we ever put similar functionality into the "core gateways", since they were all connected to some IMP, but it would have been easy to do so. >>> >>> One of the other related but larger issues on the 80s list of "things we need to do" was something I called the "end-middle" scenario. It's a generalization of the ARP issue that Flakeway exploited. >>> >>> The basic notion is that if you look at a chunk of data passing through the network, at any level of protocol, it contains a lot of pieces of information other than the actual user data, such as the various fields in the different headers at different layers. >>> >>> There was a lot of interest in the early days in end-to-end mechanisms, which eventually led to things like SSL, PGP, et al. The "end-middle" scenario broadened that view of the Internet architecture. >>> >>> Basically, each such "chunk" of data in any header in any protocol level is produced somewhere, and consumed somewhere else, possibly in multiple places. 
So a "consumer" of some piece of data typically needs to have some mechanism for assurance that the piece of data was actually produced by who you think produced it, and has not been altered or mangled ("Flaked?") along the way to the consumer. >>> >>> Essentially, there are a lot more "ends" involved in Internet communication than most people think. Many of the "ends" are actually somewhere in the "middle". >>> >>> So, for example, in the context of the IP header, when some box along the path receives an IP datagram, the IP source address should be assured to be the actual source address of the computer you think it is. Similarly, an ARP message conveying an address mapping should be assured to have come from a source that is authorized to report such information. >>> >>> Every field, of every header, that is being used to make programming decisions, no matter where it is produced or consumed, potentially needs to be protected to assure its authenticity. >>> >>> The "end-middle" issue appears at all levels of protocols. Violating it at level 2 made Flakeway possible. But the issue exists even at "app" levels -- e.g., all the "header fields" that you see today in email. IIRC, a similar violation made NAT possible. >>> >>> We recognized in the 80s that such protection mechanisms would be difficult, and consume a lot of computing power that we didn't have, and require protocols and mechanisms that did not yet exist. Also, they weren't really needed for an experimental research network, as long as the architecture permitted them to be added later for operational use. >>> >>> So the "end-middle issue" was on the list of "things we need to do". >>> >>> I guess it's still there. >>> >>> Jack >>> >>> >>> >>> On 11/30/22 18:40, Grant Taylor via Internet-history wrote: >>>> On 11/29/22 2:51 PM, Jack Haverty via Internet-history wrote: >>>>> I'm surprised (and disappointed) if this "feature" still exists. 
>>>> >>>> Well, I think you should be surprised and disappointed as this feature seems to still be very much a thing. >>>> >>>>> Back in the mid 80s, we created a simple software tool we called a "Flakeway". The purpose was to simulate network problems in order to test out a TCP's ability to deal with IP datagrams that were actually dropped, reordered, duplicated, or otherwise mangled in transit. >>>> >>>> Interesting. >>>> >>>> As I read your message I was wondering how many of those things could be implemented with a properly configured Linux kernel / system. Dropped, delayed, and mangled can probably easily be done with firewalling and / or QoS features. >>>> >>>> Reordering might be somewhat more problematic and require changing the path that frames / packets take through the network such that one path has a delay. Thus you send some frames / packets through the delay and others more direct thereby inducing an out of order condition. >>>> >>>> Duplication is a tough nut to crack. Or at least I'm not aware how to go about doing that with firewalling and / or QoS. There might be a way to punt the frame / packet to user space where a daemon would duplicate it and re-inject it into the network. >>>> >>>>> Computers directly connected to the ARPANET never encountered such conditions, so we looked for a way to create them in order to see if a TCP implementation really worked. >>>> >>>> The more that I learn about the ARPANET and NCP the more impressed I am. >>>> >>>> I also see why you needed to test TCP stacks against new problems that didn't exist in the ARPANET. >>>> >>>>> Such a tool would be most useful if it didn't require changes anywhere in the existing equipment. Ideally it would work by somehow inserting itself into the normal path for traffic between two computers interacting using TCP, without those computers knowing that it was there. >>>> >>>> I see two older methods and one newer method to do this. 
The first older method is effectively monkeying with the ARP cache like Flakeway did. Many grey / black hat tools still do this. The second method is to get in path / route between source and destination ostensibly on different subnets passing through a router. The third, newer, method is to leverage something like SDN / OpenFlow to receive a frame / packet, ship it off to a controller to fold, spindle, or mutilate it before sending it back to the egress device. >>>> >>>>> By exploiting the behavior of ARP and common host implementations that Joe mentioned, we created "Flakeway" software. >>>> >>>> I instantly knew /how/ you did -- what I'll call -- the /interception/ bit based on the description. I say this because I know how this works from contemporary network problems / vulnerabilities that -- to the best of my knowledge -- still exist at the end of 2022. >>>> >>>> Does the name / phrase "Firesheep" mean anything to you? >>>> >>>> According to Wikipedia, Firesheep came out in late 2010. Firesheep was targeted at WiFi networks which were inherently broadcast. But my understanding is that it was possible to make it work on traditional LANs, even switched LANs, via ARP techniques that you and Joe referred to, specifically the LAN's default gateway. >>>> >>>>> Typically we would run it on a machine with an Ethernet interface that could deal with lots of traffic, and which could be put in "promiscuous mode" so that the Flakeway would receive all Ethernet traffic regardless of its addresses. The constraint for using the tool was that at least one of the computers under test had to be connected to the same Ethernet segment, so that all its traffic would be visible at the Flakeway's Ethernet interface. At the time, Sun workstations ("SPARCs") were typically available and met the requirements. >>>> >>>> Needing to be on the same network segment as the source and / or destination means that you were completely playing at layer 2. 
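[As a concrete illustration of that layer-2 vantage point: a Flakeway-style tool sitting in promiscuous mode sees raw Ethernet frames and has to parse and filter them itself to pick out the traffic of interest. A minimal sketch of that filtering step -- canned bytes stand in for a real promiscuous-mode capture, and the addresses are invented:]

```python
import struct

def parse_ethernet(frame):
    """Split a raw Ethernet II frame into (dst_mac, src_mac, ethertype, payload)."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return dst, src, ethertype, frame[14:]

def mac_str(b):
    return ":".join(f"{x:02x}" for x in b)

# A canned IPv4-carrying frame; a real tool would read these from a
# promiscuous-mode raw socket instead.
frame = (bytes.fromhex("0200000000020200000000010800")  # dst, src, EtherType 0x0800
         + b"\x45\x00" + b"\x00" * 18)                  # start of a fake IP header

dst, src, ethertype, payload = parse_ethernet(frame)
is_ipv4 = ethertype == 0x0800   # only IP traffic matters for mangling
```

[Everything else -- deciding which flows to delay, duplicate, or drop -- happens in user space on top of this parse, which is what made the original user-level implementation so convenient.]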
>>>> >>>> Newer tools allow attacking systems on different network segments as long as you are in the routed path between said systems. E.g. >>>> >>>> [A]---(1)---[B]---(2)---[C] >>>> >>>> B can run contemporary counterparts to Flakeway which would target R1 & R2's L2 MAC addresses and selectively target A's & C's L3 IP address. }:-) >>>> >>>>> Flakeway would start watching all Ethernet traffic (much like Wireshark today), and then issue an ARP request for the computer being tested. On receiving the ARP reply, it would immediately send its own ARP reply for that same IP address, but specifying itself as the appropriate Ethernet target address. >>>> >>>> *nod*nod* >>>> >>>>> As Joe noted, computers typically flush their address caches on seeing new ARP information, and use the most recent response for any subsequent traffic they send to that IP address. >>>> >>>> I would think that the computers would /update/ their ARP cache for the IP to MAC mapping for the specific victim and not actually /flush/ all of the ARP cache. >>>> >>>>> After completing this setup, all the interesting traffic now flows to the Flakeway machine, which can simply retransmit it to the real Ethernet address of the computer being tested. >>>> >>>> Yep. That's how things still work today. >>>> >>>>> Other than a slight increase in delay, nothing much changes. >>>> >>>> Agreed. >>>> >>>> Really, the only way for clients on the same LAN to have any idea that this is happening is to check the MAC address that each has associated with the other's IP address. >>>> >>>> Incidentally, this is why it's a good idea to check the ARP cache when weird things are happening. Make sure that each machine has the correct MAC address for the other machine. >>>> >>>>> By performing the same procedure for a (real) gateway's Ethernet address, the Flakeway could insert itself into the datagram pathway for traffic in both directions. 
To do the testing, we then set it up so it could delay, duplicate, reorder, discard, and otherwise mangle the IP traffic flow to see how the TCPs dealt with such problems (as they were designed to do). >>>> >>>> That makes perfect sense to me. >>>> >>>>> This tool was very useful. >>>> >>>> Today, Flakeway would be considered a grey hat hacking tool. Just one of many that fall into the category. >>>> >>>>> Programming it took only a day or two. But it did seem to reveal a vulnerability in the protocols. >>>> This vulnerability still very much exists today. >>>> >>>> The vast majority of people assume ~> believe that the L2 LAN broadcast domain is a safe place. That is a VERY dangerous assumption. I'd say that greater than 95% of the time it is safe. But when it's not safe, it's really not safe. >>>> >>>>> E.g., you could easily pretend to be any other Internet computer and possibly convince some user to enter sensitive information or do other nasty things. >>>> >>>> Yep. >>>> >>>>> This was all reported, quietly, to IETF, so it could be fixed. >>>> >>>> Chuckle. >>>> >>>> Based on the following comment, I'm guessing this was reported in the early '80s, correct? >>>> >>>>> Meanwhile, it was a useful tool. >>>> >>>> I'll argue that it still is a useful tool. At least to some people. >>>> >>>>> About ten years later, in the early 90s, I tried using the tool again and discovered that it still worked. >>>> >>>> I'd bet a lunch that it still would work in many situations in late 2022. >>>> >>>>> I don't know if that was because the protocols hadn't been improved, or because the computers involved weren't up to date. >>>> >>>> I suspect it's more the former: the protocols hadn't been improved. >>>> >>>> Stop and think for a moment: what has changed in networking protocols since the time you were messing with them? Not much. Most of the changes happened at higher layers that still depend on the protocols that operate at the lower layers that you're talking about. 
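The delay / duplicate / reorder / discard behavior being discussed can be pictured as a simple loop over the datagram stream. This is a hypothetical, seeded stand-in for that class of tool, not the original Flakeway:

```python
import random

def flake(datagrams, seed=42, drop_p=0.2, dup_p=0.2, swap_p=0.2):
    """Randomly discard, duplicate, and reorder a datagram stream, the
    way a TCP stress tool in this class might. Hypothetical sketch."""
    rng = random.Random(seed)          # seeded, so runs are reproducible
    out = []
    for dg in datagrams:
        if rng.random() < drop_p:
            continue                   # discard
        out.append(dg)
        if rng.random() < dup_p:
            out.append(dg)             # duplicate
        if len(out) >= 2 and rng.random() < swap_p:
            out[-1], out[-2] = out[-2], out[-1]   # reorder a pair
    return out

# With all probabilities zero the stream passes through untouched;
# otherwise TCP's retransmission and reassembly must cope with the mess.
assert flake(list(range(10)), drop_p=0, dup_p=0, swap_p=0) == list(range(10))
```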
>>>> >>>>> Curiously, as it became more common to use "switched" Ethernet, the tool required more careful configuration of the machines involved to make sure the traffic flows were visible to the Flakeway. >>>> >>>> Ya. With switches, you either MUST use ARP techniques /or/ you need to continually flood the switch and bust it down to a hub. >>>> >>>> Promiscuous mode doesn't mean nearly as much as it used to. It used to mean everything on / passing through the LAN segment. Now it only means everything on the host. Different implementations will see different types of traffic, direct or direct + broadcast. >>>> >>>>> Now, with the pervasive use of Wifi, and the broadcast nature of radio, maybe the pendulum has swung back. >>>> >>>> The pendulum never really swung completely away. But yes, older WiFi very much so did swing it back. >>>> >>>> I say older WiFi, as in not fairly new WiFi. This is because fairly new (maybe just before contemporary) WiFi was going to start using different encryption between each client and the AP. Thus clients couldn't decrypt RF broadcasted traffic that was unicast IP traffic to another machine on the LAN. >>>> >>>> According to Wikipedia, IPsec pre-dates SSL. IPsec being early '90s and SSL being mid to late '90s. Then in the late 2000s MACsec started to become a thing. >>>> >>>> Both IPsec and SSL move the authentication / integrity / confidentiality higher up the stack than the layer 2 ARP attacks being discussed. MACsec is the first thing that I'm aware of that actually addresses the crux of the layer 2 ARP attack. >>>> >>>>> So, a little piece of history of address idiosyncracies.... and use of duplicate addresses. >>>> >>>> It is history. But I think the underlying layer 2 ARP problem is still very much with us today in late 2022. 
>>>> >>>> I was recently listening to the episode 27 of APNIC's PING podcast -- Wither RPKI -- wherein one of the hosts mentioned that people have tended to prefer to address the authentication / integrity / confidentiality at the top layer of the OSI stack as this tends to work across more things in an end to end manner. MACsec is L2 and inherently local to the broadcast domain. IPsec is L3 but is limited to IP addresses. Whereas TLS is L7 and can actually ride across multiple underlying transports, not all of which involve IP. What's more is that it's relatively easy to co-mingle multiple TLS (L7) endpoints on a single IP address via things like (e)SNI. >>>> >>>> Some later parts of the podcast even went so far as to say that IP addresses aren't important any more. Their justification is that with things like (e)SNI and CDNs, IP addresses aren't as important as they once were. -- I've got to say, the comments make some sense. Not that I like what they are saying. >>>> >>>> >>>> >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From internet-history at gtaylor.tnetconsulting.net Thu Dec 1 17:38:20 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Thu, 1 Dec 2022 18:38:20 -0700 Subject: [ih] History of duplicate address tests In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> Message-ID: <28f7839f-49ef-72ef-9217-c718c9e6ad73@spamtrap.tnetconsulting.net> On 12/1/22 11:03 AM, Jack Haverty via Internet-history wrote: > Hi Grant, Hi Jack, > Thanks for the update. You're welcome. > I remember reporting the issue to IETF back in the mid 80s and again in > the early 90s. 
I guess it still hasn't risen to the top of anybody's > priority list. I suspect that multiple people have looked at / thought about / pondered it. But there is a huge backwards compatibility issue. Just look at how quickly the industry is adopting IPv6 which is somewhat backwards compatible with IPv4. }:-) > FYI, Flakeway was a user-level program that ran on a Sparc. We didn't > have access to the Sun kernel code so it didn't involve any kernel > mods. Nicely done. > Essentially the Flakeway was a simplified clone of a regular gateway, > with the features added for mangling the datagram stream. Doing > everything at user level made the programming much easier, and worked > fine for a debugging tool. Was routing / forwarding not in kernel space? Am I too young to appreciate routing / forwarding not being in kernel space? > Also FYI, there was a similar mechanism we put into the IMPs, called a > "Remote Datascope". This took advantage of some ancient code that was > originally put in the IMP to gather data for the UCLA measurement work > on the neonatal ARPANET. The Datascope simply told the IMP to copy the > first N bytes of every packet (filtered to only those of interest) > transiting the IMP, and send that data in a separate packet to the > Datascope address. This allowed us to "hang a datascope" on a network > connection and view computer-computer interactions from somewhere else. > No probe hanging on to a wire needed. By setting the "N" value large > enough we could capture TCP and IP headers. This proved very useful in > debugging problems as hosts were trying to convert to the DDN and get > TCP running. I don't recall if we ever put similar functionality into > the "core gateways", since they were all connected to some IMP, but it > would have been easy to do so. This sort of reminds me of mirror / span ports on switches. 
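The Remote Datascope mechanism Jack describes -- copy the first N bytes of each matching packet off to a remote viewer while the packet itself transits untouched -- can be sketched as a toy model (hypothetical; not the IMP code):

```python
# Toy model of the IMP "Remote Datascope" described above: copy the
# first N bytes of each matching packet to a remote viewer, while the
# packet itself transits unmodified. Hypothetical sketch, not IMP code.

captured = []          # stands in for packets sent to the Datascope address

def datascope_tap(packet, n=40, interesting=lambda p: True,
                  send=captured.append):
    """Forward the packet unchanged; side-copy its first n bytes."""
    if interesting(packet):
        send(packet[:n])   # 40 bytes covers option-free IP + TCP headers
    return packet          # transit traffic is untouched

pkt = bytes(range(100))
assert datascope_tap(pkt) == pkt        # forwarding is unaffected
assert captured[0] == pkt[:40]          # the viewer got just the headers
```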
> One of the other related but larger issues on the 80s list of "things we > need to do" was something I called the "end-middle" scenario. It's a > generalization of the ARP issue that Flakeway exploited. > > The basic notion is that if you look at a chunk of data passing through > the network, at any level of protocol, it contains a lot of pieces of > information other than the actual user data, such as the various fields > in the different headers at different layers. > > There was a lot of interest in the early days in end-to-end mechanisms, > which eventually led to things like SSL, PGP, et al. The "end-middle" > scenario broadened that view of the Internet architecture. > > Basically, each such "chunk" of data in any header in any protocol level > is produced somewhere, and consumed somewhere else, possibly in multiple > places. So a "consumer" of some piece of data typically needs to have > some mechanism for assurance that the piece of data was actually > produced by who you think produced it, and has not been altered or > mangled ("Flaked?") along the way to the consumer. I feel like -- what I'm going to call -- the veracity of data is still a difficult problem today. In fact, the only things that I'm aware of that address them in any capacity require various forms of cryptography to either sign and / or encrypt for subsequent verification / decryption by consumers. > Essentially, there are a lot more "ends" involved in Internet > communication than most people think. Many of the "ends" are actually > somewhere in the "middle". > > So, for example, in the context of the IP header, when some box along > the path receives an IP datagram, the IP source address should be > assured to be the actual source address of the computer you think it > is. Similarly, an ARP message conveying an address mapping should be > assured to have come from a source that is authorized to report such > information. 
How do you enforce such authorization without cryptography? The only thing that comes to mind is that network equipment becomes WAY more Draconian and requires static configuration of what is, and thus is not, allowed on various ports. We can't even get people to implement uRPF on access networks to ensure that people aren't spoofing source IPs outside of said access network. > Every field, of every header, that is being used to make programming > decisions, no matter where it is produced or consumed, potentially needs > to be protected to assure its authenticity. That is actually a really big ask. Especially if you want to do it without cryptography. > The "end-middle" issue appears at all levels of protocols. Violating it > at level 2 made Flakeway possible. But the issue exists even at "app" > levels -- e.g., all the "header fields" that you see today in email. > IIRC, a similar violation made NAT possible. I don't know that Flakeway violated L2. Unless you are implying that it did so by receiving frames passing through multiple layers and then re-implementing L2 again in Flakeway in user space. I say this because lying / ARP spoofing is done at layer 2. Aside: It is possible to run Ethernet networks with multiple (sets / pairs of) systems using duplicate IP addresses /as/ /long/ /as/ they have non-conflicting MAC addresses. You do this by statically configuring ARP entries. }:-) A = .1 = AA:AA:AA:AA:AA:AA B = .2 = BB:BB:BB:BB:BB:BB C = .1 = CC:CC:CC:CC:CC:CC D = .2 = DD:DD:DD:DD:DD:DD Using static ARP entries on all systems, A & B can communicate with each other, and C & D can communicate with each other. If we extend this to include static ARP entries for all cooperating systems in a network, we can be largely immune to IP spoofing. Though there's not much that can be done about MAC spoofing. And if someone is spoofing MAC addresses, they are likely also spoofing IP addresses. 
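The duplicate-IP aside above can be checked with a tiny model (hypothetical; not real network code): Ethernet delivery is by MAC address, and each host resolves IPs only through its own static table, so the two pairs never collide.

```python
# Each host's static ARP table maps peer IP -> peer MAC; frames are
# delivered by MAC address, so two pairs of hosts can reuse .1 and .2.
# Hypothetical model of the static-ARP aside above, not real network code.

static_arp = {           # host MAC -> its static ARP table (IP -> MAC)
    "AA": {".2": "BB"},  # A is .1, talks to B
    "BB": {".1": "AA"},  # B is .2, talks to A
    "CC": {".2": "DD"},  # C is also .1, talks to D
    "DD": {".1": "CC"},  # D is also .2, talks to C
}

def resolve(src_mac, dst_ip):
    """No broadcast ARP query is sent, hence no spoofed reply can poison us."""
    return static_arp[src_mac][dst_ip]

assert resolve("AA", ".2") == "BB"   # A & B communicate with each other
assert resolve("CC", ".2") == "DD"   # C & D reuse the same IPs, no conflict
```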
> We recognized in the 80s that such protection mechanisms would be > difficult, I think that they are still difficult today, in late 2022. > and consume a lot of computing power that we didn't have, and require > protocols and mechanisms that did not yet exist. Thankfully we now have some computing power to spare and some protocols that help with this; MACsec, IPsec, and TLS. > Also, they weren't really needed for an experimental research network, > as long as the architecture permitted them to be added later for > operational use. #truth > So the "end-middle issue" was on the list of "things we need to do". > > I guess it's still there. I think it's fair to say that it's still on the list. It's probably also fair to say many have never known about and / or have forgotten about it. -- Grant. . . . unix || die From brian.e.carpenter at gmail.com Thu Dec 1 18:16:23 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 2 Dec 2022 15:16:23 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> Message-ID: <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> I'm not sure whether this actually started before RFC1310 (March 1992), but certainly since then there have been multiple steps on the standards track: Proposed Standard, Draft Standard (no longer assigned) and Internet Standard. (Rumour has it that this started in pure imitation of the ISO standards process. Vint can probably speak to the truth of that.) But, as I first heard from Fred Baker, "The Internet runs on Proposed Standards", because most IETFers can't be bothered with the bureaucracy to take the next step. Draft Standard was abolished for new work to reduce the bureaucracy, but it hasn't had much effect. 
We did advance IPv6 to Internet Standard, but most WGs just don't bother. In any case, the formal "STD" designation doesn't really mean much. For a current non-IETF effort, I've drawn a diagram about how to interpret the status of RFCs. It can be found at https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md Regards Brian Carpenter On 02-Dec-22 09:52, touch at strayalpha.com wrote: > On Nov 30, 2022, at 1:36 PM, Jack Haverty wrote: >> >> Well, maybe... >> >> RFC5227 describes itself as a proposed standard. Has it subsequently become an actual standard? I don't see it in the "Official Internet Protocol Standards" maintained at rfc-editor.org but maybe it had later revisions. > > That distinction isn't all that significant. There are a LOT of protocols that never progressed beyond the initial "PS" status: > https://www.rfc-editor.org/standards#PS > Progression requires not only some specific hurdles, but also the will and effort of someone to walk the spec through that process. The latter is more often the limitation. > >> If it or a descendant is a Standard, does that prevent the creation of "tools" such as the Flakeway I described? RFCs are full of "SHOULD" and "MUST" directives, which systems such as Flakeway probably violated. If RFC5227 was universally and correctly implemented, would it prevent someone from implementing a Flakeway-like tool, assuming of course they don't feel the need to follow the RFCs' rules? >> >> If RFC5227 et al do in fact prevent such behavior, how does one know whether or not the proscribed mechanisms are actually present in one's equipment? I just looked and I have 54 devices on my home Ethernet. Some are wired, some are wifi, and from many different companies. How do I tell if they've all correctly implemented the mechanisms proscribed in the RFCs? > > The IETF provides no mechanisms for protocol validation. That's true for all MUSTs, SHOULDs, and MAYs for all protocols. 
> >> So, is it really "fixed" even today? >> >> I guess it depends on how you define "fixed". > > Doesn't it always? :-) > > Joe > > From internet-history at gtaylor.tnetconsulting.net Thu Dec 1 18:22:56 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Thu, 1 Dec 2022 19:22:56 -0700 Subject: [ih] History of duplicate address tests In-Reply-To: References: Message-ID: <6e2e032a-89e2-6b29-7b83-87af81ec8016@spamtrap.tnetconsulting.net> On 12/1/22 2:19 PM, John Shoch via Internet-history wrote: > When considering going from manual assignment of Experimental Ethernet > addresses to a semi-automatic generation of 48-bit addresses to be > blown into a PROM on a board: > > --We thought, "It may not be perfect, but it's certainly more reliable > than trying to scale the manual process!" Fair enough. > --"But what are the odds a bit blown into a PROM will heal, and produce > a duplicate ID?" "Let's just make sure the odds of that are LESS than > the odds of your machine catching fire or dying from a power surge; > or your building being destroyed by lightning, flood, or earthquake; > or someone typing Delete *.*" :-) > --"Have a backup and recovery plan!" "And if you don't have a recovery > plan for fire, lightning, flood, earthquake, or fumble-fingers, > don't complain about lower-probability network events....." I like that. > PS: In a similar vein, I would sometimes field provocative questions, > "Why don't you have encryption on the Ethernet?" Are you familiar with 802.1AE, a.k.a. MACsec? I wonder if MACsec might be an answer for some of the provocative people you're talking about. I've messed with MACsec with manual keying between some Linux systems on my LAN. I've not yet messed with the MACsec Key Agreement (a.k.a. MKA) protocol. Even ARP is encrypted and indistinguishable from line noise. 
}:-) > I would merely observe, "We do have a project to build a crypto box > in front of the Ethernet transceiver, for serious government customers > with Tempest needs. But do you shred all your letters and print-outs > before they go in the dumpster? If not, you have worse problems > than your Ethernet....." As I was describing it to someone in the grc.security newsgroup, the biggest value that I see in MACsec is on the link between two buildings where someone could tap said link and get up to mischief. Assuming that the two offices are moderately physically secure buildings and the cable runs in a conduit down the block between the buildings. ;-) -- Grant. . . . unix || die From vint at google.com Thu Dec 1 18:39:58 2022 From: vint at google.com (Vint Cerf) Date: Thu, 1 Dec 2022 21:39:58 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: Brian is pretty much right - the "draft" step was to try to assure that independently implemented instances of a protocol would interwork successfully as a way of judging the quality of the spec. Copying the ISO sequence seemed reasonable at the time. Pragmatically, we have tended to deploy even sooner than "draft" would imply. v On Thu, Dec 1, 2022 at 9:16 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > I'm not sure whether this actually started before RFC1310 (March 1992), > but certainly since then there have been multiple steps on the standards > track: Proposed Standard, Draft Standard (no longer assigned) and Internet > Standard. > > (Rumour has it that this started in pure imitation of the ISO standards > process. 
Vint can probably speak to the truth of that.) > > But, as I first heard from Fred Baker, "The Internet runs on Proposed > Standards", because most IETFers can't be bothered with the bureaucracy to > take the next step. Draft Standard was abolished for new work to reduce the > bureaucracy, but it hasn't had much effect. We did advance IPv6 to Internet > Standard, but most WGs just don't bother. > > In any case, the formal "STD" designation doesn't really mean much. > > For a current non-IETF effort, I've drawn a diagram about how to interpret > the status of RFCs. It can be found at > https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md > > Regards > Brian Carpenter > > On 02-Dec-22 09:52, touch at strayalpha.com wrote: > > On Nov 30, 2022, at 1:36 PM, Jack Haverty wrote: > >> > >> Well, maybe... > >> > >> RFC5227 describes itself as a proposed standard. Has it subsequently > become an actual standard? I don't see it in the "Official Internet > Protocol Standards" maintained at rfc-editor.org but maybe it had later > revisions. > > > > That distinction isn't all that significant. There are a LOT of > protocols that never progressed beyond the initial "PS" status: > > https://www.rfc-editor.org/standards#PS > > Progression requires not only some specific hurdles, but also the will > and effort of someone to walk the spec through that process. The latter is > more often the limitation. > > > >> If it or a descendant is a Standard, does that prevent the creation of > "tools" such as the Flakeway I described? RFCs are full of "SHOULD" and > "MUST" directives, which systems such as Flakeway probably violated. If > RFC5227 was universally and correctly implemented, would it prevent someone > from implementing a Flakeway-like tool, assuming of course they don't feel > the need to follow the RFCs' rules? 
> >> > >> If RFC5227 et al do in fact prevent such behavior, how does one know > whether or not the proscribed mechanisms are actually present in one's > equipment? I just looked and I have 54 devices on my home Ethernet. Some > are wired, some are wifi, and from many different companies. How do I tell > if they've all correctly implemented the mechanisms proscribed in the RFCs? > > > > The IETF provides no mechanisms for protocol validation. That's true for > all MUSTs, SHOULDs, and MAYs for all protocols. > > > >> So, is it really "fixed" even today? > >> > >> I guess it depends on how you define "fixed". > > > > Doesn't it always? :-) > > > > Joe > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From j at shoch.com Thu Dec 1 19:50:32 2022 From: j at shoch.com (John Shoch) Date: Thu, 1 Dec 2022 19:50:32 -0800 Subject: [ih] Internet-history Digest, Vol 39, Issue 4 In-Reply-To: References: Message-ID: "Are you familiar with 802.1AE, a.k.a. MACsec?" Of course, the security overlay on 802 came much later. In the 1980's the government was interested in enhanced "high grade" security for what was then the DIX Ethernet. They facilitated a joint effort between Xerox and a 3rd party. It was a hard problem: a box between a host and a transceiver, custom silicon, government review of everything, etc. Required manual distribution of a digital key, on a physical PROM-key device. 
Developed in the mid-1980's, it led to a product introduction ca.1989, as the Xerox Encryption Unit: https://techmonitor.ai/technology/xerox_device_for_encrypted_open_traffic_on_one_net From a history of network encryption, http://www.toad.com/gnu/netcrypt.html : "Mark Vondemkamp, MarkVon at aol.com Xerox started selling the Xerox Encryption Unit around 1990. The XEU was a layer 2 (Ethernet/802.3) network encryption device. Wang started selling the Trusted Interface Unit around 1990. The TIU was a layer 2 (Ethernet/802.3) and layer 3 (IP) network encryption device. These products were based on technology developed by Ultron Labs which started around 1985 by Ultron Labs." "Stephen Kent, kent at bbn.com The XEU and TIU are good examples of inline network crypto from the latter 80s...." From internet-history at gtaylor.tnetconsulting.net Thu Dec 1 21:33:52 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Thu, 1 Dec 2022 22:33:52 -0700 Subject: [ih] Internet-history Digest, Vol 39, Issue 4 In-Reply-To: References: Message-ID: On 12/1/22 8:50 PM, John Shoch via Internet-history wrote: > From a history of network encryption, http://www.toad.com/gnu/netcrypt.html Oh ... that's an interesting read. It poses a LOT more questions than it answers. I'd love to find something like Where Wizards Stay Up Late - The Origins of the Internet but for network encryption as summarized in the Network Encryption - history and patents document. There are a LOT of abbreviations that I don't recognize and would like to learn more about. -- Grant. . . . 
unix || die From cabo at tzi.org Thu Dec 1 22:25:19 2022 From: cabo at tzi.org (Carsten Bormann) Date: Fri, 2 Dec 2022 07:25:19 +0100 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> On 2. Dec 2022, at 03:16, Brian E Carpenter via Internet-history wrote: > > In any case, the formal "STD" designation doesn't really mean much. I don't quite agree. This inconspicuous label has helped provide motivation for some significant efforts that led to tremendously improved specifications. Going to STD is pretty much the only IETF activity that creates legitimate space for considerable reflection. That said, I'd say that the *absence* of the formal "STD" designation doesn't really mean much. The fact that we label our regular-quality specifications "Proposed Standard" is highly misleading to people not familiar with the arcana of the process. If I were tasked to name the single most damaging self-inflicted feature of the process, this would probably be that label. Grüße, Carsten From jack at 3kitty.org Fri Dec 2 10:01:07 2022 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 2 Dec 2022 10:01:07 -0800 Subject: [ih] Internet-history Digest, Vol 39, Issue 4 In-Reply-To: References: Message-ID: <7edf3ce6-2876-7efa-7ebc-8828fdbedebc@3kitty.org> Been there, done that -- I was involved in a similar project back in the 80s. Definitely a Hard Problem. But not the only one that had to be solved. 
Even if you got all of the security mechanisms perfect - hardware, protocols, algorithms, etc., there was also the hard problem of believing that all the underlying computers were also doing only what they were supposed to do when in secure operation. There's a long pathway from specification to running machine instructions and bits on wires. At the time of the early Internet research, government funding was also working on "trusted computing". E.g., how do you make an operating system that you can prove is working as designed, and running programs that do only what their programmers intended them to do. That's a very hard problem. Essentially means there can't be any bugs or flaws (what we now call "vulnerabilities") in any code, and you have to somehow be able to prove that as a fact before you can trust the system to do what you designed it to do. I remember there were projects with names like PSOS (IIRC "Provably Secure Operating System") and KSOS ("Kernelized Secure Operating System"). And of course Multics which had a focus on security. Probably many others too. But once that problem was solved, and secure networking algorithms and protocols also designed and implemented to run on those Proven computers, the Internet would be trustworthy. Haven't heard much about such efforts for decades... were those efforts abandoned? It seems every computer I use today is continuously getting updates to fix severe vulnerabilities. Has Insecurity now become the norm? Jack On 12/1/22 19:50, John Shoch via Internet-history wrote: > "Are you familiar with 802.1AE, a.k.a. MACsec?" > > Of course, the security overlay on 802 came much later. > > In the 1980's the government was interested in enhanced "high grade" > security for what was then the DIX Ethernet. They facilitated a joint > effort between Xerox and a 3rd party. > > It was a hard problem: a box between a host and a transceiver, custom > silicon, government review of everything, etc. 
Required manual > distribution of a digital key, on a physical PROM-key device. > > Developed in the mid-1980's, it led to a product introduction ca.1989, as > the Xerox Encryption Unit: > https://techmonitor.ai/technology/xerox_device_for_encrypted_open_traffic_on_one_net > > From a history of network encryption, http://www.toad.com/gnu/netcrypt.html > : > > "Mark Vondemkamp, MarkVon at aol.com > Xerox started selling the Xerox Encryption Unit around 1990. The XEU was a > layer 2 (Ethernet/802.3) network encryption device. > Wang started selling the Trusted Interface Unit around 1990. The TIU was a > layer 2 (Ethernet/802.3) and layer 3 (IP) network encryption device. > These products were based on technology developed by Ultron Labs which > started around 1985 by Ultron Labs." > > "Stephen Kent, kent at bbn.com > The XEU and TIU are good examples of inline network crypto from the latter > 80s...." From brian.e.carpenter at gmail.com Fri Dec 2 16:02:05 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sat, 3 Dec 2022 13:02:05 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> Message-ID: <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> On 02-Dec-22 19:25, Carsten Bormann wrote: > On 2. Dec 2022, at 03:16, Brian E Carpenter via Internet-history wrote: >> >> In any case, the formal "STD" designation doesn't really mean much. > > I don't quite agree. 
> Going to STD is pretty much the only IETF activity that creates legitimate space for considerable reflection. > > That said, I'd say that the *absence* of the formal "STD" designation doesn't really mean much. > The fact that we label our regular-quality specifications "Proposed Standard" is highly misleading to people not familiar with the arcana of the process. > If I were tasked to name the single most damaging self-inflicted feature of the process, this would probably be that label. Since this is the history list, I will limit myself to observing that there have been several attempts in recent history to reduce the standards track to a single stage, and it seems that interest in this question and energy to discuss it is approximately zero. See the above subject header. Brian From sob at sobco.com Fri Dec 2 16:24:07 2022 From: sob at sobco.com (Scott Bradner) Date: Fri, 2 Dec 2022 19:24:07 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> Message-ID: note that, starting with RFC 2026, the move out of proposed std requires that two independent licenses of any claimed IPR be shown - this was put in as a way to empirically judge if the license terms were fair Scott > On Dec 2, 2022, at 7:02 PM, Brian E Carpenter via Internet-history wrote: > > On 02-Dec-22 19:25, Carsten Bormann wrote: >> On 2. Dec 2022, at 03:16, Brian E Carpenter via Internet-history wrote: >>> >>> In any case, the formal "STD" designation doesn't really mean much. >> I don't quite agree. 
This inconspicuous label has helped provide motivation for some significant efforts that led to tremendously improved specifications. >> Going to STD is pretty much the only IETF activity that creates legitimate space for considerable reflection. >> That said, I'd say that the *absence* of the formal "STD" designation doesn't really mean much. >> The fact that we label our regular-quality specifications "Proposed Standard" is highly misleading to people not familiar with the arcana of the process. >> If I were tasked to name the single most damaging self-inflicted feature of the process, this would probably be that label. > > Since this is the history list, I will limit myself to observing that there have been several attempts in recent history to reduce the standards track to a single stage, and it seems that interest in this question and energy to discuss it is approximately zero. > See the above subject header. > > Brian > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Fri Dec 2 16:48:22 2022 From: jeanjour at comcast.net (John Day) Date: Fri, 2 Dec 2022 19:48:22 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> Message-ID: The only reason that ISO has a two-level approval is that the first is to get consensus from the committee that may have several working groups (e.g. 
SC6 has 3 or 4 working groups that cover the Physical Layer, Data Link Layer including LANs, and Networking) and the second to get consent across committees (computing and networking, banking, highway signs, paper size, avionics, drones, etc.) that cover even more diverse topics. The IETF at least initially didn't really have that diversity, or could one argue that the two levels were the Areas and the IESG? > On Dec 2, 2022, at 19:02, Brian E Carpenter via Internet-history wrote: > > On 02-Dec-22 19:25, Carsten Bormann wrote: >> On 2. Dec 2022, at 03:16, Brian E Carpenter via Internet-history wrote: >>> >>> In any case, the formal "STD" designation doesn't really mean much. >> I don't quite agree. This inconspicuous label has helped provide motivation for some significant efforts that led to tremendously improved specifications. >> Going to STD is pretty much the only IETF activity that creates legitimate space for considerable reflection. >> That said, I'd say that the *absence* of the formal "STD" designation doesn't really mean much. >> The fact that we label our regular-quality specifications "Proposed Standard" is highly misleading to people not familiar with the arcana of the process. >> If I were tasked to name the single most damaging self-inflicted feature of the process, this would probably be that label. > > Since this is the history list, I will limit myself to observing that there have been several attempts in recent history to reduce the standards track to a single stage, and it seems that interest in this question and energy to discuss it is approximately zero. > See the above subject header. > > Brian > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From agmalis at gmail.com Sat Dec 3 06:34:54 2022 From: agmalis at gmail.com (Andrew G.
Malis) Date: Sat, 3 Dec 2022 09:34:54 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: Brian et al, Having worked for both a bunch of vendors and a major operator, I think it's more accurate to say that the Internet runs on a mix of IETF Standards, Proposed Standards, internet drafts, and various proprietary features from either a single vendor, or several cooperating vendors pushed together by a common customer. In addition, operators have been known to develop and use their own proprietary HW and/or SW as well. Cheers, Andy On Thu, Dec 1, 2022 at 9:16 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > I'm not sure whether this actually started before RFC1310 (March 1992), > but certainly since then there have been multiple steps on the standards > track: Proposed Standard, Draft Standard (no longer assigned) and Internet > Standard. > > (Rumour has it that this started in pure imitation of the ISO standards > process. Vint can probably speak to the truth of that.) > > But, as I first heard from Fred Baker, "The Internet runs on Proposed > Standards", because most IETFers can't be bothered with the bureaucracy to > take the next step. Draft Standard was abolished for new work to reduce the > bureaucracy, but it hasn't had much effect. We did advance IPv6 to Internet > Standard, but most WGs just don't bother. > > In any case, the formal "STD" designation doesn't really mean much. > > For a current non-IETF effort, I've drawn a diagram about how to interpret > the status of RFCs. 
It can be found at > https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md > > Regards > Brian Carpenter > > On 02-Dec-22 09:52, touch at strayalpha.com wrote: > > On Nov 30, 2022, at 1:36 PM, Jack Haverty wrote: > >> > >> Well, maybe... > >> > >> RFC5227 describes itself as a proposed standard. Has it subsequently > become an actual standard? I don't see it in the "Official Internet > Protocol Standards" maintained at rfc-editor.org but maybe it had later > revisions. > > > > That distinction isn't all that significant. There are a LOT of > protocols that never progressed beyond the initial "PS" status: > > https://www.rfc-editor.org/standards#PS > > Progression requires not only some specific hurdles, but also the will > and effort of someone to walk the spec through that process. The latter is > more often the limitation. > > > >> If it or a descendant is a Standard, does that prevent the creation of > "tools" such as the Flakeway I described? RFCs are full of "SHOULD" and > "MUST" directives, which systems such as Flakeway probably violated. If > RFC5227 was universally and correctly implemented, would it prevent someone > from implementing a Flakeway-like tool, assuming of course they don't feel > the need to follow the RFCs' rules? > >> > >> If RFC5227 et al do in fact prevent such behavior, how does one know > whether or not the prescribed mechanisms are actually present in one's > equipment? I just looked and I have 54 devices on my home Ethernet. Some > are wired, some are wifi, and from many different companies. How do I tell > if they've all correctly implemented the mechanisms prescribed in the RFCs? > > > > The IETF provides no mechanisms for protocol validation. That's true for > all MUSTs, SHOULDs, and MAYs for all protocols. > > > >> So, is it really "fixed" even today? > >> > >> I guess it depends on how you define "fixed". > > > > Doesn't it always?
:-) > > > > Joe > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From agmalis at gmail.com Sat Dec 3 08:02:24 2022 From: agmalis at gmail.com (Andrew G. Malis) Date: Sat, 3 Dec 2022 11:02:24 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: Toerless, I ran the process on one of my RFCs (Multiprotocol Interconnect over Frame Relay) back when there were three stages: RFCs 1294 (PS), 1490 (DS), and finally 2427 (STD 55). It was not an inconsiderable amount of work. And by the time we made it to full STD, we knew that it wasn't "crap"; the IP and bridging over FR market was working its way up to the billion $ mark between vendor equipment and operator revenue. We knew it worked, and worked well! Cheers, Andy On Sat, Dec 3, 2022 at 10:45 AM Toerless Eckert wrote: > I was recently trying to explain to my WG a bit how to "upgrade" a > standards > track RFC to a "full" IETF standard: > > https://mailarchive.ietf.org/arch/msg/anima/SacPSgFBbb0aOFAl9EcrZP8LXGk/ > > Including to promote the idea of considering to invest the work to do so, > because it turns your work into a "One percenter". Aka: roughly 10,000 RFCs, > but only about 100 full IETF standards. > > Of course, it does help too to point out something is a full internet > standard > by referring to it not via its RFC number, but via its STD number. Same > as > BCP. > > With all that being said, and as much as i like the idea to have more of > our > work go through that process, i also have to point out, that this may > simply > result in promoting crap. Even when we know it better.
Aka: The bar for > full Internet standard just means it is deployed and that nobody manages at > the short time of the upgrade happening to complain about the spec not > really working well enough to afford the title of full Internet standard. > > I was just raising the concern about one of the core IP multicast routing > standards where i think the upgrade was not considering real deployment > issues. But of course, i missed the upgrade window by 2 years, aka: i > hadn't > followed the work when the upgrade happened. > (STD83, > https://mailarchive.ietf.org/arch/msg/routing-discussion/5EuPCcXRjmf4skMkUTJrMKhst74 > ) > > Oh well... > > Cheers > Toerless > > On Sat, Dec 03, 2022 at 09:34:54AM -0500, Andrew G. Malis via > Internet-history wrote: > > Brian et al, > > > > Having worked for both a bunch of vendors and a major operator, I think > > it's more accurate to say that the Internet runs on a mix of IETF > > Standards, Proposed Standards, internet drafts, and > > various proprietary features from either a single vendor, or several > > cooperating vendors pushed together by a common customer. In addition, > > operators have been known to develop and use their own proprietary HW > > and/or SW as well. > > > > Cheers, > > Andy > > > > > > On Thu, Dec 1, 2022 at 9:16 PM Brian E Carpenter via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > I'm not sure whether this actually started before RFC1310 (March 1992), > > > but certainly since then there have been multiple steps on the > standards > > > track: Proposed Standard, Draft Standard (no longer assigned) and > Internet > > > Standard. > > > > > > (Rumour has it that this started in pure imitation of the ISO standards > > > process. Vint can probably speak to the truth of that.) > > > > > > But, as I first heard from Fred Baker, "The Internet runs on Proposed > > > Standards", because most IETFers can't be bothered with the > bureaucracy to > > > take the next step. 
Draft Standard was abolished for new work to > reduce the > > > bureaucracy, but it hasn't had much effect. We did advance IPv6 to > Internet > > > Standard, but most WGs just don't bother. > > > > > > In any case, the formal "STD" designation doesn't really mean much. > > > > > > For a current non-IETF effort, I've drawn a diagram about how to > interpret > > > the status of RFCs. It can be found at > > > > https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md > > > > > > Regards > > > Brian Carpenter > > > > > > On 02-Dec-22 09:52, touch at strayalpha.com wrote: > > > > On Nov 30, 2022, at 1:36 PM, Jack Haverty wrote: > > > >> > > > >> Well, maybe... > > > >> > > > >> RFC5227 describes itself as a proposed standard. Has it > subsequently > > > become an actual standard? I don't see it in the "Official Internet > > > Protocol Standards" maintained at rfc-editor.org but maybe it had > later > > > revisions. > > > > > > > > That distinction isn't all that significant. There are a LOT of > > > protocols that never progressed beyond the initial "PS" status: > > > > https://www.rfc-editor.org/standards#PS > > > > Progression requires not only some specific hurdles, but also the > will > > > and effort of someone to walk the spec through that process. The > latter is > > > more often the limitation. > > > > > > > >> If it or a descendant is a Standard, does that prevent the creation > of > > > "tools" such as the Flakeway I described? RFCs are full of "SHOULD" > and > > > "MUST" directives, which systems such as Flakeway probably violated. > If > > > RFC5227 was universally and correctly implemented, would it prevent > someone > > > from implementing a Flakeway-like tool, assuming of course they don't > feel > > > the need to follow the RFCs' rules? > > > >> > > > >> If RFC5227 et al do in fact prevent such behavior, how does one know > > > whether or not the prescribed mechanisms are actually present in one's > > > equipment?
I just looked and I have 54 devices on my home Ethernet. > Some > > > are wired, some are wifi, and from many different companies. How do I > tell > > > if they've all correctly implemented the mechanisms prescribed in the > RFCs? > > > > > > > > The IETF provides no mechanisms for protocol validation. That's true > for > > > all MUSTs, SHOULDs, and MAYs for all protocols. > > > > > > > >> So, is it really "fixed" even today? > > > >> > > > >> I guess it depends on how you define "fixed". > > > > > > > > Doesn't it always? :-) > > > > > > > > Joe > > > > > > > > > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > --- > tte at cs.fau.de > From stewart at serissa.com Sat Dec 3 08:35:40 2022 From: stewart at serissa.com (Larry Stewart) Date: Sat, 3 Dec 2022 11:35:40 -0500 Subject: [ih] WHOIS Message-ID: <76E7C107-CE84-465E-9956-76188DF5CB9A@serissa.com> > Are there archived old copies of the Arpanet WHOIS database anywhere? -Larry From jeanjour at comcast.net Sat Dec 3 08:43:43 2022 From: jeanjour at comcast.net (John Day) Date: Sat, 3 Dec 2022 11:43:43 -0500 Subject: [ih] Looking for Assigned Numbers In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: Does anyone know where one might find the Assigned Numbers register for the IPv4 protocol-ids as of between 1985 and 2000? Just curious.
Thanks, John Day From sob at sobco.com Sat Dec 3 08:48:43 2022 From: sob at sobco.com (Scott Bradner) Date: Sat, 3 Dec 2022 11:48:43 -0500 Subject: [ih] Looking for Assigned Numbers In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: there are a number of assigned numbers RFCs that include IP address assignments e.g. RFC 900, 923, 1340 etc Scott > On Dec 3, 2022, at 11:43 AM, John Day via Internet-history wrote: > > Does anyone know where one might find the Assigned Numbers register for the IPv4 protocol-ids as of between 1985 and 2000? > > Just curious. > > Thanks, > John Day > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From touch at strayalpha.com Sat Dec 3 09:04:15 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sat, 3 Dec 2022 09:04:15 -0800 Subject: [ih] Looking for Assigned Numbers In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> After a point you need to use the internet wayback machine to see the history. I've collected this for port numbers, but not protocol IDs. Joe -- Dr. Joe Touch, temporal epistemologist www.strayalpha.com > On Dec 3, 2022, at 8:48 AM, Scott Bradner via Internet-history wrote: > > there are a number of assigned numbers RFCs that include IP address assignments > > e.g.
RFC 900, 923, 1340 etc > > Scott > >> On Dec 3, 2022, at 11:43 AM, John Day via Internet-history wrote: >> >> Does anyone know where one might find the Assigned Numbers register for the IPv4 protocol-ids as of between 1985 and 2000? >> >> Just curious. >> >> Thanks, >> John Day >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From agmalis at gmail.com Sat Dec 3 09:36:53 2022 From: agmalis at gmail.com (Andrew G. Malis) Date: Sat, 3 Dec 2022 12:36:53 -0500 Subject: [ih] Looking for Assigned Numbers In-Reply-To: <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> Message-ID: John, IANA (iana at iana.org) may keep a historical record of changes to a particular registry over time.
RFC 900, 923, 1340 etc > > > Scott > > > >> On Dec 3, 2022, at 11:43 AM, John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> Does anyone know where one might find the Assigned Numbers register for > the IPv4 protocol-ids as of between 1985 and 2000? > >> > >> Just curious. > >> > >> Thanks, > >> John Day > >> > >> > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From touch at strayalpha.com Sat Dec 3 09:42:02 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sat, 3 Dec 2022 09:42:02 -0800 Subject: [ih] Looking for Assigned Numbers In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> Message-ID: <1681B002-E0E8-4806-B5BD-8B28DB8C3E7C@strayalpha.com> FYI, I don't think they do. I had to scrape the wayback pages for the info on evolution since the info went online-only for ports. Joe -- Dr. Joe Touch, temporal epistemologist www.strayalpha.com > On Dec 3, 2022, at 9:36 AM, Andrew G. Malis via Internet-history wrote: > > John, > > IANA (iana at iana.org) may keep a historical record of changes to a > particular registry over time.
This is the current version of the registry: > > https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml > > Cheers, > Andy > > > On Sat, Dec 3, 2022 at 12:05 PM touch--- via Internet-history < > internet-history at elists.isoc.org> wrote: > >> After a point you need to use the internet wayback machine to see the >> history. I've collected this for port numbers, but not protocol IDs. >> >> Joe >> >> -- >> Dr. Joe Touch, temporal epistemologist >> www.strayalpha.com >> >>> On Dec 3, 2022, at 8:48 AM, Scott Bradner via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> >>> there are a number of assigned numbers RFCs that include IP address >> assignments >>> >>> e.g. RFC 900, 923, 1340 etc >>> >>> Scott >>> >>>> On Dec 3, 2022, at 11:43 AM, John Day via Internet-history < >> internet-history at elists.isoc.org> wrote: >>>> >>>> Does anyone know where one might find the Assigned Numbers register for >> the IPv4 protocol-ids as of between 1985 and 2000? >>>> >>>> Just curious.
>>>> >>>> Thanks, >>>> John Day >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From nigel at channelisles.net Sat Dec 3 10:11:53 2022 From: nigel at channelisles.net (Nigel Roberts) Date: Sat, 3 Dec 2022 18:11:53 +0000 Subject: [ih] Looking for Assigned Numbers In-Reply-To: <1681B002-E0E8-4806-B5BD-8B28DB8C3E7C@strayalpha.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> <1681B002-E0E8-4806-B5BD-8B28DB8C3E7C@strayalpha.com> Message-ID: <9bdc0ef0-46c8-e81a-7695-9b8f15630c61@channelisles.net> My understanding, from talking to IANA staff, is that there is very little pre-1998 (when ICANN was formed), unless additional information has come to light from ISI records since then. (It was several years ago). Nigel -- Nigel Roberts Island Networks, Maison Postel, Ollivier St, Alderney (C.I.) +44 7973 263842 On 12/3/22 17:42, touch--- via Internet-history wrote: > FYI, I don't think they do. I had to scrape the wayback pages for the info on evolution since the info went online-only for ports. > > Joe > > -- > Dr. Joe Touch, temporal epistemologist > www.strayalpha.com > >> On Dec 3, 2022, at 9:36 AM, Andrew G.
Malis via Internet-history wrote: >> >> John, >> >> IANA (iana at iana.org) may keep a historical record of changes to a >> particular registry over time. This is the current version of the registry: >> >> https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml >> >> Cheers, >> Andy >> >> >> On Sat, Dec 3, 2022 at 12:05 PM touch--- via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> After a point you need to use the internet wayback machine to see the >>> history. I've collected this for port numbers, but not protocol IDs. >>> >>> Joe >>> >>> -- >>> Dr. Joe Touch, temporal epistemologist >>> www.strayalpha.com >>> >>>> On Dec 3, 2022, at 8:48 AM, Scott Bradner via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>> >>>> there are a number of assigned numbers RFCs that include IP address >>> assignments >>>> >>>> e.g. RFC 900, 923, 1340 etc >>>> >>>> Scott >>>> >>>>> On Dec 3, 2022, at 11:43 AM, John Day via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>>> >>>>> Does anyone know where one might find the Assigned Numbers register for >>> the IPv4 protocol-ids as of between 1985 and 2000? >>>>> >>>>> Just curious.
>>>>> >>>>> Thanks, >>>>> John Day >>>>> >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Sat Dec 3 10:28:35 2022 From: jeanjour at comcast.net (John Day) Date: Sat, 3 Dec 2022 13:28:35 -0500 Subject: [ih] Looking for Assigned Numbers In-Reply-To: <9bdc0ef0-46c8-e81a-7695-9b8f15630c61@channelisles.net> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <3FCDF2FF-0261-4BD4-AEC5-2B4000A4AB12@strayalpha.com> <1681B002-E0E8-4806-B5BD-8B28DB8C3E7C@strayalpha.com> <9bdc0ef0-46c8-e81a-7695-9b8f15630c61@channelisles.net> Message-ID: Thanks, Scott! Perfect!! John > On Dec 3, 2022, at 13:11, Nigel Roberts via Internet-history wrote: > > My understanding, from talking to IANA staff, is that there is very little pre-1998 (when ICANN was formed), unless additional information has come to light from ISI records since then. (It was several years ago). > > > > > Nigel > > > > > -- > Nigel Roberts > Island Networks, Maison Postel, Ollivier St, Alderney (C.I.) > +44 7973 263842 > > > > On 12/3/22 17:42, touch--- via Internet-history wrote: >> FYI, I don't think they do. I had to scrape the wayback pages for the info on evolution since the info went online-only for ports. >> Joe >> -- >> Dr.
Joe Touch, temporal epistemologist >> www.strayalpha.com >>> On Dec 3, 2022, at 9:36 AM, Andrew G. Malis via Internet-history wrote: >>> >>> John, >>> >>> IANA (iana at iana.org) may keep a historical record of changes to a >>> particular registry over time. This is the current version of the registry: >>> >>> https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml >>> >>> Cheers, >>> Andy >>> >>> >>> On Sat, Dec 3, 2022 at 12:05 PM touch--- via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> After a point you need to use the internet wayback machine to see the >>>> history. I've collected this for port numbers, but not protocol IDs. >>>> >>>> Joe >>>> >>>> -- >>>> Dr. Joe Touch, temporal epistemologist >>>> www.strayalpha.com >>>> >>>>> On Dec 3, 2022, at 8:48 AM, Scott Bradner via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>> >>>>> there are a number of assigned numbers RFCs that include IP address >>>> assignments >>>>> >>>>> e.g. RFC 900, 923, 1340 etc >>>>> >>>>> Scott >>>>> >>>>>> On Dec 3, 2022, at 11:43 AM, John Day via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>>> >>>>>> Does anyone know where one might find the Assigned Numbers register for >>>> the IPv4 protocol-ids as of between 1985 and 2000? >>>>>> >>>>>> Just curious.
>>>>>> >>>>>> Thanks, >>>>>> John Day >>>>>> >>>>>> >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Sat Dec 3 10:42:50 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 4 Dec 2022 07:42:50 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <1BCB9C3A-39E5-4F34-A199-EF0732E7CA9B@tzi.org> <79106597-3a73-3f4b-f610-7dd5bf06b0ed@gmail.com> Message-ID: <9694e249-a9b5-219a-d514-e998ca91a846@gmail.com> On 03-Dec-22 13:48, John Day wrote: > The only reason that ISO has a two level approval is the first is to get consensus from the committee that may have several working groups (e.g. SC6 has 3 or 4 working groups that Physical Layer, Data Link Layer including LANs, and Networking) and the second to get consent across committees (computing and networking, banking, Highway signs, Paper Size, avionics, drones, etc.) that cover even more diverse topics. 
> > The IETF at least initially didn't really have that diversity or could one argue that the two levels were the Areas and the IESG? These days, WG Last Calls are a de facto first stage and there are semi-formalised Area reviews during the formal IETF Last Call. So even the process for getting to Proposed Standard is quite elaborate. Some people would even say that the IESG is too zealous. But as Andy suggested, permissionless innovation still has a big role. Consider QUIC for example. Brian > >> On Dec 2, 2022, at 19:02, Brian E Carpenter via Internet-history wrote: >> >> On 02-Dec-22 19:25, Carsten Bormann wrote: >>> On 2. Dec 2022, at 03:16, Brian E Carpenter via Internet-history wrote: >>>> >>>> In any case, the formal "STD" designation doesn't really mean much. >>> I don't quite agree. This inconspicuous label has helped provide motivation for some significant efforts that led to tremendously improved specifications. >>> Going to STD is pretty much the only IETF activity that creates legitimate space for considerable reflection. >>> That said, I'd say that the *absence* of the formal "STD" designation doesn't really mean much. >>> The fact that we label our regular-quality specifications "Proposed Standard" is highly misleading to people not familiar with the arcana of the process. >>> If I were tasked to name the single most damaging self-inflicted feature of the process, this would probably be that label. >> >> Since this is the history list, I will limit myself to observing that there have been several attempts in recent history to reduce the standards track to a single stage, and it seems that interest in this question and energy to discuss it is approximately zero. >> See the above subject header.
>> >> Brian >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jnc at mercury.lcs.mit.edu Sat Dec 3 12:22:17 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 3 Dec 2022 15:22:17 -0500 (EST) Subject: [ih] WHOIS Message-ID: <20221203202217.2262318C085@mercury.lcs.mit.edu> > From: Larry Stewart > Are there archived old copies of the Arpanet WHOIS database anywhere? I don't know what your goal is, but there are printed volumes called the "ARPANET Directory" (later titled "DDN Directory"), which is basically that. (They also contain other info: lists of hosts, etc. They're pretty big, 500 pages or so.) I have the 1982 (NIC 49000) and 1984 (NIC 50000) editions. It's possible there are scanned copies somewhere, e.g. in the DTIC collection. Noel From jack at 3kitty.org Sat Dec 3 12:37:19 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 3 Dec 2022 12:37:19 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> Message-ID: <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Thanks Andy. That's what I suspected, but I can only see now from a User's perspective. I still have doubts about "The Internet runs on Proposed Standards". Does anybody know -- Is it true? How do you know? Personally I haven't found any way, at least as a User, to tell what technology is inside all the equipment, software, services, protocols, algorithms, et al that are operating between my keyboard/screen and yours. It could be all Standards of some ilk, or it could all be Proprietary. It might conform to the spec, or have some zero-day flaw. How do you tell?
Focussing on Internet History, I think there's been an interesting evolution over the years. The technology has of course advanced, as evidenced by thousands of RFCs. But the "Process" has also changed over time. Back in the 80s, the mantra "Rough Consensus and Running Code" ruled. Someone had an idea, debated it with other researchers, and eventually someone wrote the code so the idea could be tried out on the live (if very small then) Internet. That would reveal problems, revisions would be made, and more experimentation would follow. Little got written down except in emails. When I got involved in implementing TCP, the version was 2.x, and I remember quickly progressing through 2.5, 2.5+epsilon, etc. as experience drove changes. Vint came into a meeting one day and advised that DoD was declaring TCP a DoD Standard. That meant somebody had to create the specification as a document, and Jon Postel took on the task. Our "documentation is in the code" excuse wasn't acceptable. The rest of us scrambled to try to describe what exactly our code did, so that Jon could capture it in writing. We didn't have much time. But Jon produced the spec and published it as an RFC. It defined the new DoD Standard -- TCP3, the next obvious "major version". TCP3 became a mandatory requirement for DoD procurements. Unfortunately, with the time pressure we quickly found that there were flaws in the TCP3 spec. It didn't match what the code did. So a revision was needed, and rather quickly TCP4 was published. Some contractors (e.g., Ford Aerospace IIRC) sadly got caught in the middle and had to implement TCP3 as required by their contract, only to discover that there was no one they could communicate with. It was a rather frenzied period in Internet history. While this was all happening, other efforts were creating more pieces of the "process". 
NBS (now NIST) created a test suite and testing program, so that contractors implementing TCP to fulfill their contracts had a means of verifying that they had actually met that requirement. It checked a box on their list of deliverables. ARPA had decided that TCP and the related Internet technologies would be open and freely available. Not only the documentation, but also implementations were made freely available for multiple types of computers of the era, either to run directly or to serve as guidelines for new implementations on other computers. IIRC, as NSF got involved it followed a similar policy. Educational institutions, seeing the need to add Networking to their curricula, selected TCP as the vehicle for teaching. It was readily available and free. Within a few years, a "pipeline" had been created, producing a steady stream of new graduates who knew about the TCP technologies and how to use them. Industry quickly adopted TCP since it could be observed to work, at events such as Interop, and there was a supply of new technical staff who already knew how to use it. Rough consensus. Running code. Operational experience. .....Fast forward 40 years..... I'm not very familiar with how the process works today, or how we got from there to here. But my impression is that today there are few if any of those old "process" mechanisms still in place. Technology is defined in RFCs, but there may not be any open and freely available implementations for others to use or examine. There seem to be no mechanisms for any kind of "certification" that an implementation even exists in whatever hardware/software you might have in front of you. Few people, even techies, seem to be aware of the technology available in the RFCs, let alone its purpose or how to use it. Users have no clue, even when technology is present, of how to use it (looking at you, PGP). 
No one seems to care much about getting a technology into actual widespread use, except within their own product, service, walled garden, etc. My impression is that the role of technology development has changed a lot over the years. The "deliverable" of the process today seems to be RFCs, defining technology that is placed on a public "shelf" and offered for anybody to use as they like. The "process" that causes technology to be actually deployed into field operation is someone else's task. If you look at other infrastructures, there are some parallels to the Internet, which is arguably a new infrastructure itself. E.g., electricity was invented and early users experienced fires, explosions, electrocutions, and other such nasty side-effects. But over time rules were developed, building codes created, inspectors put in place, grids and procedures developed, and electricity made much more reliable and safe as an infrastructure. Similar evolutions happened with roads, water, fuel, transportation, and other infrastructures. Perhaps the Internet is just too young as an infrastructure for similar mechanisms to have been created yet. Maybe government(s) will step (back) in soon. One explanation I recall for why TCP succeeded where OSI failed is that the TCP community produced working code while OSI produced only very expensive paper. The Internet Project of the 80s produced code and reluctantly also documentation. The focus of IAB/IESG/IETF/IRTF/etc. in 2022 seems to be limited to documentation. For Internet Historians: How did we get from there to here? And why? Perhaps the Internet has simply become OSI. Jack Haverty On 12/3/22 06:34, Andrew G. 
Malis wrote: > Brian et al, > > Having worked for both a bunch of vendors and a major operator, I > think it's more accurate to say that the Internet runs on a mix of > IETF Standards, Proposed Standards, internet drafts, and > various proprietary features from either a single vendor, or several > cooperating vendors pushed together by a common customer. In addition, > operators have been known to develop and use their own proprietary HW > and/or SW as well. > > Cheers, > Andy > > > On Thu, Dec 1, 2022 at 9:16 PM Brian E Carpenter via Internet-history > wrote: > > I'm not sure whether this actually started before RFC1310 (March > 1992), but certainly since then there have been multiple steps on > the standards track: Proposed Standard, Draft Standard (no longer > assigned) and Internet Standard. > > (Rumour has it that this started in pure imitation of the ISO > standards process. Vint can probably speak to the truth of that.) > > But, as I first heard from Fred Baker, "The Internet runs on > Proposed Standards", because most IETFers can't be bothered with > the bureaucracy to take the next step. Draft Standard was > abolished for new work to reduce the bureaucracy, but it hasn't > had much effect. We did advance IPv6 to Internet Standard, but > most WGs just don't bother. > > In any case, the formal "STD" designation doesn't really mean much. > > For a current non-IETF effort, I've drawn a diagram about how to > interpret the status of RFCs. It can be found at > https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md > > Regards > Brian Carpenter > > On 02-Dec-22 09:52, touch at strayalpha.com wrote: > > On Nov 30, 2022, at 1:36 PM, Jack Haverty wrote: > >> > >> Well, maybe... > >> > >> RFC5227 describes itself as a proposed standard. Has it > subsequently become an actual standard? I don't see it in the > "Official Internet Protocol Standards" maintained at > rfc-editor.org but maybe it had later > revisions. 
> > > That distinction isn't all that significant. There are a LOT of > protocols that never progressed beyond the initial "PS" status: > > https://www.rfc-editor.org/standards#PS > > Progression requires not only some specific hurdles, but also > the will and effort of someone to walk the spec through that > process. The latter is more often the limitation. > > > >> If it or a descendant is a Standard, does that prevent the > creation of "tools" such as the Flakeway I described? RFCs are > full of "SHOULD" and "MUST" directives, which systems such as > Flakeway probably violated. If RFC5227 was universally and > correctly implemented, would it prevent someone from implementing > a Flakeway-like tool, assuming of course they don't feel the need > to follow the RFCs' rules? > >> > >> If RFC5227 et al do in fact prevent such behavior, how does one > know whether or not the prescribed mechanisms are actually present > in one's equipment? I just looked and I have 54 devices on my > home Ethernet. Some are wired, some are wifi, and from many > different companies. How do I tell if they've all correctly > implemented the mechanisms prescribed in the RFCs? > > > > The IETF provides no mechanisms for protocol validation. That's > true for all MUSTs, SHOULDs, and MAYs for all protocols. > > > >> So, is it really "fixed" even today? > >> > >> I guess it depends on how you define "fixed". > > > > Doesn't it always? 
:-) > > Joe > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From cabo at tzi.org Sat Dec 3 12:48:57 2022 From: cabo at tzi.org (Carsten Bormann) Date: Sat, 3 Dec 2022 21:48:57 +0100 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: <39A64E5E-987F-401B-86D9-AEA85E43E78E@tzi.org> On 2022-12-03, at 21:37, Jack Haverty via Internet-history wrote: > > The focus of IAB/IESG/IETF/IRTF/etc. in 2022 seems to be limited to documentation. Shiny counterexamples include HTTP/2, and QUIC+HTTP/3. These had multiple (often open-source, and often production quality) implementation efforts in parallel plus regular meetings where interoperability was checked and improvements to the documents (and their technical content) were made or kicked off. There also was accompanying research that went into the standards. These are the apex of how open-source and open-standard development can work together, but there are lots of other efforts that did something similar, on a smaller scale. Ones that come to my mind: The development of CoAP was also supported by multiple open-source implementation activities and interoperability events (as was ROHC a decade earlier, except that open source implementations weren't as common in the 3G environment this was being developed for). 
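(As an aside, HTTP/3 deployment is at least observable from outside: servers advertise it in the Alt-Svc response header defined by RFC 7838. A minimal sketch of reading one -- the header value below is made up, but typical of what servers send today:)

```python
def parse_alt_svc(value: str) -> dict:
    """Parse an Alt-Svc header value (RFC 7838) into {protocol-id: authority}.
    Parameters such as ma= (max-age) are ignored for brevity."""
    services = {}
    for entry in value.split(","):
        first = entry.split(";")[0].strip()   # drop ma=/persist= parameters
        if "=" not in first:
            continue                          # e.g. "clear", or malformed
        proto, authority = first.split("=", 1)
        services[proto.strip()] = authority.strip().strip('"')
    return services

# An illustrative value; "h3" present means HTTP/3 over QUIC is on offer:
adv = parse_alt_svc('h3=":443"; ma=86400, h2=":443"')
```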
A different model was used for DTN (used in space these days), which started as a research group that came up with its own experimental protocols tested in lots of implementations, and which then spun off an IETF WG that did the 2.0 (actually, version 7) protocol. So it's not all gloomy about running code, but there needs to be an effort to make it happen, and not all ecosystems have that. Grüße, Carsten From internet-history at gtaylor.tnetconsulting.net Sat Dec 3 13:20:59 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Sat, 3 Dec 2022 14:20:59 -0700 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: On 12/3/22 1:37 PM, Jack Haverty via Internet-history wrote: > I still have doubts about "The Internet runs on Proposed Standards". > Does anybody know -- Is it true? How do you know? Personally I haven't > found any way, at least as a User, to tell what technology is inside all > the equipment, software, services, protocols, algorithms, et al that are > operating between my keyboard/screen and yours. It could be all > Standards of some ilk, or it could all be Proprietary. It might > conform to the spec, or have some zero-day flaw. How do you tell? I think that's a valid concern. I do think that observations from a 100-foot point of view can indicate that at least some of the things are common, even if more proprietary than official standards. I say this because of interoperability. E.g. take a keyboard; USB, PS/2, or even ADB, and look at how many different keyboards you can buy from different vendors and use them across a wide range of computers. 
The simple fact that they interoperate as well as they do indicates to me that they are using something common, even if it's not the official standard. As such I think that it's quite clear that it's not something proprietary to one vendor. Different operating systems mesh with different computers in an interoperable manner. Different web browsers behave similarly. We have source code to some of the web browsers and can compile reproducible builds matching what vendors distribute. We can see that these open-source web browsers behave very similarly, if not identically, to closed-source web browsers, so we can deduce that both are implementing much the same thing. This is especially true if we have a common server, more so if it's a server we control & monitor. I think similar methodology and logic can be applied to many layers of the stack, both hardware and software. Aside: What is a standard? Is it a specification and / or something implemented (in an interoperable way) by multiple vendors? > Rough consensus. Running code. Operational experience. :-) > I'm not very familiar with how the process works today, or how we got > from there to here. But my impression is that today there are few if > any of those old "process" mechanisms still in place. Technology is > defined in RFCs, but there may not be any open and freely available > implementations for others to use or examine. There seem to be no > mechanisms for any kind of "certification" that an implementation even > exists in whatever hardware/software you might have in front of you. Few > people, even techies, seem to be aware of the available technology in > the RFCs, let alone how to use them and their purpose. Users have no > clue, even when technology is present, of how to use it (looking at you > PGP). No one seems to care much about getting a technology into actual > widespread use, except within their own product, service, walled garden, > etc. 
"More Gear" vs "Learning to Use Existing Gear" comes to mind. https://images.squarespace-cdn.com/content/v1/5534618be4b0fc1dd67c939c/1559565752896-B91S8QXW83FLLKKDGMFY/image_71C0CF3B-0B33-490A-B1C2-49D4D2F7627A.JPG?format=750w Apparently that's a photoshop of a complaints / gratitude commic by Mike Baldwin. https://evergreenleadership.com/2010/11/23/gratitude-or-grouchiness-the-choice-that-shapes-what-you-recieve/gratitude-cartoon/ > My impression is that the role of the technology development has changed > a lot over the years.? The "deliverable" of the process today seems to > be RFCs, defining technology that is placed on a public "shelf" and > offered for anybody to use as they like.?? The "process" that causes > technology to be actually deployed into field operation is someone > else's task. HEAVYsigh > If you look at other infrastructures, there's some parallels to the > Internet which is arguably a new infrastructure.? E.g., electricity was > invented and early users experienced fires, explosions, electrocutions, > and other such nasty side-effects.? But over time rules were developed, > building codes created, inspectors put in place, grids and procedures > developed, and electricity made much more reliable and safe as an > infrastructure. The most commonly deployed current / voltage / frequency became the standard (in the region). ;-) > One of the reasons I recall as an explanation of why TCP succeeded where > OSI failed is because the TCP community produced working code while OSI > produced only very expensive paper. /me humorously points to his previous statement. -- Grant. . . . 
unix || die From jack at 3kitty.org Sat Dec 3 13:34:01 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 3 Dec 2022 13:34:01 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <39A64E5E-987F-401B-86D9-AEA85E43E78E@tzi.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <39A64E5E-987F-401B-86D9-AEA85E43E78E@tzi.org> Message-ID: Thanks, Carsten. I agree there are exceptions. Probably many of them. I was just trying to point out an ongoing historical trend. I just looked a bit at HTTP/3. Read the Wikipedia entry, which implies it's at a very early stage of deployment. Saw a reference "A first look at HTTP/3 adoption and performance" which I thought might give some indication of how the deployment is going, who's driving it, what improvements I might experience, how I can tell if my network stuff is using it, etc. The abstract is interesting. Sadly, the paper itself is paywalled. Just like OSI. Sigh, Jack On 12/3/22 12:48, Carsten Bormann wrote: > On 2022-12-03, at 21:37, Jack Haverty via Internet-history wrote: >> The focus of IAB/IESG/IETF/IRTF/etc. in 2022 seems to be limited to documentation. > Shiny counterexamples include HTTP/2, and QUIC+HTTP/3. > These had multiple (often open-source, and often production quality) implementation efforts in parallel plus regular meetings where interoperability was checked and improvements to the documents (and their technical content) were made or kicked off. > There also was accompanying research that went into the standards. > > These are the apex of how open-source and open-standard development can work together, but there are lots of other efforts that did something similar, on a smaller scale. 
> Ones that come to my mind: The development of CoAP was also supported by multiple open-source implementation activities and interoperability events (as was ROHC a decade earlier, except that open source implementations weren't as common in the 3G environment this was being developed for). > > A different model was used for DTN (used in space these days), which started as a research group that came up with its own experimental protocols tested in lots of implementations, and which then spun off an IETF WG that did the 2.0 (actually, version 7) protocol. > > So it's not all gloomy about running code, but there needs to be an effort to make it happen, and not all ecosystems have that. > > Grüße, Carsten > From dhc at dcrocker.net Sat Dec 3 13:40:40 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 3 Dec 2022 13:40:40 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: <33458599-06fd-8b55-0560-cd95457b39d3@dcrocker.net> The only interesting statement about the IETF and its processes is that any universal statement about them is wrong. Like end-to-end arguments, the IETF is a flexible environment, with predilections rather than rigid absolutes. The specifications are developed in a variety of ways, from fully within the IETF, to the IETF's doing barely more than a basic technical review and mild refinement. We even have a study suggesting the best IETF specifications are those that come into the IETF after they are already successful. The requirements for reaching Proposed vary quite a bit, from pretty cursory to needing two implementations. An open source version is always useful but pretty much never required. 
It shows technical feasibility and, perhaps more importantly, commitment to promoting the specification. Arguably, this is reasonable. Arguably, it is arbitrary. It is quite common for successful IETF specifications to be Proposed (or, for older ones, Draft) rather than Full. Simplistically, Proposed minimally means there has been demonstrated interest in the work, reasonable review, and not much more. Equally simplistically, Full minimally means that there is continued interest, a useful degree of deployment, and a constituency feeling that Full will be more helpful, such as for enabling inclusion in some strict contracting processes. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From cabo at tzi.org Sat Dec 3 13:47:29 2022 From: cabo at tzi.org (Carsten Bormann) Date: Sat, 3 Dec 2022 22:47:29 +0100 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <39A64E5E-987F-401B-86D9-AEA85E43E78E@tzi.org> Message-ID: <6EFD14DC-8C51-40FF-8D00-0E0F90C20309@tzi.org> On 2022-12-03, at 22:34, Jack Haverty wrote: > > > I just looked a bit at HTTP/3. Read the Wikipedia entry which implies it's at a very early stage of deployment. Uh, oh, it seems somebody didn't do their Wikipedia homework (*). HTTP/3 is completed: https://www.rfc-editor.org/rfc/rfc9114.html (And, yes, this is a "Proposed Standard". Which means it has received more quality control than most other standards in networking/IT space. That label hurts. Read [1] and weep. "Less mature" __ ___.) Grüße, Carsten (*) Cleaning up Wikipedia articles is a regular assignment in our information security classes. 
Wikipedia articles often get started at an early stage and have trouble keeping up with changing reality. (The HTTP/3 article looked good to me at first glance.) [1]: https://en.wikipedia.org/wiki/Internet_Standard From agmalis at gmail.com Sat Dec 3 13:53:07 2022 From: agmalis at gmail.com (Andrew G. Malis) Date: Sat, 3 Dec 2022 16:53:07 -0500 Subject: [ih] WHOIS In-Reply-To: <20221203202217.2262318C085@mercury.lcs.mit.edu> References: <20221203202217.2262318C085@mercury.lcs.mit.edu> Message-ID: Google Books has the 1978 and 1982 editions available for download. 1978: https://www.google.com/books/edition/ARPANET_Directory/AHo-AQAAIAAJ?kptab=editions&sa=X&ved=2ahUKEwjL0aqTt977AhWWElkFHaFKCWMQmBZ6BAgDEAc 1982: https://www.google.com/books/edition/_/M6opAQAAIAAJ?sa=X&ved=2ahUKEwiR4Lyft977AhUPK1kFHRsoDfsQ8fIDegQIEBAE Cheers, Andy On Sat, Dec 3, 2022 at 3:22 PM Noel Chiappa via Internet-history < internet-history at elists.isoc.org> wrote: > > From: Larry Stewart > > > Are there archived old copies of the Arpanet WHOIS database anywhere? > > I don't know what your goal is, but there are printed volumes called the > "ARPANET Directory" (later titled "DDN Directory"), which is basically > that. > (They also contain other info; lists of hosts, etc.) They're pretty big; > 500 > pages or so. > > I have the 1982 (NIC 49000) and 1984 (NIC 50000) editions. It's possible > there are scanned copies somewhere, e.g. in the DTIC collection. 
> > Noel > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jack at 3kitty.org Sat Dec 3 14:19:10 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 3 Dec 2022 14:19:10 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: <80f8ac3e-d8d3-d4cb-4948-c5c4ec4eca16@3kitty.org> Another part of the "process" of deploying TCP was to force its adoption by existing network users, who may have viewed TCP as a needless distraction and likely impediment to their ongoing work. That meant "turning off" the old NCP mechanisms so that only the new TCP would function. I recall discussions with ARPA of exactly how to accomplish that. We ended up using the fact that the ARPANET was always in the network path, and by slight changes to the code in the ARPANET IMPs it was possible to make NCP no longer work. There was a rather elaborate schedule over a year or so with multiple test sessions before the final cutover was made in early 1983. Andy Malis probably remembers a lot more. We (actually IIRC Jon P) even issued commemorative buttons, something like "I Survived the TCP Transition". I still have mine somewhere. All that was part of the deployment process to get TCP to become "the" standard in all the computers in the Internet at the time. We were lucky to have the ARPANET as an available enforcement mechanism. I can't imagine how you could similarly "turn off" BGP (or any other mechanism) in the Internet today. Even if you were serious about doing it. AFAIK, there's no "process" to do such things. 
Jack On 12/3/22 13:34, Tony Li wrote: > > >> On Dec 3, 2022, at 12:37 PM, Jack Haverty via Internet-history >> wrote: >> >> I still have doubts about "The Internet runs on Proposed >> Standards". Does anybody know -- Is it true? How do you know? >> Personally I haven't found any way, at least as a User, to tell what >> technology is inside all the equipment, software, services, >> protocols, algorithms, et al that are operating between my >> keyboard/screen and yours. It could be all Standards of some ilk, or >> it could all be Proprietary. It might conform to the spec, or have >> some zero-day flaw. How do you tell? > > > Data point: BGP is still a draft standard. > > Some might argue that the Internet cannot run without BGP. > > I'm not quite convinced. I suggest we turn it off and find out. It > should be an interesting experiment. > > Regards, > Tony > From agmalis at gmail.com Sat Dec 3 14:23:52 2022 From: agmalis at gmail.com (Andrew G. Malis) Date: Sat, 3 Dec 2022 17:23:52 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: Jack, I've been on both the sending and receiving side of Internet Backbone Equipment RFPs. All you need to do is take a look at the cited RFCs and drafts (not even yet RFCs) and you'll see what I mean. I realize that's not public information and thus hard to defend. However, you can easily take a look at the online documentation for any big-iron router from Cisco or Juniper, or any other router vendor. 
I just picked, at random, Cisco's "MPLS Configuration Guide for Cisco 8000 Series Routers, IOS XR Release 7.5.x", which you can find at https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/mpls/75x/b-mpls-cg-cisco8000-75x/implementing-mpls-ldp-75x.html . MPLS is a major feature in today's backbone routers, and goes back to the early 2000s for first backbone deployments. If you scroll down to the referenced RFCs at the end of the document, they are all Proposed Standards. There are many, many other examples out there. Cheers, Andy On Sat, Dec 3, 2022 at 4:34 PM Tony Li wrote: > > > On Dec 3, 2022, at 12:37 PM, Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > I still have doubts about "The Internet runs on Proposed Standards". > Does anybody know -- Is it true? How do you know? Personally I haven't > found any way, at least as a User, to tell what technology is inside all > the equipment, software, services, protocols, algorithms, et al that are > operating between my keyboard/screen and yours. It could be all Standards > of some ilk, or it could all be Proprietary. It might conform to the > spec, or have some zero-day flaw. How do you tell? > > > > Data point: BGP is still a draft standard. > > Some might argue that the Internet cannot run without BGP. > > I'm not quite convinced. I suggest we turn it off and find out. It > should be an interesting experiment. > > Regards, > Tony > > From johnl at iecc.com Sat Dec 3 15:53:35 2022 From: johnl at iecc.com (John Levine) Date: 3 Dec 2022 18:53:35 -0500 Subject: [ih] QUIC, was "The Internet runs on Proposed Standards" In-Reply-To: Message-ID: <20221203235336.1B88E5082B03@ary.qy> According to Jack Haverty via Internet-history : >Thanks, Carsten. I agree there are exceptions. Probably many of >them. I was just trying to point out an ongoing historical trend. > >I just looked a bit at HTTP/3. Read the Wikipedia entry which implies >it's at a very early stage of deployment. 
The Wikipedia article is fairly stale. The IETF has published standards-track RFCs for QUIC and HTTP/3. Chromium, which is the base for most browsers other than Safari and Firefox, has supported it since about 2020. Firefox support started in May 2021, Safari in October 2021. It looks like every widely used web server other than Apache now supports QUIC, as does Cloudflare's web proxy, so I would guess that by now there is as much QUIC and HTTP/3 as other kinds of web traffic. QUIC took quite a while but the IETF process worked well, with multiple groups implementing the drafts and providing feedback. From jack at 3kitty.org Sat Dec 3 16:06:46 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 3 Dec 2022 16:06:46 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> Hi Andy, Yes, that's a good example. But what kind of example...? If I were evaluating a proposal, I'd have some questions. - Does anyone, other than the manufacturer's own marketing department, agree that the "standard" technology is implemented and operational in the product? - Has another manufacturer produced their own independent implementation? Has it been tested to interact properly with the first manufacturer's implementation? - Has any independent entity tested the implementations, verified that they operate as defined, and "certified" the product to be true to the Standard's specification? - Has any early adopter integrated the new product into an existing operational network system? How did they accomplish the changeover? 
Depending on the answers, the example might be a successful deployment of an Internet Standard. Or it might be an example of a proprietary solution known to work only in a network composed entirely of equipment from that manufacturer -- i.e., a "walled garden" technology which happens to use mechanisms defined in RFCs. My point about "process" is the apparent lack of mechanisms analogous to things like UL for the electricity infrastructure, rules and regulations such as building codes, and enforcement mechanisms such as inspectors -- i.e., things typically associated with mature "infrastructures". The "deployment process" of the Internet in the 1980s was crude and rudimentary, but it existed and was somewhat effective. Somehow along the way it seems to have disappeared even as the technology evolved. Perhaps some Historian will explore that someday. Jack On 12/3/22 14:23, Andrew G. Malis wrote: > Jack, > > I've been on both the sending and receiving side of Internet Backbone > Equipment RFPs. All you need to do is take a look at the cited RFCs > and drafts (not even yet RFCs) and you'll see what I mean. I realize > that's not public information and thus hard to defend. > > However, you can easily take a look at the online documentation for > any big-iron router from Cisco or Juniper, or other router vendor. > > I just picked, at random, Cisco's "MPLS Configuration Guide for Cisco > 8000 Series Routers, IOS XR Release 7.5.x", which you can find at > https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/mpls/75x/b-mpls-cg-cisco8000-75x/implementing-mpls-ldp-75x.html > . MPLS is a major feature in today's backbone routers, and goes back > to the early 2000s for first backbone deployments. If you scroll down > to the referenced RFCs at the end of the document, they are all > Proposed Standards. > > There are many, many other examples out there. 
> > Cheers, > Andy > > > On Sat, Dec 3, 2022 at 4:34 PM Tony Li wrote: > > > >> On Dec 3, 2022, at 12:37 PM, Jack Haverty via Internet-history >> wrote: >> >> I still have doubts about "The Internet runs on Proposed >> Standards". Does anybody know -- Is it true? How do you know? >> Personally I haven't found any way, at least as a User, to tell >> what technology is inside all the equipment, software, services, >> protocols, algorithms, et al that are operating between my >> keyboard/screen and yours. It could be all Standards of some >> ilk, or it could all be Proprietary. It might conform to the >> spec, or have some zero-day flaw. How do you tell? > > > Data point: BGP is still a draft standard. > > Some might argue that the Internet cannot run without BGP. > > I'm not quite convinced. I suggest we turn it off and find out. > It should be an interesting experiment. > > Regards, > Tony > From jim at deitygraveyard.com Sat Dec 3 16:30:03 2022 From: jim at deitygraveyard.com (Jim Carpenter) Date: Sat, 3 Dec 2022 19:30:03 -0500 Subject: [ih] WHOIS In-Reply-To: References: <20221203202217.2262318C085@mercury.lcs.mit.edu> Message-ID: On Sat, Dec 3, 2022 at 4:53 PM Andrew G. Malis via Internet-history wrote: > > Google Books has the 1978 and 1982 editions available for download. > > 1978: > https://www.google.com/books/edition/ARPANET_Directory/AHo-AQAAIAAJ?kptab=editions&sa=X&ved=2ahUKEwjL0aqTt977AhWWElkFHaFKCWMQmBZ6BAgDEAc That actually has Dec. 1978, Oct. 1979, and Nov. 1980.
Jim From gnu at toad.com Sat Dec 3 18:33:53 2022 From: gnu at toad.com (John Gilmore) Date: Sat, 03 Dec 2022 18:33:53 -0800 Subject: [ih] WHOIS In-Reply-To: References: <20221203202217.2262318C085@mercury.lcs.mit.edu> Message-ID: <3864.1670121233@hop.toad.com> Then there's the "DDN Directory, June 1984", scanned by the Internet Archive as part of Ed Feigenbaum's papers: https://archive.org/details/DDNDirectoryJune1984/page/n1/mode/1up Behind the title page, it says: "The DDN Directory was published by the DDN Network Information Center using Tymshare's AUGMENT system for text processing and formatting. Camera ready copy was produced with a laser printer from Imagen Corporation, Los Altos, CA, and final printing was done by Consolidated Printers, Inc. of Berkeley, CA." Copies from microfilm are also available here: https://archive.org/details/DTIC_ADA148213/page/n5/mode/2up https://apps.dtic.mil/sti/citations/ADA148213 as well as metadata for offline copies at: https://archivesspace.mit.edu/repositories/2/archival_objects/308531 https://www.computerhistory.org/collections/catalog/102659263 (I didn't find pointers to any older or newer versions of the DDN Directory.) On the general topic of network history: If you search the Internet Archive for "ARPANET", you'll find almost 700 items, including lots of DTIC reports, the "ARPANET NEWS" (e.g. May 1973 Issue 3), etc. Almost 200 items within the results are the "ARPANET Collection" created by Katie Hafner in writing Where Wizards Stay Up Late. These include transcripts of interviews with BBN folks, as well as published materials. John From stewart at serissa.com Sat Dec 3 18:48:34 2022 From: stewart at serissa.com (Lawrence Stewart) Date: Sat, 3 Dec 2022 21:48:34 -0500 Subject: [ih] WHOIS In-Reply-To: <20221203202217.2262318C085@mercury.lcs.mit.edu> References: <20221203202217.2262318C085@mercury.lcs.mit.edu> Message-ID: I was just wondering whatever happened to my old NIC handle.
I did find myself in the 1984 DDN directory, found on Archive.org. I vaguely remember it as LS4, but those are not listed in the 1984 DDN volume. > On 2022, Dec 3, at 3:22 PM, Noel Chiappa wrote: > >> From: Larry Stewart > >> Are there archived old copies of the Arpanet WHOIS database anywhere? > > I don't know what your goal is, but there are printed volumes called the > "ARPANET Directory" (later titled "DDN Directory"), which is basically that. > (They also contain other info; lists of hosts, etc.) They're pretty big; 500 > pages or so. > > I have the 1982 (NIC 49000) and 1984 (NIC 50000) editions. It's possible > there are scanned copies somewhere, e.g. in the DTIC collection. > > Noel From jim at deitygraveyard.com Sat Dec 3 19:32:40 2022 From: jim at deitygraveyard.com (Jim Carpenter) Date: Sat, 3 Dec 2022 22:32:40 -0500 Subject: [ih] WHOIS In-Reply-To: References: <20221203202217.2262318C085@mercury.lcs.mit.edu> Message-ID: On Sat, Dec 3, 2022 at 9:48 PM Lawrence Stewart via Internet-history wrote: > > I was just wondering whatever happened to my old NIC handle. I did find myself in the 1984 DDN directory, found on Archive.org > I vaguely remember it as LS4, but those are not listed in the 1984 DDN volume. You are listed as LS4 in the Dec. 1978/Oct. 1979/Nov. 1980 Arpanet Directory linked in a previous email.
Jim From brian.e.carpenter at gmail.com Sat Dec 3 21:16:17 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 4 Dec 2022 18:16:17 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> Message-ID: <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> Jack, this is perhaps just a parable, but I think it says a lot: Soon after HTML and HTTP made their debuts, I attended a seminar at CERN by Frank Kappe, a student of Hermann Maurer at TU Graz, Austria, about the Hyper-G project. Their strongly expressed opinion was that the Web was useless as a hypertext project because (unlike Hyper-G) it did not have rigorous two-way hyperlinks. They maintained that an unmanaged hypertext system in which A could point to B but B didn't know about it and have a reverse pointer to A was useless. How could it possibly be managed and kept consistent? Needless to say, a certain member of the audience didn't agree. Tim Berners-Lee argued that the two-way hyperlink approach didn't scale, required some sort of coordinated database to work at all, and would never succeed as a result. Whereas the Web would scale indefinitely, needed no centralized anything, didn't need managing, and had a chance of world domination. This seminar was in September 1994. At that time there was only one Informational RFC about the Web (RFC1630), which didn't even describe HTML and HTTP. But the Web was already running in several countries (this was probably around the time that the Web overtook Gopher). 
So yes, not rigorous, not remotely standardised, still considered an upstart by many people, certainly no IETF rough consensus, but above all, running code. Hyper-G? Yes, a real project, later renamed Hyperwave, which you can read about at https://mprove.de/visionreality/text/2.1.15_hyperg.html World domination? Not so much. The point of the parable is that rigorous, well defined, good computer science and solid engineering simply haven't worked as well in the Internet as moving fast and breaking stuff. Regards Brian Carpenter On 04-Dec-22 13:06, Jack Haverty via Internet-history wrote: > Hi Andy, > > Yes, that's a good example. But what kind of example...? If I were > evaluating a proposal, I'd have some questions. > > - Does anyone, other than the manufacturer's own marketing department, > agree that the "standard" technology is implemented and operational in > the product? > - Has another manufacturer implemented their own independent > implementation? Has it been tested to interact properly with the first > manufacturer's implementation? > - Has any independent entity tested the implementations and verified it > is operating as defined, and "certified" the product to be true to the > Standards specification? > - Has any early adopter integrated the new product into an existing > operational network system? How did they accomplish the changeover? > > Depending on the answers, the example might be a successful deployment > of an Internet Standard. Or it might be an example of a proprietary > solution that is only known to work in a network composed only of > equipment from that manufacturer -- i.e., a "walled garden" technology > which happens to use mechanisms defined in RFCs.
> > My point about "process" is the apparent lack of mechanisms analogous to > things like UL for the electricity infrastructure, and rules and > regulations such as building codes, and enforcement mechanisms such as > inspectors - i.e., things typically associated with mature > "infrastructures". > > The "deployment process" of the Internet in the 1980s was crude and > rudimentary, but it existed and was somewhat effective. Somehow along > the way it seems to have disappeared even as the technology evolved. > Perhaps some historian will explore that someday. > > Jack > > On 12/3/22 14:23, Andrew G. Malis wrote: >> Jack, >> >> I've been on both the sending and receiving side of Internet Backbone >> Equipment RFPs. All you need to do is take a look at the cited RFCs >> and drafts (not even yet RFCs) and you'll see what I mean. I realize >> that's not public information and thus hard to defend. >> >> However, you can easily take a look at the online documentation for >> any big-iron router from Cisco or Juniper, or other router vendor. >> >> I just picked, at random, Cisco's "MPLS Configuration Guide for Cisco >> 8000 Series Routers, IOS XR Release 7.5.x", which you can find at >> https://www.cisco.com/c/en/us/td/docs/iosxr/cisco8000/mpls/75x/b-mpls-cg-cisco8000-75x/implementing-mpls-ldp-75x.html >> . MPLS is a major feature in today's backbone routers, and goes back >> to the early 2000s for first backbone deployments. If you scroll down >> to the referenced RFCs at the end of the document, they are all >> Proposed Drafts. >> >> There are many, many other examples out there. >> >> Cheers, >> Andy >> >> >> On Sat, Dec 3, 2022 at 4:34 PM Tony Li wrote: >> >> >> >>> On Dec 3, 2022, at 12:37 PM, Jack Haverty via Internet-history >>> wrote: >>> >>> I still have doubts about "The Internet runs on Proposed >>> Standards". Does anybody know -- Is it true? How do you know?
>>> Personally I haven't found any way, at least as a User, to tell >>> what technology is inside all the equipment, software, services, >>> protocols, algorithms, et al that are operating between my >>> keyboard/screen and yours. It could be all Standards of some >>> ilk, or it could all be Proprietary. It might conform to the >>> spec, or have some zero-day flaw. How do you tell? >> >> >> Data point: BGP is still a draft standard. >> >> Some might argue that the Internet cannot run without BGP. >> >> I'm not quite convinced. I suggest we turn it off and find out. >> It should be an interesting experiment. >> >> Regards, >> Tony >> From bill.n1vux at gmail.com Sat Dec 3 22:11:07 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Sun, 4 Dec 2022 01:11:07 -0500 Subject: [ih] WHOIS In-Reply-To: <3864.1670121233@hop.toad.com> References: <20221203202217.2262318C085@mercury.lcs.mit.edu> <3864.1670121233@hop.toad.com> Message-ID: On Sat, Dec 3, 2022, 21:34 John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > Then there's the "DDN Directory, June 1984", scanned by the Internet > Archive as part of Ed Feigenbaum's papers: > > https://archive.org/details/DDNDirectoryJune1984/page/n1/mode/1up Wow, even my nouveau arrivée self is in the 1984 directory, being down the hall from Mike Padlipsky at MITRE-Bedford, not that one can tell since his corp address is the POBox and his e-mailbox was by then at ISI (having changed repeatedly in prior issues, including new corp but old MIT email, and then a Community customer mailbox in between).
William Ricker The Literary Estate of Michael A Padlipsky From vgcerf at gmail.com Sat Dec 3 22:40:05 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sun, 4 Dec 2022 01:40:05 -0500 Subject: [ih] QUIC, was "The Internet runs on Proposed Standards" In-Reply-To: <20221203235336.1B88E5082B03@ary.qy> References: <20221203235336.1B88E5082B03@ary.qy> Message-ID: Perhaps worth pointing out that the interplanetary Bundle Protocol Suite has had the benefit of work in both IRTF and IETF as well as CCSDS. That work started around 1998. v On Sat, Dec 3, 2022 at 6:53 PM John Levine via Internet-history < internet-history at elists.isoc.org> wrote: > According to Jack Haverty via Internet-history : > >Thanks, Carsten. I agree there are exceptions. Probably many of > >them. I was just trying to point out a historical ongoing trend. > > > >I just looked a bit at HTTP/3. Read the Wikipedia entry which implies > >it's at a very early stage of deployment. > > The Wikipedia article is fairly stale. The IETF has published standards > track RFCs for QUIC and HTTP/3. Chromium, which is the base for most > browsers other than Safari and Firefox, has supported it since about 2020. > Firefox support started in May 2021, Safari in October 2021. > > It looks like every widely used web server other than Apache now supports > QUIC, as does Cloudflare's web proxy, so I would guess that by now there > is as much QUIC and HTTP/3 as other kinds of web traffic. > > QUIC took quite a while, but the IETF process worked well, with multiple > groups > implementing the drafts and providing feedback.
> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jmamodio at gmail.com Sat Dec 3 23:56:39 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Sun, 4 Dec 2022 01:56:39 -0600 Subject: [ih] QUIC, was "The Internet runs on Proposed Standards" In-Reply-To: References: Message-ID: And keeps ongoing :-) -Jorge > On Dec 4, 2022, at 12:40 AM, vinton cerf via Internet-history wrote: > > Perhaps worth pointing out that the interplanetary Bundle Protocol Suite > has had the benefit of work in both IRTF and IETF as well as CCSDS. That > work started around 1998. > > v > > >> On Sat, Dec 3, 2022 at 6:53 PM John Levine via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> According to Jack Haverty via Internet-history : >>> Thanks, Carsten. I agree there are exceptions. Probably many of >>> them. I was just trying to point out a historical ongoing trend. >>> >>> I just looked a bit at HTTP/3. Read the Wikipedia entry which implies >>> it's at a very early stage of deployment. >> >> The Wikipedia article is fairly stale. The IETF has published standards >> track RFCs for QUIC and HTTP/3. Chromium, which is the base for most >> browsers other than Safari and Firefox, has supported it since about 2020. >> Firefox support started in May 2021, Safari in October 2021. >> >> It looks like every widely used web server other than Apache now supports >> QUIC, as does Cloudflare's web proxy, so I would guess that by now there >> is as much QUIC and HTTP/3 as other kinds of web traffic. >> >> QUIC took quite a while, but the IETF process worked well, with multiple >> groups >> implementing the drafts and providing feedback.
>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gnu at toad.com Sun Dec 4 01:33:12 2022 From: gnu at toad.com (John Gilmore) Date: Sun, 04 Dec 2022 01:33:12 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> Message-ID: <20554.1670146392@hop.toad.com> Brian E Carpenter wrote: > Berners-Lee argued that the two-way hyperlink approach didn't scale, > required some sort of coordinated database to work at all, and would > never succeed as a result. Whereas the Web would scale indefinitely, > needed no centralized anything, didn't need managing, and had a chance > of world domination. Interesting history, but perhaps shortsighted on Tim's part. HTTP already provided most of the tools needed to do 2-way hyperlinks without any centralized anything, using the misspelled Referer field. The first time, and every time, that an HTTP client followed a link from A to B, the B server would get an HTTP access to B that notified it that there's a link from A. Early and even modern HTTP servers just dump that info into a logfile and otherwise ignore it. But they could be caching that info to build a local database at B of backreferences. The B HTTP server could actually check the Referer by accessing the cited A page itself to see if the link is really there, to avoid spoofing.
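[The cache-and-verify step described above could be sketched roughly as follows. This is a hypothetical illustration only, not anything an actual server implements; `record_backlink` and the `verify` hook are invented names.]

```python
from urllib.parse import urlparse

def record_backlink(backlinks, target_path, headers, verify=None):
    """Cache a backreference taken from the Referer header of an incoming request.

    backlinks: dict mapping a served path -> set of referring URLs.
    verify: optional callable(referrer_url, target_path) -> bool that would
    fetch the referring page and confirm it really links here, guarding
    against spoofed Referer values, as described above.
    """
    referrer = headers.get("Referer")  # note the historical misspelling
    if not referrer:
        return False  # no Referer sent; nothing to record
    if urlparse(referrer).scheme not in ("http", "https"):
        return False  # ignore garbage and non-web referrers
    if verify is not None and not verify(referrer, target_path):
        return False  # the claimed link could not be confirmed
    backlinks.setdefault(target_path, set()).add(referrer)
    return True
```

[A real deployment would call this from request handling and implement `verify` as a fetch of the referring page that scans its HTML for a link to the target; the cached `backlinks` map is what the server could then expose on a metadata page.]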
And then could modify the HTML that it serves up to clients, to somehow include the backreference info. As a simple example, it could use a naming convention in the URL to offer a metadata page corresponding to each page it serves up, that shows the backlinks that it knows about for that page. Perhaps think of the metadata page like Wikipedia's "talk page" for every encyclopedia article page. Instead, that Referer merely got used by advertisers for spyware (that's how Google and Meta know which page a "tracking pixel" or "web bug" was accessed from). The opportunity for 2-way, cached, distributed web linking is still available today, if somebody wanted to write a little software! (But this is a history list, so let's go back to looking backward. ;-) John From internet-history at gtaylor.tnetconsulting.net Sun Dec 4 08:56:07 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Sun, 4 Dec 2022 09:56:07 -0700 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <20554.1670146392@hop.toad.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> <20554.1670146392@hop.toad.com> Message-ID: <7e6f8fd7-8bf8-c56e-2017-abd09e573bfd@spamtrap.tnetconsulting.net> On 12/4/22 2:33 AM, John Gilmore via Internet-history wrote: > The opportunity for 2-way, cached, distributed web linking is still > available today, if somebody wanted to write a little software! > (But this is a history list, so let's go back to looking backward. ;-) I'm not as convinced that the opportunity you're describing is still available. 
I distinctly remember effort in the last decade by multiple web browsers to alter the contents of the "Referer" (spelled as it is) header for privacy reasons. From memory, there are four classes of -- what I'll call -- redaction: 1) None - use full URL and query parameter(s) 2) Remove query parameters - use full URL without query parameters 3) Remove the file path+name - use the referrer's FQDN 4) Remove everything - don't use a referrer header at all I'm seeing different levels used in different situations. Usually crossing origin removes more information than staying within origin. I believe there's also a Content Security Policy parameter where a publishing site can hint at what amount of reference they would like back to them. As such, I don't think that there is as much opportunity to leverage the Referer (...) header as you indicate. Aside: I -- /personally/ /as/ /a/ /consumer/ -- don't see much value in knowing what other pages are linking to a page that I'm reading. /As/ /a/ /webmaster/ I see /some/ value in being able to glean some visibility into where traffic to my site is coming from. -- Grant. . . . unix || die From brian.e.carpenter at gmail.com Sun Dec 4 11:40:02 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 5 Dec 2022 08:40:02 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> <20554.1670146392@hop.toad.com> Message-ID: <36c6357b-8286-213d-17b7-e6a00c6dba4a@gmail.com> On 05-Dec-22 04:26, Toerless Eckert wrote: > The "requires some sort of coordinated database" certainly sounds incorrect. That is certainly how Hyper-G worked.
Not having a recording of Tim's remarks, I'm working from my memory of ~30 years ago, and he probably said something more subtle. Does anyone here know exactly when "Referer" was added to HTTP? I can't find it in the original specs at http://info.cern.ch/hypertext/WWW/Protocols/HTTP.html and I don't know where to look for intermediate versions prior to RFC1945. Brian > > Of course, Google Search Console does provide this information AFAIK, so it > certainly is possible with a coordinated database. > > For a distributed solution, > it would certainly help the ability to track referrers to your URL if the > web had a mechanism for URLs that can only be used from a specific referrer > location/URL. That would force the referrer to actually keep an active > referrer status with the referred-to URL. Not trivial. > > Short of that, the realities of how clients are behind firewalls and web pages > behind world-wide content caching systems make the mechanisms of > HTTP as described by John unfortunately rather impractical to rely upon AFAISI. > > Cheers > Toerless > > On Sun, Dec 04, 2022 at 01:33:12AM -0800, John Gilmore via Internet-history wrote: >> Brian E Carpenter wrote: >>> Berners-Lee argued that the two-way hyperlink approach didn't scale, >>> required some sort of coordinated database to work at all, and would >>> never succeed as a result. Whereas the Web would scale indefinitely, >>> needed no centralized anything, didn't need managing, and had a chance >>> of world domination. >> >> Interesting history, but perhaps shortsighted on Tim's part. HTTP >> already provided most of the tools needed to do 2-way hyperlinks without >> any centralized anything, using the misspelled Referer field. The first time, >> and every time, that an HTTP client followed a link from A to B, the B >> server would get an HTTP access to B that notified it that there's a >> link from A. >> >> Early and even modern HTTP servers just dump that info into a logfile >> and otherwise ignore it.
But they could be caching that info to build >> a local database at B of backreferences. The B HTTP server could >> actually check the Referer by accessing the cited A page itself to see >> if the link is really there, to avoid spoofing. And then could modify >> the HTML that it serves up to clients, to somehow include the >> backreference info. As a simple example, it could use a naming >> convention in the URL to offer a metadata page corresponding to each >> page it serves up, that shows the backlinks that it knows about for that >> page. Perhaps think of the metadata page like Wikipedia's "talk page" >> for every encyclopedia article page. >> >> Instead, that Referer merely got used by advertisers for spyware (that's >> how Google and Meta know which page a "tracking pixel" or "web bug" was >> accessed from). >> >> The opportunity for 2-way, cached, distributed web linking is still >> available today, if somebody wanted to write a little software! >> (But this is a history list, so let's go back to looking backward. ;-) >> >> John >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From johnl at iecc.com Sun Dec 4 11:42:51 2022 From: johnl at iecc.com (John Levine) Date: 4 Dec 2022 14:42:51 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> Message-ID: <20221204194252.45F64508B528@ary.qy> It appears that Jack Haverty via Internet-history said: >operating between my keyboard/screen and yours. It could be all >Standards of some ilk, or it could all be Proprietary. It might >conform to the spec, or have some zero-day flaw. How do you tell? IETF standards are all about interoperation. I don't have to care what's inside your black box so long as it talks to my black box using standard protocols.
I realize this argument gets fuzzy around the edges, but for the stuff I do like e-mail, if the other end doesn't speak SMTP as described in RFCs over TCP and IP as described in RFCs, it's not going to get any mail delivered. R's, John From brian.e.carpenter at gmail.com Sun Dec 4 11:55:06 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 5 Dec 2022 08:55:06 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <20221204194252.45F64508B528@ary.qy> References: <20221204194252.45F64508B528@ary.qy> Message-ID: On 05-Dec-22 08:42, John Levine via Internet-history wrote: > It appears that Jack Haverty via Internet-history said: >> operating between my keyboard/screen and yours.? It could be all >> Standards of some ilk, or it could all be Proprietary.?? It might >> conform to the spec, or have some zero-day flaw.? How do you tell? > > IETF standards are all about interoperation. I don't have to care > what's inside your black box so long as it talks to my black box > using standard protocols. > > I realize this argument gets fuzzy around the edges, but for the stuff > I do like e-mail, if the other end doesn't speak SMTP as described in > RFCs over TCP and IP as described in RFCs, it's not going to get any > mail delivered. Exactly. And whether those RFCs are Proposed Standard, Draft Standard or Internet Standard turns out to be completely irrelevant. There's a fairly large set of RFCs that need to be followed quite accurately; the permissionless innovation that I mentioned can't duck that. Brian From johnl at iecc.com Sun Dec 4 12:15:07 2022 From: johnl at iecc.com (John Levine) Date: 4 Dec 2022 15:15:07 -0500 Subject: [ih] what is hypertext, was "The Internet runs on Proposed Standards" In-Reply-To: <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> Message-ID: <20221204201507.6DB0C508B895@ary.qy> It appears that Brian E Carpenter via Internet-history said: >... 
Their strongly expressed opinion >was that the Web was useless as a hypertext project because (unlike >Hyper-G) it did not have rigorous two-way hyperlinks. They maintained >that an unmanaged hypertext system in which A could point to B but >B didn't know about it and have a reverse pointer to A was useless. >How could it possibly be managed and kept consistent? Seems that this is a matter of nomenclature. Ted Nelson had been working on hypertext projects since the late 1960s and always said that all the links were two-way. He planned for a lot of other stuff that would have been great if it were possible, like links to a subrange of a document (how does that work if the target is updated?) and micropayments to authors per byte viewed. They were right, WWW doesn't do that, and by that definition it's not real hypertext. But Tim picked a subset that was large enough to be useful, even with one-way links that sometimes point nowhere, and small enough to be implemented at scale. Ted is still grumpy about the Web, but I still have no idea how you would implement a lot of that stuff. R's, John From johnl at iecc.com Sun Dec 4 12:42:14 2022 From: johnl at iecc.com (John R. Levine) Date: 4 Dec 2022 15:42:14 -0500 Subject: [ih] what is hypertext, was "The Internet runs on Proposed Standards" Message-ID: <64a19d07-4839-4385-3fc6-a78854c2a6aa@iecc.com> On Sun, 4 Dec 2022, Toerless Eckert wrote: > To paraphrase like one of my ex-colleagues would have said: > Did Ted produce any www.showmethemoney.com for his ideas? He got modest funding for some attempts at implementation but he is, ah, not the best manager of technical projects. > I always felt in the '90s we needed librarians to bring order to the > mess which the web became. And I did not believe that crawling and search > could keep up. But after Google figured out its monetization, I > think we closed the books on these earlier alternative approaches. It's all a matter of scale.
You can probably do a curated index of millions of pages, but not of billions of pages. It's not unlike the social media moderation problem. Anything that works at scale isn't good, anything good doesn't work at scale. Regards, John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. https://jl.ly From dhc at dcrocker.net Sun Dec 4 14:10:40 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sun, 4 Dec 2022 14:10:40 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> Message-ID: On 12/3/2022 9:16 PM, Brian E Carpenter via Internet-history wrote: > Soon after HTML and HTTP made their debuts, I attended a seminar > at CERN by Frank Kappe, a student of Hermann Maurer at TU Graz, > Austria, about the Hyper-G project. Their strongly expressed opinion > was that the Web was useless as a hypertext project because (unlike > Hyper-G) it did not have rigorous two-way hyperlinks. They maintained > that an unmanaged hypertext system in which A could point to B but > B didn't know about it and have a reverse pointer to A was useless. > How could it possibly be managed and kept consistent? A linguistic debate about the use of the word hypertext with and without 2-way links is good for academic energy. Much like debating when the Internet started... But a more interesting architectural issue is the old 'tussles' concern. The challenge about consistency is demonstrably real.
But so is the challenge to maintain reverse pointers. Arguably what the web did was to pare down to an immutable core, leaving the additional bit -- reverse pointers -- to be done as a value-added layer. It doesn't deny the utility of the latter, but did not require solving their design as a roadblock to use of forward pointers. It's not as if this integrate vs. divide-and-conquer debate is unusual, nor is the latter's proving operationally (vastly) superior. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From bpurvy at gmail.com Mon Dec 5 09:28:36 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 5 Dec 2022 09:28:36 -0800 Subject: [ih] Post on running a Working Group Message-ID: I generally try to avoid Shameless Self-Promotion for this list, but in this case, my Substack article does have quite a bit on my chairmanship of the WG that produced RFC 1697. Also a non-technical tutorial on SNMP and MIBs. https://albertcory50.substack.com/p/in-larrys-kingdom-part-1 From alejandroacostaalamo at gmail.com Mon Dec 5 11:31:02 2022 From: alejandroacostaalamo at gmail.com (Alejandro Acosta) Date: Mon, 5 Dec 2022 16:31:02 -0300 Subject: [ih] Internet should be in private hands Message-ID: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Hello There, I think this is the right list to ask. My apologies if my comments are not accurate. I have heard a couple of times that there was a moment in the history of the Internet where private companies were putting pressure on the community indicating that the Internet had to be controlled by a private company, that it had to be in the hands of someone rather than left to run freely. Is it so? I think these comments used to occur when there was a big failure in the network (I think I've heard mostly when there were congestion control issues), probably in the mid-80s or so. Am I crazy or did the above actually happen? If so, I'm looking for information, documentation, references, etc.
If I'm wrong, nothing happens; just let me know :-) Thanks, Alejandro, From jmamodio at gmail.com Mon Dec 5 12:17:24 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Mon, 5 Dec 2022 14:17:24 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: Hi Alejandro, where and from whom did you "hear" that ? mid 80's was very early on, no commercial ISPs yet. NSF started to sponsor the network in 1985. NSFNet came online in 1986. Regards Jorge On Mon, Dec 5, 2022 at 1:31 PM Alejandro Acosta via Internet-history < internet-history at elists.isoc.org> wrote: > Hello There, > > I think this is the right list to ask. My apologies if my comments > are not accurate. > > I have heard a couple of times that there was a moment in the history > of the Internet where private companies were putting pressure on the > community indicating that the Internet had to be controlled by a private > company, that it had to be in the hands of someone, not in their free > will. Is it so?. I think these comments used to occur when there was a > big failure in the network (I think I've heard mostly when there were > congestion control issues).., probably in the mid 80's or so. > > Am I crazy or the above actually happened? If so, I'm looking for > information, documentation, references, etc. If I'm wrong nothing > happen, just let me know :-) > > > Thanks, > > > Alejandro, > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From salo at saloits.com Mon Dec 5 12:31:33 2022 From: salo at saloits.com (Timothy J.
Salo) Date: Mon, 5 Dec 2022 14:31:33 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: On 12/5/2022 1:31 PM, Alejandro Acosta via Internet-history wrote: > I think this is the right list to ask. My apologies if my comments > are not accurate. > > I have heard a couple of times that there was a moment in the history > of the Internet where private companies were putting pressure on the > community indicating that the Internet had to be controlled by a private > company, that it had to be in the hands of someone, not in their free > will. Is it so?. I think these comments used to occur when there was a > big failure in the network (I think I've heard mostly when there were > congestion control issues).., probably in the mid 80's or so. > > Am I crazy or the above actually happened? If so, I'm looking for > information, documentation, references, etc. If I'm wrong nothing > happen, just let me know :-) This might be referring to discussions during the 1990s that included the com-priv [commercialization/privatization] mail list. At least part of the list appears to be here: My impression was that talk of commercialization/privatization of the Internet was largely driven by individuals and companies that were looking for commercial opportunities. -tjs From jmamodio at gmail.com Mon Dec 5 12:45:23 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Mon, 5 Dec 2022 14:45:23 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: Not all Europeans are created equal ;-) -J On Mon, Dec 5, 2022 at 2:33 PM Toerless Eckert wrote: > Maybe this is about the evolution of funding the networks constituting the > Internet > in the USA from primarily public to primarily private starting in the > first half of the 1990s.
> > google "Al Gore invented the Internet" to learn more ( ;-) ) > > (sorry to be so facetious, but this part of Internet history and lore is > obviously very > non-inclusive, but only USA centric. Where in reality this was also the > first decade where > other parts of the planet started to join the party in significant parts. > > But admittedly: The USA story is still the most enjoyable story. Great > marketing! > And best documented too (those europeans suck at documentation *suck*). > > Cheers > Toerless > > On Mon, Dec 05, 2022 at 02:17:24PM -0600, Jorge Amodio via > Internet-history wrote: > > Hi Alejandro, > > > > where and from whom did you "hear" that ? > > > > mid 80's was very early on, no commercial isps yet. NSF started to > sponsor > > the network in 1985. NSFNet came online in 1986. > > > > Regards > > Jorge > > > > > > On Mon, Dec 5, 2022 at 1:31 PM Alejandro Acosta via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > Hello There, > > > > > > I think this is the right list to ask. My apologies if my comments > > > are not accurate. > > > > > > I have heard a couple of times that there was a moment in the > history > > > of the Internet where private companies were putting pressure on the > > > community indicating that the Internet had to be controlled by a > private > > > company, that it had to be in the hands of someone, not in their free > > > will. Is it so?. I think these comments used to occur when there was a > > > big failure in the network (I think I've heard mostly when there were > > > congestion control issues).., probably in the mid 80's or so. > > > > > > Am I crazy or the above actually happened? If so, I'm looking for > > > information, documentation, references, etc. 
If I'm wrong nothing > > > happen, just let me know :-) > > > > > > > > > Thanks, > > > > > > > > > Alejandro, > > > > > > > > > > > > > > > > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > --- > tte at cs.fau.de > From alejandroacostaalamo at gmail.com Mon Dec 5 13:27:35 2022 From: alejandroacostaalamo at gmail.com (Alejandro Acosta) Date: Mon, 5 Dec 2022 18:27:35 -0300 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: <827ecb90-e212-2858-72bc-68a6dcf5d074@gmail.com> Hello Jorge, On 5/12/22 5:17 PM, Jorge Amodio wrote: > > Hi Alejandro, > > where and from whom did you "hear" that ? I heard that many years ago; I recall something about it during an online training. However, I could have misunderstood or not paid enough attention at the time. Based on your comments I think it's just false. Thank you all for the clarification. > > mid 80's was very early on, no commercial isps yet. NSF started to > sponsor the network in 1985. NSFNet came online in 1986. That's right Thanks again, Alejandro, > > Regards > Jorge > > On Mon, Dec 5, 2022 at 1:31 PM Alejandro Acosta via Internet-history > wrote: > > Hello There, > > I think this is the right list to ask. My apologies if my > comments > are not accurate. > > I have heard a couple of times that there was a moment in the > history > of the Internet where private companies were putting pressure on the > community indicating that the Internet had to be controlled by a > private > company, that it had to be in the hands of someone, not in their free > will. Is it so?.
I think these comments used to occur when there > was a big failure in the network (I think I've heard mostly when there were > congestion control issues).., probably in the mid 80's or so. > > Am I crazy or the above actually happened? If so, I'm looking for > information, documentation, references, etc. If I'm wrong nothing > happen, just let me know :-) > > > Thanks, > > > Alejandro, > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Mon Dec 5 14:41:23 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 6 Dec 2022 11:41:23 +1300 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> I can't think of anything like that in the 1980s. It was clear that the Internet was a tool (or toy) for the academic and research community, with a few hangers-on in .com. The serious network for business was in the planning stage: OSI. I wonder if this isn't a diffuse echo of the run-up to the creation of ICANN in 1998? The Clinton/Gore White House was very keen to see commercialisation of the Internet, and naturally there was a lot of entrepreneurial interest in that. Some thought that IANA should be a purely commercial undertaking, some thought it should be an arm of government or of the ITU, others thought it should be an independent NGO, and so on. The delusion that ICANN is in some way in charge of the Internet has never quite gone away. Regards Brian Carpenter On 06-Dec-22 09:17, Jorge Amodio via Internet-history wrote: > Hi Alejandro, > > where and from whom did you "hear" that ? > > mid 80's was very early on, no commercial isps yet. NSF started to sponsor > the network in 1985. NSFNet came online in 1986.
> > Regards > Jorge > > > On Mon, Dec 5, 2022 at 1:31 PM Alejandro Acosta via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Hello There, >> >> I think this is the right list to ask. My apologies if my comments >> are not accurate. >> >> I have heard a couple of times that there was a moment in the history >> of the Internet where private companies were putting pressure on the >> community indicating that the Internet had to be controlled by a private >> company, that it had to be in the hands of someone, not in their free >> will. Is it so?. I think these comments used to occur when there was a >> big failure in the network (I think I've heard mostly when there were >> congestion control issues).., probably in the mid 80's or so. >> >> Am I crazy or the above actually happened? If so, I'm looking for >> information, documentation, references, etc. If I'm wrong nothing >> happen, just let me know :-) >> >> >> Thanks, >> >> >> Alejandro, >> >> >> >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> From jmamodio at gmail.com Mon Dec 5 16:22:12 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Mon, 5 Dec 2022 18:22:12 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> Message-ID: There was a fundamental shift in the mid 90's after the Telecom Act of 1996. Ma Bell was 100% gone. Al Gore contributed a lot of momentum and support for the expansion of the Internet beyond the academic arena; there were several well-established ISPs, and direct support from NSF started to fade. With the Telecom Act a bunch of CLECs, Worldcoms, TCG, etc., flourished until their implosion years later.
The Internet Fever Rush and stock market balloon started around that time, and Postel's notepad was becoming overloaded. Regards Jorge On Mon, Dec 5, 2022 at 4:41 PM Brian E Carpenter < brian.e.carpenter at gmail.com> wrote: > I can't think of anything like that in the 1980s. It was clear > than the the Internet was a tool (or toy) for the academic and > research community, with a few hangers-on in .com. The serious > network for business was in the planning stage: OSI. > > I wonder if this isn't a diffuse echo of the run-up to the > creation of ICANN in 1998? The Clinton/Gore White House was very > keen to see commercialisation of the Internet, and naturally > there was a lot of entrepreneurial interest in that. Some thought > that IANA should be a purely commercial undertaking, some > thought it should be an arm of government or of the ITU, others > thought it should be an independent NGO, and so on. > > The delusion that ICANN is in some way in charge of the Internet > has never quite gone away. > > Regards > Brian Carpenter > > On 06-Dec-22 09:17, Jorge Amodio via Internet-history wrote: > > Hi Alejandro, > > > > where and from whom did you "hear" that ? > > > > mid 80's was very early on, no commercial isps yet. NSF started to > sponsor > > the network in 1985. NSFNet came online in 1986. > > > > Regards > > Jorge > > > > > > On Mon, Dec 5, 2022 at 1:31 PM Alejandro Acosta via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> Hello There, > >> > >> I think this is the right list to ask. My apologies if my comments > >> are not accurate. > >> > >> I have heard a couple of times that there was a moment in the > history > >> of the Internet where private companies were putting pressure on the > >> community indicating that the Internet had to be controlled by a private > >> company, that it had to be in the hands of someone, not in their free > >> will. Is it so?.
I think these comments used to occur when there was a > >> big failure in the network (I think I've heard mostly when there were > >> congestion control issues).., probably in the mid 80's or so. > >> > >> Am I crazy or the above actually happened? If so, I'm looking for > >> information, documentation, references, etc. If I'm wrong nothing > >> happen, just let me know :-) > >> > >> > >> Thanks, > >> > >> > >> Alejandro, > >> > >> > >> > >> > >> > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > From rob at stanford.com.au Mon Dec 5 17:25:43 2022 From: rob at stanford.com.au (Robert Stanford) Date: Tue, 6 Dec 2022 11:25:43 +1000 Subject: [ih] Internet should be in private hands In-Reply-To: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> I can't remember where I read this, but I'm sure I saw a reference to the US postal service once suggesting that they should be in charge of email and it should be a paid service. Regards Robert On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: > Hello There, > > ? I think this is the right list to ask.? My apologies if my comments > are not accurate. > > ? I have heard a couple of times that there was a moment in the > history of the Internet where private companies were putting pressure > on the community indicating that the Internet had to be controlled by > a private company, that it had to be in the hands of someone, not in > their free will. Is it so?. I think these comments used to occur when > there was a big failure in the network (I think I've heard mostly when > there were congestion control issues).., probably in the mid 80's or so. > > ? Am I crazy or the above actually happened? If so, I'm looking for > information, documentation, references, etc. 
If I'm wrong nothing > happen, just let me know :-) > > Thanks, > > Alejandro, > > > > > From sob at sobco.com Mon Dec 5 17:31:42 2022 From: sob at sobco.com (Scott Bradner) Date: Mon, 5 Dec 2022 20:31:42 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> Message-ID: <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> The post office did have a sort of email service for a while - you sent them an email for someone and they would print it out and deliver it as regular postal mail - they put a lot of money into it but it did not last all that long I have read in multiple places that some people in the post office wanted to make the argument that email was mail and since the post office had exclusive authority to deliver mail then the post office should have exclusive authority to run email systems & deliver email - but I have never been able to track down any actual backup documentation to that effect - it would be great if anyone on this list knew anything solid about it Scott > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history wrote: > > I can't remember where I read this, but I'm sure I saw a reference to the US postal service once suggesting that they should be in charge of email and it should be a paid service. > > Regards > Robert > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: >> Hello There, >> >> I think this is the right list to ask. My apologies if my comments are not accurate. >> >> I have heard a couple of times that there was a moment in the history of the Internet where private companies were putting pressure on the community indicating that the Internet had to be controlled by a private company, that it had to be in the hands of someone, not in their free will. Is it so?.
I think these comments used to occur when there was a big failure in the network (I think I've heard mostly when there were congestion control issues).., probably in the mid 80's or so. >> >> Am I crazy or the above actually happened? If so, I'm looking for information, documentation, references, etc. If I'm wrong nothing happen, just let me know :-) >> >> >> Thanks, >> >> >> Alejandro, >> >> >> >> >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From ajs at crankycanuck.ca Mon Dec 5 17:36:24 2022 From: ajs at crankycanuck.ca (Andrew Sullivan) Date: Mon, 5 Dec 2022 20:36:24 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> Message-ID: <20221206013624.d4ywtpcx7yi2i4wa@crankycanuck.ca> Hi, On Tue, Dec 06, 2022 at 11:25:43AM +1000, Robert Stanford via Internet-history wrote: >I can't remember where I read this, but I'm sure I saw a reference to >the US postal service once suggesting that they should be in charge of >email and it should be a paid service. > Maybe it was this (warning: paywall): https://www.bloomberg.com/features/2016-usps-email/ As for the idea that someone should own or control the Internet, that seems to overlook the design of the thing. Since it's not a unitary network, it can't be owned or controlled by any one entity.
-- Andrew Sullivan ajs at crankycanuck.ca From brian.e.carpenter at gmail.com Mon Dec 5 18:02:32 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 6 Dec 2022 15:02:32 +1300 Subject: [ih] Internet should be in private hands In-Reply-To: <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: <3bbc812c-b7bb-9045-e8a0-50222295b176@gmail.com> At one point, the (British) Royal Mail had an experimental service for printing and delivering X.400 messages. I don't remember exactly when, probably around 1990. I believe that the Swiss federal Constitution had to be changed to allow such services to be operated legally by anyone except the former Swiss PTT. Regards Brian On 06-Dec-22 14:31, Scott Bradner via Internet-history wrote: > The post office did have a soft of email service for a while 0 you sent them an email for someone an they > would print it out and deliver it as a regular postal mail - they put a lot of money into it but it did not last all that long > > I have read in multiple places that some people in the post office wanted to make the argument that email was > mail and since the post office had exclusive authority to deliver mail then the post office should have exclusive > authority to run email systems & deliver email - but I have never been able to track down any actual > backup documentation to that effect - it would be great if anyone on this list knew anything solid about it > > Scott > >> On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history wrote: >> >> I can't remember where I read this, but I'm sure I saw a reference to the US postal service once suggesting that they should be in charge of email and it should be a paid service.
>> >> Regards >> Robert >> >> On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: >>> Hello There, >>> >>> I think this is the right list to ask. My apologies if my comments are not accurate. >>> >>> I have heard a couple of times that there was a moment in the history of the Internet where private companies were putting pressure on the community indicating that the Internet had to be controlled by a private company, that it had to be in the hands of someone, not in their free will. Is it so?. I think these comments used to occur when there was a big failure in the network (I think I've heard mostly when there were congestion control issues).., probably in the mid 80's or so. >>> >>> Am I crazy or the above actually happened? If so, I'm looking for information, documentation, references, etc. If I'm wrong nothing happen, just let me know :-) >>> >>> >>> Thanks, >>> >>> >>> Alejandro, >>> >>> >>> >>> >>> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From vint at google.com Mon Dec 5 20:47:50 2022 From: vint at google.com (Vint Cerf) Date: Mon, 5 Dec 2022 23:47:50 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: MCI Mail had a postal delivery service. You could include a postal address, a telex address, and email addresses in the MCI Mail "to:" format and we would print and deliver hard copy to USPS or to overnight service. That went into service in September 1983. 
v On Mon, Dec 5, 2022 at 8:32 PM Scott Bradner via Internet-history < internet-history at elists.isoc.org> wrote: > The post office did have a soft of email service for a while 0 you sent > them an email for someone an they > would print it out and deliver it as a regular postal mail - they put a > lot of money into it but it did not last all that long > > I have read in multiple places that some people in the post office wanted > to make the argument that email was > mail and since the post office had exclusive authority to deliver mail > then the post office should have exclusive > authority to run email systems & deliver email - but I have never been > able to track down any actual > backup documentation to that effect - it would be great if anyone on this > list knew anything solid about it > > Scott > > > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I can't remember where I read this, but I'm sure I saw a reference to > the US postal service once suggesting that they should be in charge of > email and it should be a paid service. > > > > Regards > > Robert > > > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: > >> Hello There, > >> > >> I think this is the right list to ask. My apologies if my comments > are not accurate. > >> > >> I have heard a couple of times that there was a moment in the history > of the Internet where private companies were putting pressure on the > community indicating that the Internet had to be controlled by a private > company, that it had to be in the hands of someone, not in their free will. > Is it so?. I think these comments used to occur when there was a big > failure in the network (I think I've heard mostly when there were > congestion control issues).., probably in the mid 80's or so. > >> > >> Am I crazy or the above actually happened? If so, I'm looking for > information, documentation, references, etc. 
If I'm wrong nothing happen, > just let me know :-) > >> > >> > >> Thanks, > >> > >> > >> Alejandro, > >> > >> > >> > >> > >> > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From jmamodio at gmail.com Mon Dec 5 21:19:42 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Mon, 5 Dec 2022 23:19:42 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: Hi Vint, didn't it have Fax service too ? I think so. Regards Jorge On Mon, Dec 5, 2022 at 10:48 PM Vint Cerf via Internet-history < internet-history at elists.isoc.org> wrote: > MCI Mail had a postal delivery service. You could include a postal address, > a telex address, and email addresses in the MCI Mail "to:" format and we > would print and deliver hard copy to USPS or to overnight service. That > went into service in September 1983. 
> v > > > On Mon, Dec 5, 2022 at 8:32 PM Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > The post office did have a soft of email service for a while 0 you sent > > them an email for someone an they > > would print it out and deliver it as a regular postal mail - they put a > > lot of money into it but it did not last all that long > > > > I have read in multiple places that some people in the post office wanted > > to make the argument that email was > > mail and since the post office had exclusive authority to deliver mail > > then the post office should have exclusive > > authority to run email systems & deliver email - but I have never been > > able to track down any actual > > backup documentation to that effect - it would be great if anyone on this > > list knew anything solid about it > > > > Scott > > > > > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > > I can't remember where I read this, but I'm sure I saw a reference to > > the US postal service once suggesting that they should be in charge of > > email and it should be a paid service. > > > > > > Regards > > > Robert > > > > > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: > > >> Hello There, > > >> > > >> I think this is the right list to ask. My apologies if my comments > > are not accurate. > > >> > > >> I have heard a couple of times that there was a moment in the > history > > of the Internet where private companies were putting pressure on the > > community indicating that the Internet had to be controlled by a private > > company, that it had to be in the hands of someone, not in their free > will. > > Is it so?. I think these comments used to occur when there was a big > > failure in the network (I think I've heard mostly when there were > > congestion control issues).., probably in the mid 80's or so. > > >> > > >> Am I crazy or the above actually happened? 
If so, I'm looking for > > information, documentation, references, etc. If I'm wrong nothing happen, > > just let me know :-) > > >> > > >> > > >> Thanks, > > >> > > >> > > >> Alejandro, > > >> > > >> > > >> > > >> > > >> > > > > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From vint at google.com Mon Dec 5 21:41:58 2022 From: vint at google.com (Vint Cerf) Date: Tue, 6 Dec 2022 00:41:58 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: yes, that was added later - I think after I left in 1986. v On Tue, Dec 6, 2022 at 12:20 AM Jorge Amodio wrote: > Hi Vint, > > didn't it have Fax service too ? I think so. > > Regards > Jorge > > > On Mon, Dec 5, 2022 at 10:48 PM Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> MCI Mail had a postal delivery service. You could include a postal >> address, >> a telex address, and email addresses in the MCI Mail "to:" format and we >> would print and deliver hard copy to USPS or to overnight service. That >> went into service in September 1983. 
>> v >> >> >> On Mon, Dec 5, 2022 at 8:32 PM Scott Bradner via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> > The post office did have a soft of email service for a while 0 you sent >> > them an email for someone an they >> > would print it out and deliver it as a regular postal mail - they put a >> > lot of money into it but it did not last all that long >> > >> > I have read in multiple places that some people in the post office >> wanted >> > to make the argument that email was >> > mail and since the post office had exclusive authority to deliver mail >> > then the post office should have exclusive >> > authority to run email systems & deliver email - but I have never been >> > able to track down any actual >> > backup documentation to that effect - it would be great if anyone on >> this >> > list knew anything solid about it >> > >> > Scott >> > >> > > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history < >> > internet-history at elists.isoc.org> wrote: >> > > >> > > I can't remember where I read this, but I'm sure I saw a reference to >> > the US postal service once suggesting that they should be in charge of >> > email and it should be a paid service. >> > > >> > > Regards >> > > Robert >> > > >> > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: >> > >> Hello There, >> > >> >> > >> I think this is the right list to ask. My apologies if my comments >> > are not accurate. >> > >> >> > >> I have heard a couple of times that there was a moment in the >> history >> > of the Internet where private companies were putting pressure on the >> > community indicating that the Internet had to be controlled by a private >> > company, that it had to be in the hands of someone, not in their free >> will. >> > Is it so?. I think these comments used to occur when there was a big >> > failure in the network (I think I've heard mostly when there were >> > congestion control issues).., probably in the mid 80's or so. 
>> > >> >> > >> Am I crazy or the above actually happened? If so, I'm looking for >> > information, documentation, references, etc. If I'm wrong nothing >> happen, >> > just let me know :-) >> > >> >> > >> >> > >> Thanks, >> > >> >> > >> >> > >> Alejandro, >> > >> >> > >> >> > >> >> > >> >> > >> >> > > >> > > -- >> > > Internet-history mailing list >> > > Internet-history at elists.isoc.org >> > > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 <(571)%20213-1346> >> >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From julf at Julf.com Tue Dec 6 00:40:47 2022 From: julf at Julf.com (Johan Helsingius) Date: Tue, 6 Dec 2022 09:40:47 +0100 Subject: [ih] Internet should be in private hands In-Reply-To: <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> Message-ID: <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> On 05/12/2022 23:41, Brian E Carpenter via Internet-history wrote: > The delusion that ICANN is in some way in charge of the Internet > has never quite gone away. Not even inside ICANN... Julf From gnu at toad.com Tue Dec 6 00:52:08 2022 From: gnu at toad.com (John Gilmore) Date: Tue, 06 Dec 2022 00:52:08 -0800 Subject: [ih] What research capability-based OS was on the ARPAnet? 
Message-ID: <9417.1670316728@hop.toad.com> When I was a teenager connecting to various computers on the ARPAnet via the old Telenet/ARPAnet RS232 patch (about 1975), I used commands like "@o 134" to open a connection to the MIT-AI computer, where I had a tourist account and learned Emacs. I used to try "@" other numbers, and explore what else was out there; there were only a few hundred machines, and most wouldn't let you get past a login prompt. For a time, I ran across an interesting research machine that was based on "capabilities". It was well documented for its time, and I learned about it by using it just to explore, but I never programmed for it. By now I've forgotten its name and who its inventor was. Do any Internet historians remember it? John PS: I may have been using an acoustic coupler at the time, in a 300 bps modem connection to a Telenet front-end. Wow. Things have evolved since 1975. From el at lisse.NA Tue Dec 6 00:55:08 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Tue, 6 Dec 2022 10:55:08 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: What does being "in charge of the Internet" mean? el On 06/12/2022 10:40, Johan Helsingius via Internet-history wrote: > On 05/12/2022 23:41, Brian E Carpenter via Internet-history wrote: > >> The delusion that ICANN is in some way in charge of the Internet >> has never quite gone away. > > Not even inside ICANN... > > ????Julf > -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 
4 of 2019 may apply From jmamodio at gmail.com Tue Dec 6 01:20:38 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Tue, 6 Dec 2022 03:20:38 -0600 Subject: [ih] Internet should be in private hands In-Reply-To: References: Message-ID: <686E5847-B202-4279-A442-6C6FF18B3AF6@gmail.com> You can have it: https://youtu.be/iDbyYGrswtg -Jorge > On Dec 6, 2022, at 2:55 AM, Dr Eberhard W Lisse via Internet-history wrote: > > What does being "in charge of the Internet" mean? > > el > >> On 06/12/2022 10:40, Johan Helsingius via Internet-history wrote: >>> On 05/12/2022 23:41, Brian E Carpenter via Internet-history wrote: >>> >>> The delusion that ICANN is in some way in charge of the Internet >>> has never quite gone away. >> >> Not even inside ICANN... >> >> Julf >> > > -- > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From gregskinner0 at icloud.com Tue Dec 6 02:00:13 2022 From: gregskinner0 at icloud.com (Greg Skinner) Date: Tue, 6 Dec 2022 02:00:13 -0800 Subject: [ih] What research capability-based OS was on the ARPAnet? In-Reply-To: <9417.1670316728@hop.toad.com> References: <9417.1670316728@hop.toad.com> Message-ID: <015ABC44-9F4E-414E-B5C1-B661BB33AEE6@icloud.com> I found a document by Jed Donnelley that describes in some detail the Capability Computing research he did.
https://www.computer-history.info/Page4.dir/pages/LTSS.NLTSS.dir/pages/cap-livermore.html ?gregbo > On Dec 6, 2022, at 12:52 AM, John Gilmore via Internet-history wrote: > > When I was a teenager connecting to various computers on the ARPAnet via > the old Telenet/ARPAnet RS232 patch (about 1975), I used commands like > "@o 134" to open a connection to the MIT-AI computer, where I had a > tourist account and learned Emacs. I used to try "@" other numbers, and > explore what else was out there; there were only a few hundred machines, > and most wouldn't let you get past a login prompt. > > For a time, I ran across an interesting research machine that was based > on "capabilities". It was well documented for its time, and I learned > about it by using it just to explore, but I never programmed for it. By > now I've forgotten its name and who its inventor was. Do any Internet > historians remember it? > > John > > PS: I may have been using an acoustic coupler at the time, in a > 300 bps modem connection to a Telenet front-end. Wow. Things have > evolved since 1975. > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From clemc at ccc.com Tue Dec 6 06:23:17 2022 From: clemc at ccc.com (Clem Cole) Date: Tue, 6 Dec 2022 09:23:17 -0500 Subject: [ih] What research capability-based OS was on the ARPAnet? In-Reply-To: <9417.1670316728@hop.toad.com> References: <9417.1670316728@hop.toad.com> Message-ID: On Tue, Dec 6, 2022 at 3:52 AM John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > > For a time, I ran across an interesting research machine that was based > on "capabilities". It was well documented for its time, and I learned > about it by using it just to explore, but I never programmed for it. By > now I've forgotten its name and who its inventor was. Do any Internet > historians remember it? 
There were a couple, but did you mean CMU's C.mmp - which ran Hydra, which was capabilities based and was on the Internet as CMU-D or CMU-Hydra? The system influenced the Intel 432 BTW, which was also based on capabilities. Bill Wulf and his co-authors Roy Levin and Sam Harbison have a book about it: "HYDRA/C.mmp: An Experimental Computer System" (ISBN 0-07-072120-3). After all their work on building Cdot and Hydra, the book is the report. It has my favorite dedication of any book in the research world, which says: "To builders and programs of real programming systems." The follow-on for Hydra was StarOS for CM*, which was also capability based, but I don't think it was ever directly connected to CMU's IMP, only on the LAN, so it's unlikely you would have had access to it. FWIW: Henry Levy has a book from Digital Press called '*Capability-Based Computer Systems*' (ISBN 978-148-3101064), which has a pretty good survey of most of them, so if Cdot/Hydra was not it, there is a good chance Henry describes the system in his book, which I think is available via google books. [I have printed copies of both] Clem ? From dave.taht at gmail.com Tue Dec 6 06:46:39 2022 From: dave.taht at gmail.com (Dave Taht) Date: Tue, 6 Dec 2022 06:46:39 -0800 Subject: [ih] What research capability-based OS was on the ARPAnet? In-Reply-To: References: <9417.1670316728@hop.toad.com> Message-ID: On Tue, Dec 6, 2022 at 6:24 AM Clem Cole via Internet-history wrote: > > On Tue, Dec 6, 2022 at 3:52 AM John Gilmore via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > > For a time, I ran across an interesting research machine that was based > > on "capabilities". It was well documented for its time, and I learned > > about it by using it just to explore, but I never programmed for it. By > > now I've forgotten its name and who its inventor was. Do any Internet > > historians remember it? 
> > > There were a couple, but did you mean CMU's C.mmp - which ran Hydra, which > was capabilities based and was on the Internet as CMU-D or CMU-Hydra? The > system influenced the Intel 432 BTW, which was also based on capabilities. > Bill Wulf and his co-authors Roy Levin and Sam Harbison have a book about > it: "HYDRA/C.mmp: An Experimental Computer System" (ISBN 0-07-072120-3). > After all their work on building Cdot and Hydra, the book is the report. > It has my favorite dedication of any book in the research world, which > says: "To builders and programs of real programming systems." > > The follow-on for Hydra was StarOS for CM*, which was also > capability based, but I don't think it was ever directly connected to CMU's > IMP, only on the LAN, so it's unlikely you would have had access to it. > FWIW: Henry Levy has a book from Digital Press called '*Capability-Based > Computer Systems*' (ISBN 978-148-3101064), which has a pretty good survey > of most of them, so if Cdot/Hydra was not it, there is a good chance Henry > describes the system in his book, which I think is available via google > books. [I have printed copies of both] Moving forward to the modern day, Cambridge has shipped working hardware for their capability-based architecture and ported a lot of code to it: https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ (I'm very encouraged). Also, in looking for "Cambridge capabilities", research there goes back to the 70s. Also Mill Computing's attempt at an implementation was fascinating: https://millcomputing.com/docs/security/ (Ivan's a trip and goes back to Algol 68) > > Clem >
> -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- This song goes out to all the folk that thought Stadia would work: https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz Dave Täht CEO, TekLibre, LLC From dcrocker at bbiw.net Mon Dec 5 09:39:23 2022 From: dcrocker at bbiw.net (Dave Crocker) Date: Mon, 5 Dec 2022 09:39:23 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> Message-ID: <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> On 12/5/2022 9:28 AM, Larry Masinter wrote: > I think the motivation to surpass Gopher traffic compared to HTTP (I > remember a graph showing the relative usage) was more important than > the abstract discussion about two-way links. > HTTP 0.9 was little more than Gopher. Getting Mark McCahill to let > his name be used on the URL spec because it included his writeup of > "gopher:" was important to show people that www subsumed gopher. Back then, Gopher was much easier to get a (text) page up and available. And there were lots of existing text documents. The Web had a much richer presentation appearance but existing documents needed to be translated into html.
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From bob.hinden at gmail.com Tue Dec 6 09:38:18 2022 From: bob.hinden at gmail.com (Bob Hinden) Date: Tue, 6 Dec 2022 09:38:18 -0800 Subject: [ih] Internet should be in private hands In-Reply-To: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> Message-ID: This reminds me of the quote I heard attributed to Danny Cohen (https://www.internethalloffame.org/inductees/danny-cohen) In the US the private networks are public, and the public networks are private. Bob > On Dec 5, 2022, at 11:31 AM, Alejandro Acosta via Internet-history wrote: > > Hello There, > > I think this is the right list to ask. My apologies if my comments are not accurate. > > I have heard a couple of times that there was a moment in the history of the Internet where private companies were putting pressure on the community indicating that the Internet had to be controlled by a private company, that it had to be in the hands of someone, not in their free will. Is it so?. I think these comments used to occur when there was a big failure in the network (I think I've heard mostly when there were congestion control issues).., probably in the mid 80's or so. > > Am I crazy or the above actually happened? If so, I'm looking for information, documentation, references, etc. If I'm wrong nothing happen, just let me know :-) > > > Thanks, > > > Alejandro, > > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From johnl at iecc.com Tue Dec 6 11:15:02 2022 From: johnl at iecc.com (John Levine) Date: 6 Dec 2022 19:15:02 -0000 Subject: [ih] gopher broke, was "The Internet runs on Proposed Standards" In-Reply-To: <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <20 <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> References: <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <20 <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> Message-ID: According to Dave Crocker via Internet-history : >On 12/5/2022 9:28 AM, Larry Masinter wrote: >> I think the motivation to surpass Gopher traffic compared to HTTP (I >> remember a graph showing the relative usage) was more important than >> the abstract discussion about two-way linlks/ Gopher was great, and as Dave noted, it was a lot easier to get working. But Gopher committed suicide when the U of Minnesota tried to charge license fees for it. If they hadn't, we might still have Gopher as a lightweight alternative to the web for constrained devices and environments. R's, John -- Regards, John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. https://jl.ly From craig at tereschau.net Tue Dec 6 13:44:33 2022 From: craig at tereschau.net (Craig Partridge) Date: Tue, 6 Dec 2022 14:44:33 -0700 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: Was that the service where Dave Clark figured out it was possible to send an email of a sequence of page feeds and have MCI mail a ream of paper cheap? Craig On Mon, Dec 5, 2022 at 9:48 PM Vint Cerf via Internet-history < internet-history at elists.isoc.org> wrote: > MCI Mail had a postal delivery service. 
You could include a postal address, > a telex address, and email addresses in the MCI Mail "to:" format and we > would print and deliver hard copy to USPS or to overnight service. That > went into service in September 1983. > v > > > On Mon, Dec 5, 2022 at 8:32 PM Scott Bradner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > The post office did have a sort of email service for a while - you sent > > them an email for someone and they > > would print it out and deliver it as a regular postal mail - they put a > > lot of money into it but it did not last all that long > > > > I have read in multiple places that some people in the post office wanted > > to make the argument that email was > > mail and since the post office had exclusive authority to deliver mail > > then the post office should have exclusive > > authority to run email systems & deliver email - but I have never been > > able to track down any actual > > backup documentation to that effect - it would be great if anyone on this > > list knew anything solid about it > > > > Scott > > > > > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > > I can't remember where I read this, but I'm sure I saw a reference to > > the US postal service once suggesting that they should be in charge of > > email and it should be a paid service. > > > > > > Regards > > > Robert > > > > > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: > > >> Hello There, > > >> > > >> I think this is the right list to ask. My apologies if my comments > > are not accurate. > > >> > > >> I have heard a couple of times that there was a moment in the > history > > of the Internet where private companies were putting pressure on the > > community indicating that the Internet had to be controlled by a private > > company, that it had to be in the hands of someone, not in their free > will. > > Is it so?.
I think these comments used to occur when there was a big > > failure in the network (I think I've heard mostly when there were > > congestion control issues).., probably in the mid 80's or so. > > >> > > >> Am I crazy or the above actually happened? If so, I'm looking for > > information, documentation, references, etc. If I'm wrong nothing happen, > > just let me know :-) > > >> > > >> > > >> Thanks, > > >> > > >> > > >> Alejandro, > > >> > > >> > > >> > > >> > > >> > > > > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > Google, LLC > 1900 Reston Metro Plaza, 16th Floor > Reston, VA 20190 > +1 (571) 213 1346 > > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From gnu at toad.com Tue Dec 6 14:51:04 2022 From: gnu at toad.com (John Gilmore) Date: Tue, 06 Dec 2022 14:51:04 -0800 Subject: [ih] What research capability-based OS was on the ARPAnet? In-Reply-To: <015ABC44-9F4E-414E-B5C1-B661BB33AEE6@icloud.com> References: <9417.1670316728@hop.toad.com> <015ABC44-9F4E-414E-B5C1-B661BB33AEE6@icloud.com> Message-ID: <18421.1670367064@hop.toad.com> Greg Skinner wrote: > https://www.computer-history.info/Page4.dir/pages/LTSS.NLTSS.dir/pages/cap-livermore.html Thank you, Greg! The system that I accessed was Jed Donnelley's RATS. 
John From vint at google.com Tue Dec 6 15:22:07 2022 From: vint at google.com (Vint Cerf) Date: Tue, 6 Dec 2022 18:22:07 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <34b9e339-3e11-8ddf-3266-779937075055@stanford.com.au> <8FA1F862-C8F3-45AA-8C58-E67B3BC3FCD9@sobco.com> Message-ID: yep v On Tue, Dec 6, 2022 at 4:44 PM Craig Partridge wrote: > Was that the service where Dave Clark figured out it was possible to send > an email of a sequence of page feeds and have MCI mail a ream of paper > cheap? > > Craig > > On Mon, Dec 5, 2022 at 9:48 PM Vint Cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> MCI Mail had a postal delivery service. You could include a postal >> address, >> a telex address, and email addresses in the MCI Mail "to:" format and we >> would print and deliver hard copy to USPS or to overnight service. That >> went into service in September 1983. >> v >> >> >> On Mon, Dec 5, 2022 at 8:32 PM Scott Bradner via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> > The post office did have a soft of email service for a while 0 you sent >> > them an email for someone an they >> > would print it out and deliver it as a regular postal mail - they put a >> > lot of money into it but it did not last all that long >> > >> > I have read in multiple places that some people in the post office >> wanted >> > to make the argument that email was >> > mail and since the post office had exclusive authority to deliver mail >> > then the post office should have exclusive >> > authority to run email systems & deliver email - but I have never been >> > able to track down any actual >> > backup documentation to that effect - it would be great if anyone on >> this >> > list knew anything solid about it >> > >> > Scott >> > >> > > On Dec 5, 2022, at 8:25 PM, Robert Stanford via Internet-history < >> > internet-history at elists.isoc.org> wrote: >> 
> > >> > > I can't remember where I read this, but I'm sure I saw a reference to >> > the US postal service once suggesting that they should be in charge of >> > email and it should be a paid service. >> > > >> > > Regards >> > > Robert >> > > >> > > On 6/12/22 05:31, Alejandro Acosta via Internet-history wrote: >> > >> Hello There, >> > >> >> > >> I think this is the right list to ask. My apologies if my comments >> > are not accurate. >> > >> >> > >> I have heard a couple of times that there was a moment in the >> history >> > of the Internet where private companies were putting pressure on the >> > community indicating that the Internet had to be controlled by a private >> > company, that it had to be in the hands of someone, not in their free >> will. >> > Is it so?. I think these comments used to occur when there was a big >> > failure in the network (I think I've heard mostly when there were >> > congestion control issues).., probably in the mid 80's or so. >> > >> >> > >> Am I crazy or the above actually happened? If so, I'm looking for >> > information, documentation, references, etc. 
If I'm wrong nothing >> happen, >> > just let me know :-) >> > >> >> > >> >> > >> Thanks, >> > >> >> > >> >> > >> Alejandro, >> > >> >> > >> >> > >> >> > >> >> > >> >> > > >> > > -- >> > > Internet-history mailing list >> > > Internet-history at elists.isoc.org >> > > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> > -- >> > Internet-history mailing list >> > Internet-history at elists.isoc.org >> > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> >> >> -- >> Please send any postal/overnight deliveries to: >> Vint Cerf >> Google, LLC >> 1900 Reston Metro Plaza, 16th Floor >> Reston, VA 20190 >> +1 (571) 213 1346 <(571)%20213-1346> >> >> >> until further notice >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From internet-history at gtaylor.tnetconsulting.net Tue Dec 6 16:19:48 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Tue, 6 Dec 2022 17:19:48 -0700 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> Message-ID: On 12/5/22 10:39 AM, Dave Crocker via Internet-history wrote: > Back then, Gopher was much easier to get a 
(text) page up and available. > And there were lots of existing text documents. I question the veracity of that. But I have no first hand experience hosting Gopher. > The Web had a much richer presentation appearance but existing documents > needed to be translated into html. The number of PDFs, PS, JPG, GIF, et al. files that I've downloaded beg to differ with the need to be translated into HTML. -- Grant. . . . unix || die From brian.e.carpenter at gmail.com Tue Dec 6 18:34:05 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 7 Dec 2022 15:34:05 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221204194252.45F64508B528@ary.qy> Message-ID: <222d1fd7-dd6a-5100-8992-1c5e9ac1d105@gmail.com> On 07-Dec-22 12:30, Louis Mamakos wrote: > And don't forget about the Informational RFCs that were published. > > I was one of the perpetrators of PPP-over-Ethernet and it was only published as an Informational RFC.?? This was at the dawn of the era of ADSL and there were a plethora of ADSL CPE and DSLAM solutions that had.. proprietary or at light opinionated and single vendor solutions.? I was at UUNET at the time, and we observed the one thing most all of the hardware could do was bridge Ethernet frames.. > > So we (UUNET, a service provider), a DSL equipment vendor and a client software vendor built and tested implementations that interoperated and documented the protocol in RFC 2516. ? All involved were interested in interoperable implementations, products, and service provider customers. > > It was thought to have been a short-term solution, like many things in Internet history, but still seems to be around here and there a few decades later.? For a variety of reasons, we never progressed it beyond "Informational" status, and it adequately served its purpose as a publicly available reference document that could be cited. 
Not only that, it's classified by the RFC Editor as "Stream: [Legacy]" which means that its provenance is somewhat undefined. But indeed, a very large fraction of Internet users have depended entirely on PPPoE for many years. > I'd imagine there are other "important" protocols in wide use that also went down this path. Some of them were in due course adopted and further developed in the IETF; for example NFS and SSL --> TLS. An interesting case is RFC 20, listed as "Status: Unknown, Stream: [Legacy]" for many years, but promoted to Internet Standard (STD 80) in 2020. As a result, Vint certainly holds the record for the longest standards-track latency ever achieved. For those who don't know, the IESG maintains a "downref registry" for non-standards track RFCs that can be cited as if they were standards: https://datatracker.ietf.org/doc/downref Brian > > Louis Mamakos > > > On Sun, Dec 4, 2022 at 2:55 PM Brian E Carpenter via Internet-history > wrote: > > On 05-Dec-22 08:42, John Levine via Internet-history wrote: > > It appears that Jack Haverty via Internet-history > said: > >> operating between my keyboard/screen and yours.? It could be all > >> Standards of some ilk, or it could all be Proprietary.?? It might > >> conform to the spec, or have some zero-day flaw.? How do you tell? > > > > IETF standards are all about interoperation.? I don't have to care > > what's inside your black box so long as it talks to my black box > > using standard protocols. > > > > I realize this argument gets fuzzy around the edges, but for the stuff > > I do like e-mail, if the other end doesn't speak SMTP as described in > > RFCs over TCP and IP as described in RFCs, it's not going to get any > > mail delivered. > > Exactly. And whether those RFCs are Proposed Standard, Draft Standard > or Internet Standard turns out to be completely irrelevant. 
There's > a fairly large set of RFCs that need to be followed quite accurately; > the permissionless innovation that I mentioned can't duck that. > > ? ? ? Brian > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Tue Dec 6 18:48:54 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 7 Dec 2022 15:48:54 +1300 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> Message-ID: <5a157e0d-a79a-bd1b-34c1-b5a52b9957b9@gmail.com> On 07-Dec-22 13:19, Grant Taylor via Internet-history wrote: > On 12/5/22 10:39 AM, Dave Crocker via Internet-history wrote: >> Back then, Gopher was much easier to get a (text) page up and available. >> And there were lots of existing text documents. > > I question the veracity of that. But I have no first hand experience > hosting Gopher. > >> The Web had a much richer presentation appearance but existing documents >> needed to be translated into html. > > The number of PDFs, PS, JPG, GIF, et al. files that I've downloaded beg > to differ with the need to be translated into HTML. Ditto plain text. You don't need any HTML statements whatever, and certainly nothing like a CSS, to access and render plain text via HTTP. That was true in 1994 and it's still true today. But of course, in 1994, most documents were neither plain text nor PDF. They were paper, or possibly a proprietary format such as .doc or .lwp. 
That must have presented a problem to gopher and wais as much as to the Web. (At CERN our "official" markup then was SGML/Bookmaster, which was relatively easy to convert to HTML, and in fact it was because of this that Tim and Robert Cailliau knew to make HTML a dialect of SGML.) Brian From gnu at toad.com Tue Dec 6 19:47:14 2022 From: gnu at toad.com (John Gilmore) Date: Tue, 06 Dec 2022 19:47:14 -0800 Subject: [ih] Internet should be in private hands In-Reply-To: <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> Message-ID: <30064.1670384834@hop.toad.com> >>> I have heard a couple of times that there was a moment in the history >>> of the Internet where private companies were putting pressure on the >>> community indicating that the Internet had to be controlled by a private >>> company, that it had to be in the hands of someone, not in their free >>> will. Is it so? I recall a time when the Domain Name System top-level domain (TLD) for each country had to be handled by a private party, not by the government that ran the country. For example, the .au domain for Australia was originally run by Robert Elz at Melbourne University. By 2001, a private nonprofit called ".au Domain Administration" had been organized to run it instead, with the "endorsement of the Commonwealth of Australia (the Commonwealth) of auDA as the appropriate to hold the delegation of authority by the Internet Corporation for Assigned Names and Numbers (ICANN) for administrative authority of the au country code top level domain (ccTLD)". As a separate response... The original Internet was operated completely by government contractors (like BBN or MERIT). Eventually, in the early 1990s, private parties built their own networks that used the IP protocols. 
UUNET was an early one, which didn't receive government subsidies, unlike the bloated and therefore generally unresponsive NSFnet regional networks like BARRNET. The private IP networks were soon gatewayed to each other, and to the government-run parts of the Internet, so that it all acted as one big Internet. Then it became possible for both government projects, and ordinary companies or people, to just buy Internet service from this new concept, an "ISP". Soon after that, there was a general move on the part of the US Government to get out of the business of running operational IP networks. Instead, parts of the government (like the NSF) could just fund their grantees to buy Internet access from any commercial provider. (Even after that, experimental networks such as the Gigabit Internet Testbed could be fully funded and operated by government agencies or their contractors.) John From johnl at iecc.com Tue Dec 6 20:14:50 2022 From: johnl at iecc.com (John Levine) Date: 6 Dec 2022 23:14:50 -0500 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: Message-ID: <20221207041450.4080C50AF77F@ary.qy> It appears that Grant Taylor via Internet-history said: >On 12/5/22 10:39 AM, Dave Crocker via Internet-history wrote: >> Back then, Gopher was much easier to get a (text) page up and available. >> And there were lots of existing text documents. > >I question the veracity of that. But I have no first hand experience >hosting Gopher. >From personal experience I can confirm that it was true. What you could put on the page was quite limited, but even so at the time it was pretty exciting to put together a menu page that pointed to resources on six computers on three continents. >> The Web had a much richer presentation appearance but existing documents >> needed to be translated into html. > >The number of PDFs, PS, JPG, GIF, et al. files that I've downloaded beg >to differ with the need to be translated into HTML. 
Gopher was just a bunch of menus and the later Gopher+ had a way to say that the thing a menu entry pointed to was a picture or whatever. Recall that the early WWW only had text links. In-line images were Netscape's innovation. R's, John From internet-history at gtaylor.tnetconsulting.net Tue Dec 6 20:55:48 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Tue, 6 Dec 2022 21:55:48 -0700 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <20221207041450.4080C50AF77F@ary.qy> References: <20221207041450.4080C50AF77F@ary.qy> Message-ID: <009df42e-6497-7bd2-1fe6-e840eb83c9da@spamtrap.tnetconsulting.net> On 12/6/22 9:14 PM, John Levine via Internet-history wrote: > From personal experience I can confirm that it was true. What you > could put on the page was quite limited, but even so at the time > it was pretty exciting to put together a menu page that pointed to > resources on six computers on three continents. I largely agree that what could be put on the page was limited. I say largely because I still question the actual value of non-text on pages today. > Gopher was just a bunch of menus and the later Gopher+ had a way to > say that the thing a menu entry pointed to was a picture or whatever. > Recall that the early WWW only had text links. In-line images were > Netscape's innovation. Maybe the minimal experience I have as a user was with Gopher+. I really thought that Gopher could point to an arbitrary file independent of what it was. Perhaps I'm mistaken. Did Gopher+ allow you to point to an arbitrary file independent of what it was? Or was my minimal Gopher experience so divorced from reality as to not be worth counting? -- Grant. . . .
unix || die From vint at google.com Tue Dec 6 23:30:47 2022 From: vint at google.com (Vint Cerf) Date: Wed, 7 Dec 2022 02:30:47 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <30064.1670384834@hop.toad.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> Message-ID: good summary, John. v On Tue, Dec 6, 2022 at 10:47 PM John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > >>> I have heard a couple of times that there was a moment in the > history > >>> of the Internet where private companies were putting pressure on the > >>> community indicating that the Internet had to be controlled by a > private > >>> company, that it had to be in the hands of someone, not in their free > >>> will. Is it so? > > I recall a time when the Domain Name System top-level domain (TLD) for > each country had to be handled by a private party, not by the government > that ran the country. > > For example, the .au domain for Australia was originally run by Robert > Elz at Melbourne University. By 2001, a private nonprofit called ".au > Domain Administration" had been organized to run it instead, with the > "endorsement of the Commonwealth of Australia (the Commonwealth) of auDA > as the appropriate to hold the delegation of authority by the Internet > Corporation for Assigned Names and Numbers (ICANN) for administrative > authority of the au country code top level domain (ccTLD)". > > As a separate response... > > The original Internet was operated completely by government contractors > (like BBN or MERIT). Eventually, in the early 1990s, private parties > built their own networks that used the IP protocols. UUNET was an early > one, which didn't receive government subsidies, unlike the bloated and > therefore generally unresponsive NSFnet regional networks like BARRNET. 
> The private IP networks were soon gatewayed to each other, and to the > government-run parts of the Internet, so that it all acted as one big > Internet. Then it became possible for both government projects, and > ordinary companies or people, to just buy Internet service from this new > concept, an "ISP". > > Soon after that, there was a general move on the part of the US > Government to get out of the business of running operational IP > networks. Instead, parts of the government (like the NSF) could just > fund their grantees to buy Internet access from any commercial provider. > > (Even after that, experimental networks such as the Gigabit Internet > Testbed could be fully funded and operated by government agencies or > their contractors.) > > John > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From dhc at dcrocker.net Wed Dec 7 04:54:06 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 7 Dec 2022 04:54:06 -0800 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: References: <20221129142338.3089d2ae@dataplane.org> <8FD4D4B8-096E-42F0-A95D-263A3E9E17E8@strayalpha.com> <49DD0AB1-A507-4F9A-8448-A016EE08B80F@strayalpha.com> <5ada7b96-f710-4e54-838b-5328e74f8d1b@3kitty.org> <835cb2ca-3c48-aa5f-6268-4b735da02533@gmail.com> <17432c00-9584-78fd-6a80-253325c7a6e8@3kitty.org> <63874b58-a90b-8fec-d2db-be9fa3ef50ad@3kitty.org> <2c975dc4-f803-1a13-731e-d07089e498d3@gmail.com> <3b326e3e-d466-a30e-7f72-7ccb5b9b4e3e@bbiw.net> Message-ID: <0ef3b71a-318e-3645-e842-f50520d878ee@dcrocker.net> On 12/6/2022 4:19 PM, Grant Taylor via Internet-history wrote: > On 12/5/22 10:39 AM, Dave Crocker via Internet-history wrote: >> Back then, Gopher was much easier to get a (text) page up and >> available. 
And there were lots of existing text documents. > I question the veracity of that. But I have no first hand experience > hosting Gopher. >> The Web had a much richer presentation appearance but existing >> documents needed to be translated into html. > The number of PDFs, PS, JPG, GIF, et al. files that I've downloaded > beg to differ with the need to be translated into HTML. My posting was about the usage experience for the web and for gopher, in the early days of the web. It wasn't about theory and it wasn't about today. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From mfidelman at meetinghouse.net Wed Dec 7 08:16:52 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Wed, 7 Dec 2022 11:16:52 -0500 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <20221207041450.4080C50AF77F@ary.qy> References: <20221207041450.4080C50AF77F@ary.qy> Message-ID: <932dcaaa-e206-f2cc-2e2f-3e2781753bd2@meetinghouse.net> At The Center for Civic Networking, circa 1992, the first batch of stuff we put on the net was behind a gopher server. Worked like a charm. Miles Fidelman John Levine via Internet-history wrote: > It appears that Grant Taylor via Internet-history said: >> On 12/5/22 10:39 AM, Dave Crocker via Internet-history wrote: >>> Back then, Gopher was much easier to get a (text) page up and available. >>> And there were lots of existing text documents. >> I question the veracity of that. But I have no first hand experience >> hosting Gopher. > >From personal experience I can confirm that it was true. What you > could put on the page was quite limited, but even so at the time it > was pretty exciting to put together a menu page that pointed to > resources on six computers on three continents. > >>> The Web had a much richer presentation appearance but existing documents >>> needed to be translated into html. >> The number of PDFs, PS, JPG, GIF, et al.
files that I've downloaded beg >> to differ with the need to be translated into HTML. > Gopher was just a bunch of menus and the later Gopher+ had a way to > say that the thing a menu entry pointed to was a picture or whatever. > Recall that the early WWW only had text links. In-line images were > Netscape's innovation. > > R's, > John -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From craig at tereschau.net Wed Dec 7 08:29:31 2022 From: craig at tereschau.net (Craig Partridge) Date: Wed, 7 Dec 2022 09:29:31 -0700 Subject: [ih] Internet should be in private hands In-Reply-To: <30064.1670384834@hop.toad.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> Message-ID: On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < internet-history at elists.isoc.org> wrote: > > I recall a time when the Domain Name System top-level domain (TLD) for > each country had to be handled by a private party, not by the government > that ran the country. > > As I recall, Jon Postel didn't object to a government handling -- he simply wanted a responsible custodian. In most cases, governments weren't prepared. The reason for this recollection is that I believe some European governments took a role in designating who ran their initial TLD and that CSNET did a certain amount of mediating (and I worked for CSNET at the time). But my memory may be conflating multiple issues as there were also European governments who argued over who controlled their CSNET connection to the Internet. Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. 
From johnl at iecc.com Wed Dec 7 09:10:14 2022 From: johnl at iecc.com (John Levine) Date: 7 Dec 2022 12:10:14 -0500 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <009df42e-6497-7bd2-1fe6-e840eb83c9da@spamtrap.tnetconsulting.net> Message-ID: <20221207171014.C829750B5CB2@ary.qy> It appears that Grant Taylor via Internet-history said: >I really thought that Gopher could point to an arbitrary file >independent of what it was. Perhaps I'm mistaken. > >Did Gopher+ allow you to point to an arbitrary file independent of what >it was? Or was my minimal Gopher experience so divorced from reality as >to not be worth counting? See RFC 1436. Gopher could return arbitrary files, so long as you had a client that could do something with them. The gopher index scheme had a short list of filetypes including BinHex, uuencode, and "a binary file!" R's, John From olejacobsen at me.com Wed Dec 7 09:14:42 2022 From: olejacobsen at me.com (Ole Jacobsen) Date: Wed, 7 Dec 2022 09:14:42 -0800 Subject: [ih] Internet should be in private hands In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> Message-ID: <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> Jon Postel also wanted ccTLDs to use the official ISO 3166 list of countries, which became a problem for the UK whose ISO 3166 code is GB, so Jon had to make an exception. Life was simple back then :-) Ole > On Dec 7, 2022, at 08:29, Craig Partridge via Internet-history wrote: > > On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < > internet-history at elists.isoc.org> wrote: > >> >> I recall a time when the Domain Name System top-level domain (TLD) for >> each country had to be handled by a private party, not by the government >> that ran the country. >> >> > As I recall, Jon Postel didn't object to a government handling -- he simply > wanted a responsible custodian. 
In most cases, governments weren't > prepared. > > The reason for this recollection is that I believe some European > governments took a role in designating who ran their initial TLD and that > CSNET did a certain amount of mediating (and I worked for CSNET at the > time). But my memory may be conflating multiple issues as there were also > European governments who argued over who controlled their CSNET connection > to the Internet. > > Craig > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history Ole J. Jacobsen Editor and Publisher The Internet Protocol Journal Office: +1 415-550-9433 Cell: +1 415-370-4628 UK Cell: +44 7805 977889 Web: protocoljournal.org E-mail: olejacobsen at me.com E-mail: ole at protocoljournal.org Skype: organdemo From ocl at gih.com Wed Dec 7 10:14:23 2022 From: ocl at gih.com (=?UTF-8?Q?Olivier_MJ_Cr=c3=a9pin-Leblond?=) Date: Wed, 7 Dec 2022 19:14:23 +0100 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <20221207041450.4080C50AF77F@ary.qy> References: <20221207041450.4080C50AF77F@ary.qy> Message-ID: <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> On 07/12/2022 05:14, John Levine via Internet-history wrote: > Gopher was just a bunch of menus and the later Gopher+ had a way to > say that the thing a menu entry pointed to was a picture or whatever. > Recall that the early WWW only had text links. In-line images were > Netscape's innovation. Gopher was pretty easy to administer and did the job really well on a low bandwidth. For those who are nostalgic, here's a cut/paste from an old Gopher page I did back in the day. Internet Gopher Information Client 2.0 pl10 Directory Services -->1.About This Directory. 
2.College Telephone Book (text)/ 3.Connect to X.500 Directory 4.Electronic Yellow Pages (Experimental) 5.Finger to X.500 Services 6.International Dialling Codes. 7.Internet "white pages" directory facility (Netfind) [Experime.. 8.People's Locator on other sites (X.500 gateway)/ 9.Search College Telephone Book 10. Top level mail domains. 11. UK STD codes (long - nearly 8000 lines). 12. UK STD codes (string search) 13. X500 Data Summary. 14. X500 Great Britain DIT statistics. Press ? for Help, q to Quit, u to go up a menuPage: 1/1 As for text only WWW, Lynx was the oldest, and still is the oldest text browser. But inline images were already possible with NCSA Mosaic, Netscape's ancestor. NCSA Mosaic was widely used. I also have screen shots of a very early version of NCSA Mosaic but this mailing list does not support attachments. Kindest regards, Olivier From jack at 3kitty.org Wed Dec 7 10:50:42 2022 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 7 Dec 2022 10:50:42 -0800 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> References: <20221207041450.4080C50AF77F@ary.qy> <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> Message-ID: IIRC, I didn't personally use gopher much, but I think it used character-based interfaces (e.g., the "Press ? for Help, q to Quit, u to go up a menuPage: 1/1" below.) GUIs were becoming popular about that time, with "terminals" like SPARCs and Macs providing GUIs, and desktop PCs and Macs becoming common for non-techies. Did the early web browsers (Mosaic etc.) run on PCs and Macs? I don't recall the timing, but one of the advantages that the Web had might be the availability of the GUI instead of TTY-style interactions - even if all the documents being handled were just text. I recall that I demonstrated the Web in the early 90s to everyone I could accost at Oracle, from the receptionist to the Chairman of the Board.
They all "got it" and thought it was promising even if there was little content online at the time. Clicking on links was easy to learn. Adding links to text documents was also simple to do in your favorite text editor. Even simple formatting like headers, bold/italic, etc. I never even thought of demonstrating gopher - it was suited for techies but not so much for normal people. Jack Haverty On 12/7/22 10:14, Olivier MJ Crépin-Leblond via Internet-history wrote: > > > On 07/12/2022 05:14, John Levine via Internet-history wrote: >> Gopher was just a bunch of menus and the later Gopher+ had a way to >> say that the thing a menu entry pointed to was a picture or whatever. >> Recall that the early WWW only had text links. In-line images were >> Netscape's innovation. > > Gopher was pretty easy to administer and did the job really well on a > low bandwidth. > > For those who are nostalgic, here's a cut/paste from an old Gopher > page I did back in the day. > > Internet Gopher Information Client 2.0 pl10 > > Directory Services > > -->1.About This Directory. > > 2.College Telephone Book (text)/ > > 3.Connect to X.500 Directory > > 4.Electronic Yellow Pages (Experimental) > > 5.Finger to X.500 Services > > 6.International Dialling Codes. > > 7.Internet "white pages" directory facility (Netfind) [Experime.. > > 8.People's Locator on other sites (X.500 gateway)/ > > 9.Search College Telephone Book > > 10. Top level mail domains. > > 11. UK STD codes (long - nearly 8000 lines). > > 12. UK STD codes (string search) > > 13. X500 Data Summary. > > 14. X500 Great Britain DIT statistics. > > Press ? for Help, q to Quit, u to go up a menuPage: 1/1 > > > > As for text only WWW, Lynx was the oldest, and still is the oldest > text browser. But inline images were already possible with NCSA > Mosaic, Netscape's ancestor. NCSA Mosaic was widely used. I also have > screen shots of a very early version of NCSA Mosaic but this mailing > list does not support attachments.
> Kindest regards, > > Olivier From jeanjour at comcast.net Wed Dec 7 10:54:46 2022 From: jeanjour at comcast.net (John Day) Date: Wed, 7 Dec 2022 13:54:46 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> Message-ID: <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> That is an odd error in ISO 3166 given the organizations involved. (Being pedantic.) Great Britain is a geographical term, while UK designates the political unit, which was what was of concern to the organizations involved. 3166 was developed by TC46 which does documentation, libraries, museums, etc "in collaboration with the following International Organizations: Customs Cooperation Council (CCC), [since 1995: World Customs Organization (WCO)], United Nations Economic Commission for Europe (ECE), Food and Agriculture Organization of the United Nations (FAO), International Atomic Energy Agency (IAEA), International Air Transport Association (IATA), International Chamber of Commerce (ICC), International Chamber of Shipping (ICS), International Federation of Library Associations (IFLA), International Labour Office (ILO), International Telecommunication Union (ITU), International Union of Railways (UIC), United Nations Organization (UN), United Nations Conference on Trade and Development (UNCTAD), Universal Postal Union (UPU), World Intellectual Property Organization (WIPO), and World Health Organization (WHO)." You would have thought one of them would have caught it. ;-) Also, I remember seeing the initial CC TLD list at the time that had Switzerland as .sw, rather than .ch, which would have been familiar to any stamp collector. ;-) Everyone learned something.
;-) John > On Dec 7, 2022, at 12:14, Ole Jacobsen via Internet-history wrote: > > Jon Postel also wanted ccTLDs to use the official ISO 3166 > list of countries, which became a problem for the UK whose > ISO 3166 code is GB, so Jon had to make an exception. > > Life was simple back then :-) > > Ole > >> On Dec 7, 2022, at 08:29, Craig Partridge via Internet-history wrote: >> >> On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> >>> I recall a time when the Domain Name System top-level domain (TLD) for >>> each country had to be handled by a private party, not by the government >>> that ran the country. >>> >>> >> As I recall, Jon Postel didn't object to a government handling -- he simply >> wanted a responsible custodian. In most cases, governments weren't >> prepared. >> >> The reason for this recollection is that I believe some European >> governments took a role in designating who ran their initial TLD and that >> CSNET did a certain amount of mediating (and I worked for CSNET at the >> time). But my memory may be conflating multiple issues as there were also >> European governments who argued over who controlled their CSNET connection >> to the Internet. >> >> Craig >> >> -- >> ***** >> Craig Partridge's email account for professional society activities and >> mailing lists. >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > > Ole J. 
Jacobsen > Editor and Publisher > The Internet Protocol Journal > Office: +1 415-550-9433 > Cell: +1 415-370-4628 > UK Cell: +44 7805 977889 > Web: protocoljournal.org > E-mail: olejacobsen at me.com > E-mail: ole at protocoljournal.org > Skype: organdemo > > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From dhc at dcrocker.net Wed Dec 7 11:01:46 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 7 Dec 2022 11:01:46 -0800 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: References: <20221207041450.4080C50AF77F@ary.qy> <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> Message-ID: <5372baee-671b-9683-56a6-7003a246e324@dcrocker.net> On 12/7/2022 10:50 AM, Jack Haverty via Internet-history wrote: > I recall that I demonstrated the Web in the early 90s to everyone I > could accost at Oracle, from the receptionist to the Chairman of the > Board. They all "got it" and thought it was promising even if there > was little content online at the time. In 1990, I did a half-day class about Internet tech, as part of a week-long CMU class for some AT&T folk. I included a gopher demo, to get some class participation, starting with a master page that listed areas of the globe. The class choices eventually got us down to Wellington, New Zealand. Listed there was the Wellington Town Council and I saw an entry for the Town Council minutes. For the previous week. Which we were looking at from Pittsburgh. That's the first time I finally understood what globalized use of the net was going to look like.
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From vint at google.com Wed Dec 7 11:02:59 2022 From: vint at google.com (Vint Cerf) Date: Wed, 7 Dec 2022 14:02:59 -0500 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <5372baee-671b-9683-56a6-7003a246e324@dcrocker.net> References: <20221207041450.4080C50AF77F@ary.qy> <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> <5372baee-671b-9683-56a6-7003a246e324@dcrocker.net> Message-ID: wellington was the first city to have a web site as I recall. v On Wed, Dec 7, 2022 at 2:02 PM Dave Crocker via Internet-history < internet-history at elists.isoc.org> wrote: > On 12/7/2022 10:50 AM, Jack Haverty via Internet-history wrote: > > I recall that I demonstrated the Web in the early 90s to everyone I > > could accost at Oracle, from the receptionist to the Chairman of the > > Board. They all "got it" and thought it was promising even if there > > was little content online at the time. > > In 1990, I did a half-day class about Internet tech, as part of a > week-long CMU class for some AT&T folk. I included a gopher demo, to get > some class participation, starting with a master page that listed areas > of the globe. > > The class choices eventually got us down to Wellington, New Zealand. > > Listed there was the Wellington Town Council and I saw an entry for the > Town Council minutes. For the previous week. Which we were looking at > from Pittsburgh. > > That's the first time I finally understood what globalized use of the > net was going to look like. 
> > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > mast:@dcrocker at mastodon.social > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From johnl at iecc.com Wed Dec 7 11:09:45 2022 From: johnl at iecc.com (John R. Levine) Date: 7 Dec 2022 14:09:45 -0500 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" Message-ID: > As for text only WWW, Lynx was the oldest, and still is the oldest text > browser. But inline images were already possible with NCSA Mosaic, Netscape's > ancestor. NCSA Mosaic was widely used. Oh, of course. Mosaic was the base for Netscape. Regards, John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. https://jl.ly From el at lisse.NA Wed Dec 7 11:40:13 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Wed, 7 Dec 2022 21:40:13 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> Message-ID: There is nothing odd here. As one of the longer serving ccTLD Managers (.NA, 1991) I recall that Postel wanted not to be in the business of deciding what a country is or not. So, as he wrote in RFC 1591 (April 1994): "The selection of the ISO 3166 list as a basis for country code top-level domain names was made with the knowledge that ISO has a procedure for determining which entities should be and should not be on that list." 
The ISO maintains the International Standard ISO 3166-1:2020(E) "Codes for the representation of names of countries and their subdivisions". I would not delve too much and too deep into who within the ISO develops the list, or how; basically the will of the government of the country/subdivision concerned, as to how it wants that name to be represented, filters down. And as time has shown it seems to work out. UK, EU, AC and SU for that matter (and others) are not assigned but what is called exceptionally reserved: "Code elements may be reserved, in exceptional cases, for country names which the ISO 3166/MA has decided not to include in the code corresponding to this document, but for which an interchange requirement exists. Before such code elements are reserved, advice from the relevant authority should be sought." JE and GG were for a while exceptionally reserved but are now assigned. .UK was one of the first ccTLDs (1987) and I do not remember if it existed before UK became exceptionally reserved or afterwards, which would be however quite interesting :-)-O .GB (with the same WHOIS date as .UK) interestingly is still in the root, managed by JANET (which may or may not exist any longer). .CH started in 1987 SW is unassigned. Details matter... el On 2022-12-07 20:54 , John Day via Internet-history wrote: > That is an odd error in ISO in ISO 3166 given the organizations > involved. (Being pedantic.) > > Great Britain is a geographical term, while UK designates the > political unit, which was what was of concern to the organizations > involved.
> > 3166 was developed by TC46 which does documentation, libraries, > museums, etc "in collaboration with the following International > Organizations: Customs Cooperation Council (CCC), [since 1995: World > Customs Organization (WCO)], United Nations Economic Commission for > Europe (ECE), Food and Agriculture Organization of the United Nations > (FAO), International Atomic Energy Agency (IAEA), International Air > Transport Association (IATA), International Chamber of Commerce (ICC), > International Chamber of Shipping (ICS), International Federation of > Library Associations (IFLA), International Labour Office (ILO), > International Telecommunication Union (ITU), International Union of > Railways (UIC), United Nations Organization (UN), United Nations > Conference on Trade and Development (UNCTAD), Universal Postal Union > (UPU), World Intellectual Property Organization (WIPO), and World > Health Organization (WHO)." > > You would have thought one of them would have caught it. ;-) > > Also, I remember seeing the initial CC TLD list at the time that had > Switzerland as .sw, rather than .ch, which would have been familiar to > any stamp collector. ;-) > > Everyone learned something. ;-) > > John > >> On Dec 7, 2022, at 12:14, Ole Jacobsen via Internet-history >> wrote: >> >> Jon Postel also wanted ccTLDs to use the official ISO 3166 list of >> countries, which became a problem for the UK whose ISO 3166 code is >> GB, so Jon had to make an exception. >> >> Life was simple back then :-) >> >> Ole >> >>> On Dec 7, 2022, at 08:29, Craig Partridge via Internet-history >>> wrote: >>> >>> On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>> >>>> I recall a time when the Domain Name System top-level domain (TLD) >>>> for each country had to be handled by a private party, not by the >>>> government that ran the country.
>>>> >>> As I recall, Jon Postel didn't object to a government handling -- he >>> simply wanted a responsible custodian. In most cases, governments >>> weren't prepared. >>> >>> The reason for this recollection is that I believe some European >>> governments took a role in designating who ran their initial TLD and >>> that CSNET did a certain amount of mediating (and I worked for CSNET >>> at the time). But my memory may be conflating multiple issues as >>> there were also European governments who argued over who controlled >>> their CSNET connection to the Internet. >>> >>> Craig [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply From el at lisse.NA Wed Dec 7 11:44:16 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Wed, 7 Dec 2022 21:44:16 +0200 Subject: [ih] Internet should be in private hands (correction of typo) In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> Message-ID: Correction, .UK/.GB carry both 1985-07-24 on the whois (ie not 1987) el On 2022-12-07 21:40 , Dr Eberhard W Lisse wrote: > There is nothing odd here. > > As one of the longer serving ccTLD Managers (.NA, 1991) I recall that > Postel wanted not to be in the business of deciding what a country is > or not. > > So, as he wrote in RFC 1591 (April 1994): > > "The selection of the ISO 3166 list as a basis for country code > top-level domain names was made with the knowledge that ISO has a > procedure for determining which entities should be and should not be > on that list."
> > The ISO maintains the International Standard ISO 3166-1:2020(E) "Codes > for the representation of names of countries and their subdivisions". > > I would not delve to much and too deep into who within the ISO develops > the list, or how, basically the will of the government of the > country/subdivision concerned how it wants that name to be represented > filters down. And as time has shown it seems to work out. > > > UK, EU, AC and SU for that matter (and others) are not assigned but what > is called exceptionally reserved: > > "Code elements may be reserved, in exceptional cases, for country > names which the ISO 3166/MA has decided not to include in the code > corresponding to this document, but for which an interchange > requirement exists. Before such code elements are reserved, advice > from the relevant authority should be sought." > > JE and GG were for a while exceptionally reserved but are now assigned. > > .UK was one of the first ccTLDs (1987) and I do not remember if it > existed before UK became exceptionally reserved or afterwards, which > would be however quite interesting :-)-O > > .GB (with the same WHOIS date as .UK) interestingly is still in the > root, managed by JANET (which may or may not exist any longer). > > .CH started in 1987 > > SW is unassigned. > > > Details matter... > > el [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No.
4 of 2019 may apply From johnl at iecc.com Wed Dec 7 11:57:21 2022 From: johnl at iecc.com (John Levine) Date: 7 Dec 2022 14:57:21 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> Message-ID: <20221207195721.86AF050B81DD@ary.qy> It appears that John Day via Internet-history said: >That is an odd error in ISO in ISO 3166 given the organizations involved. (Being pedantic.) > >Great Britain is a geographical term, while UK designates the political unit, which was what was of concern to the >organizations involved. ... ISO 3166 was originally issued in 1974. Despite what we might imagine, the world has not always revolved around the Internet. So what about AQ? BV? AX? GP, MQ, GF, and PM? Geographically, John From el at lisse.NA Wed Dec 7 12:05:52 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Wed, 7 Dec 2022 22:05:52 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: <20221207195721.86AF050B81DD@ary.qy> References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: John, these are ASSIGNED. Remember it's "countries and subdivisions" el On 2022-12-07 21:57 , John Levine via Internet-history wrote: [...] > ISO 3166 was originally issued in 1974. Despite what we might > imagine, the world has not always revolved around the Internet. > > So what about AQ? BV? AX? GP, MQ, GF, and PM? > > Geographically, > John -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply From jeanjour at comcast.net Wed Dec 7 12:11:12 2022 From: jeanjour at comcast.net (John Day) Date: Wed, 7 Dec 2022 15:11:12 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <20221207195721.86AF050B81DD@ary.qy> References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: Yes, that was my point. 
And why given the political organizations involved, it was odd that someone didn't catch that GB wasn't the designation of the political entity. And of course, CH or Confederation Helvetica is the official name for Switzerland and has been Helvetica for centuries and Confederation Helvetica since unification. (My Swiss history isn't that good.) ;-) (I always find English versions of such names interesting, sometimes it is far from clear how they came to be. Not so much the name used by the country but the English version. Like getting Florence from Firenze isn't exactly obvious.) ;-) Take care, John > On Dec 7, 2022, at 14:57, John Levine wrote: > > It appears that John Day via Internet-history said: >> That is an odd error in ISO in ISO 3166 given the organizations involved. (Being pedantic.) >> >> Great Britain is a geographical term, while UK designates the political unit, which was what was of concern to the >> organizations involved. ... > > ISO 3166 was originally issued in 1974. Despite what we might imagine, > the world has not always revolved around the Internet. > > So what about AQ? BV? AX? GP, MQ, GF, and PM? > > Geographically, > John From el at lisse.NA Wed Dec 7 12:16:13 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Wed, 7 Dec 2022 22:16:13 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: You are making Postel's point of not being in the business of what a country is. He was looking for a list of sorts and the ISO "list" fits the bill (even with all its warts) el On 2022-12-07 22:11 , John Day via Internet-history wrote: > Yes, that was my point. > > And why given the political organizations involved, it was odd that > someone didn't catch that GB wasn't the designation of the political > entity. > > And of course, CH or Confederation Helvetica is the official name for > Switzerland and has been Helvetica for centuries and Confederation > Helvetica since unification. (My Swiss history isn't that good.) ;-) > > (I always find English versions of such names interesting, sometimes > it is far from clear how they came to be. Not so much the name used > by the country but the English version. Like getting Florence from > Firenze isn't exactly obvious.) ;-) > > Take care, > John [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply From jeanjour at comcast.net Wed Dec 7 12:38:06 2022 From: jeanjour at comcast.net (John Day) Date: Wed, 7 Dec 2022 15:38:06 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: <9B1583C1-AFBF-44DC-96C8-8830D3A5D050@comcast.net> Yes that was the whole idea and that both 3166 and networking had something to learn. As John L said, the Internet is not the center of the world, to which I would add, or always right. Everyone had something to learn. > On Dec 7, 2022, at 15:16, Dr Eberhard W Lisse via Internet-history wrote: > > You are making Postel's point of not being in the business of what a > country is. He was looking for a list of sorts and the ISO "list" fits > the bill (even with all its warts) > > el > > On 2022-12-07 22:11 , John Day via Internet-history wrote: >> Yes, that was my point. >> And why given the political organizations involved, it was odd that >> someone didn't catch that GB wasn't the designation of the political >> entity. >> And of course, CH or Confederation Helvetica is the official name for >> Switzerland and has been Helvetica for centuries and Confederation >> Helvetica since unification. (My Swiss history isn't that good.) ;-) >> (I always find English versions of such names interesting, sometimes >> it is far from clear how they came to be. Not so much the name used >> by the country but the English version. Like getting Florence from >> Firenze isn't exactly obvious.) ;-) >> Take care, >> John > [...] > > -- > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From el at lisse.NA Wed Dec 7 12:51:17 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Wed, 7 Dec 2022 22:51:17 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: <9B1583C1-AFBF-44DC-96C8-8830D3A5D050@comcast.net> References: <20221207195721.86AF050B81DD@ary.qy> <9B1583C1-AFBF-44DC-96C8-8830D3A5D050@comcast.net> Message-ID: <0ad4ec5a-418b-6661-001b-467060e3028d@lisse.NA> The 2020 version of 3166 acknowledges (in the bibliography): [20] [...] This role means assigning the operators of top-level domains, including country code domain names, which are based on ISO 3166 alpha-2 country code elements. [...] el On 2022-12-07 22:38 , John Day wrote: > Yes that was the whole idea and that both 3166 and networking had > something to learn. [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No.
4 of 2019 may apply From brian.e.carpenter at gmail.com Wed Dec 7 13:18:39 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Thu, 8 Dec 2022 10:18:39 +1300 Subject: [ih] Internet should be in private hands In-Reply-To: <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> Message-ID: <8ad5881d-dbdb-42df-2758-63f2241ee707@gmail.com> On 08-Dec-22 07:54, John Day via Internet-history wrote: > That is an odd error in ISO in ISO 3166 given the organizations involved. (Being pedantic.) > > Great Britain is a geographical term, while UK designates the political unit, which was what was of concern to the organizations involved. "GB" was long established as the sticker you had to put on the back of your vehicle when you took it across the English Channel and drove it in continental Europe. According to https://en.wikipedia.org/wiki/International_vehicle_registration_code it dates back to 1909, at which time it was short for "Great Britain and Ireland". Then in 1924 it was redefined as "Great Britain and Northern Ireland" (geographically and politically identical to the modern UK, following Irish independence in 1922). So ISO 3166 didn't invent "GB". JANET invented ".UK" because by the time DNS came along, the Irish Republic existed, the Northern Ireland troubles were ongoing, and asserting that Northern Ireland was part of "GB" was a political hot potato. I have just learned that the vehicle sticker for the UK was finally changed from "GB" to "UK" in 2021, presumably as a side-effect of Brexit, rather than to celebrate 99 years of Irish independence. 
Brian > > 3166 was developed by TC46 which does documentation, libraries, museums, etc "in collaboration with the following International Organizations: Customs Cooperation Council (CCC), [since 1995: World Customs Organization (WCO)], United Nations Economic Commission for Europe (ECE), Food and Agriculture Organization of the United Nations (FAO), International Atomic Energy Agency (IAEA), International Air Transport Association (IATA), International Chamber of Commerce (ICC), International Chamber of Shipping (ICS), International Federation of Library Associations (IFLA), International Labour Office (ILO), International Telecommunication Union (ITU), International Union of Railways (UIC), United Nations Organization (UN), United Nations Conference on Trade and Development (UNCTAD), Universal Postal Union (UPU), World Intellectual Property Organization (WIPO), and World Health Organization (WHO)." > > You would have thought one of them would have caught it. ;-) > > Also, I remember seeing the initial CC TLD list at the time that had Switzerland as .sw, rather than .ch, which would have been familiar to any stamp collector. ;-) > > Everyone learned something. ;-) > > John > >> On Dec 7, 2022, at 12:14, Ole Jacobsen via Internet-history wrote: >> >> Jon Postel also wanted ccTLDs to use the official ISO 3166 >> list of countries, which became a problem for the UK whose >> ISO 3166 code is GB, so Jon had to make an exception. >> >> Life was simple back then :-) >> >> Ole >> >>> On Dec 7, 2022, at 08:29, Craig Partridge via Internet-history wrote: >>> >>> On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> >>>> I recall a time when the Domain Name System top-level domain (TLD) for >>>> each country had to be handled by a private party, not by the government >>>> that ran the country.
>>>> >>>> >>> As I recall, Jon Postel didn't object to a government handling -- he simply >>> wanted a responsible custodian. In most cases, governments weren't >>> prepared. >>> >>> The reason for this recollection is that I believe some European >>> governments took a role in designating who ran their initial TLD and that >>> CSNET did a certain amount of mediating (and I worked for CSNET at the >>> time). But my memory may be conflating multiple issues as there were also >>> European governments who argued over who controlled their CSNET connection >>> to the Internet. >>> >>> Craig >>> >>> -- >>> ***** >>> Craig Partridge's email account for professional society activities and >>> mailing lists. >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> Ole J. Jacobsen >> Editor and Publisher >> The Internet Protocol Journal >> Office: +1 415-550-9433 >> Cell: +1 415-370-4628 >> UK Cell: +44 7805 977889 >> Web: protocoljournal.org >> E-mail: olejacobsen at me.com >> E-mail: ole at protocoljournal.org >> Skype: organdemo >> >> >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From lpress at csudh.edu Wed Dec 7 14:22:16 2022 From: lpress at csudh.edu (Larry Press) Date: Wed, 7 Dec 2022 22:22:16 +0000 Subject: [ih] more gopher baroque, "The Internet runs on Proposed Standards" In-Reply-To: <5372baee-671b-9683-56a6-7003a246e324@dcrocker.net> References: <20221207041450.4080C50AF77F@ary.qy> <6233443e-2d24-81ff-658a-b84c5ee4f674@gih.com> <5372baee-671b-9683-56a6-7003a246e324@dcrocker.net> Message-ID: We did a demo of the Web at the Developing Nations Workshop held in conjunction with the ISOC annual meeting -- not sure which year, but the Web was new. I recall showing a Web site at the U. 
of Hawaii which had had pictures of dinosaurs, so it must have been after 8/1991 when the first of the Cernettes was online. ________________________________ From: Internet-history on behalf of Dave Crocker via Internet-history Sent: Wednesday, December 7, 2022 11:01 AM To: Jack Haverty ; internet-history at elists.isoc.org Cc: Dave Crocker Subject: Re: [ih] more gopher baroque, "The Internet runs on Proposed Standards" On 12/7/2022 10:50 AM, Jack Haverty via Internet-history wrote: > I recall that I demonstrated the Web in the early 90s to everyone I > could accost at Oracle, from the receptionist to the Chairman of the > Board. They all "got it" and thought it was promising even if there > was little content online at the time. In 1990, I did a half-day class about Internet tech, as part of a week-long CMU class for some AT&T folk. I included a gopher demo, to get some class participation, starting with a master page that listed areas of the globe. The class choices eventually got us down to Wellington, New Zealand. Listed there was the Wellington Town Council and I saw an entry for the Town Council minutes. For the previous week. Which we were looking at from Pittsburgh. That's the first time I finally understood what globalized use of the net was going to look like. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From cabo at tzi.org Wed Dec 7 14:45:25 2022 From: cabo at tzi.org (Carsten Bormann) Date: Wed, 7 Dec 2022 23:45:25 +0100 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <222d1fd7-dd6a-5100-8992-1c5e9ac1d105@gmail.com> References: <20221204194252.45F64508B528@ary.qy> <222d1fd7-dd6a-5100-8992-1c5e9ac1d105@gmail.com> Message-ID: <2C81CD93-DAF2-4B12-B282-35DEFA699F03@tzi.org> > On 2022-12-07, at 03:34, Brian E Carpenter via Internet-history wrote: > > For those who don't know, the IESG maintains a "downref registry" for non-standards track RFCs that can be cited as if they were standards: https://datatracker.ietf.org/doc/downref This list contains an entry that may be of interest for those who still care about the thread about the history of standards levels: > RFC 7251 AES-CCM Elliptic Curve Cryptography (ECC) Cipher Suites for TLS (Used as a normative reference by:) > draft-ietf-core-coap The Constrained Application Protocol (CoAP) RFC 7251 is an "informational" RFC (which is why it is on this list). RFC 7251 was used as a normative reference by RFC 7252 (draft-ietf-core-coap), which is a standards-track document ("Proposed Standard"). Both RFC 7251 and RFC 7252 were published on 2014-06-27. So we knew, at the very day RFC 7251 was published, that the "informational" status of RFC 7251 was a misstatement and this was used as a part of a standard (RFC 7252). (That part of) the Internet deliberately runs on an "informational" standard. This nomenclature still boggles my mind. It is regularly done, though, and there is nothing *wrong* about this weird procedure (except maybe for the English language meaning of the labels being used). Grüße, Carsten From jhlowry at mac.com Wed Dec 7 15:50:23 2022 From: jhlowry at mac.com (John Lowry) Date: Wed, 7 Dec 2022 18:50:23 -0500 Subject: [ih] "The Internet runs on Proposed Standards" In-Reply-To: <2C81CD93-DAF2-4B12-B282-35DEFA699F03@tzi.org> References: <20221204194252.45F64508B528@ary.qy> <222d1fd7-dd6a-5100-8992-1c5e9ac1d105@gmail.com> <2C81CD93-DAF2-4B12-B282-35DEFA699F03@tzi.org> Message-ID: <3F3097B3-F924-44C8-A8FA-E25374D75217@mac.com> I agree that the nomenclature is difficult. I first encountered the IETF when an "RFC" was a working standard and always viewed the later STDs as a kind of honorary recognition that an RFC was widely implemented and accepted. I would say that the internet runs or at least ran on RFCs. 2 cents worth. > On Dec 7, 2022, at 17:45, Carsten Bormann via Internet-history wrote: > > >> On 2022-12-07, at 03:34, Brian E Carpenter via Internet-history wrote: >> >> For those who don't know, the IESG maintains a "downref registry" for non-standards track RFCs that can be cited as if they were standards: https://datatracker.ietf.org/doc/downref > > This list contains an entry that may be of interest for those who still care about the thread about the history of standards levels: > >> RFC 7251 AES-CCM Elliptic Curve Cryptography (ECC) Cipher Suites for TLS > (Used as a normative reference by:) >> draft-ietf-core-coap The Constrained Application Protocol (CoAP) > > RFC 7251 is an "informational" RFC (which is why it is on this list). > RFC 7251 was used as a normative reference by RFC 7252 (draft-ietf-core-coap), which is a standards-track document ("Proposed Standard"). > > Both RFC 7251 and RFC 7252 were published on 2014-06-27. > > So we knew, at the very day RFC 7251 was published, that the "informational" status of RFC 7251 was a misstatement and this was used as a part of a standard (RFC 7252). > > (That part of) the Internet deliberately runs on an "informational" standard. > > This nomenclature still boggles my mind. > > It is regularly done, though, and there is nothing *wrong* about this weird procedure (except maybe for the English language meaning of the labels being used). > > Grüße, Carsten > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Wed Dec 7 15:51:16 2022 From: jeanjour at comcast.net (John Day) Date: Wed, 7 Dec 2022 18:51:16 -0500 Subject: [ih] Internet should be in private hands In-Reply-To: <8ad5881d-dbdb-42df-2758-63f2241ee707@gmail.com> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <30064.1670384834@hop.toad.com> <056FFE7F-EFFA-4910-895E-90EF38CBC164@me.com> <24917543-49D8-410E-8F57-EAF615DF8B21@comcast.net> <8ad5881d-dbdb-42df-2758-63f2241ee707@gmail.com> Message-ID: <102BF56E-1A4B-43B0-B3BB-7EDDF442C9FF@comcast.net> > On Dec 7, 2022, at 16:18, Brian E Carpenter wrote: > > On 08-Dec-22 07:54, John Day via Internet-history wrote: >> That is an odd error in ISO in ISO 3166 given the organizations involved. (Being pedantic.) >> Great Britain is a geographical term, while UK designates the political unit, which was what was of concern to the organizations involved. > > "GB" was long established as the sticker you had to put on the back of your vehicle when you took it across the English Channel and drove it in continental Europe. According to https://en.wikipedia.org/wiki/International_vehicle_registration_code O, that is right! I had forgotten about that!! > it dates back to 1909, at which time it was short for "Great Britain and Ireland". Then in 1924 it was redefined as "Great Britain and Northern Ireland" (geographically and politically identical to the modern UK, following Irish independence in 1922). > > So ISO 3166 didn't invent "GB". JANET invented ".UK" because by the time DNS came along, the Irish Republic existed, the Northern Ireland troubles were ongoing, and asserting that Northern Ireland was part of "GB" was a political hot potato. Well, apparently 3166 used GB in 1974, which predates JANET and DNS. > > I have just learned that the vehicle sticker for the UK was finally changed from "GB" to "UK" in 2021, presumably as a side-effect of Brexit, rather than to celebrate 99 years of Irish independence. ;-) lol I bet you are right! These things are always interesting. Thanks, John > > Brian > >> 3166 was developed by TC46 which does documentation, libraries, museums, etc "in collaboration with the following International Organizations: Customs Cooperation Council (CCC), [since 1995: World Customs Organization (WCO)], United Nations Economic Commission for Europe (ECE), Food and Agriculture Organization of the United Nations (FAO), International Atomic Energy Agency (IAEA), International Air Transport Association (IATA), International Chamber of Commerce (ICC), International Chamber of Shipping (ICS), International Federation of Library Associations (IFLA), International Labour Office (ILO), International Telecommunication Union (ITU), International Union of Railways (UIC), United Nations Organization (UN), United Nations Conference on Trade and Development (UNCTAD), Universal Postal Union (UPU), World Intellectual Property Organization (WIPO), and World Health Organization (WHO)." >> You would have thought one of them would have caught it. ;-) >> Also, I remember seeing the initial CC TLD list at the time that had Switzerland as .sw, rather than .ch, which would have been familiar to any stamp collector. ;-) >> Everyone learned something.
;-) >> John >>> On Dec 7, 2022, at 12:14, Ole Jacobsen via Internet-history wrote: >>> >>> Jon Postel also wanted ccTLDs to use the official ISO 3166 >>> list of countries, which became a problem for the UK whose >>> ISO 3166 code is GB, so Jon had to make an exception. >>> >>> Life was simple back then :-) >>> >>> Ole >>> >>>> On Dec 7, 2022, at 08:29, Craig Partridge via Internet-history wrote: >>>> >>>> On Tue, Dec 6, 2022 at 8:47 PM John Gilmore via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>>> >>>>> I recall a time when the Domain Name System top-level domain (TLD) for >>>>> each country had to be handled by a private party, not by the government >>>>> that ran the country. >>>>> >>>>> >>>> As I recall, Jon Postel didn't object to a government handling -- he simply >>>> wanted a responsible custodian. In most cases, governments weren't >>>> prepared. >>>> >>>> The reason for this recollection is that I believe some European >>>> governments took a role in designating who ran their initial TLD and that >>>> CSNET did a certain amount of mediating (and I worked for CSNET at the >>>> time). But my memory may be conflating multiple issues as there were also >>>> European governments who argued over who controlled their CSNET connection >>>> to the Internet. >>>> >>>> Craig >>>> >>>> -- >>>> ***** >>>> Craig Partridge's email account for professional society activities and >>>> mailing lists. >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> Ole J. 
Jacobsen >>> Editor and Publisher >>> The Internet Protocol Journal >>> Office: +1 415-550-9433 >>> Cell: +1 415-370-4628 >>> UK Cell: +44 7805 977889 >>> Web: protocoljournal.org >>> E-mail: olejacobsen at me.com >>> E-mail: ole at protocoljournal.org >>> Skype: organdemo >>> >>> >>> >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Wed Dec 7 15:55:06 2022 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 7 Dec 2022 15:55:06 -0800 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: On 12/6/22 00:55, Dr Eberhard W Lisse via Internet-history wrote: > What does being "in charge of the Internet" mean? > > el Getting back to this question... IIRC, one of the fundamental questions of the "Internet Experiment" was whether or not it was possible to create and operate a network for which no one person or entity was "in charge".?? Prior networks, such as ARPANET, were managed and operated by a single locus of control, e.g., DARPA/DCA for setting policy, and contractor BBN to handle the technical operations of the switching fabric. If you look at other kinds of infrastructures, much of this issue of control has been worked out over the years.? E.g., the air travel "network" has all sorts of rules and procedures, policies and agreements between countries, ancillary mechanisms such as air traffic control, and funding channels such as gate charges.?? There are also laws and constraints governing things like airport operations, crew and passenger safety, cargo restrictions, etc. Various organizations, both public and private, are "in charge". 
Similar control structures have been developed, sometimes over centuries, for other forms of "moving stuff", e.g, trains, automobiles, ships, etc. The telephone system is one of the newer infrastructures, but has been around long enough to have developed a lot of such control mechanisms with multiple entities in charge of various pieces. Still, it is not complete.? For example, in the US at least, there have recently been laws passed to combat "spoofing" of telephone numbers and require system operators to implement them. IMHO, the Internet is still too young to have developed such control structures.? So there are many aspects of the Internet as a data transportation infrastructure that are not yet developed. One example, from perhaps 20 years ago...? At the time, delivering software across country or state boundaries required payment of tariffs or duties.?? Such "control" had been created and still applied to software delivered on CDs, i.e., some kind of physical media.? However, FTPing across the Internet bypassed those mechanisms.??? The "control mechanisms" for taxes and tariffs hadn't adapted to the new world of the Internet. IMHO, "in charge of the Internet" means having the policies, procedures, mechanisms, and organizations in place, with a mix of public and private components, to assure that the Internet is operated and evolved with the same goals as other infrastructures - e.g., safety, availability, conformance to laws, reliability, and other such aspects of things that we all depend upon.? Since the Internet is a global infrastructure, that necessarily will also involve international diplomacy and politics, treaties and organizations, and multinational coordination. I had thought that was ISOC's mission, but I think it will take more impetus than ISOC can provide.? Also it has to be applied at many levels -- not just the "plumbing" of IP.?? For example, it is possible today to ship cargo like explosives around the world.?? 
But there are many constraints, permits, inspections, and obstacles to doing it.?? Even lithium batteries are regulated.? In the Internet world today, you can similarly ship "explosive bits" (hate speech, pornography, terrorist manuals, whatever) around the world with little if any control. Similarly, in the early days, I used to simply trust my email.?? The Internet was a very collegial environment.? Now, for serious use, email is not to be trusted.?? Banks, medical facilities, et al use email only to tell me that they have a message for me, and I should log in securely to their silo to read it.?? Over the decades, email has become widely available but less trustworthy.?? Mechanisms (PGP etc.) exist to provide some help, but people seem to ignore them - even those who religiously wear seatbelts when using the auto transport network. In the original Internet and ARPANET, and NSFNet, policy dictated what could be sent over the net.? And the policy was enforced. Today? Not an easy problem.?? It's not clear, to me at least, that anyone is working on it. Jack Haverty From johnl at iecc.com Wed Dec 7 17:46:03 2022 From: johnl at iecc.com (John R. Levine) Date: 7 Dec 2022 20:46:03 -0500 Subject: [ih] two letter follies, was Internet should be in private hands In-Reply-To: References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: > And why given the political organizations involved, it was odd that > someone didn?t catch that GB wasn?t the designation of the political > entity. >> So what about AQ? BV? AX? GP, MQ, GF, and PM? Antarctica is not a political entity. The Aland islands are part of Finland. Guadeloupe, Martinique, and St Pierre are islands that are part of France, French Guiana is a chunk of South America that is also part of France.* I don't know all the history of 3166 but it's definitely not just a list of political entities. 
R's, John * - and is why the answer to "what is the distance from France to Brazil" is zero From gdmr at inf.ed.ac.uk Wed Dec 7 23:59:40 2022 From: gdmr at inf.ed.ac.uk (George Ross) Date: Thu, 08 Dec 2022 07:59:40 +0000 Subject: [ih] Internet should be in private hands In-Reply-To: Your message of "Wed, 07 Dec 2022 15:11:12 -0500." Message-ID: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> > ... And why given the political organizations involved, it was odd that > someone didn?t catch that GB wasn?t the designation of the political entity. Not at all, given the confusion by the English establishment among UK, GB, England and (Great) Britain. It persists to this day, though some organisations have obviously started to brief their employees regarding the sensitivities of those of us from the non-English parts of the UK. (That can lead to over-correction, though, such as describing Elizabeth 1st's navy as "British" rather than "English".) The naming scheme which JANET-attached Universties used did start with "uk" rather than "gb" -- we were uk.ac.ed.cs. That was in place before we joined the IP internet and started using the DNS too, and I suspect the prospect of sites having two different names or else having to change the JANET scheme, which was much more heavily used than the DNS at the time, didn't appeal, in addition to the political sensitivities. We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid confusion in the mail gateways, as .CS was a ccTLD. -- George D M Ross MSc PhD CEng MBCS CITP University of Edinburgh, School of Informatics, Appleton Tower, 11 Crichton Street, Edinburgh, Scotland, EH8 9LE Mail: gdmr at inf.ed.ac.uk Voice: 0131 650 5147 PGP: 1024D/AD758CC5 B91E D430 1E0D 5883 EF6A 426C B676 5C2B AD75 8CC5 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From el at lisse.na Thu Dec 8 00:21:12 2022 From: el at lisse.na (Dr Eberhard W Lisse) Date: Thu, 8 Dec 2022 10:21:12 +0200 Subject: [ih] two letter follies, was Internet should be in private hands In-Reply-To: References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: If you don?t know, why comment? And there are more, of course, for example (at least) FO and BQ. Again, the ISO maintains the International Standard ISO 3166-1:2020(E) with the title of "Codes for the representation of names of countries and their subdivisions". The operative words here are CODES (2 Letters, 3 Letters, Numbers), and NAMES. And, the will of the government(s) filters down into the Standard (as can be seen by the recent name change of T?rkiye, or by the 2 Letter code representing South Sudan), but I look at the ISO list from the viewpoint of ?Distinct Economies?, which helps in most cases, but not all. el On 8. Dec 2022 at 03:46 +0200, John R. Levine via Internet-history , wrote: > > And why given the political organizations involved, it was odd that > > someone didn?t catch that GB wasn?t the designation of the political > > entity. > > > > So what about AQ? BV? AX? GP, MQ, GF, and PM? > > Antarctica is not a political entity. The Aland islands are part of > Finland. Guadeloupe, Martinique, and St Pierre are islands that are part > of France, French Guiana is a chunk of South America that is also part of > France.* I don't know all the history of 3166 but it's definitely not > just a list of political entities. 
> > R's, > John > > * - and is why the answer to "what is the distance from France to Brazil" > is zero > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From julf at Julf.com Thu Dec 8 00:35:10 2022 From: julf at Julf.com (Johan Helsingius) Date: Thu, 8 Dec 2022 09:35:10 +0100 Subject: [ih] two letter follies, was Internet should be in private hands In-Reply-To: References: <20221207195721.86AF050B81DD@ary.qy> Message-ID: <3ae38105-d62b-1a9c-2065-e4db9c839fcb@Julf.com> On 08/12/2022 02:46, John R. Levine via Internet-history wrote: > The Aland islands are part of Finland. Yes, but with a UN-recognized special status (originally by the League of Nations), and their own (limited) governance structure, so one could argue they are a "political entity". Julf From craig at tereschau.net Thu Dec 8 06:24:56 2022 From: craig at tereschau.net (Craig Partridge) Date: Thu, 8 Dec 2022 07:24:56 -0700 Subject: [ih] Internet should be in private hands In-Reply-To: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> Message-ID: On Thu, Dec 8, 2022 at 12:59 AM George Ross via Internet-history < internet-history at elists.isoc.org> wrote: > > The naming scheme which JANET-attached Universties used did start with > "uk" > rather than "gb" -- we were uk.ac.ed.cs. That was in place before we > joined the IP internet and started using the DNS too, and I suspect the > prospect of sites having two different names or else having to change the > JANET scheme, which was much more heavily used than the DNS at the time, > didn't appeal, in addition to the political sensitivities. > > We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid confusion in > the mail gateways, as .CS was a ccTLD. > Quick gloss on George's note for those who weren't there. 
The UK, due to JANET or simply a magnetic attraction to doing things the opposite way from most folks, initially had domain names with the TLD first. This meant a bunch of Internet email systems had a "both-ways" switch that involved parsing domain names left to right and right to left to see if either version made sense. Eventually, as the DNS grew, some domain names made sense in both directions (stories that some in the DNS leadership team encouraged this mayhem to persuade our friends in the UK to rethink their ordering reflect joking comments at the time, but I don't think anyone did it intentionally) and this led to changes such as those George describes. Thanks! Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. From dhc at dcrocker.net Thu Dec 8 06:46:04 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 8 Dec 2022 06:46:04 -0800 Subject: [ih] Internet should be in private hands In-Reply-To: References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> Message-ID: <86e45523-318a-a9d6-a46b-0d1bd32d0c3d@dcrocker.net> On 12/8/2022 6:24 AM, Craig Partridge via Internet-history wrote: > This meant a bunch of Internet email systems had a "both-ways" switch that > involved parsing domain names left to right and right to left to see if > either version made sense. A premise of 'gateway' operation is translation between heterogeneous environments. As things come into an environment, make everything look 'native', to the extent possible. The UK was trying to juggle heterogeneity throughout, rather than do a translation. Sometimes, the need for gatewaying sneaks up on folk, when introducing a parallel service. Still, I was always intrigued that the attempt here was to juggle rather than map. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From mfidelman at meetinghouse.net Thu Dec 8 07:10:45 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 8 Dec 2022 10:10:45 -0500 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: <92a1ab12-7061-cd2f-0658-cae210a456de@meetinghouse.net> Comments at end... Jack Haverty via Internet-history wrote: > On 12/6/22 00:55, Dr Eberhard W Lisse via Internet-history wrote: >> What does being "in charge of the Internet" mean? >> >> el > > Getting back to this question... > > IIRC, one of the fundamental questions of the "Internet Experiment" > was whether or not it was possible to create and operate a network for > which no one person or entity was "in charge". Prior networks, such > as ARPANET, were managed and operated by a single locus of control, > e.g., DARPA/DCA for setting policy, and contractor BBN to handle the > technical operations of the switching fabric. > > If you look at other kinds of infrastructures, much of this issue of > control has been worked out over the years. E.g., the air travel > "network" has all sorts of rules and procedures, policies and > agreements between countries, ancillary mechanisms such as air traffic > control, and funding channels such as gate charges. There are also > laws and constraints governing things like airport operations, crew > and passenger safety, cargo restrictions, etc. Various organizations, > both public and private, are "in charge". > > Similar control structures have been developed, sometimes over > centuries, for other forms of "moving stuff", e.g., trains, > automobiles, ships, etc. 
> > The telephone system is one of the newer infrastructures, but has been > around long enough to have developed a lot of such control mechanisms > with multiple entities in charge of various pieces. Still, it is not > complete. For example, in the US at least, there have recently been > laws passed to combat "spoofing" of telephone numbers and require > system operators to implement them. > > IMHO, the Internet is still too young to have developed such control > structures. So there are many aspects of the Internet as a data > transportation infrastructure that are not yet developed. > > One example, from perhaps 20 years ago... At the time, delivering > software across country or state boundaries required payment of > tariffs or duties. Such "control" had been created and still applied > to software delivered on CDs, i.e., some kind of physical media. > However, FTPing across the Internet bypassed those mechanisms. The > "control mechanisms" for taxes and tariffs hadn't adapted to the new > world of the Internet. > > IMHO, "in charge of the Internet" means having the policies, > procedures, mechanisms, and organizations in place, with a mix of > public and private components, to assure that the Internet is operated > and evolved with the same goals as other infrastructures - e.g., > safety, availability, conformance to laws, reliability, and other such > aspects of things that we all depend upon. Since the Internet is a > global infrastructure, that necessarily will also involve > international diplomacy and politics, treaties and organizations, and > multinational coordination. > > I had thought that was ISOC's mission, but I think it will take more > impetus than ISOC can provide. Also it has to be applied at many > levels -- not just the "plumbing" of IP. For example, it is possible > today to ship cargo like explosives around the world. But there are > many constraints, permits, inspections, and obstacles to doing it. 
> Even lithium batteries are regulated. In the Internet world today, > you can similarly ship "explosive bits" (hate speech, pornography, > terrorist manuals, whatever) around the world with little if any control. > > Similarly, in the early days, I used to simply trust my email. The > Internet was a very collegial environment. Now, for serious use, > email is not to be trusted. Banks, medical facilities, et al. use > email only to tell me that they have a message for me, and I should > log in securely to their silo to read it. Over the decades, email > has become widely available but less trustworthy. Mechanisms (PGP > etc.) exist to provide some help, but people seem to ignore them - > even those who religiously wear seatbelts when using the auto > transport network. > > In the original Internet and ARPANET, and NSFNet, policy dictated what > could be sent over the net. And the policy was enforced. Today? > > Not an easy problem. It's not clear, to me at least, that anyone is > working on it. > > Jack Haverty yes... folks are working on it - though not the right ones - politicians, lawyers, censors, propagandists, marketeers - all working to both segment the net & favor their messages while suppressing others - a far cry from the days when it was all about connectivity, interoperability, and Metcalfe's law perhaps ISOC & ICANN could have done a better job of encouraging/enforcing standards - rather than abdicating to either government or market forces - but it might well be too late - not a happy situation the model I keep coming back to is religion - we have global religions, tied together by various kinds of organizations - but somehow, the organizations keep splitting, battling, self-destructing, changing their messages, .... perhaps the scientific community is a better model - open data & publication, transparency, and peer review - and a modicum of polite behavior Miles Fidelman -- In theory, there is no difference between theory and practice. 
In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From dhc at dcrocker.net Thu Dec 8 07:22:39 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 8 Dec 2022 07:22:39 -0800 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: On 12/7/2022 3:55 PM, Jack Haverty via Internet-history wrote: > IIRC, one of the fundamental questions of the "Internet Experiment" > was whether or not it was possible to create and operate a network for > which no one person or entity was "in charge". Prior networks, such > as ARPANET, were managed and operated by a single locus of control, > e.g., DARPA/DCA for setting policy, and contractor BBN to handle the > technical operations of the switching fabric. Given that the Internet had a single backbone for a long time, I'm not sure the no one in charge goal was pursued all that explicitly. Rather it was backed into, when additional backbones developed independently, finally producing a concession to make it possible to support -- route among -- them equally. And this was NSFNet time, not DARPA time. 
d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From el at lisse.NA Thu Dec 8 07:31:19 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Thu, 8 Dec 2022 17:31:19 +0200 Subject: [ih] Internet should be in private hands In-Reply-To: References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> Message-ID: In the very beginning when my little 286 was the only box in .NA, and Namibia for that matter, running UUPC on MS-DOS with the fantastic PC-Elm written by a friend of mine in Turbo-C, using MicroEmacs as the editor and a 2000 line BAT script to tie this all together, it took me a while to figure out why there was traffic coming my way which looked like it should have gone to .NZ. I was using a German keyboard at the time which is QWERTZ, i.e. the Y is close to the A, but on the predominant standard keyboards (QWERTY) the Z is close to the A, so a typo would push that my way. Then eventually I also saw a bit of traffic for .UA (handwritten looks similar to .NA :-)-O) I also recall that there was some Radio Amateur mail system where the developers had thought that .NA was a great abbreviation for North America. As soon as there was a mail gateway (of sorts) to the fledgeling Internet, I saw traffic for these too :-)-O But the coolest stuff was for NAVY.MIL when some server in the Great Lakes region cut off "VY.MIL". I tried to figure out who to email in this regard (remember that was before the Web) but never received a response until I somehow found the email address of an Admiral there. He copied me in on a polite but stern instruction to have that fixed and I never saw one of those again :-)-O Technically (I think) Great Britain refers to England, Scotland and Wales. Technically (I think) the United Kingdom refers to Great Britain and Northern Ireland. 
So it does make sense to me that JANET, which was after all a network of universities and research institutions, used something like UK.2ld.3ld.4ld instead of GB.2ld.3ld.4ld because the network included the institutions in Northern Ireland. It predated the DNS (or at least the registration date of the two ccTLDs .UK and .GB). Eventually there were gateways "turning around" the addresses (as below) so people would have been used to emailing to 4ld.3ld.2ld.UK and changing that to 4ld.3ld.2ld.GB would have been impractical. Unfortunately on the ISO's Browsing Platform there is no history for UK even though it would be really interesting to see when that was reserved (at the request of HM Government, by the way). el On 08/12/2022 16:24, Craig Partridge via Internet-history wrote: > On Thu, Dec 8, 2022 at 12:59 AM George Ross via Internet-history < > internet-history at elists.isoc.org> wrote: > >> >> The naming scheme which JANET-attached Universities used did >> start with "uk" rather than "gb" -- we were uk.ac.ed.cs. That >> was in place before we joined the IP internet and started using >> the DNS too, and I suspect the prospect of sites having two >> different names or else having to change the JANET scheme, >> which was much more heavily used than the DNS at the time, >> didn't appeal, in addition to the political sensitivities. >> >> We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid >> confusion in the mail gateways, as .CS was a ccTLD. >> > > Quick gloss on George's note for those who weren't there. The > UK, due to JANET or simply a magnetic attraction to doing things > the opposite way from most folks, initially had domain names > with the TLD first. > > This meant a bunch of Internet email systems had a "both-ways" > switch that involved parsing domain names left to right and > right to left to see if either version made sense. 
> > Eventually, as the DNS grew, some domain names made sense in > both directions (stories that some in the DNS leadership team > encouraged this mayhem to persuade our friends in the UK to > rethink their ordering reflect joking comments at the time, but > I don't think anyone did it intentionally) and this led to > changes such as those George describes. > > Thanks! > > Craig > > -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply From ocl at gih.com Thu Dec 8 08:06:19 2022 From: ocl at gih.com (=?UTF-8?Q?Olivier_MJ_Cr=c3=a9pin-Leblond?=) Date: Thu, 8 Dec 2022 17:06:19 +0100 Subject: [ih] Internet should be in private hands In-Reply-To: References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> Message-ID: <0b353ccf-0969-1e0e-7443-e5e4329598ac@gih.com> Hello all, please see my comments interspersed in the text below: On 08/12/2022 15:24, Craig Partridge via Internet-history wrote: > On Thu, Dec 8, 2022 at 12:59 AM George Ross via Internet-history < > internet-history at elists.isoc.org> wrote: > >> The naming scheme which JANET-attached Universties used did start with >> "uk" >> rather than "gb" -- we were uk.ac.ed.cs. That was in place before we >> joined the IP internet and started using the DNS too, and I suspect the >> prospect of sites having two different names or else having to change the >> JANET scheme, which was much more heavily used than the DNS at the time, >> didn't appeal, in addition to the political sensitivities. >> >> We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid confusion in >> the mail gateways, as .CS was a ccTLD. 
Yes, a lot of computer departments did so, but the most challenging was the other way around, with one of the most influential sites at the time cs.net - and I remember that I had the most trouble getting emails to it because sometimes it would look up under the .net zone and sometimes under the .cs zone, so sometimes I used to have to cheat by emailing through a third-party open relay, like cunyvm.cuny.edu or uunet.net or decwrl.dec.com or uucp.sun.com who, for other reasons, all used to be pretty good at routing emails cleverly on the Internet. >> > Quick gloss on George's note for those who weren't there. The UK, due to > JANET or simply a magnetic attraction to doing things the opposite way from > most folks, initially had domain names with the TLD first. > > This meant a bunch of Internet email systems had a "both-ways" switch that > involved parsing domain names left to right and right to left to see if > either version made sense. Only the "gateways" like uk.ac.ucl.cs.nss that then became uk.ac.nsfnet-relay, uk.ac.mhs-relay, uk.ac.rl.ib (the RAL gateway to Bitnet) and uk.ac.ukc (for UUCP). Trying to do this within the JANET NRS scheme (yes, that's what it's called; designed in 1983 - more info on https://en.wikipedia.org/wiki/JANET_NRS ) was challenging. I know because I tried playing with mailer systems in Coloured Book Software and it was impossible, downright impossible to think of every instance for it to work well - more info on the Coloured Book protocols at: https://en.wikipedia.org/wiki/Coloured_Book_protocols This is of course not to be confused with the ITU (then CCITT) coloured books that determined successive four-yearly editions of CCITT standards like X3, X21, X25, X400 which the UK Coloured Books also used. I have no idea why UK was used instead of GB, but the order in which it was used probably stems from the X.400 address notation that started with Country first, Admin Name, Carrier Name, Organisation Name, Recipient Name etc. 
which in itself, for anyone who has ever used it, was nothing short of a nightmare. > > Eventually, as the DNS grew, some domain names made sense in both > directions (stories that some in the DNS leadership team encouraged this > mayhem to persuade our friends in the UK to rethink their ordering reflect > joking comments at the time, but I don't think anyone did it intentionally) > and this led to changes such as those George describes. > Plus an increasing number of us used the Internet through the NSFNET gateway at UCL and got our way finally to convince the powers that be that TCP-IP was the future and could be tunnelled over X.25... and then of course that was upgraded to native TCP-IP on E1 leased lines soon after. Kindest regards, Olivier From vint at google.com Thu Dec 8 09:04:49 2022 From: vint at google.com (Vint Cerf) Date: Thu, 8 Dec 2022 12:04:49 -0500 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: the design intent was that each network would be operated independently v On Thu, Dec 8, 2022 at 10:22 AM Dave Crocker via Internet-history < internet-history at elists.isoc.org> wrote: > On 12/7/2022 3:55 PM, Jack Haverty via Internet-history wrote: > > IIRC, one of the fundamental questions of the "Internet Experiment" > > was whether or not it was possible to create and operate a network for > > which no one person or entity was "in charge". Prior networks, such > > as ARPANET, were managed and operated by a single locus of control, > > e.g., DARPA/DCA for setting policy, and contractor BBN to handle the > > technical operations of the switching fabric. > > Given that the Internet had a single backbone for a long time, I'm not > sure the no one in charge goal was pursued all that explicitly. 
> > Rather it was backed into, when additional backbones developed > independently, finally producing a concession to make it possible to > support -- route among -- them equally. And this was NSFNet time, not > DARPA time. > > d/ > > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > mast:@dcrocker at mastodon.social > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf Google, LLC 1900 Reston Metro Plaza, 16th Floor Reston, VA 20190 +1 (571) 213 1346 until further notice From brian.e.carpenter at gmail.com Thu Dec 8 12:48:39 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 9 Dec 2022 09:48:39 +1300 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> Message-ID: <3390994a-1fd8-768b-e810-7c11003b379d@gmail.com> On 09-Dec-22 06:04, Vint Cerf via Internet-history wrote: > the design intent was that each network would be operated independently And a good thing too! Asking who's in charge of the Internet is like asking who's in charge of the roads, only more so. It all depends. But I do remember the strange looks we used to get in places like Brussels and Geneva when telling various officials that nobody was in charge and it all worked by rough consensus and running code, and no, formal international standards were not required, thank you. I was even accused of being disingenuous at an OECD-sponsored meeting when I explained that the Internet Architecture Board wasn't in charge of the Internet's architecture. The Internet Governance Forum is good for a laugh to this day. 
Brian > > v > > > On Thu, Dec 8, 2022 at 10:22 AM Dave Crocker via Internet-history < > internet-history at elists.isoc.org> wrote: > >> On 12/7/2022 3:55 PM, Jack Haverty via Internet-history wrote: >>> IIRC, one of the fundamental questions of the "Internet Experiment" >>> was whether or not it was possible to create and operate a network for >>> which no one person or entity was "in charge". Prior networks, such >>> as ARPANET, were managed and operated by a single locus of control, >>> e.g., DARPA/DCA for setting policy, and contractor BBN to handle the >>> technical operations of the switching fabric. >> >> Given that the Internet had a single backbone for a long time, I'm not >> sure the no one in charge goal was pursued all that explicitly. >> >> Rather it was backed into, when additional backbones developed >> independently, finally producing a concession to make it possible to >> support -- route among -- them equally. And this was NSFNet time, not >> DARPA time. >> >> d/ >> >> -- >> Dave Crocker >> Brandenburg InternetWorking >> bbiw.net >> mast:@dcrocker at mastodon.social >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > From brian.e.carpenter at gmail.com Thu Dec 8 12:59:57 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 9 Dec 2022 09:59:57 +1300 Subject: [ih] Which two letters? [Internet should be in private hands] In-Reply-To: <0b353ccf-0969-1e0e-7443-e5e4329598ac@gih.com> References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> <0b353ccf-0969-1e0e-7443-e5e4329598ac@gih.com> Message-ID: My favourite case was not the notorious cs.ucl.ac.uk, whose mail often ended up in Czechoslovakia, but uk.oracle.com, Oracle's London office, who supported CERN when we first became an Oracle customer. JANET, naturally, couldn't forward to com.oracle.uk, but our gateway thought they could. 
We got the heuristics in our mail gateway right in the end, but it took a while. It was around that time that Jon Postel politely explained that we couldn't have .cern., and recommended .cern.int., which we declined because politics. Regards Brian Carpenter On 09-Dec-22 05:06, Olivier MJ Crépin-Leblond via Internet-history wrote: > Hello all, > > please see my comments interspersed in the text below: > > On 08/12/2022 15:24, Craig Partridge via Internet-history wrote: >> On Thu, Dec 8, 2022 at 12:59 AM George Ross via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> The naming scheme which JANET-attached Universities used did start with >>> "uk" >>> rather than "gb" -- we were uk.ac.ed.cs. That was in place before we >>> joined the IP internet and started using the DNS too, and I suspect the >>> prospect of sites having two different names or else having to change the >>> JANET scheme, which was much more heavily used than the DNS at the time, >>> didn't appeal, in addition to the political sensitivities. >>> >>> We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid confusion in >>> the mail gateways, as .CS was a ccTLD. > > Yes, a lot of computer departments did so, but the most challenging was > the other way around, with one of the most influential sites at the time > cs.net - and I remember that I had the most trouble getting emails to it > because sometimes it would look-up under .net zone and sometimes under > .cs zone, so sometimes I used to have to cheat by emailing through a > third party open relay, like cunyvm.cuny.edu or uunet.net or > decwrl.dec.com or uucp.sun.com who, for other reasons, all used to be > pretty good at routing emails cleverly on the Internet. > >>> >> Quick gloss on George's note for those who weren't there. The UK, due to >> JANET or simply a magnetic attraction to doing things the opposite way from >> most folks, initially had domain names with the TLD first. 
>> >> This meant a bunch of Internet email systems had a "both-ways" switch that >> involved parsing domain names left to right and right to left to see if >> either version made sense. > > Only the "gateways" like uk.ac.ucl.cs.nss that then became > uk.ac.nfsnet-relay, uk.ac.mhs-relay, uk.ac.rl.ib (the RAL gateway to > Bitnet) and uk.ac.ukc (for UUCP). Trying to do this within the JANET NRS > scheme (yes that's what's it's called, designed in 1983 - more info on > https://en.wikipedia.org/wiki/JANET_NRS ) was challenging. I know > because I tried playing with mailer systems in Coloured Book Software > and it was impossible, downright impossible to think of every instance > for it to work well - more info on Colour Book at: > https://en.wikipedia.org/wiki/Coloured_Book_protocols ) > > This is of course not to confuse with the ITU (then CCITT) coloured > books that determined successive 4 yearly editions of CCITT standards > like X3, X21, X25, X400 which the UK Coloured Books also used. > > I have no idea why UK was used instead of GB, but the order in which it > was used probably stems from the X.400 address notation that started > with Country first, Admin Name, Carrier Name, Organisation Name, > Recipient Name etc. which in itself, for anyone who has ever used it, > was nothing short of a nightmare. > > >> >> Eventually, as the DNS grew, some domain names made sense in both >> directions (stories that some in the DNS leadership team encouraged this >> mayhem to persuade our friends in the UK to rethink their ordering reflect >> joking comments at the time, but I don't think anyone did it intentionally) >> and this led to changes such as those George describes. >> > > Plus an increasing number of us used the Internet through the NSFNET > gateway at UCL and got our way finally to convince the powers that be > that TCP-IP was the future and could be tunnelled over X.25... and then > of course that was upgraded to native TCP-IP on E1 leased lines soon after. 
> > Kindest regards, > > Olivier From cdel at firsthand.net Sat Dec 10 06:15:36 2022 From: cdel at firsthand.net (christian de larrinaga) Date: Sat, 10 Dec 2022 14:15:36 +0000 Subject: [ih] Which two letters? [Internet should be in private hands] In-Reply-To: References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> <0b353ccf-0969-1e0e-7443-e5e4329598ac@gih.com> Message-ID: <87a63vgx5z.fsf@firsthand.net> Brian E Carpenter via Internet-history writes: > My favourite case was not the notorious cs.ucl.ac.uk, whose mail often ended > up in Czechoslovakia, Did the then czech security people set up a mail server at uk.ac.ucl.cs to receive this bounty? ; C > but uk.oracle.com, Oracle's London office, who supported > CERN when we first became an Oracle customer. JANET, naturally, couldn't > forward to com.oracle.uk, but our gateway thought they could. > > We got the heuristics in our mail gateway right in the end, but it took a while. > > It was around that time that Jon Postel politely explained that we couldn't > have .cern., and recommended .cern.int., which we declined because politics. > > Regards > Brian Carpenter > > On 09-Dec-22 05:06, Olivier MJ Cr?pin-Leblond via Internet-history wrote: >> Hello all, >> please see my comments interspersed in the text below: >> On 08/12/2022 15:24, Craig Partridge via Internet-history wrote: >>> On Thu, Dec 8, 2022 at 12:59 AM George Ross via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> The naming scheme which JANET-attached Universties used did start with >>>> "uk" >>>> rather than "gb" -- we were uk.ac.ed.cs. That was in place before we >>>> joined the IP internet and started using the DNS too, and I suspect the >>>> prospect of sites having two different names or else having to change the >>>> JANET scheme, which was much more heavily used than the DNS at the time, >>>> didn't appeal, in addition to the political sensitivities. 
>>>> >>>> We renamed ourselves as dcs.ed.ac.uk shortly after, to avoid confusion in >>>> the mail gateways, as .CS was a ccTLD. >> Yes, a lot of computer departments did so, but the most challenging >> was >> the other way around, with one of the most influential sites at the time >> cs.net - and I remember that I had the most trouble getting emails to it >> because sometimes it would look-up under .net zone and sometimes under >> .cs zone, so sometimes I used to have to cheat by emailing through a >> third party open relay, like cunyvm.cuny.edu or uunet.net or >> decwrl.dec.com or uucp.sun.com who, for other reasons, all used to be >> pretty good at routing emails cleverly on the Internet. >> >>>> >>> Quick gloss on George's note for those who weren't there. The UK, due to >>> JANET or simply a magnetic attraction to doing things the opposite way from >>> most folks, initially had domain names with the TLD first. >>> >>> This meant a bunch of Internet email systems had a "both-ways" switch that >>> involved parsing domain names left to right and right to left to see if >>> either version made sense. >> Only the "gateways" like uk.ac.ucl.cs.nss that then became >> uk.ac.nfsnet-relay, uk.ac.mhs-relay, uk.ac.rl.ib (the RAL gateway to >> Bitnet) and uk.ac.ukc (for UUCP). Trying to do this within the JANET NRS >> scheme (yes that's what's it's called, designed in 1983 - more info on >> https://en.wikipedia.org/wiki/JANET_NRS ) was challenging. I know >> because I tried playing with mailer systems in Coloured Book Software >> and it was impossible, downright impossible to think of every instance >> for it to work well - more info on Colour Book at: >> https://en.wikipedia.org/wiki/Coloured_Book_protocols ) >> This is of course not to confuse with the ITU (then CCITT) coloured >> books that determined successive 4 yearly editions of CCITT standards >> like X3, X21, X25, X400 which the UK Coloured Books also used. 
>> I have no idea why UK was used instead of GB, but the order in which >> it >> was used probably stems from the X.400 address notation that started >> with Country first, Admin Name, Carrier Name, Organisation Name, >> Recipient Name etc. which in itself, for anyone who has ever used it, >> was nothing short of a nightmare. >> >>> >>> Eventually, as the DNS grew, some domain names made sense in both >>> directions (stories that some in the DNS leadership team encouraged this >>> mayhem to persuade our friends in the UK to rethink their ordering reflect >>> joking comments at the time, but I don't think anyone did it intentionally) >>> and this led to changes such as those George describes. >>> >> Plus an increasing number of us used the Internet through the NSFNET >> gateway at UCL and got our way finally to convince the powers that be >> that TCP-IP was the future and could be tunnelled over X.25... and then >> of course that was upgraded to native TCP-IP on E1 leased lines soon after. >> Kindest regards, >> Olivier -- christian de larrinaga https://firsthand.net From el at lisse.NA Sat Dec 10 06:22:07 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Sat, 10 Dec 2022 16:22:07 +0200 Subject: [ih] Which two letters? [Internet should be in private hands] In-Reply-To: <87a63vgx5z.fsf@firsthand.net> References: <202212080759.2B87xee4010494@maysl7.inf.ed.ac.uk> <0b353ccf-0969-1e0e-7443-e5e4329598ac@gih.com> <87a63vgx5z.fsf@firsthand.net> Message-ID: There were many CS departments in the UK... el On 10/12/2022 16:15, christian de larrinaga via Internet-history wrote: > > Brian E Carpenter via Internet-history writes: > >> My favourite case was not the notorious cs.ucl.ac.uk, whose >> mail often ended up in Czechoslovakia, > > Did the then czech security people set up a mail server at > uk.ac.ucl.cs to receive this bounty? [...] 
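[Editor's note: the "both-ways" switch Craig describes, and the cs.ucl.ac.uk / uk.ac.ucl.cs and uk.oracle.com / com.oracle.uk stories in this thread, can be sketched as a few lines of code. This is an illustrative toy, not any historical gateway's actual logic; the tiny TLD set and the "last label is a known TLD" rule are assumptions made for the example.]

```python
# Toy sketch of the "both-ways" ambiguity mail gateways faced when JANET's
# big-endian names (uk.ac.ucl.cs) met the DNS's little-endian names
# (cs.ucl.ac.uk). Hypothetical heuristic, not any real gateway's code.

# Tiny sample of top-level domains; "cs" was Czechoslovakia's ccTLD.
KNOWN_TLDS = {"uk", "com", "edu", "net", "cs"}

def plausible_dns_order(name: str) -> bool:
    """A name 'looks like' DNS (little-endian) order if its last label is a TLD."""
    return name.split(".")[-1] in KNOWN_TLDS

def interpretations(name: str) -> list[str]:
    """All plausible DNS-order readings of a name that may be in either order."""
    labels = name.split(".")
    candidates = {name, ".".join(reversed(labels))}
    return sorted(c for c in candidates if plausible_dns_order(c))

if __name__ == "__main__":
    # Ambiguous: valid as-is (.uk) and reversed (.cs, Czechoslovakia).
    print(interpretations("cs.ucl.ac.uk"))
    # Ambiguous the other way: Oracle's London office vs. a JANET-style name.
    print(interpretations("uk.oracle.com"))
    # Unambiguous: "dcs" is not a TLD, so only one reading survives.
    print(interpretations("dcs.ed.ac.uk"))
```

Note how the rename George describes, from uk.ac.ed.cs to dcs.ed.ac.uk, turns an ambiguous name into one with a single reading, which is exactly why it ended the gateway confusion.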
From jack at 3kitty.org Sat Dec 10 10:39:27 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 10 Dec 2022 10:39:27 -0800 Subject: [ih] What does being "in charge of the Internet" mean? In-Reply-To: <92cb0d12-ce2d-b83c-7e25-b303ee2764c2@3kitty.org> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> <92cb0d12-ce2d-b83c-7e25-b303ee2764c2@3kitty.org> Message-ID: <012ec4cf-f74e-6d63-3eda-5c2e6f0b1cf1@3kitty.org> Sorry, missed a piece in my copy/paste -- added it at the end below. On 12/10/22 10:35, Jack Haverty wrote: > So, who's in charge...? I decided to ask the New Kid on the Block. > S/he concurs that no one is in charge of the Internet, but told me > about one player that I didn't know until now - the Internet > Governance Forum. > > FYI, I've appended our conversation below. I asked the one-liner > questions. The answers are all from ChatGPT. Not very creative but > seems to be a good and impartial factual (if possibly outdated) > summary. IMHO at least. We're all entering into a new world....as > the Internet develops a Brain. > > Enjoy, > Jack Haverty > > ---------------------------------------------------- > > Who is in charge of the Internet? > > There is no one person or organization in charge of the Internet. The > Internet is a global network of interconnected computer networks that > is based on a decentralized architecture, meaning that it is not > controlled by any single entity. Instead, it is managed by a variety > of organizations and individuals that work together to ensure that the > Internet continues to function smoothly and effectively. This includes > governments, private companies, and individual volunteers who all play > a role in maintaining and improving the Internet. > > jack at 3kitty.org > > What organizations maintain the Internet? 
> > There are many organizations that work together to maintain and > improve the Internet. These organizations include: > > 1. The Internet Corporation for Assigned Names and Numbers (ICANN), > which is responsible for coordinating the global system of unique > identifiers that are used on the Internet, such as domain names > and IP addresses. > > 2. The World Wide Web Consortium (W3C), which is an international > community of organizations and individuals that develops standards > for the World Wide Web. > > 3. The Internet Engineering Task Force (IETF), which is a group of > volunteers who are responsible for the technical development of > the Internet. > > 4. The Internet Society (ISOC), which is a nonprofit organization > that promotes the development and use of the Internet for the > benefit of all people. > > 5. National research and education networks (NRENs), which are > networks that connect educational institutions and research > organizations within a specific country. > > 6. Regional Internet registries (RIRs), which are organizations that > are responsible for allocating IP addresses within a specific region. > > 7. Internet service providers (ISPs), which are companies that > provide individuals and organizations with access to the Internet. > > 8. Government agencies, such as the National Science Foundation (NSF) > in the United States, which provide funding and support for the > development and maintenance of the Internet. > > jack at 3kitty.org > Who decides policies of the Internet? > > There is no one person or organization that decides the policies of > the Internet. Instead, the policies of the Internet are determined > through a collaborative process that involves a variety of > stakeholders, including governments, private companies, technical > experts, and civil society organizations. 
> > One of the main organizations that helps to coordinate the development > of Internet policies is the Internet Governance Forum (IGF), which is > a United Nations-based platform that brings together stakeholders from > all over the world to discuss and develop policies related to the > Internet. Other organizations, such as the Internet Corporation for > Assigned Names and Numbers (ICANN) and the World Wide Web Consortium > (W3C), also play a role in developing policies that affect the > functioning of the Internet. > > Ultimately, the policies of the Internet are determined through a > process of consensus-building and collaboration among the many > different stakeholders that are involved in the operation and > management of the Internet. > > jack at 3kitty.org > What actions has the Internet Governance Forum taken? > > The Internet Governance Forum (IGF) is a United Nations-based platform > that brings together stakeholders from all over the world to discuss > and develop policies related to the Internet. It was created in 2006 > to serve as a neutral, inclusive space where all stakeholders could > come together to discuss and share their views on a wide range of > issues related to the Internet. > > The IGF has taken many actions over the years to promote the growth > and development of the Internet. Some of these actions include: > > * Organizing annual meetings and workshops where stakeholders can > discuss and share their views on a wide range of Internet-related > issues, including cybersecurity, data privacy, online freedom of > expression, and the digital divide. > > * Developing best practices and guidelines for the use of the > Internet, such as the IGF Best Practice Forums, which provide > guidance on how to address specific challenges and opportunities > related to the Internet. > > * Providing a platform for stakeholders to engage in dialogue and > build relationships with each other, which helps to foster a more > collaborative and inclusive approach to Internet governance. > > * Supporting the development of national and regional Internet > governance forums, which help to bring the benefits of the > Internet to more people around the world. > > * Advocating for the inclusion of all stakeholders in the > decision-making processes related to the Internet, including > governments, the private sector, civil society organizations, and > technical experts. > > Overall, the actions of the IGF are aimed at promoting the growth and > development of the Internet in a way that is inclusive, transparent, > and accountable. > > jack at 3kitty.org > Does the current mechanism for governing the Internet work well? > > The current mechanism for governing the Internet is based on a > decentralized, bottom-up approach that involves a wide variety of > stakeholders, including governments, private companies, technical > experts, and civil society organizations. This approach has been > successful in many ways, such as promoting the growth and development > of the Internet and fostering a more inclusive and collaborative > approach to decision-making. > > However, some people have criticized the current mechanism for > governing the Internet, arguing that it is not always effective at > addressing important issues, such as cybersecurity and online privacy. > Some have also argued that it does not adequately represent the > interests of all stakeholders, particularly those in developing countries. > > Overall, the effectiveness of the current mechanism for governing the > Internet is a matter of debate, and there is no consensus on whether > it is working well or not. Some people believe that it is largely > effective, while others think that it needs to be improved in order to > better address the challenges and opportunities of the Internet. 
> > jack at 3kitty.org > Should someone be in charge of the Internet? > > Whether or not someone should be in charge of the Internet is a matter > of debate. Some people believe that the current decentralized, > bottom-up approach to governing the Internet is effective and should > be maintained. This approach involves a wide variety of stakeholders, > including governments, private companies, technical experts, and civil > society organizations, and it has been successful in promoting the > growth and development of the Internet. > > However, others argue that the current approach is not always > effective at addressing important issues, such as cybersecurity and > online privacy. They believe that the Internet is such an important > global resource that it should be managed by a single entity that has > the authority and resources to make decisions that are in the best > interests of the global community. > > Ultimately, the question of whether someone should be in charge of the > Internet is a complex one, and there is no right or wrong answer. It > depends on how you view the role of the Internet in society and what > you think is the best way to ensure that it continues to be a positive > force for global development and innovation. > jack at 3kitty.org What does "being in charge of the Internet" mean? Being in charge of the Internet would mean having the authority and responsibility to make decisions about the operation and management of the Internet. This would likely involve setting policies and guidelines for the use of the Internet, allocating resources to support its growth and development, and coordinating the efforts of the various organizations and individuals that are involved in maintaining and improving the Internet. However, it is important to note that the Internet is a decentralized, bottom-up system, and it is not currently controlled by any single entity. 
This means that it is not clear what it would mean to be "in charge" of the Internet, or how such a role could be implemented in practice. The idea of being in charge of the Internet is therefore largely theoretical, and it is not clear how it would work in the real world. From dhc at dcrocker.net Sat Dec 10 11:36:30 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 10 Dec 2022 11:36:30 -0800 Subject: [ih] Chatting about email Message-ID: <4e35278a-bbb2-adee-15a8-bc96879e9483@dcrocker.net> I'd been ignoring the current focus on the latest AI wonder, but finally broke down and asked a single question twice, with one significant change. Both answers look credible to me, though I continue to kick myself for not making RFC 822 have an acronym... what is the design of internet mail? The design of internet mail is based on the concept of sending messages between devices on a network using a standardized protocol. This allows devices on different networks to communicate with each other by sending and receiving messages through a network of servers that use the protocol to transmit the messages from one device to another. The protocol used for internet mail is called the Simple Mail Transfer Protocol (SMTP). This protocol defines a set of rules and commands that are used to send and receive mail messages between devices on a network. dhc2 at dcrocker.net what is the architecture of internet email The architecture of internet email involves several different components that work together to enable the sending and receiving of email messages. Here is a brief overview of the main components of the internet email architecture: * Mail user agents (MUAs): These are the client applications that users interact with to compose, send, receive, and read email messages. Examples of MUAs include email programs like Microsoft Outlook and Mozilla Thunderbird, as well as web-based email services like Gmail and Yahoo Mail. 
* Mail transfer agents (MTAs): These are the servers that are responsible for transmitting email messages between different networks. When a user sends an email message using an MUA, the message is passed to an MTA, which then delivers the message to the appropriate MTA on the recipient's network. * Mailbox servers: These are the servers that store email messages on behalf of users. When an MTA delivers an email message to a recipient's network, the message is passed to the recipient's mailbox server, which then stores the message in the user's mailbox. * Message store: This is the database or file system where the mailbox server stores the email messages for each user. * Protocols: The architecture of internet email relies on a number of different protocols to facilitate the transmission and storage of messages. The most important of these protocols are SMTP, which is used to transmit messages between MTAs, and the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP), which are used by MUAs to retrieve messages from the mailbox server. Overall, the architecture of internet email is designed to allow users to send and receive messages quickly and reliably, regardless of the devices they are using or the networks they are connected to. -- Dave Crocker Brandenburg InternetWorking bbiw.net mast:@dcrocker at mastodon.social From brian.e.carpenter at gmail.com Sat Dec 10 11:49:58 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 11 Dec 2022 08:49:58 +1300 Subject: [ih] What does being "in charge of the Internet" mean? 
In-Reply-To: <012ec4cf-f74e-6d63-3eda-5c2e6f0b1cf1@3kitty.org> References: <67b27e5d-8f08-2de7-92a2-898e35c42383@gmail.com> <0d6822a4-b62b-66ff-d52b-7bc4c57d195a@gmail.com> <62418263-ea9b-e23b-013c-f8839d235c5d@Julf.com> <92cb0d12-ce2d-b83c-7e25-b303ee2764c2@3kitty.org> <012ec4cf-f74e-6d63-3eda-5c2e6f0b1cf1@3kitty.org> Message-ID: <2f1e4e59-e89e-ddac-f03f-5a480fa8ddc3@gmail.com> ChatGPT is quite honest when asked the right question: "What practical results has the Internet Governance Forum obtained? The Internet Governance Forum (IGF) is a platform for discussion and dialogue on issues related to the governance of the Internet. It is not a decision-making body, so it does not produce concrete results in the form of laws or policies. Instead, the IGF aims to foster collaboration and understanding among stakeholders, including governments, civil society organizations, the private sector, and technical experts." ChatGPT is not reliable, however. I have found at least one instance where it regurgitates a technical myth. It needs fact-checking. It has academics worried though, because students can misuse it to write assignments. (In fact, the myth that I found can be traced back to one of those sites that prepare student assignments for money; ChatGPT is no better than Google at fact-checking, since it is basically a smart web crawler with a very good natural language UI.) Regards Brian Carpenter On 11-Dec-22 07:39, Jack Haverty via Internet-history wrote: > Sorry, missed a piece in my copy/paste -- added it at the end below. > [...] From jack at 3kitty.org Sat Dec 10 13:44:32 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 10 Dec 2022 13:44:32 -0800 Subject: [ih] 50th Anniversary - The Internet Has A Brain? Message-ID: A bit more than 50 years ago: https://www.rfc-editor.org/rfc/rfc439.txt Vint - Perhaps a repeat of the experiment with two of today's AI's? What has 50 years of history changed..... 
Jack From vgcerf at gmail.com Sat Dec 10 14:26:43 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sat, 10 Dec 2022 17:26:43 -0500 Subject: [ih] 50th Anniversary - The Internet Has A Brain? In-Reply-To: References: Message-ID: already underway :-)))) v On Sat, Dec 10, 2022 at 4:45 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > A bit more than 50 years ago: > > https://www.rfc-editor.org/rfc/rfc439.txt > > Vint - Perhaps a repeat of the experiment with two of today's AI's? > What has 50 years of history changed..... > > Jack > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From pugs78 at gmail.com Thu Dec 29 16:34:46 2022 From: pugs78 at gmail.com (Tom Lyon) Date: Thu, 29 Dec 2022 16:34:46 -0800 Subject: [ih] INWG#39 from 1973: "HOST and PROCESS Level Protocols for Internetwork Communication" Message-ID: Fresh from the scanner, this seminal document for TCP. Autographed by Vint Cerf and Bob Metcalfe at the #Internet50 event in 2019. https://drive.google.com/file/d/1Iedi3VLmhlBsfkPZ_gfrBvzdGgG4kRJh/view?usp=sharing From tracyhackshaw at gmail.com Fri Dec 30 13:28:36 2022 From: tracyhackshaw at gmail.com (Tracy F. Hackshaw @ Google) Date: Fri, 30 Dec 2022 17:28:36 -0400 Subject: [ih] Leading Tech YouTuber MKBHD's Studio on ICANN & the DNS Message-ID: https://www.linkedin.com/posts/tracyhackshaw_icann-and-the-7-keys-to-the-internet-activity-7014700817957965824-c9uI