From lars at nocrew.org Sat Jan 2 00:17:33 2021 From: lars at nocrew.org (Lars Brinkhoff) Date: Sat, 02 Jan 2021 08:17:33 +0000 Subject: [ih] byte order, was Octal vs Hex, not Re: Dotted decimal notation In-Reply-To: <7wa6tu2lyr.fsf@junk.nocrew.org> (Lars Brinkhoff via Internet-history's message of "Thu, 31 Dec 2020 12:34:20 +0000") References: <20201229232059.78C6C356444F@ary.qy> <63601804-716a-f531-8da4-63e43b9d9fe6@3kitty.org> <6e51b676-2ea6-dae8-cd37-86bd7572ae02@dcrocker.net> <0b0a0514-8662-b2d6-cb8e-71a06f9ff3d9@gmail.com> <7wim8i2n2p.fsf@junk.nocrew.org> <7wa6tu2lyr.fsf@junk.nocrew.org> Message-ID: <7wzh1rzraa.fsf@junk.nocrew.org> >> Geoff Goodfellow wrote: >>> the MIT PDP-10 reference must be of Al Vezza's MIT-DM host, but >>> yours truly is kinda perplexed over the last sentence of: >>> >>> "Mazewar games between MIT and Stanford were a major data load on the >>> early Arpanet." >>> >>> anyone got more "history" here on this...??? >> >> I have seen this story many times, but no evidence to back it up. > This slide points to USC: Tom Lipkis remembers: | Yes, we had mazewar running on Imlacs at USC, and yes, we played over | the Arpanet with people at MIT. I don't remember overloading the | network - we were off campus and, I think, dialing in to the USC-TIP, | so we were way more bandwidth-limited than the 50kbps Arpanet of the | time. What I do remember is that the latency put us at a | disadvantage. Someone could come around a corner, shoot, and retreat | before we saw the update. From gnu at toad.com Sat Jan 2 01:38:43 2021 From: gnu at toad.com (John Gilmore) Date: Sat, 02 Jan 2021 01:38:43 -0800 Subject: [ih] History of 127/8 as localhost/loopback addresses? Message-ID: <27498.1609580323@hop.toad.com> I am in the process of sorting out various ways that the IPv4 unicast address space was historically constrained to allow fewer than the 2^32 available IP addresses. One question that came up was how we ended up with 16,777,216 loopback addresses in IPv4. 
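The 16,777,216 figure is simply the host space of a class-A /8 (24 host bits); a quick sketch with Python's standard ipaddress module confirms the arithmetic:

```python
import ipaddress

# The whole class-A block that was eventually assigned to loopback.
loopback = ipaddress.ip_network("127.0.0.0/8")
print(loopback.num_addresses)   # 16777216 == 2 ** (32 - 8)

# Every address in the block, not just 127.0.0.1, counts as loopback.
print(ipaddress.ip_address("127.45.67.89").is_loopback)   # True
```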
History questions: Was there a host-software-accessible loopback or localhost function in the ARPANET or in NCP? How was it invoked? When TCP/IP was being designed, where did the concept of a loopback function come from? How did it merge with the "connect to a port on the local host, without having to figure out its IP address" function that 127/8 eventually got used for? Did Jon Postel or other IP designers have the localhost function in mind for 127 when he first reserved it back in 1981? Was 127 used this way prior to 1986? Did Jon or others discuss this use prior to then? Who, if anyone, argued for having more than a single loopback address? Was there discussion of whether a full Class-A network was needed for the loopback function? Why was a Class-C network not used? Is there an explanation for why so many addresses were ultimately assigned to that function? And, fast-forwarding into the 1990s: When IPv6 was designed, why was this design decision revised? Who made the decision to allocate a single IPv6 localhost address? Was that controversial? Thanks for the memories! Researching in the first thousand RFC's reveals: The first mention of any kind of loopback in the RFC series seems to be in June 1984 in RFC 900. In that Assigned Numbers RFC, loopback appears as an Ethernet frame type 0x9000, assigned for Larry Garlick of Xerox. This refers to a specific kind of packet sent on 10-megabit Ethernet v2.0 networks to test connectivity among hosts. In RFC 907 of July 1984, the SATNET Host Access Protocol has a specific bit assigned as the "Loopback Bit", and also defines a remote loopback request/response message and function. (This is for setting a mode in which ALL traffic is looped from transmit to receive side of an interface -- not for looping an individual packet or TCP connection.) 
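The "connect to a port on the local host, without having to figure out its IP address" function mentioned above can be sketched with a minimal socket pair (Python is used here purely as illustration; any TCP stack behaves the same way):

```python
import socket

# Server side: bind a listener on the loopback address; port 0 asks
# the OS to pick any free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Client side: reach the service through 127.0.0.1, with no need to
# know the host's real (external) IP address.
client = socket.create_connection(("127.0.0.1", port))
conn, _addr = server.accept()
client.sendall(b"ping")
print(conn.recv(4))   # b'ping' -- the packets never leave the host

client.close(); conn.close(); server.close()
```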
In the evolution of IP Multicast from RFC 966 in December 1985 to RFC 988 in July 1986, a new parameter specified whether multicast packets would or would not be "looped-back" to their sending host. In September 1981, in RFC 790, Jon Postel first indicated that IP network number 127 was "reserved", without explicitly stating for what. This was repeated in all the Assigned Numbers RFCs through RFC 960 (December 1985); then in RFC 990 (November 1986), Jon and Joyce Reynolds assigned it for loopback, stating that: The class A network number 127 is assigned the "loopback" function, that is, a datagram sent by a higher level protocol to a network 127 address should loop back inside the host. No datagram "sent" to a network 127 address should ever appear on any network anywhere. By the Host Requirements RFC 1122 in October 1989, the spec was restated to: { 127, } Internal host loopback address. Addresses of this form MUST NOT appear outside a host. The first mention of the specific address "127.0.0.1" in RFCs is in May 1993 in RFC 1459 (as an example dotted decimal address that one might use in the IRC protocol). The RFCs contain no explanation of how the whole specified range of 16 million loopback addresses was narrowed in many peoples' minds to the single "localhost" address 127.0.0.1. ("Localhost" does not appear in the first thousand RFC's.) John From craig at tereschau.net Sat Jan 2 05:40:15 2021 From: craig at tereschau.net (Craig Partridge) Date: Sat, 2 Jan 2021 06:40:15 -0700 Subject: [ih] History of 127/8 as localhost/loopback addresses? In-Reply-To: <27498.1609580323@hop.toad.com> References: <27498.1609580323@hop.toad.com> Message-ID: There's been a long discussion this week on the ex-BBNers list about ARPANET link debugging and the ability to loopback IMP lines. That's from the IMP side (not host per your question) but shows that loopbacks were considered an essential debugging feature from the start of the ARPANET. 
Craig

On Sat, Jan 2, 2021 at 2:38 AM John Gilmore via Internet-history <
internet-history at elists.isoc.org> wrote:

> I am in the process of sorting out various ways that the IPv4 unicast
> address space was historically constrained to allow fewer than the 2^32
> available IP addresses. One question that came up was how we ended up
> with 16,777,216 loopback addresses in IPv4.
>
> [remainder of the original message trimmed]
>
> John
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>

--
***** Craig Partridge's email account for professional society activities and mailing lists.

From cabo at tzi.org Sat Jan 2 07:52:43 2021
From: cabo at tzi.org (Carsten Bormann)
Date: Sat, 2 Jan 2021 16:52:43 +0100
Subject: [ih] History of 127/8 as localhost/loopback addresses?
In-Reply-To:
References: <27498.1609580323@hop.toad.com>
Message-ID: <7BDBD781-67D3-4AC8-872F-9587CB5A5DF0@tzi.org>

On 2021-01-02, at 14:40, Craig Partridge via Internet-history wrote:
>
> loopback

I think it is an unfortunate artifact of naming the localhost address “loopback” that it is easily confused with actual loopback functions. The latter are meant to test a line or path by having the remote end loop back the traffic, and I’m sure that some form of this was used in diagnosing most reasonably operating networks. Information takes an actual loop (which happens to coincide with the “local loop” in many cases) from one end of a line/path to another end and back.

The localhost interfaces have no physical loop. Their packets are sent up the IP stack right from the interface, but that is not different from e.g. the way multicast sometimes behaves.

Note that, in another unrelated usage, the term “loopback address” is also used for a routable IP address given to a router; loopback interfaces are then the virtual interfaces carrying those addresses (i.e., not 127/8).

(It is of course too late to fix this terminological conflation, but the second best approach is to be aware of it and make people aware of it.)

IIRC, localhost (“lo0”) was part of 4.2BSD (and I’m sure the various 4.1x BSDs), so I think we can date it to before 1983, probably even 1981. Source code for those distributions should be available, so a more precise statement can certainly be made after some digging.
Grüße, Carsten

From gaylord at dirtcheapemail.com Sat Jan 2 07:59:52 2021
From: gaylord at dirtcheapemail.com (gaylord at dirtcheapemail.com)
Date: Sat, 02 Jan 2021 10:59:52 -0500
Subject: [ih] History of 127/8 as localhost/loopback addresses?
In-Reply-To: <7BDBD781-67D3-4AC8-872F-9587CB5A5DF0@tzi.org>
References: <27498.1609580323@hop.toad.com> <7BDBD781-67D3-4AC8-872F-9587CB5A5DF0@tzi.org>
Message-ID: <830139dd-a0fb-4641-87d4-829a8a45f46d@www.fastmail.com>

On Sat, Jan 2, 2021, at 10:52 AM, Carsten Bormann via Internet-history wrote:
> On 2021-01-02, at 14:40, Craig Partridge via Internet-history
> wrote:
> >
> > loopback
>
> I think it is an unfortunate artifact of naming the localhost address
> “loopback” that it is easily confused with actual loopback functions.
> The latter are meant to test a line or path by having the remote end
> loop back the traffic, and I’m sure that some form of this was used in
> diagnosing most reasonably operating networks. Information takes an
> actual loop (which happens to coincide with the “local loop” in many
> cases) from one end of a line/path to another end and back.

I don't think it's unfortunate actually. There is a true "local loopback" concept, much closer than the "local loop", which is (e.g. on a T-1) a physical loop directly on the local network interface. This serves to ensure the local interface is working correctly, so has a very reasonable analogy to the loopback network interface, confirming the operation of the local networking stack.

Clark

From cabo at tzi.org Sat Jan 2 08:12:13 2021
From: cabo at tzi.org (Carsten Bormann)
Date: Sat, 2 Jan 2021 17:12:13 +0100
Subject: [ih] History of 127/8 as localhost/loopback addresses?
In-Reply-To: <7BDBD781-67D3-4AC8-872F-9587CB5A5DF0@tzi.org>
References: <27498.1609580323@hop.toad.com> <7BDBD781-67D3-4AC8-872F-9587CB5A5DF0@tzi.org>
Message-ID:

On 2021-01-02, at 16:52, Carsten Bormann via Internet-history wrote:
>
> 4.2BSD

Found it (below).
Note that, in March 1982, it said “non-standard” about the address 127.0.0.1, and maybe a bit ruefully “An approved network address should be reserved for this interface”.

That is also my recollection: The squatting on 127/8 for this was common usage, “harmless” at the time as that range had not been allocated, but not actually standardized anywhere (but soon immutable by having been encoded in hundreds of applications).

If it had been the subject of standardization discussion, a class C address might very well have been chosen after giving it a tiny amount of additional thought -- but then RFC 844 as late as February 1983 gave a survey meant to assess how many hosts already supported class C, so maybe class A was still the logical thing to choose in 1982; class C seems to have been introduced between January 1981 (RFC 776) and September 1981 (RFC 790).

Note that BSD also had 0.0.0.0 to talk to yourself; “telnet 0 22” (*) still connects on my current laptop...

Grüße, Carsten

(*) branching over to the dotted quad discussion :-)

> UNIX Programmer’s Manual
>
> LO (4)
>
> NAME
>
> lo - software loopback network interface
>
> SYNOPSIS
>
> pseudo-device loop
>
> DESCRIPTION
>
> The loop interface is a software loopback mechanism which may be used for performance
> analysis, software testing, and/or local communication. By default, the loopback interface is
> accessible at address 127.0.0.1 (non-standard); this address may be changed with the
> SIOCSIFADDR ioctl.
>
> DIAGNOSTICS
>
> lo%d: can't handle af%d. The interface was handed a message with addresses formatted in an
> unsuitable address family; the packet was dropped.
>
> SEE ALSO
>
> intro (4N), inet(4F)
>
> BUGS
>
> It should handle all address and protocol families. An approved network address should be
> reserved for this interface.
> > > 4th Berkeley Distribution > > > 26 March 1982 From brian.e.carpenter at gmail.com Sat Jan 2 11:44:32 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 3 Jan 2021 08:44:32 +1300 Subject: [ih] History of 127/8 as localhost/loopback addresses? In-Reply-To: References: <27498.1609580323@hop.toad.com> Message-ID: <5682a1d2-e2a6-01d2-0d3e-d862f21cb5ce@gmail.com> The idea precedes computer networks. I recall that a favourite trick at student parties in Manchester (England) in the late 1960's, if there was a phone in the flat, was to dial a magic number and hang up. Some seconds later the phone would ring but whoever answered it found nobody there, because the magic number was the loopback, used for engineering tests. It would have been a very natural idea for any ex-telephony people to add to the IMP. As for IPv6, I have the draft minutes of the SIPP working group (just before it became IPv6) from July 1994: > A local loop back address is defined. This is the local address with the low order > one bit set to one, and the rest set to zero (address FE00:0:0:0:0:0:0:1). The > cluster label for this would appear to be FE00:0:0:0:0:0:0:0, although some of us > were unclear what a cluster loopback address would mean. The document under discussion was draft-ietf-sipp-routing-addr-02.txt. The IETF archive is more than a bit flaky (it claims to have the draft, but doesn't). https://www.sobco.com/ipng/internet-drafts/ doesn't have it either (it has draft-ietf-sip-routing-addr-00.txt but that is for 64-bit SIP, not 128-bit SIPP.) John, I'll unicast you those minutes and the draft. But for more details, ask Bob Hinden. Regards Brian On 03-Jan-21 02:40, Craig Partridge via Internet-history wrote: > There's been a long discussion this week on the ex-BBNers list about > ARPANET link debugging and the ability to loopback IMP lines. 
That's from
> the IMP side (not host per your question) but shows that loopbacks were
> considered an essential debugging feature from the start of the ARPANET.
>
> Craig
>
> On Sat, Jan 2, 2021 at 2:38 AM John Gilmore via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> I am in the process of sorting out various ways that the IPv4 unicast
>> address space was historically constrained to allow fewer than the 2^32
>> available IP addresses. One question that came up was how we ended up
>> with 16,777,216 loopback addresses in IPv4.
>>
>> [remainder of the original message trimmed]
>>
>> The first mention of the specific address "127.0.0.1" in RFCs is in May
>> 1993 in RFC 1459 (as an example dotted decimal address that one might
>> use in the IRC protocol).
The RFCs contain no explanation of how the
>> whole specified range of 16 million loopback addresses was narrowed in
>> many peoples' minds to the single "localhost" address 127.0.0.1.
>> ("Localhost" does not appear in the first thousand RFC's.)
>>
>> John
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>>

From sob at sobco.com Sat Jan 2 12:05:30 2021
From: sob at sobco.com (Scott Bradner)
Date: Sat, 2 Jan 2021 15:05:30 -0500
Subject: [ih] IPng archive- (was Re: History of 127/8 as localhost/loopback addresses?)
In-Reply-To: <5682a1d2-e2a6-01d2-0d3e-d862f21cb5ce@gmail.com>
References: <27498.1609580323@hop.toad.com> <5682a1d2-e2a6-01d2-0d3e-d862f21cb5ce@gmail.com>
Message-ID:

For what it's worth, I put together an archive of all of the IPng related documents I could find at

https://www.sobco.com/ipng/archive/index.html

Scott

From sob at sobco.com Sat Jan 2 11:59:15 2021
From: sob at sobco.com (Scott Bradner)
Date: Sat, 2 Jan 2021 14:59:15 -0500
Subject: [ih] History of 127/8 as localhost/loopback addresses?
In-Reply-To: <5682a1d2-e2a6-01d2-0d3e-d862f21cb5ce@gmail.com>
References: <27498.1609580323@hop.toad.com> <5682a1d2-e2a6-01d2-0d3e-d862f21cb5ce@gmail.com>
Message-ID: <85C677F2-19B8-472C-B018-C422FAD446C7@sobco.com>

the SIPP loopback address is in "SIPP Addressing Architecture July 1994"

https://www.sobco.com/ipng/archive/sipp/sipp-routing-addr-02.txt

Scott

> On Jan 2, 2021, at 2:44 PM, Brian E Carpenter via Internet-history wrote:
>
> The idea precedes computer networks. I recall that a favourite trick
> at student parties in Manchester (England) in the late 1960's, if there
> was a phone in the flat, was to dial a magic number and hang up. Some
> seconds later the phone would ring but whoever answered it found nobody
> there, because the magic number was the loopback, used for engineering
> tests.
It would have been a very natural idea for any ex-telephony people
> to add to the IMP.
>
> [remainder of the quoted message trimmed]

From jack at 3kitty.org Sat Jan 2 15:56:07 2021
From: jack at 3kitty.org (Jack Haverty)
Date: Sat, 2 Jan 2021 15:56:07 -0800
Subject: [ih] History of 127/8 as localhost/loopback addresses?
In-Reply-To: <27498.1609580323@hop.toad.com>
References: <27498.1609580323@hop.toad.com>
Message-ID:

Here's what I remember from the period when TCP2.5 was being split into TCP and IP V4.
The short answer is that 127.0.0.1 was not part of the initial IP V4 work.?? The 127 value was "reserved for futre use", and someone (Berkeley?) later used it to adopt 127.0.0.1 as "my address".? More detail, HTH: - Loopback was a well-known tool from a decade of ARPANET operations; we tried to adopt many of the ARPANET tools and techniques into the IP level - It was a common practice to reserve certain values for protocol fields.? Zero was often reserved because it often appeared as a value because of a bug somewhere; it was best not to assign it for any real use. - "all ones" was another but less frequent error.? Still a good idea not to assign it, but reserve it as a last-chance choice for some unpredicted future need.? E.g., 127.x.x.x (the "last" class A network) might have been used for some IP V5 addressing scheme where a variable length address appeared somewhere later in the V5 header, and the x.x.x might say where to find it. - 127.0.0.1 was not considered as having any special meaning, it was simply "reserved" for some possible future need; I suspect the 127.0.0.1 convention came about later as some host-side implementor decided to use that "reserved" but unused and "safe" address to indicate local loopback within a computer. - A lot of tweaking around that time was motivated by the impending explosion of LANs, especially Ethernets - where it was impossible to encode the "local net address" of Ethernet in the remaining 24 bits even of a "Class A" network.?? For example, that led to creation of ARP, and later DHCP.? In the ARPANET days, every host pretty much knew its IMP address, so it could "loop back" by simply connecting to itself.? With TCP/IP V2, a host could still do that, i.e., connect to itself, by using its own IP address for both source and destination.?? Exactly what that did, i.e., what path such a packet might take, depended on how that host's TCP/IP implementation worked.? 
It might loopback in software within the O/S; it might send the IP packet to it's local gateway which would send it right back; etc.?? IIRC, 127.0.0.1 came into use when workstations appeared and the number of hosts on a LAN exploded.? Even if you didn't know your own IP address, 127.0.0.1 would work to talk to yourself, an important step in bringing a machine up on the net.?? - At the time (defining IPV4), there weren't a lot of nets and class A would be sufficient.? But the future was clear that many more computers would be appearing with workstations.? So class B was defined, and even class C.?? We limited the choices of net versus host to fall on byte boundaries in order to avoid computational load on hosts and gateways (yes, we even counted instructions needed in tight loops like checksumming or table lookups). - There was also a recognition that it was architecturally possible to define a new type of "network" which was just a wire, e.g., a telephone circuit.?? A Wire network is a network with only two IP addresses -- "this end" and "that end".? That triggered creating definitions of additional classes of networks - class D, E, and F.?? F networks used 31 bits of network number, and 1 bit of address -- very suitable for Wire networks.?? That made possible the interconnection of gateways by simply using a wire instead of a traditional network (like the ARPANET); it was no longer necessary to have a gateway and an IMP at an Internet site - As TCP/IP V4 was deploying, there was a lot of "technology transfer" from the ARPANET experience, aided by the co-location of the "ARPANET" and "Internet" groups at BBN; both were located just "down the hallway" from the NOC which had been operating the ARPANET for a decade and was now tasked to do the same for the fledgling Internet.? So there was a lot of collaboration and sharing of experience. 
- The ARPANET had developed a lot of tools and techniques, and we tried very hard to replicate that functionality within the Internet. For example, with "Wire" networks interconnecting gateways, each interface on a gateway now had a unique IP address (e.g., a "Class F" one). So it was straightforward to perform troubleshooting tasks like "looping" through each interface on some remote gateway to isolate problems. Similarly, metrics taken inside a gateway (queue lengths, checksum errors, etc.) could be reported back to the NOC just as IMPs had been doing. That led to SNMP et al., analogous to the ARPANET's "traps" mechanisms.

- The Internet at that time was still very much an experiment; we didn't know how it would behave. The IP address structure, and especially tools like the various flavors of Source Routing, were all intended to help probe the network, even when it was somehow "broken", to find and fix problems.

- There was an unknown amount of unplanned and uncoordinated "experimentation" in that early Internet. Probably largely undocumented too. Sometimes the curtains fell down and you got to see what was going on. A good example is the time when the gateways started reporting massive amounts of checksum errors. It was traced down to a new OS release (BSD IIRC), where they had decided to change the ordering of the LAN/TCP/IP headers traversing the Ethernet in order to avoid the need to do a memory-to-memory copy operation in the O/S. Worked fine between "consenting workstations", but when those packets leaked out to the other gateways it caused quite a ruckus.

/Jack Haverty

On 1/2/21 1:38 AM, John Gilmore via Internet-history wrote: > I am in the process of sorting out various ways that the IPv4 unicast > address space was historically constrained to allow fewer than the 2^32 > available IP addresses. One question that came up was how we ended up > with 16,777,216 loopback addresses in IPv4. 
> > History questions: > > Was there a host-software-accessible loopback or localhost function in > the ARPANET or in NCP? How was it invoked? > > When TCP/IP was being designed, where did the concept of a loopback > function come from? How did it merge with the "connect to a port > on the local host, without having to figure out its IP address" function > that 127/8 eventually got used for? > > Did Jon Postel or other IP designers have the localhost function in mind > for 127 when he first reserved it back in 1981? Was 127 used this way > prior to 1986? Did Jon or others discuss this use prior to then? > > Who, if anyone, argued for having more than a single loopback address? > Was there discussion of whether a full Class-A network was needed for > the loopback function? Why was a Class-C network not used? Is there an > explanation for why so many addresses were ultimately assigned to that > function? > > And, fast-forwarding into the 1990s: When IPv6 was designed, why was > this design decision revised? Who made the decision to allocate a > single IPv6 localhost address? Was that controversial? > > Thanks for the memories! > > Researching in the first thousand RFC's reveals: > > The first mention of any kind of loopback in the RFC series seems to be > in June 1984 in RFC 900. In that Assigned Numbers RFC, loopback appears > as an Ethernet frame type 0x9000, assigned for Larry Garlick of Xerox. > This refers to a specific kind of packet sent on 10-megabit Ethernet > v2.0 networks to test connectivity among hosts. > > In RFC 907 of July 1984, the SATNET Host Access Protocol has a specific > bit assigned as the "Loopback Bit", and also defines a remote loopback > request/response message and function. (This is for setting a mode > in which ALL traffic is looped from transmit to receive side of an > interface -- not for looping an individual packet or TCP connection.) 
> > In the evolution of IP Multicast from RFC 966 in December 1985 to RFC > 988 in July 1986, a new parameter specified whether multicast packets > would or would not be "looped-back" to their sending host. > > In September 1981, in RFC 790, Jon Postel first indicated that IP > network number 127 was "reserved", without explicitly stating for what. > This was repeated in all the Assigned Numbers RFCs through RFC 960 > (December 1985); then in RFC 990 (November 1986), Jon and Joyce Reynolds > assigned it for loopback, stating that: > > The class A network number 127 is assigned the "loopback" > function, that is, a datagram sent by a higher level protocol > to a network 127 address should loop back inside the host. No > datagram "sent" to a network 127 address should ever appear on > any network anywhere. > > By the Host Requirements RFC 1122 in October 1989, the spec was > restated to: > > { 127, } > > Internal host loopback address. Addresses of this form > MUST NOT appear outside a host. > > The first mention of the specific address "127.0.0.1" in RFCs is in May > 1993 in RFC 1459 (as an example dotted decimal address that one might > use in the IRC protocol). The RFCs contain no explanation of how the > whole specified range of 16 million loopback addresses was narrowed in > many peoples' minds to the single "localhost" address 127.0.0.1. > ("Localhost" does not appear in the first thousand RFC's.) > > John From john.kougoulos at gmail.com Sun Jan 3 02:07:16 2021 From: john.kougoulos at gmail.com (John Kougoulos) Date: Sun, 3 Jan 2021 11:07:16 +0100 Subject: [ih] History of 127/8 as localhost/loopback addresses? 
In-Reply-To: <27498.1609580323@hop.toad.com> References: <27498.1609580323@hop.toad.com> Message-ID: Hi, I had also done some research; it appears that it started as network 254: https://fossil.fuhrwerks.com/csrg/artifact/7abb44b66b9fc3e8 You can see more details in my answer on Quora, just in case they help: https://www.quora.com/Why-is-127-0-0-1-used-for-localhost-Does-anyone-know-why-that-number-was-chosen Kind regards, John When TCP/IP was being designed, where did the concept of a loopback > function come from? How did it merge with the "connect to a port > on the local host, without having to figure out its IP address" function > that 127/8 eventually got used for? > > Did Jon Postel or other IP designers have the localhost function in mind > for 127 when he first reserved it back in 1981? Was 127 used this way > prior to 1986? Did Jon or others discuss this use prior to then? > > From dan at lynch.com Sun Jan 3 09:23:42 2021 From: dan at lynch.com (Dan Lynch) Date: Sun, 3 Jan 2021 09:23:42 -0800 Subject: [ih] PhD Message-ID: In an earlier post someone was posting about Gordon Bell and his wife Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to Gordon. Gordon had a Masters degree. Of course he gave out PhDs while at CMU. The early days of computer science found a number of brilliant people leading the field without advanced academic degrees of their own. Frothy times! I, myself, taught computer science without ever having taken a course in it. Early times. 60s. Cheers, Dan Cell 650-776-7313 From vgcerf at gmail.com Sun Jan 3 09:55:46 2021 From: vgcerf at gmail.com (vinton cerf) Date: Sun, 3 Jan 2021 12:55:46 -0500 Subject: [ih] PhD In-Reply-To: References: Message-ID: Robert Floyd was on the faculty of Stanford but had not attained a Ph.D. - but he was brilliant. 
https://en.wikipedia.org/wiki/Robert_W._Floyd v On Sun, Jan 3, 2021 at 12:24 PM Dan Lynch via Internet-history < internet-history at elists.isoc.org> wrote: > In an earlier post someone was posting about Gordon Bell and his wife > Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to > Gordon. Gordon had a Masters degree. Of course he gave out PhDs while at > CMU. The early days of computer science found a number of brilliant people > leading the field without advanced academic degrees of their own. Frothy > times! > > I, myself, taught computer science without ever having taken a course in > it. Early times. 60s. > > Cheers, > > Dan > > Cell 650-776-7313 > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From touch at strayalpha.com Sun Jan 3 10:12:55 2021 From: touch at strayalpha.com (Joseph Touch) Date: Sun, 3 Jan 2021 10:12:55 -0800 Subject: [ih] PhD In-Reply-To: References: Message-ID: > On Jan 3, 2021, at 9:55 AM, vinton cerf via Internet-history wrote: > > Robert Floyd was on the faculty of Stanford but had not attained a Ph.D. - > but he was brilliant. Dave Farber (my PhD advisor) was faculty at UC Irvine, UDel, UPenn, and CMU, all without a PhD, too. It's less common as a field matures, but there are always exceptions in every field, even in "R1" universities. It does, however, require a reason that warrants the exception... Joe From bill.n1vux at gmail.com Sun Jan 3 10:14:42 2021 From: bill.n1vux at gmail.com (Bill Ricker) Date: Sun, 3 Jan 2021 13:14:42 -0500 Subject: [ih] PhD In-Reply-To: References: Message-ID: On Sun, Jan 3, 2021, 12:24 Dan Lynch via Internet-history < internet-history at elists.isoc.org> wrote: > In an earlier post someone was posting about Gordon Bell and his wife > Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to > Gordon. Gordon had a Masters degree. 
I'll accept the correction as a friendly amendment since Gordon's Hon.D.Eng (WPI'93) and Hon. D.Sc.T. (CMU'10) came years after the events recounted, so yes, he would have been Professor but not Doctor then. ;-) (Somewhat amused yet saddened that CMU wasn't first in line!) (Referring to them as Dr & Dr Bell in the ^present tense^ is not wrong, despite only one of the several doctorates being 'earned' in the traditional sense of a defended thesis.) Of course he gave out PhDs while at CMU. The early days of computer science > found a number of brilliant people leading the field without advanced > academic degrees of their own. Frothy times! > > I, myself, taught computer science without ever having taken a course in > it. Early times. 60s. > Unclear if either of the subjects I taught in the '80s and '90s (Unix for Secretaries, and C++ and OOD for C programmers) or any of the courses I took were 'Computer Science' per se (as opposed to practical software development), but yes, I too taught with neither a terminal degree nor even an advanced degree. In this world of adjunctification, this likely continues in night divisions despite a surfeit of Ph.D.s, as adjunct lecturers remain even cheaper than the much abused itinerant adjunct professors! Not all fields are over-producing doctorates in this new century. In another field, a family friend received promotion to tenure-track in the School of Journalism with the understanding that he would use a sabbatical to write a D.Jo. thesis-worthy book before the School was next due for accreditation review. Which kinda skewed the incentives for his reader & viva committee. (OTOH his books are both well researched and well written, so any objections would have been obviously self-serving ^more of a comment than question^.) 
From johnl at iecc.com Sun Jan 3 10:21:07 2021 From: johnl at iecc.com (John Levine) Date: 3 Jan 2021 13:21:07 -0500 Subject: [ih] PhD In-Reply-To: Message-ID: <20210103182108.121104EE36E9@ary.qy> In article you write: >In an earlier post someone was posting about Gordon Bell and his wife Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to Gordon. >Gordon had a Masters degree. Of course he gave out PhDs while at CMU. The early days of computer science found a number of brilliant people leading >the field without advanced academic degrees of their own. Frothy times! When I was at Yale getting my PhD, Ned Irons was on the faculty and had only an M.S. Of course, he had already invented some of the fundamental ideas in computer languages. From dan at lynch.com Sun Jan 3 11:49:25 2021 From: dan at lynch.com (Dan Lynch) Date: Sun, 3 Jan 2021 11:49:25 -0800 Subject: [ih] PhD In-Reply-To: References: Message-ID: <78FBA606-65E2-4E35-A06B-492BA63DECD5@lynch.com> Oh yeah! There was a Wednesday night poker game on the peninsula with about 5-8 people. One night we invited Bob Floyd to play. Big mistake... He had never played poker so we told him the rules and licked our chops. A few hands in and he had figured out the game including bluffing and took us to the cleaners. Nice guy, but somehow not invited back. ;-) Dan Cell 650-776-7313 > On Jan 3, 2021, at 9:55 AM, vinton cerf wrote: > > Robert Floyd was on the faculty of Stanford but had not attained a Ph.D. - but he was brilliant. > https://en.wikipedia.org/wiki/Robert_W._Floyd > > v > > >> On Sun, Jan 3, 2021 at 12:24 PM Dan Lynch via Internet-history wrote: >> In an earlier post someone was posting about Gordon Bell and his wife Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to Gordon. Gordon had a Masters degree. Of course he gave out PhDs while at CMU. 
The early days of computer science found a number of brilliant people leading the field without advanced academic degrees of their own. Frothy times! >> >> I, myself, taught computer science without ever having taken a course in it. Early times. 60s. >> >> Cheers, >> >> Dan >> >> Cell 650-776-7313 >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Sun Jan 3 11:58:09 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 3 Jan 2021 11:58:09 -0800 Subject: [ih] PhD In-Reply-To: References: Message-ID: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: > It's less common as a field matures, I think this observation is key -- "Computer Science" is far from being mature. IMHO, it's still somewhere in the spectrum between Art and Engineering. When I was in grad school at MIT, I remember asking my advisor about investing several more years to get a PhD in the then-new curriculum of Computer Science. His observation was that PhDs tend to produce experts in some very narrow specialization, and also provide credentials useful for attracting venture capital to found companies. In contrast, MS work tends to create professionals with much broader interests and ability to explore outside of their academic focus. I interpreted this: PhDs think and discover scientific principles; MSes build stuff. Since I was most interested in "building stuff that people actually use" (what I told my high school adviser), I took the MS route. This was apparently pretty common at the time (an immature field). 
There's an interesting summary here of the experience at Harvard, and its ties to The Internet: https://www.harvardmagazine.com/2020/09/features-a-science-is-born I've asked the question before, but never gotten any answers -- after 50+ years of Computer Science, what are the top few most important Scientific Principles that have been discovered - analogous to Maxwell's Equations, or Einstein's, etc? Same question for the subfield of Computer Networking. IMHO, we won't have a Science until we know those Principles that tell us how to use computers in ways that don't require constant updates to fix critical flaws, or enable branches of governments or high school script kiddies to engage in cyberwarfare, or subject all of us to spam, phishing, viruses, identity theft, and other such nasties of computer life today. There's now over 50 years of operational experience with computers and networks. The "Internet Experiment" continues. Perhaps some current PhD candidates can extract some Scientific Principles from all that experimentation and tell the next generation of builders how to make things better. /Jack Haverty From touch at strayalpha.com Sun Jan 3 12:08:31 2021 From: touch at strayalpha.com (Joseph Touch) Date: Sun, 3 Jan 2021 12:08:31 -0800 Subject: [ih] PhD In-Reply-To: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> References: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> Message-ID: <987AA50B-CB43-4844-8C43-FD68D14ACB9F@strayalpha.com> > On Jan 3, 2021, at 11:58 AM, Jack Haverty via Internet-history wrote: > > > On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: >> It's less common as a field matures, > I think this observation is key -- "Computer Science" is far from being > mature. IMHO, it's still somewhere in the spectrum between Art and > Engineering. The way universities have devolved to teaching it, I agree. 
It's sad that one can get an undergrad major in CS without actually being exposed to automata theory, complexity, algorithms, data structures, etc. But I guess Java and Python pay increasing tuition costs more effectively. ... > I've asked the question before, but never gotten any answers -- after > 50+ years of Computer Science, what are the top few most important > Scientific Principles that have been discovered - analogous to Maxwell's > Equations, or Einstein's, etc? See above, when it's actually taught. > Same question for the subfield of Computer Networking. The way it's often taught there are very few "principles"; it's too often an exercise in comparative anatomy or archeology (here's what others built; let's study it). I did develop a first-principles approach that extrapolated Shannon's info theory into networking, focusing on WHY rather than merely HOW. It was taught for several years at USC and by others at UCLA and elsewhere. It's in development as a book, though spare time for such unsung work is hard to come by... Joe From brian.e.carpenter at gmail.com Sun Jan 3 12:10:53 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 4 Jan 2021 09:10:53 +1300 Subject: [ih] PhD In-Reply-To: References: Message-ID: On 04-Jan-21 06:23, Dan Lynch via Internet-history wrote: > In an earlier post someone was posting about Gordon Bell and his wife Gwen. It correctly attributed a PhD degree to Gwen and incorrectly to Gordon. Gordon had a Masters degree. Of course he gave out PhDs while at CMU. The early days of computer science found a number of brilliant people leading the field without advanced academic degrees of their own. Frothy times! > > I, myself, taught computer science without ever having taken a course in it. Early times. 60s. Crossovers were normal. My M.Sc. supervisor in Manchester was Prof. Frank Sumner, whose Ph.D. was in chemistry. He did have the advantage of being taught to program by Alan Turing. 
All the Manchester CS Professors were crossovers at that time (1967), of course, but I think the others all had engineering degrees. For that matter, all the grad students were crossovers, since there was no such thing as a CS graduate in the UK until 1968. And in networking: Donald Davies was a physicist. Paul Baran was an electrical engineer. Brian Carpenter From amckenzie3 at yahoo.com Sun Jan 3 12:45:15 2021 From: amckenzie3 at yahoo.com (Alex McKenzie) Date: Sun, 3 Jan 2021 20:45:15 +0000 (UTC) Subject: [ih] PhD In-Reply-To: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> References: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> Message-ID: <332864076.6683129.1609706715538@mail.yahoo.com> Jack, For networking, I think there are a number of key principles exposed in John Day's book "Patterns in Network Architecture". For Computer Science, I don't know. Cheers, Alex On Sunday, January 3, 2021, 2:58:31 PM EST, Jack Haverty via Internet-history wrote: On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: > It's less common as a field matures, I think this observation is key -- "Computer Science" is far from being mature. IMHO, it's still somewhere in the spectrum between Art and Engineering. When I was in grad school at MIT, I remember asking my advisor about investing several more years to get a PhD in the then-new curriculum of Computer Science. His observation was that PhDs tend to produce experts in some very narrow specialization, and also provide credentials useful for attracting venture capital to found companies. In contrast, MS work tends to create professionals with much broader interests and ability to explore outside of their academic focus. I interpreted this: PhDs think and discover scientific principles; MSes build stuff. Since I was most interested in "building stuff that people actually use" (what I told my high school adviser), I took the MS route. This was apparently pretty common at the time (an immature field). 
There's an interesting summary here of the experience at Harvard, and its ties to The Internet: https://www.harvardmagazine.com/2020/09/features-a-science-is-born I've asked the question before, but never gotten any answers -- after 50+ years of Computer Science, what are the top few most important Scientific Principles that have been discovered - analogous to Maxwell's Equations, or Einstein's, etc? Same question for the subfield of Computer Networking. IMHO, we won't have a Science until we know those Principles that tell us how to use computers in ways that don't require constant updates to fix critical flaws, or enable branches of governments or high school script kiddies to engage in cyberwarfare, or subject all of us to spam, phishing, viruses, identity theft, and other such nasties of computer life today. There's now over 50 years of operational experience with computers and networks. The "Internet Experiment" continues. Perhaps some current PhD candidates can extract some Scientific Principles from all that experimentation and tell the next generation of builders how to make things better. /Jack Haverty -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From vgcerf at gmail.com Sun Jan 3 13:19:14 2021 From: vgcerf at gmail.com (vinton cerf) Date: Sun, 3 Jan 2021 16:19:14 -0500 Subject: [ih] PhD In-Reply-To: <332864076.6683129.1609706715538@mail.yahoo.com> References: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> <332864076.6683129.1609706715538@mail.yahoo.com> Message-ID: well, at least read Knuth. v On Sun, Jan 3, 2021 at 3:45 PM Alex McKenzie via Internet-history < internet-history at elists.isoc.org> wrote: > Jack, > For networking, I think there are a number of key principles exposed in > John Day's book " Patterns in Network Architecture" > For Computer Science, I don't know. 
> Cheers,Alex > > On Sunday, January 3, 2021, 2:58:31 PM EST, Jack Haverty via > Internet-history wrote: > > > On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: > > It?s less common as a field matures, > I think this observation is key -- "Computer Science" is far from being > mature. IMHO, it's still somewhere in the spectrum between Art and > Engineering. > > When I was in grad school at MIT, I remember asking my advisor about > investing several more years to get a PhD in the then-new curriculum of > Computer Science. His observation was that PhDs tend to produce > experts in some very narrow specialization, and also provide credentials > useful for attracting venture capital to found companies. In contrast, > MS work tends to create professionals with much broader interests and > ability to explore outside of their academic focus. I interpreted > this: PhDs think and discover scientific principles; MSes build stuff. > > Since I was most interested in "building stuff that people actually use" > (what I told my high school adviser), I took the MS route. This was > apparently pretty common at the time (an immature field). There's an > interesting summary here of the experience at Harvard, and its ties to > The Internet: > > https://www.harvardmagazine.com/2020/09/features-a-science-is-born > > I've asked the question before, but never gotten any answers -- after > 50+ years of Computer Science, what are the top few most important > Scientific Principles that have been discovered - analogous to Maxwell's > Equations, or Einstein's, etc? Same question for the subfield of > Computer Networking. 
> > IMHO, we won't have a Science until we know those Principles that tell > us how to use computers in ways that don't require constant updates to > fix critical flaws, or enable branches of governments or high school > script kiddies to engage in cyberwarfare, or subject all of us to spam, > phishing, viruses, identity theft, and other such nasties of computer > life today. > > There's now over 50 years of operational experience with computers and > networks. The "Internet Experiment" continues. Perhaps some current > PhD candidates can extract some Scientific Principles from all that > experimentation and tell the next generation of builders how to make > things better. > > /Jack Haverty > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dave.walden.family at gmail.com Sun Jan 3 13:31:56 2021 From: dave.walden.family at gmail.com (David Walden) Date: Sun, 03 Jan 2021 16:31:56 -0500 Subject: [ih] PhD Message-ID: <062fpa5ytige199r1m01eeok.1609709516760@email.android.com> Maybe see Peter Denning's book on great principles in computing. On January 3, 2021, at 4:19 PM, vinton cerf via Internet-history wrote: well, at least read Knuth. v On Sun, Jan 3, 2021 at 3:45 PM Alex McKenzie via Internet-history < internet-history at elists.isoc.org> wrote: > Jack, > For networking, I think there are a number of key principles exposed in > John Day's book " Patterns in Network Architecture" > For Computer Science, I don't know. > Cheers,Alex > > On Sunday, January 3, 2021, 2:58:31 PM EST, Jack Haverty via > Internet-history wrote: > > > On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: > > It?s less common as a field matures, > I think this observation is key -- "Computer Science" is far from being > mature. 
IMHO, it's still somewhere in the spectrum between Art and > Engineering. > > When I was in grad school at MIT, I remember asking my advisor about > investing several more years to get a PhD in the then-new curriculum of > Computer Science. His observation was that PhDs tend to produce > experts in some very narrow specialization, and also provide credentials > useful for attracting venture capital to found companies. In contrast, > MS work tends to create professionals with much broader interests and > ability to explore outside of their academic focus. I interpreted > this: PhDs think and discover scientific principles; MSes build stuff. > > Since I was most interested in "building stuff that people actually use" > (what I told my high school adviser), I took the MS route. This was > apparently pretty common at the time (an immature field). There's an > interesting summary here of the experience at Harvard, and its ties to > The Internet: > > https://www.harvardmagazine.com/2020/09/features-a-science-is-born > > I've asked the question before, but never gotten any answers -- after > 50+ years of Computer Science, what are the top few most important > Scientific Principles that have been discovered - analogous to Maxwell's > Equations, or Einstein's, etc? Same question for the subfield of > Computer Networking. > > IMHO, we won't have a Science until we know those Principles that tell > us how to use computers in ways that don't require constant updates to > fix critical flaws, or enable branches of governments or high school > script kiddies to engage in cyberwarfare, or subject all of us to spam, > phishing, viruses, identity theft, and other such nasties of computer > life today. > > There's now over 50 years of operational experience with computers and > networks. The "Internet Experiment" continues. 
Perhaps some current > PhD candidates can extract some Scientific Principles from all that > experimentation and tell the next generation of builders how to make > things better. > > /Jack Haverty > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From jack at 3kitty.org Sun Jan 3 13:32:56 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 3 Jan 2021 13:32:56 -0800 Subject: [ih] PhD In-Reply-To: References: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> <332864076.6683129.1609706715538@mail.yahoo.com> Message-ID: <9d49f8e8-4527-c5bc-fe9a-295a581a8938@3kitty.org> Knuth was mandatory during the 60s/70s for anyone interested in computers. A great set of books. But he makes my point -- he recounts "The *Art* of Computer...", not the Science. IIRC, Science involves observing something, distilling theories about underlying principles, and then creating and performing experiments to validate the theory. We've been observing Computers and Networks for 50 years, and performing the Internet Experiment almost as long. I'm just curious about the results so far.... /Jack (still waiting for Knuth Volumes 4-7...) On 1/3/21 1:19 PM, vinton cerf via Internet-history wrote: > well, at least read Knuth. > v > > > On Sun, Jan 3, 2021 at 3:45 PM Alex McKenzie via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Jack, >> For networking, I think there are a number of key principles exposed in >> John Day's book " Patterns in Network Architecture" >> For Computer Science, I don't know. 
>> Cheers,Alex >> >> On Sunday, January 3, 2021, 2:58:31 PM EST, Jack Haverty via >> Internet-history wrote: >> >> >> On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: >>> It?s less common as a field matures, >> I think this observation is key -- "Computer Science" is far from being >> mature. IMHO, it's still somewhere in the spectrum between Art and >> Engineering. >> >> When I was in grad school at MIT, I remember asking my advisor about >> investing several more years to get a PhD in the then-new curriculum of >> Computer Science. His observation was that PhDs tend to produce >> experts in some very narrow specialization, and also provide credentials >> useful for attracting venture capital to found companies. In contrast, >> MS work tends to create professionals with much broader interests and >> ability to explore outside of their academic focus. I interpreted >> this: PhDs think and discover scientific principles; MSes build stuff. >> >> Since I was most interested in "building stuff that people actually use" >> (what I told my high school adviser), I took the MS route. This was >> apparently pretty common at the time (an immature field). There's an >> interesting summary here of the experience at Harvard, and its ties to >> The Internet: >> >> https://www.harvardmagazine.com/2020/09/features-a-science-is-born >> >> I've asked the question before, but never gotten any answers -- after >> 50+ years of Computer Science, what are the top few most important >> Scientific Principles that have been discovered - analogous to Maxwell's >> Equations, or Einstein's, etc? Same question for the subfield of >> Computer Networking. 
>> >> IMHO, we won't have a Science until we know those Principles that tell >> us how to use computers in ways that don't require constant updates to >> fix critical flaws, or enable branches of governments or high school >> script kiddies to engage in cyberwarfare, or subject all of us to spam, >> phishing, viruses, identity theft, and other such nasties of computer >> life today. >> >> There's now over 50 years of operational experience with computers and >> networks. The "Internet Experiment" continues. Perhaps some current >> PhD candidates can extract some Scientific Principles from all that >> experimentation and tell the next generation of builders how to make >> things better. >> >> /Jack Haverty >> >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> From mbgreen at seas.upenn.edu Sun Jan 3 13:45:08 2021 From: mbgreen at seas.upenn.edu (Michael Greenwald) Date: Sun, 03 Jan 2021 13:45:08 -0800 Subject: [ih] PhD In-Reply-To: <9d49f8e8-4527-c5bc-fe9a-295a581a8938@3kitty.org> References: <4cb7bedf-15b7-8d37-6674-da4e6df904ce@3kitty.org> <332864076.6683129.1609706715538@mail.yahoo.com> <9d49f8e8-4527-c5bc-fe9a-295a581a8938@3kitty.org> Message-ID: <7e174ca51b3117df024e01a43dd27a81@seas.upenn.edu> On 2021-01-03 13:32, Jack Haverty via Internet-history wrote: > Knuth was mandatory during the 60s/70s for anyone interested in > computers.? A great set of books.? But he makes my point -- he recounts > "The *Art* of Computer...", not the Science. > > IIRC, Science involves observing something, distilling theories about > underlying principles, and then creating and performing experiments to > validate the theory.?? We've been observing Computers and Networks for > 50 years, and performing the Internet Experiment almost as long.? 
I'm > just curious about the results so far.... > > /Jack > (still waiting for Knuth Volumes 4-7...) I assume you mean waiting for volume 4 to be *complete*. But in the off-chance that you didn't, Volume 4 is (partially) out already. It is divided into subvolumes. The subvolumes are divided into fascicles (not sure I got the spelling right). Volume 4A + 6 fascicles are out. (I believe that fascicles 5 and 6 are part of Volume 4B, not 4A.) My apologies if this is old news to everyone. > > On 1/3/21 1:19 PM, vinton cerf via Internet-history wrote: >> well, at least read Knuth. >> v >> >> >> On Sun, Jan 3, 2021 at 3:45 PM Alex McKenzie via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> Jack, >>> For networking, I think there are a number of key principles exposed >>> in >>> John Day's book "Patterns in Network Architecture" >>> For Computer Science, I don't know. >>> Cheers, Alex >>> >>> On Sunday, January 3, 2021, 2:58:31 PM EST, Jack Haverty via >>> Internet-history wrote: >>> >>> >>> On 1/3/21 10:12 AM, Joseph Touch via Internet-history wrote: >>>> It's less common as a field matures, >>> I think this observation is key -- "Computer Science" is far from >>> being >>> mature. IMHO, it's still somewhere in the spectrum between Art and >>> Engineering. >>> >>> When I was in grad school at MIT, I remember asking my advisor about >>> investing several more years to get a PhD in the then-new curriculum >>> of >>> Computer Science. His observation was that PhDs tend to produce >>> experts in some very narrow specialization, and also provide >>> credentials >>> useful for attracting venture capital to found companies. In >>> contrast, >>> MS work tends to create professionals with much broader interests and >>> ability to explore outside of their academic focus. I interpreted >>> this: PhDs think and discover scientific principles; MSes build >>> stuff. 
>>> >>> Since I was most interested in "building stuff that people actually >>> use" >>> (what I told my high school adviser), I took the MS route. This was >>> apparently pretty common at the time (an immature field). There's an >>> interesting summary here of the experience at Harvard, and its ties >>> to >>> The Internet: >>> >>> https://www.harvardmagazine.com/2020/09/features-a-science-is-born >>> >>> I've asked the question before, but never gotten any answers -- after >>> 50+ years of Computer Science, what are the top few most important >>> Scientific Principles that have been discovered - analogous to >>> Maxwell's >>> Equations, or Einstein's, etc? Same question for the subfield of >>> Computer Networking. >>> >>> IMHO, we won't have a Science until we know those Principles that >>> tell >>> us how to use computers in ways that don't require constant updates >>> to >>> fix critical flaws, or enable branches of governments or high school >>> script kiddies to engage in cyberwarfare, or subject all of us to >>> spam, >>> phishing, viruses, identity theft, and other such nasties of computer >>> life today. >>> >>> There's now over 50 years of operational experience with computers >>> and >>> networks. The "Internet Experiment" continues. Perhaps some >>> current >>> PhD candidates can extract some Scientific Principles from all that >>> experimentation and tell the next generation of builders how to make >>> things better. 
>>> >>> /Jack Haverty >>> >>> >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> From jack at 3kitty.org Sat Jan 23 12:19:50 2021 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 23 Jan 2021 12:19:50 -0800 Subject: [ih] Visualization of Internet History 1993-now Message-ID: FYI, I stumbled across an interesting dynamic graphic, visualizing the top Internet sites over time. Fascinating to see the shifts as the Internet evolved, so might be of interest to Internet Historians. See: https://www.visualcapitalist.com/most-popular-websites-since-1993/ From brian.e.carpenter at gmail.com Sat Jan 23 13:58:20 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 24 Jan 2021 10:58:20 +1300 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: References: Message-ID: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> Er, hum, excuse me, for January 1993 it shows AOL as having 20 million of something. http://info.cern.ch was almost certainly still the leading site then, and there were only about 50 sites in total, mainly in academia. AOL was widely sneered at for *not* being an ISP. Anyway, aol.com wasn't registered until 1995-06-22. prodigy.com was registered 1992-09-16, but according to Wikipedia: "In 1994, Prodigy became the first of the early-generation dialup services to offer full access to the World Wide Web and to offer Web page hosting to its members. Since Prodigy was not a true Internet service provider, programs that needed an Internet connection, such as Internet Explorer and Quake multiplayer, could not be used with the service." compuserve.com was registered 1988-10-06. 
I have no idea when they first had an HTTP server, but they really didn't have proper Internet connectivity even in late 1995. They did start sending somebody (Rich Petke) to the IETF during 1995. (I'm looking at an email from Barry F. Berkov dated 21 Oct 95 10:56:43 EDT.) The graphic also shows imdb having 21,261 of something in January 1993. imdb.com was registered on 1996-01-05. mtv.com was registered on 1995-02-14. bloomberg.com on 1993-09-29. So at least for 1993-4, it seems that the numbers are rubbish. Regards Brian Carpenter On 24-Jan-21 09:19, Jack Haverty via Internet-history wrote: > FYI, I stumbled across an interesting dynamic graphic, visualizing the > top Internet sites over time. Fascinating to see the shifts as the > Internet evolved, so might be of interest to Internet Historians. See: > > https://www.visualcapitalist.com/most-popular-websites-since-1993/ > From mark at good-stuff.co.uk Sat Jan 23 14:34:42 2021 From: mark at good-stuff.co.uk (Mark Goodge) Date: Sat, 23 Jan 2021 22:34:42 +0000 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> References: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> Message-ID: On 23/01/2021 21:58, Brian E Carpenter via Internet-history wrote: > Er, hum, excuse me, for January 1993 it shows AOL as having 20 > million of something. http://info.cern.ch was almost certainly still > the leading site then, and there were only about 50 sites in total, > mainly in academia. AOL was widely sneered at for *not* being an ISP. > Anyway, aol.com wasn't registered until 1995-06-22. The early years of this are clearly including visits to walled garden dial-up services, which is why AOL is way out there to begin with. It would probably be more helpful if it didn't. But then, the web and the Internet aren't necessarily synonymous. > The graphic also shows imdb having 21,261 of something in January > 1993. imdb.com was registered on 1996-01-05. 
mtv.com was registered > on 1995-02-14. bloomberg.com on 1993-09-29. IMDb long existed before the domain name was registered. It was on Usenet before it was on the web, and when it first appeared on the web it was hosted by the CompSci department at Cardiff University (where the website's creator was doing a PhD). https://web.archive.org/web/20130324121844/http://www.cs.cf.ac.uk/movies/ I haven't checked, but it wouldn't surprise me if MTV and Bloomberg's websites were originally on a different domain to the ones they use now, either (just like Facebook was originally on thefacebook.com). A lot of well-established sites changed their domains over the course of their early history. Incidentally, I wrote my first website in 1994, when, according to that video, Apple (in 10th place) was getting 296,677 visits a month and the BBC (12th) was getting 112,930. I now run a website which routinely gets more than a million visitors a month, but, according to the Alexa rankings, it isn't even in the top 100,000 most popular sites. That fact alone is an illustration of how much the web has grown. Mark From ocl at gih.com Sat Jan 23 15:24:08 2021 From: ocl at gih.com (=?UTF-8?Q?Olivier_MJ_Cr=c3=a9pin-Leblond?=) Date: Sun, 24 Jan 2021 00:24:08 +0100 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> References: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> Message-ID: <3e6d5ace-f205-b734-675c-ccd1c5accad4@gih.com> Looks flawed indeed. Re: AoL I wonder if the figures are for the number of "free hours" 5 1/4in floppies and CDs that ended up in landfill sites around the world. The statistics erroneously link the popularity of Web sites with the ISP side of the business - of course every AoL user used to have its starting page as an AoL page, but that does not count as a "popular" Web site, does it? Same for Prodigy and Compuserve - their "web" site was their subscriber starting page. 
Re: IMDB their site was operating from the Cardiff University Computer Science department, before they registered their domain name, so the figures might be correct. But back then Web traffic was still minimal and the largest traffic was FTP, Usenet and IRC. On FTP sites, the constellation of Sun's Sunsites and WSMR's Simtel20 had high levels of traffic which vastly exceeded any Web traffic. And when the Web picked up, the largest amount of data traffic carried was Porn, but I guess that wouldn't be something we'd be proud to inscribe in history? I also find it bizarre to see Yahoo fly forward in the early 2000s. I thought that Google was fast to outgrow them and that in the meantime, I thought automated crawlers like Lycos, Excite, Hotbot & Altavista were stronger? But perhaps Yahoo had better international roll-out in other languages... Unfortunately, as with much of the data about the Internet in the 1990s and early 2000s, things happened so fast and in so many places that I doubt that anyone will be able to agree to a single dataset. At some point, I'd argue that "we" lost track and ended up making estimates with varying levels of guesswork. Kindest regards, Olivier On 23/01/2021 22:58, Brian E Carpenter via Internet-history wrote: > Er, hum, excuse me, for January 1993 it shows AOL as having 20 million of something. http://info.cern.ch was almost certainly still the leading site then, and there were only about 50 sites in total, mainly in academia. AOL was widely sneered at for *not* being an ISP. Anyway, aol.com wasn't registered until 1995-06-22. > > prodigy.com was registered 1992-09-16, but according to Wikipedia: > "In 1994, Prodigy became the first of the early-generation dialup services to offer full access to the World Wide Web and to offer Web page hosting to its members. 
Since Prodigy was not a true Internet service provider, programs that needed an Internet connection, such as Internet Explorer and Quake multiplayer, could not be used with the service." > > compuserve.com was registered 1988-10-06. I have no idea when they first had an HTTP server, but they really didn't have proper Internet connectivity even in late 1995. They did start sending somebody (Rich Petke) to the IETF during 1995. > (I'm looking at an email from Barry F. Berkov dated 21 Oct 95 10:56:43 EDT.) > > The graphic also shows imdb having 21,261 of something in January 1993. imdb.com was registered on 1996-01-05. mtv.com was registered on 1995-02-14. bloomberg.com on 1993-09-29. > > So at least for 1993-4, it seems that the numbers are rubbish. > > Regards > Brian Carpenter > > On 24-Jan-21 09:19, Jack Haverty via Internet-history wrote: >> FYI, I stumbled across an interesting dynamic graphic, visualizing the >> top Internet sites over time.?? Fascinating to see the shifts as the >> Internet evolved, so might be of interest to Internet Historians.? See: >> >> https://www.visualcapitalist.com/most-popular-websites-since-1993/ >> From scott.brim at gmail.com Sat Jan 23 16:31:17 2021 From: scott.brim at gmail.com (Scott Brim) Date: Sat, 23 Jan 2021 19:31:17 -0500 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: References: Message-ID: "Websites" only cover half the traffic. What about traffic from phone apps, e.g. TikTok and Instagram? And I agree that it conflates information sites, service provider email sites, etc. Even so, that was a fun ride. 
Scott From joly at punkcast.com Sun Jan 24 06:02:58 2021 From: joly at punkcast.com (Joly MacFie) Date: Sun, 24 Jan 2021 09:02:58 -0500 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: <3e6d5ace-f205-b734-675c-ccd1c5accad4@gih.com> References: <79586153-bfe0-0169-7c3a-24baa56d843a@gmail.com> <3e6d5ace-f205-b734-675c-ccd1c5accad4@gih.com> Message-ID: > I also find it bizarre to see Yahoo fly forward in the early 2000s. With Yahoo Clubs, and then their purchase of egroups, Yahoo absolutely dominated social media. They completely blew it. On Sat, Jan 23, 2021 at 6:24 PM Olivier MJ Crépin-Leblond via Internet-history wrote: > Looks flawed indeed. > Re: AoL I wonder if the figures are for the number of "free hours" 5 > 1/4in floppies and CDs that ended up in landfill sites around the world. > The statistics erroneously link the popularity of Web sites with the ISP > side of the business - of course every AoL user used to have its > starting page as an AoL page, but that does not count as a "popular" Web > site, does it? > Same for Prodigy and Compuserve - their "web" site was their subscriber > starting page. > Re: IMDB their site was operating from the Cardiff University Computer > Science department, before they registered their domain name, so the > figures might be correct. > But back then Web traffic was still minimal and the largest traffic was > FTP, Usenet and IRC. On FTP sites, the constellation of Sun's Sunsites > and WSMR's Simtel20 had high levels of traffic which vastly exceeded any > Web traffic. And when the Web picked up, the largest amount of data > traffic carried was Porn, but I guess that wouldn't be something we'd be > proud to inscribe in history? > > I also find it bizarre to see Yahoo fly forward in the early 2000s. I > thought that Google was fast to outgrow them and that in the meantime, I > thought automated crawlers like Lycos, Excite, Hotbot & Altavista were > stronger? 
But perhaps Yahoo had better international roll-out in other > languages... > > Unfortunately, as with much of the data about the Internet in the 1990s > and early 2000s, things happened so fast and in so many places that I > doubt that anyone will be able to agree to a single dataset. At some > point, I'd argue that "we" lost track and ended up making estimates with > varying levels of guesswork. > > Kindest regards, > > Olivier > > On 23/01/2021 22:58, Brian E Carpenter via Internet-history wrote: > > Er, hum, excuse me, for January 1993 it shows AOL as having 20 million > of something. http://info.cern.ch was almost certainly still the leading > site then, and there were only about 50 sites in total, mainly in academia. > AOL was widely sneered at for *not* being an ISP. Anyway, aol.com wasn't > registered until 1995-06-22. > > > > prodigy.com was registered 1992-09-16, but according to Wikipedia: > > "In 1994, Prodigy became the first of the early-generation dialup > services to offer full access to the World Wide Web and to offer Web page > hosting to its members. Since Prodigy was not a true Internet service > provider, programs that needed an Internet connection, such as Internet > Explorer and Quake multiplayer, could not be used with the service." > > > > compuserve.com was registered 1988-10-06. I have no idea when they > first had an HTTP server, but they really didn't have proper Internet > connectivity even in late 1995. They did start sending somebody (Rich > Petke) to the IETF during 1995. > > (I'm looking at an email from Barry F. Berkov > dated 21 Oct 95 10:56:43 EDT.) > > > > The graphic also shows imdb having 21,261 of something in January 1993. > imdb.com was registered on 1996-01-05. mtv.com was registered on > 1995-02-14. bloomberg.com on 1993-09-29. > > > > So at least for 1993-4, it seems that the numbers are rubbish. 
> > > > Regards > > Brian Carpenter > > > > On 24-Jan-21 09:19, Jack Haverty via Internet-history wrote: > >> FYI, I stumbled across an interesting dynamic graphic, visualizing the > >> top Internet sites over time. Fascinating to see the shifts as the > >> Internet evolved, so might be of interest to Internet Historians. See: > >> > >> https://www.visualcapitalist.com/most-popular-websites-since-1993/ > >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- -------------------------------------- Joly MacFie +12185659365 -------------------------------------- - From mfidelman at meetinghouse.net Sun Jan 24 09:07:52 2021 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 24 Jan 2021 12:07:52 -0500 Subject: [ih] Visualization of Internet History 1993-now In-Reply-To: References: Message-ID: <846c6c88-7d1d-555b-7eae-0603b91fa1ed@meetinghouse.net> Jack Haverty via Internet-history wrote: > FYI, I stumbled across an interesting dynamic graphic, visualizing the > top Internet sites over time. Fascinating to see the shifts as the > Internet evolved, so might be of interest to Internet Historians. See: > > https://www.visualcapitalist.com/most-popular-websites-since-1993/ > That's pretty cool. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From brian.e.carpenter at gmail.com Sun Jan 31 18:58:31 2021 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 1 Feb 2021 15:58:31 +1300 Subject: [ih] Funny how things work out Message-ID: Oddly enough, I only noticed today that .cern has been a TLD since 2014. I'm amazed they bothered. 
The reason CERN was originally .cern.ch is that in about 1987, when we were a central point in European academia, in particular operating an important mail interchange gateway, someone in my team wrote to IANA asking for the TLD ".cern".** Jon replied very nicely explaining why this was a silly idea. I don't have those emails, regrettably, but I believe that he suggested .cern.int, although .int was still a bit of a political football then. We opted for .cern.ch and I believe we also got .cern.fr, which we never used, although our computer centre was on the French side of the border. So, some 25 years later one of my successors at CERN decided that Jon was wrong, and ICANN agreed. I just looked at the current state of the TLD registry. It's (IMNSHO) horrible. Counting up, there are the following numbers of TLDs of various types: Generic 1247, Sponsored 14, Country Code 316, Infrastructure 1. Back in early 1998, the IAB wrote to Ira Magaziner in response to the Green Paper that led to ICANN. Among other things, we said "On the other hand, a very large increase in the total number of gTLDs (say to thousands) would lead us into technically unknown territory." Are we there yet? ** One variant of my address at that time was BRIAN%priam.cern at ean-relay.ac.uk, so you can see that we'd preempted the domain already :-). Regards Brian Carpenter From johnl at iecc.com Sun Jan 31 20:24:14 2021 From: johnl at iecc.com (John Levine) Date: 31 Jan 2021 23:24:14 -0500 Subject: [ih] Funny how things work out In-Reply-To: Message-ID: <20210201042415.3339F6D17262@ary.qy> In article you write: >Oddly enough, I only noticed today that .cern has been a TLD since 2014. I'm amazed they bothered. Me too, seems like a poor use of $200K. See their cute little list of 2LDs below. >I just looked at the current state of the TLD registry. It's (IMNSHO) horrible. 
Counting up, there are the following numbers of TLDs of various types: >Generic 1247, Sponsored 14, Country Code 316, Infrastructure 1. Most of the generic ones are either vanity TLDs like .BANANAREPUBLIC or failed attempts to take business from .COM. like .BLUE (13K names) and .HOCKEY (1200 names.) Only a handful of new gTLDs have gotten as many as a million entries, and those mostly seem to be Chinese fashion statements. The three largest gTLDs are still COM with 150M, NET with 13M, and ORG with 10M. Some of larger ccTLDs might have 10M. >Back in early 1998, the IAB wrote to Ira Magaziner in response to the Green Paper that led to ICANN. Among other things, we said "On the other hand, a very >large increase in the total number of gTLDs (say to thousands) would lead us into technically unknown territory." Are we there yet? The root server operators have no problem with the current size of the root zone, and have much more of an issue with the vast number of garbage queries that they get. ICANN is of course barelling ahead to open another round of new TLDs, even though the current round still has a few unresolved applications, and there is basically no evidence that the current round has benefited anyone other than domain speculators and the people who provide the backend services for the vanity domains. R's, John about.cern. accelerators.cern. againstcovid19.cern. alice.cern. alumni.cern. antimatter.cern. arts.cern. at.cern. atlas.cern. beamlineforschools.cern. beams.cern. belgium.cern. careers.cern. cernandsocietyfoundation.cern. cernvm.cern. chis.cern. clear.cern. clic.cern. cms.cern. compass.cern. computing.cern. cosmicrays.cern. darkmatter.cern. education.cern. engineering.cern. europeanstrategy.cern. exhibitions.cern. experiments.cern. flair.cern. fluka.cern. giving.cern. globe.cern. go.cern. higgsboson.cern. home.cern. hse.cern. ideasquare.cern. incident.cern. isolde.cern. jobs.cern. knowledge.cern. kt.cern. learn.cern. lhc.cern. library.cern. medicis.cern. 
neighbours.cern. news.cern. newsroom.cern. nic.cern. opendays.cern. openlab.cern. particles.cern. physics.cern. press.cern. quantum.cern. root.cern. science.cern. sciencegateway.cern. scientific-info.cern. sis.cern. sparks.cern. standardmodel.cern. supersymmetry.cern. teachers.cern. technology.cern. test-home.cern. testing1.cern. theory.cern. united-states.cern. visit.cern. voisins.cern. webfest.cern. winservices.cern. www.cern. From gtaylor at tnetconsulting.net Sun Jan 31 21:43:31 2021 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 31 Jan 2021 22:43:31 -0700 Subject: [ih] Funny how things work out In-Reply-To: <20210201042415.3339F6D17262@ary.qy> References: <20210201042415.3339F6D17262@ary.qy> Message-ID: On 1/31/21 9:24 PM, John Levine via Internet-history wrote: > The root server operators have no problem with the current size of > the root zone, and have much more of an issue with the vast number > of garbage queries that they get. I would like to see more effort to reduce the number of garbage queries that the real root servers get. A couple options come to mind, and I'd like to see more people do one or both of these. RFC 6303 - Locally Served DNS Zones RFC 7706 - Running Root on Loopback I'd really like to see 6303 become a default in mainstream DNS server software. I believe BIND does some, if not all, of the 6303 zones. -- Grant. . . . unix || die
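
[Editor's note: the two RFCs mentioned above map onto BIND configuration roughly as follows. This is a hedged, illustrative named.conf fragment, not a recommended production setup; directive names are from the BIND 9 documentation, and behavior varies by version, so verify against the ARM for the release in use.]

```conf
options {
    // RFC 6303: answer reverse-lookup queries for private and special
    // address space (127.in-addr.arpa, 10.in-addr.arpa, etc.) from
    // built-in empty zones instead of leaking them to the root servers.
    // This is already the default in modern BIND; shown for emphasis.
    empty-zones-enable yes;
};

// RFC 7706-style local copy of the root zone, so root queries are
// answered on this resolver rather than sent upstream. BIND 9.14+
// provides "type mirror", which transfers the root zone from built-in
// defaults and validates it with DNSSEC before serving it; earlier
// setups used a plain secondary ("slave") zone with an explicit
// primaries list of root-server addresses, as sketched in the RFC.
zone "." {
    type mirror;
};
```

With a configuration along these lines, queries for names under RFC 6303 zones and for nonexistent TLDs are resolved locally, which is one way an individual resolver can avoid contributing to the garbage-query load on the real roots.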