From j at shoch.com Tue Mar 1 09:37:21 2022 From: j at shoch.com (John Shoch) Date: Tue, 1 Mar 2022 09:37:21 -0800 Subject: [ih] David Boggs, Co-Inventor of Ethernet, Dies at 71 - The New York Times In-Reply-To: References: Message-ID: Sad news about the loss of David Boggs. Although best known as the co-inventor of Ethernet, he also did major work on early internetworking. And his 1982 PhD thesis was on "Internet Broadcasting" -- https://dl.acm.org/doi/10.5555/910299 https://www.nytimes.com/2022/02/28/technology/david-boggs-dead.html John Shoch From brian.e.carpenter at gmail.com Tue Mar 1 18:17:46 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 2 Mar 2022 15:17:46 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished Message-ID: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> https://www.theregister.com/2022/03/01/the_internet_is_so_hard/ (start watching the video at 46 minutes in) From geoff at iconia.com Tue Mar 1 19:27:33 2022 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Tue, 1 Mar 2022 17:27:33 -1000 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> References: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> Message-ID: vis-a-vis "We Were Not Done Yet (starting at ~1 hr, min 7) & "We Still Had Long List of Things That Had To Be Figured Out Someday and the technology kind of got out of our hands and went out into the user's environment before it was read to go" + "It Should All Just Work But The Reality Is It Still Doesn't There's A Long List of Things That Have To Get Done" (start at ~1 hr, 12 mins) curious if anyone has a copy or memory of what the Long List of Things That Had To Be Figured Out Someday? / There's A Long List of Things That Have To Get Done? On Tue, Mar 1, 2022 at 4:18 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > https://www.theregister.com/2022/03/01/the_internet_is_so_hard/ > > (start watching the video at 46 minutes in) > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- Geoff.Goodfellow at iconia.com living as The Truth is True From geoff at iconia.com Tue Mar 1 19:37:07 2022 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Tue, 1 Mar 2022 17:37:07 -1000 Subject: [ih] David Boggs, Co-Inventor of Ethernet, Dies at 71 - The New York Times In-Reply-To: References: Message-ID: A memorable Dave Boggs utterance said to yours truly when meeting with him at Xerox-PARC: "if only Xerox could hire someone to sleep for me he could be more productive/get more done On Tue, Mar 1, 2022 at 7:37 AM John Shoch via Internet-history < internet-history at elists.isoc.org> wrote: > Sad news about the loss of David Boggs. > Although best known as the co-inventor of Ethernet, he also did major work > on early internetworking. 
> And his 1982 PhD thesis was on "Internet Broadcasting" -- > https://dl.acm.org/doi/10.5555/910299 > > https://www.nytimes.com/2022/02/28/technology/david-boggs-dead.html > > John Shoch > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- Geoff.Goodfellow at iconia.com living as The Truth is True

From jack at 3kitty.org Tue Mar 1 20:46:40 2022 From: jack at 3kitty.org (Jack Haverty) Date: Tue, 1 Mar 2022 20:46:40 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> Message-ID: Yeah, that was me. But the article's writer got some of the details wrong. IIRC, I didn't "develop FTP"; not sure where that came from. I think that Abhay Bhushan wrote RFC114, documenting FTP but I don't think even he'd claim to have invented it. It was a team effort of lots of ARPANET denizens of the day. Abhay's office was just a few doors away from mine, and I do remember annoying him persistently until he agreed to add some features we needed in order to implement email (MLFL, IIRC).

Re the Long List -- I still have my notebooks from ICCB meetings where that list was kept on the whiteboard-du-jour. At one meeting I copied the list into my notebook. I'll see if I can find it and report back. One that I used in the talk was TOS, i.e., how should routers (and TCPs) treat datagrams differently depending on their TOS values. There were 8 or so others on the list too.

Jack Haverty

On 3/1/22 19:27, the keyboard of geoff goodfellow via Internet-history wrote: > vis-a-vis "We Were Not Done Yet (starting at ~1 hr, min 7) > & > "We Still Had Long List of Things That Had To Be Figured Out Someday > and the technology kind of got out of our hands and > went out into the user's environment before it was read to go" > + > "It Should All Just Work But The Reality Is It Still Doesn't There's A Long > List of Things That Have To Get Done" (start at ~1 hr, 12 mins) > > curious if anyone has a copy or memory of what > the Long List of Things That Had To Be Figured Out Someday? > / > There's A Long List of Things That Have To Get Done? > > > On Tue, Mar 1, 2022 at 4:18 PM Brian E Carpenter via Internet-history < > internet-history at elists.isoc.org> wrote: > >> https://www.theregister.com/2022/03/01/the_internet_is_so_hard/ >> >> (start watching the video at 46 minutes in) >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> >>

From bpurvy at gmail.com Tue Mar 1 22:47:29 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Tue, 1 Mar 2022 22:47:29 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> Message-ID: just an aside: When the subject is *mainly* someone's name, I panic and think they died. glad you're still with us, Jack. On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Yeah, that was me. But the article's writer got some of the details > wrong. IIRC, I didn't "develop FTP"; not sure where that came from. I > think that Abhay Bhushan wrote RFC114, documenting FTP but I don't think > even he'd claim to have invented it. It was a team effort of lots of > ARPANET denizens of the day.
Abhay's office was just a few doors away > from mine, and I do remember annoying him persistently until he agreed > to add some features we needed in order to implement email (MLFL, IIRC). > > Re the Long List -- I still have my notebooks from ICCB meetings where > that list was kept on the whiteboard-du-jour. At one meeting I copied > the list into my notebook. I'll see if I can find it and report back. > One that I used in the talk was TOS, i.e., how should routers (and TCPs) > treat datagrams differently depending on their TOS values. There were > 8 or so others on the list too. > > Jack Haverty > > > On 3/1/22 19:27, the keyboard of geoff goodfellow via Internet-history > wrote: > > vis-a-vis "We Were Not Done Yet (starting at ~1 hr, min 7) > > & > > "We Still Had Long List of Things That Had To Be Figured Out Someday > > and the technology kind of got out of our hands and > > went out into the user's environment before it was read to go" > > + > > "It Should All Just Work But The Reality Is It Still Doesn't There's A > Long > > List of Things That Have To Get Done" (start at ~1 hr, 12 mins) > > > > curious if anyone has a copy or memory of what > > the Long List of Things That Had To Be Figured Out Someday? > > / > > There's A Long List of Things That Have To Get Done? > > > > > > On Tue, Mar 1, 2022 at 4:18 PM Brian E Carpenter via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> https://www.theregister.com/2022/03/01/the_internet_is_so_hard/ > >> > >> (start watching the video at 46 minutes in) > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jnc at mercury.lcs.mit.edu Wed Mar 2 08:22:34 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Wed, 2 Mar 2022 11:22:34 -0500 (EST) Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished Message-ID: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: > One that I used in the talk was TOS, i.e., how should routers (and TCPs) > treat datagrams differently depending on their TOS values. I actually don't think that's that important any more (or multicast either). TOS is only realy important in a network with resource limitations, or very different service levels. We don't have those any more - those limitations have just been engineered away. At some level, I'd say the real issues havn't changed - the routing architecture is a joke of a kludge; and trying to use one 'name' to identify both *who* a node is, and *where* it is, is ludicrous. But neither one of those is a clear issue today (either, with TOS), and fixing either one would now have a cost many orders of magnitude higher than if we'd tackled them back when. The biggest issue I see is actually not in the network at all - it's the structure of the users' nodes that are plugged into it. Specifically, they don't have a robust security architecure, to prevent infection, theft of data, surveillance, etc. Software companies have almost always prioritized user-visible features over *real* security (constant new releaases to 'fix' security bugs notwithstanding), and nothing has changed - or will. 
Noel

From mfidelman at meetinghouse.net Wed Mar 2 08:50:49 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Wed, 2 Mar 2022 11:50:49 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> Message-ID: <6007f595-60ad-4445-a2e1-a0b97a3c2341@meetinghouse.net> Just a general comment: Of COURSE the Internet was never finished. It's become an organism (or a "self-organizing complex adaptive system" - choose your preferred terminology). We planted some seeds, they grew, and now the Internet (and the system-of-systems of which it is a part) is evolving under its own steam. Let's just hope it doesn't become self-aware, and all the world's phones don't all ring at the same time. (Anybody else here read Arthur Clarke's "Dial F for Frankenstein"? "For homo sapiens, the telephone bell had tolled.") Cheers, Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown

From touch at strayalpha.com Wed Mar 2 08:53:22 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Wed, 2 Mar 2022 08:53:22 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> Message-ID: <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> > > On Mar 2, 2022, at 8:22 AM, Noel Chiappa via Internet-history wrote: > >> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: > >> One that I used in the talk was TOS, i.e., how should routers (and TCPs) >> treat datagrams differently depending on their TOS values. > > I actually don't think that's that important any more (or multicast either). > TOS is only realy important in a network with resource limitations, or very > different service levels. We don't have those any more - those limitations > have just been engineered away.

Not all networks can be over-provisioned; DSCPs and traffic engineering are alive and well. They've just been buried so low that you don't notice them. It's like driving on cement and claiming no more need for iron rebar. Taking Clarke's Third Law a step further*, "any sufficiently useful technology fades into the background". Joe *"Any sufficiently advanced technology is indistinguishable from magic" -- Dr. Joe Touch, temporal epistemologist www.strayalpha.com

From brian.e.carpenter at gmail.com Wed Mar 2 12:03:00 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Thu, 3 Mar 2022 09:03:00 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> Message-ID: <47eb08ef-e41b-7ce8-8f9b-cf27c9dd8f40@gmail.com> On 03-Mar-22 05:53, touch--- via Internet-history wrote: >> >> On Mar 2, 2022, at 8:22 AM, Noel Chiappa via Internet-history wrote: >> >>> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >> >>> One that I used in the talk was TOS, i.e., how should routers (and TCPs) >>> treat datagrams differently depending on their TOS values.
>> >> I actually don't think that's that important any more (or multicast either). >> TOS is only realy important in a network with resource limitations, or very >> different service levels. We don't have those any more - those limitations >> have just been engineered away. > > Not all networks can be over-provisioned; DSCPs and traffic engineering are alive and well.

Indeed. I couldn't tell from the user problem that Jack described in his talk whether the scenario involved multiple ISPs, and that's critical because diffserv was not designed for and usually does not work when crossing ISP boundaries. Alternatively, what he was describing was a result of the well-known buffer bloat problem. Hard to tell.

Brian

> > They've just been buried so low that you don't notice them. It's like driving on cement and claiming no more need for iron rebar. > > Taking Clarke's Third Law a step further*, "any sufficiently useful technology fades into the background". > > Joe > > *"Any sufficiently advanced technology is indistinguishable from magic" > -- > Dr. Joe Touch, temporal epistemologist > www.strayalpha.com >

From karl at cavebear.com Wed Mar 2 14:55:54 2022 From: karl at cavebear.com (Karl Auerbach) Date: Wed, 2 Mar 2022 14:55:54 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> Message-ID: <6d0eefe5-776e-f120-21ae-f7d439369501@cavebear.com> On 3/1/22 7:27 PM, the keyboard of geoff goodfellow via Internet-history wrote: > curious if anyone has a copy or memory of what > the Long List of Things That Had To Be Figured Out Someday?

My list begins with an association protocol layer - what ISO/OSI called the "session" layer, but nothing as incomprehensibly overburdened and extensive as what they had. On the net we have a lot of "association" context that can transcend the lifetime of a single transport connection. That sort of thing would have been useful to avoid the triangular routing used in mobile IP. And it would have obviated a lot of cookie usage on the web. Same for crypto context in things like DTLS.

Basically an association layer would let the end applications establish named markers during their conversation. Then when a transport broke and was re-established the two end-points would say "where were we when we last spoke?" They would use the association protocol to find the last agreed-upon name. It would be up to the applications to remember what they each did after that name and how to deal with it - sort of like a journaled file system. Such a protocol would not require any storage in the protocol stack - that would be in the applications who need to remember what they did after the last agreed-upon named checkpoint.

--karl--

From jack at 3kitty.org Wed Mar 2 17:06:20 2022 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 2 Mar 2022 17:06:20 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> Message-ID: <3174ab1e-5dcc-3759-8297-83d7c80530ce@3kitty.org> Absolutely. The audience for my talk was technical-savvy people who are involved in building and/or operating the pieces of the Internet in places where fiber hasn't carpeted the area yet - places like Bangladesh et al, where they do have to pay attention to traffic engineering.
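As a rough illustration of the association-layer idea Karl Auerbach describes earlier in the thread, here is a toy Python sketch of named checkpoint markers that survive a transport break. The class and method names are invented for illustration only; this is not any real protocol, just the "where were we when we last spoke?" exchange in miniature.

    # Toy model of an "association" that outlives any one transport connection.
    # Each endpoint records named markers; after a reconnect, both sides agree
    # on the last marker they share and resume from there.

    class Association:
        def __init__(self):
            self.markers = []                    # ordered checkpoint names

        def mark(self, name):
            """Record a named checkpoint during the conversation."""
            self.markers.append(name)

        def resume_point(self, peer_markers):
            """Answer 'where were we?': the last marker both sides remember."""
            common = set(self.markers) & set(peer_markers)
            for name in reversed(self.markers):
                if name in common:
                    return name
            return None                          # no shared history; start over

    # After a transport break, each side sends its marker list to the other.
    a, b = Association(), Association()
    for m in ("open", "page-1", "page-2"):
        a.mark(m)
        b.mark(m)
    a.mark("page-3")                             # the break happened before b saw this
    print(a.resume_point(b.markers))             # -> "page-2"

As Karl notes, nothing here requires state in the protocol stack itself; only the applications remember what they did after the last agreed-upon checkpoint.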
But even so, I included the anecdote of my friend and his recent attempt at a "gaming" experience (actually a remote-desktop kind of situation) over the path between LA and Reno, NV. Even in fiber-rich US, the Internet doesn't work for some users when they try to do certain things. I speculate that we can see this every day now by watching TV news interviews with their occasional audio glitches, video freezing, etc. I can't "see" the traffic over their Internet path, but I surmise that some of those datagrams aren't getting to their destination soon enough to be useful, as was happening in my friend's experience.

That same behavior was reported by people like Steve Casner, Jim Forgie, et al as they tried to do real-time interactive voice over the early 1980s Internet. That experience led to the splitting of TCP into TCP/IP, and the creation of UDP to also run over IP and provide another type of service. Where TCP provided a virtual circuit, UDP provided raw datagrams, no guarantees at all. We realized that different kinds of uses motivated different kinds of network behavior.

Those additional services didn't require that the underlying datagram transport mechanisms (routers) necessarily provide multiple types of service. But we thought that such an architecture might be desirable. For example, to reduce wasted bandwidth, the TTL value would indicate how long a datagram could still be in transit and still be useful when it arrived at its destination. Routers could simply discard such datagrams immediately, even if their TTL was not yet zero, if they somehow knew that the datagram would not get to its destination before its TTL expired. We expected that might especially occur at the boundary between a fast LAN and slow WAN (ARPANET). Routers could also prioritize traffic if doing so would get it to its destination "in time", e.g., by placing such datagrams at the head of an output queue. Or perhaps they would route such traffic over a separate path - one path for bulk traffic, the other for express. We didn't know how best to do all that, i.e., it was Research.

Also, there were important mechanisms missing. E.g., TTL was defined in "hops" because the routers of the day had no means to measure time or synchronize across the net. Dave Mills took on that challenge and NTP was the result. I heard that his "fuzzballs" subsequently somehow used time instead of hops in their routing and queue management algorithms.

Placeholder mechanisms were put in place. TOS bits were a way for a host to indicate what kind of service was required for each datagram, after someone figured out what different services routers could provide. TTL was hops, but could readily become time later. Source Quench was a rudimentary mechanism to reflect congestion from somewhere inside the network back to a source so that it could "slow down". Some people however decided receipt of a Source Quench meant your last datagram got discarded -- so you should instantly retransmit it. Personally, I had no idea what my own TCP implementation should do on receiving a Source Quench; I think I incremented a counter somewhere.

All of the above occurred in the time between TCP2 and TCP4, with the expectation that the ongoing research would produce some answers which would be introduced into V5, V6, V7, etc.
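To make the intended router behavior described above concrete, here is a small toy sketch in Python. It treats TTL as remaining lifetime in seconds (as the fuzzballs reportedly did) and uses the old RFC 791 "low delay" TOS bit to pick a queue. All names and numbers are illustrative assumptions, not how any real router of the era (or today) was implemented.

    # Toy sketch: discard datagrams that cannot reach their destination before
    # their (time-based) TTL runs out; otherwise queue low-delay traffic first.
    import heapq

    LOW_DELAY = 0x10                      # RFC 791 "low delay" bit in the TOS octet

    class ToyRouter:
        def __init__(self):
            self.queue = []               # (priority, arrival order, datagram)
            self.seq = 0

        def forward(self, dgram, est_transit_secs):
            # Drop now rather than waste bandwidth carrying a datagram that
            # will be useless by the time it arrives.
            if dgram["ttl_secs"] <= est_transit_secs:
                return "discarded"
            prio = 0 if dgram["tos"] & LOW_DELAY else 1
            heapq.heappush(self.queue, (prio, self.seq, dgram))
            self.seq += 1
            return "queued"

    r = ToyRouter()
    print(r.forward({"ttl_secs": 0.2, "tos": LOW_DELAY}, est_transit_secs=0.5))  # discarded
    print(r.forward({"ttl_secs": 2.0, "tos": 0}, est_transit_secs=0.5))          # queued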
It's been 40 years, so it's quite possible that TOS bits, for example, are no longer needed, and that the new mechanisms have been well documented and standardized in the thousands of RFCs that have been written. But from a user's perspective, mechanisms and algorithms aren't useful until they're present and operating in all the equipment that's involved in whatever the user is trying to do. Are they there? Can't tell. The talking heads on TV still pixelate. My friend can't play his game.

The other point I was trying to make probably didn't come through clearly - just not enough time to explain it well. Networks are no longer just the collection of switching equipment and communications "lines" that interconnect them and the algorithms cast into their software. Much of the mechanism that in "the old days" you would find inside the switches is no longer there.

The ARPANET, and X.25 or other nets of the 80s, had elaborate internal mechanisms to implement virtual circuits, manage resources to avoid congestion, and "push back" on senders to force them to slow down when needed. In today's Internet, much of that mechanism has been "moved out" from the switches and into the "hosts", i.e., the billions of desktops, laptops, smartphones, and even refrigerators, TVs, attic fans, and such. Some is in the related OS' TCP/IP implementation. Some is in the applications as they try to figure out how to best use whatever the network is providing right now.

Moving such mechanisms from the "switches" to the "hosts" was, IMHO, a salient part of the Internet Experiment. It certainly made routers easier to build than switches. But was it a good idea to put such mechanisms into the billions of hosts?

To a network operator, trying to keep its customers satisfied, that means that it has to look not only at how the switches and lines are performing, but also at how those "network" mechanisms, now residing in the "hosts", are performing. It all has to work well for the user to be happy with the service, and the network operators happy with their equipment. That's what I tried to highlight in my anecdote about the network glitch and TCP retransmissions over a trans-Pacific path. The users weren't happy because the network was slow. The operators weren't happy, not only because the users were complaining, but because half of those expensive transoceanic circuits was being wasted. TCP does a wonderful job of keeping the data flowing despite all sorts of obstacles. It also does a wonderful job of hiding problems unless someone goes digging to see what's going on.

In the earliest ARPANET days, the NOC used to keep track of end-to-end delay, with a target of keeping it under 250 milliseconds (IIRC). Most users then interacted with their remote computers at typewriter terminals, and became unhappy if their keystrokes didn't echo back at least that quickly.

Today's Internet, admittedly from my anecdotal experience, seems to think 30 seconds is perfectly acceptable as long as all the datagrams get there eventually.

Jack

On 3/2/22 08:53, touch--- via Internet-history wrote: >> On Mar 2, 2022, at 8:22 AM, Noel Chiappa via Internet-history wrote: >> >>> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >>> One that I used in the talk was TOS, i.e., how should routers (and TCPs) >>> treat datagrams differently depending on their TOS values. >> I actually don't think that's that important any more (or multicast either).
>> TOS is only realy important in a network with resource limitations, or very >> different service levels. We don't have those any more - those limitations >> have just been engineered away. > Not all networks can be over-provisioned; DSCPs and traffic engineering are alive and well. > > They?ve just been buried so low that you don?t notice them. It?s like driving on cement and claiming no more need for iron rebar. > > Taking Clarke?s Third Law a step further*, ?any sufficiently useful technology fades into the background". > > Joe > > *?Any sufficiently advanced technology is indistinguishable from magic" > ? > Dr. Joe Touch, temporal epistemologist > www.strayalpha.com From salo at saloits.com Wed Mar 2 19:11:32 2022 From: salo at saloits.com (Timothy J. Salo) Date: Wed, 2 Mar 2022 21:11:32 -0600 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <6007f595-60ad-4445-a2e1-a0b97a3c2341@meetinghouse.net> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> <6007f595-60ad-4445-a2e1-a0b97a3c2341@meetinghouse.net> Message-ID: <7e44c19b-2459-b18f-36e3-799c17c9e473@saloits.com> On 3/2/2022 10:50 AM, Miles Fidelman via Internet-history wrote: > ... Let's just hope it doesn't become self-aware, and all the world's phones > don't all ring at the same time. ... Too late. Several years back, I was in a meeting (perhaps at the IETF) when half of the phones in the room rang simultaneously. It was only a test, of Wireless Emergency Alerts (WEAs). -tjs From salo at saloits.com Wed Mar 2 19:33:40 2022 From: salo at saloits.com (Timothy J. Salo) Date: Wed, 2 Mar 2022 21:33:40 -0600 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> Message-ID: On 3/2/2022 10:22 AM, Noel Chiappa via Internet-history wrote: > ... I actually don't think that's that important any more (or multicast either). > TOS is only realy important in a network with resource limitations, or very > different service levels. We don't have those any more - those limitations > have just been engineered away. ... I don't believe that these limitations have been engineered away in all parts of the Internet (depending on how far you believe the Internet extends). Mobile wireless networks may still have limited and/or rapidly varying bandwidths. I understand that many wireless tactical networks still have fairly limited bandwidths. mmWave 5G service seems to have made this worse. Bandwidths, even connectivity, can vary rapidly and dramatically, over short time frames and distances. See, for example, "A First Look at Commercial 5G Performance on Smartphones" -tjs From johnl at iecc.com Wed Mar 2 20:03:21 2022 From: johnl at iecc.com (John Levine) Date: 2 Mar 2022 23:03:21 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> Message-ID: <20220303040322.02468386532E@ary.qy> It appears that Noel Chiappa via Internet-history said: > > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: > > > One that I used in the talk was TOS, i.e., how should routers (and TCPs) > > treat datagrams differently depending on their TOS values. > >I actually don't think that's that important any more (or multicast either). >TOS is only realy important in a network with resource limitations, or very >different service levels. 
We don't have those any more - those limitations >have just been engineered away. That's not it, they came up against the impenetrable barrier of a business model. We understand how to price peering and transit of traffic where all packets are the same, but nobody has any idea how you do it where some packets are more valuable. I never figured out why multicast failed. It is bizarre that people are dumping cable service which has 100 channels multicast to all of the customers in favor of point-to-point service where you frequently have a zillion people streaming separate copies of the same thing, e.g., a football game. We fake it with CDNs that position servers inside retail networks but really, it's multicast. R's, John From touch at strayalpha.com Wed Mar 2 20:07:57 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Wed, 2 Mar 2022 20:07:57 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303040322.02468386532E@ary.qy> References: <20220303040322.02468386532E@ary.qy> Message-ID: ? Dr. Joe Touch, temporal epistemologist www.strayalpha.com > On Mar 2, 2022, at 8:03 PM, John Levine via Internet-history wrote: > > It appears that Noel Chiappa via Internet-history said: >>> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >> >>> One that I used in the talk was TOS, i.e., how should routers (and TCPs) >>> treat datagrams differently depending on their TOS values. >> >> I actually don't think that's that important any more (or multicast either). >> TOS is only realy important in a network with resource limitations, or very >> different service levels. We don't have those any more - those limitations >> have just been engineered away. > > That's not it, they came up against the impenetrable barrier of a > business model. We understand how to price peering and transit of > traffic where all packets are the same, but nobody has any idea how > you do it where some packets are more valuable. > > I never figured out why multicast failed. You answered this in your previous paragraph ? nobody ever figured out how to bill for it. I.e., how do you charge for a service that has distributed costs? Who pays and how to do you keep track? In comparison, source replication is easy - source pays to send each copy. ;-) Joe From touch at strayalpha.com Wed Mar 2 20:23:12 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Wed, 2 Mar 2022 20:23:12 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <3174ab1e-5dcc-3759-8297-83d7c80530ce@3kitty.org> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> <75360B27-933A-4E3C-AD15-B3A922AB2FEF@strayalpha.com> <3174ab1e-5dcc-3759-8297-83d7c80530ce@3kitty.org> Message-ID: <456137F5-EACF-4973-A8C8-663348B8E97B@strayalpha.com> Hi, Jack, > On 3/2/22 08:53, touch--- via Internet-history wrote: >>> On Mar 2, 2022, at 8:22 AM, Noel Chiappa via Internet-history wrote: >>> >>>> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >>>> One that I used in the talk was TOS, i.e., how should routers (and TCPs) >>>> treat datagrams differently depending on their TOS values. >>> I actually don't think that's that important any more (or multicast either). >>> TOS is only realy important in a network with resource limitations, or very >>> different service levels. We don't have those any more - those limitations >>> have just been engineered away. >> Not all networks can be over-provisioned; DSCPs and traffic engineering are alive and well. ... 
> Absolutely. The audience for my talk was technical-savvy people who are involved in building and/or operating the pieces of the Internet in places where fiber hasn't carpeted the area yet - places like Bangladesh et al, where they do have to pay attention to traffic engineering.

Underserved areas are everywhere - my car, my phone, apartments in buildings that are hard to retrofit, and (especially) people in places that don't warrant the shared cost of fiber, e.g.: https://vividmaps.com/us-population-density/ There are lots of places where there are too few people per square mile, but there ARE still people.

> But even so, I included the anecdote of my friend and his recent attempt at a "gaming" experience (actually a remote-desktop kind of situation) over the path between LA and Reno, NV. ... > That same behavior was reported by people like Steve Casner, Jim Forgie, et al as they tried to do real-time interactive voice over the early 1980s Internet.

Same behavior might not be from the same cause. New causes include bufferbloat, poorly provisioned access networks (miscalculated uplink vs. downlink capacity), etc. Four decades later, RAM is cheap and nearly everyone has a streaming video source, but those benefits came with problems that packet prioritization alone cannot solve. ...

> Placeholder mechanisms were put in place. TOS bits were a way for a host to indicate what kind of service was required for each datagram, after someone figured out what different services routers could provide. ... > But from a user's perspective, mechanisms and algorithms aren't useful until they're present and operating in all the equipment that's involved in whatever the user is trying to do. Are they there? Can't tell. The talking heads on TV still pixelate. My friend can't play his game.

TOS became DSCP and ECN; both are enabled by default in many OSes. Granted, they're not always ubiquitous in routers, but that could just be a tragedy-of-the-commons pricing issue; nobody wants to individually pay for something that has only group benefit. But again, the behavior you saw may have a root cause unrelated to packet priority. ...

> In the earliest ARPANET days, the NOC used to keep track of end-to-end delay, with a target of keeping it under 250 milliseconds (IIRC). Most users then interacted with their remote computers at typewriter terminals, and became unhappy if their keystrokes didn't echo back at least that quickly. > > Today's Internet, admittedly from my anecdotal experience, seems to think 30 seconds is perfectly acceptable as long as all the datagrams get there eventually.

That's almost definitely bufferbloat, FWIW.

Joe

From jack at 3kitty.org Wed Mar 2 20:32:20 2022 From: jack at 3kitty.org (Jack Haverty) Date: Wed, 2 Mar 2022 20:32:20 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303040322.02468386532E@ary.qy> References: <20220303040322.02468386532E@ary.qy> Message-ID: <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> IMHO, many things also happen for non-technical and non-business reasons. Since multicast was needed for some uses of the 'net, but it didn't actually get deployed widely in the Internet (whatever happened to the Mbone...?), people figured out another way to provide it by putting it in separate boxes (the CDNs) from the switches themselves.

I've always wondered if that same pattern drove the creation of TCP and use of datagram mode.
The ARPANET was the only WAN of the day, and its gurus were extremely reluctant to allow use of "uncontrolled packets" (aka datagrams) for fear of bringing down the whole network. I recently found a 1975-era BBN report analyzing the TCP proposal and concluding for DCA that it couldn't work.

So TCP was implemented in the host computers, where mere mortals could get at the code. Of course, TCP mechanisms duplicated the mechanisms already in the ARPANET. That's what I meant by "moving mechanisms from switches to hosts". But that did enable us a few years later to simply interconnect routers with wires, cutting the ARPANET out of the picture.

Jack

On 3/2/22 20:03, John Levine via Internet-history wrote: > It appears that Noel Chiappa via Internet-history said: >> > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >> >> > One that I used in the talk was TOS, i.e., how should routers (and TCPs) >> > treat datagrams differently depending on their TOS values. >> >> I actually don't think that's that important any more (or multicast either). >> TOS is only realy important in a network with resource limitations, or very >> different service levels. We don't have those any more - those limitations >> have just been engineered away. > That's not it, they came up against the impenetrable barrier of a > business model. We understand how to price peering and transit of > traffic where all packets are the same, but nobody has any idea how > you do it where some packets are more valuable. > > I never figured out why multicast failed. It is bizarre that people are dumping > cable service which has 100 channels multicast to all of the customers in favor > of point-to-point service where you frequently have a zillion people streaming > separate copies of the same thing, e.g., a football game. We fake it with CDNs > that position servers inside retail networks but really, it's multicast. > > R's, > John

From dhc at dcrocker.net Wed Mar 2 20:47:30 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Wed, 2 Mar 2022 20:47:30 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <7e44c19b-2459-b18f-36e3-799c17c9e473@saloits.com> References: <20220302162234.AB9DE18C08E@mercury.lcs.mit.edu> <6007f595-60ad-4445-a2e1-a0b97a3c2341@meetinghouse.net> <7e44c19b-2459-b18f-36e3-799c17c9e473@saloits.com> Message-ID: <71ad3d96-4c41-099d-9e49-795f564092fa@dcrocker.net> On 3/2/2022 7:11 PM, Timothy J. Salo via Internet-history wrote: > Several years back, I was in a meeting (perhaps at the IETF) when half > of the phones in the room rang simultaneously. It was only a test, > of Wireless Emergency Alerts (WEAs). that's the story that was provided. but it's possible that... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net

From jmamodio at gmail.com Thu Mar 3 05:28:10 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Thu, 3 Mar 2022 07:28:10 -0600 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <1c479fac-c88d-729a-5b3d-69aaa811f8c7@gmail.com> Message-ID: Of course you are right, it was never finished and it will never be as long as we keep fixing, improving, connecting, creating, innovating, etc. Since my early involvement with it I have always seen it as an experiment gone wild, successfully wild, that exceeds so far previous generations of telecommunication technologies, plus I believe Gutenberg would be really jealous of TBL.
-J On Tue, Mar 1, 2022 at 10:46 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Yeah, that was me. But the article's writer got some of the details > wrong. IIRC, I didn't "develop FTP"; not sure where that came from. I > think that Abhay Bhushan wrote RFC114, documenting FTP but I don't think > even he'd claim to have invented it. It was a team effort of lots of > ARPANET denizens of the day. Abhay's office was just a few doors away > from mine, and I do remember annoying him persistently until he agreed > to add some features we needed in order to implement email (MLFL, IIRC). > > Re the Long List -- I still have my notebooks from ICCB meetings where > that list was kept on the whiteboard-du-jour. At one meeting I copied > the list into my notebook. I'll see if I can find it and report back. > One that I used in the talk was TOS, i.e., how should routers (and TCPs) > treat datagrams differently depending on their TOS values. There were > 8 or so others on the list too. > > Jack Haverty > > > On 3/1/22 19:27, the keyboard of geoff goodfellow via Internet-history > wrote: > > vis-a-vis "We Were Not Done Yet (starting at ~1 hr, min 7) > > & > > "We Still Had Long List of Things That Had To Be Figured Out Someday > > and the technology kind of got out of our hands and > > went out into the user's environment before it was read to go" > > + > > "It Should All Just Work But The Reality Is It Still Doesn't There's A > Long > > List of Things That Have To Get Done" (start at ~1 hr, 12 mins) > > > > curious if anyone has a copy or memory of what > > the Long List of Things That Had To Be Figured Out Someday? > > / > > There's A Long List of Things That Have To Get Done? > > > > > > On Tue, Mar 1, 2022 at 4:18 PM Brian E Carpenter via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> https://www.theregister.com/2022/03/01/the_internet_is_so_hard/ > >> > >> (start watching the video at 46 minutes in) > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > >> > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From mfidelman at meetinghouse.net Thu Mar 3 08:18:33 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 3 Mar 2022 11:18:33 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303040322.02468386532E@ary.qy> References: <20220303040322.02468386532E@ary.qy> Message-ID: <71cd20d0-16f8-5b75-aeb9-6a1dbb86f538@meetinghouse.net> John Levine via Internet-history wrote: > It appears that Noel Chiappa via Internet-history said: >> > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >> >> > One that I used in the talk was TOS, i.e., how should routers (and TCPs) >> > treat datagrams differently depending on their TOS values. >> >> I actually don't think that's that important any more (or multicast either). >> TOS is only realy important in a network with resource limitations, or very >> different service levels. We don't have those any more - those limitations >> have just been engineered away. > That's not it, they came up against the impenetrable barrier of a > business model. We understand how to price peering and transit of > traffic where all packets are the same, but nobody has any idea how > you do it where some packets are more valuable. 
> > I never figured out why multicast failed. It is bizarre that people are dumping > cable service which has 100 channels multicast to all of the customers in favor > of point-to-point service where you frequently have a zillion people streaming > separate copies of the same thing, e.g., a football game. We fake it with CDNs > that position servers inside retail networks but really, it's multicast. Well... probably because carriers were trying to charge by the bit/packet, and vendors were trying to sell centralized, proprietary videoconferencing services.? Interoperable multicast makes it all too easy to distribute such things.? (Consider the demise of CuSeeMe and IRC - can't see that Zoom or the myriad of chat services improve on the originals.)? Sigh... Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From mfidelman at meetinghouse.net Thu Mar 3 08:21:58 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 3 Mar 2022 11:21:58 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: Jack Haverty via Internet-history wrote: > IMHO, many things also happen for non-technical and non-business > reasons.? Since multicast was needed for some uses of the 'net, but it > didn't actually get deployed widely in the Internet (whatever happened > to the Mbone...?), people figured out another way to provide it by > putting it in separate boxes (the CDNs) from the switches themselves. Come to think of it, for distributed simulation, DIS relies on multicast, over the Defense Simulation Internet (at least it did, when I was at MAK).? Meanwhile, MAK's DIS/HLA libraries can provide a multi-cast overlay for both DIS and HLA. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From galmes at tamu.edu Thu Mar 3 09:05:43 2022 From: galmes at tamu.edu (Guy Almes) Date: Thu, 3 Mar 2022 12:05:43 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303040322.02468386532E@ary.qy> References: <20220303040322.02468386532E@ary.qy> Message-ID: <84fd27ca-10ea-4a77-1c09-852b659d9538@gmail.com> Hi John, This is an interesting thread. Your two main points below are at the blurry boundary between historical and technical. Let me put in my two cents' worth. Before getting into the points about TOS and multicast, I'll just recall several of the strengths of the Internet architecture, including the wonderful efficiency and scalability that comes from keeping the complicated stuff at the edge (not at the core). Think of all the times when we've rehearsed this in the context of why the TCP/IP Internet prevailed over connection-oriented networks. 
On 3/2/22 11:03 PM, John Levine via Internet-history wrote: > It appears that Noel Chiappa via Internet-history said: >> > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >> >> > One that I used in the talk was TOS, i.e., how should routers (and TCPs) >> > treat datagrams differently depending on their TOS values. >> >>I actually don't think that's that important any more (or multicast either). >>TOS is only realy important in a network with resource limitations, or very >>different service levels. We don't have those any more - those limitations >>have just been engineered away. > > That's not it, they came up against the impenetrable barrier of a > business model. We understand how to price peering and transit of > traffic where all packets are the same, but nobody has any idea how > you do it where some packets are more valuable. So think about TOS. The hard problem is not so much the preferential packet forwarding (though that does add some unhelpful complexity to packet forwarders). The hard problem is knowing which packets to prefer. And the ISPs would be glad to charge extra for enhanced TOS / QOS. Further, to make a positive statement, in some corporate intranet applications, the idea works fine. But, to do it in the public Internet would pull us in the direction of the complexities of the dreaded connection-oriented network architectures. This calls to mind the shock some of us Internet engineers had when talking to some telco engineers in the late 80s and being told that 30% of the cost of their infrastructure was to support billing. > > I never figured out why multicast failed. It is bizarre that people are dumping > cable service which has 100 channels multicast to all of the customers in favor > of point-to-point service where you frequently have a zillion people streaming > separate copies of the same thing, e.g., a football game. We fake it with CDNs > that position servers inside retail networks but really, it's multicast. The situation with network-layer Multicast is more technical/operational. We have many in our community who are good at intra-AS routing and the more difficult but doable us of BGP to set up inter-AS routing. But setting up Multicast routing is much harder and less intuitive. Engineers who are quite good at intra- and inter-AS unicast IP routing often find multicast routing very confusing. The problem isn't the idea of Multicast. The problem is the enormous hidden costs of doing it at the IP layer. In contrast, there are many successful examples of applications that do Multicast, but at the application layer. Zoom is an example, but CDNs and even the Usenet nntp servers of the 1980s are another. Coming back to TOS and Multicast together, refraining from burdening the router infrastructure of the Internet with solving these problems at the IP network layer is part of what allows the Internet to continue to grow in its scalability and performance. But now also coming back to the blend of technical and historical themes of the thread, I'd be interested in the thoughts of others. Is my critique of TOS and network-layer Multicast fair / correct? And, if so, historically, how did thought evolve on these issues? -- Guy > > R's, > John > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!KwNVnqRv!WVMiL6xyJa6_itoASH6-sGV4p-N2_M6sfqShVAMDVgc8NgiJNQwI6IYSvApCzw$ > > . 
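Several messages in this part of the thread (John Levine's billing point and Guy's note about how hard inter-domain multicast routing is) come back to the same property: an IP multicast receiver simply asks its local network to join a group, and the source never learns who joined or pays for the copies. As a rough illustration, here is a minimal receiver-side join, assuming a Linux-style sockets API; the group address and port are arbitrary examples, not anything from the thread.

    # Minimal sketch of a receiver-driven multicast join (the IGMP model):
    # the receiver joins a group; replication happens inside the network.
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004   # administratively scoped example group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # ip_mreq: group address + local interface (0.0.0.0 = let the kernel pick)
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, src = sock.recvfrom(1500)   # blocks until a datagram arrives on the group
    print(len(data), "bytes from", src)

Nothing in that exchange identifies the receiver to the content provider or meters the branches of the distribution tree, which is one way to see why the billing and business-model questions raised above were so hard.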
From mgrant at grant.org Thu Mar 3 09:42:31 2022 From: mgrant at grant.org (Michael Grant) Date: Thu, 3 Mar 2022 12:42:31 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: Jack Haverty via Internet-history wrote: > IMHO, many things also happen for non-technical and non-business > reasons. Since multicast was needed for some uses of the 'net, but it > didn't actually get deployed widely in the Internet (whatever happened > to the Mbone...?), people figured out another way to provide it by > putting it in separate boxes (the CDNs) from the switches themselves.

From my memory, there were several different ways of doing multicast and it was a bit of a mess. IGMP, PIM, others, I'm sure someone can enumerate them all. Almost no ISP supported multicast and the few that did, not all were the same and very few routers supported it. Then there was the issue that it wasn't global. You couldn't expect just to get something multicast to you from anywhere on the internet. The address space (224.0.0.0 to 239.255.255.255) was very small, I never understood how that was supposed to work in a global context. You could sort of get it working within a LAN but there was no reason to save the bandwidth with switches everywhere.

But technical stuff aside, the final nail in the coffin was that the content providers wanted to know who they were broadcasting to so they could advertize to them and get their data and sell it. Also to be able to sell the content behind a paywall. And then there's content on demand vs live streaming. You can't pause a multicast stream indefinitely. In the end, trying to save bandwidth using multicasting became harder than just using unicast.

Michael Grant

From louie at transsys.com Thu Mar 3 10:49:02 2022 From: louie at transsys.com (Louis Mamakos) Date: Thu, 3 Mar 2022 13:49:02 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: The small amount of multicast address space really isn't a problem in practice. For any successful, scalable multicast deployment, you'll end up with source-rooted trees and the forwarding state in the routers are (S,G) tuples. And multicast only makes sense for a large number of receivers because of all the effort required to instantiate the forwarding state in the control plane of your network. The larger problem is that multicast requires a large number of receivers that want to simultaneously receive the traffic. This is at odds with personalized content.

I did a multicast product at UUNET so many years ago now, back when the access was dial-up users. How do you sell this? Content providers want to reach content consumers everywhere. So multicast distribution is an optimization, rather than a central part of the solution to this problem. The customer that I worked with at the time was essentially in the "Internet Radio" business. They selected a subset of all their live streams for distribution by multicast on our network, with about 250K multicast-enabled dial-up ports.
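A rough picture of the (S,G) forwarding state Louis describes above, as a toy Python table; the addresses and interface names are invented, and real PIM state machines are of course far more involved.

    # Toy (source, group) forwarding table: for each (S,G) pair a router
    # remembers which outgoing interfaces have downstream receivers and
    # replicates packets onto exactly those interfaces.
    forwarding_state = {
        ("192.0.2.10", "239.1.2.3"):   {"eth1", "eth2"},
        ("198.51.100.7", "239.9.9.9"): {"eth3"},
    }

    def replicate(src, group, packet):
        for oif in forwarding_state.get((src, group), set()):
            print(f"forwarding {len(packet)}-byte packet for ({src},{group}) out {oif}")

    replicate("192.0.2.10", "239.1.2.3", b"x" * 100)

Every additional source and group adds entries like these to every router on the tree, which is the control-plane cost mentioned above.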
Their client software would use some program guide, distributed out-of-band for their customers to navigate and select content. The client software also subscribed to a multicast group to listen for "beacon" messages to discover if a multicast stream was possibly available. (And we just transmitted NTP time announcements on that group every few seconds..) The client would attempt to join the group if possible, or fall back to a unicast stream. This was completely at odds with the "MBONE" experimentation going on at the time. There were content announcement sent to a multicast group by each source, and some client applications that listened for these things. This wasn't a great model for commercial adoption if the content provider wanted to reach the most eyeballs, as it reduced the addressable segment of his market to a very small subset. This was back in the mid to later 1990's, when dial-up V.90 modems were the common means of Internet access for residential end-users. I spent time with our finance people trying to figure out costs of running a platform like this, so we'd have at least something to base retail pricing on and ideally produce a positive margin. So it was an exercise to understand the span and extent of a multicast distribution tree across backbone links for any given stream from a source, and some hand-waving over the cost of the forwarding state, back when memory was expensive and you had state based on both source and destination occupying resources. At the time, this was not quite top-of-mind, but something to think hard about, having had to upgrade CPU boards in many routers as the default-free Internet routing table was growing quite rapidly in those days. And back then, inter-domain multicast was quite... a hack. Gluing together sparse-mode PIM IGP infrastructure wasn't not at all obvious at that time. Of course BGP got co-opted yet again as the all-purpose container for carrying router state, but you still had problem before IGMPv3 and being able to specify a source when joining a multlicast group. So wonderful hacks like inter-domain source discovery protocols to forward discovered sources in groups towards the PIM RP. Madness. IGMPv3 made more of this possible to imagine working, though I had moved on to other things and stopped following in detail what happened in the interdomain multicast routing space by then. Louis Mamakos On Thu, Mar 3, 2022 at 12:42 PM Michael Grant via Internet-history < internet-history at elists.isoc.org> wrote: > Jack Haverty via Internet-history wrote: > > IMHO, many things also happen for non-technical and non-business > > reasons. Since multicast was needed for some uses of the 'net, but it > > didn't actually get deployed widely in the Internet (whatever happened > > to the Mbone...?), people figured out another way to provide it by > > putting it in separate boxes (the CDNs) from the switches themselves. > > From my memory, there were several different ways of doing multicast > and it was a bit of a mess. IGMP, PIM, others, I'm sure someone can > enumerate them all. Almost no ISP supported multicast and the few > that did, not all were the same and very few routers supported it. > > Then there was the issue that it wasn't global. You couldn't expect > just to get something multicast to you from anywhere on the internet. > > The address space (224.0.0.0 to 239.255.255.255) was very small, I > never understood how that was supposed to work in a global context. 
> > You could sort of get it working within a LAN but there was no reason > to save the bandwidth with switches everywhere. > > But technical stuff aside, the final nail in the coffin was that the > content providers wanted to know who they were broadcasting to so they > could advertize to them and get their data and sell it. Also to be > able to sell the content behind a paywall. > > And then there's content on demand vs live streaming. You can't pause > a multicast stream indefinitely. > > In the end, trying to save bandwidth using multicasting became harder > than just using unicast. > > Michael Grant > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dhc at dcrocker.net Thu Mar 3 11:10:04 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 3 Mar 2022 11:10:04 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> > The small amount of multicast address space really isn't a problem in > practice. For any successful, scalable multicast deployment, you'll end up > with source-rooted trees and the forwarding state in the routers are (S,G) > tuples. Broadly, for anything like TOS or multicast, there are two different sets of issues, either of which can easily create showstoppers. First is, of course, the mechanics. What is the functional design? What is the basis for believing it will satisfy real-world needs? How robust will it be? How easy to operate? Etc. Second is gaining adoption across a very large range of entirely independent operators. What are their immediate, compelling business incentives? As we keep seeing, getting adoption of anything across an Internet infrastructure service, is more than a little challenging. Cable TV's multicast is done within the span of a single administrative control. And it's a relatively stable, constrained set of traffic. Generic Internet multicast is multiple administrations, with highly variable sets of traffic, across many administrations. Very, very different game. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jnc at mercury.lcs.mit.edu Thu Mar 3 12:02:41 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 3 Mar 2022 15:02:41 -0500 (EST) Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished Message-ID: <20220303200241.F2EF618C097@mercury.lcs.mit.edu> > From: Michael Grant > The address space (224.0.0.0 to 239.255.255.255) was very small, I > never understood how that was supposed to work in a global context. The whole Internet is an _experiment_ that grew out out of control. Some aspects (e.g. the moving of connection state into the endpoints) works well. Some of it has been bodged into working (e.g. the addressing and routing architecture). Multicast is another experient, one that was 1/4-baked - and the addressing shows that. > From: Guy Almes > The problem is the enormous hidden costs of doing it at the IP layer. > In contrast, there are many successful examples of applications that do > Multicast, but at the application layer. I always thought that it was a mistake to try and do multicast completely integrated into the internet layer. 
It made a lot more sense to me to do it as a sub-layer on top of the internet layer, using multicast distribution nodes: logically/architecturally separate, but perhaps co-located in switching nodes in implementations. (The way Van's fast TCP had the logically separate IP and TCP layers integrated in the actual implementation.) I also thought that it would be better to have separate namespaces for the groups (i.e. to name their members), and to identify their distribution trees. All OBE, of course. As has been observed, the Internet has gotten too large to evolve. Noel From vint at google.com Thu Mar 3 12:08:00 2022 From: vint at google.com (Vint Cerf) Date: Thu, 3 Mar 2022 15:08:00 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303200241.F2EF618C097@mercury.lcs.mit.edu> References: <20220303200241.F2EF618C097@mercury.lcs.mit.edu> Message-ID: multicast would have been useful if people wanted to record and playback, but CDNs and unicast proved to be faster to implement. v On Thu, Mar 3, 2022 at 3:02 PM Noel Chiappa via Internet-history < internet-history at elists.isoc.org> wrote: > > From: Michael Grant > > > The address space (224.0.0.0 to 239.255.255.255) was very small, I > > never understood how that was supposed to work in a global context. > > The whole Internet is an _experiment_ that grew out of control. Some > aspects (e.g. the moving of connection state into the endpoints) work > well. > Some of it has been bodged into working (e.g. the addressing and routing > architecture). Multicast is another experiment, one that was 1/4-baked - and > the addressing shows that. > > > From: Guy Almes > > > The problem is the enormous hidden costs of doing it at the IP layer. > > In contrast, there are many successful examples of applications that > do > > Multicast, but at the application layer. > > I always thought that it was a mistake to try and do multicast completely > integrated into the internet layer. It made a lot more sense to me to do it > as a sub-layer on top of the internet layer, using multicast distribution > nodes: logically/architecturally separate, but perhaps co-located in > switching nodes in implementations. (The way Van's fast TCP had the > logically > separate IP and TCP layers integrated in the actual implementation.) > > I also thought that it would be better to have separate namespaces for the > groups (i.e. to name their members), and to identify their distribution > trees. All OBE, of course. As has been observed, the Internet has gotten > too > large to evolve. > > Noel > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From mfidelman at meetinghouse.net Thu Mar 3 12:31:31 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 3 Mar 2022 15:31:31 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <20220303200241.F2EF618C097@mercury.lcs.mit.edu> References: <20220303200241.F2EF618C097@mercury.lcs.mit.edu> Message-ID: <623b5fd2-7a0c-672c-9046-b131f3f3a466@meetinghouse.net> Noel Chiappa via Internet-history wrote: > > From: Michael Grant > > > The address space (224.0.0.0 to 239.255.255.255) was very small, I > > never understood how that was supposed to work in a global context. 
> > The whole Internet is an _experiment_ that grew out of control. Well, that can be said of pretty much anything new. For that matter it can be said about life, the universe, and everything. :-) > Noel -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From jeanjour at comcast.net Thu Mar 3 12:35:20 2022 From: jeanjour at comcast.net (John Day) Date: Thu, 3 Mar 2022 15:35:20 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: Jack, Only now getting around to responding to this. BBN was correct. It wouldn't work with just that. TCP was based on the work in 1971-72 by CYCLADES at IRIA in France, now INRIA. Having seen the ARPANET, CYCLADES was building a network to do research on networks. Dave Walden spent some time working with them. The team's plan was to determine the minimal assumptions to get a packet through a network. Then their research program was to see what else was needed. Start with the simplest and see how much more was needed. (What we would call a clean-slate approach today). What they found was that a 'best effort' datagram with an end-to-end transport in the hosts did everything needed at the time, apart from two open questions: routing and congestion control. Of course some work had been done on routing but we are still looking for better ways. They recognized immediately that congestion would be an issue and began work on that. (Remember, Baran had said there would be distinct advantages to a network dedicated to data, rather than voice. It wouldn't be until much later that more would be needed when that view was relaxed.) Around 1972 or so, CYCLADES awarded a contract to the University of Waterloo to do research and simulations on those two topics. The primary people were Merek Irland, his advisor Eric Manning, and a few others. They got some interesting results which were factored into the CIGALE network implementation. (CIGALE was the network of switches for CYCLADES.) However, CYCLADES was shut down in the late 70s and Irland died of lung cancer in '78 and as near as I can tell the work was forgotten. Irland did publish results at the 4th Data Communications Symposium in 1975 in Quebec and was co-author on two papers at an IRIA-sponsored conference, Flow Control in Computer Networks in 1979. (The proceedings are dedicated to Irland's memory.) There are a couple of other papers by him, but mostly there are his thesis and reports at Waterloo and INRIA, so far as I know none of which are on-line. Take care, John > On Mar 2, 2022, at 23:32, Jack Haverty via Internet-history wrote: > > IMHO, many things also happen for non-technical and non-business reasons. Since multicast was needed for some uses of the 'net, but it didn't actually get deployed widely in the Internet (whatever happened to the Mbone...?), people figured out another way to provide it by putting it in separate boxes (the CDNs) from the switches themselves. > > I've always wondered if that same pattern drove the creation of TCP and use of datagram mode. 
The ARPANET was the only WAN of the day, and its gurus were extremely reluctant to allow use of "uncontrolled packets" (aka datagrams) for fear of bringing down the whole network. I recently found a 1975-era BBN report analyzing the TCP proposal and concluding for DCA that it couldn't work. > > So TCP was implemented in the host computers, where mere mortals could get at the code. Of course, TCP mechanisms duplicated the mechanisms already in the ARPANET. That's what I meant by "moving mechanisms from switches to hosts:. But that did enable us a few years later to simply interconnect routers with wires, cutting the ARPANET out of the picture. > > Jack > > On 3/2/22 20:03, John Levine via Internet-history wrote: >> It appears that Noel Chiappa via Internet-history said: >>> > On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote: >>> >>> > One that I used in the talk was TOS, i.e., how should routers (and TCPs) >>> > treat datagrams differently depending on their TOS values. >>> >>> I actually don't think that's that important any more (or multicast either). >>> TOS is only realy important in a network with resource limitations, or very >>> different service levels. We don't have those any more - those limitations >>> have just been engineered away. >> That's not it, they came up against the impenetrable barrier of a >> business model. We understand how to price peering and transit of >> traffic where all packets are the same, but nobody has any idea how >> you do it where some packets are more valuable. >> >> I never figured out why multicast failed. It is bizarre that people are dumping >> cable service which has 100 channels multicast to all of the customers in favor >> of point-to-point service where you frequently have a zillion people streaming >> separate copies of the same thing, e.g., a football game. We fake it with CDNs >> that position servers inside retail networks but really, it's multicast. >> >> R's, >> John > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Thu Mar 3 12:53:41 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Mar 2022 09:53:41 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <71cd20d0-16f8-5b75-aeb9-6a1dbb86f538@meetinghouse.net> References: <20220303040322.02468386532E@ary.qy> <71cd20d0-16f8-5b75-aeb9-6a1dbb86f538@meetinghouse.net> Message-ID: Miles, On 04-Mar-22 05:18, Miles Fidelman via Internet-history wrote: ... > Well... probably because carriers were trying to charge by the > bit/packet, and vendors were trying to sell centralized, proprietary > videoconferencing services.? Interoperable multicast makes it all too > easy to distribute such things.? (Consider the demise of CuSeeMe and IRC > - can't see that Zoom or the myriad of chat services improve on the > originals.)? Sigh... Zoom has a user interface that even Fine Arts professors can use. Apart from that, it's nothing really new, but that's enough. Same explanation for all the browser-based chat services. 
Brian From scott.brim at gmail.com Thu Mar 3 12:53:49 2022 From: scott.brim at gmail.com (Scott Brim) Date: Thu, 3 Mar 2022 15:53:49 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: A little earlier than that, we wanted to do multicast for CU-SeeMe. I was a proponent of L3 multicast but it was just so much simpler to do "something like" multicast at the application layer, cruising over whatever environment lower layers gave us, dynamically adjusting compression to fit throughput on each subtree, etc. On Thu, Mar 3, 2022 at 1:49 PM Louis Mamakos via Internet-history < internet-history at elists.isoc.org> wrote: > The small amount of multicast address space really isn't a problem in > practice. For any successful, scalable multicast deployment, you'll end up > with source-rooted trees and the forwarding state in the routers are (S,G) > tuples. And multicast only makes sense for a large number of receivers > because of all the effort required to instantiate the forwarding state in > the control plane of your network. > > The larger problem is that multicast requires a large number of receivers > that want to simultaneously receive the traffic. This is at odds with > personalized content. > > I did a multicast product at UUNET so many years ago now, back when the > access was dial-up users. How do you sell this? Content providers want to > reach content consumers everywhere. So multicast distribution is an > optimization, rather than a central part of the solution to this problem. > The customer that I worked with at the time was essentially in the > "Internet Radio" business. They selected a subset of all their live > streams for distribution by multicast on our network, with about 250K > multicast-enabled dial-up ports. Their client software would use some > program guide, distributed out-of-band for their customers to navigate and > select content. The client software also subscribed to a multicast group > to listen for "beacon" messages to discover if a multicast stream was > possibly available. (And we just transmitted NTP time announcements on > that group every few seconds..) The client would attempt to join the > group if possible, or fall back to a unicast stream. > > This was completely at odds with the "MBONE" experimentation going on at > the time. There were content announcement sent to a multicast group by > each source, and some client applications that listened for these things. > This wasn't a great model for commercial adoption if the content provider > wanted to reach the most eyeballs, as it reduced the addressable segment of > his market to a very small subset. > > This was back in the mid to later 1990's, when dial-up V.90 modems were the > common means of Internet access for residential end-users. I spent time > with our finance people trying to figure out costs of running a platform > like this, so we'd have at least something to base retail pricing on and > ideally produce a positive margin. So it was an exercise to understand the > span and extent of a multicast distribution tree across backbone links for > any given stream from a source, and some hand-waving over the cost of the > forwarding state, back when memory was expensive and you had state based on > both source and destination occupying resources. 
At the time, this was not > quite top-of-mind, but something to think hard about, having had to upgrade > CPU boards in many routers as the default-free Internet routing table was > growing quite rapidly in those days. > > And back then, inter-domain multicast was quite... a hack. Gluing together > sparse-mode PIM IGP infrastructure wasn't not at all obvious at that time. > Of course BGP got co-opted yet again as the all-purpose container for > carrying router state, but you still had problem before IGMPv3 and being > able to specify a source when joining a multlicast group. So wonderful > hacks like inter-domain source discovery protocols to forward discovered > sources in groups towards the PIM RP. Madness. IGMPv3 made more of this > possible to imagine working, though I had moved on to other things and > stopped following in detail what happened in the interdomain multicast > routing space by then. > > Louis Mamakos > > On Thu, Mar 3, 2022 at 12:42 PM Michael Grant via Internet-history < > internet-history at elists.isoc.org> wrote: > > > Jack Haverty via Internet-history wrote: > > > IMHO, many things also happen for non-technical and non-business > > > reasons. Since multicast was needed for some uses of the 'net, but it > > > didn't actually get deployed widely in the Internet (whatever happened > > > to the Mbone...?), people figured out another way to provide it by > > > putting it in separate boxes (the CDNs) from the switches themselves. > > > > From my memory, there were several different ways of doing multicast > > and it was a bit of a mess. IGMP, PIM, others, I'm sure someone can > > enumerate them all. Almost no ISP supported multicast and the few > > that did, not all were the same and very few routers supported it. > > > > Then there was the issue that it wasn't global. You couldn't expect > > just to get something multicast to you from anywhere on the internet. > > > > The address space (224.0.0.0 to 239.255.255.255) was very small, I > > never understood how that was supposed to work in a global context. > > > > You could sort of get it working within a LAN but there was no reason > > to save the bandwidth with switches everywhere. > > > > But technical stuff aside, the final nail in the coffin was that the > > content providers wanted to know who they were broadcasting to so they > > could advertize to them and get their data and sell it. Also to be > > able to sell the content behind a paywall. > > > > And then there's content on demand vs live streaming. You can't pause > > a multicast stream indefinitely. > > > > In the end, trying to save bandwidth using multicasting became harder > > than just using unicast. 
> > > > Michael Grant > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Thu Mar 3 13:08:12 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Mar 2022 10:08:12 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> Message-ID: Actually inter-ISP diffserv is technically well defined now as is mapping to MPLS and 5G classes of service. But indeed, the issue for diffserv and multicast is the same: there is no cost-effective business model across ISP boundaries. Flat-rate best-effort capacity-based charging is still vastly cheaper and simpler to implement. I can't see any reason that will ever change. Regards Brian On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: > >> The small amount of multicast address space really isn't a problem in >> practice. For any successful, scalable multicast deployment, you'll end up >> with source-rooted trees and the forwarding state in the routers are (S,G) >> tuples. > > > Broadly, for anything like TOS or multicast, there are two different > sets of issues, either of which can easily create showstoppers. > > First is, of course, the mechanics. What is the functional design? > What is the basis for believing it will satisfy real-world needs? How > robust will it be? How easy to operate? Etc. > > Second is gaining adoption across a very large range of entirely > independent operators. What are their immediate, compelling business > incentives? > > As we keep seeing, getting adoption of anything across an Internet > infrastructure service, is more than a little challenging. > > Cable TV's multicast is done within the span of a single administrative > control. And it's a relatively stable, constrained set of traffic. > Generic Internet multicast is multiple administrations, with highly > variable sets of traffic, across many administrations. Very, very > different game. > > d/ > From mfidelman at meetinghouse.net Thu Mar 3 13:34:26 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 3 Mar 2022 16:34:26 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> Message-ID: <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> But why does there NEED to be a separate charging scheme? Seems to me that supporting multicast is a LOT cheaper than supporting all that extra traffic generated by lots of redundant traffic.? Multicast would also likely reduce pressure on chokepoints, at ISP boundaries. Miles Fidelman Brian E Carpenter via Internet-history wrote: > Actually inter-ISP diffserv is technically well defined now > as is mapping to MPLS and 5G classes of service. But indeed, > the issue for diffserv and multicast is the same: there is > no cost-effective business model across ISP boundaries. > Flat-rate best-effort capacity-based charging is still vastly > cheaper and simpler to implement. 
I can't see any reason that > will ever change. > > Regards > ?? Brian > > On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: >> >>> The small amount of multicast address space really isn't a problem in >>> practice.? For any successful, scalable multicast deployment, you'll >>> end up >>> with source-rooted trees and the forwarding state in the routers are >>> (S,G) >>> tuples. >> >> >> Broadly, for anything like TOS or multicast, there are two different >> sets of issues, either of which can easily create showstoppers. >> >> First is, of course, the mechanics.? What is the functional design? >> What is the basis for believing it will satisfy real-world needs?? How >> robust will it be?? How easy to operate?? Etc. >> >> Second is gaining adoption across a very large range of entirely >> independent operators.? What are their immediate, compelling business >> incentives? >> >> As we keep seeing, getting adoption of anything across an Internet >> infrastructure service, is more than a little challenging. >> >> Cable TV's multicast is done within the span of a single administrative >> control.? And it's a relatively stable, constrained set of traffic. >> Generic Internet multicast is multiple administrations, with highly >> variable sets of traffic, across many administrations.? Very, very >> different game. >> >> d/ >> -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From gnu at toad.com Thu Mar 3 16:20:02 2022 From: gnu at toad.com (John Gilmore) Date: Thu, 03 Mar 2022 16:20:02 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> Message-ID: <15303.1646353202@hop.toad.com> Jack is right that the Internet was an experiment that was never finished. In the last 2 or 3 years I have been trying to polish up a few of the odd corners of the Internet experiment. And the roadblocks to finishing it are not the ones you might expect. Louis Mamakos wrote: > So it was an exercise to understand the > span and extent of a multicast distribution tree across backbone links for > any given stream from a source, and some hand-waving over the cost of the > forwarding state, back when memory was expensive and you had state based on > both source and destination occupying resources. At the time, this was not > quite top-of-mind, but something to think hard about, having had to upgrade > CPU boards in many routers as the default-free Internet routing table was > growing quite rapidly in those days. Back in the Mbone days I had multicast running in my basement over a T1. But upstream, no major ISP would enable it in their Cisco routers, mixing the packets into the default data stream. This was because Cisco's multicast routing code was much less mature than the unicast IP routing and forwarding code that handled 99% of their paying traffic. Having to update or merely reboot their core routers once a month was far too much of an operational ask. The few ISPs who offered multicast, offered it via manually configured tunnels connected to side-servers that were not their core routers. These would only disrupt their few multicast customers when they needed a reboot. 
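For readers who never saw one, the manual plumbing John describes was typically a DVMRP tunnel declared in mrouted's configuration. A rough sketch from memory (the addresses are invented and the exact option set varied by version):

    # /etc/mrouted.conf: DVMRP tunnel to the remote Mbone router
    tunnel 192.0.2.1 198.51.100.7 metric 1 threshold 64

Each such tunnel had to be negotiated with the far end and typed in by hand, which goes some way toward explaining why the Mbone stayed a research-community overlay rather than a production service.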
Michael Grant wrote: > The address space (224.0.0.0 to 239.255.255.255) was very small, I > never understood how that was supposed to work in a global context. Global multicast turned out (for whatever reasons) to be so infrequently used that more than half of that "very small" space has never even been allocated to anyone by IETF or IANA, let alone used in the real Internet. This among other things reconfirms for me that *unicast packets* are the key success of the Internet experiment. Unicast traffic outnumbers all other kinds of traffic by orders of magnitude. Unicast address demand is far greater than demand for multicast, broadcast, loopback, or reserved addresses. It seems obvious and yet most people don't think of it that way. As a result, I propose that at least the unused half of the multicast address space should be re-allocated to unicast use. Recently I have tried to improve some corner cases in the Internet experiment. In particular, reforming the mis-allocation of scarce address space to things other than unicast traffic. There are four major address blocks that can't be used for unicast traffic, at a time when the world is gasping for usable unicast IPv4 addresses. These are 0/8 (a failed DHCP), 127/8 (loopback), 224/4 (multicast), and 240/4 ("for future use"). In addition there are two allocations in every subnet that are unusable for unicast: the highest (the subnet broadcast address, deprecated in 1999), and the zeroth (the second subnet broadcast address, reserved for 4.2BSD Unix compatibility in 1989). So far nobody has raised any serious arguments that these corners of the IPv4 experiment should not be reformed. The most prevalent argument is not serious -- that since not everyone has adopted IPv6, we should therefore abandon seeking all improvements of IPv4, in the hope that inaction will provide an epsilon more impetus toward IPv6. The code changes required are typically one to two lines of code per address block, usually removing a special-case so the addresses will be treated like the unicast default case. Yet the proposals have not progressed in the IETF, only among OS implementers. My small team has written Internet-Drafts that cover unicast use of 0/8, 127/8, 240/4, and the lowest subnet address, which are pending in the "intarea" working group. If you believe that the Internet experiment isn't over, and that there are ways we can find consensus to improve the Internet that we have, then help to support or improve those drafts. Please show the doubters that it's safe to gently evolve our experiment. John Gilmore IPv4 Unicast Extensions Project From touch at strayalpha.com Thu Mar 3 17:40:31 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Thu, 3 Mar 2022 17:40:31 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> Message-ID: <95795439-612F-4948-A165-A56236A1792D@strayalpha.com> Dr. Joe Touch, temporal epistemologist www.strayalpha.com > On Mar 3, 2022, at 1:34 PM, Miles Fidelman via Internet-history wrote: > > But why does there NEED to be a separate charging scheme? > > Seems to me that supporting multicast is a LOT cheaper than supporting all that extra traffic generated by lots of redundant traffic. 
Multicast would also likely reduce pressure on chokepoints, at ISP boundaries. IMO because it's too easy for ME to do less work by asking YOU to do more. ISPs "trade" traffic; if IN balances OUT, no money needs to change hands. Imbalances are easy to price. Multicast isn't, because I can send you one packet that can cost you one, 100, or 1,000,000 packets of capacity, and you won't know until it traverses your network. It's too expensive to keep track of that so it can be fairly charged. Joe From louie at transsys.com Thu Mar 3 18:15:08 2022 From: louie at transsys.com (Louis Mamakos) Date: Thu, 03 Mar 2022 21:15:08 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> Message-ID: <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> How much simultaneous multicast traffic do you think there could be over wide-area transit networks and across transit providers? And that ship has sailed. CDNs solve this problem now, for both live and near-live content as well as delayed and on-demand content. You don't need to augment your router with PIM-SM or some other multicast routing protocol, or figure out how to make interdomain routing work. That's a lot of work for solving the general case of multicast IP across the global internet, to displace the CDN-based solution that already works and solves the higher-volume adjacent problem space. 
But indeed, >> the issue for diffserv and multicast is the same: there is >> no cost-effective business model across ISP boundaries. >> Flat-rate best-effort capacity-based charging is still vastly >> cheaper and simpler to implement. I can't see any reason that >> will ever change. >> >> Regards >> ?? Brian >> >> On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: >>> >>>> The small amount of multicast address space really isn't a problem >>>> in >>>> practice.? For any successful, scalable multicast deployment, >>>> you'll end up >>>> with source-rooted trees and the forwarding state in the routers >>>> are (S,G) >>>> tuples. >>> >>> >>> Broadly, for anything like TOS or multicast, there are two different >>> sets of issues, either of which can easily create showstoppers. >>> >>> First is, of course, the mechanics.? What is the functional design? >>> What is the basis for believing it will satisfy real-world needs?? >>> How >>> robust will it be?? How easy to operate?? Etc. >>> >>> Second is gaining adoption across a very large range of entirely >>> independent operators.? What are their immediate, compelling >>> business >>> incentives? >>> >>> As we keep seeing, getting adoption of anything across an Internet >>> infrastructure service, is more than a little challenging. >>> >>> Cable TV's multicast is done within the span of a single >>> administrative >>> control.? And it's a relatively stable, constrained set of traffic. >>> Generic Internet multicast is multiple administrations, with highly >>> variable sets of traffic, across many administrations.? Very, very >>> different game. >>> >>> d/ >>> > > > -- > In theory, there is no difference between theory and practice. > In practice, there is. .... Yogi Berra > > Theory is when you know everything but nothing works. > Practice is when everything works but no one knows why. > In our lab, theory and practice are combined: > nothing works and no one knows why. ... unknown > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Thu Mar 3 20:17:38 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Fri, 4 Mar 2022 17:17:38 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> Message-ID: On 04-Mar-22 15:15, Louis Mamakos via Internet-history wrote: > How much simultaneous multicast traffic do you think there could be > over wide-area transit networks and across transit providers? > > And that ship has sailed. CDNs solve this problem now, for both > live and near-live content as well as delayed and on-demand content. > You don't need to augment your router with PIM-SM or some other > multicast routing protocol, or figure out how to make interdomain > routing work. That's a lot of work for solving the general case > of multicast IP across the global internet, to displace the CDN-based > solution that already works and solves the higher-volume adjacent > problem space. 
Application-layer multicast is mature and works > really well and can evolve without having to go in lock-step with > all of the intra- and inter-domain L3 routing/forwarding hardware > that's deployed. > > I think one popular multicast solution was in last-mile CATV networks > to do delivery of video streams. But from a user-experience > perspective, you've got join latencies that compete with another > HTTP unicast request and I can't pause my TV program to hit the > bathroom. It's cheaper to have the extra bandwidth than adding > storage capacity in, e.g., set-top boxes for local caching and > asynchronous delivery. > > I think that IP multicast is really neat tech, and I tried to make > a commercial success of service offering, but in the general case > I couldn't get (enough) customers at the time to want to pay for > the capability to support the infrastructure costs, not to mention > the operational complexity. > > And then you open a can of worms on what settlement-free peering is > supposed to look like for multicast traffic. With unicast traffic > crossing a peering interconnect, you didn't have to think real hard > about equal costs/equal burden sort of issues. It wasn't obvious > at that time what that would mean for multicast traffic. Actually, I think it's fairly obvious: you'd need traffic metering and settlements. Nobody wanted that. Apart from anything else it raised anti-trust concerns because people would have to talk about pricing. That's exactly why the following draft never got a -01 version: https://datatracker.ietf.org/doc/html/draft-carpenter-metrics-00 The ISPs wouldn't touch it. (And I didn't even mention multicast.) My memory tells me that John Curran and Mike O'Dell took me out for lunch (most likely at IETF 96 in Montreal) to explain the facts of life to me in simple words. Brian > > Louis Mamakos > > On 3 Mar 2022, at 16:34, Miles Fidelman via Internet-history wrote: > >> But why does there NEED to be a separate charging scheme? >> >> Seems to me that supporting multicast is a LOT cheaper than supporting >> all that extra traffic generated by lots of redundant traffic. >> Multicast would also likely reduce pressure on chokepoints, at ISP >> boundaries. >> >> Miles Fidelman >> >> >> Brian E Carpenter via Internet-history wrote: >>> Actually inter-ISP diffserv is technically well defined now >>> as is mapping to MPLS and 5G classes of service. But indeed, >>> the issue for diffserv and multicast is the same: there is >>> no cost-effective business model across ISP boundaries. >>> Flat-rate best-effort capacity-based charging is still vastly >>> cheaper and simpler to implement. I can't see any reason that >>> will ever change. >>> >>> Regards >>> ?? Brian >>> >>> On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: >>>> >>>>> The small amount of multicast address space really isn't a problem >>>>> in >>>>> practice.? For any successful, scalable multicast deployment, >>>>> you'll end up >>>>> with source-rooted trees and the forwarding state in the routers >>>>> are (S,G) >>>>> tuples. >>>> >>>> >>>> Broadly, for anything like TOS or multicast, there are two different >>>> sets of issues, either of which can easily create showstoppers. >>>> >>>> First is, of course, the mechanics.? What is the functional design? >>>> What is the basis for believing it will satisfy real-world needs? >>>> How >>>> robust will it be?? How easy to operate?? Etc. >>>> >>>> Second is gaining adoption across a very large range of entirely >>>> independent operators.? 
What are their immediate, compelling >>>> business >>>> incentives? >>>> >>>> As we keep seeing, getting adoption of anything across an Internet >>>> infrastructure service, is more than a little challenging. >>>> >>>> Cable TV's multicast is done within the span of a single >>>> administrative >>>> control.? And it's a relatively stable, constrained set of traffic. >>>> Generic Internet multicast is multiple administrations, with highly >>>> variable sets of traffic, across many administrations.? Very, very >>>> different game. >>>> >>>> d/ >>>> >> >> >> -- >> In theory, there is no difference between theory and practice. >> In practice, there is. .... Yogi Berra >> >> Theory is when you know everything but nothing works. >> Practice is when everything works but no one knows why. >> In our lab, theory and practice are combined: >> nothing works and no one knows why. ... unknown >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From mfidelman at meetinghouse.net Fri Mar 4 07:09:13 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Fri, 4 Mar 2022 10:09:13 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> Message-ID: <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> Brian E Carpenter via Internet-history wrote: > On 04-Mar-22 15:15, Louis Mamakos via Internet-history wrote: >> How much simultaneous multicast traffic do you think there could be >> over wide-area transit networks and across transit providers? >> >> And that ship has sailed.? CDNs solve this problem now, for both >> live and near-live content as well as delayed and on-demand content. And make a lot of bucks doing so.? Seems to me that's a lot of the reason that multicast didn't go all that far. Economics tends to trump technology.? Particularly, it seems, when it comes to interoperability.? (Consider all the walled garden email systems that have popped up in the medical community - despite secure mail being built into most clients these days.) Miles >> You don't need to augment your router with PIM-SM or some other >> multicast routing protocol, or figure out how to make interdomain >> routing work.? That's a lot of work for solving the general case >> of multicast IP across the global internet, to displace the CDN-based >> solution that already works and solves the higher-volume adjacent >> problem space.? Application-layer multicast is mature and works >> really well and can evolve without having to go in lock-step with >> all of the intra- and inter-domain L3 routing/forwarding hardware >> that's deployed. >> >> I think one popular multicast solution was in last-mile CATV networks >> to do delivery of video streams.? But from a user-experience >> perspective, you've got join latencies that compete with another >> HTTP unicast request and I can't pause my TV program to hit the >> bathroom.? It's cheaper to have the extra bandwidth than adding >> storage capacity in, e.g., set-top boxes for local caching and >> asynchronous delivery. 
>> >> I think that IP multicast is really neat tech, and I tried to make >> a commercial success of service offering, but in the general case >> I couldn't get (enough) customers at the time to want to pay for >> the capability to support the infrastructure costs, not to mention >> the operational complexity. >> >> And then you open a can of worms on what settlement-free peering is >> supposed to look like for multicast traffic.? With unicast traffic >> crossing a peering interconnect, you didn't have to think real hard >> about equal costs/equal burden sort of issues.? It wasn't obvious >> at that time what that would mean for multicast traffic. > > Actually, I think it's fairly obvious: you'd need traffic metering > and settlements. Nobody wanted that. Apart from anything else > it raised anti-trust concerns because people would have to talk > about pricing. > > That's exactly why the following draft never got a -01 version: > https://datatracker.ietf.org/doc/html/draft-carpenter-metrics-00 > The ISPs wouldn't touch it. (And I didn't even mention multicast.) > My memory tells me that John Curran and Mike O'Dell took me out > for lunch (most likely at IETF 96 in Montreal) to explain the facts > of life to me in simple words. > > ?? Brian > >> >> Louis Mamakos >> >> On 3 Mar 2022, at 16:34, Miles Fidelman via Internet-history wrote: >> >>> But why does there NEED to be a separate charging scheme? >>> >>> Seems to me that supporting multicast is a LOT cheaper than supporting >>> all that extra traffic generated by lots of redundant traffic. >>> Multicast would also likely reduce pressure on chokepoints, at ISP >>> boundaries. >>> >>> Miles Fidelman >>> >>> >>> Brian E Carpenter via Internet-history wrote: >>>> Actually inter-ISP diffserv is technically well defined now >>>> as is mapping to MPLS and 5G classes of service. But indeed, >>>> the issue for diffserv and multicast is the same: there is >>>> no cost-effective business model across ISP boundaries. >>>> Flat-rate best-effort capacity-based charging is still vastly >>>> cheaper and simpler to implement. I can't see any reason that >>>> will ever change. >>>> >>>> Regards >>>> ??? Brian >>>> >>>> On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: >>>>> >>>>>> The small amount of multicast address space really isn't a problem >>>>>> in >>>>>> practice.? For any successful, scalable multicast deployment, >>>>>> you'll end up >>>>>> with source-rooted trees and the forwarding state in the routers >>>>>> are (S,G) >>>>>> tuples. >>>>> >>>>> >>>>> Broadly, for anything like TOS or multicast, there are two different >>>>> sets of issues, either of which can easily create showstoppers. >>>>> >>>>> First is, of course, the mechanics.? What is the functional design? >>>>> What is the basis for believing it will satisfy real-world needs? >>>>> How >>>>> robust will it be?? How easy to operate?? Etc. >>>>> >>>>> Second is gaining adoption across a very large range of entirely >>>>> independent operators.? What are their immediate, compelling >>>>> business >>>>> incentives? >>>>> >>>>> As we keep seeing, getting adoption of anything across an Internet >>>>> infrastructure service, is more than a little challenging. >>>>> >>>>> Cable TV's multicast is done within the span of a single >>>>> administrative >>>>> control.? And it's a relatively stable, constrained set of traffic. >>>>> Generic Internet multicast is multiple administrations, with highly >>>>> variable sets of traffic, across many administrations. 
Very, very >>>>> different game. >>>>> >>>>> d/ >>>>> >>> >>> >>> -- >>> In theory, there is no difference between theory and practice. >>> In practice, there is.? .... Yogi Berra >>> >>> Theory is when you know everything but nothing works. >>> Practice is when everything works but no one knows why. >>> In our lab, theory and practice are combined: >>> nothing works and no one knows why.? ... unknown >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From dhc at dcrocker.net Fri Mar 4 07:29:26 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 4 Mar 2022 07:29:26 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> Message-ID: On 3/4/2022 7:09 AM, Miles Fidelman via Internet-history wrote: > despite secure mail being built into most clients these days.) Except that it does not work at scale, whereas the walled ones do. There is no existence proof for meaningful, end-user usable security at scale for email.(*) Cert management and end-user UX design appear to be the major barriers. Truly distributdc admin and ops is much, much harder than centralized. d/ (*) No doubt some will argue that server credentials with TLS is an exception, except that isn't and end-user function; end users aren't part of any meaningful security detection or enforcement activity, lock icons notwithstanding -- Dave Crocker Brandenburg InternetWorking bbiw.net From galmes at tamu.edu Fri Mar 4 07:43:10 2022 From: galmes at tamu.edu (Guy Almes) Date: Fri, 4 Mar 2022 10:43:10 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> Message-ID: <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> Miles, The issue is not multicast vs not-multicast. The issue is doing multicast at the IP / Layer-3 level vs doing multicast at the Application level. The pioneering conferencing systems, built on vic and vat and MBone, were working in the early 1990s. But they required heroic engineering to make it work and, in the early 2000s it *still* required (a little less) heroic engineering. That created an opening for CDNs etc. And the "etc." is important, because how the application-level multicast is done depends on, ahem, the application. Thus, Akamai does a kind of app-level multicast for non-real-time web content and Zoom does its thing for real-time conferencing. 
And, even in the old days, Usenet has its wonderful NNTP. Sometimes economics does trump technology, but that is not necessary to explain the limited success of IP-level Multicast. -- Guy On 3/4/22 10:09 AM, Miles Fidelman via Internet-history wrote: > Brian E Carpenter via Internet-history wrote: >> On 04-Mar-22 15:15, Louis Mamakos via Internet-history wrote: >>> How much simultaneous multicast traffic do you think there could be >>> over wide-area transit networks and across transit providers? >>> >>> And that ship has sailed.? CDNs solve this problem now, for both >>> live and near-live content as well as delayed and on-demand content. > > And make a lot of bucks doing so.? Seems to me that's a lot of the > reason that > multicast didn't go all that far. > > Economics tends to trump technology.? Particularly, it seems, when it > comes to > interoperability.? (Consider all the walled garden email systems that > have popped > up in the medical community - despite secure mail being built into most > clients these days.) > > Miles > > >>> You don't need to augment your router with PIM-SM or some other >>> multicast routing protocol, or figure out how to make interdomain >>> routing work.? That's a lot of work for solving the general case >>> of multicast IP across the global internet, to displace the CDN-based >>> solution that already works and solves the higher-volume adjacent >>> problem space.? Application-layer multicast is mature and works >>> really well and can evolve without having to go in lock-step with >>> all of the intra- and inter-domain L3 routing/forwarding hardware >>> that's deployed. >>> >>> I think one popular multicast solution was in last-mile CATV networks >>> to do delivery of video streams.? But from a user-experience >>> perspective, you've got join latencies that compete with another >>> HTTP unicast request and I can't pause my TV program to hit the >>> bathroom.? It's cheaper to have the extra bandwidth than adding >>> storage capacity in, e.g., set-top boxes for local caching and >>> asynchronous delivery. >>> >>> I think that IP multicast is really neat tech, and I tried to make >>> a commercial success of service offering, but in the general case >>> I couldn't get (enough) customers at the time to want to pay for >>> the capability to support the infrastructure costs, not to mention >>> the operational complexity. >>> >>> And then you open a can of worms on what settlement-free peering is >>> supposed to look like for multicast traffic.? With unicast traffic >>> crossing a peering interconnect, you didn't have to think real hard >>> about equal costs/equal burden sort of issues.? It wasn't obvious >>> at that time what that would mean for multicast traffic. >> >> Actually, I think it's fairly obvious: you'd need traffic metering >> and settlements. Nobody wanted that. Apart from anything else >> it raised anti-trust concerns because people would have to talk >> about pricing. >> >> That's exactly why the following draft never got a -01 version: >> https://urldefense.com/v3/__https://datatracker.ietf.org/doc/html/draft-carpenter-metrics-00__;!!KwNVnqRv!QY_7WIqxdeyM6Z5L6ofsGrIrdu0degUDBMlkZA4-5XCvDOEXR7LCgP3KUQJRBg$ >> The ISPs wouldn't touch it. (And I didn't even mention multicast.) >> My memory tells me that John Curran and Mike O'Dell took me out >> for lunch (most likely at IETF 96 in Montreal) to explain the facts >> of life to me in simple words. >> >> ?? 
Brian >> >>> >>> Louis Mamakos >>> >>> On 3 Mar 2022, at 16:34, Miles Fidelman via Internet-history wrote: >>> >>>> But why does there NEED to be a separate charging scheme? >>>> >>>> Seems to me that supporting multicast is a LOT cheaper than supporting >>>> all that extra traffic generated by lots of redundant traffic. >>>> Multicast would also likely reduce pressure on chokepoints, at ISP >>>> boundaries. >>>> >>>> Miles Fidelman >>>> >>>> >>>> Brian E Carpenter via Internet-history wrote: >>>>> Actually inter-ISP diffserv is technically well defined now >>>>> as is mapping to MPLS and 5G classes of service. But indeed, >>>>> the issue for diffserv and multicast is the same: there is >>>>> no cost-effective business model across ISP boundaries. >>>>> Flat-rate best-effort capacity-based charging is still vastly >>>>> cheaper and simpler to implement. I can't see any reason that >>>>> will ever change. >>>>> >>>>> Regards >>>>> ??? Brian >>>>> >>>>> On 04-Mar-22 08:10, Dave Crocker via Internet-history wrote: >>>>>> >>>>>>> The small amount of multicast address space really isn't a problem >>>>>>> in >>>>>>> practice.? For any successful, scalable multicast deployment, >>>>>>> you'll end up >>>>>>> with source-rooted trees and the forwarding state in the routers >>>>>>> are (S,G) >>>>>>> tuples. >>>>>> >>>>>> >>>>>> Broadly, for anything like TOS or multicast, there are two different >>>>>> sets of issues, either of which can easily create showstoppers. >>>>>> >>>>>> First is, of course, the mechanics.? What is the functional design? >>>>>> What is the basis for believing it will satisfy real-world needs? >>>>>> How >>>>>> robust will it be?? How easy to operate?? Etc. >>>>>> >>>>>> Second is gaining adoption across a very large range of entirely >>>>>> independent operators.? What are their immediate, compelling >>>>>> business >>>>>> incentives? >>>>>> >>>>>> As we keep seeing, getting adoption of anything across an Internet >>>>>> infrastructure service, is more than a little challenging. >>>>>> >>>>>> Cable TV's multicast is done within the span of a single >>>>>> administrative >>>>>> control.? And it's a relatively stable, constrained set of traffic. >>>>>> Generic Internet multicast is multiple administrations, with highly >>>>>> variable sets of traffic, across many administrations. Very, very >>>>>> different game. >>>>>> >>>>>> d/ >>>>>> >>>> >>>> >>>> -- >>>> In theory, there is no difference between theory and practice. >>>> In practice, there is.? .... Yogi Berra >>>> >>>> Theory is when you know everything but nothing works. >>>> Practice is when everything works but no one knows why. >>>> In our lab, theory and practice are combined: >>>> nothing works and no one knows why.? ... unknown >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!KwNVnqRv!QY_7WIqxdeyM6Z5L6ofsGrIrdu0degUDBMlkZA4-5XCvDOEXR7LCgP15_iffNw$ >> > > > -- > In theory, there is no difference between theory and practice. > In practice, there is. .... Yogi Berra > > Theory is when you know everything but nothing works. > Practice is when everything works but no one knows why. > In our lab, theory and practice are combined: > nothing works and no one knows why. ... 
unknown > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://urldefense.com/v3/__https://elists.isoc.org/mailman/listinfo/internet-history__;!!KwNVnqRv!QY_7WIqxdeyM6Z5L6ofsGrIrdu0degUDBMlkZA4-5XCvDOEXR7LCgP15_iffNw$ > From touch at strayalpha.com Fri Mar 4 08:42:41 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Fri, 4 Mar 2022 08:42:41 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> Message-ID: Minor point... > On Mar 4, 2022, at 7:43 AM, Guy Almes via Internet-history wrote: > > Miles, > The issue is not multicast vs not-multicast. > The issue is doing multicast at the IP / Layer-3 level vs doing multicast at the Application leve Multicast needs to include L2 (see RFC3918, Sec 6). Without that, routers would need to do serial copy for every host inside each L2, which is prohibitive (i.e., this is why IGMP is *in addition* to PIM or other IP-level mechanisms). Joe From vint at google.com Fri Mar 4 09:25:48 2022 From: vint at google.com (Vint Cerf) Date: Fri, 4 Mar 2022 12:25:48 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> Message-ID: CDN is a funny kind of asynchronous multicast.... v On Fri, Mar 4, 2022 at 11:42 AM touch--- via Internet-history < internet-history at elists.isoc.org> wrote: > Minor point... > > > On Mar 4, 2022, at 7:43 AM, Guy Almes via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > Miles, > > The issue is not multicast vs not-multicast. > > The issue is doing multicast at the IP / Layer-3 level vs doing > multicast at the Application leve > > Multicast needs to include L2 (see RFC3918, Sec 6). Without that, routers > would need to do serial copy for every host inside each L2, which is > prohibitive (i.e., this is why IGMP is *in addition* to PIM or other > IP-level mechanisms). 
> > Joe > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From dhc at dcrocker.net Fri Mar 4 09:32:00 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 4 Mar 2022 09:32:00 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> Message-ID: <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> On 3/4/2022 9:25 AM, Vint Cerf via Internet-history wrote: > CDN is a funny kind of asynchronous multicast.... non-realtime DTN... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jack at 3kitty.org Fri Mar 4 10:50:40 2022 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 4 Mar 2022 10:50:40 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> Message-ID: IMHO, if there were a ubiquitous IP-level multicast of some type that could be observed to actually work in the vast reaches of the Internet, people (app developers) who could use it would do so. But "ubiquitous" is important - a mechanism that only works in some places isn't as valuable as one that works everywhere? (a corollary of Metcalfe's Law?).?? A mechanism that only exists in one or a few ISPs isn't useful unless you expect all your customers to be using that ISP(s), and all of the network paths your customers use (to interact with their own customers etc) are also confined to that same ISP(s) who support the mechanism.?? Those ISPs of course would need their equipment vendors (routers, switches, hosts, whatever) to also play the same game. This relates to my discussion in that talk about TOS bits as a placeholder.?? We knew that an infrastructure like the IP network should likely offer more services than just unguaranteed datagram delivery ("We'll deliver it.? Maybe.? Eventually.? Hopefully."), and that research was needed to figure out what those services should be and fold them into the spec for the next generation. That didn't happen so people invented whatever adhoc mechanisms they needed at some "higher level" where they could just write the code themselves - continuing the "rough consensus and running code", and put their own "servers" (e.g., CDN equipment) wherever it was needed, relying only on the basic unreliable IP datagram delivery service to be ubiquitous. Such "silo-ization" seems to be everywhere now and increasing ....email, messaging, video chat, forums, .... 
Sigh, Jack From brian.e.carpenter at gmail.com Fri Mar 4 12:12:20 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sat, 5 Mar 2022 09:12:20 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E at ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0 at 3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9 at dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e at meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4 at transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a at meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3 at tamu.edu> Message-ID: On 05-Mar-22 05:42, touch--- via Internet-history wrote: > Minor point... > >> On Mar 4, 2022, at 7:43 AM, Guy Almes via Internet-history wrote: >> >> Miles, >> The issue is not multicast vs not-multicast. >> The issue is doing multicast at the IP / Layer-3 level vs doing multicast at the Application level > > Multicast needs to include L2 (see RFC3918, Sec 6). Without that, routers would need to do serial copy for every host inside each L2, which is prohibitive (i.e., this is why IGMP is *in addition* to PIM or other IP-level mechanisms). I think you'd be surprised how much of that actually happens today, under the covers of "Ethernet switches" and WiFi pretending to be Ethernet. We still design protocols on the assumption that a thick yellow cable snakes around the building. The difficulty arises when you try to make it work at global scope, and Guy's analysis is correct. Somebody asked where MBONE went - it went to Network Operations Hell because it deserved to. Brian From gnu at toad.com Fri Mar 4 12:44:27 2022 From: gnu at toad.com (John Gilmore) Date: Fri, 04 Mar 2022 12:44:27 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E at ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0 at 3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9 at dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e at meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4 at transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a at meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3 at tamu.edu> Message-ID: <8181.1646426667 at hop.toad.com> Joe Touch wrote: > Multicast needs to include L2. Without that, routers would need to do > serial copy for every host inside each L2, which is prohibitive ... By that view YouTube, Zoom, podcasts, and CDNs are all "prohibitive". I do not think that word means what you think it means. John From touch at strayalpha.com Fri Mar 4 13:10:16 2022 From: touch at strayalpha.com (Joe Touch) Date: Fri, 4 Mar 2022 13:10:16 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <8181.1646426667 at hop.toad.com> References: <8181.1646426667 at hop.toad.com> Message-ID: > On Mar 4, 2022, at 12:44 PM, John Gilmore wrote: > > Joe Touch wrote: >> Multicast needs to include L2. Without that, routers would need to do >> serial copy for every host inside each L2, which is prohibitive ... > > By that view YouTube, Zoom, podcasts, and CDNs are all "prohibitive". They copy at the source at the app layer, not in L2 anywhere. it's not the BW, but the local serial copy operation and its state that are prohibitive.
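(To make the copy-at-the-source point concrete, a toy sketch in Python, not anyone's actual implementation: an application-layer service loops over its receivers and sends one unicast copy to each, so both the copies and the per-receiver state live at the source; an IP multicast sender hands the network a single datagram and the replication state lives in the routers and switches instead. The group address, port and payload are placeholders.)

    import socket

    FRAME = b"example payload"

    # Application-layer fan-out: N serial unicast sends from the source.
    def app_layer_fanout(receivers):             # receivers: list of (ip, port) tuples
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for addr in receivers:
            s.sendto(FRAME, addr)                # one copy, and one state entry, per receiver

    # IP multicast: one send; replication happens inside the network.
    def multicast_send(group=("239.1.1.1", 5000), ttl=8):   # placeholder group/port
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
        s.sendto(FRAME, group)                   # a single copy leaves the host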
Joe From mfidelman at meetinghouse.net Sat Mar 5 07:05:24 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sat, 5 Mar 2022 10:05:24 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E at ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0 at 3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9 at dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e at meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4 at transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a at meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3 at tamu.edu> <02e0bc82-7c83-b428-05a9-293642e92bdc at dcrocker.net> Message-ID: Jack Haverty via Internet-history wrote: > > IMHO, if there were a ubiquitous IP-level multicast of some type that > could be observed to actually work in the vast reaches of the > Internet, people (app developers) who could use it would do so. > > But "ubiquitous" is important - a mechanism that only works in some > places isn't as valuable as one that works everywhere (a corollary of > Metcalfe's Law?). A mechanism that only exists in one or a few ISPs > isn't useful unless you expect all your customers to be using that > ISP(s), and all of the network paths your customers use (to interact > with their own customers etc) are also confined to that same ISP(s) > who support the mechanism. Those ISPs of course would need their > equipment vendors (routers, switches, hosts, whatever) to also play > the same game. Exactly. Which leads to two questions that still remain unclear: - If everyone enabled it, how capable, and scalable, is the current version of IP multicast? Judging from my experience with DIS, on the Defense Simulation Internet - it can support some very large, challenging, real-time training exercises (MMORPGs for folks who use real ammo). But those exercises are one-offs. A far cry from, say, supporting a million videochats. What are the limits? Are there any clear paths to scaling (if anyone were motivated to)? - How much of the lack-of-support is driven by technology, how much by administrative complexity, how much by commercial factors? > > That didn't happen so people invented whatever adhoc mechanisms they > needed at some "higher level" where they could just write the code > themselves - continuing the "rough consensus and running code", and > put their own "servers" (e.g., CDN equipment) wherever it was needed, > relying only on the basic unreliable IP datagram delivery service to > be ubiquitous. > > Such "silo-ization" seems to be everywhere now and increasing > ....email, messaging, video chat, forums, .... I'm not sure that's the primary explanation. Seems to me that, back in the day, resource & information sharing were the prime drivers for the net - making connectivity and interoperability core drivers. (C.f., Metcalfe's Law). Since commercialization of the net, it seems like capturing market share has become the fundamental driver - leading to intentional creation of walled gardens. Without much effective pushback. I'm reminded of the early days of email: There was a time when access to Internet email was a selling point for Compuserve. Today, folks are selling private email (and chat) based on privacy, codes of conduct (or lack thereof), etc. (Discord doesn't grow because it adds value, it grows because it's an alternative to Facebook).
We're almost back to the days when Boston had a dozen phone companies - each promoting itself based on its user base - and every business needing to have a dozen phones on the desk.? (Kind of ironic, that Microsoft Exchange supports email and calendaring standards better than anybody else.) (By the way, not a hypothetical for me, right now - as I'm about to launch a new venture that has a major social-networking component. Struggling with which standards to build around, and how to gateway to other environments - so that we can operate across, and independent of, the growing myriad of platforms.) > > Sigh, > Jack > Sigh, indeed, Miles :-( -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From mfidelman at meetinghouse.net Sat Mar 5 07:09:02 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sat, 5 Mar 2022 10:09:02 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <8181.1646426667@hop.toad.com> Message-ID: Joe Touch via Internet-history wrote: > >> On Mar 4, 2022, at 12:44 PM, John Gilmore wrote: >> >> ?Joe Touch wrote: >>> Multicast needs to include L2. Without that, routers would need to do >>> serial copy for every host inside each L2, which is prohibitive ... >> By that view YouTube, Zoom, podcasts, and CDNs are all "prohibitive". > They copy at the source at the app layer, not in L2 anywhere. > > it?s not the BW, but the local serial copy operation and it?s state that are prohibitive. > No more prohibitive than doing it as an overlay.? If anything, it's more complex and resource intensive as an overlay.? (Granted that we're talking multiple overlays - but the cost here is interoperability.) It's the same logic that applies to supporting a file system in userspace, vs. supporting it as a kernel module. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From touch at strayalpha.com Sat Mar 5 09:37:52 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sat, 5 Mar 2022 09:37:52 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <8181.1646426667@hop.toad.com> Message-ID: <1A39EFDD-9F7C-4D67-8BDC-9275A99BD541@strayalpha.com> Hi, Miles, > On Mar 5, 2022, at 7:09 AM, Miles Fidelman via Internet-history wrote: > > Joe Touch via Internet-history wrote: >> >>>> ... >>> By that view YouTube, Zoom, podcasts, and CDNs are all "prohibitive". >> They copy at the source at the app layer, not in L2 anywhere. >> >> it?s not the BW, but the local serial copy operation and it?s state that are prohibitive. >> > No more prohibitive than doing it as an overlay. If anything, it's more complex and resource intensive as an overlay. (Granted that we're talking multiple overlays - but the cost here is interoperability.) I should have been more specific: CDNs move the work to L7. That doesn?t reduce the overall work for edge distribution, but it does avoid serial local copy inside (cheap) L2 devices that don?t always have the capacity to do so. 
(CDNs *do* reduce *overall* work by caching content closer to users, thus reducing overall network traffic vs. repeated use of multicast trees rooted a the original content source, but that?s a separate issue). Joe From brian.e.carpenter at gmail.com Sat Mar 5 11:45:57 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 6 Mar 2022 08:45:57 +1300 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> Message-ID: <2f72dd07-17ca-7a47-9441-b78de7c106d4@gmail.com> Miles, You might want to look at https://www.ietf.org/archive/id/draft-nottingham-avoiding-internet-centralization-02.html which is discussed at https://www.ietf.org/mailman/listinfo/architecture-discuss Regards Brian Carpenter On 06-Mar-22 04:05, Miles Fidelman via Internet-history wrote: > Jack Haverty via Internet-history wrote: >> >> IMHO, if there were a ubiquitous IP-level multicast of some type that >> could be observed to actually work in the vast reaches of the >> Internet, people (app developers) who could use it would do so. >> >> But "ubiquitous" is important - a mechanism that only works in some >> places isn't as valuable as one that works everywhere? (a corollary of >> Metcalfe's Law?).?? A mechanism that only exists in one or a few ISPs >> isn't useful unless you expect all your customers to be using that >> ISP(s), and all of the network paths your customers use (to interact >> with their own customers etc) are also confined to that same ISP(s) >> who support the mechanism. Those ISPs of course would need their >> equipment vendors (routers, switches, hosts, whatever) to also play >> the same game. > Exactly.? Which leads to two questions that still remains unclear: > > - If everyone enabled it, how capable, and scalable, is the current > version of IP multicast?? Judging from my experience with DIS, on the > Defense Simulation Internet - it can support some very large, > challenging, real-time training exercises (MMORPGs for folks who use > real ammo).? But those exercises are one-offs.? A far cry from, say, > supporting a million videochats.? What are the limits?? Are there any > clear paths to scaling (if anyone were motivated to)? > > - How much of the lack-of-support is driven by technology, how much be > administrative complexity, how much by commercial factors? > >> >> That didn't happen so people invented whatever adhoc mechanisms they >> needed at some "higher level" where they could just write the code >> themselves - continuing the "rough consensus and running code", and >> put their own "servers" (e.g., CDN equipment) wherever it was needed, >> relying only on the basic unreliable IP datagram delivery service to >> be ubiquitous. >> >> Such "silo-ization" seems to be everywhere now and increasing >> ....email, messaging, video chat, forums, .... > > I'm not sure that's the primary explanation. > > Seems to me that, back in the day, resource & information sharing were > the prime drivers for the net - making connectivity and interoperability > core drivers.? (C.f., Metcalfe's Law). 
> > Since commercialization of the net, it seems like capturing market share > has become the fundamental driver - leading to intentional creation of > walled gardens.? Without much effective pushback. > > I'm reminded of the early days of email:? There was a time when access > to Internet email was a selling point for Compuserve.? Today, folks are > selling private email (and chat) based on privacy, codes of conduct (or > lack thereof), etc.? (Discord doesn't grow because it adds value, it > grows because it's an alternative to Facebook). We're almost back to the > days when Boston had a dozen phone companies - each promoting itself > based on its user base - and every business needing to have a dozen > phones on the desk.? (Kind of ironic, that Microsoft Exchange supports > email and calendaring standards better than anybody else.) > > (By the way, not a hypothetical for me, right now - as I'm about to > launch a new venture that has a major social-networking component. > Struggling with which standards to build around, and how to gateway to > other environments - so that we can operate across, and independent of, > the growing myriad of platforms.) > >> >> Sigh, >> Jack >> > Sigh, indeed, > Miles > :-( > From mfidelman at meetinghouse.net Sat Mar 5 12:06:34 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sat, 5 Mar 2022 15:06:34 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <2f72dd07-17ca-7a47-9441-b78de7c106d4@gmail.com> References: <20220303040322.02468386532E@ary.qy> <663aa651-bb36-5899-fa21-5dc39fa0ffe0@3kitty.org> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> <2f72dd07-17ca-7a47-9441-b78de7c106d4@gmail.com> Message-ID: Thanks for the pointer. Unfortunately, the document seems to be a rehash of things we've all known and said for years, while the discussion seems not to get into what we actually DO to get back on the path of truth, justice, and the interoperable way. Meanwhile, we keep getting more and more walled gardens, and things like DMARC to break things that are working. We need something more forceful - like the TCP/IP Flag Day.? Big customers who demand interoperable standards (I'm reminded of how Wang Labs lost their position, as the Army's main computer vendor, when they dragged their feet on implementing a DoD protocol stack.? Personally, I think that was the beginning of the end, for Wang.) Sigh... Miles Fidelman Brian E Carpenter wrote: > Miles, > > You might want to look at > https://www.ietf.org/archive/id/draft-nottingham-avoiding-internet-centralization-02.html > > which is discussed at > https://www.ietf.org/mailman/listinfo/architecture-discuss > > Regards > ?? Brian Carpenter > > On 06-Mar-22 04:05, Miles Fidelman via Internet-history wrote: >> Jack Haverty via Internet-history wrote: >>> >>> IMHO, if there were a ubiquitous IP-level multicast of some type that >>> could be observed to actually work in the vast reaches of the >>> Internet, people (app developers) who could use it would do so. >>> >>> But "ubiquitous" is important - a mechanism that only works in some >>> places isn't as valuable as one that works everywhere? (a corollary of >>> Metcalfe's Law?).?? 
A mechanism that only exists in one or a > few ISPs >>> isn't useful unless you expect all your customers to be using that >>> ISP(s), and all of the network paths your customers use (to interact >>> with their own customers etc) are also confined to that same ISP(s) >>> who support the mechanism. Those ISPs of course would need their >>> equipment vendors (routers, switches, hosts, whatever) to also play >>> the same game. >> Exactly.? Which leads to two questions that still remains unclear: >> >> - If everyone enabled it, how capable, and scalable, is the current >> version of IP multicast?? Judging from my experience with DIS, on the >> Defense Simulation Internet - it can support some very large, >> challenging, real-time training exercises (MMORPGs for folks who use >> real ammo).? But those exercises are one-offs.? A far cry from, say, >> supporting a million videochats.? What are the limits?? Are there any >> clear paths to scaling (if anyone were motivated to)? >> >> - How much of the lack-of-support is driven by technology, how much be >> administrative complexity, how much by commercial factors? >> >>> >>> That didn't happen so people invented whatever adhoc mechanisms they >>> needed at some "higher level" where they could just write the code >>> themselves - continuing the "rough consensus and running code", and >>> put their own "servers" (e.g., CDN equipment) wherever it was needed, >>> relying only on the basic unreliable IP datagram delivery service to >>> be ubiquitous. >>> >>> Such "silo-ization" seems to be everywhere now and increasing >>> ....email, messaging, video chat, forums, .... >> >> I'm not sure that's the primary explanation. >> >> Seems to me that, back in the day, resource & information sharing were >> the prime drivers for the net - making connectivity and interoperability >> core drivers.? (C.f., Metcalfe's Law). >> >> Since commercialization of the net, it seems like capturing market share >> has become the fundamental driver - leading to intentional creation of >> walled gardens.? Without much effective pushback. >> >> I'm reminded of the early days of email:? There was a time when access >> to Internet email was a selling point for Compuserve.? Today, folks are >> selling private email (and chat) based on privacy, codes of conduct (or >> lack thereof), etc.? (Discord doesn't grow because it adds value, it >> grows because it's an alternative to Facebook). We're almost back to the >> days when Boston had a dozen phone companies - each promoting itself >> based on its user base - and every business needing to have a dozen >> phones on the desk.? (Kind of ironic, that Microsoft Exchange supports >> email and calendaring standards better than anybody else.) >> >> (By the way, not a hypothetical for me, right now - as I'm about to >> launch a new venture that has a major social-networking component. >> Struggling with which standards to build around, and how to gateway to >> other environments - so that we can operate across, and independent of, >> the growing myriad of platforms.) >> >>> >>> Sigh, >>> Jack >>> >> Sigh, indeed, >> Miles >> :-( >> -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... 
unknown From jack at 3kitty.org Sat Mar 5 19:36:42 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 5 Mar 2022 19:36:42 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <1A39EFDD-9F7C-4D67-8BDC-9275A99BD541 at strayalpha.com> References: <8181.1646426667 at hop.toad.com> <1A39EFDD-9F7C-4D67-8BDC-9275A99BD541 at strayalpha.com> Message-ID: <90d9152a-6d20-7102-be8d-9c5abe2a428b at 3kitty.org> One of the points I tried to make in the talk that started this discussion was that the Internet architecture moved mechanisms from the "switching" to the "host" parts of the overall system, which has significant impact on how you "operate" and optimize the pieces. Not enough time to explain that very well though. One of the results of that architecture is the necessity to look at the whole picture to understand what is going on. My "glitch on the transpacific line" example required looking at both the hosts/applications as well as the routers to understand why service had slowed dramatically. So, how can you be sure that CDNs necessarily "reduce overall work" by placing CDN servers near a user community? Another experiment I did involved the Internet pathways between my location and two other sites, one in Reno, Nevada, and one in Los Angeles. Reno is about 50 miles or so East of me. LA is hundreds of miles south and west of me. So a CDN builder might assume that it would be useful to place a CDN cache in Reno as a close-by city. But experimenting with traceroute indicated that packets from me to Reno actually went west, not east, travelled to LA, bounced around a few nodes in SoCal, and eventually came back north and east to Reno. So for me, a CDN in LA, hundreds of miles away, would actually be much closer than one in Reno, 50 miles away. It would be especially inefficient if the ultimate source of the content was in the LA area. Conclusion - you have to look at the whole system to understand what is going on. Jack On 3/5/22 09:37, touch--- via Internet-history wrote: > Hi, Miles, > >> On Mar 5, 2022, at 7:09 AM, Miles Fidelman via Internet-history wrote: >> >> Joe Touch via Internet-history wrote: >>>>> ... >>>> By that view YouTube, Zoom, podcasts, and CDNs are all "prohibitive". >>> They copy at the source at the app layer, not in L2 anywhere. >>> >>> it's not the BW, but the local serial copy operation and its state that are prohibitive. >>> >> No more prohibitive than doing it as an overlay. If anything, it's more complex and resource intensive as an overlay. (Granted that we're talking multiple overlays - but the cost here is interoperability.) > I should have been more specific: > > CDNs move the work to L7. That doesn't reduce the overall work for edge distribution, but it does avoid serial local copy inside (cheap) L2 devices that don't always have the capacity to do so. > > (CDNs *do* reduce *overall* work by caching content closer to users, thus reducing overall network traffic vs. repeated use of multicast trees rooted at the original content source, but that's a separate issue).
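(A rough way to check that kind of geography-versus-topology mismatch for yourself -- a sketch only, assuming a Unix-like system with the ordinary ping utility on the PATH; the two candidate hostnames are made-up placeholders. Measured round-trip time, not miles on a map, is what says which site is actually "closer".)

    # Compare measured RTT to candidate sites instead of assuming that
    # geographic distance tracks network distance.
    import re
    import subprocess

    CANDIDATES = ["cache-reno.example.net", "cache-la.example.net"]  # placeholders

    def avg_rtt_ms(host, count=5):
        out = subprocess.run(["ping", "-c", str(count), host],
                             capture_output=True, text=True).stdout
        m = re.search(r"= [\d.]+/([\d.]+)/", out)   # avg field of the min/avg/max summary
        return float(m.group(1)) if m else float("inf")

    rtts = {h: avg_rtt_ms(h) for h in CANDIDATES}
    for host, rtt in sorted(rtts.items(), key=lambda kv: kv[1]):
        print(f"{host}: {rtt:.1f} ms")

(The summary line is "rtt min/avg/max/mdev = ..." on Linux and "round-trip min/avg/max/stddev = ..." on BSD/macOS; the regex above relies only on the shared slash-separated layout.)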
> > Joe From touch at strayalpha.com Sat Mar 5 19:48:36 2022 From: touch at strayalpha.com (touch at strayalpha.com) Date: Sat, 5 Mar 2022 19:48:36 -0800 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: <90d9152a-6d20-7102-be8d-9c5abe2a428b@3kitty.org> References: <8181.1646426667@hop.toad.com> <1A39EFDD-9F7C-4D67-8BDC-9275A99BD541@strayalpha.com> <90d9152a-6d20-7102-be8d-9c5abe2a428b@3kitty.org> Message-ID: <7A2B9683-F5C0-45E1-8C4F-2243A36E5D2D@strayalpha.com> Hi, Jack, ? > On Mar 5, 2022, at 7:36 PM, Jack Haverty via Internet-history wrote: > ... > > So, how can you be sure that CDNs necessarily "reduce overall work" by placing CDN servers near a user community? 1. Setup a server 2. Track where your users are (various estimates based on IP address) 3. Put a CDN server closer to your users That doesn?t reduce server load, but it does reduce user delays. It reduces overall network load, e.g., by dropping the load between the CDN and server. The load at the edge is the same, though. > Another experiment I did involved the Internet pathways involving my location, one in Reno nevada, and one in Los Angeles. Reno is about 50 miles or so East of me. LA is hundreds of miles south and west of me. > > So a CDN builder might assume that it would be useful to place a CDN cache in Reno as a close-by city. But experimenting with traceroute indicated that packets from me to Reno actually went west, not east, travelled to LA, bounced around a few nodes in SoCal, and eventually came back north and east to Reno. CDN operators don?t assume geography correlates to network topology. Note also that traceroute doesn?t always tell you the right thing; some MPLS and SONET paths won?t accept packets with low hop counts (they assume you WANT to see an IP router). And yes, CDN operators *do* look at all this. Joe From mfidelman at meetinghouse.net Sun Mar 6 09:06:19 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 6 Mar 2022 12:06:19 -0500 Subject: [ih] ARPANET pioneer Jack Haverty says the internet was never finished In-Reply-To: References: <20220303040322.02468386532E@ary.qy> <95a6780c-826d-4a9b-756d-ebddabcc46b9@dcrocker.net> <2654fd0e-77ea-4eeb-3cfa-6165817fbc0e@meetinghouse.net> <150C0CF5-F445-4FA0-938C-3B44449B38C4@transsys.com> <5c0e0725-6642-4aeb-82c3-9cd7b456809a@meetinghouse.net> <7feb0136-0059-86f3-cd23-e4dad6d024e3@tamu.edu> <02e0bc82-7c83-b428-05a9-293642e92bdc@dcrocker.net> <2f72dd07-17ca-7a47-9441-b78de7c106d4@gmail.com> Message-ID: Steven Ehrbar wrote: > On Sat, Mar 5, 2022 at 1:06 PM Miles Fidelman via Internet-history > wrote: >> Thanks for the pointer. >> >> Unfortunately, the document seems to be a rehash of things we've all >> known and said for years, while the discussion seems not to get into >> what we actually DO to get back on the path of truth, justice, and the >> interoperable way. > It's simple enough. You repeal Section 230. > > Under the court precedents prior to passage of the Communications > Decency Act, a system that moderated what users communicated (in the > specific court case, Prodigy) was legally liable for what the users > communicated, while one that didn't (in the specific court case, > CompuServe) wasn't. > > Since there is no scalable way to moderate all potential libel, that > legal regime would create a situation where you cannot simultaneously > be large and moderate content, since you'll get sued into the ground. 
> Since raw unmoderated content gets you 4chan/the Eternal September > Usenet, there will be customers demanding small, specialized, curated > platforms (both because of the scaling difficulties of moderation and > the fact that small targets have less-deep pockets to attract > lawsuits), which for scale reasons will tend to be built on top of > large, unmoderated technical service providers. And since the small > platforms will want to be able to switch technical providers, and > people will have interests that inherently can't be satisfied by a > single small platform, the demand will be for standardized technical > services to be provided to the small platforms and standardized ways > to federate the small platforms on a per-user basis. The technological > implementation will then follow the demand. > > Of course, *that* requires tolerating the fact that the large > technical service providers will not be able to pressure the small > platforms that use them from providing undesirable content, whether > porn (the original incentive for Section 230), hate speech, > misinformation, or whatever else. > > So while the fix is simple, it's also one that I don't see as > politically plausible to implement. Hell no... repealing Section 230 would have a chilling effect on pretty much everything. I'd rather a stronger section that enforces protections for "undesirable" speech - particularly political speech. End-user filtering, charging for bulk mail, there are all kinds of things that we can use to push back on the crap that infests our mail streams - that don't quash useful communications. > >> We need something more forceful - like the TCP/IP Flag Day. Big >> customers who demand interoperable standards (I'm reminded of how Wang >> Labs lost their position, as the Army's main computer vendor, when they >> dragged their feet on implementing a DoD protocol stack. Personally, I >> think that was the beginning of the end, for Wang.) > What big customers? Alphabet/Google, Amazon, Apple, Meta/Facebook, and > Microsoft are each from one to two orders of magnitude larger in > revenue than Wang was at its peak (your choice of nominal dollars, > constant dollars, or relative to US GDP). A customer, or even a > coordinated alliance of customers, ten to a hundred times bigger than > the US Army in the 1980s is pretty hard to find. Well... the US Government might be a start.? It already does things like require proposal submissions by email, and standard formats for medical billing submitted to Medicare.? It might be a start if the USG required use of PEM for official communication, and maybe if there were a regulation that clarified that PEM is acceptable under HIPPA. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From johnl at iecc.com Sun Mar 6 14:07:43 2022 From: johnl at iecc.com (John Levine) Date: 6 Mar 2022 17:07:43 -0500 Subject: [ih] there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer In-Reply-To: Message-ID: <20220306220744.4662E389279D@ary.qy> >Steven Ehrbar wrote: >> It's simple enough. You repeal Section 230. 
>> >> Under the court precedents prior to passage of the Communications >> Decency Act, a system that moderated what users communicated (in the >> specific court case, Prodigy) was legally liable for what the users >> communicated, while one that didn't (in the specific court case, >> CompuServe) wasn't. ... This is severely oversimplified and basically wrong. There are three models of distributor liability which we can call magazine, you're responsible for everything with minor exceptions, bookstore, you're responsible for what you know about or should reasonably know about, and Fedex, you're responsible for nothing. There are court cases for all of these. Compuserve v. Cubby which was in federal court, used the bookstore model. Then Stratton-Oakmont vs. Prodigy, in NY state court, misread Compuserve and assumed it was either publisher or Fedex so if you do anything you're a publisher. This was a mistake, not least because the bad things said about Stratton-Oakmont turned out to be true. Section 230 said no, all online services are Fedex. While this was reasonable it was also not inevitable. In the absence of Sec 230, Compuserve is the precedent (outside of NY at least) and the bookstore model fits a lot better than either of the other two. Were 230 to be repealed, the next few years would be pretty exciting particularly due to all of the ignorant nonsense about what people imagine 230 to say, e.g., that somehow without it online providers would be required to publish everything anyone says which is absurd. In all likelihood we'd end up with the bookstore model, perhaps with something like the notice and takedown process that OCILLA (part of the DMCA) has for copyright violations. It's far from perfect but it wouldn't be the end of the world. On the other hand, most of the 230 "reform" bills introduced in recent years would be awful, ill-specified carveouts that would only enrich lawyers. Look at SOPA/PIPA which was supposed to deter sex trafficking and in fact had the opposite effect, just as its opponents predicted. R's, John From bpurvy at gmail.com Sun Mar 6 16:54:58 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 6 Mar 2022 16:54:58 -0800 Subject: [ih] there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer In-Reply-To: <20220306220744.4662E389279D@ary.qy> References: <20220306220744.4662E389279D@ary.qy> Message-ID: >There are three models of distributor liability which we can call magazine, you're responsible for everything with minor exceptions, bookstore, you're responsible for what you know about or should reasonably know about, and Fedex, you're responsible for nothat thing. There are court cases for all of these. Quite interesting. I have to admit I've never heard much about the "bookstore model." What are some of the court cases about that? Personally, I believe the proposition that "every issue is brand new with the internet" is just wrong, and legal models that existed pre-internet were perfectly capable of handling it. On Sun, Mar 6, 2022 at 2:07 PM John Levine via Internet-history < internet-history at elists.isoc.org> wrote: > >Steven Ehrbar wrote: > >> It's simple enough. You repeal Section 230. > >> > >> Under the court precedents prior to passage of the Communications > >> Decency Act, a system that moderated what users communicated (in the > >> specific court case, Prodigy) was legally liable for what the users > >> communicated, while one that didn't (in the specific court case, > >> CompuServe) wasn't. ... 
> > This is severely oversimplified and basically wrong. > > There are three models of distributor liability which we can call > magazine, you're responsible for everything with minor exceptions, > bookstore, you're responsible for what you know about or should > reasonably know about, and Fedex, you're responsible for nothing. > There are court cases for all of these. > > Compuserve v. Cubby which was in federal court, used the bookstore model. > Then Stratton-Oakmont vs. Prodigy, in NY state court, misread Compuserve > and assumed it was either publisher or Fedex so if you do anything > you're a publisher. This was a mistake, not least because > the bad things said about Stratton-Oakmont turned out to be true. > > Section 230 said no, all online services are Fedex. While this was > reasonable it was also not inevitable. In the absence of Sec 230, > Compuserve is the precedent (outside of NY at least) and the bookstore > model fits a lot better than either of the other two. Were 230 to > be repealed, the next few years would be pretty exciting particularly > due to all of the ignorant nonsense about what people imagine 230 to > say, e.g., that somehow without it online providers would be required > to publish everything anyone says which is absurd. > > In all likelihood we'd end up with the bookstore model, perhaps with > something like the notice and takedown process that OCILLA (part of > the DMCA) has for copyright violations. It's far from perfect but > it wouldn't be the end of the world. > > On the other hand, most of the 230 "reform" bills introduced in recent > years would be awful, ill-specified carveouts that would only enrich > lawyers. Look at SOPA/PIPA which was supposed to deter sex trafficking > and in fact had the opposite effect, just as its opponents predicted. > > R's, > John > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Sun Mar 6 17:26:09 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 7 Mar 2022 14:26:09 +1300 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> Message-ID: <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> On 07-Mar-22 13:54, Bob Purvy via Internet-history wrote: >> There are three models of distributor liability which we can call > magazine, you're responsible for everything with minor exceptions, > bookstore, you're responsible for what you know about or should > reasonably know about, and Fedex, you're responsible for nothat thing. > There are court cases for all of these. > > Quite interesting. I have to admit I've never heard much about the > "bookstore model." What are some of the > court cases about that? > > Personally, I believe the proposition that "every issue is brand new with > the internet" is just wrong, and > legal models that existed pre-internet were perfectly capable of handling > it. I used to think that, but there are at least three gotchas: 1) In Common Law countries such as the US and UK, arguing from precedent about new technologies seems to be generally accepted. But in other jurisdictions, such as those based on Napoleonic law, this is less clear. 2) You cannot assume that advocates and judges understand the technology well enough to argue and adjudicate correctly. 
There's been a persistent failure to distinguish value from reference, for example, not helped by lousy terminology such as "address" when a URL is meant (even without starting on the distinction between URL, URN and URI). 3) Very few issues are really national; just note how DCMA has affected countries other than the USA, or how GDPR has affected countries outside the EU. Brian > > On Sun, Mar 6, 2022 at 2:07 PM John Levine via Internet-history < > internet-history at elists.isoc.org> wrote: > >>> Steven Ehrbar wrote: >>>> It's simple enough. You repeal Section 230. >>>> >>>> Under the court precedents prior to passage of the Communications >>>> Decency Act, a system that moderated what users communicated (in the >>>> specific court case, Prodigy) was legally liable for what the users >>>> communicated, while one that didn't (in the specific court case, >>>> CompuServe) wasn't. ... >> >> This is severely oversimplified and basically wrong. >> >> There are three models of distributor liability which we can call >> magazine, you're responsible for everything with minor exceptions, >> bookstore, you're responsible for what you know about or should >> reasonably know about, and Fedex, you're responsible for nothing. >> There are court cases for all of these. >> >> Compuserve v. Cubby which was in federal court, used the bookstore model. >> Then Stratton-Oakmont vs. Prodigy, in NY state court, misread Compuserve >> and assumed it was either publisher or Fedex so if you do anything >> you're a publisher. This was a mistake, not least because >> the bad things said about Stratton-Oakmont turned out to be true. >> >> Section 230 said no, all online services are Fedex. While this was >> reasonable it was also not inevitable. In the absence of Sec 230, >> Compuserve is the precedent (outside of NY at least) and the bookstore >> model fits a lot better than either of the other two. Were 230 to >> be repealed, the next few years would be pretty exciting particularly >> due to all of the ignorant nonsense about what people imagine 230 to >> say, e.g., that somehow without it online providers would be required >> to publish everything anyone says which is absurd. >> >> In all likelihood we'd end up with the bookstore model, perhaps with >> something like the notice and takedown process that OCILLA (part of >> the DMCA) has for copyright violations. It's far from perfect but >> it wouldn't be the end of the world. >> >> On the other hand, most of the 230 "reform" bills introduced in recent >> years would be awful, ill-specified carveouts that would only enrich >> lawyers. Look at SOPA/PIPA which was supposed to deter sex trafficking >> and in fact had the opposite effect, just as its opponents predicted. >> >> R's, >> John >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> From bpurvy at gmail.com Sun Mar 6 19:16:50 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 6 Mar 2022 19:16:50 -0800 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: > in other jurisdictions, such as those based on Napoleonic law, this is less clear The mind boggles. I don't know what the US could do except ignore it. Getting ONE legal system to work is hard enough. 
> Very few issues are really national; just note how DCMA has affected countries other than the USA, or how GDPR has affected countries outside the EU. Again, it's really impossible to consider the whole world at once. I'd draw an analogy to getting international patents -- even Google doesn't bother on a lot of its patents. It's horrendously expensive, and any company wanting to be global will have to sell in the US anyway. Recall that the Internet itself faced off against a consciously-international standards effort, the CCITT, and won. Getting the whole world to agree on *anything* is a sucker's game. On Sun, Mar 6, 2022 at 5:26 PM Brian E Carpenter < brian.e.carpenter at gmail.com> wrote: > On 07-Mar-22 13:54, Bob Purvy via Internet-history wrote: > >> There are three models of distributor liability which we can call > > magazine, you're responsible for everything with minor exceptions, > > bookstore, you're responsible for what you know about or should > > reasonably know about, and Fedex, you're responsible for nothat thing. > > There are court cases for all of these. > > > > Quite interesting. I have to admit I've never heard much about the > > "bookstore model." What are some of the > > court cases about that? > > > > Personally, I believe the proposition that "every issue is brand new with > > the internet" is just wrong, and > > legal models that existed pre-internet were perfectly capable of handling > > it. > > I used to think that, but there are at least three gotchas: > > 1) In Common Law countries such as the US and UK, arguing from > precedent about new technologies seems to be generally accepted. > But in other jurisdictions, such as those based on Napoleonic law, > this is less clear. > > 2) You cannot assume that advocates and judges understand the > technology well enough to argue and adjudicate correctly. There's > been a persistent failure to distinguish value from reference, for > example, not helped by lousy terminology such as "address" when > a URL is meant (even without starting on the distinction between > URL, URN and URI). > > 3) Very few issues are really national; just note how DCMA has > affected countries other than the USA, or how GDPR has affected > countries outside the EU. > > Brian > > > > > On Sun, Mar 6, 2022 at 2:07 PM John Levine via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >>> Steven Ehrbar wrote: > >>>> It's simple enough. You repeal Section 230. > >>>> > >>>> Under the court precedents prior to passage of the Communications > >>>> Decency Act, a system that moderated what users communicated (in the > >>>> specific court case, Prodigy) was legally liable for what the users > >>>> communicated, while one that didn't (in the specific court case, > >>>> CompuServe) wasn't. ... > >> > >> This is severely oversimplified and basically wrong. > >> > >> There are three models of distributor liability which we can call > >> magazine, you're responsible for everything with minor exceptions, > >> bookstore, you're responsible for what you know about or should > >> reasonably know about, and Fedex, you're responsible for nothing. > >> There are court cases for all of these. > >> > >> Compuserve v. Cubby which was in federal court, used the bookstore > model. > >> Then Stratton-Oakmont vs. Prodigy, in NY state court, misread Compuserve > >> and assumed it was either publisher or Fedex so if you do anything > >> you're a publisher. 
This was a mistake, not least because > >> the bad things said about Stratton-Oakmont turned out to be true. > >> > >> Section 230 said no, all online services are Fedex. While this was > >> reasonable it was also not inevitable. In the absence of Sec 230, > >> Compuserve is the precedent (outside of NY at least) and the bookstore > >> model fits a lot better than either of the other two. Were 230 to > >> be repealed, the next few years would be pretty exciting particularly > >> due to all of the ignorant nonsense about what people imagine 230 to > >> say, e.g., that somehow without it online providers would be required > >> to publish everything anyone says which is absurd. > >> > >> In all likelihood we'd end up with the bookstore model, perhaps with > >> something like the notice and takedown process that OCILLA (part of > >> the DMCA) has for copyright violations. It's far from perfect but > >> it wouldn't be the end of the world. > >> > >> On the other hand, most of the 230 "reform" bills introduced in recent > >> years would be awful, ill-specified carveouts that would only enrich > >> lawyers. Look at SOPA/PIPA which was supposed to deter sex trafficking > >> and in fact had the opposite effect, just as its opponents predicted. > >> > >> R's, > >> John > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > From johnl at iecc.com Sun Mar 6 19:44:39 2022 From: johnl at iecc.com (John R. Levine) Date: 6 Mar 2022 22:44:39 -0500 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: >> in other jurisdictions, such as those based on Napoleonic law, this is > less clear > > The mind boggles. I don't know what the US could do except ignore it. > Getting ONE legal system to work is hard enough. Just looking at anglophone common law countries, here in the US we have fights over Sec 230. Australia has a newish law that lets the Murdoch owned newspapers shake down Google and Facebook. The Canadian government nearly passed bill C-10 which would have regulated the Internet like radio and TV and may try to pass it again. (If that makes no sense, you understand correctly.) And the UK is in another round of trying to outlaw strong encryption with the usual fearmongering. So I would prefer that we screw up or unscrew one country at a time. Regards, John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. https://jl.ly From joly at punkcast.com Sun Mar 6 19:46:20 2022 From: joly at punkcast.com (Joly MacFie) Date: Sun, 6 Mar 2022 22:46:20 -0500 Subject: [ih] there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> Message-ID: I believe 'bookstore model' refers to Smith vs California 1959 https://en.wikipedia.org/wiki/Smith_v._California joly On Sun, Mar 6, 2022 at 7:54 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > >There are three models of distributor liability which we can call > magazine, you're responsible for everything with minor exceptions, > bookstore, you're responsible for what you know about or should > reasonably know about, and Fedex, you're responsible for nothat thing. > There are court cases for all of these. 
> > Quite interesting. I have to admit I've never heard much about the > "bookstore model." What are some of the > court cases about that? > > Personally, I believe the proposition that "every issue is brand new with > the internet" is just wrong, and > legal models that existed pre-internet were perfectly capable of handling > it. > > On Sun, Mar 6, 2022 at 2:07 PM John Levine via Internet-history < > internet-history at elists.isoc.org> wrote: > > > >Steven Ehrbar wrote: > > >> It's simple enough. You repeal Section 230. > > >> > > >> Under the court precedents prior to passage of the Communications > > >> Decency Act, a system that moderated what users communicated (in the > > >> specific court case, Prodigy) was legally liable for what the users > > >> communicated, while one that didn't (in the specific court case, > > >> CompuServe) wasn't. ... > > > > This is severely oversimplified and basically wrong. > > > > There are three models of distributor liability which we can call > > magazine, you're responsible for everything with minor exceptions, > > bookstore, you're responsible for what you know about or should > > reasonably know about, and Fedex, you're responsible for nothing. > > There are court cases for all of these. > > > > Compuserve v. Cubby which was in federal court, used the bookstore model. > > Then Stratton-Oakmont vs. Prodigy, in NY state court, misread Compuserve > > and assumed it was either publisher or Fedex so if you do anything > > you're a publisher. This was a mistake, not least because > > the bad things said about Stratton-Oakmont turned out to be true. > > > > Section 230 said no, all online services are Fedex. While this was > > reasonable it was also not inevitable. In the absence of Sec 230, > > Compuserve is the precedent (outside of NY at least) and the bookstore > > model fits a lot better than either of the other two. Were 230 to > > be repealed, the next few years would be pretty exciting particularly > > due to all of the ignorant nonsense about what people imagine 230 to > > say, e.g., that somehow without it online providers would be required > > to publish everything anyone says which is absurd. > > > > In all likelihood we'd end up with the bookstore model, perhaps with > > something like the notice and takedown process that OCILLA (part of > > the DMCA) has for copyright violations. It's far from perfect but > > it wouldn't be the end of the world. > > > > On the other hand, most of the 230 "reform" bills introduced in recent > > years would be awful, ill-specified carveouts that would only enrich > > lawyers. Look at SOPA/PIPA which was supposed to deter sex trafficking > > and in fact had the opposite effect, just as its opponents predicted. 
> > > > R's, > > John > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- -------------------------------------- Joly MacFie +12185659365 -------------------------------------- - From brian.e.carpenter at gmail.com Sun Mar 6 20:07:47 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 7 Mar 2022 17:07:47 +1300 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: <7877fe57-45a1-26ca-1bdb-20c6ca6330f5@gmail.com> On 07-Mar-22 16:44, John R. Levine wrote: >>> in other jurisdictions, such as those based on Napoleonic law, this is >> less clear >> >> The mind boggles. I don't know what the US could do except ignore it. >> Getting ONE legal system to work is hard enough. > > Just looking at anglophone common law countries, here in the US we have > fights over Sec 230. Australia has a newish law that lets the Murdoch > owned newspapers shake down Google and Facebook. The Canadian government > nearly passed bill C-10 which would have regulated the Internet like radio > and TV and may try to pass it again. (If that makes no sense, you > understand correctly.) And the UK is in another round of trying to outlaw > strong encryption with the usual fearmongering. > > So I would prefer that we screw up or unscrew one country at a time. Well yes. I wasn't suggesting that One Law to Rule Them All was an option; just pointing out that local laws often have effects far beyond their apparent jurisdiction. My opinions of the so-called Internet Governance Forum are close to unprintable and generally involve unprofessional language. Brian From el at lisse.NA Mon Mar 7 02:10:09 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Mon, 7 Mar 2022 12:10:09 +0200 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: My impression/experience is quite the opposite, lawyers are usually very clever people (they make a living using the word as a weapon so to speak, and can read themselves into complex issues amazingly quickly) but of course ambulance chasers might find that difficult. In Common Law countries judges are selected from experienced lawyers, ie it's further, positive selection. But, they only adjudicate the issue(s) before them ("on the papers") as narrowly as possible. So you get what you pay for. In Germany the grade of your law school exam and the one after the two year internship (akin to the Bar Exam) are the determining factor of becoming a (junior) judge and then progress to higher courts. These courts adjudge a little wider and if there is complex matter the courts will hear both sides experts and perhaps even appoint one. Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to good job of framing the jargon into plain English/German. el On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: [...] > 2) You cannot assume that advocates and judges understand the > technology well enough to argue and adjudicate correctly. 
There's > been a persistent failure to distinguish value from reference, for > example, not helped by lousy terminology such as "address" when > a URL is meant (even without starting on the distinction between > URL, URN and URI). [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply From bpurvy at gmail.com Mon Mar 7 08:47:22 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 7 Mar 2022 08:47:22 -0800 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: If you'll pardon some bragging, I do have personal experience with German lawyers and German courts. I think I earned my salary for my entire 11 1/2 years there on this one. (Oddly, Vint, I never talked to you about it.) They are very, very good. In 2014, things were looking very bleak for Maps in Germany. Microsoft had bought a patent for online maps, and the German courts had found that Google Maps infringed it. They were actually going to grant an injunction against Maps while the validity of the patent was adjudicated. They use different courts for infringement and invalidity. Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about searching for prior art. Things get thin before about 1996. A couple of days, and nothing. Then out of desperation, I went to Google Scholar and searched "client server maps." Who'd have thought of *that*? The very first result was this . I made a claim chart and it seemed to work. I flew down to LA to meet our two German attorneys from Quinn Emanuel, and our maps expert. We spent the day looking for something a little less technical, but came up empty, so we went with this. Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my life. He has a PhD in computer science plus a law degree. He used this paper and asked the judge to reconsider the injunction. Amazingly, he did. Ralf later went to the Patent Court in Munich, and destroyed Microsoft's patent. Especially satisfying was that they tried to dispute the date of the DeWitt publication, and Ralf produced a Microsoft press release about their hiring of DeWitt. It cited that paper. I didn't get to go to that, since I wasn't needed, but what I heard is that their courts are much less formal than ours, and more down-to-earth. You can just walk into the judge's chambers and talk to him, which would get you arrested in the United States. So if you claim lawyers and judges are stupid and don't understand tech: no, you're wrong. On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < internet-history at elists.isoc.org> wrote: > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. > > In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. > > So you get what you pay for. 
> > > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts. These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > > el > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] > > 2) You cannot assume that advocates and judges understand the > > technology well enough to argue and adjudicate correctly. There's > > been a persistent failure to distinguish value from reference, for > > example, not helped by lousy terminology such as "address" when > > a URL is meant (even without starting on the distinction between > > URL, URN and URI). > [...] > -- > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From vint at google.com Mon Mar 7 08:53:24 2022 From: vint at google.com (Vint Cerf) Date: Mon, 7 Mar 2022 11:53:24 -0500 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: point well taken - we should value knowledge where it exists and facilitate it where it doesn't. v On Mon, Mar 7, 2022 at 11:47 AM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > If you'll pardon some bragging, I do have personal experience with German > lawyers and German courts. I think I earned my salary for my entire 11 1/2 > years there on this one. (Oddly, Vint, I never talked to you about it.) > They are very, very good. > > In 2014, things were looking > very bleak for > Maps in Germany. Microsoft had bought a patent for online maps, and the > German courts had found that Google Maps infringed it. They were actually > going to grant an injunction against Maps while the validity of the patent > was adjudicated. They use different courts for infringement and invalidity. > > Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about > searching for prior art. Things get thin before about 1996. A couple of > days, and nothing. > > Then out of desperation, I went to Google Scholar and searched "client > server maps." Who'd have thought of *that*? The very first result was this > < > https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.4371&rep=rep1&type=pdf > >. > I made a claim chart and it seemed to work. I flew down to LA to meet our > two German attorneys from Quinn Emanuel, and our maps expert. We spent the > day looking for something a little less technical, but came up empty, so we > went with this. > > Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my > life. He has a PhD in computer science plus a law degree. He used this > paper and asked the judge to reconsider the injunction. Amazingly, he did. 
> > Ralf later went to the Patent Court in Munich, and destroyed > < > https://www.zdnet.com/article/microsoft-loses-mapping-patent-tussle-in-german-fight-with-google-and-motorola/ > > > Microsoft's patent. Especially satisfying was that they tried to dispute > the date of the DeWitt publication, and Ralf produced a Microsoft press > release about their hiring of DeWitt. It cited that paper. > > I didn't get to go to that, since I wasn't needed, but what I heard is that > their courts are much less formal than ours, and more down-to-earth. You > can just walk into the judge's chambers and talk to him, which would get > you arrested in the United States. > > So if you claim lawyers and judges are stupid and don't understand tech: > no, you're wrong. > > On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < > internet-history at elists.isoc.org> wrote: > > > My impression/experience is quite the opposite, lawyers are usually very > > clever people (they make a living using the word as a weapon so to > > speak, and can read themselves into complex issues amazingly quickly) but > > of course ambulance chasers might find that difficult. > > > > In Common Law countries judges are selected from experienced lawyers, ie > > it's further, positive selection. But, they only adjudicate the > > issue(s) before them ("on the papers") as narrowly as possible. > > > > So you get what you pay for. > > > > > > In Germany the grade of your law school exam and the one after the two > > year internship (akin to the Bar Exam) are the determining factor of > > becoming a (junior) judge and then progress to higher courts. These > > courts adjudge a little wider and if there is complex matter the courts > > will hear both sides experts and perhaps even appoint one. > > > > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > > good job of framing the jargon into plain English/German. > > > > el > > > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > > [...] > > > 2) You cannot assume that advocates and judges understand the > > > technology well enough to argue and adjudicate correctly. There's > > > been a persistent failure to distinguish value from reference, for > > > example, not helped by lousy terminology such as "address" when > > > a URL is meant (even without starting on the distinction between > > > URL, URN and URI). > > [...] > > -- > > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > > el at lisse.NA / * | Telephone: +264 81 124 6733 > <+264%2081%20124%206733> (cell) > > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > > 10007, Namibia ;____/ Sect 20 of Act No. 
4 of 2019 may apply > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From b_a_denny at yahoo.com Mon Mar 7 10:45:52 2022 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 7 Mar 2022 18:45:52 +0000 (UTC) Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: <1008200089.1137511.1646678752864@mail.yahoo.com> I am curious what you felt you needed to find to nullify the patent. Was it just online maps or something else? I do have a memory of at least one packet radio demo where we showed where a mobile packet radio was on a map using a Sun workstation. Unfortunately I can't remember the name of the project at SRI which was doing the application that involved maps. We just borrowed it for demo purposes. This has to be quite a few years before 1996. There was another interesting map project called TerraVision that I think of as an early Google Earth (3D mapping). This was out of the AI center at SRI. I only know about it because I was asked to help with the SRI network connection to the MAGIC testbed (an ATM testbed). The AI people used the testbed for their application. This was during the Clinton years because I am pretty sure there was a demo given to Al Gore when he was vice president. I left SRI in 1996 so TerraVision had to have a working version before then. Coincidentally, while I was just checking the spelling of TerraVision I found out the SRI application caused a German company named ART+COM to lose a recent patent claim against Google in the U.S. (2016). The German company even used the name Terravision! I will have to look a little bit more into this. barbara On Monday, March 7, 2022, 08:47:43 AM PST, Bob Purvy via Internet-history wrote: If you'll pardon some bragging, I do have personal experience with German lawyers and German courts. I think I earned my salary for my entire 11 1/2 years there on this one. (Oddly, Vint, I never talked to you about it.) They are very, very good. In 2014, things were looking very bleak for Maps in Germany. Microsoft had bought a patent for online maps, and the German courts had found that Google Maps infringed it. They were actually going to grant an injunction against Maps while the validity of the patent was adjudicated. They use different courts for infringement and invalidity. Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about searching for prior art. Things get thin before about 1996. A couple of days, and nothing. Then out of desperation, I went to Google Scholar and searched "client server maps." Who'd have thought of *that*? The very first result was this . I made a claim chart and it seemed to work. I flew down to LA to meet our two German attorneys from Quinn Emanuel, and our maps expert. We spent the day looking for something a little less technical, but came up empty, so we went with this. Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my life. He has a PhD in computer science plus a law degree. 
He used this paper and asked the judge to reconsider the injunction. Amazingly, he did. Ralf later went to the Patent Court in Munich, and destroyed Microsoft's patent. Especially satisfying was that they tried to dispute the date of the DeWitt publication, and Ralf produced a Microsoft press release about their hiring of DeWitt. It cited that paper. I didn't get to go to that, since I wasn't needed, but what I heard is that their courts are much less formal than ours, and more down-to-earth. You can just walk into the judge's chambers and talk to him, which would get you arrested in the United States. So if you claim lawyers and judges are stupid and don't understand tech: no, you're wrong. On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < internet-history at elists.isoc.org> wrote: > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. > > In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. > > So you get what you pay for. > > > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts.? These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > > el > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] > > 2) You cannot assume that advocates and judges understand the > > technology well enough to argue and adjudicate correctly. There's > > been a persistent failure to distinguish value from reference, for > > example, not helped by lousy terminology such as "address" when > > a URL is meant (even without starting on the distinction between > > URL, URN and URI). > [...] > -- > Dr. Eberhard W. Lisse? \? ? ? ? /? ? ? Obstetrician & Gynaecologist > el at lisse.NA? ? ? ? ? ? / *? ? ? |? Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht? \? ? ? /? If this email is signed with GPG/PGP > 10007, Namibia? ? ? ? ? ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Mon Mar 7 11:32:57 2022 From: b_a_denny at yahoo.com (Barbara Denny) Date: Mon, 7 Mar 2022 19:32:57 +0000 (UTC) Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <1008200089.1137511.1646678752864@mail.yahoo.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> <1008200089.1137511.1646678752864@mail.yahoo.com> Message-ID: <2124953413.1169539.1646681577992@mail.yahoo.com> Given how much I discovered about TerraVision in the news,? I thought I would add the PI for the project was Yvan Leclerc.? 
Unfortunately he became seriously ill and died very young in 2002. He was a great guy to work with. barbara On Monday, March 7, 2022, 10:45:52 AM PST, Barbara Denny wrote: I am curious what you felt you needed to find to nullify the patent.? Was it just online maps or something else?? I do have a? memory of at least one? packet radio demo where we showed where a mobile packet radio was on a map using a sun workstation.? Unfortunately I can't remember the name of the project at SRI which was doing the application that involved maps. We just borrowed it for demo purposes. This has to be quite a few years before 1996. There was really another interesting map project called TerraVision that I think of as an early Google Earth (3D mapping).? ?This was out of the AI center at SRI.? I only know about it because I was asked to help with the SRI network connection to the MAGIC testbed (an ATM testbed). The AI people used the testbed for their application.? ?This was during the Clinton years because I am pretty sure there was a demo given to Al Gore when he was vice president. I left SRI in 1996 so TerraVision had to have a working version before then.? Coincidentally while I was just checking the spelling of TerraVision I found out the SRI application caused a German company named ART+COM? to loose a recent patent claim against Google in the U.S. (2016). The German company even used the name Terravision! I will have to look a little bit more into this. barbara On Monday, March 7, 2022, 08:47:43 AM PST, Bob Purvy via Internet-history wrote: If you'll pardon some bragging, I do have personal experience with German lawyers and German courts. I think I earned my salary for my entire 11 1/2 years there on this one. (Oddly, Vint, I never talked to you about it.) They are very, very good. In 2014, things were looking very bleak for Maps in Germany. Microsoft had bought a patent for online maps, and the German courts had found that Google Maps infringed it. They were actually going to grant an injunction against Maps while the validity of the patent was adjudicated. They use different courts for infringement and invalidity. Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about searching for prior art. Things get thin before about 1996. A couple of days, and nothing. Then out of desperation, I went to Google Scholar and searched "client server maps." Who'd have thought of *that*? The very first result was this . I made a claim chart and it seemed to work. I flew down to LA to meet our two German attorneys from Quinn Emanuel, and our maps expert. We spent the day looking for something a little less technical, but came up empty, so we went with this. Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my life. He has a PhD in computer science plus a law degree. He used this paper and asked the judge to reconsider the injunction. Amazingly, he did. Ralf later went to the Patent Court in Munich, and destroyed Microsoft's patent. Especially satisfying was that they tried to dispute the date of the DeWitt publication, and Ralf produced a Microsoft press release about their hiring of DeWitt. It cited that paper. I didn't get to go to that, since I wasn't needed, but what I heard is that their courts are much less formal than ours, and more down-to-earth. You can just walk into the judge's chambers and talk to him, which would get you arrested in the United States. So if you claim lawyers and judges are stupid and don't understand tech: no, you're wrong. 
On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < internet-history at elists.isoc.org> wrote: > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. > > In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. > > So you get what you pay for. > > > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts.? These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > > el > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] > > 2) You cannot assume that advocates and judges understand the > > technology well enough to argue and adjudicate correctly. There's > > been a persistent failure to distinguish value from reference, for > > example, not helped by lousy terminology such as "address" when > > a URL is meant (even without starting on the distinction between > > URL, URN and URI). > [...] > -- > Dr. Eberhard W. Lisse? \? ? ? ? /? ? ? Obstetrician & Gynaecologist > el at lisse.NA? ? ? ? ? ? / *? ? ? |? Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht? \? ? ? /? If this email is signed with GPG/PGP > 10007, Namibia? ? ? ? ? ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Mon Mar 7 11:58:30 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 7 Mar 2022 11:58:30 -0800 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <2124953413.1169539.1646681577992@mail.yahoo.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> <1008200089.1137511.1646678752864@mail.yahoo.com> <2124953413.1169539.1646681577992@mail.yahoo.com> Message-ID: I think TerraVision was one of the ones we looked at, and art+com was one of my cases later. I don't recall what happened to it. The thing about patents that turns off engineers is, they'll hold up some project and say "well, it's online maps -- what else do you need?" and the answer is "a lot of language from the claims." Which will seem completely irrelevant, except it's not. If a project happened back then, it depends on what they wrote down and published, and what a "person of ordinary skill in the art" would have been able to discover from it at the time. If they never released the software, OR if they did but it can't be recovered now, then you're SOL. On Mon, Mar 7, 2022 at 11:34 AM Barbara Denny wrote: > Given how much I discovered about TerraVision in the news, I thought I > would add the PI for the project was Yvan Leclerc. 
Unfortunately he became > seriously ill and died very young in 2002. > > He was a great guy to work with. > > barbara > > > On Monday, March 7, 2022, 10:45:52 AM PST, Barbara Denny < > b_a_denny at yahoo.com> wrote: > > > I am curious what you felt you needed to find to nullify the patent. Was > it just online maps or something else? I do have a memory of at least > one packet radio demo where we showed where a mobile packet radio was on a > map using a sun workstation. Unfortunately I can't remember the name of > the project at SRI which was doing the application that involved maps. We > just borrowed it for demo purposes. This has to be quite a few years before > 1996. > > There was really another interesting map project called TerraVision that I > think of as an early Google Earth (3D mapping). This was out of the AI > center at SRI. I only know about it because I was asked to help with the > SRI network connection to the MAGIC testbed (an ATM testbed). The AI people > used the testbed for their application. This was during the Clinton years > because I am pretty sure there was a demo given to Al Gore when he was vice > president. I left SRI in 1996 so TerraVision had to have a working version > before then. > > Coincidentally while I was just checking the spelling of TerraVision I > found out the SRI application caused a German company named ART+COM to > loose a recent patent claim against Google in the U.S. (2016). The German > company even used the name Terravision! I will have to look a little bit > more into this. > > barbara > > > On Monday, March 7, 2022, 08:47:43 AM PST, Bob Purvy via Internet-history < > internet-history at elists.isoc.org> wrote: > > > If you'll pardon some bragging, I do have personal experience with German > lawyers and German courts. I think I earned my salary for my entire 11 1/2 > years there on this one. (Oddly, Vint, I never talked to you about it.) > They are very, very good. > > In 2014, things were looking > very bleak for > Maps in Germany. Microsoft had bought a patent for online maps, and the > German courts had found that Google Maps infringed it. They were actually > going to grant an injunction against Maps while the validity of the patent > was adjudicated. They use different courts for infringement and invalidity. > > Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about > searching for prior art. Things get thin before about 1996. A couple of > days, and nothing. > > Then out of desperation, I went to Google Scholar and searched "client > server maps." Who'd have thought of *that*? The very first result was this > < > https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.4371&rep=rep1&type=pdf > >. > I made a claim chart and it seemed to work. I flew down to LA to meet our > two German attorneys from Quinn Emanuel, and our maps expert. We spent the > day looking for something a little less technical, but came up empty, so we > went with this. > > Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my > life. He has a PhD in computer science plus a law degree. He used this > paper and asked the judge to reconsider the injunction. Amazingly, he did. > > Ralf later went to the Patent Court in Munich, and destroyed > < > https://www.zdnet.com/article/microsoft-loses-mapping-patent-tussle-in-german-fight-with-google-and-motorola/ > > > Microsoft's patent. 
Especially satisfying was that they tried to dispute > the date of the DeWitt publication, and Ralf produced a Microsoft press > release about their hiring of DeWitt. It cited that paper. > > I didn't get to go to that, since I wasn't needed, but what I heard is that > their courts are much less formal than ours, and more down-to-earth. You > can just walk into the judge's chambers and talk to him, which would get > you arrested in the United States. > > So if you claim lawyers and judges are stupid and don't understand tech: > no, you're wrong. > > On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < > internet-history at elists.isoc.org> wrote: > > > My impression/experience is quite the opposite, lawyers are usually very > > clever people (they make a living using the word as a weapon so to > > speak, and can read themselves into complex issues amazingly quickly) but > > of course ambulance chasers might find that difficult. > > > > In Common Law countries judges are selected from experienced lawyers, ie > > it's further, positive selection. But, they only adjudicate the > > issue(s) before them ("on the papers") as narrowly as possible. > > > > So you get what you pay for. > > > > > > In Germany the grade of your law school exam and the one after the two > > year internship (akin to the Bar Exam) are the determining factor of > > becoming a (junior) judge and then progress to higher courts. These > > courts adjudge a little wider and if there is complex matter the courts > > will hear both sides experts and perhaps even appoint one. > > > > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > > good job of framing the jargon into plain English/German. > > > > el > > > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > > [...] > > > 2) You cannot assume that advocates and judges understand the > > > technology well enough to argue and adjudicate correctly. There's > > > been a persistent failure to distinguish value from reference, for > > > example, not helped by lousy terminology such as "address" when > > > a URL is meant (even without starting on the distinction between > > > URL, URN and URI). > > [...] > > -- > > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Mon Mar 7 12:05:23 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 8 Mar 2022 09:05:23 +1300 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: On 07-Mar-22 23:10, Dr Eberhard W Lisse via Internet-history wrote: > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. 
I would agree with that in general, but we have seen cases in which lawyers have clearly failed to understand technical issues. I would extend your comment to also cover fraud squad ("brigade financière") police officers, who quite often have legal training. In the early days of cybercrime they were the only police who could understand the topic, or even what the Internet was. Anecdote: Sometime in the late 1980s, we hosted a meeting at CERN between two officers from the "brigade financière" in Paris, an officer from the Geneva "brigade financière", and an officer from a German service that I remember she described as the "Informatikpolizei", although that may not have been the official title. In any case I had to give them a short lecture about what a modem was, the concepts of dial-up and remote login, and exactly how a bad person could use a DECnet host at CERN to hop from a stolen login in Germany via Switzerland to a target host in France. I have to say that the fraud squad officers understood this very quickly, although it was completely new to them. Regards Brian Carpenter > > In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. > > So you get what you pay for. > > > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts. These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > > el > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] >> 2) You cannot assume that advocates and judges understand the >> technology well enough to argue and adjudicate correctly. There's >> been a persistent failure to distinguish value from reference, for >> example, not helped by lousy terminology such as "address" when >> a URL is meant (even without starting on the distinction between >> URL, URN and URI). > [...] From el at lisse.NA Mon Mar 7 12:35:31 2022 From: el at lisse.NA (Dr Eberhard W Lisse) Date: Mon, 7 Mar 2022 22:35:31 +0200 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: I have briefed the local Financial Crimes unit a number of years back. They were very clued up to ATM fraud but not at all to Internet Fraud. But, they realized this quickly and were very concerned. Specialized unit officers do not get put there because they are simpletons. And German CID officers all have a college degree (undertaken as part of their training). el On 2022-03-07 22:05 , Brian E Carpenter via Internet-history wrote: > On 07-Mar-22 23:10, Dr Eberhard W Lisse via Internet-history wrote: [...] > I have to say that the fraud squad officers understood this very > quickly, although it was completely new to them. > > Regards > Brian Carpenter [...] -- Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP 10007, Namibia ;____/ Sect 20 of Act No. 
4 of 2019 may apply From bpurvy at gmail.com Mon Mar 7 13:07:49 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 7 Mar 2022 13:07:49 -0800 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: further anecdote: for years, I was the "engineer interviewer" for patent lawyer candidates at Google. I liked that much better than interviewing engineers. "After all, who *wouldn't* love torturing lawyers?" I would say. I'd describe a patent I had personal knowledge of (not a Google patent) and have them write claims for it. Kinda like a leetcode question for programmers. On Mon, Mar 7, 2022 at 12:35 PM Dr Eberhard W Lisse via Internet-history < internet-history at elists.isoc.org> wrote: > I have briefed the local Financial Crimes unit a number of years back. > > They were very clued up to ATM fraud but not at all to Internet Fraud. > But, they realized this quickly and were very concerned. > > Specialized unit officers do not get put there because they are > simpletons. And German CID officers all have a college degree > (undertaken as part of their training). > > el > > > On 2022-03-07 22:05 , Brian E Carpenter via Internet-history wrote: > > On 07-Mar-22 23:10, Dr Eberhard W Lisse via Internet-history wrote: > [...] > > I have to say that the fraud squad officers understood this very > > quickly, although it was completely new to them. > > > > Regards > > Brian Carpenter > [...] > > -- > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From bpurvy at gmail.com Mon Mar 7 15:03:24 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 7 Mar 2022 15:03:24 -0800 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: One more anecdote and that's it, I promise: Briefly, in 2010 or so, everyone thought that the "throw weight" of your patent portfolio was all-important. Nortel's portfolio famously sold for $4.5 billion. Google bought Motorola for theirs ?? So we were interviewing lawyers who specialized in buying and selling patents. I thought, "what's a good Googley question for someone like that?" What I came up with was this: * You have the opportunity to bid on 1,000 patents in the wireless space. You have 24 hours to decide whether to bid, and how much. You can call on any resources in Google to help you. What do you do?* I gave this to 13 applicants, I think, and every answer was different. One person, whom we hired for the Public Policy team in DC, said she'd get a lot of people and read them all. I thought, "Well, that's not prohibited. Not a bad answer." Another reasonable answer was "$300,000 per patent." There are also lots of ways to assess their value in an automated way, most of them garbage. On Mon, Mar 7, 2022 at 1:07 PM Bob Purvy wrote: > further anecdote: > > for years, I was the "engineer interviewer" for patent lawyer candidates > at Google. I liked that much better than interviewing engineers. 
"After > all, who *wouldn't* love torturing lawyers?" I would say. > > I'd describe a patent I had personal knowledge of (not a Google patent) > and have them write claims for it. Kinda like a leetcode question for > programmers. > > On Mon, Mar 7, 2022 at 12:35 PM Dr Eberhard W Lisse via Internet-history < > internet-history at elists.isoc.org> wrote: > >> I have briefed the local Financial Crimes unit a number of years back. >> >> They were very clued up to ATM fraud but not at all to Internet Fraud. >> But, they realized this quickly and were very concerned. >> >> Specialized unit officers do not get put there because they are >> simpletons. And German CID officers all have a college degree >> (undertaken as part of their training). >> >> el >> >> >> On 2022-03-07 22:05 , Brian E Carpenter via Internet-history wrote: >> > On 07-Mar-22 23:10, Dr Eberhard W Lisse via Internet-history wrote: >> [...] >> > I have to say that the fraud squad officers understood this very >> > quickly, although it was completely new to them. >> > >> > Regards >> > Brian Carpenter >> [...] >> >> -- >> Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist >> el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) >> PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP >> 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > From ned+internet-history at mrochek.com Mon Mar 7 10:37:28 2022 From: ned+internet-history at mrochek.com (ned+internet-history at mrochek.com) Date: Mon, 07 Mar 2022 10:37:28 -0800 (PST) Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] {dkim-fail} In-Reply-To: "Your message dated Mon, 07 Mar 2022 12:10:09 +0200" References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: <01SAJZ330774000RIW@mauve.mrochek.com> > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. I've found corporate patent counsel to be very very good - as in able to catch my techinical errors good. Quite a few have an engineering background. And the appellate bench tends to even better. Beyond that... I've done a lot of depositions, and in that regard my experience has been poor. Perhaps the worst was when I was deposed in a case where not only did the plantiff's lawwer not understand any of the fairly low level tech involved, he didn't care. He spent a lot of time about the poor preparation of this own materials by his team and how he was going to get a bunch of people fired because of it. Most of his questions made no sense. But you still have to answer, so I eventually stumbled into the approach of letting him ask the question, wait for all the other lawyers to finish objecting (there were multiple parties involved in this fiasco), then saying something along the lines of, "Your question makes no sense, the closest question I can think of is , the answer to that question is ." This went on for about six hours, at which point one of the other lawyers completely lost it and started screaming at the guy. There much more to it, but the rest gets into privileged material. Suffice to say it was complete shitshow that went on for weeks. 
> In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. In this Common Law state judges are elected by the public, leading to such amusements as a baker (with no legal credentials) recently getting elected to - superior court, I think - in Santa Monica. The bigger problem is you end up with a lot of former prosecutors as judges. Very low tech savvy overall, to say nothing of the obvious biases. > So you get what you pay for. Or not. In a recent local case I'm familiar with, a lawyer with a good reputation basically stopped showing up to court dates. Seems he developed some medical condition and tried to hide it rather than withdraw. The winners were delighted about the win and thought it was all over. Wrong. The loser decided to appeal, usually a waste of time but not in cases of flagrant malpractice. And as I said, appellate lawyers tend to be very good at what they do. Not only did the appellate brief void the verdict, it forced an immediate settlement. Ned P.S. If you have any interest in appellate law and the people who practice it, I highly recommend: https://appellatesquawk.wordpress.com/ > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts. These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > el > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] > > 2) You cannot assume that advocates and judges understand the > > technology well enough to argue and adjudicate correctly. There's > > been a persistent failure to distinguish value from reference, for > > example, not helped by lousy terminology such as "address" when > > a URL is meant (even without starting on the distinction between > > URL, URN and URI). > [...] > -- > Dr. Eberhard W. Lisse \ / Obstetrician & Gynaecologist > el at lisse.NA / * | Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht \ / If this email is signed with GPG/PGP > 10007, Namibia ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From johnl at iecc.com Mon Mar 7 18:41:24 2022 From: johnl at iecc.com (John Levine) Date: 7 Mar 2022 21:41:24 -0500 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] {dkim-fail} In-Reply-To: <01SAJZ330774000RIW@mauve.mrochek.com> Message-ID: <20220308024124.7967E38A580C@ary.qy> It appears that Ned Freed via Internet-history said: >I've found corporate patent counsel to be very very good - as in able to catch >my techinical errors good. Quite a few have an engineering background. It is my impression that a science or engineering degree is a prerequisite if you want to go into patent law. One of my college classmates got a PhD in biochemistry and after a few years in industry went back to school and did patent law ever since. She mentioned one time she'd gotten something like 400 patents, which is a lot. 
So I looked at some of them and found they were all the same other than minor changes in a chemical formula. "I didn't know you could get the same patent 400 times", I told her when I saw her next. She smiled and said, "You're just about the only person who noticed." R's, John >P.S. If you have any interest in appellate law and the people who practice it, >I highly recommend: > > https://appellatesquawk.wordpress.com/ Looking from the other direction I like the Short Circuit blog/newsletter/podcast, which has a weekly roundup of snarky but well-informed summaries of appellate court decisions. It's run by the libertarian Institute for Justice but they know their law. https://shortcircuit.org/ R's, John From b_a_denny at yahoo.com Mon Mar 7 18:53:10 2022 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 8 Mar 2022 02:53:10 +0000 (UTC) Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] {dkim-fail} In-Reply-To: <20220308024124.7967E38A580C@ary.qy> References: <01SAJZ330774000RIW@mauve.mrochek.com> <20220308024124.7967E38A580C@ary.qy> Message-ID: <1592716299.57660.1646707990957@mail.yahoo.com> A friend's wife is a patent lawyer.? She has a Ph.D. in Engineering from MIT and a law degree from Harvard. As to professional work I think she has only practiced law. barbara On Monday, March 7, 2022, 06:41:39 PM PST, John Levine via Internet-history wrote: It appears that Ned Freed via Internet-history said: >I've found corporate patent counsel to be very very good -? as in able to catch >my techinical errors good. Quite a few have an engineering background. It is my impression that a science or engineering degree is a prerequisite if you want to go into patent law.? One of my college classmates got a PhD in biochemistry and after a few years in industry went back to school and did patent law ever since. She mentioned one time she'd gotten something like 400 patents, which is a lot.? So I looked at some of them and found they were all the same other than minor changes in a chemical formula.? "I didn't know you could get the same patent 400 times", I told her when I saw her next.? She smiled and said, "You're just about the only person who noticed." R's, John >P.S. If you have any interest in appellate law and the people who practice it, >I highly recommend: > >? ? https://appellatesquawk.wordpress.com/ Looking from the other direction I like the Short Circuit blog/newsletter/podcast, which has a weekly roundup of snarky but well-informed summaries of appellate court decisions. It's run by the libertarian Institute for Justice but they know their law. https://shortcircuit.org/ R's, John -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From b_a_denny at yahoo.com Mon Mar 7 23:08:44 2022 From: b_a_denny at yahoo.com (Barbara Denny) Date: Tue, 8 Mar 2022 07:08:44 +0000 (UTC) Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <2124953413.1169539.1646681577992@mail.yahoo.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> <1008200089.1137511.1646678752864@mail.yahoo.com> <2124953413.1169539.1646681577992@mail.yahoo.com> Message-ID: <1581137419.115152.1646723324909@mail.yahoo.com> In poking a little further I want to also mention Stephen Lau.? He worked with Yvan and I am saddened to say he has also passed away from COVID-19 in March 2020.? 
Yvan told me Stephen made important contributions in the development of TerraVision.? ?Stephen? did provide key testimony for the patent infringement case. According to one source, Stephen testified he had discussed and had shared SRI's code with the German company. barbara On Monday, March 7, 2022, 11:32:58 AM PST, Barbara Denny wrote: Given how much I discovered about TerraVision in the news,? I thought I would add the PI for the project was Yvan Leclerc.? Unfortunately he became seriously ill and died very young in 2002. He was a great guy to work with. barbara On Monday, March 7, 2022, 10:45:52 AM PST, Barbara Denny wrote: I am curious what you felt you needed to find to nullify the patent.? Was it just online maps or something else?? I do have a? memory of at least one? packet radio demo where we showed where a mobile packet radio was on a map using a sun workstation.? Unfortunately I can't remember the name of the project at SRI which was doing the application that involved maps. We just borrowed it for demo purposes. This has to be quite a few years before 1996. There was really another interesting map project called TerraVision that I think of as an early Google Earth (3D mapping).? ?This was out of the AI center at SRI.? I only know about it because I was asked to help with the SRI network connection to the MAGIC testbed (an ATM testbed). The AI people used the testbed for their application.? ?This was during the Clinton years because I am pretty sure there was a demo given to Al Gore when he was vice president. I left SRI in 1996 so TerraVision had to have a working version before then.? Coincidentally while I was just checking the spelling of TerraVision I found out the SRI application caused a German company named ART+COM? to loose a recent patent claim against Google in the U.S. (2016). The German company even used the name Terravision! I will have to look a little bit more into this. barbara On Monday, March 7, 2022, 08:47:43 AM PST, Bob Purvy via Internet-history wrote: If you'll pardon some bragging, I do have personal experience with German lawyers and German courts. I think I earned my salary for my entire 11 1/2 years there on this one. (Oddly, Vint, I never talked to you about it.) They are very, very good. In 2014, things were looking very bleak for Maps in Germany. Microsoft had bought a patent for online maps, and the German courts had found that Google Maps infringed it. They were actually going to grant an injunction against Maps while the validity of the patent was adjudicated. They use different courts for infringement and invalidity. Naturally, all of us in Patent Litigation (I was a Tech Advisor) set about searching for prior art. Things get thin before about 1996. A couple of days, and nothing. Then out of desperation, I went to Google Scholar and searched "client server maps." Who'd have thought of *that*? The very first result was this . I made a claim chart and it seemed to work. I flew down to LA to meet our two German attorneys from Quinn Emanuel, and our maps expert. We spent the day looking for something a little less technical, but came up empty, so we went with this. Ralf Uhrich is honestly one of the 10 smartest people I've ever met in my life. He has a PhD in computer science plus a law degree. He used this paper and asked the judge to reconsider the injunction. Amazingly, he did. Ralf later went to the Patent Court in Munich, and destroyed Microsoft's patent. 
Especially satisfying was that they tried to dispute the date of the DeWitt publication, and Ralf produced a Microsoft press release about their hiring of DeWitt. It cited that paper. I didn't get to go to that, since I wasn't needed, but what I heard is that their courts are much less formal than ours, and more down-to-earth. You can just walk into the judge's chambers and talk to him, which would get you arrested in the United States. So if you claim lawyers and judges are stupid and don't understand tech: no, you're wrong. On Mon, Mar 7, 2022 at 2:10 AM Dr Eberhard W Lisse via Internet-history < internet-history at elists.isoc.org> wrote: > My impression/experience is quite the opposite, lawyers are usually very > clever people (they make a living using the word as a weapon so to > speak, and can read themselves into complex issues amazingly quickly) but > of course ambulance chasers might find that difficult. > > In Common Law countries judges are selected from experienced lawyers, ie > it's further, positive selection. But, they only adjudicate the > issue(s) before them ("on the papers") as narrowly as possible. > > So you get what you pay for. > > > In Germany the grade of your law school exam and the one after the two > year internship (akin to the Bar Exam) are the determining factor of > becoming a (junior) judge and then progress to higher courts.? These > courts adjudge a little wider and if there is complex matter the courts > will hear both sides experts and perhaps even appoint one. > > > Many judgements I have read (US, UK, ZA, NA and DE) to a reasonable to > good job of framing the jargon into plain English/German. > > el > > On 2022-03-07 03:26 , Brian E Carpenter via Internet-history wrote: > [...] > > 2) You cannot assume that advocates and judges understand the > > technology well enough to argue and adjudicate correctly. There's > > been a persistent failure to distinguish value from reference, for > > example, not helped by lousy terminology such as "address" when > > a URL is meant (even without starting on the distinction between > > URL, URN and URI). > [...] > -- > Dr. Eberhard W. Lisse? \? ? ? ? /? ? ? Obstetrician & Gynaecologist > el at lisse.NA? ? ? ? ? ? / *? ? ? |? Telephone: +264 81 124 6733 (cell) > PO Box 8421 Bachbrecht? \? ? ? /? If this email is signed with GPG/PGP > 10007, Namibia? ? ? ? ? ;____/ Sect 20 of Act No. 4 of 2019 may apply > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From geoff at iconia.com Thu Mar 10 17:02:06 2022 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Thu, 10 Mar 2022 15:02:06 -1000 Subject: [ih] Preparing for the splinternet Message-ID: EXCERPT: According to Wikipedia , a researcher at the Cato Institute first used the word "splinternet" in 2001 to describe the idea of "parallel Internets that would be run as distinct, private and autonomous universes." Clyde Wayne Crews, the researcher, thought it might be a good thing. Roughly 20 years later, some aren't so sure. A splinternet might "hurt individuals attempting to organize in opposition to the war, report openly and honestly on events in Russia, and access information about what is happening in Ukraine and abroad," *argued 41 digital rights groups* led by Access Now and the nonprofit Wikimedia Foundation. 
Ultimately, the topic could have profound implications for executives and companies in the global telecommunications space. After all, there's a big difference between selling into a globalized economy and selling into a handful of splintered, Balkanized world regions. Cutting off Russia... [...] https://www.lightreading.com/security/preparing-for-splinternet-/a/d-id/775979 -- Geoff.Goodfellow at iconia.com living as The Truth is True From julf at Julf.com Fri Mar 11 01:43:27 2022 From: julf at Julf.com (Johan Helsingius) Date: Fri, 11 Mar 2022 10:43:27 +0100 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: <3a9ae3b4-1822-6976-9ecc-0a213d543f3f@Julf.com> Here is the RIPE NCC response to the Ukrainian government?s recent request: https://www.ripe.net/publications/news/announcements/ripe-ncc-response-to-request-from-ukrainian-government Here is one position paper signed by a bunch of people: https://pch.net/resources/Papers/Multistakeholder-Imposition-of-Internet-Sanctions.pdf Daniel Karrenberg has started this appeal by network engineers: https://keepitopen.net/ Julf On 11/03/2022 02:02, the keyboard of geoff goodfellow via Internet-history wrote: > EXCERPT: > > According to Wikipedia > , a > researcher at the Cato Institute first used the word "splinternet" in 2001 > to describe the idea of "parallel Internets that would be run as distinct, > private and autonomous universes." Clyde Wayne Crews, the researcher, > thought it might be a good thing. > > Roughly 20 years later, some aren't so sure. > > A splinternet might "hurt individuals attempting to organize in opposition > to the war, report openly and honestly on events in Russia, and access > information about what is happening in Ukraine and abroad," *argued 41 > digital rights groups* > > led > by Access Now and the nonprofit Wikimedia Foundation. > > Ultimately, the topic could have profound implications for executives and > companies in the global telecommunications space. After all, there's a big > difference between selling into a globalized economy and selling into a > handful of splintered, Balkanized world regions. > > Cutting off Russia... > > [...] > https://www.lightreading.com/security/preparing-for-splinternet-/a/d-id/775979 > From jmamodio at gmail.com Fri Mar 11 05:45:02 2022 From: jmamodio at gmail.com (Jorge Amodio) Date: Fri, 11 Mar 2022 07:45:02 -0600 Subject: [ih] Preparing for the splinternet In-Reply-To: <3a9ae3b4-1822-6976-9ecc-0a213d543f3f@Julf.com> References: <3a9ae3b4-1822-6976-9ecc-0a213d543f3f@Julf.com> Message-ID: Is this history or current events ? -J On Fri, Mar 11, 2022 at 3:44 AM Johan Helsingius via Internet-history < internet-history at elists.isoc.org> wrote: > Here is the RIPE NCC response to the Ukrainian government?s recent request: > > > https://www.ripe.net/publications/news/announcements/ripe-ncc-response-to-request-from-ukrainian-government > > Here is one position paper signed by a bunch of people: > > > https://pch.net/resources/Papers/Multistakeholder-Imposition-of-Internet-Sanctions.pdf > > > Daniel Karrenberg has started this appeal by network engineers: > > https://keepitopen.net/ > > Julf > > > On 11/03/2022 02:02, the keyboard of geoff goodfellow via > Internet-history wrote: > > EXCERPT: > > > > According to Wikipedia > > , > a > > researcher at the Cato Institute first used the word "splinternet" in > 2001 > > to describe the idea of "parallel Internets that would be run as > distinct, > > private and autonomous universes." 
Clyde Wayne Crews, the researcher,
> > thought it might be a good thing.
> >
> > Roughly 20 years later, some aren't so sure.
> >
> > A splinternet might "hurt individuals attempting to organize in
> opposition
> > to the war, report openly and honestly on events in Russia, and access
> > information about what is happening in Ukraine and abroad," *argued 41
> > digital rights groups*
> > <
https://www.accessnow.org/cms/assets/uploads/2022/03/Civil-society-letter-to-Biden-Admin-re-Russia-sanctions-and-internet-access_10-March-2022-1.pdf
> >
> > led
> > by Access Now and the nonprofit Wikimedia Foundation.
> >
> > Ultimately, the topic could have profound implications for executives and
> > companies in the global telecommunications space. After all, there's a
> big
> > difference between selling into a globalized economy and selling into a
> > handful of splintered, Balkanized world regions.
> >
> > Cutting off Russia...
> >
> > [...]
> >
https://www.lightreading.com/security/preparing-for-splinternet-/a/d-id/775979
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>

From karl at cavebear.com Sat Mar 12 01:56:51 2022
From: karl at cavebear.com (Karl Auerbach)
Date: Sat, 12 Mar 2022 01:56:51 -0800
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References:
Message-ID:

On 3/10/22 5:02 PM, the keyboard of geoff goodfellow via Internet-history wrote:
> EXCERPT:
>
> According to Wikipedia
> , a
> researcher at the Cato Institute first used the word "splinternet" in 2001
> to describe the idea of "parallel Internets that would be run as distinct,
> private and autonomous universes."

Well, "splinternet" isn't quite "Internet history"; it's more of a prophecy of things that could come. And I sense that none of us want disjoint "splinters", and I don't think users want that either. But if we use the analogy of wood then today's Internet is nice, clear lumber. And what we have called "splinternet" might be glulams or fiberboard, i.e. many pieces that are joined to form something that is at least as strong and useful as a timber cut from a single tree.

If we expand our view of Internet History to encompass predecessors we see that "splinters" have existed yet the system as a whole provided acceptable service to users.

Perhaps the earliest system that used store-and-forward handling of electronic messages was the telegraph system that arose in the 1830s. Although it was never a single technically uniform global system, it did have "splinters" that worked acceptably well and were sufficiently joined so that from the users' point of view, it was one system.

(We can say the same about the voice telephone system, but I view that more as a circuit-switching paradigm rather than store-and-forward message handling.)

I, personally, am of the belief that just as the Internet began as a single network and then became a network of networks, i.e. an Internet, the time may be near when we add yet another tier; that the Internet evolves into a network of internets.

How this may come to pass is uncertain. However, I believe that the weak fracture plane is that users no longer care about elegant end-to-end principles but, rather, live in a world of Apps, and those users care nothing whether the underlying plumbing is elegant or a jumble - the users only care that their favorite Apps work.

Early Internet protocols needed end-to-end connections.
But as the years passed more and more protocols were designed with the idea that they could operate via relays and proxies. SMTP was an early one, HTTP a later one. It is that acceptance of proxies and relays that reduces the strength of the end-to-end principle to act as a glue that holds the Internet into a single system.

(This is not a negative reflection on those protocols; it is merely the recognition that a useful and common "feature" may also become the means through which the net could separate into realms that touch one another only via relays and proxies.)

I wrote about this some years ago in a note I titled "Internet: Quo Vadis (Where are you going?)" at https://www.cavebear.com/cavebear-blog/internet_quo_vadis/

        --karl--

From dal at riseup.net Sat Mar 12 02:01:40 2022
From: dal at riseup.net (Douglas Lucas)
Date: Sat, 12 Mar 2022 02:01:40 -0800
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References:
Message-ID: <67cbb00b88dfe442091cf84823fd1aea@riseup.net>

Another longstanding and pervasive issue with accessing different portions of the Internet during netsplits and netblocks is that if a nuclear bomb goes off on your front line then your WiFi goes out

From tte at cs.fau.de Sat Mar 12 05:08:34 2022
From: tte at cs.fau.de (Toerless Eckert)
Date: Sat, 12 Mar 2022 14:08:34 +0100
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References:
Message-ID:

Karl,

How is what you are describing not what we have already had for decades? Even in the '90s and 2000s, we had IP networks of global companies with as many as 100,000 routers (in 2005, from one corporation, if I remember correctly) or government agencies like the US defense networks, all of which had tightly filtered connections to the Internet. Who would today dare to put arbitrary home equipment you buy "onto" the Internet without a strong firewall? Aka: the end-to-end Internet was and is mostly a great marketing message up to the point where whoever asked starts to worry about attacks/security/privacy and so on.

The only somewhat novel aspect is more and more state actors wanting to filter the public Internet. Including the EU. And there is and probably never will be a more differentiated answer from ISOC than "don't", and then being quiet when challenged on that ultra-simplified position.

But to me all that splinternet filtering is a secondary issue. I am more worried about the opposite: the drive for lower cost has eliminated more and more of such large "private" networks and replaced their backbones with "SD-WAN", aka: tunneling across the Internet. I have seen no good analysis/prediction of resilience under failure or attack of/against the Internet and the impact not only for "native?" Internet services, but everything that tunnels across it. Sure, the Internet has likely more redundancy now than 15 years ago, but does that compensate for the loss of independent underlying physical connectivity we had before? Arguably, even those private networks might have shared fibers.

Aka: How much insight do we have about the history/evolution and current state of resilience at the actual application/service level? To me even the fact that we might not have enough of an idea would be a good reason for DK to send out the mail he did. But even more, it would be a good reason to better understand/analyze and improve the situation.
Cheers Toerless On Sat, Mar 12, 2022 at 01:56:51AM -0800, Karl Auerbach via Internet-history wrote: > On 3/10/22 5:02 PM, the keyboard of geoff goodfellow via Internet-history > wrote: > > > EXCERPT: > > > > According to Wikipedia > > , a > > researcher at the Cato Institute first used the word "splinternet" in 2001 > > to describe the idea of "parallel Internets that would be run as distinct, > > private and autonomous universes." > > Well, "splinternet" it isn't quite "Internet history", it's more of a > prophesy of things that could come.? And I sense that none of us want > disjoint "splinters", and I don't think users want that either.? But if we > use the analogy of wood then today's Internet is nice, clear lumber. And > what we have called "splinternet" might be gluelams or fiberboard, i.e. many > pieces that are joined to form something that is at least as strong and > useful as a timber cut from a single tree. > > If we expand our view of Internet History to encompass predecessors we see > that that "splinters" have existed yet the system as a whole provided > acceptable service to users. > > Perhaps the earliest system that used store-and-forward handling of > electronic messages was the telegraph system that arose in the 1830s.? > Although it was never a single technically uniform global system, it did > have "splinters" that worked acceptably well and were sufficiently joined so > that from the users' point of view, it was one system. > > (We can say the same about the voice telephone system, but I view that more > as a circuit switching paradigm rather than store-and-forward message > handling.) > > I, personally, am of the belief that just as the Internet began as a single > network and then became a network of networks, i.e. an Internet, the time > may be near when we add yet another tier; that the Internet evolves into a > network of internets. > > How this may come to pass is uncertain.? However, I believe that the weak > fracture plane is that users no longer care about elegant end-to-end > principles but, rather, live in a world of Apps and those users care nothing > whether the underlying plumbing is elegant or a jumble - the users only care > that their favorite Apps work. > > Early Internet protocols needed end-to-end connections.? But as the years > passed more and more protocols were designed with the idea that they could > operate via relays and proxies.? SMTP was an early one, HTTP a later one.? > It is that acceptance of proxies and relays that reduces the strength of the > end-to-end principle to act as a glue that holds the Internet into a single > system. > > (This is not a negative reflection on those protocols; it is merely the > recognition that a useful and common "feature" may also become the means > through which the net could separate into realms that touch one another only > via relays and proxies.) > > I wrote about this some years ago in note I titled "Internet: Quo Vadis > (Where are you going?)" at > https://www.cavebear.com/cavebear-blog/internet_quo_vadis/ > > ??? ??? 
--karl-- > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- --- tte at cs.fau.de From mfidelman at meetinghouse.net Sat Mar 12 08:20:27 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sat, 12 Mar 2022 11:20:27 -0500 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: Karl Auerbach via Internet-history wrote: > On 3/10/22 5:02 PM, the keyboard of geoff goodfellow via > Internet-history wrote: > >> EXCERPT: >> >> According to Wikipedia >> , >> a >> researcher at the Cato Institute first used the word "splinternet" in >> 2001 >> to describe the idea of "parallel Internets that would be run as >> distinct, >> private and autonomous universes." > > Well, "splinternet" it isn't quite "Internet history", it's more of a > prophesy of things that could come.? And I sense that none of us want > disjoint "splinters", and I don't think users want that either.? But > if we use the analogy of wood then today's Internet is nice, clear > lumber. And what we have called "splinternet" might be gluelams or > fiberboard, i.e. many pieces that are joined to form something that is > at least as strong and useful as a timber cut from a single tree. > > If we expand our view of Internet History to encompass predecessors we > see that that "splinters" have existed yet the system as a whole > provided acceptable service to users. We had "walled gardens" before the Internet, we have private internets from the beginning (can you say Defense Data Network?), and walled gardens have been making a big comeback of late, not just when it comes to social media, but private email for healthcare, finance, etc. Connectivity & Interoperability are what make the Internet useful - and we've been going backwards since the day we opened the Internet to the public.? Sigh.... Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From dhc at dcrocker.net Sat Mar 12 08:30:41 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 12 Mar 2022 08:30:41 -0800 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: > > Connectivity & Interoperability are what make the Internet useful - and > we've been going backwards since the day we opened the Internet to the > public. That seems an overly-constrained assessment. A casual view of the activities prior to going public could reasonable produce the view that there has been increasing entropy from the start. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Sat Mar 12 12:00:08 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 13 Mar 2022 09:00:08 +1300 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: > On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: >> >> Connectivity & Interoperability are what make the Internet useful - and >> we've been going backwards since the day we opened the Internet to the >> public. > > That seems an overly-constrained assessment. 
A casual view of the > activities prior to going public could reasonable produce the view that > there has been increasing entropy from the start. Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, because the network is big enough and random enough for that to apply.) Also, there are many examples of segmentation of the network for technical, rather than political, reasons. https://www.rfc-editor.org/rfc/rfc8799.html Brian From helbakoury at gmail.com Sat Mar 12 12:07:28 2022 From: helbakoury at gmail.com (Hesham ElBakoury) Date: Sat, 12 Mar 2022 12:07:28 -0800 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: What I noticed is that different people have different understanding or definition of the limited domain. Hesham On Sat, Mar 12, 2022, 12:00 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: > > On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: > >> > >> Connectivity & Interoperability are what make the Internet useful - and > >> we've been going backwards since the day we opened the Internet to the > >> public. > > > > That seems an overly-constrained assessment. A casual view of the > > activities prior to going public could reasonable produce the view that > > there has been increasing entropy from the start. > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > because the network is big enough and random enough for that to apply.) > > Also, there are many examples of segmentation of the network for > technical, rather than political, reasons. > https://www.rfc-editor.org/rfc/rfc8799.html > > Brian > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From mfidelman at meetinghouse.net Sat Mar 12 12:29:11 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sat, 12 Mar 2022 15:29:11 -0500 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: Brian E Carpenter via Internet-history wrote: > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: >> On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: >>> >>> Connectivity & Interoperability are what make the Internet useful - and >>> we've been going backwards since the day we opened the Internet to the >>> public. >> >> That seems an overly-constrained assessment.? A casual view of the >> activities prior to going public could reasonable produce the view that >> there has been increasing entropy from the start. > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > because the network is big enough and random enough for that to apply.) > > Also, there are many examples of segmentation of the network for > technical, rather than political, reasons. > https://www.rfc-editor.org/rfc/rfc8799.html Well sure... there are lots of examples of, and reasons for, private networks.?? But when it comes to a network that was originally designed for "resource sharing" and collaboration - segmentation is USUALLY a bad thing. And, where some of it is driven by security and privacy considerations, it sure seems like a lot more of it is about market capture and segmentation (as in the days when AOL, Compuserve, et. al. 
competed on the basis of who had more email users), and pure commercial gain (selling private email systems and portals to doctors, hospitals, and banks - rather than using secure email). And that's before we get into "political" reasons - like building entire media networks around disinformation (can you say "fake news?"). Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From brian.e.carpenter at gmail.com Sat Mar 12 12:40:00 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 13 Mar 2022 09:40:00 +1300 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: <1596c4f4-4e7d-5c92-93f9-1947a02667e9@gmail.com> On 13-Mar-22 09:07, Hesham ElBakoury wrote: > What I noticed is that different people have different understanding or definition of the limited domain. Which is why the RFC attempts to extract the common features and requirements. Brian > > Hesham > > On Sat, Mar 12, 2022, 12:00 PM Brian E Carpenter via Internet-history > wrote: > > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: > > On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: > >> > >> Connectivity & Interoperability are what make the Internet useful - and > >> we've been going backwards since the day we opened the Internet to the > >> public. > > > > That seems an overly-constrained assessment.? A casual view of the > > activities prior to going public could reasonable produce the view that > > there has been increasing entropy from the start. > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > because the network is big enough and random enough for that to apply.) > > Also, there are many examples of segmentation of the network for > technical, rather than political, reasons. > https://www.rfc-editor.org/rfc/rfc8799.html > > ? ? Brian > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Sat Mar 12 13:08:15 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 13 Mar 2022 10:08:15 +1300 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: An insider's view from the splinternet. History in the making, I guess: https://www.youtube.com/watch?v=Qi-t6fBT5uM Regards Brian On 13-Mar-22 09:00, Brian E Carpenter wrote: > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: >> On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: >>> >>> Connectivity & Interoperability are what make the Internet useful - and >>> we've been going backwards since the day we opened the Internet to the >>> public. >> >> That seems an overly-constrained assessment. A casual view of the >> activities prior to going public could reasonable produce the view that >> there has been increasing entropy from the start. > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > because the network is big enough and random enough for that to apply.) > > Also, there are many examples of segmentation of the network for > technical, rather than political, reasons. 
> https://www.rfc-editor.org/rfc/rfc8799.html > > Brian > From vgcerf at gmail.com Sat Mar 12 13:25:04 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sat, 12 Mar 2022 16:25:04 -0500 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: doesn't seem to sympathize with the Ukrainians however. v On Sat, Mar 12, 2022 at 4:08 PM Brian E Carpenter via Internet-history < internet-history at elists.isoc.org> wrote: > An insider's view from the splinternet. History in the making, I guess: > > https://www.youtube.com/watch?v=Qi-t6fBT5uM > > Regards > Brian > On 13-Mar-22 09:00, Brian E Carpenter wrote: > > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: > >> On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: > >>> > >>> Connectivity & Interoperability are what make the Internet useful - and > >>> we've been going backwards since the day we opened the Internet to the > >>> public. > >> > >> That seems an overly-constrained assessment. A casual view of the > >> activities prior to going public could reasonable produce the view that > >> there has been increasing entropy from the start. > > > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > > because the network is big enough and random enough for that to apply.) > > > > Also, there are many examples of segmentation of the network for > > technical, rather than political, reasons. > > https://www.rfc-editor.org/rfc/rfc8799.html > > > > Brian > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Sat Mar 12 14:13:58 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 13 Mar 2022 11:13:58 +1300 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: On 13-Mar-22 10:25, vinton cerf wrote: > doesn't seem to sympathize with the Ukrainians however. I assume he has to be careful what he says. Brian > v > > > On Sat, Mar 12, 2022 at 4:08 PM Brian E Carpenter via Internet-history > wrote: > > An insider's view from the splinternet. History in the making, I guess: > > https://www.youtube.com/watch?v=Qi-t6fBT5uM > > Regards > ? ? Brian > On 13-Mar-22 09:00, Brian E Carpenter wrote: > > On 13-Mar-22 05:30, Dave Crocker via Internet-history wrote: > >> On 3/12/2022 8:20 AM, Miles Fidelman via Internet-history wrote: > >>> > >>> Connectivity & Interoperability are what make the Internet useful - and > >>> we've been going backwards since the day we opened the Internet to the > >>> public. > >> > >> That seems an overly-constrained assessment.? A casual view of the > >> activities prior to going public could reasonable produce the view that > >> there has been increasing entropy from the start. > > > > Correct. (And yes, it is indeed the 2nd law of thermodynamics in action, > > because the network is big enough and random enough for that to apply.) > > > > Also, there are many examples of segmentation of the network for > > technical, rather than political, reasons. > > https://www.rfc-editor.org/rfc/rfc8799.html > > > >? ? ? 
Brian
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>

From jack at 3kitty.org Sat Mar 12 17:23:00 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Sat, 12 Mar 2022 17:23:00 -0800
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References:
Message-ID: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org>

IMHO, the Internet has been splintered for decades, going back to the days of DDN, and the introduction of EGP which enabled carving up the Internet into many pieces, each run by different operators.

But the history is a bit more complex than that. Back in the mid-80s, I used to give a lot of presentations about the Internet. One of the points I made was that the first ten years of the Internet were all about connectivity -- making it possible for every computer on the planet to communicate with every other computer. I opined then that the next ten years would be about making it *not* possible for every computer to talk with every other -- i.e., to introduce mechanisms that made it possible to constrain connectivity, for any of a number of reasons already mentioned. That was about 40 years ago -- my ten year projection was way off target.

At the time, the usage model of the Internet was based on the way that computers of that era were typically used. A person would use a terminal of some kind (typewriter or screen) and do something to connect it to a computer. He or she would then somehow "log in" to that computer with a name and password, and gain the ability to use whatever programs, data, and resources that individual was allowed to use. At the end of that "session", the user would log out, and that terminal would no longer be able to do anything until the next user repeated the process.

In the early days of the Internet, that model was translated into the network realm. E.g., there was a project called TACACS (TAC Access Control System) that provided the mechanisms for a human user to "log in" to the Internet, using a name and a password. DDN, for example, issued DDN Access Cards which had your name and network password that enabled a human user to log in to the DDN as a network.

Having logged in to the network, you could then still connect to your chosen computer as before. But you no longer had to log in to that computer. The network could tell the computer which user was associated with the new connection, and, assuming the computer manager trusted the network, the user would be automatically logged in and be able to do whatever that user was allowed to do. This new feature was termed "Double Login Elimination", since it removed the necessity to log in more than once for a given session, regardless of how many computers you might use.

Those mechanisms didn't have strong security, but it was straightforward to add it for situations where it was required. The basic model was that network activity was always associated with some user, who was identified and verified by the network mechanisms. Each computer that the user might use would be told who the user was, and could then apply its own rules about what that user could do. If the user made a network connection out to some other computer, the user's identity would be similarly passed along to the other computer.
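For readers who never used a TAC, here is a purely illustrative sketch of that "log in once, identity asserted to each host" idea. This is NOT the actual TACACS protocol, message format, or API; the names, hosts, and credentials below are invented solely for illustration:

    # Illustrative sketch only -- not real TACACS. A network-side access
    # controller verifies the user once; each host then receives an asserted
    # identity and applies its own authorization rules, with no second login.

    USER_DB = {"jdoe": "secret"}   # credentials held by the network, not by each host

    def network_login(user, password):
        """One login, at the network edge (the TAC-like access controller)."""
        return USER_DB.get(user) == password

    class Host:
        def __init__(self, name, authorized_users):
            self.name = name
            self.authorized_users = authorized_users  # each host still decides what a user may do

        def open_session(self, asserted_user):
            """The host trusts the network's assertion of identity instead of prompting again."""
            if asserted_user in self.authorized_users:
                return f"{asserted_user} logged in to {self.name} without a second password prompt"
            raise PermissionError(f"{asserted_user} is not permitted on {self.name}")

    if network_login("jdoe", "secret"):          # a single network-level login...
        host_a = Host("host-a", {"jdoe"})
        host_b = Host("host-b", {"jdoe"})
        print(host_a.open_session("jdoe"))       # ...identity carried to each host
        print(host_b.open_session("jdoe"))

The sketch only captures the trust relationship described above: authentication happens once, and each host chooses to believe the network's word about who the user is.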
At about that time (later 1980s), LANs and PCs began to spread through the Internet, and the user-at-a-terminal model broke down. Instead of users at terminals making connections to the network, now there were users at microcomputers making connections. Such computers were "personal" computers, not under management by the typical "data center" or network operator but rather by individuals. Rather than connecting to remote computers as "terminals", connections started to also be made by programs running on those personal computers. The human user might not even be aware that such connections were happening.

With that evolution of the network/user model, mechanisms such as TACACS became obsolete. Where it was often reasonable to trust the identification of a user performed by a mechanism run by the network or a datacenter, it was difficult to similarly trust the word of one of the multitude of microcomputers and software packages that were now involved.

So, the notion that a "user" could be identified and then constrained in use of the resources on the Internet was no longer available.

AFAIK, since that time in the 80s, there hasn't been a new "usage model" developed to deal with the reality of today's Internet. We each have many devices now, not just one personal computer. Many of them are online all of the time; there are no "sessions" now with a human interacting with a remote computer as in the 80s. When we use a website, what appears on our screen may come from dozens of computers somewhere "out there". Some of the content on the screen isn't even what we asked for. Who is the "user" asking for advertising popups to appear? Did I give that user permission to use some of my screen space? Who did?

User interaction with today's network is arguably much more complex than it was 40 years ago. IMHO, no one has developed a good model of network usage for such a world, one that enables the control of the resources (computing, data) accessed across the Internet. For mechanisms that have been developed, such as privacy-enhanced electronic mail, deployment seems to have been very spotty for some reason. We get email from identified Users, but can we trust that the email actually came from that User? When the Web appeared, the Internet got really complicated.

Lacking appropriate mechanisms, users still need some way to control who can utilize what. So they improvise and generate ad hoc point solutions. My bank wants to interact with me safely, so it sets up a separate account on its own computers, with name, password, and 2-factor authentication. It can't trust the Internet to tell it who I am. It sends me email when I need to do something, advising me to log in to my account and read its message to me there, where it knows that I'm me, and I know that it's my bank. It can't trust Internet email for more than advising me to come in to its splinter of the Internet.
All my vendors do the same. My newspaper. My doctors. My media subscriptions. Each has its own "silo" where it can interact with me reliably and confidently. Some of them probably do it to better make money. But IMHO most of them do it because they have to - the Internet doesn't provide any mechanisms to help.

So we get lots of "splintering". IMHO that has at least partially been driven by the lack of mechanisms within the Internet technology to deal with control of resources in ways that the users require. So they have invented their own individual mechanisms as needs arose. It's not just at the router/ISP level, where splintering can be caused by things like the absence of mechanisms for "policy routing" or "type of service" or "security" that's important to someone.

"Double Login" momentarily was eliminated, but revived and has evolved into "Continuous Login" since the Internet doesn't provide what's needed by the users in today's complex world.

I was involved in operating a "splinternet" corporate internet in the 90s, connected to "the Internet" only by an email gateway. We just couldn't trust the Internet so we kept it at arm's length.

Hope this helps some historian....
Jack Haverty

From steffen at sdaoden.eu Sat Mar 12 17:50:14 2022
From: steffen at sdaoden.eu (Steffen Nurpmeso)
Date: Sun, 13 Mar 2022 02:50:14 +0100
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References:
Message-ID: <20220313015014._Tke3%steffen@sdaoden.eu>

vinton cerf wrote in :
 |doesn't seem to sympathize with the Ukrainians however.
 |v

But then again millions are/were starving in Yemen ("the greatest humanitarian catastrophe") and _no one_ cares (though war made).

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)

From touch at strayalpha.com Sat Mar 12 18:00:42 2022
From: touch at strayalpha.com (touch at strayalpha.com)
Date: Sat, 12 Mar 2022 18:00:42 -0800
Subject: [ih] A note on off-topic discussions
In-Reply-To: <20220313015014._Tke3%steffen@sdaoden.eu>
References: <20220313015014._Tke3%steffen@sdaoden.eu>
Message-ID: <1BF2FE50-73F6-421B-9689-43632FC5D144@strayalpha.com>

Hi, all,

As a reminder, this list is for discussions about *Internet history*. Although the definition of "history" can be interpreted generously (including past, present, and arguably future), please stay within THAT topic.

Joe (as list admin)

--
Dr. Joe Touch, temporal epistemologist
www.strayalpha.com

From mfidelman at meetinghouse.net Sat Mar 12 18:14:05 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sat, 12 Mar 2022 21:14:05 -0500
Subject: [ih] Preparing for the splinternet
In-Reply-To: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org>
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org>
Message-ID: <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>

A helpful perspective. Thanks Jack.

Not sure I completely agree with all of it (see below) - but pretty close.

Jack Haverty via Internet-history wrote:
> IMHO, the Internet has been splintered for decades, going back to the
> days of DDN, and the introduction of EGP which enabled carving up the
> Internet into many pieces, each run by different operators.
>
> But the history is a bit more complex than that. Back in the
> mid-80s, I used to give a lot of presentations about the Internet. One
> of the points I made was that the first ten years of the Internet were
> all about connectivity -- making it possible for every computer on the
> planet to communicate with every other computer. I opined then that
> the next ten years would be about making it *not* possible for every
> computer to talk with every other -- i.e., to introduce mechanisms
> that made it possible to constrain connectivity, for any of a number
> of reasons already mentioned. That was about 40 years ago -- my ten
> year projection was way off target.
>
> At the time, the usage model of the Internet was based on the way that
> computers of that era were typically used.
A person would use a > terminal of some kind (typewriter or screen) and do something to > connect it to a computer.? He or she would then somehow "log in" to > that computer with a name and password, and gain the ability to use > whatever programs, data, and resources that individual was allowed to > use.? At the end of that "session", the user would log out, and that > terminal would no longer be able to do anything until the next user > repeated the process. > > In the early days of the Internet, that model was translated into the > network realm.? E.g., there was a project called TACACS (TAC Access > Control System) that provided the mechanisms for a human user to "log > in" to the Internet, using a name and a password. DDN, for example, > issued DDN Access Cards which had your name and network password that > enabled a human user to log in to the DDN as a network. > > Having logged in to the network, you could then still connect to your > chosen computer as before.? But you no longer had to log in to that > computer.?? The network could tell the computer which user was > associated with the new connection, and, assuming the computer manager > trusted the network, the user would be automatically logged in and be > able to do whatever that user was allowed to do.?? This new feature > was termed "Double Login Elimination", since it removed the necessity > to log in more than once for a given session, regardless of how many > computers you might use. > > Those mechanisms didn't have strong security, but it was > straightforward to add it for situations where it was required. The > basic model was that network activity was always associated with some > user, who was identified and verified by the network mechanisms.?? > Each computer that the user might use would be told who the user was, > and could then apply its own rules about what that user could do.?? If > the user made a network connection out to some other computer, the > user's identity would be similarly passed along to the other computer. > > At about that time (later 1980s), LANs and PCs began to spread through > the Internet, and the user-at-a-terminal model broke down. Instead of > users at terminals making connections to the network, now there were > users at microcomputers making connections.?? Such computers were > "personal" computers, not under management by the typical "data > center" or network operator but rather by individuals.??? Rather than > connecting to remote computers as "terminals", connections started to > also be made by programs running on those personal computers.?? The > human user might not even be aware that such connections were happening. > > With that evolution of the network/user model, mechanisms such as > TACACS became obsolete.? Where it was often reasonable to trust the > identification of a user performed by a mechanism run by the network > or a datacenter, it was difficult to similarly trust the word of one > of the multitude of microcomputers and software packages that were now > involved. > > So, the notion that a "user" could be identified and then constrained > in use of the resources on the Internet was no longer available. > > AFAIK, since that time in the 80s, there hasn't been a new "usage > model" developed to deal with the reality of today's Internet.? We > each have many devices now, not just one personal computer.?? Many of > them are online all of the time; there are no "sessions" now with a > human interacting with a remote computer as in the 80s. 
When we use a > website, what appears on our screen may come from dozens of computers > somewhere "out there".?? Some of the content on the screen isn't even > what we asked for.?? Who is the "user" asking for advertising popups > to appear??? Did I give that user permission to use some of my screen > space??? Who did? > > User interaction with today's network is arguably much more complex > than it was 40 years ago.? IMHO, no one has developed a good model of > network usage for such a world, that enables the control of the > resources (computing, data) accessed across the Internet.?? For > mechanisms that have been developed, such as privacy-enhanced > electronic mail, deployment seems to have been very spotty for some > reason.?? We get email from identified Users, but can we trust that > the email actually came from that User? When the Web appeared, the > Internet got really complicated. > > Lacking appropriate mechanisms, users still need some way to control > who can utliize what.?? So they improvise and generate adhoc point > solutions.? My bank wants to interact with me safely, so it sets up a > separate account on its own computers, with name, password, and > 2-factor authentication.?? It can't trust the Internet to tell it who > I am.?? It sends me email when I need to do something, advising me to > log in to my account and read its message to me there, where it knows > that I'm me, and I know that it's my bank.?? It can't trust Internet > email for more than advising me to come in to its splinter of the > Internet. > > All my vendors do the same.? My newspaper.? My doctors.? My media > subscriptions.? Each has its own "silo" where it can interact with me > reliably and confidently.?? Some of them probably do it to better make > money.? But IMHO most of them do it because they have to - the > Internet doesn't provide any mechanisms to help. I'm not sure that's really the case.? We do, after all have things like X.509 certificates, and various mechanisms defined on top of them.? Or, in the academic & enterprise worlds, we have IAM mechanisms that work across multiple institutions (e.g., Shibboleth and the like). > > So we get lots of "splintering".??? IMHO that has at least partially > been driven by the lack of mechanisms within the Internet technology > to deal with control of resources in ways that the users require. So > they have invented their own individual mechanisms as needs arose.? > It's not just at the router/ISP level, where splintering can be caused > by things like the absence of mechanisms for "policy routing" or "type > of service" or "security" that's important to someone. And here, I'll come back to commercial interests as driving the show. In the academic world - where interoperability and resource/information sharing are a priority - we have a world of identify federations.? Yes, one has to have permissions and such, but one doesn't need multiple library cards to access multiple libraries, or to make interlibrary loans.? For that matter, we can do business worldwide, with one bank account or credit card. But, when it comes to things like, say, distributing medical records, it took the Medicare administrators to force all doctors offices, hospitals, etc. to use the same format for submitting billing records.? Meanwhile commercial firms have made a fortune creating and selling portals and private email systems - and convincing folks that the only way they can meet HIPPA requirements is to use said private systems.? 
And now they've started to sell their users on mechanisms to share records between providers (kind of like the early days of email - "there are more folks on our system then the other guys,' so we're your best option for letting doctors exchange patient records").? Without a forcing function for interoperability (be it ARPA funding the ARPANET specifically to enable resource sharing, or Medicare, or some other large institution) - market forces, and perhaps basic human psychology, push toward finding ways to segment markets, isolate tribes, carve off market niches, etc. Come to think of it, the same applies to "web services" - we developed a perfectly good protocol stack, and built RESTful services on top of it.? But somebody had to go off and reinvent everything, push all the functions up to the application layer, and make everything incredibly baroque and cumbersome.? And then folks started to come to their senses and start standardizing, a bit, on how to do RESTful web services in ways that sort of work for everyone.? (Of course, there are those who are trying to repeat the missteps, with "Web 3.0," smart contracts, and all of that stuff.) > > "Double Login" momentarily was eliminated, but revived and has evolved > into "Continuous Login" since the Internet doesn't provide what's > needed by the users in today's complex world. A nice way of putting it. Though, perhaps it's equally useful to view things as "no login." Everything is a transaction, governed by a set of rules, accompanied by credentials and currency. And we have models for that that date back millennia - basically contracts and currency.? Later we invented multi-part forms & checking accounts.? Now we have a plethora of mechanisms - all doing basically the same thing - and competing with each other for market share.? (Kind of like standards, we need a standard way of talking to each other - so let's invent a new one.) Maybe, we can take a breath, take a step backwards, and start building on interoperable building blocks that have stood the test of time.? In the same way that e-books "work" a lot better than reading on laptops, and now tablets are merging the form factor in ways that are practical.? Or chat, in the form of SMS & MMS messaging, is pretty much still the standard for reaching anybody, anywhere, any time. But... absent a major institution pushing things forward (or together)... it probably will take a concerted effort, by those of us who understand the issues, and are in positions to specify technology for large systems, or large groups/organizations, to keep nudging things in the right direction, when we have the opportunity to do so. > > I was involved in operating a "splinternet" corporate internet in the > 90s, connected to "the Internet" only by an email gateway.? We just > couldn't trust the Internet so we kept it at arms length. > > Hope this helps some historian.... > Jack Haverty And, perhaps, offer some lessons learned to those who would prefer not to repeat history! Cheers, Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... 
From tte at cs.fau.de Sat Mar 12 22:55:13 2022
From: tte at cs.fau.de (Toerless Eckert)
Date: Sun, 13 Mar 2022 07:55:13 +0100
Subject: [ih] Preparing for the splinternet
In-Reply-To: <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
Message-ID:

Access control would be a lovely topic to take to the IETF. For something like what Jack described, a review of historic methods to learn from (it would be a very helpful info RFC, but a lot of work, I guess), and for today's perspective, IMHO, what access control methods could be recommended to avoid the problematic filtering at the network layer.

For example, we just had another incident of a court in Germany issuing blocking orders to German ISPs (which typically operate on DNS), against a porn service that wasn't providing adequate child protection. How do we get rid of such recurring challenges to the basic internet infrastructure (IP and naming level...)?

Funnily, I am just trying to watch a movie on disneyplus ("All King Man") while being in Germany with a USA-based account, and the account only allows me to select <= PG14. Talked with tech support, and the only solution was to temporarily update the account location to Germany because (as I figure) it's even logically impossible to automate this: in Germany kids are allowed/disallowed to watch different movies than in the USA, but travelling parents might be caught by surprise (especially on the "allowed" part). So that's from an arguably kids-friendly global content provider. Now try to imagine how governments are struggling, given that many parents do expect them to provide some useful degree of protection for kids. If the answer to the problem is "well, we can't figure out how to do this for the Internet at large", then this will even increase the monopolization of services to those global providers that do.

Sorry. Too much current-day text. The Internet was definitely a lot easier in the 1990s and before, when we had not enough kids on the Internet to worry about that issue.

How about "The Internet was built for adults"?

Cheers
    Toerless

On Sat, Mar 12, 2022 at 09:14:05PM -0500, Miles Fidelman via Internet-history wrote:
> [...]
> Practice is when everything works but no one knows why.
> In our lab, theory and practice are combined:
> nothing works and no one knows why. ... unknown
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

--
---
tte at cs.fau.de

From jack at 3kitty.org Sat Mar 12 23:24:48 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Sat, 12 Mar 2022 23:24:48 -0800
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
Message-ID:

if you look at history from the Users' perspective, IMHO the problem has been a lack of follow-through. Lots of technology (protocols, formats, algorithms) has been created, documented in 1000s of RFCs and such. But unless it gets to the field and becomes an inherent and pervasive capability of the Internet, it doesn't really exist for the Users, whether they be individuals or corporations or governments or network or cloud operators.

Two good examples of technology beyond basic TCP/IP that have made that leap are DNS and NTP. You can pretty much count on them to be available no matter where you connect to the Internet and what kind of device you use to make that connection.

In contrast, many other technologies may "exist" but haven't made that leap.

E.g., X.509 and certificates may exist, but IMHO they aren't widely used. I occasionally see my browser advise me that a certificate is invalid. But the only path forward it offers is to ignore the error if I want to continue doing whatever I'm trying to do. I typically say "go ahead", and I suspect most Users do the same. Similarly, I have PGP and S/MIME credentials, but I rarely use them, and rarely receive any email from others using them.

Control of Internet content, to provide child protection or other constraints, was developed by W3C in the 90s (look up PICS - Platform for Internet Content Selection). It was even implemented in popular browsers of the day. As a rep to W3C I helped get that in place as a general mechanism for attaching metadata to Web content, but AFAIK it never got any real use in the broad Internet and by now seems to have disappeared.

Perhaps some historian will someday explain why such mechanisms don't seem to make it to the field and get widely implemented, deployed, and used. Why are they different from TCP/IP, DNS, NTP and maybe a few others which had success in the early stages of the Internet?

Jack

On 3/12/22 22:55, Toerless Eckert via Internet-history wrote:
> Access control would be a lovely topic to take to the IETF. For something what Jack described as a review of historic methods to learn from (would be a very helpful info RFC, but lot of work i guess), and for today's perspective IMHO what access control methods could be recommended to avoid the problematic filtering at network layer.
>
> For example, we just had another incident of a court in germany issuing blocking orders to german ISPs (which typically operates on DNS), against a porn service that wasn't providing adequate child protection. How do we get rid of such recurring challenges to the basic internet infrastructure (IP and naming level...) ?
>
> Funnily, i am just trying to watch a movie on disneyplus ("All King Man") while being in Germany with a USA based account, and the account only allows me to select <= PG14.
> Talked with tech-support, and the only solution was to temporarily update the account location to germany because (as i figure) it's even logically impossible to automate this: In germany kids are allowed/disallowed to watch different movies than in the USA, but travelling parents might be caught by surprise (especially on the "allowed" part). So that's from an arguably kids-friendly global content provider. Now try to imagine how governments are struggling, that many parents do expect to provide some useful degree of protection for kids. If the answer to the problem is "well, we can't figure out how to do this for the Internet at large", then this will even increase the monopolization of services to those global providers that do.
>
> Sorry. Too much current-day text. The Internet was definitely a lot easier in <= 1990'th, when we had not enough kids on the Internet to worry about that issue.
>
> How about "The Internet was built for adults" ?
>
> Cheers
>     Toerless
>
> On Sat, Mar 12, 2022 at 09:14:05PM -0500, Miles Fidelman via Internet-history wrote:
> > A helpful perspective. Thanks Jack.
> >
> > Not sure I completely agree with all of it (see below) - but pretty close.
> >
> > Jack Haverty via Internet-history wrote:
> > > IMHO, the Internet has been splintered for decades, going back to the days of DDN, and the introduction of EGP which enabled carving up the Internet into many pieces, each run by different operators.
> > >
> > > But the history is a bit more complex than that. Back in the mid-80s, I used to give a lot of presentations about the Internet. One of the points I made was that the first ten years of the Internet were all about connectivity -- making it possible for every computer on the planet to communicate with every other computer. I opined then that the next ten years would be about making it *not* possible for every computer to talk with every other -- i.e., to introduce mechanisms that made it possible to constrain connectivity, for any of a number of reasons already mentioned. That was about 40 years ago -- my ten year projection was way off target.
> > >
> > > At the time, the usage model of the Internet was based on the way that computers of that era were typically used. A person would use a terminal of some kind (typewriter or screen) and do something to connect it to a computer. He or she would then somehow "log in" to that computer with a name and password, and gain the ability to use whatever programs, data, and resources that individual was allowed to use. At the end of that "session", the user would log out, and that terminal would no longer be able to do anything until the next user repeated the process.
> > >
> > > In the early days of the Internet, that model was translated into the network realm. E.g., there was a project called TACACS (TAC Access Control System) that provided the mechanisms for a human user to "log in" to the Internet, using a name and a password. DDN, for example, issued DDN Access Cards which had your name and network password that enabled a human user to log in to the DDN as a network.
> > >
> > > Having logged in to the network, you could then still connect to your chosen computer as before. But you no longer had to log in to that computer. The network could tell the computer which user was associated with the new connection, and, assuming the computer manager trusted the network, the user would be automatically logged in and be able to do whatever that user was allowed to do. This new feature was termed "Double Login Elimination", since it removed the necessity to log in more than once for a given session, regardless of how many computers you might use.
> > >
> > > Those mechanisms didn't have strong security, but it was straightforward to add it for situations where it was required. The basic model was that network activity was always associated with some user, who was identified and verified by the network mechanisms. Each computer that the user might use would be told who the user was, and could then apply its own rules about what that user could do. If the user made a network connection out to some other computer, the user's identity would be similarly passed along to the other computer.
> > >
> > > At about that time (later 1980s), LANs and PCs began to spread through the Internet, and the user-at-a-terminal model broke down. Instead of users at terminals making connections to the network, now there were users at microcomputers making connections. Such computers were "personal" computers, not under management by the typical "data center" or network operator but rather by individuals. Rather than connecting to remote computers as "terminals", connections started to also be made by programs running on those personal computers. The human user might not even be aware that such connections were happening.
> > >
> > > With that evolution of the network/user model, mechanisms such as TACACS became obsolete. Where it was often reasonable to trust the identification of a user performed by a mechanism run by the network or a datacenter, it was difficult to similarly trust the word of one of the multitude of microcomputers and software packages that were now involved.
> > >
> > > So, the notion that a "user" could be identified and then constrained in use of the resources on the Internet was no longer available.
> > >
> > > AFAIK, since that time in the 80s, there hasn't been a new "usage model" developed to deal with the reality of today's Internet. We each have many devices now, not just one personal computer.
> > > [...]
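As an illustration of the certificate behaviour Jack describes above (strict checking versus the "go ahead" click-through), here is a minimal stdlib-only Python sketch; "example.org" is just a placeholder host, not anything from the thread:

    import socket
    import ssl

    def fetch_cert(host: str, strict: bool = True):
        ctx = ssl.create_default_context()       # system CA store plus hostname check
        if not strict:
            ctx.check_hostname = False            # the "ignore the error" path
            ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()          # empty dict when verification is off

    try:
        print(fetch_cert("example.org"))          # succeeds only if the chain verifies
    except ssl.SSLCertVerificationError as err:
        print("certificate did not verify:", err)

The browser warning dialog is essentially a user-interface wrapper around the same choice: verify and refuse, or turn verification off and proceed anyway.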
From tte at cs.fau.de Sat Mar 12 23:43:48 2022
From: tte at cs.fau.de (Toerless Eckert)
Date: Sun, 13 Mar 2022 08:43:48 +0100
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
Message-ID:

I guess the W3C approach failed because it was just too complex?

Was there ever an attempt to put content classifications into DNS?

Also: Just because some technology is not ubiquitous doesn't mean it's not valuable. To me the big value of the Internet is the freedom to innovate.
Which is why all the filtering to anything but 80/443 is so annoying to me, having experienced a time when there was much more creativity in innovation before it got locked out by almost every edge-firewall. Which actually are ubiquitous. For better or worse.

Cheers
    Toerless

On Sat, Mar 12, 2022 at 11:24:48PM -0800, Jack Haverty via Internet-history wrote:
> if you look at history from the Users' perspective, IMHO the problem has been a lack of follow-through. Lots of technology (protocols, formats, algorithms) has been created, documented in 1000s of RFCs and such. But unless it gets to the field and becomes an inherent and pervasive capability of the Internet, it doesn't really exist for the Users, whether they be individuals or corporations or governments or network or cloud operators.
> [...]

--
---
tte at cs.fau.de
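Purely as a sketch of the kind of scheme Toerless asks about (and not a deployed standard): a site could publish a PICS-style label in a DNS TXT record, and a resolver-side filter could consult it before allowing a connection. The "_content-label" name and the label syntax are invented for this example, and it assumes the third-party dnspython package:

    # hypothetical record a zone operator might publish:
    #   _content-label.example.com.  IN  TXT  "rating=adult; audience=18+"
    import dns.resolver

    def lookup_content_label(domain):
        try:
            answers = dns.resolver.resolve("_content-label." + domain, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None   # unlabeled site; local policy decides allow vs. block
        return b"".join(answers[0].strings).decode()

    print(lookup_content_label("example.com"))

Like PICS itself, such a scheme only works if publishers label honestly and clients bother to check, which may be why the attempts John Levine alludes to further down never stuck.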
From mfidelman at meetinghouse.net Sun Mar 13 07:50:19 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sun, 13 Mar 2022 10:50:19 -0400
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
Message-ID: <613328ff-dc28-13f6-2a24-11902b1347ad@meetinghouse.net>

Toerless Eckert wrote:
>
> How about "The Internet was built for adults" ?
>
Was it? An awful lot of it was built for students - first university level, but quickly extended to K-12. And that's before commercial services aimed specifically at kids.

At this point, it's infrastructure, used by everybody - kind of like the phone system, or the post.

Now access controls, based on age, seem totally appropriate - but even then, one can get into trouble really quickly: Consider the "child safety" controls that prevent kids, at libraries, from accessing various kinds of health information.

Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why. ... unknown

From mfidelman at meetinghouse.net Sun Mar 13 08:01:43 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sun, 13 Mar 2022 11:01:43 -0400
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
Message-ID: <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net>

Jack Haverty via Internet-history wrote:
> if you look at history from the Users' perspective, IMHO the problem has been a lack of follow-through. Lots of technology (protocols, formats, algorithms) has been created, documented in 1000s of RFCs and such. But unless it gets to the field and becomes an inherent and pervasive capability of the Internet, it doesn't really exist for the Users, whether they be individuals or corporations or governments or network or cloud operators.
>
> Two good examples of technology beyond basic TCP/IP that have made that leap are DNS and NTP. You can pretty much count on them to be available no matter where you connect to the Internet and what kind of device you use to make that connection.
>
> In contrast, many other technologies may "exist" but haven't made that leap.
>
> E.g., X.509 and certificates may exist, but IMHO they aren't widely used. I occasionally see my browser advise me that a certificate is invalid. But the only path forward it offers is to ignore the error if I want to continue doing whatever I'm trying to do. I typically say "go ahead", and I suspect most Users do the same. Similarly, I have PGP and S/MIME credentials, but I rarely use them, and rarely receive any email from others using them.

But... they're used EVERYWHERE in government, particularly the military - where you need to plug a CAC card into your computer, just to log in.

They're used, because you HAVE to use them.
Same again for things like Microsoft Active Directory in the corporate environment, or Shibboleth in the academic world. (Which, in turn, are based on Kerberos, if memory serves.)

If the folks who enforce HIPAA were to pass a regulation requiring a standard format, and standard protocols, for exchanging medical records - that was based on X.509 certificates and S/MIME - guaranteed that every medical systems provider would migrate from their proprietary formats and protocols, to the standard. (Particularly, since pretty much every mail and web client has the capabilities built in.)

> Control of Internet content, to provide child protection or other constraints, was developed by W3C in the 90s (look up PICS - Platform for Internet Content Selection). It was even implemented in popular browsers of the day. As a rep to W3C I helped get that in place as a general mechanism for attaching metadata to Web content, but AFAIK it never got any real use in the broad Internet and by now seems to have disappeared.
>
> Perhaps some historian will someday explain why such mechanisms don't seem to make it to the field and get widely implemented, deployed, and used. Why are they different from TCP/IP, DNS, NTP and maybe a few others which had success in the early stages of the Internet?

Lack of a forcing function - be it a vacuum demanding to be filled, or legislation, or buying behavior of a large client, or customer demand.

Miles

> [...]

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why. ... unknown
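A related aside on the "can we trust that the email actually came from that User" question earlier in the thread: what did get deployed widely is domain-level mail authentication (DKIM), largely because the big mail providers forced it - which fits the "forcing function" point above. A minimal verification sketch, assuming the third-party dkimpy package; note that a valid signature vouches for the signing domain, not for the human sender:

    import sys
    import dkim

    raw_message = sys.stdin.buffer.read()   # a full RFC 5322 message: headers + body
    if dkim.verify(raw_message):
        print("DKIM signature verified - the signing domain vouches for this message")
    else:
        print("no valid DKIM signature - the From: line alone proves nothing")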
From tte at cs.fau.de Sun Mar 13 08:10:55 2022
From: tte at cs.fau.de (Toerless Eckert)
Date: Sun, 13 Mar 2022 16:10:55 +0100
Subject: [ih] Preparing for the splinternet
In-Reply-To: <613328ff-dc28-13f6-2a24-11902b1347ad@meetinghouse.net>
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net> <613328ff-dc28-13f6-2a24-11902b1347ad@meetinghouse.net>
Message-ID:

On Sun, Mar 13, 2022 at 10:50:19AM -0400, Miles Fidelman via Internet-history wrote:
> Toerless Eckert wrote:
> >
> > How about "The Internet was built for adults" ?
> >
> Was it? An awful lot of it was built for students - first university level, but quickly extended to K-12. And that's before commercial services aimed specifically at kids.

What i said was meant as a cynical (im)possible post-factum justification for why we pushed up the problem of access control for decades now, still without a comprehensive solution.
As in "how could the Internet ever have been designed without tackling that problem? Oh well, because it was built for adults".

> At this point, it's infrastructure, used by everybody - kind of like the phone system, or the post.
>
> Now access controls, based on age, seem totally appropriate

except for the obvious problem of jurisdiction based policies vs. the desire for unfettered global connectivity and location anonymity... ?

> - but even then, one can get into trouble really quickly: Consider the "child safety" controls that prevent kids, at libraries, from accessing various kinds of health information.

Sure. But for age sensitive access control at the edge to Internet content, i think there should be helpful options (like what i mentioned) that don't produce false positives, so i am still baffled why not more progress is made. Seems like an ongoing battle between folks who don't want to do anything on one side ("slippery slope"), and folks clueless how to do it on the other side.

Cheers
    Toerless

--
---
tte at cs.fau.de

From johnl at iecc.com Sun Mar 13 10:56:37 2022
From: johnl at iecc.com (John Levine)
Date: 13 Mar 2022 13:56:37 -0400
Subject: [ih] Preparing for the splinternet
In-Reply-To:
Message-ID: <20220313175637.E60E138F2BAB@ary.qy>

It appears that Toerless Eckert via Internet-history said:
>I guess the W3C approach failed because it was just too complex ?

Whatever problem it solved wasn't a problem that people actually had.

>Was there ever an attempt to put content classifications into DNS ?

I think you can assume there have been attempts to put everything into the DNS at one point or another. Doesn't mean it was a good idea or useful.

R's,
John

From mfidelman at meetinghouse.net Sun Mar 13 11:26:28 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sun, 13 Mar 2022 14:26:28 -0400
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net> <613328ff-dc28-13f6-2a24-11902b1347ad@meetinghouse.net>
Message-ID: <054c1d2b-659b-6483-65ee-ec7db3ef4efe@meetinghouse.net>

Toerless Eckert wrote:
> On Sun, Mar 13, 2022 at 10:50:19AM -0400, Miles Fidelman via Internet-history wrote:
> > Toerless Eckert wrote:
> > > How about "The Internet was built for adults" ?
> > >
> > Was it? An awful lot of it was built for students - first university level, but quickly extended to K-12. And that's before commercial services aimed specifically at kids.
>>
>> Now access controls, based on age, seem totally appropriate
> except for the obvious problem of jurisdiction-based policies vs. the desire for
> unfettered global connectivity and location anonymity... ?
>
>> - but even then,
>> one can get into trouble really quickly: Consider the "child safety"
>> controls that prevent kids, at libraries, from accessing various kinds of
>> health information.
> Sure. But for age-sensitive access control at the edge to Internet content,
> i think there should be helpful options (like what i mentioned) that don't
> produce false positives, so i am still baffled why not more progress is made. Seems
> like an ongoing battle between folks who don't want to do anything on one side
> ("slippery slope"), and folks clueless how to do it on the other side.
>
>
Let's not forget those who have commercial & political interests in not doing
anything.  "Follow the money" and all that.

Cheers,

Miles

--
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown

From jack at 3kitty.org Sun Mar 13 12:55:05 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Sun, 13 Mar 2022 12:55:05 -0700
Subject: [ih] Preparing for the splinternet
In-Reply-To: <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net>
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org>
 <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
 <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net>
Message-ID: <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org>

X.509 may be used "EVERYWHERE in government" (at least some governments);
but it's not used everywhere else, e.g., in the much larger community of
Internet users worldwide.

Forcing functions seem to create silos.  TCP/IP was nurtured in such a
silo, where a "force" had effect.  It started with the US Defense
Department who mandated it for their world, while at the same time
keeping their options open for planned adoption of OSI technology.

TCP/IP broke out of its silo, spread throughout the world, and reduced
competing silos (SNA, DECNet, OSI, SPX/IPX, ...) to oblivion.

A few other silos had a similar experience, e.g., DNS and NTP which
seem to have no competitors now.  Dave Mills and his crew built the
NTP silo.  IIRC, he just needed good clocks to perform some
experiments on the neonatal Internet.  So he built them as NTP.  Web
technology (HTTP, HTML, URLs) started in Tim Berners-Lee's silo but
similarly broke out and became ubiquitous.  Competitors such as
Gopher, and even well-funded and pre-existing ones like Lotus Notes,
didn't endure.  In the early days of the Web, there were several
silos competing to provide security for Electronic Commerce.  But
rather quickly HTTPS became dominant and seems ubiquitous today.

People who build silos sometimes build them with fragile materials,
easily broken.  E.g., TCP/IP was built in a government silo, but was
explicitly made very "open" for anyone to adopt.  I've always thought
such open nature was important for ubiquity.  But while TCP/IP V4 broke
out of its silo and became ubiquitous, TCP/IP V6, with presumably the
same characteristics, has still not replaced V4.  Internet
technologies such as IRC (Internet Relay Chat) provided an open
mechanism for people to carry on public discussions.
But that didn't prevent the emergence of myriad social media mechanisms
that collectively dominate today as competing silos.  The battle
continues, and IRC still exists as a minor contestant, but it's not
likely to win.  Similarly, NNTP provided a mechanism for
disseminating news across the Internet; there's lots of news today on
the 'net, but I don't think it travels using NNTP.

An open nature is apparently insufficient.  A strong "forcing function"
is also insufficient, except in its own silo where its force is effective.

Ray Tomlinson's introduction of @ has dominated for decades.  But now it
seems more and more likely to be part of a Twitter identity than an
Internet one.

DNS seems ubiquitous, but I sense that its dominance is waning. There
are too many "Acme Plumbing" websites now, making it hard to remember
the DNS name for the one in my neighborhood.  Even "Four Seasons"
doesn't always get you what you expect...

I find myself now using search engines and browser history to remember
where to find things I use, rather than remembering their DNS names.
Ubiquity and dominance seem to not be permanent.

As Toerless pointed out, silos (and splinters) enable innovation. A
good thing.  They also encourage complexity and walled gardens. Bad
things, IMHO.

So why do some silos break open and their technology spreads to become
dominant and ubiquitous, while others languish for decades?

That's my question perhaps some Historian can answer someday.  I
suspect the answer will be complicated.

Jack Haverty

On 3/13/22 08:01, Miles Fidelman via Internet-history wrote:
> Jack Haverty via Internet-history wrote:
>> if you look at history from the Users' perspective, IMHO the problem
>> has been a lack of follow-through.  Lots of technology (protocols,
>> formats, algorithms) has been created, documented in 1000s of RFCs
>> and such.  But unless it gets to the field and becomes an inherent
>> and pervasive capability of the Internet, it doesn't really exist for
>> the Users, whether they be individuals or corporations or governments
>> or network or cloud operators.
>>
>> Two good examples of technology beyond basic TCP/IP that have made
>> that leap are DNS and NTP.  You can pretty much count on them to be
>> available no matter where you connect to the Internet and what kind
>> of device you use to make that connection.
>>
>> In contrast, many other technologies may "exist" but haven't made
>> that leap.
>>
>> E.g., X.509 and certificates may exist, but IMHO they aren't widely
>> used.  I occasionally see my browser advise me that a certificate is
>> invalid.  But the only path forward it offers is to ignore the error
>> if I want to continue doing whatever I'm trying to do.  I typically
>> say "go ahead", and I suspect most Users do the same. Similarly, I
>> have PGP and S/MIME credentials, but I rarely use them, and rarely
>> receive any email from others using them.
>
> But... they're used EVERYWHERE in government, particularly the
> military - where you need to plug a CAC card into your computer, just
> to log in.
>
> They're used, because you HAVE to use them.
>
> Same again for things like Microsoft Active Directory in the corporate
> environment, or Shibboleth in the academic world. (Which, in turn, are
> based on Kerberos, if memory serves.)
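For what it's worth, the client-side mechanics of checking a server's
certificate are by now baked into every stock TLS library.  A minimal sketch
(Python 3 standard library only; the host name is just a placeholder):

  # Fetch and inspect a server's X.509 certificate over TLS.
  import socket
  import ssl

  host = "www.example.com"                     # placeholder host
  ctx = ssl.create_default_context()           # uses the platform's CA trust store

  with socket.create_connection((host, 443), timeout=10) as sock:
      with ctx.wrap_socket(sock, server_hostname=host) as tls:
          cert = tls.getpeercert()             # chain + hostname already validated

  print("subject:", cert.get("subject"))
  print("issuer :", cert.get("issuer"))
  print("expires:", cert.get("notAfter"))

Which rather underlines the point above: the technology "exists", and failures
are detected just fine -- but when validation fails, the only practical choice
most users are offered is "ignore the error and continue".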
> > If the folks who enforce HIPPA were to pass a regulation requiring a > standard format, and standard protocols, for exchanging medical > records - that was based on X.509 certificates and S/MIME - guaranteed > that every medical systems provider would migrate from their > proprietary formats and protocols, to the standard. (Particularly, > since pretty much every mail and web client has the capabilities built > in.) > >> >> Control of Internet content, to provide child protection or other >> constraints, was developed by W3C in the 90s (look up PICS - Platform >> for Internet Content Selection).? It was even implemented in popular >> browsers of the day.? As a rep to W3C I helped get that in place as a >> general mechanism for attaching metadata to Web content, but AFAIK it >> never got any real use in the broad Internet and by now seems to have >> disappeared. >> >> Perhaps some historian will someday explain why such mechanisms don't >> seem to make it to the field and get widely implemented, deployed, >> and used.? Why are they different from TCP/IP, DNS, NTP and maybe a >> few others which had success in the early stages of the Internet? > > Lack of a forcing function - be it a vacuum demanding to be filled, or > legislation, or buying behavior of a large client, or customer demand. > > Miles > > >> >> Jack >> >> >> On 3/12/22 22:55, Toerless Eckert via Internet-history wrote: >>> Access control would be a lovely topic to take to the IETF. For >>> something >>> what Jack described as a review of historic methods to learn from >>> (would be a very >>> helpful info RFC, but lot of work i guess), and for todays >>> perspective IMHO >>> what access control methods could be recommended to avoid the >>> problematic filtering at >>> network layer. >>> >>> For example, we just had another incident of a court in germany >>> issuing blocking >>> orders to german ISPs (which typically operates on DNS), against a >>> porn service >>> that wasn't providing adequate child protection. How do we get rid >>> of such recurring >>> challenges to the basic internet infrastructure (IP and naming >>> level...) ? >>> >>> Funnily, i am just trying to watch a movie on disneyplus ("All King >>> Man") while being in >>> Germany with a USA based account, and the account only allows me to >>> select <= PG14. >>> Talked with tech-support, and the only solution was to temporarily >>> update the account location >>> ? to germany because (as i figure) it's even logically impossible to >>> automate this: In germany >>> kids are allowed/disallowed to watch different movies than in the >>> USA, but travelling >>> parents might be caught by surprise (especially on the "allowed" >>> part). So that's >>> from an arguably kids-friendly global content provider. Now try to >>> imagine how governments >>> are struggling, that many parents do expect to provide some useful >>> degree of protection for >>> kids.? If the answer to the problem is "well, we can't figure out >>> how to do this for the >>> Internet at large", then this will even increase the monopolization >>> of services to those >>> global providers that do. >>> >>> Sorry. Too much current-day text. The Internet was definitely a lot >>> easier in <= 1990'th, >>> when we had not enough kids on the Internet to worry about that issue. >>> >>> How about "The Internet was built for adults" ? >>> >>> Cheers >>> ???? Toerless >>> >>> >>> On Sat, Mar 12, 2022 at 09:14:05PM -0500, Miles Fidelman via >>> Internet-history wrote: >>>> A helpful perspective.? Thanks Jack. 
>>>> >>>> Not sure I completely agree with all of it (see below) - but pretty >>>> close. >>>> >>>> Jack Haverty via Internet-history wrote: >>>>> IMHO, the Internet has been splintered for decades, going back to the >>>>> days of DDN, and the introduction of EGP which enabled carving up the >>>>> Internet into many pieces, each run by different operators. >>>>> >>>>> But the history is a bit more complex than that.?? Back in the >>>>> mid-80s, >>>>> I used to give a lot of presentations about the Internet. One of the >>>>> points I made was that the first ten years of the Internet were all >>>>> about connectivity -- making it possible for every computer on the >>>>> planet to communicate with every other computer.? I opined then >>>>> that the >>>>> next ten years would be about making it *not* possible for every >>>>> computer to talk with every other -- i.e., to introduce mechanisms >>>>> that >>>>> made it possible to constrain connectivity, for any of a number of >>>>> reasons already mentioned. That was about 40 years ago -- my ten year >>>>> projection was way off target. >>>>> >>>>> At the time, the usage model of the Internet was based on the way >>>>> that >>>>> computers of that era were typically used.? A person would use a >>>>> terminal of some kind (typewriter or screen) and do something to >>>>> connect >>>>> it to a computer.? He or she would then somehow "log in" to that >>>>> computer with a name and password, and gain the ability to use >>>>> whatever >>>>> programs, data, and resources that individual was allowed to use.? At >>>>> the end of that "session", the user would log out, and that terminal >>>>> would no longer be able to do anything until the next user >>>>> repeated the >>>>> process. >>>>> >>>>> In the early days of the Internet, that model was translated into the >>>>> network realm.? E.g., there was a project called TACACS (TAC Access >>>>> Control System) that provided the mechanisms for a human user to "log >>>>> in" to the Internet, using a name and a password. DDN, for example, >>>>> issued DDN Access Cards which had your name and network password that >>>>> enabled a human user to log in to the DDN as a network. >>>>> >>>>> Having logged in to the network, you could then still connect to your >>>>> chosen computer as before.? But you no longer had to log in to that >>>>> computer.?? The network could tell the computer which user was >>>>> associated with the new connection, and, assuming the computer >>>>> manager >>>>> trusted the network, the user would be automatically logged in and be >>>>> able to do whatever that user was allowed to do.?? This new >>>>> feature was >>>>> termed "Double Login Elimination", since it removed the necessity >>>>> to log >>>>> in more than once for a given session, regardless of how many >>>>> computers >>>>> you might use. >>>>> >>>>> Those mechanisms didn't have strong security, but it was >>>>> straightforward >>>>> to add it for situations where it was required. The basic model >>>>> was that >>>>> network activity was always associated with some user, who was >>>>> identified and verified by the network mechanisms.?? Each computer >>>>> that >>>>> the user might use would be told who the user was, and could then >>>>> apply >>>>> its own rules about what that user could do.?? If the user made a >>>>> network connection out to some other computer, the user's identity >>>>> would >>>>> be similarly passed along to the other computer. 
>>>>> >>>>> At about that time (later 1980s), LANs and PCs began to spread >>>>> through >>>>> the Internet, and the user-at-a-terminal model broke down. Instead of >>>>> users at terminals making connections to the network, now there were >>>>> users at microcomputers making connections.?? Such computers were >>>>> "personal" computers, not under management by the typical "data >>>>> center" >>>>> or network operator but rather by individuals.??? Rather than >>>>> connecting >>>>> to remote computers as "terminals", connections started to also be >>>>> made >>>>> by programs running on those personal computers.?? The human user >>>>> might >>>>> not even be aware that such connections were happening. >>>>> >>>>> With that evolution of the network/user model, mechanisms such as >>>>> TACACS >>>>> became obsolete.? Where it was often reasonable to trust the >>>>> identification of a user performed by a mechanism run by the >>>>> network or >>>>> a datacenter, it was difficult to similarly trust the word of one >>>>> of the >>>>> multitude of microcomputers and software packages that were now >>>>> involved. >>>>> >>>>> So, the notion that a "user" could be identified and then >>>>> constrained in >>>>> use of the resources on the Internet was no longer available. >>>>> >>>>> AFAIK, since that time in the 80s, there hasn't been a new "usage >>>>> model" >>>>> developed to deal with the reality of today's Internet. We each have >>>>> many devices now, not just one personal computer.?? Many of them are >>>>> online all of the time; there are no "sessions" now with a human >>>>> interacting with a remote computer as in the 80s. When we use a >>>>> website, >>>>> what appears on our screen may come from dozens of computers >>>>> somewhere >>>>> "out there".?? Some of the content on the screen isn't even what we >>>>> asked for.?? Who is the "user" asking for advertising popups to >>>>> appear??? Did I give that user permission to use some of my screen >>>>> space??? Who did? >>>>> >>>>> User interaction with today's network is arguably much more >>>>> complex than >>>>> it was 40 years ago.? IMHO, no one has developed a good model of >>>>> network >>>>> usage for such a world, that enables the control of the resources >>>>> (computing, data) accessed across the Internet.?? For mechanisms that >>>>> have been developed, such as privacy-enhanced electronic mail, >>>>> deployment seems to have been very spotty for some reason. We get >>>>> email from identified Users, but can we trust that the email actually >>>>> came from that User? When the Web appeared, the Internet got really >>>>> complicated. >>>>> >>>>> Lacking appropriate mechanisms, users still need some way to >>>>> control who >>>>> can utliize what.?? So they improvise and generate adhoc point >>>>> solutions.? My bank wants to interact with me safely, so it sets up a >>>>> separate account on its own computers, with name, password, and >>>>> 2-factor >>>>> authentication.?? It can't trust the Internet to tell it who I >>>>> am.?? It >>>>> sends me email when I need to do something, advising me to log in >>>>> to my >>>>> account and read its message to me there, where it knows that I'm me, >>>>> and I know that it's my bank.?? It can't trust Internet email for >>>>> more >>>>> than advising me to come in to its splinter of the Internet. >>>>> >>>>> All my vendors do the same.? My newspaper.? My doctors. My media >>>>> subscriptions.? Each has its own "silo" where it can interact with me >>>>> reliably and confidently.?? 
Some of them probably do it to better >>>>> make >>>>> money.? But IMHO most of them do it because they have to - the >>>>> Internet >>>>> doesn't provide any mechanisms to help. >>>> I'm not sure that's really the case.? We do, after all have things >>>> like >>>> X.509 certificates, and various mechanisms defined on top of them.? >>>> Or, in >>>> the academic & enterprise worlds, we have IAM mechanisms that work >>>> across >>>> multiple institutions (e.g., Shibboleth and the like). >>>>> So we get lots of "splintering". IMHO that has at least partially >>>>> been driven by the lack of mechanisms within the Internet >>>>> technology to >>>>> deal with control of resources in ways that the users require. So >>>>> they >>>>> have invented their own individual mechanisms as needs arose.? >>>>> It's not >>>>> just at the router/ISP level, where splintering can be caused by >>>>> things >>>>> like the absence of mechanisms for "policy routing" or "type of >>>>> service" >>>>> or "security" that's important to someone. >>>> And here, I'll come back to commercial interests as driving the show. >>>> >>>> In the academic world - where interoperability and >>>> resource/information >>>> sharing are a priority - we have a world of identify federations.? >>>> Yes, one >>>> has to have permissions and such, but one doesn't need multiple >>>> library >>>> cards to access multiple libraries, or to make interlibrary loans.? >>>> For that >>>> matter, we can do business worldwide, with one bank account or >>>> credit card. >>>> >>>> But, when it comes to things like, say, distributing medical >>>> records, it >>>> took the Medicare administrators to force all doctors offices, >>>> hospitals, >>>> etc. to use the same format for submitting billing records. Meanwhile >>>> commercial firms have made a fortune creating and selling portals and >>>> private email systems - and convincing folks that the only way they >>>> can meet >>>> HIPPA requirements is to use said private systems.? And now they've >>>> started >>>> to sell their users on mechanisms to share records between >>>> providers (kind >>>> of like the early days of email - "there are more folks on our >>>> system then >>>> the other guys,' so we're your best option for letting doctors >>>> exchange >>>> patient records").? Without a forcing function for interoperability >>>> (be it >>>> ARPA funding the ARPANET specifically to enable resource sharing, or >>>> Medicare, or some other large institution) - market forces, and >>>> perhaps >>>> basic human psychology, push toward finding ways to segment >>>> markets, isolate >>>> tribes, carve off market niches, etc. >>>> >>>> Come to think of it, the same applies to "web services" - we >>>> developed a >>>> perfectly good protocol stack, and built RESTful services on top of >>>> it.? But >>>> somebody had to go off and reinvent everything, push all the >>>> functions up to >>>> the application layer, and make everything incredibly baroque and >>>> cumbersome.? And then folks started to come to their senses and start >>>> standardizing, a bit, on how to do RESTful web services in ways >>>> that sort of >>>> work for everyone.? (Of course, there are those who are trying to >>>> repeat the >>>> missteps, with "Web 3.0," smart contracts, and all of that stuff.) >>>>> "Double Login" momentarily was eliminated, but revived and has >>>>> evolved >>>>> into "Continuous Login" since the Internet doesn't provide what's >>>>> needed >>>>> by the users in today's complex world. >>>> A nice way of putting it. 
>>>> >>>> Though, perhaps it's equally useful to view things as "no login." >>>> Everything >>>> is a transaction, governed by a set of rules, accompanied by >>>> credentials and >>>> currency. >>>> >>>> And we have models for that that date back millennia - basically >>>> contracts >>>> and currency.? Later we invented multi-part forms & checking >>>> accounts.? Now >>>> we have a plethora of mechanisms - all doing basically the same >>>> thing - and >>>> competing with each other for market share.? (Kind of like >>>> standards, we >>>> need a standard way of talking to each other - so let's invent a >>>> new one.) >>>> >>>> Maybe, we can take a breath, take a step backwards, and start >>>> building on >>>> interoperable building blocks that have stood the test of time.? In >>>> the same >>>> way that e-books "work" a lot better than reading on laptops, and now >>>> tablets are merging the form factor in ways that are practical.? Or >>>> chat, in >>>> the form of SMS & MMS messaging, is pretty much still the standard for >>>> reaching anybody, anywhere, any time. >>>> >>>> But... absent a major institution pushing things forward (or >>>> together)... it >>>> probably will take a concerted effort, by those of us who >>>> understand the >>>> issues, and are in positions to specify technology for large >>>> systems, or >>>> large groups/organizations, to keep nudging things in the right >>>> direction, >>>> when we have the opportunity to do so. >>>> >>>>> I was involved in operating a "splinternet" corporate internet in the >>>>> 90s, connected to "the Internet" only by an email gateway. We just >>>>> couldn't trust the Internet so we kept it at arms length. >>>>> >>>>> Hope this helps some historian.... >>>>> Jack Haverty >>>> And, perhaps, offer some lessons learned to those who would prefer >>>> not to >>>> repeat history! >>>> >>>> Cheers, >>>> >>>> Miles >>>> >>>> >>>> -- >>>> In theory, there is no difference between theory and practice. >>>> In practice, there is.? .... Yogi Berra >>>> >>>> Theory is when you know everything but nothing works. >>>> Practice is when everything works but no one knows why. >>>> In our lab, theory and practice are combined: >>>> nothing works and no one knows why.? ... unknown >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >> > > From mfidelman at meetinghouse.net Sun Mar 13 14:23:40 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Sun, 13 Mar 2022 17:23:40 -0400 Subject: [ih] Preparing for the splinternet In-Reply-To: <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org> References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net> <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net> <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org> Message-ID: <99b2b7bc-60e0-e3e4-651a-0aff27592cec@meetinghouse.net> re.? "An open nature is apparently insufficient.?? A strong "forcing function" is also insufficient, except in its own silo where its force is effective." OR... in a situation where a strong "anchor" customer (or vendor) drives a marketplace. Hence my mention of Medicare's influence on standardization of reporting formats across the medical industry.? 
Similarly, one might expect that if the folks who enforce HIPPA regulations were to mandate, or at least endorse, X.509 based PKI, and S/MIME for data exchange - that would push back on the variety of proprietary email and record sharing systems in the medical community. Or consider what might happen if the IRS and the Social Security Administration adopted an open standards policy. Similarly, we have the ubiquity of Kerberos and Shibboleth in the academic community.? And Active Directory in corporate settings. Or, for that matter, Login.Gov. Cheers, Miles Jack Haverty via Internet-history wrote: > X.509 may be used "EVERYWHERE in government" (at least some > governments); but it's not used everywhere else, e.g., in the much > larger community of Internet users worldwide. > > Forcing functions seem to create silos.? TCP/IP was nurtured in such a > silo, where a "force" had effect.? It started with the US Defense > Department who mandated it for their world, while at the same time > keeping their options open for planned adoption of OSI technology. > > TCP/IP broke out of its silo, spread throughout the world, and reduced > competing silos (SNA, DECNet, OSI, SPX/IPX, ...) to oblivion. > > A few other silos had a similar experience, e.g., DNS and NTP which > seem to have no competitors now.?? Dave Mills and his crew built the > NTP silo.? IIRC, he just needed good clocks to perform some > experiments on the neonatal Internet.? So he built them as NTP.? Web > technology (HTTP, HTML, URLs) started in Tim Berners-Lee's silo but > similarly broke out and became ubiquitous.?? Competitors such as > Gopher, and even well-funded and pre-existing ones like Lotus Notes, > didn't endure.?? In the early days of the Web, there were several > silos competing to provide security for Electronic Commerce.? But > rather quickly HTTPS became dominant and seems ubiquitous today. > > People who build silos sometimes build them with fragile materials, > easily broken.?? E.g., TCP/IP was built in a government silo, but was > explicitly made very "open" for anyone to adopt. I've always thought > such open nature was important for ubiquity. But while TCP/IP V4 broke > out of its silo and became ubiquitous, TCP/IP V6, with presumably the > same characteristics, has still not replaced V4.?? Internet > technologies such as IRC (Internet Relay Chat) provided an open > mechanism for people to carry on public discussions.? But that didn't > prevent the emergence of myriad social media mechanisms that > collectively dominate today as competing silos.?? The battle > continues, and IRC still exists as a minor contestant, but it's not > likely to win.?? Similarly, NNTP provided a mechanism for > disseminating news across the Internet; there's lots of news today on > the 'net, but I don't think it travels using NNTP. > > An open nature is apparently insufficient.?? A strong "forcing > function" is also insufficient, except in its own silo where its force > is effective. > > Ray Tomlinson's introduction of @ has dominated for decades.? But now > it seems more and more likely as part of a Twitter identity than an > Internet one. > > DNS seems ubiquitous, but I sense that its dominance is waning. There > are too many "Acme Plumbing" websites now, making it hard to remember > the DNS name for the one in my neighborhood.? Even "Four Seasons" > doesn't always get you what you expect... > > I find myself now using search engines and browser history to remember > where to find things I use, rather than remembering their DNS names.?? 
> Ubiquity and dominance seem to not be permanent. > > As Toerless pointed out, silos (and splinters) enable innovation. A > good thing.? They also encourage complexity and walled gardens. Bad > things, IMHO. > > So why do some silos break open and their technology spreads to become > dominant and ubiquitous, while others languish for decades. > > That's my question perhaps some Historian can answer someday.?? I > suspect the answer will be complicated. > > Jack Haverty > > On 3/13/22 08:01, Miles Fidelman via Internet-history wrote: >> Jack Haverty via Internet-history wrote: >>> if you look at history from the Users' perspective, IMHO the problem >>> has been a lack of follow-through.? Lots of technology (protocols, >>> formats, algorithms) has been created, documented in 1000s of RFCs >>> and such.? But unless it gets to the field and becomes an inherent >>> and pervasive capability of the Internet, it doesn't really exist >>> for the Users, whether they be individuals or corporations or >>> governments or network or cloud operators. >>> >>> Two good examples of technology beyond basic TCP/IP that have made >>> that leap are DNS and NTP.? You can pretty much count on them to be >>> available no matter where you connect to the Internet and what kind >>> of device you use to make that connection. >>> >>> In contrast, many other technologies may "exist" but haven't made >>> that leap. >>> >>> E.g., X.509 and certificates may exist, but IMHO they aren't widely >>> used.?? I occasionally see my browser advise me that a certificate >>> is invalid.?? But the only path forward it offers is to ignore the >>> error if I want to continue doing whatever I'm trying to do.? I >>> typically say "go ahead", and I suspect most Users do the same. >>> Similarly, I have PGP and S/MIME credentials, but I rarely use them, >>> and rarely receive any email from others using them. >> >> But... they're used EVERYWHERE in government, particularly the >> military - where you need to plug a CAC card into your computer, just >> to log in. >> >> They're used, because you HAVE to use them. >> >> Same again for things like Microsoft Active Directory in the >> corporate environment, or Shibboleth in the academic world. (Which, >> in turn, are based on Kerberos, if memory serves.) >> >> If the folks who enforce HIPPA were to pass a regulation requiring a >> standard format, and standard protocols, for exchanging medical >> records - that was based on X.509 certificates and S/MIME - >> guaranteed that every medical systems provider would migrate from >> their proprietary formats and protocols, to the standard. >> (Particularly, since pretty much every mail and web client has the >> capabilities built in.) >> >>> >>> Control of Internet content, to provide child protection or other >>> constraints, was developed by W3C in the 90s (look up PICS - >>> Platform for Internet Content Selection).? It was even implemented >>> in popular browsers of the day.? As a rep to W3C I helped get that >>> in place as a general mechanism for attaching metadata to Web >>> content, but AFAIK it never got any real use in the broad Internet >>> and by now seems to have disappeared. >>> >>> Perhaps some historian will someday explain why such mechanisms >>> don't seem to make it to the field and get widely implemented, >>> deployed, and used.? Why are they different from TCP/IP, DNS, NTP >>> and maybe a few others which had success in the early stages of the >>> Internet? 
>> >> Lack of a forcing function - be it a vacuum demanding to be filled, >> or legislation, or buying behavior of a large client, or customer >> demand. >> >> Miles >> >> >>> >>> Jack >>> >>> >>> On 3/12/22 22:55, Toerless Eckert via Internet-history wrote: >>>> Access control would be a lovely topic to take to the IETF. For >>>> something >>>> what Jack described as a review of historic methods to learn from >>>> (would be a very >>>> helpful info RFC, but lot of work i guess), and for todays >>>> perspective IMHO >>>> what access control methods could be recommended to avoid the >>>> problematic filtering at >>>> network layer. >>>> >>>> For example, we just had another incident of a court in germany >>>> issuing blocking >>>> orders to german ISPs (which typically operates on DNS), against a >>>> porn service >>>> that wasn't providing adequate child protection. How do we get rid >>>> of such recurring >>>> challenges to the basic internet infrastructure (IP and naming >>>> level...) ? >>>> >>>> Funnily, i am just trying to watch a movie on disneyplus ("All King >>>> Man") while being in >>>> Germany with a USA based account, and the account only allows me to >>>> select <= PG14. >>>> Talked with tech-support, and the only solution was to temporarily >>>> update the account location >>>> ? to germany because (as i figure) it's even logically impossible >>>> to automate this: In germany >>>> kids are allowed/disallowed to watch different movies than in the >>>> USA, but travelling >>>> parents might be caught by surprise (especially on the "allowed" >>>> part). So that's >>>> from an arguably kids-friendly global content provider. Now try to >>>> imagine how governments >>>> are struggling, that many parents do expect to provide some useful >>>> degree of protection for >>>> kids.? If the answer to the problem is "well, we can't figure out >>>> how to do this for the >>>> Internet at large", then this will even increase the monopolization >>>> of services to those >>>> global providers that do. >>>> >>>> Sorry. Too much current-day text. The Internet was definitely a lot >>>> easier in <= 1990'th, >>>> when we had not enough kids on the Internet to worry about that issue. >>>> >>>> How about "The Internet was built for adults" ? >>>> >>>> Cheers >>>> ???? Toerless >>>> >>>> >>>> On Sat, Mar 12, 2022 at 09:14:05PM -0500, Miles Fidelman via >>>> Internet-history wrote: >>>>> A helpful perspective.? Thanks Jack. >>>>> >>>>> Not sure I completely agree with all of it (see below) - but >>>>> pretty close. >>>>> >>>>> Jack Haverty via Internet-history wrote: >>>>>> IMHO, the Internet has been splintered for decades, going back to >>>>>> the >>>>>> days of DDN, and the introduction of EGP which enabled carving up >>>>>> the >>>>>> Internet into many pieces, each run by different operators. >>>>>> >>>>>> But the history is a bit more complex than that.?? Back in the >>>>>> mid-80s, >>>>>> I used to give a lot of presentations about the Internet. One of the >>>>>> points I made was that the first ten years of the Internet were all >>>>>> about connectivity -- making it possible for every computer on the >>>>>> planet to communicate with every other computer.? I opined then >>>>>> that the >>>>>> next ten years would be about making it *not* possible for every >>>>>> computer to talk with every other -- i.e., to introduce >>>>>> mechanisms that >>>>>> made it possible to constrain connectivity, for any of a number of >>>>>> reasons already mentioned. 
That was about 40 years ago -- my ten >>>>>> year >>>>>> projection was way off target. >>>>>> >>>>>> At the time, the usage model of the Internet was based on the way >>>>>> that >>>>>> computers of that era were typically used.? A person would use a >>>>>> terminal of some kind (typewriter or screen) and do something to >>>>>> connect >>>>>> it to a computer.? He or she would then somehow "log in" to that >>>>>> computer with a name and password, and gain the ability to use >>>>>> whatever >>>>>> programs, data, and resources that individual was allowed to >>>>>> use.? At >>>>>> the end of that "session", the user would log out, and that terminal >>>>>> would no longer be able to do anything until the next user >>>>>> repeated the >>>>>> process. >>>>>> >>>>>> In the early days of the Internet, that model was translated into >>>>>> the >>>>>> network realm.? E.g., there was a project called TACACS (TAC Access >>>>>> Control System) that provided the mechanisms for a human user to >>>>>> "log >>>>>> in" to the Internet, using a name and a password. DDN, for example, >>>>>> issued DDN Access Cards which had your name and network password >>>>>> that >>>>>> enabled a human user to log in to the DDN as a network. >>>>>> >>>>>> Having logged in to the network, you could then still connect to >>>>>> your >>>>>> chosen computer as before.? But you no longer had to log in to that >>>>>> computer.?? The network could tell the computer which user was >>>>>> associated with the new connection, and, assuming the computer >>>>>> manager >>>>>> trusted the network, the user would be automatically logged in >>>>>> and be >>>>>> able to do whatever that user was allowed to do.?? This new >>>>>> feature was >>>>>> termed "Double Login Elimination", since it removed the necessity >>>>>> to log >>>>>> in more than once for a given session, regardless of how many >>>>>> computers >>>>>> you might use. >>>>>> >>>>>> Those mechanisms didn't have strong security, but it was >>>>>> straightforward >>>>>> to add it for situations where it was required. The basic model >>>>>> was that >>>>>> network activity was always associated with some user, who was >>>>>> identified and verified by the network mechanisms. Each computer >>>>>> that >>>>>> the user might use would be told who the user was, and could then >>>>>> apply >>>>>> its own rules about what that user could do.?? If the user made a >>>>>> network connection out to some other computer, the user's >>>>>> identity would >>>>>> be similarly passed along to the other computer. >>>>>> >>>>>> At about that time (later 1980s), LANs and PCs began to spread >>>>>> through >>>>>> the Internet, and the user-at-a-terminal model broke down. >>>>>> Instead of >>>>>> users at terminals making connections to the network, now there were >>>>>> users at microcomputers making connections.?? Such computers were >>>>>> "personal" computers, not under management by the typical "data >>>>>> center" >>>>>> or network operator but rather by individuals.??? Rather than >>>>>> connecting >>>>>> to remote computers as "terminals", connections started to also >>>>>> be made >>>>>> by programs running on those personal computers.?? The human user >>>>>> might >>>>>> not even be aware that such connections were happening. >>>>>> >>>>>> With that evolution of the network/user model, mechanisms such as >>>>>> TACACS >>>>>> became obsolete.? 
Where it was often reasonable to trust the >>>>>> identification of a user performed by a mechanism run by the >>>>>> network or >>>>>> a datacenter, it was difficult to similarly trust the word of one >>>>>> of the >>>>>> multitude of microcomputers and software packages that were now >>>>>> involved. >>>>>> >>>>>> So, the notion that a "user" could be identified and then >>>>>> constrained in >>>>>> use of the resources on the Internet was no longer available. >>>>>> >>>>>> AFAIK, since that time in the 80s, there hasn't been a new "usage >>>>>> model" >>>>>> developed to deal with the reality of today's Internet. We each have >>>>>> many devices now, not just one personal computer.?? Many of them are >>>>>> online all of the time; there are no "sessions" now with a human >>>>>> interacting with a remote computer as in the 80s. When we use a >>>>>> website, >>>>>> what appears on our screen may come from dozens of computers >>>>>> somewhere >>>>>> "out there".?? Some of the content on the screen isn't even what we >>>>>> asked for.?? Who is the "user" asking for advertising popups to >>>>>> appear??? Did I give that user permission to use some of my screen >>>>>> space??? Who did? >>>>>> >>>>>> User interaction with today's network is arguably much more >>>>>> complex than >>>>>> it was 40 years ago.? IMHO, no one has developed a good model of >>>>>> network >>>>>> usage for such a world, that enables the control of the resources >>>>>> (computing, data) accessed across the Internet.?? For mechanisms >>>>>> that >>>>>> have been developed, such as privacy-enhanced electronic mail, >>>>>> deployment seems to have been very spotty for some reason. We get >>>>>> email from identified Users, but can we trust that the email >>>>>> actually >>>>>> came from that User? When the Web appeared, the Internet got really >>>>>> complicated. >>>>>> >>>>>> Lacking appropriate mechanisms, users still need some way to >>>>>> control who >>>>>> can utliize what.?? So they improvise and generate adhoc point >>>>>> solutions.? My bank wants to interact with me safely, so it sets >>>>>> up a >>>>>> separate account on its own computers, with name, password, and >>>>>> 2-factor >>>>>> authentication.?? It can't trust the Internet to tell it who I >>>>>> am.?? It >>>>>> sends me email when I need to do something, advising me to log in >>>>>> to my >>>>>> account and read its message to me there, where it knows that I'm >>>>>> me, >>>>>> and I know that it's my bank.?? It can't trust Internet email for >>>>>> more >>>>>> than advising me to come in to its splinter of the Internet. >>>>>> >>>>>> All my vendors do the same.? My newspaper.? My doctors. My media >>>>>> subscriptions.? Each has its own "silo" where it can interact >>>>>> with me >>>>>> reliably and confidently.?? Some of them probably do it to better >>>>>> make >>>>>> money.? But IMHO most of them do it because they have to - the >>>>>> Internet >>>>>> doesn't provide any mechanisms to help. >>>>> I'm not sure that's really the case.? We do, after all have things >>>>> like >>>>> X.509 certificates, and various mechanisms defined on top of >>>>> them.? Or, in >>>>> the academic & enterprise worlds, we have IAM mechanisms that work >>>>> across >>>>> multiple institutions (e.g., Shibboleth and the like). >>>>>> So we get lots of "splintering". IMHO that has at least partially >>>>>> been driven by the lack of mechanisms within the Internet >>>>>> technology to >>>>>> deal with control of resources in ways that the users require. 
So >>>>>> they >>>>>> have invented their own individual mechanisms as needs arose.? >>>>>> It's not >>>>>> just at the router/ISP level, where splintering can be caused by >>>>>> things >>>>>> like the absence of mechanisms for "policy routing" or "type of >>>>>> service" >>>>>> or "security" that's important to someone. >>>>> And here, I'll come back to commercial interests as driving the show. >>>>> >>>>> In the academic world - where interoperability and >>>>> resource/information >>>>> sharing are a priority - we have a world of identify federations.? >>>>> Yes, one >>>>> has to have permissions and such, but one doesn't need multiple >>>>> library >>>>> cards to access multiple libraries, or to make interlibrary >>>>> loans.? For that >>>>> matter, we can do business worldwide, with one bank account or >>>>> credit card. >>>>> >>>>> But, when it comes to things like, say, distributing medical >>>>> records, it >>>>> took the Medicare administrators to force all doctors offices, >>>>> hospitals, >>>>> etc. to use the same format for submitting billing records. Meanwhile >>>>> commercial firms have made a fortune creating and selling portals and >>>>> private email systems - and convincing folks that the only way >>>>> they can meet >>>>> HIPPA requirements is to use said private systems.? And now >>>>> they've started >>>>> to sell their users on mechanisms to share records between >>>>> providers (kind >>>>> of like the early days of email - "there are more folks on our >>>>> system then >>>>> the other guys,' so we're your best option for letting doctors >>>>> exchange >>>>> patient records").? Without a forcing function for >>>>> interoperability (be it >>>>> ARPA funding the ARPANET specifically to enable resource sharing, or >>>>> Medicare, or some other large institution) - market forces, and >>>>> perhaps >>>>> basic human psychology, push toward finding ways to segment >>>>> markets, isolate >>>>> tribes, carve off market niches, etc. >>>>> >>>>> Come to think of it, the same applies to "web services" - we >>>>> developed a >>>>> perfectly good protocol stack, and built RESTful services on top >>>>> of it.? But >>>>> somebody had to go off and reinvent everything, push all the >>>>> functions up to >>>>> the application layer, and make everything incredibly baroque and >>>>> cumbersome.? And then folks started to come to their senses and start >>>>> standardizing, a bit, on how to do RESTful web services in ways >>>>> that sort of >>>>> work for everyone.? (Of course, there are those who are trying to >>>>> repeat the >>>>> missteps, with "Web 3.0," smart contracts, and all of that stuff.) >>>>>> "Double Login" momentarily was eliminated, but revived and has >>>>>> evolved >>>>>> into "Continuous Login" since the Internet doesn't provide what's >>>>>> needed >>>>>> by the users in today's complex world. >>>>> A nice way of putting it. >>>>> >>>>> Though, perhaps it's equally useful to view things as "no login." >>>>> Everything >>>>> is a transaction, governed by a set of rules, accompanied by >>>>> credentials and >>>>> currency. >>>>> >>>>> And we have models for that that date back millennia - basically >>>>> contracts >>>>> and currency.? Later we invented multi-part forms & checking >>>>> accounts.? Now >>>>> we have a plethora of mechanisms - all doing basically the same >>>>> thing - and >>>>> competing with each other for market share.? (Kind of like >>>>> standards, we >>>>> need a standard way of talking to each other - so let's invent a >>>>> new one.) 
>>>>> >>>>> Maybe, we can take a breath, take a step backwards, and start >>>>> building on >>>>> interoperable building blocks that have stood the test of time.? >>>>> In the same >>>>> way that e-books "work" a lot better than reading on laptops, and now >>>>> tablets are merging the form factor in ways that are practical.? >>>>> Or chat, in >>>>> the form of SMS & MMS messaging, is pretty much still the standard >>>>> for >>>>> reaching anybody, anywhere, any time. >>>>> >>>>> But... absent a major institution pushing things forward (or >>>>> together)... it >>>>> probably will take a concerted effort, by those of us who >>>>> understand the >>>>> issues, and are in positions to specify technology for large >>>>> systems, or >>>>> large groups/organizations, to keep nudging things in the right >>>>> direction, >>>>> when we have the opportunity to do so. >>>>> >>>>>> I was involved in operating a "splinternet" corporate internet in >>>>>> the >>>>>> 90s, connected to "the Internet" only by an email gateway. We just >>>>>> couldn't trust the Internet so we kept it at arms length. >>>>>> >>>>>> Hope this helps some historian.... >>>>>> Jack Haverty >>>>> And, perhaps, offer some lessons learned to those who would prefer >>>>> not to >>>>> repeat history! >>>>> >>>>> Cheers, >>>>> >>>>> Miles >>>>> >>>>> >>>>> -- >>>>> In theory, there is no difference between theory and practice. >>>>> In practice, there is.? .... Yogi Berra >>>>> >>>>> Theory is when you know everything but nothing works. >>>>> Practice is when everything works but no one knows why. >>>>> In our lab, theory and practice are combined: >>>>> nothing works and no one knows why.? ... unknown >>>>> >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> >> > -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From york at isoc.org Mon Mar 14 07:58:00 2022 From: york at isoc.org (Dan York) Date: Mon, 14 Mar 2022 14:58:00 +0000 Subject: [ih] Preparing for the splinternet In-Reply-To: <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org> References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org> <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net> <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net> <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org> Message-ID: <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org> Jack, > On Mar 13, 2022, at 3:55 PM, Jack Haverty via Internet-history wrote: > > Internet technologies such as IRC (Internet Relay Chat) provided an open mechanism for people to carry on public discussions. But that didn't prevent the emergence of myriad social media mechanisms that collectively dominate today as competing silos. The battle continues, and IRC still exists as a minor contestant, but it's not likely to win. Similarly, NNTP provided a mechanism for disseminating news across the Internet; there's lots of news today on the 'net, but I don't think it travels using NNTP. > > An open nature is apparently insufficient. A strong "forcing function" is also insufficient, except in its own silo where its force is effective. 
I think a critical element with both IRC and NNTP (of which I was a strong
user of both) and similar other technologies based on open standards was, and
still is... *user experience* (UX).

I remember very well in the mid-2000s when many of us were working on VoIP
systems based on the Session Initiation Protocol. We were working hard to get
SIP to a place where it could replace H.323 and other various proprietary
protocols. We were making progress on many different fronts, but there was a
lot of complexity involved with making SIP work in so many different network
configurations.

Then along came Skype with its extremely simple UX. You just installed the
software and... ta da... you were making audio calls to people. And it was SO
SIMPLE that "anyone" could install Skype on their computer and have it "just
work".

We saw this happen with messaging with IRC and also Jabber/XMPP. "Regular"
users got used to the increasingly sophisticated UX of proprietary messaging
apps like WhatsApp, Apple's iMessage, Facebook Messenger, Skype, and many
others.

Those consumer experiences drove enterprise/organization expectations. And IRC
clients and Jabber clients just couldn't keep up.

Along came Slack with its slick UX and... poof... people started leaving IRC
and XMPP networks for the simplicity and "just works" UX of Slack. (And now
other similar proprietary messaging systems.)

In both the case of Skype and Slack, they are centralized systems/services
using proprietary protocols. Those centralized services also made it extremely
easy to discover other people - and both started to have large directories of
users. (A separate but related issue I wrote about a while ago - and is still
a key issue around acceptance of these systems - some of the players have just
changed since I wrote this in 2016:
https://circleid.com/posts/20160515_directory_dilemma_why_facebook_google_skype_may_win_mobile_app_war )

I think NNTP had similar issues with the UX of news readers... but I also
think there were larger issues there with companies seeking to use news as a
means to keep people inside their new walled gardens.. and also to provide
moderated experiences. (But that could be a whole other email thread.)

My 2 cents,
Dan

From johnl at iecc.com Mon Mar 14 08:16:59 2022
From: johnl at iecc.com (John Levine)
Date: 14 Mar 2022 15:16:59 -0000
Subject: [ih] Preparing for the splinternet
In-Reply-To: <216a389f-69 <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org>
References: <216a389f-69 <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org>
Message-ID:

According to Dan York via Internet-history :
>I think NNTP had similar issues with the UX of news readers... but I also think there were larger issues there with
>companies seeking to use news as a means to keep people inside their new walled gardens.. and also to provide moderated
>experiences. (But that could be a whole other email thread.)

I still run a moderated newsgroup which gets significant traffic.

The problem with usenet wasn't the UI, which for the most part was the same as for
mail programs (Thunderbird still does both.)  It was that like any freeish push medium,
it was overrun with spam.  By the time we got the spam under control, most of the users
had moved on to other places.

R's,
John

--
Regards,
John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail.
https://jl.ly

From york at isoc.org Mon Mar 14 08:25:49 2022
From: york at isoc.org (Dan York)
Date: Mon, 14 Mar 2022 15:25:49 +0000
Subject: [ih] Preparing for the splinternet
In-Reply-To:
References: <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org>
Message-ID: <6A7EF002-2753-491B-9CB2-88AAD2A86313@isoc.org>

John,

On Mar 14, 2022, at 11:16 AM, John Levine via Internet-history wrote:

According to Dan York via Internet-history:
I think NNTP had similar issues with the UX of news readers... but I also think there were larger issues there with
companies seeking to use news as a means to keep people inside their new walled gardens.. and also to provide moderated
experiences. (But that could be a whole other email thread.)

I still run a moderated newsgroup which gets significant traffic.

Fascinating!  I haven't interacted with an actual NNTP newsgroup in... so many
years I cannot even think of when. Probably at least 15+ or more.

The problem with usenet wasn't the UI, which for the most part was the same as for
mail programs (Thunderbird still does both.)  It was that like any freeish push medium,
it was overrun with spam.  By the time we got the spam under control, most of the users
had moved on to other places.

Ugh. Yes... VERY true! I remember that now! Yes, it was BAD. When I was
writing "moderated experiences" I was thinking more of people seeking
environments with fewer trolls, but of course spam was a huge issue.

I had that toward the end of my personal use of XMPP servers. One of my
accounts was on a public jabber server and it basically devolved to just being
a ginormous pit of spam. I had to stop connecting to it because the account
just overwhelmed my XMPP client with spam messages.

Thanks,
Dan

From cos at aaaaa.org Mon Mar 14 09:01:29 2022
From: cos at aaaaa.org (Ofer Inbar)
Date: Mon, 14 Mar 2022 12:01:29 -0400
Subject: [ih] Preparing for the splinternet
In-Reply-To: <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org>
References: <4511b78a-dd91-61f1-d06d-78f6b7b9e71c@3kitty.org>
 <216a389f-69a7-e1a4-6ace-71c3c5fb5120@meetinghouse.net>
 <9ada1dc3-08c0-834d-e4e6-3a5d5f4a26a8@meetinghouse.net>
 <9da0a512-48e1-61fd-acf4-398fe1d98c26@3kitty.org>
 <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org>
Message-ID: <20220314160129.GE7307@miplet.aaaaa.org>
And IRC clients and Jabber clients just couldn???t keep up. > > Along came Slack with its slick UX and??? poof??? people started leaving IRC and XMPP networks for the simplicity and ???just works??? UX of Slack. ????. (And now other similar proprietary messaging systems.) > > In both the case of Skype and Slack, they are centralized systems/services using proprietary protocols. Two big tradeoffs I've seen affect this over the course of the past few decades on the net, that I don't think I've seen fleshed out in this thread. A. Centralization & speed of development When you centralize the service and make the protocol proprietary, it allows for much faster feature development. The same company fully controls the clients and server side, and can make rapid changes to the protocol to support changes in functionality. Open protocols are bound to develop much more slowly and lag behind. On the other side of the tradeoff, open protocols allow many different clients and give the user a choice. So many things I dislike about Slack and Discord's user-side that I would've solved on IRC by choosing the client that does what I want, for example. And with email, we still have that in part, except that a lot of people are kind of stuck with their organization's choice of email provder. But can still choose something else for their personal email. Unfortunately, even with choice of clients, they're still constricted by the slow development of the protocol. There were a number of IRC clients (well, probably still are) that are easy to install and easy to use and auto-configure, but that still won't give you easy user discovery, for example. This isn't the whole story, but it's a big part of the story: More people prefer the benefits of the proprietary side of this tradeoff than the open side. The kinds of features that a proprietary protocol can deliver in months but would take an open protocol decades, are a big reason why in many spheres they have taken over. B. Who stores and who pays Remember the "social media" of the early web? Home pages and web rings. Your "profile" was your home page. Your "friends list" was often literally a page called "friends" but sometimes "links" that linked to a bunch of other people's homepages, with brief notes about each of them and why you were linking to them. And then your home page could also link to your photo gallery, etc. Tripod and Geocities and others of that era kind of adopted this model, by providing a service where you could very easily make a homepage, and link to others, but they shifted it in a big way: you no longer had to find a host for storing your web data, because this one big centralized company was willing to just store it for you. But once that happens, there's a very big incentive for them to attracted everyone else you might be interacting with, and to use that critical mass of lots of people on the same host to develop new features that go beyond just hyperlinks. But the more special features only work when everyone involved is on the same hosting service, the less of an open web it is. Usenet doesn't require the individual end user to find a place for all their newsgroups to be stored, but it does mostly rely on sites to volunteer to store and forward a lot of content that isn't all for their own users. So then you get things like Digg and Reddit that offer to store all of it in one centralized place for everyone, and support it with ads, and the dynamic seems pretty similar from there. 
-- Cos From mfidelman at meetinghouse.net Tue Mar 15 07:49:01 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Tue, 15 Mar 2022 10:49:01 -0400 Subject: [ih] Preparing for the splinternet In-Reply-To: References: <216a389f-69 <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org> Message-ID: John Levine via Internet-history wrote: > According to Dan York via Internet-history : >> I think NNTP had similar issues with the UX of news readers... but I also think there were larger issues there with >> companies seeking to use news as a means to keep people inside their new walled gardens... and also to provide moderated >> experiences. (But that could be a whole other email thread.) > I still run a moderated newsgroup which gets significant traffic. I THINK that DoD's logistics folks still use NNTP as the platform for JOPES groups (anybody know for sure?). > > The problem with usenet wasn't the UI, which for the most part was the same as for > mail programs (Thunderbird still does both.) It was that like any freeish push medium, > it was overrun with spam. By the time we got the spam under control, most of the users > had moved on to other places. > > For a very short time, AOL was distributing an open source news server that supported private newsgroups. It was a really great alternative to running a listserver. Unfortunately, when AOL was sold, it went away. It's a shame - NNTP has all the hooks for global identity and access control, based on crypto. There's a real opportunity to build an open, distributed forum capability that isn't some technical abortion like Discord. (Anybody want to collaborate on such a beast?) Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From tte at cs.fau.de Tue Mar 15 08:01:07 2022 From: tte at cs.fau.de (Toerless Eckert) Date: Tue, 15 Mar 2022 16:01:07 +0100 Subject: [ih] Preparing for the splinternet In-Reply-To: References: <16858613-8E86-4576-B9E9-8CA6124F1FE1@isoc.org> Message-ID: It's not only the federation abilities that we lose by moving away from Usenet to the millions of siloed "forums" on the Internet. It is not even clear to me how important the federation character is for most forums. But what is IMHO even more important for the end-user is the loss of the freedom of user experience that NNTP gave us. The same loss of freedom of user experience is true for any type of video streaming compared to prior analog/digital TV experiences, where every vendor of end-user equipment could build a customized user experience and make that end-user experience be the aggregator. Of course, end-users are feeling that missing customized/aggregated experience the more streaming services are offered and/or subscribed, but they seemingly are not in a position of enough power to change that fundamentally - primarily because the platforms are making it hard if not impossible for third parties to do such aggregation. In one recent instance, TiVo tried to provide such an aggregated experience on the Android platform, only to bail out when Google announced that they too want to do it, further monopolizing the platform at all stages end-to-end. Let's see if regulators will wake up one day to this vertical monopolization and break it apart, like arguably they at least tried with OS/browser.
*sigh* Toerless On Tue, Mar 15, 2022 at 10:49:01AM -0400, Miles Fidelman via Internet-history wrote: > [...] -- --- tte at cs.fau.de From lpress at csudh.edu Tue Mar 15 13:49:53 2022 From: lpress at csudh.edu (Larry Press) Date: Tue, 15 Mar 2022 20:49:53 +0000 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt Message-ID: The limited, but significant role of SpaceX Starlink in Ukraine: https://circleid.com/posts/20220308-spacex-starlink-in-ukraine-a-week-later Reminds me of the role of RELCOM during the Soviet coup attempt of 1991: https://cis471.blogspot.com/2011/01/before-twitter-revolutions-there-was.html Larry From surfer at mauigateway.com Tue Mar 15 15:24:41 2022 From: surfer at mauigateway.com (surfer at mauigateway.com) Date: Tue, 15 Mar 2022 18:24:41 -0400 Subject: [ih] Preparing for the splinternet In-Reply-To: References: Message-ID: <1647383081.omdc687604wogc8g@webmail.mauigateway.com> In: https://www.cavebear.com/cavebear-blog/internet_quo_vadis You mention: "The island-and-bridge internet is coming. It's not going to come as a tsunami; it's going to come as a slow incoming tide." How do satellite networks play into that? It seems they will overcome the "only one bridge crosses the moat" thing. scott On Sat, 12 Mar 2022 01:56:51 -0800, Karl Auerbach via Internet-history wrote: On 3/10/22 5:02 PM, the keyboard of geoff goodfellow via Internet-history wrote: > EXCERPT: > > According to Wikipedia , a > researcher at the Cato Institute first used the word "splinternet" in 2001 > to describe the idea of "parallel Internets that would be run as distinct, > private and autonomous universes."
Well, "splinternet" it isn't quite "Internet history", it's more of a prophesy of things that could come.? And I sense that none of us want disjoint "splinters", and I don't think users want that either.? But if we use the analogy of wood then today's Internet is nice, clear lumber. And what we have called "splinternet" might be gluelams or fiberboard, i.e. many pieces that are joined to form something that is at least as strong and useful as a timber cut from a single tree. If we expand our view of Internet History to encompass predecessors we see that that "splinters" have existed yet the system as a whole provided acceptable service to users. Perhaps the earliest system that used store-and-forward handling of electronic messages was the telegraph system that arose in the 1830s.? Although it was never a single technically uniform global system, it did have "splinters" that worked acceptably well and were sufficiently joined so that from the users' point of view, it was one system. (We can say the same about the voice telephone system, but I view that more as a circuit switching paradigm rather than store-and-forward message handling.) I, personally, am of the belief that just as the Internet began as a single network and then became a network of networks, i.e. an Internet, the time may be near when we add yet another tier; that the Internet evolves into a network of internets. How this may come to pass is uncertain.? However, I believe that the weak fracture plane is that users no longer care about elegant end-to-end principles but, rather, live in a world of Apps and those users care nothing whether the underlying plumbing is elegant or a jumble - the users only care that their favorite Apps work. Early Internet protocols needed end-to-end connections.? But as the years passed more and more protocols were designed with the idea that they could operate via relays and proxies.? SMTP was an early one, HTTP a later one.? It is that acceptance of proxies and relays that reduces the strength of the end-to-end principle to act as a glue that holds the Internet into a single system. (This is not a negative reflection on those protocols; it is merely the recognition that a useful and common "feature" may also become the means through which the net could separate into realms that touch one another only via relays and proxies.) I wrote about this some years ago in note I titled "Internet: Quo Vadis (Where are you going?)" at https://www.cavebear.com/cavebear-blog/internet_quo_vadis/ ??? ??? --karl-- -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history ? From steffen at sdaoden.eu Tue Mar 15 16:17:52 2022 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Wed, 16 Mar 2022 00:17:52 +0100 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: References: Message-ID: <20220315231752.ex6n-%steffen@sdaoden.eu> Larry Press wrote in : |The limited, but significant role of SpaceX Starlink in Ukraine: |https://circleid.com/posts/20220308-spacex-starlink-in-ukraine-a-week-later Especially on the day where Germany announced to buy F-35 jet fighters (and more then hopefully functioning Eurofighters, which i would personally have _very much_ preferred) with i think highly sophisticated electronic warfare it would be interesting whether military forces who really desire to would be able to completely "turn off the heaven". 
I personally totally oppose this massive satellitisation which requires hundreds of rocket starts. May the French friend who is also on this list defend the (very much smaller) French series of satellites which was/is being brought up to space for the very same purpose or not. I mean even in the scientists in the antarctic have internet, may it be slow, it seems sufficient to do science there. For such purposes, yes, i go with that. Military of all sorts i cannot prevent as much as their friendly fire. But private households? Maybe somewhere in a desert or what, but for us in Europe? For the American west or east coast, for -- what do i know of that -- Texas, Arizona? Thousands of satellites? Is that better than directional radion here and there or some cables along railways? Really?? In my opinion: no. |Reminds me of the role of RELCOM during the Soviet coup attempt of 1991: |https://cis471.blogspot.com/2011/01/before-twitter-revolutions-there-was\ |.html You know. I think it is a philosophical issue, if you really do not want to name it a religious one. I could show you many people in my personal neighbourhood who somehow have the desire for freedom, for expression, but they do not feel free. This is not about the two million children who need "additional food care", or the ten million Germans who in the meantime work in the low wages sector, this is not about our disrespectful dealing with one another, with other animals, with nature. Or maybe the latter three things a bit. This is not about "you get what you deserve." So these two administrators had a "success!", that is great. You like the western world regardless of the mentioned, and i did not talk about all the addictions at all, and not only meaning drugs like alcohol or whatever, or consumption as such, that is also great. But different cultures have different values, and i personally am totally fine with that. Then again people are a bit the same everywhere, and you will find in America people who would possibly fit better into a society like Russia, or Germany, or Italy, as well as you find quite some Russians who would die for having the all-american experience. You know. What i mean is, i think, that the Christians live in parables, and Israel and the Temple Mount are possibly just a state of mind, and that is the real freedom people should be endorsed in and guided to, and which society should jointly strive for, and one should look for the three fingers that point back rather than always target with the one pointing somewhere else. And if you desire sovereignity for yourself, and somehow gain it, then you should not prohibit others the same conditions, but have the splendor to live in peace and mutual respect. I guess that is a bit John Lennon and Imagine though. And though all off-topic, i also dislike battery powered cars. We all need hydrogen and fuel-cell technology, and graceful sustainability. And it was the Club of Rome who said that 50 years ago, and it was in the word "technology" that Al Gore said in the TV duel for the presidentship. But mind you, i did not believe that will work out almost a decade earlier. You know, even earlier, i read in a book of a wonderful journalist i think (such things _did_ exist when i was young) the words of a black Bishop of somewhere in Africa (i do not know whether i would find the thing) saying "in the year 2700 the white man will have destroyed life on earth, and then the time of the africans begin", and .. i overwhelmingly believed him that this is the truth. 
Have a nice evening. --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From touch at strayalpha.com Tue Mar 15 17:25:19 2022 From: touch at strayalpha.com (Joe Touch) Date: Tue, 15 Mar 2022 17:25:19 -0700 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: <20220315231752.ex6n-%steffen@sdaoden.eu> References: <20220315231752.ex6n-%steffen@sdaoden.eu> Message-ID: Again. Internet history. If you feel the need to preface a post or portion with "off topic", it's a clear indication to not post that material to this list. That's a second reminder. Joe (list admin) From jhlowry at mac.com Wed Mar 16 04:30:00 2022 From: jhlowry at mac.com (John Lowry) Date: Wed, 16 Mar 2022 07:30:00 -0400 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: <20220315231752.ex6n-%steffen@sdaoden.eu> References: <20220315231752.ex6n-%steffen@sdaoden.eu> Message-ID: <6174851D-B4A6-4BAD-8173-C1966E5C4E57@mac.com> As a Starlink user who lives a mere 40 miles from Portland, Maine, I can say that Starlink is a blessing. There are many other testimonials. Most everyone I know points out that Starlink is not for the privileged, and the privileged don't seem interested in it. My children live near Boston and get 3 to 4 times the bandwidth for 60% of the cost from cable Internet. Their attitude is a slightly condescending "We're glad for you, Dad, but no thanks." My only choice other than Starlink is 768/128 Kbps. I keep that capability as emergency backup. As to the carbon footprint of rocket launches that I think was referred to, the calculation reported in the linked story has been repeated and reported independently elsewhere. Environmentally, in terms of atmospheric, debris, and visual impact, SpaceX seems to be well thought out and not a significant threat or burden. I would rather see private jets and personal space travel eliminated. https://www.treehugger.com/spacex-launch-puts-out-much-co-flying-people-across-atlantic-4857958 Sent from my high-carbon iPad via Starlink > On Mar 15, 2022, at 7:18 PM, Steffen Nurpmeso via Internet-history wrote: > > [...] From jhlowry at mac.com Wed Mar 16 04:31:34 2022 From: jhlowry at mac.com (John Lowry) Date: Wed, 16 Mar 2022 07:31:34 -0400 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: References: Message-ID: Apologies for my response to Steffen ... Sent from my iPad > On Mar 15, 2022, at 8:25 PM, Joe Touch via Internet-history wrote: > > Again. Internet history. > > If you feel the need to preface a post or portion with "off topic", it's a clear indication to not post that material to this list. > > That's a second reminder. > > Joe (list admin) > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From julf at Julf.com Wed Mar 16 12:24:23 2022 From: julf at Julf.com (Johan Helsingius) Date: Wed, 16 Mar 2022 20:24:23 +0100 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: References: Message-ID: <126491db-a60d-2567-4c53-94782d17c7f7@Julf.com> > Reminds me of the role of RELCOM during the Soviet coup attempt of 1991: > https://cis471.blogspot.com/2011/01/before-twitter-revolutions-there-was.html We were running the other end of their link, in Helsinki. We did have some interesting discussions with the people in the US in order to get a go-ahead on that link. It wasn't just USENET. There is that famous IRC log at http://www.ibiblio.org/pub/academic/communications/logs/report-ussr-gorbatchev Julf From julf at Julf.com Wed Mar 16 12:29:05 2022 From: julf at Julf.com (Johan Helsingius) Date: Wed, 16 Mar 2022 20:29:05 +0100 Subject: [ih] legal models [was: there must be a corollary to Godwin's law about Sec 230, was ARPANET pioneer] In-Reply-To: <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> References: <20220306220744.4662E389279D@ary.qy> <9bfcc1f2-d9ac-b3d0-9823-58baf5e2e393@gmail.com> Message-ID: <5cd56557-4dca-0866-51c5-172fe3bf205c@Julf.com> On 07/03/2022 02:26, Brian E Carpenter via Internet-history wrote: > 1) In Common Law countries such as the US and UK, arguing from > precedent about new technologies seems to be generally accepted. > But in other jurisdictions, such as those based on Napoleonic law, > this is less clear. The classic case was my (anon.penet.fi) case with the Church of Scientology. They subpoenaed me as a witness. The Finnish law had clauses about people like mailmen not having to witness about what they carried, but the law only covered paper mail, telephony, telex, fax and even telegraphy, but not Internet email, as it wasn't explicitly listed.
Julf From lpress at csudh.edu Fri Mar 18 08:55:10 2022 From: lpress at csudh.edu (Larry Press) Date: Fri, 18 Mar 2022 15:55:10 +0000 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: <126491db-a60d-2567-4c53-94782d17c7f7@Julf.com> References: <126491db-a60d-2567-4c53-94782d17c7f7@Julf.com> Message-ID: Here is an archive with USENET News, Radio Free Europe, papers, and other material that was on the Net at the time: http://www.cs.oswego.edu/~dab/coup/ I was in Moscow for a conference just before the coup attempt and spent a lot of time with RELCOM folks who I had "met" online since we had used the net to organize the conference. Fun fact -- local calls were free at the time, so Vadim Antonov had a Teletype in his apartment that had been online continuously for 6 months. Larry ________________________________ From: Internet-history on behalf of Johan Helsingius via Internet-history Sent: Wednesday, March 16, 2022 12:24 PM To: internet-history at elists.isoc.org Subject: Re: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt > Reminds me of the role of RELCOM during the Soviet coup attempt of 1991: > https://cis471.blogspot.com/2011/01/before-twitter-revolutions-there-was.html We were running the other end of their link, in Helsinki. We did have some interesting discussions with the people in the US in order to get a go-ahead on that link. It wasn't just USENET. There is that famous IRC log at http://www.ibiblio.org/pub/academic/communications/logs/report-ussr-gorbatchev Julf -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From julf at Julf.com Fri Mar 18 09:18:04 2022 From: julf at Julf.com (Johan Helsingius) Date: Fri, 18 Mar 2022 17:18:04 +0100 Subject: [ih] SpaceX Starlink in Ukraine and RELCOM during the Soviet coup attempt In-Reply-To: References: <126491db-a60d-2567-4c53-94782d17c7f7@Julf.com> Message-ID: <9533dbb0-4e2d-ee78-3023-dd10a7d68bb1@Julf.com> On 18/03/2022 16:55, Larry Press wrote: > I was in Moscow for a conference just before the coup attempt and spent > a lot of time with RELCOM folks who I had "met" online since we had used > the net to organize the conference. Fun fact -- local calls were free at > the time, so Vadim Antonov had a Teletype in his apartment that had been > online continuously for 6 months. Small world. :) I helped to organize the first UNIX conference in Moscow, and of course had a fair bit of contact with the RELCOM team because of the connection to us in Helsinki. Somewhere I still have their UNIX book (as well as Mario Zagar's UNIX book I got in 1990 as I was a guest speaker at the Yugoslav UNIX User Group meeting, just as Yugoslavia was falling apart).
Julf From bpurvy at gmail.com Fri Mar 18 10:02:10 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 18 Mar 2022 10:02:10 -0700 Subject: [ih] GOSIP & compliance Message-ID: I was around for all this, but probably not as much as some of you. So many memories fade... I've been reading this . This passage... *By August 1990, federal agencies were required to procure GOSIP-compliantproducts. Through this procurement requirement, the government intended to stimulate the market for OSI products. However, many network administrators resisted the GOSIP procurement policy and continued to operate TCP/IP networks, noting that the federal mandate, by specifying only procurement, did not prohibit the use of products built around the more familiar and more readily available TCP/IP.* ... in particular stuck out for me. Admins were required to go OSI, but somehow it never happened. Does anyone have any personal stories to relate about this, either your own or someone else's? *Disclosure*: I'm writing historical fiction, mostly because that's what I want to do. So there won't be any actual names in whatever I write. I'm interested in the private choices people make, not the institutions, towering figures, and impersonal forces that most historians write about. From bill.n1vux at gmail.com Fri Mar 18 11:11:49 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Fri, 18 Mar 2022 14:11:49 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > ... to operate TCP/IP networks, noting that the federal mandate, by >> specifying >> only procurement, did not prohibit the use of products built around the >> more familiar and more readily available TCP/IP.* >> > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened. Does anyone have any personal stories to relate > about this, either your own or someone else's? > (I was out of government systems by 1990 but was still talking to MAP (naturally), so this is my memory of his commentary, plus what has been discussed here over the last decade. Take with large grain of salt.) I read this as the mandate tacitly acknowledged that ISO/OSI ISORM- and GOSIP-compliant products were just not available COTS (Commercial Off-the-Shelf), such that an injection of procurement $$$ was required to spur development. (And GOSIP even less so than international OSI.) In the 1980s, to elucidate the advantage of *working *standards over *paper* standards, MAP described the ISORM OSI/GOSIP mandated preferred competitor to the ARPAnet Reference Model & TCP/IP stack as if a teen asks parents for a car, and is handed a *photograph* of a frame with engine & drivetrain, four different size wheels mounted, but no body, let alone friperies like windows or seats. So anyone doing *Information Technology* with a limited budget and tight schedule (i.e., without a procurement project manager, procurement budget, and lengthy government procurement schedule), would just order COTS and justify it as COTS. (And if necessary, declare the COTS as "only an interim solution" and sketch vague plans to request additional budget for a procurement for GOSIP-compliant products in some nebulous glorious future, provided Congress granted the budget increase needed to do that.) The "interim" solution of TCP/IP etc Internet protocol stack continues to mostly function (albeit half-baked as per recent thread). 
One might say GOSIP Federal IT is much like the promise of Fusion power ... it has remained 10 years (and many $$) in the future for several decades. :-D *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. > Given that angle, i don't know if MAP's memoir "*And They Argued All Night...*" from 2000 will be of any help at all. https://n1vux.github.io/articles/MAP/RFC/allnight.html (invited memoir essay for *Matrix News* magazine's "*Lest They Forget/Be Forgotten*" series (Peter H Salus , Ed.), ? *2000*) -- Bill Ricker Executor, Literary & Spiritous Estate of Michael A Padlipsky bill.n1vux at gmail.com https://www.linkedin.com/in/n1vux From bpurvy at gmail.com Fri Mar 18 11:14:16 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 18 Mar 2022 11:14:16 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: Thanks, Bill! Everything helps. On Fri, Mar 18, 2022, 11:12 AM Bill Ricker wrote: > > > On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < > internet-history at elists.isoc.org> wrote: > >> ... to operate TCP/IP networks, noting that the federal mandate, by >>> specifying >>> only procurement, did not prohibit the use of products built around the >>> more familiar and more readily available TCP/IP.* >>> >> >> ... in particular stuck out for me. Admins were required to go OSI, but >> somehow it never happened. Does anyone have any personal stories to >> relate >> about this, either your own or someone else's? >> > > (I was out of government systems by 1990 but was still talking to MAP > (naturally), so this is my memory of his commentary, plus what has been > discussed here over the last decade. Take with large grain of salt.) > > I read this as the mandate tacitly acknowledged that ISO/OSI ISORM- and > GOSIP-compliant products were just not available COTS (Commercial > Off-the-Shelf), such that an injection of procurement $$$ was required to > spur development. (And GOSIP even less so than international OSI.) > > In the 1980s, to elucidate the advantage of *working *standards over > *paper* standards, MAP described the ISORM OSI/GOSIP mandated preferred > competitor to the ARPAnet Reference Model & TCP/IP stack as if a teen asks > parents for a car, and is handed a *photograph* of a frame with engine & > drivetrain, four different size wheels mounted, but no body, let alone > friperies like windows or seats. > > So anyone doing *Information Technology* with a limited budget and tight > schedule (i.e., without a procurement project manager, procurement budget, > and lengthy government procurement schedule), would just order COTS and > justify it as COTS. (And if necessary, declare the COTS as "only an interim > solution" and sketch vague plans to request additional budget for a > procurement for GOSIP-compliant products in some nebulous glorious future, > provided Congress granted the budget increase needed to do that.) > > The "interim" solution of TCP/IP etc Internet protocol stack continues to > mostly function (albeit half-baked as per recent thread). > > One might say GOSIP Federal IT is much like the promise of Fusion power > ... it has remained 10 years (and many $$) in the future for several > decades. :-D > > > *Disclosure*: I'm writing historical fiction, mostly because that's what I >> want to do. 
So there won't be any actual names in whatever I write. I'm >> interested in the private choices people make, not the institutions, >> towering figures, and impersonal forces that most historians write about. >> > > Given that angle, i don't know if MAP's memoir "*And They Argued All > Night...*" from 2000 will be of any help at all. > https://n1vux.github.io/articles/MAP/RFC/allnight.html > (invited memoir essay for *Matrix News* magazine's "*Lest They Forget/Be > Forgotten*" series (Peter H Salus > , Ed.), ? *2000*) > > > -- > Bill Ricker > Executor, Literary & Spiritous Estate of Michael A Padlipsky > > bill.n1vux at gmail.com > https://www.linkedin.com/in/n1vux > From dhc at dcrocker.net Fri Mar 18 11:21:03 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 18 Mar 2022 11:21:03 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <28ec4015-e804-aac4-9b7a-710a54d1ca6d@dcrocker.net> On 3/18/2022 11:11 AM, Bill Ricker via Internet-history wrote: > So anyone doing*Information Technology* with a limited budget and tight > schedule (i.e., without a procurement project manager, procurement budget, > and lengthy government procurement schedule), would just order COTS and > justify it as COTS In the latter 1980s, I was managing a small engineering team, at a company doing after-market network stacks. We had TCP/IP, of course, but we also developed some OSI stacks, to the extent the standards allowed. A couple of distinctive moments: 1. Predictably, Europe was the hotbed of ISO advocacy, yet it was quite a major source to our TCP/IP revenue. In fact, one of our customers was the IT department at ISO... I chatted with the manager in charge and asked him whether he got any flack for using TCP/IP. Being an operations guy, his response was direct and curt. He said he was given an operational requirement and he met it with the best available solution. 2. We started considering development of transition tools, to move from TCP/IP to OSI. We started querying existing customer, and they gave us an overwhelmingly consistent response: They /did/ want transition tools. To go from OSI to TCP/IP... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From agmalis at gmail.com Fri Mar 18 11:34:33 2022 From: agmalis at gmail.com (Andrew G. Malis) Date: Fri, 18 Mar 2022 14:34:33 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: It's been a while, but as I recall, as a part of this requirement, TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote such a transition plan for the MILNET (or it might have been for the DoD as a whole, as I said, things are hazy). I'm sure that it just went on a shelf somewhere once the requirement for a plan was met. Cheers, Andy On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > I was around for all this, but probably not as much as some of you. So many > memories fade... > > I've been reading this > < > https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf > >. > This passage... > > > *By August 1990, federal agencies were required to procure > GOSIP-compliantproducts. Through this procurement requirement, the > government intended to stimulate the market for OSI products. 
However, many > network administrators resisted the GOSIP procurement policy and continued > to operate TCP/IP networks, noting that the federal mandate, by specifying > only procurement, did not prohibit the use of products built around the > more familiar and more readily available TCP/IP.* > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened. Does anyone have any personal stories to relate > about this, either your own or someone else's? > > *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From geoff at iconia.com Fri Mar 18 12:05:04 2022 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Fri, 18 Mar 2022 09:05:04 -1000 Subject: [ih] History of the Internet in Russia Message-ID: EXCERPTing from https://en.wikipedia.org/wiki/History_of_the_Internet_in_Russia: "*Background* In the USSR , the first computer networks appeared in the 1950s in missile defense system at Sary Shagan (first they were tested in Moscow at Lebedev Institute of Precision Mechanics and Computer Engineering ). In the 1960s, the massive computer network project called OGAS was proposed but failed to be implemented.[3] Apollo?Soyuz USA?USSR joint space program (1972?1975) used digital data for spaceships transmitted between two countries.[4] Since the late 1970s, X.25 Soviet networks began to appear and Akademset emerged in Leningrad in 1978. By 1982 VNIIPAS [5] institute was created in Moscow to serve as Akademset's central node, which established X.25 regular connection to IIASA in Austria (which allowed access to other worldwide networks). In 1983, VNIIPAS together with USA government and George Soros created Soviet X.25 service provider called SFMT ("San Francisco ? Moscow Teleport") that later became Sovam Teleport ("Soviet-American Teleport"). VNIIPAS also provided X.25 services, including over satellite, to Eastern bloc countries together with Mongolia, Cuba and Vietnam. At the time, Western users of Usenet were generally unaware of that, and considered such networking in USSR unexistent, so one of them on April 1, 1984 made an "April fool " hoax about "Kremvax " ("Kremlin VAX ") that gained some popularity for subsequent years. USSR nominally joined private Fidonet network in October 1990 when first node of *Region 50* appeared in Novosibirsk . Some of the early Soviet/Russian networks were also initiated as parts of BITNET . Foundation of the Russian Internet See also: Internet in Russia ? History Sovam Teleport Main articles: Akademset and VNIIPAS Sovam Teleport is a Russian telecommunications company that was founded in 1990. The company was established as a joint venture of the San Francisco Moscow Teleport network and the All-Russian Research Institute of Automated Application Systems (???????) .[6] The name stands for "Short sOViet-AMerican Teleport". San Francisco Moscow Teleport (SFMT) was launched in 1983 by financier George Soros and American Joel Schatz[7] with the support of the US government. It was a non-profit project with a goal to expand the Internet to the USSR. In 1986, the project changed its status and became a commercial enterprise. 
The All-Russian Research Institute of Automated Application Systems provided a data transmission network with some countries in Eastern Europe, as well as Cuba, Mongolia, and Vietnam, almost all of the data traffic was scientific and technical information, and in 1983 organized a non-state email network. By the beginning of the 1990s, almost half of the VNII traffic amounted to operational data from electronic mail systems.[8] The company's first network was built on the X.25 protocol in 1990. In 1992, Sovam Teleport began to build a UUCP mail and terminal access system through American servers. Johnson & Johnson, Coca-Cola, DuPont, Estee Lauder, Time magazine, and France Presse were among the first corporate clients of the company. Since 1992, the British company Cable & Wireless, which has its own fiber-optic channels in Europe, has become the third co-founder of the company. On June 4, 1992, the company was re-registered as a limited liability partnership, and all three co-founders - Cable & Wireless, All-Russian Research Institute of Automated Application Systems and SFMT - received almost equal shares. On July 28, 1993, a communications center in Tashkent began servicing customers. The provider domain sovam.com, which opened on February 24, 1994, became the first public Internet site in Russia.[8] Sovam Teleport in early 1990s became a first SWIFT network provider for emerging Russian banks (over x.25)." [...] https://en.wikipedia.org/wiki/History_of_the_Internet_in_Russia -- Geoff.Goodfellow at iconia.com living as The Truth is True From winowicki at yahoo.com Fri Mar 18 13:26:04 2022 From: winowicki at yahoo.com (Bill Nowicki) Date: Fri, 18 Mar 2022 20:26:04 +0000 (UTC) Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <159533040.305462.1647635164110@mail.yahoo.com> I was a bit involved in this at the time. My role was the lone TCP/IP guy at Sun Microsystems during the1980s, while we had a dedicated team doing the OSI stack. That was because the assumption in the marketing world was that?TCP/IP was "for research and education", while commercial and government production users would use OSI. It was especially amusing to hear that the Corporation for Open Systems, the group formed to promote the OSI stack, itself used TCP/IP (including PC NFS for file sharing, that was leading edge technology at the time) on its internal systems.? I especially remember having a lunch with Milo Medin, who ran the network at NASA's Ames Research Center nearby. He pointed out that the letter of the law was that the vendor needed to show it supplied the OSI stack (it was available and actually worked to some extent), not that each US government customer needed to actually buy it. That is one reason why the revenues from Sun's OSI product were fairly trivial; TCP/IP was included in the OS for no extra charge. The Sun marketing called this a "strategic" product. Which became the running joke in Silicon Valley: whenever a product was a powerful person's pet idea but generated no revenue, it was called "strategic". Bill On Friday, March 18, 2022, 11:34:59 AM PDT, Andrew G. Malis via Internet-history wrote: It's been a while, but as I recall, as a part of this requirement, TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote such a transition plan for the MILNET (or it might have been for the DoD as a whole, as I said, things are hazy). I'm sure that it just went on a shelf somewhere once the requirement for a plan was met. 
Cheers, Andy On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > I was around for all this, but probably not as much as some of you. So many > memories fade... > > I've been reading this > < > https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf > >. > This passage... > > > *By August 1990, federal agencies were required to procure > GOSIP-compliantproducts. Through this procurement requirement, the > government intended to stimulate the market for OSI products. However, many > network administrators resisted the GOSIP procurement policy and continued > to operate TCP/IP networks, noting that the federal mandate, by specifying > only procurement, did not prohibit the use of products built around the > more familiar and more readily available TCP/IP.* > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened.? Does anyone have any personal stories to relate > about this, either your own or someone else's? > > *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Fri Mar 18 13:32:39 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 18 Mar 2022 13:32:39 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <159533040.305462.1647635164110@mail.yahoo.com> References: <159533040.305462.1647635164110@mail.yahoo.com> Message-ID: Funny: that quote is actually in my 2nd book (not out yet). " 'Strategic' means you don't make any money." I attributed it to someone else. I wonder who first coined it? At 3Com, the Microsoft LAN Manager deal was unquestionably Strategic. While my product, 3+Mail, which owed no royalties to anyone, was *very definitely* not Strategic. (Note that I'm not claiming I wrote 3+Mail. It was already there when I joined, although I did take it over.) On Fri, Mar 18, 2022 at 1:26 PM Bill Nowicki wrote: > I was a bit involved in this at the time. My role was the lone TCP/IP guy > at Sun Microsystems during the1980s, while we had a dedicated team doing > the OSI stack. That was because the assumption in the marketing world was > that TCP/IP was "for research and education", while commercial and > government production users would use OSI. It was especially amusing to > hear that the Corporation for Open Systems, the group formed to promote the > OSI stack, itself used TCP/IP (including PC NFS for file sharing, that was > leading edge technology at the time) on its internal systems. I especially > remember having a lunch with Milo Medin, who ran the network at NASA's Ames > Research Center nearby. He pointed out that the letter of the law was that > the vendor needed to show it supplied the OSI stack (it was available and > actually worked to some extent), not that each US government customer > needed to actually buy it. That is one reason why the revenues from Sun's > OSI product were fairly trivial; TCP/IP was included in the OS for no extra > charge. The Sun marketing called this a "strategic" product. 
Which became > the running joke in Silicon Valley: whenever a product was a powerful > person's pet idea but generated no revenue, it was called "strategic". > > Bill > > On Friday, March 18, 2022, 11:34:59 AM PDT, Andrew G. Malis via > Internet-history wrote: > > > It's been a while, but as I recall, as a part of this requirement, > TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote > such a transition plan for the MILNET (or it might have been for the DoD as > a whole, as I said, things are hazy). I'm sure that it just went on a shelf > somewhere once the requirement for a plan was met. > > Cheers, > Andy > > > On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < > internet-history at elists.isoc.org> wrote: > > > I was around for all this, but probably not as much as some of you. So > many > > memories fade... > > > > I've been reading this > > < > > > https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf > > >. > > This passage... > > > > > > *By August 1990, federal agencies were required to procure > > GOSIP-compliantproducts. Through this procurement requirement, the > > government intended to stimulate the market for OSI products. However, > many > > network administrators resisted the GOSIP procurement policy and > continued > > to operate TCP/IP networks, noting that the federal mandate, by > specifying > > only procurement, did not prohibit the use of products built around the > > more familiar and more readily available TCP/IP.* > > > > ... in particular stuck out for me. Admins were required to go OSI, but > > somehow it never happened. Does anyone have any personal stories to > relate > > about this, either your own or someone else's? > > > > *Disclosure*: I'm writing historical fiction, mostly because that's what > I > > want to do. So there won't be any actual names in whatever I write. I'm > > interested in the private choices people make, not the institutions, > > towering figures, and impersonal forces that most historians write about. > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From julf at Julf.com Fri Mar 18 14:25:10 2022 From: julf at Julf.com (Johan Helsingius) Date: Fri, 18 Mar 2022 22:25:10 +0100 Subject: [ih] History of the Internet in Russia In-Reply-To: References: Message-ID: <52aef4ab-80d4-ea0d-b971-37cc0c9d0825@Julf.com> Did you on purpose end your quote just before this part? DEMOS-based network Main articles: DEMOS and RELCOM After invading Afghanistan, the Soviet Union found itself under sanctions. However, a group of developers made a Russian version of the Unix operating system, secretly brought from America, and called it DEMOS. Some Unix developers, working at the Kurchatov Nuclear Energy Research Institute created a network that used DEMOS, namely RELCOM. The main feature of this network was that it was a fully horizontal network, i.e. each networked computer could directly communicate with other computers on the network. Many labs took part in joint experiments, so rapid communication was very much needed. Therefore, the first network users were mainly Soviet research institutes, so they could exchange scientific information more rapidly. 
Julf On 18/03/2022 20:05, the keyboard of geoff goodfellow via Internet-history wrote: > [...] From julf at Julf.com Fri Mar 18 14:28:19 2022 From: julf at Julf.com (Johan Helsingius) Date: Fri, 18 Mar 2022 22:28:19 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <87aad87b-686e-5f92-d921-4a624817ac3b@Julf.com> On 18/03/2022 19:34, Andrew G. Malis via Internet-history wrote: > It's been a while, but as I recall, as a part of this requirement, > TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote > such a transition plan for the MILNET (or it might have been for the DoD as > a whole, as I said, things are hazy). I'm sure that it just went on a shelf > somewhere once the requirement for a plan was met. I still have the "EUnet transition plan to OSI" (that Daniel Karrenberg wrote) in my bookshelf. I think we all knew it would never be used, but it was required by the EU. Julf From bill.n1vux at gmail.com Fri Mar 18 14:41:22 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Fri, 18 Mar 2022 17:41:22 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: On Fri, Mar 18, 2022 at 2:34 PM Andrew G. Malis via Internet-history < internet-history at elists.isoc.org> wrote: > It's been a while, but as I recall, as a part of this requirement, > TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote > such a transition plan for the MILNET (or it might have been for the DoD as > a whole, as I said, things are hazy). I'm sure that it just went on a shelf > somewhere once the requirement for a plan was met. > It wasn't just MILNET. The classified side of DDN (for WIN, DODIIS, SACDIN) was also mandated to move to ISORM/ISO-OSI, well before UK GOSIP '88 or US FIPS OSI '90 mandates for civil government. And since the classified side needed procurement/development to produce security-labelled variants of protocols and Policy-safe implementations thereof (and applications to use such!), and thus could not be directly cloned from the COTS (d)ARPAnet/Internet TCP/IP stack, the classified side supposedly would lead "procurement driven" ISO-OSI (ISORM) development, to the benefit of Civil government and secular industry, under the benevolent gaze of ISO/ITU/IEC. IIRC it was DoDIIS PMO that was MAP's sponsor at The MITRE? Corporation in the 1980s. As a captive QNGO, we didn't compete with Industry; we prototyped, specified, and provided contract-monitoring-assistance to a PMO. Specifying a network for the entire Intelligence Community (IC) was interesting, i gather, since some classification codewords were themselves classified, how you gonna handle THAT in your network protocols? :-D Our department also had a field site working with SAC HQ (and presumably thus SACDIN?).
(I don't remember offhand if WIN was specifically addressed during those years or if it was just presumed that when DDN could handle the security issues of SAC and the IC it could handle anything less classified as well.)

? it's a metaphor, not an acronym. no really! :-D

This work was of course coordinated with NATO partners. (Stopping off in Scotland when returning from NATO tech meetings at SAHQ Brussels is how MAP did his Scotch research.) (But I can't speak to whether UK GOSIP '88 was an outgrowth of DoD=>NATO=>MoD OSI discussions or if there was direct contagion from UK ISO/IEC.)

According to WikiPedia, the DMS (Defense Messagi(e|ing) System) is still OSI X.400/X.500/X.509 based. I guess it works well enough ... and there may be some value in it *not* being overly interoperable ;-).

(My own work while i was with MAP at MITRE was in the provably-secure Multi-Level software, in uses and limits of cryptography, risk management, and in labeling complex data - not the network protocols, for which our Dept had MAP in the next group down the hall. MAP's tales of jousting with ISORMites at Project meetings - including at our own firm's beltway site ! - i found fascinating but not directly applicable - but informative on the limits of Proof to a simple policy. I should mention that also down the hall was Len LaPadula, and across campus was Dave Bell, neither of whom expected their simple, academic dual model of security and integrity to become _the_ operational standard for DoD software, since it was both too strong - basically mandated against ever doing anything useful - and too narrow. On the same hallway as Dave were the ghost-editors of the Rainbow books, and a Honeywell SCOMP with a negative serial number. The Sun 3-160C that i ordered for our team's prototyping MLS DBMS UI project using color* - a new concept in UI in 1984! - was also used by another group to prototype labeled word processing with Interleaf TPS. )

*(work of others, it was "my" computer only because i ordered it - but that meant i had 'root' :-D )

(Being offered rapid promotion to management if i took a rotation through Omaha SAC HQ where i'd do network support that should have been done by private contractors contributed to my decision to pursue career options that would not result in classified paragraphs on my resumé. There are other forms of interesting meta-data besides security classification tagging!)

From mgrant at grant.org Fri Mar 18 14:45:58 2022
From: mgrant at grant.org (Michael Grant)
Date: Fri, 18 Mar 2022 22:45:58 +0100
Subject: [ih] GOSIP & compliance
In-Reply-To: <159533040.305462.1647635164110@mail.yahoo.com>
References: <159533040.305462.1647635164110@mail.yahoo.com>
Message-ID: <202203182145.22ILjvCd2796269@bottom.networkguild.org>

I was that guy along with Andrew Partan who worked for COS who got COS on the internet, well, what became the Internet. We simply couldn't do anything without being connected to the net. Nothing happened over X.400 and no files were ever transferred over FTAM. We had Suns on our desks, Sun servers in the computer room which I helped run, and were connected to Uunet via a Telebit modem. Prior to that, we had UUCP dialup to Uunet. I remember very distinctly that the members of the consortium asked us to use the OSI protocols for our own stuff but we basically ignored them. In fact, even internally at COS there was a Wang system and the Suns and we couldn't even connect their email systems together. About the closest we could come was we played with Marshall Rose's ISODE.
Honestly, there wasn't even a single way to get a fully connected network when some computers were running TP0, others TP4, with no way to interconnect! And there wasn't a single router that routed CLNP. Did one ever exist? There were serious fundamental problems with OSI.

I got hired away by Sun from COS when it imploded and eventually worked with the OSI team in France which eventually pivoted to LDAP. Sun's OSI product was not written at Sun. I don't recall the vendor; they were French. It was taken in house and essentially ported to SunOS. It was pretty robust for what it was but quite overcomplicated and very difficult to debug. It was never really fully integrated into SunOS, and then we kind of silently stopped supporting it. If I recall correctly, there was only FTAM, X.400, and maybe X.500. I don't really recall the network stack below these application layers but I know they were there because I had tested them at COS.

About the biggest interoperability that I ever saw was this trade show which we called The Event in Baltimore around 1990, at which COS had a booth demonstrating about a dozen vendors able to transfer files and send mail to one another. It was a mini Interop. Took a couple years to actually put together and was good fun. After that things seemed to fall apart.

Bill, you and I must have interacted back then. It was "strategic" in that it checked some box for some gov't contracts which nobody ever intended on using.

From: Bill Nowicki via Internet-history
Sent: 18 March 2022 21:26
To: Bob Purvy; Andrew G. Malis
Cc: internet-history at elists.isoc.org >> Internet History
Subject: Re: [ih] GOSIP & compliance

I was a bit involved in this at the time. My role was the lone TCP/IP guy at Sun Microsystems during the 1980s, while we had a dedicated team doing the OSI stack. That was because the assumption in the marketing world was that TCP/IP was "for research and education", while commercial and government production users would use OSI. It was especially amusing to hear that the Corporation for Open Systems, the group formed to promote the OSI stack, itself used TCP/IP (including PC NFS for file sharing, that was leading edge technology at the time) on its internal systems. I especially remember having a lunch with Milo Medin, who ran the network at NASA's Ames Research Center nearby. He pointed out that the letter of the law was that the vendor needed to show it supplied the OSI stack (it was available and actually worked to some extent), not that each US government customer needed to actually buy it. That is one reason why the revenues from Sun's OSI product were fairly trivial; TCP/IP was included in the OS for no extra charge. The Sun marketing called this a "strategic" product. Which became the running joke in Silicon Valley: whenever a product was a powerful person's pet idea but generated no revenue, it was called "strategic". Bill

On Friday, March 18, 2022, 11:34:59 AM PDT, Andrew G. Malis via Internet-history wrote: It's been a while, but as I recall, as a part of this requirement, TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote such a transition plan for the MILNET (or it might have been for the DoD as a whole, as I said, things are hazy). I'm sure that it just went on a shelf somewhere once the requirement for a plan was met. Cheers, Andy

On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > I was around for all this, but probably not as much as some of you.
So many > memories fade... > > I've been reading this > < > https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf > >. > This passage... > > > *By August 1990, federal agencies were required to procure > GOSIP-compliantproducts. Through this procurement requirement, the > government intended to stimulate the market for OSI products. However, many > network administrators resisted the GOSIP procurement policy and continued > to operate TCP/IP networks, noting that the federal mandate, by specifying > only procurement, did not prohibit the use of products built around the > more familiar and more readily available TCP/IP.* > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened.? Does anyone have any personal stories to relate > about this, either your own or someone else's? > > *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From tony.li at tony.li Fri Mar 18 15:01:48 2022 From: tony.li at tony.li (Tony Li) Date: Fri, 18 Mar 2022 15:01:48 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <202203182145.22ILjvCd2796269@bottom.networkguild.org> References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> Message-ID: <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> > On Mar 18, 2022, at 2:45 PM, Michael Grant via Internet-history wrote: > > And there wasn?t a single router that routed CLNP. Did one ever exist? Yes. Brand C had a CLNP stack, including an IS-IS implementation an a CLNP version of their in-house proprietary routing protocol. They did not have an implementation of IDRP, so it wasn?t a full stack, but it was deployable. Tony From vint at google.com Fri Mar 18 17:34:10 2022 From: vint at google.com (Vint Cerf) Date: Fri, 18 Mar 2022 20:34:10 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> Message-ID: https://datatracker.ietf.org/doc/html/rfc1169.html crafted to allow TCP/IP to be used for an indefinite future while OSI implementations matured... Pelkey: https://historyofcomputercommunications.info/section/14.5/the-department-of-defense-osi-and-tcp-ip/ Pelkey: https://historyofcomputercommunications.info/section/14.8/the-nbs-in-action-osinet,-cos,-and-gosip/ If I am remembering the history correctly, I wrote a request to NIST in 1992 as president of the Internet Society asking that a blue ribbon panel be assembled to again evaluate OSI vs TCP/IP. This would effectively revisit the earlier NRC panel that concluded that TCP and TP4 were essentially equivalent but that the OSI protocol ought to be the final standard destination because of its support in ISO. 
The new panel took a year to review the question and concluded that TCP and TP4 were essentially similar in functionality but that the widely available TCP/IP protocols should be allowed in lieu of OSI. At least, that is what I believe happened. I have not found any correspondence to confirm this and maybe I am misremembering but it seems to me that by 1993 (on the cusp of MOSAIC and after the demonstration of HTTP running over TCP/IP in 1991), the OSI mandate basically faded out. v On Fri, Mar 18, 2022 at 6:01 PM Tony Li via Internet-history < internet-history at elists.isoc.org> wrote: > > > > On Mar 18, 2022, at 2:45 PM, Michael Grant via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > And there wasn?t a single router that routed CLNP. Did one ever exist? > > > Yes. Brand C had a CLNP stack, including an IS-IS implementation an a > CLNP version of their in-house proprietary routing protocol. They did not > have an implementation of IDRP, so it wasn?t a full stack, but it was > deployable. > > Tony > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From dhc at dcrocker.net Fri Mar 18 18:41:06 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Fri, 18 Mar 2022 18:41:06 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> Message-ID: <3e660d8e-8cd1-c505-9072-f97c15b0c1d2@dcrocker.net> > https://datatracker.ietf.org/doc/html/rfc1169.html > > crafted to allow TCP/IP to be used for an indefinite future while OSI > implementations matured... As was ISODE https://en.wikipedia.org/wiki/ISO_Development_Environment d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Fri Mar 18 18:58:37 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sat, 19 Mar 2022 14:58:37 +1300 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <8020d910-758d-d4e8-c6f2-c3c1d64f63df@gmail.com> Bob, US-GOSIP was not without influence in Europe, and at CERN, although the European OSI profiles specified X.25 rather than CLNP, we thought US-GOSIP was the way to go for on-site networking. Our users, the physicists, used a lot of DECnet in the 1980s, and DEC's strategy was DECnet/OSI (i.e. DECnet over US-GOSIP). That in itself was not unaffordable - it was just the next version of DECnet. So it seemed to be a genuine Route 128 strategy, not a Silicon Valley "strategy". Around that time, CERN standardised on Motorola 68000s for in-house microprocessor applications. So off I went to America to look into CLNP for the 68000 (specifically for the RMS68K operating system). I remember visiting some software house in Santa Monica. I can't remember their name, but I know I stayed in a Holiday Inn on Pico Boulevard, across the road from Santa Monica High. I'm pretty sure this was in 1983. Anyway - their license for CLNP started at $50,000, iirc. For a microprocessor. D'oh. We did a homebrew datagram service instead, subsetting CLNP. And of course when we asked IBM about CLNP on the mainframe the answer was equally ridiculous. The IBM "strategy" for OSI came from North Carolina and had a much higher price tag than anything from Silicon Valley. 
Of course TCP/IP was by then "free" for Unix. Once the Bell license for Unix ceased to be a problem, it was really free, including our Cray, Suns, etc., and not too expensive for VAX/VMS, Mac, PC or even the IBM mainframe. Ben Segal at CERN initially advocated for TCP/IP, starting in 1985, the same year that we formally proclaimed an OSI strategy. It was 1989 when we publicly switched our strategy to TCP/IP. (More about this is in Chapter 7 "Diversity" of my book. I also wrote a segment on "The Protocol Wars" which is on pp 106-110 of "A history of international research networking" by Davies & Bressan.) Regards Brian Carpenter On 19-Mar-22 06:02, Bob Purvy via Internet-history wrote: > I was around for all this, but probably not as much as some of you. So many > memories fade... > > I've been reading this > . > This passage... > > > *By August 1990, federal agencies were required to procure > GOSIP-compliantproducts. Through this procurement requirement, the > government intended to stimulate the market for OSI products. However, many > network administrators resisted the GOSIP procurement policy and continued > to operate TCP/IP networks, noting that the federal mandate, by specifying > only procurement, did not prohibit the use of products built around the > more familiar and more readily available TCP/IP.* > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened. Does anyone have any personal stories to relate > about this, either your own or someone else's? > > *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. > From brian.e.carpenter at gmail.com Fri Mar 18 19:05:27 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sat, 19 Mar 2022 15:05:27 +1300 Subject: [ih] GOSIP & compliance In-Reply-To: <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> Message-ID: <9454d2cf-7a88-444a-15f1-5796a0fe7e9c@gmail.com> On 19-Mar-22 11:01, Tony Li via Internet-history wrote: > > >> On Mar 18, 2022, at 2:45 PM, Michael Grant via Internet-history wrote: >> >> And there wasn?t a single router that routed CLNP. Did one ever exist? > > > Yes. Brand C had a CLNP stack, including an IS-IS implementation an a CLNP version of their in-house proprietary routing protocol. They did not have an implementation of IDRP, so it wasn?t a full stack, but it was deployable. Every DECnet Phase V router was a CLNP router by definition, built into VMS. Sold, deployed, used. Brian From bpurvy at gmail.com Fri Mar 18 19:13:57 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Fri, 18 Mar 2022 19:13:57 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li> Message-ID: Thanks, Vint. This illustrates why I'm not writing a *history* of internetworking! It's already been written. Now, what your average network admin or system designer thought of all this, and did when he/she had to choose: that part isn't done. A friend of mine recalls: *Very early days, and I get into an elevator. 
Inside is a guy dressed like a pimp: open shirt, chains, sleazy looking. We start talking and I soon learn he's actually a techie and seems pretty smart. And he's starting a firm! He says it's called "girls.com". *

*I ask him what it does. And he said it will deliver porn to the web.*

*That was a genuinely new idea for me. And I recalled thinking: naked women on the internet? Would that work? Was there a market for such a thing? *

On Fri, Mar 18, 2022 at 5:34 PM Vint Cerf via Internet-history < internet-history at elists.isoc.org> wrote: > https://datatracker.ietf.org/doc/html/rfc1169.html > > crafted to allow TCP/IP to be used for an indefinite future while OSI > implementations matured... > > Pelkey: > > https://historyofcomputercommunications.info/section/14.5/the-department-of-defense-osi-and-tcp-ip/ > > Pelkey: > > https://historyofcomputercommunications.info/section/14.8/the-nbs-in-action-osinet,-cos,-and-gosip/ > > If I am remembering the history correctly, I wrote a request to NIST in > 1992 as president of the Internet Society asking that a blue ribbon panel > be assembled to again evaluate OSI vs TCP/IP. This would effectively > revisit the earlier NRC panel that concluded that TCP and TP4 were > essentially equivalent but that the OSI protocol ought to be the final > standard destination because of its support in ISO. The new panel took a > year to review the question and concluded that TCP and TP4 were essentially > similar in functionality but that the widely available TCP/IP protocols > should be allowed in lieu of OSI. At least, that is what I believe > happened. I have not found any correspondence to confirm this and maybe I > am misremembering but it seems to me that by 1993 (on the cusp of MOSAIC > and after the demonstration of HTTP running over TCP/IP in 1991), the OSI > mandate basically faded out. > > v > > > On Fri, Mar 18, 2022 at 6:01 PM Tony Li via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > > > > > On Mar 18, 2022, at 2:45 PM, Michael Grant via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > > And there wasn't a single router that routed CLNP. Did one ever exist? > > > > > > Yes. Brand C had a CLNP stack, including an IS-IS implementation an a > > CLNP version of their in-house proprietary routing protocol. They did > not > > have an implementation of IDRP, so it wasn't a full stack, but it was > > deployable. > > > > Tony > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history >

From brian.e.carpenter at gmail.com Fri Mar 18 19:20:49 2022
From: brian.e.carpenter at gmail.com (Brian E Carpenter)
Date: Sat, 19 Mar 2022 15:20:49 +1300
Subject: [ih] GOSIP & compliance
In-Reply-To: <87aad87b-686e-5f92-d921-4a624817ac3b@Julf.com>
References: <87aad87b-686e-5f92-d921-4a624817ac3b@Julf.com>
Message-ID: <6ef61ab7-5651-c07e-3b43-ed14e2b32bed@gmail.com>

On 19-Mar-22 10:28, Johan Helsingius via Internet-history wrote: > On 18/03/2022 19:34, Andrew G. Malis via Internet-history wrote: >> It's been a while, but as I recall, as a part of this requirement, >> TCP/IP-to-OSI transition plans were necessary.
While I was at BBN, I wrote >> such a transition plan for the MILNET (or it might have been for the DoD as >> a whole, as I said, things are hazy). I'm sure that it just went on a shelf >> somewhere once the requirement for a plan was met. > > I still have the "EUnet transition plan to OSI" (that Daniel Karrenberg > wrote) in my bookshelf. I think we all knew it would never be used, but > it was required by the EU. Yes, that was at the time that *we* knew TCP/IP had won, but the suits in Brussels didn't. Google tells me that Daniel presented that plan in public at the RARE Networkshop in Les Diablerets, Switzerland in 1988. One year later in Trieste I presented "Is OSI Too Late?" (and we all knew the answer). Brian From julf at Julf.com Sat Mar 19 01:38:33 2022 From: julf at Julf.com (Johan Helsingius) Date: Sat, 19 Mar 2022 09:38:33 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: <6ef61ab7-5651-c07e-3b43-ed14e2b32bed@gmail.com> References: <87aad87b-686e-5f92-d921-4a624817ac3b@Julf.com> <6ef61ab7-5651-c07e-3b43-ed14e2b32bed@gmail.com> Message-ID: On 19/03/2022 03:20, Brian E Carpenter wrote: > Yes, that was at the time that *we* knew TCP/IP had won, but the suits > in Brussels didn't. Indeed. > Google tells me that Daniel presented that plan in public at the RARE > Networkshop in Les Diablerets, Switzerland in 1988. One year later in > Trieste I presented "Is OSI Too Late?" (and we all knew the answer). Yes, we did, but still much later (1997?) the European Commission officer in charge of one of the research projects we were involved in insisted "Do what you want, as long as you use (Euro-)ISDN!". Julf From dan at lynch.com Sat Mar 19 07:36:56 2022 From: dan at lynch.com (Dan Lynch) Date: Sat, 19 Mar 2022 07:36:56 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> At Interop we were a teaching organization about interoperability so while we were TCP/IP bigots if the world was going to OSI we would definitely teach that too. Only a few students signed up for the OSI courses. We only offered them for a few years. I think by 91 it disappeared. The buyer is king. Dan Cell 650-776-7313 > On Mar 18, 2022, at 11:34 AM, Andrew G. Malis via Internet-history wrote: > > ?It's been a while, but as I recall, as a part of this requirement, > TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote > such a transition plan for the MILNET (or it might have been for the DoD as > a whole, as I said, things are hazy). I'm sure that it just went on a shelf > somewhere once the requirement for a plan was met. > > Cheers, > Andy > > >> On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> I was around for all this, but probably not as much as some of you. So many >> memories fade... >> >> I've been reading this >> < >> https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf >>> . >> This passage... >> >> >> *By August 1990, federal agencies were required to procure >> GOSIP-compliantproducts. Through this procurement requirement, the >> government intended to stimulate the market for OSI products. However, many >> network administrators resisted the GOSIP procurement policy and continued >> to operate TCP/IP networks, noting that the federal mandate, by specifying >> only procurement, did not prohibit the use of products built around the >> more familiar and more readily available TCP/IP.* >> >> ... in particular stuck out for me. 
Admins were required to go OSI, but >> somehow it never happened. Does anyone have any personal stories to relate >> about this, either your own or someone else's? >> >> *Disclosure*: I'm writing historical fiction, mostly because that's what I >> want to do. So there won't be any actual names in whatever I write. I'm >> interested in the private choices people make, not the institutions, >> towering figures, and impersonal forces that most historians write about. >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history

From clemc at ccc.com Sat Mar 19 07:59:58 2022
From: clemc at ccc.com (Clem Cole)
Date: Sat, 19 Mar 2022 10:59:58 -0400
Subject: [ih] GOSIP & compliance
In-Reply-To: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
Message-ID: 

On Sat, Mar 19, 2022 at 10:37 AM Dan Lynch via Internet-history < internet-history at elists.isoc.org> wrote: > At Interop we were a teaching organization about interoperability so while > we were TCP/IP bigots if the world was going to OSI we would definitely > teach that too. Only a few students signed up for the OSI courses. We only > offered them for a few years. I think by 91 it disappeared. The buyer is > king. > > Dan >

*IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"*

This is not just an Internet thing. Lots of examples in business (particularly the computer biz), but I'll pick two others that have some relevance here since they also were pushed by DoD and DoC and a lot of the same people behind GOSIP -> FIPS-151 [UNIX as the default system for the USG] and even the whole Ada fiasco. The *idea* was that with a standard, it would be 'cheaper' and more 'efficient' -- the USG would have better choices of vendors but get the same functionality. But ... as Vint points out: "The new panel took a year to review the question and concluded that TCP and TP4 were essentially similar in functionality but *that the widely available TCP/IP protocols **should be allowed* in lieu of OSI."

People got exceptions to use an IP stack over an OSI stack for exactly the same reasons as they got exceptions for Ada (or to use Windows/NT for that matter) -- it was cheaper/faster/easier to get *their job done* and the team that put the tender out was more interested in getting *their own problem solved* than looking for the 'best/official/whatever' solution. It's human nature and simple economics.

I think these are all examples of how even trying to legislate conformance will not work, if the economics are against the solution. People's built-in self interest will win out [or as Dan points out -- the market sets the standard in the end]. And in the Internet's case, Metcalfe's law took over -- being interconnected to the wider network was more valuable [OSI vs. IP just did not cut it].

Which brings this back to another thread - SplinterNet. As others have pointed out - we already have it to an extent but ... being part of the whole is way too valuable because of Bob's observation, so while I worry about the new increased cost that new seams will create, I fundamentally don't worry too much, as those outside the mainstream will have an economic desire to make it as seamless as they can. So just as we saw with OSI vs.
IP, the economic incentive to be part of the mainstream, will beat trying to legislate it. Clem From dhc at dcrocker.net Sat Mar 19 09:08:08 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 19 Mar 2022 09:08:08 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> Message-ID: <4900fd5e-07e5-7879-20d6-302a309161fd@dcrocker.net> On 3/19/2022 7:59 AM, Clem Cole via Internet-history wrote: > *IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"* This is certainly an appealing saying, and it might even be true. Sometimes. But it does not describe the core reason OSI failed and TCP/IP succeeded. By the time this saying was relevant, TCP/IP had already won the war. Rather, this saying merely describes coming to the recognition of which won. That real core was more like: "simple, operational technology always beats elaborate, incomplete, dysfunctional technology". OSI was /not/ sophisticated design. It was cumbersome /over/-design. The reference to TCP vs. TP4 is an example of missing the point, since there was a mess of other TPs, for use depending on what the underlying networking technology was. For the Lynch & Rose 1993 book, Internet System Handbook, I did a chapter about Internet technical processes, which prompted my considering differences between Internet and OSI processes. (I had some limited experience in the OSI realm.) Simply put, I believe the two communities did not differ in intelligence, knowledge or intent, but in pragmatics and a core bit of politics. The OSI work required unanimity, which meant pleasing everyone, which meant including pretty much everything from everyone's various laundry lists. This meant design took an extraordinarily long time, while tending to produce highly bloated specs. In contrast, the TCP/IP community typically wanted something work by yesterday, which mean using only the intersection of everyone's lists. That produced smaller designs, with an implicit basis for knowing what was included would be useful. A revised version of that chapter was published as Making Standards the IETF Way 1993, Association for Computing Machinery [Reprinted from StandardsView, Vol. 1, No. 1. There were, of course, a number of other differences that probably had a large effect, including meetings (open vs. closed), primary venue (online vs. f2f), and document access (free vs. charged). d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From bpurvy at gmail.com Sat Mar 19 09:55:00 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sat, 19 Mar 2022 09:55:00 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <4900fd5e-07e5-7879-20d6-302a309161fd@dcrocker.net> References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <4900fd5e-07e5-7879-20d6-302a309161fd@dcrocker.net> Message-ID: I'm happy to say that I benefited hugely from this "only the intersection of everyone's lists" philosophy when the RDBMS MIB working group came along in late 1993. Marshall Rose was our advisor, and as the Oracle rep I had the honor of chairing it. Every major database company was in on it, and somehow or other, mostly by chanting that mantra, we got RFC 1697 out in July 1994, if memory serves. There were all kinds of requests for additional features, but we always won by saying "let's get this first version out, and then deal with that." There were never any later versions, btw. Oracle implemented it; I don't know if any other vendors did. I think Ingres and Informix did, but I'm not sure. 
*One point no one's brought up that bears mentioning*: I always played heavily on the participants' desire to get *done*, tell their boss that they succeeded, and not tie up the company's resources forever. In the "official" standards bodies, you can draw full-time standards politicians who don't have any other job. IETF drew people who actually did work. On Sat, Mar 19, 2022 at 9:08 AM Dave Crocker via Internet-history < internet-history at elists.isoc.org> wrote: > On 3/19/2022 7:59 AM, Clem Cole via Internet-history wrote: > > *IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"* > > > This is certainly an appealing saying, and it might even be true. > Sometimes. > > But it does not describe the core reason OSI failed and TCP/IP > succeeded. By the time this saying was relevant, TCP/IP had already won > the war. Rather, this saying merely describes coming to the recognition > of which won. > > That real core was more like: "simple, operational technology always > beats elaborate, incomplete, dysfunctional technology". > > OSI was /not/ sophisticated design. It was cumbersome /over/-design. > > The reference to TCP vs. TP4 is an example of missing the point, since > there was a mess of other TPs, for use depending on what the underlying > networking technology was. > > For the Lynch & Rose 1993 book, Internet System Handbook, I did a > chapter about Internet technical processes, which prompted my > considering differences between Internet and OSI processes. (I had some > limited experience in the OSI realm.) > > Simply put, I believe the two communities did not differ in > intelligence, knowledge or intent, but in pragmatics and a core bit of > politics. The OSI work required unanimity, which meant pleasing > everyone, which meant including pretty much everything from everyone's > various laundry lists. This meant design took an extraordinarily long > time, while tending to produce highly bloated specs. > > In contrast, the TCP/IP community typically wanted something work by > yesterday, which mean using only the intersection of everyone's lists. > That produced smaller designs, with an implicit basis for knowing what > was included would be useful. > > A revised version of that chapter was published as Making Standards the > IETF Way > > 1993, Association for Computing Machinery [Reprinted from StandardsView, > Vol. 1, No. 1. > > There were, of course, a number of other differences that probably had a > large effect, including meetings (open vs. closed), primary venue > (online vs. f2f), and document access (free vs. charged). > > > d/ > -- > Dave Crocker > Brandenburg InternetWorking > bbiw.net > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From craig at tereschau.net Sat Mar 19 09:56:33 2022 From: craig at tereschau.net (Craig Partridge) Date: Sat, 19 Mar 2022 10:56:33 -0600 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> Message-ID: On Sat, Mar 19, 2022 at 9:00 AM Clem Cole via Internet-history < internet-history at elists.isoc.org> wrote: > > *IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"* > > I would actually says "works beats does not work." Not because OSI couldn't work. I think, for the most part, the implementation efforts of the time showed that, with some set of bugfixes and adjustments to standards, OSI could work. But... TCP/IP was already working. 
As best I can tell (I didn't join the scene until 1983), 1990 OSI implementations were not as mature as 1981 TCP/IP implementations. And you would not have wanted to run a 1981 TCP/IP network -- indeed, a certain share of ARPANET folks were none too happy when forced to run TCP/IP in 1983. Several years of operational experience made a huge difference in terms of operational stability.

Note, it is not as if the OSI advocates did not know this. In fact, when queried, they'd say they needed to import the wisdom of TCP/IP operations into OSI. But... they never did (and there's probably a good case study there about why).

Craig

-- 
***** Craig Partridge's email account for professional society activities and mailing lists.

From amckenzie3 at yahoo.com Sat Mar 19 10:47:50 2022
From: amckenzie3 at yahoo.com (Alex McKenzie)
Date: Sat, 19 Mar 2022 17:47:50 +0000 (UTC)
Subject: [ih] GOSIP & compliance
In-Reply-To: <4900fd5e-07e5-7879-20d6-302a309161fd@dcrocker.net>
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <4900fd5e-07e5-7879-20d6-302a309161fd@dcrocker.net>
Message-ID: <708346708.327339.1647712070173@mail.yahoo.com>

I'm moved to add my $0.02. I was deeply involved in both ARPAnet standards (I was the editor of the ARPAnet NCP document), and I was chair of the ISO Session Layer subcommittee. From those perspectives I make the following observations.

1. ARPA/DARPA provided the money for the first users of ARPAnet and later the TCP Internet. ARPA demanded that they work, and soon. The protocol design committees felt that pressure. The ISO committees were mostly made up of industry representatives who wanted their own private protocol stack incorporated into the standard and were willing to delay approval until that happened.

2. ARPAnet and the TCP Internet were intended to ensure that every connected system could connect to and interwork with every other connected system. If there were to be options in any protocol, they were options to be added to the basic protocol implemented by everyone, and options could be refused. OSI was intended to allow manufacturers and software vendors to certify that they were compliant. For example, at the Transport layer there were several options; no option was required and the options were not interoperable. So the CCITT Transport, the GOSIP Transport, the IBM Transport, etc could all be compliant and not interoperate with each other. As another example, the ARPAnet/TCP Telnet protocol demanded that every system be able to send text in the ASCII encoding. In the ISO Session layer meetings, many participants demanded that there be NO minimal standard text encoding - rather there should be negotiation about what encoding would be used, with the possibility of finding that two systems did not support any common encoding and yet both were compliant.

3. ARPA/DARPA enthusiastically supported the development of the "host" software implementing the ARPAnet and TCP Internet protocols for a wide variety of the most common computers, starting with the mainframes of the early 1970's, thru the workstations of the 1970-80's, and to the personal computers of the 1980s. ARPA/DARPA encouraged this software to be made widely available cheaply or for free. The OSI software development was not supported financially by governments, it was expected to be developed privately by computer manufacturers (in addition to their own proprietary protocol software), or by software vendors who could recover their development costs through expensive sales or licensing.

4. As pointed out by John Day, a lot of the OSI work was a battle between the PTTs and the computer manufacturers about who was going to "own" the added value that networking would provide. The PTTs wanted a protocol architecture that kept the added value in the network. The manufacturers wanted a commodity network with the value outside. For TCP, ARPA had settled the argument in favor of the computers.

In view of these factors, it is not surprising that when people wanted the values networking could bring (and especially after the development of point-and-click web browsers) they opted for TCP.

Cheers,
Alex

On 3/19/2022 7:59 AM, Clem Cole via Internet-history wrote: > *IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"* This is certainly an appealing saying, and it might even be true. Sometimes. But it does not describe the core reason OSI failed and TCP/IP succeeded. By the time this saying was relevant, TCP/IP had already won the war. Rather, this saying merely describes coming to the recognition of which won. That real core was more like: "simple, operational technology always beats elaborate, incomplete, dysfunctional technology". OSI was /not/ sophisticated design. It was cumbersome /over/-design. The reference to TCP vs. TP4 is an example of missing the point, since there was a mess of other TPs, for use depending on what the underlying networking technology was. For the Lynch & Rose 1993 book, Internet System Handbook, I did a chapter about Internet technical processes, which prompted my considering differences between Internet and OSI processes. (I had some limited experience in the OSI realm.) Simply put, I believe the two communities did not differ in intelligence, knowledge or intent, but in pragmatics and a core bit of politics. The OSI work required unanimity, which meant pleasing everyone, which meant including pretty much everything from everyone's various laundry lists. This meant design took an extraordinarily long time, while tending to produce highly bloated specs. In contrast, the TCP/IP community typically wanted something work by yesterday, which mean using only the intersection of everyone's lists. That produced smaller designs, with an implicit basis for knowing what was included would be useful. A revised version of that chapter was published as Making Standards the IETF Way 1993, Association for Computing Machinery [Reprinted from StandardsView, Vol. 1, No. 1. There were, of course, a number of other differences that probably had a large effect, including meetings (open vs. closed), primary venue (online vs. f2f), and document access (free vs. charged). d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net

-- 
Internet-history mailing list
Internet-history at elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history

From mfidelman at meetinghouse.net Sat Mar 19 11:09:09 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Sat, 19 Mar 2022 14:09:09 -0400
Subject: [ih] GOSIP & compliance
In-Reply-To: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
Message-ID: <5fc4e81e-b6a5-98f6-5a54-bd50f1043323@meetinghouse.net>

I seem to recall the story that, once Europeans saw the shownet at one of the European Interop shows, and realized that TCP/IP was working, while OSI was still vaporware... the game was over.

Dan.. you'd probably be the one to validate this.
Miles Dan Lynch via Internet-history wrote: > At Interop we were a teaching organization about interoperability so while we were TCP/IP bigots if the world was going to OSI we would definitely teach that too. Only a few students signed up for the OSI courses. We only offered them for a few years. I think by 91 it disappeared. The buyer is king. > > Dan > > Cell 650-776-7313 > >> On Mar 18, 2022, at 11:34 AM, Andrew G. Malis via Internet-history wrote: >> >> ?It's been a while, but as I recall, as a part of this requirement, >> TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote >> such a transition plan for the MILNET (or it might have been for the DoD as >> a whole, as I said, things are hazy). I'm sure that it just went on a shelf >> somewhere once the requirement for a plan was met. >> >> Cheers, >> Andy >> >> >>> On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>> I was around for all this, but probably not as much as some of you. So many >>> memories fade... >>> >>> I've been reading this >>> < >>> https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf >>>> . >>> This passage... >>> >>> >>> *By August 1990, federal agencies were required to procure >>> GOSIP-compliantproducts. Through this procurement requirement, the >>> government intended to stimulate the market for OSI products. However, many >>> network administrators resisted the GOSIP procurement policy and continued >>> to operate TCP/IP networks, noting that the federal mandate, by specifying >>> only procurement, did not prohibit the use of products built around the >>> more familiar and more readily available TCP/IP.* >>> >>> ... in particular stuck out for me. Admins were required to go OSI, but >>> somehow it never happened. Does anyone have any personal stories to relate >>> about this, either your own or someone else's? >>> >>> *Disclosure*: I'm writing historical fiction, mostly because that's what I >>> want to do. So there won't be any actual names in whatever I write. I'm >>> interested in the private choices people make, not the institutions, >>> towering figures, and impersonal forces that most historians write about. >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From brian.e.carpenter at gmail.com Sat Mar 19 13:13:56 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Sun, 20 Mar 2022 09:13:56 +1300 Subject: [ih] GOSIP & compliance In-Reply-To: <5fc4e81e-b6a5-98f6-5a54-bd50f1043323@meetinghouse.net> References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <5fc4e81e-b6a5-98f6-5a54-bd50f1043323@meetinghouse.net> Message-ID: <39b63b93-7cfb-0769-3631-587cbd50b677@gmail.com> Not to knock Interop, but you didn't need to go to Interop to know that TCP/IP was working and OSI wasn't. I didn't even need to step away from my desk to know that. The people in my team working on FTAM to FTP and SMTP to X.400 gateways were pretty clear on the point too. 
Regards Brian Carpenter On 20-Mar-22 07:09, Miles Fidelman via Internet-history wrote: > I seem to recall the story that, once Europeans saw the shownet at one > of the European Interop shows, and realized that TCP/IP was working, > while OSI was still vaporware... the game was over. > > Dan.. you'd probably be the one to validate this. > > Miles > > Dan Lynch via Internet-history wrote: >> At Interop we were a teaching organization about interoperability so while we were TCP/IP bigots if the world was going to OSI we would definitely teach that too. Only a few students signed up for the OSI courses. We only offered them for a few years. I think by 91 it disappeared. The buyer is king. >> >> Dan >> >> Cell 650-776-7313 >> >>> On Mar 18, 2022, at 11:34 AM, Andrew G. Malis via Internet-history wrote: >>> >>> ?It's been a while, but as I recall, as a part of this requirement, >>> TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote >>> such a transition plan for the MILNET (or it might have been for the DoD as >>> a whole, as I said, things are hazy). I'm sure that it just went on a shelf >>> somewhere once the requirement for a plan was met. >>> >>> Cheers, >>> Andy >>> >>> >>>> On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>> I was around for all this, but probably not as much as some of you. So many >>>> memories fade... >>>> >>>> I've been reading this >>>> < >>>> https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf >>>>> . >>>> This passage... >>>> >>>> >>>> *By August 1990, federal agencies were required to procure >>>> GOSIP-compliantproducts. Through this procurement requirement, the >>>> government intended to stimulate the market for OSI products. However, many >>>> network administrators resisted the GOSIP procurement policy and continued >>>> to operate TCP/IP networks, noting that the federal mandate, by specifying >>>> only procurement, did not prohibit the use of products built around the >>>> more familiar and more readily available TCP/IP.* >>>> >>>> ... in particular stuck out for me. Admins were required to go OSI, but >>>> somehow it never happened. Does anyone have any personal stories to relate >>>> about this, either your own or someone else's? >>>> >>>> *Disclosure*: I'm writing historical fiction, mostly because that's what I >>>> want to do. So there won't be any actual names in whatever I write. I'm >>>> interested in the private choices people make, not the institutions, >>>> towering figures, and impersonal forces that most historians write about. >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > > From clemc at ccc.com Sat Mar 19 13:52:18 2022 From: clemc at ccc.com (Clem Cole) Date: Sat, 19 Mar 2022 16:52:18 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> Message-ID: On Sat, Mar 19, 2022 at 12:56 PM Craig Partridge wrote: > > > On Sat, Mar 19, 2022 at 9:00 AM Clem Cole via Internet-history < > internet-history at elists.isoc.org> wrote: > >> >> *IP vs. OSI -- "**Simple Economics always beats Sophisticated Design"* >> >> > I would actually says "works beats does not work." Not because OSI > couldn't work. 
I think, for the most part, the implementation efforts of > the time showed that, with some set of bugfixes and adjustments to > standards, OSI could work. But... TCP/IP was already working. >

Exactly ... economics is the high order bit. Doesn't matter if it's legislated or not. If it works, gets the job done, and is providing value, it's pretty hard to displace it [hey IPv6 has yet to displace IPv4 in practice for the same reason -- it works and is economical]. Christensen's book explains it. To successfully disrupt, you have to find a new (and rapidly growing) user base that values the new technology AND is willing to accept its downsides at the beginning. But Metcalfe notes that's really hard in communications networks, because the value of the network is less determined by the technology, but by the number of users that are part of the community.

> > As best I can tell (I didn't join the scene until 1983), 1990 OSI > implementations were not as mature as 1981 TCP/IP implementations. >

I agree.

> And you would not have wanted to run a 1981 TCP/IP network -- indeed, a > certain share of ARPANET folks were none too happy when forced to run > TCP/IP in 1983. Several years of operational experience made a huge > difference in terms of operational stability. >

Metcalfe's law. There was not a new user base and the old user base valued what it had. It >>just worked<< and the network was growing at an incredible rate making it even more valuable if you joined it.

Note, it is not as if the OSI advocates did not know this. In fact, when > queried, they'd say they needed to import the wisdom of TCP/IP operations > into OSI. But... they never did (and there's probably a good case study > there about why). >

I'm not so sure if they had just imported it, it would have been enough to displace it. Politics and business interests aside, you had a lot of smart techies on all sides. But this was an economic issue to the end user [and network operator]. The OSI folks either needed a whole new set of customers to create a new network that the IP folks were going to want to flock to (unlikely IMO), or they needed to find a way to use the Microsoft 'Embrace and Extend' idea -- join the mainstream and then make something of value that was only possible in their world. Which (as MSFT discovered) basically went against the grain of the way IP was built/maturing.

From dhc at dcrocker.net Sat Mar 19 14:29:16 2022
From: dhc at dcrocker.net (Dave Crocker)
Date: Sat, 19 Mar 2022 14:29:16 -0700
Subject: [ih] GOSIP & compliance
In-Reply-To: 
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
Message-ID: <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net>

On 3/19/2022 1:52 PM, Clem Cole via Internet-history wrote: > There was not a new user base and the old user base > valued what it had.

Please forgive my disagreeing again, but there was an enormous, potential user base. Pretty much the entire world. The established user base for TCP/IP was relatively small.

By the late 1980s, OSI had done a spectacularly good job of selling the concept of interoperability. Really, I'd claim it created the awareness of the possibility that products from different vendors could be made to work together. Previously, that was pretty much never done, and vendors had a strong incentive to feed the myth that it couldn't. The profit margins for proprietary solutions are markedly (pun?) higher than for open, interoperable ones.

At the same time, OSI failed to deliver workable solutions. So it created the market, and TCP/IP satisfied it.
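A schematic way to write down the Metcalfe's-law point Clem raises above -- this is only the textbook form of the law, not something stated in the thread, and the symbols V, C, k, c and n are illustrative:

  % Metcalfe's law, schematic form: the value of a network grows roughly with
  % the square of the number of attached users, while the cost of attaching
  % one more user grows roughly linearly.
  \[ V(n) \approx k\,n^{2}, \qquad C(n) \approx c\,n \]
  % For two competing internetworks with installed bases n_A >> n_B, the value
  % ratio is roughly (n_A/n_B)^2, so the larger installed base (TCP/IP by the
  % late 1980s) is disproportionately more attractive to the next site that
  % connects, independent of the relative technical merits of the two stacks.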
d/
-- 
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From clemc at ccc.com Sat Mar 19 14:55:13 2022
From: clemc at ccc.com (Clem Cole)
Date: Sat, 19 Mar 2022 17:55:13 -0400
Subject: [ih] GOSIP & compliance
In-Reply-To: <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net>
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net>
Message-ID: 

On Sat, Mar 19, 2022 at 5:29 PM Dave Crocker wrote: > On 3/19/2022 1:52 PM, Clem Cole via Internet-history wrote: > > There was not a new user base and the old user base > > valued what it had. > > > Please forgive my disagreeing again, but there was an enormous, > potential user base. Pretty much the entire world. The established > user base for TCP/IP was relatively small. >

Fair enough and you do have a good point. But I counter that a >>potential user base<< is not a user base. Again read Christensen's first book - he is (was) much more eloquent than I (and includes a lot of data and graphics from studying this issue).

My point with OSI is that if they had satisfied that new user base, and it grew at a faster rate than the established one, Christensen says they could have succeeded. In fact, Christensen's theory talks about the disruptive technology being a 'lesser' technology when it first is introduced, but the new user base values it while the old user base does not. But because the new user base is growing so fast, the money is there to improve the new and it will overtake the old.

A nice example is how SMS texting took off -- it sucked compared to email that you and I grew up with. But the new user base at the time (teens born in the late 80's/early 90s ) had access and didn't care that keyboarding was hard or the limit on the size of the messages. It took off with them - it was something >>they<< valued and that new user base took over the old one. And as the user base got bigger, it got better -- the devices that could be created improved and the issues were less and less a problem. Now those devices can do both [although I personally hate sending much email from my phone].

My own 20-30 yo kids grew up with it. Getting my son to read email is just not going to happen - if I want to communicate, the best I can do is get him to use Signal. My daughter went to college the same way as he did, but being a computer scientist (and working at Google for a few years), I think she discovered why texting is not as good [although she sends Signal-style txts to her mom]. That said, she also traditionally uses SMS for her non-techie friends.

So to me, the problem is that while the >>potential<< was there, as you correctly point out, the OSI development community did not deliver something that new users valued. They found the old scheme (and it worked) so it grew.

From johnl at iecc.com Sat Mar 19 15:30:29 2022
From: johnl at iecc.com (John Levine)
Date: 19 Mar 2022 18:30:29 -0400
Subject: [ih] GOSIP & compliance
In-Reply-To: 
Message-ID: <20220319223031.BD05B396BB4A@ary.qy>

It appears that Clem Cole via Internet-history said: >Christinsen's book explains it. To successfully disrupt, you have to find a >new (and rapidly growing) user base that values the new technology AND is >willing to accept its downsides at the beginning. ...

Before you take "Innovator's Dilemma" as gospel, read Jill Lepore's rather devastating analysis. His analysis of the disk drive business was just wrong if you look at what happened after 1989, and his other examples don't fare too much better.
IBM, which invented the disk drive in the 1950s, continued to compete successfully until they sold the product line to Hitachi in 2002. https://www.newyorker.com/magazine/2014/06/23/the-disruption-machine While it is true that a cheaper and worse technology can evolve to be cheaper and better, that doesn't have a whole lot to do with whether the companies doing it live or die. R's, John From dan at lynch.com Sat Mar 19 15:34:45 2022 From: dan at lynch.com (Dan Lynch) Date: Sat, 19 Mar 2022 15:34:45 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <5fc4e81e-b6a5-98f6-5a54-bd50f1043323@meetinghouse.net> References: <5fc4e81e-b6a5-98f6-5a54-bd50f1043323@meetinghouse.net> Message-ID: <69212114-0012-46E2-9CC9-48E3503CEEA4@lynch.com> Yeah, I knew that the highly functioning shownet running TCP in Paris would put OSI to rest. By the next year in Berlin it was all over. Dan Cell 650-776-7313 > On Mar 19, 2022, at 11:09 AM, Miles Fidelman via Internet-history wrote: > > ?I seem to recall the story that, once Europeans saw the shownet at one of the European Interop shows, and realized that TCP/IP was working, while OSI was still vaporware... the game was over. > > Dan.. you'd probably be the one to validate this. > > Miles > > Dan Lynch via Internet-history wrote: >> At Interop we were a teaching organization about interoperability so while we were TCP/IP bigots if the world was going to OSI we would definitely teach that too. Only a few students signed up for the OSI courses. We only offered them for a few years. I think by 91 it disappeared. The buyer is king. >> >> Dan >> >> Cell 650-776-7313 >> >>>> On Mar 18, 2022, at 11:34 AM, Andrew G. Malis via Internet-history wrote: >>> >>> ?It's been a while, but as I recall, as a part of this requirement, >>> TCP/IP-to-OSI transition plans were necessary. While I was at BBN, I wrote >>> such a transition plan for the MILNET (or it might have been for the DoD as >>> a whole, as I said, things are hazy). I'm sure that it just went on a shelf >>> somewhere once the requirement for a plan was met. >>> >>> Cheers, >>> Andy >>> >>> >>>> On Fri, Mar 18, 2022 at 1:02 PM Bob Purvy via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>> >>>> I was around for all this, but probably not as much as some of you. So many >>>> memories fade... >>>> >>>> I've been reading this >>>> < >>>> https://courses.cs.duke.edu//common/compsci092/papers/govern/consensus.pdf >>>>> . >>>> This passage... >>>> >>>> >>>> *By August 1990, federal agencies were required to procure >>>> GOSIP-compliantproducts. Through this procurement requirement, the >>>> government intended to stimulate the market for OSI products. However, many >>>> network administrators resisted the GOSIP procurement policy and continued >>>> to operate TCP/IP networks, noting that the federal mandate, by specifying >>>> only procurement, did not prohibit the use of products built around the >>>> more familiar and more readily available TCP/IP.* >>>> >>>> ... in particular stuck out for me. Admins were required to go OSI, but >>>> somehow it never happened. Does anyone have any personal stories to relate >>>> about this, either your own or someone else's? >>>> >>>> *Disclosure*: I'm writing historical fiction, mostly because that's what I >>>> want to do. So there won't be any actual names in whatever I write. I'm >>>> interested in the private choices people make, not the institutions, >>>> towering figures, and impersonal forces that most historians write about. 
>>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > In theory, there is no difference between theory and practice. > In practice, there is. .... Yogi Berra > > Theory is when you know everything but nothing works. > Practice is when everything works but no one knows why. > In our lab, theory and practice are combined: > nothing works and no one knows why. ... unknown > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Sat Mar 19 15:37:49 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sat, 19 Mar 2022 15:37:49 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> Message-ID: As for "potential users" being the whole world, I think that's kinda like saying that the whole TV-watching audience in 1999 was a "potential user" of a streaming service. Why is that the correct analogy? You wouldn't have signed up for Hulu then if they'd offered it -- your Internet speed was too slow, and your TV was too crappy, too. And not much of the good content was available. The same limitations applied to PC users who weren't online in 1985 (like me, at home). It was just too difficult then. You'd have a 2400 baud modem, if you were lucky, and then it was a very techie experience to connect to anything. Mr. and Mrs. Average were never going to do it. On the other hand, hackers at work with Unix or VAX machines had a much easier time of it. *They* were the potential audience, and they were using TCP. By the way, the Minitel *did* ring all the bells. I used one in Paris in 1989. It was pretty nice, and they had the revenue model down pat. It was only the PTT's ineptitude, slowth, and narrow-mindedness that kept that from taking off and selling the OSI model. They didn't even try. On Sat, Mar 19, 2022 at 2:55 PM Clem Cole via Internet-history < internet-history at elists.isoc.org> wrote: > On Sat, Mar 19, 2022 at 5:29 PM Dave Crocker wrote: > > > On 3/19/2022 1:52 PM, Clem Cole via Internet-history wrote: > > > There was not a new user base and the old user base > > > valued what it had. > > > > > > Please forgive my disagreeing again, but there was an enormous, > > potential user base. Pretty much the entire world. The established > > user base for TCP/IP was relatively small. > > > Fair enough and you do have a good point. But I counter that a > >>potential user base<< is not a user base. > Again read Christensen's first book - he is (was) much more eloquent than I > (and includes a lot of data and graphics from studying this issue). > > My point with OSI is that if they had satisfied that new user base, and it > grew at a faster rate than the established one, > Christensen says they could have succeeded. In fact, Christensen's theory > talks about the disruptive technology > being a 'lessor' technology when it first is introduced, but the new user > base values it while the old user base does not. > But because the new user base is growing so fast, the money is there to > improve the new and it will over take the old. > > A nice example is how SMS texting took off -- it sucked compared to email > that you and I grew up with. 
But the new user base at the time > (teens born in the late 80's/early 90s ) had access and didn't care that > keyboarding was hard or the limit to size of the messages. > It took off with them - it was something >>they<< valued and that new user > base took over the old one. > And as the user base got bigger, it got better -- the devices that could be > created improved and the issues were less and less a problem. > Now those devices can do both [although I personally hate sending much > email from my phone]. > > My own 20-30 yo kids grew up with it. Getting my son to read email is > just not going to happen - if I want to communicate, the best > I can do is get him to use Signal. My daughter went to college the same > way as he did, but being a computer > scientist ( and working at Google for a few years), I think she discovered > why texting is not as good > [although she sends Signal style txts to her mom]. That said, she also > traditionally uses SMS for her non-techie friends. > > So to me, the problem is that while the >>potential<< was there as you > correctly point out. > The OSI development community did not deliver something that new users > valued. They found the old scheme > (and it worked) so it grew. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From lyndon at orthanc.ca Sat Mar 19 17:39:33 2022 From: lyndon at orthanc.ca (Lyndon Nerenberg (VE7TFX/VE6BBM)) Date: Sat, 19 Mar 2022 17:39:33 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> Message-ID: Clem Cole via Internet-history writes: > Cristinsen's book explains it. To successfully disrupt, you have to find a > new (and rapidly growing) user base that values the new technology AND is > willing to accept its downsides at the beginning. But Metcalfe notes > that's really hard in communications networks, because the value of the > network is less determined by the technology, but by the number of users > that are part of the community. And so what we need is for Netflix to go IPv6-only. I guarantee you the global Internet will have left IPv4 behind within six months (at most). Yes, that's somewhat tongue-in-cheek, but really, the only things that would get left behind are all those boxes that need IPv4 to PXE boot, and the many IPMI interfaces that are v4-only. And in both cases those are almost certainly talking to very local and very restricted networks, so none of that needs to get past a customer's edge router. (And as the hardware dies off, so will even that need. I've noticed that an appreciable fraction of the hardware we've been buying lately has v6 support for both PXE and IPMI.) --lyndon --lyndon From julf at Julf.com Sun Mar 20 02:00:51 2022 From: julf at Julf.com (Johan Helsingius) Date: Sun, 20 Mar 2022 10:00:51 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> Message-ID: <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> On 19/03/2022 23:37, Bob Purvy via Internet-history wrote: > By the way, the Minitel *did* ring all the bells. I used one in Paris in > 1989. It was pretty nice, and they had the revenue model down pat. It was > only the PTT's ineptitude, slowth, and narrow-mindedness that kept that > from taking off and selling the OSI model. They didn't even try. 
The problem with Minitel wasn't actually the PTT - they actually wanted to make it more open and Internet-like. The problem was the traditional publishing industry that feared the online small ads and marketplaces, and successfully lobbied for all kinds of restrictions. Julf From vgcerf at gmail.com Sun Mar 20 06:15:09 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sun, 20 Mar 2022 09:15:09 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> Message-ID: While developing MCI-Mail, I tried to get Minitel to agree to interconnect to allow email exchange but they refused. this would have been around 1984. v On Sun, Mar 20, 2022 at 5:01 AM Johan Helsingius via Internet-history < internet-history at elists.isoc.org> wrote: > On 19/03/2022 23:37, Bob Purvy via Internet-history wrote: > > > By the way, the Minitel *did* ring all the bells. I used one in Paris in > > 1989. It was pretty nice, and they had the revenue model down pat. It was > > only the PTT's ineptitude, slowth, and narrow-mindedness that kept that > > from taking off and selling the OSI model. They didn't even try. > > The problem with Minitel wasn't actually the PTT - they actually wanted > to make it more open and Internet-like. The problem was the traditional > publishing industry that feared the online small ads and marketplaces, > and successfully lobbied for all kinds of restrictions. > > Julf > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From julf at Julf.com Sun Mar 20 06:18:35 2022 From: julf at Julf.com (Johan Helsingius) Date: Sun, 20 Mar 2022 14:18:35 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> Message-ID: <1d8e5f82-a683-677b-8dd3-9ca55a6bc1d7@Julf.com> On 20/03/2022 14:15, vinton cerf wrote: > While developing MCI-Mail, I tried to get Minitel to agree to > interconnect to allow email exchange but they refused. Did you speak French? :) Julf From vgcerf at gmail.com Sun Mar 20 06:20:39 2022 From: vgcerf at gmail.com (vinton cerf) Date: Sun, 20 Mar 2022 09:20:39 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: <1d8e5f82-a683-677b-8dd3-9ca55a6bc1d7@Julf.com> References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> <1d8e5f82-a683-677b-8dd3-9ca55a6bc1d7@Julf.com> Message-ID: Well, that is indeed an awkward problem. Despite my last name, my father's family came from Alsace and he hired a tutor from Berlin to teach me German and not French! v On Sun, Mar 20, 2022 at 9:19 AM Johan Helsingius wrote: > On 20/03/2022 14:15, vinton cerf wrote: > > While developing MCI-Mail, I tried to get Minitel to agree to > > interconnect to allow email exchange but they refused. > > Did you speak French? 
:) > > Julf > > From julf at Julf.com Sun Mar 20 07:50:40 2022 From: julf at Julf.com (Johan Helsingius) Date: Sun, 20 Mar 2022 15:50:40 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> <1d8e5f82-a683-677b-8dd3-9ca55a6bc1d7@Julf.com> Message-ID: <88aa93bc-933c-d89c-5205-89eb42558297@Julf.com> On 20/03/2022 14:20, vinton cerf wrote: > Well, that is indeed an awkward problem. Despite my last name, my > father's family came from Alsace and he hired a tutor from Berlin to > teach me German and not French! Ah, yes, the complexity of European history (says the Swedish-speaking Finn living in Amsterdam :) ). At least "proper" German is probably more useful than Alsatian. Julf From bpurvy at gmail.com Sun Mar 20 09:01:10 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 20 Mar 2022 09:01:10 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> Message-ID: > The problem with Minitel wasn't actually the PTT - they actually wanted to make it more open and Internet-like. The problem was the traditional publishing industry that feared the online small ads and marketplaces, and successfully lobbied for all kinds of restrictions. I never heard that. Interesting. One still wonders why the other European PTTs didn't do their own and interoperate with Minitel. Too much NIH? I recall reading research papers back then on "videotex" (a term you don't hear anymore). I think there were lots of research efforts on it, but it never went beyond small trials. IIRC. Probably someone here knows the full story. On Sun, Mar 20, 2022 at 2:01 AM Johan Helsingius via Internet-history < internet-history at elists.isoc.org> wrote: > On 19/03/2022 23:37, Bob Purvy via Internet-history wrote: > > > By the way, the Minitel *did* ring all the bells. I used one in Paris in > > 1989. It was pretty nice, and they had the revenue model down pat. It was > > only the PTT's ineptitude, slowth, and narrow-mindedness that kept that > > from taking off and selling the OSI model. They didn't even try. > > The problem with Minitel wasn't actually the PTT - they actually wanted > to make it more open and Internet-like. The problem was the traditional > publishing industry that feared the online small ads and marketplaces, > and successfully lobbied for all kinds of restrictions. > > Julf > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From julf at Julf.com Sun Mar 20 09:21:53 2022 From: julf at Julf.com (Johan Helsingius) Date: Sun, 20 Mar 2022 17:21:53 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> Message-ID: <764be6aa-959a-0a33-a19a-eb3d6ac0c52e@Julf.com> On 20/03/2022 17:01, Bob Purvy wrote: > I never heard that. Interesting. There has been a couple of academic papers about it published in "Internet Histories" (an academic journal for historians). > One still wonders why the other European PTTs didn't do their own and > interoperate with Minitel. Too much NIH? 
Some of them did their own (I know of at least of "telesampo" by Telecom Finland. I guess there was not much point in interoperating, as Minitel had french-language content and Telesampo Finnish-language content. > I recall reading research papers back then on "videotex" (a term you > don't hear anymore). I think there were lots of research efforts on it, > but it never went beyond small trials. Prestel was pretty popular not just in the UK, but also adapted in Sweden, Germany and Netherlands. Julf From johnl at iecc.com Sun Mar 20 09:41:51 2022 From: johnl at iecc.com (John Levine) Date: 20 Mar 2022 12:41:51 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: Message-ID: <20220320164152.05C2A3974C06@ary.qy> It appears that Bob Purvy via Internet-history said: >One still wonders why the other European PTTs didn't do their own and >interoperate with Minitel. Too much NIH? Remember that the business case for Minitel was that it would replace paper phone books and directory assistance operators. Everything else was an add-on. You didn't need to interoperate to do that. R's, John From bpurvy at gmail.com Sun Mar 20 09:45:25 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 20 Mar 2022 09:45:25 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <20220320164152.05C2A3974C06@ary.qy> References: <20220320164152.05C2A3974C06@ary.qy> Message-ID: Well, don't people in France ever want to look up numbers in Germany, England, and Italy? Also, there were lots of other apps on top of Minitel, including a dating service! It did replace calls for directory assistance, but then people discovered it could do a lot of other things, too. On Sun, Mar 20, 2022 at 9:41 AM John Levine wrote: > It appears that Bob Purvy via Internet-history said: > >One still wonders why the other European PTTs didn't do their own and > >interoperate with Minitel. Too much NIH? > > Remember that the business case for Minitel was that it would replace > paper phone books and directory assistance operators. Everything else > was an add-on. You didn't need to interoperate to do that. > > R's, > John > From olejacobsen at me.com Sun Mar 20 09:53:41 2022 From: olejacobsen at me.com (Ole Jacobsen) Date: Sun, 20 Mar 2022 09:53:41 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <20220320164152.05C2A3974C06@ary.qy> Message-ID: <377BFE55-8693-4814-B6FA-925D47ED7A13@me.com> For some historical perspective: In 1994 we published an article about Minitel in ConneXions--The Interoperability Report. It's the first article in the April issue. The entire archive of ConneXions is available from the Charles Babbage Institute, but for easy access to this particular issue I've uploaded a copy to my directory on Yikes. See: https://www.yikes.com/~ole/store/ConneXions8-04_Apr1994.pdf Ole > On Mar 20, 2022, at 09:45, Bob Purvy via Internet-history wrote: > > Well, don't people in France ever want to look up numbers in Germany, > England, and Italy? > > Also, there were lots of other apps on top of Minitel, including a dating > service! It did replace calls for directory assistance, but then people > discovered it could do a lot of other things, too. > > On Sun, Mar 20, 2022 at 9:41 AM John Levine wrote: > >> It appears that Bob Purvy via Internet-history said: >>> One still wonders why the other European PTTs didn't do their own and >>> interoperate with Minitel. Too much NIH? >> >> Remember that the business case for Minitel was that it would replace >> paper phone books and directory assistance operators. 
Everything else
>> was an add-on. You didn't need to interoperate to do that.
>>
>> R's,
>> John
>>

Ole J. Jacobsen
Editor and Publisher
The Internet Protocol Journal
Office: +1 415-550-9433
Cell: +1 415-370-4628
Web: protocoljournal.org
E-mail: olejacobsen at me.com
E-mail: ole at protocoljournal.org
Skype: organdemo

From johnl at iecc.com  Sun Mar 20 10:45:22 2022
From: johnl at iecc.com (John R. Levine)
Date: 20 Mar 2022 13:45:22 -0400
Subject: [ih] GOSIP & compliance
In-Reply-To: 
References: <20220320164152.05C2A3974C06@ary.qy>
Message-ID: <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com>

> Well, don't people in France ever want to look up numbers in Germany,
> England, and Italy?

Perhaps, but historically the way that worked is that each national telco
had operators in a room full of out of date foreign phone books. I doubt
any of the telcos would have found that compelling.

> Also, there were lots of other apps on top of Minitel, including a dating
> service! It did replace calls for directory assistance, but then people
> discovered it could do a lot of other things, too.

Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel
Rose. It apparently made France Telecom a lot of money since users paid
by the minute, but I think they were kind of embarrassed by the whole thing.

It is a reasonable question why other PTTs didn't just clone Minitel, but
I don't think at the time there would have been much incentive to hook
them together. Apparently they did trials in Belgium and Ireland, but
without the PTT subsidy to provide the terminals for free, they didn't go
anywhere.

Regards,
John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly

From markotime at gmail.com  Sun Mar 20 10:53:51 2022
From: markotime at gmail.com (markotime)
Date: Sun, 20 Mar 2022 10:53:51 -0700
Subject: [ih] Videotex
Message-ID: 

Canada had a fairly active program going, but not enough inertia around the
world. IIRC, minitel may have been a candidate here.

From jack at 3kitty.org  Sun Mar 20 11:18:39 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Sun, 20 Mar 2022 11:18:39 -0700
Subject: [ih] GOSIP & compliance
In-Reply-To: 
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com>
Message-ID: <0bcc8847-2b3f-d1e2-c26e-ea7915e66d5c@3kitty.org>

My recollection is from 1990-1991. I had joined Oracle as "Internet
Architect". Our networking technology was very explicitly agnostic - we
supported all kinds of network infrastructure, and even provided for
interconnecting disparate worlds. So if your engineering department used
TCP, your marketing folks required Macs and Appletalk, your mainframes were
SNA, and your administrative groups used PCs with Netware, everyone could
still get at all their business data, no matter what kind of server it lived
on, or what kind of networks were involved. Even OSI, where you could find it.

We had a group of customers that visited HQ every few months, called IIRC
the "Customer Council". Mostly they discussed database issues and ideas, but
occasionally they wanted to talk about Networking, so I got called into the
meeting. The attendees were all high-level business managers - CEO, COO,
CTO, et al. The companies involved were all technology users, not vendors,
and were international.
E.g., from banking, insurance, retail, shipping, manufacturing, etc., based
in the Americas, Europe, Asia, etc. Even a government or two IIRC. It was a
very diverse group and very open since they didn't see each other as
competitors. Their challenge was to figure out how to *use* networking
technology to further their business goals.

At one of the meetings, I did a quick survey around the room, asking each
person to describe their current network operations. The answers were
unsurprising. There were SNA shops of course, plus DECNET, Apple, Netware,
Vines, and such all in use. Whatever the dominant networking was, they all
had some other technologies that had unexpectedly penetrated into their IT
worlds, even including TCP.

Then I did another round of the room, asking everyone what their plans were.
I.e., what were they trying to work toward as their future networking
structure?

I was shocked to hear the answers. Every single person, from every company,
from every segment of industry, from every continent, said the same thing.

They were all heading to a TCP-based network architecture. As fast as
possible.

OSI was not even mentioned. All of them had some kind of experiment going
on, introducing TCP into some part of their business. After learning how to
use it, they planned, and hoped, to migrate everything to TCP, assuming of
course that their experiments all worked out well. Note that converting to
TCP did not mean moving on to The Internet; each corporation would instead
have its own private TCP-based intranet.

At that point, I stopped talking about our technology as a way for diverse
protocols to coexist. But it was still a good mechanism to facilitate a
transition to TCP, maintaining access to a corporation's business data as
they proceeded to migrate their networking infrastructure. Our networking
became a transition tool, rather than one to enable coexistence.

I made one last survey around the room, basically asking why each
organization had chosen TCP as their target infrastructure. Some of the
reasons were as others here have mentioned. TCP "just worked", and their
experiments were confirming that. They could also buy TCP-based products,
especially LANs, workstations, and PCs. All had TCP available; in fact at
that point there were more than 30 implementations of TCP available for
Windows from all sorts of startups.

But there was one reason which seemed to be especially important. Their IT
departments were highly dependent on a constant stream of new engineers to
get all the new stuff to work. Colleges and universities around the world
were producing a steady stream of computer people to supply that talent.
Pretty much all of those people came out of school with a degree of course,
but also a working knowledge of TCP. They had built things in school, using
TCP. They had read the RFCs and IENs. They knew how to make it work. But
they had never had such access to SNA, or DECNET, or certainly OSI. Such
things just weren't common in the academic environment.

So a major driving force for TCP adoption was not only the availability of
products that implemented TCP, but also the "supply chain" of people who
knew how to use TCP in real world applications.

This all occurred in 1990-91 -- just before Tim Berners-Lee released the
World Wide Web on the world. When that happened a few years later, and was
based on TCP, it sealed the dominance of TCP.
The Web provided a way for all those corporations to interact with their
customers and suppliers, as well as all their internal departments. But it
required a TCP infrastructure. They hadn't built it to use OSI or anything
else.

Jack

On 3/20/22 09:01, Bob Purvy via Internet-history wrote:
>> The problem with Minitel wasn't actually the PTT - they actually wanted
> to make it more open and Internet-like. The problem was the traditional
> publishing industry that feared the online small ads and marketplaces,
> and successfully lobbied for all kinds of restrictions.
>
> I never heard that. Interesting.
>
> One still wonders why the other European PTTs didn't do their own and
> interoperate with Minitel. Too much NIH?
>
> I recall reading research papers back then on "videotex" (a term you don't
> hear anymore). I think there were lots of research efforts on it, but it
> never went beyond small trials.
>
> IIRC. Probably someone here knows the full story.
>
> On Sun, Mar 20, 2022 at 2:01 AM Johan Helsingius via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> On 19/03/2022 23:37, Bob Purvy via Internet-history wrote:
>>
>>> By the way, the Minitel *did* ring all the bells. I used one in Paris in
>>> 1989. It was pretty nice, and they had the revenue model down pat. It was
>>> only the PTT's ineptitude, slowth, and narrow-mindedness that kept that
>>> from taking off and selling the OSI model. They didn't even try.
>> The problem with Minitel wasn't actually the PTT - they actually wanted
>> to make it more open and Internet-like. The problem was the traditional
>> publishing industry that feared the online small ads and marketplaces,
>> and successfully lobbied for all kinds of restrictions.
>>
>> Julf
>>
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>>

From internet-history at gtaylor.tnetconsulting.net  Sun Mar 20 11:21:03 2022
From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor)
Date: Sun, 20 Mar 2022 12:21:03 -0600
Subject: [ih] GOSIP & compliance
In-Reply-To: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com>
Message-ID: <6c4d5cf4-1aa3-d962-05af-7275e6b9cacd@spamtrap.tnetconsulting.net>

On 3/19/22 8:36 AM, Dan Lynch via Internet-history wrote:
> At Interop we were a teaching organization about interoperability so
> while we were TCP/IP bigots if the world was going to OSI we would
> definitely teach that too. Only a few students signed up for the
> OSI courses. We only offered them for a few years. I think by 91 it
> disappeared. The buyer is king.

I wonder if any of this (type of) old course material is still around
anywhere.

My retro computing / retro networking has expanded to reading lots of
documentation / training material from mostly the '90s. But earlier or
later is also interesting to me. -- The '90s seems to be the sweet spot
with versions of software that I play with and can relatively easily
experiment with what I'm reading about.

-- 
Grant. . . .
unix || die From internet-history at gtaylor.tnetconsulting.net Sun Mar 20 11:26:01 2022 From: internet-history at gtaylor.tnetconsulting.net (Grant Taylor) Date: Sun, 20 Mar 2022 12:26:01 -0600 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> Message-ID: <08d30a62-e3a4-a8ec-86a7-246f199d076e@spamtrap.tnetconsulting.net> On 3/19/22 8:59 AM, Clem Cole via Internet-history wrote: > People got exception to use and IP stack over and OSI stack for > exactly the same reasons as they got exceptions for Ada (or to > use Windows/NT for that matter) -- it was cheaper/faster/easier > to get*their job done* and the team that put the tender out, was > more interested in getting *their own problem solved* that looking > for the 'best/official/whatever' solution. It's human nature and > simple economics. We rely on the "Don't let perfect be the enemy of good enough." drive a lot of decisions at $DAY_JOB. It helps keep feeping creaturism in check in a company with a lot of people looking for how to justify their next promotion. -- Grant. . . . unix || die From jeanjour at comcast.net Sun Mar 20 11:56:19 2022 From: jeanjour at comcast.net (John Day) Date: Sun, 20 Mar 2022 14:56:19 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: <20220320164152.05C2A3974C06@ary.qy> References: <20220320164152.05C2A3974C06@ary.qy> Message-ID: <1F5B01E4-5C3E-4715-A589-D981258C1D75@comcast.net> I believe that was more the excuse to justify building it and putting it in every subscriber. The other uses was part of the plan all along. In fact, the plan was that everything that today would be the Web would be owned by the PTT. It was railing against that that got Pouzin in trouble. > On Mar 20, 2022, at 12:41, John Levine via Internet-history wrote: > > It appears that Bob Purvy via Internet-history said: >> One still wonders why the other European PTTs didn't do their own and >> interoperate with Minitel. Too much NIH? > > Remember that the business case for Minitel was that it would replace > paper phone books and directory assistance operators. Everything else > was an add-on. You didn't need to interoperate to do that. > > R's, > John > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From bpurvy at gmail.com Sun Mar 20 13:05:15 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 20 Mar 2022 13:05:15 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com> References: <20220320164152.05C2A3974C06@ary.qy> <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com> Message-ID: > It apparently made France Telecom a lot of money since users paid by the minute, but I think they were kind of embarrassed by the whole thing. I believe we now have a corollary to the theorem: *'Strategic' means you don't make any money.* It's: * If you're making money, it's not strategic.* On Sun, Mar 20, 2022 at 10:45 AM John R. Levine wrote: > > Well, don't people in France ever want to look up numbers in Germany, > > England, and Italy? > > Perhaps, but historically the way that worked is that each national telco > had operators in a room full of out of date foreign phone books. I doubt > any of the telcos would have found that compelling. > > > Also, there were lots of other apps on top of Minitel, including a dating > > service! It did replace calls for directory assistance, but then people > > discovered it could do a lot of other things, too. 
> > Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel > Rose. It apparently made France Telecom a lot of money since users paid > by the minute, but I think they were kind of embarassed by the whole > thing. > > It is a reasonable question why other PTTs didn't just clone Minitel, but > I don't think at the time there would have been much incentive to hook > them together. Apparently they did trials in Belgium and Ireland, but > without the PTT subsidy to provide the terminals for free, they didn't go > anywhere. > > Regards, > John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for > Dummies", > Please consider the environment before reading this e-mail. https://jl.ly > From brian.e.carpenter at gmail.com Sun Mar 20 13:55:16 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 21 Mar 2022 09:55:16 +1300 Subject: [ih] GOSIP & compliance In-Reply-To: <1F5B01E4-5C3E-4715-A589-D981258C1D75@comcast.net> References: <20220320164152.05C2A3974C06@ary.qy> <1F5B01E4-5C3E-4715-A589-D981258C1D75@comcast.net> Message-ID: <3a0d890d-4e9f-a60a-27e8-30b4ca751eb5@gmail.com> On 21-Mar-22 07:56, John Day via Internet-history wrote: > I believe that was more the excuse to justify building it and putting it in every subscriber. The other uses was part of the plan all along. In fact, the plan was that everything that today would be the Web would be owned by the PTT. It was railing against that that got Pouzin in trouble. To add context, that was exactly the time in history when the PTT monopolies were under very strong attack in Europe, as the impact of deregulation in the USA sank in. Also, Pekka Tarjanne (head of the ITU) was pushing very hard against the monopolies. Certainly France Telecom was digging in its heels against de-monopolisation, and against value added services from 3rd parties. Minitel was a tool in that battle and was also touted as a matter of national pride. Connectionless networks and 3rd-party packet switching were viewed (correctly) as a massive economic threat by France Telecom and all the other monopolies. (That's why their doctrine was OSI/X.25 and not OSI/CLNP.) But I think the main point about Minitel was that it only existed because it was massively subsidised by a government monopoly. Economically, it was nonsense. In most West European countries at that time, this was a no-no, especially as the monopolies began to fall. Brian > >> On Mar 20, 2022, at 12:41, John Levine via Internet-history wrote: >> >> It appears that Bob Purvy via Internet-history said: >>> One still wonders why the other European PTTs didn't do their own and >>> interoperate with Minitel. Too much NIH? >> >> Remember that the business case for Minitel was that it would replace >> paper phone books and directory assistance operators. Everything else >> was an add-on. You didn't need to interoperate to do that. 
>>
>> R's,
>> John
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>

From julf at Julf.com  Mon Mar 21 00:10:49 2022
From: julf at Julf.com (Johan Helsingius)
Date: Mon, 21 Mar 2022 08:10:49 +0100
Subject: [ih] GOSIP & compliance
In-Reply-To: <3a0d890d-4e9f-a60a-27e8-30b4ca751eb5@gmail.com>
References: <20220320164152.05C2A3974C06@ary.qy> <1F5B01E4-5C3E-4715-A589-D981258C1D75@comcast.net> <3a0d890d-4e9f-a60a-27e8-30b4ca751eb5@gmail.com>
Message-ID: <39736b1b-8263-8198-2b8c-e313ca51c97f@Julf.com>

On 20/03/2022 21:55, Brian E Carpenter via Internet-history wrote:
> Also, Pekka Tarjanne (head of the ITU)
> was pushing very hard against the monopolies.

Which surprised us a bit, as he had been the head of the Finnish PTT for
12 years before he became the head of ITU. I guess what helped was that
the Finnish PTT was never a 100% monopoly - they had a monopoly on
international and long-distance traffic, but the larger cities all had
independent local phone companies (originally mostly co-ops).

Julf

From b_a_denny at yahoo.com  Mon Mar 21 01:18:11 2022
From: b_a_denny at yahoo.com (Barbara Denny)
Date: Mon, 21 Mar 2022 08:18:11 +0000 (UTC)
Subject: [ih] GOSIP & compliance
In-Reply-To: <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li>
References: <159533040.305462.1647635164110@mail.yahoo.com> <202203182145.22ILjvCd2796269@bottom.networkguild.org> <7F10BA29-C2E4-494A-AB94-BBF4C2A1B096@tony.li>
Message-ID: <334131367.1252058.1647850691546@mail.yahoo.com>

I didn't have any significant OSI projects, but I think I was asked to
demonstrate that brand C had OSI support. It was a very small task for me,
so my memory is very hazy about the effort. (I am guessing it was probably
for the Army under that task order agreement I mentioned earlier.) I wasn't
sure what I could use for the demonstration, but luckily I knew that I could
get the code to NeVoT (Network Voice Tool, but it also had video support).
I started to poke around at the software, and I had some questions I sent to
Henning Schulzrinne about what I was trying to do. Luckily for me, Henning
offered to modify the code to use the OSI stack instead of just TCP/IP. I
was able to show the application working.

FYI, I think vic and vat are better known, but I don't think I had access to
the source code at the time (I also don't remember the timeframe well enough
to verify these applications existed yet, but I think they did).

barbara

On Friday, March 18, 2022, 03:01:58 PM PDT, Tony Li via Internet-history wrote:

> On Mar 18, 2022, at 2:45 PM, Michael Grant via Internet-history wrote:
>
> And there wasn't a single router that routed CLNP. Did one ever exist?

Yes. Brand C had a CLNP stack, including an IS-IS implementation and a CLNP
version of their in-house proprietary routing protocol. They did not have an
implementation of IDRP, so it wasn't a full stack, but it was deployable.

Tony
--
Internet-history mailing list
Internet-history at elists.isoc.org
https://elists.isoc.org/mailman/listinfo/internet-history

From francesco.fondelli at gmail.com  Mon Mar 21 09:13:42 2022
From: francesco.fondelli at gmail.com (Francesco Fondelli)
Date: Mon, 21 Mar 2022 17:13:42 +0100
Subject: [ih] GOSIP & compliance
In-Reply-To: 
References: <20220320164152.05C2A3974C06@ary.qy> <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com>
Message-ID: 

In Italy we had Videotel, similar to Minitel.
I never had a Videotel terminal (was expensive and yes paid by the minute) but in the 90s you could connect with a V.23/V.21 (?) modem to ITAPAC (X.25 network) and somehow access some of the Videotel services (at local per-call-rate... 200 lire IIRC). I think Videotel main app was... chat. Still have the phone numbers of some ITAPAC "gateway"... ciao On Sun, Mar 20, 2022 at 9:05 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > > It apparently made France Telecom a lot of money since users paid > by the minute, but I think they were kind of embarrassed by the whole > thing. > > I believe we now have a corollary to the theorem: > > *'Strategic' means you don't make any money.* > > > It's: > > * If you're making money, it's not strategic.* > > > On Sun, Mar 20, 2022 at 10:45 AM John R. Levine wrote: > > > > Well, don't people in France ever want to look up numbers in Germany, > > > England, and Italy? > > > > Perhaps, but historically the way that worked is that each national telco > > had operators in a room full of out of date foreign phone books. I doubt > > any of the telcos would have found that compelling. > > > > > Also, there were lots of other apps on top of Minitel, including a > dating > > > service! It did replace calls for directory assistance, but then people > > > discovered it could do a lot of other things, too. > > > > Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel > > Rose. It apparently made France Telecom a lot of money since users paid > > by the minute, but I think they were kind of embarassed by the whole > > thing. > > > > It is a reasonable question why other PTTs didn't just clone Minitel, but > > I don't think at the time there would have been much incentive to hook > > them together. Apparently they did trials in Belgium and Ireland, but > > without the PTT subsidy to provide the terminals for free, they didn't go > > anywhere. > > > > Regards, > > John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for > > Dummies", > > Please consider the environment before reading this e-mail. > https://jl.ly > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From bill.n1vux at gmail.com Mon Mar 21 11:38:34 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Mon, 21 Mar 2022 14:38:34 -0400 Subject: [ih] Videotex In-Reply-To: References: Message-ID: On Sun, Mar 20, 2022 at 1:54 PM markotime via Internet-history < internet-history at elists.isoc.org> wrote: > Canada had a fairly active program going, but not enough inertia around the > world. IIRC, minitel may have been a candidate here. > minitel - When i was at MITRE - in the same department as the MILNET/DoDIIS project (including MAP), mid 1980s, corporate IT acquired a few minitel terminals as experiments for remote (=300bd? dialup) access. I signed one out for a weekend. Alas I must say I rather preferred the Tandy/RadioShack 100 portable, it had a much better keyboard if one was a decent QWERTY typist; either was a tradeoff on screen real-estate. TRS just didn't have the mass production of France Telecom bringing Minitel unit-prices down. (Either was an arguable improvement over the TI Silent 700 or the bigger portable printing terminals that came before, despite those being full width. 
) -- Bill Ricker bill.n1vux at gmail.com https://www.linkedin.com/in/n1vux From bpurvy at gmail.com Tue Mar 22 15:57:25 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Tue, 22 Mar 2022 15:57:25 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <20220320164152.05C2A3974C06@ary.qy> <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com> Message-ID: Is anyone familiar with NASA Ames' internal network history? I found this document (disclaimer: which I haven't read yet), which seems to indicate OSI wasn't *totally, officially* dead by the 90s (I see Vint in the Acknowledgements.) On Mon, Mar 21, 2022 at 9:13 AM Francesco Fondelli < francesco.fondelli at gmail.com> wrote: > In Italy we had Videotel, similar to Minitel. I never had a Videotel > terminal (was expensive and yes paid by the minute) but in the 90s you > could connect with a V.23/V.21 (?) modem to ITAPAC (X.25 network) and > somehow access some of the Videotel services (at local per-call-rate... 200 > lire IIRC). > > I think Videotel main app was... chat. > > Still have the phone numbers of some ITAPAC "gateway"... > > ciao > > On Sun, Mar 20, 2022 at 9:05 PM Bob Purvy via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > It apparently made France Telecom a lot of money since users paid >> by the minute, but I think they were kind of embarrassed by the whole >> thing. >> >> I believe we now have a corollary to the theorem: >> >> *'Strategic' means you don't make any money.* >> >> >> It's: >> >> * If you're making money, it's not strategic.* >> >> >> On Sun, Mar 20, 2022 at 10:45 AM John R. Levine wrote: >> >> > > Well, don't people in France ever want to look up numbers in Germany, >> > > England, and Italy? >> > >> > Perhaps, but historically the way that worked is that each national >> telco >> > had operators in a room full of out of date foreign phone books. I >> doubt >> > any of the telcos would have found that compelling. >> > >> > > Also, there were lots of other apps on top of Minitel, including a >> dating >> > > service! It did replace calls for directory assistance, but then >> people >> > > discovered it could do a lot of other things, too. >> > >> > Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel >> > Rose. It apparently made France Telecom a lot of money since users paid >> > by the minute, but I think they were kind of embarassed by the whole >> > thing. >> > >> > It is a reasonable question why other PTTs didn't just clone Minitel, >> but >> > I don't think at the time there would have been much incentive to hook >> > them together. Apparently they did trials in Belgium and Ireland, but >> > without the PTT subsidy to provide the terminals for free, they didn't >> go >> > anywhere. >> > >> > Regards, >> > John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for >> > Dummies", >> > Please consider the environment before reading this e-mail. >> https://jl.ly >> > >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > From tony.li at tony.li Tue Mar 22 16:37:27 2022 From: tony.li at tony.li (Tony Li) Date: Tue, 22 Mar 2022 16:37:27 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <20220320164152.05C2A3974C06@ary.qy> <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com> Message-ID: <7B1D4810-1787-4055-AC43-F162302EE4B3@tony.li> Hi Bob, I was something of a spectator. Milo Medin and Jeff Burgan are authoritative. 
NASA?s network started off as DECnet and painfully migrated to DECnet Phase V (i.e. OSI). So it?s correct, OSI wasn?t totally, officially dead yet. But it was VERY clear that to talk to anyone else, it was IP. T > On Mar 22, 2022, at 3:57 PM, Bob Purvy via Internet-history wrote: > > Is anyone familiar with NASA Ames' internal network history? I found this > document > > (disclaimer: > which I haven't read yet), which seems to indicate OSI wasn't *totally, > officially* dead by the 90s > > (I see Vint in the Acknowledgements.) > > On Mon, Mar 21, 2022 at 9:13 AM Francesco Fondelli < > francesco.fondelli at gmail.com> wrote: > >> In Italy we had Videotel, similar to Minitel. I never had a Videotel >> terminal (was expensive and yes paid by the minute) but in the 90s you >> could connect with a V.23/V.21 (?) modem to ITAPAC (X.25 network) and >> somehow access some of the Videotel services (at local per-call-rate... 200 >> lire IIRC). >> >> I think Videotel main app was... chat. >> >> Still have the phone numbers of some ITAPAC "gateway"... >> >> ciao >> >> On Sun, Mar 20, 2022 at 9:05 PM Bob Purvy via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>>> It apparently made France Telecom a lot of money since users paid >>> by the minute, but I think they were kind of embarrassed by the whole >>> thing. >>> >>> I believe we now have a corollary to the theorem: >>> >>> *'Strategic' means you don't make any money.* >>> >>> >>> It's: >>> >>> * If you're making money, it's not strategic.* >>> >>> >>> On Sun, Mar 20, 2022 at 10:45 AM John R. Levine wrote: >>> >>>>> Well, don't people in France ever want to look up numbers in Germany, >>>>> England, and Italy? >>>> >>>> Perhaps, but historically the way that worked is that each national >>> telco >>>> had operators in a room full of out of date foreign phone books. I >>> doubt >>>> any of the telcos would have found that compelling. >>>> >>>>> Also, there were lots of other apps on top of Minitel, including a >>> dating >>>>> service! It did replace calls for directory assistance, but then >>> people >>>>> discovered it could do a lot of other things, too. >>>> >>>> Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel >>>> Rose. It apparently made France Telecom a lot of money since users paid >>>> by the minute, but I think they were kind of embarassed by the whole >>>> thing. >>>> >>>> It is a reasonable question why other PTTs didn't just clone Minitel, >>> but >>>> I don't think at the time there would have been much incentive to hook >>>> them together. Apparently they did trials in Belgium and Ireland, but >>>> without the PTT subsidy to provide the terminals for free, they didn't >>> go >>>> anywhere. >>>> >>>> Regards, >>>> John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for >>>> Dummies", >>>> Please consider the environment before reading this e-mail. 
>>> https://jl.ly >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From brian.e.carpenter at gmail.com Tue Mar 22 18:51:39 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 23 Mar 2022 14:51:39 +1300 Subject: [ih] GOSIP & compliance In-Reply-To: <7B1D4810-1787-4055-AC43-F162302EE4B3@tony.li> References: <20220320164152.05C2A3974C06@ary.qy> <2fcd898c-527c-9878-3ae9-145955072407@johnlevine.com> <7B1D4810-1787-4055-AC43-F162302EE4B3@tony.li> Message-ID: It wasn't just NASA and it certainly wasn't just Ames. ESA (the European Space Agency) also planned to migrate to Phase V, and so did the intercontinental High Energy Physics DECnet, which interworked with the space science DECnet. Denise Heagerty was in charge of that for CERN. Phase V worked (from some time in 1993, I think) but it didn't last and the physics DECnet died by about 1998 as the physicists switched to *ix and TCP/IP (and, of course, DEC vanished). See page 11 at https://nssdc.gsfc.nasa.gov/nssdc_news/nssdc_news_06_02.pdf/ . I would have thought Milo Medin was at that meeting, so maybe he skipped the photo. The day after that meeting**, I met with Steve Goldstein at NSF HQ to discuss boosting transatlantic IP connectivity. In my book I wrote: "The end of that story is a sad one: we just about got the high-energy physics DECnets converted to Phase V when it was time to switch them off, since everyone had started using TCP/IP instead." [If anybody cares about details contact me off list. But I'll have to ask Denise.] The main upside of moving to Phase V was that it got rid of "hidden areas" which was DECnet Phase IV's hack for running out of address space. (That was the main source of my unconditional hatred of NAT.) At the same time, we were telling DEC they needed to get the DECnet upper layers running over TCP/IP, but that was a hard message to get across. However, HP to this day has a support page entitled "DECnet/OSI - Configuring a Node to Use DECnet/OSI Over TCP/IP". It cites RFC 1006 (a.k.a. STD 35). Regards Brian Carpenter ** Utterly irrelevant: The Hubble telescope had been launched a couple of weeks earlier, and its defective mirror was in the process of being discovered right then. Our hosts wanted to show us the Hubble control room at Goddard, but it was all closed up with the curtains drawn across the viewing windows. They were very puzzled; the defect wasn't announced in public until three weeks later. On 23-Mar-22 12:37, Tony Li via Internet-history wrote: > > Hi Bob, > > I was something of a spectator. Milo Medin and Jeff Burgan are authoritative. > > NASA?s network started off as DECnet and painfully migrated to DECnet Phase V (i.e. OSI). So it?s correct, OSI wasn?t totally, officially dead yet. But it was VERY clear that to talk to anyone else, it was IP. > > T > > >> On Mar 22, 2022, at 3:57 PM, Bob Purvy via Internet-history wrote: >> >> Is anyone familiar with NASA Ames' internal network history? I found this >> document >> >> (disclaimer: >> which I haven't read yet), which seems to indicate OSI wasn't *totally, >> officially* dead by the 90s >> >> (I see Vint in the Acknowledgements.) >> >> On Mon, Mar 21, 2022 at 9:13 AM Francesco Fondelli < >> francesco.fondelli at gmail.com> wrote: >> >>> In Italy we had Videotel, similar to Minitel. 
I never had a Videotel >>> terminal (was expensive and yes paid by the minute) but in the 90s you >>> could connect with a V.23/V.21 (?) modem to ITAPAC (X.25 network) and >>> somehow access some of the Videotel services (at local per-call-rate... 200 >>> lire IIRC). >>> >>> I think Videotel main app was... chat. >>> >>> Still have the phone numbers of some ITAPAC "gateway"... >>> >>> ciao >>> >>> On Sun, Mar 20, 2022 at 9:05 PM Bob Purvy via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>>> It apparently made France Telecom a lot of money since users paid >>>> by the minute, but I think they were kind of embarrassed by the whole >>>> thing. >>>> >>>> I believe we now have a corollary to the theorem: >>>> >>>> *'Strategic' means you don't make any money.* >>>> >>>> >>>> It's: >>>> >>>> * If you're making money, it's not strategic.* >>>> >>>> >>>> On Sun, Mar 20, 2022 at 10:45 AM John R. Levine wrote: >>>> >>>>>> Well, don't people in France ever want to look up numbers in Germany, >>>>>> England, and Italy? >>>>> >>>>> Perhaps, but historically the way that worked is that each national >>>> telco >>>>> had operators in a room full of out of date foreign phone books. I >>>> doubt >>>>> any of the telcos would have found that compelling. >>>>> >>>>>> Also, there were lots of other apps on top of Minitel, including a >>>> dating >>>>>> service! It did replace calls for directory assistance, but then >>>> people >>>>>> discovered it could do a lot of other things, too. >>>>> >>>>> Yes, I know. In our 1995 Internet Secrets, we had a chapter on Minitel >>>>> Rose. It apparently made France Telecom a lot of money since users paid >>>>> by the minute, but I think they were kind of embarassed by the whole >>>>> thing. >>>>> >>>>> It is a reasonable question why other PTTs didn't just clone Minitel, >>>> but >>>>> I don't think at the time there would have been much incentive to hook >>>>> them together. Apparently they did trials in Belgium and Ireland, but >>>>> without the PTT subsidy to provide the terminals for free, they didn't >>>> go >>>>> anywhere. >>>>> >>>>> Regards, >>>>> John Levine, johnl at taugh.com, Primary Perpetrator of "The Internet for >>>>> Dummies", >>>>> Please consider the environment before reading this e-mail. >>>> https://jl.ly >>>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From karl at cavebear.com Thu Mar 24 16:37:34 2022 From: karl at cavebear.com (Karl Auerbach) Date: Thu, 24 Mar 2022 16:37:34 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: Message-ID: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> We ought not to forget the force that was the Air Force's "ULANA" (Unified Local Area Network Architecture) program of the mid 1980's. It was a quite large procurement for the time, and it pushed for commercial, off-the-shelf, components, mostly TCP/IPv4 based. ISO/OSI stuff had been published.? I had done some ASN.1/BER stuff which ended up turning being incorporated into my company's (Epilogue Technology) SNMP engine. GOSIP was lurking and forming in the background, but the Air Force essentially ignored it. I worked on the TRW team bid.? We won, but AT&T protested and the whole program ended up going down in flames. 
During the preparation of our response we pushed hard on some very new and small companies - such as Cisco (which was then operating out of a garage), etc.? (Including my own company for SNMP engines.) Because it was COTS we got TRW to design and build for us a smart ethernet board for PC's - to do essentially on Ethernet what the Sytek boards did for Netbios on the Sytek ring networks.? To be COTS we had to sell it to the public.? Don't know whether anybody bought any. The bid required that things would work together, and this was before the Interop show network, so we created fairly extensive interoperability testbeds.? This created a powerful forcing function that really underscored that vendors needed to make sure that their products played nice with the competition and other pieces of the net. It's been my feeling, as well as the feeling of others who I worked with on the project, that ULANA was one of the important things that made TCP based systems commercially viable, and at the same time created a "prove to me that it works and interoperates" silver bullet that eventually helped to bring down the GOSIP momentum. On the first couple of years of the Interop show network we deployed TCP/IP, ISO/OSI, XNS/Netware, and Decnet.? It was a maddening collection of diversity.? The TCP/IP based stuff came together on the shownet quite well, as did Netware and Decnet. But we always had trouble with ISO/OSI.? (We also had a lot of stuff at that time that used the overwrought IEEE 802 Ethernet frame header mess with SNAP headers and all.) All of that said, however, the ISO/OSI stuff had a lot of really interesting ideas (wrapped in expensive documents filled with indecipherable gibberish and more unnecessary bells and whistles than a circus Calliope.).? I think the IETF's hostile attitude towards ISO/OSI created an atmosphere of auto-rejection in which those ideas were too often ignored. ??? ??? --karl-- On 3/18/22 10:02 AM, Bob Purvy via Internet-history wrote: > I was around for all this, but probably not as much as some of you. So many > memories fade... > > I've been reading this > . > This passage... > > > *By August 1990, federal agencies were required to procure > GOSIP-compliantproducts. Through this procurement requirement, the > government intended to stimulate the market for OSI products. However, many > network administrators resisted the GOSIP procurement policy and continued > to operate TCP/IP networks, noting that the federal mandate, by specifying > only procurement, did not prohibit the use of products built around the > more familiar and more readily available TCP/IP.* > > ... in particular stuck out for me. Admins were required to go OSI, but > somehow it never happened. Does anyone have any personal stories to relate > about this, either your own or someone else's? > > *Disclosure*: I'm writing historical fiction, mostly because that's what I > want to do. So there won't be any actual names in whatever I write. I'm > interested in the private choices people make, not the institutions, > towering figures, and impersonal forces that most historians write about. 
From dhc at dcrocker.net Thu Mar 24 17:03:07 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 24 Mar 2022 17:03:07 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> Message-ID: <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net>

On 3/24/2022 4:37 PM, Karl Auerbach via Internet-history wrote: > But we always had trouble with ISO/OSI. (We also had a lot of stuff at > that time that used the overwrought IEEE 802 Ethernet frame header mess > with SNAP headers and all.) > > All of that said, however, the ISO/OSI stuff had a lot of really > interesting ideas (wrapped in expensive documents filled with > indecipherable gibberish and more unnecessary bells and whistles than a > circus Calliope.).

This is a nice summary of the practical realities of that suite. It was always going to be fully ready in a couple of years. Again and again and again.

> I think the IETF's hostile attitude towards ISO/OSI > created an atmosphere of auto-rejection in which those ideas were too > often ignored.

This is silliness. Given the amount of national and business support the OSI work had, and the constant and aggressive marginalization they attempted of the TCP/IP stuff, any negative 'tone' within the IETF community was irrelevant.

And then there is the small fact that the IETF community was more helpful to OSI pragmatics than the OSI community was. Consider the concession to using ASN.1, for SNMP, that you cited, which was a bone tossed to appease the CMIP people, in spite of the problems using ASN.1 caused. And, by the way, it was CMOT, rather than CMIP. Consider the T...

Which points up another example of trying to be helpful. Namely ISODE, which gave OSI apps a place to get field experience, since the OSI community could not do that at scale.

d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net

From dan at lynch.com Fri Mar 25 09:27:10 2022 From: dan at lynch.com (Dan Lynch) Date: Fri, 25 Mar 2022 09:27:10 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> Message-ID:

Yes, ULANA was an early forcing function. It was a billion dollar buyer demanding interoperability. TCP/IP was the only thing out there that met that goal.

Dan Cell 650-776-7313

> On Mar 24, 2022, at 4:37 PM, Karl Auerbach via Internet-history wrote: > > We ought not to forget the force that was the Air Force's "ULANA" (Unified Local Area Network Architecture) program of the mid 1980's. > > It was a quite large procurement for the time, and it pushed for commercial, off-the-shelf, components, mostly TCP/IPv4 based. > > ISO/OSI stuff had been published. I had done some ASN.1/BER stuff which ended up turning being incorporated into my company's (Epilogue Technology) SNMP engine. > > GOSIP was lurking and forming in the background, but the Air Force essentially ignored it. > > I worked on the TRW team bid. We won, but AT&T protested and the whole program ended up going down in flames. > > During the preparation of our response we pushed hard on some very new and small companies - such as Cisco (which was then operating out of a garage), etc. (Including my own company for SNMP engines.) > > Because it was COTS we got TRW to design and build for us a smart ethernet board for PC's - to do essentially on Ethernet what the Sytek boards did for Netbios on the Sytek ring networks. 
To be COTS we had to sell it to the public. Don't know whether anybody bought any. > > The bid required that things would work together, and this was before the Interop show network, so we created fairly extensive interoperability testbeds. This created a powerful forcing function that really underscored that vendors needed to make sure that their products played nice with the competition and other pieces of the net. > > It's been my feeling, as well as the feeling of others who I worked with on the project, that ULANA was one of the important things that made TCP based systems commercially viable, and at the same time created a "prove to me that it works and interoperates" silver bullet that eventually helped to bring down the GOSIP momentum. > > On the first couple of years of the Interop show network we deployed TCP/IP, ISO/OSI, XNS/Netware, and Decnet. It was a maddening collection of diversity. The TCP/IP based stuff came together on the shownet quite well, as did Netware and Decnet. But we always had trouble with ISO/OSI. (We also had a lot of stuff at that time that used the overwrought IEEE 802 Ethernet frame header mess with SNAP headers and all.) > > All of that said, however, the ISO/OSI stuff had a lot of really interesting ideas (wrapped in expensive documents filled with indecipherable gibberish and more unnecessary bells and whistles than a circus Calliope.). I think the IETF's hostile attitude towards ISO/OSI created an atmosphere of auto-rejection in which those ideas were too often ignored. > > --karl-- > > >> On 3/18/22 10:02 AM, Bob Purvy via Internet-history wrote: >> I was around for all this, but probably not as much as some of you. So many >> memories fade... >> >> I've been reading this >> . >> This passage... >> >> >> *By August 1990, federal agencies were required to procure >> GOSIP-compliantproducts. Through this procurement requirement, the >> government intended to stimulate the market for OSI products. However, many >> network administrators resisted the GOSIP procurement policy and continued >> to operate TCP/IP networks, noting that the federal mandate, by specifying >> only procurement, did not prohibit the use of products built around the >> more familiar and more readily available TCP/IP.* >> >> ... in particular stuck out for me. Admins were required to go OSI, but >> somehow it never happened. Does anyone have any personal stories to relate >> about this, either your own or someone else's? >> >> *Disclosure*: I'm writing historical fiction, mostly because that's what I >> want to do. So there won't be any actual names in whatever I write. I'm >> interested in the private choices people make, not the institutions, >> towering figures, and impersonal forces that most historians write about. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jnc at mercury.lcs.mit.edu Fri Mar 25 15:04:41 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Fri, 25 Mar 2022 18:04:41 -0400 (EDT) Subject: [ih] "History of Computer Communications" Message-ID: <20220325220441.E4C9A18C09F@mercury.lcs.mit.edu> Hi I'd like to point everyone at Jim Pelkey's online book "History of Computer Communications": https://historyofcomputercommunications.info/ Everyone should review sections that cover stuff they lived through, and send in updates. E.g. 
in reading the section on Proteon: https://historyofcomputercommunications.info/section/14.18/proteon/ I discovered a number of issues with that chapter; probably explained by limited sources, which didn't cover in detail the background at LCS which led to Proteon's products.

I am writing up a moderate-length note which covers the early work on rings and routers at LCS, and will send it to Jim's collaborator, Loring Robbins, who expressed an interest in my elucidations on the 'C Gateway'. (If anyone here is interested in seeing those notes, I can send them along, which would also put them in the archive here for public availability.)

Anyway, everyone else should check it out and lend a hand with improving it!

Noel

From karl at cavebear.com Sat Mar 26 01:22:00 2022 From: karl at cavebear.com (Karl Auerbach) Date: Sat, 26 Mar 2022 01:22:00 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> Message-ID: <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com>

On 3/24/22 5:03 PM, Dave Crocker wrote: >> I think the IETF's hostile attitude towards ISO/OSI created an >> atmosphere of auto-rejection in which those ideas were too often >> ignored. > > This is silliness.

Yes, the OSI people wrote impenetrable documents and did their best to hinder dissemination. And they never explained the "why", only the "how". It took real effort to read between the lines to understand. But some of us did. And sometimes we found valuable nuggets.

(By-the-way, the IETF has slid into a similar mode, expressing only the "how" of a specification without much, if any, note of alternative approaches that were examined and rejected, and why they were rejected. The patent attorney in me shakes my head in bewilderment at that seemingly intentional erasure of potential "prior art" and the opportunities that creates for patent trolls to cause expensive mischief.)

Examples of useful concepts in ISO/OSI that we missed (or ignored) to our detriment. (Some recent designs, such as QUIC, have rediscovered some of these things.)

 - In IPv6 we had a choice to rethink the IP checksum, but we didn't. The Fletcher checksum used in OSI is significantly stronger than the IP checksum and not subject to the implementation complexity of thinking in ones-complement arithmetic that exists on no present computer architecture. (If I remember correctly there are two or three RFCs dealing with issues with the existing IP checksum.) The Fletcher checksum looks scary - a first glance makes one think it is full of expensive multiplications - but there are quite fast algorithms for doing full and incremental calculations. (There may have been good arguments against such a change, such as the pseudo headers for TCP and UDP, but I do not remember these ever coming up.)

 - The IETF never inquired why ISO/OSI had a session layer. Turns out that it is an important concept that we've had to re-invent in different forms, such as web cookies.

 - Similarly in the development of IPv6 the IETF never really considered connection-time data. This was a universal concept throughout ISO/OSI. So we ended up kinda kludging it together for things like TLS/SNI for HTTPS. (QUIC improves this considerably.)
 - ISO/OSI also had "application entity titles" which were names that could be used to rebind associations/sessions if the logical entity (we call these cloud-based applications these days) disconnected (due to transport failure caused by noise, congestion, or network address change), moved, split, or merged. OSI didn't fully solve these problems, but it had an architectural hook to latch onto. We're wrestling with these concepts in the TCP world.

Yes, the ISO/OSI people never put on an engineering hat and trimmed and reshaped their stuff into something that could actually be built. And their mode of expression was horrid (and just getting the documents was expensive).

BTW, I thought the use of ASN.1/BER for SNMP was far from the best choice (indeed, from an implementation quality and interoperability point of view it would be hard to find one that was worse.) I preferred the HEMS proposal as the most elegant, even if it did use XML.

CMIP had some really good ideas (most particularly with regard to selecting and filtering data at the server for highly efficient bulk fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) was very inventive and clever. That kinda demonstrated the potential viability, rather than the impossibility, of things like CMIP/T, even in its bloated form.

Diverting from the main points here:

About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER and using JSON, throwing out UDP and using TCP (optionally with TLS), and adding in some of the filtering concepts from CMIP. I preserved most MIB names and instrumentation variable semantics (and thus preserving a lot of existing instrumentation code in devices.)

The resulting running code (in Python) is quite small - on par with the 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it runs several decimal orders of magnitude faster than SNMP (in terms of compute cycles, network activity, and start-to-finish time.) Plus I can do things like "give me all data on all interfaces with a received packet error rate greater than 0.1%". I can even safely issue complex control commands to devices, something that SNMP can't do very well. I considered doing a commercial grade, perhaps open-source, version but it could have ended up disturbing the then nascent Netconf effort.

        --karl--

From cabo at tzi.org Sat Mar 26 05:32:48 2022 From: cabo at tzi.org (Carsten Bormann) Date: Sat, 26 Mar 2022 13:32:48 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: References: <27FD71E2-0E29-4917-8A6A-3E527C9ECF9B@lynch.com> <028d2890-5a8b-8649-bd98-d4c7c9ee11aa@dcrocker.net> <17c22b47-81b7-3b74-7eb1-21f2a467f315@Julf.com> Message-ID: <158AE7B3-EDE6-4389-A143-6D41F5A4D625@tzi.org>

On 2022-03-20, at 14:15, vinton cerf via Internet-history wrote: > > While developing MCI-Mail, I tried to get Minitel to agree to interconnect > to allow email exchange but they refused. > this would have been around 1984.

Between 1983 and 1987 I spent quite some energy connecting to the German "Bildschirmtext" (Btx) system [1], the German PRESTEL copycat (equivalent of Minitel). They had the concept of an "externer Rechner" (external server) that was connected to the core Bildschirmtext system that was developed by IBM. The "externer Rechner" (ER) connected to Bildschirmtext via X.25 (which was the part that I worked on), and, since OSI wasn't ready, used German pre-standard higher-layer protocols called "einheitliche hoehere Kommunikationsprotokolle" (EHKP4 to EHKP6, using the OSI model layer numbers).
The EHKP implementers sat next door, so I have only hearsay, but it must have been grueling work to get this stuff going, all with various IBM components roaring in the background (we once had an S/36 in the next office).

The basic idea was that as the provider of an "externer Rechner" you could provide "Mehrwertdienste" (value-added services), which could have included e-mail service -- as long as you provided a clunky TV-style user interface (24x40 character ISO 6937 plus DRCS (*) in color!) through the EHKP stack. (The access modems the participants used were V.23 asynchronous 1200 bit/s down, 75 bit/s up, and used CEPT T/CD 06-01 [2].) The provider of an externer Rechner could charge DEM 0.01 to 9.99 per page requested (or also levy a per minute charge), which would have provided for a revenue model.

Note that this required a "Staatsvertrag" between the (then 10+1) German federal states to satisfy the complex legal environment, which also limited the selection of equipment you could use. The established players tried to make sure the homologation(*) requirements were as onerous as possible (to fend off competition), so there was some wrestling until we finally could connect our "externe Rechner". Connection also wasn't cheap (starting with the need to get expensive "Datex-P" (X.25) service).

On the consumer side, while you could do some banking over Btx via ER, the overall usefulness was rather limited so the Btx service never grew to reach even 1 % of the population before it became a sidecar to Internet access.

Gruesse, Carsten

[1]: https://de.wikipedia.org/wiki/Bildschirmtext [2]: https://www.etsi.org/deliver/etsi_i_ets/300001_300099/300072/01_60/ets_300072e01p.pdf

(*) DRCS = dynamically redefinable character set [2] (*) (Getting through that homologation process was also the first time my naive young self encountered openly corrupt state officials in oh-so-clean Germany, but that is a different story.)

From bpurvy at gmail.com Sat Mar 26 08:51:13 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sat, 26 Mar 2022 08:51:13 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID:

CMOT! There's a term you don't hear much anymore. At 3Com, towards the late 80's or early 90's, I was on an "architecture committee" led by Amatzia Ben-Artzi. We spent a lot of time debating CMOT vs. SNMP.

I don't recall that anything much was accomplished.

On Sat, Mar 26, 2022 at 1:22 AM Karl Auerbach via Internet-history < internet-history at elists.isoc.org> wrote: > > On 3/24/22 5:03 PM, Dave Crocker wrote: > >> I think the IETF's hostile attitude towards ISO/OSI created an > >> atmosphere of auto-rejection in which those ideas were too often > >> ignored. > > > > This is silliness. > > Yes, the OSI people wrote impenetrable documents and did their best to > hinder dissemination. And they never explained the "why" only the > "how". It took real effort to read between the lines to understand. > But some of us did. And sometimes we found valuable nuggets. > > (By-the-way, the IETF has slid into a similar mode, expressing only the > "how" of a specification without much, if any, note of alternative > approaches that were examined and rejected, and why they were rejected. 
> The patent attorney in me shakes my head in bewilderment at that > seemingly intentional erasure of potential "prior art" and the > opportunities that creates for patent trolls to cause expensive mischief.) > > Examples of useful concepts in ISO/OSI that we missed (or ignored) to > our detriment. (Some recent designs, such as QUIC, have rediscovered > some of these things.) > > - In IPv6 we had a choice to rethink the IP checksum, but we didn't. > The Fletcher checksum used in OSI is significantly stronger than the IP > checksum and not subject to the implementation complexity of thinking in > ones-complement arithmetic that exists on no present computer > architecture. (If I remember correctly there are two or three RFCs > dealing with issues with the existing IP checksum.) The Fletcher > checksum looks scary - a first glance makes one think it is full of > expensive multiplications - but there are quite fast algorithms for > doing full and incremental calculations. (There may have been good > arguments against such a change, such as the pseudo headers for TCP and > UDP, but I do not remember these ever coming up.) > > - The IETF never inquired why ISO/OSI had a session layer. Turns out > that it is an important concept that we've had to re-invent in different > forms, such as web cookies. > > - Similarly in the development of IPv6 the IETF never really > considered connection-time data. This was a universal concept > throughout ISO/OSI. So we ended up kinda kludging it together for > things like TLS/SNI for HTTPS. (QUIC improves this considerably.) > > - ISO/OSI also had "application entity titles" which were names that > could be used to rebind associations/sessions if the logical entity (we > call these cloud base applications these days) disconnected (due to > transport failure caused by noise, congestion, or network address > change), moved, split, or merged. OSI didn't fully solve these problems, > but it had an architectural hook to latch onto. We're wrestling with > these concepts in the TCP world. > > Yes, the ISO/OSI people never put on an engineering hat and trimmed and > reshaped their stuff into something that could actually be built. And > their mode of expression was horrid (and just getting the documents was > expensive). > > BTW, I thought the use of ASN.1/BER for SNMP was far from the best > choice (indeed, from an implementation quality and interoperability > point of view would be hard to find one that was worse.) I preferred > the HEMS proposal as the most elegant, even if it did use XML. > > CMIP had some really good ideas (most particularly with regard to > selecting and filtering data at the server for highly efficient bulk > fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > was very inventive and clever. That kinda demonstrated the potential > viability, rather than the impossibility, of things like CMIP/T, even in > its bloated form. > > Diverting from the main points here: > > About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > and using JSON, throwing out UDP and using TCP (optionally with TLS), > and adding in some of the filtering concepts from CMIP. I preserved > most MIB names and instrumentation variable semantics (and thus > preserving a lot of existing instrumentation code in devices.) > > The resulting running code (in Python) is quite small - on par with the > 12kbytes (machine code) of the core of my Epilogue SNMP engine. 
And it > runs several decimal orders of magnitude faster than SNMP (in terms of > compute cycles, network activity, and start-to-finish time.) Plus I can > do things like "give me all data on all interfaces with a received > packet error rate greater than 0.1%". I can even safely issue complex > control commands to devices, something that SNMP can't do very well. I > considered doing commercial grade, perhaps open-source, version but it > could have ended up disturbing the then nascent Netconf effort. > > --karl-- > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history >

From dhc at dcrocker.net Sat Mar 26 08:59:53 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Sat, 26 Mar 2022 08:59:53 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID: <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net>

On 3/26/2022 8:51 AM, Bob Purvy via Internet-history wrote: > We spent a lot of time debating CMOT > vs. SNMP. > > I don't recall that anything much was accomplished.

A fair assessment of the topic, more widely. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net

From jack at 3kitty.org Sat Mar 26 10:30:27 2022 From: jack at 3kitty.org (Jack Haverty) Date: Sat, 26 Mar 2022 10:30:27 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> Message-ID:

SNMP et al are mechanisms for data collection, i.e., retrieving all sorts of metrics about how things out in the network are behaving. But has there been much thought and effort about how to *use* all that data to operate, troubleshoot, plan or otherwise manage all the technology involved in whatever the users are doing?

When I was at Oracle in the early 90s, I was immersed in a sea of people who knew a lot about how data was analyzed and used in all sorts of business processes. So one day, one of the data-guys and I sat down in our Network Operations Center (more like a closet...) and cobbled together some shell scripts that used SNMP to collect whatever we could get, stuff it all into a database, and then use the well-worn standard database tools to analyze, aggregate, compare, predict, and visualize how our network applications were behaving. It was literally a day's work, exploiting the synergy between SNMP data collection and database tools. At that point, all that data from SNMP became actually useful for us in operating our own IT infrastructure.

We also discovered quite a few bugs in various SNMP implementations, where the data being provided were actually quite obviously incorrect. I wondered at the time whether anyone else had ever tried to actually use the SNMP data, more than just writing it into a log file.

I suspect that a "lack of accomplishment" in the SNMP/CMOT/etc activities might have been influenced by a lack of attention to how all that operations data might actually be used by IT operators and end users. Curious too how such data is actually used by today's Operators. Is it? 
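For readers curious what that "day's work" might look like, here is a minimal sketch in the same spirit, assuming the net-snmp "snmpget" command-line tool and Python's built-in sqlite3 module; the host name, community string, OIDs, and table layout are illustrative placeholders, not a reconstruction of the actual Oracle-era scripts:

#!/usr/bin/env python3
# Sketch: sample a few SNMP interface counters and keep them in SQLite
# so ordinary SQL can do the aggregation, comparison, and trending.
# Assumes the net-snmp "snmpget" CLI is installed and the device answers
# SNMPv2c; host, community, and OIDs below are placeholders.
import sqlite3, subprocess, time

HOST = "router.example.com"      # placeholder device
COMMUNITY = "public"             # placeholder community string
OIDS = {
    "ifInOctets.1": "1.3.6.1.2.1.2.2.1.10.1",   # IF-MIB ifInOctets, interface 1
    "ifInErrors.1": "1.3.6.1.2.1.2.2.1.14.1",   # IF-MIB ifInErrors, interface 1
}

def snmp_get(oid):
    # -Ovq prints just the value, which for these counters is a plain integer
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

db = sqlite3.connect("netstats.db")
db.execute("CREATE TABLE IF NOT EXISTS samples "
           "(ts INTEGER, host TEXT, counter TEXT, value INTEGER)")

while True:
    now = int(time.time())
    for name, oid in OIDS.items():
        db.execute("INSERT INTO samples VALUES (?,?,?,?)",
                   (now, HOST, name, snmp_get(oid)))
    db.commit()
    time.sleep(60)               # one sample per counter per minute

Once the samples are in a table, the rest is plain SQL: deltas between consecutive samples give rates, GROUP BY gives per-host or per-interface aggregates, and joins against an inventory table support the kind of comparison and prediction described above.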
Jack Haverty On 3/26/22 08:59, Dave Crocker via Internet-history wrote: > On 3/26/2022 8:51 AM, Bob Purvy via Internet-history wrote: >> We spent a lot of time debating CMOT >> vs. SNMP. >> >> I don't recall that anything much was accomplished. > > A fair assessment of the topic, more widely. > d/ > From bpurvy at gmail.com Sat Mar 26 11:43:31 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sat, 26 Mar 2022 11:43:31 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> Message-ID: Actually, when I was still at Oracle (I don't know if you'd left by then), Dimitris Nakos, Robert Ash, and I all went to Atlanta for the HP World convention. We had a little Open View app we were demo'ing. I can't remember if Open View could save its data to Oracle, or if that was just being negotiated. Don't come at me for Open View! It was one of those products that they'd buy and keep by the door so the VP would see it, while the real operators continued using ping and traceroute to do their work. On Sat, Mar 26, 2022 at 10:30 AM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > SNMP et al are mechanisms for data collection, i.e., retrieving all > sorts of metrics about how things out in the network are behaving. But > has there been much thought and effort about how to *use* all that data > to operate, troubleshoot, plan or otherwise manage all the technology > involved in whatever the users are doing? > > When I was at Oracle in the early 90s, I was immersed in a sea of people > who knew a lot about how data was analyzed and used in all sorts of > business processes. So one day, one of the data-guys and I sat down in > our Network Operations Center (more like a closet...) and cobbled > together some shell scripts that used SNMP to collect whatever we could > get, stuff it all into a database, and then use the well-worn standard > database tools to analyze, aggregate, compare, predict, and visualize > how our network applications were behaving. It was literally a day's > work, exploiting the synergy between SNMP data collection and database > tools. At that point, all that data from SNMP became actually useful > for us in operating our own IT infrastructure. > > We also discovered quite a few bugs in various SNMP implementations, > where the data being provided were actually quite obviously incorrect. > I wondered at the time whether anyone else had ever tried to actually > use the SNMP data, more than just writing it into a log file. > > I suspect that a "lack of accomplishment" in the SNMP/CMOT/etc > activities might have been influenced by a lack of attention to how all > that operations data might actually be used by IT operators and end > users. Curious too how such data is actually used by today's > Operators. Is it? > > Jack Haverty > > > On 3/26/22 08:59, Dave Crocker via Internet-history wrote: > > On 3/26/2022 8:51 AM, Bob Purvy via Internet-history wrote: > >> We spent a lot of time debating CMOT > >> vs. SNMP. > >> > >> I don't recall that anything much was accomplished. > > > > A fair assessment of the topic, more widely. 
> > d/ > > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From karl at cavebear.com Sat Mar 26 13:12:55 2022 From: karl at cavebear.com (Karl Auerbach) Date: Sat, 26 Mar 2022 13:12:55 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> Message-ID: <2ccc8241-1933-932d-9f15-cdf1d5bbb242@cavebear.com> On 3/26/22 10:30 AM, Jack Haverty via Internet-history wrote: > SNMP et al are mechanisms for data collection, i.e., retrieving all > sorts of metrics about how things out in the network are behaving. But > has there been much thought and effort about how to *use* all that > data to operate, troubleshoot, plan or otherwise manage all the > technology involved in whatever the users are doing? The short answer is "yes".? I've been thinking about it for a long time, since the 1980's.? I tend to use the phrase "homeostatic networking". I helped with a DARPA project about "smart networks".? (They weren't really "smart" in the switching plane, the "smarts" was in stuff in the control plane.)? In that project we fed a bunch of information into a modelling system that produced MPLS paths, including backups so that we could do things like switching over within a few milliseconds. The modelling was done externally; results would be disseminated into the network. The idea was to put somewhat autonomous smarts into the routers so that they could manage themselves (to a very limited degree) in accord with the model by watching things like queue lengths, internal drops, etc, and decide when to switchover to a new path definition.? (I was going to use JVMs into Cisco IOS - someone had already done that - to run this code.) We realized, of course, that we were on thin ice - an error could bring down an otherwise operational network in milliseconds. My part was based on the idea "what we are doing isn't improving things, what do we do now?"? To me that was a sign of one of several possible things: ? a) our model was wrong. ? b) the network topology was different than we thought it was (either due to failure, error, or security penetration) ? c) something was not working properly (or had been penetrated) ? d) A new thing had arrived in the structure of the net (all kinds of reasons, including security penetration) ? etc. In our view that would trigger entry into a "troubleshooting" mode rather than a control/management mode.? That would invoke all kinds of tools, some of which would scare security managers (and thus needed to be carefully wielded by a limited cadre of people.) One of the things that fell out of this is that we lack something that I call a database of network pathology.? It would begin with a collection of anecdotal data about symptoms and the reasoning chain (including tests that would need to be performed) to work backwards towards possible causes. (Back in the 1990's I began some test implementations of pieces of this - originally in Prolog.? I've since taken a teensy tiny part of that and incorporated it into one of our protocol testing products.? But it is just a tiny piece, mainly some data structures to represent the head of a reverse-reasoning chain of logic.) In broader sense several other things were revealed. 
One was that we are deeply underinvesting in our network diagnostic and repair technology. And as we build ever higher and thicker security walls we are making it more and more difficult to figure out what is going awry and correcting it. And that, in turn, raises questions about whether we are going to need to create a kind of network priesthood of people who are privileged to go into the depths of networks, often across administrative boundaries, going where privacy and security concerns must be honored. As a lawyer who has lived for decades legally bound to such obligations I do not feel that this is a bad thing but many others do not feel the same way that I do about a highly privileged class of network repair people.

Another thing that I have realized along the way is that we need to look to biology for guidance. Living things are very robust; they survive changes that would collapse many of our human creations. How do they do that? Well, first we have to realize that in biology, death is a useful tool that we often can't accept in our technical systems.

But as we dig deeper into why biological things survive while human things don't we find that evolution usually does not throw out existing solutions to problems, but layers on new solutions. All of these are always active, pulling with and against one another, but the newer ones tend to dominate. So as a tree faces a 1000-year drought it first pulls the latest solutions from its genetic bag of tricks, like folding leaves down to reduce evaporation. But when that doesn't do the job older solutions start to become top-dog and exercise control.

It is that competition of solutions in biology that provides robustness. The goal is survival. Optimal use of resources comes into play only as an element of survival.

But on our networks we too often have exactly one solution. And if that solution is brittle or does not extend into a new situation then we have a potential failure. An example of this is how TCP congestion detection and avoidance ran into something new - too many buffers in switching devices - and caused a failure mode: bufferbloat.

> We also discovered quite a few bugs in various SNMP implementations, > where the data being provided were actually quite obviously incorrect. > I wondered at the time whether anyone else had ever tried to actually > use the SNMP data, more than just writing it into a log file.

I still make a surprisingly large part of my income from helping people find and fix SNMP errors. It's an amazingly difficult protocol to implement properly.

My wife and I wrote a paper back in 1996, "Towards Useful Network Management" that remains even 26 years later, in my opinion, a rather useful guide to some things we need. https://www.iwl.com/idocs/towards-useful-network-management

        --karl--

From karl at cavebear.com Sat Mar 26 13:21:30 2022 From: karl at cavebear.com (Karl Auerbach) Date: Sat, 26 Mar 2022 13:21:30 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> Message-ID: <83464e6d-f6c7-d973-b1a6-0eba9d439693@cavebear.com>

On 3/26/22 11:43 AM, Bob Purvy via Internet-history wrote: > Don't come at me for Open View! It was one of those products that they'd > buy and keep by the door so the VP would see it, while the real operators > continued using ping and traceroute to do their work. 
Down the road a bit, at Sun, Bob Page and I were trying to turn Sun Network Manager into something more useful, using things like my "area manager" concept. But we kinda got distracted and instead started to build a low earth orbit data network using Russian satellites (sent up on Russian rockets) - we had exactly one known user: Steve Roberts who had his Sparcstation-equipped bicycle. I am probably one of the few people who use the phrase "solar blanking" in networking conversations (I had previously worked with geo-sync satellites where that was a very serious issue.)

        --karl--

From geoff at iconia.com Sat Mar 26 15:07:33 2022 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Sat, 26 Mar 2022 12:07:33 -1000 Subject: [ih] Eric Allman, the Sendmail DEBUG command and "people who are privileged to go into the depths of networks, often across administrative boundaries..." (was GOSIP & compliance In-Reply-To: <2ccc8241-1933-932d-9f15-cdf1d5bbb242@cavebear.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> <2ccc8241-1933-932d-9f15-cdf1d5bbb242@cavebear.com> Message-ID:

vis-a-vis On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach who wrote: "... people who are privileged to go into the depths of networks, often across administrative boundaries, going where privacy and security concerns must be honored..."

an Internet History anecdote "relating" to the above with regards to the "history and purpose" of the Sendmail DEBUG command that was responsible for the Robert Tappan Morris Internet Worm Incident in 1988 as excerpted from a "Hillside Club Fireside Meeting: Eric Allman" last year starting at about 51 mins and 15 secs:

*Tim Pozar*: *"... geoff goodfellow is back here saying could you give some backstory with respect to the Sendmail DEBUG command that led to the Robert Tappan Morris 1988 Internet Worm Incident?"*

*Eric Allman: *"Actually, this is a good story. So for those who don't know, I put in a command in Sendmail that I could connect to remotely and say DEBUG. And it would give me basically permissions that I wasn't supposed to have on that computer. Big permissions on that computer. So why did I do something like that? Well, it turns out that this is back when Sendmail was being used on campus, and I was at least a part time student, and there was a bug on one of the computers, but it was one of the computers that was used for administrative computing. And they said, So there's this bug. And I said, well, let me come in and look at it. They said, oh, we can't let you onto that computer. You're a student. You don't have authorization to get to that computer. Well, I can't fix your problem then. Oh, no, you have to fix our problem. I can't. You have to. You can't. You have to. They wouldn't let me onto the computer, but I did send them saying something like, here's a new version (of sendmail). I've done some stuff to it. Why don't you install it and try it out? And that gave me the access to the machine that I needed to actually fix their problem. And if they had never done that, the DEBUG command would never have happened. It was like they were unrealistic about the security. Now it is totally my fault that I did not immediately remove the DEBUG command. And that was, frankly, because, wow, it was so useful there. I might need that again. Here we go. 
And at some point I kind of forgot about it and it was out way too far on the net. That was just pure stupidity. I apologize for that." https://www.youtube.com/watch?v=j6h-jCxtSDA *Hillside Club Fireside Meeting: Eric Allman* *"On January 1, 1983, the Internet was born from the ashes of the ARPAnet, and sendmail was already there. Written by Eric Allman as a stopgap measure in the early 1980s, it grew with the Internet, at one point delivering around 90% of all the email on the network.* *The early developers of the Internet believed that "universal communication" would promote democracy and bring people closer together. Things didn't work out that way. Many folks, including Eric, gave away their work for free. That changed too. * *Arlene Baxter engages Eric Allman in conversation about those early, heady days as electronic communication began to be an essential part of all of our lives. This conversation will discuss the origins of sendmail, the attitudes of the time, and how the Internet grew and changed over the years."* On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach via Internet-history < internet-history at elists.isoc.org> wrote: > > On 3/26/22 10:30 AM, Jack Haverty via Internet-history wrote: > > SNMP et al are mechanisms for data collection, i.e., retrieving all > > sorts of metrics about how things out in the network are behaving. But > > has there been much thought and effort about how to *use* all that > > data to operate, troubleshoot, plan or otherwise manage all the > > technology involved in whatever the users are doing? > > The short answer is "yes". I've been thinking about it for a long time, > since the 1980's. I tend to use the phrase "homeostatic networking". > > I helped with a DARPA project about "smart networks". (They weren't > really "smart" in the switching plane, the "smarts" was in stuff in the > control plane.) In that project we fed a bunch of information into a > modelling system that produced MPLS paths, including backups so that we > could do things like switching over within a few milliseconds. The > modelling was done externally; results would be disseminated into the > network. The idea was to put somewhat autonomous smarts into the routers > so that they could manage themselves (to a very limited degree) in > accord with the model by watching things like queue lengths, internal > drops, etc, and decide when to switchover to a new path definition. (I > was going to use JVMs into Cisco IOS - someone had already done that - > to run this code.) > > We realized, of course, that we were on thin ice - an error could bring > down an otherwise operational network in milliseconds. > > My part was based on the idea "what we are doing isn't improving things, > what do we do now?" To me that was a sign of one of several possible > things: > > a) our model was wrong. > > b) the network topology was different than we thought it was (either > due to failure, error, or security penetration) > > c) something was not working properly (or had been penetrated) > > d) A new thing had arrived in the structure of the net (all kinds of > reasons, including security penetration) > > etc. > > In our view that would trigger entry into a "troubleshooting" mode > rather than a control/management mode. That would invoke all kinds of > tools, some of which would scare security managers (and thus needed to > be carefully wielded by a limited cadre of people.) > > One of the things that fell out of this is that we lack something that I > call a database of network pathology. 
It would begin with a collection > of anecdotal data about symptoms and the reasoning chain (including > tests that would need to be performed) to work backwards towards > possible causes. > > (Back in the 1990's I began some test implementations of pieces of this > - originally in Prolog. I've since taken a teensy tiny part of that and > incorporated it into one of our protocol testing products. But it is > just a tiny piece, mainly some data structures to represent the head of > a reverse-reasoning chain of logic.) > > In broader sense several other things were revealed. > > One was that we are deeply under investing in our network diagnostic and > repair technology. And as we build ever higher and thicker security > walls we are making it more and more difficult to figure out what is > going awry and correcting it. And that, in turn, raises questions > whether we are going to need to create a kind of network priesthood of > people who are privileged to go into the depths of networks, often > across administrative boundaries, going where privacy and security > concerns must be honored. As a lawyer who has lived for decades legally > bound to such obligations I do not feel that this is a bad thing but > many others do not feel the same way that I do about a highly privileged > class of network repair people. > > Another thing that I have realized along the way is that we need to look > to biology for guidance. Living things are very robust; they survive > changes that would collapse many of our human creations. How do they do > that? Well, first we have to realize that in biology, death is a useful > tool that we often can't accept in our technical systems. > > But as we dig deeper into why biological things survive while human > things don't we find that evolution usually does not throw out existing > solutions to problems, but layers on new solutions. All of these are > always active, pulling with and against one another, but the newer ones > tend to dominate. So as a tree faces a 1000 year drought if first pulls > the latest solutions from its genetic bag of tricks, like folding leaves > down to reduce evaporation. But when that doesn't do the job older > solutions start to become top-dog and exercise control. > > It is that competition of solutions in biology that provides > robustness. The goal is survival. Optimal use of resources comes into > play only as an element of survival. > > But on our networks we too often have exactly one solution. And if that > solution is brittle or does not extend into a new situation then we have > a potential failure. An example of this is how TCP congestion detection > and avoidance ran into something new - too many buffers in switching > devices - and caused a failure mode: bufferbloat. > > > > We also discovered quite a few bugs in various SNMP implementations, > > where the data being provided were actually quite obviously incorrect. > > I wondered at the time whether anyone else had ever tried to actually > > use the SNMP data, more than just writing it into a log file. > > I still make a surprising large part of my income from helping people > find and fix SNMP errors. It's an amazing difficult protocol to > implement properly. > > My wife and I wrote a paper back in 1996, "Towards Useful Network > Management" that remains even 26 years later, in my opinion, a rather > useful guide to some things we need. 
> > https://www.iwl.com/idocs/towards-useful-network-management > > --karl-- > > -- Geoff.Goodfellow at iconia.com living as The Truth is True From bpurvy at gmail.com Sun Mar 27 12:20:20 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 27 Mar 2022 12:20:20 -0700 Subject: [ih] Speaking of Minitel: Here's an oldie NO one remembers Message-ID: In the late 70's / early 80's, a friend of mine worked at a company CXI in Irvine, CA. This was a real company, with maybe 250 people. I can't even find it with Google now. Their whole thesis was "people in offices don't want computers -- they want *telephones*" with a big PBX behind it. I distinctly remember a full page ad, probably in *Datamation*, touting their system, "The Rose", and calling it Office Humanation. Because, you know... office peope don't want a stupid *computer*; they want a big fancy phone with all the functions. From ocl at gih.com Sun Mar 27 12:50:31 2022 From: ocl at gih.com (=?UTF-8?Q?Olivier_MJ_Cr=c3=a9pin-Leblond?=) Date: Sun, 27 Mar 2022 22:50:31 +0300 Subject: [ih] GOSIP & compliance In-Reply-To: <377BFE55-8693-4814-B6FA-925D47ED7A13@me.com> References: <20220320164152.05C2A3974C06@ary.qy> <377BFE55-8693-4814-B6FA-925D47ED7A13@me.com> Message-ID: <892bc50c-0897-1932-4288-511f800595e0@gih.com> I remember reading this historical perspective and it was pretty accurate. I could add a couple of first hand experiences with France Telecom, Minitel, Transpac and VTCOM. In 1997 I had a meeting with top level executives of France Telecom, Transpac and VTCOM, discussing an Internet project. All the way then, the suited people I had in front of me, then a young postgraduate, were obviously very proud of the Minitel's success. I would say, too proud, to the extent that they were completely blinded by it - and as a Frenchman, I can say that it's sometimes a trait that we have in France - national pride, for better or for worse. In this case, it was for worse. I told them the Minitel was on life support, based on a technology that was developed in the 70s. I knew everything about its limitations, including its intended evolution through Broadband ISDN - B-ISDN, which was also going to be a non starter due to its lack of flexibility and top down control. But that fell on deaf ears: Transpac was France's National Pride, a stable network running on X.25. VTCOM was the main supplier/controller of content - serving millions of paying customers. Why would they need to look at anything else than their pride and joy? I told them Transpac was on its last legs due to its inability to support faster transfer speeds. And top down control of all content was not scalable. The Internet was the future. A decentralised network where everyone could produce content. The laughter around the room was not hidden. "With the Minitel we make a lot of money because people pay for their services. How is your... Internet ever going to make money, when its services are free? Nobody has ever made any money giving things away for free. You are living in a dream world. An anglo saxon dream world. Not everyone speaks English, you know?" After an hour and fifteen minutes, I was promptly kicked out of their office. They weren't disrespectful, but didn't hide the fact that they had a strategy and it definitely wasn't mine. I won't share the names, but these were added to a long long list of people suffering the Peter principle that I have met in my life. 
Kindest regards,

Olivier (with apologies for answering an old thread - I have been very busy and am just catching up)

On 20/03/2022 19:53, Ole Jacobsen via Internet-history wrote: > For some historical perspective: In 1994 we published an article about Minitel > in ConneXions--The Interoperability Report. It's the first article in the April > issue. The entire archive of ConneXions is available from the Charles Babbage > Institute, but for easy access to this particular issue I've uploaded a copy > to my directory on Yikes. > > See: > > https://www.yikes.com/~ole/store/ConneXions8-04_Apr1994.pdf > > Ole > >> On Mar 20, 2022, at 09:45, Bob Purvy via Internet-history wrote: >> >> Well, don't people in France ever want to look up numbers in Germany, >> England, and Italy? >> >> Also, there were lots of other apps on top of Minitel, including a dating >> service! It did replace calls for directory assistance, but then people >> discovered it could do a lot of other things, too. >> >> On Sun, Mar 20, 2022 at 9:41 AM John Levine wrote: >> >>> It appears that Bob Purvy via Internet-history said: >>>> One still wonders why the other European PTTs didn't do their own and >>>> interoperate with Minitel. Too much NIH? >>> Remember that the business case for Minitel was that it would replace >>> paper phone books and directory assistance operators. Everything else >>> was an add-on. You didn't need to interoperate to do that. >>> >>> R's, >>> John >>> > Ole J. Jacobsen > Editor and Publisher > The Internet Protocol Journal > Office: +1 415-550-9433 > Cell: +1 415-370-4628 > Web: protocoljournal.org > E-mail:olejacobsen at me.com > E-mail:ole at protocoljournal.org > Skype: organdemo > > > -- Olivier MJ Crepin-Leblond, PhD http://www.gih.com/ocl.html

From brian.e.carpenter at gmail.com Sun Mar 27 13:02:48 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 28 Mar 2022 09:02:48 +1300 Subject: [ih] Speaking of Minitel: Here's an oldie NO one remembers In-Reply-To: References: Message-ID: <184c5c7f-53e1-69e9-194e-96a3d2abe001@gmail.com>

Reminds me of the ICL "One Per Desk" (yes, that was the name of the product).

Computer? No. Word Processor? No. Telephone? No. In fact, after the sales pitch I still didn't know what it was or why I'd ever want one (even the free ones that the rep was offering to CERN as a trial). In all seriousness, we couldn't see the slightest use for it. In particular, it had no network connection (except a phone line), so it was a hard sell to a networking group.

https://en.wikipedia.org/wiki/One_Per_Desk (Worth reading the "Legacy" section.)

Oh, and a Google search for "Office Humanation" found the ad, no trouble: Page 65 at http://www.bitsavers.org/magazines/Datamation/198402.pdf It also found a Computerworld ad and other related stuff, e.g. https://bizstanding.com/p/the+office+humanation+company-106997146

Regards Brian Carpenter

On 28-Mar-22 08:20, Bob Purvy via Internet-history wrote: > In the late 70's / early 80's, a friend of mine worked at a company CXI in > Irvine, CA. This was a real company, with maybe 250 people. I can't even > find it with Google now. > > Their whole thesis was "people in offices don't want computers -- they want > *telephones*" with a big PBX behind it. > > I distinctly remember a full page ad, probably in *Datamation*, touting > their system, "The Rose", and calling it Office Humanation. > > Because, you know... office people don't want a stupid *computer*; they want > a big fancy phone with all the functions. 
> From bpurvy at gmail.com Sun Mar 27 13:11:42 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Sun, 27 Mar 2022 13:11:42 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <892bc50c-0897-1932-4288-511f800595e0@gih.com> References: <20220320164152.05C2A3974C06@ary.qy> <377BFE55-8693-4814-B6FA-925D47ED7A13@me.com> <892bc50c-0897-1932-4288-511f800595e0@gih.com> Message-ID: LOL. * "How is y*our... Internet ever going to make money, when its services are free?" reminds me of the line in *Who Framed Roger Rabbit?* * "Who's going to drive on your... 'freeway' when the Red Car costs a nickel?"* On Sun, Mar 27, 2022 at 12:50 PM Olivier MJ Cr?pin-Leblond via Internet-history wrote: > I remember reading this historical perspective and it was pretty accurate. > > I could add a couple of first hand experiences with France Telecom, > Minitel, Transpac and VTCOM. > > In 1997 I had a meeting with top level executives of France Telecom, > Transpac and VTCOM, discussing an Internet project. All the way then, > the suited people I had in front of me, then a young postgraduate, were > obviously very proud of the Minitel's success. I would say, too proud, > to the extent that they were completely blinded by it - and as a > Frenchman, I can say that it's sometimes a trait that we have in France > - national pride, for better or for worse. > > In this case, it was for worse. I told them the Minitel was on life > support, based on a technology that was developed in the 70s. I knew > everything about its limitations, including its intended evolution > through Broadband ISDN - B-ISDN, which was also going to be a non > starter due to its lack of flexibility and top down control. But that > fell on deaf ears: Transpac was France's National Pride, a stable > network running on X.25. VTCOM was the main supplier/controller of > content - serving millions of paying customers. Why would they need to > look at anything else than their pride and joy? I told them Transpac was > on its last legs due to its inability to support faster transfer speeds. > And top down control of all content was not scalable. The Internet was > the future. A decentralised network where everyone could produce > content. The laughter around the room was not hidden. "With the Minitel > we make a lot of money because people pay for their services. How is > your... Internet ever going to make money, when its services are free? > Nobody has ever made any money giving things away for free. You are > living in a dream world. An anglo saxon dream world. Not everyone speaks > English, you know?" > After an hour and fifteen minutes, I was promptly kicked out of their > office. They weren't disrespectful, but didn't hide the fact that they > had a strategy and it definitely wasn't mine. > I won't share the names, but these were added to a long long list of > people suffering the Peter principle that I have met in my life. > > Kindest regards, > > Olivier > (with apologies for answering an old thread - I have been very busy am > just catching up) > > On 20/03/2022 19:53, Ole Jacobsen via Internet-history wrote: > > For some historical perspective: In 1994 we published an article about > Minitel > > in ConneXions--The Interoperability Report. It's the first article in > the April > > issue. The entire archive of ConneXions is available from the Charles > Babbage > > Institute, but for easy access to this particular issue I've uploaded a > copy > > to my directory on Yikes. 
> > > > See: > > > > https://www.yikes.com/~ole/store/ConneXions8-04_Apr1994.pdf < > https://www.yikes.com/~ole/store/ConneXions8-04_Apr1994.pdf> > > > > Ole > > > >> On Mar 20, 2022, at 09:45, Bob Purvy via Internet-history< > internet-history at elists.isoc.org> wrote: > >> > >> Well, don't people in France ever want to look up numbers in Germany, > >> England, and Italy? > >> > >> Also, there were lots of other apps on top of Minitel, including a > dating > >> service! It did replace calls for directory assistance, but then people > >> discovered it could do a lot of other things, too. > >> > >> On Sun, Mar 20, 2022 at 9:41 AM John Levine wrote: > >> > >>> It appears that Bob Purvy via Internet-history > said: > >>>> One still wonders why the other European PTTs didn't do their own and > >>>> interoperate with Minitel. Too much NIH? > >>> Remember that the business case for Minitel was that it would replace > >>> paper phone books and directory assistance operators. Everything else > >>> was an add-on. You didn't need to interoperate to do that. > >>> > >>> R's, > >>> John > >>> > > Ole J. Jacobsen > > Editor and Publisher > > The Internet Protocol Journal > > Office: +1 415-550-9433 > > Cell: +1 415-370-4628 > > Web: protocoljournal.org > > E-mail:olejacobsen at me.com > > E-mail:ole at protocoljournal.org > > Skype: organdemo > > > > > > > > -- > Olivier MJ Cr?pin-Leblond, PhD > http://www.gih.com/ocl.html > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From mark at good-stuff.co.uk Sun Mar 27 13:24:17 2022 From: mark at good-stuff.co.uk (Mark Goodge) Date: Sun, 27 Mar 2022 21:24:17 +0100 Subject: [ih] Speaking of Minitel: Here's an oldie NO one remembers In-Reply-To: <184c5c7f-53e1-69e9-194e-96a3d2abe001@gmail.com> References: <184c5c7f-53e1-69e9-194e-96a3d2abe001@gmail.com> Message-ID: On 27/03/2022 21:02, Brian E Carpenter via Internet-history wrote: > Reminds me of the ICL "One Per Desk" (yes, that was the name of the > product). > > Computer? No. Word Processor? No. Telephone? No. In fact, after the sales > pitch I still didn't know what it was or why I'd ever want one (even the > free ones that the rep was offering to CERN as a trial). In all > seriousness, > we couldn't see the slightest use for it. In particular, it had no network > connection (except a phone line), so it was a hard sell to a networking > group. It was a computer, it just wasn't a networked one. So yes, trying to sell it to you was rather pointless. But that's sales departments for you; they don't necessarily have any real understanding of what they're selling or who they are selling to! The OPD was, in many respects, quite cutting edge for its time. The lack of networking capability wasn't a problem for its target market - office computers, at the time, tended to be standalone machines rather than networked, and the OPD was no different. And it was a competent (again, for the time) word processor - it was more advanced than Apple's products of the era, and better value for money than the equivalent IBM PC. But, like a lot of consumer computer products of the 80s, it went down an evolutionary dead end. > https://en.wikipedia.org/wiki/One_Per_Desk > (Worth reading the "Legacy" section.) That neatly illustrates one of its biggest problems. The cost was such that the people who would have used it weren't given it. And by the time PCs genuinely were "one per desk", it was IBM PCs and their clones sitting on those desks. 
Mark From julf at Julf.com Sun Mar 27 23:25:36 2022 From: julf at Julf.com (Johan Helsingius) Date: Mon, 28 Mar 2022 08:25:36 +0200 Subject: [ih] Speaking of Minitel: Here's an oldie NO one remembers In-Reply-To: <184c5c7f-53e1-69e9-194e-96a3d2abe001@gmail.com> References: <184c5c7f-53e1-69e9-194e-96a3d2abe001@gmail.com> Message-ID: <0a204cc7-7daa-e0e1-fde1-2117741c3d71@Julf.com> On 27/03/2022 22:02, Brian E Carpenter via Internet-history wrote: > Computer? No. Word Processor? No. Telephone? No. In fact, after the sales > pitch I still didn't know what it was or why I'd ever want one (even the > free ones that the rep was offering to CERN as a trial). In all > seriousness, > we couldn't see the slightest use for it. In particular, it had no network > connection (except a phone line), so it was a hard sell to a networking > group. The Bell Labs/Lucent/Philips Shannon/IS2630 web phone (running Inferno, the commercial version of Plan 9) at least had networking (but only over phone line), but no VoIP, even if it was a phone... https://i.imgur.com/TV7x46ml.png I still have a couple of those somewhere in a storage unit... Julf From tte at cs.fau.de Mon Mar 28 00:51:09 2022 From: tte at cs.fau.de (Toerless Eckert) Date: Mon, 28 Mar 2022 09:51:09 +0200 Subject: [ih] Arpanet on Kitchen computer ? (was: Re: Speaking of Minitel: Here's an oldie NO one remembers) Message-ID: Even more fun ? I had seen one of those Kitchen Computers at the CHS, but only after reading the wikipedia article did it dawn on me, that one could have maybe have built the ARPANET with them given how they used the IMP minicomputer hardware: https://en.wikipedia.org/wiki/Honeywell_316#Kitchen_Computer (maybe not enough network interfaces though ;-) On Mon, Mar 28, 2022 at 08:25:36AM +0200, Johan Helsingius via Internet-history wrote: > On 27/03/2022 22:02, Brian E Carpenter via Internet-history wrote: > > > Computer? No. Word Processor? No. Telephone? No. In fact, after the sales > > pitch I still didn't know what it was or why I'd ever want one (even the > > free ones that the rep was offering to CERN as a trial). In all > > seriousness, > > we couldn't see the slightest use for it. In particular, it had no network > > connection (except a phone line), so it was a hard sell to a networking > > group. > > The Bell Labs/Lucent/Philips Shannon/IS2630 web phone (running > Inferno, the commercial version of Plan 9) at least had networking > (but only over phone line), but no VoIP, even if it was a phone... > > https://i.imgur.com/TV7x46ml.png > > I still have a couple of those somewhere in a storage unit... > > Julf > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From tte at cs.fau.de Mon Mar 28 01:21:55 2022 From: tte at cs.fau.de (Toerless Eckert) Date: Mon, 28 Mar 2022 10:21:55 +0200 Subject: [ih] Eric Allman, the Sendmail DEBUG command and "people who are privileged to go into the depths of networks, often across administrative boundaries..." (was GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <6a087930-55b9-7da6-8f65-3f421dc3e404@dcrocker.net> <2ccc8241-1933-932d-9f15-cdf1d5bbb242@cavebear.com> Message-ID: There is actually a decades long process about remote troubleshooting originating from experiences like this. 
On routers, it is the AFAIK still ongoing honorous process of requesting all type of diagnostics output "show foobar"/"debug something" and so on. And as the remote expert you had to know how to play 4 round lookahead chess to come up with all possible requested commands to minimize RTTs, which could save a week or more (24 hours RTT for each request/reply was quite common). And that lookahead of course doesn't really work for actual level 3 support where it's not user misconfig but an actual novel product issue. Whenever one therefore was as an expert in a position of power, one would flat out reject this official process and request direct access to the nodes of interest. I don't think there was ever a more structured approach to evolve this process. For example, i don't think there is any attribution mechanism in YANG to allow defining "private" elements, such as passwords, or otherwise not-to-be-exposed information for the purpose of troubleshooting. Instead, this is all still coded ad-hoc in product specific fashions. With the lack of a more structured solution to these problems of troubleshooting, it is no wonder that we continue to see those "backdoors" in products IMHO: Backdoors are what happens when customers indulge in wishful thinking. CHeers Toerless On Sat, Mar 26, 2022 at 12:07:33PM -1000, the keyboard of geoff goodfellow via Internet-history wrote: > vis-a-vis On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach who wrote: > "... people who are privileged to go into the depths of networks, > often across administrative boundaries, going where privacy and > security concerns must be honored..." > > an Internet History anecdote "relating" to the above with regards to the > "history and purpose" of the Sendmail DEBUG command that was responsible > for the Robert Tappen Morris Internet Worm Incident in 1988 as excerpted > from a "Hillside Club Fireside Meeting: Eric Allman" last year starting at > about 51 mins and 15 secs: > > *Tim Pozar*: *"... geoff goodfellow is back here saying could you give some > backstory with respect to the Sendmail DEBUG command that led to the Robert > Tappen Morris 1988 Internet Worm Incident?"* > > *Eric Allman: *"Actually, this is a good story. So for those who don't > know, I put in a command in Sendmail that I could connect to remotely and > say DEBUG. And it would give me basically permissions that I wasn't > supposed to have on that computer. Big permissions on that computer. So why > did I do something like that? Well, it turns out that this is back when > Sendmail was being used on campus, and I was at least a part time student, > and there was a bug on one of the computers, but it was one of the > computers that was used for administrative computing. And they said, So > there's this bug. And I said, well, let me come in and look at it. They > said, oh, we can't let you onto that computer. You're a student. You don't > have authorization to get to that computer. Well, I can't fix your problem > then. Oh, no, you have to fix our problem. I can't. You have to. You can't. > You have to. They wouldn't let me onto the computer, but I did send them > saying something like, here's a new version (of sendmail). I've done some > stuff to it. Why don't you install it and try it out? And that gave me the > access to the machine that I needed to actually fix their problem. And if > they had never done that, the DEBUG command would never have happened. It > was like they were unrealistic about the security. 
Now it is totally my > fault that I did not immediately remove the DEBUG command. And that was, > frankly, because, wow, it was so useful there. I might need that again. > Here we go. And at some point I kind of forgot about it and it was out way > too far on the net. That was just pure stupidity. I apologize for that." > > https://www.youtube.com/watch?v=j6h-jCxtSDA > > *Hillside Club Fireside Meeting: Eric Allman* > *"On January 1, 1983, the Internet was born from the ashes of the ARPAnet, > and sendmail was already there. Written by Eric Allman as a stopgap measure > in the early 1980s, it grew with the Internet, at one point delivering > around 90% of all the email on the network.* > > > > *The early developers of the Internet believed that "universal > communication" would promote democracy and bring people closer together. > Things didn't work out that way. Many folks, including Eric, gave away > their work for free. That changed too. * > *Arlene Baxter engages Eric Allman in conversation about those early, heady > days as electronic communication began to be an essential part of all of > our lives. This conversation will discuss the origins of sendmail, the > attitudes of the time, and how the Internet grew and changed over the > years."* > > > On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > > On 3/26/22 10:30 AM, Jack Haverty via Internet-history wrote: > > > SNMP et al are mechanisms for data collection, i.e., retrieving all > > > sorts of metrics about how things out in the network are behaving. But > > > has there been much thought and effort about how to *use* all that > > > data to operate, troubleshoot, plan or otherwise manage all the > > > technology involved in whatever the users are doing? > > > > The short answer is "yes". I've been thinking about it for a long time, > > since the 1980's. I tend to use the phrase "homeostatic networking". > > > > I helped with a DARPA project about "smart networks". (They weren't > > really "smart" in the switching plane, the "smarts" was in stuff in the > > control plane.) In that project we fed a bunch of information into a > > modelling system that produced MPLS paths, including backups so that we > > could do things like switching over within a few milliseconds. The > > modelling was done externally; results would be disseminated into the > > network. The idea was to put somewhat autonomous smarts into the routers > > so that they could manage themselves (to a very limited degree) in > > accord with the model by watching things like queue lengths, internal > > drops, etc, and decide when to switchover to a new path definition. (I > > was going to use JVMs into Cisco IOS - someone had already done that - > > to run this code.) > > > > We realized, of course, that we were on thin ice - an error could bring > > down an otherwise operational network in milliseconds. > > > > My part was based on the idea "what we are doing isn't improving things, > > what do we do now?" To me that was a sign of one of several possible > > things: > > > > a) our model was wrong. > > > > b) the network topology was different than we thought it was (either > > due to failure, error, or security penetration) > > > > c) something was not working properly (or had been penetrated) > > > > d) A new thing had arrived in the structure of the net (all kinds of > > reasons, including security penetration) > > > > etc. 
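As a rough illustration of the kind of router-local switchover rule described above, here is a minimal sketch; it is not the project's actual code, and the counters, thresholds, and path names are hypothetical.

    # A minimal sketch (hypothetical names and thresholds), not the DARPA
    # project's code: watch a couple of locally observable counters and
    # decide when to fall over to a pre-distributed backup path.
    from dataclasses import dataclass

    @dataclass
    class LocalStats:
        queue_depth: int       # packets queued on the primary path
        drops_per_sec: float   # locally observed drop rate

    QUEUE_LIMIT = 200          # hypothetical trigger thresholds
    DROP_LIMIT = 50.0

    def choose_path(stats: LocalStats, primary: str, backup: str) -> str:
        """Pick the path to use for the next interval."""
        if stats.queue_depth > QUEUE_LIMIT or stats.drops_per_sec > DROP_LIMIT:
            return backup      # things are not improving: switch to the backup
        return primary

    # choose_path(LocalStats(350, 3.0), "lsp-1", "lsp-1-backup") -> "lsp-1-backup"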
> > > > In our view that would trigger entry into a "troubleshooting" mode > > rather than a control/management mode. That would invoke all kinds of > > tools, some of which would scare security managers (and thus needed to > > be carefully wielded by a limited cadre of people.) > > > > One of the things that fell out of this is that we lack something that I > > call a database of network pathology. It would begin with a collection > > of anecdotal data about symptoms and the reasoning chain (including > > tests that would need to be performed) to work backwards towards > > possible causes. > > > > (Back in the 1990's I began some test implementations of pieces of this > > - originally in Prolog. I've since taken a teensy tiny part of that and > > incorporated it into one of our protocol testing products. But it is > > just a tiny piece, mainly some data structures to represent the head of > > a reverse-reasoning chain of logic.) > > > > In broader sense several other things were revealed. > > > > One was that we are deeply under investing in our network diagnostic and > > repair technology. And as we build ever higher and thicker security > > walls we are making it more and more difficult to figure out what is > > going awry and correcting it. And that, in turn, raises questions > > whether we are going to need to create a kind of network priesthood of > > people who are privileged to go into the depths of networks, often > > across administrative boundaries, going where privacy and security > > concerns must be honored. As a lawyer who has lived for decades legally > > bound to such obligations I do not feel that this is a bad thing but > > many others do not feel the same way that I do about a highly privileged > > class of network repair people. > > > > Another thing that I have realized along the way is that we need to look > > to biology for guidance. Living things are very robust; they survive > > changes that would collapse many of our human creations. How do they do > > that? Well, first we have to realize that in biology, death is a useful > > tool that we often can't accept in our technical systems. > > > > But as we dig deeper into why biological things survive while human > > things don't we find that evolution usually does not throw out existing > > solutions to problems, but layers on new solutions. All of these are > > always active, pulling with and against one another, but the newer ones > > tend to dominate. So as a tree faces a 1000 year drought if first pulls > > the latest solutions from its genetic bag of tricks, like folding leaves > > down to reduce evaporation. But when that doesn't do the job older > > solutions start to become top-dog and exercise control. > > > > It is that competition of solutions in biology that provides > > robustness. The goal is survival. Optimal use of resources comes into > > play only as an element of survival. > > > > But on our networks we too often have exactly one solution. And if that > > solution is brittle or does not extend into a new situation then we have > > a potential failure. An example of this is how TCP congestion detection > > and avoidance ran into something new - too many buffers in switching > > devices - and caused a failure mode: bufferbloat. > > > > > > > We also discovered quite a few bugs in various SNMP implementations, > > > where the data being provided were actually quite obviously incorrect. 
> > > I wondered at the time whether anyone else had ever tried to actually > > > use the SNMP data, more than just writing it into a log file. > > > > I still make a surprising large part of my income from helping people > > find and fix SNMP errors. It's an amazing difficult protocol to > > implement properly. > > > > My wife and I wrote a paper back in 1996, "Towards Useful Network > > Management" that remains even 26 years later, in my opinion, a rather > > useful guide to some things we need. > > > > https://www.iwl.com/idocs/towards-useful-network-management > > > > --karl-- > > > > > -- > Geoff.Goodfellow at iconia.com > living as The Truth is True > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- --- tte at cs.fau.de From cdel at firsthand.net Mon Mar 28 05:03:54 2022 From: cdel at firsthand.net (christian de larrinaga) Date: Mon, 28 Mar 2022 13:03:54 +0100 Subject: [ih] GOSIP & compliance In-Reply-To: <892bc50c-0897-1932-4288-511f800595e0@gih.com> References: <20220320164152.05C2A3974C06@ary.qy> <377BFE55-8693-4814-B6FA-925D47ED7A13@me.com> <892bc50c-0897-1932-4288-511f800595e0@gih.com> Message-ID: <8735j2i811.fsf@firsthand.net> This made me laugh. Reminds me of a meeting with a senior IBM exec in UK early 1982. I'd been called in by a third party to advise on as I had a systems business specialising in 16 bit microcomputer solutions, for all of 18 months by then. The exec sat on the phone behind his desk with his crystal water jug status on display, talking to whoever about me .. and laughing dismissively .. "Nah he's not "big iron". That's been a badge of honour ever since C Olivier MJ Cr?pin-Leblond via Internet-history writes: > I remember reading this historical perspective and it was pretty accurate. > > I could add a couple of first hand experiences with France Telecom, > Minitel, Transpac and VTCOM. > > In 1997 I had a meeting with top level executives of France Telecom, > Transpac and VTCOM, discussing an Internet project. All the way then, > the suited people I had in front of me, then a young postgraduate, > were obviously very proud of the Minitel's success. I would say, too > proud, to the extent that they were completely blinded by it - and as > a Frenchman, I can say that it's sometimes a trait that we have in > France - national pride, for better or for worse. > > In this case, it was for worse. I told them the Minitel was on life > support, based on a technology that was developed in the 70s. I knew > everything about its limitations, including its intended evolution > through Broadband ISDN - B-ISDN, which was also going to be a non > starter due to its lack of flexibility and top down control. But that > fell on deaf ears: Transpac was France's National Pride, a stable > network running on X.25. VTCOM was the main supplier/controller of > content - serving millions of paying customers. Why would they need to > look at anything else than their pride and joy? I told them Transpac > was on its last legs due to its inability to support faster transfer > speeds. And top down control of all content was not scalable. The > Internet was the future. A decentralised network where everyone could > produce content. The laughter around the room was not hidden. "With > the Minitel we make a lot of money because people pay for their > services. How is your... Internet ever going to make money, when its > services are free? Nobody has ever made any money giving things away > for free. 
You are living in a dream world. An anglo saxon dream > world. Not everyone speaks English, you know?" > After an hour and fifteen minutes, I was promptly kicked out of their > office. They weren't disrespectful, but didn't hide the fact that they > had a strategy and it definitely wasn't mine. > I won't share the names, but these were added to a long long list of > people suffering the Peter principle that I have met in my life. > > Kindest regards, > > Olivier > (with apologies for answering an old thread - I have been very busy am > just catching up) > > On 20/03/2022 19:53, Ole Jacobsen via Internet-history wrote: >> For some historical perspective: In 1994 we published an article about Minitel >> in ConneXions--The Interoperability Report. It's the first article in the April >> issue. The entire archive of ConneXions is available from the Charles Babbage >> Institute, but for easy access to this particular issue I've uploaded a copy >> to my directory on Yikes. >> >> See: >> >> https://www.yikes.com/~ole/store/ConneXions8-04_Apr1994.pdf >> >> Ole >> >>> On Mar 20, 2022, at 09:45, Bob Purvy via Internet-history wrote: >>> >>> Well, don't people in France ever want to look up numbers in Germany, >>> England, and Italy? >>> >>> Also, there were lots of other apps on top of Minitel, including a dating >>> service! It did replace calls for directory assistance, but then people >>> discovered it could do a lot of other things, too. >>> >>> On Sun, Mar 20, 2022 at 9:41 AM John Levine wrote: >>> >>>> It appears that Bob Purvy via Internet-history said: >>>>> One still wonders why the other European PTTs didn't do their own and >>>>> interoperate with Minitel. Too much NIH? >>>> Remember that the business case for Minitel was that it would replace >>>> paper phone books and directory assistance operators. Everything else >>>> was an add-on. You didn't need to interoperate to do that. >>>> >>>> R's, >>>> John >>>> >> Ole J. Jacobsen >> Editor and Publisher >> The Internet Protocol Journal >> Office: +1 415-550-9433 >> Cell: +1 415-370-4628 >> Web: protocoljournal.org >> E-mail:olejacobsen at me.com >> E-mail:ole at protocoljournal.org >> Skype: organdemo >> >> >> > > -- > Olivier MJ Cr?pin-Leblond, PhD > http://www.gih.com/ocl.html -- christian de larrinaga https://firsthand.net From mfidelman at meetinghouse.net Mon Mar 28 08:47:45 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 28 Mar 2022 11:47:45 -0400 Subject: [ih] Speaking of Minitel: Here's an oldie NO one remembers In-Reply-To: References: Message-ID: <09e3d2a0-a7a6-7342-2e01-4654006fd8e6@meetinghouse.net> Bob Purvy via Internet-history wrote: > In the late 70's / early 80's, a friend of mine worked at a company CXI in > Irvine, CA. This was a real company, with maybe 250 people. I can't even > find it with Google now. > > Their whole thesis was "people in offices don't want computers -- they want > *telephones*" with a big PBX behind it. > > I distinctly remember a full page ad, probably in *Datamation*, touting > their system, "The Rose", and calling it Office Humanation. > > Because, you know... office peope don't want a stupid *computer*; they want > a big fancy phone with all the functions. I thought we all wanted Cerebrum Communicators! Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. 
In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From emiliano.spinella at syndeno.com Mon Mar 28 09:41:42 2022 From: emiliano.spinella at syndeno.com (Emiliano Spinella) Date: Mon, 28 Mar 2022 18:41:42 +0200 Subject: [ih] SMTP History Message-ID: Hi everyone, Lately, I have been looking for information regarding the history of SMTP but could not find much information. Basically, I am interested in the initial Email system protocols and how SMTP got its final form. I imagine there must have been multiple protocol alternatives for Email but somehow SMTP became a standard. Were there some relevant milestones that helped SMTP to become broadly adopted? I imagine there must have been some important institution or set of institutions that drove the adoption. Also, is there any reason why POP3 and later IMAP were not part of SMTP? I was wondering if somebody in this list could give me some light. Thanks, Emiliano -- _LEGAL NOTICE: The content of this email message, including the attached files, is confidential and is protected by article 18.3 of the Spanish Constitution, which guarantees the secrecy of communications. If you receive this message in error, please contact the sender to inform them of this fact, and do not broadcast its content or make copies. _ _*** This message has been verified with removal tools for viruses and malicious content *** _ _This legal notice has been automatically incorporated into the message. _ *---------------------------------------------* *AVISO LEGAL: El contenido de este mensaje de correo electr?nico, incluidos los ficheros adjuntos, es confidencial y est? protegido por el art?culo 18.3 de la Constituci?n Espa?ola, que garantiza el secreto de las comunicaciones. Si usted recibe este mensaje por error, por favor p?ngase en contacto con el remitente para informarle de este hecho, y no difunda su contenido ni haga copias. * _*** Este mensaje ha sido verificado con herramientas de eliminaci?n de virus y contenido malicioso *** _ _Este aviso legal ha sido incorporado autom?ticamente al mensaje._ From craig at tereschau.net Mon Mar 28 09:48:59 2022 From: craig at tereschau.net (Craig Partridge) Date: Mon, 28 Mar 2022 10:48:59 -0600 Subject: [ih] GOSIP & compliance In-Reply-To: <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID: Quick comments on Karl's comments (from this note and later). ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to return self-describing vendor specific data without requiring a supplemental MIB definition. This was important because HEMS allowed one to ask for the full status of an aggregate object (such as an interface or even an entire subjection of the MIB) and we wanted folks to be able to add additional data they thought relevant to the aggregate. ASN.1 was the only self-describing data format of the time. The UDP vs. TCP debate was pretty fierce and the experience of the time came down firmly on the UDP side. Recall this was the era of daily congestion collapse of the Internet between 1987 and 1990. Re: doing things in Python. I'm not surprised. The HEMS implementation proved reasonably small at the time. By the way, many of the features noted in CMIP were actually in HEMS and/or SNMP first. 
We were pretty open about playing with ideas at the time and the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to borrow liberally. (Or, at least, that's my recollection). There was a network management project in the late 1980s, name now eludes me but led by Jil Wescott and DARPA funded, that sound similar in goals to what Jack H. describes doing at Oracle. I leaned on wisdom from those folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure out what HEMS should look like. As we do these assessments, it is worth remembering that the operational community of the time was struggling with the immediate challenge of managing networks that were flaky and ran on 68000 processors and where only a few 100K of memory was available for the management protocols. The SNMP team found a way to shoe-horn the key features into that limited footprint and it promptly made a *huge* difference. Craig On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < internet-history at elists.isoc.org> wrote: > ... > BTW, I thought the use of ASN.1/BER for SNMP was far from the best > choice (indeed, from an implementation quality and interoperability > point of view would be hard to find one that was worse.) I preferred > the HEMS proposal as the most elegant, even if it did use XML. > > CMIP had some really good ideas (most particularly with regard to > selecting and filtering data at the server for highly efficient bulk > fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > was very inventive and clever. That kinda demonstrated the potential > viability, rather than the impossibility, of things like CMIP/T, even in > its bloated form. > > Diverting from the main points here: > > About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > and using JSON, throwing out UDP and using TCP (optionally with TLS), > and adding in some of the filtering concepts from CMIP. I preserved > most MIB names and instrumentation variable semantics (and thus > preserving a lot of existing instrumentation code in devices.) > > The resulting running code (in Python) is quite small - on par with the > 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it > runs several decimal orders of magnitude faster than SNMP (in terms of > compute cycles, network activity, and start-to-finish time.) Plus I can > do things like "give me all data on all interfaces with a received > packet error rate greater than 0.1%". I can even safely issue complex > control commands to devices, something that SNMP can't do very well. I > considered doing commercial grade, perhaps open-source, version but it > could have ended up disturbing the then nascent Netconf effort. > > --karl-- > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From mfidelman at meetinghouse.net Mon Mar 28 10:03:12 2022 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Mon, 28 Mar 2022 13:03:12 -0400 Subject: [ih] SMTP History In-Reply-To: References: Message-ID: Emiliano Spinella via Internet-history wrote: > Hi everyone, > > Lately, I have been looking for information regarding the history of SMTP > but could not find much information. > > Basically, I am interested in the initial Email system protocols and how > SMTP got its final form. 
I imagine there must have been multiple protocol > alternatives for Email but somehow SMTP became a standard. > > Were there some relevant milestones that helped SMTP to become broadly > adopted? I imagine there must have been some important institution or set > of institutions that drove the adoption.

Perhaps you should go through the early RFCs on the subject. And discussion around them. Lots of interesting history buried in there.

> > Also, is there any reason why POP3 and later IMAP were not part of SMTP?

Completely different functionality. It kind of helps to understand the technology.

Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown

From vint at google.com Mon Mar 28 10:04:23 2022 From: vint at google.com (Vint Cerf) Date: Mon, 28 Mar 2022 13:04:23 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID: IAB reviewed three proposals (SGMP, HEMS and CMOT). After a lot of discussion, Craig withdrew HEMS, CMOT was seen as "the long term" (smile) and SGMP became SNMP and was the target of immediate adoption effort. https://datatracker.ietf.org/doc/html/rfc1052 v

On Mon, Mar 28, 2022 at 12:49 PM Craig Partridge via Internet-history < internet-history at elists.isoc.org> wrote: > Quick comments on Karl's comments (from this note and later). > > ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to > return self-describing vendor specific data without requiring a > supplemental MIB definition. This was important because HEMS allowed one > to ask for the full status of an aggregate object (such as an interface or > even an entire subjection of the MIB) and we wanted folks to be able to add > additional data they thought relevant to the aggregate. ASN.1 was the only > self-describing data format of the time. > > The UDP vs. TCP debate was pretty fierce and the experience of the time > came down firmly on the UDP side. Recall this was the era of daily > congestion collapse of the Internet between 1987 and 1990. > > Re: doing things in Python. I'm not surprised. The HEMS implementation > proved reasonably small at the time. > > By the way, many of the features noted in CMIP were actually in HEMS and/or > SNMP first. We were pretty open about playing with ideas at the time and > the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to > borrow liberally. (Or, at least, that's my recollection). > > There was a network management project in the late 1980s, name now eludes > me but led by Jil Wescott and DARPA funded, that sound similar in goals to > what Jack H. describes doing at Oracle. I leaned on wisdom from those > folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure > out what HEMS should look like. > > As we do these assessments, it is worth remembering that the operational > community of the time was struggling with the immediate challenge of > managing networks that were flaky and ran on 68000 processors and where > only a few 100K of memory was available for the management protocols.
The > SNMP team found a way to shoe-horn the key features into that > limited footprint and it promptly made a *huge* difference. > > Craig > > On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < > internet-history at elists.isoc.org> wrote: > > > ... > > BTW, I thought the use of ASN.1/BER for SNMP was far from the best > > choice (indeed, from an implementation quality and interoperability > > point of view would be hard to find one that was worse.) I preferred > > the HEMS proposal as the most elegant, even if it did use XML. > > > > CMIP had some really good ideas (most particularly with regard to > > selecting and filtering data at the server for highly efficient bulk > > fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > > was very inventive and clever. That kinda demonstrated the potential > > viability, rather than the impossibility, of things like CMIP/T, even in > > its bloated form. > > > > Diverting from the main points here: > > > > About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > > and using JSON, throwing out UDP and using TCP (optionally with TLS), > > and adding in some of the filtering concepts from CMIP. I preserved > > most MIB names and instrumentation variable semantics (and thus > > preserving a lot of existing instrumentation code in devices.) > > > > The resulting running code (in Python) is quite small - on par with the > > 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it > > runs several decimal orders of magnitude faster than SNMP (in terms of > > compute cycles, network activity, and start-to-finish time.) Plus I can > > do things like "give me all data on all interfaces with a received > > packet error rate greater than 0.1%". I can even safely issue complex > > control commands to devices, something that SNMP can't do very well. I > > considered doing commercial grade, perhaps open-source, version but it > > could have ended up disturbing the then nascent Netconf effort. > > > > --karl-- > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From dhc at dcrocker.net Mon Mar 28 10:30:01 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 28 Mar 2022 10:30:01 -0700 Subject: [ih] SMTP History In-Reply-To: References: Message-ID: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> On 3/28/2022 9:41 AM, Emiliano Spinella via Internet-history wrote: > Lately, I have been looking for information regarding the history of SMTP > but could not find much information. > > Basically, I am interested in the initial Email system protocols and how > SMTP got its final form. I imagine there must have been multiple protocol > alternatives for Email but somehow SMTP became a standard. > > Were there some relevant milestones that helped SMTP to become broadly > adopted? I imagine there must have been some important institution or set > of institutions that drove the adoption. > > Also, is there any reason why POP3 and later IMAP were not part of SMTP? Emiliano, hello. 1. A partial timeline of email milestones is at 2. 
A far more complete timeline is under development, by Jake Feinler and John Vittal. I don't know when it will be public. 3. There are a number of Internet mail history summaries available through online searches. Most of the ones that do not focus on purported invention in the late 1970s are reasonable. 4. In the very early 1970s, the Arpanet FTP protocol -- which has mostly transferred to the Internet environment -- was under development. I only went to its final meeting and don't remember whether email was in the version; from the RFC publication history, it appears not. However Abhay Bhushan, who was the document editor, some years ago told me it was always planned. 5. Ray Tomlinson created the first networked email in late 1971. It used a proprietary arrangement, with sndmsg, readmail and cpynet user software and file transfer mechanism, specific to BBN's Tenex system. But since Tenex was popular around the Arpanet, this usage spread quickly within that community. Some years ago, before his death, Ray said that his effort on email was in response to discussions that were underway amongst the Arpanet folk, for a more elaborate -- and IMO far less useful -- protocol to support remote printing of interoffice memos, rather than online, person-to-person message exchange. 6. There is much written about the culture of developing Arpanet (and Internet) protocols. The sequence that produced SMTP was an example of that culture. The FTP-based mechanism was in use for about 10 years, before SMTP was specified. There was an effort in the latter 1970s, to do an elaborate, multi-media protocol, but it didn't make much progress. The SMTP effort was more modest -- hence the 'S' -- essentially seeking only to carve off the email transfer mechanism from FTP, and mostly just add the ability to specify multiple recipients, to save extra network transfers; it preserved the syntax and semantics of the email object that had been transferred in the 1970s. Network bandwidth was a lot slower and more expensive, in those days. 7. Adoption of new protocols, in those days, wasn't very difficult, as long as people saw functional or operational benefit. Small community, and easily shared code, and mostly simple protocols. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net

From steffen at sdaoden.eu Mon Mar 28 11:01:59 2022 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 28 Mar 2022 20:01:59 +0200 Subject: [ih] SMTP History In-Reply-To: References: Message-ID: <20220328180159.qVIej%steffen@sdaoden.eu> Miles Fidelman wrote in : |Emiliano Spinella via Internet-history wrote: |> Hi everyone, |> |> Lately, I have been looking for information regarding the history of SMTP |> but could not find much information. |> |> Basically, I am interested in the initial Email system protocols and how |> SMTP got its final form. I imagine there must have been multiple protocol |> alternatives for Email but somehow SMTP became a standard. |> |> Were there some relevant milestones that helped SMTP to become broadly |> adopted? I imagine there must have been some important institution or set |> of institutions that drove the adoption. |Perhaps you should go through the early RFCs on the subject. And |discussion around them. Lots of interesting history buried in there. |> |> Also, is there any reason why POP3 and later IMAP were not part of SMTP? |Completely different functionality. It kind of helps to understand the |technology. Careful with that axe Eugene; in today's world ..
IETF standardized JMAP and it "milks the shit out of it". --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt)

From jeanjour at comcast.net Mon Mar 28 11:19:08 2022 From: jeanjour at comcast.net (John Day) Date: Mon, 28 Mar 2022 14:19:08 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID: <260392E3-0401-4EAE-A10A-D8648D4D166B@comcast.net> Just to add to the comments, > On Mar 28, 2022, at 12:48, Craig Partridge via Internet-history wrote:

802 had already done a management protocol (84-85) very similar to SNMP and had discovered its weaknesses. In particular, it generated a lot of traffic to accomplish little. (I always called it the "Turing Machine Syndrome": it was so simple, it was too complex.) That led to the adoption of an object-oriented model with Request/Responses on a Transport Protocol and Events on a connectionless protocol. It is very easy to find important operations that take 100s of request/responses with SNMP but only 2, and considerably fewer packets, with HEMS or CMIP. > > Quick comments on Karl's comments (from this note and later). > > ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to > return self-describing vendor specific data without requiring a > supplemental MIB definition. This was important because HEMS allowed one > to ask for the full status of an aggregate object (such as an interface or > even an entire subjection of the MIB) and we wanted folks to be able to add > additional data they thought relevant to the aggregate. ASN.1 was the only > self-describing data format of the time.

ASN.1 also has the property that it makes the protocol invariant with respect to syntax. By not having the ability to define the encoding rules to be used, SNMP was locked into the overly verbose BER. Whereas, PER was 70% more compact and took 80% less processing. PER was so processing efficient that the plans for "Lightweight Encoding Rules" that were to be processing efficient were scrapped. > > The UDP vs. TCP debate was pretty fierce and the experience of the time > came down firmly on the UDP side. Recall this was the era of daily > congestion collapse of the Internet between 1987 and 1990.

Somehow this argument (which I know was intense at the time) is the most absurd. All of the functions in TCP that are relevant are feedback functions that only involve the source and destination. In between, the handling of UDP and TCP packets by the routers is the same. If anything, TCP packets with congestion control have a better chance of being received and a TCP solution would have required fewer packets be generated in the first place. (The last thing a management system should be doing when things go bad is generating lots of traffic, but SNMP was good at that.) One of the misconceptions at the time (across the board) was that the "connection-oriented" of virtual circuits and the "connection-oriented" of transport protocols were the same thing. They aren't. Transport protocols should not have been lumped into that argument, which speaking of intense was *really* intense. (I know that someone will correct me on this.) > > Re: doing things in Python. I'm not surprised. The HEMS implementation > proved reasonably small at the time.
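As a toy illustration of the round-trip arithmetic behind the point above, the sketch below counts request/response exchanges for a per-variable walk versus a single scoped, filtered query. It is not HEMS, CMIP, or SNMP code; the "agent" is just an in-memory dictionary with made-up counters.

    agent_table = {                      # hypothetical per-interface counters
        "if%d" % i: {"in_errors": i % 7, "in_octets": i * 1000}
        for i in range(100)
    }

    def walk_one_variable_at_a_time():
        """SNMP GetNext style: one request/response per variable retrieved."""
        requests, result = 0, {}
        for ifname, row in agent_table.items():
            for var, value in row.items():
                requests += 1            # each variable costs one round trip
                result[(ifname, var)] = value
        return requests, result

    def scoped_filtered_query():
        """Scope-and-filter style: one request, and the agent returns only
        the rows that match (here: interfaces with any input errors)."""
        matches = {k: v for k, v in agent_table.items() if v["in_errors"] > 0}
        return 1, matches

    print(walk_one_variable_at_a_time()[0])  # 200 round trips for 100 interfaces
    print(scoped_filtered_query()[0])        # 1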
I was told by reliable sources at the time, that of the 3 protocols, SNMP was the largest implementation. And Python didn?t exist yet. > > By the way, many of the features noted in CMIP were actually in HEMS and/or > SNMP first. We were pretty open about playing with ideas at the time and > the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to > borrow liberally. (Or, at least, that's my recollection). As hinted above, the CMIP work was based on the earlier experience in 802. The big switch was moving to an OO model and including scope and filter, which HEMS could have had and SNMP didn?t. OSI had the additional problem that IBM was advertising that OSI did data transfer, not management, SNA did management, and was playing the usual ?keep the discussion going? games to see to it that progress was not being made. They were totally unprepared when complete management architecture and protocol proposals came in from IEEE in 1985. (They worked like hell to try to get it thrown out but couldn?t because there were too many companies behind it). That broke the logjam and got things going. What is amusing is that when SNMP was approved, a major router vendor played the same game IBM had arguing that SNMP would be okay for monitoring but not configuration because it wasn?t secure. ;-) Of course, it wasn?t secure, but it was a heck of a lot more secure using ASN.1 than the vendor?s solution of opening a Telnet connection and sending passwords in the clear. ;-) Every laptop on the planet had a telnet program, but exceedingly few had ASN.1 compilers. The vendor played the industry for suckers and they fell for it. The resulting debacle over SNMPv2 pretty much sealed the fate of SNMP. > There was a network management project in the late 1980s, name now eludes > me but led by Jil Wescott and DARPA funded, that sound similar in goals to > what Jack H. describes doing at Oracle. I leaned on wisdom from those > folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure > out what HEMS should look like. The database issues were always at the forefront in the network management development. Most everyone else blew it by trying to use relational databases, which were totally unsuited for the problem. (Charlie Bachman and I use to jokingly debate: He would say, you can?t do a bill of material (parts explosion) structure in a relational database. I would counter that you could but who the heck would want to!! It would be like writing a COBOL compiler for a Turing Machine!) ;-) So when it came to network management, we immediately adopted an E-R database. (Charlie always contended that every relational database had an ER model under it for speed. I don?t know if he was right about ?all? but it was true of a lot of them.) One of our MIT grads had been taught relational was the only way. So we said do the performance comparison. We hadn?t heard anything so we finally asked how it came out: In the best case, the relational model was only 19 times slower than what we were using. HP, DEC, and others had to learn the hard way. We also recognized from the beginning that commonality across MIB structures would the key element. Leveraging Chapter 5 of the OSI Reference Model (not the part that describes the specific 7 layers) and augmenting it, we were able to achieve far more commonality than OSI was. (The company wouldn?t let us contribute what we had.) and of course with SNMP MIBs there basically was none. 
We had a common MIB structure that covered all 3 forms of LANs, X.25, T1, IP, TCP, the OSI stuff and probably some things I have forgotten. That commonality allowed our management system (fielded in 86) to at least partially manage devices we had never seen and automatically conofigure devices that we had: One just selected the objects on the network map to be configured and pull down a menu and selected ?configure? and it was done. Automatic configuration turned out to be straightforward. The processors at the time were a bit of a constraint but not overwhelming. A lot of people were still operating under the influence from when the constraints were even greater. There was a lot of learning going on during that period. John > > As we do these assessments, it is worth remembering that the operational > community of the time was struggling with the immediate challenge of > managing networks that were flaky and ran on 68000 processors and where > only a few 100K of memory was available for the management protocols. The > SNMP team found a way to shoe-horn the key features into that > limited footprint and it promptly made a *huge* difference. > > Craig > > On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < > internet-history at elists.isoc.org> wrote: > >> ... >> BTW, I thought the use of ASN.1/BER for SNMP was far from the best >> choice (indeed, from an implementation quality and interoperability >> point of view would be hard to find one that was worse.) I preferred >> the HEMS proposal as the most elegant, even if it did use XML. >> >> CMIP had some really good ideas (most particularly with regard to >> selecting and filtering data at the server for highly efficient bulk >> fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) >> was very inventive and clever. That kinda demonstrated the potential >> viability, rather than the impossibility, of things like CMIP/T, even in >> its bloated form. >> >> Diverting from the main points here: >> >> About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER >> and using JSON, throwing out UDP and using TCP (optionally with TLS), >> and adding in some of the filtering concepts from CMIP. I preserved >> most MIB names and instrumentation variable semantics (and thus >> preserving a lot of existing instrumentation code in devices.) >> >> The resulting running code (in Python) is quite small - on par with the >> 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it >> runs several decimal orders of magnitude faster than SNMP (in terms of >> compute cycles, network activity, and start-to-finish time.) Plus I can >> do things like "give me all data on all interfaces with a received >> packet error rate greater than 0.1%". I can even safely issue complex >> control commands to devices, something that SNMP can't do very well. I >> considered doing commercial grade, perhaps open-source, version but it >> could have ended up disturbing the then nascent Netconf effort. >> >> --karl-- >> >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. 
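In the same spirit as the filtered query quoted above ("give me all data on all interfaces with a received packet error rate greater than 0.1%"), here is a minimal sketch of evaluating such a filter at the agent over JSON. It is not Karl's implementation; the request shape and field names are invented for the example.

    import json

    interfaces = [                        # hypothetical local instrumentation
        {"name": "eth0", "rx_packets": 1000000, "rx_errors": 200},
        {"name": "eth1", "rx_packets": 500000, "rx_errors": 2000},
    ]

    def handle(request_json: str) -> str:
        """Apply the filter at the agent so only matching rows cross the network."""
        req = json.loads(request_json)
        threshold = req["where"]["rx_error_rate_gt"]
        matches = [
            i for i in interfaces
            if i["rx_packets"] and i["rx_errors"] / i["rx_packets"] > threshold
        ]
        return json.dumps(matches)

    request = json.dumps({"select": "interfaces",
                          "where": {"rx_error_rate_gt": 0.001}})
    print(handle(request))                # only eth1 exceeds a 0.1% error rate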
> -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From dhc at dcrocker.net Mon Mar 28 11:20:22 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 28 Mar 2022 11:20:22 -0700 Subject: [ih] SMTP History In-Reply-To: References: Message-ID: <0ef250ec-ba74-98a5-2edb-0fd27770b34f@dcrocker.net> On 3/28/2022 10:03 AM, Miles Fidelman via Internet-history wrote: >> Also, is there any reason why POP3 and later IMAP were not part of SMTP? > Completely different functionality.? It kind of helps to understand the > technology. a review of RFC 5598 might help with this. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From bpurvy at gmail.com Mon Mar 28 12:09:58 2022 From: bpurvy at gmail.com (Bob Purvy) Date: Mon, 28 Mar 2022 12:09:58 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <260392E3-0401-4EAE-A10A-D8648D4D166B@comcast.net> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <260392E3-0401-4EAE-A10A-D8648D4D166B@comcast.net> Message-ID: I wasn't one of the *pioneers* of SNMP, but I led RFC 1697, implemented it on Oracle, and acquired Emanate for the Packeteer devices. It's not true that there was NO commonality among devices. Everyone implemented MIB-II. HP OpenView was able to do a reasonable job of network discovery, using only or primarily that. Configuration via SNMP: I know a lot of people did that. We never did. It wasn't suited for it, IMHO. On Mon, Mar 28, 2022 at 11:19 AM John Day via Internet-history < internet-history at elists.isoc.org> wrote: > Just to add to the comments, > > > On Mar 28, 2022, at 12:48, Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > > 802 had already done a management protocol (84-85) very similar to SNMP > and had discovered its weaknesses. In particular that it generated a lot of > traffic to accomplish little. (I always called it the ?Turning Machine > Syndrome.? It was so simple, it was too complex. That lead to the adoption > of an object-oriented model with Request/Responses on a Transport Protocol > and Events on a connectionless protocol. > > It is very easy to generate important things to do that with SNMP take > 100s of request/responses that with HEMS or CMIP take 2 and considerably > fewer packets. > > > > Quick comments on Karl's comments (from this note and later). > > > > ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to > > return self-describing vendor specific data without requiring a > > supplemental MIB definition. This was important because HEMS allowed one > > to ask for the full status of an aggregate object (such as an interface > or > > even an entire subjection of the MIB) and we wanted folks to be able to > add > > additional data they thought relevant to the aggregate. ASN.1 was the > only > > self-describing data format of the time. > > ASN.1 also has the property that it makes the protocol invariant with > respect to syntax. By not having the ability to define the encoding rules > to be used, SNMP was locked into the overly verbose BER. Whereas, PER was > 70% more compact, and 80% less processing. PER was so processing efficient > that the plans for ?Lightweight Encoding Rules? that was to be processing > efficient were scrapped. > > > > The UDP vs. TCP debate was pretty fierce and the experience of the time > > came down firmly on the UDP side. 
Recall this was the era of daily > > congestion collapse of the Internet between 1987 and 1990. > > Somehow this argument (which I know was intense at the time) is the most > absurd. All of the functions in TCP that are relevant are feedback > functions that only involve the source and destination. In between, the > handling of UDP and TCP packets by the routers is the same. If anything, > TCP packets with congestion control have a better chance of being received > and a TCP solution would have required fewer packets be generated in the > first place. (The last thing a management system should be doing when > things go bad is generating lots of traffic, but SNMP was good at that.) > > One of the misconceptions at the time (across the board) was that > connection-oriented of virtual-circuit and connection-oriented of transport > protocols were both the same ?connection-oriented.? They aren?t. Transport > protocols should not have been lumped into that argument, which speaking of > intense was *really* intense. > > (I know that someone will correct me on this.) > > > > Re: doing things in Python. I'm not surprised. The HEMS implementation > > proved reasonably small at the time. > > I was told by reliable sources at the time, that of the 3 protocols, SNMP > was the largest implementation. And Python didn?t exist yet. > > > > By the way, many of the features noted in CMIP were actually in HEMS > and/or > > SNMP first. We were pretty open about playing with ideas at the time and > > the CMIP folks, who had an empty spec when SNMP and HEMS started, chose > to > > borrow liberally. (Or, at least, that's my recollection). > > As hinted above, the CMIP work was based on the earlier experience in 802. > The big switch was moving to an OO model and including scope and filter, > which HEMS could have had and SNMP didn?t. > > OSI had the additional problem that IBM was advertising that OSI did data > transfer, not management, SNA did management, and was playing the usual > ?keep the discussion going? games to see to it that progress was not being > made. They were totally unprepared when complete management architecture > and protocol proposals came in from IEEE in 1985. (They worked like hell to > try to get it thrown out but couldn?t because there were too many companies > behind it). That broke the logjam and got things going. > > What is amusing is that when SNMP was approved, a major router vendor > played the same game IBM had arguing that SNMP would be okay for monitoring > but not configuration because it wasn?t secure. ;-) Of course, it wasn?t > secure, but it was a heck of a lot more secure using ASN.1 than the > vendor?s solution of opening a Telnet connection and sending passwords in > the clear. ;-) Every laptop on the planet had a telnet program, but > exceedingly few had ASN.1 compilers. The vendor played the industry for > suckers and they fell for it. The resulting debacle over SNMPv2 pretty > much sealed the fate of SNMP. > > > There was a network management project in the late 1980s, name now eludes > > me but led by Jil Wescott and DARPA funded, that sound similar in goals > to > > what Jack H. describes doing at Oracle. I leaned on wisdom from those > > folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to > figure > > out what HEMS should look like. > > The database issues were always at the forefront in the network management > development. Most everyone else blew it by trying to use relational > databases, which were totally unsuited for the problem. 
(Charlie Bachman > and I use to jokingly debate: He would say, you can?t do a bill of material > (parts explosion) structure in a relational database. I would counter that > you could but who the heck would want to!! It would be like writing a COBOL > compiler for a Turing Machine!) ;-) > > So when it came to network management, we immediately adopted an E-R > database. (Charlie always contended that every relational database had an > ER model under it for speed. I don?t know if he was right about ?all? but > it was true of a lot of them.) One of our MIT grads had been taught > relational was the only way. So we said do the performance comparison. We > hadn?t heard anything so we finally asked how it came out: In the best > case, the relational model was only 19 times slower than what we were > using. HP, DEC, and others had to learn the hard way. > > We also recognized from the beginning that commonality across MIB > structures would the key element. Leveraging Chapter 5 of the OSI Reference > Model (not the part that describes the specific 7 layers) and augmenting > it, we were able to achieve far more commonality than OSI was. (The company > wouldn?t let us contribute what we had.) and of course with SNMP MIBs there > basically was none. We had a common MIB structure that covered all 3 forms > of LANs, X.25, T1, IP, TCP, the OSI stuff and probably some things I have > forgotten. > > That commonality allowed our management system (fielded in 86) to at least > partially manage devices we had never seen and automatically conofigure > devices that we had: One just selected the objects on the network map to be > configured and pull down a menu and selected ?configure? and it was done. > Automatic configuration turned out to be straightforward. > > The processors at the time were a bit of a constraint but not > overwhelming. A lot of people were still operating under the influence from > when the constraints were even greater. > > There was a lot of learning going on during that period. > > John > > > > > As we do these assessments, it is worth remembering that the operational > > community of the time was struggling with the immediate challenge of > > managing networks that were flaky and ran on 68000 processors and where > > only a few 100K of memory was available for the management protocols. > The > > SNMP team found a way to shoe-horn the key features into that > > limited footprint and it promptly made a *huge* difference. > > > > Craig > > > > On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> ... > >> BTW, I thought the use of ASN.1/BER for SNMP was far from the best > >> choice (indeed, from an implementation quality and interoperability > >> point of view would be hard to find one that was worse.) I preferred > >> the HEMS proposal as the most elegant, even if it did use XML. > >> > >> CMIP had some really good ideas (most particularly with regard to > >> selecting and filtering data at the server for highly efficient bulk > >> fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > >> was very inventive and clever. That kinda demonstrated the potential > >> viability, rather than the impossibility, of things like CMIP/T, even in > >> its bloated form. 
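As a small aside on the encoding-size point made earlier in the thread, the sketch below shows the tag-and-length overhead BER adds to even a one-octet value, next to a packed, PER-like encoding of the same constrained integer. Real PER is bit-oriented and more involved; this is only an illustration, not any agent's actual encoder.

    def ber_integer(value: int) -> bytes:
        """Minimal BER encoding of a small non-negative INTEGER (tag 0x02)."""
        assert 0 <= value <= 127          # keep the sketch to the one-octet case
        return bytes([0x02, 0x01, value]) # tag, length, value

    def packed_integer(value: int) -> bytes:
        """A packed, PER-like encoding: just the constrained value itself."""
        assert 0 <= value <= 255
        return bytes([value])

    print(len(ber_integer(5)))     # 3 octets
    print(len(packed_integer(5)))  # 1 octet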
> >> > >> Diverting from the main points here: > >> > >> About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > >> and using JSON, throwing out UDP and using TCP (optionally with TLS), > >> and adding in some of the filtering concepts from CMIP. I preserved > >> most MIB names and instrumentation variable semantics (and thus > >> preserving a lot of existing instrumentation code in devices.) > >> > >> The resulting running code (in Python) is quite small - on par with the > >> 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it > >> runs several decimal orders of magnitude faster than SNMP (in terms of > >> compute cycles, network activity, and start-to-finish time.) Plus I can > >> do things like "give me all data on all interfaces with a received > >> packet error rate greater than 0.1%". I can even safely issue complex > >> control commands to devices, something that SNMP can't do very well. I > >> considered doing commercial grade, perhaps open-source, version but it > >> could have ended up disturbing the then nascent Netconf effort. > >> > >> --karl-- > >> > >> > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > > > > > > -- > > ***** > > Craig Partridge's email account for professional society activities and > > mailing lists. > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From brian.e.carpenter at gmail.com Mon Mar 28 13:56:32 2022 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 29 Mar 2022 09:56:32 +1300 Subject: [ih] MIBs and YANG [was GOSIP & compliance] In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <260392E3-0401-4EAE-A10A-D8648D4D166B@comcast.net> Message-ID: <8557f546-5550-8ea1-1108-65114ef431ca@gmail.com> It's perhaps worth noting that the IETF hasn't published a MIB since 2018 (RFC 8502) but has published many YANG modules in recent years. I don't know whether there are any statistics about deployment, but it's clear that NETCONF/YANG has taken over from SNMP/MIB as far as development work goes. Regards Brian Carpenter On 29-Mar-22 08:09, Bob Purvy via Internet-history wrote: > I wasn't one of the *pioneers* of SNMP, but I led RFC 1697, implemented it > on Oracle, and acquired Emanate for the Packeteer devices. > > It's not true that there was NO commonality among devices. Everyone > implemented MIB-II. HP OpenView was able to do a reasonable job of network > discovery, using only or primarily that. > > Configuration via SNMP: I know a lot of people did that. We never did. It > wasn't suited for it, IMHO. > > On Mon, Mar 28, 2022 at 11:19 AM John Day via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Just to add to the comments, >> >>> On Mar 28, 2022, at 12:48, Craig Partridge via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >> 802 had already done a management protocol (84-85) very similar to SNMP >> and had discovered its weaknesses. In particular that it generated a lot of >> traffic to accomplish little. (I always called it the ?Turning Machine >> Syndrome.? It was so simple, it was too complex. 
That lead to the adoption >> of an object-oriented model with Request/Responses on a Transport Protocol >> and Events on a connectionless protocol. >> >> It is very easy to generate important things to do that with SNMP take >> 100s of request/responses that with HEMS or CMIP take 2 and considerably >> fewer packets. >>> >>> Quick comments on Karl's comments (from this note and later). >>> >>> ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to >>> return self-describing vendor specific data without requiring a >>> supplemental MIB definition. This was important because HEMS allowed one >>> to ask for the full status of an aggregate object (such as an interface >> or >>> even an entire subjection of the MIB) and we wanted folks to be able to >> add >>> additional data they thought relevant to the aggregate. ASN.1 was the >> only >>> self-describing data format of the time. >> >> ASN.1 also has the property that it makes the protocol invariant with >> respect to syntax. By not having the ability to define the encoding rules >> to be used, SNMP was locked into the overly verbose BER. Whereas, PER was >> 70% more compact, and 80% less processing. PER was so processing efficient >> that the plans for ?Lightweight Encoding Rules? that was to be processing >> efficient were scrapped. >>> >>> The UDP vs. TCP debate was pretty fierce and the experience of the time >>> came down firmly on the UDP side. Recall this was the era of daily >>> congestion collapse of the Internet between 1987 and 1990. >> >> Somehow this argument (which I know was intense at the time) is the most >> absurd. All of the functions in TCP that are relevant are feedback >> functions that only involve the source and destination. In between, the >> handling of UDP and TCP packets by the routers is the same. If anything, >> TCP packets with congestion control have a better chance of being received >> and a TCP solution would have required fewer packets be generated in the >> first place. (The last thing a management system should be doing when >> things go bad is generating lots of traffic, but SNMP was good at that.) >> >> One of the misconceptions at the time (across the board) was that >> connection-oriented of virtual-circuit and connection-oriented of transport >> protocols were both the same ?connection-oriented.? They aren?t. Transport >> protocols should not have been lumped into that argument, which speaking of >> intense was *really* intense. >> >> (I know that someone will correct me on this.) >>> >>> Re: doing things in Python. I'm not surprised. The HEMS implementation >>> proved reasonably small at the time. >> >> I was told by reliable sources at the time, that of the 3 protocols, SNMP >> was the largest implementation. And Python didn?t exist yet. >>> >>> By the way, many of the features noted in CMIP were actually in HEMS >> and/or >>> SNMP first. We were pretty open about playing with ideas at the time and >>> the CMIP folks, who had an empty spec when SNMP and HEMS started, chose >> to >>> borrow liberally. (Or, at least, that's my recollection). >> >> As hinted above, the CMIP work was based on the earlier experience in 802. >> The big switch was moving to an OO model and including scope and filter, >> which HEMS could have had and SNMP didn?t. >> >> OSI had the additional problem that IBM was advertising that OSI did data >> transfer, not management, SNA did management, and was playing the usual >> ?keep the discussion going? games to see to it that progress was not being >> made. 
They were totally unprepared when complete management architecture >> and protocol proposals came in from IEEE in 1985. (They worked like hell to >> try to get it thrown out but couldn?t because there were too many companies >> behind it). That broke the logjam and got things going. >> >> What is amusing is that when SNMP was approved, a major router vendor >> played the same game IBM had arguing that SNMP would be okay for monitoring >> but not configuration because it wasn?t secure. ;-) Of course, it wasn?t >> secure, but it was a heck of a lot more secure using ASN.1 than the >> vendor?s solution of opening a Telnet connection and sending passwords in >> the clear. ;-) Every laptop on the planet had a telnet program, but >> exceedingly few had ASN.1 compilers. The vendor played the industry for >> suckers and they fell for it. The resulting debacle over SNMPv2 pretty >> much sealed the fate of SNMP. >> >>> There was a network management project in the late 1980s, name now eludes >>> me but led by Jil Wescott and DARPA funded, that sound similar in goals >> to >>> what Jack H. describes doing at Oracle. I leaned on wisdom from those >>> folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to >> figure >>> out what HEMS should look like. >> >> The database issues were always at the forefront in the network management >> development. Most everyone else blew it by trying to use relational >> databases, which were totally unsuited for the problem. (Charlie Bachman >> and I use to jokingly debate: He would say, you can?t do a bill of material >> (parts explosion) structure in a relational database. I would counter that >> you could but who the heck would want to!! It would be like writing a COBOL >> compiler for a Turing Machine!) ;-) >> >> So when it came to network management, we immediately adopted an E-R >> database. (Charlie always contended that every relational database had an >> ER model under it for speed. I don?t know if he was right about ?all? but >> it was true of a lot of them.) One of our MIT grads had been taught >> relational was the only way. So we said do the performance comparison. We >> hadn?t heard anything so we finally asked how it came out: In the best >> case, the relational model was only 19 times slower than what we were >> using. HP, DEC, and others had to learn the hard way. >> >> We also recognized from the beginning that commonality across MIB >> structures would the key element. Leveraging Chapter 5 of the OSI Reference >> Model (not the part that describes the specific 7 layers) and augmenting >> it, we were able to achieve far more commonality than OSI was. (The company >> wouldn?t let us contribute what we had.) and of course with SNMP MIBs there >> basically was none. We had a common MIB structure that covered all 3 forms >> of LANs, X.25, T1, IP, TCP, the OSI stuff and probably some things I have >> forgotten. >> >> That commonality allowed our management system (fielded in 86) to at least >> partially manage devices we had never seen and automatically conofigure >> devices that we had: One just selected the objects on the network map to be >> configured and pull down a menu and selected ?configure? and it was done. >> Automatic configuration turned out to be straightforward. >> >> The processors at the time were a bit of a constraint but not >> overwhelming. A lot of people were still operating under the influence from >> when the constraints were even greater. >> >> There was a lot of learning going on during that period. 
>> >> John >> >>> >>> As we do these assessments, it is worth remembering that the operational >>> community of the time was struggling with the immediate challenge of >>> managing networks that were flaky and ran on 68000 processors and where >>> only a few 100K of memory was available for the management protocols. >> The >>> SNMP team found a way to shoe-horn the key features into that >>> limited footprint and it promptly made a *huge* difference. >>> >>> Craig >>> >>> On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> ... >>>> BTW, I thought the use of ASN.1/BER for SNMP was far from the best >>>> choice (indeed, from an implementation quality and interoperability >>>> point of view would be hard to find one that was worse.) I preferred >>>> the HEMS proposal as the most elegant, even if it did use XML. >>>> >>>> CMIP had some really good ideas (most particularly with regard to >>>> selecting and filtering data at the server for highly efficient bulk >>>> fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) >>>> was very inventive and clever. That kinda demonstrated the potential >>>> viability, rather than the impossibility, of things like CMIP/T, even in >>>> its bloated form. >>>> >>>> Diverting from the main points here: >>>> >>>> About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER >>>> and using JSON, throwing out UDP and using TCP (optionally with TLS), >>>> and adding in some of the filtering concepts from CMIP. I preserved >>>> most MIB names and instrumentation variable semantics (and thus >>>> preserving a lot of existing instrumentation code in devices.) >>>> >>>> The resulting running code (in Python) is quite small - on par with the >>>> 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it >>>> runs several decimal orders of magnitude faster than SNMP (in terms of >>>> compute cycles, network activity, and start-to-finish time.) Plus I can >>>> do things like "give me all data on all interfaces with a received >>>> packet error rate greater than 0.1%". I can even safely issue complex >>>> control commands to devices, something that SNMP can't do very well. I >>>> considered doing commercial grade, perhaps open-source, version but it >>>> could have ended up disturbing the then nascent Netconf effort. >>>> >>>> --karl-- >>>> >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> >>> >>> -- >>> ***** >>> Craig Partridge's email account for professional society activities and >>> mailing lists. >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> From jnc at mercury.lcs.mit.edu Mon Mar 28 14:03:30 2022 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 28 Mar 2022 17:03:30 -0400 (EDT) Subject: [ih] SMTP History Message-ID: <20220328210330.C0D3018C087@mercury.lcs.mit.edu> > On 3/28/2022 9:41 AM, Emiliano Spinella via Internet-history wrote: > Lately, I have been looking for information regarding the history of > SMTP but could not find much information. > > Basically, I am interested in the initial Email system protocols and > how SMTP got its final form. 
> I imagine there must have been multiple protocol alternatives for Email but somehow SMTP became a standard.

I was working on different stuff when SMTP happened, so I can't provide details from personal memory, but I knew it was happening.

I think your mental model of the _very_ early days of networking is probably wrong. There wasn't a big organized effort; it was just a few people (originally communicating via telephone and/or printed memos, with an occasional in-person meeting - no email yet, right?)

Before SMTP, there was email mode in FTP - because FTP was there, and it was a minor hack to add email transfer. (Open a different file, and for appending...) (Ironically, much the same logic was applied when we added 'mail' mode to TFTP.)

Probably the best source for how SMTP in particular came to be, as a replacement for mail in FTP (and why a replacement was needed) is the email archives of the 'msggroup' list (available at: http://www.chiappa.net/~jnc/tech/msggroup/ after I made an effort to save it); it covers June '75 through March 1986, and so definitely covers the time when SMTP was done.

The 'header-people' archives: http://www.chiappa.net/~jnc/tech/header/ might also have some information; it was mostly about the _format_ of email messages, not the _transport_ of them, but there was some leakage. (If anyone has the first two volumes of that, please let me know, so I can add them to that collection.)

	Noel

From craig at tereschau.net  Mon Mar 28 14:04:21 2022
From: craig at tereschau.net (Craig Partridge)
Date: Mon, 28 Mar 2022 15:04:21 -0600
Subject: [ih] SMTP History
In-Reply-To: 
References: 
Message-ID: 

Dear Emiliano:

I wrote a history for IEEE History of Computing that covers much of this ground. I was able to talk to a lot of the people involved in early developments in Email (including SMTP), some of whom are now deceased.

http://emailhistory.org/papers/partridge-email.pdf

Specifically on SMTP, it was a rethinking of MTP. MTP was created by Sluizer and Postel. There was a rumor at the time that the most vocal and insightful critic of MTP was Peter Kirstein, but I was never able to confirm that (note 82 observes that Sluizer remembered Postel talking about "people" with concerns). In any case, Postel created SMTP in response to feedback about MTP, and I sketch a logical path to SMTP from some of the work Sluizer and Postel did, which may have inspired Jon's solution.

Craig

On Mon, Mar 28, 2022 at 10:41 AM Emiliano Spinella via Internet-history < internet-history at elists.isoc.org> wrote:

> Hi everyone,
>
> Lately, I have been looking for information regarding the history of SMTP but could not find much information.
>
> Basically, I am interested in the initial Email system protocols and how SMTP got its final form. I imagine there must have been multiple protocol alternatives for Email but somehow SMTP became a standard.
>
> Were there some relevant milestones that helped SMTP to become broadly adopted? I imagine there must have been some important institution or set of institutions that drove the adoption.
>
> Also, is there any reason why POP3 and later IMAP were not part of SMTP?
>
> I was wondering if somebody in this list could give me some light.
>
> Thanks,
> Emiliano
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

-- 
*****
Craig Partridge's email account for professional society activities and mailing lists.

From jack at 3kitty.org  Mon Mar 28 14:07:04 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Mon, 28 Mar 2022 14:07:04 -0700
Subject: [ih] SMTP History
In-Reply-To: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net>
References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net>
Message-ID: 

On 3/28/22 10:30, Dave Crocker via Internet-history wrote:
> 4. In the very early 1970s, the Arpanet FTP protocol -- which has mostly transferred to the Internet environment -- was under development. I only went to its final meeting and don't remember whether email was in the version; from the RFC publication history, it appears not. However Abhay Bhushan, who was the document editor, some years ago told me it was always planned.

Much of the history of that early research has been lost in email exchanges which were apparently never archived. Here's what I remember, perhaps clouded a bit by 50 years of elapsed time. Lots of other people were involved too, so this is only what I personally can recall.

-----------------------------------

Abhay Bhushan's office was a few doors down the hall from mine while we were in Licklider's group at MIT. I was working on "messaging", which was part of Lick's vision of office automation, which included more than just email - e.g., things like workflow of documents, approval processes for releasing documents, functions such as notarization, escrow (proof of sending), etc. Abhay was working on FTP; we had a lot of discussions/arguments about how to get the "higher level" mechanisms of Lick's vision supported by the mechanisms of FTP.

There are artifacts in the RFCs capturing some of the early work. FTP began circa 1971 with RFC 172. At the same time, there was discussion of a "Mail Box Protocol" intended to enable functions like remote printing as a way of sending something to someone else over the ARPANET. You just send it to their printer. See RFCs 196, 221.

At first, FTP added a "MAIL <user>" command, which each machine receiving such MAIL could process as it saw fit. Print it out. Put it in a file somewhere. Tell the <user> that mail is waiting. Whatever...

Human users could send mail by simply opening an FTP connection to their addressee's computer, issuing a MAIL command, and typing their message, indicating the end by a line containing just a period.
No "mail software" was needed, although people started building such things to send and receive mail on behalf of their users.?? Such software especially solved the problem of a user wanting to send mail to another user, but the destination host was down at the time, which wasn't uncommon.?? So a local "mail daemon" could keep the outgoing mail and periodically try to send it as needed.?? I wrote one; so did lots of other people. FTP was upgraded in late 1971, to include the command "MLFL".?? RFCs 265 and 278 show some of the details of how that worked.?? I recall at the time badgering Abhay about the problems I was having with the "MAIL" command, and its use of . as an end-of-message indicator, which meant that every message had to be scanned before transmission to avoid premature end-of-message if the message's author happened to include that sequence in the message text.?? I think he got tired of my complaining and put in the MLFL command which solved my . problem. Ray's introduction of the @ convention was really a private protocol that only worked if both participants used the TENEX software.?? It was more of a user-interface convention on how to format users' addresses in the @ pattern, which would then be used by the TENEX Cpynet mechanisms.? So it provided email across the ARPANET, but only between consenting TENEX machines.?? Other people (like me, Ken Pogran on Multics, Ken Harrenstien on MIT-AI, etc.)? used FTP to transfer "messages" from one machine to another across the ARPANET. Using FTP's mechanisms, the sender would specify the target of a message by issuing the command "MAIL " after establishing a connection to the proper destination host.?? If the recipient server recognized the , it would receive the message and put it where the could see it - e.g., appending to a file in the user's home directory such as "DSK:JFH;JFH MAIL" (on my ITS system).?? There was no particular standardization and various problems surfaced.? E.g., Multics was especially reluctant to allow processes receiving something from somewhere via the ARPANET to write into a user's file space without first logging in as that user. Such messages didn't have any particular format.?? It was just whatever text the sender submitted.?? There were no "headers" unless the human created them.? It became popular to put some lines at the beginning of the message containing useful information, like who the message was from, when it was sent, etc.?? There was no standard format for that, and people got very creative, which caused havoc in any program that tried to understand that information.???? "User at Host" was popular, as well as Ray's "User at Host".?? But creativity could result in headers such as "From the nimble fingers of Bill Smith at the Center of The Universe known as Host 987 on the ARPANET".?? Just try writing software to understand such stuff!? I did, with the computer power of 50 years ago.? Not good..... RFC 475 describes how mail was handled in the 1973 timeframe, and suggests more additions to the FTP protocol to permit such "metadata" as the identity of the sender to be conveyed as part of the FTP protocol.?? There was also a *lot* of discussion on a mailing list called "HEADER-PEOPLE" (@MIT-AI IIRC, started by Ken Harrenstien) about how to define email headers that would be included at the beginning of each message, as an alternative to putting them into the FTP protocol. At least two camps emerged in that discussion.? 
I was in the "Office Automation" camp, following Lick's vision, which motivated providing headers which could contain a lot of different kinds of such metadata about a message, but were very precisely defined to make them more easily readable by programs on the ends of the message transfers.?? The other camp was interested in getting something workable and easy to implement, which was understandable since their "real work" was something other than implementing elaborate email systems. The "elaborate" scheme was loosely called "MTP" for Message Transmission Protocol".?? I documented one proposed format for the associated data transfer component called MSDTP - Message Services Data Transmission Protocol" in RFC 713. MTP would be powerful but necessarily somewhat complex to implement, so an interim solution was developed to provide a much simpler initial step - the Simple Message Transmission Protocol, or SMTP. All hosts had to implement SMTP in order to participate in email on the ARPANET, which was rather important since that's how ARPA insisted on interacting with its contractors.?? So it was straightforward for everyone to find the time to implement SMTP. MTP was only of interest to groups working on the broader realm of "Office Automation", but it did live on to some extent in projects such as the MME (Military Message Experiment), where the "office automation" aspects were pursued in the context of a military command and control "office".? There's a summary of MME in https://apps.dtic.mil/sti/pdfs/ADA098187.pdf?? -- which illustrates some of the complex functionality of such "office automation" technology.?? Quite less "simple" than SMTP. That's what I remember, IIRC of course.?? I'm still waiting for MTP....... Jack Haverty From dhc at dcrocker.net Mon Mar 28 14:24:03 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Mon, 28 Mar 2022 14:24:03 -0700 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> Message-ID: On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: > There are artifacts in the RFCs capturing some of the early work. FTP > began circa 1971 with RFC172.? At the same time, there was discussion of > a "Mail Box Protocol" intended to enable functions like remote printing > as a way of sending something to someone else over the ARPANET.?? You > just send it to their printer.?? See RFCs 196, 221. > > At first, FTP added a "MAIL " command, which each machine > receiving such MAIL could process as it saw fit.? Print it out. RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the string 'mail'. RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL and MLFL commands. It is a meeting report discussing agreement to create those commands. RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the string 'mail'. RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a mail command, it does not specify it. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From jeanjour at comcast.net Mon Mar 28 14:50:58 2022 From: jeanjour at comcast.net (John Day) Date: Mon, 28 Mar 2022 17:50:58 -0400 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> Message-ID: MAIL and MLFL were added at the last minute of the FTP meeting at BBN in March 1973. We were about to wrap up the meeting when Steve Crocker came in and said we have to have a Mail in FTP. So we did it. 
That was the same meeting at which, in response to the question of what happens when one stores a file with a BYTE size of 23 and RETRieves it with a BYTE size of 17, Padlipsky said, "Sometimes when changing oranges into apples one gets lemons." And it was immediately decided that was the correct response. Good ol' MAP. ;-)

John

> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history wrote:
>
> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote:
>> There are artifacts in the RFCs capturing some of the early work. FTP began circa 1971 with RFC 172. At the same time, there was discussion of a "Mail Box Protocol" intended to enable functions like remote printing as a way of sending something to someone else over the ARPANET. You just send it to their printer. See RFCs 196, 221.
>> At first, FTP added a "MAIL <user>" command, which each machine receiving such MAIL could process as it saw fit. Print it out.
>
> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the string 'mail'.
>
> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL and MLFL commands. It is a meeting report discussing agreement to create those commands.
>
> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the string 'mail'.
>
> RFC 765 (Aug, 1973 and edited by Jon Postel) does. But while it cites a mail command, it does not specify it.
>
> d/
>
> --
> Dave Crocker
> Brandenburg InternetWorking
> bbiw.net
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

From jack at 3kitty.org  Mon Mar 28 15:01:04 2022
From: jack at 3kitty.org (Jack Haverty)
Date: Mon, 28 Mar 2022 15:01:04 -0700
Subject: [ih] SMTP History
In-Reply-To: 
References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net>
Message-ID: <63a34667-1836-53fb-b947-86577bea36e2@3kitty.org>

The RFCs, IMHO, are useful but only as an imperfect historical record. At the time, RFCs, despite their name as "request for comments", were often released well after implementation of whatever they documented, when someone (often Jon Postel) took the initiative to write down what had happened.

So, with something like FTP, despite what the "official spec" might have said in some RFC at the time, it was easy for an actual developer to add new functionality and try it out, possibly collaborating with others. Adding a MAIL command would have been easy for any FTP developer, and invisible to other FTPs who hadn't ever heard of it. Such things were expected as a vital aspect of research. Ray's introduction of @ on TENEX propagated quickly since there were numerous TENEX machines on the net at the time.

I don't remember who first added the MAIL command to an FTP implementation (might have been Abhay...?), or how its use propagated throughout the ARPANET community as other developers added it to their FTP software. It was easy, and common, for developers to see someone else's idea and simply adopt it, well before it appeared in any RFC.

I do remember that I first used, and insisted on including, the "Message-ID" header field as a way of making it possible for a computer program to distinguish specific messages, so that, for example, they could be linked together into conversations. But most details of SMTP and headers just sort of appeared over time, and if people found them useful, they got more widely implemented. I have no recollection of where they started.
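As a small illustration of the kind of thing Message-ID made possible, here is a toy Python sketch of grouping messages into conversations by following reply references. The In-Reply-To field and the sample IDs below are the later RFC 822-style convention and invented data, used only to show the idea, not anything from the systems being remembered here:

    # Toy sketch: thread messages into conversations by walking reply links.
    from collections import defaultdict

    messages = [
        {"Message-ID": "<1@hostA>", "In-Reply-To": None,        "Subject": "FTP MAIL command"},
        {"Message-ID": "<2@hostB>", "In-Reply-To": "<1@hostA>",  "Subject": "Re: FTP MAIL command"},
        {"Message-ID": "<3@hostA>", "In-Reply-To": "<2@hostB>",  "Subject": "Re: FTP MAIL command"},
        {"Message-ID": "<4@hostC>", "In-Reply-To": None,        "Subject": "MLFL"},
    ]

    parent_of = {m["Message-ID"]: m["In-Reply-To"] for m in messages}

    def thread_root(msg_id):
        # Follow In-Reply-To links until a message with no parent is reached.
        while parent_of.get(msg_id):
            msg_id = parent_of[msg_id]
        return msg_id

    threads = defaultdict(list)
    for m in messages:
        threads[thread_root(m["Message-ID"])].append(m["Subject"])

    for root, subjects in threads.items():
        print(root, "->", subjects)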
Eventually an innovation could become popular enough that it became effectively "standard". ? It might only be after that occurred that it actually appeared captured in an RFC.?? Rough consensus and running code came first.?? Documentation later. I wrote RFC722 in an attempt to document some of the issues and principles involved in that kind of evolutionary development of network mechanisms.? Never got many comments thought to that RFC. IMHO, the email records such as Noel captured are likely a more accurate, but still incomplete, historical record. ? But there were lots of such email interactions not using email lists, or even, gasp, by telephone or in-person discussions, probably now lost, that reflect the actual history of what happened when and who did it. That's why I try to write down just what I personally remember.?? My own "mail archives" were lost long ago, when my Dectapes turned into magnetic dust. Jack On 3/28/22 14:24, Dave Crocker wrote: > On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: >> There are artifacts in the RFCs capturing some of the early work. FTP >> began circa 1971 with RFC172.? At the same time, there was discussion >> of a "Mail Box Protocol" intended to enable functions like remote >> printing as a way of sending something to someone else over the >> ARPANET.?? You just send it to their printer.?? See RFCs 196, 221. >> >> At first, FTP added a "MAIL " command, which each machine >> receiving such MAIL could process as it saw fit.? Print it out. > > > RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the > string 'mail'. > > RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL > and MLFL commands. It is a meeting report discussing agreement to > create those commands. > > RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the > string 'mail'. > > RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a > mail command, it does not specify it. > > > d/ > From jack at 3kitty.org Mon Mar 28 20:08:15 2022 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 28 Mar 2022 20:08:15 -0700 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> Message-ID: <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> Interesting.?? Do you remember... Was Steve an ARPA Program Manager at the time?? I.e., was adding mail to FTP an order or a suggestion??? Did he say anything about why Mail had to be in FTP? Were there any FTP implementations already functional with some form of MAIL command implemented that Steve wanted to become ubiquitous??? /Jack On 3/28/22 14:50, John Day wrote: > MAIL and MLFL were added at the last minute of the FTP meeting at BBN in March 1973. > > We were about to wrap up the meeting when Steve Crocker came in and said we have to have a Mail in FTP. So we did it. > > That was the same meeting at which in response to the question what happens when one stores a file with a BYTE size of 23 and RETRieves it with a BYTE size of 17? Padlipsky said, ?Sometimes when changing oranges into apples one gets lemons.? And it was immediately decided that was the correct response. Good ol? MAP. ;-) > > John > >> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history wrote: >> >> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: >>> There are artifacts in the RFCs capturing some of the early work. FTP began circa 1971 with RFC172. 
At the same time, there was discussion of a "Mail Box Protocol" intended to enable functions like remote printing as a way of sending something to someone else over the ARPANET. You just send it to their printer. See RFCs 196, 221. >>> At first, FTP added a "MAIL " command, which each machine receiving such MAIL could process as it saw fit. Print it out. >> >> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the string 'mail'. >> >> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL and MLFL commands. It is a meeting report discussing agreement to create those commands. >> >> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the string 'mail'. >> >> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a mail command, it does not specify it. >> >> >> d/ >> >> -- >> Dave Crocker >> Brandenburg InternetWorking >> bbiw.net >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history From vgcerf at gmail.com Mon Mar 28 20:23:39 2022 From: vgcerf at gmail.com (vinton cerf) Date: Mon, 28 Mar 2022 23:23:39 -0400 Subject: [ih] SMTP History In-Reply-To: <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> Message-ID: steve was at arpa 1971-1974 but he focused then on AI. while at UCLA from 1968?-1971 he was the head of the Network Working Group and in that role had much to say about where we were headed. v On Mon, Mar 28, 2022 at 11:08 PM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Interesting. Do you remember... Was Steve an ARPA Program Manager at > the time? I.e., was adding mail to FTP an order or a suggestion? Did > he say anything about why Mail had to be in FTP? Were there any FTP > implementations already functional with some form of MAIL command > implemented that Steve wanted to become ubiquitous? /Jack > > On 3/28/22 14:50, John Day wrote: > > MAIL and MLFL were added at the last minute of the FTP meeting at BBN in > March 1973. > > > > We were about to wrap up the meeting when Steve Crocker came in and said > we have to have a Mail in FTP. So we did it. > > > > That was the same meeting at which in response to the question what > happens when one stores a file with a BYTE size of 23 and RETRieves it with > a BYTE size of 17? Padlipsky said, ?Sometimes when changing oranges into > apples one gets lemons.? And it was immediately decided that was the > correct response. Good ol? MAP. ;-) > > > > John > > > >> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history < > internet-history at elists.isoc.org> wrote: > >> > >> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: > >>> There are artifacts in the RFCs capturing some of the early work. FTP > began circa 1971 with RFC172. At the same time, there was discussion of a > "Mail Box Protocol" intended to enable functions like remote printing as a > way of sending something to someone else over the ARPANET. You just send > it to their printer. See RFCs 196, 221. > >>> At first, FTP added a "MAIL " command, which each machine > receiving such MAIL could process as it saw fit. Print it out. > >> > >> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the > string 'mail'. > >> > >> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL > and MLFL commands. 
It is a meeting report discussing agreement to create > those commands. > >> > >> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the > string 'mail'. > >> > >> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a > mail command, it does not specify it. > >> > >> > >> d/ > >> > >> -- > >> Dave Crocker > >> Brandenburg InternetWorking > >> bbiw.net > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From steve at shinkuro.com Mon Mar 28 20:25:56 2022 From: steve at shinkuro.com (Steve Crocker) Date: Mon, 28 Mar 2022 23:25:56 -0400 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> Message-ID: I don?t recall giving instructions about adding mail to ftp. Could have happened, but I don?t recall it. Steve On Mon, Mar 28, 2022 at 11:23 PM vinton cerf via Internet-history < internet-history at elists.isoc.org> wrote: > steve was at arpa 1971-1974 but he focused then on AI. > while at UCLA from 1968?-1971 he was the head of the Network Working Group > and in that role had much to say about where we were headed. > > v > > > On Mon, Mar 28, 2022 at 11:08 PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > > Interesting. Do you remember... Was Steve an ARPA Program Manager at > > the time? I.e., was adding mail to FTP an order or a suggestion? Did > > he say anything about why Mail had to be in FTP? Were there any FTP > > implementations already functional with some form of MAIL command > > implemented that Steve wanted to become ubiquitous? /Jack > > > > On 3/28/22 14:50, John Day wrote: > > > MAIL and MLFL were added at the last minute of the FTP meeting at BBN > in > > March 1973. > > > > > > We were about to wrap up the meeting when Steve Crocker came in and > said > > we have to have a Mail in FTP. So we did it. > > > > > > That was the same meeting at which in response to the question what > > happens when one stores a file with a BYTE size of 23 and RETRieves it > with > > a BYTE size of 17? Padlipsky said, ?Sometimes when changing oranges into > > apples one gets lemons.? And it was immediately decided that was the > > correct response. Good ol? MAP. ;-) > > > > > > John > > > > > >> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history < > > internet-history at elists.isoc.org> wrote: > > >> > > >> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: > > >>> There are artifacts in the RFCs capturing some of the early work. FTP > > began circa 1971 with RFC172. At the same time, there was discussion of > a > > "Mail Box Protocol" intended to enable functions like remote printing as > a > > way of sending something to someone else over the ARPANET. You just > send > > it to their printer. See RFCs 196, 221. > > >>> At first, FTP added a "MAIL " command, which each machine > > receiving such MAIL could process as it saw fit. Print it out. > > >> > > >> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the > > string 'mail'. > > >> > > >> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL > > and MLFL commands. It is a meeting report discussing agreement to create > > those commands. 
> > >> > > >> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the > > string 'mail'. > > >> > > >> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a > > mail command, it does not specify it. > > >> > > >> > > >> d/ > > >> > > >> -- > > >> Dave Crocker > > >> Brandenburg InternetWorking > > >> bbiw.net > > >> -- > > >> Internet-history mailing list > > >> Internet-history at elists.isoc.org > > >> https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jeanjour at comcast.net Mon Mar 28 20:33:11 2022 From: jeanjour at comcast.net (John Day) Date: Mon, 28 Mar 2022 23:33:11 -0400 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> Message-ID: <3DD1804C-49A5-47AB-B926-C99B02A3E8C5@comcast.net> I was young and impressionable. It was a bit like you happened to be at BBN for something else and dropped in to the meeting for maybe 30 minutes. I definitely remember we hadn?t considered mail before that. john > On Mar 28, 2022, at 23:25, Steve Crocker via Internet-history wrote: > > I don?t recall giving instructions about adding mail to ftp. Could have > happened, but I don?t recall it. > > Steve > > On Mon, Mar 28, 2022 at 11:23 PM vinton cerf via Internet-history < > internet-history at elists.isoc.org> wrote: > >> steve was at arpa 1971-1974 but he focused then on AI. >> while at UCLA from 1968?-1971 he was the head of the Network Working Group >> and in that role had much to say about where we were headed. >> >> v >> >> >> On Mon, Mar 28, 2022 at 11:08 PM Jack Haverty via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> Interesting. Do you remember... Was Steve an ARPA Program Manager at >>> the time? I.e., was adding mail to FTP an order or a suggestion? Did >>> he say anything about why Mail had to be in FTP? Were there any FTP >>> implementations already functional with some form of MAIL command >>> implemented that Steve wanted to become ubiquitous? /Jack >>> >>> On 3/28/22 14:50, John Day wrote: >>>> MAIL and MLFL were added at the last minute of the FTP meeting at BBN >> in >>> March 1973. >>>> >>>> We were about to wrap up the meeting when Steve Crocker came in and >> said >>> we have to have a Mail in FTP. So we did it. >>>> >>>> That was the same meeting at which in response to the question what >>> happens when one stores a file with a BYTE size of 23 and RETRieves it >> with >>> a BYTE size of 17? Padlipsky said, ?Sometimes when changing oranges into >>> apples one gets lemons.? And it was immediately decided that was the >>> correct response. Good ol? MAP. ;-) >>>> >>>> John >>>> >>>>> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>>>> >>>>> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: >>>>>> There are artifacts in the RFCs capturing some of the early work. FTP >>> began circa 1971 with RFC172. At the same time, there was discussion of >> a >>> "Mail Box Protocol" intended to enable functions like remote printing as >> a >>> way of sending something to someone else over the ARPANET. You just >> send >>> it to their printer. See RFCs 196, 221. 
>>>>>> At first, FTP added a "MAIL " command, which each machine >>> receiving such MAIL could process as it saw fit. Print it out. >>>>> >>>>> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the >>> string 'mail'. >>>>> >>>>> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL >>> and MLFL commands. It is a meeting report discussing agreement to create >>> those commands. >>>>> >>>>> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the >>> string 'mail'. >>>>> >>>>> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a >>> mail command, it does not specify it. >>>>> >>>>> >>>>> d/ >>>>> >>>>> -- >>>>> Dave Crocker >>>>> Brandenburg InternetWorking >>>>> bbiw.net >>>>> -- >>>>> Internet-history mailing list >>>>> Internet-history at elists.isoc.org >>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From steve at shinkuro.com Mon Mar 28 20:44:30 2022 From: steve at shinkuro.com (Steve Crocker) Date: Mon, 28 Mar 2022 23:44:30 -0400 Subject: [ih] SMTP History In-Reply-To: <3DD1804C-49A5-47AB-B926-C99B02A3E8C5@comcast.net> References: <3DD1804C-49A5-47AB-B926-C99B02A3E8C5@comcast.net> Message-ID: <5BEE62C3-ACBD-48ED-B6CC-04B93BBB7041@shinkuro.com> Perfectly possible. Sent from my iPhone > On Mar 28, 2022, at 11:33 PM, John Day wrote: > > ?I was young and impressionable. It was a bit like you happened to be at BBN for something else and dropped in to the meeting for maybe 30 minutes. I definitely remember we hadn?t considered mail before that. > > john > >> On Mar 28, 2022, at 23:25, Steve Crocker via Internet-history wrote: >> >> I don?t recall giving instructions about adding mail to ftp. Could have >> happened, but I don?t recall it. >> >> Steve >> >>> On Mon, Mar 28, 2022 at 11:23 PM vinton cerf via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>> steve was at arpa 1971-1974 but he focused then on AI. >>> while at UCLA from 1968?-1971 he was the head of the Network Working Group >>> and in that role had much to say about where we were headed. >>> >>> v >>> >>> >>> On Mon, Mar 28, 2022 at 11:08 PM Jack Haverty via Internet-history < >>> internet-history at elists.isoc.org> wrote: >>> >>>> Interesting. Do you remember... Was Steve an ARPA Program Manager at >>>> the time? I.e., was adding mail to FTP an order or a suggestion? Did >>>> he say anything about why Mail had to be in FTP? Were there any FTP >>>> implementations already functional with some form of MAIL command >>>> implemented that Steve wanted to become ubiquitous? /Jack >>>> >>>> On 3/28/22 14:50, John Day wrote: >>>>> MAIL and MLFL were added at the last minute of the FTP meeting at BBN >>> in >>>> March 1973. >>>>> >>>>> We were about to wrap up the meeting when Steve Crocker came in and >>> said >>>> we have to have a Mail in FTP. So we did it. >>>>> >>>>> That was the same meeting at which in response to the question what >>>> happens when one stores a file with a BYTE size of 23 and RETRieves it >>> with >>>> a BYTE size of 17? 
Padlipsky said, ?Sometimes when changing oranges into >>>> apples one gets lemons.? And it was immediately decided that was the >>>> correct response. Good ol? MAP. ;-) >>>>> >>>>> John >>>>> >>>>>> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history < >>>> internet-history at elists.isoc.org> wrote: >>>>>> >>>>>> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: >>>>>>> There are artifacts in the RFCs capturing some of the early work. FTP >>>> began circa 1971 with RFC172. At the same time, there was discussion of >>> a >>>> "Mail Box Protocol" intended to enable functions like remote printing as >>> a >>>> way of sending something to someone else over the ARPANET. You just >>> send >>>> it to their printer. See RFCs 196, 221. >>>>>>> At first, FTP added a "MAIL " command, which each machine >>>> receiving such MAIL could process as it saw fit. Print it out. >>>>>> >>>>>> RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the >>>> string 'mail'. >>>>>> >>>>>> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL >>>> and MLFL commands. It is a meeting report discussing agreement to create >>>> those commands. >>>>>> >>>>>> RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the >>>> string 'mail'. >>>>>> >>>>>> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a >>>> mail command, it does not specify it. >>>>>> >>>>>> >>>>>> d/ >>>>>> >>>>>> -- >>>>>> Dave Crocker >>>>>> Brandenburg InternetWorking >>>>>> bbiw.net >>>>>> -- >>>>>> Internet-history mailing list >>>>>> Internet-history at elists.isoc.org >>>>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>>> -- >>>> Internet-history mailing list >>>> Internet-history at elists.isoc.org >>>> https://elists.isoc.org/mailman/listinfo/internet-history >>>> >>> -- >>> Internet-history mailing list >>> Internet-history at elists.isoc.org >>> https://elists.isoc.org/mailman/listinfo/internet-history >>> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history > From jack at 3kitty.org Mon Mar 28 21:11:42 2022 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 28 Mar 2022 21:11:42 -0700 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> <1ddc8816-eb60-1a2f-bf5f-098b683a787b@3kitty.org> Message-ID: <3cb5c8d2-5182-def6-974b-2c9aa3593138@3kitty.org> OMG!? Perhaps that was an early prototype AI-powered ARPA autonomous drone in human robotic form that somehow managed to escape from Arlington and fly to Boston and attend that meeting. Jack (well it's close to April 1....) On 3/28/22 20:25, Steve Crocker wrote: > I don?t recall giving instructions about adding mail to ftp.? Could > have happened, but I don?t recall it. > > Steve > > On Mon, Mar 28, 2022 at 11:23 PM vinton cerf via Internet-history > wrote: > > steve was at arpa 1971-1974 but he focused then on AI. > while at UCLA from 1968?-1971 he was the head of the Network > Working Group > and in that role had much to say about where we were headed. > > v > > > On Mon, Mar 28, 2022 at 11:08 PM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > > > Interesting.? ?Do you remember... Was Steve an ARPA Program > Manager at > > the time?? I.e., was adding mail to FTP an order or a > suggestion?? ?Did > > he say anything about why Mail had to be in FTP? 
Were there any FTP > > implementations already functional with some form of MAIL command > > implemented that Steve wanted to become ubiquitous? ?/Jack > > > > On 3/28/22 14:50, John Day wrote: > > > MAIL and MLFL were added at the last minute of the FTP meeting > at BBN in > > March 1973. > > > > > > We were about to wrap up the meeting when Steve Crocker came > in and said > > we have to have a Mail in FTP. So we did it. > > > > > > That was the same meeting at which in response to the question > what > > happens when one stores a file with a BYTE size of 23 and > RETRieves it with > > a BYTE size of 17? Padlipsky said, ?Sometimes when changing > oranges into > > apples one gets lemons.? And it was immediately decided that was the > > correct response. Good ol? MAP.? ;-) > > > > > > John > > > > > >> On Mar 28, 2022, at 17:24, Dave Crocker via Internet-history < > > internet-history at elists.isoc.org> wrote: > > >> > > >> On 3/28/2022 2:07 PM, Jack Haverty via Internet-history wrote: > > >>> There are artifacts in the RFCs capturing some of the early > work. FTP > > began circa 1971 with RFC172.? At the same time, there was > discussion of a > > "Mail Box Protocol" intended to enable functions like remote > printing as a > > way of sending something to someone else over the ARPANET.? ?You > just send > > it to their printer.? ?See RFCs 196, 221. > > >>> At first, FTP added a "MAIL " command, which each machine > > receiving such MAIL could process as it saw fit.? Print it out. > > >> > > >> RFC 354 (July 1972 and edited by Abhay Bhushan) does not > contain the > > string 'mail'. > > >> > > >> RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses > FTP's MAIL > > and MLFL commands. It is a meeting report discussing agreement > to create > > those commands. > > >> > > >> RFC 542 (August 1973 and edited by Nancy Neigus) does not > contain the > > string 'mail'. > > >> > > >> RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is > cites a > > mail command, it does not specify it. > > >> > > >> > > >> d/ > > >> > > >> -- > > >> Dave Crocker > > >> Brandenburg InternetWorking > > >> bbiw.net > > >> -- > > >> Internet-history mailing list > > >> Internet-history at elists.isoc.org > > >> https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From dhc at dcrocker.net Tue Mar 29 06:44:03 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Tue, 29 Mar 2022 06:44:03 -0700 Subject: [ih] SMTP History In-Reply-To: References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net> Message-ID: On 3/28/2022 2:24 PM, Dave Crocker via Internet-history wrote: > > RFC 354 (July 1972 and edited by Abhay Bhushan) does not contain the > string 'mail'. > > RFC 475 (March, 1973 and edited by Abhay Bhushan) discusses FTP's MAIL > and MLFL commands. It is a meeting report discussing agreement to create > those commands. > > RFC 542 (August 1973 and edited by Nancy Neigus) does not contain the > string 'mail'. > > RFC 765 (Aug, 1973 and edit by Jon Postel) does. But while is cites a > mail command, it does not specify it. Looking back farther: RFC 171 (june 1971, lots of authors) cites mail as one of the likely customers of a 'data transfer protocol'. This document led to RFC 354. 
RFC 196 (July 1971, Dick Watson) proposed a Mail Box Protocol.

So email was in people's minds from at least the middle of 1971. To the extent that Steve gave advice or direction in March, 1973, it would have been about priority of the standardization effort, not introducing the topic or need. Note that by then, I suspect every Tenex on the net was using Ray Tomlinson's networked email enhancement. (I don't remember when Larry Roberts's RD MUA was developed.)

One bit of fallout, from the dust-up about the invention of email, that happened a few years ago, was Ray Tomlinson's commenting to me that what he did was in reaction to ongoing work by Watson, et al. (Others had already heard this, but I hadn't.) That work was in mid-/late-1971. He didn't agree with the approach or goal their work had and thought something far simpler and fully online would be better.

By March, 1973, the challenge was not to do email, but to generalize the mechanism.

d/

-- 
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From dhc at dcrocker.net  Tue Mar 29 07:01:56 2022
From: dhc at dcrocker.net (Dave Crocker)
Date: Tue, 29 Mar 2022 07:01:56 -0700
Subject: [ih] SMTP History
In-Reply-To: 
References: <5ed28e22-b8c8-fda0-a8f4-311f878d096f@dcrocker.net>
Message-ID: <6545004d-3651-1feb-8a5e-a9555271cd14@dcrocker.net>

On 3/29/2022 6:44 AM, Dave Crocker via Internet-history wrote:
> (I don't remember when Larry Roberts's RD MUA was developed.)

sigh. hadn't bothered to check the limited timeline, at: http://emailhistory.org/Email-Timeline.html

It shows RD in 1972.

d/

-- 
Dave Crocker
Brandenburg InternetWorking
bbiw.net

From mfidelman at meetinghouse.net  Tue Mar 29 07:55:31 2022
From: mfidelman at meetinghouse.net (Miles Fidelman)
Date: Tue, 29 Mar 2022 10:55:31 -0400
Subject: [ih] SMTP History
In-Reply-To: <0ef250ec-ba74-98a5-2edb-0fd27770b34f@dcrocker.net>
References: <0ef250ec-ba74-98a5-2edb-0fd27770b34f@dcrocker.net>
Message-ID: <58243ff5-b458-9c6f-b8b5-fe93d5f7221b@meetinghouse.net>

Dave Crocker via Internet-history wrote:
> RFC 5598

A nice piece of work... kudos, Dave.

Miles

-- 
In theory, there is no difference between theory and practice. In practice, there is.  .... Yogi Berra
Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why.  ... unknown

From pnr at planet.nl  Tue Mar 29 08:13:38 2022
From: pnr at planet.nl (Paul Ruizendaal)
Date: Tue, 29 Mar 2022 17:13:38 +0200
Subject: [ih] SMTP History (Jack Haverty)
In-Reply-To: 
References: 
Message-ID: 

> Message: 3
> Date: Mon, 28 Mar 2022 14:07:04 -0700
> From: Jack Haverty
[..snip..]
> The "elaborate" scheme was loosely called "MTP" for Message Transmission Protocol.
[..snip..]
> That's what I remember, IIRC of course. I'm still waiting for MTP.......
>
> Jack Haverty

There is this code from 1981:

https://minnie.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/src/mtp

Paul

From vgcerf at gmail.com  Tue Mar 29 08:20:20 2022
From: vgcerf at gmail.com (vinton cerf)
Date: Tue, 29 Mar 2022 11:20:20 -0400
Subject: [ih] SMTP History (Jack Haverty)
In-Reply-To: 
References: 
Message-ID: 

and then there was PEM...

v

On Tue, Mar 29, 2022 at 11:13 AM Paul Ruizendaal via Internet-history < internet-history at elists.isoc.org> wrote:

> > Message: 3
> > Date: Mon, 28 Mar 2022 14:07:04 -0700
> > From: Jack Haverty
[..snip..]
> > The "elaborate" scheme was loosely called "MTP" for Message Transmission Protocol.
[..snip..]
> > > That's what I remember, IIRC of course. I'm still waiting for > MTP....... > > > > Jack Haverty > > There is this code from 1981: > > https://minnie.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/src/mtp > > Paul > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From pnr at planet.nl Tue Mar 29 09:07:41 2022 From: pnr at planet.nl (Paul Ruizendaal) Date: Tue, 29 Mar 2022 18:07:41 +0200 Subject: [ih] SMTP History In-Reply-To: References: Message-ID: <128B64E6-D04E-4C89-8272-4E2E720C070F@planet.nl> > There is this code from 1981: > > https://minnie.tuhs.org/cgi-bin/utree.pl?file=BBN-Vax-TCP/src/mtp I had a quick look through this code. The original poster may find some answers in this code as to what came before SMTP. The code seems to have its roots in 1976, written by Steve Holmgren -- one of the authors of "Arpanet Unix". It seems to have originally used FTP as its transport. Later additions are support for TCP instead of NCP, and support for MTP instead of FTP. When compiled for NCP, it seems to have first tried delivering via MTP and, failing that, falling back to FTP. From dhc at dcrocker.net Tue Mar 29 09:15:47 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Tue, 29 Mar 2022 09:15:47 -0700 Subject: [ih] SMTP History (Jack Haverty) In-Reply-To: References: Message-ID: <462ad2bf-f679-593f-fa35-370c29ee722a@dcrocker.net> On 3/29/2022 8:20 AM, vinton cerf via Internet-history wrote: > and then there was PEM... Ahh, yes, email object security. One of these days, we should find a way to make it work at scale... As I recall, PEM began as an IRT activity, then migrated into the IETF. The former was pre-1990, but the IETF specs came out around 1993. PGP came out in 1991 and was immediately deployed and important. I remember Zimmermann citing a thank you from some folks involved in the efforts that produced the dissolution of the Soviet Union... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From dhc at dcrocker.net Tue Mar 29 09:17:07 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Tue, 29 Mar 2022 09:17:07 -0700 Subject: [ih] SMTP History In-Reply-To: <58243ff5-b458-9c6f-b8b5-fe93d5f7221b@meetinghouse.net> References: <0ef250ec-ba74-98a5-2edb-0fd27770b34f@dcrocker.net> <58243ff5-b458-9c6f-b8b5-fe93d5f7221b@meetinghouse.net> Message-ID: <9c1fa79b-833c-aa20-ec4b-b813e2997eea@dcrocker.net> On 3/29/2022 7:55 AM, Miles Fidelman via Internet-history wrote: > A nice piece of work... kudos, Dave. Thanks! Perhaps oddly (though perhaps not), there is quite a bit of resistance to using that doc amongst various IETF email folk. Go figure. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From bill.n1vux at gmail.com Thu Mar 31 15:51:59 2022 From: bill.n1vux at gmail.com (Bill Ricker) Date: Thu, 31 Mar 2022 18:51:59 -0400 Subject: [ih] CXC Rose Humanation Re: Speaking of Minitel: Here's an oldie NO one remembers In-Reply-To: References: Message-ID: On Sun, Mar 27, 2022 at 3:20 PM Bob Purvy via Internet-history < internet-history at elists.isoc.org> wrote: > In the late 70's / early 80's, a friend of mine worked at a company CXI in > (CXC, although with their logo, CXI is a fair reading!) Irvine, CA. This was a real company, with maybe 250 people. I can't even > find it with Google now. > Found it for you. I get 9 hits at ARCHIVE.org with "Search text contents." 
(Sometimes one has to search the specific archives instead of trusting Google to have *everything.*) https://archive.org/search.php?query=%22The%20Rose%22%20%22Office%20Humanation%22&sin=TXT Most of which are all the same ad in different formats (and 3 are indeed in *Datamation* as you remembered): "Everyone in favor of office automation, raise your hand" (*with robotic hands*). But there's an article and one-third of a commentary column: - *MIS Week* 1984-05-23: Vol 5 Iss 21, p. 1, photo continued to p. 34 *Will ITT Distribute CSC's 'Rose' PBX?* NEW YORK -- "The Rose," a new fourth-generation PBX from start-up CXC Corp. in Irvine, Calif., will shortly put down some strong roots in the nascent garden of the U.S. office automation market by acquiring a world-class distributor. ... *ITT-CXC Accord Seen Near for 'Rose' PBX* Hawk came East last week to introduce The Rose system, Release 1, to the press. His presentation revealed that not until next year, as part of Release 2, will The Rose acquire its real muscle -- the token-ring local area network with packet switch, distributed architecture and the voice store-and-forward that will truly transform it into "fourth generation" class. ... - *Computerworld* 1984-07-04: Vol 18 Iss 27A, p. 11 A year ago, a communications magazine ran a news brief on CXC Corp's *Rose* fourth generation ... PBX. The article led readers to believe ... At the same time *Business Week* published a laudatory profile of the company ... ... CXC, which brands itself as "the office humanation company," has equipped its Rose with a plenitude of communicating bells and whistles. The Rose consists of one to 64 small microcomputer-based switches, each controlling 192 telephone lines connected by high-capacity coaxial cable. The Rose is said to include a proprietary local-area network integrating a 33M bit/sec circuit-switched ring and a 16M bit/sec token ring over 50M bit/sec broadband cable. The system is also said to include store-and-forward messaging for both text and voice mail, gateways to external data communications networks and programmable applications processors. ... Claiming that the Rose's shipping dates have coincided with the firm's original business plan in 1981, Robert Hawk, CXC's vice-president of marketing, refused to acknowledge a delay in the product's availability. It had been rumored that development of the Rose was delayed because the custom-made chips from International Microelectronic Products, Inc., another young company in California, were not yet available. When confronted with this possibility during a meeting with reporters in May, Hawk did concede that "chips had something to do with it," an apparent contradiction to his earlier assertion that no delay existed. -- Bill Ricker bill.n1vux at gmail.com https://www.linkedin.com/in/n1vux From b_a_denny at yahoo.com Thu Mar 31 16:38:08 2022 From: b_a_denny at yahoo.com (Barbara Denny) Date: Thu, 31 Mar 2022 23:38:08 +0000 (UTC) Subject: [ih] GOSIP & compliance In-Reply-To: References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> Message-ID: <967226810.844120.1648769888555@mail.yahoo.com> Craig, Is ANM (Automated Network Management) the name of Jil's project you couldn't recall? This popped up in my head today. barbara On Monday, March 28, 2022, 09:49:21 AM PDT, Craig Partridge via Internet-history wrote: Quick comments on Karl's comments (from this note and later). 
ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to return self-describing vendor specific data without requiring a supplemental MIB definition. This was important because HEMS allowed one to ask for the full status of an aggregate object (such as an interface or even an entire subsection of the MIB) and we wanted folks to be able to add additional data they thought relevant to the aggregate. ASN.1 was the only self-describing data format of the time. The UDP vs. TCP debate was pretty fierce and the experience of the time came down firmly on the UDP side. Recall this was the era of daily congestion collapse of the Internet between 1987 and 1990. Re: doing things in Python. I'm not surprised. The HEMS implementation proved reasonably small at the time. By the way, many of the features noted in CMIP were actually in HEMS and/or SNMP first. We were pretty open about playing with ideas at the time and the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to borrow liberally. (Or, at least, that's my recollection). There was a network management project in the late 1980s, the name now eludes me, but led by Jil Wescott and DARPA funded, that sounds similar in goals to what Jack H. describes doing at Oracle. I leaned on wisdom from those folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure out what HEMS should look like. As we do these assessments, it is worth remembering that the operational community of the time was struggling with the immediate challenge of managing networks that were flaky and ran on 68000 processors and where only a few 100K of memory was available for the management protocols. The SNMP team found a way to shoe-horn the key features into that limited footprint and it promptly made a *huge* difference. Craig On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < internet-history at elists.isoc.org> wrote: > ... > BTW, I thought the use of ASN.1/BER for SNMP was far from the best > choice (indeed, from an implementation quality and interoperability > point of view it would be hard to find one that was worse.) I preferred > the HEMS proposal as the most elegant, even if it did use XML. > > CMIP had some really good ideas (most particularly with regard to > selecting and filtering data at the server for highly efficient bulk > fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > was very inventive and clever. That kinda demonstrated the potential > viability, rather than the impossibility, of things like CMIP/T, even in > its bloated form. > > Diverting from the main points here: > > About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > and using JSON, throwing out UDP and using TCP (optionally with TLS), > and adding in some of the filtering concepts from CMIP. I preserved > most MIB names and instrumentation variable semantics (and thus > preserving a lot of existing instrumentation code in devices.) > > The resulting running code (in Python) is quite small - on par with the > 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it > runs several decimal orders of magnitude faster than SNMP (in terms of > compute cycles, network activity, and start-to-finish time.) Plus I can > do things like "give me all data on all interfaces with a received > packet error rate greater than 0.1%". I can even safely issue complex > control commands to devices, something that SNMP can't do very well. 
I > considered doing a commercial-grade, perhaps open-source, version but it > could have ended up disturbing the then nascent Netconf effort. > >      --karl-- > > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. -- Internet-history mailing list Internet-history at elists.isoc.org https://elists.isoc.org/mailman/listinfo/internet-history From craig at tereschau.net Thu Mar 31 16:50:31 2022 From: craig at tereschau.net (Craig Partridge) Date: Thu, 31 Mar 2022 17:50:31 -0600 Subject: [ih] GOSIP & compliance In-Reply-To: <967226810.844120.1648769888555@mail.yahoo.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <967226810.844120.1648769888555@mail.yahoo.com> Message-ID: It is indeed -- great memory! Craig On Thu, Mar 31, 2022 at 5:38 PM Barbara Denny via Internet-history < internet-history at elists.isoc.org> wrote: > Craig, > > Is ANM (Automated Network Management) the name of Jil's project you > couldn't recall? This popped up in my head today. > > barbara > > On Monday, March 28, 2022, 09:49:21 AM PDT, Craig Partridge via > Internet-history wrote: > > Quick comments on Karl's comments (from this note and later). > > ASN.1 was in SNMP because it was in HEMS. HEMS did it to allow us to > return self-describing vendor specific data without requiring a > supplemental MIB definition. This was important because HEMS allowed one > to ask for the full status of an aggregate object (such as an interface or > even an entire subsection of the MIB) and we wanted folks to be able to add > additional data they thought relevant to the aggregate. ASN.1 was the only > self-describing data format of the time. > > The UDP vs. TCP debate was pretty fierce and the experience of the time > came down firmly on the UDP side. Recall this was the era of daily > congestion collapse of the Internet between 1987 and 1990. > > Re: doing things in Python. I'm not surprised. The HEMS implementation > proved reasonably small at the time. > > By the way, many of the features noted in CMIP were actually in HEMS and/or > SNMP first. We were pretty open about playing with ideas at the time and > the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to > borrow liberally. (Or, at least, that's my recollection). > > There was a network management project in the late 1980s, the name now eludes > me, but led by Jil Wescott and DARPA funded, that sounds similar in goals to > what Jack H. describes doing at Oracle. I leaned on wisdom from those > folks (esp. the late Charlie Lynn) as Glenn Trewitt and I sought to figure > out what HEMS should look like. > > As we do these assessments, it is worth remembering that the operational > community of the time was struggling with the immediate challenge of > managing networks that were flaky and ran on 68000 processors and where > only a few 100K of memory was available for the management protocols. The > SNMP team found a way to shoe-horn the key features into that > limited footprint and it promptly made a *huge* difference. > > Craig > > On Sat, Mar 26, 2022 at 2:22 AM Karl Auerbach via Internet-history < > internet-history at elists.isoc.org> wrote: > > > ... 
> > BTW, I thought the use of ASN.1/BER for SNMP was far from the best > > choice (indeed, from an implementation quality and interoperability > > point of view it would be hard to find one that was worse.) I preferred > > the HEMS proposal as the most elegant, even if it did use XML. > > > > CMIP had some really good ideas (most particularly with regard to > > selecting and filtering data at the server for highly efficient bulk > > fetches.) Marshall Rose's rehosting CMIP onto TCP (thus creating CMOT) > > was very inventive and clever. That kinda demonstrated the potential > > viability, rather than the impossibility, of things like CMIP/T, even in > > its bloated form. > > > > Diverting from the main points here: > > > > About a dozen years ago I decided to rework SNMP, throwing out ASN.1/BER > > and using JSON, throwing out UDP and using TCP (optionally with TLS), > > and adding in some of the filtering concepts from CMIP. I preserved > > most MIB names and instrumentation variable semantics (and thus > > preserving a lot of existing instrumentation code in devices.) > > > > The resulting running code (in Python) is quite small - on par with the > > 12kbytes (machine code) of the core of my Epilogue SNMP engine. And it > > runs several decimal orders of magnitude faster than SNMP (in terms of > > compute cycles, network activity, and start-to-finish time.) Plus I can > > do things like "give me all data on all interfaces with a received > > packet error rate greater than 0.1%". I can even safely issue complex > > control commands to devices, something that SNMP can't do very well. I > > considered doing a commercial-grade, perhaps open-source, version but it > > could have ended up disturbing the then nascent Netconf effort. > > > > --karl-- > > > > > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From dhc at dcrocker.net Thu Mar 31 16:53:28 2022 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 31 Mar 2022 16:53:28 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <967226810.844120.1648769888555@mail.yahoo.com> References: <87157443-d081-cfee-04e0-59f37d062d47@cavebear.com> <5f96ddef-1dd4-91c9-41ca-a6805a591e7f@dcrocker.net> <095d2950-bddb-68f9-4d2c-e9649b68d1e2@cavebear.com> <967226810.844120.1648769888555@mail.yahoo.com> Message-ID: <91f88337-6000-1777-bcb2-7b2c1ae8e760@dcrocker.net> > ASN.1 was in SNMP because it was in HEMS. As I recall -- and I may still have been AD for network management then -- things weren't quite that simple. The politics, mostly pressed by the OSI folks, prompted pressure to accept ASN.1 in the hopes that management /data/ would be interoperable, independent of which /protocol/ ultimately won. > By the way, many of the features noted in CMIP were actually in HEMS and/or > SNMP first. We were pretty open about playing with ideas at the time and > the CMIP folks, who had an empty spec when SNMP and HEMS started, chose to > borrow liberally. (Or, at least, that's my recollection). 
+1 > As we do these assessments, it is worth remembering that the operational > community of the time was struggling with the immediate challenge of > managing networks that were flaky and ran on 68000 processors and where > only a few 100K of memory was available for the management protocols. The > SNMP team found a way to shoe-horn the key features into that > limited footprint and it promptly made a *huge* difference. +1 d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From bernie at fantasyfarm.com Thu Mar 31 17:06:20 2022 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Thu, 31 Mar 2022 20:06:20 -0400 Subject: [ih] GOSIP & compliance In-Reply-To: References: , <967226810.844120.1648769888555@mail.yahoo.com>, Message-ID: <624641FC.23433.28FA8FCD@bernie.fantasyfarm.com> On 31 Mar 2022 at 17:50, Craig Partridge via Internet-history wrote: > It is indeed -- great memory! > Craig > > On Thu, Mar 31, 2022 at 5:38 PM Barbara Denny via Internet-history < > internet-history at elists.isoc.org> wrote: > > > Craig, > > > > Is ANM (Automated Network Management) the name of Jil's project you > > couldn't recall? This popped up in my head today. Bruce Laird and I "inherited" it. I don't know exactly the politics of it all, but our project was to get it working on SUN workstations [It was written in Lisp]. We got it working and deployed in several places. And then we embarked on a more ambitious version: to make it fully distributed. The idea was that there could be a sort of "cloud" of systems receiving network monitoring data and then it would be forwarded on to another cloud of systems that'd process all that data in various ways. As I did decades before in the early ARPAnet NMC code, one idea was to aggregate the data coming in and figure out what was *really* wrong. On the ARPAnet, for example, if the network got disconnected all the nodes on the "other side" of the net would show up as "down" -- not very helpful to the staff. So I hacked it to know the network topology and it could figure out where the *actual* outage was and then put "unknown" for the ones it couldn't see. We had intended to do something like that with all the network-status feeds and it was intended to be extensible on both sides [that is, more ways to collect data and more ways to process/understand/display it]. Alas. I don't know how it all turned out: I ended up retiring and passed the project onto other folk. /Bernie\ Bernie Cosell bernie at fantasyfarm.com -- Too many people; too few sheep -- From jack at 3kitty.org Thu Mar 31 20:10:06 2022 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 31 Mar 2022 20:10:06 -0700 Subject: [ih] GOSIP & compliance In-Reply-To: <624641FC.23433.28FA8FCD@bernie.fantasyfarm.com> References: <967226810.844120.1648769888555@mail.yahoo.com> <624641FC.23433.28FA8FCD@bernie.fantasyfarm.com> Message-ID: <94157f9a-4544-cac8-9312-00232a11707f@3kitty.org> I can provide some recollections of the origin and intent of ANM -- Automated Network Management. Sometime in early 1983, Bob Kahn and I were talking one day about the Internet. In particular, we were musing about how the Internet might be operated and managed as EGP was introduced and the Internet became a loose confederation of individual Autonomous Systems, each operated and managed by a separate organization. That was quite different from the ARPANET, which had a centralized management approach with the NOC and refined it over a decade of operation. 
We had used that ARPANET model as a guide to put the first management mechanisms into the "core gateways", basically using the success of the ARPANET techniques to get the Internet going quickly as a reliable operational communications facility. But, as the saying goes, it was obvious that "it won't scale". Another difference from the ARPANET model was that many of the network mechanisms that were in ARPANET IMPs had now been placed into the attached Host computers. Packetization decisions, flow control, retransmission, reordering, and other such "virtual circuit" mechanisms were now performed by Host software rather than in the Switch. Making the situation even more complex, there was a need for non-guaranteed "datagram" service for use in applications such as interactive packet voice. With so many players now involved in providing the network service, the ARPANET approach of central monitoring and control from "the NOC" would not be viable. Manual coordination, e.g., phone calls between various NOC and Host operators to diagnose problems, seemed unlikely to work -- especially since Host operators didn't seem to think that TCP behavior was their problem. So the notion was that some kind of automation needed to be put into the management architecture, with tasks commonly done previously in the ARPANET NOC instead being done by computers and heuristic software. I.e., the goal was to automate at least some of the processes of network management. One example I recall is detecting problems in the Internet, e.g., excessive retransmissions, duplicates, lost datagrams, or other behavior that was unusual (whatever that might mean). Gateways (routers) could collect all sorts of data about traffic flow, packet drops, and TTL timeouts. But only Hosts could detect the need to retransmit, discard duplicates, and monitor behavior of flow control windows. Each of the related "operators" could clearly collect data and make it available via SNMP or similar protocol. Even a simple TCP connection through the Internet would involve three or more operators: the Hosts at the endpoints and at least one Autonomous System in the transit path. With the proliferation of LANs, it seemed likely that a common scenario would involve 4, 5 or more separate "managers" for each TCP (or UDP) "connection". So the focus of Automated Network Management was of course defining the mechanisms to collect such data from multiple sources, but more importantly exploring what some intelligent software could *do* with that data, i.e., analyze it, draw conclusions about what, if any, problem existed, and do something to mitigate the situation. One example might be if excessive latency was detected, such that some user application was being disrupted. This could happen in packet voice, for example, if an audio packet didn't reach its destination in time to be sent to the speaker. At the time, we had been experimenting with "dial-up" circuits, where additional bandwidth could be added to the Internet between 2 points by creating a dial-up circuit between those points. So one function of ANM might be to detect that the problem was occurring, isolate where the delay was occurring, and create a bypass path using dial-up to reduce the latency between those 2 points. That might lead to some future architecture where the topology of the Internet was highly dynamic, with many "circuits" being added and subtracted between appropriate routers as decided on the fly by the Automated system. 
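[A minimal illustrative sketch, in Python, of the kind of automated detect-and-mitigate loop described above -- not ANM code. The helper functions, thresholds and circuit names are hypothetical placeholders introduced only for the example.]

import time
from collections import deque

LATENCY_LIMIT_MS = 300      # illustrative service threshold
SAMPLES = 5                 # consecutive samples required before acting

def measure_latency_ms(a, b):
    """Placeholder: would pull delay statistics for the a<->b path."""
    return 100.0

def order_dialup_circuit(a, b):
    """Placeholder: would ask a dial-up multiplexor for an extra a<->b trunk."""
    print("bring up dial-up circuit %s <-> %s" % (a, b))
    return "circuit:%s-%s" % (a, b)

def release_circuit(circuit_id):
    """Placeholder: would tear the temporary trunk back down."""
    print("release %s" % circuit_id)

def manage_path(a, b, poll_seconds=60.0):
    history = deque(maxlen=SAMPLES)
    bypass = None
    while True:
        history.append(measure_latency_ms(a, b))
        full = len(history) == SAMPLES
        if full and min(history) > LATENCY_LIMIT_MS and bypass is None:
            bypass = order_dialup_circuit(a, b)   # add bandwidth on the fly
        elif full and max(history) < LATENCY_LIMIT_MS / 2 and bypass is not None:
            release_circuit(bypass)               # shrink the topology again
            bypass = None
        time.sleep(poll_seconds)

[The same skeleton extends to the prediction case mentioned next: replace the threshold test with a trend estimate over the sample history.]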
It might even be possible to predict the need for a topology change based on past experience or trend curves, and achieve Problem Avoidance rather than Problem Detection. Obviously there is a lot of detail missing in what I just wrote, describing how to detect the problem, how to figure out where a dial-up circuit should be added, and how to monitor the subsequent activity and detect when that dial-up link could be terminated. And, of course, how to make sure that such actions didn't drive the routing protocols and mechanisms to insanity. And of course there were many other scenarios involved in network operations that might be able to be automated. That was all research to be done. IIRC, I wrote something as a proposal to be added to the next ARPA contract. As usual, it probably basically said "Send money; we promise to do good stuff" with deliverables being only Quarterly Reports. The work could begin when the contract renewed for the next government fiscal year in September 1983. What I just wrote above was probably just in my head, not yet written down, but the work would start in a few months. IIRC, Bob Kahn and I were both happy with that. Where the "Politics" came into the picture was on July 1, 1983. BBN had a significant reorganization on that date, and I discovered that many of my projects had been reallocated and now would reside in several different divisions and different subsidiaries. By contract renewal, those changes were complete. I was in one subsidiary, and ANM in another. So I never actually worked on ANM, and lost track of what it became in its new home. Perhaps someone else can describe what happened. Did any of the research make it into today's operating ISPs? When I was involved in operating a corporate internet in the early 90s, it would have been nice to have ANM tools! Jack Haverty On 3/31/22 17:06, Bernie Cosell via Internet-history wrote: > On 31 Mar 2022 at 17:50, Craig Partridge via Internet-history wrote: > >> It is indeed -- great memory! >> Craig >> >> On Thu, Mar 31, 2022 at 5:38 PM Barbara Denny via Internet-history < >> internet-history at elists.isoc.org> wrote: >> >>> Craig, >>> >>> Is ANM (Automated Network Management) the name of Jil's project you >>> couldn't recall? This popped up in my head today. > Bruce Laird and I "inherited" it. I don't know exactly the politics of it all, but our > project was to get it working on SUN workstations [It was written in Lisp]. We > got it working and deployed in several places. And then we embarked on a more > ambitious version: to make it fully distributed. The idea was that there could be a > sort of "cloud" of systems receiving network monitoring data and then it would > be forwarded on to another cloud of systems that'd process all that data in > various ways. > > As I did decades before in the early ARPAnet NMC code, one idea was to > aggregate the data coming in and figure out what was *really* wrong. On the > ARPAnet, for example, if the network got disconnected all the nodes on the > "other side" of the net would show up as "down" -- not very helpful to the staff. > So I hacked it to know the network topology and it could figure out where the > *actual* outage was and then put "unknown" for the ones it couldn't see. We > had intended to do something like that with all the network-status feeds and it > was intended to be extensible on both sides [that is, more ways to collect data > and more ways to process/understand/display it]. > > Alas. 
I don't know how it all turned out: I ended up retiring and passed the > project onto other folk. > > /Bernie\ > > Bernie Cosell > bernie at fantasyfarm.com > -- Too many people; too few sheep -- >
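[A minimal illustrative sketch, in Python, of the topology-aware fault isolation Bernie Cosell describes above -- not the original NMC code. Starting from the monitoring center, it only trusts links whose far end is still reporting, so nodes cut off by a failure come back as "unknown" rather than "down". The graph, node names and statuses are made up for the example.]

from collections import deque

def classify_nodes(topology, reporting, monitor):
    """topology: {node: set of neighbours}; reporting: nodes heard from recently."""
    # Breadth-first search over links whose far end is still reporting,
    # starting at the monitoring center.
    reachable, frontier = {monitor}, deque([monitor])
    while frontier:
        node = frontier.popleft()
        for nbr in topology.get(node, ()):
            if nbr in reporting and nbr not in reachable:
                reachable.add(nbr)
                frontier.append(nbr)
    status = {}
    for node in topology:
        if node in reporting:
            status[node] = "up"
        elif any(nbr in reachable for nbr in topology[node]):
            status[node] = "down"       # a live neighbour can see that it is gone
        else:
            status[node] = "unknown"    # on the far side of the outage
    return status

if __name__ == "__main__":
    topo = {"NMC": {"A"}, "A": {"NMC", "B"}, "B": {"A", "C"}, "C": {"B"}}
    # Node B has failed, cutting C off from the monitoring center.
    print(classify_nodes(topo, reporting={"NMC", "A"}, monitor="NMC"))
    # -> {'NMC': 'up', 'A': 'up', 'B': 'down', 'C': 'unknown'}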