[ih] IPv8...
Greg Skinner
gregskinner0 at icloud.com
Mon Apr 20 11:07:53 PDT 2026
On Apr 19, 2026, at 11:38 PM, Jack Haverty <jack at 3kitty.org> wrote:
>
> Aah, OK, I don't remember exactly what I wrote on the various lists but here's some more detail...
>
> The various messaging forums were more about implementation than scenarios. There were other emails, not usually on lists, often from Lick or his "chief of staff" Al Vezza, lobbying various people at ARPA or other contractors to support Lick's vision. I often wrote some of the content, but the emails came from Lick or Al.
>
> On the mailing lists, the focus was on what was needed for implementing the vision. Lick's group had a PDP-10 on the ARPANET, but Lick's vision included lots of computers all talking to each other. Without other players it was difficult to see how to try out things like protocols, formats, etc. A network with only one host computer is not very interesting.
>
> A major element of Lick's vision was that everything humans did using the galactic network would always involve at least two computers. Everyone would have access to "their" computer, which would actually make things happen by communicating with the "other guy's" computer. Gemini summarized that as "everyone had a terminal at home". In the 1970s, terminals were the common way to interact with your computer somewhere across the ARPANET; no one could afford a personal computer of their own. Today of course it would be the phone in your pocket, or the PC on your desk, or the tablet you use for video chats with your relatives or neighbors or colleagues. Or all of the above. The computers would all talk amongst themselves and sort it all out.
>
> With two computers involved, lots of issues were expected. The system became a multiprocessor, with elements distributed over a possibly wide area of geography and of time. Issues such as "locking" had to be solved, as well as the protocols, packet formats, and such stuff.
>
> The ubiquity of computers motivated the need for technology appropriate to computer-computer interactions, rather than human-computer ones. At the time, the network community preferred interactions that were understandable by humans. That made it much easier to debug programs. You could even send email by connecting your terminal directly to another site's FTP server and typing your email at it. You had to be careful not to make mistakes, since the FTP servers didn't provide any support for backspace. Such techniques were useful in debugging problems since the interactions were all human-readable, as well as writable.
>
> Many of my posts on the mailing lists in the 1970s were about a proposed mechanism, an initial protocol and formats, that would be friendlier to computer-to-computer interaction. I had spent many hours writing heuristics to try to figure out the information my mail system was receiving in the headers of incoming messages. There was little structure and lots of human artistic creativity -- e.g., "From: The Desk of so-and-so" or "Date: It's lunchtime!" Converting such information into something a computer could use was ... difficult. We didn't even have today's AIs back then.
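>
> To give a flavor of those heuristics, here is a present-day Python sketch (not the original code, and the patterns are purely illustrative):
>
>     import re
>     from email.utils import parsedate_to_datetime
>
>     def guess_sender(from_line):
>         """Try to pull something machine-usable out of a free-form From: line."""
>         # Prefer an explicit user@host token if one happens to be present.
>         m = re.search(r'[\w.+-]+@[\w.-]+', from_line)
>         if m:
>             return m.group(0)
>         # Otherwise strip decorations like "The Desk of" and hope a name remains.
>         return re.sub(r'(?i)^from:\s*(the\s+desk\s+of\s+)?', '', from_line).strip() or None
>
>     def guess_date(date_line):
>         """Return a datetime if the Date: line is parseable, else None."""
>         value = re.sub(r'(?i)^date:\s*', '', date_line).strip()
>         try:
>             return parsedate_to_datetime(value)
>         except (TypeError, ValueError):
>             return None          # "Date: It's lunchtime!" lands here
>
>     print(guess_sender("From: The Desk of so-and-so"))   # -> 'so-and-so'
>     print(guess_date("Date: It's lunchtime!"))           # -> None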
>
> That effort became RFC 713 (see https://www.rfc-editor.org/rfc/rfc713.html ) and generated a lot of backpressure from the community. It was intended as a way to transfer data structures from one machine to another, much as FTP had allowed files to be transferred for several years. But many people wanted their headers to remain human-oriented, and didn't see the need to do more implementation work. Lots of messages on the lists capture that debate.
>
> At about the same time I wrote RFC 722, which contained some basic principles for a system in which computers interacted with other computers, rather than humans interacting with their computers using plain terminals. Likewise, that was the beginning of a complex implementation that wasn't popular.
>
> There were other RFCs planned to flesh out more details, but after ARPA shelved the project there was no point in writing them.
>
> One major element of the vision that did achieve traction survives today - the "Message-ID" field in the typical email header. The notion in Lick's vision was that when a message was created it would be assigned a unique identifier. That task was left up to whatever computer was used to create the message, with the ID containing the identity of that computer plus whatever it decided was something unique that it could generate.
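>
> The convention is easy to sketch in present-day Python (this is just an illustration of the idea, not any particular mailer's code):
>
>     import os, socket, time
>
>     def make_message_id():
>         host = socket.getfqdn()                       # identity of the creating computer
>         local = "%d.%d" % (int(time.time() * 1000), os.getpid())   # something unique on that host
>         return "<%s@%s>" % (local, host)
>
>     print(make_message_id())   # e.g. <1713640073123.4242@mail.example.org>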
>
> The implementation vision was that any particular message, uniquely identified by its ID, was frozen when it was created, and could not be subsequently changed. However, it could be passed across the network, as a data structure between mail servers. It might go to each recipient as the sender's computer talked to the recipient's machine. Or it might be retrievable from "the Datacomputer", which was essentially a NAS to serve the entire ARPANET community. Or your computer might connect to the original sender's computer, as identified by the structure of the message-ID, and retrieve that message from the source. No matter how it was retrieved, you'd get the same thing if you had the necessary Message-ID.
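>
> Sketched in present-day Python (the class and store names are invented for illustration), the retrieval idea looks roughly like this:
>
>     from dataclasses import dataclass
>
>     @dataclass(frozen=True)          # frozen: the message cannot change after creation
>     class Message:
>         message_id: str
>         sender: str
>         body: str
>
>     class MessageStore:
>         """Could live on the sender's host, the recipient's host, or a shared Datacomputer."""
>         def __init__(self):
>             self._by_id = {}
>         def put(self, msg):
>             self._by_id.setdefault(msg.message_id, msg)   # later copies are identical anyway
>         def get(self, message_id):
>             return self._by_id.get(message_id)
>
>     # Whichever store you ask, the same Message-ID yields the same frozen message.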
>
> In the scenarios, this architecture had useful consequences. Messages could be forwarded by simply sending a Message-ID. If a recipient wanted to comment on a particular piece of some message, some kind of "structured text" scheme would indicate which part of the message was involved. The program displaying a message for a human would then know enough about the structure to display the message as the user desired, e.g., hiding or showing the pieces which a previous commenter had highlighted. Contrast that scenario with the typical one today, where a long sequence of messages in threaded form is virtually impossible for a human to sort out.
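>
> A sketch of the "forward by reference" and "comment on a piece" ideas, with a part-numbering scheme invented purely for illustration:
>
>     from dataclasses import dataclass
>
>     @dataclass(frozen=True)
>     class StructuredMessage:
>         message_id: str
>         parts: tuple             # e.g. (("1", "first paragraph"), ("2", "second paragraph"))
>
>     @dataclass(frozen=True)
>     class Comment:
>         message_id: str          # the comment's own ID
>         refers_to: str           # Message-ID of the message being discussed
>         part: str                # which piece of that message
>         text: str
>
>     def render(comment, store):
>         """A displaying program fetches the referenced piece and shows it inline.
>         Here 'store' is just a dict keyed by Message-ID."""
>         original = store[comment.refers_to]
>         quoted = dict(original.parts)[comment.part]
>         return "| %s\n%s" % (quoted, comment.text)
>
>     # Forwarding a message is then just a matter of sending its Message-ID, not a copy of its text.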
>
> Another scenario identified many different types of "Roles", even for a single sender. Different roles might reflect different scenarios and different levels of authority. A message from the CEO of some company might be handled differently from another message from the same human acting in the role of his son's Scout Leader.
>
> Yet another scenario, built on top of Roles, would be the various "workflow" paths involved in sending a single message. In the military environment, all messages might formally come from the Base Commander role, but likely went through a long pathway to get there. A message might have to be approved, for example, by the legal overseers. Workflows might be accomplished using a series of independent messages within the organization as the message worked through the workflow steps.
>
> Similar workflows often exist in corporate environments. The CEO may issue a message, but along its workflow it may have been checked and approved by legal, marketing, finance, and other such departments.
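>
> Both of those scenarios can be sketched in present-day Python (all names below are invented for illustration): a message is attributed to a role rather than just a person, and each approval step is itself a message that refers to the draft by its Message-ID.
>
>     from dataclasses import dataclass
>
>     @dataclass(frozen=True)
>     class RoleMessage:
>         message_id: str
>         author: str              # the human
>         role: str                # e.g. "CEO" or "Scout Leader"; recipients may treat these differently
>         body: str
>
>     @dataclass(frozen=True)
>     class Approval:
>         message_id: str
>         refers_to: str           # Message-ID of the draft being approved
>         approver_role: str       # e.g. "Legal", "Marketing", "Finance"
>
>     def released(draft, approvals, required=("Legal", "Marketing", "Finance")):
>         """The draft goes out under the issuing role only after every required
>         department has sent its own approval message for that draft."""
>         got = {a.approver_role for a in approvals if a.refers_to == draft.message_id}
>         return all(r in got for r in required)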
>
> The key in all of these scenarios was that they were accomplished by computers talking to other computers, able to exchange data structures and to keep all the individual components secure, private if needed, and their sources authenticated. Contrast that with the "headers" you probably see on messages such as this one.
>
> Hope this helps explain what all that discussion was about back in the 1970s...
>
> /Jack Haverty
>
OK. Possibly some of this correspondence among you, Al Vezza, Lick, etc. is available from the MIT Libraries collections.
From the looks of RFC 713, it seems that from an IETF perspective, that type of activity takes place in the Applications and Real-Time Area <https://datatracker.ietf.org/wg/#ART>. There are several working groups in that area. I don't know offhand to what extent they discuss scenarios and/or use cases, but that information may be available in their mailing lists, meeting videos (available on YouTube), etc.
--gregbo