[ih] IPv8...
Jack Haverty
jack at 3kitty.org
Sun Apr 19 23:38:43 PDT 2026
Aah, OK, I don't remember exactly what I wrote on the various lists but
here's some more detail...
The various messaging forums were more about implementation than
scenarios. There were other emails, not usually on lists, often from
Lick or his "chief of staff" Al Vezza, urging various people at ARPA
and other contractors to support Lick's vision. I often wrote some of
the content, but the emails came from Lick or Al.
On the mailing lists, the focus was on what was needed for implementing
the vision. Lick's group had a PDP-10 on the ARPANET, but Lick's vision
included lots of computers all talking to each other. Without other
players it was difficult to see how to try out things like protocols,
formats, etc. A network with only one host computer is not very
interesting.
A major element of Lick's vision is that everything humans did using the
galactic network would always involve at least two computers. Everyone
would have access to "their" computer, which would actually make things
happen by communicating with the "other guy's" computer. Gemini
summarized that as "everyone had a terminal at home". In the 1970s
terminals were the common way to interact with your computer somewhere
across the ARPANET. Hardly anyone could afford a personal computer of
their own. Today of course it would be the phone in your pocket, or the
PC on your desk, or the tablet you use for video chats with your
relatives or neighbors or colleagues. Or all of the above. The
computers would all talk amongst themselves and sort it all out.
With two computers involved, lots of issues were expected. The system
became a multiprocessor, with elements distributed over a possibly wide
area of geography and of time. Lots of issues, such as "locking," had
to be solved, along with the protocols, packet formats, and such stuff.
The ubiquity of computers motivated the need for technology appropriate
to computer-computer interactions, rather than human-computer ones. At
the time, the network community preferred interactions that were
understandable by humans. That made it much easier to debug programs.
You could even send email by connecting your terminal directly to
another site's FTP server and typing your email at it. You had to be
careful not to make mistakes, since the FTP servers didn't provide any
support for backspace. Such techniques were useful in debugging
problems since the interactions were all human-readable, as well as
writable.
Many of my posts on the mailing lists in the 1970s were about a proposed
mechanism for an initial protocol and formats that would be more
friendly to computers interacting. I had spent many hours writing
heuristics to try to figure out the information my mail system was
receiving in the headers of incoming messages. There was little
structure and lots of human artistic creativity -- e.g., "From: The Desk
of so-and-so" or "Date: It's lunchtime!" Converting such information
into something a computer could use was ... difficult. We didn't have
even today's AIs back then.
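To give a flavor of what those heuristics were up against, here is a
minimal sketch, in modern Python rather than anything from the era; the
patterns, function names, and fallbacks are my own invention, not the
actual code:

```python
# Illustrative only: tiny heuristics for free-form 1970s headers like
# "From: The Desk of so-and-so" or "Date: It's lunchtime!".
import re
from email.utils import parsedate_to_datetime

def parse_from(value: str):
    """Try to find a real mailbox (user@host); otherwise keep the raw
    text as a display string and admit no address was found."""
    match = re.search(r"[\w.+-]+@[\w.-]+", value)
    if match:
        return {"address": match.group(0), "raw": value}
    return {"address": None, "raw": value}

def parse_date(value: str):
    """Try the standard date syntax; give up on artistic dates."""
    try:
        return parsedate_to_datetime(value)
    except (TypeError, ValueError):
        return None  # "It's lunchtime!" defeats the parser

assert parse_from("The Desk of so-and-so")["address"] is None
assert parse_from("jack@mit-dms.arpa")["address"] == "jack@mit-dms.arpa"
assert parse_date("It's lunchtime!") is None
```

The fallbacks are the point: when the header carried no machine-usable
structure, the receiving program could only guess or give up.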
That effort became RFC713 (see https://www.rfc-editor.org/rfc/rfc713.html
) and generated a lot of pushback from the community. It was
intended as a way to transfer data structures from one machine to
another, much as FTP had allowed files to be transferred for several
years. But many people wanted their headers to remain human-oriented,
and didn't see the need to do more implementation work. Lots of
messages on the lists capture that debate.
At about the same time I wrote RFC722, which contained some basic
principles for a system in which computers interacted with other
computers rather than humans interacting with their computers using
plain terminals. That too was the beginning of a complex
implementation effort, and it wasn't popular either.
There were other RFCs planned to flesh out more details, but after ARPA
shelved the project there was no point in writing them.
One major element of the vision that did achieve traction survives today
- the "Message-ID" field in the typical email header. The notion in
Lick's vision was that when a message was created it would be assigned a
unique identifier. That task was left up to whatever computer was used
to create the message, with the ID containing the identity of that
computer plus whatever unique token that computer chose to generate.
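The minting scheme described above can be sketched in a few lines.
This is an illustrative modern rendering, not the actual 1970s format;
it assumes an ID shaped like today's <token@host> Message-IDs, and the
host name and token construction are hypothetical:

```python
# Mint a Message-ID containing the creating host's identity plus a
# token that host can guarantee it will never reuse.
import time
import uuid

def make_message_id(host: str) -> str:
    """The creating computer picks any locally-unique token; here, a
    timestamp plus a random UUID."""
    token = f"{int(time.time())}.{uuid.uuid4().hex}"
    return f"<{token}@{host}>"

def originating_host(message_id: str) -> str:
    """Recover the creating host from the ID's structure -- which is
    what let a reader go back to the source for the message."""
    return message_id.strip("<>").rsplit("@", 1)[1]

mid = make_message_id("mit-dms.arpa")  # hypothetical host name
assert originating_host(mid) == "mit-dms.arpa"
```

Because the host embeds its own identity, uniqueness only has to be
guaranteed locally, with no network-wide coordination.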
The implementation vision was that any particular message, uniquely
identified by its ID, was frozen when it was created, and could not be
subsequently changed. However, it could be passed across the network,
as a data structure between mail servers. It might go to each
recipient as the sender's computer talked to the recipient's machine.
Or it might be retrievable from "the Datacomputer", which was
essentially a NAS to serve the entire ARPANET community. Or your
computer might connect to the original sender's computer, as identified
by the structure of the message-ID, and retrieve that message from the
source. No matter how it was retrieved, you'd get the same thing if
you had the necessary Message-ID.
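A toy model of that "frozen message" property, with names and store
layout invented for illustration: because content never changes after
creation, every retrieval path is interchangeable.

```python
# Any copy -- the recipient's server, a shared store like the
# Datacomputer, or the origin host -- yields identical content for the
# same Message-ID.
def fetch(message_id: str, stores) -> str:
    """Try each retrieval path in turn and return the first copy found;
    since messages are frozen, all paths return the same thing."""
    for name, store in stores:
        body = store.get(message_id)
        if body is not None:
            return body
    raise KeyError(f"no copy of {message_id} found")

mid = "<1976.0042@mit-dms.arpa>"       # hypothetical ID
origin = {mid: "Meeting at noon."}      # the creating host's copy
datacomputer = dict(origin)             # replica on the shared store
local = {}                              # recipient has no copy yet

paths = [("local", local), ("datacomputer", datacomputer),
         ("origin", origin)]
assert fetch(mid, paths) == "Meeting at noon."
```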
In the scenarios this architecture had useful consequences. Messages
could be forwarded by simply sending a Message-ID. If a recipient
wanted to comment on a particular piece of some message, some kind of
"structured text" scheme would indicate the particular part of the
message involved. The program displaying a message for a human would
then know enough about the structure to be able to display the message
as the user desired, e.g., hiding or displaying the pieces which the
previous commenter had highlighted. Contrast that scenario with the
typical one today, where a long sequence of messages in a threaded form
is virtually impossible for a human to sort out.
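One way to picture the "structured text" idea: if a message is an
immutable tree of parts, a comment can reference a piece by
(Message-ID, path) instead of quoting it. The data layout below is a
sketch of my own, not the scheme actually proposed:

```python
# A message as a nested structure of parts; a path of indices names
# any particular piece.
message = {
    "id": "<123@host>",   # hypothetical ID
    "parts": ["Agenda", ["Item one", "Item two"], "Closing"],
}

def part_at(parts, path):
    """Resolve a path like (1, 1) to the nested part it names."""
    node = parts
    for index in path:
        node = node[index]
    return node

# A comment points at part (1, 1) of the frozen message rather than
# copying its text; a display program can then show or hide that part.
comment = {"on": ("<123@host>", (1, 1)), "text": "Disagree with this."}
assert part_at(message["parts"], comment["on"][1]) == "Item two"
```

Since the referenced message is frozen, the path can never dangle or
point at altered text, which is what makes the reference trustworthy.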
Another scenario identified many different types of "Roles" even for a
single sender. Different roles might reflect different scenarios and
different levels of authority. A message from the CEO of some company
might be handled differently from a message sent by the same human
acting in the role of his son's Scout Leader.
Yet another scenario built on top of Roles would be the various
"workflow" paths involved in sending a single message. In the military
environment, all messages might formally come from the Base Commander
role, but likely went through a long pathway to get there. A message
might have to be approved, for example by the legal overseers.
Workflows might be accomplished using a series of independent messages
within the organization, as the message worked its way through the
workflow steps.
Similar workflows often exist in corporate environments. The CEO may
issue a message, but along its workflow it may have been checked and
approved by legal, marketing, finance, and other such departments.
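The workflow scenario can be sketched as a sequence of role approvals
culminating in release from the organizational role rather than the
individual. The class, role names, and steps below are invented for
illustration, not a description of any real system:

```python
# Each approval would, in the envisioned architecture, be its own
# message inside the organization; here it is just recorded in a list.
from dataclasses import dataclass, field

@dataclass
class Draft:
    body: str
    required: tuple = ("legal", "marketing", "finance")
    approvals: list = field(default_factory=list)

    def approve(self, role: str) -> None:
        """Record sign-off from one of the required departments."""
        if role in self.required and role not in self.approvals:
            self.approvals.append(role)

    def release(self, from_role: str = "CEO") -> dict:
        """Issue the message from the organizational role -- only once
        every required approval has been collected."""
        missing = [r for r in self.required if r not in self.approvals]
        if missing:
            raise PermissionError(f"awaiting approval from: {missing}")
        return {"from": from_role, "body": self.body}

d = Draft("Quarterly results attached.")
for role in ("legal", "marketing", "finance"):
    d.approve(role)
released = d.release()
assert released["from"] == "CEO"
```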
The key in all of these scenarios was that they were accomplished by
computers talking to other computers, able to exchange data structures,
and keep all the individual components secure, private if needed, and
their sources authenticated. Contrast that with the "headers" you
probably see on messages such as this one.
Hope this helps explain what all that discussion was about back in the
1970s...
/Jack Haverty
On 4/19/26 14:13, Greg Skinner wrote:
> On Apr 19, 2026, at 3:34 AM, Vint Cerf <vint at google.com> wrote:
>>
>> From Gemini:
>>
>> J.C.R. Licklider, often called "computing's Johnny Appleseed,"
>> didn't just view electronic mail as a digital version of a post
>> office. In his seminal 1968 paper, *"The Computer as a Communication
>> Device"* (co-authored with Robert Taylor), he envisioned a future
>> where email was the backbone of collaborative intelligence.
>>
>> Licklider’s scenarios for electronic mail weren't just about moving
>> text; they were about *distributed thinking.* Here are the key
>> scenarios and concepts he outlined:
>>
>> ------------------------------------------------------------------------
>>
>>
>> 1. The "OLIVER" (Automated Digital Assistants)
>>
>> One of Licklider’s most prophetic scenarios involved what he called
>> the *OLIVER* (/On-Line Interactive Vicarious Expediter and Responder/).
>>
>> * *The Scenario:* He imagined a personalized program that would
>> reside in the network to handle your mail.
>>
>> * *The Function:* The OLIVER would screen incoming messages, reply
>> to routine inquiries on your behalf, and prioritize important
>> data. This is essentially the 1960s vision of *AI-driven inbox
>> management* and automated responders.
>>
>>
>> 2. Communities of Common Interest
>>
>> Licklider predicted that electronic mail would destroy the tyranny of
>> geography.
>>
>> * *The Scenario:* He argued that people would no longer be grouped
>> by where they lived, but by what they cared about.
>>
>> * *The Impact:* He foresaw "online interactive communities" where
>> specialists across the globe could exchange ideas instantly. To
>> Licklider, email was the "glue" that would hold these
>> non-geographic communities together.
>>
>>
>> 3. Collaborative "Modeling"
>>
>> For Licklider, communication wasn't just sending a message; it was
>> about *cooperation.*
>>
>> * *The Scenario:* He envisioned two researchers using the network
>> to work on a shared "model" (data or a simulation).
>>
>> * *The Function:* Email and network messaging would allow people to
>> send not just words, but executable programs and data sets. One
>> person could "mail" a piece of code, and the receiver could run
>> it immediately to see the sender’s logic.
>>
>>
>> 4. Asynchronous Problem Solving
>>
>> He recognized that human schedules are messy.
>>
>> * *The Scenario:* Licklider saw electronic mail as a way to
>> facilitate "interpersonal communication" without requiring both
>> parties to be present at the same time.
>>
>> * *The Value:* This would allow for a "more thoughtful and
>> better-documented" exchange than a telephone call, as users could
>> take time to research a response before sending it back through
>> the system.
>>
>>
>> 5. The "Information Utility"
>>
>> Licklider compared the future of electronic communication to a public
>> utility, like electricity or water.
>>
>> * *The Scenario:* He imagined a world where everyone—not just
>> scientists—had a terminal at home.
>>
>> * *The Daily Use:* People would use the "mail" system to check bank
>> balances, schedule appointments, and engage in "the creative
>> process" of social interaction.
>>
>> ------------------------------------------------------------------------
>>
>>
>> Summary of Licklider’s Vision
>>
>> *Feature*    *Licklider's Prediction*        *Modern Equivalent*
>> *Medium*     "The message is the model"      Shared Google Docs / GitHub
>> *Agent*      The OLIVER                      AI Assistants (Copilot, Gemini)
>> *Geography*  "Communities of interest"       Subreddits / Discord / Slack
>> *Speed*      "Interactive but asynchronous"  Modern Email / Threaded messaging
>>
>> "In a few years, men will be able to communicate more effectively
>> through a machine than face to face."
>>
>> — *J.C.R. Licklider, 1968*
>>
>> Licklider’s genius was realizing that the computer wasn't a "giant
>> brain" meant for calculating trajectories, but a *medium* meant for
>> connecting human minds.
>>
>>
>
> Thanks for the scenarios. I probably should have been more specific.
> I was hoping Jack would give examples of *what he wrote* *on those
> mailing lists* that he considered to be scenarios based on Lick’s
> visions. That way, we could make some comparisons between the way
> that was done and how it’s done on IETF lists today. Since in his
> response to Tony he mentioned email archives which are probably lost
> by now, I thought the header-people and msggroup lists, both of which
> have survived (mostly), might at least give some idea of how those
> scenarios were discussed.
>
> I am not an email guru, so my assessments are based on a cursory
> glance at some IETF lists that are about email standards. There is a
> list called ietf-822 that seems similar to the header-people and
> msggroup lists, in the sense that its participants bring scenarios to
> the discussions. [1] But ietf-822 is not a working group list. There
> are some other lists such as emailcore and mailmaint that are working
> group lists. [2] [3] The discussions are more focused on getting
> drafts ready for publication, making use of more specific information,
> such as use cases. In my experience, use cases are more concrete,
> thus more of an aid to getting drafts publication-ready.
>
> --gregbo
>
> [1] https://mailarchive.ietf.org/arch/browse/ietf-822/
> [2] https://mailarchive.ietf.org/arch/browse/emailcore/
> [3] https://mailarchive.ietf.org/arch/browse/mailmaint/
>
>
>> On Sun, Apr 19, 2026 at 3:13 AM Greg Skinner via Internet-history
>> <internet-history at elists.isoc.org> wrote:
>>
>> On Apr 18, 2026, at 6:15 PM, Jack Haverty via Internet-history
>> <internet-history at elists.isoc.org> wrote:
>> >
>> >
>> > On 4/18/26 14:38, Tony Li wrote:
>> >> Hi Jack,
>> >>
>> >>> Somewhere in the timeline of Internet History, the notion of
>> scenarios as drivers of technical choices must have disappeared.
>> >> No, not at all. In fact, it’s hard to get any solution
>> through the IETF anymore without an independent “use cases” document.
>> >>
>> >> Regards,
>> >> Tony
>> >>
>> > "Use Cases" and "Scenarios" are different things. Both are needed.
>> >
>> > My understanding of "Use Cases" is that they serve to show how
>> some particular technology (protocol, algorithm, whatever) can be
>> actually used in the Internet. In other words, they are driven
>> from the technology side, explaining how a technology could be used.
>> >
>> > In contrast, "Scenarios" are driven from the end-users'
>> perspective. They capture things that the end users need to be
>> able to do, given the overall system of technologies that exist
>> at the time. The C3I scenario I described earlier captured one
>> example of what the aggregate of military end-users needed to do,
>> using the Internet to provide the communications infrastructure.
>> If there was a technological piece still missing, the scenario
>> was not possible.
>> >
>> > A particular technology may be useful and even necessary. But
>> by itself it is likely insufficient to actually enable any but
>> the simplest end-users' scenarios. Other technologies may also
>> be needed before a particular scenario is workable. Sometimes
>> many technologies have to exist and work together.
>> >
>> > It's also conceivable that some particular decision of a
>> technology precludes ever reaching the goal of enabling a
>> scenario. A particular technology decision with a "use case" may
>> rule out approaches to other technology issues that must also
>> exist to enable the scenario.
>> >
>> > In the C3I example I described, lots of technology advances
>> seemed to be likely needed. Routing algorithms almost certainly
>> needed to evolve. Congestion and flow control likely needed
>> changes too. In military contexts, security was always a
>> requirement. Techniques for prioritizing traffic flows were
>> likely needed. Techniques for compressing large documents, voice
>> streams, et al had to be created. Etc. To meet the needs of
>> the scenario, all the technical pieces had to exist and work
>> together.
>> >
>> > I first encountered "scenarios" while I was a student, in
>> Professor Licklider's group at MIT. Lick was my adviser and
>> later boss for several years in the 1970s. About ten years
>> earlier, Lick had written memos about his vision of a "galactic
>> network" in which computers were available to humans everywhere,
>> and were somehow interconnected so that they could communicate
>> with each other.
>> >
>> > Lick's training was in psychology, so he thought from the
>> human's end-user point of view. He described the "scenario" of
>> his "galactic network" vision as "computers everywhere helping
>> humans do everything that humans do." He understood that such a
>> scenario was a bit too vague to serve as a specification. But
>> for a vision that was OK, and could serve to create other more
>> detailed scenarios -- like the military ones.
>> >
>> > In the mid-70s, one of the network research topics focussed on
>> what came to be called email on the ARPANET. I recall lots of
>> discussions with Lick and many others to define relevant and more
>> detailed scenarios. Lick's vision was broad, encompassing all
>> sorts of human-human communication. Emails might be short notes
>> or massive documents. They might be urgent, and require
>> mechanisms for tracking through delivery. They might be
>> multi-media, perhaps starting as text communications, switching
>> to conversational voice, and even evolving into an interactive
>> conferencing session using text and/or voice (video was just too
>> hard to think about in the 1970s). Interactions could be saved
>> (such as on the "Datacomputer", which did exist on the ARPANET
>> and our email system used it). Long "conversations" could be
>> related to each other as something like today's email "threads"
>> and "forums". Some communications might have to be private,
>> protected from prying eyes along the way. Some might need to
>> have the author, and/or recipients, verified so that you could
>> believe what you saw or heard came from where you thought it came
>> from. Some might be routed through a kind of "escrow agent",
>> who could later independently testify that a document was real
>> and had been authored, created, delivered, and handled at
>> particular points in time. The scenario for comprehensive
>> human-human communications was very complex.
>> >
>> > We developed a rudimentary technical architecture for such a
>> scenario, to at least serve as a starting point while computer
>> technology advanced and more things became feasible. Sadly it
>> was probably never captured in any form more permanent than email
>> archives, which are probably lost by now. Lots of technologies
>> would be needed. It would take a while. Research does.
>> >
>> > That human-human communications architecture was shelved by
>> ARPA in the mid-1970s in favor of a much simpler approach, to
>> provide an interim solution that we now would all recognize as
>> today's electronic mail. I think Lick's scenario is still a
>> good target, but I don't think anyone's been working on it for
>> the last 50 years.
>> >
>> > Anyway, I hope that explains the concept of "Scenario".....
>> >
>> > /Jack Haverty
>> >
>>
>> Jack, you contributed regularly to the header-people and msggroup
>> mailing lists, most of which are still available. If you have
>> time, can you review some of what you wrote about, and identify
>> topics that reflected Lick’s vision of what human-human
>> communication could be, based on scenarios? Offhand, I did see
>> one in which you posted an example of how in the military,
>> messages were required to be from the commander, with an address
>> distinct from a person. [3]
>>
>> --gregbo
>>
>> [1]
>> https://web.archive.org/web/20241123091106/http://www.chiappa.net/~jnc/tech/header/
>> [2]
>> https://web.archive.org/web/20241123091106/http://www.chiappa.net/~jnc/tech/msggroup/
>> [3]
>> https://web.archive.org/web/20250121110700/http://mercury.lcs.mit.edu/~jnc/tech/msggroup/msggroup0401-0500.txt
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>> -
>> Unsubscribe:
>> https://app.smartsheet.com/b/form/9b6ef0621638436ab0a9b23cb0668b0b?The%20list%20to%20be%20unsubscribed%20from=Internet-history
>>
>>
>>
>> --
>> Please send any postal/overnight deliveries to:
>> Vint Cerf
>> Google, LLC
>> 1900 Reston Metro Plaza, 16th Floor
>> Reston, VA 20190
>> +1 (571) 213 1346
>>
>>
>> until further notice
>>
>>
>>
>