[ih] Overlay networks
Jack Haverty
jack at 3kitty.org
Wed Aug 20 09:14:56 PDT 2025
Agreed, the Internet is basically an "overlay network". IIRC even the
term "internet" was derived from "interconnecting networks". Earlier it
had been called "Catenet" for "concatenated networks".
IIRC, the notion of Inter-Process Communication as the basis for
"networking" goes back at least to the early 1960s with Lick's paper on
the intergalactic network. At least that's what I learned -- but no
doubt I was biased as a member of Lick's group at MIT circa 1970.
Humans didn't have any innate ability to send electronic signals (we
still don't, but there's work in progress...). We can receive certain
electromagnetic radiation through our eyes, but can't send anything.
Our ability to receive information is also very limited. If we somehow
arranged to look at the output of a fiber-optic link carrying gigabits
per second of information, it would likely be perceived as nothing but
noise.
To interact over an electronic "network" a human needed some kind of
computer equipment that had the ability to send packets, datagrams,
messages, whatever. Computers talked to other computers, to help do
what humans wanted to do.
Some kind of program, such as Telnet or FTP or email, acted as the
intermediary between the humans and the network. So all ARPANET usage
was actually communication between processes running in computers
somewhere. The "intergalactic network" was the set of computers,
somehow able to communicate amongst themselves using some kind of
electronic mechanisms, helping human users do what they wanted to do --
log in to a distant computer, move files between computers, interact
with other humans with email, etc.
So "the network" was essentially a mechanism for the various programs on
far-flung computers to talk amongst themselves. IPC was the only way
the ARPANET could be used -- it was, at bottom, a way for computer
processes to interact.
Sometime in 1967 or so, Dijkstra gave a guest lecture in our CompSci
class at MIT, and explained the importance of the "P" and "V" primitives
he had defined. It didn't seem so earth-shaking then, and it took me
years to realize the implications for networking. Essentially,
networking changed the world of IPC.
In an environment contained in a single machine, locks, semaphores, and
similar techniques were necessary to make communication among processes
reliable and consistent. In a networking environment, interactions
always occur in a loosely coupled, distributed, multiprocessor system,
where mechanisms analogous to locks and semaphores are still needed, but
must operate over long distances with multiple computers involved.
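As a sketch of what those primitives do (my illustration, not anything from Dijkstra's lecture), P ("proberen", wait) and V ("verhogen", signal) can be written in a few lines of Python on top of a condition variable -- Python's own threading.Semaphore provides the same semantics as acquire()/release():

```python
import threading

class Semaphore:
    """Dijkstra-style counting semaphore built on a condition variable."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def P(self):
        # Block until the counter is positive, then decrement atomically.
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def V(self):
        # Increment the counter and wake one waiter.
        with self._cond:
            self._value += 1
            self._cond.notify()

# Usage: a binary semaphore guarding a shared counter.
sem = Semaphore(1)
shared = 0

def worker():
    global shared
    for _ in range(10000):
        sem.P()
        shared += 1   # critical section
        sem.V()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # 40000: every increment was serialized by P/V
```

On a single machine this works because P and V are atomic with respect to all the contending processes; stretch the same idea across a network and that atomicity is exactly what becomes hard to provide.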
IIRC, many of the early networking systems didn't address this need, or
perhaps their designers didn't know how to implement the needed
mechanisms. For example, the 1980s/90s fascination with the oxymoronic
notion of "Global LANs" led to common user problems with things like
"lock files" being left in place on some machine because the network or
the "other" computer had crashed before getting around to releasing the
lock.
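A hypothetical sketch of that stale-lock problem and one common mitigation (the file name and lease length below are invented for illustration): a lease-based lock records an expiry time, so a crashed holder's lock can eventually be reclaimed instead of blocking everyone forever.

```python
import os
import time

LOCK_PATH = "resource.lock"   # hypothetical lock file
LEASE_SECONDS = 30            # hypothetical lease length

def try_acquire(now=None):
    """Try to take the lock; return True on success, False if held."""
    now = time.time() if now is None else now
    if os.path.exists(LOCK_PATH):
        with open(LOCK_PATH) as f:
            expiry = float(f.read().strip())
        if now < expiry:
            return False          # lock held and lease still valid
        os.remove(LOCK_PATH)      # holder crashed; lease expired, reclaim
    with open(LOCK_PATH, "w") as f:
        f.write(str(now + LEASE_SECONDS))  # record our own lease expiry
    return True

def release():
    """Drop the lock by removing the lock file."""
    if os.path.exists(LOCK_PATH):
        os.remove(LOCK_PATH)
```

Even this sketch has the classic check-then-act race between the existence test and the write; real implementations lean on atomic primitives (e.g. open with O_CREAT|O_EXCL, or a coordination service) -- which is exactly why distributed IPC remains hard.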
Such things still happen. IPC is hard, especially in the highly
distributed, multi-processor, loosely coupled world of today's
Internet. Well-designed programs have figured out their own mechanisms,
but AFAIK there is still no "standard" for mechanisms such as Dijkstra's
P and V primitives. The need is still there.
/Jack Haverty
On 8/20/25 07:27, John Day via Internet-history wrote:
> Overlays go back over 50 years.
>
> The solution to Internetworking was an overlay network. In 1972, when the problem came up, DARPA had been considering protocol translation at the gateways. In October 72, they were introduced to CYCLADES led by Louis Pouzin. CYCLADES introduced ‘best effort’ datagrams and end-to-end transport to networking. But CYCLADES also assumed that hosts would not be close to the ‘routers’, called CIGALE (French for cicada) and could be connected to more than one. Pouzin pointed out that in terms of CYCLADES, all they had to do to solve the Internetworking problem was change the name of the Transport Layer to Internet Transport Layer and treat it as an overlay. All protocol translation disappears and all the individual networks have to do is support the minimal requirements of the Internet Transport Layer.
>
> To further comment on Joe’s reply, in the early days of the ARPANET it was generally recognized that networking was IPC. Many talked of it in those terms, Dave Walden discussed it in RFC 61, Bob Metcalfe mentioned it in passing in a paper on writing NCPs (which was basically IPC), and Padlipsky called it axiomatic to networking in RFC 871.
>
> Take care,
> John
>
>> On Aug 20, 2025, at 09:57, Joe Touch via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>>
>>> On Aug 20, 2025, at 4:51 AM, Lawrence Stewart via Internet-history <internet-history at elists.isoc.org> wrote:
>>>
>>> After the discussion about the Arpanet routing being shortest path I am wondering if anyone experimented with overlay networks.
>> Yeah - it was a whole area of research starting in the late 90s to today, taken many levels deep and wide - https://www.strayalpha.com/virtual-nets/
>>
>> There were many earlier versions that used the approach, the first to influence the field being the MBone, which allowed multicast to be deployed on a network that didn’t natively support it.
>>
>>> If the hosts know the topology and the network routing algorithm, they can build a network with a different routing algorithm on top of the existing network by essentially doing store and forward at the hosts while using the base network only as links.
>> Yes, that was at least one reason for them. There are others in the papers/ projects linked at the site below.
>>
>>> The first time I heard of this was Sandia Labs work on GUPS (or HPCC Random Access) in 2004, which achieved much better than expected results on Red Storm by aggregating small application messages for transport over the base network. A small message might traverse the base network several times, but the advantage of large messages was so great that it overcame the inefficiency.
>>>
>>> The idea also turns up in collective algorithms in MPI, SHMEM, and others
>> There’s a summary of work with links that at least once worked here:
>> https://www.strayalpha.com/xbone/
>>
>> My later conclusion is that these nets are just a special case of layering and that tunnels are just links not unlike any other. The whole system is recursive, not 7-layered, with the actual physical network being the base case. I.e., the ability of an overlay to control routing at its layer isn’t any different than IP relaying over Ethernet, or email relaying over TCP (as the bundle protocols of DTN prove).
>>
>> I came at that from a comm theory side and coined it recursive networking in the X-Bone overlay deployment system as RNA (recursive networking architecture), though the term came up earlier in a less generalized form in Andrew Campbell's Resilient Overlay Network overlay system. John Day (who frequents this list) came to a similar view from the process/OS side; it was originally called network IPC (interprocess communication) and later renamed RINA (the I for Internet).
>>
>> So overlays go back over 20 years as an active area of investigation before the ones you found. Anyone know of earlier work that explicitly layered a net on a working net? (Vs. ones that arguably do this with different layers, as with bang-path routing of email in the 1980s.)
>>
>> Joe
>>