[ih] Internet analyses (Was Re: IPv8...)

Jack Haverty jack at 3kitty.org
Thu Apr 23 12:38:58 PDT 2026


Thanks, Greg.  Those studies are not quite what I was seeking.  I was 
curious if there had been "system level" analyses of different 
approaches, decisions made, and results from field observations, and 
conclusions about which was the "right choice" with retrospective 
experience.

For example, the ARPANET (and others) provided a "virtual circuit" 
service to its attached computers.  In the Internet, TCP provided a 
similar service of a reliable byte-stream between two computers.

Early ARPANET studies, circa the 1970s or before, analyzed various 
techniques for providing a virtual circuit service.  For example, I 
recall reading arguments proposing or opposing the use of a "front-end" 
to hold much of the communications mechanisms, in lieu of putting such 
mechanisms in the attached computers.   The resultant implementation had 
the ARPANET composed of IMP minicomputers, in which much of the flow 
control, error control, addressing, and similar communications 
mechanisms were implemented.

In contrast, the TCP architecture moved much of that same kind of 
mechanism into the attached computers, rather than into the various 
switching boxes that made up most of the underlying component networks.  There 
were some "front end" implementations of TCP, but I think they all died 
out.  While I was at BBN, I saw evidence in comments in IMP code that 
indicated other ideas for Internet capabilities were being considered 
for implementation within IMPs.  For example, there were comments in the 
IMP code about things like "network numbers" of other networks.

A "system level" analysis might have compared these two approaches.  The 
ARPANET ran for more than a decade and was measured a lot, so the data 
about its behavior may still be available.  The Internet has been 
operational for more than 40 years, so lots of operational data may also 
be available.  A system analysis would consider performance, efficiency, 
reliability, and even things like economics.

For example, the recent discussions about "installed base" effects might 
be analyzed from an economics perspective.  The "installed base" of the 
TCP universe is large not only because of all the devices and people 
now using the Internet; the architectural differences alone make a TCP 
network "larger" than a similar-sized ARPANET.   Such effects were 
even noticeable back in the early 1980s, when the Internet wasn't much 
bigger than the ARPANET by itself.  Changing some mechanism or protocol 
in an IMP was a lot easier than changing a similar mechanism in TCP, 
which had all sorts of computers, operating systems, and organizations 
involved.  Has the "cost" of that difference in size of installed base 
been helpful or harmful?

Other architectures for "internets" were possible.  Here's one you might 
not know about.

When I joined Oracle in 1990, I learned more about the end-users' needs 
of that time.  That was the era of "multiprotocol" networks, where all 
sorts of vendors offered their own products and even their own 
"internets" using their own proprietary technology. Corporations, 
including us, were struggling with operating such "multiprotocol" 
networks, to interconnect the systems their various departments had 
bought - perhaps Finance on PCs, corporate records on SNA, Marketing 
committed to Appletalk, Netware here and there, DECNET in Engineering, 
etc.   That "scenario" replaced the "military scenarios" that had been 
in my head during the early Internet research a decade or more earlier.

I noticed the parallels between TCP's goal to interconnect "islands" of 
different network technologies, and the "islands" of virtual network 
technologies that had been created by all the different implementations 
from the computer/network vendors.   Our forte was software, so we built 
a "router" at levels "up in the stack" (trying to fit this into an ISO 
layer diagram may be hazardous to your mental health).  We called it an 
"Interchange".   It was simply a software package that would be run on 
some computer that had the ability to use more than one type of 
network.  With an "Internet" composed of Interchanges, a computer on one 
kind of network could interact with a computer on a different kind of 
network, even though neither could use both kinds.  A computer on 
Appletalk might access a computer on SNA, and perhaps use an intervening 
TCP network along the way.

That is yet another architecture for creating an Internet. Basically 
"Interchanges" simply were "patch panels" that plugged together virtual 
circuits.   Every network type had some kind of protocol to deliver 
virtual circuit service, and they were simply interconnected by the 
Interchange software.
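
For anyone who wants the flavor of that "patch panel" idea, here is a 
minimal sketch.  It is purely illustrative: both "circuits" here are 
ordinary TCP sockets on localhost, whereas a real Interchange bridged 
dissimilar stacks (AppleTalk, SNA, DECNET, TCP); the function names and 
echo "far-end host" are invented for the sketch.

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes from one virtual circuit to the other until EOF."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)   # propagate the close downstream
    except OSError:
        pass

def serve_echo(listener):
    """Stand-in for the far-end host: echo whatever arrives."""
    conn, _ = listener.accept()
    pump(conn, conn)                   # copy the circuit back onto itself
    conn.close()

def interchange(listener, target_addr):
    """Accept one circuit, open another, and patch them together."""
    near, _ = listener.accept()
    far = socket.create_connection(target_addr)
    t = threading.Thread(target=pump, args=(near, far))
    t.start()
    pump(far, near)
    t.join()
    near.close()
    far.close()

def listener_on_localhost():
    ls = socket.create_server(("127.0.0.1", 0))   # port 0: pick a free port
    return ls, ls.getsockname()

echo_ls, echo_addr = listener_on_localhost()
ix_ls, ix_addr = listener_on_localhost()
threading.Thread(target=serve_echo, args=(echo_ls,), daemon=True).start()
threading.Thread(target=interchange, args=(ix_ls, echo_addr), daemon=True).start()

# The client talks only to the Interchange; it never sees the far network.
client = socket.create_connection(ix_addr)
client.sendall(b"hello via interchange")
client.shutdown(socket.SHUT_WR)
reply = b""
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    reply += chunk
client.close()
print(reply.decode())   # prints: hello via interchange
```

The essential point is in pump(): the relay never looks inside the 
bytes, it just splices two reliable byte-streams end to end, which is 
all a virtual-circuit "patch panel" has to do.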

Was patching together virtual circuits a good idea?  I don't know, we 
just did it as a solution to a very real problem that we and our 
customers were experiencing.  The "gateways" in the early Internet could 
have used this architecture and patched together virtual circuits, but 
chose to build on top of "datagrams" instead.  Routers evolved to 
carry all sorts of datagrams - IP, IPX, etc. - and that enabled the 
creation of multi-protocol internets.   Lots 
of choices.  Lots of good reasons, pro and con, for each.

It also became obvious at that time in the early 1990s, at least to me, 
that TCP had probably won the protocol wars.   All the customers I 
encountered were at some point along their own paths to converting their 
entire world to TCP.   Most common reasons I heard: "It works."  "Our 
new hires out of college know all about TCP."

The "Interchange" provided a way for them to perform their planned IT 
evolution at their own pace, driven by their own business situation and 
needs.  The cross-protocol capability of Interchanges apparently had 
value.  Even if they never actually used an Interchange, the knowledge 
of its availability helped with a corporation's planning and execution.  
They could migrate different departments' IT approaches on their own 
schedule and PERT chart.  It could be especially helpful in situations 
such as mergers and acquisitions, where the two corporations being 
interconnected might have very different IT architectures but had to be 
integrated quickly for business viability.

That's another "cost" factor for consideration in a system-level 
analysis.   I've wondered how much effect it might have had on the 
success of TCP in the protocol wars, by sparing corporations the pain 
and high risks of adopting TCP in some "flag day" plan.  
All the other competitors on the protocol battlefield disappeared in an 
amazingly (to me) short time.

Unlike the DoD in the 1970s, most corporations didn't have much power to 
declare TCP a "Corporate Standard" and set procurement rules requiring 
its use in any product they purchased.  If a corporation was big enough 
it might have had such clout.   Most corporations did pick some 
standard, usually associated with a particular vendor - e.g., an IBM 
shop, or DEC shop, etc.  But they hated the "lock in" nature of such 
decisions.

It became pretty common that availability of TCP influenced IT decisions 
in purchasing new equipment.  I recall for example seeing a huge 
semi-truck filled with Sun workstations delivering to a Wall Street 
investment house.  The computer industry figured out that new 
characteristic of its marketplace, and TCP became the effective 
standard, regardless of what the official bodies said.

Seems to me like there's at least a few PhD theses here.  Have they been 
written...?

/Jack Haverty

On 4/22/26 22:21, Greg Skinner wrote:
> On Apr 21, 2026, at 1:04 PM, Jack Haverty via Internet-history 
> <internet-history at elists.isoc.org> wrote:
>>
>> Back in the ARPANET era, there were lots of analytical studies of the 
>> mechanisms of packet networks.  Lots of measurements were also taken, 
>> to compare real-world behavior to the predictions of analytical 
> >> models (such as Kleinrock's work at UCLA, and later BBN's work to 
> >> monitor the ARPANET as it grew and use the data gathered to modify 
> >> the internal mechanisms based on operational behavior).
>>
>> Have such theoretical analyses and operational experience ever been 
>> applied to The Internet?    For example, the internal mechanisms in 
>> the ARPANET based routing decisions on measurements of transit time 
>> for packets to traverse the ARPANET.   For pragmatic reasons, that 
>> was not feasible in the early Internet, and routing decisions were 
>> based on "hops" instead of time.  Maybe that's still true?
>>
>> Was the Internet ever analyzed mathematically as the ARPANET was? Or 
>> were (are?) measurements of operational behavior used (or even 
>> collected?) in some way to influence technical evolution? Personally 
>> I haven't seen any discussions of such things but I also haven't been 
>> looking.   But I think it's an important aspect of Internet History.
>>
>> /Jack
>>
>
> CAIDA <https://www.caida.org> does many types of studies.  Their 
> overview page 
> <https://www.caida.org/catalog/datasets/overview/> provides links to 
> examples.  Some, like this analysis of how ISP competition affects 
> autonomous system path quality and ISP profits, are quite 
> mathematical, IMO. [1]
>
> --gregbo
>
> [1] https://arxiv.org/pdf/2305.06811
>
