[ih] "The First Router" on Jeopardy
Jack Haverty
jack at 3kitty.org
Thu Nov 25 12:16:50 PST 2021
On 11/24/21 5:47 PM, Guy Almes via Internet-history wrote:
> First, by 1987, when both the ARPAnet and the proto-NSFnet backbone
> were both operational, networks that connected to both had to decide
> which to use, and that led to interesting routing decisions. Problems
> encountered then led, for example, to creation of BGP.
FYI, for The Historians, I offer a little of the earlier history, all
from the 1980-1983 timeframe.
At the ICCB (before it was renamed IAB) meetings, there was a list kept
on the whiteboard of "things that need to get figured out". One of the
items on that list was "Routing", which included issues like "Type Of
Service", "Shortest Path First", and "Policy Routing" -- all part of the
"How should routing behave in the future Internet?" topic. There were
two concrete motivators for these issues.
The Internet had started to evolve from its "fuzzy peach" stage, where
essentially the ARPANET was the peach surrounded by a fuzzy layer of
LANs, into a richer topology where there were actually choices to be made.
First, SATNET had linked the US and Europe for a while, but Bob Kahn
initiated the addition of a transatlantic path using the public X.25
service. The interconnect was called a "VAN Gateway" (VAN stood for
Value Added Network but I never did understand what that really meant).
The VAN gateway essentially added an interface option to the existing
gateways, allowing them to connect to the public X.25 networks, and use
them as a "circuit" (albeit virtual) between gateways. In effect, the
entire X.25 public network system was made an available component of the
Internet; it became just another network that could be used within the
overall Internet.
The "dial up" nature of the X.25 service also introduced the possibility
of dynamic configuration of the Internet topology -- e.g., adding or
deleting circuits between pairs of gateways as a situation warranted,
simply by using the dialup infrastructure of the public X.25/X.75
system. We called that something like "Dynamic Adaptive Topology", but
I don't recall ever actually trying to use that capability except on the
single US<->EU path in parallel with SATNET.
This "VAN" capability was used to create a topology where there were two
ways IP datagrams could cross the Atlantic. Bringing economics into
the picture, the SATNET path was funded by ARPA, to be used only for
ARPA-approved projects. The X.25 path was funded by whoever opened the
circuit first (which we, as ARPA contractors, silently engineered so that it was usually the European side; it seemed like the right thing to do). This
issue appeared on the ICCB's to-do list as "Policy Based Routing".
Pending the "real" solution, I don't recall exactly how the gateways
back then made the choice of path for each packet; my vague recollection
is that it had something to do with destination addresses - i.e., a host
might have two distinct IP addresses, one for use via SATNET and the
other for use via X.25. And a single physical net would be assigned
two network numbers (no shortage then), one for use via X.25 and the
other for use via SATNET. IIRC, that's how the UCL network in London
was configured. The early Internet sometimes had to be kept going with patching plaster and baling wire while the research sought the right way for the future.
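If that recollection is right, the selection logic itself would have been trivial: which transatlantic path a datagram took was determined entirely by which of the destination's two addresses the sender used. A minimal sketch of that kind of destination-based selection, in Python, with purely hypothetical network numbers and gateway names (nothing here is the actual gateway code of the era):

    # Toy illustration: one physical net assigned two (hypothetical) network
    # numbers, each mapped to a different transatlantic path.
    ROUTES = {
        10: "via SATNET gateway",    # hypothetical net number reached over SATNET
        11: "via X.25 VAN gateway",  # second number for the same net, via X.25
    }

    def network_number(ip_addr: str) -> int:
        """Classful (class A style) network number: just the first octet."""
        return int(ip_addr.split(".")[0])

    def pick_path(dst_ip: str) -> str:
        """Choose the outgoing path purely from the destination address."""
        return ROUTES.get(network_number(dst_ip), "via default gateway")

    # A dual-addressed host is reached over whichever path the sender names:
    print(pick_path("10.0.0.5"))   # -> via SATNET gateway
    print(pick_path("11.0.0.5"))   # -> via X.25 VAN gateway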
Second, the Wideband Net, a satellite-based network spanning the US, was
made part of the Internet topology by gateways between it and the
ARPANET. There were then multiple network paths across the US. But
the gateways' routing metric of "hops" would never cause any datagrams
to be sent over the Wideband Net. Since the Wideband Net was only
interconnected by gateways to ARPANET nodes, any route that used the
Wideband Net would necessarily be 2 hops longer than a direct path
across the ARPANET. The routing mechanisms would never make such a
choice. This issue was captured on the ICCB's list as "Expressway Routing", a reference to the way a driver must decide to head toward the nearest freeway entrance, rather than driving straight toward the destination, in order to get there faster.
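To make the hop arithmetic concrete, here is a tiny Python sketch (the network names and counts are schematic, not a reconstruction of the real topology) of why a minimum-hop metric could never prefer a path through the Wideband Net:

    # Each network-to-network crossing counts as one "hop".
    direct_route   = ["LAN-A", "ARPANET", "LAN-B"]
    wideband_route = ["LAN-A", "ARPANET", "WIDEBAND", "ARPANET", "LAN-B"]

    def hops(route):
        return len(route) - 1

    print(hops(direct_route), hops(wideband_route))   # 2 vs 4
    # The minimum-hop choice is always the direct path, regardless of actual
    # transit time, so datagrams never flowed over the satellite net.
    best = min([direct_route, wideband_route], key=hops)
    print(" -> ".join(best))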
I don't recall how people experimented with the Wideband Net, i.e., how
they got datagrams to actually flow over it. Perhaps that was a use of
the "Source Routing" mechanisms in the IP headers. Maybe someone else
remembers....
We didn't know how to best address these situations, but of course there
were a lot of ideas. In addition, the existing Internet lacked some
basic mechanisms that seemed to be necessary. In particular, the use of
"hops" to determine which path was the shortest was woefully
inadequate. A "hop" through a Satellite net might be expected to take
much longer than a hop through a terrestrial net, simply due to Physics.
But a hop through the ARPANET traversing many IMPs, when the net was
congested, might actually take longer than a satellite transit. A
time-based metric was not feasible in the gateways without some means of
accurately measuring time, at a precision of milliseconds, in the
routers scattered across the continents.
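The mismatch between hops and time is easy to see with back-of-the-envelope numbers; the figures in this sketch are illustrative assumptions, not measurements from the period:

    # A geostationary satellite hop carries a large fixed propagation delay
    # (roughly 250-300 ms one way, just from physics), while a terrestrial
    # path's delay depends on how many IMPs it crosses and how loaded they are.
    SATELLITE_HOP_MS = 270      # rough GEO one-way propagation delay
    PER_IMP_MS = 50             # assumed per-IMP queueing + transmission delay

    def terrestrial_delay_ms(imp_count, congestion_factor=1.0):
        return imp_count * PER_IMP_MS * congestion_factor

    print(terrestrial_delay_ms(4))          # 200 ms: beats the satellite hop
    print(terrestrial_delay_ms(8, 2.0))     # 800 ms: the "one hop" satellite wins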
Dave Mills was on the ICCB, and he took on this quest with unbridled
energy and determination. NTP was the result - an impressive piece of
engineering. Using NTP, computers (e.g., routers, gateways, hosts,
servers, whatever you call them) can maintain highly synchronized
clocks, and measure actual transit times of IP datagrams for use in
calculating "shortest path". Everyone can thank Dave and his crew
that your computers know what time it is today.
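A rough sketch of the measurement that synchronized clocks make possible; a single machine's clock stands in here for two NTP-synchronized routers, so this is not NTP itself, nor any real gateway protocol:

    import time

    def send(datagram: dict) -> dict:
        datagram["sent_at"] = time.time()    # stamped from a synchronized clock
        return datagram

    def receive(datagram: dict) -> float:
        # With clocks agreeing to within a few milliseconds, the receiver can
        # compute a one-way transit time usable as a shortest-path link cost.
        return (time.time() - datagram["sent_at"]) * 1000.0

    pkt = send({"payload": b"hello"})
    time.sleep(0.05)                         # stand-in for network transit
    print(f"measured transit: {receive(pkt):.1f} ms")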
The Time mechanisms would be helpful, but much more was needed to handle
"Policy Based" and "Expressway" routing situations. Lots of people had
ideas and wanted them put into the "core gateways" that BBN operated.
But doing that kind of experimentation and also keeping the Internet
core reliably running 24x7 was a struggle.
I was also on the ICCB at the time, and I recruited Dr. Eric Rosen back
at BBN to help think about it. He and I had numerous day-long sessions
with a whiteboard. The result was EGP - the Exterior Gateway
Protocol. If you read Eric's now-ancient RFC defining EGP, you'll see that it was not intended as a routing protocol. Rather, it was more of
a "firewall" mechanism that would allow the Internet to be carved up
into pieces, each of which was implemented and operated at arm's length
from the others but could interoperate to present a single Internet to
the end users.
The intent was that such a mechanism would make it possible for some
collection of gateways (e.g., the "core gateways" of the Internet at
that time) to be operated as a reliable service, while also enabling
lots of other collections of gateways to be used as guinea pigs for all
sorts of experiments to try out various ideas that had come up. Each such collection was called an "Autonomous System": a set of gateways using some particular technical mechanisms and under a single operator's control. EGP was a mechanism to permit reliable operational services
to coexist in the Internet with research and experimentation.
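The essential idea can be sketched without any of EGP's actual message formats: an Autonomous System tells a neighbor only which networks it can reach, never how it reaches them internally. The AS names and network numbers below are hypothetical:

    # Conceptual sketch only, not the EGP wire protocol.
    CORE_AS         = {"name": "core",       "reachable_nets": {10, 18, 26}}
    EXPERIMENTAL_AS = {"name": "experiment", "reachable_nets": {128, 129}}

    def learn_reachability(neighbor_as):
        """Install routes for a neighbor's networks via its border gateway."""
        return {net: f"via {neighbor_as['name']} border gateway"
                for net in neighbor_as["reachable_nets"]}

    print(learn_reachability(EXPERIMENTAL_AS))
    # Interior routing inside each AS stays private, so experiments in one AS
    # cannot destabilize the routing tables of the operational core.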
When the ideas had been tried, and the traditional "rough consensus"
emerged to identify the best system design, the new algorithms,
mechanisms, protocols, and anything else needed, would be instantiated
in a new Autonomous System, which would then grow as the new system was
deployed - much as the ARPANET had served as the nursery for the
fledgling Internet, with all IMPs disappearing over time as they were
replaced by routers directly connected with wires.
That's where my direct involvement in the "research" stopped, as I went
more into deploying and operating network stuff, from about mid-1983
on. Perhaps someone else can fill in more gaps in the History.
Enjoy,
Jack Haverty