[ih] The Importance of Time in the Internet

Jack Haverty jack at 3kitty.org
Tue Oct 4 14:26:10 PDT 2022


Brian asked: "To be blunt, why?" -- from the thread "[ih] nice story 
about dave mills and NTP"

OK, I'll try to explain why I believed Time was so important to The 
Internet back in the 1980s.  Or at least what I remember.... changing 
the subject line to be more relevant.

Basically, the "why" is "to provide the network services that the Users 
need."  In other words, to keep the customers happy.   That's the short 
answer.   Here's the longer story:

---------------------------

As far as I can remember, there wasn't any "specifications" document for 
The Internet back in the early 80s when IPV4 et al were congealing.  
Nothing like the "Requirements" document you'd typically find for major 
government projects, detailing what the resultant system had to be able 
to do.

Yes, there have been lots of documents, e.g., RFCs, detailing the 
formats, protocols, algorithms, and myriad technical details of the 
evolving design.   But I can't remember any document specifying what The 
Internet was expected to provide as services to its Users. IIRC, even 
the seminal 1974 Cerf/Kahn paper on "A Protocol for Packet Network 
Interconnection" that created TCP says nothing about what such an 
aggregate of networks would provide as services to its users' attached 
computers.  In other words, what should "customers" of The Internet 
expect to be able to do with it?

That's understandable for a research environment.  But to actually build 
the early Internet, we had to have some idea of what the thing being 
built should do, in order to figure out what's still missing, what might 
or might not work, what someone should think about for the future, and 
so on.

I believe ARPA's strategy, at least in the 80s, was to define what The 
Internet had to be able to do by using a handful of "scenarios" of how 
The Internet might be used in a real-world (customer) situation.  In 
addition, it was important to have concrete physical demonstrations in 
order to show that the ideas actually worked. Such demonstrations showed 
how the technology might actually be useful in the real world, and that 
theory and research had connections to practice and real-world situations.

The "customer" of the early Internet was the government(s) - largely the 
US, but several countries in Europe were also involved. Specifically, 
the military world was the customer.   Keeping the customer happy, by 
seeing working demonstrations that related to real-world situations, was 
crucial to keeping the funding flowing. Generals and government VIPs 
care about what they can envision using.   Generals don't read RFCs.  
But they do open their wallets when they see something that will be 
useful to them.

At the early Internet meetings, and especially at the ICCB (now IAB) 
initial meetings, I remember Vint often describing one such scenario, 
which we used to drive thought experiments to imagine how some technical 
idea would behave in the real world.   It was of course a military 
scenario, in which a battlefield commander is in contact with the chain 
of command up to the President, as well as with diverse military 
elements in the air, on ships, in moving vehicles on the ground, in 
intelligence centers, and everything else you can imagine is used in a 
military scenario.   Allies too. That's what the customer wanted to do.

In that 1980s scenario, a "command and control" conference is being 
held, using The Internet to connect the widely scattered participants.   
A general might be using a shared multimedia display (think of a static 
graphical map with a cursor/pointer - no thought of interactive video in 
the 80s...) to understand what was happening "in the field", consult 
with advisors and other command staffs, and order appropriate actions.   
While pointing at the map, the orders are given.

Soldier in a Jeep: "The enemy supply depot is here, and a large body of 
infantry is here"
...
...
General: "OK, send the third Division here, and have that bomber 
squadron hit here."

While speaking, the field commanders and General draw a cursor on their 
screen, indicating the various locations.  Everyone else sees a similar 
screen.  Questions and clarifications happen quickly, in a 
conversational manner familiar to military members from their long 
experience using radios.   But it's all online, through the Internet.

So what can go wrong?

Most obvious is that the datagrams supporting the interactive 
conversations need to get to their destinations in time to be useful in 
delivering the audio, graphics, etc., to all the members of the 
conversation, and properly synchronized.   That need related directly to 
lots of mechanisms we put into the Internet IPV4 technology - TTL, TOS, 
Multicast, etc.  If the data doesn't arrive soon enough, the 
conversation will be painful and prone to errors and misinterpretation.

But there was also a need to be able to synchronize diverse data 
streams, so that the content delivered by a voice transmission, perhaps 
flowing over UDP, was "in sync" with graphical information carried by a 
TCP connection.   Those applications needed to know how The Internet was 
handling their datagrams, and how long it was taking for them to get 
delivered through whatever path of networks was still functioning at the 
time.  Does this speech fragment coincide in time with that graphics 
update - that kind of situation.
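
(For readers who like something concrete: here's a minimal modern sketch, 
in Python, of the kind of application-level playout buffering such a 
conference tool needs.  It assumes both senders stamp every fragment with 
a common clock, NTP-style; the half-second playout delay and everything 
else here are invented for illustration, not anything we actually built 
in the 80s.)

    import heapq
    import time

    PLAYOUT_DELAY = 0.5   # deliberate buffering, in seconds (illustrative)

    class PlayoutBuffer:
        """Merge two timestamped streams (e.g. UDP voice, TCP graphics)
        and release items in timestamp order, discarding late arrivals."""
        def __init__(self):
            self.queue = []   # (media_timestamp, kind, payload)

        def arrive(self, media_timestamp, kind, payload):
            # Anything that should already have been played is useless.
            if media_timestamp + PLAYOUT_DELAY < time.time():
                return False   # too late; drop it
            heapq.heappush(self.queue, (media_timestamp, kind, payload))
            return True

        def due_items(self):
            # Release items whose playout moment has arrived, oldest first.
            now = time.time()
            while self.queue and self.queue[0][0] + PLAYOUT_DELAY <= now:
                yield heapq.heappop(self.queue)

    # A voice fragment and the cursor move it refers to carry the same
    # sender timestamp, so they come out of the buffer together.
    buf = PlayoutBuffer()
    t = time.time()
    buf.arrive(t, "voice", "...send the third Division here...")
    buf.arrive(t, "cursor", (37.2, -115.8))
    time.sleep(PLAYOUT_DELAY + 0.01)
    for ts, kind, payload in buf.due_items():
        print(kind, payload)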

In the scenario, it was crucial that the field reports and General's 
commands were in sync with the cursor movements on the shared graphics 
screens.   Otherwise very bad things could happen.  (think about it...)

Time was important.

Within the physical Internet of the 80s, there were enough 
implementations of the pieces to demonstrate such capabilities. The 
ARPANET provided connectivity among fixed locations in the US and some 
other places, including governmental sites such as the Pentagon.  SATNET 
provided transatlantic connectivity.   A clone of SATNET, called MATNET, 
was deployed by the Navy.  One MATNET node was on an aircraft carrier 
(USS Carl Vinson), which could have been where that squadron of bombers 
in the Scenario came from.  Army personnel were moving around a 
battlefield in Jeeps and helicopters, in field exercises with Packet 
Radios in their vehicles.   They could move quickly wherever the orders 
told them to go, and the Packet Radio networks would keep them in 
contact with all the other players in a demo of that Scenario.

Networks were slow in those days, with 56 kilobits/second considered 
"fast".  ARPA had deployed a "Wideband Net" using satellite technology, 
that used a 3 megabits/second channel.  That could obviously carry much 
more traffic than other networks.   But the Wideband Net (aka WBNET) was 
connected only to the ARPANET.   Like the ARPANET, the WBNET spanned the 
continental US, able to carry perhaps 10 times the traffic that the 
ARPANET could support.   But how to actually use the WBNET - that was 
the problem.

Since routing in the 1980s Internet was effectively based on "hop 
count", despite the name given to the TTL field, the gateways, and the 
"host" computers on the ARPANET, would never send any traffic towards 
the WBNET.   Such traffic would always be two "hops" longer through a 
WBNET path than if it travelled directly through the ARPANET.  The WBNET 
was never going to be the chosen route from anywhere to anywhere else in 
The Internet.
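
(To make the arithmetic concrete: a toy shortest-path calculation, in 
Python with entirely made-up link numbers, showing how a hop-count metric 
never picks the ARPANET/WBNET/ARPANET detour while a transit-time metric 
does.)

    import heapq

    def add_link(graph, a, b, hops, ms):
        graph.setdefault(a, {})[b] = {"hops": hops, "ms": ms}
        graph.setdefault(b, {})[a] = {"hops": hops, "ms": ms}

    graph = {}
    # Illustrative numbers only: the cross-country ARPANET collapsed into
    # one slow "link", the WBNET as a fast channel behind two gateways.
    add_link(graph, "A", "ArpaWest", 1, 5)
    add_link(graph, "ArpaWest", "ArpaEast", 1, 300)
    add_link(graph, "ArpaEast", "B", 1, 5)
    add_link(graph, "ArpaWest", "GW-West", 1, 5)
    add_link(graph, "GW-West", "WBNET", 1, 10)
    add_link(graph, "WBNET", "GW-East", 1, 10)
    add_link(graph, "GW-East", "ArpaEast", 1, 5)

    def shortest_path(src, dst, metric):
        """Plain Dijkstra; 'metric' selects which link cost to minimize."""
        dist, prev, heap = {src: 0}, {}, [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for nbr, m in graph[node].items():
                nd = d + m[metric]
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path)), dist[dst]

    print(shortest_path("A", "B", "hops"))  # stays on the ARPANET: fewer hops
    print(shortest_path("A", "B", "ms"))    # detours over the WBNET: less time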

In the scenario, if the WBNET was somehow effectively utilized, perhaps 
it would be possible to convey much more detailed maps and other 
graphics.  Maybe even video.

But there was no way to use WBNET.   So we put "Source Routing" 
mechanisms into the IPV4 headers, as a way for experimenters to force 
traffic over the WBNET, despite the gateways' belief that such a path was 
never the best way to go.   In effect, the "host" computers were making 
their own decision about how their traffic should be carried through the 
Internet, likely contradicting the decision made by the routing 
mechanisms in the Gateways.  There was even a term for the necessary 
algorithms and code in those "host" computers - they had to act as "Half 
Gateways".  To make decisions about where to send their datagrams, the 
hosts had to somehow participate in the exchange of routing information 
with the networks' Gateways.  At the time that was only done by hand, 
configuring the host code to send appropriate packets with Source 
Routing to perform particular experiments.  No design of a "Half 
Gateway" was developed AFAIK.

In the ICCB's list of "Things that need to be done", this was part of 
the "Expressway Routing" issue.   The analogy we used was from 
everyone's familiarity driving in urban areas.  Even though you can get 
from point A to point B by using just the city streets "network", it's 
often better and faster to head for the nearest freeway entrance, even 
though it involves going a short distance in the "wrong direction".  
The route may be longer with three hops through Streets/Freeway/Streets, 
but it's the fastest way to get there, much better than just travelling 
on Streets.   Datagrams have needs just like travellers in cars; their 
passengers need to get to the destination before the event starts.   
Time matters.   So does achievable bandwidth, to get enough information 
delivered so that good decisions can be made.  You can't always count on 
getting both.

We thought gateways should be smart about Expressway Routing, and offer 
different types of service for different user needs, but didn't know how 
to do it.  Meanwhile, I don't know the details, but I believe there was 
quite a lot of such experimentation using the WBNET.   The expectation 
was that such experiments could work out how to best transport voice, 
graphical, and other such "non traditional" network traffic.   Later the 
gateways would know how to better use all the available resources and 
match their routes to the particular traffic's needs, and Source Routing 
would no longer be needed (at least for that situation).

All of what I just wrote happened almost 40 years ago, so things have 
changed.   A lot.  Maybe Time is no longer important, and notions such 
as TOS are no longer needed.  But today, in 2022, I see the talking 
heads on TV interviewing reporters, experts, or random people "out 
there" somewhere in the world.   The Internet seems to be everywhere 
(even active battlefields!) and it's used a lot.  I've been amazed at 
how well it works -- usually.  But you still sometimes see video 
breaking up, fragments of conversations being lost, and sometimes it 
gets bad enough that the anchor person apologizes for the "technical 
difficulties" and promises to get the interviewee back as soon as they can.

Perhaps that's caused by a loose cable somewhere.  Or perhaps it's 
caused by "buffer bloat" somewhere, which may have disappeared if you 
try later.  Perhaps it would work better if the Internet had TTL, TOS, 
and other such stuff that was envisioned in the 80s. Meanwhile, the 
Users (like me) have just become used to the fact that such things 
happen, you have to expect them, and just try again.

The General would not be happy.

I hope I'm wrong, but I fear "technical difficulties" has become a de 
facto feature of the Internet technology, now baked into the technical 
design.   Anyway, I hope I've explained why I (still) think Time is 
important.   It's all about The Internet providing the services that the 
customers need to do what they need to do.

-------

One last thing while I'm remembering it, just to capture a bit more of 
the 80s Internet history for the historians.  At the time, we had some 
ideas about how to solve these "Time" problems.  One idea was somewhat 
heretical.   I don't remember who was in the "We" group of heretics who 
were pursuing that idea.   But I admit to being such a heretic.

The gist of the Idea was "Packet Switching is Not Always the Right Answer!"

Pure Heresy! in the 1980s' Internet Community.

The core observation was that if you had a fairly consistent flow of 
data (bits, not packets) between point A and point B, the best way to 
carry that traffic was to simply have an appropriately sized circuit 
between A and B.   If you had some traffic that needed low-latency 
service, you'd route it over that circuit.   Other traffic, that 
wouldn't "fit" in the circuit could be routed over longer paths using 
classic packet switching.   Clever routing algorithms could make such 
decisions, selecting paths appropriate for each type of traffic using 
the settings conveyed in the TOS and TTL fields.   A heavy flow of 
traffic between two points might even utilize several distinct pathways 
through the Internet, and achieve throughput from A to B greater than 
what any single "best route" could accomplish.
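
(A toy sketch of that kind of decision, in Python with invented numbers: 
low-delay traffic, marked with the RFC 791 "low delay" TOS bit, gets the 
circuit; bulk traffic is spread across several packet-switched paths to 
get more aggregate throughput than any single route offers.  Nothing this 
simple was ever specified; it's only meant to illustrate the idea.)

    # Illustrative path table: a dedicated circuit plus two packet-switched
    # routes, each with a one-way delay and a capacity (made-up numbers).
    PATHS = {
        "circuit":  {"delay_ms": 30,  "capacity_kbps": 56},
        "packet-1": {"delay_ms": 180, "capacity_kbps": 50},
        "packet-2": {"delay_ms": 250, "capacity_kbps": 50},
    }

    LOW_DELAY = 0x10   # the "low delay" (D) bit of the RFC 791 TOS octet

    def choose_paths(tos, demand_kbps):
        """Pick paths for a flow: low-delay traffic gets the circuit,
        bulk traffic is spread across whatever capacity is left."""
        if tos & LOW_DELAY:
            return ["circuit"]
        # Bulk flow: take packet paths in delay order until demand is met;
        # using several at once can exceed any single "best route".
        chosen, remaining = [], demand_kbps
        for name, p in sorted(PATHS.items(), key=lambda kv: kv[1]["delay_ms"]):
            if name == "circuit":
                continue   # keep the circuit free for low-delay traffic
            chosen.append(name)
            remaining -= p["capacity_kbps"]
            if remaining <= 0:
                break
        return chosen

    print(choose_paths(LOW_DELAY, 8))   # voice-like flow -> ['circuit']
    print(choose_paths(0, 90))          # bulk transfer -> ['packet-1', 'packet-2']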

In the ICCB, this was called the "Multipath Routing" issue.  It wasn't a 
new issue; the same situation existed in the ARPANET and solutions 
were being researched for introduction into the IMP software.   There was 
quite a lot of such research going on, exploring how to improve the 
behavior of the ARPANET and its clones (the DDN, Defense Data Network, 
being a prime example of where new techniques would be very useful).

In the ARPANET, ten years of operations had led to the development of 
machinery to change the topology of the network as traffic patterns 
changed.  Analysts would look at traffic statistics, and at network 
performance data such as packet transit times, and run mathematical 
models to decide where it would be appropriate to have telephone 
circuits between pairs of IMPs.  Collecting such data, doing the 
analysis, and "provisioning" the circuits (getting the appropriate phone 
company to install them) took time - months at least, perhaps sometimes 
even years.

In the telephony network, there were even more years of experience using 
Circuit Switches - the technology of traditional phone calls, where the 
network switches allocated a specific quantity of bandwidth along 
circuits between switching centers, dedicating some bandwidth to each 
call and patching them all together in series so the end users thought 
that they had a simple wire connecting the two ends of the call.   
Packet switching provided Virtual Circuits and would try its best to 
handle whatever the Users gave it.  Circuit Switching provided real 
Circuits that provided stable bandwidth and delay, or told you it 
couldn't ("busy signal").

In the 80s ARPANET, we had experimented with faster ways to add or 
subtract bandwidth, by simply using dial-up modems.   An IMP could "add 
a circuit" to another IMP by using the dial-up telephony network to 
"make a call" to the other IMP, and the routing mechanisms would notice 
that that circuit had "come up", and simply incorporate it into the 
traffic flows.   Such mechanisms were manually triggered, since the IMP 
software didn't know how to make decisions about such "dynamic 
topology".  We used it successfully to enable a new IMP to join an 
existing network by simply "dialing in" to a modem on some IMP already 
running in that network.   The new IMP would quickly become just another 
operating node in the existing network, and its attached host computers 
could then make connections to other sites on the network.

The heretical idea in the Internet arena was that a similar "dynamic 
topology" mechanism could be introduced, where bandwidth between points 
A and B could be added and subtracted on the fly between pairs of 
Gateways, as some human operator, or very clever algorithm, determined 
it was appropriate.

With such a mechanism, (we hoped that) different types of service could 
be supported on the Internet.  A gateway might determine that there was 
a need for a low-latency pathway between points A and B, and that it was 
unable to provide such service with the current number of "hops" (more 
specifically, Time) involved in the current best route.   So it could 
"dial up" more bandwidth directly between A and B, thereby eliminating 
multiple hops through intermediate gateways and the associated packet 
transmission delays, buffering, etc.
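
(As a sketch only, in Python with invented thresholds, the decision rule 
might have looked something like this; as far as I know nothing this 
simple was ever actually specified or built.)

    NEEDED_MS = 100   # illustrative latency target for the A-B traffic

    def review_topology(measured_delay_ms, circuit_up, low_latency_kbps_offered):
        """Toy 'dynamic topology' rule a gateway (or a human operator)
        might apply when reviewing the path between A and B."""
        if (not circuit_up and measured_delay_ms > NEEDED_MS
                and low_latency_kbps_offered > 0):
            return "dial up a direct circuit from A to B"
        if circuit_up and low_latency_kbps_offered == 0:
            return "hang up the circuit; packet routes suffice"
        return "leave the topology as is"

    print(review_topology(240, circuit_up=False, low_latency_kbps_offered=16))
    print(review_topology(40, circuit_up=True, low_latency_kbps_offered=0))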

So, Packet Switching was not always the right answer.  When you need a 
Circuit, you should use Circuit Switching....   Heresy!

There were all sorts of ideas floating around about how that might 
work.  One example I remember was called something like "Cut Through 
Routing".   The basic idea was that a Gateway, when it started to 
receive a datagram, could look at the header and identify that datagram 
as being high priority, and associated with an ongoing traffic flow that 
needed low latency.   The gateway could then start transmitting that 
same datagram on the way to its next outbound destination -- even before 
the datagram had been completely received from the incoming circuit.   
This would reduce transit time through that node to possibly just a 
handful of "bit times", rather than however long it would take to 
receive and then retransmit the entire datagram.  But there were 
problems with such a scheme - what do you do about checksums?
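
(Some back-of-the-envelope arithmetic, with illustrative numbers, shows 
both why cut-through looked attractive and why the checksum question was 
awkward.)

    # Illustrative numbers: a 576-byte datagram crossing 5 nodes joined by
    # 56 kbit/s circuits.  Store-and-forward pays the full serialization
    # delay at every hop; cut-through pays it once, plus roughly a header's
    # worth of delay at each intermediate node.  Propagation delay ignored.
    LINK_BPS    = 56_000
    PACKET_BITS = 576 * 8
    HEADER_BITS = 20 * 8      # minimal IPv4 header
    HOPS = 5

    serialize = PACKET_BITS / LINK_BPS          # about 82 ms per hop
    store_and_forward = HOPS * serialize
    cut_through = serialize + (HOPS - 1) * (HEADER_BITS / LINK_BPS)

    print(f"store-and-forward: {store_and_forward*1000:.0f} ms")
    print(f"cut-through:       {cut_through*1000:.0f} ms")
    # The catch: a cut-through node has already forwarded most of the
    # datagram before it can verify the checksum of what it received.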

Obviously such a system would require a lot of new work.  In the 
interim, to gain experience from operations and hopefully figure out 
what those clever routing algorithms should do, we envisioned a network 
in which a "node" contained two separate pieces of equipment - a typical 
Gateway (now called a Router), and a typical Circuit Switch (as you 
would find in an 80s telephony network).    Until the algorithms were 
figured out, a human operator/analyst would make the decisions about how 
to use the packet and circuit capabilities, much as the dial-up modems 
were being used, and hopefully work out how such things should behave, so 
that the behavior could be turned into algorithms, protocols, and code.

At BBN, we actually proposed such a network project to one client (not 
ARPA), using off-the-shelf IMPs, Circuit Switches, and Gateways to 
create each network node.  The Circuit network would provide circuits to 
be used by the Packet Network, and such Circuits could be reconfigured 
on demand as needed.  If two Gateways really needed a circuit connecting 
them, it could be "provisioned" by simply issuing commands to the 
Circuit Switches.   The Gateways would (eventually) realize that they 
had a new circuit available, and it would become the shortest route 
between A and B.

BBN even bought a small company that had been making Circuit Switches 
for the Telephony market.  AFAIK, that project didn't happen.  I suspect 
the client realized that there was a bit too much "research" that still 
needed to be done before such a system would be ready for production use.

Anyway, I thought this recollection of 1980s networking might be of 
historical interest.  After 40 years, things have no doubt changed a 
lot.  I don't know much about how modern network nodes actually work.   
Perhaps they now do use a hybrid of packet and circuit switching and use 
dynamic topology?  Perhaps it's all now in silicon deep inside where the 
fiber light is transformed back and forth into electrons.  Perhaps it's 
all done optically using some kind of quantum technique...?  Or perhaps 
they have just added more memory everywhere and hoped that lots of 
buffering would be enough to meet the Users' needs.   Memory is cheaper 
to get than new algorithms and protocols.

In any event, I hope this explains why I think Time was, and is still, 
important to The Internet.  It's not an easy problem.  And my own 
empirical and anecdotal observation, as just a User now, is that bad 
things still seem to happen far too frequently to explain as technical 
difficulties.

Although many people use The Internet today, there are some communities 
that find it unusable.  Serious Gamers I've talked with struggle to find 
places to plug in to The Internet where they can enjoy their games.  I 
also wonder, as we watch the news from "the front", wherever that is 
today, whether today's military actually uses The Internet as that 1980s 
scenario envisioned.   Or perhaps they have their own private internet 
now, tuned to do what they need it to do?

Hope this helps some Historians.   Someone should have written it down 
40 years ago, in a form more permanent than emails.   Sorry about that....

Thanks for getting this far,
Jack Haverty


On 10/2/22 12:50, Brian E Carpenter wrote:
> Jack,
> On 03-Oct-22 06:55, Jack Haverty via Internet-history wrote:
>> The short answer is "Yes".  The Time-To-Live field was intended to count
>> down actual transit time as a datagram proceeded through the Internet.
>> A datagram was to be discarded as soon as some algorithm determined it
>> wasn't going to get to its destination before its TTL ran to zero.   But
>> we didn't have the means to measure time, so hop-counts were the
>> placeholder.
>>
>> I wasn't involved in the IPV6 work, but I suspect the change of the
>> field to "hop count" reflected the reality of what the field actually
>> was.   But it would have been better to have actually made Time work.
>
> To be blunt, why?
>
> There was no promise of guaranteed latency in those days, was there?
> As soon as queueing theory entered the game, that wasn't an option.
> So it wasn't just the absence of precise time, it was the presence of
> random delays that made a hop count the right answer, not just the
> convenient answer.
>
> I think that's why IPv6 never even considered anything but a hop count.
> The same lies behind the original TOS bits and their rebranding as
> the Differentiated Services Code Point many years later. My motto
> during the diffserv debates was "You can't beat queueing theory."
>
> There are people in the IETF working hard on Detnet ("deterministic
> networking") today. Maybe they have worked out how to beat queueing
> theory, but I doubt it. What I learned from working on real-time
> control systems is that you can't guarantee timing outside a very
> limited and tightly managed set of resources, where unbounded
> queues cannot occur.
>
>    Brian
>
>>
>> Much of these "original ideas" probably weren't ever written down in
>> persistent media.  Most discussions in the 1980 time frame were done
>> either in person or more extensively in email.   Disk space was scarce
>> and expensive, so much of such email was probably never archived -
>> especially email not on the more "formal" mailing lists of the day.
>>
>> As I recall, Time was considered very important, for a number of
>> reasons.  So here's what I remember...
>> -----
>>
>> Like every project using computers, the Internet was constrained by too
>> little memory, too slow processors, and too limited bandwidth. A
>> typical, and expensive, system might have a few dozen kilobytes of
>> memory, a processor running at perhaps 1 MHz, and "high speed"
>> communications circuits carrying 56 kilobits per second.   So there was
>> strong incentive not to waste resources.
>>
>> At the time, the ARPANET had been running for about ten years, and quite
>> a lot of experience had been gained through its operation and crises.
>> Over that time, a lot of mechanisms had been put in place, internally in
>> the IMP algorithms and hardware, to "protect" the network and keep it
>> running despite what the user computers tried to do.  So, for example,
>> an IMP could regulate the flow of traffic from any of its "host"
>> computers, and even shut it off completely if needed.  (Google "ARPANET
>> RFNM counting" if curious).
>>
>> In the Internet, the gateways had no such mechanisms available. We were
>> especially concerned about the "impedance mismatch" that would occur at
>> a gateway connecting a LAN to a much slower and "skinnier" long-haul
>> network.  All of the "flow control" mechanisms that were implemented
>> inside an ARPANET IMP would be instead implemented inside TCP software
>> in users' host computers.
>>
>> We didn't know how that would work.   But something had to be in the
>> code....  So the principle was that IP datagrams could be simply
>> discarded when necessary, wherever necessary, and TCP would retransmit
>> them so they would eventually get delivered.
>>
>> We envisioned that approach could easily lead to "runaway" scenarios,
>> with the Internet full of duplicate datagrams being dropped at any
>> "impedance mismatch" point along the way.   In fact, we saw exactly that
>> at a gateway between ARPANET and SATNET - IIRC in one of Dave's
>> transatlantic experiments ("Don't do that!!!")
>>
>> So, Source Quench was invented, as a way of telling some host to "slow
>> down", and the gateways sent an SQ back to the source of any datagram it
>> had to drop.  Many of us didn't think that would work very well (e.g., a
>> host might send one datagram and get back an SQ - what should it do to
>> "slow down"...?).   I recall that Dave knew exactly what to do. Since
>> his machine's datagram had been dropped, it meant he should immediately
>> retransmit it.   Another "Don't do that!" moment....
>>
>> But SQ was a placeholder too -- to be replaced by some "real" flow
>> control mechanism as soon as the experimentation revealed what that
>> should be.
>>
>> -----
>>
>> TCP retransmissions were based on Time.  If a TCP didn't receive a
>> timely acknowledgement that data had been received, it could assume that
>> someone along the way had dropped the datagram and it should retransmit
>> it.  SQ datagrams were also of course not guaranteed to get to their
>> destination, so you couldn't count on them as a signal to retransmit.
>> So Time was the only answer.
>>
>> But how to set the Timer in your TCP - that was subject to
>> experimentation, with lots of ideas.  If you sent a copy of your data
>> too soon, it would just overload everything along the path through the
>> Internet with superfluous data consuming those scarce resources.  If you
>> waited too long, your end-users would complain that the Internet was too
>> slow.   So the answer was to have each TCP estimate how long it was
>> taking for a datagram to get to its destination, and set its own
>> "retransmission timer" to slightly longer than that value.
>>
>> Of course, such a technique requires instrumentation and data. Also,
>> since the delays might depend on the direction of a datagram's travel,
>> you needed synchronized clocks at the two endpoints of a TCP connection,
>> so they could accurately measure one-way transit times.
>>
>> Meanwhile, inside the gateways, there were ideas about how to do even
>> better by using Time.  For example, if the routing protocols were
>> actually based on Time (shortest transit time) rather than Hops (number
>> of gateways between here and destination), the Internet would provide
>> better user performance and be more efficient.  Even better - if a
>> gateway could "know" that a particular datagram wouldn't get to its
>> destination before its TTL ran out, it could discard that datagram
>> immediately, even though it still had time to live.  No point in wasting
>> network resources carrying a datagram already sentenced to death.
>>
>> We couldn't do all that.   Didn't have the hardware, didn't have the
>> algorithms, didn't have the protocols.  So in the meantime, any computer
>> handling an IP datagram should simply decrement the TTL value, and if it
>> reached zero the datagram should be discarded. TTL effectively became a
>> "hop count".
>>
>> When Dave got NTP running, and enough Time Servers were online and
>> reliable, and the gateways and hosts had the needed hardware, Time could
>> be measured, TTL could be set based on Time, and the Internet would be
>> better.
>>
>> In the meanwhile, all of us TCP implementers just picked some value for
>> our retransmission timers.  I think I set mine to 3 seconds. No
>> exhaustive analysis or sophisticated mathematics involved.  It just felt
>> right.....there was a lot of that going on in the early Internet.
>>
>> -----
>>
>> While all the TCP work was going on, other uses were emerging. We knew
>> that there was more to networking than just logging in to distant
>> computers or transferring files between them - uses that had been common
>> for years in the ARPANET.   But the next "killer app" hadn't appeared
>> yet, although there were lots of people trying to create one.
>>
>> In particular, "Packet Voice" was popular, with a contingent of
>> researchers figuring out how to do that on the fledgling Internet. There
>> were visions that someday it might even be possible to do Video.  In
>> particular, *interactive* voice was the goal, i.e., the ability to have
>> a conversation by voice over the Internet (I don't recall when the term
>> VOIP emerged, probably much later).
>>
>> In a resource-constrained network, you don't want to waste resources on
>> datagrams that aren't useful.  In conversational voice, a datagram that
>> arrives too late isn't useful.  A fragment of audio that should have
>> gone to the speaker 500 milliseconds ago can only be discarded. It
>> would be better that it hadn't been sent at all, but at least discarding
>> it along the way, as soon as it's known to be too late to arrive, would
>> be appropriate.
>>
>> Of course, that needs Time.  UDP was created as an adjunct to TCP,
>> providing a different kind of network service.   Where TCP got all of
>> the data to its destination, no matter how long it took, UDP would get
>> as much data as possible to the destination, as long as it got there in
>> time to be useful.   Time was important.
>>
>> UDP implementations, in host computers, didn't have to worry about
>> retransmissions.  But they did still have to worry about how long it
>> would take for a datagram to get to its destination.  With that
>> knowledge, they could set their datagrams' TTL values to something
>> appropriate for the network conditions at the time.  Perhaps they might
>> even tell their human users "Sorry, conversational use not available
>> right now." -- an Internet equivalent of the "busy signal" - if the
>> current network transit times were too high to provide a good user
>> experience.
>>
>> Within the world of gateways, the differing needs of TCP and UDP
>> motivated different behaviors.  That motivated the inclusion of the TOS
>> - Type Of Service - field in the IP datagram header.  Perhaps UDP
>> packets would receive higher priority, being placed at the head of
>> queues so they got transmitted sooner.  Perhaps they would be discarded
>> immediately if the gateway knew, based on its routing mechanisms, that
>> the datagram would never get delivered in time. Perhaps UDP would be
>> routed differently, using a terrestrial but low-bandwidth network, while
>> TCP traffic was directed over a high-bandwidth but long-delay satellite
>> path.   A gateway mesh might have two or more independent routing
>> mechanisms, each using a "shortest path" approach, but with different
>> metrics for determining "short" - e.g., UDP using the shortest time
>> route, while some TCP traffic travelled a route with least ("shortest")
>> usage at the time.
>>
>> We couldn't do all that either.  We needed Time, hardware, algorithms,
>> protocols, etc.  But the placeholders were there, in the TCP, IP, and
>> UDP formats, ready for experimentation to figure all that stuff out.
>>
>> -----
>>
>> When Time was implemented, there could be much needed experimentation to
>> figure out the right answers.  Meanwhile, we had to keep the Internet
>> working.  By the early 1980s, the ARPANET had been in operation for more
>> than a decade, and lots of operational experience had accrued. We knew,
>> for example, that things could "go wrong" and generate a crisis for the
>> network operators to quickly fix.    TTL, even as just a hop count, was
>> one mechanism to suppress problems.  We knew that "routing loops" could
>> occur.   TTL would at least prevent situations where datagrams
>> circulated forever, orbiting inside the Internet until someone
>> discovered and fixed whatever was causing a routing loop to keep those
>> datagrams speeding around.
>>
>> Since the Internet was an Experiment, there were mechanisms put in place
>> to help run experiments.  IIRC, in general things were put in the IP
>> headers when we thought they were important and would be needed long
>> after the experimental phase was over - things like TTL, SQ, TOS.
>>
>> Essentially every field in the IP header, and every type of datagram,
>> was there for some good reason, even though its initial implementation
>> was known to be inadequate.   The Internet was built on Placeholders....
>>
>> Other mechanisms were put into the "Options" mechanism of the IP
>> format.   A lot of that was targeted towards supporting experiments, or
>> as occasional tools to be used to debug problems in crises during
>> Internet operations.
>>
>> E.g., all of the "Source Routing" mechanisms might be used to route
>> traffic in particular paths that the current gateways wouldn't otherwise
>> use.  An example would be routing voice traffic over specific paths,
>> which the normal gateway routing wouldn't use.   The Voice experimenters
>> could use those mechanisms to try out their ideas in a controlled
>> experiment.
>>
>> Similarly, Source Routing might be used to debug network problems. A
>> network analyst might use Source Routing to probe a particular remote
>> computer interface, where the regular gateway mechanisms would avoid
>> that path.
>>
>> So a general rule was that IP headers contained important mechanisms,
>> often just as placeholders, while Options contained things useful only
>> in particular circumstances.
>>
>> But all of these "original ideas" needed Time.   We knew Dave was "on
>> it"....
>>
>> -----
>>
>> Hopefully this helps...  I (and many others) probably should have
>> written these "original ideas" down 40 years ago.   We did, but I
>> suspect all in the form of emails which have now been lost. Sorry
>> about that.   There was always so much code to write.  And we didn't
>> have the answers yet to motivate creating RFCs which were viewed as more
>> permanent repositories of the solved problems.
>>
>> Sorry about that.....
>>
>> Jack Haverty
>>
>>
>>
>> On 10/2/22 07:45, Alejandro Acosta via Internet-history wrote:
>>> Hello Jack,
>>>
>>>    Thanks a lot for sharing this, as usual, I enjoy this kind of
>>> stories :-)
>>>
>>>    Jack/group, just a question regarding this topic. When you 
>>> mentioned:
>>>
>>> "This caused a lot of concern about protocol elements such as
>>> Time-To-Live, which were temporarily to be implemented purely as "hop
>>> counts"
>>>
>>>
>>>    Do you mean, the original idea was to really drop the packet at
>>> certain time, a *real* Time-To-Live concept?.
>>>
>>>
>>> Thanks,
>>>
>>> P.S. That's why it was important to change the field's name to hop
>>> count in v6 :-)
>>>
>>>
>>>
>>> On 2/10/22 12:35 AM, Jack Haverty via Internet-history wrote:
>>>> On 10/1/22 16:30, vinton cerf via Internet-history wrote:
>>>>> in the New Yorker
>>>>>
>>>>> https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time 
>>>>>
>>>>>
>>>>>
>>>>> v
>>>>
>>>> Agree, nice story.   Dave did a *lot* of good work.  Reading the
>>>> article reminded me of the genesis of NTP.
>>>>
>>>> IIRC....
>>>>
>>>> Back in the early days circa 1980, Dave was the unabashed tinkerer,
>>>> experimenter, and scientist.  Like all good scientists, he wanted to
>>>> run experiments to explore what the newfangled Internet was doing and
>>>> test his theories.   To do that required measurements and data.
>>>>
>>>> At the time, BBN was responsible for the "core gateways" that
>>>> provided most of the long-haul Internet connectivity, e.g., between
>>>> US west and east coasts and Europe.  There were lots of ideas about
>>>> how to do things - e.g., strategies for TCP retransmissions,
>>>> techniques for maintaining dynamic tables of routing information,
>>>> algorithms for dealing with limited bandwidth and memory, and other
>>>> such stuff that was all intentionally very loosely defined within the
>>>> protocols.   The Internet was an Experiment.
>>>>
>>>> I remember talking with Dave back at the early Internet meetings, and
>>>> his fervor to try things out, and his disappointment at the lack of
>>>> the core gateway's ability to measure much of anything. In
>>>> particular, it was difficult to measure how long things took in the
>>>> Internet, since the gateways didn't even have real-time clocks. This
>>>> caused a lot of concern about protocol elements such as Time-To-Live,
>>>> which were temporarily to be implemented purely as "hop counts",
>>>> pending the introduction of some mechanism for measuring Time into
>>>> the gateways.  (AFAIK, we're still waiting....)
>>>>
>>>> Curiously, in the pre-Internet days of the ARPANET, the ARPANET IMPs
>>>> did have a pretty good mechanism for measuring time, at least between
>>>> pairs of IMPs at either end of a communications circuit, because such
>>>> circuits ran at specific speeds.   So one IMP could tell how long it
>>>> was taking to communicate with one of its neighbors, and used such
>>>> data to drive the ARPANET internal routing mechanisms.
>>>>
>>>> In the Internet, gateways couldn't tell how long it took to send a
>>>> datagram over one of its attached networks.   The networks of the day
>>>> simply didn't make such information available to its "users" (e.g., a
>>>> gateway).
>>>>
>>>> But experiments require data, and labs require instruments to collect
>>>> that data, and Dave wanted to test out lots of ideas, and we (BBN)
>>>> couldn't offer any hope of such instrumentation in the core gateways
>>>> any time soon.
>>>>
>>>> So Dave built it.
>>>>
>>>> And that's how NTP got started.  IIRC, the rest of us were all just
>>>> trying to get the Internet to work at all.   Dave was interested in
>>>> understanding how and why it worked.  So while he built NTP, that
>>>> didn't really affect any other projects.  Plus most (at least me)
>>>> didn't understand how it was possible to get such accurate
>>>> synchronization when the delays through the Internet mesh were so
>>>> large and variable.   (I still don't). But Dave thought it was
>>>> possible, and that's why your computer, phone, laptop, or whatever
>>>> know what time it is today.
>>>>
>>>> Dave was responsible for another long-lived element of the
>>>> Internet.   Dave's experiments were sometimes disruptive to the
>>>> "core" Internet that we were tasked to make a reliable 24x7 service.
>>>> Where Dave The Scientist would say "I wonder what happens when I do
>>>> this..." We The Engineers would say "Don't do that!"
>>>>
>>>> That was the original motivation for creating the notion of
>>>> "Autonomous Systems" and EGP - a way to insulate the "core" of the
>>>> Internet from the antics of the Fuzzballs.  I corralled Eric Rosen
>>>> after one such Fuzzball-triggered incident and we sat down and
>>>> created ASes, so that we could keep "our" AS running reliably.  It
>>>> was intended as an interim mechanism until all the experimentation
>>>> revealed what should be the best algorithms and protocol features to
>>>> put in the next generation, and the Internet Experiment advanced into
>>>> a production network service.   We defined ASes and EGP to protect
>>>> the Internet from Dave's Fuzzball mania.
>>>>
>>>> AFAIK, that hasn't happened yet ... and from that article, Dave is
>>>> still Experimenting..... and The Internet is still an Experiment.
>>>>
>>>> Fun times,
>>>> Jack Haverty
>>>>
>>




