[ih] ARPANET pioneer Jack Haverty says the internet was never finished
Jack Haverty
jack at 3kitty.org
Wed Mar 2 17:06:20 PST 2022
Absolutely. The audience for my talk was technically savvy people who are
involved in building and/or operating the pieces of the Internet in
places where fiber hasn't carpeted the area yet - places like Bangladesh
et al, where they do have to pay attention to traffic engineering.
But even so, I included the anecdote of my friend and his recent attempt
at a "gaming" experience (actually a remote-desktop kind of situation)
over the path between LA and Reno, NV. Even in fiber-rich US, the
Internet doesn't work for some users when they try to do certain
things. I speculate that we can see this every day now by watching TV
news interviews with their occasional audio glitches, video freezing,
etc. I can't "see" the traffic over their Internet path, but I surmise
that some of those datagrams aren't getting to their destination soon
enough to be useful, as was happening in my friend's experience.
That same behavior was reported by people such as Steve Casner and Jim
Forgie as they tried to do real-time interactive voice over the early
1980s Internet. That experience led to the splitting of TCP into
TCP/IP, and the creation of UDP to also run over IP and provide another
type of service. Where TCP provided a virtual circuit, UDP provided raw
datagrams - no guarantees at all. We realized that different kinds of
uses motivated different kinds of network behavior.
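That split is still visible in today's sockets API. A minimal sketch
in modern Python of the two types of service (the host name and port
are placeholders, assuming an echo service is listening there):

    import socket

    # TCP: a virtual circuit - an ordered, reliable byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.org", 7))    # placeholder echo server
    tcp.sendall(b"hello")              # delivery and ordering guaranteed

    # UDP: raw datagrams over IP - no guarantees at all.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("example.org", 7))  # may arrive late, or never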
Those additional services didn't require that the underlying datagram
transport mechanisms (routers) necessarily provide multiple types of
service. But we thought that such an architecture might be desirable.
For example, to reduce wasted bandwidth, the TTL value would indicate
how long a datagram could remain in transit and still be useful when
it arrived at its destination. Routers could simply discard such
datagrams immediately, even if their TTL was not yet zero, if they
somehow knew that the datagram would not get to its destination before
its TTL expired. We expected that might especially occur at the
boundary between a fast LAN and slow WAN (ARPANET). Routers could also
prioritize traffic if doing so would get it to its destination "in
time", e.g., by placing such datagrams at the head of an output queue.
Or perhaps they would route such traffic over a separate path - one path
for bulk traffic, the other for express.
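To make the idea concrete, here is a sketch (modern Python, purely
illustrative - no real router of the day worked this way) of that
imagined behavior: early discard of datagrams that cannot arrive in
time, plus an express queue served ahead of bulk traffic:

    import heapq

    EXPRESS, BULK = 0, 1   # express class is served first

    class Router:
        def __init__(self):
            self._queue = []   # entries: (class, sequence, datagram)
            self._seq = 0      # tie-breaker keeps FIFO order per class

        def enqueue(self, dgram, est_transit_secs):
            # Early discard: if we somehow know the datagram cannot
            # reach its destination before its (time-based) TTL runs
            # out, drop it now rather than waste downstream bandwidth.
            if est_transit_secs > dgram["ttl_secs"]:
                return False   # discarded, even though TTL isn't zero
            cls = EXPRESS if dgram["time_critical"] else BULK
            self._seq += 1
            heapq.heappush(self._queue, (cls, self._seq, dgram))
            return True

        def dequeue(self):
            # Time-critical traffic comes off the head of the queue.
            return heapq.heappop(self._queue)[2] if self._queue else None

    r = Router()
    r.enqueue({"ttl_secs": 0.2, "time_critical": True}, 0.5)   # dropped
    r.enqueue({"ttl_secs": 5.0, "time_critical": False}, 0.5)  # queued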
We didn't know how best to do all that, i.e., it was Research. Also,
there were important mechanisms missing. E.g., TTL was defined in
"hops" because the routers of the day had no means to measure time or
synchronize across the net. Dave Mills took on that challenge and NTP
was the result. I heard that his "fuzzballs" subsequently somehow used
time instead of hops in their routing and queue management algorithms.
Placeholder mechanisms were put in place. TOS bits were a way for a
host to indicate what kind of service each datagram required, once
someone figured out what different services routers could
provide. TTL was counted in hops, but could readily become time later. Source
Quench was a rudimentary mechanism to reflect congestion from somewhere
inside the network back to a source so that it could "slow down". Some
people, however, decided that receipt of a Source Quench meant your last
datagram had been discarded -- so you should instantly retransmit it.
Personally, I had no idea what my own TCP implementation should do on
receiving a Source Quench; I think I incremented a counter somewhere.
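In rough code, those reactions looked something like this (a sketch:
ICMP type 4 really is Source Quench, since deprecated by RFC 6633, but
the sender class here is hypothetical):

    ICMP_SOURCE_QUENCH = 4   # real ICMP type; deprecated by RFC 6633

    class Sender:
        """Hypothetical sender showing reactions to Source Quench."""
        def __init__(self):
            self.send_rate = 100.0   # datagrams per second
            self.quench_count = 0

        def on_icmp(self, icmp_type):
            if icmp_type != ICMP_SOURCE_QUENCH:
                return
            # The intended reaction: slow down.
            self.send_rate = max(self.send_rate / 2, 1.0)
            # The misguided reaction some chose instead:
            #     self.retransmit_last_datagram()
            # And roughly what my own TCP did:
            self.quench_count += 1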
All of the above occurred in the time between TCP2 and TCP4, with the
expectation that the ongoing research would produce some answers which
would be introduced into V5, V6, V7, etc. It's been 40 years, so it's
quite possible that TOS bits, for example, are no longer needed, and
that the new mechanisms have been well documented and standardized in
the thousands of RFCs that have been written.
But from a user's perspective, mechanisms and algorithms aren't useful
until they're present and operating in all the equipment that's involved
in whatever the user is trying to do. Are they there? Can't tell.
The talking heads on TV still pixelate. My friend can't play his game.
The other point I was trying to make probably didn't come through
clearly - just not enough time to explain it well.
Networks are no longer just the collection of switching equipment and
communications "lines" that interconnect them and the algorithms cast
into their software. Much of the mechanism that in "the old days" you
would find inside the switches is no longer there.
The ARPANET, and X.25 or other nets of the 80s, had elaborate internal
mechanisms to implement virtual circuits, manage resources to avoid
congestion, and "push back" on senders to force them to slow down when
needed. In today's Internet, much of that mechanism has been "moved
out" from the switches and into the "hosts", i.e., the billions of
desktops, laptops, smartphones, and even refrigerators, TVs, attic fans,
and such. Some of it is in each OS's TCP/IP implementation. Some is
in the applications as they try to figure out how best to use whatever
the network is providing right now.
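TCP congestion control is the clearest example: the "slow down" logic
that the old networks kept inside their switches now runs in every
endpoint. A minimal sketch of the additive-increase/multiplicative-
decrease idea (illustrative only, not any particular stack's code):

    def aimd_step(cwnd, mss, loss_detected):
        """One step of additive-increase/multiplicative-decrease
        (AIMD), the host-side congestion control that took over the
        network's old "push back" role. Sketch, not a real TCP stack."""
        if loss_detected:
            return max(cwnd / 2, mss)       # multiplicative decrease
        return cwnd + mss * mss / cwnd      # additive increase, ~1 MSS/RTT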
Moving such mechanisms from the "switches" to the "hosts" was, IMHO, a
salient part of the Internet Experiment. It certainly made routers
easier to build than switches. But was it a good idea to put such
mechanisms into the billions of hosts?
For a network operator trying to keep its customers satisfied, that
means looking not only at how the switches and lines are
performing, but also at how those "network" mechanisms, now residing in
the "hosts", are performing. It all has to work well for the user to
be happy with the service, and the network operators happy with their
equipment. That's what I tried to highlight in my anecdote about the
network glitch and TCP retransmissions over a trans-Pacific path. The
users weren't happy because the network was slow. The operators weren't
happy, not only because the users were complaining, but because half
the capacity of those expensive transoceanic circuits was being wasted
on retransmissions. TCP does a
wonderful job of keeping the data flowing despite all sorts of
obstacles. It also does a wonderful job of hiding problems unless
someone goes digging to see what's going on.
In the earliest ARPANET days, the NOC used to keep track of end-to-end
delay, with a target of keeping it under 250 milliseconds (IIRC). Most
users then interacted with their remote computers at typewriter
terminals, and became unhappy if their keystrokes didn't echo back at
least that quickly.
Today's Internet, admittedly from my anecdotal experience, seems to
think 30 seconds is perfectly acceptable as long as all the datagrams
get there eventually.
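Anyone can check their own path the way the old NOC did - by measuring
the echo delay. A sketch (assuming an echo service is listening at the
placeholder host and port):

    import socket, time

    def echo_rtt(host, port, payload=b"x"):
        """Measure one application-level round trip - the modern
        analog of watching whether a keystroke echoes within 250
        milliseconds. Assumes an echo service at (host, port)."""
        with socket.create_connection((host, port), timeout=30) as s:
            t0 = time.monotonic()
            s.sendall(payload)
            s.recv(len(payload))
            return time.monotonic() - t0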
Jack
On 3/2/22 08:53, touch--- via Internet-history wrote:
>> On Mar 2, 2022, at 8:22 AM, Noel Chiappa via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>>> On Tue, Mar 1, 2022 at 8:46 PM Jack Haverty wrote:
>>> One that I used in the talk was TOS, i.e., how should routers (and TCPs)
>>> treat datagrams differently depending on their TOS values.
>> I actually don't think that's that important any more (or multicast either).
>> TOS is only really important in a network with resource limitations, or very
>> different service levels. We don't have those any more - those limitations
>> have just been engineered away.
> Not all networks can be over-provisioned; DSCPs and traffic engineering are alive and well.
>
> They’ve just been buried so low that you don’t notice them. It’s like driving on cement and claiming no more need for iron rebar.
>
> Taking Clarke’s Third Law a step further*, "any sufficiently useful technology fades into the background".
>
> Joe
>
> *"Any sufficiently advanced technology is indistinguishable from magic"
> —
> Dr. Joe Touch, temporal epistemologist
> www.strayalpha.com