[ih] IEN's as txt
Jack Haverty
jack at 3kitty.org
Thu Feb 9 20:36:50 PST 2017
On 02/09/2017 04:02 PM, Paul Ruizendaal wrote:
> My underlying motive for this is to understand the changes to
> TCP (and IP/ICMP/UDP) in the 1978-1981 time frame, and diffs
> of the IENs and RFCs would help. Perhaps this analysis has
> already been done?
>
> One thing that surprised me is that the closing mechanics in the
> TCP state diagram kept changing until very late. Perhaps it
> was just a matter of ever more precise specification, but if it
> was a conceptual change it would seem odd that it did not show up
> earlier in the testing process and 'bake offs'.
>
> Same goes for ICMP: it was a late arrival and the rationale for
> abandoning the earlier approach is not entirely clear.
>
Hi Paul,
I don't think you'll find much in the IENs and RFCs about the rationale
for the various design changes in the 1978-81 timeframe. There was a
lot going on then, and most of us had become electronic mail addicts by
then. There was much more discussion and debate carried out on mailing
lists (like this one) than appeared in RFCs or IENs. Those documents,
despite their names, were viewed as more "formal" places to document
results rather than as places to carry out discussions as things were
changed. Electronic mail and FTP were just so much faster than the
traditional academic stream of papers.
If you can find ancient archives of lists such as the TCP-IP Working
Group (I can't recall its exact mail address format), that would be the
best source.
I'm still trying to get to scanning in some of the ancient papers in my
basement (e.g., my listing of 1979 Unix TCP). I also have my old
notebooks from all of the meetings in that time frame, which I'll go
through as well.
Meanwhile, ... from what I remember:
- TCP progressed rapidly through a bunch of slightly different versions:
TCP 2, 2.5, 2.5+epsilon, 2.5+2epsilon, and 3, eventually congealing
into 4. All of these changes, IMHO, reflected two influences: what we had
learned by experimenting with a live Internet, and what we learned
people wanted to do using the Internet.
There was a lot of discussion about how to handle the fact that
datagrams might linger on the Internet, in buffers in hosts or gateways,
for quite a while, even hours or days. They would cause considerable
confusion, and possibly incorrect data streams, if they surfaced after a
new connection had been established. Machines went up and down a lot
more frequently in those days than today. So the conventional wisdom
that packet lifetimes wouldn't ever be more than a few seconds was
judged to be incorrect.
We also discovered some obscure situations in which such spurious
packets could cause problems with closing connections. These
discussions led to the changes in the state machine. IIRC, several
additional states were added. These changes continued "very late"
because we kept finding additional situations where some change was
needed to make sure the connection worked properly.
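For concreteness, here is a minimal sketch (in modern Python, which of
course didn't exist then) of the close sequence as it finally settled
in RFC 793; the states and transitions below are from the final spec,
not from any particular intermediate draft discussed here:

    # Close-sequence portion of the final RFC 793 TCP state machine.
    # (state, event) -> next state
    CLOSE_TRANSITIONS = {
        ("ESTABLISHED", "close"):        "FIN-WAIT-1",  # active close: send FIN
        ("ESTABLISHED", "rcv FIN"):      "CLOSE-WAIT",  # passive close
        ("FIN-WAIT-1",  "rcv ACK"):      "FIN-WAIT-2",
        ("FIN-WAIT-1",  "rcv FIN"):      "CLOSING",     # simultaneous close
        ("FIN-WAIT-2",  "rcv FIN"):      "TIME-WAIT",
        ("CLOSING",     "rcv ACK"):      "TIME-WAIT",
        ("CLOSE-WAIT",  "close"):        "LAST-ACK",    # app done: send FIN
        ("LAST-ACK",    "rcv ACK"):      "CLOSED",
        # TIME-WAIT exists precisely because of the lingering-datagram
        # problem above: wait two maximum segment lifetimes (2*MSL) so
        # old duplicates die out before the ports can be reused.
        ("TIME-WAIT",   "2MSL timeout"): "CLOSED",
    }

    def step(state: str, event: str) -> str:
        """Advance one event; events not listed leave the state alone."""
        return CLOSE_TRANSITIONS.get((state, event), state)

    # A normal active close walks ESTABLISHED -> ... -> CLOSED:
    s = "ESTABLISHED"
    for ev in ("close", "rcv ACK", "rcv FIN", "2MSL timeout"):
        s = step(s, ev)
    print(s)  # CLOSED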
In our experiments with the live Internet, we learned that it was rather
difficult to tell what was going on, especially when things weren't
working right (very common in those early days). ICMP was added as a
mechanism for providing some of the functionality needed for things like
debugging (Ping) and "out-of-band" control (Source Quench, URGent), in
case the in-band windowing mechanisms in the data stream didn't do the job.
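The "Ping" idea survives essentially unchanged: an ICMP Echo Request
(type 8) answered by an Echo Reply (type 0). A hedged sketch in Python,
building the RFC 792 echo packet by hand (raw sockets need root
privileges, and details such as the kernel handing back the IP header
vary by platform):

    import os
    import socket
    import struct

    def icmp_checksum(data: bytes) -> int:
        """Internet checksum: one's-complement sum of 16-bit words."""
        if len(data) % 2:
            data += b"\x00"
        total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        total = (total >> 16) + (total & 0xFFFF)
        total += total >> 16
        return ~total & 0xFFFF

    def echo_request(ident: int, seq: int, payload: bytes) -> bytes:
        # type=8 (Echo Request), code=0; checksum covers the whole packet
        header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
        csum = icmp_checksum(header + payload)
        return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                         socket.getprotobyname("icmp"))
    sock.sendto(echo_request(os.getpid() & 0xFFFF, 1, b"ping"),
                ("127.0.0.1", 0))
    reply, addr = sock.recvfrom(1024)  # includes the 20-byte IP header
    print("reply from", addr[0], "-", len(reply), "bytes")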
There was still a lot of uncertainty about exactly what the Internet
service should be.
For example, some host computers were "record-oriented" and liked to
deal with communications as exchanging "records" (think of the
punch-card days; we were still using punch cards back then). In TCP,
that service was implemented by the "letter" mechanism, the EOL flag
("End Of Letter"), and the "Rubber EOL" mechanism, also known as "Rubber
Baby Buffer Bumpers". All of this kind of discussion really had to do
with ease of implementation and efficiency inside the various kinds of
machines involved.
Sometimes, design choices were made by non-technical means. Tenex liked
"Rubber EOLs", and Bill Plummer was a major proponent of that mechanism.
At one point, he left the TCP world to join another project ... and at
the next TCP meeting we all decided to remove Rubber EOL and Letters
from the TCP design. The "byte-stream" model was so much cleaner.
Another example is voice. With its limited bandwidth, the ARPANET never
really tried to do anything other than textual interactions. But DARPA
wanted the Internet to be able to carry voice, especially
interactive voice. Voice and data datagrams have different needs. With
data, you want all of the data to get to the destination, no matter how
long it takes. With voice, you want as much data as you can get in time
for the next fraction of a second, when that data is converted into sound.
TCP isn't the best choice for voice. Various ARPA projects (e.g., Steve
Casner's work at ISI) wanted to experiment with voice coding and
protocols, but it wouldn't work well over TCP.
That was one motivation for the splitting apart of TCP and IP; it
permitted UDP to ride on top of IP in parallel with TCP.
With more than one service offered by the Internet ("get it all there
eventually" and "get whatever you can there fast"), there was a need to
tell the Internet how to treat your datagrams. That motivated the
inclusion of the Type-Of-Service field and mechanisms.
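The Type-Of-Service field is still visible in today's socket API. As a
small illustration (assuming a Unix-ish system where Python exposes
IP_TOS; the field is nowadays reinterpreted as DSCP/ECN):

    import socket

    # "Get whatever you can there fast": a UDP socket marked low-delay.
    IPTOS_LOWDELAY = 0x10      # classic RFC 791 TOS bit
    fast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fast.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

    # "Get it all there eventually": a TCP socket marked high-throughput.
    IPTOS_THROUGHPUT = 0x08    # another RFC 791 TOS bit
    bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    bulk.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_THROUGHPUT)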
With multiple types-of-service, the question of course was what
mechanisms were needed to actually treat packets differently. E.g., it
might be necessary to have multiple simultaneous routing protocols in
action, each reflecting the state of the network for a particular type
of service.
There were lots of ideas about what to put in the IP layer, but it was
already getting pretty big... So, the "Options" mechanism was added to
permit anybody to add extra mechanisms. That was, IIRC, how the "SPT"
functionality was introduced to reflect Autodin needs.
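The Options mechanism itself is just a type-length-value encoding
appended to the fixed IP header. A sketch of the RFC 791 framing (the
option number below is made up purely for illustration):

    import struct

    def ip_option(opt_type: int, data: bytes) -> bytes:
        """RFC 791 multi-byte option: type, total length, then data."""
        return struct.pack("!BB", opt_type, 2 + len(data)) + data

    # Hypothetical option number 42 carrying two bytes of data; the
    # real assignments (security, source routing, etc.) are in RFC 791.
    opt = ip_option(42, b"\x01\x02")
    print(opt.hex())  # 2a040102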
Much of the impetus for ICMP, and other "ancillary" mechanisms such as
SNMP, came from the experiences in the ARPANET. I was at BBN in the
same group that built and was operating the ARPANET, and I took the helm
of various ARPA projects, including the gateway one, at that time. So
we did a lot of stuff in the gateways based on the experience with what
had been proven to work in the ARPANET. It was a way to bring the
Internet into reliable operation as a service rather than just an
experiment.
Dave Mills was very interested in Time, and I'm sure some of the
mechanisms put into IP/ICMP enabled him to define and refine the NTP
mechanisms. He in particular wanted to do a lot of experiments
measuring how long it took for things to happen on the Internet, and the
lack of a coordinated time measurement mechanism was a big obstacle.
So he built one.
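The kernel of what Mills eventually built is a four-timestamp exchange
from which both clock offset and round-trip delay fall out. A sketch of
that arithmetic (the formulas are from the eventual NTP specs, e.g.
RFC 5905, not from anything in the 1978-81 documents):

    def ntp_offset_delay(t1, t2, t3, t4):
        """t1: client send, t2: server receive, t3: server send,
        t4: client receive; each timestamp read from its own clock."""
        offset = ((t2 - t1) + (t3 - t4)) / 2  # how far the client clock is off
        delay = (t4 - t1) - (t3 - t2)         # total time spent on the wire
        return offset, delay

    # Example: server clock 0.5 s ahead, 0.2 s network delay each way,
    # 0.1 s of server processing time:
    print(ntp_offset_delay(10.0, 10.7, 10.8, 10.5))  # (0.5, 0.4)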
I'm sure there's a lot more "rationale" to explain what happened. Maybe
some of the others who "were there" at the time could recount their
experiences too... (you know who you are!)
You also need to remember that the Internet was officially an
Experiment, and was officially tasked to try things that had not been
tried before, and might not work. Source Quench was one of those things
- many of us didn't think it would actually work as a means for
congestion control.
Another thing to remember is that TCP4 was the 4th or 5th new version of
TCP in just a few years, and we all expected the next version to arrive
a year or so after TCP4 was defined. So it was OK to have unproven
mechanisms in TCP/IP/ICMP, since we could get experience in the live
net to drive changes to the specs.
Our estimate of that timeline was of course off by multiple
decades...and counting.
After I go through my notes, I'm going to try to write up some more
about what happened back then. It was a wild time, and not well
captured in the more formal RFCs and IENs of that day. We were too
busy getting rough consensus and running code.
HTH,
/Jack Haverty