[ih] TTL [was Exterior Gateway Protocol]

Jack Haverty jack at 3kitty.org
Sat Sep 5 23:40:07 PDT 2020


Lukasz,

I think that the earliest implementations of TTL called it "Time", but
I'm not aware that anyone actually used time per se in gateways, at
least in the early days (1977-1982 or so). 

TCP implementations didn't do anything with TTL other than set it on
outgoing datagrams, and at least in my implementation (TCP for Unix), it
was just set to some arbitrary value.  Until we had some data from
experimentation, it was hard to evaluate ideas about what routers, hosts,
et al. should actually do.  The early TCPs did use time in handling
retransmission timers, and there was work a bit later to incorporate
time more powerfully into TCP behavior, e.g., Van Jacobson's work.
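
For illustration only (my 1979 Unix TCP predates the sockets API, so
this is not that code), a present-day host that just stamps an
arbitrary TTL on its outgoing datagrams does essentially the same
thing in C:

    /* Hedged sketch: set an arbitrary TTL on all datagrams sent on a
     * socket, using the modern sockets API rather than the 1979 code. */
    #include <sys/socket.h>
    #include <netinet/in.h>

    int set_arbitrary_ttl(int sock)
    {
        int ttl = 60;   /* some arbitrary value, as the early TCPs used */
        return setsockopt(sock, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
    }

The value itself carried no particular meaning; it just had to be large
enough that datagrams weren't discarded before reaching their
destination.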

The early gateways, IIRC, used the terminology "time", but in practice
used just hop counts, since time measurements were difficult to
implement.  The exception to that may have been Dave Mills' Fuzzballs,
since Dave was the implementor most interested in time and in making
precise measurements of network behavior.  I *think* Dave may have used
time values and delay-based routing amongst his "fuzzies".
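
As a point of reference, the rule the specs eventually wrote down
(decrement TTL by the time the datagram was held, but by at least one)
collapses into a hop count whenever the per-hop time is under a second,
which it almost always was.  A minimal sketch of that forwarding check,
with invented names rather than any particular gateway's code:

    /* Hedged sketch of the specified TTL rule (the RFC 791 / RFC 1009
     * wording): decrement by the whole seconds the datagram was held,
     * but by at least one, and discard it when the field runs out. */
    #include <stdint.h>

    /* Returns 1 if the datagram may be forwarded, 0 if it must be dropped. */
    int ttl_forward_ok(uint8_t *ttl, unsigned seconds_held)
    {
        unsigned dec = seconds_held ? seconds_held : 1;  /* at least one */
        if (*ttl <= dec)
            return 0;   /* lifetime exhausted: drop (and send Time Exceeded) */
        *ttl -= dec;
        return 1;
    }

With per-hop times far below a second, dec is always 1 and the field is
just a hop counter, which is what the early gateways effectively
implemented.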

The BBN doc you're seeking might have been one of many that discussed
the ARPANET internal mechanisms, e.g., ones with titles like "Routing
Algorithm Improvements".  Those internal mechanisms did use time.  It
was fairly simple in the IMPs, since the delay introduced by the
synchronous communications lines could be easily predicted, and the
other major component of delay was the time spent in queues, which
could be measured fairly easily.
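
Roughly speaking (this is a sketch with invented names, not the IMP
code), the per-link delay came down to the predictable serialization
time on the synchronous line plus the measured time spent waiting in
the output queue:

    /* Hedged sketch, not the actual IMP code: per-link delay as the sum
     * of the predictable transmission time on a synchronous line and the
     * measured queueing time.  The line rate is a parameter; the classic
     * ARPANET trunks ran at 50 kbit/s. */
    double link_delay_seconds(unsigned packet_bits,
                              double line_bits_per_sec,
                              double measured_queue_seconds)
    {
        double transmission = (double)packet_bits / line_bits_per_sec;
        return transmission + measured_queue_seconds;
    }

A 1000-bit packet on a 50 kbit/s trunk contributes 20 ms of
serialization delay, so the interesting, variable part was the
queueing term.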

I even found one BBN ARPANET Project QTR from circa 1975 that discussed
the merits of the new-fangled TCP proposal that some professor had
published -- and seemed to conclude it couldn't possibly work.

My involvement in implementations of TCPs and gateways lasted through
about mid-1983, so I don't know much of the detail of subsequent
implementations.  For the various BBN gateway/router equipment, Bob
Hinden would probably be a good source.  The other major early player
was MIT and spinoffs (Proteon), which perhaps Noel Chiappa will
remember.   There's also at least one paper on the Fuzzballs which may
have some details.

One thing I'd advise being careful of is the various "specifications" in
RFCs.  Much of the wording in those was intentionally non-prescriptive
(use of "should" or "may" instead of "must"), to provide as much
latitude as possible for experimentation with new ideas, especially
within an AS.   The Internet was an Experiment.

Also, there was no consistent enforcement mechanism to assure that
implementations actually even conformed to the "must" elements.   So
Reality could be very different from Specification.

I don't know of any gateway implementations that have survived.   There
*is* ARPANET IMP software which was recently restored and a small
ARPANET was run using simulated IMP hardware.   I still have a ~1979
listing of the TCP I wrote for Unix, but haven't scanned it into digital
form yet.

Jack

On 9/5/20 7:38 PM, Łukasz Bromirski wrote:
> Jack,
>
> I was reading a lot of old BBN PDFs thanks to all good souls on
> this list that post nice URLs from time to time.
>
> I remember reading in at least one of them that apparently the first
> TCP/IP implementations were indeed using TTL as literally “time”,
> not a hop count.  I believe that somewhere between the PDP docs and
> the ARPANET docs I’ve read something to the effect of “and from this
> time we changed from measuring time to simply counting routing hops”.
> Of course, right now my google-fu is failing me.
>
> Quoting RFC 1009, which was already brought up, there’s quite a
> direct “definition” of the field:
>
> "4.8.  Time-To-Live
>
>  The Time-to-Live (TTL) field of the IP header is defined to be a
>  timer limiting the lifetime of a datagram in the Internet.  It is
>  an 8-bit field and the units are seconds.  This would imply that
>  for a maximum TTL of 255 a datagram would time-out after about 4
>  and a quarter minutes.  Another aspect of the definition requires
>  each gateway (or other module) that handles a datagram to
>  decrement the TTL by at least one, even if the elapsed time was
>  much less than a second.  Since this is very often the case, the
>  TTL effectively becomes a hop count limit on how far a datagram
>  can propagate through the Internet."
>
> Were there any implementations that survived somewhere and actually
> did exactly that - counted actual time/processing delay, not hops?
> And if it took 2s to process a packet, did they really decrement TTL
> by two?
>
> Thanks for any pointers,



