[ih] Why was hop by hop flow control eventually abandoned?

Jack Haverty jack at 3kitty.org
Fri Jul 19 18:15:52 PDT 2013


I guess I was "there at the time", so here are some thoughts... anybody
else still around?

---------------------

In the late 70s/early 80s (pre-1984) there was a lot of discussion about a
wide range of internal mechanisms for use in the early
Internet.  I don't recall that ever being organized at the time into any
more formal "discipline" as described in Saltzer's paper.  However,
coauthors Dave Clark and Dave Reed were both also "there at the time", so
it's likely that they took the experiences and discussions of the various
brainstorming sessions involved in building that early Internet, and used
those to sort out some more formal or broader principles of design for
communications systems which appeared in the 1984 paper.

I think most of those discussions never got written down.  They occurred in
email exchanges, or sessions at meetings, or around a table at a restaurant
or bar.  They were probably the primary driver to the "rough consensus"
that was needed to get the Internet actually built and running.

For example, there was at one point a general feeling that the basic
service of the Internet should be the "best effort datagram" service, where
the net simply tried to deliver as many datagrams as it could, but made no
guarantees about what order, how long, or whether they would all get there
at all.  Then other services, like TCP's reliable byte stream, or the
unreliable but low-delay bitstreams needed for audio/video, could be built
on top of that basic service.   Those discussions led to the "consensus"
structure we see in TCP (version 4), UDP, and IP.
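
To make that layering concrete in today's terms, here's a minimal Python
sketch -- my illustration only, not any historical code, and the addresses
and ports are made up.  Both sockets ride the same best-effort IP service
underneath; only the TCP one adds the reliability machinery on top:

    import socket

    # Best-effort datagram: one send, no guarantee of delivery, order,
    # or timing -- the basic Internet service.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"sample 42", ("198.51.100.7", 9999))
    udp.close()

    # Reliable byte stream: TCP layers ordering, retransmission, and
    # flow control on top of that same best-effort service.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("198.51.100.7", 8888))
    tcp.sendall(b"arrives intact and in order, or the connection errors out")
    tcp.close()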

In that context, I remember discussions we had to try to figure out what
mechanisms to implement.  It wasn't done by exhaustive scientific
analysis.  E.g., one question was -- "How much datagram loss/corruption is
acceptable as part of the normal service?"    After much opining, someone
just said "How about 1%, that sounds reasonable?" and we all pretty much
agreed, since no one had an argument for a different number.   Rough
consensus.  If you were seeing less than 1% datagram loss, things were
working OK.

There was also discussion of mechanisms for hop-by-hop transmissions.  For
example, it was obviously "bad" for a datagram to wend its way through a
tortuous route only to get discarded because it was damaged in its final
error-prone hop.  So one option considered for use on individual
error-prone hops was to put in some kind of ARQ scheme (Automatic Repeat
reQuest - the recipient, on seeing a damaged datagram, would immediately
ask its
neighbor to send it again).  This kind of technique was used in the ARPANET
environment, so it was part of the "let's do it the way we know works"
philosophy.
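
If you want to see the idea in miniature, here's a toy stop-and-wait ARQ
in Python -- purely my sketch of the concept, nothing like actual IMP or
gateway code, and the transmit/ack hooks are hypothetical:

    import zlib

    def frame_with_crc(payload: bytes) -> bytes:
        # The sender appends a per-hop CRC so the neighbor can detect damage.
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def is_damaged(frame: bytes) -> bool:
        # The receiver recomputes the CRC and compares.
        payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) != crc

    def send_over_hop(payload, transmit, wait_for_ack, max_tries=5):
        # Stop-and-wait ARQ: send, wait for the neighbor's verdict, and
        # resend on damage; give up after a few tries and let the
        # end-to-end layer recover.
        frame = frame_with_crc(payload)
        for _ in range(max_tries):
            transmit(frame)
            if wait_for_ack():  # True = neighbor saw an undamaged frame
                return True
        return False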

It was very difficult to come to consensus on any specific hop-by-hop
mechanism that would be applicable everywhere.  The different nets that
gateways were communicating across had very different characteristics -
error rates, duplication, reordering of datagrams, etc.  The "carrier
pigeon net"
was even discussed, as an extreme example.  Yes, the Internet was actually
designed to be able to utilize carrier pigeons as a component
communications network if necessary.  One datagram per leg, two legs per
pigeon, with network performance measured in pps - pigeons per second.
Bps was bits per pigeon.   Kbps was too weird to imagine.   (This was most
likely at one of those late-night sessions in the hotel bar... but the
principle was a good one: anything that can carry a datagram should be
usable.)

Of course, there were extreme constraints on the gateway hardware - not
enough memory, processors too slow, etc., for anything fancy.  So the only
mechanisms that actually fit into the early hardware were the most simple
ones.  As I said before, we added mechanisms in response to problems that
actually occurred.  That was probably a basic, if unstated, design
principle.  Saltzer's paper describes an experience in an MIT network that
motivated the inclusion of end-end error control, but that was a repeat of
a similar, earlier experience in the ARPANET, where the checksum mechanisms
on those error-prone phone lines were proved inadequate: a memory failure
in one IMP corrupted a routing update and caused widespread havoc one day.
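
The moral, in code: a link-level check only covers the wire, not the
memory of the node in between, so the ends have to check for themselves.
Here's a sketch of the 16-bit one's-complement checksum in the style TCP
and IP use (see RFC 1071) -- my Python, for illustration only:

    def internet_checksum(data: bytes) -> int:
        # 16-bit one's-complement sum over 16-bit words (RFC 1071 style).
        if len(data) % 2:
            data += b"\x00"  # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry
        return ~total & 0xFFFF

The sender computes this over the data and the receiver recomputes and
compares; if a node's memory flips a bit in between, every link-level CRC
along the way can still pass, and only the end-to-end check catches it.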

Later, the Internet architecture was pretty fundamentally restructured when
we created EGP.   (google "haverty kahn subway strap") That allowed the
internet to be naturally composed not just of interconnected gateways, but
of interconnected systems of independent gateways.  This permitted
different people/groups to pursue their own ideas about what kind of
internal mechanisms to use (the so-called "IGP", or Interior Gateway
Protocol -- which wasn't restricted to routing, although many people think
of it that way).  So if someone thought it was useful to put in an ARQ
mechanism to achieve the desired reliability on hops, they could do so,
inside their own
autonomous system.  This made experimentation with different ideas a lot
easier, and the better computers that became available made it feasible to
play with ideas that had earlier been limited to thought experiments.
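
A toy model of that post-EGP structure, with all names hypothetical: only
reachability crosses an AS boundary, while each AS keeps its internal
mechanisms to itself.

    class AutonomousSystem:
        def __init__(self, number: int, internal_mechanism: str):
            self.number = number
            # Each AS picks its own IGP and hop-level tricks, e.g.
            # "distance-vector", "link-state", or "ARQ on lossy hops".
            self.internal_mechanism = internal_mechanism
            self.reachable_nets: set[str] = set()

        def egp_advertise(self) -> dict:
            # Only reachability crosses the AS boundary -- never the
            # internal mechanism used to achieve it.
            return {"as": self.number, "nets": set(self.reachable_nets)}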

So, since Dave Clark was one of the first TCP implementers (he did Multics
at the same time I was doing Unix TCP), and others in the MIT crew (e.g.,
Noel Chiappa) were also involved, I suspect all that experience got
digested and fed into the 1984 "Saltzer et al form".

These kinds of attempts to distill general principles from the crazy
reality of early networks were an underlying theme.  For example, see RFC
722 from 1976.

The "end-end design principle" was important, and Saltzer/Reed/Clark
captured it in words.  But it was certainly discussed earlier.  I remember
that one of the notions that I promoted was that you had to carefully
consider exactly where the "ends" really were.  Most pieces of the TCP/IP
datagram involve the host computers as the "ends", and that's the obvious
way to think about it.   But in reality some parts of datagram flows have
one or more "ends" which are not at the hosts. For example, many IP header
fields are intended for use by the gateways - so the gateway is actually
one of the "ends" for that flow of information and that fact should be
considered in designs.   Vint told me at the time that I really should
write that up -- well, here it is, only about 30 years later... oops.
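
A toy illustration of that point (the field names follow the IPv4 header;
everything else is made up): TTL and the header checksum carry
conversations whose "ends" are the gateways themselves, not the hosts.

    def gateway_forward(header: dict) -> dict:
        # Every gateway reads TTL, rewrites it, and may terminate the
        # datagram because of it -- the gateway is an "end" for this field.
        if header["ttl"] <= 1:
            raise RuntimeError("time exceeded: this flow ended at a gateway")
        hdr = dict(header)
        hdr["ttl"] -= 1
        # The IPv4 header checksum likewise has the gateways as its "ends":
        # it must be recomputed at every hop (recomputation elided here).
        return hdr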

It is truly amazing that this stuff still works....!

Hope this helps,
/Jack Haverty


On Wed, Jul 17, 2013 at 9:44 PM, Brian E Carpenter <
brian.e.carpenter at gmail.com> wrote:

> On 17/07/2013 05:35, John Kristoff wrote:
> > On Tue, Jul 16, 2013 at 05:55:00PM +0200, Detlef Bosau wrote:
> >
> >> The more I think about it, the more I fear, that although the decision
> >> to abandon hop by hop flow control
>
> Could somebody who was there at the time comment on whether the e2e
> argument (in its 1984 Saltzer et al form) was already part of the
> discussion then, or if it was a post hoc argument?
>
>     Brian
>