[ih] Why was hop-by-hop flow control eventually abandoned?
Jack Haverty
jack at 3kitty.org
Fri Jul 19 22:47:26 PDT 2013
Hmmm. Maybe this was more important than I realized back then....
The basic idea was that for any chunk of information - bit, byte, field,
header, etc. - you need to identify the source of that information and
also *all* of the consumers of that information. So a piece of data such
as a field in an IP header is characterized both by its original source
- wherever the contents were created - and by all of the destinations
that make use of that information in any way. Some IP header field might
get used not only by the host at the other end, but also by each of the
gateways along the way -- by anyone who looks at and uses that
information along the path.
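To make that concrete, here's a rough sketch - present-day Python, with
field groupings that are just my own reading of the standard IPv4 header,
not anything from the original specs - of who produces and who consumes
each header field along the path:

    # Sketch: who produces and who consumes each IPv4 header field.
    # The field list follows the standard IPv4 header; the "consumers"
    # annotations are my own illustration, not an authoritative taxonomy.
    IPV4_FIELD_CONSUMERS = {
        # field               : (producer,       consumers along the path)
        "version"             : ("sending host", ["every gateway", "receiving host"]),
        "total_length"        : ("sending host", ["every gateway", "receiving host"]),
        "identification"      : ("sending host", ["fragmenting gateways", "receiving host"]),
        "flags_and_fragment"  : ("sending host", ["fragmenting gateways", "receiving host"]),
        "ttl"                 : ("sending host", ["every gateway (decrements it)"]),
        "protocol"            : ("sending host", ["receiving host", "middleboxes that peek"]),
        "header_checksum"     : ("each hop",     ["next hop (verifies, then recomputes)"]),
        "source_address"      : ("sending host", ["receiving host", "gateways sending ICMP back"]),
        "destination_address" : ("sending host", ["every gateway (forwarding decision)", "receiving host"]),
    }

    def flows_for(field: str) -> list[tuple[str, str]]:
        """List the distinct (source, consumer) information flows for one field."""
        producer, consumers = IPV4_FIELD_CONSUMERS[field]
        return [(producer, consumer) for consumer in consumers]

    for field in IPV4_FIELD_CONSUMERS:
        for src, dst in flows_for(field):
            print(f"{field:21s}: {src} -> {dst}")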
So then *each* such information flow is a distinct end-end information
flow. The communications mechanism between any pair of ends must have
the characteristics needed for that particular information flow -
reliability, security, delay, privacy, integrity, etc. You need to put
in whatever mechanisms that flow requires, so that the service it
implements supports the needs of the other information flows that
depend on it.
Something like a typical TCP connection thus actually involves a bunch
of separate information flows among all of the players that take part
in that transfer. So the aggregate characteristics
of a TCP E2E information flow (reliability, delay, variance, integrity,
etc.) depend on all of the characteristics of all the associated
supporting information flows as well as what we think of as "the"
end-to-end mechanisms themselves between hosts.
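One back-of-the-envelope way to see that dependence - a sketch with
invented numbers, and an independence assumption the real system
certainly doesn't guarantee - is that the end user's chance of success
is bounded by the product of the chances that each supporting flow works:

    # Sketch: the user-visible outcome depends on every supporting flow.
    # Flow names echo the examples further down in this note (DNS,
    # gateway-gateway routing, source quench); the probabilities are
    # made up purely for illustration.
    SUPPORTING_FLOWS = {
        "host <-> DNS servers":                 0.999,
        "gateway <-> gateway routing exchange": 0.995,
        "ICMP (e.g. source quench) delivery":   0.990,
        "host <-> host TCP segments":           0.999,
    }

    def aggregate_success(flows: dict[str, float]) -> float:
        """Upper bound on end-user success if every supporting flow must work."""
        p = 1.0
        for probability in flows.values():
            p *= probability
        return p

    print(f"best case for the end user: {aggregate_success(SUPPORTING_FLOWS):.3f}")
    # The host-to-host TCP reliability is only one factor; the aggregate
    # is dragged down by whichever supporting flow is weakest.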
I can see where firewalls and load balancers would be influenced by
this. They need information to operate properly, so they are endpoints,
as well as sources, of information flows. Headers are full of
information that is intended for use by someone "along the way".
Back in the 80s, I talked about this as "end-middle" communications -
the notion that in addition to an end-end mechanism like the E2E TCP
interactions, a bunch of end-middle communications flows were also
involved in any TCP flow. So the whole system had to be designed to
make everything work properly.
This actually surfaced at one point when we realized that, no matter how
robust the TCP/IP datagram transport service was, if the domain name
system and servers weren't designed and operated properly, end-users
wouldn't be able to open their telnet/ftp/mail connections -- since they
relied on DNS to get the proper IP addresses. Similarly, if the
gateway-gateway routing exchanges didn't work properly, nothing the TCPs
could do would be able to compensate. Source quenches, whatever they
accomplished, had to get to their destinations to have any effect. Etc.
Etc.
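Put in today's terms, even the ordinary sockets sequence shows this
layering of flows: the resolver exchange is its own end-to-end flow that
has to succeed before the host-to-host TCP flow can even begin. A small
sketch - modern Python sockets API, with "example.org" standing in for
any real name:

    # Sketch: what the user thinks of as "one connection" involves at least
    # two distinct end-to-end flows -- a resolver exchange with the name
    # servers, then the host-to-host TCP flow. If the first fails, nothing
    # the TCP layer does can compensate.
    import socket

    def open_connection(name: str, port: int) -> socket.socket:
        # Flow 1: host <-> name servers. A failure here surfaces as
        # socket.gaierror before TCP is involved at all.
        family, socktype, proto, _canon, sockaddr = socket.getaddrinfo(
            name, port, type=socket.SOCK_STREAM)[0]

        # Flow 2: host <-> host TCP segments, carried across gateways whose
        # own gateway-to-gateway routing flows must also be working.
        s = socket.socket(family, socktype, proto)
        s.connect(sockaddr)
        return s

    try:
        conn = open_connection("example.org", 80)
        conn.close()
    except socket.gaierror as e:
        print("name resolution flow failed:", e)
    except OSError as e:
        print("transport flow failed:", e)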
Better than "end-middle" is simply to realize that the "middles" are
themselves ends of other information flows. And all of the discipline of end-end
mechanisms has to be applied there too, to each such E2E flow,
regardless of where the End happens to be.
I really should have written this down......it seems like it's
important. Vint was right.
/Jack Haverty
On 07/19/2013 08:47 PM, Brian E Carpenter wrote:
>> For example, many IP header
>> fields are intended for use by the gateways - so the gateway is actually
>> one of the "ends" for that flow of information and that fact should be
>> considered in designs. Vint told me at the time that I really should
>> write that up -- well, here it is, only about 30 years later......oops.
> Fascinating. fyi this relates to a current issue for the
> deployment of IPv6 - firewalls and load balancers need to
> inspect header fields that were designed specifically for
> end-host to end-host use.
>
> "Those who cannot remember the past are condemned to repeat it."
>
> Regards
> Brian Carpenter