[ih] inter-network communication history

the keyboard of geoff goodfellow geoff at iconia.com
Thu Nov 7 12:05:34 PST 2019


jack, that was Really Excellent... say, in The Interest in further
documenting Internet History, could you please elucidate for us on *The
Internet "Control Panel"* and its functionality/workings (as excerpted from
your website -- http://3kitty.org/):

 ... *(At one point back around 1980, the "control panel" for The Internet
was on his desk!)*...


geoff

On Thu, Nov 7, 2019 at 9:23 AM Jack Haverty via Internet-history <
internet-history at elists.isoc.org> wrote:

>
>
>
> ---------- Forwarded message ----------
> From: Jack Haverty <jack at 3kitty.org>
> To: dcrocker at bbiw.net
> Cc: internet-history at elists.isoc.org
> Bcc:
> Date: Thu, 7 Nov 2019 11:22:39 -0800
> Subject: Re: [ih] inter-network communication history
> Dan Lynch's recollection of the sacred "end-to-end" nature of  TCP is
> right on target.  (Hi Dan!)   The Internet was architected to place TCP
> at the ends of any interaction, as close to the "user" (human or
> program) as possible.
>
> The IP transport service along the paths between the ends was always to
> be under suspicion, and it might drop, delay, replicate, misdeliver,
> mangle, or even inject IP datagrams that might look like they came from
> the endpoint source.   But TCP, and other technical and procedural
> mechanisms at the endpoints, would detect such behavior and compensate
> for it.
>
> The scenarios driving such thinking were simple in military arenas - you
> had to assume that some of the stuff between the endpoints might have
> been compromised and under enemy control, possibly without your
> knowledge.  In that scenario, tanks, troops, special ops and such are
> involved.  In today's Internet it's more likely to be bugs, hackers,
> viruses, and trojan horses.  In any event, the TCP and related stuff at
> the endpoints would counteract such problems in the intermediate IP
> environment.
>
> That was the architecture - TCP to provide end-to-end "sacred"
> mechanisms, IP to provide untrustworthy along-the-path best efforts.
>
> I think of myself as more of an architectural pragmatist than purist.
> For a while in the 80s, I was responsible for BBN's work with DCA in
> "DDN System Engineering", i.e., taking this Internet stuff and getting
> it to work in the operational world.  It didn't quite work "out of the
> box"...
>
> That involved dealing with a lot of "administrative boundaries", and
> adding some architectural components to make them possible.  Two
> examples come to mind.
>
> First, in early 1982, after Bob Kahn convinced me of the importance of
> such boundaries, Eric Rosen and I brainstormed and created the notion of
> "autonomous systems" and the EGP protocol.  If you look at RFC827, it
> says "It is proposed to establish a standard for Gateway to Gateway
> procedures that allow the Gateways to be mutually suspicious."   That
> was the key addition to the Architecture that would make it possible to
> isolate "bad" pieces of the IP infrastructure and keep the rest of the
> IP transport system functioning.  EGP was just a first step, to enable
> further experimentation and development (which I don't know ever
> happened).  EGP didn't say how to be suspicious; it just established a
> boundary so you could be suspicious if you figured out how to do so.
>
> Second, around the same time, we defined a "DDN Standard Node".  This
> was simply two gateways, interconnected by a wire.  It built on the
> previous idea that a wire was just a very simple network which had only
> two "hosts", "this end" and "that end".
>
> In the DDN, such a node would go into every site.  Instead of a single
> gateway at a site, there would be two connected in series.  One gateway
> would connect to that site's internal network of LANs and such.  The
> other would connect to another site by some long-haul communications
> medium, e.g., a PRNet, SATNET or ARPANET clone, etc.  The "inside"
> gateway would be "owned" by the base or ship commander and his/her IT
> staff.  The "outside" gateway would be owned by DCA and the DDN staff.
>
> in addition to these two, there were other mechanisms for operational
> needs, e.g., TACACS to provide a mechanism to identify, and control, who
> was using the Internet and what they were doing (connecting to hosts).
>
> Such an architecture was trying to establish the needed administrative
> boundaries.  E.g., he "DDN Standard Node" provided a mechanism to create
> such a boundary wherever appropriate, at the IP level.  Different pieces
> of the government want to control their own stuff....
>
> Circa 1984, I remember giving lots of presentations where one theme was
> that we had spent the first 10 years of the Internet (taking the 1974
> TCP paper as the start) making it possible for every computer to talk
> with every other computer.  We would spend the next 10 years making it
> not possible to do such things, so that only communications that were
> permitted would be possible.
>
> Sadly, I'm not sure that ever happened.  The commercial world started
> adopting TCP big time.   The government decided to focus on using COTS -
> Commercial Off-The-Shelf hardware and software.  The Research world
> focused on things like faster and bigger networks.   At BBN, the focus
> shifted to X.25, SNA, and such stuff that promised a big marketplace.
> TCP had gone through 5 releases from TCP2 through TCP4 in just a few
> years, so remaining items on the To-Do list, like address space, were
> expected to be addressed shortly.
>
> I'm not sure if anyone ever conveyed this architecture to the IETF or
> all the vendors that were popping up with products to build
> Internet(s).  I think changes like NAT came about to solve pragmatic
> problems.  But that of course broke the "end-to-end" architecture, which
> would view NAT actions as those of an intruder or equipment failure.
> So TCP became no longer end-to-end.
>
> The Internet is typically viewed as a way to interconnect networks.  But
> I think it's evolved operationally to become the way to interconnect
> across administrative boundaries, where Autonomous Systems have become
> associated with different ISPs, other mechanisms are used by vendors to
> create their own walled gardens of services (e.g., "clouds" or
> "messaging"), and NAT is used at the edges to connect to users'
> internets.  The end-to-end nature is gone.
>
> But that's just based on my observations from the outside.  I don't have
> a clue as to what today's actual Internet Architecture is, other than a
> collection of RFCs and product manuals that may or may not reflect
> reality, or if there is anyone actually able to manage the
> architecture.  From my user's perspective, it's a Wild West out there.....
>
> And the definition of The Internet is still elusive.  I agree that the
> users' definition is the best working one -- The Internet is the thing
> I'm connected to to do what I do when I get "on the Net."
>
> Fascinating to watch this over 50 years...who would have thought it
> would last this long?
>
> /Jack Haverty
>
>
> On 11/7/19 7:29 AM, Dave Crocker wrote:
> > On 11/6/2019 4:08 PM, Jack Haverty wrote:
> >> The flaw in my definition of computers talking to computers comes from
> >> the tweaks added to the technology well after TCP/IP itself -- things
> >> like firewalls, port forwarding, NAT, et al.  When I worked at Oracle,
> >> we ran our own internet, which had thousands of computers attached that
> >> could all talk to each other.  But only one of them could talk out to
> >> the rest of the world.
> >
> >
> > Here I'll disagree.  Nothing about those additional components gets in
> > the way of your definition.  (That's written as an small, implicit pun.)
> >
> > In spite of the changes those components effect, the computers at the
> > end points still interoperate, which is what your language specifies.
> >
> > As for the Oracle example, I'll suggest that it merely demonstrates
> > that 'the' Internet includes other internets, and that while true, I
> > don't offer it as much of an insight.
> >
> > As for the strong reactions Internet architecture purists have about
> > these additional components, mostly it seems to stem from a failure to
> > appreciate the operational importance of administrative boundaries.
> > For some reason, we think it fine to have those when doing global
> > routing, but not for other aspects of transit data processing, in
> > spite of the continuing and pervasive demonstration of their need.
> >
> > I'm never any good at attributing quotations or getting their wording
> > right, but there was long ago an observation that a law, which is
> > violated by a large percentage of the population, is not a very good
> > law.  The same logic applies to architectural purity criticisms of
> > NATs, etc.
> >
> > d/
> >
>
>
>
> ---------- Forwarded message ----------
> From: Jack Haverty via Internet-history <internet-history at elists.isoc.org>
> To: dcrocker at bbiw.net
> Cc: internet-history at elists.isoc.org
> Bcc:
> Date: Thu, 7 Nov 2019 11:22:39 -0800
> Subject: Re: [ih] inter-network communication history
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>


-- 
Geoff.Goodfellow at iconia.com
living as The Truth is True
http://geoff.livejournal.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://elists.isoc.org/pipermail/internet-history/attachments/20191107/1d093d74/attachment.htm>


More information about the Internet-history mailing list