[ih] More topology

Jack Haverty jack at 3kitty.org
Sun Aug 29 19:29:59 PDT 2021


Thanks Barbara -- yes, the Port Expander was one of the things I called
"homegrown LANs".  I never did learn how the PE handled RFNMs, in 
particular how it interacted with its associated NCP host that it was 
"stealing" RFNMs from.
/jack

On 8/29/21 2:38 PM, Barbara Denny wrote:
> There was also SRI's port expander which increased the number of host 
> ports available on an IMP.
>
> You can find the SRI technical report (1080-140-1) on the web. The 
> title is "The Arpanet Imp Port Expander".
>
> barbara
>
> On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via 
> Internet-history <internet-history at elists.isoc.org> wrote:
>
>
> Thanks Steve.   I guess I was focused only on the long-haul hops. The
> maps didn't show where host computers were attached. At the time
> (1981) the ARPANET consisted of several clusters of nodes (DC, Boston,
> LA, SF), almost like an early form of Metropolitan Area Network (MAN),
> plus single nodes scattered around the US and a satellite circuit to
> Europe.  The "MAN" parts of the ARPANET were often richly connected, and
> the circuits might have even been in the same room or building or
> campus.   So the long-haul circuits were in some sense more important, given
> their scarcity and higher risk of problems from events such as marauding
> backhoes (we called such network outages "backhoe fade").
>
> While I still remember...here's a little Internet History.
>
> The Internet, at the time (late 70s and early 80s), was in what I used
> to call the "Fuzzy Peach" stage of its development.  In addition to
> computers directly attached to an IMP, there were various kinds of
> "local area networks", including things such as Packet Radio networks
> and a few homegrown LANs, which provided connectivity in a small
> geographical area.  Each of those was attached to an ARPANET IMP
> somewhere close by, and the ARPANET provided all of the long-haul
> communications.   The exception to that was the SATNET, which provided
> connectivity across the Atlantic, with a US node (in West Virginia
> IIRC), and a very active node in the UK.   So the ARPANET was the
> "peach" and all of the local networks and computers in the US were the
> "fuzz", with SATNET attaching extending the Internet to Europe.
>
> That topology had some implications for early Internet behavior.
>
> At the time, I was responsible for BBN's contract with ARPA in which one
> of the tasks was "make the core Internet reliable 24x7".   That
> motivated quite frequent interactions with the ARPANET NOC, especially
> since it was literally right down the hall.
>
> TCP/IP was in use at the time, but most of the long-haul traffic flows
> were through the ARPANET.  With directly-connected computers at each
> end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol
> in use as the ARPANET TIPs became TACs.
>
> However...   There's always a "however"...  The ARPANET itself already
> implemented a lot of the functionality that TCP provided. ARPANET
> already provided reliable end-end byte streams, as well as flow control;
> the IMPs would allow only 8 "messages" in transit between two endpoints,
> and would physically block the computer from sending more than that.
> So IP datagrams never got lost, or reordered, or duplicated, and never
> had to be discarded or retransmitted.   TCP/IP could do such things too,
> but in the "fuzzy peach" situation, it didn't have to do so.
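>
> To make that blocking concrete, here is a minimal sketch, in modern Python
> (nothing like the real IMP code -- the class and method names are my own
> invention), of the rule that at most 8 messages could be in transit between
> a pair of endpoints, with the sending host held off until a RFNM freed a
> slot:
>
>     import threading
>
>     MAX_IN_TRANSIT = 8   # the per-endpoint-pair limit described above
>
>     class HostImpInterface:
>         def __init__(self):
>             # One slot per message allowed "in transit"; acquiring a slot
>             # blocks the sending host when all 8 are taken.
>             self._slots = threading.Semaphore(MAX_IN_TRANSIT)
>
>         def send(self, message):
>             self._slots.acquire()          # host is physically blocked here
>             self._hand_to_subnet(message)
>
>         def rfnm_received(self):
>             self._slots.release()          # a RFNM frees one slot
>
>         def _hand_to_subnet(self, message):
>             pass                           # placeholder for the host-IMP interface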
>
> The prominent exception to the "fuzzy peach" was transatlantic traffic,
> which had to cross both the ARPANET and SATNET.   The gateway
> interconnecting those two had to discard IP datagrams when they came in
> faster than they could go out.   TCP would have to notice, retransmit,
> and reorder things at the destination.
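>
> A rough sketch of that gateway behavior, purely illustrative (the buffer
> depth and the names are assumptions of mine, not the real BBN gateway code):
>
>     from collections import deque
>
>     class Gateway:
>         """Drop-tail forwarding between a fast inbound and a slow outbound net."""
>
>         def __init__(self, max_buffered=8):   # buffer depth is an assumption
>             self.queue = deque()
>             self.max_buffered = max_buffered
>             self.dropped = 0
>
>         def datagram_in(self, datagram):
>             if len(self.queue) >= self.max_buffered:
>                 # Discarded: recovery is left entirely to TCP at the endpoints.
>                 self.dropped += 1
>             else:
>                 self.queue.append(datagram)
>
>         def datagram_out(self):
>             # Called at whatever rate the slower outbound net (SATNET) drains.
>             return self.queue.popleft() if self.queue else None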
>
> Peter Kirstein's crew at UCL were quite active in experimenting with the
> early Internet, and their TCP/IP traffic had to actually do all of the
> functions that the Fuzzy Peach so successfully hid from those directly
> attached to it.   I think the experiences in that path motivated a lot
> of the early thinking about algorithms for TCP behavior, as well as
> gateway actions.
>
> Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or
> phone messages waiting for me every morning advising that "The Internet
> Is Broken!", either from Europe directly or through ARPA.  One of the
> first troubleshooting steps, after making sure the gateway was running,
> was to see what was going on in the Fuzzy Peach which was so important
> to the operation of the Internet.   Bob Hinden, Alan Sheltzer, and Mike
> Brescia might remember more since they were usually on the front lines.
>
> Much of the experimentation at the time involved interactions between
> the UK crowd and some machine at ISI.   If the ARPANET was acting up,
> the bandwidth and latency of those TCP/IP traffic flows could gyrate
> wildly, and TCP/IP implementations didn't always respond well to such
> things, especially since they didn't typically occur when you were just
> using the Fuzzy Peach.
>
> Result - "The Internet Is Broken".   That long-haul ARPA-ISI circuit was
> an important part of the path from Europe to California.   If it was
> "down", the path became 3 or more additional hops (IMP hops, not IP),
> and became further loaded by additional traffic routing around the
> break.   TCPs would timeout, retransmit, and make the problem worse
> while their algorithms tried to adapt.
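>
> The failure mode is easy to see with a toy model: a sender with a fixed
> retransmission timeout (the timer value and structure below are
> illustrative, not any particular historical TCP) keeps re-sending whenever
> the path suddenly gets slower, adding load exactly when the network is
> already stressed:
>
>     import time
>
>     class NaiveTcpSender:
>         def __init__(self, rto_seconds=3.0):   # fixed RTO, no adaptation
>             self.rto = rto_seconds
>             self.unacked = {}                  # seq -> time of last (re)send
>
>         def send(self, seq, now=None):
>             self.unacked[seq] = time.time() if now is None else now
>
>         def on_ack(self, seq):
>             self.unacked.pop(seq, None)
>
>         def tick(self, now=None):
>             # Retransmit every segment whose timer has expired, even though
>             # the originals may only be delayed on a longer, congested path.
>             now = time.time() if now is None else now
>             for seq, sent_at in list(self.unacked.items()):
>                 if now - sent_at >= self.rto:
>                     self.send(seq, now)        # duplicate traffic piles on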
>
> So that's probably what I was doing in the NOC when I noticed the
> importance of that ARPA<->USC ARPANET circuit.
>
> /Jack Haverty
>
>
> On 8/29/21 10:09 AM, Stephen Casner wrote:
> > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at
> > ISI which is 10 miles and 2 or 3 IMPs from USC.
> >
> >         -- Steve
> >
> > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote:
> >
> >> Actually July 1981 -- see
> >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg (thanks, Noel!)
> >> The experience I recall was being in the ARPANET NOC for some reason and
> >> noticing the topology on the big map that covered one wall of the NOC.
> >> There were 2 ARPANET nodes at that time labelled ISI, but I'm not sure
> >> where the PDP-10s were attached.  Still just historically curious how the
> >> decision was made to configure that topology....but we'll probably never
> >> know.  /Jack
> >>
> >>
> >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote:
> >>>    A look at some ARPAnet maps available on the web shows that in 1982
> >>> it was four hops from ARPA to ISI, but by 1985 it was one hop.
> >>> Alex McKenzie
> >>>
> >>>      On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via
> >>> Internet-history <internet-history at elists.isoc.org> wrote:
> >>>      This is the second email from Jack mentioning a point-to-point line
> >>> between the ARPA TIP and the ISI site.  I don't believe that is an
> >>> accurate statement of the ARPAnet topology.  In January 1975 there were
> >>> 5 hops between the 2 on the shortest path.  In October 1975 there were 6.
> >>> I don't believe it was ever one or two hops, but perhaps someone can
> >>> find a network map that proves me wrong.
> >>> Alex McKenzie
> >>>
> >>>      On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via
> >>> Internet-history <internet-history at elists.isoc.org> wrote:
> >>>      Sounds right.  My experience was well after that early experimental
> >>> period.  The ARPANET was much bigger (1980ish) and the topology had
> >>> evolved over the years.  There was a direct 56K line (IIRC between
> >>> ARPA-TIP and ISI) at that time.  Lots of other circuits too, but in
> >>> normal conditions ARPA<->ISI traffic flowed directly over that long-haul
> >>> circuit.  /Jack
> >>>
> >>> On 8/28/21 1:55 PM, Vint Cerf wrote:
> >>>> Jack, the 4 node configuration had two paths between UCLA and SRI and
> >>>> a two hop path to University of Utah.
> >>>> We did a variety of tests to force alternate routing (by congesting
> >>>> the first path).
> >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to get
> >>>> this effect. Of course, we also crashed the Arpanet with these early
> >>>> experiments.
> >>>>
> >>>> v
> >>>>
> >>>>
> >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty <jack at 3kitty.org> wrote:
> >>>>
> >>>>      Thanks, Steve.  I hadn't heard the details of why ISI was
> >>>>      selected.  I can believe that economics was probably a factor but
> >>>>      the people and organizational issues could have been the dominant
> >>>>      factors.
> >>>>
> >>>>      IMHO, the "internet community" seems to often ignore non-technical
> >>>>      influences on historical events, preferring to view everything in
> >>>>      terms of RFCs, protocols, and such.  I think the other influences
> >>>>      are an important part of the story - hence my "economic lens".
> >>>>      You just described a view through a manager's lens.
> >>>>
> >>>>      /Jack
> >>>>
> >>>>      PS - I always thought that the "ARPANET demo" aspect of that
> >>>>      ARPANET timeframe was suspect, especially after I noticed that the
> >>>>      ARPANET had been configured with a leased circuit directly between
> >>>>      the nearby IMPs to ISI and ARPA.  So as a demo of "packet
> >>>>      switching", there wasn't much actual switching involved.  The 2
> >>>>      IMPs were more like multiplexors.
> >>>>
> >>>>      I never heard whether that configuration was mandated by ARPA, or
> >>>>      BBN decided to put a line in as a way to keep the customer happy,
> >>>>      or if it just happened naturally as a result of the ongoing
> >>>>      measurement of traffic flows and reconfiguration of the topology
> >>>>      to adapt as needed.  Or something else.  The interactivity of the
> >>>>      service between a terminal at ARPA and a PDP-10 at ISI was
> >>>>      noticeably better than other users (e.g., me) experienced.
> >>>>
> >>>>      On 8/28/21 11:51 AM, Steve Crocker wrote:
> >>>>>      Jack,
> >>>>>
> >>>>>      You wrote:
> >>>>>
> >>>>>          I recall many visits to ARPA on Wilson Blvd in Arlington, VA.
> >>>>>          There were terminals all over the building, pretty much all
> >>>>>          connected through the ARPANET to a PDP-10 3000 miles away at
> >>>>>          USC in Marina del Rey, CA.  The technology of Packet Switching
> >>>>>          made it possible to keep a PDP-10 busy servicing all those
> >>>>>          Users and minimize the costs of everything, including those
> >>>>>          expensive communications circuits.  This was circa 1980.
> >>>>>          Users could efficiently share expensive communications, and
> >>>>>          expensive and distant computers -- although I always thought
> >>>>>          ARPA's choice to use a computer 3000 miles away was probably
> >>>>>          more to demonstrate the viability of the ARPANET than because
> >>>>>          it was cheaper than using a computer somewhere near DC.
> >>>>>
> >>>>>
> >>>>>      The choice of USC-ISI in Marina del Rey was due to other
> >>>>>      factors.  In 1972, with ARPA/IPTO's (Larry Roberts) strong support,
> >>>>>      Keith Uncapher moved his research group out of RAND.  Uncapher
> >>>>>      explored a couple of possibilities and found a comfortable
> >>>>>      institutional home with the University of Southern California
> >>>>>      (USC) with the proviso the institute would be off campus.
> >>>>>      Uncapher was solidly supportive of both ARPA/IPTO and of the
> >>>>>      Arpanet project.  As the Arpanet grew, Roberts needed a place to
> >>>>>      have multiple PDP-10s providing service on the Arpanet.  Not just
> >>>>>      for the staff at ARPA but for many others as well.  Uncapher was
> >>>>>      cooperative and the rest followed easily.
> >>>>>
> >>>>>      The fact that it demonstrated the viability of packet-switching
> >>>>>      over that distance was perhaps a bonus, but the same would have
> >>>>>      been true almost anywhere in the continental U.S. at that time.
> >>>>>      The more important factor was the quality of the relationship.
> >>>>>      One could imagine setting up a small farm of machines at various
> >>>>>      other universities, non-profits, or selected for-profit companies
> >>>>>      or even some military bases.  For each of these, cost,
> >>>>>      contracting rules, the ambitions of the principal investigator,
> >>>>>      and staff skill sets would have been the dominant concerns.
> >>>>>
> >>>>>      Steve
> >>>>>
> >>>>
> >>>> --
> >>>> Please send any postal/overnight deliveries to:
> >>>> Vint Cerf
> >>>> 1435 Woodhurst Blvd
> >>>> McLean, VA 22102
> >>>> 703-448-0965
> >>>>
> >>>> until further notice
>
>
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history




More information about the Internet-history mailing list