[ih] More topology

vinton cerf vgcerf at gmail.com
Tue Aug 31 12:36:50 PDT 2021


Actually, transmissions between two hosts on the same IMP were normal and
were called "incestuous" traffic - we found that the bulk of UCLA traffic was
between the 360/91 and the Sigma-7 on the same campus!

v


On Tue, Aug 31, 2021 at 3:24 PM Barbara Denny via Internet-history <
internet-history at elists.isoc.org> wrote:

>  Hi Jack,
> Based on what you said, during RP testing I think I remember seeing a case
> where a host on one IMP couldn't even send packets to another host on a
> different port of the same IMP.  I just want to double-check that this was
> possible when you say any destination.
> barbara
>     On Monday, August 30, 2021, 11:01:44 AM PDT, Jack Haverty <
> jack at 3kitty.org> wrote:
>
>   Yes, but it was more complicated than that...a little more history:
>
>  ARPANET used RFNMs (Request For Next Message) as a means of flow
> control.  Every message (packet/datagram/whatever) sent by a host would
> eventually cause an RFNM to be returned to the host.   IIRC, hosts were
> allowed to have up to 8 messages outstanding to any particular destination,
> so there could be up to 8 pending RFNMs on their way back to the host for
> traffic to that destination.   If the host tried to send a 9th message to a
> particular destination, the IMP would block all transmissions from the host
> until those RFNMs arrived, by shutting off the hardware interface.   So, if
> a host exceeded that limit of "8 in flight" to any destination, the IMP
> would block it, at least temporarily, from sending anything to any
> destination.   That would probably be A Bad Thing.
>
>  Hosts could use a simple algorithm: send one message, and hold the next
> message until an RFNM came back.  But to increase throughput, it was
> advisable to implement some sort of "RFNM Counting", where the host would
> keep track of how many messages were "in flight" to each destination and
> avoid sending another message there if it would exceed the 8-in-flight
> constraint, thereby avoiding having the IMP shut off all of its traffic to
> all destinations.    The TCP/IP I implemented for Unix did that kind of
> RFNM Counting on the ARPANET interface, but I'm not sure how other
> implementations handled the RFNM issues.
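>
>  In modern terms, that RFNM Counting amounts to roughly the following (a
> hypothetical sketch in Python - the class, the names, and the transmit
> callback are invented for illustration; this is not the actual Unix code):
>
>     from collections import defaultdict, deque
>
>     MAX_IN_FLIGHT = 8   # the IMP's per-destination limit on outstanding messages
>
>     class RfnmCounter:
>         def __init__(self):
>             self.in_flight = defaultdict(int)   # destination -> messages awaiting RFNMs
>             self.held = defaultdict(deque)      # destination -> messages held back
>
>         def send(self, dest, msg, transmit):
>             # Transmit only while under the window; otherwise hold the
>             # message locally, so a 9th message never reaches the IMP and
>             # the IMP never shuts off the whole hardware interface.
>             if self.in_flight[dest] < MAX_IN_FLIGHT:
>                 self.in_flight[dest] += 1
>                 transmit(dest, msg)
>             else:
>                 self.held[dest].append(msg)
>
>         def rfnm_received(self, dest, transmit):
>             # Each RFNM frees one slot; drain one held message if any.
>             self.in_flight[dest] -= 1
>             if self.held[dest]:
>                 self.send(dest, self.held[dest].popleft(), transmit)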
>
>  Any "box" (such as a Port Expander) that was "spliced into" the
> connection between a host and an IMP had to perform two related functions.
>   It had to act as a host itself in interacting with the IMP.   It also had
> to "look like an IMP" to the host(s) that were attached to it.   It had to
> essentially implement "timesharing" of the IMP's interface.
>
>  The "1822 specifications" defined the interface between a Host and an
> IMP.    From it, engineers could build interfaces for their hosts to
> connect them to the ARPANET.  However (always a however...) the 1822 spec
> appeared to be symmetrical.  But it wasn't.   Interfaces that met the 1822
> specs could successfully interact with an IMP.   Also, if you plugged two
> such 1822 interfaces back-to-back (as was done in connecting the 4 host to
> a Port Expander), it would often work apparently fine.   The "Host to IMP"
> specification wasn't quite the same as the (internal-to-BBN) "IMP To Host"
> specification;  it was easy for people to treat it as if it was.
>
>  But in that early Internet, there were lots of "outages" to be
> investigated.  I remember doing a "deep dive" into one such configuration
> where equipment was "spliced into" a Host/IMP 1822 cable with unreliable
> results.   It turned out to be a hardware issue, with the root cause being
> the invalid assumption that any 1822-compliant interface on a host could
> also successfully emulate the 1822 interface on an IMP.
>
>  This was a sufficiently common problem that I wrote IEN 139, "Hosts As
> IMPs", to explain the situation (see
> https://www.rfc-editor.org/ien/scanned/ien139.pdf ) and to warn anyone
> trying to do such things.  But that IEN only addressed the low-level issues
> of hardware, signals, voltages, and noise, and warned that doing such
> things might require more effort to actually behave as an IMP.
>
>  RFNMs, and RFNM counting, weren't specified in 1822, but to "look like an
> IMP", a box such as a Port Expander faced design choices in providing
> functionality such as RFNMs.  I never knew how it did that, or how
> successfully it "looked like an IMP" to all its attached hosts.   E.g., if
> all 4 hosts, thinking they were connected to their own dedicated IMP port,
> did their own RFNM Counting, how did the PE make that all work reliably?
> Maybe the situation just never came up often enough in practice to motivate
> troubleshooting.
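>
>  One conceivable approach (purely a guess at the bookkeeping - the PE's
> actual design is unknown to me, and every name below is invented for
> illustration) is to share the IMP's single 8-message window per destination
> among the attached hosts, remembering which host sent each message so the
> returning RFNM can be relayed to the right one, and quietly queueing any
> overflow inside the PE:
>
>     from collections import defaultdict, deque
>
>     IMP_WINDOW = 8   # outstanding messages the IMP allows per destination
>
>     class PortExpander:
>         def __init__(self):
>             self.sources = defaultdict(deque)   # dest -> FIFO of hosts awaiting RFNMs
>             self.held = defaultdict(deque)      # dest -> (host, msg) overflow queue
>
>         def from_host(self, host, dest, msg, to_imp):
>             # All attached hosts draw on one shared window per destination.
>             if len(self.sources[dest]) < IMP_WINDOW:
>                 self.sources[dest].append(host)
>                 to_imp(dest, msg)
>             else:
>                 self.held[dest].append((host, msg))
>
>         def rfnm_from_imp(self, dest, rfnm_to_host, to_imp):
>             # Relay the RFNM to whichever host sent the oldest message to
>             # this destination (keeping that host's own RFNM count honest),
>             # then drain one held message if any are waiting.
>             host = self.sources[dest].popleft()
>             rfnm_to_host(host, dest)
>             if self.held[dest]:
>                 h, m = self.held[dest].popleft()
>                 self.from_host(h, dest, m, to_imp)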
>
>  Not an issue now of course, but historically I wonder how much of the
> early reliability issues in the Internet in the Fuzzy Peach era might have
> been caused by such situations.
>
>  /Jack
>
>  PS - the same kind of thought has occurred to me with respect to NAT,
> which seems to perform a similar "look like an Internet" function.
>
>
>
>
>  On 8/30/21 3:54 AM, Vint Cerf wrote:
>
>
> Two TCP connections could multiplex on a given IMP-IMP link - one RFNM per
> IP packet, regardless of the TCP-layer "connection".
>
> v
>
>   On Sun, Aug 29, 2021 at 10:30 PM Jack Haverty via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
> Thanks Barbara -- yes, the Port Expander was one of the things I called
>  "homegrown LANs".  I never did learn how the PE handled RFNMs, in
>  particular how it interacted with its associated NCP host that it was
>  "stealing" RFNMs from.
>  /jack
>
>  On 8/29/21 2:38 PM, Barbara Denny wrote:
>  > There was also SRI's port expander which increased the number of host
>  > ports available on an IMP.
>  >
>  > You can find the SRI technical report (1080-140-1) on the web. The
>  > title is "The Arpanet Imp Port Expander".
>  >
>  > barbara
>  >
>  > On Sunday, August 29, 2021, 12:54:39 PM PDT, Jack Haverty via
>  > Internet-history <internet-history at elists.isoc.org> wrote:
>  >
>  >
>  > Thanks Steve.   I guess I was focused only on the long-haul hops. The
>  > maps didn't show where host computers were attached. At the time
>  > (1981) the ARPANET consisted of several clusters of nodes (DC, Boston,
>  > LA, SF), almost like an early form of Metropolitan Area Network (MAN),
>  > plus single nodes scattered around the US and a satellite circuit to
>  > Europe.  The "MAN" parts of the ARPANET were often richly connected, and
>  > the circuits might have even been in the same room or building or
>  > campus.   So the long-haul circuits were in some sense more important in
>  > their scarcity and higher risk of problems from events such as marauding
>  > backhoes (we called such network outages "backhoe fade").
>  >
>  > While I still remember...here's a little Internet History.
>  >
>  > The Internet, at the time in the late 70s and early 80s, was in what I used
>  > to call the "Fuzzy Peach" stage of its development.  In addition to
>  > computers directly attached to an IMP, there were various kinds of
>  > "local area networks", including things such as Packet Radio networks
>  > and a few homegrown LANs, which provided connectivity in a small
>  > geographical area.  Each of those was attached to an ARPANET IMP
>  > somewhere close by, and the ARPANET provided all of the long-haul
>  > communications.   The exception to that was the SATNET, which provided
>  > connectivity across the Atlantic, with a US node (in West Virginia
>  > IIRC), and a very active node in the UK.   So the ARPANET was the
>  > "peach" and all of the local networks and computers in the US were the
>  > "fuzz", with SATNET attaching extending the Internet to Europe.
>  >
>  > That topology had some implications for early Internet behavior.
>  >
>  > At the time, I was responsible for BBN's contract with ARPA in which one
>  > of the tasks was "make the core Internet reliable 24x7".   That
>  > motivated quite frequent interactions with the ARPANET NOC, especially
>  > since it was literally right down the hall.
>  >
>  > TCP/IP was in use at the time, but most of the long-haul traffic flows
>  > were through the ARPANET.  With directly-connected computers at each
>  > end, such as the ARPA-TIP and a PDP-10 at ISI, TCP became the protocol
>  > in use as the ARPANET TIPs became TACs.
>  >
>  > However...   There's always a "however"...  The ARPANET itself already
>  > implemented a lot of the functionality that TCP provided. ARPANET
>  > already provided reliable end-end byte streams, as well as flow control;
>  > the IMPs would allow only 8 "messages" in transit between two endpoints,
>  > and would physically block the computer from sending more than that.
>  > So IP datagrams never got lost, or reordered, or duplicated, and never
>  > had to be discarded or retransmitted.   TCP/IP could do such things too,
>  > but in the "fuzzy peach" situation, it didn't have to do so.
>  >
>  > The prominent exception to the "fuzzy peach" was transatlantic traffic,
>  > which had to cross both the ARPANET and SATNET.   The gateway
>  > interconnecting those two had to discard IP datagrams when they came in
>  > faster than they could go out.   TCP would have to notice, retransmit,
>  > and reorder things at the destination.
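>  >
>  > In effect (a hypothetical sketch - the buffer size and names here are
>  > invented, not the real gateway's), the gateway was just a bounded queue
>  > that dropped on overflow, leaving all recovery to the endpoints' TCPs:
>  >
>  >     from collections import deque
>  >
>  >     QUEUE_LIMIT = 8   # illustrative buffer size only
>  >
>  >     class Gateway:
>  >         def __init__(self):
>  >             self.queue = deque()
>  >
>  >         def datagram_in(self, dgram):
>  >             # Arrivals faster than departures overflow the buffer; the
>  >             # excess is silently discarded, and the sending TCP must
>  >             # notice the loss and retransmit.
>  >             if len(self.queue) < QUEUE_LIMIT:
>  >                 self.queue.append(dgram)
>  >
>  >         def datagram_out(self):
>  >             # Forward one datagram onto the slower outbound net, if any.
>  >             return self.queue.popleft() if self.queue else None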
>  >
>  > Peter Kirstein's crew at UCL were quite active in experimenting with the
>  > early Internet, and their TCP/IP traffic had to actually do all of the
>  > functions that the Fuzzy Peach so successfully hid from those directly
>  > attached to it.   I think the experiences in that path motivated a lot
>  > of the early thinking about algorithms for TCP behavior, as well as
>  > gateway actions.
>  >
>  > Europe is 5+ hours ahead of Boston, so I learned to expect emails and/or
>  > phone messages waiting for me every morning advising that "The Internet
>  > Is Broken!", either from Europe directly or through ARPA.  One of the
>  > first troubleshooting steps, after making sure the gateway was running,
>  > was to see what was going on in the Fuzzy Peach which was so important
>  > to the operation of the Internet.   Bob Hinden, Alan Sheltzer, and Mike
>  > Brescia might remember more since they were usually on the front lines.
>  >
>  > Much of the experimentation at the time involved interactions between
>  > the UK crowd and some machine at ISI.   If the ARPANET was acting up,
>  > the bandwidth and latency of those TCP/IP traffic flows could gyrate
>  > wildly, and TCP/IP implementations didn't always respond well to such
>  > things, especially since they didn't typically occur when you were just
>  > using the Fuzzy Peach.
>  >
>  > Result - "The Internet Is Broken".   That long-haul ARPA-ISI circuit was
>  > an important part of the path from Europe to California.   If it was
>  > "down", the path became 3 or more additional hops (IMP hops, not IP),
>  > and became further loaded by additional traffic routing around the
>  > break.   TCPs would timeout, retransmit, and make the problem worse
>  > while their algorithms tried to adapt.
>  >
>  > So that's probably what I was doing in the NOC when I noticed the
>  > importance of that ARPA<->USC ARPANET circuit.
>  >
>  > /Jack Haverty
>  >
>  >
>  > On 8/29/21 10:09 AM, Stephen Casner wrote:
>  > > Jack, that map shows one hop from ARPA to USC, but the PDP10s were at
>  > > ISI which is 10 miles and 2 or 3 IMPs from USC.
>  > >
>  > >         -- Steve
>  > >
>  > > On Sun, 29 Aug 2021, Jack Haverty via Internet-history wrote:
>  > >
>  > >> Actually July 1981 -- see
>  > >> http://mercury.lcs.mit.edu/~jnc/tech/jpg/ARPANet/G81Jul.jpg
>  > >> (thanks, Noel!)
>  > >> The experience I recall was being in the ARPANET NOC for some reason
>  > >> and noticing the topology on the big map that covered one wall of the
>  > >> NOC.  There were 2 ARPANET nodes at that time labelled ISI, but I'm
>  > >> not sure where the PDP-10s were attached.  Still just historically
>  > >> curious how the decision was made to configure that topology....but
>  > >> we'll probably never know.  /Jack
>  > >>
>  > >>
>  > >> On 8/29/21 8:02 AM, Alex McKenzie via Internet-history wrote:
>  > >>>    A look at some ARPAnet maps available on the web shows that in
>  > >>> 1982 it was four hops from ARPA to ISI, but by 1985 it was one hop.
>  > >>> Alex McKenzie
>  > >>>
>  > >>>      On Sunday, August 29, 2021, 10:04:05 AM EDT, Alex McKenzie via
>  > >>> Internet-history <internet-history at elists.isoc.org> wrote:
>  > >>>      This is the second email from Jack mentioning a point-to-point
>  > >>> line between the ARPA TIP and the ISI site.  I don't believe that is
>  > >>> an accurate statement of the ARPAnet topology.  In January 1975 there
>  > >>> were 5 hops between the 2 on the shortest path.  In October 1975
>  > >>> there were 6.  I don't believe it was ever one or two hops, but
>  > >>> perhaps someone can find a network map that proves me wrong.
>  > >>> Alex McKenzie
>  > >>>
>  > >>>      On Saturday, August 28, 2021, 05:06:54 PM EDT, Jack Haverty via
>  > >>> Internet-history <internet-history at elists.isoc.org> wrote:
>  > >>>      Sounds right.  My experience was well after that early
>  > >>> experimental period.  The ARPANET was much bigger (1980ish) and the
>  > >>> topology had evolved over the years.  There was a direct 56K line
>  > >>> (IIRC between ARPA-TIP and ISI) at that time.  Lots of other circuits
>  > >>> too, but in normal conditions ARPA<->ISI traffic flowed directly over
>  > >>> that long-haul circuit.  /Jack
>  > >>>
>  > >>> On 8/28/21 1:55 PM, Vint Cerf wrote:
>  > >>>> Jack, the 4 node configuration had two paths between UCLA and SRI
>  > >>>> and a two hop path to University of Utah.
>  > >>>> We did a variety of tests to force alternate routing (by congesting
>  > >>>> the first path).
>  > >>>> I used traffic generators in the IMPs and in the UCLA Sigma-7 to
>  > >>>> get this effect. Of course, we also crashed the Arpanet with these
>  > >>>> early experiments.
>  > >>>>
>  > >>>> v
>  > >>>>
>  > >>>>
>  > >>>> On Sat, Aug 28, 2021 at 4:15 PM Jack Haverty <jack at 3kitty.org> wrote:
>  > >>>>
>  > >>>>      Thanks, Steve.  I hadn't heard the details of why ISI was
>  > >>>>      selected.  I can believe that economics was probably a factor
>  > >>>>      but the people and organizational issues could have been the
>  > >>>>      dominant factors.
>  > >>>>
>  > >>>>      IMHO, the "internet community" seems to often ignore
>  > >>>>      non-technical influences on historical events, preferring to
>  > >>>>      view everything in terms of RFCs, protocols, and such.  I think
>  > >>>>      the other influences are an important part of the story - hence
>  > >>>>      my "economic lens".  You just described a view through a
>  > >>>>      manager's lens.
>  > >>>>
>  > >>>>      /Jack
>  > >>>>
>  > >>>>      PS - I always thought that the "ARPANET demo" aspect of that
>  > >>>>      ARPANET timeframe was suspect, especially after I noticed that
>  > >>>>      the ARPANET had been configured with a leased circuit directly
>  > >>>>      between the nearby IMPs at ISI and ARPA.  So as a demo of
>  > >>>>      "packet switching", there wasn't much actual switching
>  > >>>>      involved.  The 2 IMPs were more like multiplexors.
>  > >>>>
>  > >>>>      I never heard whether that configuration was mandated by ARPA,
>  > >>>>      whether BBN decided to put a line in as a way to keep the
>  > >>>>      customer happy, or whether it just happened naturally as a
>  > >>>>      result of the ongoing measurement of traffic flows and
>  > >>>>      reconfiguration of the topology to adapt as needed.  Or
>  > >>>>      something else.  The interactivity of the service between a
>  > >>>>      terminal at ARPA and a PDP-10 at ISI was noticeably better than
>  > >>>>      other users (e.g., me) experienced.
>  > >>>>
>  > >>>>      On 8/28/21 11:51 AM, Steve Crocker wrote:
>  > >>>>>      Jack,
>  > >>>>>
>  > >>>>>      You wrote:
>  > >>>>>
>  > >>>>>          I recall many visits to ARPA on Wilson Blvd in Arlington,
>  > >>>>>          VA.  There were terminals all over the building, pretty
>  > >>>>>          much all connected through the ARPANET to a PDP-10 3000
>  > >>>>>          miles away at USC in Marina del Rey, CA.  The technology
>  > >>>>>          of Packet Switching made it possible to keep a PDP-10
>  > >>>>>          busy servicing all those Users and minimize the costs of
>  > >>>>>          everything, including those expensive communications
>  > >>>>>          circuits.  This was circa 1980.  Users could efficiently
>  > >>>>>          share expensive communications, and expensive and distant
>  > >>>>>          computers -- although I always thought ARPA's choice to
>  > >>>>>          use a computer 3000 miles away was probably more to
>  > >>>>>          demonstrate the viability of the ARPANET than because it
>  > >>>>>          was cheaper than using a computer somewhere near DC.
>  > >>>>>
>  > >>>>>
>  > >>>>>      The choice of USC-ISI in Marina del Rey was due to other
>  > >>>>>      factors.  In 1972, with the strong support of ARPA/IPTO
>  > >>>>>      (Larry Roberts), Keith Uncapher moved his research group out
>  > >>>>>      of RAND.  Uncapher explored a couple of possibilities and
>  > >>>>>      found a comfortable institutional home with the University of
>  > >>>>>      Southern California (USC), with the proviso that the institute
>  > >>>>>      would be off campus.  Uncapher was solidly supportive of both
>  > >>>>>      ARPA/IPTO and of the Arpanet project.  As the Arpanet grew,
>  > >>>>>      Roberts needed a place to have multiple PDP-10s providing
>  > >>>>>      service on the Arpanet, not just for the staff at ARPA but
>  > >>>>>      for many others as well.  Uncapher was cooperative and the
>  > >>>>>      rest followed easily.
>  > >>>>>
>  > >>>>>      The fact that it demonstrated the viability of
>  > >>>>>      packet-switching over that distance was perhaps a bonus, but
>  > >>>>>      the same would have been true almost anywhere in the
>  > >>>>>      continental U.S. at that time.  The more important factor was
>  > >>>>>      the quality of the relationship.  One could imagine setting
>  > >>>>>      up a small farm of machines at various other universities,
>  > >>>>>      non-profits, or selected for-profit companies, or even some
>  > >>>>>      military bases.  For each of these, cost, contracting rules,
>  > >>>>>      the ambitions of the principal investigator, and staff skill
>  > >>>>>      sets would have been the dominant concerns.
>  > >>>>>
>  > >>>>>      Steve
>  > >>>>>
>  > >>>>
>  > >>>> --
>  > >>>> Please send any postal/overnight deliveries to:
>  > >>>> Vint Cerf
>  > >>>> 1435 Woodhurst Blvd
>  > >>>> McLean, VA 22102
>  > >>>> 703-448-0965
>  > >>>>
>  > >>>> until further notice
>  >
>  >
>  > --
>  > Internet-history mailing list
>  > Internet-history at elists.isoc.org
>  > https://elists.isoc.org/mailman/listinfo/internet-history
>
>  --
>  Internet-history mailing list
>  Internet-history at elists.isoc.org
>  https://elists.isoc.org/mailman/listinfo/internet-history
>
>
>
>   --
>   Please send any postal/overnight deliveries to:
>   Vint Cerf
>   1435 Woodhurst Blvd
>   McLean, VA 22102
>   703-448-0965
>
>   until further notice
>
>
>
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>


