[ih] NBS seminar on TCP/IP (was TCP RTT Estimator)
Jack Haverty
jack at 3kitty.org
Sun Apr 27 11:48:37 PDT 2025
I'm working on it.....
On 4/27/25 06:11, vinton cerf wrote:
> thanks for a great story, Jack! Your adventures in Internet-land
> deserve a book of their own.
>
> v
>
>
> On Sat, Apr 26, 2025 at 3:55 PM Jack Haverty via Internet-history
> <internet-history at elists.isoc.org> wrote:
>
> The TCP implementation I did for Unix on a PDP-11/40 was for use in
> the BCR project. I also had lots of performance issues, some of which
> were finally traced to hardware issues such as connectors!
> In the late 1960s, as an undergraduate, I had a student job
> programming a PDP-8 doing data collection at the MIT Instrumentation
> Lab (also known as "The ILab"). The group I worked for was designing
> and deploying inertial navigation systems. At the time their focus
> was on the Apollo moon missions and "PIGA" devices (Pendulous
> Integrating Gyroscopic Accelerometers), but their technology had been
> in use for years in older systems such as the Minuteman ICBMs.
> (Google "minuteman piga" if you're curious.)
>
> In EE classes, we had been learning about all sorts of Engineering
> techniques for optimizing circuit designs - things like Karnaugh maps.
> One day, while sitting at my desk in the ILab, I realized that the
> engineer sitting at the next desk was an actual "rocket scientist"
> working on rocket stuff. So I asked him what Engineering principles
> and techniques he found most useful in his design work.
>
> His answer surprised me -- "Connectors are all that matters!" All
> designs were focused on minimizing the number of connectors. Nothing
> else was considered important, as long as it fit in the size, weight,
> and power budget.
>
> Over years of accumulated field data, they had determined that
> failures were mainly associated with connectors. A few extra logic
> gates didn't matter. An extra connector made the system noticeably
> less reliable.
>
> That's why computer problems even today sometimes disappear if you
> simply unplug and replug the cables. Any sliding metal-metal contact
> (i.e., a "connector") eventually corrodes and disrupts whatever signal
> was travelling through it. The sliding action of replugging usually
> cleans the contacts and things work again.
>
> Plugging and replugging connectors isn't easy when the connector is
> somewhere between the Earth and Moon, or deep inside a missile buried
> in some farmer's field. So that engineer told me that they avoided
> such problems by ensuring that every circuit passing through a
> connector always had some tiny current flowing through it. Even a few
> microamps was enough to avoid corrosion. That was another crucial
> design principle.
>
> They didn't teach such things in school back then. I wonder if
> that's changed.
>
> My desk neighbor was Oriental, and told me of an old "Confucian
> Curse", from a time far before electricity was discovered. It applies
> to all sorts of "interfaces" and undoubtedly loses in the translation,
> but he said today's curse would be "May You Be In Charge Of
> Connectors!"
>
> TCP does a wonderful job of hiding such problems. Such experiences
> were what motivated me later to push hard for SNMP as instrumentation
> -- including instrumentation of TCP behavior in end-users' computers.
>
> My Unix TCP had counters for things like duplicates, retransmissions,
> and the like. Such data was invaluable to even detect that something
> wasn't working as it should, whether it was a bug in the software, a
> defect in the protocol, or a flaky connector under the floor. When I
> was later involved in operating a large intranet, such mechanisms were
> very useful in figuring out why the users were rightly complaining
> about network performance.
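>
> As a rough illustration (in C, with hypothetical names - not the
> actual PDP-11 code), such per-connection counters might have looked
> something like this:
>
>     struct tcp_stats {
>         unsigned long segs_sent;    /* total segments transmitted */
>         unsigned long retransmits;  /* segments sent more than once */
>         unsigned long dups_rcvd;    /* duplicate segments received */
>         unsigned long bad_cksums;   /* dropped for bad checksum */
>     };
>
>     struct tcb {                    /* per-connection control block */
>         struct tcp_stats stats;
>         /* ... sequence numbers, timers, etc. ... */
>     };
>
>     void tcp_retransmit(struct tcb *tp)
>     {
>         tp->stats.retransmits++;    /* count it before resending */
>         /* ... rebuild and resend oldest unacknowledged segment ... */
>     }
>
> A retransmits count climbing much faster than segs_sent is exactly the
> kind of signal that points at something like a flaky connector under
> the floor.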
>
> Connectors in networking, whether they are plugs/sockets, protocols,
> or APIs, matter.
>
> Jack Haverty
>
>
> On 4/26/25 03:58, Vint Cerf wrote:
> > I worked closely with Ed Cain at DCEC. In fact, while I was still at
> > Stanford, I used to fly in on the red-eye to Dulles, go to the DCEC
> > building, which is now walking distance from my Reston office at
> > Google, and work on the BCR project. I would take a late afternoon
> > flight to get back to SFO and go to my morning lectures the next day.
> > If memory serves, some of the BCRs were installed at DCEC, on Wiehle
> > Ave/Sunset Blvd. I even seem to recall that BCR TCP retransmissions
> > hid a flaky connection until we realized the data rates we were
> > getting were much worse than reasonably expected. Ed or someone
> > reporting to him pulled up a floor plate to discover the flaky
> > connector.
> >
> > v
> >
> >
> > On Fri, Apr 25, 2025 at 7:13 PM Jack Haverty via Internet-history
> > <internet-history at elists.isoc.org> wrote:
> >
> > The managers in the military hierarchy mostly had to trust their
> > technical staff, who were either within the military or one of their
> > contractors, for technical issues. DCA was responsible for operating
> > communications infrastructure and changing it as new technologies
> > became viable. ARPA was responsible for supplying a stream of new
> > technologies. Together they ran the "pipeline" to move ideas from
> > research to operational systems.
> >
> > DCA had a "lab" arm, called the Defense Communications Engineering
> > Center (DCEC), where technologies such as TCP/IP were tested and
> > judged to be, or not yet be, appropriate for general deployment
> > throughout DoD. Ed Cain was in charge of a lab for evaluating
> > TCP/IP. He was also a member of Vint's ICCB. So DCA was aware of
> > the outstanding issues.
> >
> > But other issues usually surface as any new technology goes into an
> > operational environment. Getting "in the field" experience as early
> > as possible provides a way to shake out those operational issues and
> > feed them back into the technical evolution.
> >
> > SATNET had been operated by ARPA for a while, but MATNET, a clone of
> > SATNET, was also being evaluated by a Navy lab, run by Frank
> > Deckelman, with its nodes on ships such as the USS Carl Vinson.
> > Similarly, Packet Radio networks were in use at places like Fort
> > Bragg and elsewhere to get similar operational experience in the
> > Army. All such testbeds could generate feedback on issues important
> > to military needs.
> >
> > TCP/IP was ready enough to go "into the field", but was known to be
> > not yet "done". There was a list of outstanding issues that the ICCB
> > kept, of things that needed to be addressed but were not yet resolved
> > by us "techies". Many of the issues involved routing, as well as how
> > best to support a mix of traffic types with different needs for "type
> > of service". These were all things that we techies simply didn't
> > know how to do yet.
> >
> > While the then-current version of TCP/IP (V4) was deployed and
> > operational experience gathered, the outstanding issues could be
> > addressed by the ongoing Research Community, which would discuss,
> > debate, test, and choose appropriate mechanisms for each of the
> > pending issues and incorporate them in the next release of the
> > TCP/IP Specifications.
> >
> > Meanwhile, the limitations of the current release were understood
> > and avoided. IP Fragmentation is one example; we knew it didn't work
> > very well, but the "Don't Fragment" bit was added to provide a way to
> > avoid it.
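> >
> > The same escape hatch is still visible today. A minimal sketch,
> > assuming Linux, where DF is controlled through the IP_MTU_DISCOVER
> > socket option - set_df() is just a made-up helper name:
> >
> >     #include <netinet/in.h>    /* IPPROTO_IP, IP_MTU_DISCOVER */
> >     #include <sys/socket.h>
> >
> >     /* Set the DF bit on outgoing packets, so routers must
> >        drop-and-signal (Path MTU discovery) instead of fragmenting. */
> >     int set_df(int sock)
> >     {
> >         int val = IP_PMTUDISC_DO;
> >         return setsockopt(sock, IPPROTO_IP, IP_MTU_DISCOVER,
> >                           &val, sizeof(val));
> >     }
> >
> > With this set, a UDP sender gets an EMSGSIZE error, rather than
> > silent fragmentation, when a datagram exceeds the path MTU.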
> >
> > So, important "features" weren't "omitted". They were targeted to
> > be solved and incorporated in a future release, once the technical
> > experts reached "rough consensus and running code".
> >
> > The Defense Research Internet (DRI) was supposed to be a new
> > high-speed network to provide a foundation for continuing research
> > on items such as the "policy routing" and "type of service"
> > requirements that were on the to-do list.
> >
> > I don't myself know much about the DRI, though, or what role it
> > might have played in creating the "next release" of TCP/IP (V6?).
> > Or if DRI ever got built at all. Perhaps someone else does.
> >
> > I think the whole Internet world changed quite a bit when NSF got
> > involved, and when the commercial world chose TCP/IP as their target
> > architecture. The military became just one of a global list of
> > customers. NSF instigated the creation of a bunch of regional
> > networks, with a mandate and deadline to become self-sufficient, and
> > public ISPs started to spread as a result - leading to The Internet
> > we know today.
> >
> > Jack Haverty
> >
> > On 4/25/25 15:17, Greg Skinner wrote:
> > > Just to clarify, I have listened to the Computer Freaks podcasts
> > > about Joe Haughney. He (and his successors) had access to
> > > information about the TCP/IP implementations that were discussed
> > > at the meetings summarized in the IEN notes. So in theory, there
> > > was a means for them to raise concerns about retransmission
> > > algorithms, or collect information that could be passed on to
> > > people who had those concerns. Furthermore, they were getting
> > > feedback on the tcp-ip list about implementation concerns in
> > > general. [1]
> > >
> > > So far, based on what I’ve read, I don’t see any evidence that the
> > > concerns of the military, or of users of lossy networks, were
> > > given insufficient consideration. I see that there were features
> > > left out of RFC 793 that could have mitigated some retransmission
> > > issues that impacted performance. But there were DCA people
> > > involved who, in theory, could have questioned the wisdom of
> > > omitting those features, at least.
> > >
> > > --gregbo
> > >
> > > [1] https://www.columbia.edu/~rh120/other/tcpdigest_paper.txt
> > >
> > >
> > >> On Apr 25, 2025, at 11:11 AM, Jack Haverty <jack at 3kitty.org> wrote:
> > >>
> > >> Since I'm listed as one of the speakers, I was probably there.
> > >> But I don't remember that particular seminar at all. OTOH, during
> > >> the early 1980s I went from Boston to DC probably at least once a
> > >> week, often to brief someone or some group on The Internet. I do
> > >> recall going to NBS, but can't remember exactly why. It certainly
> > >> could have been to give a talk on TCP for an hour to some
> > >> audience.
> > >>
> > >> That seminar was almost certainly part of DCA's efforts to
> > >> support the standardization of TCP as a DoD Standard. See, for
> > >> example, RFC 761, published in January 1980 - with a seminar
> > >> scheduled for November.
> > >>
> > >> The adoption of TCP as a requirement for DoD procurements (not
> > >> just research contracts) triggered a lot of big and small
> > >> government contractors to get interested in exactly what this
> > >> thing was that they were going to have to implement. I recall
> > >> even getting a phone call from a cousin, who worked at a big
> > >> government shop, to get a little free education and advice.
> > >>
> > >> So the NBS seminar was probably more of an educational venue for
> > >> people new to TCP; it was likely not a place where nuances of
> > >> retransmission algorithms would be of interest.
> > >>
> > >> At the time, the ARPANET had been transferred from ARPA to DCA.
> > >> Research results were progressing towards operation, part of the
> > >> "technology transfer" impetus. Joe Haughney was in charge of the
> > >> ARPANET in DCA Code 535. IIRC, there's a lot more detail in the
> > >> podcasts which Joe's daughter Christine put together recently -
> > >> see https://www.inc.com/podcasts/computer-freaks
> > >>
> > >> Jack Haverty
> > >
> >
> >
> > --
> > Please send any postal/overnight deliveries to:
> > Vint Cerf
> > Google, LLC
> > 1900 Reston Metro Plaza, 16th Floor
> > Reston, VA 20190
> > +1 (571) 213 1346
> >
> >
> > until further notice