[ih] TCP RTT Estimator

Vint Cerf vint at google.com
Wed Mar 26 14:17:15 PDT 2025


Yes, the gateway was colocated with the Station (on the same computer). The
Station managed the Packet Radio network and maintained information about
connectivity among the radio relays. PRNET was not a star network. Topology
changes were tracked by the mobile nodes periodically reporting to the
Station which other Packet Radios they could reach. Hosts on the PRNET
nodes could communicate with each other and, through the gateway, with
ARPANET and SATNET hosts. The PRNET nodes did NOT run TCP; that was running
on the hosts, like the LSI-11/23s or the Station or....

v


On Wed, Mar 26, 2025 at 5:08 PM John Day <jeanjour at comcast.net> wrote:

> And those nodes relayed among themselves as well as with the gateway?
>
> IOW, PRNET wasn’t a star network with the gateway as the center, like a
> WiFi access point.
>
> So there would have been TCP connections between PRNET nodes as well as
> TCP connections potentially relayed by other PRNET nodes through the
> gateway to ARPANET hosts.  Right?
>
> Take care,
> John
>
> > On Mar 26, 2025, at 16:57, Vint Cerf via Internet-history <internet-history at elists.isoc.org> wrote:
> >
> > I think we had a fair number of nodes - at least a half dozen, possibly
> > more? Don would know if you don't, Barbara.
> > Yes to multiple mountain sites. Eichler - sounds like somebody's house! I
> > used to live in an Eichler in Palo Alto but never had a packet radio
> > installed. Xerox PARC had one (fixed location) though.
> >
> > v
> >
> >
> > On Wed, Mar 26, 2025 at 4:46 PM Barbara Denny via Internet-history <internet-history at elists.isoc.org> wrote:
> >
> >>
> >> Saw Vint's message after I started this one so adding Don Nielson to
> >> this thread too.
> >> I would like to mention the PRnet in the Bay Area was larger than 2
> >> nodes. I am guessing you are referring to the diagram I sent out for
> >> the 1976 demo/test. That diagram shows the path the packets took to
> >> reach SRI from Rossotti's. I am trying to find out if the rest of the
> >> network wasn't deployed in 1976, but I haven't been able to track it
> >> down. If you look at the DTIC reference Greg Skinner provided
> >> previously (https://apps.dtic.mil/sti/tr/pdf/ADA157696.pdf), there are
> >> diagrams starting at page 244 that show more of the Bay Area PR network
> >> in that report. It includes sites at Grizzly Peak, Mission Peak, Mt.
> >> San Bruno, etc. I am not sure I ever got a copy when I was at BBN, so I
> >> don't feel I can comment on whether some of the node locations would
> >> change based on what connectivity was needed.
> >> BTW, was the repeater marked Eichler in the 1976 demo diagram perhaps
> >> near/at Stanford's Dish (Don?/Vint?)? I think most people don't realize
> >> the Dish belongs to SRI and not Stanford.
> >> barbara
> >>
> >> On Wednesday, March 26, 2025 at 11:56:39 AM PDT, John Day via
> >> Internet-history <internet-history at elists.isoc.org> wrote:
> >>
> >> I don’t quite understand about using TCP (or variants) here.
> >>
> >> The PRNET consisted of the van and two repeaters to a gateway to the
> >> ARPANET. The repeaters were physical layer relays, I assume, that did
> >> not interpret the packets. I presume that the PRNET had a link layer
> >> that did some error control. The van to gateway is generating TCP
> >> datagrams over its 'link layer' protocol. (IP had not yet been created,
> >> or had it?) I presume that the ARPANET relayed the TCP packets as Type
> >> 3 packets, i.e., datagrams. The PRNET gateway would have looked like a
> >> host to the IMP it was connected to. The IMPs had their own hop-by-hop
> >> error control over the physical lines. (There weren’t really layers in
> >> the IMPs. At least that is what Dave Walden told me. But we can assume
> >> that this error control was sort of like a link layer.)
> >>
> >> The error characteristics of the van-gateway link layer were very
> >> different and much more lossy than the Host-IMP or IMP-IMP lines. (Some
> >> of the IMP-IMP lines had very low error rates.) There was one
> >> van-gateway link and several (n) Host-IMP and IMP-IMP links. The
> >> ARPANET would have been meeting the requirements for a network layer as
> >> described in my email. The only major difference in error rate was the
> >> van-gateway link. It would make more sense (and be consistent with what
> >> is described in my email) to enhance the van-gateway link protocol to
> >> be robust enough to meet the error characteristics.
> >>
> >> Using TCP would be in some sense trying to use 'the tail to wag the
> >> dog,’ i.e., using an end-to-end transport protocol to compensate for
> >> one link that was not meeting the requirements of the network layer.
> >> This would have been much less effective. It is easy to see that errors
> >> in a smaller scope (the link layer) should not be propagated to layers
> >> of a greater scope for recovery. (Unless their frequency is very low as
> >> described previously, which this isn’t.) This is what the architecture
> >> model requires.
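> >>
> >> To put rough numbers on that (a back-of-the-envelope sketch with
> >> made-up loss rates, not PRNET measurements): compare the expected link
> >> transmissions per delivered packet when the lossy link recovers locally
> >> versus when the endpoints must resend across the whole path.
> >>
> >>     # One van-gateway link at 20% loss plus four clean hops at ~0.01%.
> >>     links = [0.20, 1e-4, 1e-4, 1e-4, 1e-4]
> >>
> >>     # Hop-by-hop ARQ: each link retransmits locally until its hop succeeds.
> >>     hop_by_hop = sum(1.0 / (1.0 - p) for p in links)
> >>
> >>     # End-to-end ARQ: a loss anywhere makes the source resend across every
> >>     # link again (counting a full path traversal per attempt).
> >>     p_path = 1.0
> >>     for p in links:
> >>         p_path *= 1.0 - p
> >>     end_to_end = len(links) / p_path
> >>
> >>     print(f"hop-by-hop: {hop_by_hop:.2f} link transmissions/packet")  # ~5.25
> >>     print(f"end-to-end: {end_to_end:.2f} link transmissions/packet")  # ~6.25
> >>
> >> The gap widens quickly as the lossy link gets worse or the path gets
> >> longer, and that is before counting the end-to-end timeout delays.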
> >>
> >> Not sure what congestion control has to do with this. The TCP
> >> congestion solution is a pretty awful one. The implicit notification
> >> makes it predatory and assumes that lost messages are due to
> >> congestion, which they aren’t. (Is that the connection?) It works by
> >> causing congestion (some congestion avoidance strategy!), which
> >> generates many more retransmissions. A scheme that minimizes congestion
> >> events and retransmissions would be much preferred. (And one existed at
> >> the time.)
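> >>
> >> For a sense of scale: the loss-throughput relation later formalized by
> >> Mathis et al. (1997) approximates a loss-based TCP's throughput as
> >> (MSS/RTT) * sqrt(3/2) / sqrt(p), so a transport that reads every radio
> >> loss as a congestion signal sees its throughput fall with the square
> >> root of the loss rate. A rough sketch, with illustrative numbers:
> >>
> >>     from math import sqrt
> >>
> >>     def tcp_throughput_bps(mss_bytes, rtt_s, loss):
> >>         # Mathis et al. approximation for loss-based TCP throughput.
> >>         return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss)
> >>
> >>     # 1460-byte segments, 100 ms RTT, loss rates from clean to radio-like.
> >>     for p in (1e-6, 1e-4, 1e-2):
> >>         print(f"loss {p:g}: ~{tcp_throughput_bps(1460, 0.1, p) / 1e6:.1f} Mb/s")
> >>     # loss 1e-06: ~143.1 Mb/s, loss 0.0001: ~14.3 Mb/s, loss 0.01: ~1.4 Mb/s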
> >>
> >> Take care,
> >> John
> >>
> >>> On Mar 26, 2025, at 13:10, Greg Skinner <gregskinner0 at icloud.com> wrote:
> >>>
> >>>
> >>> On Mar 24, 2025, at 5:53 PM, John Day <jeanjour at comcast.net> wrote:
> >>>>
> >>>> I would go further and say that this is a general property of layers.
> >>>> We tend to focus on the service provided by a layer, but the minimal
> >>>> service the layer expects from supporting layers is just as important.
> >>>>
> >>>> The original concept of best-effort and end-to-end transport (circa
> >>>> 1972) was that errors in the network layer were from congestion and
> >>>> rare memory errors during relaying. Congestion research was already
> >>>> underway and the results were expected to keep the frequency of lost
> >>>> packets fairly low, thus keeping retransmissions at the transport
> >>>> layer relatively low and allowing the transport layer to be
> >>>> reasonably efficient.
> >>>>
> >>>> For those early networks, the link layer was something HDLC-like and
> >>>> so reliable. However, it was recognized that this did not imply that
> >>>> the link layers had to be reliable, only that they have an error rate
> >>>> well below the error rate created by the network layer. Given that
> >>>> there would be n link layers contributing to the additional error
> >>>> rate, where n is the diameter of the network, it is possible to
> >>>> estimate an upper bound that each link layer must meet to keep the
> >>>> error rate at the network layer low enough for the transport layer to
> >>>> be effective.
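> >>>>
> >>>> As a concrete sketch of that bound (illustrative numbers, assuming
> >>>> independent losses per link): if the transport stays efficient up to
> >>>> a path loss rate p_net over a diameter of n links, each link must
> >>>> keep its loss below 1 - (1 - p_net)^(1/n), roughly p_net/n for small
> >>>> p_net.
> >>>>
> >>>>     def per_link_bound(p_net, n):
> >>>>         # Each of n independent links must satisfy
> >>>>         # (1 - p_link)^n >= 1 - p_net.
> >>>>         return 1.0 - (1.0 - p_net) ** (1.0 / n)
> >>>>
> >>>>     # e.g., tolerating 1% path loss across a diameter of 5 means
> >>>>     # each link must stay under roughly 0.2% loss.
> >>>>     print(per_link_bound(0.01, 5))  # ~0.00201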
> >>>>
> >>>> There were soon examples of link layers that were datagram services
> >>>> but sufficiently reliable to meet those conditions, e.g., Ethernet.
> >>>> Later, to take up the packet radio topic, 802.11 is another good
> >>>> example, where what happens during the NAV (RTS, CTS, send data, get
> >>>> an Ack) is considered ‘atomic’ and if the Ack is not received, it is
> >>>> assumed that the packet was not delivered. (As WiFi data rates have
> >>>> increased, this has been modified.) This seems to have the same
> >>>> property of providing a sufficiently low error rate to the network
> >>>> layer that transport remains effective. (Although I have to admit I
> >>>> have never come across typical goodput measurements for 802.11. They
> >>>> must exist, I just haven’t encountered them.)  ;-)
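> >>>>
> >>>> In schematic form (a sketch only; the radio object is a hypothetical
> >>>> transceiver interface, not a real 802.11 MAC API), the atomic
> >>>> exchange looks like this:
> >>>>
> >>>>     def send_frame_atomic(radio, frame, timeout_s=0.002):
> >>>>         # One NAV-protected exchange: RTS, CTS, data, Ack. A missing
> >>>>         # response means the frame is assumed undelivered and the MAC
> >>>>         # retries; nothing is left for higher layers to clean up.
> >>>>         radio.send("RTS")
> >>>>         if radio.wait_for("CTS", timeout_s) is None:
> >>>>             return False
> >>>>         radio.send(frame)
> >>>>         return radio.wait_for("ACK", timeout_s) is not None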
> >>>>
> >>>> One fun thing to do with students and WiFi is to point out that the
> >>>> original use of the NAV makes WiFi a stop-and-wait protocol. Most
> >>>> textbooks use stop-and-wait as a simple introductory protocol and
> >>>> show that under most circumstances it would be quite slow and
> >>>> inefficient. Then, since they use WiFi every day, is it slow? No.
> >>>> Then why not? ;-)
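> >>>>
> >>>> The answer falls out of the textbook utilization formula
> >>>> U = 1/(1 + 2a), where a = Tprop/Tframe: stop-and-wait is only slow
> >>>> when propagation delay dominates frame time, and across a WiFi cell
> >>>> it never does. A sketch with illustrative numbers, ignoring MAC
> >>>> overheads such as SIFS/DIFS and backoff:
> >>>>
> >>>>     def utilization(frame_bits, rate_bps, distance_m, c=3e8):
> >>>>         t_frame = frame_bits / rate_bps  # transmission time
> >>>>         t_prop = distance_m / c          # propagation time
> >>>>         return 1.0 / (1.0 + 2.0 * t_prop / t_frame)
> >>>>
> >>>>     # 12,000-bit frame at 54 Mb/s over a 30 m cell: a is tiny.
> >>>>     print(utilization(12_000, 54e6, 30))          # ~0.999
> >>>>     # The same frame over a ~36,000 km geostationary hop: a is huge.
> >>>>     print(utilization(12_000, 54e6, 36_000_000))  # ~0.0009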
> >>>>
> >>>> It leads to a nice principle of protocol design.
> >>>>
> >>>> Take care,
> >>>> John
> >>>>
> >>>
> >>> Looking at this from another direction, there are several specialized
> >>> versions of TCP. [1] Given the conditions experienced in the SF Bay
> >>> Area PRNET, I can see how, if circumstances permitted, something that
> >>> today we might call “TCP Menlo” or “TCP Alpine” might have been
> >>> created that would have addressed the lossy networks problem more
> >>> directly. [2]
> >>>
> >>> --gregbo
> >>>
> >>> [1] https://en.wikipedia.org/wiki/TCP_congestion_control
> >>> [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/
> >>
> >
>
>

-- 
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346


until further notice

