[ih] TCP RTT Estimator
vinton cerf
vgcerf at gmail.com
Wed Mar 26 12:23:09 PDT 2025
See inline; adding Don Nielson.
On Wed, Mar 26, 2025 at 2:56 PM John Day via Internet-history <
internet-history at elists.isoc.org> wrote:
> I don’t quite understand about using TCP (or variants) here.
>
> The PRNET consisted of the van and two repeaters to a gateway to the
> ARPANET. The repeaters were physical layer relays I assume that did not
> interpret the packets.
No, they were full-up packet radios, as I remember it.
> I presume that the PRNET had a link layer that did some error control.
yes
> The van to gateway is generating TCP datagrams over its ‘link layer'
> protocol. (IP had not yet been created, or had it?.)
IP came about in 1977; the first tests, in 1976, were TCP only. The November
1977 tests were full-up TCP/IP.
> I presume that the ARPANET relayed the TCP packets as Type 3 packets,
> i.e., datagrams.
Well, not necessarily. We used Type 3 for voice comms but not necessarily
for TCP traffic.
> The PRNET-Gateway would have looked like a host to the IMP it was
> connected to.
yes
> The IMPs had their own hop-by-hop error control over the physical lines.
> (There weren’t really layers in the IMPs. At least that is what Dave Walden
> told me. But we can assume that this error control was sort of like a link
> layer.)
>
The IMPs carried ARPANET packets (TCP/IP packets were "messages" to the IMP
and were broken into ARPANET packets for transport). The IMPs provided
sequenced delivery, message reassembly, and RFNM flow control, except for
Type 3 "uncontrolled" IMP packets.
>
> The error characteristics of the Van-Gateway link layer were very
> different and much more lossy than the Host-IMP or IMP-IMP lines. (Some of
> the IMP-IMP lines had very low error rates.) There was one Van-Gateway link
> and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been
> meeting the requirements for a network layer as described in my email. The
> only major difference in error rate was the van-gateway link. It would make
> more sense (and be consistent with what is described in my email) to enhance
> the van-gateway link protocol to be more robust, to meet the error
> characteristics.
>
> Using TCP would be in some sense trying to use 'the tail to wag the dog,’
> i.e., using an end-to-end transport protocol to compensate for 1 link that
> was not meeting the requirements of the network layer. This would have been
> much less effective. It is easy to see that errors in a smaller scope (the
> link layer) should not be propagated to layers of a greater scope for
> recovery. (Unless their frequency is very low as described previously,
> which this isn’t.) This is what the architecture model requires.
>
Generally that's a fair line of argument (i.e., try to bring links up to
better quality by forward error correction, link-level retransmission,
ARQ) for each of the "networks" in the Internet. The end-to-end TCP was
mostly there to deal with packets lost in the PRNET, in this case, because
the PRNET was potentially multihop and packets could be lost due to lack of
connectivity, timeouts, or the loss of a packet radio. There is a 1978 IEEE
Proceedings issue with a lot of the details of that time period for PRNET,
SATNET, etc.
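To put rough numbers on the "bring the links up to quality" argument: if per-link losses are independent, the path loss compounds across hops, and you can solve for the per-link budget that keeps the end-to-end rate acceptable. A back-of-envelope sketch (my own illustration, not anything from the PRNET reports; the independence assumption is a simplification):

```python
def path_loss_rate(p_link: float, n: int) -> float:
    """Probability a packet is lost somewhere along n links,
    assuming each link independently loses it with probability p_link."""
    return 1 - (1 - p_link) ** n


def per_link_budget(p_path_max: float, n: int) -> float:
    """Largest per-link loss rate that keeps the end-to-end loss
    rate at or below p_path_max across n independent links."""
    return 1 - (1 - p_path_max) ** (1 / n)
```

For example, holding end-to-end loss to 1% over a 5-hop path leaves each link a budget of roughly 0.2% loss, which is why one lossy van-gateway hop dominates the picture.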
>
> Not sure what congestion control has to do with this. The TCP congestion
> solution is a pretty awful one. The implicit notification makes it
> predatory and assumes that lost messages are due to congestion, which they
> aren’t. (Is that the connection?) It works by causing congestion (some
> congestion avoidance strategy!) which generates many more retransmissions.
> A scheme that minimizes congestion events and retransmissions would be much
> preferred. (And one existed at the time.)
>
Many congestion control methods have been introduced (think Sally
Floyd, Van Jacobson) since the early and relatively naive TCP days.
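For readers following the subject line: the RTT estimator that eventually replaced the early single-average scheme is the Jacobson/Karels smoothed-RTT-plus-variance method, later codified in RFC 6298. A minimal sketch (illustrative only, using the RFC 6298 constants, not the historical 1970s TCP code):

```python
class RttEstimator:
    """Jacobson/Karels-style RTT estimator per RFC 6298: keep a
    smoothed RTT (SRTT) and an RTT variation (RTTVAR), and set the
    retransmission timeout (RTO) to SRTT + 4 * RTTVAR."""

    ALPHA = 1 / 8  # gain for the smoothed RTT
    BETA = 1 / 4   # gain for the RTT variation
    K = 4          # variation multiplier in the RTO

    def __init__(self):
        self.srtt = None    # smoothed round-trip time (seconds)
        self.rttvar = None  # round-trip time variation (seconds)

    def update(self, sample: float) -> float:
        """Feed one RTT measurement; return the new RTO in seconds."""
        if self.srtt is None:
            # First measurement (RFC 6298, section 2.2).
            self.srtt = sample
            self.rttvar = sample / 2
        else:
            # Subsequent measurements (RFC 6298, section 2.3);
            # RTTVAR must be updated before SRTT.
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - sample))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        return self.srtt + self.K * self.rttvar
```

The key change from the naive exponentially weighted mean was tracking the variation explicitly, so the timeout adapts to jittery paths instead of retransmitting too eagerly.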
>
> Take care,
> John
>
> > On Mar 26, 2025, at 13:10, Greg Skinner <gregskinner0 at icloud.com> wrote:
> >
> >
> > On Mar 24, 2025, at 5:53 PM, John Day <jeanjour at comcast.net> wrote:
> >>
> >> I would go further and say that this is a general property of layers.
> We tend to focus on the service provided by a layer, but the minimal
> service the layer expects from supporting layers is just as important.
> >>
> >> The original concept of best-effort and end-to-end transport (circa
> 1972) was that errors in the network layer were from congestion and rare
> memory errors during relaying. Congestion research was already underway and
> the results were expected to keep the frequency of lost packets fairly low.
> Thus keeping retransmissions at the transport layer relatively low and
> allowing the transport layer to be reasonably efficient.
> >>
> >> For those early networks, the link layer was something HDLC-like and so
> reliable. However, it was recognized that this did not imply that the link
> layers had to be reliable, only that they have an error rate well below that
> created by the network layer. Given that there would be n link layers
> contributing to the additional error rate, where n is the diameter of the
> network, it is possible to estimate an upper bound that each link layer
> must meet to keep the error rate at the network layer low enough for the
> transport layer to be effective.
> >>
> >> There were soon examples of link layers that were datagram services but
> sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to
> take up the packet radio topic, 802.11 is another good example, where what
> happens during the NAV (RTS, CTS, send data, get an Ack) is considered
> ‘atomic’ and if the Ack is not received, it is assumed that the packet was
> not delivered. (As WiFi data rates have increased, this has been modified.)
> This seems to have the same property of providing a sufficiently low error
> rate to the network layer that transport remains effective. (Although I
> have to admit I have never come across typical goodput measurements for
> 802.11. They must exist, I just haven’t encountered them.) ;-)
> >>
> >> One fun thing to do with students and WiFi is to point out that the
> original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks
> use stop-and-wait as a simple introductory protocol and show that under
> most circumstances it would be quite slow and inefficient. Then, since they
> use WiFi every day: is it slow? No. Then why not? ;-)
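The rhetorical question above has a simple numeric answer: stop-and-wait utilization is roughly T_frame / (T_frame + 2 * T_prop), and over a WiFi cell the propagation time is microscopic compared to the frame transmission time. A rough sketch (my own illustration, ignoring ack transmission time and MAC overheads):

```python
def stop_and_wait_utilization(frame_bits: int, rate_bps: float,
                              distance_m: float, c: float = 3e8) -> float:
    """Fraction of time the channel carries data under stop-and-wait:
    one frame transmission followed by a round-trip wait for the ack."""
    t_frame = frame_bits / rate_bps  # time to clock the frame onto the medium
    t_prop = distance_m / c          # one-way propagation delay
    return t_frame / (t_frame + 2 * t_prop)
```

A 1500-byte frame at 54 Mb/s over 30 m yields utilization above 99%, so stop-and-wait costs almost nothing; the same frame at 1 Mb/s over a geostationary satellite path yields under 5%, which is where stop-and-wait earns its bad textbook reputation.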
> >>
> >> It leads to a nice principle of protocol design.
> >>
> >> Take care,
> >> John
> >>
> >
> > Looking at this from another direction, there are several specialized
> versions of TCP. [1] Given the conditions experienced in the SF Bay Area
> PRNET, I can see how if circumstances permitted, something that today we
> might call “TCP Menlo” or “TCP Alpine” might have been created that would
> have addressed the lossy networks problem more directly. [2]
> >
> > --gregbo
> >
> > [1] https://en.wikipedia.org/wiki/TCP_congestion_control
> > [2]
> https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>