[ih] TCP RTT Estimator

John Day jeanjour at comcast.net
Wed Mar 26 11:56:25 PDT 2025


I don’t quite understand the point about using TCP (or variants) here.

The PRNET consisted of the van and two repeaters to a gateway to the ARPANET. The repeaters were, I assume, physical-layer relays that did not interpret the packets. I presume that the PRNET had a link layer that did some error control. The van-to-gateway path was generating TCP datagrams over its ‘link layer’ protocol. (IP had not yet been created, or had it?) I presume that the ARPANET relayed the TCP packets as Type 3 packets, i.e., datagrams. The PRNET gateway would have looked like a host to the IMP it was connected to. The IMPs had their own hop-by-hop error control over the physical lines. (There weren’t really layers in the IMPs. At least that is what Dave Walden told me. But we can assume that this error control was sort of like a link layer.)

The error characteristics of the van-gateway link layer were very different and much lossier than the Host-IMP or IMP-IMP lines. (Some of the IMP-IMP lines had very low error rates.) There was one van-gateway link and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been meeting the requirements for a network layer as described in my email. The only major difference in error rate was the van-gateway link. It would make more sense (and be consistent with what is described in my email) to enhance the van-gateway link protocol to be robust enough to meet those error characteristics.
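
To put rough numbers on that budget argument (a small Python sketch; the loss rates and diameter are illustrative assumptions, not measured PRNET or ARPANET figures):

    # Rough per-link error budget, as sketched above. All numbers
    # are illustrative assumptions, not measured values.

    def per_link_budget(p_net, n):
        # Largest per-link loss rate such that n independent links
        # still deliver with probability >= 1 - p_net:
        # (1 - p_link)**n >= 1 - p_net
        return 1.0 - (1.0 - p_net) ** (1.0 / n)

    def end_to_end_loss(p_links):
        # Path loss is 1 minus the product of per-link delivery
        # probabilities (assuming independent losses).
        survive = 1.0
        for p in p_links:
            survive *= 1.0 - p
        return 1.0 - survive

    # Say transport stays efficient below ~1% network-layer loss,
    # over a path of diameter n = 5: each link gets ~0.2%.
    print(per_link_budget(0.01, 5))                  # ~0.0020

    # Four good links plus one van-gateway link at 10% loss:
    print(end_to_end_loss([0.0002] * 4 + [0.10]))    # ~0.10

One link at 10% blows the whole budget no matter how good the other links are, which is why fixing that one link is the natural remedy.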

Using TCP would be, in some sense, trying to have ‘the tail wag the dog,’ i.e., using an end-to-end transport protocol to compensate for one link that was not meeting the requirements of the network layer. This would have been much less effective. It is easy to see that errors in a smaller scope (the link layer) should not be propagated to layers of greater scope for recovery. (Unless their frequency is very low, as described previously, which this isn’t.) This is what the architectural model requires.
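
A crude way to see the ‘much less effective’ claim (again a sketch with assumed numbers; it ignores timeout latency, which only makes end-to-end recovery look worse):

    # Expected hop-crossings per delivered packet on an n-hop path
    # with one lossy hop (loss rate p). Idealized model: it counts
    # each end-to-end attempt as a full n-hop crossing.

    def e2e_recovery(p, n):
        # End-to-end retransmission re-crosses all n hops; expected
        # attempts to get past the lossy hop = 1 / (1 - p).
        return n / (1.0 - p)

    def link_recovery(p, n):
        # Link-layer retransmission retries only the lossy hop.
        return (n - 1) + 1.0 / (1.0 - p)

    p, n = 0.3, 5
    print(e2e_recovery(p, n))    # ~7.1 hop-crossings per packet
    print(link_recovery(p, n))   # ~5.4, with no transport timeouts

The gap widens as the lossy hop gets worse, and the end-to-end case also pays a full round-trip timeout for every recovery.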

Not sure what congestion control has to do with this. The TCP congestion solution is a pretty awful one. The implicit notification makes it predatory and assumes that lost messages are due to congestion, which they aren’t. (Is that the connection?) It works by causing congestion (some congestion avoidance strategy!), which generates many more retransmissions. A scheme that minimizes congestion events and retransmissions would be much preferred. (And one existed at the time.)
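
To illustrate the implicit-signal problem (a minimal AIMD sketch with classic halve-on-loss, add-one-per-RTT behavior; the constants and loss model are assumptions, not any particular TCP):

    # The sender cannot tell radio loss from congestion, so ANY
    # loss halves the window.

    import random
    random.seed(1)

    def mean_window(p_seg_loss, rtts=50000):
        cwnd, total = 1.0, 0.0
        for _ in range(rtts):
            # Probability at least one of cwnd segments is lost
            # during this RTT.
            if random.random() < 1.0 - (1.0 - p_seg_loss) ** cwnd:
                cwnd = max(1.0, cwnd / 2.0)   # loss treated as congestion
            else:
                cwnd += 1.0                   # additive increase
            total += cwnd
        return total / rtts

    print(mean_window(0.0001))   # clean path: large average window
    print(mean_window(0.02))     # 2% segment loss: window stays small

On a lossy radio link the window is repeatedly halved for losses that have nothing to do with congestion, so throughput collapses even when the link has capacity to spare.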

Take care,
John

> On Mar 26, 2025, at 13:10, Greg Skinner <gregskinner0 at icloud.com> wrote:
> 
> 
> On Mar 24, 2025, at 5:53 PM, John Day <jeanjour at comcast.net> wrote:
>> 
>> I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important.
>> 
>> The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway, and the results were expected to keep the frequency of lost packets fairly low, thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient.
>> 
>> For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the link layers had to be reliable, only that they have an error rate well below the error rate created by the network layer. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective.
>> 
>> There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ‘atomic’ and if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates have increased, this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist; I just haven’t encountered them.) ;-)
>> 
>> One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances it would be quite slow and inefficient. Then, since they use WiFi every day, is it slow? No. Then why not? ;-)
>> 
>> It leads to a nice principle of protocol design.
>> 
>> Take care,
>> John
>> 
> 
> Looking at this from another direction, there are several specialized versions of TCP. [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how, if circumstances permitted, something that today we might call “TCP Menlo” or “TCP Alpine” might have been created that would have addressed the lossy-networks problem more directly. [2]
> 
> --gregbo
> 
> [1] https://en.wikipedia.org/wiki/TCP_congestion_control
> [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/
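
P.S. The stop-and-wait question above has a nice back-of-the-envelope answer: the exchange is efficient whenever the frame’s transmission time dwarfs the fixed per-exchange overhead, which held at early WiFi rates and stops holding at high rates (hence later changes like frame aggregation). A sketch with assumed timings (rough illustrative values, not exact 802.11 parameters):

    # Stop-and-wait efficiency: useful transmit time over total
    # exchange time. Timing values below are rough assumptions.

    def efficiency(frame_bits, rate_bps, overhead_s):
        t_data = frame_bits / rate_bps
        return t_data / (t_data + overhead_s)

    frame = 1500 * 8        # one MTU-sized frame, in bits
    overhead = 100e-6       # assumed RTS/CTS/SIFS/Ack overhead, ~100 us

    print(efficiency(frame, 1e6, overhead))     # ~0.99 at 1 Mb/s
    print(efficiency(frame, 600e6, overhead))   # ~0.17 at 600 Mb/s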
