[ih] TCP RTT Estimator

John Day jeanjour at comcast.net
Wed Mar 26 13:30:19 PDT 2025



> On Mar 26, 2025, at 15:23, vinton cerf <vgcerf at gmail.com> wrote:
> 
> see inline, adding don nielson 
> 
> On Wed, Mar 26, 2025 at 2:56 PM John Day via Internet-history <internet-history at elists.isoc.org> wrote:
>> I don’t quite understand about using TCP (or variants) here.
>> 
>> The PRNET consisted of the van and two repeaters to a gateway to the ARPANET. The repeaters were physical-layer relays that, I assume, did not interpret the packets.
> no, they were full up packet radios as I remember it.  

Right, I was assuming that the ‘station’ in the van was a full packet radio with some sort of link layer and then TCP over that to the gateway. My comment was that the repeaters were repeaters, not what Ethernet would call a ‘bridge.’

Did IP exist at that point?  I was assuming IP was ’78, which is close, so I wasn’t sure.

>> I presume that the PRNET had a link layer that did some error control.
> yes 

Makes sense.

>> The van to gateway is generating TCP datagrams over its ‘link layer’ protocol. (IP had not yet been created, or had it?)
> IP came about 1977, the first tests in 1976, TCP only. The Nov 1977 tests were full up TCP/IP
>> I presume that the ARPANET relayed the TCP packets as Type 3 packets, i.e., datagrams.
> Well, not necessarily. We used Type 3 for voice comms but not necessarily for TCP traffic 

Okay, minor difference. So an ARPANET ‘network layer’.

>> The PRNET-Gateway would have looked like a host to the IMP it was connected to. 
> yes 
>> The IMPs had their own hop-by-hop error control over the physical lines. (There weren’t really layers in the IMPs. At least that is what Dave Walden told me. But we can assume that this error control was sort of like a link layer.)
> The IMPs carried ARPANET packets (TCP/IP packets were "messages" to the IMP and were broken into ARPANET packets for transport). The IMPs had sequenced delivery, message reassembly, and RFNM flow control, except that this was not the case for Type 3 "uncontrolled" IMP packets.

Right.  That was what I was thinking.
>> 
>> The error characteristics of the Van-Gateway link layer were very different and much more lossy than those of the Host-IMP or IMP-IMP lines. (Some of the IMP-IMP lines had very low error rates.) There was one Van-Gateway link and several (n) Host-IMP and IMP-IMP links. The ARPANET would have been meeting the requirements for a network layer as described in my email. The only major difference in error rate was the van-gateway link. It would have made more sense (and been consistent with what is described in my email) to enhance the van-gateway link protocol to be robust enough to meet the error characteristics.
>> 
>> Using TCP would be in some sense trying to use ‘the tail to wag the dog,’ i.e., using an end-to-end transport protocol to compensate for one link that was not meeting the requirements of the network layer. This would have been much less effective. It is easy to see that errors in a smaller scope (the link layer) should not be propagated to layers of greater scope for recovery. (Unless their frequency is very low, as described previously, which this wasn’t.) This is what the architecture model requires.
> Generally that's a fair line of argument (i.e., try to bring links up to better quality by forward error correction, link-level retransmission, ARQ) for each of the "networks" in the Internet. The end/end TCP was mostly to deal with packets lost in the PRNET, in this case, because the PRNET was potentially multihop and packets could be lost due to lack of connectivity, timeouts, or loss of a packet radio. There is a 1978 IEEE Proceedings with a lot of the details of that time period for PRNET, SATNET, etc.

Okay, that makes sense.
>> 
>> Not sure what congestion control has to do with this. The TCP congestion solution is a pretty awful one. The implicit notification makes it predatory and assumes that lost messages are due to congestion, which they aren’t. (Is that the connection?) It works by causing congestion (some congestion avoidance strategy!), which generates many more retransmissions. A scheme that minimizes congestion events and retransmissions would be much preferred. (And one existed at the time.)
> Many congestion control methods have been introduced (think Sally Floyd, Van Jacobson) since the early and relatively naive TCP days.

Well, I think Jain’s group at DEC nailed the problem, at least as a first approximation. (We might extend it today based on what we have learned.) First was recognizing the need for ECN. That ensures the response is to congestion and not something else, and it limits the response to events in THAT layer. ECN is essential. The Jacobson approach is more a network solution than an internet solution. The other thing Jain’s group did was show that notification should begin when the average queue length is greater than or equal to 1. That is very early and would really reduce the probability of retransmissions, rather than creating congestion and causing more retransmissions.

The Floyd/Jacobson work hung on to the basic implicit-notification, cause-congestion model and tried to tweak it, which was a dead end. I should also mention that Jain’s group optimized for the knee of the curve, while Jacobson’s solution optimized for the edge of the cliff, where congestion collapse starts. Hence more retransmissions.

The other thing that Jain’s group showed was that congestion is a stochastic phenomenon and many ‘congestion’ events actually clear on their own. (This is why it is the average queue length that is used; it is a filter for those sorts of events.)
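
To make that concrete, here is a minimal sketch of the idea (not Jain’s actual algorithm; the names are mine, and a simple exponentially weighted average stands in for the DEC scheme’s average over the last regeneration cycle plus the current busy period):

from dataclasses import dataclass

@dataclass
class Packet:
    congestion_bit: bool = False

AVG_GAIN = 0.1        # EWMA gain; an assumption, not Jain's exact filter
MARK_THRESHOLD = 1.0  # begin notifying when the average queue length >= 1

class Router:
    """One outbound queue that marks packets instead of dropping them."""

    def __init__(self) -> None:
        self.avg_queue_len = 0.0

    def forward(self, pkt: Packet, queue_len: int) -> Packet:
        # Average the instantaneous queue length: transient spikes that
        # clear on their own barely move the average, so they trigger
        # no notification.
        self.avg_queue_len += AVG_GAIN * (queue_len - self.avg_queue_len)
        # Notify at the knee (avg >= 1), long before loss at the cliff,
        # so senders back off without any retransmissions.
        if self.avg_queue_len >= MARK_THRESHOLD:
            pkt.congestion_bit = True
        return pkt

if __name__ == "__main__":
    r = Router()
    # A brief burst that clears on its own, then a sustained backlog:
    for qlen in [5, 0, 0, 0] + [3] * 10:
        p = r.forward(Packet(), qlen)
        print(f"queue={qlen}  avg={r.avg_queue_len:.2f}  marked={p.congestion_bit}")

Running it, the burst of 5 never marks a packet (the average only reaches 0.5), while the sustained backlog of 3 starts marking after a few packets, which is exactly the filtering behavior described above.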

The other mistake they all made was putting congestion control in Transport. We know that ALL congestion control strategies deteriorate with increasing time-to-notify, and Transport maximizes time-to-notify. (It has the largest scope.) Everyone had always thought that congestion control would go in the network layers, where scope is bounded, supporting the Internet Transport Layer.
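
A back-of-the-envelope way to see why (notation mine, purely illustrative): if the arrival rate lambda exceeds the service rate mu while notification is still in flight, the queue grows unchecked for the entire time-to-notify, leaving an excess backlog of roughly

  Q_{\text{excess}} \approx (\lambda - \mu)\, T_{\text{notify}}

Transport’s T_notify is a full end-to-end round trip; a bounded-scope network layer’s is a fraction of that, and the excess backlog, and with it the loss and retransmissions, shrinks in proportion.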

Take care,
John

>> 
>> Take care,
>> John
>> 
>> > On Mar 26, 2025, at 13:10, Greg Skinner <gregskinner0 at icloud.com> wrote:
>> > 
>> > 
>> > On Mar 24, 2025, at 5:53 PM, John Day <jeanjour at comcast.net> wrote:
>> >> 
>> >> I would go further and say that this is a general property of layers. We tend to focus on the service provided by a layer, but the minimal service the layer expects from supporting layers is just as important.
>> >> 
>> >> The original concept of best-effort and end-to-end transport (circa 1972) was that errors in the network layer were from congestion and rare memory errors during relaying. Congestion research was already underway, and the results were expected to keep the frequency of lost packets fairly low, thus keeping retransmissions at the transport layer relatively low and allowing the transport layer to be reasonably efficient.
>> >> 
>> >> For those early networks, the link layer was something HDLC-like and so reliable. However, it was recognized that this did not imply that the link layers had to be reliable, only that they have an error rate well below the error rate created by the network layer itself. Given that there would be n link layers contributing to the additional error rate, where n is the diameter of the network, it is possible to estimate an upper bound that each link layer must meet to keep the error rate at the network layer low enough for the transport layer to be effective.
>> >> 
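To put a number on the bound just described (notation mine, figures illustrative): if each of the n links delivers a packet with residual error probability p, a packet survives all n with probability (1 - p)^n, so the loss the link layers add as seen at the network layer is

  P_{\text{net}} = 1 - (1 - p)^n \approx n\,p \qquad (p \ll 1)

and each link must therefore hold p below roughly P_{\text{target}}/n. A target of 10^{-3} across a network of diameter 10, for instance, requires about 10^{-4} per link.
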
>> >> There were soon examples of link layers that were datagram services but sufficiently reliable to meet those conditions, e.g., Ethernet. Later, to take up the packet radio topic, 802.11 is another good example, where what happens during the NAV (RTS, CTS, send data, get an Ack) is considered ‘atomic’ and, if the Ack is not received, it is assumed that the packet was not delivered. (As WiFi data rates have increased, this has been modified.) This seems to have the same property of providing a sufficiently low error rate to the network layer that transport remains effective. (Although I have to admit I have never come across typical goodput measurements for 802.11. They must exist, I just haven’t encountered them.)  ;-)
>> >> 
>> >> One fun thing to do with students and WiFi is to point out that the original use of the NAV makes WiFi a stop-and-wait protocol. Most textbooks use stop-and-wait as a simple introductory protocol and show that under most circumstances it would be quite slow and inefficient. Then, since they use WiFi every day, is it slow? No. Then why not? ;-)
>> >> 
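The arithmetic behind the answer (notation and numbers mine, purely illustrative): stop-and-wait utilization is

  U = \frac{1}{1 + 2a}, \qquad a = t_{\text{prop}} / t_{\text{frame}}

A 1500-byte frame at 54 Mb/s takes roughly 222 microseconds to transmit, while 100 m of air takes about 0.33 microseconds to cross, so a ≈ 0.0015 and U ≈ 99.7%. Stop-and-wait is only slow when a is large, i.e., on long, fast links; over a WLAN’s scope, a is tiny.
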
>> >> It leads to a nice principle of protocol design.
>> >> 
>> >> Take care,
>> >> John
>> >> 
>> > 
>> > Looking at this from another direction, there are several specialized versions of TCP. [1] Given the conditions experienced in the SF Bay Area PRNET, I can see how, if circumstances permitted, something that today we might call “TCP Menlo” or “TCP Alpine” might have been created that would have addressed the lossy-networks problem more directly. [2]
>> > 
>> > --gregbo
>> > 
>> > [1] https://en.wikipedia.org/wiki/TCP_congestion_control
>> > [2] https://computerhistory.org/blog/born-in-a-van-happy-40th-birthday-to-the-internet/
>> 
>> -- 
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history


