[ih] TCP RTT Estimator

Greg Skinner gregskinner0 at icloud.com
Mon Apr 14 16:22:16 PDT 2025


On Apr 12, 2025, at 11:33 AM, Jack Haverty via Internet-history <internet-history at elists.isoc.org> wrote:
> 
> The 1980s era of The Internet was explicitly a time of research and the "Internet Experiment". We tried to reflect that in the documents of the day, such as RFC 793.
> 
> The general principle was that the "on the wire" formats and meanings were standardized, so that any implementation of TCP could communicate with any other implementation. Everything else was, at best, a recommendation.
> 
> However, there were a lot of unanswered questions, such as the best way to deal with network errors like dropped, duplicated, or mangled datagrams, as discussed in IEN 69.
> 
> To enable research into different techniques, the specific algorithms for TCP functions such as retransmission timers and strategies were explicitly *not* standardized. That encouraged experimentation with different kinds of network environments and different ideas about how to cope with errors. It also permitted implementations of TCP with different goals. An implementer might pursue algorithms which minimized the load on their computer system. Or load on the network. Or rapidity of implementation. Or suitability for the specific user environment involved. Or ...
> 
> No one in 1981 had any significant experience with real-world TCP networks and their behavior under heavy loads. The ARPANET was the basic wide-area network in use as the substrate for The Internet, and the ARPANET provided only a reliable byte-stream service, which greatly simplified TCP's task.
> 
> IEN 177 says that the RSRE algorithm is the "current best procedure" and "will be included in the next ... specification". I remember talking with Jon and others about this. My recollection is that such an algorithm might be included as a "best practice" recommendation, not as a mandatory part of the standard. In 1981 we simply didn't know enough to nail down an algorithm, and there were lots of other unresolved issues that might be related (such as Type of Service, Policy Routing, etc.).
> 
> In 1981, The Internet was still very much an Experiment, but it was being pulled forward by its adoption as a DoD Standard, and it would later be rocketed forward by its adoption in non-military networking. I think many of those research questions were never answered. I recall that at one point we even opined that The Internet would be fine as long as we kept enough capacity in the circuits and switches to avoid overloads, while the research continued, seeking the "right" answers.
> 
> Jack Haverty

OK, that seems reasonable.  I did a little more digging and found that IEN 50 provides some “glue” by comparing several retransmission algorithms, using simulations and analytical techniques to reach its conclusions. [1]  It seems unfortunate to me that some of these IENs weren’t included as references in RFC 793.  But as far as supporting (higher packet loss) military networking went, some of these concerns could, in theory, have been addressed in MIL-STD-1778 [2].  Does anyone know why they weren’t? The US DoD sent people to the Internet Meetings of the late 1970s and early 1980s, so in theory they had enough information to incorporate any additional requirements into the military standard for TCP.
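
For anyone following along: my understanding is that the RSRE-derived estimator Jack mentions is essentially the example retransmission timeout procedure in RFC 793, Section 3.7 — keep an exponentially smoothed round-trip time, SRTT = ALPHA*SRTT + (1-ALPHA)*RTT, and clamp RTO = BETA*SRTT between a lower and an upper bound.  Here's a minimal sketch in C; the constants follow the RFC's suggested ranges, and the struct and function names are just illustrative, not taken from any particular implementation:

#include <stdio.h>

#define UBOUND 60.0   /* upper bound on the timeout, e.g. 1 minute */
#define LBOUND  1.0   /* lower bound on the timeout, e.g. 1 second */
#define ALPHA   0.9   /* smoothing factor; RFC 793 suggests .8 to .9 */
#define BETA    2.0   /* delay variance factor; RFC 793 suggests 1.3 to 2.0 */

struct rtt_state {
    double srtt;      /* smoothed round-trip time, in seconds */
};

/* Fold one new RTT measurement into the smoothed estimate and
   return the resulting retransmission timeout. */
double rto_update(struct rtt_state *s, double measured_rtt)
{
    s->srtt = ALPHA * s->srtt + (1.0 - ALPHA) * measured_rtt;
    double rto = BETA * s->srtt;
    if (rto < LBOUND) rto = LBOUND;
    if (rto > UBOUND) rto = UBOUND;
    return rto;
}

int main(void)
{
    struct rtt_state s = { .srtt = 1.0 };   /* arbitrary initial estimate */
    double samples[] = { 0.5, 0.6, 2.5, 0.7 };
    for (int i = 0; i < 4; i++)
        printf("rtt=%.2fs -> rto=%.2fs\n",
               samples[i], rto_update(&s, samples[i]));
    return 0;
}

Note how slowly a single long sample (the 2.5s one above) pulls the estimate up with ALPHA = .9 — which, as I read the earlier discussion, is exactly the kind of constant-tuning question that was deliberately left open to experimentation.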

--gregbo

[1] https://www.rfc-editor.org/ien/scanned/ien50.pdf
[2] http://everyspec.com/MIL-STD/MIL-STD-1700-1799/MIL-STD-1778_6676/

