[ih] Question on Flow Control

Craig Partridge craig at tereschau.net
Mon Dec 29 09:57:35 PST 2025


On Mon, Dec 29, 2025 at 12:07 PM John Day <jeanjour at comcast.net> wrote:

>
> As for TCP initially using Selective-repeat or SACK, do you remember what
> the TCP retransmission time out was at that time? It makes a difference.
> The nominal value in the textbooks is RTT + 4D, where D is the mean
> variation. There is an RFC that says if 4D < 1 sec, set it to 1 sec. which
> seems high, but that is what it says.
>
> Take care,
> John
>

Serious study of what the RTO should be didn't happen until the late
1980s.  Before that, it was rather ad hoc.

RFC 793 says RTO = min(upper bound, max(lower bound, beta * SRTT)), where
SRTT was an incremental moving average, SRTT = (alpha * SRTT) +
(1 - alpha) * (measured RTT).  But this leaves open all sorts of questions,
such as: what should alpha and beta be (RFC 793 suggests an alpha of .8 or
so and a beta of 1.3 to 2), and do you measure an RTT once per window
(BSD's approach) or once per segment (I think TENEX's approach)?  Not to
mention the retransmission ambiguity problem, which Lixia Z. and Raj Jain
discovered in 1985-6.  (If you are wondering why we didn't use the standard
deviation -- it required a square root, which was strictly a no-no in
kernels of that era; Van J. solved part of this issue by using the mean
deviation, which can be computed without a square root.)
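To make the two estimators concrete, here is a minimal sketch in Python of
the RFC 793 computation described above, plus Van Jacobson's later
mean-deviation variant; the constants are illustrative (alpha/beta from RFC
793's suggested ranges, gains of 1/8 and 1/4 and the bounds are assumptions
for the example, not values from this post):

```python
# RFC 793-style retransmission timer (illustrative constants).
ALPHA = 0.8     # smoothing gain; RFC 793 suggests .8 to .9
BETA = 2.0      # delay variance factor; RFC 793 suggests 1.3 to 2.0
UBOUND = 60.0   # upper bound on RTO in seconds (example value)
LBOUND = 1.0    # lower bound on RTO in seconds (example value)

def update_srtt(srtt, measured_rtt):
    """Incremental moving average: SRTT = alpha*SRTT + (1-alpha)*RTT."""
    return ALPHA * srtt + (1 - ALPHA) * measured_rtt

def rto(srtt):
    """RFC 793: RTO = min(UBOUND, max(LBOUND, BETA * SRTT))."""
    return min(UBOUND, max(LBOUND, BETA * srtt))

def jacobson_update(srtt, rttvar, measured_rtt):
    """Van Jacobson's estimator: track the mean deviation (no square
    root needed) and use RTO = SRTT + 4 * RTTVAR.  Gains of 1/8 and
    1/4 are the values commonly cited from his work (assumption)."""
    err = measured_rtt - srtt
    srtt = srtt + 0.125 * err
    rttvar = rttvar + 0.25 * (abs(err) - rttvar)
    return srtt, rttvar, srtt + 4 * rttvar

# One RTT sample of 1.0 s against an initial estimate of 0.5 s:
srtt = update_srtt(0.5, 1.0)
print(round(srtt, 3))       # smoothed RTT after the sample
print(round(rto(srtt), 3))  # resulting RFC 793 RTO
```

Note how the Jacobson form needs only an addition, a subtraction, an
absolute value, and shifts-by-constants per sample, which is what made it
acceptable in kernels of that era.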

This is an improvement on TCP v2 (which is silent on the topic) and IEN 15
(1976), which says to use 2 * the RTT estimate.

Ethernet and ALOHA were more explicit about this process, but both had far
easier problems, with well-bounded propagation delay (and, in ALOHA's case,
a propagation delay so long it swamped queueing times).

Part of the reason TCP was slow to realize the issues, I think, was (1)
the expectation that loss would be low (Dave Clark used to say that in the
1970s the notion was that loss was below 1%, which, in a time when windows
were often 4 segments, meant the RTO was used about 4% of the time); and
(2) failure to realize congestion collapse was an issue (when loss rates
soar to 80% or more, your RTO estimator really needs to be good or you make
congestion worse).  It is not chance that RTO issues came to a head as the
Internet was suffering congestion collapse.  I got pulled into the issues
(and helped Phil Karn solve retransmission ambiguity) because I was playing
with RDP, which had selective acks, and was seeing all sorts of strange
holes in my windows (as out-of-order segments were being acked) and trying
to figure out what to retransmit and when.

Craig


-- 
*****
Craig Partridge's email account for professional society activities and
mailing lists.
