[ih] TCP RTT Estimator
Jack Haverty
jack at 3kitty.org
Tue Mar 11 16:02:11 PDT 2025
Congestion control was a major issue in the ARPANET as it got larger,
and especially as it morphed into the Defense Data Network. A lot of
effort was put into analyzing, simulating, and implementing changes to
the internal mechanisms and algorithms behind the ARPANET's
"virtual circuit" service, analogous to TCP's role in The Internet.
IMHO there's a difference between designing an algorithm (such as
aspects of TCP) and designing a Network. The ARPANET and its clones
used pretty much the same algorithms, but there was a lot of effort put
into designing each particular network, and evolving it as user needs
changed. There was a large group at BBN called Network Analysis that
did much of that work.
Each network was designed to reflect the traffic requirements of the
users. Nodes were interconnected based on analysis of traffic patterns
and historical data. One ARPANET-clone, for a credit card processor,
was designed for one particular day - Black Friday. If it worked then,
it would work all year.
Circuit sizes were selected based on traffic flow peaks, with an
assumption that at any point in time some circuit might be out of
service. So circuits were somewhat "over-provisioned" in order to keep
traffic flowing even when some circuit was out of service. The network
knew how to divert traffic around failures.
Queuing theory indicated that delays were highly coupled to line
utilization. I don't recall the exact numbers, but if a circuit was
loaded beyond about 75%, occasional long delays would result. So
networks were designed to keep all the circuits below that level during
peak usage.
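
For a back-of-the-envelope feel for that coupling, here is a minimal
sketch using the textbook M/M/1 queue -- my illustration here, not
necessarily the model the Network Analysis group actually used. Mean
time in the system scales as 1/(1 - utilization), so delay doubles
between 50% and 75% load and climbs steeply from there:

    # Sketch only (assumes M/M/1; not BBN's actual traffic model):
    # mean time in system, in units of the mean service time, is
    # T = 1 / (1 - rho) for a circuit at utilization rho.

    def mm1_delay(rho):
        assert 0.0 <= rho < 1.0
        return 1.0 / (1.0 - rho)

    for rho in (0.50, 0.75, 0.90, 0.95):
        print(f"{rho:.0%} utilization -> {mm1_delay(rho):.0f}x service time")

At 75% load the average is already four times the service time (and
the variance grows with it), which squares with occasional long
delays appearing past that point.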
That design principle caused some problems with the bean counters of the
world. To them, 75% utilization meant that 25% of those expensive
circuit charges were being wasted. So we redefined "utilization" --
100% utilization was reached at 75% load. That meant occasionally a
network circuit could achieve 110% utilization (82.5% of raw capacity),
which made the bean counters especially happy. Win-win.
As far as I know, The Internet has never been designed in the same way
the ARPANET was. TCP and other protocols were designed; algorithms for
retransmission et al. were tested experimentally and documented. But
the Internet itself - the connectivity graph and the interconnection
capacities - were (and are?) decided by local operators of pieces of The
Internet. I don't know anything about how they make decisions about
network topology and such, or how that's changed over the decades of
Internet operation. Anyone else?
At one point I remember a meeting, sometime in the early 1980s, where
some bunch of us discussed "design" of the Internet. Most of the
ARPANET techniques weren't applicable -- how do you specify the "size"
and delay of the network paths that interconnect gateways? Telco
circuits were stable and predictable, and could be analyzed
mathematically. Analogous Internet connections were unpredictable and
mathematically intractable.
The conclusion at that meeting was that, while research continued to
find appropriate mechanisms, the Internet would operate acceptably if it
was always kept well below any kind of "saturation point". With enough
processing power, memory for buffers, and alternate paths, everything
would likely be mostly fine.
Someone asked what the performance specs of The Internet should be -
e.g., what packet drop rate would be "normal". After a little
discussion, someone said "How about 1%?" and that became the consensus
for "normal" behavior of the Internet. I remember changing my TCP to
report a network problem if a connection's drop rate (wildly
guesstimated as the retransmission rate) hit 1%.
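
For concreteness, here is a minimal sketch of that kind of check,
assuming the retransmission rate stands in for the drop rate; the
names and the warm-up threshold are illustrative, not the original
code:

    # Illustrative only -- not the original TCP implementation.
    # Flags a connection once its retransmission rate (a rough proxy
    # for drop rate) exceeds the 1% "normal" consensus figure.

    NORMAL_DROP_RATE = 0.01

    class ConnStats:
        def __init__(self):
            self.segments_sent = 0
            self.retransmissions = 0

        def on_send(self, is_retransmission):
            self.segments_sent += 1
            if is_retransmission:
                self.retransmissions += 1

        def over_threshold(self, min_samples=100):
            # Require a minimum sample size so a single early loss
            # doesn't trigger the report.
            if self.segments_sent < min_samples:
                return False
            return self.retransmissions / self.segments_sent > NORMAL_DROP_RATE

A stack would call on_send() for every segment transmitted and report
a network problem once over_threshold() turns true.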
Experiments could continue, seeking the "right answer" for Internet
algorithms and developing principles for Internet design. Are we there
yet...?
Jack Haverty
On 3/11/25 14:48, Barbara Denny via Internet-history wrote:
> I don't recall ever hearing, or reading, about TCP transport requirements from the underlying network but I wasn't there in the early days of TCP (70s).
> I have trouble thinking the problem with the congestion assumption wasn't brought up early but I certainly don't know.
> barbara
> On Tuesday, March 11, 2025 at 02:10:26 PM PDT, John Day <jeanjour at comcast.net> wrote:
>
> I would disagree. The Transport Layer assumes a minimal service from the layers below (actually all layers do). If the underlying layer doesn’t meet that normally, then measures are needed to bring the service up to the expected level. Given that the diameter of the net now is about 20 or so, and probably 5 or 6 back then, packet radio constituted a small fraction of the lower layers that the packet had to cross. Assuming packet radio didn’t have to do anything had the tail wagging the dog.
>
> Of course the example some would point to was TCP congestion control assuming lost packets were due to congestion. That was a dumb assumption and didn’t take a systems view of the problem. (Of course, it wasn’t the only dumb thing in that design; it also maximized retransmissions.)
>
> Take care,
> John Day
>
>> On Mar 11, 2025, at 17:02, Barbara Denny via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>> I do view packet radio as a stress test for the protocol(s). I think it is important to consider all the different dynamics that might come into play with the networks.
>> I still need to really read Jack's message but there were also military testbeds that had packet radio networks. I don't know what these users were trying to do. I was only involved if they experienced problems involving the network. My role was to figure out why and then get it fixed (with whatever contractor that was working that part of the system, including BBN).
>> barbara
>>
>> On Tuesday, March 11, 2025 at 01:49:25 PM PDT, Dave Crocker <dhc at dcrocker.net> wrote:
>>
>> I have always been curious how packet radio may, or may not, have impacted the calculations.
>>
>> At the IETF, the presentation about an actual implementation of IP over Avian Carrier noted that it provided an excellent test of this algorithm.
>>
>> d/
>> --
>> Dave Crocker
>>
>> Brandenburg InternetWorking
>> bbiw.net
>> bluesky: @dcrocker.bsky.social
>> mast: @dcrocker at mastodon.social