[ih] internet-history Digest, Vol 84, Issue 4

Noel Chiappa jnc at mercury.lcs.mit.edu
Wed May 21 07:31:31 PDT 2014


    > From: Guy Almes <galmes at tamu.edu>

    > Clarity on the degree to which the authors of the early TCP RFCs did
    > not recognize the importance of developing very good congestion control
    > algorithms.

I think it was as much (if not more) an issue of 'we didn't have the
capability to do one as good as Van's' as "recogniz[ing] the importance of
developing [a] very good" one.

To what degree that was the lack of a good understanding of the problem, and
to what degree simply that Van was better at control theory and analysis of
the system than the rest of us, is a good question, and one I don't have a
ready answer to. But if you look at something like "Why TCP Timers Don't
Work Well", it's clear we all just didn't understand what could be done.

We did understand that congestion control was important (although my
recollection is that we didn't clearly foresee the severe congestive
collapse which the ARPANET-based section of the Internet suffered not too
long before Van started working on the problem). Hence, we did put a certain
amount of thought into congestion control (Source Quench, the Nagle
algorithm, etc).
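
(The Nagle rule itself is simple enough to sketch in a few lines; this is
just a paraphrase of RFC 896, with invented names, not anyone's actual
implementation:)

  # Paraphrase of the RFC 896 (Nagle) send rule -- a sketch with
  # invented names, not any real stack's code.
  def nagle_may_send(pending_bytes, mss, bytes_unacked):
      if pending_bytes >= mss:
          return True     # a full-sized segment always goes out
      if bytes_unacked == 0:
          return True     # nothing in flight: send the small segment
      return False        # otherwise wait for an ACK, or for enough
                          # data to fill a segment

The point being that at most one small segment is ever outstanding, which
keeps a chatty application from flooding the net with tinygrams.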

My vague recollection is that in the very early days we were more focused on
flow control in the hosts, rather than congestion control in the network, but
I think we did understand that congestion in the network was also an issue
(hence SQ, etc).


The thing is that we understand all this so much better now - the importance
of congestion control, source algorithms to control it, etc - and we were
really groping in the dark back then.

The ARPANET (because of its effective VC nature, with flow and thus
congestion control built into the network itself) hadn't given us much in the
way of advance experience in this particular area. So, as with many things,
what is crystal clear in hindsight was rather obscured without the mental
frameworks, etc that we have now (e.g. F=ma).


    > Clarity on how/when it began to become evident that the naive
    > algorithms documented in the TCP RFCs and used in early testing would
    > themselves become the source of trouble.

Not just testing, but early service! (Q.v. the ARPANET-local congestive
collapse.)

But your wording makes it sound like they were positively incorrect. Well,
not really (to my eyes); mostly they simply were not _always effective_ at
controlling congestion (although they did generate some useless, duplicate
packets). But they were not positively defective, the way TFTP was, with
Sorcerer's Apprentice Syndrome:

  http://en.wikipedia.org/wiki/Sorcerer's_Apprentice_Syndrome
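
(For anyone who doesn't want to chase the link, the flaw is easy to
caricature; this is an illustration of the failure mode, with invented
names, not real TFTP code:)

  # Caricature of the TFTP flaw, with invented names -- not real TFTP
  # code. The original spec had the sender answer *every* ACK for
  # block n with DATA for block n+1, so a single duplicated ACK breeds
  # duplicate DATA, which breeds duplicate ACKs, and so on forever.
  def buggy_on_ack(n, send_data):
      send_data(n + 1)        # replies to duplicate ACKs too -> storm

  def fixed_on_ack(n, last_acked, send_data):
      if n > last_acked:      # ignore duplicate ACKs (the later fix)
          send_data(n + 1)
          return n
      return last_acked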

	Noel


