[ih] Flow Control in IP

Jack Haverty jack at 3kitty.org
Wed May 1 11:54:29 PDT 2024


On 4/30/24 15:00, Karl Auerbach via Internet-history wrote:
> I think Steve Crocker answered your question about RFNM and why we 
> needed to deal with it across our cryptographic barrier.

One aspect of the Arpanet's flow control is often overlooked.   The IMPs 
were connected to the Hosts through a serial wire interface, defined by 
the 1822 specification.  As a last resort to control flow from one of 
its Hosts, an IMP could simply turn off the clock on that serial line.  
That would block all traffic from that Host, on all connections, to all 
destinations.   It was "hardware flow control".

RFNMs provided a mechanism to avoid such blocking, and led to "RFNM 
Counting" as an element of the Host's code.   In 1977-1978, I was 
writing TCP for Unix, and had never before written code that interacted 
directly with an IMP.   Occasionally in my testing, all traffic flow
would stop.  Investigation revealed that the IMP was blocking all 
traffic, and I learned it was because my code was not "counting RFNMs".

It may have changed over time with new IMP releases, but at the time any 
Host could have no more than 8 "messages in flight" across the Arpanet.  
If you tried to send the 9th message, the IMP would block all traffic 
from your host until whatever was causing the congestion had cleared.

For a TCP, that behavior also meant that all traffic on all connections 
would be blocked.  To avoid such events, your code could "count RFNMs": 
increment a counter when a message is sent to the IMP, and decrement it 
when the corresponding RFNM comes back.  Never send a 9th message while 
8 are still outstanding; instead, reflect that flow control back up to 
whatever application is using that TCP connection.   How to do that was 
of course different for each computer and operating system.
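
Just to make the bookkeeping concrete, here is a rough sketch in C.  
Everything in it -- imp_send, imp_transmit, rfnm_received, the struct, 
the stubs -- is invented for this note, not taken from the actual 
1977-78 Unix TCP:

    /* Hypothetical sketch of RFNM counting -- not any real implementation.
     * imp_transmit() stands in for whatever hands a message to the 1822
     * interface; struct message is just a placeholder. */
    #define MAX_IN_FLIGHT 8                /* the IMP's limit, per the text */

    struct message { int len; char *data; };   /* placeholder, not 1822 format */

    static int msgs_in_flight = 0;         /* sent, but RFNM not yet received */

    static void imp_transmit(struct message *m) { (void)m; /* stub */ }

    /* TCP output path: returns 0 if the message was handed to the IMP,
     * -1 if the caller must hold it (and push that back-pressure up to
     * the application using the connection). */
    int imp_send(struct message *m)
    {
        if (msgs_in_flight >= MAX_IN_FLIGHT)
            return -1;                     /* this would be the 9th message */
        msgs_in_flight++;
        imp_transmit(m);
        return 0;
    }

    /* Called when a RFNM arrives from the IMP for an earlier message. */
    void rfnm_received(void)
    {
        if (msgs_in_flight > 0)
            msgs_in_flight--;              /* one more slot is free */
    }

The real work, of course, was in reflecting that -1 back up through the 
operating system to the application.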

In the earlier days of the Arpanet, I think that the "messages in 
flight" limit was one, so that RFNM really did mean "Ready For Next 
Message".  If you tried to send a message before the IMP was Ready, the 
IMP would shut off the clock to keep you quiet.  But as the IMP 
evolved, the limit had grown to 8 by the time TCP appeared.

Of course, not all networks had the ability to do hardware flow 
control.  This impacted systems such as Gateways (aka routers now).   A 
Gateway attached to the Arpanet had to "count RFNMs" as well, if it 
wanted to avoid having all of its traffic to all hosts and other 
gateways on the Arpanet blocked.  But it often had no mechanism 
available to reflect that flow control "back" across some other network.

If a datagram arrived from an Ethernet, for example, destined to be sent 
to the Arpanet, but sending it would exceed the 8-in-flight limit, the 
Gateway had only two choices: put the datagram in a buffer, if any were 
available, or discard it, and possibly send back a "Source Quench" ICMP 
datagram to the sender, hoping that the sender would "slow down".   
Source Quench was a placeholder for some kind of effective flow control 
mechanism at the IP level, to be filled in once someone figured out how 
to do such a thing.
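
Again purely as an invented sketch (none of these names correspond to 
any real gateway code of the era), the Gateway's decision amounts to 
something like:

    /* Hypothetical sketch of the Gateway's choices -- invented names only.
     * msgs_in_flight counts Arpanet messages awaiting RFNMs, as in the
     * TCP sketch above. */
    #define MAX_IN_FLIGHT 8

    struct datagram { int len; char *data; };   /* placeholder */

    static int msgs_in_flight;

    static void imp_transmit(struct datagram *d)        { (void)d; } /* stub */
    static int  enqueue_for_arpanet(struct datagram *d) { (void)d; return -1; }
                                                 /* stub: 0 if buffered */
    static void send_source_quench(struct datagram *d)  { (void)d; }
                                                 /* stub: ICMP Source Quench */

    void forward_to_arpanet(struct datagram *d)
    {
        if (msgs_in_flight < MAX_IN_FLIGHT) {
            msgs_in_flight++;                /* within the 8-in-flight limit */
            imp_transmit(d);
        } else if (enqueue_for_arpanet(d) == 0) {
            /* buffered; it goes out later, when a RFNM frees a slot */
        } else {
            send_source_quench(d);           /* hope the sender slows down */
            /* ...and the datagram is simply discarded */
        }
    }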

Of course the Arpanet is long gone now.   Personally, I don't know if 
any of today's pieces of the Internet have any kind of "hardware flow 
control" or IP flow control available, or, if so, how they use it.   The
same choices still exist though:  keep datagrams in a buffer or discard 
them.   Memory has gotten a lot less expensive over the decades.  We 
even have a new named phenomenon: "Buffer Bloat", to replace "RFNM 
Counting" in the networking lexicon.

Jack Haverty
