[ih] Global congestion collapse

David L. Mills mills at udel.edu
Wed Dec 15 08:48:36 PST 2004


Joe,

RED has always been a problem for me. That's like shooting a load of 
buckshot at a herd of elephants and tigers and hoping you hit an 
elephant. My agenda was to find the elephants first and then target them.

Dave

Joe Touch wrote:

>
>
> David L. Mills wrote:
>
>> Perry,
>>
>> Not so fast. Steve Wolff of NSF and I had a nasty little secret we 
>> did not tell the NSFnet maintenance crew who could never keep a 
>> secret. I built in priority queueing and preemption in the fuzzball 
>> routers. The former wiretapped the telnet port and made it just below 
>> NTP on the priority scale. We put mail at the bottom, just below ftp. 
>> A lot of telnet users stopped complaining because they thought we 
>> "fixed" the network.
>>
>> The other thing was to shoot the elephants. When a new packet arrived 
>> and no buffer space was available, the output queues were scanned for 
>> the biggest elephant (the source IP address with the largest total 
>> byte count across all queues), and its biggest packet was killed. 
>> Gunshots continued until either the arriving packet got shot or there 
>> was enough room to save it. It all worked gangbusters and the poor 
>> ftpers never found out.
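
A rough reconstruction of that preemption policy, sketched in C. The data
structures and helper names here are invented for illustration; only the
policy follows the description above: find the source with the largest
total byte count across all output queues and shoot its biggest packet,
with the arriving packet itself a candidate victim.

#include <stddef.h>
#include <stdint.h>

struct pkt {
    uint32_t    src;    /* source IP address          */
    size_t      len;    /* total byte count           */
    struct pkt *next;   /* next packet in this queue  */
};

struct iface {
    struct pkt *outq;   /* head of one output queue   */
};

/* Total bytes currently queued from one source, across every output queue. */
static size_t load_of(struct iface *ifs, size_t n, uint32_t src)
{
    size_t bytes = 0;
    for (size_t i = 0; i < n; i++)
        for (struct pkt *p = ifs[i].outq; p; p = p->next)
            if (p->src == src)
                bytes += p->len;
    return bytes;
}

/* One "gunshot": return the biggest packet of the biggest elephant, i.e.
 * of the source with the largest total byte count.  The arriving packet
 * counts toward its source's load and can itself be the one that gets shot. */
static struct pkt *pick_victim(struct iface *ifs, size_t n, struct pkt *arriving)
{
    uint32_t elephant = arriving->src;
    size_t   heaviest = load_of(ifs, n, arriving->src) + arriving->len;

    for (size_t i = 0; i < n; i++)
        for (struct pkt *p = ifs[i].outq; p; p = p->next) {
            size_t load = load_of(ifs, n, p->src);
            if (p->src == arriving->src)
                load += arriving->len;
            if (load > heaviest) {
                heaviest = load;
                elephant = p->src;
            }
        }

    struct pkt *victim = (arriving->src == elephant) ? arriving : NULL;
    for (size_t i = 0; i < n; i++)
        for (struct pkt *p = ifs[i].outq; p; p = p->next)
            if (p->src == elephant && (victim == NULL || p->len > victim->len))
                victim = p;
    return victim;
}

In use, the router would keep calling pick_victim() and dropping the result
until the arriving packet either fits in the freed buffer space or is itself
returned as the victim and discarded.
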
>
>
> RED would benefit from two variants - per packet (when per-packet ops 
> are the bottleneck) and per-byte weighting, though it doesn't seem to 
> be described that way much. This sounds a lot like per-byte (the more 
> common case now anyway), except that RED is statistical (everyone gets 
> slammed, proportional to their load) and this hits each in series 
> (largest user first, then next-largest when largest backs off, etc.). 
> Was there ever any backlash (software oscillation or people 
> complaining) from that?
>
> Joe
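
For comparison, the per-byte weighting Joe mentions can be sketched as a
small twist on RED's drop decision. This is a simplified illustration (no
EWMA update of the average and no count correction from the original RED
paper), not any particular router's implementation, and the parameter names
are assumptions:

#include <stdbool.h>
#include <stdlib.h>

struct red {
    double avg;             /* smoothed (EWMA) queue size, in bytes        */
    double min_th, max_th;  /* thresholds on the average, in bytes         */
    double max_p;           /* maximum drop/mark probability               */
    double mean_pktsize;    /* scaling constant for byte mode              */
};

/* Decide whether to drop (or ECN-mark) this packet. */
static bool red_drop(const struct red *r, size_t pktlen, bool byte_mode)
{
    if (r->avg < r->min_th)
        return false;                       /* below the ramp: never drop  */
    if (r->avg >= r->max_th)
        return true;                        /* above the ramp: always drop */

    /* Linear ramp between the thresholds. */
    double p = r->max_p * (r->avg - r->min_th) / (r->max_th - r->min_th);

    /* Per-byte weighting: bigger packets, and hence byte-heavy senders,
     * see a proportionally larger drop probability; per-packet mode
     * charges every packet the same regardless of size. */
    if (byte_mode)
        p *= (double)pktlen / r->mean_pktsize;

    return (double)rand() / RAND_MAX < p;
}

In packet mode every packet in the ramp region sees the same probability; in
byte mode byte-heavy senders are hit proportionally harder, which is the
statistical analogue of shooting the biggest elephant's biggest packet.
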




