[ih] Internet History - from Community to Big Tech?

Jack Haverty jack at 3kitty.org
Sat Mar 30 13:33:53 PDT 2019


The Unix-box problem was the only one I recall.  However, we did move
the testing world onto a separate LAN so anything bad that some random
box did wouldn't affect everyone.  So it may have happened, but we didn't
care.   Our mission was to get the software working....

/Jack

On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote:
> Dear Jack,
>
> I wonder if you had that problem with a mix of 3COM, NE2000, and
> NE2000-compatible cards on the same network.
> Having started with Novell & 3COM cards, all on coax, we found that we
> started getting timeouts when we added more cheap NE2000-compatible cards.
> We did the same thing with oscilloscopes/analysers and tweaked parameters
> to work around this problem.
> Warm regards,
>
> Olivier
>
> On 30/03/2019 02:57, Jack Haverty wrote:
>> I can confirm that there was at least one Unix vendor that violated the
>> Ethernet specs (10 Mb/s).  I was at Oracle in the early '90s, where we had
>> at least one of every common computer so that we could test software.
>>
>> While testing, we noticed that when one particular type of machine was
>> active doing a long bulk transfer, all of the other traffic on our LAN
>> slowed to a crawl.   I was a hardware guy in a software universe, but I
>> managed to find one other hardware type, and we scrounged up an
>> oscilloscope, and then looked closely at the wire and at the spec.
>>
>> I don't remember the details, but there was some timer that was supposed
>> to have a certain minimum value and that Unix box was consistently
>> violating it.  So it could effectively seize the LAN for as long as it
>> had traffic.
>>
>> Sorry, I can't remember which vendor it was.  It might have been Sun, or
>> maybe one specific model/vintage, since we had a lot of Sun equipment
>> but hadn't noticed the problem before.
>>
>> I suspect there are a lot of such "standards" that are routinely violated
>> in the network.   Putting something on paper and declaring it mandatory
>> doesn't make it true.  Personally, I never saw much rigorous certification
>> testing or enforcement (not just of Ethernet), and the general
>> "robustness" designs can hide bad behavior.
>>
>> /Jack Haverty
>>
>>
>> On 3/29/19 5:40 PM, John Gilmore wrote:
>>> Karl Auerbach <karl at cavebear.com> wrote:
>>>> I recently had someone confirm a widely held belief that Sun
>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces
>>>> to have a winning bias against Ethernet machines that adhered to the
>>>> IEEE/DIX Ethernet timer values.  Those of us who tended to work with
>>>> networked PC platforms were well aware of the effect of putting a Sun
>>>> onto the same Ethernet: what had worked before stopped working, but
>>>> the Suns all chatted among themselves quite happily.
>>> Are we talking about 10 Mbit Ethernet, or something later?
>>>
>>> I worked at Sun back then.  Sun was shipping products with Ethernet
>>> before the IBM PC even existed.  Sun products used standard Ethernet
>>> chips.  Some of those chips were super customizable via internal
>>> registers (I have a T1 card that uses an Ethernet chip with settings
>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to
>>> meet the standard specs.  What evidence is there of any non-standard
>>> settings?
>>>
>>> What Sun did differently was that we tuned the implementation so it
>>> could actually send and receive back-to-back packets, at the minimum
>>> specified inter-packet gaps.  By building both the hardware and the
>>> software ourselves (like Apple today, and unlike Microsoft), we were
>>> able to work out all the kinks to maximize performance.  We could
>>> improve everything: software drivers, interrupt latencies, TCP/IP
>>> stacks, DMA bus arbitration overhead.  Sun was the first to do
>>> production shared disk-drive access over Ethernet, to reduce the cost of
>>> our "diskless" workstations.  In sending 4Kbyte filesystem blocks among
>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three
>>> BACK-TO-BACK Ethernet packets.
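
For reference, a rough sketch of the fragmentation arithmetic behind those
three frames.  The constants below are assumptions on my part (standard
1500-byte Ethernet MTU, 20-byte IP header with no options, 8-byte UDP
header); the RPC/NFS headers that would ride along with the data are
ignored:

    # Why a 4 KB filesystem block sent as one UDP datagram comes out as
    # three Ethernet frames: each IP fragment carries at most MTU - IP_HDR
    # bytes of payload, and offsets must fall on 8-byte boundaries.
    MTU = 1500                               # assumed Ethernet MTU
    IP_HDR = 20                              # IP header, no options
    UDP_HDR = 8
    NFS_BLOCK = 4096                         # the 4-Kbyte filesystem block

    per_frag = MTU - IP_HDR                  # 1480 bytes, a multiple of 8
    total = UDP_HDR + NFS_BLOCK              # 4104 bytes of IP payload
    frags = -(-total // per_frag)            # ceiling division
    sizes = [min(per_frag, total - i * per_frag) for i in range(frags)]
    print(frags, sizes)                      # -> 3 [1480, 1480, 1144]
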
>>>
>>> Someone, I think it was Van Jacobson, did some early work on maximizing
>>> Ethernet thruput, and reported it at USENIX conferences.  His
>>> observation was that to get maximal thruput, you needed 3 things to be
>>> happening absolutely simultaneously: the sender processing & queueing
>>> the next packet; the Ethernet wire moving the current packet; the
>>> receiver dequeueing and processing the previous packet.  If any of these
>>> operations took longer than the others, then that would be the limiting
>>> factor in the thruput.  This applies to half-duplex operation (only one
>>> side transmits at a time); the end-node processing requirement doubles
>>> if you run full-duplex data in both directions (on more modern Ethernets
>>> that support that).
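
To make the bottleneck point concrete, here is a toy model of those three
overlapped stages.  The host-side times are invented numbers for
illustration, not measurements of any actual hardware; only the wire time
follows from the 10 Mb/s rate and the 9.6 usec interframe gap:

    # Steady-state throughput of a fully overlapped pipeline is set by the
    # slowest of the three stages: sender prep, wire time, receiver drain.
    FRAME_BITS = 1518 * 8                    # maximum-size Ethernet frame
    t_wire = FRAME_BITS / 10e6 + 9.6e-6      # serialization + interframe gap
    t_send = 0.8e-3                          # hypothetical sender time per frame
    t_recv = 2.0e-3                          # hypothetical receiver time per frame

    bottleneck = max(t_send, t_wire, t_recv)
    print("throughput ~ %.1f Mb/s" % (FRAME_BITS / bottleneck / 1e6))
    # With these numbers the receiver caps throughput near 6 Mb/s; get it
    # under ~1.22 ms per frame and the wire itself becomes the limit.
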
>>>
>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his
>>> favorite things to work on was network performance.  Here's one of his
>>> signature blocks from 1996:
>>>
>>>   Yow! 11.26 MB/s remote host TCP bandwidth & ////
>>>   199 usec remote TCP latency over 100Mb/s   ////
>>>   ethernet.  Beat that!                     ////
>>>   -----------------------------------------////__________  o
>>>   David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ ><
>>>
>>> My guess is that the ISA cards of the day had never even *seen*
>>> back-to-back Ethernet packets (with only the 9.6 usec interframe spacing
>>> between them), so of course they weren't tested to handle them.  The
>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so
>>> most cards just had one or two packet buffers.  And if the CPU didn't
>>> immediately grab one of those received buffers, then the next packet
>>> would get dropped for lack of a buffer to put it in.  In sending, you
>>> had to have the second buffer queued long before the inter-packet gap, or
>>> you wouldn't send with minimum packet spacing on the wire.  Most PC
>>> operating systems couldn't do that.  And if your card took longer than
>>> the standard 9.6 usec inter-packet gap to start transmitting after the
>>> carrier dropped, then any Sun waiting to transmit would beat your card
>>> to the wire, deferring your card's transmission.
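
The drain window described above is tight.  A rough calculation (assuming
10 Mb/s operation and the 9.6 usec gap; preamble ignored): a card with a
single receive buffer has to empty it before the next back-to-back frame
finishes arriving, which can be as little as about 61 usec away:

    # Time from the end of one frame to the end of the next back-to-back
    # frame -- the window a one-buffer card has to free its receive buffer.
    GAP_US = 9.6                             # interframe gap at 10 Mb/s

    def drain_window_us(next_frame_bytes):
        return GAP_US + next_frame_bytes * 8 / 10.0   # 0.8 usec per byte

    print("after a min-size (64 B) frame:   %.1f usec" % drain_window_us(64))
    print("after a max-size (1518 B) frame: %.1f usec" % drain_window_us(1518))
    # -> 60.8 usec and 1224.0 usec respectively
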
>>>
>>> You may have also been seeing the "Channel capture effect"; see:
>>>
>>>   https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect
>>>
>>> 	John
>>>
>


