[ih] Internet History - from Community to Big Tech?

Richard Bennett richard at bennett.com
Sun Mar 31 20:13:45 PDT 2019


Heh, the hub-and-spoke redesign came from the IEEE 802.3 Low Cost LAN task group, of which I was a member. Apart from NICs, the economics of coax Ethernet were dominated by labor, wire, transceivers, and fault isolation, all of which were much cheaper with twisted pair, hub-and-spoke wiring, and RJ-45 connectors. 

> On Mar 31, 2019, at 9:08 PM, Brian E Carpenter <brian.e.carpenter at gmail.com> wrote:
> 
> Yes, but money was an issue and daisy-chained coax really was Cheaper.
> 
> Money ceased to be an issue when 100% of the CERN physics community 
> (a) insisted on a network connection at every desk, and
> (b) had experienced outages due to some clown breaking
> the Cheapernet daisy chain.
> 
> When those conditions were met (about the end of 1995, I think),
> I went to management and got the budget to recable everywhere
> with UTP-5. The biggest and easiest budget request I ever made.
> 
> Regards
>   Brian
> 
> On 01-Apr-19 15:37, Richard Bennett wrote:
>> 3Com’s office on Kifer Rd was wired in a hub-and-spoke configuration for thin coax, with each user’s cubicle wired directly to a multi-port repeater. This made it difficult for a user to crash the department’s network by unplugging one of their BNC connectors.  Bus was always a mistake, and not just because offices are designed for hub-and-spoke power and phone wiring. But wireless is better now that the speeds are up, so Aloha had it right all along.  
>> 
>>> On Mar 31, 2019, at 6:25 PM, Brian E Carpenter <brian.e.carpenter at gmail.com> wrote:
>>> 
>>> I don't know the numbers but we had quite a lot at CERN, and several
>>> technicians who became experts at adding a tap. I think Boeing had
>>> a lot too, and Microsoft.
>>> 
>>> I do agree that CheaperNet made scaling up a lot easier and made
>>> everything more user-proof. Although we did once have a user (i.e.
>>> a physicist) who discovered ungrounded screens on a bunch of CheaperNet
>>> coax cables, and soldered them all to ground. Of course that created
>>> numerous ground loops and broke everything in his area, since the coax
>>> screen should only be grounded at one end. (There were some areas of
>>> CERN where you could measure ground currents of 30 or 40 amps AC, due
>>> to some very, very big electromagnets that inevitably unbalanced
>>> the 3-phase system.)
>>> 
>>> That particular user later became head of CERN's IT Division.
>>> 
>>> Regards
>>>   Brian Carpenter
>>> 
>>> On 01-Apr-19 11:56, Scott O. Bradner wrote:
>>>> hmm - we had a few hundred at Harvard at the peak, so I find it hard to believe that there were only a thousand worldwide
>>>> 
>>>> Scott
>>>> 
>>>>> On Mar 31, 2019, at 6:43 PM, Richard Bennett <richard at bennett.com> wrote:
>>>>> 
>>>>> Ethernet is a catchy name, I’ll give you that.
>>>>> 
>>>>> 3Com quickly discovered that golden rod cable was a huge mistake and replaced it with thin coax, integrated transceivers, and BNC connectors. In reality, the number of golden rod installations in the whole world never numbered much more than a thousand; that was good, because there was no way to upgrade them to higher speeds.
>>>>> 
>>>>>> On Mar 31, 2019, at 4:23 PM, Scott O. Bradner <sob at sobco.com> wrote:
>>>>>> 
>>>>>> might have been crap but there sure was a lot of it, and it worked well enough to dominate the LAN space over
>>>>>> token ring
>>>>>> 
>>>>>> 10BaseT (and the multiple pre-standard twisted pair Ethernet systems) expanded the coverage hugely, but
>>>>>> a lot of the original yellow cable had already been deployed
>>>>>> 
>>>>>> Scott
>>>>>> 
>>>>>>> On Mar 31, 2019, at 5:10 PM, Richard Bennett <richard at bennett.com> wrote:
>>>>>>> 
>>>>>>> Ethernet was total crap before 10BASE-T, for many reasons. The original 3Com card, the 3C501, used a SEEQ 8001 chip that couldn’t handle incoming packets with less than an 80 ms interframe gap, because it had to handle an interrupt and complete a DMA operation for the first packet before it could start receiving the next one. The 3Com server NIC, the 3C505, had an embedded CPU and the first chip that actually supported the standard, the Intel 82586. The server card added delay when it knew it was talking to a 3C501, so as to avoid making it choke.
>>>>>>> 
>>>>>>> AMD’s first Ethernet chip seeded its random number generator (needed for the truncated binary exponential backoff algorithm) at power-up, so a brief power failure caused stations to synchronize, making collision resolution impossible.
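>>>>>>> 
>>>>>>> To make the failure mode concrete, here is a toy simulation, not the actual AMD silicon: after collision n, a station picks a slot in [0, 2^min(n,10)), and two stations whose generators were seeded identically at power-up make identical picks, so every retry collides again until the 16-attempt limit runs out.
>>>>>>> 
>>>>>>>     /* Toy model of truncated binary exponential backoff with
>>>>>>>      * identical power-up seeds; the tiny LCG stands in for the
>>>>>>>      * chip's RNG. Same seed => same slot pick => collide forever. */
>>>>>>>     #include <stdio.h>
>>>>>>> 
>>>>>>>     static unsigned prng(unsigned *state)
>>>>>>>     {
>>>>>>>         *state = *state * 1103515245u + 12345u;
>>>>>>>         return (*state >> 16) & 0x7fff;
>>>>>>>     }
>>>>>>> 
>>>>>>>     static unsigned backoff_slot(unsigned *state, int attempt)
>>>>>>>     {
>>>>>>>         int k = attempt < 10 ? attempt : 10;  /* window truncates at 2^10 */
>>>>>>>         return prng(state) % (1u << k);
>>>>>>>     }
>>>>>>> 
>>>>>>>     int main(void)
>>>>>>>     {
>>>>>>>         unsigned a = 12345, b = 12345;        /* identical power-up seeds */
>>>>>>>         for (int attempt = 1; attempt <= 16; attempt++) {
>>>>>>>             unsigned sa = backoff_slot(&a, attempt);
>>>>>>>             unsigned sb = backoff_slot(&b, attempt);
>>>>>>>             printf("attempt %2d: A waits %4u, B waits %4u -> %s\n",
>>>>>>>                    attempt, sa, sb, sa == sb ? "collide again" : "resolved");
>>>>>>>             if (sa != sb) return 0;
>>>>>>>         }
>>>>>>>         printf("all 16 attempts collided: frames dropped\n");
>>>>>>>         return 0;
>>>>>>>     }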
>>>>>>> 
>>>>>>> Most packets are generated by servers, so clients close to a server had higher priority than those farther away, an effect of propagation: the nearer station senses the server’s carrier drop sooner, finishes its interframe gap earlier, and wins the wire while the farther station has to defer.
>>>>>>> 
>>>>>>> All in all, coax Ethernet was a horrible design in practice.
>>>>>>> 
>>>>>>> The first really commendable Ethernet chips were the 3Com 3C509 parallel tasking adapters. They took a linked list of buffer fragments from the driver and delivered packets directly to user space from a smallish FIFO on the chip. These cards also predicted interrupt latency, firing off the receive logic in the driver at a time calculated to have the driver running at or before end-of-frame. This made for some fun code when the driver was ready for a packet that wasn’t quite there yet.
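>>>>>>> 
>>>>>>> A loose sketch of what that “fun code” might look like; the names (RX_COMPLETE, rx_status) and the simulated status word are hypothetical, not the real 3C509 programming model. The point is just that a latency-predicted interrupt can fire before end-of-frame, so the receive path spins briefly until the tail of the frame lands in the FIFO.
>>>>>>> 
>>>>>>>     /* Hypothetical early-interrupt receive path. The card raised the
>>>>>>>      * interrupt at *predicted* end-of-frame, so the handler may start
>>>>>>>      * before the last bytes arrive and must poll a completion bit. */
>>>>>>>     #include <stdio.h>
>>>>>>> 
>>>>>>>     #define RX_COMPLETE 0x8000u         /* hypothetical "frame fully in FIFO" bit */
>>>>>>> 
>>>>>>>     static volatile unsigned rx_status; /* stands in for an I/O status register */
>>>>>>> 
>>>>>>>     static void rx_interrupt(void)
>>>>>>>     {
>>>>>>>         /* Fired at predicted EOF: spin a few bus cycles until the tail lands. */
>>>>>>>         while (!(rx_status & RX_COMPLETE))
>>>>>>>             ;
>>>>>>>         printf("frame complete, copying out of the FIFO\n");
>>>>>>>     }
>>>>>>> 
>>>>>>>     int main(void)
>>>>>>>     {
>>>>>>>         rx_status = RX_COMPLETE;        /* pretend the frame finished in time */
>>>>>>>         rx_interrupt();
>>>>>>>         return 0;
>>>>>>>     }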
>>>>>>> 
>>>>>>> The 509 hit the market in 1994, at a time when the tech press liked to run speed and CPU-load tests of Ethernet cards and the chip market was dominated by 3Com and Intel. IIRC, these cards ran at something like 98% of wire speed.
>>>>>>> 
>>>>>>> The parallel tasking design allowed 3Com to register some very valuable patents that outlived the company. Patent trolls ended up exploiting them, which allowed expert witnesses to collect some nice fees until the patents expired.
>>>>>>> 
>>>>>>> 10BASE-T did away with CSMA/CD in favor of a full-duplex transmission mode that buffered packets in the hub, did flow control, and ultimately allowed multiple frames to traverse switches at the same time, kinda like SDMA and beamforming in wireless systems. Hilariously, experts from Ivy League universities working for patent trolls continued to claim Ethernet was a half-duplex, CSMA/CD system until the bitter end.
>>>>>>> 
>>>>>>> RB
>>>>>>> 
>>>>>>>> On Mar 30, 2019, at 2:33 PM, Jack Haverty <jack at 3kitty.org> wrote:
>>>>>>>> 
>>>>>>>> The Unix-box problem was the only one I recall.  However, we did move
>>>>>>>> the testing world onto a separate LAN so anything bad that some random
>>>>>>>> box did wouldn't affect everyone.  So it may have happened but we didn't
>>>>>>>> care.   Our mission was to get the software working....
>>>>>>>> 
>>>>>>>> /Jack
>>>>>>>> 
>>>>>>>> On 3/30/19 12:41 AM, Olivier MJ Crépin-Leblond wrote:
>>>>>>>>> Dear Jack,
>>>>>>>>> 
>>>>>>>>> I wonder if you had that problem with a mix of 3Com, NE2000, and
>>>>>>>>> NE2000-compatible cards on the same network.
>>>>>>>>> Having started with Novell & 3Com cards, all on coax, we found that we
>>>>>>>>> started getting timeouts when we added more cheap NE2000-compatible cards.
>>>>>>>>> We did the same thing with oscilloscopes/analysers and tweaked parameters
>>>>>>>>> to get around this problem.
>>>>>>>>> Warm regards,
>>>>>>>>> 
>>>>>>>>> Olivier
>>>>>>>>> 
>>>>>>>>> On 30/03/2019 02:57, Jack Haverty wrote:
>>>>>>>>>> I can confirm that there was at least one Unix vendor that violated the
>>>>>>>>>> Ethernet specs (10 Mb/s).  I was at Oracle in the early 90s, where we had
>>>>>>>>>> at least one of every common computer so that we could test software.
>>>>>>>>>> 
>>>>>>>>>> While testing, we noticed that when one particular type of machine was
>>>>>>>>>> active doing a long bulk transfer, all of the other traffic on our LAN
>>>>>>>>>> slowed to a crawl.   I was a hardware guy in a software universe, but I
>>>>>>>>>> managed to find one other hardware type, and we scrounged up an
>>>>>>>>>> oscilloscope, and then looked closely at the wire and at the spec.
>>>>>>>>>> 
>>>>>>>>>> I don't remember the details, but there was some timer that was supposed
>>>>>>>>>> to have a certain minimum value and that Unix box was consistently
>>>>>>>>>> violating it.  So it could effectively seize the LAN for as long as it
>>>>>>>>>> had traffic.
>>>>>>>>>> 
>>>>>>>>>> Sorry, I can't remember which vendor it was.  It might have been Sun, or
>>>>>>>>>> maybe one specific model/vintage, since we had a lot of Sun equipment
>>>>>>>>>> but hadn't noticed the problem before.
>>>>>>>>>> 
>>>>>>>>>> I suspect there are a lot of such "standards" that are routinely violated
>>>>>>>>>> in the network.   Putting it on paper and declaring it mandatory doesn't
>>>>>>>>>> make it true.  Personally, I never saw much rigorous certification
>>>>>>>>>> testing or enforcement (not just of Ethernet), and the general
>>>>>>>>>> "robustness" designs can hide bad behavior.
>>>>>>>>>> 
>>>>>>>>>> /Jack Haverty
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On 3/29/19 5:40 PM, John Gilmore wrote:
>>>>>>>>>>> Karl Auerbach <karl at cavebear.com> wrote:
>>>>>>>>>>>> I recently had someone confirm a widely held belief that Sun
>>>>>>>>>>>> Microsystems had tuned the CSMA/CD timers on their Ethernet interfaces
>>>>>>>>>>>> to have a winning bias against Ethernet machines that adhered to the
>>>>>>>>>>>> IEEE/DIX ethernet timer values.  Those of us who tended to work with
>>>>>>>>>>>> networked PC platforms were well aware of the effect of putting a Sun
>>>>>>>>>>>> onto the same Ethernet: what had worked before stopped working, but
>>>>>>>>>>>> the Suns all chatted among themselves quite happily.
>>>>>>>>>>> Are we talking about 10 Mbit Ethernet, or something later?
>>>>>>>>>>> 
>>>>>>>>>>> I worked at Sun back then.  Sun was shipping products with Ethernet
>>>>>>>>>>> before the IBM PC even existed.  Sun products used standard Ethernet
>>>>>>>>>>> chips.  Some of those chips were super customizable via internal
>>>>>>>>>>> registers (I have a T1 card that uses an Ethernet chip with settings
>>>>>>>>>>> that let it talk telco T1/DS1 protocol!), but Sun always set them to
>>>>>>>>>>> meet the standard specs.  What evidence is there of any non-standard
>>>>>>>>>>> settings?
>>>>>>>>>>> 
>>>>>>>>>>> What Sun did differently was that we tuned the implementation so it
>>>>>>>>>>> could actually send and receive back-to-back packets, at the minimum
>>>>>>>>>>> specified inter-packet gaps.  By building both the hardware and the
>>>>>>>>>>> software ourselves (like Apple today, and unlike Microsoft), we were
>>>>>>>>>>> able to work out all the kinks to maximize performance.  We could
>>>>>>>>>>> improve everything: software drivers, interrupt latencies, TCP/IP
>>>>>>>>>>> stacks, DMA bus arbitration overhead.  Sun was the first to do
>>>>>>>>>>> production shared disk-drive access over Ethernet, to reduce the cost of
>>>>>>>>>>> our "diskless" workstations.  In sending 4Kbyte filesystem blocks among
>>>>>>>>>>> client and server, we sent an IP-fragmented 4K+ UDP datagram in three
>>>>>>>>>>> BACK-TO-BACK Ethernet packets.
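>>>>>>>>>>> 
>>>>>>>>>>> The arithmetic behind "three back-to-back packets", assuming a
>>>>>>>>>>> 1500-byte Ethernet MTU and ignoring the RPC framing: a 4096-byte
>>>>>>>>>>> block plus the 8-byte UDP header is 4104 bytes of IP payload, and
>>>>>>>>>>> each fragment carries at most 1480 bytes (1500 minus the 20-byte
>>>>>>>>>>> IP header), so the datagram takes three frames:
>>>>>>>>>>> 
>>>>>>>>>>>     /* Fragment count for a 4-Kbyte NFS block over UDP/IP,
>>>>>>>>>>>      * assuming a 1500-byte MTU and no RPC overhead. Non-final
>>>>>>>>>>>      * fragments must carry a multiple of 8 bytes; 1480 is. */
>>>>>>>>>>>     #include <stdio.h>
>>>>>>>>>>> 
>>>>>>>>>>>     int main(void)
>>>>>>>>>>>     {
>>>>>>>>>>>         int payload  = 4096 + 8;    /* NFS block + UDP header */
>>>>>>>>>>>         int per_frag = 1500 - 20;   /* MTU minus IP header = 1480 */
>>>>>>>>>>>         int frags = 0;
>>>>>>>>>>>         while (payload > 0) {
>>>>>>>>>>>             int take = payload < per_frag ? payload : per_frag;
>>>>>>>>>>>             printf("fragment %d: %d payload bytes\n", ++frags, take);
>>>>>>>>>>>             payload -= take;
>>>>>>>>>>>         }
>>>>>>>>>>>         printf("=> %d back-to-back Ethernet frames\n", frags);
>>>>>>>>>>>         return 0;
>>>>>>>>>>>     }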
>>>>>>>>>>> 
>>>>>>>>>>> Someone, I think it was Van Jacobson, did some early work on maximizing
>>>>>>>>>>> Ethernet thruput, and reported it at USENIX conferences.  His
>>>>>>>>>>> observation was that to get maximal thruput, you needed 3 things to be
>>>>>>>>>>> happening absolutely simultaneously: the sender processing & queueing
>>>>>>>>>>> the next packet; the Ethernet wire moving the current packet; the
>>>>>>>>>>> receiver dequeueing and processing the previous packet.  If any of these
>>>>>>>>>>> operations took longer than the others, then that would be the limiting
>>>>>>>>>>> factor in the thruput.  This applies to half-duplex operation (only one
>>>>>>>>>>> side transmits at a time); the end-node processing requirement doubles
>>>>>>>>>>> if you run full-duplex data in both directions (on more modern Ethernets
>>>>>>>>>>> that support that).
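>>>>>>>>>>> 
>>>>>>>>>>> In other words, with the three stages fully overlapped, steady-state
>>>>>>>>>>> thruput is frame_bits / max(t_send, t_wire, t_recv). A quick check
>>>>>>>>>>> with illustrative (made-up) stage times:
>>>>>>>>>>> 
>>>>>>>>>>>     /* Pipeline model of the three overlapped stages; the host
>>>>>>>>>>>      * times are invented to show a receiver-limited case. */
>>>>>>>>>>>     #include <stdio.h>
>>>>>>>>>>> 
>>>>>>>>>>>     static double max3(double a, double b, double c)
>>>>>>>>>>>     {
>>>>>>>>>>>         double m = a > b ? a : b;
>>>>>>>>>>>         return m > c ? m : c;
>>>>>>>>>>>     }
>>>>>>>>>>> 
>>>>>>>>>>>     int main(void)
>>>>>>>>>>>     {
>>>>>>>>>>>         double bits   = 1518 * 8;    /* full-size Ethernet frame */
>>>>>>>>>>>         double t_wire = bits / 10e6; /* wire time at 10 Mb/s */
>>>>>>>>>>>         double t_send = 0.9e-3;      /* made-up sender prep time */
>>>>>>>>>>>         double t_recv = 1.5e-3;      /* made-up: receiver is slowest */
>>>>>>>>>>> 
>>>>>>>>>>>         printf("thruput = %.2f Mb/s (the wire alone allows 10.00)\n",
>>>>>>>>>>>                bits / max3(t_send, t_wire, t_recv) / 1e6);
>>>>>>>>>>>         return 0;
>>>>>>>>>>>     }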
>>>>>>>>>>> 
>>>>>>>>>>> Dave Miller did the SPARC and UltraSPARC ports of Linux, and one of his
>>>>>>>>>>> favorite things to work on was network performance.  Here's one of his
>>>>>>>>>>> signature blocks from 1996:
>>>>>>>>>>> 
>>>>>>>>>>> Yow! 11.26 MB/s remote host TCP bandwidth & ////
>>>>>>>>>>> 199 usec remote TCP latency over 100Mb/s   ////
>>>>>>>>>>> ethernet.  Beat that!                     ////
>>>>>>>>>>> -----------------------------------------////__________  o
>>>>>>>>>>> David S. Miller, davem at caip.rutgers.edu /_____________/ / // /_/ ><
>>>>>>>>>>> 
>>>>>>>>>>> My guess is that the ISA cards of the day had never even *seen* back to
>>>>>>>>>>> back Ethernet packets (with only the 9.6 uSec interframe spacing between
>>>>>>>>>>> them), so of course they weren't tested to be able to handle them.  The
>>>>>>>>>>> ISA bus was slow, and the PC market was cheap, and RAM was expensive, so
>>>>>>>>>>> most cards just had one or two packet buffers.  And if the CPU didn't
>>>>>>>>>>> immediately grab one of those received buffers, then the next packet
>>>>>>>>>>> would get dropped for lack of a buffer to put it in.  In sending, you
>>>>>>>>>>> had to have the second buffer queued long before the inter-packet gap, or
>>>>>>>>>>> you wouldn't send with minimum packet spacing on the wire.  Most PC
>>>>>>>>>>> operating systems couldn't do that.  And if your card was slower than
>>>>>>>>>>> the standard 9.6usec inter-packet gap after sensing carrier,
>>>>>>>>>>> then any Sun waiting to transmit would beat your card to the wire,
>>>>>>>>>>> deferring your card's transmission.
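>>>>>>>>>>> 
>>>>>>>>>>> To put numbers on the one-buffer problem, a back-of-the-envelope
>>>>>>>>>>> model (the 2 ms host drain time is invented): if the CPU can't
>>>>>>>>>>> free the card's single buffer within about a frame time plus the
>>>>>>>>>>> 9.6 usec gap, most of a back-to-back burst is lost.
>>>>>>>>>>> 
>>>>>>>>>>>     /* One-buffer receive model: a frame is kept only if the
>>>>>>>>>>>      * buffer is free when it starts arriving; the host frees
>>>>>>>>>>>      * the buffer drain_us after end-of-frame. Times in usec. */
>>>>>>>>>>>     #include <stdio.h>
>>>>>>>>>>> 
>>>>>>>>>>>     int main(void)
>>>>>>>>>>>     {
>>>>>>>>>>>         double frame_us = 1514 * 8 / 10.0; /* ~1211 us at 10 Mb/s */
>>>>>>>>>>>         double gap_us   = 9.6;             /* standard interframe gap */
>>>>>>>>>>>         double drain_us = 2000.0;          /* invented host service time */
>>>>>>>>>>> 
>>>>>>>>>>>         double t = 0.0, buffer_free_at = 0.0;
>>>>>>>>>>>         int got = 0, dropped = 0;
>>>>>>>>>>>         for (int i = 0; i < 8; i++) {      /* burst of 8 frames */
>>>>>>>>>>>             double start = t, end = t + frame_us;
>>>>>>>>>>>             if (buffer_free_at <= start) { /* buffer empty at start? */
>>>>>>>>>>>                 got++;
>>>>>>>>>>>                 buffer_free_at = end + drain_us;
>>>>>>>>>>>             } else {
>>>>>>>>>>>                 dropped++;                 /* nowhere to put it */
>>>>>>>>>>>             }
>>>>>>>>>>>             t = end + gap_us;
>>>>>>>>>>>         }
>>>>>>>>>>>         printf("burst of 8: received %d, dropped %d\n", got, dropped);
>>>>>>>>>>>         return 0;
>>>>>>>>>>>     }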
>>>>>>>>>>> 
>>>>>>>>>>> You may have also been seeing the "Channel capture effect"; see:
>>>>>>>>>>> 
>>>>>>>>>>> https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_with_collision_detection#Channel_capture_effect
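>>>>>>>>>>> 
>>>>>>>>>>> The capture effect in miniature, as a toy simulation: both stations
>>>>>>>>>>> always have a frame queued, so every round opens with a collision.
>>>>>>>>>>> The winner sends and starts its next frame fresh (attempt counter
>>>>>>>>>>> reset), while the loser keeps retrying with a doubled window, so
>>>>>>>>>>> whoever wins first tends to hold the channel. (Real stations also
>>>>>>>>>>> drop the frame after 16 attempts; that's omitted here.)
>>>>>>>>>>> 
>>>>>>>>>>>     /* Two always-busy stations contending under exponential
>>>>>>>>>>>      * backoff; the tiny LCG stands in for a per-station RNG. */
>>>>>>>>>>>     #include <stdio.h>
>>>>>>>>>>> 
>>>>>>>>>>>     static unsigned prng(unsigned *state)
>>>>>>>>>>>     {
>>>>>>>>>>>         *state = *state * 1103515245u + 12345u;
>>>>>>>>>>>         return (*state >> 16) & 0x7fff;
>>>>>>>>>>>     }
>>>>>>>>>>> 
>>>>>>>>>>>     static unsigned slot(unsigned *state, int attempts)
>>>>>>>>>>>     {
>>>>>>>>>>>         int k = attempts < 10 ? attempts : 10;
>>>>>>>>>>>         return prng(state) % (1u << k);
>>>>>>>>>>>     }
>>>>>>>>>>> 
>>>>>>>>>>>     int main(void)
>>>>>>>>>>>     {
>>>>>>>>>>>         unsigned ra = 17, rb = 42;
>>>>>>>>>>>         int att_a = 0, att_b = 0, wins_a = 0, wins_b = 0;
>>>>>>>>>>>         for (int round = 0; round < 10000; round++) {
>>>>>>>>>>>             unsigned sa, sb;
>>>>>>>>>>>             do {                  /* equal slot picks collide again */
>>>>>>>>>>>                 att_a++; att_b++;
>>>>>>>>>>>                 sa = slot(&ra, att_a);
>>>>>>>>>>>                 sb = slot(&rb, att_b);
>>>>>>>>>>>             } while (sa == sb);
>>>>>>>>>>>             if (sa < sb) { wins_a++; att_a = 0; }  /* winner resets */
>>>>>>>>>>>             else         { wins_b++; att_b = 0; }
>>>>>>>>>>>         }
>>>>>>>>>>>         printf("of 10000 rounds: A won %d, B won %d\n", wins_a, wins_b);
>>>>>>>>>>>         return 0;
>>>>>>>>>>>     }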
>>>>>>>>>>> 
>>>>>>>>>>> John
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 
> 

-- 
Richard Bennett
High Tech Forum <http://hightechforum.org/> Founder
Ethernet & Wi-Fi standards co-creator

Internet Policy Consultant
