<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Heh. The 3C501 didn't really conform to the Ethernet Blue Book
standard, which called for a 9.6 us interpacket gap. Because of the
limitations of the Seeq Ethernet controller on which it was based, it
took something like 80 ms to process an inbound frame and provide a
buffer to the controller for the next frame. 3Com's disk server used
the Intel 82586, which really could do the standard, but 3Com had to
put code into the 3Server to insert an 80 ms pad between consecutive
frames to the same 3C501 so as not to drown it.<br>
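For illustration, that pad amounts to tracking the last transmit time per
destination and stalling when the gap would come out too small. A minimal
sketch in Python follows; the names and structure are hypothetical, not
3Com's actual 3Server code:<br>
<pre>import time

# Delay the next frame to a destination until a minimum gap (here the
# 80 ms the 3C501 needed, per the anecdote above) has passed since the
# last frame to that same destination. The Ethernet standard gap is
# only 9.6 us, so this is an enormous slowdown.
MIN_GAP = 0.080                # seconds

last_sent = {}                 # destination MAC -> time of last transmit

def send_frame(dst_mac, frame, transmit):
    now = time.monotonic()
    wait = last_sent.get(dst_mac, 0.0) + MIN_GAP - now
    if wait > 0.0:
        time.sleep(wait)       # pad the inter-frame gap for the slow card
    transmit(frame)
    last_sent[dst_mac] = time.monotonic()</pre>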
<br>
Louis Mamakos wrote:
<blockquote cite="mid:36C6C90B-46B1-4211-96EE-A8082A29024E@transsys.com"
type="cite">
<pre wrap="">On Nov 26, 2009, at 11:48 AM, Noel Chiappa wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Why exactly fragmentation didn't work so well I don't recollect very well
(if we ever knew for sure exactly why). I suspect that the network back
then was 'lossier' (partly due to poor congestion control causing
congestion drops, partly due to flakier transmission systems). Since
end-end retransmission schemes don't work so well when loss rates go up
(typically there's a 'knee' where performance goes over a cliff), with
that many more packets involved for a given amount of data, we may have
gone over the 'knee'.
</pre>
</blockquote>
<pre wrap=""><!---->
We encountered a particular syndrome related to poorly designed
network interface hardware, though this was in early Ethernet network
interfaces, not 1822 LH/DH interfaces. When generated, IP fragments
tended to have very small inter-packet gap times, since the fragmentation
operation happened pretty "close" to the network interface, as compared
to sequentially generated data from the application.

One fairly painful manifestation of this that I experienced was in the
phase-2 NSFNET backbone, with T-1 trunks as an upgrade from the 56 kb/s
trunks between the "fuzzballs" of the phase-1 backbone. The phase-2
NSFNET routers were built out of IBM RT hardware, and the Ethernet
interfaces to the co-located subscriber site used the early 3Com 3C501
PC-AT Ethernet controller. Ah, what a wretched piece of hardware this
was! It had minimal on-board buffering, using the same buffer for receiving
a frame as well as transmitting an outgoing packet. A train of larger,
back-to-back packets on a busy interface was just about the worst case. We
customers donated the later 3C503 cards to the cause to make this
stop hurting as much.

There was a similar issue with NFS over UDP, with 8 kbyte IP datagrams,
where the same retransmitted datagram would never complete reassembly
because fragment 4 or so was never reliably received, again due to the
small inter-packet gap. In this case, reusing the same IP ID field on
the retransmitted packets was of no help at all.

Of course, these days the hardware is more powerful and not so performance
constrained, and the CPUs in the end systems have more MIPS available to
service the hardware. The bottlenecks have moved elsewhere, now that
we've got multiple CPUs all trying to help.

Certainly in later years, all sorts of interesting bugs related to firewalls
and packet-filtering attempts came to light.

Louis Mamakos
</pre>
</blockquote>
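To make the fragment burst Louis describes concrete: an 8 kbyte NFS datagram
over UDP on a 1500-byte Ethernet MTU splits into six IP fragments, which the
sending IP layer queues essentially back to back. A rough sketch of the
arithmetic (a hypothetical helper, not any particular stack's code):<br>
<pre>def fragment(total, mtu=1500, hdr=20):
    """Split 'total' bytes of IP payload into (offset, length, MF) tuples."""
    per = (mtu - hdr) // 8 * 8      # fragment data must be a multiple of 8
    frags, offset, remaining = [], 0, total
    while remaining > per:
        frags.append((offset, per, 1))     # more-fragments flag set
        offset += per
        remaining -= per
    frags.append((offset, remaining, 0))   # last fragment, MF clear
    return frags

# 8192 bytes of NFS data plus the 8-byte UDP header
for off, length, mf in fragment(8200):
    print("offset %4d bytes (field %3d), len %4d, MF=%d" % (off, off // 8, length, mf))</pre>
All six fragments leave the host in one burst, which is exactly what a 3C501
with a single shared buffer could not absorb.<br>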
<br>
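The IP ID detail is worth spelling out: reassembly is keyed on (source,
destination, protocol, ID), so keeping the ID constant across retransmissions
lets fragments from different attempts accumulate in one buffer. But that buys
nothing when the loss is deterministic, as a toy simulation shows (assuming
six fragments with the same one lost every time):<br>
<pre># One shared reassembly buffer, as if every retransmission reused the ID.
NEEDED = set(range(6))     # six fragments per 8 kbyte datagram
ALWAYS_LOST = 3            # the position the receiver always misses

received = set()
for attempt in range(1, 6):
    for idx in NEEDED:
        if idx != ALWAYS_LOST:
            received.add(idx)
    print("attempt %d: have %s, complete=%s"
          % (attempt, sorted(received), received == NEEDED))
# Fragment 3 is missing after every attempt, so reassembly never completes.</pre>
<br>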
<pre class="moz-signature" cols="72">--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC</pre>
</body>
</html>