[ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]

Louis Mamakos louie at transsys.com
Thu May 22 10:26:26 PDT 2014


On May 22, 2014, at 7:25 AM, Craig Partridge <craig at aland.bbn.com> wrote:

>>> From: Brian E Carpenter <brian.e.carpenter at gmail.com>
>> 
>>> it's surely the case that actual bit destruction causing non-congestive
>>> packet loss was a much bigger worry in the 1970s than it was ten years
>>> later?
>> 
>> I don't recall us worrying about damaged packets, to be honest. If they
>> happened, they were re-transmitted, and you just didn't notice.
> 
> I remember damaged packets.
> 
> They usually came from serial links and wireless (largely satellite links).
> Louie observed Dave Mills' hard work on the satellite front, but serial
> links were the norm and often pretty lossy.  As I recall, the 1822 spec
> for ARPANET host connection had several variations, with varying degrees of
> CRCs (and probably other stuff) depending on how far the serial line from
> the IMP to your host adapter was.
> 
> Then of course, there was SLIP -- which ran over dialup modem lines
> with no CRC...
> 
> Finally, host adapters were (and still are) of varying quality and
> buffer over/underruns, DMA drops, etc., were common.
> 
> As an illustration of the severity of the error problem, it was possible
> to run the NFS distributed file system over UDP with checksums off and
> checksums on.  Checksums off was much faster in the day, and many people
> believed errors were rare enough this was OK.  Many stories of folks who
> after a few months realized that their filesystem had substantial numbers
> of corrupted files and switched to checksums on.  (Still scary that the
> TCP checksum was considered strong enough, but...)
> 
> Thanks!
> 
> Craig

Yeah, DMA drops and other hardware problems were not unknown.

I recall a time in the mid 1980’s as we were bringing up an Ethernet 
interface on our UNIVAC mainframe.  I was testing with a VAX running
(I think) 4.2BSD at the time.  Curiously, we were seeing occasional
IP and TCP checksum errors on packets that traversed exactly one
Ethernet segment.  (This was at a time when Ethernet was REAL Ethernet
on big fat yellow coaxial cables.)  After much investigation we
eventually discovered a bug in the DEUNA ethernet interface 
plugged into a UNIBUS adapter on the VAX.  Every so often, a burst
of DMA would fail, and the bytes would never actually end up in the
receive buffer.

Apparently even weak checksums can discover and protect against this
class of error.
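For anyone curious why even a "weak" 16-bit checksum catches this class of error, here is a minimal sketch of the RFC 1071 style one's-complement Internet checksum, with a made-up packet simulating a failed DMA burst (the specific byte values are illustrative, not from the actual incident):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum over 16-bit big-endian words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# A hypothetical packet whose DMA burst failed partway through:
# eight bytes never landed in the receive buffer and read back as zeros.
good = bytes(range(64))
bad = good[:16] + b"\x00" * 8 + good[24:]

assert internet_checksum(good) != internet_checksum(bad)
```

Any run of dropped or zeroed words changes the one's-complement sum unless the missing words happen to sum to zero, so the receiver's checksum verification fails and the packet is discarded rather than delivered corrupted.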

Other big fun came when a Sun on the LAN had a defective/missing Ethernet
address ROM.  Reading it yielded all 1 bits.  Someone ARPs for the Sun’s
IP address, gets ff:ff:ff:ff:ff:ff (the Ethernet broadcast address) and now
sends packets to that destination.  At the time, many hosts helpfully
defaulted to IP forwarding being turned on.  They’d each receive a
packet, decide to helpfully forward it along, ARP, get broadcast,
big meltdown.   So: more sanity checks (ARP mappings don’t go to
broadcast or multicast MAC addresses), and maybe defaulting ip_forwarding
to on isn’t the best decision.
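The sanity check that stacks later grew is simple to state: never install an ARP-learned mapping that points at a broadcast or multicast MAC.  A hypothetical sketch of that check (the function name and structure are mine, not from any particular stack):

```python
def arp_mapping_sane(mac: bytes) -> bool:
    """Return True only if an ARP-learned MAC is a plausible unicast address."""
    assert len(mac) == 6
    if mac == b"\xff" * 6:       # Ethernet broadcast -- the dead-ROM case above
        return False
    if mac[0] & 0x01:            # group (multicast) bit set in the first octet
        return False
    if mac == b"\x00" * 6:       # all-zeros is never a valid unicast address
        return False
    return True
```

The broadcast case is technically covered by the multicast-bit test (ff has the group bit set), but checking it explicitly documents exactly the failure mode described here.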

It all seems so obvious now.  This is the stuff that never ends up
being in protocol specifications, but maybe in best practices documents.

louie




