[ih] Loss as a congestion signal [internet-history Digest, Vol 84, Issue 4]

Craig Partridge craig at aland.bbn.com
Thu May 22 04:25:13 PDT 2014


>     > From: Brian E Carpenter <brian.e.carpenter at gmail.com>
> 
>     > it's surely the case that actual bit destruction causing non-congestive
>     > packet loss was a much bigger worry in the 1970s than it was ten years
>     > later?
> 
> I don't recall us worrying about damaged packets, to be honest. If they
> happened, they were re-transmitted, and you just didn't notice.

I remember damaged packets.

They usually came from serial links and wireless (largely satellite links).
Louie observed Dave Mills' hard work on the satellite front, but serial
links were the norm and often pretty lossy.  As I recall, the 1822 spec
for ARPANET host connection had several variations, with varying degrees of
CRCs (and probably other stuff) depending on how long the serial line from
the IMP to your host adapter was.

Then of course, there was SLIP -- which ran over dialup modem lines
with no CRC...
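
For the curious: SLIP framing per RFC 1055 really is just byte
stuffing, with no error detection of any kind, so line noise went
straight up to IP.  A minimal sketch of the whole sender side follows;
send_byte() is a hypothetical stand-in for writing one octet out the
serial port:

    /* SLIP framing sketch (RFC 1055): byte-stuff and delimit frames.
     * Note there is no checksum or CRC anywhere in the protocol. */
    #define SLIP_END     0xC0  /* frame delimiter */
    #define SLIP_ESC     0xDB  /* escape character */
    #define SLIP_ESC_END 0xDC  /* escaped END inside a frame */
    #define SLIP_ESC_ESC 0xDD  /* escaped ESC inside a frame */

    extern void send_byte(unsigned char c);  /* hypothetical serial write */

    void slip_send_packet(const unsigned char *p, int len)
    {
        send_byte(SLIP_END);             /* flush any accumulated noise */
        while (len-- > 0) {
            unsigned char c = *p++;
            if (c == SLIP_END) {         /* stuff a literal END byte */
                send_byte(SLIP_ESC);
                send_byte(SLIP_ESC_END);
            } else if (c == SLIP_ESC) {  /* stuff a literal ESC byte */
                send_byte(SLIP_ESC);
                send_byte(SLIP_ESC_ESC);
            } else {
                send_byte(c);
            }
        }
        send_byte(SLIP_END);             /* mark end of frame */
    }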

Finally, host adapters were (and still are) of varying quality, and
buffer over/underruns, DMA drops, etc., were common.

As an illustration of the severity of the error problem, it was possible
to run the NFS distributed file system over UDP with checksums off or
checksums on.  Checksums off was much faster back in the day, and many
people believed errors were rare enough that this was OK.  There were
many stories of folks who, after a few months, realized that their
filesystem had substantial numbers of corrupted files and switched to
checksums on.  (Still scary that the TCP checksum was considered strong
enough, but...)
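
For those who haven't seen it, the checksum in question is the same
16-bit ones'-complement sum that IP, UDP, and TCP all share (RFC 1071).
A rough sketch in C (the function name is mine); note how weak it is:
two swapped 16-bit words, or two errors that cancel numerically, leave
the sum unchanged:

    /* Sketch of the RFC 1071 Internet checksum: a 16-bit
     * ones'-complement sum over the data, complemented at the end. */
    #include <stddef.h>
    #include <stdint.h>

    uint16_t inet_checksum(const void *data, size_t len)
    {
        const uint8_t *p = data;
        uint32_t sum = 0;

        while (len > 1) {                /* sum 16-bit words */
            sum += (uint32_t)(p[0] << 8 | p[1]);
            p += 2;
            len -= 2;
        }
        if (len == 1)                    /* pad an odd trailing byte */
            sum += (uint32_t)(p[0] << 8);

        while (sum >> 16)                /* fold carries back in */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;           /* ones' complement of the sum */
    }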

Thanks!

Craig


