[ih] error detection
Karl Auerbach
karl at cavebear.com
Thu Oct 1 17:17:09 PDT 2020
I've hit the checksum issues in multiple directions:
1. I was at Sun when the "no parity on the S-Bus" problem hit us: an
intermittent, undetected bit-flipping error on one of our file server
machines caused slow, creeping damage to a source code repository.
During those days UDP checksums on NFS were generally turned off (set
to all zeros) to improve performance. (Of course the human time lost
due to this one event outweighed all the performance gains ever
accumulated by this "optimization.")
2. John Romkey pointed out way back that the checksum does not detect
byte order reversals - which has cropped up when code written for a
machine of one endianness sent stuff to a machine with another notion
of endian-ness. (A small sketch after this list shows the effect.) The
best way I saw this expressed was at the first Unix users conference
(mid-1970s at Champaign/Urbana), where it was referred to as the
"nuxi" problem - that's "unix" with the bytes swapped on console
output.
3. Since I write code to test Internet protocols I've had to do a lot
of checksum fixups when we alter packets in flight for the purpose of
tickling potential weak spots in implementations. It is amazing how
hard it is to get ones complement arithmetic perfect on a twos
complement machine (see the incremental-update sketch below). How many
RFCs are there on calculating, and incrementally calculating, the
Internet checksum?
4. I did a bit of work with the ISO/OSI protocols. They used a thing
called the 32-bit Fletcher Checksum. At first glance it looks like a
horror involving an integer multiplication for every byte. But it can
be optimized so that the multiplications go away and it's roughly as
efficient as the Internet checksum (sketched below). It does not have
the byte order insensitivity of the Internet checksum. I think that
checksum was among the alternatives proposed back in the "what will
become IPv6" days - things like TUBA and UDP/TCP over CLNP, etc.
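
To make point 2 concrete, here is a minimal sketch (mine, written for
this note, not code from any implementation involved) of why a
pairwise byte swap slips past the Internet checksum: byte-swapping
every 16-bit word merely byte-swaps the ones complement sum, so a
message delivered as "nuxi" instead of "unix", checksum field and all,
still verifies.

    #include <stdint.h>
    #include <stdio.h>

    /* Ones complement sum of 16-bit words (even-length buffer for brevity). */
    static uint16_t ocsum(const uint8_t *p, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (uint32_t)((p[i] << 8) | p[i + 1]);
        while (sum >> 16)                  /* fold end-around carries */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)sum;
    }

    int main(void)
    {
        uint8_t msg[]  = { 'u', 'n', 'i', 'x' };
        uint8_t nuxi[] = { 'n', 'u', 'x', 'i' };   /* every byte pair swapped */

        uint16_t a = ocsum(msg,  sizeof msg);      /* 0xDEE6 */
        uint16_t b = ocsum(nuxi, sizeof nuxi);     /* 0xE6DE - same sum, byte-swapped */

        /* The checksum field travels with the data and gets swapped too,
         * so the receiver's verification still comes out to 0xFFFF. */
        printf("sum(unix)=0x%04X  sum(nuxi)=0x%04X\n", a, b);
        return 0;
    }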
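
For point 3, the incremental fixup itself is deceptively small. This
is a sketch of the RFC 1624 formulation, HC' = ~(~HC + ~m + m'), for
patching a checksum after changing a 16-bit field m to m' in flight;
the earlier RFC 1141 form mishandled a ones complement negative-zero
corner case, which is exactly the kind of trap I mean.

    #include <stdint.h>

    /* Update checksum hc after a 16-bit field changes from m_old to
     * m_new, per RFC 1624: HC' = ~(~HC + ~m + m'). */
    static uint16_t cksum_update(uint16_t hc, uint16_t m_old, uint16_t m_new)
    {
        uint32_t sum = (uint16_t)~hc;
        sum += (uint16_t)~m_old;
        sum += m_new;
        while (sum >> 16)                  /* end-around carry */
            sum = (sum & 0xFFFF) + (sum >> 16);
        return (uint16_t)~sum;
    }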
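
And for point 4, a sketch of the usual two-accumulator formulation
that makes the Fletcher multiplications go away: the second sum
accumulates the first, so each 16-bit word costs two plain additions,
and the mod-65535 reduction can be deferred across blocks (359 words
is the commonly cited largest block that keeps the 32-bit accumulators
from overflowing).

    #include <stddef.h>
    #include <stdint.h>

    /* 32-bit Fletcher checksum over 16-bit words, no multiplications. */
    static uint32_t fletcher32(const uint16_t *data, size_t words)
    {
        uint32_t sum1 = 0xFFFF, sum2 = 0xFFFF;

        while (words) {
            size_t block = words > 359 ? 359 : words;
            words -= block;
            do {
                sum1 += *data++;           /* plain additions only */
                sum2 += sum1;              /* position weighting falls out */
            } while (--block);
            sum1 = (sum1 & 0xFFFF) + (sum1 >> 16);   /* deferred reduction */
            sum2 = (sum2 & 0xFFFF) + (sum2 >> 16);
        }
        sum1 = (sum1 & 0xFFFF) + (sum1 >> 16);       /* final folds */
        sum2 = (sum2 & 0xFFFF) + (sum2 >> 16);
        return (sum2 << 16) | sum1;
    }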
Back at SDC, Dave Kaufman and I pretty much concluded that any
encrypted stuff had to be protected by some sort of (possibly
imperfect) integrity check. We called 'em crypto checksums, a term
that has since been supplanted by "message digest".
I wonder - I am sure that we have all seen blotches in streaming video
and strange noises in streaming audio - are those the result of simple
gaps in the input flow to the rendering codecs, or the result of bad
data being fed to those codecs?
--karl--