[ih] "how better protocols could solve those problems better"
Craig Partridge
craig at tereschau.net
Thu Oct 1 09:54:25 PDT 2020
Actually it is Hammond. As far as I can tell (from digging through mounds
of old papers [appropriate for an Internet History list!]), the paper that
launched CRC-32 as *the* CRC to use was a study by Joseph L. Hammond, J.E.
Brown, and S.S. Liu, "Development of a transmission error model and an
error control model," a Georgia Tech report (as I recall, for AFRL), 1975.
Craig
On Thu, Oct 1, 2020 at 10:42 AM Vint Cerf <vint at google.com> wrote:
> presumably you meant Hamming, not Hammond?
> v
>
>
> On Thu, Oct 1, 2020 at 11:05 AM Craig Partridge via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> Hi John:
>>
>> Re: errors. The short answer is that cryptographic sums are designed to
>> detect any mangling of the data with the same probability. For error
>> sums, you can tune the checksum to the error patterns actually seen. In
>> my view, CRC-32 has done so well because Hammond did a really nice
>> analysis for AFRL in the early 70s of what kinds of errors were likely
>> on a link. Above the link layer, the indications are that most errors
>> arise in the computer logic of the interconnection devices, so you see
>> errors in runs of octets or of 16-bit or 32-bit words. You also see
>> clear cases of pointers being damaged. There are classes of checksums
>> that detect those sorts of bursts really well, but they are less good
>> on single-bit errors.
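>>
>> A toy illustration of that burst property, in Python (zlib's CRC-32;
>> the frame size and burst offset here are made up): because the
>> generator polynomial has degree 32, any single error burst of 32 bits
>> or fewer is guaranteed to change the checksum.
>>
>> import os
>> import zlib
>>
>> frame = bytearray(os.urandom(1500))   # one made-up link-layer frame
>> good_crc = zlib.crc32(frame)
>>
>> # Flip a single 32-bit (4-octet) burst -- the longest burst that
>> # CRC-32 still detects with certainty.
>> for i in range(4):
>>     frame[700 + i] ^= 0xFF
>>
>> assert zlib.crc32(frame) != good_crc  # always caught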
>>
>> Thanks!
>>
>> Craig
>>
>> On Thu, Oct 1, 2020 at 8:24 AM John Day <jeanjour at comcast.net> wrote:
>>
>> > Craig,
>> > This is interesting. You are right.
>> >
>> > But what I have been trying to find out is: what kinds of ‘errors’
>> > are the cryptographic hashes designed to catch? And what is their
>> > undetected bit error rate? And it should be possible to design error
>> > codes for something in between, right?
>> >
>> > I have always had this fear that we are not using these codes as
>> > they are designed to be used and are just lucky that the media is as
>> > reliable as it is. (I always remember reading a paper on the error
>> > rates back in the early ARPANET days: the line from Illinois to Utah
>> > had something like 1 error a month (or some similarly outrageous
>> > figure), while the worst line was Rome, NY (Griffiss AFB) to
>> > Cambridge, MA! ;-) Of course, the Illinois/Utah line was probably a
>> > short hop to Hinsdale and then microwave to SLC, while the
>> > Rome/Cambridge line went through multiple COs and old equipment!) ;-)
>> >
>> > Oh, and isn’t this data-archive naming problem you have noted the
>> > kind of thing that librarians and database people have a lot of
>> > experience with?
>> >
>> > Take care,
>> > John
>> >
>> > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
>> > >
>> > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch <touch at strayalpha.com> wrote:
>> > >
>> > >>
>> > >>
>> > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
>> > >>>
>> > >>> I've got some NSF funding to figure out what the error patterns
>> > >>> are (nobody's capturing them), with the idea that we might
>> > >>> propose a new checksum and/or add checkpointing to the file
>> > >>> transfer protocols. It is a little hard to add something on top
>> > >>> of protocols that have a fail/discard model.
>> > >>
>> > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec.
>> > >>
>> > >> Why wouldn’t one (any one) of those suffice?
>> > >>
>> > >
>> > > Actually no. Those are security checksums, which are different
>> > > from error checksums. The key differences are:
>> > >
>> > > * Security checksums miss an error about 1 time in 2^x, where x is
>> > > the width of the sum in bits. Good error checksums are designed to
>> > > catch 100% of the most common errors and miss the remaining errors
>> > > at a rate of 1 in 2^x. So a security checksum is inferior in
>> > > detection performance (sometimes dramatically) to an error
>> > > checksum.
>> > >
>> > > * Security checksums are expensive to compute (because they assume
>> > > an adversary), and so people tend to try to skip doing them. Error
>> > > checksums are easy to compute. (A sketch of both points follows.)
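>> > >
>> > > A small Python sketch of both points (illustrative only, not a
>> > > benchmark; the buffer size is arbitrary). On most machines the
>> > > table-driven CRC is several times cheaper than the cryptographic
>> > > digest, and a digest of width x only promises a roughly 2^-x miss
>> > > rate, with no error class caught with certainty.
>> > >
>> > > import hashlib
>> > > import os
>> > > import time
>> > > import zlib
>> > >
>> > > buf = os.urandom(64 * 1024 * 1024)   # 64 MB of arbitrary data
>> > >
>> > > t0 = time.perf_counter()
>> > > zlib.crc32(buf)                      # error checksum
>> > > t1 = time.perf_counter()
>> > > hashlib.sha256(buf).digest()         # security checksum
>> > > t2 = time.perf_counter()
>> > >
>> > > print(f"crc32  {t1 - t0:.3f}s")
>> > > print(f"sha256 {t2 - t1:.3f}s")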
>> > >
>> > > Currently the best answer is that for data transmission (e.g., TCP
>> > > segments) you need an error checksum. At a higher level you do the
>> > > security checksum.
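>> > >
>> > > A minimal sketch of that layering (Python; the segment size and
>> > > function names are made up for illustration): the cheap error
>> > > checksum runs per segment, while a single security checksum covers
>> > > the whole object end to end.
>> > >
>> > > import hashlib
>> > > import zlib
>> > >
>> > > SEGMENT = 1460   # bytes per segment; TCP-MSS-ish, illustrative
>> > >
>> > > def segments_with_crc(payload):
>> > >     # Per-segment error checksum, verified cheaply on receipt.
>> > >     for off in range(0, len(payload), SEGMENT):
>> > >         seg = payload[off:off + SEGMENT]
>> > >         yield seg, zlib.crc32(seg)
>> > >
>> > > def end_to_end_digest(payload):
>> > >     # Security checksum over the full object, verified once.
>> > >     return hashlib.sha256(payload).digest()
>> > >
>> > > data = b"example payload " * 700
>> > > frames = list(segments_with_crc(data))
>> > > digest = end_to_end_digest(data)
>> > > print(f"{len(frames)} segments, digest {digest.hex()[:16]}")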
>> > >
>> > > Craig
>> >
>> >
>>
>>
>
>
--
*****
Craig Partridge's email account for professional society activities and
mailing lists.