[ih] "how better protocols could solve those problems better"

John Day jeanjour at comcast.net
Thu Oct 1 07:23:17 PDT 2020


Craig,
This is interesting.  You are right.

But what I have been trying to find out is what kinds of ‘errors’ the cryptographic hashes are designed to catch, and what is their undetected bit error rate?  And it should be possible to design error codes for something in between, right?

I have always had this fear that we are not using these codes as they are designed to be used and we are just lucky that the media is as reliable as it is.  (I always remember, back in the early ARPANET days, reading a paper on the error rates: the line from Illinois to Utah had something like 1 error a month (or something outrageous like that), while the worst line was Rome, NY (Griffiss AFB) to Cambridge, MA!  ;-)  Of course the Illinois/Utah line was probably a short hop to Hinsdale and then microwave to SLC, while the Rome/Cambridge line went through multiple COs and old equipment!)  ;-)

Oh, and isn’t this data archive naming problem you have noted the kind of thing that librarians and database people have a lot of experience with?

Take care,
John

> On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
> 
> On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch <touch at strayalpha.com> wrote:
> 
>> 
>> 
>>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history <
>> internet-history at elists.isoc.org> wrote:
>>> 
>>> I've got some NSF funding to figure out what the error patterns are
>>> (nobody's capturing them) with the idea we might propose a new checksum
>>> and/or add checkpointing into the file transfer protocols.  It is a little
>>> hard to add something on top of protocols that have a fail/discard model.
>> 
>> We already have TCP-MD5, TCP-AO, TLS, and IPsec.
>> 
>> Why wouldn’t one (any one) of those suffice?
>> 
> 
> Actually no.  These are security checksums, which are different from error
> checksums.  The key differences are:
> 
> * Security checksums miss an error at a rate of 1 in 2^x, where x is the
> width of the sum in bits.  Error checksums (good ones) are designed to catch
> 100% of the most common errors and miss only the remaining errors at a rate
> of 1 in 2^x.  So a security checksum is inferior in error-detection
> performance (sometimes dramatically) to an error checksum.
> 
> * Security checksums are expensive to compute (because they assume an
> adversary) and so people tend to try to skip doing them.  Error checksums
> are easy to compute.
> 
> Currently the best answer is that for data transmission (e.g. TCP segments)
> you need an error checksum.  At a higher level you do the security checksum.
> 
> Craig
> 
> 
> -- 
> *****
> Craig Partridge's email account for professional society activities and
> mailing lists.
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
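Craig's distinction above can be made concrete with a short sketch (not from the thread; the message bytes and the choice of CRC-32 via Python's zlib are illustrative assumptions).  A CRC is guaranteed by construction to detect every single-bit error, and every burst error no wider than the CRC itself, whereas a cryptographic hash truncated to x bits offers only the probabilistic 1-in-2^x guarantee for any error pattern:

```python
import zlib

def crc32(data: bytes) -> int:
    # zlib.crc32 is a standard CRC-32 (polynomial 0x04C11DB7).
    return zlib.crc32(data) & 0xFFFFFFFF

# An arbitrary 64-byte "segment" standing in for transmitted data.
msg = bytes(range(64))
orig = crc32(msg)

# Exhaustively flip every single bit.  Any CRC whose generator
# polynomial has at least two terms detects all single-bit errors,
# so the detection count must equal the number of bits tried.
total = len(msg) * 8
detected = 0
for i in range(total):
    corrupted = bytearray(msg)
    corrupted[i // 8] ^= 1 << (i % 8)
    if crc32(bytes(corrupted)) != orig:
        detected += 1

print(f"CRC-32 caught {detected}/{total} single-bit errors")
```

A truncated cryptographic hash has no comparable structural guarantee: it treats all error patterns alike, missing each with probability about 2^-x, which is exactly the "inferior for common errors, adequate on average" trade-off described above.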



