[ih] "how better protocols could solve those problems better"

John Day jeanjour at comcast.net
Thu Oct 1 08:30:16 PDT 2020


I would wonder, given the changes in technology even for wired systems, whether the error patterns haven’t changed in the last 50 years; and of course the patterns for fiber, wireless, and satellite are different yet again.

Yes, the whole point of the CYCLADES architecture (which used (and assumed) an HDLC-like protocol for the link layer) was that the link layer caught the mangled-packet errors (or most of them), leaving the Transport Layer to catch the rare memory error during relaying and, mainly, to recover from losses due to congestion. CYCLADES started looking at congestion issues in 1972.

The whole point of the architecture was that the link layer kept the residual loss rate well below the losses due to congestion (memory errors were in the noise), so that end-to-end error control at Transport was cost-effective. The link layer doesn’t have to be reliable, just good enough to keep its rate of loss well below the rate of loss due to congestion. (The old 80/20 rule.)
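
Just to put rough numbers on that (every figure below is an illustrative assumption, not a measurement), in Python:

    # Back-of-the-envelope sketch; all numbers are assumed for illustration only.
    LINK_RESIDUAL_LOSS = 1e-7   # per-hop loss that slips past the HDLC-like link layer
    HOPS = 5                    # assumed path length in links
    CONGESTION_LOSS = 1e-2      # assumed loss rate due to congestion along the path

    residual = 1 - (1 - LINK_RESIDUAL_LOSS) ** HOPS
    print(f"non-congestion loss over the path: {residual:.1e}")
    print(f"congestion loss over the path:     {CONGESTION_LOSS:.1e}")
    print(f"congestion : residual ratio:       {CONGESTION_LOSS / residual:,.0f} to 1")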

Yes, for the layers above the Link Layer one is looking mainly at single-bit errors. Hasn’t that always been the intent?
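
The ordinary Internet checksum does, in fact, catch every single-bit error; a minimal sketch (an RFC 1071-style ones’-complement sum over toy data) to make that concrete:

    # Minimal 16-bit ones'-complement checksum (RFC 1071 style); a sketch, not production code.
    def inet_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                           # pad to a 16-bit boundary
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF

    msg = bytes(range(64))                            # toy "packet"
    orig = inet_checksum(msg)

    # Flip every bit in turn: the checksum changes every single time.
    misses = 0
    for byte in range(len(msg)):
        for bit in range(8):
            damaged = bytearray(msg)
            damaged[byte] ^= 1 << bit
            if inet_checksum(bytes(damaged)) == orig:
                misses += 1
    print("single-bit errors missed:", misses)        # prints 0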

Take care,
John

> On Oct 1, 2020, at 11:05, Craig Partridge <craig at tereschau.net> wrote:
> 
> Hi John:
> 
> Re: errors.  The short answer is that cryptographic sums are designed to detect any mangling of data with the same probability.  For error sums, you can tune the checksum to the error patterns actually seen.  In my view, CRC-32 has done so well because Hammond did a really nice analysis for AFRL in the early 70s about what kinds of errors were likely on a link.  Above the link layer, the indications are that most errors are in the computer logic of the interconnection devices, and so you see errors of runs of octets or 16-bit or 32-bit words.  You also see clear cases of pointers being damaged.  There are classes of checksums that detect those sorts of bursts really well but they are less good on single bit errors.
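> 
> A toy illustration of that tuning point (made-up payload; the 16-bit ones’-complement sum is the usual Internet-checksum style, the CRC comes from zlib): a swap of two 16-bit words slips past the former but is caught, in this instance, by the latter.
> 
>     # Toy sketch: word-level damage vs. two checksums. The payload is made up.
>     import zlib
> 
>     def inet_checksum(data: bytes) -> int:
>         # 16-bit ones'-complement sum (RFC 1071 style); assumes even length here.
>         total = 0
>         for i in range(0, len(data), 2):
>             total += (data[i] << 8) | data[i + 1]
>             total = (total & 0xFFFF) + (total >> 16)
>         return ~total & 0xFFFF
> 
>     good = bytes(range(32))
>     bad = bytearray(good)
>     bad[4:6], bad[10:12] = good[10:12], good[4:6]   # swap two aligned 16-bit words
> 
>     print("Internet checksum changed:", inet_checksum(good) != inet_checksum(bytes(bad)))  # False
>     print("CRC-32 changed:           ", zlib.crc32(good) != zlib.crc32(bytes(bad)))        # True here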
> 
> Thanks!
> 
> Craig
> 
> On Thu, Oct 1, 2020 at 8:24 AM John Day <jeanjour at comcast.net> wrote:
> Craig,
> This is interesting.  You are right.
> 
> But what I have been trying to find out is what kinds of ‘errors’ the cryptographic hashes are designed to catch. And what is their undetected bit error rate? And it should be possible to design error codes for something in between, right?
> 
> I have always had this fear that we are not using these codes as they are designed to be used and we are just lucky that the media is as reliable as it is.  (I always remember, back in the early ARPANET days, reading a paper on the error rates: the line from Illinois to Utah had like 1 error a month (or something outrageous like that), while the worst line was Rome, NY (Griffiss AFB) to Cambridge, MA!  ;-)  Of course the Illinois/Utah line was probably a short hop to Hinsdale and then microwave to SLC, while the Rome/Cambridge line went through multiple COs and old equipment!)  ;-)
> 
> Oh, and isn’t this data archive naming problem you have noted the kind of thing that librarians and database people have a lot of experience with?
> 
> Take care,
> John
> 
> > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
> > 
> > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch <touch at strayalpha.com> wrote:
> > 
> >> 
> >> 
> >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history <
> >> internet-history at elists.isoc.org> wrote:
> >>> 
> >>> I've got some NSF funding to figure out what the error patterns are
> >>> (nobody's capturing them) with the idea we might propose a new checksum
> >>> and/or add checkpointing into the file transfer protocols.  It is a little
> >>> hard to add something on top of protocols that have a fail/discard model.
> >> 
> >> We already have TCP-MD5, TCP-AO, TLS, and IPsec.
> >> 
> >> Why wouldn’t one (any one) of those suffice?
> >> 
> > 
> > Actually no.  These are security checksums, which are different from error
> > checksums.  The key differences are:
> > 
> > * Security checksums miss an error 1 in 2^x, where x is the width of the
> > sum in bits.  Error checksums (good ones) are designed to catch 100% of the
> > most common errors and miss other errors at a rate of 1 in 2^x.  So a
> > security checksum is inferior in performance (sometimes dramatically) to an
> > error checksum.
> > 
> > * Security checksums are expensive to compute (because they assume an
> > adversary) and so people tend to try to skip doing them.  Error checksums
> > are easy to compute.
> > 
> > Currently the best answer is that for data transmission (e.g. TCP segments)
> > you need an error checksum.  At a higher level you do the security checksum.
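> > 
> > A rough way to see both points at once, in Python (buffer size and repetition count are arbitrary):
> > 
> >     # Sketch: compare an error checksum (CRC-32) with a cryptographic digest (SHA-256)
> >     # on cost, plus the "1 in 2^x" arithmetic. The buffer is arbitrary filler.
> >     import hashlib, timeit, zlib
> > 
> >     buf = bytes(1500) * 1000   # ~1.5 MB of zeros standing in for packet data
> > 
> >     crc_time = timeit.timeit(lambda: zlib.crc32(buf), number=100)
> >     sha_time = timeit.timeit(lambda: hashlib.sha256(buf).digest(), number=100)
> >     print(f"CRC-32:  {crc_time:.4f} s for 100 passes")
> >     print(f"SHA-256: {sha_time:.4f} s for 100 passes")
> > 
> >     # A 32-bit security-style check misses a corrupted packet with probability 2**-32
> >     # regardless of the error pattern; per 10**9 corrupted packets that is:
> >     print("expected undetected per 1e9 corruptions:", 1e9 / 2**32)
> >     # CRC-32, by construction, misses none of certain common classes
> >     # (e.g. any single error burst of 32 bits or fewer) and ~2**-32 of the rest.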
> > 
> > Craig
> > 
> > 
> > -- 
> > *****
> > Craig Partridge's email account for professional society activities and
> > mailing lists.
> > -- 
> > Internet-history mailing list
> > Internet-history at elists.isoc.org
> > https://elists.isoc.org/mailman/listinfo/internet-history
> 
> 
> 
> -- 
> *****
> Craig Partridge's email account for professional society activities and mailing lists.



