From craig at tereschau.net Thu Oct 1 06:50:53 2020 From: craig at tereschau.net (Craig Partridge) Date: Thu, 1 Oct 2020 07:50:53 -0600 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> Message-ID: On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch wrote: > > > > On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > I've got some NSF funding to figure out what the error patterns are > > (nobody's capturing them) with the idea we might propose a new checksum > > and/or add checkpointing into the file transfer protocols. It is little > > hard to add something on top of protocols that have a fail/discard model. > > We already have TCP-MD5, TCP-AO, TLS, and IPsec. > > Why wouldn?t one (any one) of those suffice? > Actually no. These are security checksums, which are different from error checksums. The key differences are: * Security checksums miss an error 1 in 2^x, where x is the width of the sum in bits. Error checksums (good ones) are designed to catch 100% of the most common errors and miss other errors at a rate of 1 in 2^x. So a security checksum is inferior in performance (sometimes dramatically) to an error checksum. * Security checksums are expensive to compute (because they assume an adversary) and so people tend to try to skip doing them. Error checksums are easy to compute. Currently the best answer is that for data transmission (e.g. TCP segments) you need an error checksum. At a higher level you do the security checksum. Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. From mfidelman at meetinghouse.net Thu Oct 1 06:51:56 2020 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 1 Oct 2020 09:51:56 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: <400.1601517591@hop.toad.com> References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> Message-ID: On 9/30/20 9:59 PM, John Gilmore via Internet-history wrote: > Craig Partridge wrote: >> * Naming and organizing big data. We are generating big data in many areas >> faster than we can name it. And by "name" I don't simply mean giving >> something a filename but creating an environment to find that name, >> including the right metadata, and storing the data in places folks can >> easily retrieve it. You can probably through archiving into that too (when >> should data with this name be kept or discarded over time?). What good are >> FTP, SCP, HTTPS, if you can't find or retrieve the data? > The Internet Archive has this problem. I'm not the right expert to talk > about what they've done, but I can introduce you. > For a long time, I've maintained that we need a new generation of application layer protocols, for things like: - mailing list management (it's really a routing protocol,? isn't it?) - distributed map-reduce (on beyond encoding search strings after the ? 
in URLs) - distributed process management (in the cloud, an awful lot of o/s functions would seem better handled by protocols) But we could start by actually fixing things like calendaring - where the protocols exist, but nobody seems to implement them well. Miles Fidelman -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From steve at shinkuro.com Thu Oct 1 07:04:49 2020 From: steve at shinkuro.com (Steve Crocker) Date: Thu, 1 Oct 2020 10:04:49 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> Message-ID: On Thu, Oct 1, 2020 at 9:55 AM Miles Fidelman via Internet-history < internet-history at elists.isoc.org> wrote: > > For a long time, I've maintained that we need a new generation of > application layer protocols, for things like: > > ... > > But we could start by actually fixing things like calendaring - where > the protocols exist, but nobody seems to implement them well. > What do you have in mind that needs to be fixed re calendaring? I frequently have trouble with the calendaring. Changes sometimes don't propagate properly, and changes to recurring meetings get mangled. Some time ago I noticed there was a calendar working group, calisfy. I joined it because I wanted to suggest that in addition to the details re formats, etc., the specification should also say something about expected time to propagate changes. There was zero response within the WG. I stopped paying close attention but I remained on the mailing list. The protocol seems extremely complicated and I would not be surprised if the result from the WG turns out to be better but nonetheless still broken in various ways. Steve From jeanjour at comcast.net Thu Oct 1 07:23:17 2020 From: jeanjour at comcast.net (John Day) Date: Thu, 1 Oct 2020 10:23:17 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> Message-ID: <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Craig, This is interesting. You are right. But what I have been trying to find out is what kinds of ?errors? the cryptographic hashes are design to catch? And what is their undetected bit error rate? And it should be possible to design error codes for something in between, right? I have always had this fear that we are not using these codes as they are designed to be used and we are just lucky that the media is as reliable as it is. (I always remember that back in the early ARPANET days, reading a paper on the error rates and that line from Illinois to Utah had like 1 error a month (or something outrageous like that) while the worst line was Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the Illinois/Utah was probably a short hop to Hinsdale and then microwave to SLC, while the Rome/Cambridge went through multiple COs and old equipment!) 
;-) O, and isn?t this data archive naming problem you have noted the kind of things that librarians and database people have a lot of experience with? Take care, John > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history wrote: > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch wrote: > >> >> >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> >>> I've got some NSF funding to figure out what the error patterns are >>> (nobody's capturing them) with the idea we might propose a new checksum >>> and/or add checkpointing into the file transfer protocols. It is little >>> hard to add something on top of protocols that have a fail/discard model. >> >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. >> >> Why wouldn?t one (any one) of those suffice? >> > > Actually no. These are security checksums, which are different from error > checksums. The key differences are: > > * Security checksums miss an error 1 in 2^x, where x is the width of the > sum in bits. Error checksums (good ones) are designed to catch 100% of the > most common errors and miss other errors at a rate of 1 in 2^x. So a > security checksum is inferior in performance (sometimes dramatically) to an > error checksum. > > * Security checksums are expensive to compute (because they assume an > adversary) and so people tend to try to skip doing them. Error checksums > are easy to compute. > > Currently the best answer is that for data transmission (e.g. TCP segments) > you need an error checksum. At a higher level you do the security checksum. > > Craig > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From jeanjour at comcast.net Thu Oct 1 07:27:57 2020 From: jeanjour at comcast.net (John Day) Date: Thu, 1 Oct 2020 10:27:57 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> Message-ID: <479E09A8-0725-4D2D-A9D9-5318E1E8A0C5@comcast.net> Actually, it isn?t so much the protocols as different (better defined, perhaps) object models or schemas that are needed. The operations are pretty much the same. You can cover most everything with create/delete, read/write, and start/stop and at worst send a script and start it. The fundamental difference is that data transfer protocols modify the state internal to the protocol, while application protocols modify state external to the protocol. It is much easier, faster, less error prone, etc. to modify 'object models? than protocols. Of course it is more fun to do new protocols. Take care, John > On Oct 1, 2020, at 09:51, Miles Fidelman via Internet-history wrote: > > On 9/30/20 9:59 PM, John Gilmore via Internet-history wrote: > >> Craig Partridge wrote: >>> * Naming and organizing big data. We are generating big data in many areas >>> faster than we can name it. And by "name" I don't simply mean giving >>> something a filename but creating an environment to find that name, >>> including the right metadata, and storing the data in places folks can >>> easily retrieve it. You can probably through archiving into that too (when >>> should data with this name be kept or discarded over time?). 
What good are >>> FTP, SCP, HTTPS, if you can't find or retrieve the data? >> The Internet Archive has this problem. I'm not the right expert to talk >> about what they've done, but I can introduce you. >> > For a long time, I've maintained that we need a new generation of application layer protocols, for things like: > > - mailing list management (it's really a routing protocol, isn't it?) > > - distributed map-reduce (on beyond encoding search strings after the ? in URLs) > > - distributed process management (in the cloud, an awful lot of o/s functions would seem better handled by protocols) > > But we could start by actually fixing things like calendaring - where the protocols exist, but nobody seems to implement them well. > > Miles Fidelman > > -- > In theory, there is no difference between theory and practice. > In practice, there is. .... Yogi Berra > > Theory is when you know everything but nothing works. > Practice is when everything works but no one knows why. > In our lab, theory and practice are combined: > nothing works and no one knows why. ... unknown > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history From cabo at tzi.org Thu Oct 1 07:45:31 2020 From: cabo at tzi.org (Carsten Bormann) Date: Thu, 1 Oct 2020 16:45:31 +0200 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> Message-ID: <1584E8FC-BDA2-47E0-8BA1-9789E3BB6821@tzi.org> On 2020-10-01, at 02:58, Joseph Touch via Internet-history wrote: > > We already have TCP-MD5, TCP-AO, TLS, and IPsec. And, more importantly, QUIC. Gr??e, Carsten From touch at strayalpha.com Thu Oct 1 07:49:28 2020 From: touch at strayalpha.com (Joseph Touch) Date: Thu, 1 Oct 2020 07:49:28 -0700 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> Message-ID: <02F332E4-BD0B-4F07-BD8F-C7D9F4377277@strayalpha.com> > On Oct 1, 2020, at 6:50 AM, Craig Partridge via Internet-history wrote: > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch wrote: > >> >> >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < >> internet-history at elists.isoc.org> wrote: >>> >>> I've got some NSF funding to figure out what the error patterns are >>> (nobody's capturing them) with the idea we might propose a new checksum >>> and/or add checkpointing into the file transfer protocols. It is little >>> hard to add something on top of protocols that have a fail/discard model. >> >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. >> >> Why wouldn?t one (any one) of those suffice? >> > > Actually no. These are security checksums, which are different from error > checksums. The key differences are: > > * Security checksums miss an error 1 in 2^x, where x is the width of the > sum in bits. Error checksums (good ones) are designed to catch 100% of the > most common errors and miss other errors at a rate of 1 in 2^x. So a > security checksum is inferior in performance (sometimes dramatically) to an > error checksum. Except for ?designed to catch? 
errors, 1 in 2^ = 1 in 2^x, so the real question is whether there are errors we should be designing to catch - and whether that sort of design is even possible. > * Security checksums are expensive to compute (because they assume an > adversary) and so people tend to try to skip doing them. Error checksums > are easy to compute. MD5 is roughly 20-30x more expensive than the IP checksum. CRCs can be similarly expensive when done in software. However, hardware accelerators exist and are already widely deployed for both. TCP-AO would be very easy to configure to support any checksum (just make the key string constant and public and pick a checksum as the algorithm). IPsec could similarly be extended. In both cases, we have the framework to solve the problem NOW with existing protocols. The only issue (as with any solution) is adoption and deployment. Picking an error checksum is relatively straightforward - measure the errors, test against known checksums to see if they suffice, and if so, pick one. The only challenging part of this problem happens if those don?t suffice - and its an algorithm issue, not a protocol one. The rest is legwork. This doesn?t fit the bill as ?better protocols? being needed. Joe From craig at tereschau.net Thu Oct 1 08:05:03 2020 From: craig at tereschau.net (Craig Partridge) Date: Thu, 1 Oct 2020 09:05:03 -0600 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Message-ID: Hi John: Re: errors. The short answer is that cryptographic sums are designed to detect any mangling of data with the same probability. For error sums, you can tune the checksum to the error patterns actually seen. In my view, CRC-32 has done so well because Hammond did a really nice analysis for AFRL in the early 70s about what kinds of errors were likely on a link. Above the link layer, the indications are that most errors are in the computer logic of the interconnection devices, and so you see errors of runs of octets or 16-bit or 32-bit words. You also see clear cases of pointers being damaged. There are classes of checksums that detect those sorts of bursts really well but they are less good on single bit errors. Thanks! Craig On Thu, Oct 1, 2020 at 8:24 AM John Day wrote: > Craig, > This is interesting. You are right. > > But what I have been trying to find out is what kinds of ?errors? the > cryptographic hashes are design to catch? And what is their undetected bit > error rate? And it should be possible to design error codes for something > in between, right? > > I have always had this fear that we are not using these codes as they are > designed to be used and we are just lucky that the media is as reliable as > it is. (I always remember that back in the early ARPANET days, reading a > paper on the error rates and that line from Illinois to Utah had like 1 > error a month (or something outrageous like that) while the worst line was > Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the > Illinois/Utah was probably a short hop to Hinsdale and then microwave to > SLC, while the Rome/Cambridge went through multiple COs and old > equipment!) 
;-) > > O, and isn?t this data archive naming problem you have noted the kind of > things that librarians and database people have a lot of experience with? > > Take care, > John > > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > > > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch > wrote: > > > >> > >> > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < > >> internet-history at elists.isoc.org> wrote: > >>> > >>> I've got some NSF funding to figure out what the error patterns are > >>> (nobody's capturing them) with the idea we might propose a new checksum > >>> and/or add checkpointing into the file transfer protocols. It is > little > >>> hard to add something on top of protocols that have a fail/discard > model. > >> > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. > >> > >> Why wouldn?t one (any one) of those suffice? > >> > > > > Actually no. These are security checksums, which are different from > error > > checksums. The key differences are: > > > > * Security checksums miss an error 1 in 2^x, where x is the width of the > > sum in bits. Error checksums (good ones) are designed to catch 100% of > the > > most common errors and miss other errors at a rate of 1 in 2^x. So a > > security checksum is inferior in performance (sometimes dramatically) to > an > > error checksum. > > > > * Security checksums are expensive to compute (because they assume an > > adversary) and so people tend to try to skip doing them. Error checksums > > are easy to compute. > > > > Currently the best answer is that for data transmission (e.g. TCP > segments) > > you need an error checksum. At a higher level you do the security > checksum. > > > > Craig > > > > > > -- > > ***** > > Craig Partridge's email account for professional society activities and > > mailing lists. > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From jeanjour at comcast.net Thu Oct 1 08:30:16 2020 From: jeanjour at comcast.net (John Day) Date: Thu, 1 Oct 2020 11:30:16 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Message-ID: <72218E5E-0AD1-4EF9-994E-370AC13F267F@comcast.net> I would wonder if given the changes in technology for even wired systems whether the error patterns haven?t changed in the last 50 years and of course the patterns for fiber and wireless and satellite are different yet. Yes, the whole point of the CYCLADES architecture (which used (and assumed) an HDLC-like protocol for the link layer) was that the link layer got the mangled packet errors (or most of them) and the losses for the Transport Layer to catch were the rare memory error during relaying but mainly, to recover from losses due to congestion. CYCLADE started looking at congestion issues in 1972. The whole point of the architecture was that the link layer ensured that loss rate was well below the losses due to congestion (memory errors were in the noise), so that end-to-end error control at Transport was cost-effective. 
The link layer doesn?t have to be reliable, but good enough to keep the rate of loss well below rate of loss due to congestion. (The old 80/20 rule). Yes for the layers above the Link layer one is looking mainly single bit errors. Hasn?t that always been the intent? Take care, John > On Oct 1, 2020, at 11:05, Craig Partridge wrote: > > Hi John: > > Re: errors. The short answer is that cryptographic sums are designed to detect any mangling of data with the same probability. For error sums, you can tune the checksum to the error patterns actually seen. In my view, CRC-32 has done so well because Hammond did a really nice analysis for AFRL in the early 70s about what kinds of errors were likely on a link. Above the link layer, the indications are that most errors are in the computer logic of the interconnection devices, and so you see errors of runs of octets or 16-bit or 32-bit words. You also see clear cases of pointers being damaged. There are classes of checksums that detect those sorts of bursts really well but they are less good on single bit errors. > > Thanks! > > Craig > > On Thu, Oct 1, 2020 at 8:24 AM John Day > wrote: > Craig, > This is interesting. You are right. > > But what I have been trying to find out is what kinds of ?errors? the cryptographic hashes are design to catch? And what is their undetected bit error rate? And it should be possible to design error codes for something in between, right? > > I have always had this fear that we are not using these codes as they are designed to be used and we are just lucky that the media is as reliable as it is. (I always remember that back in the early ARPANET days, reading a paper on the error rates and that line from Illinois to Utah had like 1 error a month (or something outrageous like that) while the worst line was Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the Illinois/Utah was probably a short hop to Hinsdale and then microwave to SLC, while the Rome/Cambridge went through multiple COs and old equipment!) ;-) > > O, and isn?t this data archive naming problem you have noted the kind of things that librarians and database people have a lot of experience with? > > Take care, > John > > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history > wrote: > > > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch > wrote: > > > >> > >> > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < > >> internet-history at elists.isoc.org > wrote: > >>> > >>> I've got some NSF funding to figure out what the error patterns are > >>> (nobody's capturing them) with the idea we might propose a new checksum > >>> and/or add checkpointing into the file transfer protocols. It is little > >>> hard to add something on top of protocols that have a fail/discard model. > >> > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. > >> > >> Why wouldn?t one (any one) of those suffice? > >> > > > > Actually no. These are security checksums, which are different from error > > checksums. The key differences are: > > > > * Security checksums miss an error 1 in 2^x, where x is the width of the > > sum in bits. Error checksums (good ones) are designed to catch 100% of the > > most common errors and miss other errors at a rate of 1 in 2^x. So a > > security checksum is inferior in performance (sometimes dramatically) to an > > error checksum. > > > > * Security checksums are expensive to compute (because they assume an > > adversary) and so people tend to try to skip doing them. Error checksums > > are easy to compute. 
> > > > Currently the best answer is that for data transmission (e.g. TCP segments) > > you need an error checksum. At a higher level you do the security checksum. > > > > Craig > > > > > > -- > > ***** > > Craig Partridge's email account for professional society activities and > > mailing lists. > > -- > > Internet-history mailing list > > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > ***** > Craig Partridge's email account for professional society activities and mailing lists. From dhc at dcrocker.net Thu Oct 1 08:31:43 2020 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 1 Oct 2020 08:31:43 -0700 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> Message-ID: <9af4b91a-384e-b56b-aa41-524cd1bad599@dcrocker.net> On 10/1/2020 6:51 AM, Miles Fidelman via Internet-history wrote: > But we could start by actually fixing things like calendaring - where > the protocols exist, but nobody seems to implement them well. This isn't a technical or standards issue. There's no obvious information that the existing specifications are deficient. Ditto for instant messaging. And yet in both cases, we have a large number of operator-specific, stove-piped services. The issue, here, is lack of coherent, strong market forces towards a single, common capability. Operators have the strong incentive of user capture to motivate them to stovepipe.? They only relinquish that control when they are forced to.? By the market. (Or maybe by regulation, but good luck with that; it didn't work very well for OSI.) Absent that market pressure -- users, customers, whomever -- no amount or quality of specification work matters. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From vint at google.com Thu Oct 1 09:41:58 2020 From: vint at google.com (Vint Cerf) Date: Thu, 1 Oct 2020 12:41:58 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Message-ID: presumably you meant Hamming not Hammond? v On Thu, Oct 1, 2020 at 11:05 AM Craig Partridge via Internet-history < internet-history at elists.isoc.org> wrote: > Hi John: > > Re: errors. The short answer is that cryptographic sums are designed to > detect any mangling of data with the same probability. For error sums, you > can tune the checksum to the error patterns actually seen. In my view, > CRC-32 has done so well because Hammond did a really nice analysis for AFRL > in the early 70s about what kinds of errors were likely on a link. Above > the link layer, the indications are that most errors are in the computer > logic of the interconnection devices, and so you see errors of runs of > octets or 16-bit or 32-bit words. You also see clear cases of pointers > being damaged. There are classes of checksums that detect those sorts of > bursts really well but they are less good on single bit errors. > > Thanks! > > Craig > > On Thu, Oct 1, 2020 at 8:24 AM John Day wrote: > > > Craig, > > This is interesting. You are right. > > > > But what I have been trying to find out is what kinds of ?errors? 
the > > cryptographic hashes are design to catch? And what is their undetected > bit > > error rate? And it should be possible to design error codes for something > > in between, right? > > > > I have always had this fear that we are not using these codes as they are > > designed to be used and we are just lucky that the media is as reliable > as > > it is. (I always remember that back in the early ARPANET days, reading a > > paper on the error rates and that line from Illinois to Utah had like 1 > > error a month (or something outrageous like that) while the worst line > was > > Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the > > Illinois/Utah was probably a short hop to Hinsdale and then microwave to > > SLC, while the Rome/Cambridge went through multiple COs and old > > equipment!) ;-) > > > > O, and isn?t this data archive naming problem you have noted the kind of > > things that librarians and database people have a lot of experience with? > > > > Take care, > > John > > > > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > > > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch > > wrote: > > > > > >> > > >> > > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < > > >> internet-history at elists.isoc.org> wrote: > > >>> > > >>> I've got some NSF funding to figure out what the error patterns are > > >>> (nobody's capturing them) with the idea we might propose a new > checksum > > >>> and/or add checkpointing into the file transfer protocols. It is > > little > > >>> hard to add something on top of protocols that have a fail/discard > > model. > > >> > > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. > > >> > > >> Why wouldn?t one (any one) of those suffice? > > >> > > > > > > Actually no. These are security checksums, which are different from > > error > > > checksums. The key differences are: > > > > > > * Security checksums miss an error 1 in 2^x, where x is the width of > the > > > sum in bits. Error checksums (good ones) are designed to catch 100% of > > the > > > most common errors and miss other errors at a rate of 1 in 2^x. So a > > > security checksum is inferior in performance (sometimes dramatically) > to > > an > > > error checksum. > > > > > > * Security checksums are expensive to compute (because they assume an > > > adversary) and so people tend to try to skip doing them. Error > checksums > > > are easy to compute. > > > > > > Currently the best answer is that for data transmission (e.g. TCP > > segments) > > > you need an error checksum. At a higher level you do the security > > checksum. > > > > > > Craig > > > > > > > > > -- > > > ***** > > > Craig Partridge's email account for professional society activities and > > > mailing lists. > > > -- > > > Internet-history mailing list > > > Internet-history at elists.isoc.org > > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. 
> -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > -- Please send any postal/overnight deliveries to: Vint Cerf 1435 Woodhurst Blvd McLean, VA 22102 703-448-0965 until further notice From craig at tereschau.net Thu Oct 1 09:54:25 2020 From: craig at tereschau.net (Craig Partridge) Date: Thu, 1 Oct 2020 10:54:25 -0600 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Message-ID: Actually it is Hammond. As far as I can tell (from digging through mounds of old papers [appropriate for an Internet History list!]), the paper that launched CRC-32 as *the* CRC to use was a study by Joseph L. Hammond, J.E. Brown and S.S. Liu, "Development of a transmission error model and an error control model," Georgia Tech report (as I recall, to AFRL) in 1975. Craig On Thu, Oct 1, 2020 at 10:42 AM Vint Cerf wrote: > presumably you meant Hamming not Hammond? > v > > > On Thu, Oct 1, 2020 at 11:05 AM Craig Partridge via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Hi John: >> >> Re: errors. The short answer is that cryptographic sums are designed to >> detect any mangling of data with the same probability. For error sums, >> you >> can tune the checksum to the error patterns actually seen. In my view, >> CRC-32 has done so well because Hammond did a really nice analysis for >> AFRL >> in the early 70s about what kinds of errors were likely on a link. Above >> the link layer, the indications are that most errors are in the computer >> logic of the interconnection devices, and so you see errors of runs of >> octets or 16-bit or 32-bit words. You also see clear cases of pointers >> being damaged. There are classes of checksums that detect those sorts of >> bursts really well but they are less good on single bit errors. >> >> Thanks! >> >> Craig >> >> On Thu, Oct 1, 2020 at 8:24 AM John Day wrote: >> >> > Craig, >> > This is interesting. You are right. >> > >> > But what I have been trying to find out is what kinds of ?errors? the >> > cryptographic hashes are design to catch? And what is their undetected >> bit >> > error rate? And it should be possible to design error codes for >> something >> > in between, right? >> > >> > I have always had this fear that we are not using these codes as they >> are >> > designed to be used and we are just lucky that the media is as reliable >> as >> > it is. (I always remember that back in the early ARPANET days, reading >> a >> > paper on the error rates and that line from Illinois to Utah had like 1 >> > error a month (or something outrageous like that) while the worst line >> was >> > Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the >> > Illinois/Utah was probably a short hop to Hinsdale and then microwave to >> > SLC, while the Rome/Cambridge went through multiple COs and old >> > equipment!) ;-) >> > >> > O, and isn?t this data archive naming problem you have noted the kind of >> > things that librarians and database people have a lot of experience >> with? 
>> > >> > Take care, >> > John >> > >> > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history < >> > internet-history at elists.isoc.org> wrote: >> > > >> > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch >> > wrote: >> > > >> > >> >> > >> >> > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history < >> > >> internet-history at elists.isoc.org> wrote: >> > >>> >> > >>> I've got some NSF funding to figure out what the error patterns are >> > >>> (nobody's capturing them) with the idea we might propose a new >> checksum >> > >>> and/or add checkpointing into the file transfer protocols. It is >> > little >> > >>> hard to add something on top of protocols that have a fail/discard >> > model. >> > >> >> > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. >> > >> >> > >> Why wouldn?t one (any one) of those suffice? >> > >> >> > > >> > > Actually no. These are security checksums, which are different from >> > error >> > > checksums. The key differences are: >> > > >> > > * Security checksums miss an error 1 in 2^x, where x is the width of >> the >> > > sum in bits. Error checksums (good ones) are designed to catch 100% >> of >> > the >> > > most common errors and miss other errors at a rate of 1 in 2^x. So a >> > > security checksum is inferior in performance (sometimes dramatically) >> to >> > an >> > > error checksum. >> > > >> > > * Security checksums are expensive to compute (because they assume an >> > > adversary) and so people tend to try to skip doing them. Error >> checksums >> > > are easy to compute. >> > > >> > > Currently the best answer is that for data transmission (e.g. TCP >> > segments) >> > > you need an error checksum. At a higher level you do the security >> > checksum. >> > > >> > > Craig >> > > >> > > >> > > -- >> > > ***** >> > > Craig Partridge's email account for professional society activities >> and >> > > mailing lists. >> > > -- >> > > Internet-history mailing list >> > > Internet-history at elists.isoc.org >> > > https://elists.isoc.org/mailman/listinfo/internet-history >> > >> > >> >> -- >> ***** >> Craig Partridge's email account for professional society activities and >> mailing lists. >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> > > > -- > Please send any postal/overnight deliveries to: > Vint Cerf > 1435 Woodhurst Blvd > McLean, VA 22102 > 703-448-0965 > > until further notice > > > > -- ***** Craig Partridge's email account for professional society activities and mailing lists. From vgcerf at gmail.com Thu Oct 1 09:59:15 2020 From: vgcerf at gmail.com (vinton cerf) Date: Thu, 1 Oct 2020 12:59:15 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <0A501269-0908-4C55-BEA3-F9A0E832F52B@strayalpha.com> <42111A47-1E92-4F3F-AF0E-42F17C1E881D@comcast.net> Message-ID: ah, thanks - i was thinking you were referencing Hamming distances... v On Thu, Oct 1, 2020 at 12:54 PM Craig Partridge via Internet-history < internet-history at elists.isoc.org> wrote: > Actually it is Hammond. As far as I can tell (from digging through mounds > of old papers [appropriate for an Internet History list!]), the paper that > launched CRC-32 as *the* CRC to use was a study by Joseph L. Hammond, J.E. > Brown and S.S. 
Liu, "Development of a transmission error model and an error > control model," Georgia Tech report (as I recall, to AFRL) in 1975. > > Craig > > On Thu, Oct 1, 2020 at 10:42 AM Vint Cerf wrote: > > > presumably you meant Hamming not Hammond? > > v > > > > > > On Thu, Oct 1, 2020 at 11:05 AM Craig Partridge via Internet-history < > > internet-history at elists.isoc.org> wrote: > > > >> Hi John: > >> > >> Re: errors. The short answer is that cryptographic sums are designed to > >> detect any mangling of data with the same probability. For error sums, > >> you > >> can tune the checksum to the error patterns actually seen. In my view, > >> CRC-32 has done so well because Hammond did a really nice analysis for > >> AFRL > >> in the early 70s about what kinds of errors were likely on a link. > Above > >> the link layer, the indications are that most errors are in the computer > >> logic of the interconnection devices, and so you see errors of runs of > >> octets or 16-bit or 32-bit words. You also see clear cases of pointers > >> being damaged. There are classes of checksums that detect those sorts > of > >> bursts really well but they are less good on single bit errors. > >> > >> Thanks! > >> > >> Craig > >> > >> On Thu, Oct 1, 2020 at 8:24 AM John Day wrote: > >> > >> > Craig, > >> > This is interesting. You are right. > >> > > >> > But what I have been trying to find out is what kinds of ?errors? the > >> > cryptographic hashes are design to catch? And what is their > undetected > >> bit > >> > error rate? And it should be possible to design error codes for > >> something > >> > in between, right? > >> > > >> > I have always had this fear that we are not using these codes as they > >> are > >> > designed to be used and we are just lucky that the media is as > reliable > >> as > >> > it is. (I always remember that back in the early ARPANET days, > reading > >> a > >> > paper on the error rates and that line from Illinois to Utah had like > 1 > >> > error a month (or something outrageous like that) while the worst line > >> was > >> > Rome, NY (Griffiths AFB) to Cambridge, MA! ;-) Of course the > >> > Illinois/Utah was probably a short hop to Hinsdale and then microwave > to > >> > SLC, while the Rome/Cambridge went through multiple COs and old > >> > equipment!) ;-) > >> > > >> > O, and isn?t this data archive naming problem you have noted the kind > of > >> > things that librarians and database people have a lot of experience > >> with? > >> > > >> > Take care, > >> > John > >> > > >> > > On Oct 1, 2020, at 09:50, Craig Partridge via Internet-history < > >> > internet-history at elists.isoc.org> wrote: > >> > > > >> > > On Wed, Sep 30, 2020 at 6:58 PM Joseph Touch > >> > wrote: > >> > > > >> > >> > >> > >> > >> > >>> On Sep 30, 2020, at 4:58 PM, Craig Partridge via Internet-history > < > >> > >> internet-history at elists.isoc.org> wrote: > >> > >>> > >> > >>> I've got some NSF funding to figure out what the error patterns > are > >> > >>> (nobody's capturing them) with the idea we might propose a new > >> checksum > >> > >>> and/or add checkpointing into the file transfer protocols. It is > >> > little > >> > >>> hard to add something on top of protocols that have a fail/discard > >> > model. > >> > >> > >> > >> We already have TCP-MD5, TCP-AO, TLS, and IPsec. > >> > >> > >> > >> Why wouldn?t one (any one) of those suffice? > >> > >> > >> > > > >> > > Actually no. These are security checksums, which are different from > >> > error > >> > > checksums. 
The key differences are: > >> > > > >> > > * Security checksums miss an error 1 in 2^x, where x is the width of > >> the > >> > > sum in bits. Error checksums (good ones) are designed to catch 100% > >> of > >> > the > >> > > most common errors and miss other errors at a rate of 1 in 2^x. So > a > >> > > security checksum is inferior in performance (sometimes > dramatically) > >> to > >> > an > >> > > error checksum. > >> > > > >> > > * Security checksums are expensive to compute (because they assume > an > >> > > adversary) and so people tend to try to skip doing them. Error > >> checksums > >> > > are easy to compute. > >> > > > >> > > Currently the best answer is that for data transmission (e.g. TCP > >> > segments) > >> > > you need an error checksum. At a higher level you do the security > >> > checksum. > >> > > > >> > > Craig > >> > > > >> > > > >> > > -- > >> > > ***** > >> > > Craig Partridge's email account for professional society activities > >> and > >> > > mailing lists. > >> > > -- > >> > > Internet-history mailing list > >> > > Internet-history at elists.isoc.org > >> > > https://elists.isoc.org/mailman/listinfo/internet-history > >> > > >> > > >> > >> -- > >> ***** > >> Craig Partridge's email account for professional society activities and > >> mailing lists. > >> -- > >> Internet-history mailing list > >> Internet-history at elists.isoc.org > >> https://elists.isoc.org/mailman/listinfo/internet-history > >> > > > > > > -- > > Please send any postal/overnight deliveries to: > > Vint Cerf > > 1435 Woodhurst Blvd > > McLean, VA 22102 > > 703-448-0965 > > > > until further notice > > > > > > > > > > -- > ***** > Craig Partridge's email account for professional society activities and > mailing lists. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From jnc at mercury.lcs.mit.edu Thu Oct 1 11:28:33 2020 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Thu, 1 Oct 2020 14:28:33 -0400 (EDT) Subject: [ih] FTP RIP Message-ID: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> > From: Jack Haverty > I lost my own packrat stash when I failed to find a way to move info > from Dectapes to a more modern medium. Oh, you didn't pitch them, did you? There are a couple of people in the classic computer community who have working DECTape drives. (I have a TU56 and TC11 controller, but don't have them working yet.) So if you still have them they could be read. Ditto for RK packs, etc, etc. > the message archives Noel has saved for almost 50 years. Err, I didn't save them for the whole 50 years! About 10 years ago, I noticed that stuff that _used_ to be available on the Web had started to disappear. (There was one particular list archive which the person hosting it had taken down because they had developed an objection to it. I can't remember which list it was now; it was something from the early commercialization of the Internet. Maybe something about email?) So I went out and scarfed up all the archives I could find for lists which I remembered as early and important, and which seemed to me to be in danger of going offline. (As in, hosted by individuals, not institutions.) The Internet Archive was, IIRC, a big help; I had old URLs for some things which weren't up anymore, but the IA came through. A lot has gone, though, sigh; e.g. 
the DARPA Internet group had a list, one whose archives would be invaluable to historians of technology, but I think they are gone (although if institutions still have backup tapes from that era, perhaps they could be recovered). Speaking of which, Lars has found a copy of the two earliest Hearer-Prople archives (the ones I'm missing) on ITS backup tapes at the MIT Archives, and I'll be working on getting them released so they can be put up. Thanks, Lars! Noel From mfidelman at meetinghouse.net Thu Oct 1 12:10:24 2020 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 1 Oct 2020 15:10:24 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> Message-ID: <0be19a31-1c4f-3c03-65f6-9866e43c2df9@meetinghouse.net> On 10/1/20 10:04 AM, Steve Crocker wrote: > On Thu, Oct 1, 2020 at 9:55 AM Miles Fidelman via Internet-history > > wrote: > > > For a long time, I've maintained that we need a new generation of > application layer protocols, for things like: > > ... > > But we could start by actually fixing things like calendaring - where > the protocols exist, but nobody seems to implement them well. > > > What do you have in mind that needs to be fixed re calendaring?? I > frequently have trouble with the calendaring.? Changes sometimes don't > propagate properly, and changes to recurring meetings get mangled. > Well, for one, Google stopped supporting standard protocols - they insist on syncing via their proprietary mechanisms. (It always amazes me that the cleanest, most reliable, most interoperable implementation is built into Microsoft Excange). For another - which may have to do with the varying implementations - it's really easy to have calendar entries duplicate, or to accept an invitation only to get messages like "not found" or, as happened to me yesterday, I moved an event from one calendar layer to another, and suddenly that propagated as a new invite to 246 people. Which leads to another problem - the fully distributed model leads to both privacy & traffic issues when you have large invitee lists (like for a webinar or something). And then, it would sure be nice to extend the current protocols to do things like:? Negotiate a time & venue, build & manage agendas, and so forth. > Some time ago I noticed there was a calendar working group, calisfy.? > I joined it because I wanted to suggest that in addition to the > details re formats, etc., the specification should also say something > about expected time to propagate changes.? There was zero response > within the WG.? I stopped paying close attention but I remained on the > mailing list.? The protocol seems extremely complicated and I would > not be surprised if the result from the WG turns out to be better but > nonetheless still broken in various ways. Likewise, the group seems to be mostly inactive, and mostly different big vendors who really don't want to do much. Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... 
unknown From mfidelman at meetinghouse.net Thu Oct 1 12:38:54 2020 From: mfidelman at meetinghouse.net (Miles Fidelman) Date: Thu, 1 Oct 2020 15:38:54 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: <9af4b91a-384e-b56b-aa41-524cd1bad599@dcrocker.net> References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> <10027.1601335832@hop.toad.com> <400.1601517591@hop.toad.com> <9af4b91a-384e-b56b-aa41-524cd1bad599@dcrocker.net> Message-ID: <1f9c8da7-221f-1584-3119-69b9ec95d5c1@meetinghouse.net> On 10/1/20 11:31 AM, Dave Crocker wrote: > On 10/1/2020 6:51 AM, Miles Fidelman via Internet-history wrote: >> But we could start by actually fixing things like calendaring - where >> the protocols exist, but nobody seems to implement them well. > > > This isn't a technical or standards issue. > > There's no obvious information that the existing specifications are > deficient. > > Ditto for instant messaging. > > And yet in both cases, we have a large number of operator-specific, > stove-piped services. > > The issue, here, is lack of coherent, strong market forces towards a > single, common capability. > > Operators have the strong incentive of user capture to motivate them > to stovepipe.? They only relinquish that control when they are forced > to.? By the market. (Or maybe by regulation, but good luck with that; > it didn't work very well for OSI.) > > Absent that market pressure -- users, customers, whomever -- no amount > or quality of specification work matters. > Agree completely. Ahh for the days when the Internet was about resource sharing, and the research community drove both connectivity & interoperability? As opposed to the return of walled gardens, driven by commercial pressures. Definitely a matter of technoeconomics & technopolitics at play. It does seem kind of bizarre that IBM & Microsoft are now the biggest supporters of interoperability (and to a degree, open source) - while Apple & Google have become engines of evil (or at least Babel). Miles -- In theory, there is no difference between theory and practice. In practice, there is. .... Yogi Berra Theory is when you know everything but nothing works. Practice is when everything works but no one knows why. In our lab, theory and practice are combined: nothing works and no one knows why. ... unknown From jack at 3kitty.org Thu Oct 1 13:51:45 2020 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 1 Oct 2020 13:51:45 -0700 Subject: [ih] FTP RIP In-Reply-To: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: Unfortunately, my Dectapes weren't stored very well, and succumbed to decades of summer attic heat and winter below-zero abuse.?? They eventually became brittle and crumbled.? The plastic cases were surprisingly robust; the tape itself not so much. I also have a request in to MIT for whatever they can find of my ancient ITS work.? I just got back a response that my request has been "closed" with no action, but that "we are working on our procedure for this collection and hope to have it in place soon." So I have to now create an account and re-submit my request.? Best to wait a bit for "soon" to pass I guess. Yes, there were a lot of mailing lists, as well as a lot of interaction among small groups of people not using any formal list at all. If you're a packrat, Lars is the premier Internet Dumpster Diver.? It will be interesting to see those first two archives. 
/Jack On 10/1/20 11:28 AM, Noel Chiappa via Internet-history wrote: > > From: Jack Haverty > > > I lost my own packrat stash when I failed to find a way to move info > > from Dectapes to a more modern medium. > > Oh, you didn't pitch them, did you? There are a couple of people in the > classic computer community who have working DECTape drives. (I have a TU56 > and TC11 controller, but don't have them working yet.) So if you still > have them they could be read. Ditto for RK packs, etc, etc. > > > the message archives Noel has saved for almost 50 years. > > Err, I didn't save them for the whole 50 years! About 10 years ago, I noticed > that stuff that _used_ to be available on the Web had started to disappear. > > (There was one particular list archive which the person hosting it had taken > down because they had developed an objection to it. I can't remember which > list it was now; it was something from the early commercialization of the > Internet. Maybe something about email?) > > So I went out and scarfed up all the archives I could find for lists which I > remembered as early and important, and which seemed to me to be in danger of > going offline. (As in, hosted by individuals, not institutions.) The Internet > Archive was, IIRC, a big help; I had old URLs for some things which weren't > up anymore, but the IA came through. > > A lot has gone, though, sigh; e.g. the DARPA Internet group had a list, one > whose archives would be invaluable to historians of technology, but I think > they are gone (although if institutions still have backup tapes from that > era, perhaps they could be recovered). > > Speaking of which, Lars has found a copy of the two earliest Hearer-Prople > archives (the ones I'm missing) on ITS backup tapes at the MIT Archives, and > I'll be working on getting them released so they can be put up. Thanks, Lars! > > Noel From geoff at iconia.com Thu Oct 1 14:31:04 2020 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Thu, 1 Oct 2020 11:31:04 -1000 Subject: [ih] FTP RIP In-Reply-To: References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: IIRC, ITS with its COMSAT mailer was the first ARPANET host to support mailing lists where one could send to MailingListName at MIT-{AI,DM,MC,ML} and the COMSAT MTA would then "automatically" sent out to others (without any human interaction for such memorable lists such as HUMAN-NETS, SF-LOVERS, HEADER-PEOPLE, TELECOM, etc.] TENEX -- which pretty much "ruled" the ARPANET at the time (:D) -- (never?) had no such capability... mailing lists like Peter Neumann's RISKS-FORUM which yours truly setup when at/in SRI-CSL and Einar Stefferud's MSGGROUP at USC-ISI collected submissions in a files only directory like & to which the list admin/moderator would then invoke an UI (like MSG) on to manually forward (or "ReDistribute" in HERMES) to the list members -- who were manually added or subtracted to a file. IIRC, this was pretty much the state of affairs until Unix (and MTA's such as delivermail, sendmail, MMDF, ...) came along for which majordomo was piped to, which then introduced automated list management... cue to: Mr. Email aka Dave Crocker.. :D // geoff On Thu, Oct 1, 2020 at 10:52 AM Jack Haverty via Internet-history < internet-history at elists.isoc.org> wrote: > Unfortunately, my Dectapes weren't stored very well, and succumbed to > decades of summer attic heat and winter below-zero abuse. They > eventually became brittle and crumbled. 
The plastic cases were > surprisingly robust; the tape itself not so much. > > I also have a request in to MIT for whatever they can find of my ancient > ITS work. I just got back a response that my request has been "closed" > with no action, but that "we are working on our procedure for this > collection and hope to have it in place soon." > > So I have to now create an account and re-submit my request. Best to > wait a bit for "soon" to pass I guess. > > Yes, there were a lot of mailing lists, as well as a lot of interaction > among small groups of people not using any formal list at all. > > If you're a packrat, Lars is the premier Internet Dumpster Diver. It > will be interesting to see those first two archives. > > /Jack > > > On 10/1/20 11:28 AM, Noel Chiappa via Internet-history wrote: > > > From: Jack Haverty > > > > > I lost my own packrat stash when I failed to find a way to move > info > > > from Dectapes to a more modern medium. > > > > Oh, you didn't pitch them, did you? There are a couple of people in the > > classic computer community who have working DECTape drives. (I have a > TU56 > > and TC11 controller, but don't have them working yet.) So if you still > > have them they could be read. Ditto for RK packs, etc, etc. > > > > > the message archives Noel has saved for almost 50 years. > > > > Err, I didn't save them for the whole 50 years! About 10 years ago, I > noticed > > that stuff that _used_ to be available on the Web had started to > disappear. > > > > (There was one particular list archive which the person hosting it had > taken > > down because they had developed an objection to it. I can't remember > which > > list it was now; it was something from the early commercialization of the > > Internet. Maybe something about email?) > > > > So I went out and scarfed up all the archives I could find for lists > which I > > remembered as early and important, and which seemed to me to be in > danger of > > going offline. (As in, hosted by individuals, not institutions.) The > Internet > > Archive was, IIRC, a big help; I had old URLs for some things which > weren't > > up anymore, but the IA came through. > > > > A lot has gone, though, sigh; e.g. the DARPA Internet group had a list, > one > > whose archives would be invaluable to historians of technology, but I > think > > they are gone (although if institutions still have backup tapes from that > > era, perhaps they could be recovered). > > > > Speaking of which, Lars has found a copy of the two earliest > Hearer-Prople > > archives (the ones I'm missing) on ITS backup tapes at the MIT Archives, > and > > I'll be working on getting them released so they can be put up. Thanks, > Lars! > > > > Noel > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > > -- Geoff.Goodfellow at iconia.com living as The Truth is True From dhc at dcrocker.net Thu Oct 1 14:37:53 2020 From: dhc at dcrocker.net (Dave Crocker) Date: Thu, 1 Oct 2020 14:37:53 -0700 Subject: [ih] FTP RIP In-Reply-To: References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: <1c7113bc-f902-0c08-d7be-d082bd8ee6f2@dcrocker.net> On 10/1/2020 2:31 PM, the keyboard of geoff goodfellow wrote: > IIRC, this was pretty much the state of affairs until Unix (and MTA's > such as?delivermail, sendmail, MMDF, ...) came along for which majordomo > was piped to, which then introduced automated list management... cue to: > Mr. Email aka Dave Crocker.. 
:D I certainly recall that Stef's list and Postel's RFC distribution mails were manual ^b inclusions to sndmsg. MMDF had a mailing list channel circa 1979, which I assume counts as late for this discussion. I don't remember whether sendmail's predecessor, delivermail, had such a capability, but 'alias' mechanisms date back pretty far and were often used as an implicit mailing list. d/ From stewart at serissa.com Thu Oct 1 15:22:23 2020 From: stewart at serissa.com (Lawrence Stewart) Date: Thu, 1 Oct 2020 18:22:23 -0400 Subject: [ih] error detection In-Reply-To: References: Message-ID: This is a fascinating discussion. There was a time, I think, when people thought the hardware link checksum was sufficient, and indeed the CRC-32 is much better than the TCP software checksum. Folks quickly realized that important parts of the overall system were not protected: datapaths inside adapters, memory bus transfers, bad memory in hosts and routers, etc. Consequently an end to end checksum is essential. Craig put succinctly the properties of CRC-32 (and of linear codes in general) - they detect 100% of single burst errors shorter than the checksum, and 1-2^-n of all other errors. The properties that make a good end-to-end checksum are a little different: * you'd like to detect all or nearly all the common types of errors, such as memory addressing errors, core clobbers, etc. * you'd like them to be modifiable if possible, so that a router can calculate a change to a checksum without recomputing the whole thing, possibly based on erroneous data * you'd like them to be very fast, so they can run at memory bandwidth speeds The latter requirement is a real problem, because we now have things like 100G interfaces and RDMA and zero copy data delivery to the end application. When exactly is the software going to pick up every byte? I think the attraction of error detecting codes over cryptographic hashes is a mistake. CPUs now include AES hardware, and it can be faster than any software alternative. Sure it doesn't catch every burst error less than the block size, but who cares? You get the additional benefits of protection against actual adversaries in addition to protection against random and (most) burst errors. In practice, all that is necessary is to push down the undetected error rate below the next most likely cause of trouble. Undetected disk read errors, for example, are around 10^-14 to 10^-16, which is the equivalent of about 48-50 bit CRCs. It seems likely that AES-computed hashes, at 2^-128, are not going to be a problem for a long time. (And this is why people with a lot of disks use end-to-end file checksums as well.) https://www.jandrewrogers.com/2019/03/06/aquahash/ says that several-year-old things like Skylake can do AES hashes at 15 bytes/cycle, which is impressively fast. Cryptographic hashes don't solve the modifiable issue, but I suspect their other benefits are more important. Other useful references: https://www.ieee802.org/3/hssg/public/nov07/gustlin_01_1107.pdf From jack at 3kitty.org Thu Oct 1 15:41:38 2020 From: jack at 3kitty.org (Jack Haverty) Date: Thu, 1 Oct 2020 15:41:38 -0700 Subject: [ih] FTP RIP In-Reply-To: References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: Actually, there were two mail systems in use at MIT on ITS at the time. KLH (Ken Harrenstein) wrote COMSAT for MIT-AI; I wrote COMSYS for MIT-DM. Both of these had mailing-list functionality.
If you look through those Header-people archives that Noel has collected, you'll find one message where I asked whoever was in charge of the MSGGROUP list to add "MSGGRP at MIT-DM".?? That was a list kept on COMSYS that reduced ARPANET traffic by sending each MSGGROUP message once to MIT-DM, and then locally to users on MIT-DM.? In networking terms, addresses could be "multicast" in nature. We played around a lot with mailing lists, and unearthed some issues.? For example, replies, redistribution, and forwarding couldn't detect that addressees were duplicated.?? So a reply to a message would go to the original message's author twice, once directly and once through the mailing list.?? That still happens today - Geoff and Dave should get two copies of this message. One especially gnarly problem was how do you detect, and then prevent, "routing loops" when someone creates a mailing list which inadvertently contains another mailing list.?? This was one of the motivations for putting the "Message-ID" field in the header standard.?? Message systems could then detect "looping" messages -- if the programmer wrote the appropriate code. It's been close to 50 years; I wonder what would happen today if... /Jack On 10/1/20 2:31 PM, the keyboard of geoff goodfellow wrote: > IIRC, ITS with its COMSAT mailer was the first ARPANET host to support > mailing lists where one could send to > MailingListName at MIT-{AI,DM,MC,ML} and the COMSAT MTA would then > "automatically" sent out to others (without any human interaction for > such memorable lists such as HUMAN-NETS, SF-LOVERS, HEADER-PEOPLE, > TELECOM, etc.] > > TENEX -- which pretty much "ruled" the ARPANET at the time (:D) -- > (never?) had no such capability... mailing lists like Peter Neumann's > RISKS-FORUM which yours truly setup when at/in SRI-CSL and Einar > Stefferud's?MSGGROUP?at USC-ISI collected submissions in a files only > directory like & to which the list admin/moderator > would then invoke an UI (like MSG) on to manually forward (or > "ReDistribute" in HERMES) to the list members -- who were manually > added or subtracted to a file. > > IIRC, this was pretty much the state of affairs until Unix (and MTA's > such as?delivermail, sendmail, MMDF, ...) came along for which > majordomo was piped to, which then introduced automated list > management... cue to: Mr. Email aka Dave Crocker.. :D > > // geoff > > On Thu, Oct 1, 2020 at 10:52 AM Jack Haverty via Internet-history > > wrote: > > Unfortunately, my Dectapes weren't stored very well, and succumbed to > decades of summer attic heat and winter below-zero abuse.?? They > eventually became brittle and crumbled.? The plastic cases were > surprisingly robust; the tape itself not so much. > > I also have a request in to MIT for whatever they can find of my > ancient > ITS work.? I just got back a response that my request has been > "closed" > with no action, but that "we are working on our procedure for this > collection and hope to have it in place soon." > > So I have to now create an account and re-submit my request.? Best to > wait a bit for "soon" to pass I guess. > > Yes, there were a lot of mailing lists, as well as a lot of > interaction > among small groups of people not using any formal list at all. > > If you're a packrat, Lars is the premier Internet Dumpster Diver.? It > will be interesting to see those first two archives. > > /Jack > > > On 10/1/20 11:28 AM, Noel Chiappa via Internet-history wrote: > >? ? ?> From: Jack Haverty > > > >? ? 
?> I lost my own packrat stash when I failed to find a way to > move info > >? ? ?> from Dectapes to a more modern medium. > > > > Oh, you didn't pitch them, did you? There are a couple of people > in the > > classic computer community who have working DECTape drives. (I > have a TU56 > > and TC11 controller, but don't have them working yet.) So if you > still > > have them they could be read. Ditto for RK packs, etc, etc. > > > >? ? ?> the message archives Noel has saved for almost 50 years. > > > > Err, I didn't save them for the whole 50 years! About 10 years > ago, I noticed > > that stuff that _used_ to be available on the Web had started to > disappear. > > > > (There was one particular list archive which the person hosting > it had taken > > down because they had developed an objection to it. I can't > remember which > > list it was now; it was something from the early > commercialization of the > > Internet. Maybe something about email?) > > > > So I went out and scarfed up all the archives I could find for > lists which I > > remembered as early and important, and which seemed to me to be > in danger of > > going offline. (As in, hosted by individuals, not institutions.) > The Internet > > Archive was, IIRC, a big help; I had old URLs for some things > which weren't > > up anymore, but the IA came through. > > > > A lot has gone, though, sigh; e.g. the DARPA Internet group had > a list, one > > whose archives would be invaluable to historians of technology, > but I think > > they are gone (although if institutions still have backup tapes > from that > > era, perhaps they could be recovered). > > > > Speaking of which, Lars has found a copy of the two earliest > Hearer-Prople > > archives (the ones I'm missing) on ITS backup tapes at the MIT > Archives, and > > I'll be working on getting them released so they can be put up. > Thanks, Lars! > > > >? ? ? ?Noel > > -- > Internet-history mailing list > Internet-history at elists.isoc.org > > https://elists.isoc.org/mailman/listinfo/internet-history > > > > -- > Geoff.Goodfellow at iconia.com > living as The Truth is True > > > From steve at shinkuro.com Thu Oct 1 16:02:07 2020 From: steve at shinkuro.com (Steve Crocker) Date: Thu, 1 Oct 2020 19:02:07 -0400 Subject: [ih] error detection In-Reply-To: References: Message-ID: On Thu, Oct 1, 2020 at 6:22 PM Lawrence Stewart via Internet-history < internet-history at elists.isoc.org> wrote: > > There was a time, I think that people thought the hardware link checksum > was sufficient, and indeed the CRC-32 is much better than the TCP software > checksum. > > Folks quickly realized that important parts of the overall system were not > protected. Datapath inside adapters, memory bus transfers, bad memory in > hosts and routers, etc. > > Consequently an end to end checksum is essential. It happened even more quickly than you're suggesting. In mid-February 1969 a few of us in the Network Working Group met with the BBN group for the first time. This was about six weeks after they had begun work on the IMP contract and 6-1/2 months before the first IMP was delivered to UCLA. We had a rough cut at the host-host protocol in mind. We intended to include a lightweight end-to-end checksum. Sixteen bit ones complement (end around carry) with one bit rotation every approximately thousand bits. Jeff Rulifson had argued this was good practice and might catch implementation errors at different layers. 
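[Editor's note: a minimal Python sketch of the kind of sum Steve describes -- a 16-bit ones-complement add with end-around carry plus a periodic one-bit rotation. The word size, the rotation interval (every 64 words, roughly a thousand bits), and the function name are assumptions for illustration; the 1969 proposal itself survives only in the prose above.]

def rotating_ones_complement_sum(data: bytes) -> int:
    # Illustrative sketch, not the 1969 specification: 16-bit
    # ones-complement sum with end-around carry, rotated left one bit
    # every 64 words (~1000 bits) so that blocks reassembled in the
    # wrong order change the result.
    if len(data) % 2:
        data += b"\x00"                                 # pad odd-length data (assumed)
    total = 0
    for i in range(0, len(data), 2):
        word = (data[i] << 8) | data[i + 1]
        total += word
        total = (total & 0xFFFF) + (total >> 16)        # end-around carry
        if ((i // 2) + 1) % 64 == 0:                    # every ~thousand bits
            total = ((total << 1) | (total >> 15)) & 0xFFFF   # rotate left one bit
    return total

[Both the add-with-carry and the rotation are cheap register operations, so the sum stays lightweight -- the point at issue in the exchange with Frank Heart recounted below.]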
The one bit rotation was a small wrinkle we included to catch possible errors in reassembly of packets into messages. We knew the communication between IMPs would be protected by 24 bit hardware checksums; we weren't trying to compete with that. Frank Heart reacted strongly. "You'll make my network look slow!" We went back and forth briefly. I pointed out the path between the host and IMP was not protected and asked how reliable it would be. "As reliable as your accumulator," insisted Heart. To my regret, we -- primarily I -- relented and didn't include a checksum in the original host-host protocol. Sure enough, there was a problem with the early host-IMP connection on the Lincoln TX-2 -- harmonic interference between the drum and the host-IMP interface when both were operating, I believe. I heard they had a devil of a time tracking it down. Steve > > > From touch at strayalpha.com Thu Oct 1 16:41:45 2020 From: touch at strayalpha.com (Joseph Touch) Date: Thu, 1 Oct 2020 16:41:45 -0700 Subject: [ih] error detection In-Reply-To: References: Message-ID: <77B76266-4A0D-49A3-A5F4-DFDCCA7F036B@strayalpha.com> > On Oct 1, 2020, at 3:22 PM, Lawrence Stewart via Internet-history wrote: > > The properties that make a good end-to-end checksum are a little different: Agreed, but... > > ... > * you'd like them to be modifiable if possible, so that a router can calculate a change to a checksum without recomputing the whole thing possibly based on erroneous data > ... this particular requirement undermines the core of the definition of E2E. Joe From johnl at iecc.com Thu Oct 1 16:53:46 2020 From: johnl at iecc.com (John Levine) Date: 1 Oct 2020 19:53:46 -0400 Subject: [ih] "how better protocols could solve those problems better" In-Reply-To: Message-ID: <20201001235346.BFDE322CD61C@ary.qy> In article you write: >What do you have in mind that needs to be fixed re calendaring? I >frequently have trouble with the calendaring. Changes sometimes don't >propagate properly, and changes to recurring meetings get mangled. Implementation quality. I wrote some calendar publishing scripts earlier this year so I could push meeting updates out to members of a group I run, and I was quite dismayed to find how different the client implementations were from what the spec says. R's, John From geoff at iconia.com Thu Oct 1 16:57:52 2020 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Thu, 1 Oct 2020 13:57:52 -1000 Subject: [ih] FTP RIP In-Reply-To: References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: jack, actually no vis-a-vis "So a reply to a message would go to the original message's author twice, once directly and once through the mailing list. That still happens today - Geoff and Dave should get two copies of this message." as it seems that the IH mailing list SW thinger at ISOC is Very Smart/Clever in this regard and seems to "know" (looking at and seeing the To: geoff at iconia.com in the header).
ERGO, yours truly only got ONE copy of your reply (directly from you and none from/via the list), viz.: Received: from atl4mhfb03.myregisteredsite.com ( atl4mhfb03.myregisteredsite.com [209.17.115.119]) by strange.networkguild.org (8.15.2/8.15.2/Debian-20) with ESMTPS id 091Mg9Rg029555 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Thu, 1 Oct 2020 18:42:09 -0400 Received: from jax4mhob21.registeredsite.com (jax4mhob21.registeredsite.com [64.69.218.109]) by atl4mhfb03.myregisteredsite.com (8.14.4/8.14.4) with ESMTP id 091Mg6Rq020259 for ; Thu, 1 Oct 2020 18:42:08 -0400 Received: from mymail.myregisteredsite.com ( jax4wmnode3b.mymail.myregisteredsite.com [209.237.134.215]) by jax4mhob21.registeredsite.com (8.14.4/8.14.4) with SMTP id 091MfdKh076417 for ; Thu, 1 Oct 2020 18:41:39 -0400 Received: (qmail 31835 invoked by uid 80); 1 Oct 2020 22:41:39 -0000 Received: from unknown (HELO ?192.168.1.100?) (jack at 3kitty.org@73.235.248.92 ) by 209.237.134.154 with ESMTPA; 1 Oct 2020 22:41:39 -0000 Subject: Re: [ih] FTP RIP To: the keyboard of geoff goodfellow Cc: Internet-history , Dave Crocker < dcrocker at bbiw.net> References: <20201001182833.3E66918C0EC at mercury.lcs.mit.edu> < b210b7bc-d443-6baa-8034-3ec345b817cf at 3kitty.org> < CAEf-zrhraJ1eZ3gJvPG4fx8jtO0s8vXXOpspRvEhw_DBzJkz7w at mail.gmail.com> From: Jack Haverty just as you should/will hopefully also only receive one copy with this reply directly from yours truly's smtp out host bottom.networkguild.org and not from/via the ISOC list server elists.isoc.org (just check your Received: line headers :D)... // geoff On Thu, Oct 1, 2020 at 12:42 PM Jack Haverty wrote: > Actually, there were two mail systems in use at MIT on ITS at the time. > KLH (Ken Harrenstein) wrote COMSAT for MIT-AI; I wrote COMSYS for MIT-DM. > > Both of these had mailing-list functionality. If you look through those > Header-people archives that Noel has collected, you'll find one message > where I asked whoever was in charge of the MSGGROUP list to add > "MSGGRP at MIT-DM". That was a list kept on COMSYS that reduced ARPANET > traffic by sending each MSGGROUP message once to MIT-DM, and then locally > to users on MIT-DM. In networking terms, addresses could be "multicast" in > nature. > > We played around a lot with mailing lists, and unearthed some issues. > > For example, replies, redistribution, and forwarding couldn't detect that > addressees were duplicated. So a reply to a message would go to the > original message's author twice, once directly and once through the mailing > list. That still happens today - Geoff and Dave should get two copies of > this message. > > One especially gnarly problem was how do you detect, and then prevent, > "routing loops" when someone creates a mailing list which inadvertently > contains another mailing list. This was one of the motivations for > putting the "Message-ID" field in the header standard. Message systems > could then detect "looping" messages -- if the programmer wrote the > appropriate code. > > It's been close to 50 years; I wonder what would happen today if... > > /Jack > > On 10/1/20 2:31 PM, the keyboard of geoff goodfellow wrote: > > IIRC, ITS with its COMSAT mailer was the first ARPANET host to support > mailing lists where one could send to MailingListName at MIT-{AI,DM,MC,ML} > and the COMSAT MTA would then "automatically" sent out to others (without > any human interaction for such memorable lists such as HUMAN-NETS, > SF-LOVERS, HEADER-PEOPLE, TELECOM, etc.] 
> > TENEX -- which pretty much "ruled" the ARPANET at the time (:D) -- > (never?) had no such capability... mailing lists like Peter Neumann's > RISKS-FORUM which yours truly setup when at/in SRI-CSL and Einar > Stefferud's MSGGROUP at USC-ISI collected submissions in a files only > directory like & to which the list admin/moderator would > then invoke an UI (like MSG) on to manually forward (or "ReDistribute" in > HERMES) to the list members -- who were manually added or subtracted to a > file. > > IIRC, this was pretty much the state of affairs until Unix (and MTA's such > as delivermail, sendmail, MMDF, ...) came along for which majordomo was > piped to, which then introduced automated list management... cue to: Mr. > Email aka Dave Crocker.. :D > > // geoff > > On Thu, Oct 1, 2020 at 10:52 AM Jack Haverty via Internet-history < > internet-history at elists.isoc.org> wrote: > >> Unfortunately, my Dectapes weren't stored very well, and succumbed to >> decades of summer attic heat and winter below-zero abuse. They >> eventually became brittle and crumbled. The plastic cases were >> surprisingly robust; the tape itself not so much. >> >> I also have a request in to MIT for whatever they can find of my ancient >> ITS work. I just got back a response that my request has been "closed" >> with no action, but that "we are working on our procedure for this >> collection and hope to have it in place soon." >> >> So I have to now create an account and re-submit my request. Best to >> wait a bit for "soon" to pass I guess. >> >> Yes, there were a lot of mailing lists, as well as a lot of interaction >> among small groups of people not using any formal list at all. >> >> If you're a packrat, Lars is the premier Internet Dumpster Diver. It >> will be interesting to see those first two archives. >> >> /Jack >> >> >> On 10/1/20 11:28 AM, Noel Chiappa via Internet-history wrote: >> > > From: Jack Haverty >> > >> > > I lost my own packrat stash when I failed to find a way to move >> info >> > > from Dectapes to a more modern medium. >> > >> > Oh, you didn't pitch them, did you? There are a couple of people in the >> > classic computer community who have working DECTape drives. (I have a >> TU56 >> > and TC11 controller, but don't have them working yet.) So if you still >> > have them they could be read. Ditto for RK packs, etc, etc. >> > >> > > the message archives Noel has saved for almost 50 years. >> > >> > Err, I didn't save them for the whole 50 years! About 10 years ago, I >> noticed >> > that stuff that _used_ to be available on the Web had started to >> disappear. >> > >> > (There was one particular list archive which the person hosting it had >> taken >> > down because they had developed an objection to it. I can't remember >> which >> > list it was now; it was something from the early commercialization of >> the >> > Internet. Maybe something about email?) >> > >> > So I went out and scarfed up all the archives I could find for lists >> which I >> > remembered as early and important, and which seemed to me to be in >> danger of >> > going offline. (As in, hosted by individuals, not institutions.) The >> Internet >> > Archive was, IIRC, a big help; I had old URLs for some things which >> weren't >> > up anymore, but the IA came through. >> > >> > A lot has gone, though, sigh; e.g. 
the DARPA Internet group had a list, >> one >> > whose archives would be invaluable to historians of technology, but I >> think >> > they are gone (although if institutions still have backup tapes from >> that >> > era, perhaps they could be recovered). >> > >> > Speaking of which, Lars has found a copy of the two earliest >> Hearer-Prople >> > archives (the ones I'm missing) on ITS backup tapes at the MIT >> Archives, and >> > I'll be working on getting them released so they can be put up. Thanks, >> Lars! >> > >> > Noel >> >> -- >> Internet-history mailing list >> Internet-history at elists.isoc.org >> https://elists.isoc.org/mailman/listinfo/internet-history >> >> > > -- > Geoff.Goodfellow at iconia.com > living as The Truth is True > > > > > -- Geoff.Goodfellow at iconia.com living as The Truth is True From karl at cavebear.com Thu Oct 1 17:17:09 2020 From: karl at cavebear.com (Karl Auerbach) Date: Thu, 1 Oct 2020 17:17:09 -0700 Subject: [ih] error detection In-Reply-To: References: Message-ID: <2c2c1737-5cc5-4bc9-b7ad-df9c2002acef@cavebear.com> I've hit the checksum issues in multiple directions: 1. I was at Sun when the "no parity on the S-Bus" hit us (i.e. it created slow, creeping damage to a source code repository) when an intermittent, undetected, bit flipping error hit one of our file server machines.? During those days UDP checksums on NFS were generally turned off (all zeros) to improve performance.? (Of course the human time lost do to this one event outweighed all the performance gains ever accumulated by this "optimization.") 2. John Romkey pointed out way back that the checksum does not check byte order reversals - which has cropped up when code written for one kind of big/little endian machine sent stuff to another machine with another notion of endian-ness.? The best way I saw this expressed was at the first Unix users conference (mid 1970's at Champaign/Urbana) where it was referred to as the "nuxi" problem - that's "unix" with the bytes swapped on console output. 3. Since I write code to test Internet protocols I've had to do a lot of checksum fixups when we alter packets in flight for the purpose of tickling potential weak spots in implementations).? It is amazing how hard it is to get ones complement stuff perfect on a twos complement machine.? How many RFCs are there on calculating, and incrementally calculating, the Internet checksum? 4. I did a bit of work with the ISO/OSI protocols.? They used a thing called the 32-bit Fletcher Checksum.? At first glance it looks like a horror involving an integer multiplication for every byte.? But it can be optimized so that the multiplications go away and it's roughly as efficient as the Internet checksum.? It does not have the byte order insensitivity of the Internet checksum.? I think that that checksum was in some of the alternatives that were proposed back in the "what will become IPv6" days - things like TUBA and UDP/TCP over CLNP, etc. Back at SDC Dave Kaufman and I pretty much concluded that any encrypted stuff had to be protected by some sort (and possibly imperfect) of integrity check.? We called 'em crypto checksums which has been supplanted by "message digest". I wonder - I am sure that we have all seen blotches in streaming video and strange noises on streaming audio - are those the results of simple gaps in the input flow to the rendering codecs or it the result of bad data being fed to those codecs? ??? 
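[Editor's note: the incremental checksum fixup Karl alludes to in point 3 -- and the "modifiable checksum" delta Lawrence Stewart describes in the next message -- can be sketched in a few lines of Python. This follows the RFC 1624 arithmetic, HC' = ~(~HC + ~m + m'); the function names and the framing in 16-bit fields are illustrative assumptions, not code from any of the posters.]

def ones_complement_add(a: int, b: int) -> int:
    # 16-bit ones-complement addition with end-around carry.
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def incremental_update(old_cksum: int, old_field: int, new_field: int) -> int:
    # Recompute an Internet-style checksum after changing one 16-bit
    # field (e.g. decrementing a TTL) without touching the rest of the
    # packet: HC' = ~(~HC + ~m + m') per RFC 1624.
    x = ones_complement_add(~old_cksum & 0xFFFF, ~old_field & 0xFFFF)
    x = ones_complement_add(x, new_field & 0xFFFF)
    return ~x & 0xFFFF

[Even this tiny fixup is easy to get wrong on a two's-complement machine; RFC 1624 exists largely because the earlier RFC 1141 formula mishandled a corner case, which speaks to Karl's question about how many RFCs the Internet checksum has needed.]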
--karl-- From stewart at serissa.com Thu Oct 1 17:46:17 2020 From: stewart at serissa.com (Lawrence Stewart) Date: Thu, 1 Oct 2020 20:46:17 -0400 Subject: [ih] error detection In-Reply-To: <77B76266-4A0D-49A3-A5F4-DFDCCA7F036B@strayalpha.com> References: <77B76266-4A0D-49A3-A5F4-DFDCCA7F036B@strayalpha.com> Message-ID: > On 2020, Oct 1, at 7:41 PM, Joseph Touch wrote: > > > >> On Oct 1, 2020, at 3:22 PM, Lawrence Stewart via Internet-history > wrote: >> >> The properties that make a good end-to-end checksum are a little different: > > Agreed, but... >> >> ... >> * you?d like them to be if possible, so that a router can calculate a change to a checksum without recomputing the whole thing possibly based on erroneous data >> ? > > > this particular requirement undermines the core of the definition of E2E. > > Joe The point of modifiable checksums is to preserve E2E. If you recompute the whole thing, you are never sure if the plaintext other than the modified part is still valid. With a modifiable checksum, if you have to change a hopcount or TTL or some such, you can compute the checksum delta arising from the change to the specific field. This preserves the E2E-ness of the checksum on the parts of the message you didn?t change. This can be done for any linear checksum, including CRCs, 1s-complement-add-and-cycle and others. If you do it this way you don?t need a separate header checksum, so it can save space as well. Since cryptographic hashes are nonlinear, the scheme doesn?t work for them. -L From touch at strayalpha.com Thu Oct 1 18:03:44 2020 From: touch at strayalpha.com (Joseph Touch) Date: Thu, 1 Oct 2020 18:03:44 -0700 Subject: [ih] error detection In-Reply-To: References: <77B76266-4A0D-49A3-A5F4-DFDCCA7F036B@strayalpha.com> Message-ID: > On Oct 1, 2020, at 5:46 PM, Lawrence Stewart wrote: > > > >> On 2020, Oct 1, at 7:41 PM, Joseph Touch > wrote: >> >> >> >>> On Oct 1, 2020, at 3:22 PM, Lawrence Stewart via Internet-history > wrote: >>> >>> The properties that make a good end-to-end checksum are a little different: >> >> Agreed, but... >>> >>> ... >>> * you?d like them to be if possible, so that a router can calculate a change to a checksum without recomputing the whole thing possibly based on erroneous data >>> ? >> >> >> this particular requirement undermines the core of the definition of E2E. >> >> Joe > > The point of modifiable checksums is to preserve E2E. If you recompute the whole thing, E2E doesn?t exist once you modify the packet. Period. Any attempt to mod the checksum based on what you think you?ve changed could be in error - and could mask an error somewhere else as a result. Joe From lars at nocrew.org Fri Oct 2 02:58:07 2020 From: lars at nocrew.org (Lars Brinkhoff) Date: Fri, 02 Oct 2020 09:58:07 +0000 Subject: [ih] FTP RIP In-Reply-To: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> (Noel Chiappa via Internet-history's message of "Thu, 1 Oct 2020 14:28:33 -0400 (EDT)") References: <20201001182833.3E66918C0EC@mercury.lcs.mit.edu> Message-ID: <7wblhlvtjk.fsf@junk.nocrew.org> Noel Chiappa wrote: > Lars has found a copy of the two earliest Hearer-Prople archives (the > ones I'm missing) on ITS backup tapes at the MIT Archives, and I'll be > working on getting them released so they can be put up. Thanks, Lars! I can't take credit. I was tipped off by someone else. Thank you, attentive person! Jack Haverty wrote: > So I have to now create an account and re-submit my request. Best to > wait a bit for "soon" to pass I guess. 
I don't know what's going on exactly, but I think the intent is that an account should be registered first and then the request can proceed. From johnl at iecc.com Fri Oct 2 10:24:49 2020 From: johnl at iecc.com (John Levine) Date: 2 Oct 2020 13:24:49 -0400 Subject: [ih] mailing list magic, was FTP RIP In-Reply-To: Message-ID: <20201002172450.00A0922D668F@ary.qy> In article you write: >as it seems that the IH mailing list SW thinger at ISOC is Very >Smart/Clever in this regard and seems to "know" (looking and seeing of To: >geoff at iconia.com in the header). > >ERGO, yours truly only got ONE copy of your reply (directly from you and >none from/via the list), viz.: Yeah, that's a standard mailman feature. I find it annoying and turn it off because my list mail and my direct mail go to different places. R's, John From jack at 3kitty.org Fri Oct 2 12:41:09 2020 From: jack at 3kitty.org (Jack Haverty) Date: Fri, 2 Oct 2020 12:41:09 -0700 Subject: [ih] mailing list magic, was FTP RIP In-Reply-To: <20201002172450.00A0922D668F@ary.qy> References: <20201002172450.00A0922D668F@ary.qy> Message-ID: That "very smart/clever" behavior explains a lot more.? It reveals why sometimes when you send a message to a list, and to yourself either intentionally or as a result of the way your "reply" behaves, you mistakenly conclude that the message never made it to the list, since it never made it back to you (from the list).? So you send it again.? And again.? And then ask if the list is working. My mail receiving program (Thunderbird, FYI) is set up to automatically categorize all incoming messages that arrive via a mailing list into a separate folder, which then serves as a long-term archive.? But of course it won't archive messages it never receives because of something "clever" being done out there in the wilds of the Internet.?? That's probably why my own archive of this mailing list has few messages that I sent. This kind of clever behavior makes it much more difficult to figure out what's wrong when something goes wrong, or even that it is wrong.?? Mail today often seems to get "lost", and it's not easy to figure out why with all of the "cleverness" in the system. I wouldn't use the term "standard" with such hacks.? They may look clever, but there are unintended consequences. Back in the 70s when we were experimenting with mailing lists and office automation, I recall discussions about the need to have mechanisms enabling mail servers/clients to provide such functionality in a standard and predictable way.?? For example, perhaps a mechanism and protocols for managing lists, which would include such things as a way for one server to ask another about the contents of a mailing list.? Or a network service that would reliably archive important material (the DataComputer then, now a NAS).?? Or a way to reliably identify addressees.?? And ways to balance security and privacy. There were a lot of questions and technical challenges of course.? That effort never got very far, and was placed on the back burner so that the interim "simple" email mechanism could be put in place. That was about 45 years ago.? There's been a lot of technical progress since then, e.g., in "distributed databases".? Perhaps it's time for a rework of the "simple" mail architecture.?? One for your list, John Gilmore.... 
/Jack Haverty On 10/2/20 10:24 AM, John Levine via Internet-history wrote: > In article you write: >> as it seems that the IH mailing list SW thinger at ISOC is Very >> Smart/Clever in this regard and seems to "know" (looking and seeing of To: >> geoff at iconia.com in the header). >> >> ERGO, yours truly only got ONE copy of your reply (directly from you and >> none from/via the list), viz.: > Yeah, that's a standard mailman feature. > > I find it annoying and turn it off because my list mail and my direct > mail go to different places. > > R's, > John From geoff at iconia.com Sun Oct 4 08:43:37 2020 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Sun, 4 Oct 2020 05:43:37 -1000 Subject: [ih] Digital pioneer Geoff Huston apologises for bringing the internet to Australia Message-ID: Huston says the internet is a 'gigantic vanity-reinforcing distorted TikTok selfie' and web security is 'the punchline to some demented sick joke'. But Australia's first Privacy Commissioner thinks he's being optimistic. [...] https://www.zdnet.com/article/digital-pioneer-geoff-huston-apologises-for-bringing-the-internet-to-australia/ -- Geoff.Goodfellow at iconia.com living as The Truth is True From agmalis at gmail.com Sun Oct 4 12:58:17 2020 From: agmalis at gmail.com (Andrew G. Malis) Date: Sun, 4 Oct 2020 15:58:17 -0400 Subject: [ih] Digital pioneer Geoff Huston apologises for bringing the internet to Australia In-Reply-To: References: Message-ID: Geoff, Thanks for forwarding. I've heard Geoff (H.) speak many times, and I can hear this in his own voice. Cheers, Andy On Sun, Oct 4, 2020 at 11:44 AM the keyboard of geoff goodfellow via Internet-history wrote: > Huston says the internet is a 'gigantic vanity-reinforcing distorted TikTok > selfie' and web security is 'the punchline to some demented sick joke'. But > Australia's first Privacy Commissioner thinks he's being optimistic. > [...] > > https://www.zdnet.com/article/digital-pioneer-geoff-huston-apologises-for-bringing-the-internet-to-australia/ > > -- > Geoff.Goodfellow at iconia.com > living as The Truth is True > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history > From geoff at iconia.com Sun Oct 4 13:35:07 2020 From: geoff at iconia.com (the keyboard of geoff goodfellow) Date: Sun, 4 Oct 2020 10:35:07 -1000 Subject: [ih] The Internet Crucible (was: Digital pioneer Geoff Huston apologises for bringing the internet to Australia) In-Reply-To: References: Message-ID: you're most welcome, Andy... and following on the "theme" of Geoff H's Thursday presentation at the NetThing internet governance conference, yours truly is/was kinda reminded of this Geoff's late 1980's presentation, er, eleemosynary publication: *The Internet Crucible.* The first *The Internet Crucible* went out in August 1989 and is summarily included below. Subsequent/follow on IC's went out in September 1989 and January & March 1990 and can be found at the *The Internet Crucible* archive at https://iconia.com/ic/ THE CRUCIBLE INTERNET EDITION August, 1989 Volume 1 : Issue 1 (reprint) In this issue: A Critical Analysis of the Internet Management Situation THE CRUCIBLE is a moderated forum for the discussion of Internet issues. Contributions received by the moderator are stripped of all identifying headers and signatures and forwarded to a panel of referees. Materials approved for publication will appear in THE CRUCIBLE without attribution. 
This policy encourages consideration of ideas solely on their intrinsic merit, free from the influences of authorship, funding sources and organizational affiliations. THE INTERNET CRUCIBLE is an eleemosynary publication of Geoff Goodfellow. Mail contributions to: crucible at fernwood.mpk.ca.us ------------------------------------------------------------------------------ A Critical Analysis of the Internet Management Situation: The Internet Lacks Governance ABSTRACT At its July 1989 meeting, the Internet Activities Board made some modifications in the management structure for the Internet. An outline of the new IAB structure was distributed to the Internet engineering community by Dr. Robert Braden, Executive Director. In part, the open letter stated: "These changes resulted from an appreciation of our successes, especially as reflected in the growth and vigor of the IETF, and in rueful acknowledgment of our failures (which I will not enumerate). Many on these lists are concerned with making the Internet architecture work in the real world." In this first issue of THE INTERNET CRUCIBLE we will focus on the failures and shortcomings in the Internet. Failures contain the lessons one often needs to achieve success. Success rarely leads to a search for new solutions. Recommendations are made for short and long term improvements to the Internet. A Brief History of Networking The Internet grew out of the early pioneering work on the ARPANET. This influence was more than technological, the Internet has also been significantly influenced by the economic basis of the ARPANET. The network resources of the ARPANET (and now Internet) are "free". There are no charges based on usage (unless your Internet connection is via an X.25 Public Data Network (PDN) in which case you're well endowed, or better be). Whether a site's Internet connection transfers 1 packet/day or a 1M packets/day, the "cost" is the same. Obviously, someone pays for the leased lines, router hardware, and the like, but this "someone" is, by and large, not the same "someone" who is sending the packets. In the context of the Research ARPANET, the "free use" paradigm was an appropriate strategy, and it has paid handsome dividends in the form of developing leading edge packet switching technologies. Unfortunately, there is a significant side-effect with both the management and technical ramifications of the current Internet paradigm: there is no accountability, in the formal sense of the word. In terms of management, it is difficult to determine who exactly is responsible for a particular component of the Internet. From a technical side, responsible engineering and efficiency has been replaced by the purchase of T1 links. Without an economic basis, further development of short-term Internet technology has been skewed. The most interesting innovations in Internet engineering over the last five years have occurred in resource poor, not resource rich, environments. Some of the best known examples of innovative Internet efficiency engineering are John Nagle's tiny-gram avoidance and ICMP source-quench mechanisms documented in RFC896, Van Jacobsen's slow-start algorithms and Phil Karn's retransmission timer method. In the Nagle, Jacobsen and Karn environments, it was not possible or cost effective to solve the performance and resource problems by simply adding more bandwidth -- some innovative engineering had to be done. Interestingly enough, their engineering had a dramatic impact on our understanding of core Internet technology. 
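[Editor's note: for readers who don't know the mechanisms just cited, here is a compressed Python sketch of the sender-side rules involved -- Jacobson's slow start / congestion avoidance and Karn's retransmission-timer backoff. The constants, the 64-second backoff cap, and the function names are illustrative assumptions; this paraphrases the well-known algorithms and is not code from the Crucible's authors. Nagle's tiny-gram avoidance (hold back a new small segment while earlier data is unacknowledged) is omitted for brevity.]

MSS = 1460  # assumed maximum segment size, in bytes

def on_ack(state: dict) -> None:
    # Jacobson's window growth: exponential while below ssthresh
    # (slow start), then roughly one segment per round trip
    # (congestion avoidance).
    if state["cwnd"] < state["ssthresh"]:
        state["cwnd"] += MSS
    else:
        state["cwnd"] += max(MSS * MSS // state["cwnd"], 1)

def on_retransmit_timeout(state: dict) -> None:
    # Loss response plus Karn's timer rules: halve the threshold,
    # restart slow start, back off the retransmission timer
    # exponentially, and (not shown) take no RTT sample from a
    # retransmitted segment.
    state["ssthresh"] = max(state["cwnd"] // 2, 2 * MSS)
    state["cwnd"] = MSS
    state["rto"] = min(state["rto"] * 2, 64.0)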
It should be noted that highly efficient networks are important when dealing with technologies such as radio where there is a finite amount of bandwidth/spectrum to be had. As in the Nagle, Jacobsen and Karn cases, there are many environments where adding another T1 link can not be used to solve the problem. Unless innovation continues in Internet technology, our less than optimal protocols will perform poorly in bandwidth or resource constrained environments. Developing at roughly the same time as Internet technology have been the "cost-sensitive" technologies and services, such as the various X.25-based PDNs, the UUCP and CSNET dial-up networks. These technologies are all based on the notion that bandwidth costs money and the subscriber pays for the resources used. This has the notable effect of focusing innovation to control costs and maximize efficiency of available resources and bandwidth. Higher efficiency is achieved by concentrating on sending the most amount of information through the pipe in the most efficient manner thereby making the best use of available bandwidth/cost ratio. For example, bandwidth conservation in the UUCP dial-up network has multiplied by leaps and bounds in the modem market with the innovation of Paul Baran's (the grandfather of packet switching technology) company, Telebit, which manufactures a 19.2KB dial-up modem especially optimized for UUCP and other well known transfer protocols. For another example, although strictly line-at-a-time terminal sessions are less "user friendly" than character-oriented sessions, they make for highly efficient use of X.25 PDN network resources with echoing and editing performed locally on the PAD. While few would argue the superiority of X.25 and dial-up CSNET and UUCP, these technologies have proved themselves both to spur innovation and to be accountable. The subscribers to such services appreciate the cost of the services they use, and often such costs form a well-known "line item" in the subscriber's annual budget. Nevertheless, the Internet suite of protocols are eminently successful, based solely on the sheer size and rate of growth of both the Internet and the numerous private internets, both domestically and internationally. You can purchase internet technology with a major credit card from a mail order catalog. Internet technology has achieved the promise of Open Systems, probably a decade before OSI will be able to do so. Failures of the Internet The evolution and growth of Internet technology have provided the basis for several failures. We think it is important to examine failures in detail, so as to learn from them. History often tends to repeat itself. Failure 1:- Network Nonmanagement The question of responsibility in todays proliferated Internet is completely open. For the last three years, the Internet has been suffering from non-management. While few would argue that a centralized czar is necessary (or possible) for the Internet, the fact remains there is little to be done today besides finger-pointing when a problem arises. In the NSFNET, MERIT is incharge of the backbone and each regional network provider is responsible for its respective area. However, trying to debug a networking problem across lines of responsibility, such as intermittent connectivity, is problematic at best. Consider three all too true refrains actually heard from NOC personal at the helm: "You can't ftp from x to y? Try again tomorrow, it will probably work then." 
"If you are not satisfied with the level of [network] service you are receiving you may have it disconnected." "The routers for network x are out of table space for routes, which is why hosts on that network can't reach your new (three-month old) network. We don't know when the routers will be upgraded, but it probably won't be for another year." One might argue that the recent restructuring of the IAB may work towards bringing the Internet under control and Dr. Vinton G. Cerf's recent involvement is a step in the right direction. Unfortunately, from a historical perspective, the new IAB structure is not likely to be successful in achieving a solution. Now the IAB has two task forces, the Internet Research Task Force (IRTF) and the Internet Engineering Task Force (IETF). The IRTF, responsible for long-term Internet research, is largely composed of the various task forces which used to sit at the IAB level. The IETF, responsible for the solution of short-term Internet problems, has retained its composition. The IETF is a voluntary organization and its members participate out of self interest only. The IETF has had past difficulties in solving some of the Internet's problems (i.e., it has taken the IETF well over a year to not yet produce RFCs for either a Point-To-Point Serial Line IP or Network Management enhancements). It is unlikely that the IETF has the resources to mount a concerted attack against the problems of today's ever expanding Internet. As one IETF old-timer put it: "No one's paid to go do these things, I don't see why they (the IETF management) think they can tell us what to do" and "No one is paying me, why should I be thinking about these things?" Even if the IETF had the technical resources, many of the Internet's problems are also due to lack of "hands on" management. The IETF, o Bites off more than it can chew; o Sometimes fails to understand a problem before making a solution; o Attempts to solve political/marketing problems with technical solutions; o Has very little actual power. The IETF has repeatedly demonstrated the lack of focus necessary to complete engineering tasks in a timely fashion. Further, the IRTF is chartered to look at problems on the five-year horizon, so they are out of the line of responsibility. Finally, the IAB, per se, is not situated to resolve these problems as they are inherent to the current structure of nonaccountability. During this crisis of non-management, the Internet has evolved into a patch quilt of interconnected networks that depend on lots of seat-of-the-pants flying to keep interoperating. It is not an unusual occurrence for an entire partition of the Internet to remain disconnected for a week because the person responsible for a key connection went on vacation and no one else knew how to fix it. This situation is but one example of an endemic problem of the global Internet. Failure 2:- Network Management The current fury over network management protocols for TCP/IP is but a microcosm of the greater Internet vs. OSI debate going on in the marketplace. While everyone in the market says they want OSI, anyone planning on getting any work done today buys Internet technology. So it is with network management, the old IAB made the CMOT an Internet standard despite the lack of a single implementation, while the only non-proprietary network management protocol in use in the Internet is the SNMP. 
The dual network management standardization blessings will no doubt have the effect of confusing end-users of Internet technology--making it appear there are two choices for network management, although only one choice, the SNMP has been implemented. The CMOT choice isn't implemented, doesn't work, or isn't interoperable. To compound matters, after spending a year trying to achieve consensus on the successor to the current Internet standard SMI/MIB, the MIB working group was disbanded without ever producing anything: the political climate prevented them from resolving the matter. (Many congratulatory notes were sent to the chair of the group thanking him for his time. This is an interesting new trend for the Internet--congratulating ourselves on our failures.) Since a common SMI/MIB could not be advanced, an attempt was made to de-couple the SNMP and the CMOT (RFC1109). The likely result of RFC1109 will be that the SNMP camp will continue to refine their experience towards workable network management systems, whilst the CMOT camp will continue the never-ending journey of tracking OSI while producing demo systems for trade shows exhibitions. Unfortunately the end-user will remain ever confused because of the IAB's controversial (and technically questionable) decision to elevate the CMOT prior to implementation. While the network management problem is probably too large for the SNMP camp to solve by themselves they seem to be the only people who are making any forward progress. Failure 3:- Bandwidth Waste Both the national and regional backbone providers are fascinated with T1 (and now T3) as the solution towards resource problems. T1/T3 seems to have become the Internet panacea of the late 80's. You never hear anything from the backbone providers about work being done to get hosts to implement the latest performance/congestion refinements to IP, TCP, or above. Instead, you hear about additional T1 links and plans for T3 links. While T1 links certainly have more "sex and sizzle" than efficient technology developments like slow-start, tiny gram avoidance and line mode telnet, the majority of users on the Internet will probably get much more benefit from properly behaving hosts running over a stable backbone than the current situation of misbehaving and semi-behaved hosts over an intermittent catenet. Failure 4:- Routing The biggest problem with routing today is that we are still using phase I (ARPANET) technology, namely EGP. The EGP is playing the role of routing glue in providing the coupling between the regional IGP and the backbone routing information. It was designed to only accommodate a single point of attachment to the catenet (which was all DCA could afford with the PSNs). However with lower line costs, one can build a reasonably inexpensive network using redundant links. However the EGP does not provide enough information nor does the model it is based upon support multiple connections between autonomous systems. Work is progressing in the Interconnectivity WG of the IETF to replace EGP. They are in the process of redefining the model to solve some of the current needs. BGP or the Border Gateway Protocol (RFC1105) is an attempt to codify some of the ideas the group is working on. Other problems with routing are caused by regionals wanting a backdoor connection to another regional directly. These connections require some sort of interface between the two routing systems. These interfaces are built by hand to avoid routing loops. 
Loops can be caused when information sent into one regional network is sent back towards the source. If the source doesn't recognize the information as its own, packets can flow until their time to live field expires. Routing problems are caused by the interior routing protocol or IGP. This is the routing protocol which is used by the regionals to pass information to and from its users. The users themselves can use a different IGP than the regional. Depending on the number of connections a user has to the regional network, routing loops can be an issue. Some regionals pass around information about all known networks in the entire catenet to their users. This information deluge is a problem with some IGPs. Newer IGPs such as the new OSPF from the IETF and IGRP from cisco attempt to provide some information hiding by adding hierarchy. OSPF is the internets first attempt at using a Dykstra type algorithm as an IGP. BBN uses it to route between their packet switch nodes below the 1822 or X.25 layer. Unstable routing is caused by hardware or hosts software. Older BSD software sets the TTL field in the IP header to a small number. The Internet today is growing and its diameter has exceed the software's ability to reach the other side. This problem is easily fixed by knowledgeable systems people, but one must be aware of the problem before they can fix it. Routing problems are also perceived when in fact a serial line problem or hardware problem is the real cause. If a serial line is intermittent or quickly cycles from the up state into the down state and back again, routing information will not be supplied in a uniform or smooth manner. Most current IGPs are Bellman-Ford based and employ some stabilizing techniques to stem the flow of routing oscillations due to "flapping" lines. Often when a route to a network disappears, it may take several seconds for it to reappear. This can occur at the source router who waits for the route to "decay" from the system. This pause should be short enough so that active connections persist but long enough that all routers in the routing system "forget" about routes to that network. Older host software with over-active TCP retransmission timers will time out connections instead of persevering in the face of this problem. Also routers, according to RFC1009, must be able to send ICMP unreachables when a packet is sent to a route which is not present in its routing database. Some host products on the market close down connections when a single ICMP reachable is received. This bug flies in the face of the Internet parable "be generous in what you accept and rigorous in what you send". Many of the perceived routing problems are really complex multiple interactions of differing products. Causes of the Failures The Internet failures and shortcomings can be traced to several sources: First and foremost, there is little or no incentive for efficiency and/or economy in the current Internet. As a direct result, the resources of the Internet and its components are limited by factors other than economics. When resources wear thin, congestion and poor performance result. There is little to no incentive to make things better, if 1 packet out of 10 gets through things "sort of work". It would appear that Internet technology has found a loophole in the "Tragedy of The Commons" allegory--things get progressively worse and worse, but eventually something does get through. The research community is interested in technology and not economics, efficiency or free-markets. 
While this tack has produced the Internet suite of protocols, the de facto International Standard for Open Systems, it has also created an atmosphere of intense in-breeding which is overly sensitive to criticism and quite hardened against outside influence. Meanwhile, the outside world goes on about developing economically viable and efficient networking technology without the benefit of direct participation on the part of the Internet. The research community also appears to be spending a lot of its time trying to hang onto the diminishing number of research dollars available to it (one problem of being a successful researcher is eventually your sponsors want you to be successful in other things). Despite this, the research community actively shuns foreign technology (e.g., OSI), but, inexplicably has not recently produced much innovation in new Internet technology. There is also a dearth of new and nifty innovative applications on the Internet. Business as usual on the Internet is mostly FTP, SMTP and Telnet or Rlogin as it has been for many years. The most interesting example of a distributed application on the Internet today is the Domain Name System, which is essentially an administrative facility, not an end-user service. The engineering community must receive equal blame in these matters. While there have been some successes on the part of the engineering community, such as those by Nagel, Jacobsen and Karn mentioned above, the output of the IETF, namely RFCs and corresponding implementations, has been surprisingly low over its lifetime. Finally, the Internet has become increasingly dependent on vendors for providing implementations of Internet technology. While this is no doubt beneficial in the long-term, the vendor community, rather than investing "real" resources when building these products, do little more than shrink-wrap code written primarily by research assistants at universities. This has lead to cataclysmic consequences (e.g., the Internet worm incident, where Sendmail with "debug" command and all was packaged and delivered to customers without proper consideration). Of course, when problems are found and fixed (either by the vendor's customers or software sources), the time to market with these fixes is commonly a year or longer. Thus, while vendors are vital to the long-term success of Internet technology, they certainly don't receive high marks in the short-term. Recommendations Short-term solutions (should happen by year's end): In terms of hardware, the vendor community has advanced to the point where the existing special-purpose technologies (Butterfly, NSSs) can be replaced by off-the-shelf routers at far less cost and with superior throughput and reliability. Obvious candidates for upgrade are both the NSFNET and ARPANET backbones. Given the extended unreliability of the mailbridges, the ARPA core is an immediate candidate (even though the days of net 10 are numbered). In terms of software, ALL devices in the Internet must be network manageable. This is becoming ever more critical when problems must be resolved. Since SNMP is the only open network management protocol functioning in the Internet, all devices must support SNMP and the Internet standard SMI and MIB. Host implementations must be made to support the not-so-recent TCP enhancements (e.g., those by Nagle, Jacobsen and Karn) and the more recent linemode TELNET. 
The national and regional providers must coordinate to share network management information and tools so that user problems can be dealt with in a predictable and timely fashion. Network management tools are a big help, but without the proper personnel support above this, the benefits can not be fully leveraged. The Internet needs leadership and hands-on guidance. No one is seemingly in charge today, and the people who actually care about the net are pressed into continually fighting the small, immediate problems. Long-term solutions: To promote network efficiency and a free-market system for the delivery of Internet services, it is proposed to switch the method by which the network itself is supported. Rather than a top-down approach where the money goes from funding agencies to the national backbone or regional providers, it is suggested the money go directly to end-users (campuses) who can then select from among the network service providers which among them best satisfies their needs and costs. This is a strict economic model: by playing with the full set of the laws of economics, a lot of the second-order problems of the Internet, both present and on the horizon, can be brought to heel. The Internet is no longer a research vehicle, it is a vibrant production facility. It is time to acknowledge this by using a realistic economic model in the delivery of Internet services to the community (member base). When Internet sites can vote with their pocketbooks, some new regionals will be formed; some, those which are non-performant or uncompetitive, will go away; and, the existing successful ones will grow. The existing regionals will then be able to use their economic power, as any consumer would, to ensure that the service providers (e.g., the national backbone providers) offer responsive service at reasonable prices. "The Market" is a powerful forcing function: it will be in the best interests of the national and regional providers to innovate, so as to be more competitive. Further, such a scheme would also allow the traditional telecommunications providers a means for becoming more involved in the Internet, thus allowing cross-leverage of technologies and experience. The transition from top-down to economic model must be handled carefully, but this is exactly the kind of statesmanship that the Internet should expect from its leadership. ------- On Sun, Oct 4, 2020 at 9:58 AM* Andrew G. Malis > wrote*: > Geoff, > > Thanks for forwarding. I've heard Geoff (H.) speak many times, and I can > hear this in his own voice. > > Cheers, > Andy > > > On Sun, Oct 4, 2020 at 11:44 AM the keyboard of geoff goodfellow via > Internet-history wrote: > >> Huston says the internet is a 'gigantic vanity-reinforcing distorted >> TikTok >> selfie' and web security is 'the punchline to some demented sick joke'. >> But >> Australia's first Privacy Commissioner thinks he's being optimistic. >> [...] >> >> https://www.zdnet.com/article/digital-pioneer-geoff-huston-apologises-for-bringing-the-internet-to-australia/ >> >> -- Geoff.Goodfellow at iconia.com living as The Truth is True From sghuter at nsrc.org Thu Oct 8 12:15:43 2020 From: sghuter at nsrc.org (Steven G. Huter) Date: Thu, 8 Oct 2020 12:15:43 -0700 (PDT) Subject: [ih] FTP RIP In-Reply-To: References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com> <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de> Message-ID: On Tue, 29 Sep 2020, Brian E Carpenter via Internet-history wrote: > I don't know to what extent GridFTP is still used. 
Hello Brian

Open Science Grid published a document in December 2019
outlining a timeline to phase it out over the next two years and
move to HTTP-based tools.

https://opensciencegrid.org/technology/policy/gridftp-gsi-migration/

Steve

From surfer at mauigateway.com  Thu Oct 8 15:31:52 2020
From: surfer at mauigateway.com (scott weeks)
Date: Thu, 8 Oct 2020 12:31:52 -1000
Subject: [ih] FTP RIP
In-Reply-To:
References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com>
 <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de>
Message-ID:

On 10/8/20 9:15 AM, Steven G. Huter via Internet-history wrote:

> Open Science Grid published a document in December 2019
> outlining a timeline to phase it out over the next two years and
> move to HTTP-based tools.
---------------------------------------------------------


I sure hope they fail at that. I use FTP all the time! For example,
I do this daily to store my ISP-assigned DHCP IP address. I am not
worried about security and the ISP will not support SCP/SFTP:

#!/usr/local/bin/python

from ftplib import FTP

# Log in to the ISP's server, change to the web directory, and
# upload ip.txt in line mode.
with FTP(host='isp-server', user='me', passwd='my-passwd') as ftp:
    ftp.cwd('www')
    with open('ip.txt', 'rb') as text_file:
        ftp.storlines('STOR ip.txt', text_file)

From surfer at mauigateway.com  Thu Oct 8 15:33:40 2020
From: surfer at mauigateway.com (scott weeks)
Date: Thu, 8 Oct 2020 12:33:40 -1000
Subject: [ih] FTP RIP
In-Reply-To:
References: <03461BCD-99D9-4806-B44B-B8F116A1F81C@strayalpha.com>
 <20200928123317.GK3141@faui48f.informatik.uni-erlangen.de>
Message-ID: <40986142-f876-fc73-4d1f-6691eefbcd87@mauigateway.com>

My apologies, I misunderstood the email. Please ignore me and I will
head to the coffee pot...

scott

On 10/8/20 12:31 PM, scott weeks wrote:
>
>
> On 10/8/20 9:15 AM, Steven G. Huter via Internet-history wrote:
>
>> Open Science Grid published a document in December 2019 outlining a
>> timeline to phase it out over the next two years and move to
>> HTTP-based tools.
> ---------------------------------------------------------
>
>
>
> I sure hope they fail at that. I use FTP all the time! For example,
> I do this daily to store my ISP-assigned DHCP IP address. I am
> not worried about security and the ISP will not support SCP/SFTP:
>
>
> #!/usr/local/bin/python
>
> from ftplib import FTP
>
> with FTP(host='isp-server', user='me', passwd='my-passwd') as ftp:
>
>     ftp.cwd('www')
>
>     with open('ip.txt', 'rb') as text_file:
>
>         ftp.storlines('STOR ip.txt', text_file)
>
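For comparison with the ftplib script in this thread, here is a minimal
sketch of the kind of HTTP-based transfer the Open Science Grid migration
above points toward: uploading the same ip.txt over HTTPS using only the
Python standard library. The URL, the PUT method, and the 'me:my-passwd'
credentials are placeholder assumptions; a given server might instead
expect a POST, a token, or WebDAV.

#!/usr/bin/env python3
# Hypothetical HTTPS counterpart to the ftplib script above; all names
# and credentials below are placeholders, not taken from the thread.
import base64
import urllib.request

URL = 'https://isp-server.example/www/ip.txt'   # assumed upload endpoint

with open('ip.txt', 'rb') as f:
    body = f.read()

req = urllib.request.Request(URL, data=body, method='PUT')
req.add_header('Content-Type', 'text/plain')
# Basic auth, assuming the server accepts it over TLS.
token = base64.b64encode(b'me:my-passwd').decode('ascii')
req.add_header('Authorization', 'Basic ' + token)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)   # e.g. 201 Created or 204 No Content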
From geoff at iconia.com  Sun Oct 11 11:19:57 2020
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Sun, 11 Oct 2020 08:19:57 -1000
Subject: [ih] Frontier's Bankruptcy Shows Why ISPs Shouldn't Be in Charge
 of the Internet
Message-ID:

https://ip.topicbox.com/groups/ip/T60110fae535afc47/frontier-s-bankruptcy-shows-why-isps-shouldn-t-be-in-charge-of-the-internet

--
Geoff.Goodfellow at iconia.com
living as The Truth is True

From geoff at iconia.com  Sun Oct 11 11:25:25 2020
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Sun, 11 Oct 2020 08:25:25 -1000
Subject: [ih] Five Eyes and Japan call for Facebook backdoor to monitor crime
In-Reply-To:
References:
Message-ID:

*Security alliance worries encrypted messaging apps can be used by bad
actors*

EXCERPT:

Japan has joined countries in the Five Eyes security alliance in a call for
Facebook to review its encryption practices over concerns the company's
messaging apps will become tools for terrorists and child traffickers,
Nikkei has learned.

Currently, Facebook encrypts the contents of messages exchanged between the
sender and receiver so that no one else -- including Facebook itself -- can
see them. While this technology serves to protect users' privacy, it also
makes it impossible for the company to provide authorities with information
related to crimes.

The Five Eyes countries -- the U.S., the U.K., Australia, Canada and New
Zealand -- plus Japan and India issued on Sunday a joint statement to press
Facebook to change its encryption technology on Messenger and WhatsApp. In
the statement, the countries say they understand the importance of
protecting privacy, but say Facebook should seek a way to balance privacy
and security concerns. It is expected that the countries will ask Facebook
to introduce a backdoor that allows it to decrypt in case of an emergency.

The decision for Tokyo to join the call comes as it seeks closer ties with
the alliance. The Five Eyes has also engaged Japan as it seeks to share
confidential information in response to China's growing military
expansion...

[...]

https://asia.nikkei.com/Business/Technology/Five-Eyes-and-Japan-call-for-Facebook-backdoor-to-monitor-crime

--
Geoff.Goodfellow at iconia.com
living as The Truth is True

From geoff at iconia.com  Sun Oct 11 11:32:19 2020
From: geoff at iconia.com (the keyboard of geoff goodfellow)
Date: Sun, 11 Oct 2020 08:32:19 -1000
Subject: [ih] Orders from the Top: The EU's Timetable for Dismantling
 End-to-End Encryption
In-Reply-To:
References:
Message-ID:

EXCERPT:

The last few months have seen a steady stream of proposals, encouraged by
the advocacy of the FBI and Department of Justice, to provide "lawful
access" to end-to-end encrypted services in the United States. Now lobbying
has moved from the U.S., where Congress has been largely paralyzed by the
nation's polarization problems, to the European Union, where advocates for
anti-encryption laws hope to have a smoother ride. A series of leaked
documents from the EU's highest institutions show a blueprint for how they
intend to make that happen, with the apparent intention of presenting
anti-encryption law to the European Parliament within the next year.
The public signs of this shift in the EU, which until now has been largely
supportive of privacy-protecting technologies like end-to-end encryption,
began in June with a speech by Ylva Johansson, the EU's Commissioner for
Home Affairs. Speaking at a webinar on "Preventing and combating child
sexual abuse [and] exploitation", Johansson called for a "technical
solution" to what she described as the "problem" of encryption, and
announced that her office had initiated "a special group of experts from
academia, government, civil society and business to find ways of detecting
and reporting encrypted child sexual abuse material."

The report was subsequently leaked to Politico. It includes a laundry list
of tortuous ways to achieve the impossible: allowing government access to
encrypted data, without somehow breaking encryption...

[...]

https://www.eff.org/deeplinks/2020/10/orders-top-eus-timetable-dismantling-end-end-encryption

--
Geoff.Goodfellow at iconia.com
living as The Truth is True