[ih] "how better protocols could solve those problems better"

Craig Partridge craig at tereschau.net
Wed Sep 30 16:49:19 PDT 2020


On Mon, Sep 28, 2020 at 5:31 PM John Gilmore via Internet-history <
internet-history at elists.isoc.org> wrote:

>
> Or, perhaps a simpler question, where are the pain points in current
> protocols, even if no replacements are on the horizon?
>

That's a great question, so I'll take a crack at part of it.

From where I sit, I'm seeing the current protocols in pain related to Big
Data.  In particular, I'm seeing two pain points:

* Naming and organizing big data.  We are generating big data in many areas
faster than we can name it.  And by "name" I don't simply mean giving
something a filename but creating an environment to find that name,
including the right metadata, and storing the data in places folks can
easily retrieve it.  You can probably throw archiving into that too (when
should data with this name be kept or discarded over time?).  What good are
FTP, SCP, or HTTPS if you can't find or retrieve the data?

* We are reaching the end of the TCP checksum's useful life.  It is a weak
16-bit checksum (by "weak" I mean that, in some cases, errors get past at a
rate greater than 1 in 2^16), and on big data transfers (gigabytes and
larger) in some parts of the Internet, errors are slipping through.  Beyond
making data transfer unreliable, the errors are exposing weaknesses in our
secure file transfer protocols, which assume that any transport error is
due to malice and thus kill connections without saving data that was
successfully retrieved -- instead they force a complete new attempt to
transfer (the need for FTP checkpointing lives!).  The result in some big
data environments is secure file transfers failing as much as 60% (that's
not a typo) of the time.
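[Editor's note: one structural weakness behind that failure rate can be seen
directly.  The TCP checksum is the RFC 1071 one's-complement 16-bit sum, and
because one's-complement addition is commutative, any reordering of 16-bit
words in a segment produces the identical checksum.  The sketch below is an
illustration of the standard algorithm, not code from this thread; the
sample payloads are made up for the demonstration.]

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement 16-bit checksum, as used by TCP/UDP/IP."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for (word,) in struct.iter_unpack("!H", data):  # successive 16-bit words
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return (~total) & 0xFFFF  # one's complement of the folded sum

# Two different payloads: same 16-bit words, different order.
a = b"\x12\x34\xab\xcd\x00\x01"
b = b"\xab\xcd\x12\x34\x00\x01"

# A corruption that swaps whole words is invisible to the checksum.
print(hex(internet_checksum(a)))  # 0x41fd
print(hex(internet_checksum(b)))  # 0x41fd -- identical, yet a != b
```

Even without word swaps, a 16-bit check simply cannot push the undetected
error rate much below 1 in 2^16 per damaged segment, which is why errors
start slipping through at gigabyte-and-larger transfer scales.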

Thanks!

Craig


-- 
*****
Craig Partridge's email account for professional society activities and
mailing lists.
