[ih] "how better protocols could solve those problems better"

Craig Partridge craig at tereschau.net
Wed Sep 30 16:58:50 PDT 2020


I've got some NSF funding to figure out what the error patterns are
(nobody's capturing them) with the idea we might propose a new checksum
and/or add checkpointing into the file transfer protocols.  It is a little
hard to add something on top of protocols that have a fail/discard model.
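For context, the checksum at issue is the 16-bit ones'-complement sum
defined in RFC 1071 and used by TCP.  A minimal Python sketch (the
function name and example bytes are illustrative, not from any real
implementation) shows why corruptions that cancel within the 16-bit sum
pass undetected:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: ones'-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# Two corruptions that offset each other in the 16-bit sum go undetected:
good = bytes([0x12, 0x34, 0x00, 0x10])
bad  = bytes([0x12, 0x35, 0x00, 0x0F])  # +1 in one word, -1 in another
assert internet_checksum(good) == internet_checksum(bad)
```

Because every error pattern collapses into a single 16-bit value, at
best 1 in 2^16 random corruptions collides with the correct sum, and
structured errors (as above) can collide far more often.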

Craig

On Wed, Sep 30, 2020 at 5:51 PM Vint Cerf <vint at google.com> wrote:

> would a higher application-level check be useful? new options in TCP?
> something else?
>
> v
>
>
> On Wed, Sep 30, 2020 at 7:49 PM Craig Partridge via Internet-history <
> internet-history at elists.isoc.org> wrote:
>
>> On Mon, Sep 28, 2020 at 5:31 PM John Gilmore via Internet-history <
>> internet-history at elists.isoc.org> wrote:
>>
>> >
>> > Or, perhaps a simpler question, where are the pain points in current
>> > protocols, even if no replacements are on the horizon?
>> >
>>
>> That's a great question, so I'll take a crack at part of it.
>>
>> From where I sit, I'm seeing the current protocols in pain related to Big
>> Data.  In particular, I'm seeing two pain points:
>>
>> * Naming and organizing big data.  We are generating big data in many
>> areas
>> faster than we can name it.  And by "name" I don't simply mean giving
>> something a filename but creating an environment to find that name,
>> including the right metadata, and storing the data in places folks can
>> easily retrieve it.  You can probably throw archiving into that too
>> (when should data with this name be kept or discarded over time?).
>> What good are FTP, SCP, or HTTPS if you can't find or retrieve the data?
>>
>> * We are reaching the end of the TCP checksum's useful life.  It is a weak
>> 16-bit checksum (by weak I mean that, in some cases, errors get past at a
>> rate greater than 1 in 2^16) and on big data transfers (gigabytes and
>> larger) in some parts of the Internet errors are slipping through.  Beyond
>> making data transfer unreliable, the errors are exposing weaknesses in our
>> secure file transfer protocols, which assume that any transport error is
>> due to malice and thus kill connections without saving data that was
>> successfully retrieved -- instead they force a completely new transfer
>> attempt (the need for FTP checkpointing lives!).  The result in some big
>> data environments is secure file transfers failing as much as 60% (that's
>> not a typo) of the time.
>>
>> Thanks!
>>
>> Craig
>>
>>
>> --
>> *****
>> Craig Partridge's email account for professional society activities and
>> mailing lists.
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
>>
>
>
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> 1435 Woodhurst Blvd
> McLean, VA 22102
> 703-448-0965
>
> until further notice
>
>
>
>



