[ih] "how better protocols could solve those problems better"
John Gilmore
gnu at toad.com
Wed Sep 30 17:45:30 PDT 2020
Craig Partridge <craig at tereschau.net> wrote:
>>> * We are reaching the end of the TCP checksum's useful life. It is a weak
>>> 16-bit checksum (by weak I mean that, in some cases, errors get past at a
>>> rate greater than 1 in 2^16) and on big data transfers (gigabytes and
>>> larger) in some parts of the Internet errors are slipping through. Beyond
>>> making data transfer unreliable, the errors are exposing weaknesses in our
>>> secure file transfer protocols, which assume that any transport error is
>>> due to malice and thus kill connections, without saving data that was
>>> successfully retrieved -- instead they force a complete new attempt to
>>> transfer (the need for FTP checkpointing lives!). The result in some big
>>> data environments is secure file transfers failing as much as 60% (that's
>>> not a typo) of the time.
>> would a higher application-level check be useful? new options in TCP?
>> something else?
> I've got some NSF funding to figure out what the error patterns are
> (nobody's capturing them) with the idea we might propose a new checksum
> and/or add checkpointing into the file transfer protocols. It is a little
> hard to add something on top of protocols that have a fail/discard model.
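As an aside, that weakness is easy to demonstrate: the TCP checksum is
the RFC 1071 16-bit one's-complement sum, which cannot even detect two
transposed 16-bit words. A rough Python sketch (the function name is
mine, purely for illustration):

    def ones_complement_sum16(data):
        # RFC 1071-style 16-bit one's-complement checksum (illustrative).
        if len(data) % 2:
            data += b"\x00"                 # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
        return (~total) & 0xFFFF

    a = b"\x12\x34\xab\xcd"
    b = b"\xab\xcd\x12\x34"                 # same 16-bit words, transposed
    assert ones_complement_sum16(a) == ones_complement_sum16(b)

Any corruption that rearranges or offsets whole 16-bit words in
compensating ways slips straight through, which is how the escape rate
on big transfers can end up worse than 1 in 2^16.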
A very popular file transfer protocol with application-level checksums
and retries at the subblock level is BitTorrent. The file transfer
starts by transferring metadata that includes an overall checksum for
the file, plus hundreds or thousands of individual subblock checksums.
In the main data transfer, if the client receives subblock data that
doesn't match the checksum, that subblock gets discarded and
re-requested, until the data gets through error-free. Meanwhile, the
entire rest of the file continues to be transferred successfully, and
the implementations track which parts of the received partial file are
known-valid, and which are not, in stable storage that survives crashes
and restarts.
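To make that concrete: the metadata (the .torrent file) carries one
SHA-1 hash per piece, and checking a received piece is just a hash
comparison. A minimal Python sketch (names are mine, not from any
particular client):

    import hashlib

    def piece_is_valid(index, data, piece_hashes):
        # Compare a received piece against the SHA-1 hash carried in
        # the metadata; on a mismatch the client discards the piece
        # and re-requests it from any peer.
        return hashlib.sha1(data).digest() == piece_hashes[index]

The per-piece granularity is what lets the rest of the file keep
flowing while one bad piece gets retried.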
BT's original purpose was to start with a fast, small metadata transfer,
and then allow the rest of a large data file to come from any arbitrary
(thus untrustworthy) source on the network, both impeding censorship
and speeding downloads of popular files. The protocol and code are
already handling dozens or hundreds of simultaneous TCP connections to
retrieve parts of the file from different sources. In use cases where
there is only one source for the data, that basic model could also
cleanly be extended to open multiple parallel TCP connections to the
same host (to get past TCP bandwidth x delay product limits and slow
recovery from congestion), which would also address issues in the
large-data environment, besides providing a higher-level cryptographic
checksum and retry mechanism.
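To sketch the single-source, multiple-connection idea outside of
BitTorrent itself: even just splitting a file into byte ranges and
fetching them over separate TCP connections gets several congestion
windows growing in parallel. A rough Python illustration, assuming a
server that honors HTTP Range requests (the URL and size arguments are
placeholders):

    import concurrent.futures
    import urllib.request

    def fetch_range(url, start, end):
        # Fetch one byte range over its own TCP connection.
        req = urllib.request.Request(
            url, headers={"Range": "bytes=%d-%d" % (start, end)})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    def parallel_fetch(url, size, streams=4):
        # Split the file into `streams` ranges and fetch them
        # concurrently, one TCP connection per range.
        chunk = size // streams
        ranges = [(i * chunk,
                   size - 1 if i == streams - 1 else (i + 1) * chunk - 1)
                  for i in range(streams)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=streams) as pool:
            parts = list(pool.map(lambda r: fetch_range(url, *r), ranges))
        return b"".join(parts)

BitTorrent does the equivalent with its own piece-request protocol
rather than HTTP ranges, but the transport-level effect is the same.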
BitTorrent originally used TCP, which remains available in all
implementations, but implementations now tend to use UDP with peers
that support it, because TCP responded poorly to rising packet delay
and tended to produce bufferbloat in intermediate routers. It
originally used IPv4
but now also handles IPv6 and is happy to simultaneously handle
connections via both.
The specs for BitTorrent are published and well maintained, along with
an evolution process (BEPs; see
http://bittorrent.org/beps/bep_0000.html). Dozens of well-maintained,
robust implementations exist, both open source and proprietary. Any of
these could be adapted for the specific use case of big data transfers
(and/or for research use in detecting and reporting patterns of
otherwise undetected UDP or TCP data corruption).
John