[ih] uucp, was Question re rate of growth of the Arpanet
Lyndon Nerenberg (VE7TFX/VE6BBM)
lyndon at orthanc.ca
Tue Apr 22 12:19:20 PDT 2025
> The `t' protocol is intended for TCP links. It does no error
> checking or flow control, and requires an eight bit clear channel.
> I believe the `t' protocol originated in BSD versions of UUCP.
And down the protocol rabbit hole we go :-) 't' required an 8-bit
error-corrected in-order channel, i.e. TCP. AT&T independently
created an almost identical protocol named 'e' for TCP (they were
unaware of the 't' protocol from Berkeley). Later, Taylor UUCP
introduced the 'i' protocol, a full-duplex version of 't'. Full
duplex in the sense that it transferred files in both directions
concurrently -- a huge time savings for sites that had bi-directional
Usenet feeds (pre-NNTP).
The 'f' protocol was designed for use over X.25 via X.28 PADs. It
encoded 8-bit data to fit in the 7-bit data channel, and escaped
various control characters that were used by the PADs to control
the terminal session.
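For the flavour of it, here's a rough sketch in C of that kind of
escape-and-fold encoding. Purely illustrative -- the real 'f'
protocol used its own byte-stuffing scheme; the ESC_HI/ESC_CTL
prefixes and the ^0x20 shift below are made up. But the idea is the
same: fold 8-bit data onto a 7-bit path and escape anything (XON/XOFF,
the escape characters themselves) that the PAD would eat or act on.

    /* Purely illustrative -- not the real 'f' protocol encoding.  The
     * escape characters and the ^0x20 shift here are made up; the idea
     * is just to fold 8-bit data onto a 7-bit path and escape anything
     * an X.28 PAD would act on. */
    #include <stdio.h>

    #define ESC_HI  0x7c   /* hypothetical prefix: next char had its high bit set */
    #define ESC_CTL 0x7d   /* hypothetical prefix: next char is a shifted control */

    static int pad_sensitive(int c)
    {
        return c == 0x11 || c == 0x13 ||        /* ^Q / ^S (XON / XOFF)   */
               c == ESC_HI || c == ESC_CTL;     /* the escapes themselves */
    }

    static void encode_byte(int c, FILE *out)
    {
        if (c & 0x80) {                 /* won't survive a 7-bit channel */
            putc(ESC_HI, out);
            c &= 0x7f;
        }
        if (pad_sensitive(c)) {         /* would be swallowed by the PAD */
            putc(ESC_CTL, out);
            c ^= 0x20;                  /* shift out of the control range */
        }
        putc(c, out);
    }

    static int decode_byte(FILE *in)
    {
        int hi = 0, c = getc(in);

        if (c == ESC_HI)  { hi = 0x80; c = getc(in); }
        if (c == ESC_CTL) {
            c = getc(in);
            if (c != EOF)
                c ^= 0x20;              /* undo the control-character shift */
        }
        return c == EOF ? EOF : (c | hi);
    }

    int main(int argc, char **argv)
    {
        int c;

        if (argc > 1 && argv[1][0] == 'd') {    /* "d" argument: decode */
            while ((c = decode_byte(stdin)) != EOF)
                putchar(c);
        } else {                                /* default: encode */
            while ((c = getchar()) != EOF)
                encode_byte(c, stdout);
        }
        return 0;
    }

Run a binary file through the encoder and you can see the cost: every
high-bit or PAD-sensitive byte turns into two characters on the wire.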
The original protocol was 'g', somewhat similar to X.25's LAPB.
In AT&T's uucp it defaulted to 64-byte packets with a window size
of three. But the protocol parameters allowed a window size up to
7 and a maximum packet size of 256 bytes IIRC. Increasing both
values really sped things up, but the Xenix UUCP implementation
had a bug that caused uucico to drop core if the remote tried to
negotiate a window size > 3, and the AT&T uucico binary didn't
let you change the window or packet size. I'm pretty sure
Honey DanBer did let you monkey with those settings.
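A quick back-of-the-envelope in C shows why those parameters mattered.
A sliding-window sender can only have window * packet bytes in flight,
so on a link with any real turnaround delay the default 3 x 64 bytes
stalls long before it fills the line. The 9600 bps / 0.5 s figures
below are assumptions for illustration, not measurements from any
actual uucico:

    /* Back-of-the-envelope only: a sliding-window sender can have at most
     * window * packet_bytes in flight, so throughput is capped both by the
     * line rate and by how much data fits in one round trip.  The 9600 bps
     * and 0.5 s round-trip numbers are assumed for illustration. */
    #include <stdio.h>

    static double g_throughput(double line_bps, double rtt_sec,
                               int window, int packet_bytes)
    {
        double line_Bps   = line_bps / 10.0;    /* ~10 bits per async char */
        double window_Bps = (double)window * packet_bytes / rtt_sec;
        return window_Bps < line_Bps ? window_Bps : line_Bps;
    }

    int main(void)
    {
        double bps = 9600.0, rtt = 0.5;         /* hypothetical dial-up link */

        printf("default 3 x  64: %6.0f bytes/sec\n", g_throughput(bps, rtt, 3, 64));
        printf("maximum 7 x 256: %6.0f bytes/sec\n", g_throughput(bps, rtt, 7, 256));
        return 0;
    }

With those assumed numbers the default window yields roughly 384
bytes/sec out of a 960 byte/sec line, while 7 x 256 keeps the window
from being the bottleneck at all.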
Honey DanBer also introduced the 'G' protocol. It was mostly an
enhanced 'g' with larger packet sizes, from what I remember.
And there were several niche protocols written. E.g. Doug Evans
wrote the 'z' protocol. It was intended for use over "mostly 8-bit"
paths; it escaped the ^S/^Q control characters, and a few others.
--lyndon