[ih] Arpanet raw messages, voice, and TCP
Jack Haverty
jack at 3kitty.org
Tue Nov 24 23:28:55 PST 2009
Hi Matthias,
Good questions. I was at BBN in the 1978-1990 timeframe, part of Frank
Heart's division, which among other things did the Arpanet and much of
the early TCP implementations. I did the first PDP-11 Unix
implementation, and others working with me did the initial HP3000,
PDP-11/44, VAX-Unix, and TAC implementations, among others, as well as
the "core gateways". Bill Plummer (who I know monitors this list
sometimes) did the PDP-10 TCP from his perch in "Division 4".
As far as I can remember, none of those implementations used the Arpanet
raw message facility. A lot of effort went into dealing with RFNM
counting, but nothing I remember did anything with raw messages.
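For anyone who hasn't wrestled with 1822: the IMP returned a RFNM
("Ready For Next Message") as each message was delivered, and a host
could only have a limited number of messages outstanding to any one
destination (eight, as I recall the spec). "RFNM counting" was simply
the bookkeeping needed to stay under that limit. Here's a minimal
sketch of the idea, in Python rather than anything we actually ran:

    # Sketch of host-side RFNM counting (illustrative only, not any
    # historical implementation).  The per-destination limit of 8 is
    # from the 1822 spec as I remember it.
    MAX_OUTSTANDING = 8

    class RfnmCounter:
        def __init__(self):
            self.outstanding = {}  # destination host -> messages in flight

        def can_send(self, dest):
            # Sending past the limit risks blocking the host interface.
            return self.outstanding.get(dest, 0) < MAX_OUTSTANDING

        def sent(self, dest):
            self.outstanding[dest] = self.outstanding.get(dest, 0) + 1

        def rfnm_received(self, dest):
            # One RFNM comes back per message; room for one more send.
            self.outstanding[dest] -= 1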
The raw messages were motivated primarily by the packet voice work,
where getting a packet through quickly mattered more than getting all
the packets through too late to be played as voice. The standard
Arpanet RFNM machinery was appropriate only for getting all packets
delivered ... eventually.
I remember that there was significant concern in the "Arpanet group"
about protecting the network from raw messages used improperly by hosts.
A sending host could easily keep the network flooded with packets that
were being discarded at the destination IMP if the host at that end
wouldn't take them. That concern about network disruption probably
helped keep raw messages off the list of features for TCP (and
gateway) mechanisms to use.
The only exception might have been using them to send control messages
to get around a "blocked" path - e.g., so that the "gateway NOC" could
at least tell a gateway to reboot if it had gotten stuck to the point
that regular Arpanet traffic to it was RFNM-blocked. I can't remember
whether that was ever implemented, though.
Somebody else will have to comment on the voice work - that's where I'd
expect the raw messages might have been used. A lot of the voice work
was on the Wideband Net rather than the Arpanet.
The NOC at BBN had quite a lot of control over network hosts, and could
shut off particular interfaces (physical ports) on an IMP if some host
was severely misbehaving. I'm not sure how often that was needed -
probably not much. I believe it was used at some point shortly after a
new release of BSD Unix which had a daemon program that methodically
pinged all known gateways to maintain a connectivity/delay map of the
Internet. This sprayed lots of one-packet messages across the Arpanet -
a traffic pattern very unlike the traditional Telnet or FTP sessions.
That pattern caused an internal uproar, and the only way to shut it
down was to turn off the IMP ports of the offending hosts - which
weren't really doing anything "illegal" but nonetheless disrupted other
hosts' traffic.
I don't recall the "raw message" discussions feeding much at all into
TCP. However, they did motivate the creation of UDP, and the split of
TCP2 into TCP/IP version 4.
There was much discussion at the time about where the functions of
reliability, congestion control, priority, etc., should be implemented.
In the Arpanet, much of that was performed by the internal IMP-to-IMP
machinery - essentially, hosts saw virtual circuits. Gateway (and TCP)
implementations were allowed to discard IP packets freely, and the
responsibility for reliability etc. was placed on the hosts. At first
this was largely irrelevant: most Internet paths were
host-LAN-Arpanet-LAN-host, and while the Arpanet was the "weak link" in
terms of capacity, its internal mechanisms compensated, so the hosts'
behavior wasn't that critical.
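In code terms, the Internet contract was roughly the following - a
sketch under the assumption of a lossy send() and an ack channel, not a
description of any particular host's stack:

    # Sketch of host-side reliability in the Internet model: gateways
    # may silently discard any packet, so the sending host detects the
    # loss with a timer and retransmits.  Illustrative only.
    def reliable_send(packet, send, recv_ack, rto=3.0):
        # send(packet): hands the packet to the (lossy) network.
        # recv_ack(timeout): True if an acknowledgment arrived in time.
        while True:
            send(packet)                # the network is free to drop this
            if recv_ack(timeout=rto):   # host timer notices the loss...
                return
            rto *= 2                    # ...and backs off, then retransmits

Contrast that with the Arpanet model, where the IMPs themselves did the
retransmitting and a host could mostly assume delivery.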
Later, after gateway-to-gateway circuits were introduced, it became a
different game, but I think by then the implementations were
sophisticated enough to handle it. At least that's what we surmised.
There really wasn't enough instrumentation available to see what was
happening - e.g., how many of the packets a host sends on a TCP
connection actually make it to the other end, how many are
retransmitted, how many duplicates are received, and so on.
In the 90s, when I was at Oracle, I put some instrumentation into the
corporate worldwide net and took a look. There were lots of anomalies.
For example, an FTP would be running fine; then there would be a
disruption (a circuit glitch, congestion-dropped packets, etc.); and
then things would settle back down. But after the glitch, the FTP
throughput on the same connection was cut in half compared to the
original, and most packets were being received twice at the
destination. The host algorithms for retransmission timer backoff (or
whatever) had settled into a new stable state where everything worked
fine as seen by the user - but the network traffic had doubled. Not so
good on a very expensive line from the US to Singapore! We had some
discussions with the computer vendors about the quality of their host
network software... they didn't have a clue it was happening.
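My guess at the mechanism, sketched below as a toy calculation (an
assumption on my part - we never saw the vendors' source): the
retransmission timeout had settled below the path's real round-trip
time, so every segment timed out once and was retransmitted before its
ack arrived.

    # Toy illustration of the suspected stable state (an assumption,
    # not a diagnosis of any vendor's code): with the RTO stuck below
    # the real RTT, every segment is spuriously retransmitted once.
    REAL_RTT = 1.2       # seconds; think a long US-to-Singapore path
    stuck_rto = 0.8      # timer settled below the RTT after the glitch

    transmissions = duplicates = 0
    for segment in range(1000):          # 1000 segments of an FTP
        transmissions += 1               # original transmission
        if stuck_rto < REAL_RTT:         # timeout fires before the ack
            transmissions += 1           # spurious retransmission
            duplicates += 1              # receiver sees the segment twice
    print(f"{transmissions} sends for 1000 segments delivered, "
          f"{duplicates} duplicates: traffic doubled, goodput halved")

The user sees a working (if slower) connection; the line carries twice
the load.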
That was more than a decade ago, but has anything changed? Maybe the
jury is still out on whether the "well-behaved host" approach of the
Internet works as well as the "keep the hosts out of the network
machinery" approach of the Arpanet.
HTH,
/Jack
On Wed, 2009-11-25 at 02:21 +0100, Matthias Bärwolff wrote:
> Hi all,
>
> I was wondering if anyone here knows closely of the precise history of
> how the raw message facility of the Arpanet came about and in which ways
> it relates to early voice (NVP) and TCP experiments in the mid-1970s. I
> gather from the 1822 report (and a couple of other BBN reports from
> around 1974 to 1978) that hosts could send uncontrolled messages (one
> packet messages at that) that would be delivered without paranoid error
> control IMP-to-IMP and without RFNMs and retransmissions. However (!),
> BBN would control whether or not hosts could use that facility in the
> first place. From the 1822 report (as of 1976,
> http://www.bitsavers.org/pdf/bbn/imp/BBN1822_Jan1976.pdf):
>
> "Uncontrolled use of these messages will degrade the
> performance of the network for all users. Therefore,
> ability to use these messages will be regulated by the
> Network Control Center and will require prior arrange-
> ment for each experiment."
> (p. 3-36)
>
> My questions are:
>
> 1. What experiments or actual applications did people do with the raw
> messages? The papers on early network voice stuff indicate that three or
> four host sites were playing around with that. What about TCP?
>
> 2. How does this Arpanet feature relate to TCP and the ARPA agenda from
> around 1973/1974 of pushing development of TCP? Am I right in thinking
> that the first TCP experiments must have involved using the raw message
> facility?
>
> 3. How did BBN actually control sites' use of the feature? And, what
> were the experiences with congestion (as in congesting intermediary
> nodes, as well as in overwhelming the destination IMP/host)?
>
> 4. How did those experiments (provided the link that I am here assuming
> exists) feed into actual TCP developments?
>
> 5. Last, the TCP/IP split is often ascribed somewhat to common sense
> (proper modularization, see e.g. IEN 2), and particularly the canonical
> example of interactive voice. But how does the actual availability of a
> "best effort" transport facility at Arpanet (the raw message facility)
> relate to the later notion of an IP protocol (which, too, provides an
> effective service guarantee of zero; and comes with all the congestion
> problems that require the hosts to behave well)?
>
> Thanks for any suggestions, pointers and accounts on this. Also, I am
> told Bob Kahn would be a good person to ask on this, maybe someone here
> can reach out to him on this.
>
> Best,
> Matthias
>