[ih] ARPAnet Type 3 packets (datagrams)
Noel Chiappa
jnc at mercury.lcs.mit.edu
Thu Nov 26 11:40:15 PST 2009
> From: "Bernie Cosell" <bernie at fantasyfarm.com>
> I'm pretty sure that the "reserve 8" machinery was in the first
> versions of the IMP code.
Looking carefully at the 1972 FJCC paper, I'm not sure this is correct:
although the paper is not 100% explicit about what changes were made, it
seems to strongly _imply_ that the reservation-before-sending-any-frames
mechanism was added after the initial deployment. (And so my previous
message, where I said "the original 1970 network did not have this
[buffer] allocation mechanism", was a bit speculative/aggressive.)
Here's what the 1972 paper says, though.
The failure mode that it describes (pp. 742-743) is only indirectly due to
lack of buffers at the destination IMP for partially-assembled messages.
(It does not say whether those buffers were allocated before the first
frame arrived, but the implication, from much later statements, is that
they were allocated when the first frame of a multi-frame message arrived,
not _prior_ to any frames of a given multi-frame message entering the
subnet.)
Rather, the immediate problem was that i) the destination IMP had no free
buffers, since all its buffer space was reserved for partially-assembled
messages, and ii) its _neighbour_ IMP's buffers were filled with frames from
_other_ multi-frame messages (none of whose frames were being held in the
destination IMP - perhaps because it had no free buffers to hold them, and
thus refused to acknowledge them when they were sent).
Since the neighbour IMP could not discard any of those frames (having
already acknowledged them to _its_ upstream neighbours), the missing frames
needed for the partial messages being held at the destination IMP could not
get through the intermediate IMP. Of course, without those missing frames,
the destination IMP could not re-assemble those messages and send them to
the host, freeing the buffers; and until it did that, it couldn't take any
of the frames that were filling the buffers of the intermediate IMP....
Deadlock!
However, the paper does not say explicitly that this behaviour was
actually _observed in service_, it merely talks about "messages ..
entering the network for which network buffering is not available and
which could congest the network and lead to reassembly lockup"; i.e. it
could be speaking of a theoretical failure mode. Later on it says
"simulations and experiments artificially loading the network", so again
there is no reference to 'the network failed when X happened, in actual
service'.
(What a clever failure mode, BTW! It resulted, basically, from the hop-by-hop
reliability, combined with a failure to keep an adequate level of free buffers
at each IMP. Note that there is no way for the intermediate IMP to be able to
do anything about the problem, since it has no way of knowing which frames the
destination IMP needs, in order to be able to free up buffers. Only end-end
flow-control can solve this problem...)
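(To make the lockup concrete, here is a toy Python model of the two-IMP
situation above; the class, names, and buffer sizes are all invented for
illustration, and bear no relation to the actual IMP code:)

    # A toy model of reassembly lockup: hop-by-hop reliability means a
    # frame is only acknowledged once the receiver has buffered it, and
    # an acknowledged frame may never be discarded.

    class Imp:
        def __init__(self, name, nbuf):
            self.name = name
            self.nbuf = nbuf           # fixed buffer pool size
            self.buffers = []          # frames currently held

        def can_accept(self):
            return len(self.buffers) < self.nbuf

    # Destination IMP: every buffer holds the first frame of some
    # partially-assembled two-frame message, each missing its second frame.
    dest = Imp("dest", nbuf=4)
    dest.buffers = [("msg%d" % i, "frame 1 of 2") for i in range(4)]

    # Neighbour IMP: every buffer holds a frame of some _other_ multi-frame
    # message bound for dest; all were already acked upstream, so none may
    # be discarded.
    neigh = Imp("neigh", nbuf=4)
    neigh.buffers = [("msg%d" % i, "frame 1 of 2") for i in range(4, 8)]

    # The "frame 2 of 2" frames dest needs are stuck behind neigh, which
    # has no room to buffer them; and dest, being full, refuses everything
    # neigh offers. Neither IMP can ever free a buffer:
    assert not dest.can_accept()       # dest refuses all of neigh's frames
    assert not neigh.can_accept()      # neigh can't take the frames dest needs
    print("reassembly lockup: nothing can move, nothing can be freed")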
Anyway, at the end it says:
"our solution also utilizes a request mechanism from source IMP to
destination IMP. Specifically, no multi-[frame] message is allowed
to enter the network until storage for the message has been allocated at
the destination IMP. As soon as the source IMP takes in the first
[frame] of a multi-[frame] message, it sends a small control [request]
to the destination IMP requesting that reassembly storage be reserved
at the destination for this message. It does not take in further [frames]
from the Host until it receives an allocation [reply]."
This does strongly imply that the end-end allocation mechanism was added
_later_, in response to this problem.
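(A minimal Python sketch of the source IMP's side of that exchange, as
quoted; all the names here are invented, and the "wire" between the IMPs
is just a method call:)

    # Sketch of the reserve-before-send rule: no multi-frame message may
    # enter the network until the destination IMP has reserved storage.

    class DestImp:
        def __init__(self, free_reassembly_buffers):
            self.free = free_reassembly_buffers

        def request_reassembly_storage(self, nframes):
            # The "small control [request]" / allocation-reply exchange.
            if self.free >= nframes:
                self.free -= nframes
                return True            # allocation granted
            return False               # source must hold off and retry

    class SourceImp:
        def send_multiframe(self, dest, frames):
            # Take in only the first frame from the Host, then request an
            # allocation; no further frames are accepted until it arrives.
            if not dest.request_reassembly_storage(len(frames)):
                return False           # message may not enter the network
            for frame in frames:       # go-ahead received: send them all
                self.forward(frame)
            return True

        def forward(self, frame):
            print("forwarding", frame)

    src, dest = SourceImp(), DestImp(free_reassembly_buffers=8)
    src.send_multiframe(dest, ["f1", "f2", "f3"])   # storage reserved first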
It is quite interesting, because it seems that all the buffering at the source
was in the source Host, not the source IMP! In other words, messages queued
for output at the host to _other_ destinations had to wait until the space
reservation go-ahead was received from the destination IMP for the _first_
message!
I wonder if this was ever changed, so that each multi-frame message to a
new destination didn't add an RTT to the queueing delay of _all_ packets
queued for output behind it? Of course, this would have required more
buffering in the IMP. (Do I recall something about the first IMPs having a
minimal amount of memory, but it being expanded over time?)
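(A toy back-of-the-envelope illustration of that head-of-line blocking,
with invented numbers:)

    # If the host's output queue is FIFO, the one reservation round-trip
    # for the first multi-frame message adds to the queueing delay of
    # every message behind it, whatever its destination. All numbers are
    # invented for illustration.

    reservation_rtt_ms = 100.0     # hypothetical source-dest-source RTT
    service_ms = 10.0              # hypothetical per-message transmit time

    queue = ["multi-frame to A", "single-frame to B", "single-frame to C"]
    delay_ms = reservation_rtt_ms  # everyone waits out the allocation RTT
    for msg in queue:
        delay_ms += service_ms
        print("%-18s leaves the host after %5.0f ms" % (msg, delay_ms))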
Noel