[ih] principles of the internet
John Day
jeanjour at comcast.net
Tue Jun 1 13:31:42 PDT 2010
At 13:00 -0700 2010/06/01, Dave Crocker wrote:
>On 6/1/2010 11:49 AM, Richard Bennett wrote:
>>The Internet protocols are agnostic about privilege and best-effort, as
>
>Absent standardized QOS, IP is best effort and the transport-level
>reliability mechanisms reflect this, even as weak as they were
>(intentionally) made to be.
>
>This was a major shift from the degree of delivery assurance
>attempted for the Arpanet IMP infrastructure, which was reflected in
>the /lack/ of host-to-host reliability mechanism in the NCP.
Yes, this was the basic datagram innovation pioneered by CYCLADES,
which was the fundamental shift in thinking. I sometimes
characterize the distinction this way: packet switching was
"continental drift," but datagrams were "plate tectonics."
>>these are layer two functions that are simply outside the scope of a
This was the hop-by-hop error control seen in the ARPANet and later
advocated by the PTTs in X.25. Pouzin's insight was that the hosts
weren't going to trust the network no matter what, so it didn't have
to be perfect. Building reliable systems from unreliable parts was
in the air at the time, e.g. von Neumann's paper on synthesizing
reliable organisms from unreliable components.
>Except that layer two is not end-to-end and therefore cannot make
>end-to-end service assertions or enforce them.
Right, but it is necessary. Layer two must provide enough error
control to make end-to-end error control at layer 4 cost effective.
Since most loss at layer 3 is due to congestion, that implies that
layer two's residual loss rate should be no worse than the congestion
loss rate. If it is, layer 4 error control becomes very inefficient.
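The cost argument above can be made concrete with a back-of-the-envelope model. This is an illustrative sketch, not anything from a protocol spec: all numbers and the function name are invented. It estimates how many end-to-end (layer-4) sends are needed per delivered packet when layer-4 recovery has to cover both a single congestion-loss chance and residual layer-2 loss compounded across every hop.

```python
def expected_transmissions(hops, link_loss, congestion_loss):
    """Expected number of end-to-end (layer-4) sends per delivered packet.

    Assumes independent residual loss on each of `hops` links plus one
    congestion-drop chance; delivery succeeds only if all of them are
    survived, and layer 4 retransmits until success.
    """
    p_deliver = (1.0 - link_loss) ** hops * (1.0 - congestion_loss)
    return 1.0 / p_deliver

# Layer-2 residual loss well below the congestion loss:
# layer-4 retransmission stays cheap.
good = expected_transmissions(hops=10, link_loss=1e-4, congestion_loss=0.01)

# Layer-2 residual loss worse than the congestion loss: each hop's
# errors compound, and layer 4 pays for all of them end to end.
bad = expected_transmissions(hops=10, link_loss=0.05, congestion_loss=0.01)

print(f"{good:.2f}")  # about 1.01 sends per delivered packet
print(f"{bad:.2f}")   # about 1.69 sends per delivered packet
```

With link loss an order of magnitude below the congestion loss, layer-4 recovery costs almost nothing extra; with link loss above it, the end-to-end retransmission rate is dominated by the links, which is exactly the inefficiency described above.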
>
>>I don't know that economics has much to do with this, beyond the
>>assumption that packet-switching is more economical for human-computer
>>interactions than circuit-switching is. The Internet wasn't designed by
>>economists.
>
>Cost-savings, by avoiding NxM combinatorial explosion of
>communications lines, was an explicit and frequently cited
>motivation for the work, at least in terms of what I heard when I
>came on board in the early 70s.
Circuit switching didn't require that. I never heard that argument.
The arguments I heard (and the arguments in Baran's report) were
that circuit switches required long connection set-up times and
effectively static allocation of resources to flows, whereas
datagrams required very little set-up time (even counting transport
connect time) and pooled (or dynamic) resource allocation, which is
always much more efficient.
Voice was characterized by long connection times and continuous data
flow, whereas data had short connection times and bursty data flows.
Indeed, data connection times were often shorter than the set-up
time for circuits.
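The pooling advantage mentioned above can be illustrated with a small binomial calculation. This is a hedged sketch with invented numbers: with each source active only a fraction of the time, a shared link can be sized for the likely number of simultaneously active sources, whereas static (circuit-style) allocation must reserve capacity for every source at once.

```python
from math import comb

def channels_needed(n, p_active, overflow_target):
    """Smallest k such that P(more than k of n sources are active
    at once) <= overflow_target, with each source independently
    active with probability p_active (a simple binomial model)."""
    for k in range(n + 1):
        tail = sum(comb(n, i) * p_active**i * (1 - p_active)**(n - i)
                   for i in range(k + 1, n + 1))
        if tail <= overflow_target:
            return k
    return n

# Static allocation: one reserved channel per source, always.
static = 100

# Pooled sharing: 100 bursty sources, each active 10% of the time,
# sized so overload occurs less than 0.1% of the time.
pooled = channels_needed(n=100, p_active=0.1, overflow_target=0.001)

print(static, pooled)  # pooled capacity is a small fraction of static
```

Under these (invented) traffic assumptions the pooled link needs only about a fifth of the statically reserved capacity, which is the sense in which dynamic allocation is "always much more effective" for bursty data.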
>
>Surviving a "hostile battlefield" was the other, which meant
>conventional, not nuclear, conditions. At the time, I believe folks
>didn't quite anticipate that commercial communications environments
>would also look pretty hostile...
>
>
Indeed.
>
>d/
>--
>
> Dave Crocker
> Brandenburg InternetWorking
> bbiw.net
More information about the Internet-history
mailing list