[ih] principles of the internet

John Day jeanjour at comcast.net
Tue Jun 1 14:46:52 PDT 2010


At 22:54 +0200 2010/06/01, Matthias Bärwolff wrote:
>On 06/01/2010 10:00 PM, Dave Crocker wrote:
>>
>>
>>  On 6/1/2010 11:49 AM, Richard Bennett wrote:
>>>  The Internet protocols are agnostic about privilege and best-effort, as
>>
>>  Absent standardized QOS, IP is best effort and the transport-level
>>  reliability mechanisms reflect this, even as weak as they were
>>  (intentionally) made to be.
>
>Best effort to me seems absolutely central to the "Internet
>architecture" -- I'd recommend reading Metcalfe's thesis' chapter 6
>which really nicely elaborates the notion.

This is the contribution from Pouzin implemented 
in CYCLADES, which Metcalfe picks up on for the 
more limited environment of the LAN.

>
>>
>>  This was a major shift from the degree of delivery assurance attempted
>>  for the Arpanet IMP infrastructure, which was reflected in the /lack/ of
>>  host-to-host reliability mechanism in the NCP.
>>
>>
>>>  these are layer two functions that are simply outside the scope of a
>>
>>  Except that layer two is not end-to-end and therefore cannot make
>>  end-to-end service assertions or enforce them.
>>
>>
>>>  I don't know that economics has much to do with this, beyond the
>>>  assumption that packet-switching is more economical for human-computer
>>>  interactions than circuit-switching is. The Internet wasn't designed by
>>>  economists.
>>
>>  Cost-savings, by avoiding NxM combinatorial explosion of communications
>>  lines, was an explicit and frequently cited motivation for the work, at
>>  least in terms of what I heard when I came on board in the early 70s.
>
>+1 the avoidance of the nxm problem is all over the literature from the
>time (also, Padlipsky's term "common intermediary representations" comes
>to mind)

This use of N x M is very different from Dave's 
use, which was about connectivity.  This is the 
concept that was called the canonical form.  It 
was critically important in the early network, 
but it actually proves to be a transitional 
concept.  It is absolutely necessary when the 
same application has been developed in isolation 
on different systems: terminals, file systems, 
etc.  But once networks become common, new 
applications are designed from the start to be 
used on different systems over a network.  So 
they are their own canonical form.

I always thought this was quite interesting. 
At one time, trying to formalize the idea of 
canonical form is what drove me to reading too 
much Frege.  ;-)  Then to find out that the 
existence of the network makes the problem go 
away was amusing.

>  >
>>  Surviving a "hostile battlefield" was the other, which meant
>>  conventional, not nuclear, conditions.  At the time, I believe folks
>>  didn't quite anticipate that commercial communications environments
>>  would also look pretty hostile...
>>
>>
>>  d/
>
>--
>Matthias Bärwolff
>www.bärwolff.de
