[ih] receipts & OS modifications for IMPs [networks]

John Day jeanjour at comcast.net
Wed May 1 13:21:13 PDT 2024


Inline below.

> On May 1, 2024, at 11:26, Craig Partridge via Internet-history <internet-history at elists.isoc.org> wrote:
> 
> John Day raised two points in the discussion of flow control that I
> thought I'd lift up to a discussion on their own.
> 
> 1. John mentioned the jest: how do you interface to an IMP?  Make it
> look like a tape drive!  ;-)
> 
> This jest points up what was a serious and long-running debate in the
> OS and applications community about the abstract interface should be
> to both applications and the operating system - a debate that
> persisted from the late 1970s into the early 1990s.  The great
> innovation of UNIX was to make pipes -- byte streams -- first class OS
> objects and then create an ecosystem that leveraged pipes (with
> glorious things like grep piped to sed and such like).  TCP fit well
> with pipes; UDP (and the short lived RDP, but also ICMP and such)
> didn't.  Then there was the issue of how the OS and its supporting
> applications (things that brought interfaces up and such) interacted
> with network interfaces.  Sockets won this fight, but it was a long
> and tricky muddle.

To respond, my comment was purely about how to model the device driver for interfacing BBN 1822 to an OS.

As for the API interface to applications, that is a whole different kettle of fish.  

We put the first UNIX system on the ’Net in 1975 and there was never any question about what to do. We quickly determined that UNIX didn’t really have good IPC. Pipes were clearly a non-starter (and frankly, surprisingly limited). One property of any protocol is that it must be able to accept input from above or below at any time. Protocols are not purely request/response. This was clearly something Thompson and Ritchie didn’t get, nor did most other OS designers. (I should note here that several years earlier we had modified the Burroughs MCP to support IPC. MCP was what Burroughs called their OS.)

Pipes showed that UNIX was clueless when it came to IPC, because they were asymmetric and blocking. I have already mentioned the problems with asymmetric, blocking APIs, although Craig is correct that it took computer science and the industry a long time to get rid of that stupidity. As I said in an earlier post, the Apollo workstations were enamored with mailboxes for IPC. They were either incredibly short-sighted, or, as I have long suspected, most CS people are afraid of asynchrony.  (Asynchrony has a long tradition at Illinois. Four computers built there all used asynchronous logic. I am not sure whether we were comfortable with it or just knew it was a hard requirement that had to be solved.)
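The blocking limitation is easy to demonstrate with a modern sketch. This uses today's Python on a POSIX system, not the 1975 facilities, purely to illustrate the property a protocol implementation needs: the ability to accept input at any time rather than stall in a read.

```python
import os

# A pipe is one-directional (asymmetric): r is read-only, w is write-only.
r, w = os.pipe()

# By default, a read on an empty pipe blocks the caller indefinitely.
# A protocol implementation must instead be able to accept input from
# above or below at any time, which requires non-blocking (asynchronous) I/O.
os.set_blocking(r, False)

try:
    os.read(r, 1)            # empty pipe: a blocking read would stall here
    result = "read returned"
except BlockingIOError:
    result = "would-block"   # the call returns control instead of stalling
```

With the descriptor left in its default blocking mode, the same `os.read` would simply hang, which is exactly why blocking pipes were a non-starter for protocol code.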

For that first implementation, we didn’t have time to solve the IPC problem, so it was finessed by putting the NCP in the kernel (where on an 11/45 it barely fit). The applications were in user space. Telnet was implemented as two processes, one incoming and one outgoing ;-), and stty and gtty were hacked so the two processes could do what was necessary. That was okay, because at the time all applications were built on Telnet.

Once that was up and running, a true non-blocking IPC mechanism was developed for UNIX. It was working within a couple of months and allowed Telnet to be cleaned up. For the API to applications, file_io was modified slightly to make accessing the Net look like file operations. Why? Because it just made good sense. To open a connection from UNIX, a program merely executed:  <file-desc> = open(ucsd/telnet).
The code looked up the host name in the host table to get the address and had a table of well-known sockets.
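A minimal sketch of how such a file-like open might resolve its argument. The tables, names, and numeric values here are illustrative assumptions, not the actual 1975 code:

```python
# Hypothetical host table and well-known-socket table; the entries and
# numbers are made up for illustration.
HOST_TABLE = {"ucsd": 3}        # host name -> network address
WELL_KNOWN = {"telnet": 1}      # application name -> well-known socket

def net_open(path):
    """Treat 'host/application' like a file path: resolve each half."""
    host, app = path.split("/")
    return (HOST_TABLE[host], WELL_KNOWN[app])

fd = net_open("ucsd/telnet")    # -> (3, 1)
```

The point of the design is that the caller names what it wants to talk to; where the address and socket number come from (a static table then, a directory like DNS later) is hidden behind the open.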

Later, in 1976, UNIX was stripped down to fit on an LSI-11 (a one-board PDP-11) and used for an 'intelligent terminal' that had a plasma screen and touch input (it had a keyboard, but one was mostly unnecessary) and connected to the 11/45 and the Net. It was used for a land-use management system for the six counties around Chicago and was also deployed within the DoD.

Steve Bunch who is on this list can add considerable detail to all of this.

Years later, when Bill Joy was proposing the Sockets API for Berkeley UNIX, Bunch and Michel Gien (of Chorus, formerly with CYCLADES) met with Joy to try to convince him that a file-like API made more sense. But Joy was too infatuated with Sockets. 

I consider this one of the major bad decisions the Internet ever made. With a file-like API, the Internet could have seamlessly transitioned to application-names and away from well-known ports, and no one would ever have known the difference. And DNS could have been used to look up application names. (Naming the host was irrelevant anyway.) Application-names are a major missing part of the architecture; their absence makes mobility as cumbersome as it is and requires applications to invent their own means of opening a connection to a specific instance of an application. (I have also been told that Sockets delayed Microsoft’s adoption of IPv6 by two years, because every application that accessed the network had to be modified.)

As I said before, the RPC vs IPC debate was one of the biggest wastes of time and effort to roil CS. It really does seem most in CS fear asynchrony.

> 
> 2. John mentioned receipts and issues of who handles flow control and
> how acks fit in.
> 
> Keep in mind that, again, through the mid-1990s, we did not understand
> fully general acknowledgement schemes -- by which I mean, ack schemes
> that allowed tracking of packets received out of order, or in order
> but with gaps.   Automatic Repeat Request (ARQ) research had worked
> out how to handle acks for packets in order and what to do if a single
> packet was missing, but beyond those fundamental challenges we took a
> long time to figure out what to do.

Again, Receipt Confirmation was about letting the sending application know when Acks were received. This was primarily a big deal for the phone companies, who didn’t really understand layers (not surprising). ;-)  The user of a layer should indicate what is desired of the communication, not how it is done. Basically, the user indicates what it needs; the layer determines how best to do it relative to all of the requests it is dealing with. (This is why the layer is a distributed resource allocator.) This was clear with CYCLADES and pretty well in place by the late 70s. This is why the primitives for a layer should look like     open (destination-application-name, source-application-name, QoS-parameters)

QoS says what characteristics are requested; the layer figures out how to provide them.
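The idea that the user states requirements and the layer chooses mechanisms might be sketched like this. The parameter names and mechanism choices are my own illustrative assumptions, not from any specification:

```python
def layer_open(dest_app, src_app, qos):
    """The caller says *what* it needs; the layer decides *how*."""
    mechanisms = []
    if qos.get("max_gap") == 0:
        mechanisms.append("retransmission")   # no loss tolerated
    if qos.get("in_order"):
        mechanisms.append("resequencing")     # delivery order matters
    return {"dst": dest_app, "src": src_app, "mechanisms": mechanisms}

conn = layer_open("ucsd/telnet", "illinois/telnet",
                  {"max_gap": 0, "in_order": True})
```

Note that nothing in the call names a mechanism; a different request (say, a nonzero allowed gap) would let the same layer silently drop the retransmission machinery.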

As for acknowledgements, it is clear that the primary purpose of acks is to delete packets from the retransmission queue. An Ack doesn’t mean “I got it”, but “I am not going to ask you to retransmit it.” One of the QoS parameters should be the maximum allowed gap in terms of the units of data passed to the layer. (This is what CYCLADES called a letter in their transport protocol and what OSI called an SDU. SDUs were made into one or more PDUs, or several SDUs into a single PDU. The CYCLADES Transport Protocol (and INWG 96) maintained the identity of letters end-to-end. Of course, this is much more difficult to do in TCP because of its stream interface.) As I have pointed out elsewhere, the stream concept is the protocol implementor’s view, while the letter concept is the layer user’s view. Still today, one of the shortcomings leveled at TCP is that it doesn’t support what applications want, i.e., letters. (Funny to see this being asked for after 40 years of its being ignored.) Not surprisingly, UNIX (and Multics) was focused on the system implementor’s view, not the system user’s view.
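The ack-deletes-from-queue semantics can be sketched as a toy model (not any particular protocol's code):

```python
class RetransmissionQueue:
    """An ack means 'I will not ask you to retransmit this', i.e. the
    sender may delete the letter from its retransmission queue -- it
    says nothing about delivery further up the stack."""

    def __init__(self):
        self.pending = {}              # seq -> letter (SDU)

    def send(self, seq, letter):
        self.pending[seq] = letter     # hold a copy until acknowledged

    def ack(self, seq):
        self.pending.pop(seq, None)    # acknowledged: free the buffer

q = RetransmissionQueue()
q.send(1, "letter-1")
q.send(2, "letter-2")
q.ack(1)                               # only letter-2 remains eligible for retransmission
```

Keeping the queue keyed by letter (SDU) rather than by byte position is what preserves letter identity end-to-end; TCP's byte-stream sequence space is precisely what makes that harder.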

(I have to admit that for a long time I was an advocate of the stream approach, but a lot of thinking and looking at the arguments convinced me it was incorrect. Subsequently, I have found that the letter view does make things fit together much better.)

The ARQ discussions were far too focused on the data transmission aspects, the engineering aspects if you will, and not on the overall system aspects. Also, too many people saw networking as closer to telecom, when it never was. It was always much closer to operating systems. The telecom considerations really never get beyond the physical layer. Again, it is just unfortunate that so much time and energy was spent on these issues.

I have to say that in all of the debates I participated in from 1975 to 1990, I found that the Europeans were giving more consideration to the overall system aspects of solutions than the Americans. I never took their word for it, but after careful thought it caused me to change my mind on several things, and those changes have since been borne out by much simpler solutions.

Take care,
John

> 
> Craig
> 
> 
> -- 
> *****
> Craig Partridge's email account for professional society activities
> and mailing lists.
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history



