[ih] Hourglass model question
John Day
jeanjour at comcast.net
Fri Jul 5 12:09:45 PDT 2019
Jack,
> On Jul 3, 2019, at 16:26, Jack Haverty <jack at 3kitty.org> wrote:
>
> On 7/3/19 11:20 AM, Joe Touch wrote:
>
>> I think there was at least some terminology borrowing; not sure who came
>> up with what first, e.g., link, net, transport, etc.
>
> IMHO it's important when looking at history to remember that computer
> networking did not start with OSI, or the Internet, or even the ARPANET.
>
> Before those existed, there were lots of people sitting in front of lots
> of terminals using the telephone network to interact with their
> mainframes. Protocols like BISYNC (circa 1967) and others were used,
> along with "multidrop lines" that enabled lots of terminals to use the
> same telephone line.
>
> IIRC, much of the "networking" terminology was borrowed from that
> environment - terms like "link" and "transport" for example. I suspect
> you'd find a lot of "our" terms in early IBM documents.
Well, sort of. There was a distinct shift between the early phone and datacomm networks and what we called ‘networking’ even then.
The phones and terminals were attached to the network and were not really active participants in it.
Hosts, OTOH, are active participants in the network. Admittedly, at first there was an attempt to ‘protect’ the hosts from dealing with the network, but that was more an issue of the resource limitations of the hosts. It was readily apparent even in the early 70s that this form of networking was qualitatively different: it was a distributed computing problem, in some regards a distributed OS, and it was a network of peers, not master/slave as with the datacomm networks.
Datacomm networks and the PTTs follow what I have called the beads-on-a-string model: boxes connected by wires, where layers are just modules in the boxes. The Host has members in the lower 4 layers. The Networking model, OTOH, is focused on distributed cooperating processes that form layers of different scope. The boxes are just containers and less important.
>
> My impression of the 7-layer model has always been that it came from
> that kind of early 60s "network" world - lots of human users sitting at
> a slightly smart terminal (e.g., IBM 2260 or later 3270) interacting
> with some application running on a remote mainframe over a virtual
> circuit carried by modems and telephone lines. The seven layers match
> reasonably well to that technology.
To some degree, I would agree with that. Charlie Bachman (author of the 7-layer model), who worked for Honeywell-Bull, did come from that background, and I have always thought that the label of the Presentation Layer was an indicator of that. But there was a considerable distributed computing influence as well. First, internally Honeywell-Bull referred to it as a distributed computing architecture, and the French side of Honeywell-Bull, in the person of Michel Elie, was working closely with the CYCLADES group, who definitely saw it that way. Much of what they were trying to do was focused on computer-to-computer uses, as you point out.
Once the committee got hold of the model, it set out to further generalize it beyond Charlie’s original. (Although I have to say Charlie wasn’t stuck in that view and was pushing a distributed view, but more from the business side than the research side.) For example, the Presentation Layer had nothing to do with ‘presenting’ but with selecting transfer syntax. The Application Layer was modular, and modules could be re-used for different applications, etc.
>
> The problem we had back in the early 80s with forcing the Internet into
> that model was a result of the multiple endpoints involved in virtually
> every scenario. Users were still at terminals, but only the TIP/TAC
> scenario (remote login) fit into the 7-layer model.
In the OSI model there were no terminals. Terminals were outside the model. It was a model of peer systems.
Now, ITU (CCITT in those days) did include the terminal in their model. They called it a “start/stop-mode DTE” (Data Terminal Equipment), and what we called a TIP, they called a PAD (Packet Assembler/Disassembler), which they saw as an ‘interface between a start/stop-mode DTE and a packet-mode DTE (host).’ Well, not really. A TIP was an IMP with a piece of software that looked like a host to the IMP. There were ‘network access’ devices built at the time that were small, stand-alone, very limited hosts that provided terminals access to the network. Strictly speaking, this latter type was more equivalent to a PAD than the TIP was. All of this pre-dates OSI.
I made (I think) a delightful picture back then, one I still use, that shows the sequence of a terminal connected to a host, connected to a couple of switches/routers, connected to another host. The top of the picture is labeled the way we would have done it; the bottom labels the same objects as the CCITT labeled them. The upper labeling is symmetric, the lower labeling asymmetric. And of course, in the upper labeling ‘interfaces’ are APIs, while in the lower labeling ‘interfaces’ are wires between boxes. The point was that one could build on the top labeling, but the bottom labeling was pretty much a dead-end.
>
> Most scenarios were more complex, and the communications over the
> ARPANET and later Internet were largely computers interacting with other
> computers. When a human user was involved he or she was likely using a
> "client" program on a local computer (e.g., Telnet or FTP from ISIA,
> BBNE, MIT-DM (where I hung out)...) and that local computer was
> interacting with a computer at the "other end" as well as with computers
> "in the middle", e.g., information exchanges with gateways (routers),
> with servers such as DNS or NTP, etc.
>
> There were lots of other scenarios with no human user in sight, e.g.,
> mail servers talking amongst themselves, DNS servers getting
> synchronized and updated, etc.
Agreed.
In fact, when peer-to-peer [sic] became a big buzzword, I asked an advocate what the big deal was. The reply, with bated breath, was “that a host could be a client and a server at the same time!!!” I said, yeah, that was true the day we turned the Net on!
>
> It was really hard to put the round ARPA Networking block into the
> square OSI hole….
Sounds to me like you are confusing the OSI work with the ITU X.25 beads-on-a-string model. Remember OSI was an effort started by US computer companies with the support of European computer companies. It was later that the Europeans insisted that ISO and ITU produce joint standards. Personally, I think this was a big mistake, but with no signs of deregulation in Europe, there wasn’t much choice.
>
> Today, if you look at a single web-page, you'd likely see dozens of
> interactions going on between lots of network sites as all of the page
> content is pulled or constructed from all over the Internet to get the
> ads, cryptominer malware, teasers for other sites, et al onto the
> screen. Sure seems far away from that 7-layer model….
I don’t see how. It fits perfectly from my understanding of both. In fact, the web fits better in the 7-layer model (with its additional architecture investigations). I use the web to explain the insights made in understanding the OSI Application Layer. There were some very important insights developed.
(Now that said, was the OSI Model perfect? Far from it. There were some very good ideas in there, but the phone companies (and their allies) introduced a lot of old think and a lot that was just plain wrong. I have yet to see them get anything right, even today. In fact, I see them proposing the same thing they were in the 1970s, just with flashier buzzwords.)
>
> /Jack
>
> PS - a factoid you might find amusing. In the Internet, routers used to
> be called "gateways". When I was at BBN in the 80s, we sporadically
> tried to sell gateways to our X.25 network customers. No one would
> touch a "gateway". We finally learned that the term "gateway" in
> IBM-land referred to something which had a reputation of being
> expensive, hard to install, and very unreliable. So we started calling
> them "routers" instead…
An even more interesting factoid: the term ‘gateway’ had nothing to do with IBM.
A ‘router’ or ‘switch’ (the names were used interchangeably) was the relay within the networks of an internet, while a ‘gateway’ was the relay between networks. So a router is a relay between hosts and gateways, or between gateways; while a gateway is a higher-layer relay between hosts and gateways.
This was the whole idea that Abbate documents in her book: the PTTs adopted protocol translation at the boundaries between networks and so didn’t need gateways. (They were translating between very similar X.25 networks, so it wasn’t too messy, and they had X.75 to define the translation, and it maintained their desire for ‘value-added services’ in the network.) The researchers, expecting a wide variety of new network technologies and knowing how messy m x n translation could get, chose an overlay, an internet layer, so that the networks of an internet supported a common layer with no protocol translation.
This was the model that INWG was using in the mid-70s, reflected in the fact that the 3 transport protocols they were looking at had internet (transport) addresses, over network addresses, over data link addresses, with decreasing scope. (This was before IP was separated from TCP.)
A decade later, OSI independently came to the same conclusion in ISO 8648, Internal Organization of the Network Layer, which says that there are 3 sublayers (not all always present). In OSI-ese, they were 3a, Subnetwork Access; 3b, Subnetwork Dependent Convergence; and 3c, Subnetwork Independent Convergence. IOW, (3a, 3b) was the Network Layer; (3c, Transport) was the Internet Layer. (This is also reflected in the fact that intra-domain routing uses the Data Link Layer (the network layer’s lower layer) to exchange routing updates, while inter-domain routing uses a transport protocol, e.g., TCP or TP4, as an SNDC (the internet layer’s lower layer) to exchange routing information.)
[I know it was independently arrived at because I was in a position to observe both groups up close, but was not participating in their discussions, and there was no overlap in their membership. Yes, the basic Reference Model should have been re-written to reflect this structure, but it was deemed politically impossible to do so.
Similarly, by 1983, the upper layers were admitted to be a single layer. The protocol specifications were ‘adjusted’ so they could be implemented as a single layer, and were. (Noticing this simplification was often referred to as the OSI Clueless Test.) ;-) Uncovering that had been complicated by the PTTs stealing the Session Layer, which further obscured that the upper 3 layers were not only one layer but upside down.
Also in the early 1980s, proof was found that addresses should not be exposed at the layer boundary, as had been forced on the Reference Model in 1978, and that (N-1)-addresses should not be used as a suffix of an (N)-address. But again, it was deemed politically impossible to outright re-write the Basic Reference Model to directly reflect this, and various definitional subterfuges were used to get around it.]
But you are right that by 1983 the term gateway had pretty much disappeared in the Internet discussions; that was primarily because the Internet had become one big network, with translation (IP fragmentation and later NATs) at the boundaries.
Take care,
John
>
>
>
> _______
> internet-history mailing list
> internet-history at postel.org
> http://mailman.postel.org/mailman/listinfo/internet-history
> Contact list-owner at postel.org for assistance.