[ih] "The Great Debate"

John Day jeanjour at comcast.net
Mon Apr 27 06:12:31 PDT 2026


Remember this was a project between ISO (computer companies) and CCITT (Primarily SGVII and SGVIII that didn’t agree on much). Needless to say, the ISO faction didn’t agree with SGVII at all. SGVII wanted X.25 all the way, not Transport Protocol, and no datagrams. It wasn’t made any easier by the fact that many of the European computer companies were in the CCITT camp, and then there was IBM trying to stonewall everything. It was a major fight. Just to get the connectionless addendum to the Reference Model, the US had to threaten to pull out.

When it was decided to make OSI joint between ISO and ITU, I predicted that OSI would fail. The two worlds were too far apart. Of course, this raises the question of why the IETF didn’t leapfrog the ambitions of the computer-company side of OSI. (The CCITT side was clearly a decade behind and trying to stay there.)

Inline below.

> On Apr 26, 2026, at 21:13, Karl Auerbach <karl at iwl.com> wrote:
> 
> (By-the-way, where is Marshall Rose these days?  He and I got off on the wrong foot and I wish we would have agreed more and fought less.)
> 
> While Marshall was doing ISODE/CMOT I had purchased many of the colored volumes from the ITU or whomever and I had set forth to build a working X.400 - the first obstacle being ASN.1.

Was that before or after Bancroft Scott straightened out the mess the Brits had created? I realize that the IETF thought that ASN.1 was just an overly complex encoding scheme. It was actually a tool to make application protocols invariant with respect to syntax. A specification written in ASN.1 could be compiled into almost any encoding scheme (and today can be). But the IETF never included the means to select the encoding rules. (The abstract syntax was like writing data-structure definitions in code, while the encoding rules were basically the code generators and could be changed easily.)
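To make the distinction concrete, here is a minimal sketch in Python (not real ASN.1 tooling; the function names and the simplified "packed" rule are my own stand-ins): the same abstract value can be emitted under two different sets of encoding rules, with the abstract definition itself unchanged.

```python
def encode_ber_integer(value: int) -> bytes:
    # BER-style: tag octet 0x02 (INTEGER), a length octet,
    # then the minimal big-endian two's-complement content octets.
    body = value.to_bytes((value.bit_length() + 8) // 8 or 1, "big", signed=True)
    return bytes([0x02, len(body)]) + body

def encode_packed_integer(value: int, lo: int, hi: int) -> bytes:
    # PER-style idea: when the abstract syntax constrains the value to
    # lo..hi, send only the offset from lo in the minimum number of
    # octets -- no tag, no length octet.
    span = hi - lo
    nbytes = max(1, (span.bit_length() + 7) // 8)
    return (value - lo).to_bytes(nbytes, "big")
```

So `encode_ber_integer(5)` yields the three octets `02 01 05`, while `encode_packed_integer(5, 0, 255)` yields the single octet `05` — same abstract value, different rules, and swapping rule sets never touches the specification.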

O, gawd. X.400 is what happens when Silicon Valley meets ITU. What an abomination! What lack of thinking. They would write down the syntax of a command and think it was a formal description! They were convinced that they could generate the protocol from the API (remember that lunacy), only to find out when they were deep into it that, gee, it didn’t work when the protocol was symmetrical. Far too complex, but there was no telling them. It wasn’t part of the main direction, so we left SGVIII to themselves.

X.500 had many of the same problems, but there someone who knew what needed to be done was able to save some of that work (sort of).

As for implementation, there were always the NIST Workshops, which were generally scheduled on top of OSI meetings elsewhere to ensure that implementers didn’t talk to designers. (IBM was good at that.) Although I found that the designers had a better clue about implementation than the vaunted implementers.
> 
> (By-the-way, I don't think it is fair to complain that ISODE/CMOT were too large and slow - they were prototypes for experimentation.  I remember TCP implementations that were large and ugly - such as the U of Illinois implementation for early Unix on PDP-11 that swapped between a "small daemon" and a "large daemon" depending on connection state.)

Ahh, those were the days; there was barely room for the NCP in the kernel of Unix on an 11/45. Not to mention Unix had lousy IPC, so that had to be added. (Pipes could hardly be called IPC in my book. Just the typical CS hang-up on synchrony.) The constraints are hardly comparable. Even Telnet had to be done in user space and was a hack.

The one thing they did right was to hack file_io as the API, so creating a connection was open(ucsd/telnet). But an even greater hack was necessary to be acceptable to the IETF. (Notice that the hack would have allowed a seamless transition away from well-known ports to application names, and no one would have noticed.)

> 
> As for OSI - OMG!!  What a nightmare.  Nary a word of explanation why things were as they were, lots of insider phrases, and a design that was so open ended that it amounted to the equivalent of a Rube Goldberg airliner, complete with bowling alley, Olympic swimming pool, golf driving range, and a coal powered steam boiler.

O, gawd. That argument with ISO (and ITU) began almost immediately. What you see is considerably more than what they thought should be in a standard. There were long, drawn-out arguments about that. Everyone knew more needed to be said about the implementation and how to look at what was written, but ISO wanted just requirements, as if we were specifying screw threads. There were objections over specifying APIs (those belong with the programming-language committees). Then how were we to specify the input to a protocol? Errr, that was sort of worked out. But this was in the day when every system and language had its own API.

If it wasn’t CCITT committees throwing up roadblocks on technical issues, it was ISO and CCITT admin on procedural matters. Then there were the companies trying to pull a fast one, as in the time a new 50-page section of a document appeared overnight that no one had seen.

> 
> (I fear that RFC's coming out of the IETF are slowly walking the same road towards incomprehensibility and lack of explanation (especially with regard to paths not taken) that helped to sink ISO/OSI.)

Unfortunately, I see the same thing. (Probably the fate of all consensus committees.) The thing with ISO and ITU was that we were walking into one that was already far along that path. There were some who were quite adept at that game.

I do have to say that member-organization/country voting had its advantages. A large company couldn’t stack a meeting. More than once I was able to thwart IBM by using that.
> 
> There were nuggets of value in there, but they were not easily detectable or identifiable among the mountain of dross.

That is for sure; they were hard to see. For example, Marshall missed that the upper 3 layers were a single state machine.
> 
> I did an implementation in which I threw out most of ASN.1 complexity and ended up with a basic-encoding-rules (BER) engine that worked nicely when SNMP came along.

The right way to do it. We did something similar to go directly from data structures in code to the encoding rules.
However, there were advantages (as noted above) to being able to select encoding rules. PER was much more efficient in both bandwidth and processing, and just simpler.
> 
> OSI had some good ideas such as:
> 
>    - Connection time data (which in the TCP world would have made TLS and virtual websites a lot easier)

I find TLS a funny choice of name. My rule is that every layer (including applications) should protect itself.  Why would the Application trust the Transport Layer?
> 
>    - A session layer - which is a nice way to span application level relationships that span the failure and reconstruction of underlying transport connections as devices move about.  This could have greatly simplified IP mobility and simplified context-keeping things like web cookies.

You should have read the Session Layer standard. The OSI session layer was no session layer. SGVIII had stolen it early on for Videotex. The functions in the Session Layer belong in the application. Later that came home to roost when Transaction Processing tried to use them. (Long story for later.) What made it worse was that rather than do it right, the Brits came to the rescue with an overly complex kludge that only solved the immediate problem. SGVIII stealing the Session Layer actually turned the upper-layer architecture upside down. (Letting them do it was one of the things ITU got in the deal to develop OSI jointly. It was obvious at the time. Even AT&T (Bell Labs) argued against it.)

The real session layer was in the Application Layer and called ACSE, which included a plug-in for authentication. OSI had a nice modular structure for applications. ACSE created the Application connection, then there might be a base protocol with modules to add capabilities. Good software engineering ideas.

Actually, by that time I had made ACSE recursive, so distributed applications could be built on distributed applications. That wasn’t intended when it was done, but it turned out to solve the Transaction Processing issue much more simply, and it fit the implementation. But there was no way they were going to do anything elegant; it went against Larmouth’s grain.

> 
>    - A nice way to specify protocol services to the next higher layer and a distinct way to specify what was happening internal to the protocol.  (Dave Kaufman and I wrestled with the need for this kind of expression when we were trying to do security protocols at SDC - the OSI folks did a better job of it than we did.)

Which in particular are you pointing at?
> 
>    - The Fletcher checksum (it looks scary, but there are good ways to implement it and also to do incremental updates.)

Especially since we learned some time ago that the checksum in TCP was a placeholder. ;-)

But Fletcher was responsible for much more than that! He brought Watson’s delta-t into TP4. Almost all of delta-t is in TP4, which made it a major advance over TCP. It was simpler, more resistant to attacks, more robust, and avoided the 3-way handshake, which is superfluous anyway.
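On the checksum itself: a minimal sketch of the common byte-wise, mod-255 Fletcher-16 formulation in Python. The two running sums are what make it look scary at first glance, but they are also what give it position sensitivity and make cheap incremental updates possible, since a changed byte’s contribution to both sums depends only on its value and position.

```python
def fletcher16(data: bytes) -> int:
    """Fletcher-16 over a byte string; returns (sum2 << 8) | sum1."""
    sum1 = sum2 = 0
    for b in data:
        sum1 = (sum1 + b) % 255      # plain running sum of the bytes
        sum2 = (sum2 + sum1) % 255   # sum of sums: makes order matter
    return (sum2 << 8) | sum1
```

For example, `fletcher16(b"abcde")` gives `0xC8F0`, the commonly cited test vector. (In practice the inner loop is usually blocked so the modulo is deferred across many bytes, which is one of the "good ways to implement it" Karl mentions.)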

> 
>   - Things like "application titles" that would help in a world of cloud-like computing by allowing services to split (sorcerer's apprentice style), merge, or move while maintaining a client-service context.

Actually, there was much more to Application Layer naming, and it supported far more than just client/server and far more than the limitations imposed by the ARPANET kludge of well-known ports. The Upper Layer Architecture group made a major insight into the nature of the application layer that came from a turf question. (Not only was it an important insight, but the idea that it came from a turf question is delightful.)
> 
>    - An object identifier hierarchy.  (The OSI version was sane, what we did to it in SNMP by imposing "lexiographic ordering" was not nice - I wrote and did a prototype implementation of an alternative to SNMP that treated object ID sequences more in tune with what OSI designed and ended up with an SNMP near replacement that was orders of magnitude faster, smaller, more secure, and more able to perform atomic control operations - https://www.iwl.com/idocs/knmp-overview )

There were good people working on the OID stuff.

Well, SNMP was a big mistake, especially since there was a more advanced alternative that could have been chosen. But then it used modern CS concepts, something the IETF always shies away from. I always thought it was amusing that of the 3 management protocols at the time, SNMP had the largest implementation. I characterized it as 'so simple, it was too complex to use.'
> 
> But the OSI folks really shot themselves in the foot by:
> 
>    - Charging $$ just to see the specification documents, which were written in opaque language, and were designed to be all things to all people without any practical engineering to cut them down to implementable size and useful deployment.

Yes, but this was how ISO especially (and to a lesser extent ITU) supported themselves. We tried hard to make the argument that they could make more money packaging the standards for a bulk market. (Remember, ISO standardizes far more than telecom, unlike ITU.) For the vast majority of those standards (and in its experience), a company buys one copy and refers to it infrequently. What OSI was doing was totally different and counter to everything they knew (and they were a pretty stodgy organization in Geneva). With OSI, every engineer needed a copy. They could have made a lot more money binding OSI standards in logical groups and selling them at technical-book prices, but they couldn’t see it. Stupid. Although, ISO is based on capitalism, not government socialism.

>    - Being all snooty and kinda unwilling to engage with other networking professionals - it was ITU/CCITT all the way and everyone else can go pound sand.  Our small company considered joining the OSI committees - but the entry fees were aimed at IBM sized companies, not the kind of small companies in the TCP/IP world.

That may have been true in ITU, where you had to be from a PTT. Remember, ITU is a treaty organization; you had to work for a signatory of the treaty to have a vote. In ISO, at least in the US, the only rule was to show up at a meeting. And for international meetings there was a requirement to have been at 2 of the last 3 meetings, so you knew what the national positions were. Unlike the IETF, at international meetings one was representing the committee as a whole. (Not everyone went to international meetings.)

Why on earth would you have even considered going to an ITU meeting? 
I never went to a single ITU meeting.
> 
>    - Treating their designs as perfect and complete rather than as an evolving exploration of a new technology, store and forward packet switched networks.

Another thing we tried to drum into their heads. But the Europeans (ITU) wouldn’t hear of it. We kept telling them: Start simple, add features.
> 
> I kinda like the TUBA - basically replacing IPv4 with OSI CLNP - proposals when we were in the early phases that led to IPv6 (I confess I was cued by Cindi Jung.)

Yes, TUBA and CLNP were what was needed. It made the two major structural changes the Internet required. It is interesting to see the growing unrest with v6 (after 500+ RFCs, who wouldn’t be!), but none of them understand what is needed.
> 
> By-the-way, Sue Hares built some really cool wooden rubber-band machine guns.  Not that this is relevant to anything, but it was fun.

Why am I not surprised!!  ;-)

Take care,
John

> 
>         --karl--
> 


