[ih] Confusion in the RFCs
John Day
jeanjour at comcast.net
Fri Sep 5 12:25:23 PDT 2025
> On Sep 5, 2025, at 11:04, Clem Cole <clemc at ccc.com> wrote:
>
> Below... [note: this really belongs in COFF, as it's less Internet history and more reminiscing by us old guys]
>
> On Fri, Sep 5, 2025 at 8:44 AM John Day via Internet-history <internet-history at elists.isoc.org> wrote:
>> It inspired everything we did. It was a revelation. That is why our PDP-11 OS language was called PDP-11 Espol, their OS language.
> Fascinating - did that survive? Could you tell us more? I did not know that someone had tried to make an ESPOL for the 11. Was it a cross compiler, and what was the native OS? I grew up on BLISS and C, of course, and knew about other system languages like BCPL and concurrent Pascal that targeted the 11, but I never knew about an implementation of ESPOL for it.
PEESPOL was written by Dave Grothe, who is no longer with us. It was patterned after ESPOL; it wasn’t trying to *be* ESPOL. ESPOL had the Burroughs hardware to rely on, with lots of functionality that the 11 didn’t support. Yes, it was a cross-compiler. It had a very sophisticated macro processor that let us treat it as an extensible language: we could declare macros that looked like language constructs and then program in terms of the system we were building. I know there were parsing errors in BLISS or BCPL (one of the two) that PEESPOL had avoided. (I don’t remember exactly what they were, but they had to do with getting the order of processing right.)
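For a flavor of what that looked like, here is a rough sketch in C rather than PEESPOL; the macros and names are invented for illustration and are not taken from the real compiler or OS:

#include <stdio.h>

/* minimal stand-ins so the example is self-contained */
typedef struct { int pending; } event_queue;
static int  next_event(event_queue *q)             { return q->pending-- > 0; }
static void enqueue(const char *to, const char *m) { printf("%s <- %s\n", to, m); }

/* "constructs" of the system language, built as macros */
#define ON_EVENT(q)    while (next_event(q))
#define SEND(msg, to)  enqueue((to), (msg))

int main(void)
{
    event_queue timers = { 2 };
    ON_EVENT(&timers)              /* reads like a loop construct of the system */
        SEND("tick", "scheduler");
    return 0;
}

Once a handful of such macros were declared, the rest of the code could be written in the vocabulary of the system being built.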
>>
>> I knew there was one around UCLA somewhere and at Stanford. Knuth wrote the early Algol compiler for it. It was the first system to use a stack for procedures as well as arithmetic. Tagged architecture, descriptor-based memory. The system had a coherence I have never seen again.
Another innovation was treating an interrupt as an accidental procedure entry, which nearly everyone does now. (The PDP-11 did.) In the 5500, however, because call-by-name was done in hardware, an interrupt was just a degenerate case of a thunk. ;-)
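For anyone who never met one: a thunk is a parameterless routine that re-evaluates the argument expression each time it is used, which is why an interrupt entry falls out of the same mechanism. A toy sketch of the idea in C (the 5500, of course, did this in hardware with none of this scaffolding):

#include <stdio.h>

/* A thunk: a parameterless routine plus an environment; calling it
   re-evaluates the argument expression (Algol call-by-name). */
typedef struct {
    int  (*eval)(void *env);
    void  *env;
} thunk;

struct env_i { int *i; };            /* environment captured by the thunk */

static int eval_i_squared(void *e)   /* re-evaluates "i*i" on every call */
{
    struct env_i *env = e;
    return (*env->i) * (*env->i);
}

/* The callee takes its parameter "by name": every use calls the thunk. */
static int sum_by_name(thunk term, int *i, int n)
{
    int s = 0;
    for (*i = 1; *i <= n; (*i)++)
        s += term.eval(term.env);    /* argument re-evaluated here */
    return s;
}

int main(void)
{
    int i;
    struct env_i env = { &i };
    thunk t = { eval_i_squared, &env };
    printf("%d\n", sum_by_name(t, &i, 3));   /* i*i summed for i = 1..3 -> 14 */
    return 0;
}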
> No doubt, the B5000 was the first "high-level" system design, incorporating everything you describe, along with some interesting support for its multi-tasking concepts. [I remember trying to wrap my head around the idea of how a cactus stack worked].
;-) Simple. Everything was a procedure, including the MCP. The only difference between a process and a procedure was what it returned to: a procedure returned within the same stack, a process returned to a different stack.
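One crude way to picture it, sketched in C: every frame links to the frame it returns to, and a new process is just a branch whose frames hang off an ancestor frame in another stack. This only illustrates the linkage, not the B6700’s actual stack format:

#include <stdio.h>
#include <stdlib.h>

/* Each frame links to the frame it returns to.  Branches of the cactus
   share their ancestor frames. */
typedef struct frame {
    const char   *name;
    struct frame *caller;    /* the frame this one returns to */
} frame;

static frame *enter(const char *name, frame *caller)   /* a call: push a frame */
{
    frame *f = malloc(sizeof *f);
    f->name = name;
    f->caller = caller;
    return f;
}

int main(void)
{
    frame *mcp   = enter("MCP", NULL);
    frame *jobA  = enter("jobA", mcp);        /* one branch of the cactus */
    frame *jobB  = enter("jobB", mcp);        /* another branch, sharing the MCP frame */
    frame *proc  = enter("someProc", jobA);   /* ordinary call inside branch A */

    /* a procedure returns within its own stack ... */
    printf("%s returns to %s\n", proc->name, proc->caller->name);
    /* ... while a process returns to a frame in a different stack */
    printf("%s and %s both return to %s\n",
           jobA->name, jobB->name, jobA->caller->name);

    free(proc); free(jobB); free(jobA); free(mcp);
    return 0;
}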
> One of my old colleagues at Tektronix was Bill Price, who had earlier been one of the MCP's designers and implementors, and he took great pride in schooling us youngsters in those days. He pointed out to us that if Burroughs' management had had any real idea of what they were doing, and of how far out and different it was from anything else being done at IBM in White Plains or at Remington Rand/Eckert-Mauchly in North Philly, he is pretty sure they would have shut it down.
;-) I didn’t know that, but we suspected that was the case. ;-)
>
> As Bill explained it to us (then UNIX guys in the late 1970s), the designers of the MCP were very rigorous in their design, but had a great sense of humor and used really marvelous names for some of the data structures and kernel tasks.
;-) Indeed they did. ;-) And file attributes too.
> The MCP was extremely well structured, but when they ended up with something that did not quite fit in their structured design, they gave the special case to Bill to deal with in his "Old Weird Harold" kernel task, which, among other things, maintained "the bed," a list of tasks awaiting action. One of my favorite moments was when Bill shared the comments from some of the code he still had, which revealed that Old Weird Harold was responsible for "monitoring the bed for something to fork."
;-) ? This was something other than the schedule of processes to run? The schedule was called the sheet. So of course there were variables called, “stackofsheet” and “pileofsheet”.
Of course, the one that got them in trouble was the procedure that forked new user processes. (The top line of the B6700 operator console would display the name of the procedure the MCP was in, and one day a prim lady from a bank was standing behind the operator when it ran.) It was called motherforker, and had been for over a decade. ;-)
>
> Also, one minor correction, while I do believe that Burroughs had an LA-based team, I am under the impression that most of the work on both HS and SW for the B5000 and B6000 families was done in Philadelphia (well, Paoli to be more precise).
The large systems, the 7800 and 8800, were done in Paoli, but the 5500 and 6700 were done in California. We were making the transition from the 5500 to the 6700 for Illiac IV and were getting pre-beta listings and releases of the MCP.
>>
>> Trivial example: the 48-bit word. The floating-point format was a 39-bit mantissa (plus a sign bit and an 8-bit exponent), but the decimal point was at the right end of the word. Integers were merely unnormalized floating-point numbers. No integer-to-real conversion. It just worked. Also, it was pointed out to me recently that there was a hardware operator that converted an integer to BCD; a 39-bit binary integer would convert within 48 bits. (The Burroughs 3500 was a COBOL machine and all decimal, including the addressing!) Burroughs was architecture-agnostic. One could go on and on.
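To spell out why that worked: an integer was just a word with a zero exponent and an unnormalized mantissa, so the same arithmetic hardware handled both; and since 2^39 is about 5.5 × 10^11, at most 12 decimal digits, 12 BCD digits of 4 bits each are exactly 48 bits. A toy model in C of the “integers are unnormalized reals” idea, using the field widths described above but not the machine’s actual bit layout or radix:

#include <stdio.h>
#include <stdint.h>

/* Toy model: value = (-1)^sign * mantissa * 2^exponent, with the point at
   the right end of the mantissa.  Radix 2 is used purely for illustration. */
typedef struct {
    unsigned sign;       /* 1 bit  */
    int      exponent;   /* 8 bits */
    uint64_t mantissa;   /* 39 bits, point at the right end */
} word48;

static double value_of(word48 w)
{
    double v = (double)w.mantissa;
    for (int e = 0; e < w.exponent; e++) v *= 2.0;
    for (int e = 0; e > w.exponent; e--) v /= 2.0;
    return w.sign ? -v : v;
}

int main(void)
{
    word48 five_as_integer = { 0, 0, 5 };  /* the integer 5 */
    word48 five_as_real    = { 0, 0, 5 };  /* the real 5.0: same representation */

    /* no int-to-real conversion needed: the add/multiply hardware
       sees the same kind of word either way */
    printf("%g %g\n", value_of(five_as_integer), value_of(five_as_real));
    return 0;
}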
> Yeah, they got it about language-driven architectures. My favorite Burroughs machine was their mid-range B1700, which they targeted at small businesses. This machine changed its microcode on the fly depending on the application (i.e., it had COBOL microcode, Algol microcode, etc.). We studied this system in great detail in Dan Siewiorek's computer architecture class when I was an undergrad. It was a very cool machine from which we really learned a great deal about how microcoding could be used (and some of you have heard my story about my UCB grad qualifiers, when I was asked a question about microcoding and used the B1700 to answer it).
Indeed it was!! We thought it was pretty fascinating too. Burroughs tried to give us one, but the U decided it would cost too much to accept it. Sad.
>>
>> Why can’t we build systems like that any more?
> Sadly, because often simpler is much less costly, and as I have said many times, "Simple Economics always beats Sophisticated Architecture."
The machine was more expensive, but the cost of operations was orders of magnitude less. There were companies with entire departments that wrote JCL for IBM machines; with Burroughs those departments were unnecessary. Anyone could do it.
Also, the 5500/6700 machines were secure. Only the compilers generated code (there was no assembler), the stack was tagged non-executable, and with the descriptor-based memory one couldn’t index off the end of an array. I also recently learned that it all still exists at Unisys, and they have updated it: there are now 8 bits of tag, the descriptor-based memory has been generalized into ‘object-based memory’, and so on. They are very proud of the fact that when they ported it to Intel hardware, they didn’t have to change a single line of user code.
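Roughly the flavor of what a descriptor buys you, sketched in C; the tag values and layout here are invented for illustration and are nothing like the actual 5500/6700 (or current Unisys) formats:

#include <stdio.h>
#include <stdlib.h>

enum { TAG_DATA = 0, TAG_DESCRIPTOR = 5 };   /* illustrative tag values only */

typedef struct {
    unsigned tag;      /* what kind of word this is; data is never executable */
    double  *base;     /* start of the array */
    size_t   length;   /* number of elements */
} descriptor;

static double load(descriptor d, size_t index)
{
    if (d.tag != TAG_DESCRIPTOR) {            /* wrong kind of word: fault */
        fprintf(stderr, "fault: not a descriptor\n");
        exit(1);
    }
    if (index >= d.length) {                  /* the hardware bounds check */
        fprintf(stderr, "fault: index %zu out of bounds (length %zu)\n",
                index, d.length);
        exit(1);
    }
    return d.base[index];
}

int main(void)
{
    double a[4] = { 1, 2, 3, 4 };
    descriptor d = { TAG_DESCRIPTOR, a, 4 };

    printf("%g\n", load(d, 2));   /* fine */
    printf("%g\n", load(d, 7));   /* faults: you cannot index off the end */
    return 0;
}

Every array reference goes through such a word, so there is no way to walk past the end of an array into someone else’s memory.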
Take care,
John