From jack at 3kitty.org Sun Oct 9 09:41:06 2016 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 9 Oct 2016 09:41:06 -0700 Subject: [ih] "network unix" In-Reply-To: References: Message-ID: <02099a22-3c5c-ba04-9a46-e3fe7934cc57@3kitty.org> On 09/27/2016 03:24 PM, Paul Ruizendaal wrote: > Hi all, > > I'm interested in "Network Unix" as described in RFC681 and here https://archive.org/details/networkunixsyste243kell. My purpose is to understand the early history of networking in Unix, much in the style of Warren Toomey's work on early Unix in general (see http://minnie.tuhs.org/cgi-bin/utree.pl). > > Would anybody know if the source code of this Network Unix survived to the present? > > Many thanks, > > Paul > _______ Paul also asked me about this off-list, and we thought that my recollections of the "early history of networking in Unix" might be of interest to other historians using this list as an archaeological resource. So below is what I recently sent to him. It's all first hand from my personal experience. File it under "Networking in the Stone Age of Computing". Enjoy, /Jack Haverty ----------------------------------- Hi Paul, Sorry this reply took so long. I wanted to do a little digging in the boxes in my basement before answering. Re: Unix TCPs et al: I left MIT and joined BBN in the summer of 1977, and my first assignment was to get TCP running on a PDP-11/40 with V6 Unix. The TCP effort was part of a large project building a research system for network security that involved a client/server architecture. We had a bunch of LSI-11 systems (used in a variety of projects, the SRI Packet Radios being developed at that time) were clients of a server running on a PDP-10 Tenex system. The goal was to move that server function to a much cheaper machine and someone (not me) thought that the PDP-11/40 was suitable. It was there in the lab when I started working on the TCP implementation. When I began that project, I didn't know much about Unix. I had seen people using it, but hadn't used it myself. I recall that my first impression was that to use Unix you typed strings of gibberish at the console, which somehow made sense to the system. The Unix command language was (is still) pretty complex. I used to speak it fluently, but that was a long time ago... Most of my prior work was on PDP-10s and PDP-11s at MIT in Licklider's group. I also had written a lot of code that used the ARPANET in the 70s, but hadn't done any of the system programming work on NCP. I hadn't heard of TCP either. So I guess I was the perfect choice to implement TCP in Unix....anyway that's what I was assigned to do, working with Randy Rettberg (who didn't know anything about Unix either). Learning Unix was necessary, and of course a bit of a learning curve. I anticipated the need to poke around inside the Unix kernel in order to implement TCP. The "network unix" that did NCP at the time wasn't viable as a base since it wouldn't run on the PDP-11/40. So I had to learn how the kernel worked. Since there was no open-source Unix at the time, the kernel came from ATT with lots of restrictions about keeping it confidential, but with no documentation of the internals of the kernel itself. The source code was provided, but it was difficult to figure out the "big picture" of how it operated. The ARPANET came to the rescue -- I found documentation of some Unix internals that were apparently used in courses at the University of Woolongong. It described the core mechanisms of things like inodes, forking, etc. That helped a lot to understand the kernel source code. The /40 design utilized a single address space for instructions and data, so everything had to fit in 32KB of memory (yes K, not M or G). Other machines, e.g., the 11/45 or 11/70, implemented "i-d separation". This caused instructions to be fetched from a different address space than data. That meant that the effective address space for the kernel was 64K instead of 32K on those larger machines. Where the NCP kernel mods could fit in a "64KB machine", there wasn't enough room in the 32K world for much at all after the basic V6 kernel, which took up most of the memory. I managed to add what was needed to get TCP running but it was a struggle. Every trick in the books was needed - e.g., I went through the entire kernel and changed all of the "panic" messages (what it printed out on the console when it crashed) so they were just short 3 or 4 letter codes rather than sentences. Every byte saved helped. The bulk of the TCP software therefore had to be in user space, with absolute minimal additions to the Unix kernel. There just wasn't space for much. That was the primary factor influencing the design. The design was specific to that 11/40 Unix -- not intended as a general purpose implementation for Unix on other platforms. Jim Mathis at SRI had developed a TCP implementation for the LSI-11s that were the computers used in Packet Radios. The implementation was a TCP version 2.5, written in assembler (Macro-11), and designed to run on top of MOS, which was SRI's operating system for the LSI-11. All of this work was done under ARPA funding, with Vint Cerf as the ARPA program manager. Vint directed SRI to provide me with the LSI-11 TCP, which Jim happily provided and helped explain how it worked. So I took Jim's TCP and started to get it running as a user process on the 11/40. The core "TCP engine" with the state machine, packet handling et al was straightforward since the LSI-11 and PDP-11/40 were essentially the same instruction set. The bulk of the work was in the interfaces between that engine and the outside world: the network interface (a DEC IMP-11A) and the program that was using TCP (Telnet, FTP, etc.) With TCP as a user process, and its "customers" other user processes, the obvious way to interconnect the two is by use of the Unix "pipes" mechanisms - or actually the "ports" mechanisms developed by Rand to enable pipe-like interconnects between unrelated processes. That's where we ran into a Unix deficiency. Unix had been designed with the concept of "pipes" as a basic element. You pump data from a keyboard or other source into a program, it does its computation, and pushes the results back out a pipe. You can string pipes and programs together. Each program waits for something to come through its input pipe ("stdin") and then computes and answer which is sends through the output pipe ("stdout"), and then goes to read the next input. If there is no input, the kernel simply suspends the process until the next input arrives. This works well for the classic time-sharing terminal usage where a human interacts with a program. But networking is fundamentally different - you always have two participants generally on different computers connected by the network. When the system is idle, there's no way to tell which participant is going to send data next. For example, if you consider a basic Telnet connection, there would be user sitting at a terminal interacting with a program on some remote computer. The TCP software would be in the middle. When things "quieted down" and the user, or the program, was thinking, the TCP would want to go idle, and sleep. To do that it would simply execute a "read" command to wait for the next data to arrive, and the Unix kernel will make the process dormant until that data is ready. But does the TCP read from the user side, expecting the user to type next? Or does it read from the network side, expecting the remote program (actually the remote TCP) to send next? There's no way to tell. Randy and I had to invent a minimalist new function for the Unix kernel to make it possible to write TCP. The basic requirement was that a process (the TCP) be able to have multiple simultaneous I/O activities without the risk of the TCP process being blocked waiting in a read or write on any particular I/O channel. Of course it all had to fit into the 11/40 kernel space, which meant very very few instructions. That led to the definitions of two new kernel calls: AWAIT and CAPAC. The semantics of AWAIT were simple - it was like a "sleep" call but you could specify to be awakened when one or more of a set of I/O channels had changed. For input, that meant there was some data ready to be read. For output, that meant that there was some buffer space available to write data. In other words, the process could do a READ or WRITE without danger of being blocked by the kernel. CAPAC was the complementary function that allowed a process to determine exactly how much data it could READ or WRITE without blocking. With these two functions, it was possible to implement the TCP as a user process. Randy and I actually wrote a paper about this: Haverty, J.F. and Rettberg, R.D., "Interprocess communications for a server in UNIX", Proceedings IEEE Computer Society International Conference on Computer Communications Networks, September 1978, 312-315. That paper talked specifically about Unix and the AWAIT and CAPAC primitives. From our ARPANET experience, these were the minimum functions needed to enable Unix to be used in network environments. They weren't the ideal API, but they would fit in the 11/40! With AWAIT and CAPAC implemented, I brought up the LSI-11 TCP inside our 11/40 and got it running and communicating with the other TCPs for the project, using the ARPANET for transport. My old yellowed lab notebooks have been aging in the basement but they're still readable. Some salient entries: July 27, 1977: "got TCP11 Unix version to assemble" September 16, 1977: "TCP and Al Spector's TCP can talk fine" So, if you're curious about when "the first Unix TCP" was created, I'd set September 16, 1977 as the date. That's when my 11/40 TCP first successfully communicated with a TCP on a different machine (an LSI-11). Al Spector was one of the engineers working on that LSI-11 component. This was a time of rapid change in the TCP world, and as we (the handful of people who did those first TCP implementations) gained experience, we changed the TCP protocol and progressed through TCP 2.5, 2.5+, 2.5+epsilon, and eventually TCP 4 (which is still largely what we have in 2016). I changed the 11/40 TCP to track those modifications. It's hard to believe, but I actually still have a binder with a line-printer listing of that Unix TCP which was running on the 11/40. It's dated March 30, 1979. That's probably about the time we ended that project that was using TCP and decommissioned the 11/40. I've been meaning to scan that listing and get it online.....maybe this winter. The 11/40 implementation was far from being high performance. Shortly after I got it running I did some performance tests with a stopwatch. That TCP could achieve the blazing fast speed of 11 bits/second. Yes, bits.... That motivated a bit more digging around in the Unix kernel. What I discovered was that the pipes/ports implementation was also pretty basic. It was built on top of the file system, and a pipe was basically a file with a few pointers kept in the kernel to make sure that the reader never got ahead of the writer. The problem was that, since it was just a file, the kernel felt obliged to write it out to disk, and it would block the reader/writer processes as needed to wait for the disk I/O. Our PDP-11/40 had an RK05 cartridge disk, which was far from being fast. Hence, 11 bits/second was the result. Changing a few more kernel "panic" messages freed up a few more bytes and I added a few more instructions to prevent the file from ever being written to disk. Vint and I were working closely at the time, so I had kept him aware of these obstacles and the need to get a "real" TCP implementation for the newer, more capable machines. User-level TCP worked, but it really should be integrated into the kernel, which could be done with the newer machines. Vint added several TCP-related tasks to our contract at BBN. One was the Vax TCP (which Rob Gurwitz took on). Another was the HP-3000 TCP (HP/UX variant of Unix) which John Sax and Winston Edmond built. In addition, Ed Cain, who was a manager at the Defense Communications Engineering Center (DCEC), initiated a project to build TCP for the 11/70 Unix, which Mike Wingfield and Al Nemeth did. So there were 4 separate Unix TCP implementations done at BBN. My 11/40 one was first, and proved that although you could implement TCP on 11/40 Unix, it was not appropriate for general use in more powerful systems. The others happened mostly concurrently. Rob's Vax implementation was probably started second, with the HP and 11/70 versions not far behind. This was part of a concerted effort on DARPA (and DCA/DCEC's) part to make Unix implementations more widely available so that it would be easier for people to start using TCP. ARPA was also funding the work at Berkeley to create BSD Unix, and all of this prior work was made available to them (in the same way that I got Jim Mathis' LSI-11 TCP for possible use in Unix). But I have no idea what of that, if anything, they might have used in creating their code. Hope this helps! /Jack From jnc at mercury.lcs.mit.edu Sun Oct 9 10:21:59 2016 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 9 Oct 2016 13:21:59 -0400 (EDT) Subject: [ih] "network unix" Message-ID: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> > below is what I recently sent to him. It's all first hand from my > personal experience. A few notes/corrections: > The /40 design utilized a single address space for instructions and > data, so everything had to fit in 32KB of memory (yes K, not M or G). Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in each process' kernel stack + other swappable per-process data, so really only 48KB for all kernel code, data, disk buffers, etc. > MOS, which was SRI's operating system for the LSI-11. A small real-time memory-only OS; no memory management, or pre-empting of processes (which were all created when a configuration file was assembled, prior to linking). It provided inter-process message passing, memory allocation, asynchronous queued device I/O (using messages for completion notification), timers and not a whole lot more. It did have character/string I/O to serial lines. > there wasn't enough room in the 32K world for much at all after the > basic V6 kernel We had the same issue at MIT (a couple of years later). We went a very slightly different direction: we put input packet de-multiplexing in the kernel, but like yours, everything else (TCP and application) ran in a user process. The interface between the user and kernel was completely different, though. We had several different TCP implementations, semi-tuned to the application: e.g. the one for FTP had real buffering, for major data flows, but the one for User Telnet used (effectively) a shift register for buffering transmitted data (keystrokes). > But does the TCP read from the user side, expecting the user to type > next? Or does it read from the network side, expecting the remote > program (actually the remote TCP) to send next? There's no way to tell. The classic way to handle this problem in 'original' Unix is to have the process fission, and have one process for each direction; for coordination between the two, if the two have a pipe between them, they can use signals to wake each other up and notify the partner that there's data in the pipe to be read. Painful, but it does work. I don't recall if our Telnet used this hack; I know the program we used to talk over serial lines to consoles of the various LSI-11 routers, etc (which were connected to our time-sharing Unix) used it (not our group's code, someone else's clever idea). Noel From jack at 3kitty.org Sun Oct 9 11:36:57 2016 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 9 Oct 2016 11:36:57 -0700 Subject: [ih] "network unix" In-Reply-To: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> Message-ID: Hi Noel, You're right, I should have said 32KW....but I wonder how many people today would know what "KW" means? I suspect many would think it referred to the power that our ancient monsters consumed...actually probably not far off for the bigger machines that consumed many square feet of lab space! The other aspect of the "Stone Age" that may not be remembered today is that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit bytes in 16-bit words. Other machines made different choices. The PDP-10 was agnostic -- the instruction set allowed the programmer to specify whatever byte size they liked. So a "byte" only made sense in the context of a specific machine. Today of course we all know that a byte is 8 bits. Period. Perhaps some historian can figure out exactly when that happened..... Fun times... /Jack On 10/09/2016 10:21 AM, Noel Chiappa wrote: > A few notes/corrections: > > > The /40 design utilized a single address space for instructions and > > data, so everything had to fit in 32KB of memory (yes K, not M or G). > > Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so > only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in > each process' kernel stack + other swappable per-process data, so really only > 48KB for all kernel code, data, disk buffers, etc. From nigel at channelisles.net Sun Oct 9 12:07:42 2016 From: nigel at channelisles.net (Nigel Roberts) Date: Sun, 9 Oct 2016 20:07:42 +0100 Subject: [ih] "network unix" In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> Message-ID: <486739e9-e400-e051-9f1b-4dee0c883205@channelisles.net> Some time after 1980. I worked with BCPL on the PDP-10 and 'byte' was not fixed at 8 bits then. But with the arrival of microprocessors, it became so, pretty soon thereafter, probably around 82-83 On 09/10/16 19:36, Jack Haverty wrote: > Hi Noel, > > You're right, I should have said 32KW....but I wonder how many people > today would know what "KW" means? I suspect many would think it > referred to the power that our ancient monsters consumed...actually > probably not far off for the bigger machines that consumed many square > feet of lab space! > > The other aspect of the "Stone Age" that may not be remembered today is > that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit > bytes in 16-bit words. Other machines made different choices. The > PDP-10 was agnostic -- the instruction set allowed the programmer to > specify whatever byte size they liked. So a "byte" only made sense in > the context of a specific machine. > > Today of course we all know that a byte is 8 bits. Period. Perhaps > some historian can figure out exactly when that happened..... > > Fun times... > /Jack > > On 10/09/2016 10:21 AM, Noel Chiappa wrote: >> A few notes/corrections: >> >> > The /40 design utilized a single address space for instructions and >> > data, so everything had to fit in 32KB of memory (yes K, not M or G). >> >> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >> each process' kernel stack + other swappable per-process data, so really only >> 48KB for all kernel code, data, disk buffers, etc. > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From sob at sobco.com Sun Oct 9 12:20:40 2016 From: sob at sobco.com (Scott O. Bradner) Date: Sun, 9 Oct 2016 15:20:40 -0400 Subject: [ih] "network unix" In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> Message-ID: > On Oct 9, 2016, at 2:36 PM, Jack Haverty wrote: > > Hi Noel, > > You're right, I should have said 32KW....but I wonder how many people > today would know what "KW" means? I suspect many would think it > referred to the power that our ancient monsters consumed...actually > probably not far off for the bigger machines that consumed many square > feet of lab space! > > The other aspect of the "Stone Age" that may not be remembered today is > that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit > bytes in 16-bit words. PDP-8s were 12 bit words PDP-1, 7, 9, 15 were 18 bit words PDP-11 were 16 bit words PDD 6, 10 were 36 bit words Scott > Other machines made different choices. The > PDP-10 was agnostic -- the instruction set allowed the programmer to > specify whatever byte size they liked. So a "byte" only made sense in > the context of a specific machine. > > Today of course we all know that a byte is 8 bits. Period. Perhaps > some historian can figure out exactly when that happened..... > > Fun times... > /Jack > > On 10/09/2016 10:21 AM, Noel Chiappa wrote: >> A few notes/corrections: >> >>> The /40 design utilized a single address space for instructions and >>> data, so everything had to fit in 32KB of memory (yes K, not M or G). >> >> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >> each process' kernel stack + other swappable per-process data, so really only >> 48KB for all kernel code, data, disk buffers, etc. > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From brian.e.carpenter at gmail.com Sun Oct 9 12:56:31 2016 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 10 Oct 2016 08:56:31 +1300 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> Message-ID: <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> On 10/10/2016 08:20, Scott O. Bradner wrote: ... > PDP-11 were 16 bit words I think the byte stabilised at 8 bits in my mind because of the PDP-11, rapidly followed by the Intel 8080 and Motorola 6800. Looking at my 1971 PDP11/20-15-r20 processor handbook, it's unambiguous that a byte is 8 bits and is the smallest addressable unit. (However, I think the question was really settled in April 1964 when the IBM 360 was announced. From then on, the 6 bit bytes were survivors.) When did ISO start using "octet"? Brian From jack at 3kitty.org Sun Oct 9 12:57:53 2016 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 9 Oct 2016 12:57:53 -0700 Subject: [ih] "network unix" In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> Message-ID: <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> Wow, people are actually reading this stuff... Thanks to everyone who pointed out that PDP-8s didn't have 8-bit byte. Tough audience... What I meant to say was "PDP-8s had 12-bit words and IIRC some notion of 6-bit bytes. PDP-11s had 8-bit bytes in 16-bit words" Somewhere between brain and fingers my neural network must have dropped a packet...... /Jack On 10/09/2016 12:20 PM, Scott O. Bradner wrote: > >> On Oct 9, 2016, at 2:36 PM, Jack Haverty wrote: >> >> Hi Noel, >> >> You're right, I should have said 32KW....but I wonder how many people >> today would know what "KW" means? I suspect many would think it >> referred to the power that our ancient monsters consumed...actually >> probably not far off for the bigger machines that consumed many square >> feet of lab space! >> >> The other aspect of the "Stone Age" that may not be remembered today is >> that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit >> bytes in 16-bit words. > > > PDP-8s were 12 bit words > > PDP-1, 7, 9, 15 were 18 bit words > > PDP-11 were 16 bit words > > PDD 6, 10 were 36 bit words > > Scott > >> Other machines made different choices. The >> PDP-10 was agnostic -- the instruction set allowed the programmer to >> specify whatever byte size they liked. So a "byte" only made sense in >> the context of a specific machine. >> >> Today of course we all know that a byte is 8 bits. Period. Perhaps >> some historian can figure out exactly when that happened..... >> >> Fun times... >> /Jack >> >> On 10/09/2016 10:21 AM, Noel Chiappa wrote: >>> A few notes/corrections: >>> >>>> The /40 design utilized a single address space for instructions and >>>> data, so everything had to fit in 32KB of memory (yes K, not M or G). >>> >>> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >>> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >>> each process' kernel stack + other swappable per-process data, so really only >>> 48KB for all kernel code, data, disk buffers, etc. >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From sob at sobco.com Sun Oct 9 13:27:49 2016 From: sob at sobco.com (Scott O. Bradner) Date: Sun, 9 Oct 2016 16:27:49 -0400 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> Message-ID: <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> fwiw - I DEC was using ?octet? with the PDP-1s and PDP-4s in the mid 1960s (for me it was 1966 with a PDP-4) Scott > On Oct 9, 2016, at 3:56 PM, Brian E Carpenter wrote: > > On 10/10/2016 08:20, Scott O. Bradner wrote: > > ... >> PDP-11 were 16 bit words > > I think the byte stabilised at 8 bits in my mind because of > the PDP-11, rapidly followed by the Intel 8080 and Motorola 6800. > Looking at my 1971 PDP11/20-15-r20 processor handbook, it's unambiguous > that a byte is 8 bits and is the smallest addressable unit. > > (However, I think the question was really settled in April 1964 > when the IBM 360 was announced. From then on, the 6 bit bytes were > survivors.) > > When did ISO start using "octet"? > > Brian > > > From dave.walden.family at gmail.com Sun Oct 9 13:34:20 2016 From: dave.walden.family at gmail.com (dave.walden.family at gmail.com) Date: Sun, 9 Oct 2016 16:34:20 -0400 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> Message-ID: <12A3E4F3-DAD8-47E2-9246-3EB2327F0680@gmail.com> My memory is that IBM *really pushed* the superiority of 32-bit words and 8-bit bytes as part of bringing out and selling the IBM 360 ca.1964. I suspect that this had non-trivial impact on how computer purchasers throughout the computer world thought about the "correct" computer to buy > From randy at psg.com Sun Oct 9 13:51:23 2016 From: randy at psg.com (Randy Bush) Date: Mon, 10 Oct 2016 05:51:23 +0900 Subject: [ih] "network unix" In-Reply-To: <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> Message-ID: > Wow, people are actually reading this stuff... Thanks to everyone who > pointed out that PDP-8s didn't have 8-bit byte. Tough audience... no. it's just that the digital series is well under the altzheimer's threshold of most of us. i did not come into the dec world until the pdp-8 and -11 and. much later, the sail-10. i was trapped in the ibm lameframe (704/40/94) world, and later the 360s, which had its own rather large minis. the architectures of the 1620 (6-bit bcd word) and 1401 (8-bit bcd with zone punches and a word mark) were much more fun. the 1130 was 16-bit, boring, and short-lived. alas, poor yorick. i knew him, horatio. randy From jack at 3kitty.org Sun Oct 9 14:11:53 2016 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 9 Oct 2016 14:11:53 -0700 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> Message-ID: <20748dd7-383c-1593-71e5-c124f2cc334f@3kitty.org> I also remember "octet" being used in those days. Since "byte" was so imprecise, "octet" was used to mean "8-bit byte". That may seem to have solved the problem. But, like atoms, octets were not the most primitive element of computing (and networking). Any contemporary history buff who is curious about the computing environment in those early days as networking was being created -- I highly recommend Danny Cohen's writeup " ON HOLY WARS AND A PLEA FOR PEACE" in IEN 137 ( https://www.ietf.org/rfc/ien/ien137.txt ) How many bits were in a byte was just the tip of the iceberg annoying us who were trying to get computers to talk with each other back then. It may seem obvious now that most computer makers have disappeared and computing is rather uniform; it wasn't then... /Jack On 10/09/2016 01:27 PM, Scott O. Bradner wrote: > fwiw - I DEC was using ?octet? with the PDP-1s and PDP-4s in the mid 1960s > (for me it was 1966 with a PDP-4) > > Scott > >> On Oct 9, 2016, at 3:56 PM, Brian E Carpenter wrote: >> >> On 10/10/2016 08:20, Scott O. Bradner wrote: >> >> ... >>> PDP-11 were 16 bit words >> >> I think the byte stabilised at 8 bits in my mind because of >> the PDP-11, rapidly followed by the Intel 8080 and Motorola 6800. >> Looking at my 1971 PDP11/20-15-r20 processor handbook, it's unambiguous >> that a byte is 8 bits and is the smallest addressable unit. >> >> (However, I think the question was really settled in April 1964 >> when the IBM 360 was announced. From then on, the 6 bit bytes were >> survivors.) >> >> When did ISO start using "octet"? >> >> Brian >> >> >> > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From jnc at mercury.lcs.mit.edu Sun Oct 9 14:20:47 2016 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 9 Oct 2016 17:20:47 -0400 (EDT) Subject: [ih] bytes [Re: "network unix"] Message-ID: <20161009212047.7336D18C0A3@mercury.lcs.mit.edu> > From: Brian E Carpenter > I think the question was really settled in April 1964 when the IBM 360 > was announced. I too was going to mention the 360. I'm not sure we can elucidate _precisely_ what led to the focus on 8-bit bytes, so questions like 'would the 360 _on its own_ have done it' may be forever unknowable. But I do think the 360 was one of the biggest factors. The other one I'd point to is ASCII. Technically, one only needs 7 bits for ASCII, but 7 is odd (although there's no particular reason one couldn't have odd-length bytes, but it just feels, well, odd), and so I think ASCII was a big driver to 8-bit bytes; it certainly knocked out 6-bit bytes. And probably the power-of-two was an influence, too. > I think the byte stabilised at 8 bits in my mind because of the PDP-11, > rapidly followed by the Intel 8080 and Motorola 6800. The PDP-11 was certainly a factor (I think at one point, before micros appeared, it was the best-selling computer, in terms of numbers, in history). I'm not so sure about the micros - I think they may have 'put the last nail in', but I think they were more of a recognition of reality, than a pusher thereof. > From: Jack Haverty > Wow, people are actually reading this stuff... Hey, you're putting the energy in to write it, the least we can do is read! :-) Noel From winowicki at yahoo.com Sun Oct 9 14:45:46 2016 From: winowicki at yahoo.com (Bill Nowicki) Date: Sun, 9 Oct 2016 21:45:46 +0000 (UTC) Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20161009212047.7336D18C0A3@mercury.lcs.mit.edu> References: <20161009212047.7336D18C0A3@mercury.lcs.mit.edu> Message-ID: <1039759213.966140.1476049547024@mail.yahoo.com> Yes, before eight-bit bytes it really was the wild west. For a kick, some probably remember: https://en.wikipedia.org/wiki/DEC_Radix-50 which squeezed three characters (upper case, digits and only a few punctuation marks) into 16 bits. This is mostly to blame for file names having multiples of three characters in their names. It still shows up with all those web pages that end in ".htm" instead of ".html"! ? We had some long debates on Wikipedia if we should use "octet" everywhere, but that never gained ground. I thought all these old encodings were only historical, until last week we got a letter from our state government (yes, where Sillicon Valley is located) that was totally in upper case. Thanks for the amusements! On Sunday, October 9, 2016 2:31 PM, Noel Chiappa wrote: ? ? > From: Brian E Carpenter ? ? > I think the question was really settled in April 1964 when the IBM 360 ? ? > was announced. I too was going to mention the 360. I'm not sure we can elucidate _precisely_ what led to the focus on 8-bit bytes, so questions like 'would the 360 _on its own_ have done it' may be forever unknowable. But I do think the 360 was one of the biggest factors. The other one I'd point to is ASCII. Technically, one only needs 7 bits for ASCII, but 7 is odd (although there's no particular reason one couldn't have odd-length bytes, but it just feels, well, odd), and so I think ASCII was a big driver to 8-bit bytes; it certainly knocked out 6-bit bytes. And probably the power-of-two was an influence, too. ? ? > I think the byte stabilised at 8 bits in my mind because of the PDP-11, ? ? > rapidly followed by the Intel 8080 and Motorola 6800. The PDP-11 was certainly a factor (I think at one point, before micros appeared, it was the best-selling computer, in terms of numbers, in history). I'm not so sure about the micros - I think they may have 'put the last nail in', but I think they were more of a recognition of reality, than a pusher thereof. ? ? > From: Jack Haverty ? ? > Wow, people are actually reading this stuff... Hey, you're putting the energy in to write it, the least we can do is read! :-) ??? Noel _______ internet-history mailing list internet-history at postel.org http://mailman.postel.org/mailman/listinfo/internet-history Contact list-owner at postel.org for assistance. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhc2 at dcrocker.net Sun Oct 9 15:11:25 2016 From: dhc2 at dcrocker.net (Dave Crocker) Date: Sun, 9 Oct 2016 15:11:25 -0700 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20748dd7-383c-1593-71e5-c124f2cc334f@3kitty.org> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> <20748dd7-383c-1593-71e5-c124f2cc334f@3kitty.org> Message-ID: <0cd9eb50-b0ee-cb91-4a68-aa24c2de5358@dcrocker.net> On 10/9/2016 2:11 PM, Jack Haverty wrote: > It > may seem obvious now that most computer makers have disappeared and > computing is rather uniform; it wasn't then... Some things haven't changed. The size of a bit has stayed fairly constant. So has the number of bits in a bit. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Sun Oct 9 15:12:08 2016 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Mon, 10 Oct 2016 11:12:08 +1300 Subject: [ih] "network unix" In-Reply-To: <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> Message-ID: I don't recall a 6-bit byte notion in the PDP-8. The smallest addressable unit was the 12-bit word, and the primary I/O device was an 8-bit ASR33 (which read into bits 4 through 11 of the 12-bit accumulator, the 1966 manual reminds me). Of course you could squeeze upper case ASCII down to 6 bits and store two characters per word to save core memory; I expect I did that but this was 1969 so I don't quite remember. Unlike some of my cohort, I didn't pad out my dissertation by including source code, so it's long lost. Regards Brian On 10/10/2016 08:57, Jack Haverty wrote: > Wow, people are actually reading this stuff... Thanks to everyone who > pointed out that PDP-8s didn't have 8-bit byte. Tough audience... > > What I meant to say was "PDP-8s had 12-bit words and IIRC some notion of > 6-bit bytes. PDP-11s had 8-bit bytes in 16-bit words" > > Somewhere between brain and fingers my neural network must have dropped > a packet...... > > /Jack > > > On 10/09/2016 12:20 PM, Scott O. Bradner wrote: >> >>> On Oct 9, 2016, at 2:36 PM, Jack Haverty wrote: >>> >>> Hi Noel, >>> >>> You're right, I should have said 32KW....but I wonder how many people >>> today would know what "KW" means? I suspect many would think it >>> referred to the power that our ancient monsters consumed...actually >>> probably not far off for the bigger machines that consumed many square >>> feet of lab space! >>> >>> The other aspect of the "Stone Age" that may not be remembered today is >>> that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit >>> bytes in 16-bit words. >> >> >> PDP-8s were 12 bit words >> >> PDP-1, 7, 9, 15 were 18 bit words >> >> PDP-11 were 16 bit words >> >> PDD 6, 10 were 36 bit words >> >> Scott >> >>> Other machines made different choices. The >>> PDP-10 was agnostic -- the instruction set allowed the programmer to >>> specify whatever byte size they liked. So a "byte" only made sense in >>> the context of a specific machine. >>> >>> Today of course we all know that a byte is 8 bits. Period. Perhaps >>> some historian can figure out exactly when that happened..... >>> >>> Fun times... >>> /Jack >>> >>> On 10/09/2016 10:21 AM, Noel Chiappa wrote: >>>> A few notes/corrections: >>>> >>>>> The /40 design utilized a single address space for instructions and >>>>> data, so everything had to fit in 32KB of memory (yes K, not M or G). >>>> >>>> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >>>> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >>>> each process' kernel stack + other swappable per-process data, so really only >>>> 48KB for all kernel code, data, disk buffers, etc. >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. >> > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From vint at google.com Sun Oct 9 17:17:46 2016 From: vint at google.com (Vint Cerf) Date: Sun, 9 Oct 2016 20:17:46 -0400 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <0cd9eb50-b0ee-cb91-4a68-aa24c2de5358@dcrocker.net> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> <20748dd7-383c-1593-71e5-c124f2cc334f@3kitty.org> <0cd9eb50-b0ee-cb91-4a68-aa24c2de5358@dcrocker.net> Message-ID: well, maybe not, Dave - if you look at some of the qubit parameters in quantum computing that may contain many conventional bits per qubit... :-) v On Sun, Oct 9, 2016 at 6:11 PM, Dave Crocker wrote: > On 10/9/2016 2:11 PM, Jack Haverty wrote: > > It > > may seem obvious now that most computer makers have disappeared and > > computing is rather uniform; it wasn't then... > > > Some things haven't changed. > > The size of a bit has stayed fairly constant. > > So has the number of bits in a bit. > > d/ > > -- > > Dave Crocker > Brandenburg InternetWorking > bbiw.net > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > -- New postal address: Google 1875 Explorer Street, 10th Floor Reston, VA 20190 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhc2 at dcrocker.net Sun Oct 9 17:41:43 2016 From: dhc2 at dcrocker.net (Dave Crocker) Date: Sun, 9 Oct 2016 17:41:43 -0700 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> <7D637D21-2932-4D66-9391-D30A9C2D18DF@sobco.com> <20748dd7-383c-1593-71e5-c124f2cc334f@3kitty.org> <0cd9eb50-b0ee-cb91-4a68-aa24c2de5358@dcrocker.net> Message-ID: On 10/9/2016 5:17 PM, Vint Cerf wrote: > well, maybe not, Dave - if you look at some of the qubit parameters in > quantum computing that may contain many conventional bits per qubit... :-) Yes, well, I'm sorry to say that something along those lines did occur to me before posting, but since the new thing is not simply called a bit.. As well, adding a qualifier to my quip would have been far too verbose. Sort of like this response... d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From randy at psg.com Sun Oct 9 18:25:46 2016 From: randy at psg.com (Randy Bush) Date: Mon, 10 Oct 2016 10:25:46 +0900 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <12A3E4F3-DAD8-47E2-9246-3EB2327F0680@gmail.com> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <6cfa9555-caf4-1b44-108b-a196f9c53bf5@gmail.com> <12A3E4F3-DAD8-47E2-9246-3EB2327F0680@gmail.com> Message-ID: > My memory is that IBM *really pushed* the superiority of 32-bit words > and 8-bit bytes as part of bringing out and selling the IBM 360 > ca.1964. I suspect that this had non-trivial impact on how computer > purchasers throughout the computer world thought about the "correct" > computer to buy yes, if you were in the ibm world, the 360 nailed the addressable byte to the wall, though in ebcdic, and with four-octet ops; also long moves up to 266 bytes. since we're into reporting obscure kink: when you pushed an octet from a 360 though a 2701 serial adapter (four ports and the size of a fridge), over a link, and into a pdp-8, link-8, etc, (or vice versa) the bits arrived in reverse order. i.e. 76543210 became 01234567. i found it easier to do the flipping on the 360 side with the TRanslate op. i could even do ascii to/from ebcdic at the same time if i knew for sure it was a character. randy From jack at 3kitty.org Sun Oct 9 18:31:04 2016 From: jack at 3kitty.org (Jack Haverty) Date: Sun, 9 Oct 2016 18:31:04 -0700 Subject: [ih] "network unix" In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> Message-ID: <5f114833-c93d-dac7-a6b9-1e02fe9ce142@3kitty.org> Opcode 7002. From Wikipedia: 7002 ? BSW ? Byte Swap 6-bit "bytes" (PDP 8/e and up) /Jack On 10/09/2016 03:12 PM, Brian E Carpenter wrote: > I don't recall a 6-bit byte notion in the PDP-8. The smallest addressable unit > was the 12-bit word, and the primary I/O device was an 8-bit ASR33 (which read > into bits 4 through 11 of the 12-bit accumulator, the 1966 manual reminds me). > Of course you could squeeze upper case ASCII down to 6 bits and store two > characters per word to save core memory; I expect I did that but this was 1969 > so I don't quite remember. Unlike some of my cohort, I didn't pad out my > dissertation by including source code, so it's long lost. > > Regards > Brian > > On 10/10/2016 08:57, Jack Haverty wrote: >> Wow, people are actually reading this stuff... Thanks to everyone who >> pointed out that PDP-8s didn't have 8-bit byte. Tough audience... >> >> What I meant to say was "PDP-8s had 12-bit words and IIRC some notion of >> 6-bit bytes. PDP-11s had 8-bit bytes in 16-bit words" >> >> Somewhere between brain and fingers my neural network must have dropped >> a packet...... >> >> /Jack >> >> >> On 10/09/2016 12:20 PM, Scott O. Bradner wrote: >>> >>>> On Oct 9, 2016, at 2:36 PM, Jack Haverty wrote: >>>> >>>> Hi Noel, >>>> >>>> You're right, I should have said 32KW....but I wonder how many people >>>> today would know what "KW" means? I suspect many would think it >>>> referred to the power that our ancient monsters consumed...actually >>>> probably not far off for the bigger machines that consumed many square >>>> feet of lab space! >>>> >>>> The other aspect of the "Stone Age" that may not be remembered today is >>>> that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit >>>> bytes in 16-bit words. >>> >>> >>> PDP-8s were 12 bit words >>> >>> PDP-1, 7, 9, 15 were 18 bit words >>> >>> PDP-11 were 16 bit words >>> >>> PDD 6, 10 were 36 bit words >>> >>> Scott >>> >>>> Other machines made different choices. The >>>> PDP-10 was agnostic -- the instruction set allowed the programmer to >>>> specify whatever byte size they liked. So a "byte" only made sense in >>>> the context of a specific machine. >>>> >>>> Today of course we all know that a byte is 8 bits. Period. Perhaps >>>> some historian can figure out exactly when that happened..... >>>> >>>> Fun times... >>>> /Jack >>>> >>>> On 10/09/2016 10:21 AM, Noel Chiappa wrote: >>>>> A few notes/corrections: >>>>> >>>>>> The /40 design utilized a single address space for instructions and >>>>>> data, so everything had to fit in 32KB of memory (yes K, not M or G). >>>>> >>>>> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >>>>> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >>>>> each process' kernel stack + other swappable per-process data, so really only >>>>> 48KB for all kernel code, data, disk buffers, etc. >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>> >>> >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >>> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. >> From lpress at csudh.edu Mon Oct 10 07:08:19 2016 From: lpress at csudh.edu (Larry Press) Date: Mon, 10 Oct 2016 14:08:19 +0000 Subject: [ih] "network unix" In-Reply-To: References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org>, Message-ID: <1476108502079.26885@csudh.edu> > 1401 (8-bit bcd with zone punches and a word mark) It's been (a long) while, but I think the 1401 had 6-bit characters plus a parity (odd) bit and a word mark to signify the end of a data field or instruction. From randy at psg.com Mon Oct 10 07:10:41 2016 From: randy at psg.com (Randy Bush) Date: Mon, 10 Oct 2016 23:10:41 +0900 Subject: [ih] "network unix" In-Reply-To: <1476108502079.26885@csudh.edu> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> Message-ID: >> 1401 (8-bit bcd with zone punches and a word mark) > > It's been (a long) while, but I think the 1401 had 6-bit characters > plus a parity (odd) bit and a word mark to signify the end of a data > field or instruction. parity A B 8 4 2 1 word mark From pnr at planet.nl Mon Oct 10 07:45:22 2016 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 10 Oct 2016 16:45:22 +0200 Subject: [ih] Early Unix networking Message-ID: Gentlemen, Following my earlier request I have received a lot of information off list. I think it might be useful to summarize that on this list. For introduction: I'm a retro-computing hobbyist who's interested in the origins of networking in Unix (other than uucp). From 1983 onwards that history is well-kown and documented in several books; my interest is in what came before. Warren Toomey of the Unix Heritage Society has done a wonderful job of preserving the development of the early Unix source code (see http://minnie.tuhs.org/cgi-bin/utree.pl). However, in that archive networking appears as a big blob in 4.1c BSD and I am trying to dig beyond that and figure out how the code developed. I'm a bit peculiar in that I like to work from the actual source code and trying to get it running again (in emulation, or by porting) and experience the design choices made first hand. Hence my interest is sometimes about arcane detail and sometimes about the big picture. Here is where I am at, working backwards from 4.1c BSD. Questions are inline, marked with "=>". - 4.1a BSD (March '82). This is essentially 4.1BSD plus Rob Gurwitz' TCP/IP stack plus a precursor to the Joy/Leffler socket API. In backup tape in the CSRG archives was corrupted and the kernel files could not be read. However, I'm hopeful that 4.1a can be reconstructed from the SCCS files if need be. => does anybody know of a preserved good copy of the 4.1a distribution tape? - 4.1 BBN (November '81). This is the beta code that BBN sent to CSRG. It has Rob Gurwitz' TCP/IP stack as described in ien168 (https://www.rfc-editor.org/ien/ien168.txt) combined with an API and user land programs (telnet, etc.) that seem to derive from Network Unix as developed at the University of Illinois. The source to this system was preserved in the CSRG archives. Making this TCP/IP stack run with a loopback driver on X64 and on a 16-bit mini was surprisingly clean and easy. - Network Unix (May '76). This is the NCP Unix system as described in RFC681 and here https://archive.org/details/networkunixsyste243kell From the authors and the Chesson paper I know that the kernel did not change much between '76 and '78. A tape from early 1979 has been located that might contain the source, but it has not been read so far. This seems to be the eldest Arpanet enabled Unix, and also the eldest networked Unix -- predating uucp. => does anybody know of a preserved copy of this system? It would seem to have been in fairly wide use. There are three code bases that may have influenced the design decisions of later TCP/IP stacks: - DTI Unix (March '79). This is mentioned in IEN98 (https://www.rfc-editor.org/ien/ien98.txt). I have questions outstanding off list to understand what this was. Perhaps it derives from Network Unix, with NCP ripped out and TCP/IP put in. - BBN Unix / Wingfield (March '79). Also mentioned in IEN98. This is a user land TCP/IP implementation that runs on top of a kernel enriched with Rand ports, await/capac synchronization and a IMP device driver. The source is available as a scan at the Internet Museum at UCLA (http://digital2.library.ucla.edu/viewItem.do?ark=21198/zz002gvzqg). It may have been a test bed for some security enhancements, not sure yet how this worked. Over the coming months I will try to get this OCR'ed and running again. - BBN Unix / Haverty (September '77 - March '79). See Jack's post yesterday for detail. As I understand the broad design is similar to the later Wingfield Unix, but the TCP/IP stack is written in assembly. As he mentioned, a printout of the source code has survived in Jack's basement. If I understood Jack correctly, the main lesson from this work was that if buffers or code of the TCP/IP stack can be swapped out performance becomes terrible. I find the ports system and the await/capac system calls interesting, as they may have influenced the later design decisions on the BSD socket API. So far I have no source for these kernel extensions, but there is detailed documentation, so they could be recreated I think (for details see http://www.dtic.mil/dtic/tr/fulltext/u2/a044201.pdf and BBN Quarterly Report 3824). => Did the kernel source code for these extensions survive? Paul From bernie at fantasyfarm.com Mon Oct 10 07:53:31 2016 From: bernie at fantasyfarm.com (Bernie Cosell) Date: Mon, 10 Oct 2016 10:53:31 -0400 Subject: [ih] "network unix" References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu>, , <1476108502079.26885@csudh.edu> Message-ID: <57FBAB6B.22777.22C8B88B@bernie.fantasyfarm.com> One amusing entry in this is the MBB, which had 20-bit words in two ten-bit bytes. Perhaps someone on this list is familiar with the hardware that BBN developed and how things worked on it [I think it had a special mode where it transferred bytes from the internet into the low-eight bytes of consecutive words, and there was some magic I don't remember that allowed us to treat them as a 16-bit number when necessary. I don't remember much about it as the IMP replacement. I worked on the project to bring Unix up on it. Carl Howe wrote the microcode, Al Nemeth was poring over the Unix kernel figuring out what needed tweaking, and I wrote the compiler. AT first a cross compiler from our PDP-10 to the MBB. Handling constants and bit masks was very,er, uh, interesting, but we compiled the "standard" Unix kernal and utilities and between the three of us had to make it work. It was intended as an inexpensive *temporarly* replacement for the 11/70, but was _so_ fast [Al did an amazing job of finding ways that the compiler could optimize the kernel code, Carl added new instructions and such as required and I got the thing to compile using the new tricks] that it survived for YEARS. The magic day was when I compiled the compiler with itself and put it on the MBB. and then had the system compile its *OWN* kernel. AT that point the MBB was a self-sufficient Unix system and someone else will have to tell the story -- I moved on to other projects. But it was a quite potent and [at BBN] common "network Unix" node. /Bernie\ -- Bernie Cosell Fantasy Farm Fibers mailto:bernie at fantasyfarm.com Pearisburg, VA --> Too many people, too few sheep <-- From jnc at mercury.lcs.mit.edu Mon Oct 10 08:26:34 2016 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 10 Oct 2016 11:26:34 -0400 (EDT) Subject: [ih] Early Unix networking Message-ID: <20161010152635.D5DD018C09B@mercury.lcs.mit.edu> > From: Paul Ruizendaa > Here is where I am at, working backwards from 4.1c BSD. Anything Berkeley is pretty late in the game, in terms of early TCP/IP on Unix. > DTI Unix (March '79). ... I have questions outstanding off list to > understand what this was. John Day can I think answer questions about this; if memory serves, I think he did it? I remember looking at this code (and saying bad things about it an at early Internet/TCP-IP working group meeting), but it was so long ago I can barely remember anything about it. > BBN Unix / Haverty (September '77 - March '79). ... the TCP/IP stack is > written in [PDP-11] assembly. That's because it's a port of Jim Mathis' MOS-based code. (Which I _might_ have a copy of, somewhere. I definitely do have the MOS source - that was the OS the C Gateway, later the Proteon router products, was based on.) We looked at both of these at MIT, but wound up doing our own; I no longer have any memory of why (probably mostly NIH, but I suspect we also wanted something pretty high performance [since our networks were high-speed LANs, not ARPANET], and probably also wanted to explore some of our own ideas on how to structure network code - this was around the time Dave Clark was doing upcalls, etc. I have a couple of dump tapes that should have all the MIT Unix TCP/IP source on it (and perhaps more besides, e.g. the MOS TCP/IP that Jack ported), but I've had trouble getting them read - unreadable spots on the first tape we tried. Noel From reed at reedmedia.net Mon Oct 10 09:33:04 2016 From: reed at reedmedia.net (Jeremy C. Reed) Date: Mon, 10 Oct 2016 11:33:04 -0500 (CDT) Subject: [ih] Early Unix networking In-Reply-To: References: Message-ID: On Mon, 10 Oct 2016, Paul Ruizendaal wrote: > source code (see http://minnie.tuhs.org/cgi-bin/utree.pl). However, in > that archive networking appears as a big blob in 4.1c BSD and I am > trying to dig beyond that and figure out how the code developed. I have slowly been authoring a book about this from the Berkeley Unix perspective. I started over six years ago and have done interviews with over 80 participants in the BSD story. > => does anybody know of a preserved good copy of the 4.1a distribution > tape? It is included in the CSRG set (disk 1). https://www.mckusick.com/csrg/ (Now available in some git and subversion repos online too.) But it doesn't include the sys nor IP networking code, so doesn't help much. Nevertheless the disk1's 4.1c.1 code does have SCCS files for the sys networking code from October 1981 and later. See sys/netinet/SCCS/ and sys/vaxif/SCCS/s.if_en.c for example. (Again note these SCCS files is separate from the disk4 sccs code. I didn't look recently but I recall some of this history and some of the files in the SCCS files is different from the disk4.) The SCCS history references ../bbnnet/ code. I thknk the files were just renamed, for example ../bbnnet/fsm.h is tcp_fsm.h (which does have SCCS history in late October 1981). So I think using SCCS and renaming files you can reconstruct the original VAX implementation from Gurwitz. The CSRG SCCS archives (disk4) for sys/deprecated/bbnnet/SCCS/ appears to be later code but may be a good reference too. Another awesome resource for you are some of the Combined Quarterly Technical Reports from BBN. They discuss what was proposed and what was delivered. The ones I used were: #19 for aug. 1 1980 to Oct. 31, 1980 #20 for nov 1, 1980 to jan 31, 1981 #23 aug 1 - 10/31 1981 (says "At this writing (early December)" and stamped dec. 22 1981) #24 for nov. 1, 1981 to jan 31,1982 #27 for august 1 - october 31, 1982 $ grep ^@ /home/reed/book//bsd-history/svn-bsd-history/book.bib 234 (my list of citations in the book) $ grep -i CITE: /home/reed/book/bsd-history/svn-bsd-history/*tex | sort -u | wc -l 233 (my citations left to add to my bibliography, so I have hundreds of sources.) From dhc2 at dcrocker.net Mon Oct 10 10:26:36 2016 From: dhc2 at dcrocker.net (Dave Crocker) Date: Mon, 10 Oct 2016 10:26:36 -0700 Subject: [ih] Early Unix networking In-Reply-To: References: Message-ID: <269ee892-e3c3-83f7-3110-7a86ce3c9caa@dcrocker.net> On 10/10/2016 7:45 AM, Paul Ruizendaal wrote: > For introduction: I'm a retro-computing hobbyist who's interested in the origins of networking in Unix (other than uucp). Paul, Your summary reads well. And it defines a concrete and (IMO) entirely reasonable scope for what you probably mean with the above statement of intent. However you might be interested in a slightly broader scope. I suggest you consider email-based unix network access. Although a much more constrained channel than a packet-level interface, this was a legitimate means of gaining Arpanet (and then Internet) access for a number of years, and greatly expanded the community of participants.[1] To the extent you are interested in exploring this, then you'd have to include uucp, but also CSNet's Phonenet [2],[3]. d/ [1] To Be "On" the Internet; RFC 1775 [2] HISTORY AND OVERVIEW OF CSNET; https://www.internetsociety.org/sites/default/files/pdf/Comm83.pdf [3] MMDF; https://en.wikipedia.org/wiki/MMDF - -- Dave Crocker Brandenburg InternetWorking bbiw.net From johnl at iecc.com Mon Oct 10 11:52:49 2016 From: johnl at iecc.com (John Levine) Date: 10 Oct 2016 18:52:49 -0000 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20161009212047.7336D18C0A3@mercury.lcs.mit.edu> Message-ID: <20161010185249.38601.qmail@ary.lan> > > I think the question was really settled in April 1964 when the IBM 360 > > was announced. > >I too was going to mention the 360. I'm not sure we can elucidate _precisely_ >what led to the focus on 8-bit bytes, so questions like 'would the 360 _on >its own_ have done it' may be forever unknowable. But I do think the 360 was >one of the biggest factors. Well, yeah. In the 1970s I think IBM was still as big as all the other computer makers combined. >The other one I'd point to is ASCII. Technically, one only needs 7 bits for >ASCII, but 7 is odd (although there's no particular reason one couldn't have >odd-length bytes, but it just feels, well, odd), and so I think ASCII was a >big driver to 8-bit bytes; it certainly knocked out 6-bit bytes. So say Amdahl et al in their article on the design of S/360. https://pdfs.semanticscholar.org/3b9b/2cc9c0ce79aa3995a9b65f4a05c57bcb4efc.pdf By 1970 it was obvious that the 360's approach of byte addressed words worked a lot better than the various byte addressing kludges on the 36 bit mainfames (ILDB et al on the PDP-6/10 and some amazingly complex address modes on the GE 635.) If you're going to do that, making the size a power of 2 rather than a multiple of 6 greatly simplified the logic. 4 bits was too little, 16 was at the time way too big, so 8 bits it was. The biggest complaint about the 360's 32 bit words was floating point precision, but that was at least as much due to the well known design mistakes in 360 hex floating point as the word size. > > I think the byte stabilised at 8 bits in my mind because of the PDP-11, > > rapidly followed by the Intel 8080 and Motorola 6800. In Bell's article on the design of the PDP-11 he just says the the character size is 8 bits, which suggests that at the time that was obvious. He mentions the 360 which probably made the choice of byte addressing obvious, too. There was a competing 16 bit word addressed design by the designer of the PDP-8, which after DEC rejected it became the DG Nova. DG did OK but I think it's fair to say that the PDP-11 and Vax did a lot better than the Nova and the Eclipse. R's, John From johnl at iecc.com Mon Oct 10 11:59:28 2016 From: johnl at iecc.com (John Levine) Date: 10 Oct 2016 18:59:28 -0000 Subject: [ih] bits, was bytes [Re: "network unix"] In-Reply-To: <0cd9eb50-b0ee-cb91-4a68-aa24c2de5358@dcrocker.net> Message-ID: <20161010185928.38636.qmail@ary.lan> >The size of a bit has stayed fairly constant. >So has the number of bits in a bit. Depends how far back you go. According to my handy IBM 650 manual, each digit was represented in bi-quinary, where there was one group with five, uh, bit like things representing 0-4 and another group with two things representing 0 and 5. Error checking logic checked that exactly one thing in each group was on. So they were sort of like bits, but not really. Helpfully, John From johnl at iecc.com Mon Oct 10 12:03:19 2016 From: johnl at iecc.com (John Levine) Date: 10 Oct 2016 19:03:19 -0000 Subject: [ih] "network unix" In-Reply-To: <5f114833-c93d-dac7-a6b9-1e02fe9ce142@3kitty.org> Message-ID: <20161010190319.38673.qmail@ary.lan> In article <5f114833-c93d-dac7-a6b9-1e02fe9ce142 at 3kitty.org> you write: >Opcode 7002. From Wikipedia: > >7002 ? BSW ? Byte Swap 6-bit "bytes" (PDP 8/e and up) The 8/e was very late in the PDP-8 series. Back when I was programming an '8 we either stored one ASCII character per word, or three characters in two words. I would guess the byte swap was handy for some of the later disc or dectape operating systems that used pdp-10 style sixbit file names. R's, John From dhc2 at dcrocker.net Mon Oct 10 12:09:34 2016 From: dhc2 at dcrocker.net (Dave Crocker) Date: Mon, 10 Oct 2016 12:09:34 -0700 Subject: [ih] bits, was bytes [Re: "network unix"] In-Reply-To: <20161010185928.38636.qmail@ary.lan> References: <20161010185928.38636.qmail@ary.lan> Message-ID: On 10/10/2016 11:59 AM, John Levine wrote: > So they were sort of like bits, but not really. as in, they were a kind of byte, or more precisely digits. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From brian.e.carpenter at gmail.com Mon Oct 10 12:21:36 2016 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 11 Oct 2016 08:21:36 +1300 Subject: [ih] "network unix" In-Reply-To: <5f114833-c93d-dac7-a6b9-1e02fe9ce142@3kitty.org> References: <20161009172159.55A3518C0A1@mercury.lcs.mit.edu> <14946e8d-03d0-0085-bb2e-2b45a2e567cc@3kitty.org> <5f114833-c93d-dac7-a6b9-1e02fe9ce142@3kitty.org> Message-ID: PDP-8/e? Too modern for me! That was definitely a retro-fit to the architecture. Regards Brian On 10/10/2016 14:31, Jack Haverty wrote: > Opcode 7002. From Wikipedia: > > 7002 ? BSW ? Byte Swap 6-bit "bytes" (PDP 8/e and up) > > /Jack > > On 10/09/2016 03:12 PM, Brian E Carpenter wrote: >> I don't recall a 6-bit byte notion in the PDP-8. The smallest addressable unit >> was the 12-bit word, and the primary I/O device was an 8-bit ASR33 (which read >> into bits 4 through 11 of the 12-bit accumulator, the 1966 manual reminds me). >> Of course you could squeeze upper case ASCII down to 6 bits and store two >> characters per word to save core memory; I expect I did that but this was 1969 >> so I don't quite remember. Unlike some of my cohort, I didn't pad out my >> dissertation by including source code, so it's long lost. >> >> Regards >> Brian >> >> On 10/10/2016 08:57, Jack Haverty wrote: >>> Wow, people are actually reading this stuff... Thanks to everyone who >>> pointed out that PDP-8s didn't have 8-bit byte. Tough audience... >>> >>> What I meant to say was "PDP-8s had 12-bit words and IIRC some notion of >>> 6-bit bytes. PDP-11s had 8-bit bytes in 16-bit words" >>> >>> Somewhere between brain and fingers my neural network must have dropped >>> a packet...... >>> >>> /Jack >>> >>> >>> On 10/09/2016 12:20 PM, Scott O. Bradner wrote: >>>> >>>>> On Oct 9, 2016, at 2:36 PM, Jack Haverty wrote: >>>>> >>>>> Hi Noel, >>>>> >>>>> You're right, I should have said 32KW....but I wonder how many people >>>>> today would know what "KW" means? I suspect many would think it >>>>> referred to the power that our ancient monsters consumed...actually >>>>> probably not far off for the bigger machines that consumed many square >>>>> feet of lab space! >>>>> >>>>> The other aspect of the "Stone Age" that may not be remembered today is >>>>> that a "byte" was not yet very well-defined back then. PDP-8s had 8-bit >>>>> bytes in 16-bit words. >>>> >>>> >>>> PDP-8s were 12 bit words >>>> >>>> PDP-1, 7, 9, 15 were 18 bit words >>>> >>>> PDP-11 were 16 bit words >>>> >>>> PDD 6, 10 were 36 bit words >>>> >>>> Scott >>>> >>>>> Other machines made different choices. The >>>>> PDP-10 was agnostic -- the instruction set allowed the programmer to >>>>> specify whatever byte size they liked. So a "byte" only made sense in >>>>> the context of a specific machine. >>>>> >>>>> Today of course we all know that a byte is 8 bits. Period. Perhaps >>>>> some historian can figure out exactly when that happened..... >>>>> >>>>> Fun times... >>>>> /Jack >>>>> >>>>> On 10/09/2016 10:21 AM, Noel Chiappa wrote: >>>>>> A few notes/corrections: >>>>>> >>>>>>> The /40 design utilized a single address space for instructions and >>>>>>> data, so everything had to fit in 32KB of memory (yes K, not M or G). >>>>>> >>>>>> Err, that was 32KW, i.e. 64KB. But 8KB was the I/O page (device registers), so >>>>>> only 56KB of memory - sort of, because V6 Unix used one 8KB page to map in >>>>>> each process' kernel stack + other swappable per-process data, so really only >>>>>> 48KB for all kernel code, data, disk buffers, etc. >>>>> _______ >>>>> internet-history mailing list >>>>> internet-history at postel.org >>>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>>> Contact list-owner at postel.org for assistance. >>>> >>>> >>>> _______ >>>> internet-history mailing list >>>> internet-history at postel.org >>>> http://mailman.postel.org/mailman/listinfo/internet-history >>>> Contact list-owner at postel.org for assistance. >>>> >>> _______ >>> internet-history mailing list >>> internet-history at postel.org >>> http://mailman.postel.org/mailman/listinfo/internet-history >>> Contact list-owner at postel.org for assistance. >>> > From jnc at mercury.lcs.mit.edu Mon Oct 10 12:32:14 2016 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Mon, 10 Oct 2016 15:32:14 -0400 (EDT) Subject: [ih] bytes [Re: "network unix"] Message-ID: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> > From: "John Levine" > There was a competing 16 bit word addressed design by the designer of > the PDP-8, which after DEC rejected it became the DG Nova. I hear this repeated a lot, but I'm not sure it's accurate. That competing design has surfaced (Google "PDP-X"), and it's not very much like the Nova. Noel From jack at 3kitty.org Mon Oct 10 12:40:15 2016 From: jack at 3kitty.org (Jack Haverty) Date: Mon, 10 Oct 2016 12:40:15 -0700 Subject: [ih] bits, was bytes [Re: "network unix"] In-Reply-To: <20161010185928.38636.qmail@ary.lan> References: <20161010185928.38636.qmail@ary.lan> Message-ID: Back in the 60s, as a student project in "Digital Systems Lab" at MIT, I built a very primitive "computer" using ternary logic. Each bit could have 3 values: +1, 0, -1 which reflected direction, or absence, of current flow in the transistors involved. Seemed like a good idea at the time. Fortunately the idea lost favor before we had to figure out how to network computers....what a mess that would have been. Google "ternary computer". >From 1958 (a ternary computer in Russia!) through today's qubits...a bit is not always a bit! Of course, if you consider "bit" to be a shortened term for "binary digit", then those other non-binary things were "bit like things" with no specific name that I recall. Perhaps we would have had tri-valued "ternits"? And to mirror the "nibbles" of half-bytes, we would have had .... "tribbles"! Star Trek was very prescient.... /Jack On 10/10/2016 11:59 AM, John Levine wrote: >> The size of a bit has stayed fairly constant. >> So has the number of bits in a bit. > > Depends how far back you go. According to my handy IBM 650 manual, > each digit was represented in bi-quinary, where there was one group > with five, uh, bit like things representing 0-4 and another group with > two things representing 0 and 5. Error checking logic checked that > exactly one thing in each group was on. > > So they were sort of like bits, but not really. > > Helpfully, > John > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > From pnr at planet.nl Mon Oct 10 13:22:27 2016 From: pnr at planet.nl (Paul Ruizendaal) Date: Mon, 10 Oct 2016 22:22:27 +0200 Subject: [ih] Early Unix networking In-Reply-To: References: Message-ID: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> On 10 Oct 2016, at 18:33 , Jeremy C. Reed wrote: > I have slowly been authoring a book about this from the Berkeley Unix > perspective. That is welcome news. I'm sure I will find your book a very good read once it is ready! It would seem that your and my focus are mostly complementary. Networking in BSD is 1982 and later (respectfully ignoring uucp and berknet), my interest is 1982 and before. We overlap for 1982 :^) >> => does anybody know of a preserved good copy of the 4.1a distribution >> tape? > > It is included in the CSRG set (disk 1). > https://www.mckusick.com/csrg/ > (Now available in some git and subversion repos online too.) > But it doesn't include the sys nor IP networking code, so doesn't help > much. Yes, that partial directory tree is what was Kirk McKusick could rescue from the damaged tape. > Nevertheless the disk1's 4.1c.1 code does have SCCS files for the sys > networking code from October 1981 and later. See sys/netinet/SCCS/ and > sys/vaxif/SCCS/s.if_en.c for example. (Again note these SCCS files is > separate from the disk4 sccs code. I didn't look recently but I recall > some of this history and some of the files in the SCCS files is > different from the disk4.) Ah, didn't know that. I looked at the SCCS included on disk 4 only. > The SCCS history references ../bbnnet/ code. I thknk the files were just > renamed, for example ../bbnnet/fsm.h is tcp_fsm.h (which does have SCCS > history in late October 1981). Partly so. I have a few snapshots of the code Gurwitz sent to CSRG during 1981. This code is in line with the design proposal in IEN168. My last snapshot is from January 1982, two months before the release of 4.1a, if I'm not mistaken. In those two months the API was changed from the Network-Unix-like original to a primordial sockets API. There was a build switch to build with either the network stack in "bbnnet" or with the network stack in "inet". I think the latter was alpha code at the time 4.1a was "released". According to BBN Quarterly #28, late in 1982 Gurwitz changed the design from being run on a second kernel thread to being driven by software interrupts. The code appears to have been further developed and maintained by Gurwitz and Partridge as late as 1984. It is this evolved version that appears in the "deprecated" subdirectory in October 1985. Unfortunately, all the development between March 1982 and October 1985 on the "bbnnet" code is not covered by SCCS and also not included on the 4.1c and 4.2 BSD distribution tapes. > So I think using SCCS and renaming files you can reconstruct the > original VAX implementation from Gurwitz. Yes, making some guesses about the integration of the early Gurwitz code with the early sockets API that should be a doable effort. However, finding the real code would be even better. Paul From dhc2 at dcrocker.net Mon Oct 10 13:32:19 2016 From: dhc2 at dcrocker.net (Dave Crocker) Date: Mon, 10 Oct 2016 13:32:19 -0700 Subject: [ih] bits, was bytes [Re: "network unix"] In-Reply-To: References: <20161010185928.38636.qmail@ary.lan> Message-ID: On 10/10/2016 12:40 PM, Jack Haverty wrote: > Seemed like a good idea at the time. Fortunately the idea lost favor > before we had to figure out how to network computers....what a mess that > would have been. On the other hand, this might be just the breakthrough we need for nextgen voting machines. Yes, No, Maybe. d/ -- Dave Crocker Brandenburg InternetWorking bbiw.net From johnl at iecc.com Mon Oct 10 13:45:29 2016 From: johnl at iecc.com (John R. Levine) Date: 10 Oct 2016 16:45:29 -0400 Subject: [ih] words and bytes [Re: "network unix"] In-Reply-To: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> Message-ID: > > There was a competing 16 bit word addressed design by the designer of > > the PDP-8, which after DEC rejected it became the DG Nova. > > I hear this repeated a lot, but I'm not sure it's accurate. That competing > design has surfaced (Google "PDP-X"), and it's not very much like the Nova. Huh, I'd missed that. You're right, it was more like a stripped down PDP-10 than the Nova. But it's definitely true that Ed DeCastro who designed the PDP-8 and I believe the PDP-X left to form Data General. Now I'm thinking about the eventual fate of DEC and DG, both clobbered by PCs. DEC came out with single chip package PDP-8 and J-11, and DG with single chip Micronova, but they were both too little too late. I guess the PDP-11 was more influential, since the x86 series uses the 11's (at the time) unsual little-endian byte addressing. Regards, John Levine, johnl at iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. https://jl.ly From brian.e.carpenter at gmail.com Mon Oct 10 14:50:58 2016 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Tue, 11 Oct 2016 10:50:58 +1300 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> Message-ID: <6a54bee1-c69f-74b5-2851-bf813eb16c22@gmail.com> On 11/10/2016 08:32, Noel Chiappa wrote: > > From: "John Levine" > > > There was a competing 16 bit word addressed design by the designer of > > the PDP-8, which after DEC rejected it became the DG Nova. > > I hear this repeated a lot, but I'm not sure it's accurate. That competing > design has surfaced (Google "PDP-X"), and it's not very much like the Nova. I jumped from programming a PDP-8 to programming an Imlac PDS-1, which struck me at the time as being remarkably like a 16-bit PDP-8. The core was word-addressed. There was a separate graphics processor which also took 16-bit instructions, but vectors were defined in 8-bit bytes. There's no resemblance to the PDP-X description. I believe the Imlac founders had jumped ship from DEC in 1968. (The first protocol I designed and implemented was for booting and driving PDS-1s from an IBM 1800, in 1971, at CERN.) Brian From jeanjour at comcast.net Mon Oct 10 23:55:02 2016 From: jeanjour at comcast.net (John Day) Date: Tue, 11 Oct 2016 02:55:02 -0400 Subject: [ih] words and bytes [Re: "network unix"] In-Reply-To: References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> Message-ID: Yea, I have always wondered about that. The minicomputer companies had ?done it? to IBM you would have thought they would be looking over their shoulders. But I guess corporate culture was just too strong and they couldn?t shift to a consumer model. > On Oct 10, 2016, at 16:45, John R. Levine wrote: > >>> There was a competing 16 bit word addressed design by the designer of >>> the PDP-8, which after DEC rejected it became the DG Nova. >> >> I hear this repeated a lot, but I'm not sure it's accurate. That competing >> design has surfaced (Google "PDP-X"), and it's not very much like the Nova. > > Huh, I'd missed that. You're right, it was more like a stripped down > PDP-10 than the Nova. But it's definitely true that Ed DeCastro who > designed the PDP-8 and I believe the PDP-X left to form Data General. > > Now I'm thinking about the eventual fate of DEC and DG, both clobbered by > PCs. DEC came out with single chip package PDP-8 and J-11, and DG with > single chip Micronova, but they were both too little too late. > > I guess the PDP-11 was more influential, since the x86 series uses the > 11's (at the time) unsual little-endian byte addressing. > > Regards, > John Levine, johnl at iecc.com, Primary Perpetrator of "The Internet for Dummies", > Please consider the environment before reading this e-mail. https://jl.ly > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From dot at dotat.at Tue Oct 11 01:35:54 2016 From: dot at dotat.at (Tony Finch) Date: Tue, 11 Oct 2016 09:35:54 +0100 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20161010185249.38601.qmail@ary.lan> References: <20161010185249.38601.qmail@ary.lan> Message-ID: John Levine wrote: > The biggest complaint about the 360's 32 bit words was floating point > precision, but that was at least as much due to the well known design > mistakes in 360 hex floating point as the word size. I was wondering what those mistakes were, and I found this comp.arch message from a certain John Levine... https://groups.google.com/forum/#!topic/comp.arch/m7P2QFqayuo Tony. -- f.anthony.n.finch http://dotat.at/ - I xn--zr8h punycode Viking, North Utsire, North South Utsire: Variable 3 or 4, becoming southeasterly 5 at times. Slight, occasionally moderate. Fair. Good. From craig at tereschau.net Tue Oct 11 04:46:45 2016 From: craig at tereschau.net (Craig Partridge) Date: Tue, 11 Oct 2016 07:46:45 -0400 Subject: [ih] Early Unix networking In-Reply-To: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> References: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> Message-ID: On Mon, Oct 10, 2016 at 4:22 PM, Paul Ruizendaal wrote: > > > According to BBN Quarterly #28, late in 1982 Gurwitz changed the design > from being run on a second kernel thread to being driven by software > interrupts. The code appears to have been further developed and maintained > by Gurwitz and Partridge as late as 1984. It is this evolved version that > appears in the "deprecated" subdirectory in October 1985. Unfortunately, > all the development between March 1982 and October 1985 on the "bbnnet" > code is not covered by SCCS and also not included on the 4.1c and 4.2 BSD > distribution tapes. > > As I recall, the BBN code history is more complex post 1982. It went through several hands. Rob Gurwitz moved from being programmer to project manager to, I think, group manager between something like 1981 and 1985. By sometime in late 1982, Rob was the manager of the TCP/IP effort at BBN and Dennis Rockwell, who had previously been a lead programmer in a group at Duke doing UUCP/Netnews (I think that's right), was the TCP/IP programmer. Bob Walsh took over the code, still reporting to Gurwitz, in summer 1983. By fall 1983, BBN found it needed someone to work with the Joy code in 4.1c and hired me and taught me TCP/IP programming with Joy's code (rather than the BBN code). Sometime in 1985, I ended up helping Bob Walsh a little bit in the 4.2 BBN TCP/IP code release, which was the BBN code rewritten to use the socket interface rather /dev/tcp. Sometime soon after Bob moved on to other projects and Karen Lam was hired to work on the BBN TCP/IP (which DARPA still funded). Then Rob Gurwitz moved on and I was handed the project and Karen reported to me and had moved to adding some features to the Joy BSD code (I don't recall what). Then Karen left and David Waitzman replaced her and David worked on putting multicast into the Joy BSD code with Steve Deering of Stanford. After that, c. 1991?, the DARPA funding finally ended. Many details are likely not quite right here -- doing this from memory. Two side notes: * sometime during this period I recall seeing an insightful note from Gurwitz about limitations of both the Joy and BBN TCP implementations in Unix. I'm hoping it was sent to DARPA and survives in a file somewhere. The paragraph I remember best is a comment on the limitations of mbufs (which apparently Gurwitz devised), which seemed prophetic (which is why I remember it), when people sought to reduce the memory overheads during the 1990s. * the Joy TCP was apparently a rewrite of the BBN TCP (vs. a from scratch implementation). As late as 1989, Berkeley would sometimes defend bugs in the Joy TCP by observing they'd originated in the BBN TCP. Thanks! Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. For Raytheon business, please email: craig at bbn.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sob at sobco.com Tue Oct 11 06:44:24 2016 From: sob at sobco.com (Scott Bradner) Date: Tue, 11 Oct 2016 09:44:24 -0400 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> Message-ID: <83A454B9-14A9-4078-A103-940C9A48A7EE@sobco.com> this is the story I remember at the time -almost all of the DG founders were all ex-DEC people particularly Ed de Castro who ran the PDP-8 stuff for DEC - the Nova was basically a glorified PDP-8 Scott > On Oct 10, 2016, at 3:32 PM, Noel Chiappa wrote: > >> From: "John Levine" > >> There was a competing 16 bit word addressed design by the designer of >> the PDP-8, which after DEC rejected it became the DG Nova. > From jeanjour at comcast.net Tue Oct 11 07:18:57 2016 From: jeanjour at comcast.net (John Day) Date: Tue, 11 Oct 2016 10:18:57 -0400 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <83A454B9-14A9-4078-A103-940C9A48A7EE@sobco.com> References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> <83A454B9-14A9-4078-A103-940C9A48A7EE@sobco.com> Message-ID: <5048A395-1428-4CF3-89CB-100A1C45BECB@comcast.net> The thing I remember about this (not much) ;-) is that they didn't understand that auto-increment, auto-decrement addressing modes should be on opposite sides of the instruction so they can be used for stack operations. I figured it was an indicator. > On Oct 11, 2016, at 09:44, Scott Bradner wrote: > > this is the story I remember at the time -almost all of the DG founders were all ex-DEC people > particularly Ed de Castro who ran the PDP-8 stuff for DEC - the Nova was basically a glorified PDP-8 > > Scott > >> On Oct 10, 2016, at 3:32 PM, Noel Chiappa wrote: >> >>> From: "John Levine" >> >>> There was a competing 16 bit word addressed design by the designer of >>> the PDP-8, which after DEC rejected it became the DG Nova. >> > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. From steve.bunch at gmail.com Tue Oct 11 07:31:23 2016 From: steve.bunch at gmail.com (Steve Bunch) Date: Tue, 11 Oct 2016 10:31:23 -0400 Subject: [ih] bytes [Re: "network unix"] Message-ID: <5472B250-5D31-4DE7-9BE1-2B28341A4DF8@gmail.com> > From: Brian E Carpenter > > On 11/10/2016 08:32, Noel Chiappa wrote: >>> From: "John Levine" > >> >>> There was a competing 16 bit word addressed design by the designer of >>> the PDP-8, which after DEC rejected it became the DG Nova. >> >> I hear this repeated a lot, but I'm not sure it's accurate. That competing >> design has surfaced (Google "PDP-X"), and it's not very much like the Nova. > > I jumped from programming a PDP-8 to programming an Imlac PDS-1, which > struck me at the time as being remarkably like a 16-bit PDP-8. The core > was word-addressed. There was a separate graphics processor which also took > 16-bit instructions, but vectors were defined in 8-bit bytes. There's no > resemblance to the PDP-X description. In 1972-73 as a new grad student I took a computer architecture class at the University of Illinois from professor Michael Faiman. Michael spent a significant amount of time on a gate-by-gate, equation-by-equation analysis of the PDP-8 (possibly 8i, can?t recall). At the Center for Advanced Computation we had an Imlac PDS-1 that was essentially idle, so I took it over. It had a delicate core memory (very power-supply voltage sensitive) and RC-delay based UART timing. Both had to be routinely adjusted, so I found myself looking through the prints of the logic design. The Imlac PDS-1 WAS the PDP-8 we?d studied, for all practical purposes, stretched to 16 bits and enhanced for graphics. There was an extra bit of opcode so room for new instructions, extra bits for address, and of course the graphics ?processor? *, the raison d?etre of the machine, was added. But the similarity was unmistakeable, the signals and registers even had the same names. Our EE, Jim Bailey, knew the Imlac guys and told me that the lead designer of the Imlac PDS-1 was indeed the ex-DEC engineer who had done the same PDP-8 that we?d studied. The PDS-1 was used for the Network Graphics Protocol Level 0 interpreter mentioned in RFC 472 and 549, written in IMOL (Imlac Machine-Oriented Language), using a compiler constructed by Smokey Wallace that ran on a PDP-10 (I used BBN Tenex). The NGP-0 interpreter was used to access the UC Santa Barbara OLS (On-Line System) graphics system remotely and in other experiments with John Pickens (recently deceased) of the UCSB Computer Systems Laboratory, experiment with computer generated holograms, display cloud simulation data, and other projects. The NGP0 interpreter was embedded in an emulator for a standard TTY terminal. We also used the program to gather statistics on typical Telnet keyboard usage. Those stats were available when deciding things like buffer sizes for the NCP of Network UNIX a couple of years later (I wrote the mbuf code), which in our environment was heavily used for Telnet. (I?ve been corresponding with Paul Ruizendaal on Network UNIX.) I salvaged the CPU logic book when the PDS-1 was scrapped around 1980 and still have it, along with the original SRI mouse and 5-key keyset. Steve * The graphics ?processor? was actually just a few added registers and logic and a graphics/cpu state bit, and that bit then was an extra input to key equations. Each instruction cycle was either a graphics instruction or a CPU instruction. It took extra effort to avoid creating graphics programs that shut out regular instruction processing long enough to lose interrupts. > I believe the Imlac founders had jumped ship from DEC in 1968. > > (The first protocol I designed and implemented was for booting and driving > PDS-1s from an IBM 1800, in 1971, at CERN.) > > Brian -------------- next part -------------- An HTML attachment was scrubbed... URL: From pnr at planet.nl Tue Oct 11 11:30:32 2016 From: pnr at planet.nl (Paul Ruizendaal) Date: Tue, 11 Oct 2016 20:30:32 +0200 Subject: [ih] Early Unix networking In-Reply-To: References: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> Message-ID: <0C91C83B-9665-4DC7-8ADB-B06E3DE27EF2@planet.nl> On 11 Oct 2016, at 13:46 , Craig Partridge wrote: > According to BBN Quarterly #28, late in 1982 Gurwitz changed the design from being run on a second kernel thread to being driven by software interrupts. The code appears to have been further developed and maintained by Gurwitz and Partridge as late as 1984. [...] > > As I recall, the BBN code history is more complex post 1982. It went through several hands. Without the source for the 1982-1985 period there is really very little I could say about how the code developed with any certainty and my wording should have reflected that. Thank you for filling in the gaps! > [PR wrote] There was a build switch to build with either the network stack in "bbnnet" or with the network stack in "inet". I think the latter was alpha code at the time 4.1a was "released". My above sentence was also worded with a little too much haste. I should have written: "The latter was evolving quite quickly at the time and I still have to understand the level of functionality it offered at the time 4.1a was released". > Then Rob Gurwitz moved on and I was handed the project and Karen reported to me and had moved to adding some features to the Joy BSD code (I don't recall what). Have a look here: https://svnweb.freebsd.org/csrg?view=revision&revision=25202 It may have been work on the HMP protocol, amongst other. > Sometime in 1985, I ended up helping Bob Walsh a little bit in the 4.2 BBN TCP/IP code release, which was the BBN code rewritten to use the socket interface rather /dev/tcp. That throws my understanding of 4.1a upside down. My understanding was that 4.1a already had the BBN TCP/IP code combined with the (early) sockets API and that this happened between January and March 1982. If that shift only happened in 1985 (according to the above snap shot before October 14th) it means that the sockets API and the /dev/tcp API (the API pioneered by Network Unix) coexisted for about 4 years. I guess I really need to find the 4.1a tape and work with the code in detail to improve my understanding of what was there early in 1982. > * sometime during this period I recall seeing an insightful note from Gurwitz about limitations of both the Joy and BBN TCP implementations in Unix. I'm hoping it was sent to DARPA and survives in a file somewhere. The paragraph I remember best is a comment on the limitations of mbufs (which apparently Gurwitz devised), which seemed prophetic (which is why I remember it), when people sought to reduce the memory overheads during the 1990s. My current understanding is that network buffers grew in size over time as hardware and software evolved. - From the earliest Unix there were 'clists' to buffer serial line I/O, 8 byte buffers linked in a chain. - In 1975 the U of I team considered using clists to buffer IMP I/O but found that impractical. Steve Bunch then developed a buffer scheme with 64 byte buffer blocks, linked together in chains. The network buffer code would steal 512 byte blocks from the general disk buffer pool and place 8 network buffers on each. - In the BBN code buffering consists of linked 128 byte blocks placed on memory pages, where the network buffer code steals pages from the VM manager. - If I'm not mistaken this then evolves into 128 byte blocks with a potential pointer to an external 2048 byte data block in the CSRG code. A contemporary note discussing the trade-offs in buffer size would certainly be very interesting. > * the Joy TCP was apparently a rewrite of the BBN TCP (vs. a from scratch implementation). [...] I can confirm this from source code / SCCS. The directory "inet" was created in August 1981, around the time the 2nd tape was sent from BBN to CSRG. Nothing much happens in the repository until October 14th, 1981 when code begins to appear, and initially this is the BBN code: https://svnweb.freebsd.org/csrg/sys/netinet/?pathrev=4498 This code then begins to morph quite quickly. I have not reviewed the code development in the "inet" directory in full detail yet, but it seems that the CSRG focus is on maximizing the speed of FTP-like transfers on ethernet networks. My current understanding is that this was achieved through (i) larger buffers, (ii) assembly optimized check-summing en (iii) revised scheduling. Other changes appear to be more a matter of taste in code structuring then anything else. I may need to revise this understanding as I learn more. Early versions of the socket API appear in November 1981 (https://svnweb.freebsd.org/csrg/sys/kern/uipc_socket.c?revision=4786&view=markup&pathrev=4787), but there are major revisions before it settles down several months later. Paul From brian.e.carpenter at gmail.com Tue Oct 11 12:52:01 2016 From: brian.e.carpenter at gmail.com (Brian E Carpenter) Date: Wed, 12 Oct 2016 08:52:01 +1300 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <5048A395-1428-4CF3-89CB-100A1C45BECB@comcast.net> References: <20161010193214.EE0AB18C099@mercury.lcs.mit.edu> <83A454B9-14A9-4078-A103-940C9A48A7EE@sobco.com> <5048A395-1428-4CF3-89CB-100A1C45BECB@comcast.net> Message-ID: <1f1ddaf3-d09c-f4e4-4229-2a4ddf1f6d94@gmail.com> I forget the details, but there was actually a hardware bug in one instruction in the first PDP-11 model. The decrement of R7 was performed at the wrong point in the cycle compared with what the ISP definition said, or something like that. Since R7 was used as the stack pointer, this mattered if you stacked and unstacked the value of R7. I recall that the original version of FOCAL for the PDP-11 crashed horribly on later models, because it used some tricky code involving stacking the stack pointer. Regards Brian On 12/10/2016 03:18, John Day wrote: > The thing I remember about this (not much) ;-) is that they didn't understand that auto-increment, auto-decrement addressing modes should be on opposite sides of the instruction so they can be used for stack operations. I figured it was an indicator. > > >> On Oct 11, 2016, at 09:44, Scott Bradner wrote: >> >> this is the story I remember at the time -almost all of the DG founders were all ex-DEC people >> particularly Ed de Castro who ran the PDP-8 stuff for DEC - the Nova was basically a glorified PDP-8 >> >> Scott >> >>> On Oct 10, 2016, at 3:32 PM, Noel Chiappa wrote: >>> >>>> From: "John Levine" >>> >>>> There was a competing 16 bit word addressed design by the designer of >>>> the PDP-8, which after DEC rejected it became the DG Nova. >>> >> >> >> _______ >> internet-history mailing list >> internet-history at postel.org >> http://mailman.postel.org/mailman/listinfo/internet-history >> Contact list-owner at postel.org for assistance. > > > _______ > internet-history mailing list > internet-history at postel.org > http://mailman.postel.org/mailman/listinfo/internet-history > Contact list-owner at postel.org for assistance. > . > From reed at reedmedia.net Tue Oct 11 12:59:35 2016 From: reed at reedmedia.net (Jeremy C. Reed) Date: Tue, 11 Oct 2016 14:59:35 -0500 (CDT) Subject: [ih] Early Unix networking In-Reply-To: <0C91C83B-9665-4DC7-8ADB-B06E3DE27EF2@planet.nl> References: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> <0C91C83B-9665-4DC7-8ADB-B06E3DE27EF2@planet.nl> Message-ID: On Tue, 11 Oct 2016, Paul Ruizendaal wrote: > > Sometime in 1985, I ended up helping Bob Walsh a little bit in the > > 4.2 BBN TCP/IP code release, which was the BBN code rewritten to use > > the socket interface rather /dev/tcp. > > That throws my understanding of 4.1a upside down. My understanding was > that 4.1a already had the BBN TCP/IP code combined with the (early) > sockets API and that this happened between January and March 1982. If > that shift only happened in 1985 (according to the above snap shot > before October 14th) it means that the sockets API and the /dev/tcp > API (the API pioneered by Network Unix) coexisted for about 4 years. You may have read this backwards. This is not developing the BSD release. > I guess I really need to find the 4.1a tape and work with the code in > detail to improve my understanding of what was there early in 1982. That would be nice. But you basically already know basically what was in it (per your other comments and sources that I didn't include here). Some early uses of the sockets in the BSD code: % if_en Ethernet interface driver 81/11/26 wnj % if_acc ACC LH/DH ARPAnet IMP interface driver 82/02/01 sam % if_un Ungermann-Bass network/DR11-W interface driver 82/02/05 root % sendmail daemon.c 82/02/26 eric (allman) (also look at the 4.1a_daemon.c and compare with the bbn_daemon.c code) % telnet.c 82/02/28 root % telnetd.c 82/02/28 root Stripped-down telnet server. % comsat.c 82/03/31 root "datagram version" The BBN Combined Quarterly Technical Report No. 24 mentions some of BSD changes they knew about but not implemented on BBN's side. Some other interesting documents for you to look at are: TCP/IP Digest 8 Oct 1981 Volume 1 : Issue 1 TCP/IP Digest 11 Nov 1981 Volume 1 : Issue 6 TCP-IP Digest, Vol 1 #9 By the way, I couldn't find BBN Quarterly Technical Reports 18, 21, 22, 25, 28. Does anyone have these? From pnr at planet.nl Tue Oct 11 18:18:47 2016 From: pnr at planet.nl (Paul Ruizendaal) Date: Wed, 12 Oct 2016 03:18:47 +0200 Subject: [ih] Early Unix networking In-Reply-To: References: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> <0C91C83B-9665-4DC7-8ADB-B06E3DE27EF2@planet.nl> Message-ID: <8EC5BB34-4888-4B4E-B949-E5E820A49626@planet.nl> On 11 Oct 2016, at 21:59 , Jeremy C. Reed wrote: > The BBN Combined Quarterly Technical Report No. 24 mentions some of BSD > changes they knew about but not implemented on BBN's side. That's an interesting read. Section 8 is indeed a nice summary of how the BBN code evolved between August 1981 and January 1982. The notes on local net performance tuning are intriguing: - byte swapping and check summing in assembly - tuning timer values - software interrupts instead of a kernel thread It would be interesting to find out which of the above made it into the 4.1a release. As yet, I haven't fathomed why software interrupts would be more efficient than the kernel thread design. I've checked the code again and at that time the CSRG mbuf code does not use external 2048 byte blocks yet; the page allocation algorithm appears to have changed from the BBN version, but as yet I don't know if this affected performance much. In any case, my earlier understanding that larger buffers were used at that time to boost speed on local nets appears incorrect. From craig at tereschau.net Wed Oct 12 03:17:46 2016 From: craig at tereschau.net (Craig Partridge) Date: Wed, 12 Oct 2016 06:17:46 -0400 Subject: [ih] Early Unix networking In-Reply-To: <8EC5BB34-4888-4B4E-B949-E5E820A49626@planet.nl> References: <5E3469E9-F686-4D40-A428-FC246D248348@planet.nl> <0C91C83B-9665-4DC7-8ADB-B06E3DE27EF2@planet.nl> <8EC5BB34-4888-4B4E-B949-E5E820A49626@planet.nl> Message-ID: On Tue, Oct 11, 2016 at 9:18 PM, Paul Ruizendaal wrote: > > I've checked the code again and at that time the CSRG mbuf code does > not use external 2048 byte blocks yet; the page allocation algorithm > appears to have changed from the BBN version, but as yet I don't know if > this affected performance much. > > In any case, my earlier understanding that larger buffers were used at > that time to boost speed on local nets appears incorrect. > > This is entirely recollections and probably foggy/wrong in some details. My recollection is that mbufs were 128 bytes, because that was optimized for manipulating headers. You could pull an entire TCP/IP header into the front mbuf and work with a contiguous block of memory. I seem to recall a routine called mpullup() which was designed to ensure that the entire TCP/IP header was in the first mbuf after the link layer (e.g. Ethernet) header was removed. Mbufs were created by taking a 512 byte page and splitting it into 4 mbufs. I think there may have been 512 byte mbufs too -- and that may be the larger size. I don't think 2048 byte buffers were feasible until BSD enhanced its memory management -- initially, mbufs were simply pages taken from the page pool. Thanks! Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. For Raytheon business, please email: craig at bbn.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnl at iecc.com Thu Oct 13 10:00:47 2016 From: johnl at iecc.com (John Levine) Date: 13 Oct 2016 17:00:47 -0000 Subject: [ih] bytes [Re: "network unix"] In-Reply-To: <83A454B9-14A9-4078-A103-940C9A48A7EE@sobco.com> Message-ID: <20161013170047.47623.qmail@ary.lan> In article <83A454B9-14A9-4078-A103-940C9A48A7EE at sobco.com>, Scott Bradner wrote: >this is the story I remember at the time -almost all of the DG founders were all ex-DEC people >particularly Ed de Castro who ran the PDP-8 stuff for DEC - the Nova was basically a glorified PDP-8 Having spent a fair amount of time programming a PDP-8, I'd say that while the Nova had more in common with the 4/5/6/7/8/9* machines than with the -11, it was not all that much like a PDP-8. The Nova had four registers, the -8 had a single accumulator, the Nova was load/store, while the closest things to load and store the PDP-8 had were TAD (two's complement add) and DCA (deposit and clear the AC.) Frankly, I think the only important difference between the -11 and the Nova is that the -11 had byte addressing. The Nova's stripped down architecture gave it a price advantage in the then-important OEM market, but that was pretty short term since both were simple enough to implement in a single package. Other than that it was more differences of style. R's, John * - the 5/8 were clearly stripped down versions of the 4/7/9 From craig at tereschau.net Thu Oct 13 10:05:55 2016 From: craig at tereschau.net (Craig Partridge) Date: Thu, 13 Oct 2016 13:05:55 -0400 Subject: [ih] Leo Beranek died on Monday Message-ID: One of the founders of Bolt Beranek and Newman (BBN). http://www.bostonglobe.com/metro/2016/10/13/leo-beranek-accoustics-pioneer-and-founder-bbn-technologies-dies/F732cEPdAE3K00y6cO1U9N/story.html?event=event25 Craig -- ***** Craig Partridge's email account for professional society activities and mailing lists. For Raytheon business, please email: craig at bbn.com -------------- next part -------------- An HTML attachment was scrubbed... URL: