[ih] booting linux on a 4004
John Day
jeanjour at comcast.net
Mon Sep 30 18:00:26 PDT 2024
Thanks, Jack. This is very helpful. It really explains what was and wasn’t understood at the time.
A few comments inline below.
> On Sep 30, 2024, at 17:44, Jack Haverty via Internet-history <internet-history at elists.isoc.org> wrote:
>
> I'm not sure I remember all of the "gateway issues" but here's some of them...
>
> Circa 1978/9, it wasn't clear what "gateways" were in an architectural sense. TCP version 2.5 had not yet evolved into TCP/IP version 4, which split the "TCP" and "IP" functions more cleanly, and also enabled the introduction of additional functionality as an alternative to TCP. In particular, this enabled the definition of UDP, which was deemed necessary for experimentation with real-time interactive voice. Some usage required a reliable byte-stream; other usage required getting as much as possible as fast as possible.
>
> I was one of Vint's "ICCB" members, and we had lots of discussions about the role of "gateways", even after TCP and IP were split in Version 4. Vint had moved the "gateway project" to my group at BBN, so I was tasked to "make the Internet a 24x7 operational service". Or something like that. Gateways had become my problem.
>
> Gateways were characterized by the fact that they connected to more than one network. When they connected to three or more they had to make routing decisions, and thus participate in some kind of routing algorithm and information exchanges with other gateways.
Yes, this was understood in 1973. The primary problem was a routing scheme that scaled to the needed size. It was eventually provided (for better or worse) by BGP.
>
> However, we also realized that, in some cases, "host" computers also had to perform gateway functions. In particular, if a host computer (e.g., your favorite PDP-10) was connected to more than one network, it had to make a routing decision about where to send each datagram. To do so, the host needed some "routing information". This led to the notion of a "half-gateway" inside a host TCP/IP implementation. A multi-connected "host" could also possibly pass transit traffic from one network to another, essentially acting as a "full gateway". With the advent of LANs and Workstations, the quantity of "hosts" was expected to explode.
This is to some extent an artifact of the ARPANET initial conditions. IMPs were both a front end and a router. This meant that Hosts were not part of the network. After 1973, with the more-or-less adoption of the CYCLADES model, routers were just routers and hosts were part of the network. This is the shift from the ITU model of networking to the new Layered Model (for want of a better term). Layers have become resource allocators.
>
> Additionally, different kinds of user applications might need different network service. Interactive voice might desire low-latency service. Large file transfers might prefer a high-capacity service. Some networks would only carry traffic from "approved (by the network owner) users". Some networks charged by amount of traffic you sent over them.
This is QoS and should be passed as parameters to the Internet Layer. It is then up to the layer to figure out how to provide that level of QoS (probably within a range, as these systems aren’t very sensitive) using the QoS provided by the layer below. (Since traffic is being multiplexed onto the lower layer, the lower layer’s QoS ranges will be different: an aggregate of the QoS of the flows multiplexed onto it.)
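To make this concrete, here is a rough sketch (modern Python, with hypothetical names and numbers) of what passing QoS parameters down to the layer looks like: the application asks for a range, and the layer picks a lower-layer flow whose aggregate QoS covers it.

    # Hypothetical sketch: QoS requested as a range, mapped onto lower-layer flows.
    from dataclasses import dataclass

    @dataclass
    class QoS:
        max_latency_ms: float       # upper bound the application will tolerate
        min_bandwidth_kbps: float   # lower bound the application needs

    # Lower-layer flows, each with the aggregate QoS it can still honor.
    lower_flows = [
        {"name": "terrestrial", "latency_ms": 80.0,  "bandwidth_kbps": 56.0},
        {"name": "satellite",   "latency_ms": 600.0, "bandwidth_kbps": 3000.0},
    ]

    def allocate(q: QoS):
        """Pick any lower-layer flow whose QoS covers the requested range."""
        for f in lower_flows:
            if f["latency_ms"] <= q.max_latency_ms and f["bandwidth_kbps"] >= q.min_bandwidth_kbps:
                return f["name"]
        return None  # no flow can honor the request

    print(allocate(QoS(max_latency_ms=100, min_bandwidth_kbps=16)))     # terrestrial
    print(allocate(QoS(max_latency_ms=2000, min_bandwidth_kbps=1000)))  # satellite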
>
> The approach to these needs, purely as an experiment (we didn't know exactly how it would work), was to have multiple routing mechanisms running in parallel and coordinated somehow. Each mechanism would capture its own data to use in routing decisions. Each datagram would have a "Type Of Service" designator in the IP header, that would indicate what network behavior that datagram desired. The separate routing mechanisms would (somehow) coordinate their decisions to try to allocate the available network resources in a "fair" way. Lots of ideas flew around. Lots of experimentation to be done.
Good start; you were on the right track. Not sure different routing strategies were necessary so much as different metrics. The real issue isn’t so much QoS per se as the trade-offs between QoS classes.
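For reference, the "Type Of Service" designator Jack mentions ended up as the TOS octet of the IPv4 header in RFC 791: three precedence bits plus flags for low delay, high throughput, and high reliability. A minimal decoder in Python:

    # Decoding the RFC 791 Type of Service octet from an IPv4 header.
    def decode_tos(tos: int):
        return {
            "precedence": (tos >> 5) & 0x7,        # bits 7..5: precedence
            "low_delay": bool(tos & 0x10),          # minimize delay
            "high_throughput": bool(tos & 0x08),    # maximize throughput
            "high_reliability": bool(tos & 0x04),   # maximize reliability
        }

    print(decode_tos(0x10))  # a datagram asking for the low-delay path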
>
> Pragmatically, we had an experimental environment suitable for such exploration. The Arpanet was the main long-haul US backbone, extending across the Atlantic to parts of Europe. However, the WideBandNet (WBNet) also provided connectivity across the US, using a satellite channel. The Arpanet was primarily a terrestrial network of circuits running at 56 kilobits/second; the WBNet had a 3 megabits/second satellite channel, and of course had much higher latency than the Arpanet but could carry much more traffic. SATNET, also satellite based, covered parts of the US and Europe; MATNET was a clone of SATNET, installed on Navy ships. Packet Radio networks existed in testbed use at various military sites. Since these were funded by ARPA, use was restricted to users associated with ARPA projects. The public X.25/X.75 network also provided connectivity between the US and Europe. They were available for any use, but incurred costs based on "calls" like the rest of the telephony system. NSF (and NSFNet) had not yet appeared on the Internet; Al Gore did however speak at one of our meetings.
See below. There are multiple aspects to the solution to this. Part of it is QoS, but there are other parts.
>
> All of these networks were in place and connected by gateways to form the Internet of the early 1980s. The user scenarios we used to drive technical discussions included one where a teleconference is being held, with participants scattered across the Internet, some connected by Arpanet, some on ships connected by satellite, some in motion connected by Packet Radio, etc. The teleconference was multimedia, involving spoken conversations, interactive graphics, shared displays, and viewing documents. We didn't even imagine video (well, maybe some did...) with the technology of the day -- but if you use Zoom/Skype/etc today, you'll get the idea.
Good example. See above and below.
>
> Somehow, the Internet was supposed to make all of that "routing" work, enabling the use of such scenarios where different "types of service" were handled by the net to get maximal use of the limited resources. Traffic needing low latency should use terrestrial paths. Large volumes of time-insensitive traffic should go by satellite. Networks with rules about who could use them would be happy.
This is primarily the QoS problem noted above, using the QoS of the network layers supporting the Internet Layer. It mattered less whether the lines were terrestrial than what their measured latency, and probably RTT, were.
>
> In addition, there were other "gateway issues" that needed experimentation.
>
> One was called "Expressway Routing". The name was derived from an analogy to the highway system. Many cities have grids of streets that can extend for miles. They may also have an "Expressway" (Autobahn, etc.) that is some distance away but parallels a particular street. As you leave your building, you make a "routing decision" to select a route to your destination. In some cities, that destination might be on the same street you are on now, but many blocks away. So you might make the decision to use the local Expressway instead of just driving up the street you are already on. That might involve going "the wrong way" to get to an Expressway on-ramp. People know how to make such decisions; gateways didn't.
Expressways were lower layers, perhaps supported by other networks, that would provide ‘bulk’ traffic between major regions. This is something MPLS might have done if the developers had had sufficient imagination. It is likely these Expressways would have been virtual-circuit. (When there is lots of traffic all going to the same place, why look at every packet? Just move the darn stuff! I always found it amazing how the vc-advocates never proposed equipment for what VCs were good for, only for what they were not good for.) ;-) Also, in this environment one wants to be relaying more stuff less often, not less stuff more often. The example I always give is that there may not be constant traffic between Lake Forest, IL and Lexington, MA (use datagrams), but there will be constant traffic between the Boston and Chicago regions (use VCs). Make these “Expressways” a different layer under the Internet Layer, optimized for this.
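A toy sketch of the decision rule (Python; the threshold, smoothing, and names are made up for illustration): datagram forwarding by default, but once the aggregate rate between two regions crosses a threshold, move the aggregate onto a bulk circuit and stop inspecting every packet.

    # Hypothetical sketch of the "Expressway" idea: route individual datagrams
    # by default, but shift a heavy region-to-region aggregate onto a pre-set
    # bulk path (a virtual circuit in a lower layer).
    from collections import defaultdict

    THRESHOLD_PKTS_PER_SEC = 1000
    aggregate_rate = defaultdict(float)   # (src_region, dst_region) -> smoothed pkts/sec
    expressways = set()                   # region pairs moved onto bulk circuits

    def forward(src_region, dst_region, rate_sample):
        pair = (src_region, dst_region)
        # Exponentially weighted moving average of the observed rate.
        aggregate_rate[pair] = 0.9 * aggregate_rate[pair] + 0.1 * rate_sample
        if pair not in expressways and aggregate_rate[pair] > THRESHOLD_PKTS_PER_SEC:
            expressways.add(pair)         # set up the bulk circuit once
        return "expressway" if pair in expressways else "datagram"

    for _ in range(50):
        forward("Boston", "Chicago", 2000)
    print(forward("Boston", "Chicago", 2000))       # expressway
    print(forward("Lake Forest", "Lexington", 5))   # datagram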
>
> That particular situation was endemic to the WBNet at the time. There were no "hosts" connected to the WBNet; only gateways were directly connected, between the WBNet and Arpanet at various locations. With the standard routing mechanisms of the time, traffic would never use the WBNet. Since both source and destination were on the Arpanet (or a LAN connected to it), traffic would naturally just use the Arpanet.
The WBNet was a perfect example of what I described above. Sounds like Flat Earth thinking.
>
> Another "gateway issue" was "Multi-Homed Hosts" (MHH). These are simply host (users') computers that are somehow connected to more than one network. That was rare at the time. Network connections were quite expensive. But we envisioned that such connectivity would become more available. For example, a "host computer" in a military vehicle might be connected to a Packet Radio network while in motion, but might be able to "plug in" to a terrestrial network (such as Arpanet) when it was back "at base".
The solution to multihoming has been known since 1972. CYCLADES solved it by making it an inherent part of the model. Yeah, I know there are still a lot of people in the Internet today who refuse to believe that. They are simply wrong. Oh, and to those who complain that not all multihomed hosts want to act as transit nodes, the answer is simple: don’t advertise that the host is a route to anything but itself. Sheesh!
>
> In addition to improving reliability by such redundancy, MHH could take advantage of multiple connections -- if the networking technology knew how to do so. One basic advantage would be increased throughput by using the capacity of both connections. But there were problems to be addressed. Each connection would have a unique IP address - how do you get that to be useful for a single TCP connection?
It is not relevant to a single TCP connection. It is a different layer, and it is that layer’s task to do the resource allocation to use multiple paths. One of the things that has always bugged me is the idea that one should be able to identify a single host-to-host flow in the middle of the network. Good grief! This isn’t the PSTN!! The IP addresses are irrelevant. Each layer has its own addresses. (Remember, addresses belong to layers, not to protocols.)
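A sketch of what I mean (Python; the interface names and weights are hypothetical): the layer below the transport spreads one flow’s packets across both attachments by weighted round-robin, and the TCP connection above never sees which address carried which packet.

    # Hypothetical sketch: the layer below the transport allocates one flow's
    # traffic over several attachments (each with its own address).
    import itertools

    paths = [
        {"name": "arpanet_if", "weight": 1},   # e.g. a slow terrestrial attachment
        {"name": "wbnet_if",   "weight": 3},   # e.g. a higher-capacity attachment
    ]

    # Build a weighted round-robin schedule once.
    schedule = itertools.cycle(
        [p["name"] for p in paths for _ in range(p["weight"])]
    )

    def send(packet):
        path = next(schedule)
        # ...hand the packet to that attachment's network here...
        return path

    print([send(b"data") for _ in range(8)])  # 1:3 spread across the two paths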
>
> That may sound like an ancient problem.... But my cell phone today has both "cell data" and "Wifi" capability. It can only use one at a time however. It also has a different IP address for each connection. At best it's a MHH with just a backup capability. We thought we could do better...
Mobility is still handled in the ITU model, as the Internet is, and it is totally screwed up in the Internet as a result. In the early days, that was understandable. But that hasn’t been the case for 30 years or more. This is just a variation of the multihoming problem, which, as I said, had a known solution for almost a decade at that point. In a well-formed architecture, mobility doesn’t require anything new. It certainly doesn’t need foreign agents or home agents or tunnels or new protocols.
>
> I'm sure there were other "gateway issues". But we recognized the limits of the technology of the day. The gateways were severely limited in memory and computing power. The network speeds would be considered unusable today. To make routing decisions such as choosing a low-latency path for interactive usage required some way to measure datagram transit time. But the gateway hardware had no ability to measure time.
Yes, there would have been a congestion issue in both the Internet Layer and the Network Layer, or for that matter in any layer that relays.
>
> In the interim, the only viable approach was to base routing on "hop counts" while the hardware was improved and the experimentation hopefully revealed a viable algorithm to use within the Internet -- including "gateways" and "half-gateways". We introduced various kinds of "source routing" so that experimenters could force traffic to follow routes that the primitive existing routing mechanisms would reject. The "next release" after TCP/IP version 4 would hopefully address some of the issues. I lost track after that; another reorganization moved the project elsewhere.
Never been a fan of source routing. It always seemed like virtual circuit by another name.
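For readers who never saw it, the hop-count routing Jack describes is just a distance-vector (Bellman-Ford) update. A minimal sketch, with hypothetical gateway and network names:

    # Minimal sketch of hop-count routing: each gateway keeps
    # destination -> (hops, next_hop) and applies the distance-vector
    # update when a neighbor advertises its own table.
    def dv_update(table, neighbor, neighbor_table):
        """Bellman-Ford step: adopt any route through `neighbor` that is shorter."""
        changed = False
        for dest, hops in neighbor_table.items():
            new_hops = hops + 1                      # one extra hop via the neighbor
            old_hops, _ = table.get(dest, (float("inf"), None))
            if new_hops < old_hops:
                table[dest] = (new_hops, neighbor)
                changed = True
        return changed

    table = {"net_a": (0, None)}                     # directly attached network
    dv_update(table, "gw2", {"net_a": 1, "net_b": 0, "net_c": 2})
    print(table)  # net_b is 1 hop via gw2, net_c is 3 hops via gw2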
>
> All of the above occurred about 45 years ago. AFAIK, the specifications for "half" and "full" gateways were never created. And it seems we're still using hop counts? Perhaps computing and communications technology just exploded fast enough so it no longer matters.
>
> Except for latency. Physics still rules. The speed of light, and digital signals, is still the Law.
Yep, you can’t fool Mother Nature.
Take care,
John
>
> Hope this helps,
> Jack Haverty
>
>
>
>
> On 9/30/24 12:43, John Day via Internet-history wrote:
>> I am confused. Could someone clarify for me what all of these gateway issues were? Why gateways were such a big deal?
>>
>> Thanks,
>> John
>>
>>> On Sep 30, 2024, at 13:06, Barbara Denny via Internet-history<internet-history at elists.isoc.org> wrote:
>>>
>>> I have been trying to remember some things surrounding this topic so I did some poking as my knowledge/memory is hazy. I found some documents on DTIC which may be of interest to people. It seems not all documents in DTIC provide useable links so use the document IDs in the search bar on their website.
>>> ADA093135
>>>
>>> This one confirms a long suspicion of mine regarding gateways. The gateway functionality/software originally resided in the packet radio station. It also mentions getting TCP from SRI and porting it to ELF (The packet radio station was an LSI-11 if I remember correctly and ELF was the operating system).
>>> You might also be interested in the following report for the discussion of Internet and gateway issues. It mentions removing support for versions of IP that weren't v4 for example.
>>> ADA099617
>>>
>>> I also remember Jim talking about PMOS which I think stood for Portable MOS ( Micro Operating System aka Mathis's Operating System). I think Jim's TCP code also ran on the TIU (Terminal Interface Unit) using PMOS which was a PDP-11 and was part of the packet radio architecture. Not sure how many people used the term PMOS though.
>>> For more info see
>>> https://gunkies.org/wiki/MOS_operating_system
>>>
>>> BTW, I have never heard of this website before. It might be a little buggy but it certainly strikes familiar chords in my memory. BTW, the NIU (Network Interface Unit) was a 68000 and ran PMOS. This was used for the SURAN project, which was a follow-on to packet radio.
>>> Finally, I also found a description of the IPR (Improved Packet Radio) in DTIC. It covers the hardware and the operating system. This version of packet radio hardware used 2 processors. I think this was due to performance problems with the previous generation of packet radio.
>>> https://apps.dtic.mil/sti/citations/ADB075938
>>>
>>> barbara
>>>
>>> On Sunday, September 29, 2024 at 01:33:14 PM PDT, Jack Haverty via Internet-history<internet-history at elists.isoc.org> wrote:
>>>
>>> Yeah, the "Stone Age of Computing" was quite different from today.
>>>
>>> The Unix (lack of) IPC was a serious obstacle. I struggled with it in
>>> the late 70s when I got the assignment to implement some new thing
>>> called "TCP" for ARPA. I used Jim Mathis implementation for the
>>> LSI-11s being used in Packet Radio, and shoehorned it into Unix.
>>> Several of us even went to Bell Labs and spent an afternoon discussing
>>> networking with Ritchie. All part of all of us learning about networking.
>>>
>>> More info on what the "underlying architectures" were like back then,
>>> including details of the experience of creating TCP implementations for
>>> various Unices:
>>>
>>> http://exbbn.weebly.com/note-47.html
>>> https://www.sophiehonerkamp.com/othersite/isoc-internet-history/2016/oct/msg00000.html
>>>
>>> There was a paper ("Interprocess Communications for a Server in Unix")
>>> for some IEEE conference in 1978 where we described the additions to
>>> Unix to make it possible to write TCP. But I can't find it online -
>>> probably the Conference Proceedings are behind a paywall somewhere though.
>>>
>>> Jack
>>>
>>>
>>> On 9/29/24 10:42, John Day wrote:
>>>> Good point, Jack. Dave did a lot of good work. I always liked his comment when I asked him about his collaboration with CYCLADES. He said, it was ’so they wouldn’t make the same mistakes we did.’ ;-) Everyone was learning back then.
>>>>
>>>> Perhaps more relevant is that the first Unix system was brought up on the ’Net at UIUC in the summer of 1975 on a PDP-11/45. It was then stripped down and by the Spring of 1976 ported to an LSI-11 (a single board PDP-11) for a ‘terminal’ with a plasma screen and touch. That was fielded as part of a land-use management system for the 6 counties around Chicago and for the DoD at various places including CINCPAC.
>>>>
>>>> Unix didn’t have a real IPC facility then. (Pipes were blocking and not at all suitable.) Once the first version was up and running with NCP in the kernel and Telnet, etc in user mode, a true IPC was implemented. (To do Telnet in that early version without IPC, there were two processes, one, in-bound and one out-bound and stty and gtty were hacked to coordinate them.) file_io was hacked for the API, so that to open a connection, it was simply “open(ucsd/telnet)”.
>>>>
>>>> Years later there was an attempt to convince Bill Joy to do something similar for Berkeley Unix, but he was too enamored with his Sockets idea. It is too bad, because with the original API the Internet could have seamlessly moved away from well-known ports to application names and no one would have noticed. As it was, domain names were nothing more than automating the downloading of the host file from the NIC.
>>>>
>>>> Take care,
>>>> John Day
>>>>
>>>>> On Sep 29, 2024, at 13:16, Jack Haverty via Internet-history<internet-history at elists.isoc.org> wrote:
>>>>>
>>>>> On 9/29/24 08:58, Dave Taht via Internet-history wrote:
>>>>>> See:
>>>>>>
>>>>>> https://dmitry.gr/?r=05.Projects&proj=35.%20Linux4004
>>>>>>
>>>>>> While a neat hack and not directly relevant to ih, it sparked curiosity in
>>>>>> me as to the characteristics of the underlying architectures arpanet was
>>>>>> implemented on.
>>>>>>
>>>>>>
>>>>> For anyone interested in the "underlying architectures arpanet was implemented on", I suggest looking at:
>>>>>
>>>>> https://walden-family.com/bbn/imp-code.pdf
>>>>>
>>>>> Dave Walden was one of the original Arpanet programmers. He literally wrote the code. This paper describes how the Arpanet software and hardware were created. Part 2 of his paper describes more recent (2010s) work to resurrect the original IMP code and get it running again to create the original 4-node Arpanet network as it was in 1970. The code is publicly available - so anyone can look at it, and even get it running again on your own modern hardware. Check out the rest of the walden-family website.
>>>>>
>>>>> When Arpanet was being constructed, microprocessors such as the Intel 4004 did not yet exist. Neither did Unix, the precursor to Linux. Computers were quite different - only one processor, no cores, threads, or such. Lots of boards, each containing a few logic gates, interconnected by wires. Logic operated at speeds of perhaps a Megahertz, rather than Gigahertz. Memory was scarce, measured in Kilobytes, rather than Gigabytes. Communication circuits came in Kilobits per second, not Gigabits. Persistent storage (disks, drums) were acquired in Megabytes, not Terabytes. Everything also cost a lot more than today.
>>>>>
>>>>> Computing engineering was quite different in 1969 from today. Every resource was scarce and expensive. Much effort went towards efficiency, getting every bit of work out of the available hardware. As technology advanced and the Arpanet evolved into the Internet, I often wonder how the attitudes and approaches to computing implementations changed over that history. We now have the luxury of much more powerful hardware, costing a tiny fraction of what a similar system might have cost in the Arpanet era. How did hardware and software engineering change over that time?
>>>>>
>>>>> Curiously, my multi-core desktop machine today, with its gigabytes of memory, terabytes of storage, and gigabits/second network, running the Ubuntu version of Linux, takes longer to "boot up" and be ready to work for me than the PDP-10 did, back when I used that machine on the Arpanet in the 1970s. I sometimes wonder what it's doing while executing those trillions of instructions to boot up.
>>>>>
>>>>> Jack Haverty
>>>>>
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history