[ih] Fuzzballs and the Neonatal Internet (was Re: booting linux on a 4004)
Jack Haverty
jack at 3kitty.org
Wed Oct 2 15:02:03 PDT 2024
[Changed the subject since we've drifted far away from linux and Intel -
such wandering is an Internet tradition]
Dave Mills and his crew were one of the most prolific, and adventurous,
sources of ideas and running code in the era of 1980 +- a few years.
Dave was an avid experimenter, trying out his ideas in code rather than
equations and diagrams on whiteboards. Experiments require measurements
and instrumentation, which were sorely lacking in the Internet design at
the time. So Dave and crew created NTP, somehow got it embraced by
NIST, and as a result all of our devices today know what time it is.
With amazing accuracy.
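As a concrete illustration of what NTP carries on the wire: per RFC 5905, an NTP timestamp is a 64-bit fixed-point value counting seconds since January 1, 1900, so converting to Unix time is just a fixed-offset subtraction. This is a minimal sketch, not code from any actual NTP implementation:

```python
# NTP timestamps (RFC 5905) are 64-bit fixed-point values: the high 32 bits
# are seconds since 1900-01-01, the low 32 bits are fractional seconds.
NTP_TO_UNIX_OFFSET = 2_208_988_800  # seconds between the 1900 and 1970 epochs

def ntp_to_unix(ntp_timestamp: int) -> float:
    """Convert a raw 64-bit NTP timestamp to Unix time in seconds."""
    seconds = ntp_timestamp >> 32
    fraction = ntp_timestamp & 0xFFFFFFFF
    return (seconds - NTP_TO_UNIX_OFFSET) + fraction / 2**32
```

For example, an NTP seconds field of 3,913,056,000 maps to Unix time 1,704,067,200, i.e., the start of 2024 UTC.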
Fuzzballs were Dave's medium for trying out ideas about networking.
They existed in the Internet well before they were chosen for use in
NSF's network. My experience with Dave's "Fuzzies" occurred in the late
1970s and early 1980s.
Dave was also a member of Vint's ICCB at that time, where his zeal for
experimentation was legendary. My own task from Vint was to make the
"core gateways" highly reliable and operational as a 24x7 service.
That was especially important for the research community in Europe, who
had to rely on the Internet for connectivity, while researchers in the
US were able to simply use the Arpanet. When the core gateways weren't
working, they couldn't access any US resources. They were not shy
about complaining, and had a 5+ hour head start on seeing problems due
to the time zones involved.
Research priorities often conflict with operational ones. Our goal was
to keep the "core" running. Dave's goal was to try out new ideas and
see how well they worked. Sometimes the new ideas broke the
Internet. Maybe such outages were caused by a simple bug, or possibly
an unforeseen consequence of the new idea. Whatever the cause, my
phone rang -- "The core gateways aren't working!"
We had encountered such conflicts during the previous 10 years of
Arpanet, and used traditional solutions. New code and algorithms were
developed on a separate "clone" network. They were extensively tested,
usually for many months. The test environments were highly instrumented,
and the data collected was analyzed in depth. Eventually, the new code was
carefully introduced into the operational Arpanet itself, with
provisions for "backing out" to the old system if necessary.
Such rigor is suitable for an operational network. But it is too
limiting for a research environment, especially the Internet where lots
of people had lots of ideas about techniques to be tried. Creating a
separate "Test Internet" was pragmatically unrealistic.
After some ICCB meeting, back at BBN, I corralled one of the Arpanet
"thinkers" for an afternoon to brainstorm how to keep the research
activities somehow "insulated" from the operational users of the
Internet. That led to the creation of the notion of "Autonomous
Systems", and the Exterior Gateway Protocol (see RFC 827, published in
1982).
EGP enabled the "core gateways" to be isolated into their own
"Autonomous System". Research could continue until the "right"
architecture, algorithms, and protocols were proven in trials on the
Internet. Then the new system could be deployed and the need for
Autonomous Systems and EGP would disappear.
How naive we were...!
With a bit of extra code, the "core gateways" could simply ignore any
information from "outside" that didn't seem reasonable, e.g., any
routing information pertaining to networks that were already connected
directly to a core gateway. With EGP in place, the core gateways were
able to protect themselves from whatever any researcher system did, so
that "operational" and "research" activities could co-exist on the Internet.
Fuzzies still attacked the "core". But it was protected by the EGP
wall. Research and Operations could coexist on The Internet.
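The sanity check described above -- a core gateway ignoring exterior claims about networks it already reaches directly -- can be sketched as a simple filter. The data structures and names here are purely illustrative, not taken from any actual gateway or EGP code:

```python
def filter_exterior_routes(advertised, directly_connected):
    """Drop any route learned from an exterior (non-core) neighbor that
    claims reachability to a network this core gateway already reaches
    directly -- the "ignore anything unreasonable" rule described above.

    advertised: dict mapping network -> hop count, as claimed by the neighbor
    directly_connected: set of networks attached to this core gateway
    """
    return {net: hops for net, hops in advertised.items()
            if net not in directly_connected}
```

So a researcher's system announcing a route to a network the core is already attached to would simply have that claim discarded, while its other announcements pass through.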
All of the above happened in the 1979-1983 timeframe. Fuzzballs
continued on and were used in a variety of places including NSF's
projects. But I wasn't involved then - someone else will have to
explain that part of their history.
Hope this helps,
Jack Haverty
On 9/30/24 18:28, Kyle Duren wrote:
> Where did the Fuzzball routers fit into this timeline/architecture?
>
> On Mon, Sep 30, 2024 at 5:45 PM Jack Haverty via Internet-history
> <internet-history at elists.isoc.org> wrote:
>
> I'm not sure I remember all of the "gateway issues" but here's
> some of
> them...
>
> Circa 1978/9, it wasn't clear what "gateways" were in an
> architectural
> sense. TCP version 2.5 had not yet evolved into TCP/IP version 4,
> which
> split the "TCP" and "IP" functions more cleanly, and also enabled the
> introduction of additional functionality as an alternative to
> TCP. In
> particular, this enabled the definition of UDP, which was deemed
> necessary for experimentation with real-time interactive voice. Some
> usage required a reliable byte-stream; other usage required
> getting as
> much as possible as fast as possible.
>
> I was one of Vint's "ICCB" members, and we had lots of discussions
> about
> the role of "gateways", even after TCP and IP were split in
> Version 4.
> Vint had moved the "gateway project" to my group at BBN, so I was
> tasked
> to "make the Internet a 24x7 operational service". Or something like
> that. Gateways had become my problem.
>
> Gateways were characterized by the fact that they connected to
> more than
> one network. When they connected to three or more they had to make
> routing decisions, and thus participate in some kind of routing
> algorithm and information exchanges with other gateways.
>
> However, we also realized that, in some cases, "host" computers
> also had
> to perform gateway functions. In particular, if a host computer
> (e.g.,
> your favorite PDP-10) was connected to more than one network, it
> had to
> make a routing decision about where to send each datagram. To do so,
> the host needed some "routing information". This led to the notion
> of a
> "half-gateway" inside a host TCP/IP implementation. A
> multi-connected
> "host" could also possibly pass transit traffic from one network to
> another, essentially acting as a "full gateway". With the advent of
> LANs and Workstations, the quantity of "hosts" was expected to
> explode.
>
> Additionally, different kinds of user applications might need
> different
> network service. Interactive voice might desire low-latency service.
> Large file transfers might prefer a high-capacity service. Some
> networks would only carry traffic from "approved (by the network
> owner)
> users". Some networks charged by amount of traffic you sent over
> them.
>
> The approach to these needs, purely as an experiment (we didn't know
> exactly how it would work), was to have multiple routing mechanisms
> running in parallel and coordinated somehow. Each mechanism would
> capture its own data to use in routing decisions. Each datagram
> would
> have a "Type Of Service" designator in the IP header, that would
> indicate what network behavior that datagram desired. The separate
> routing mechanisms would (somehow) coordinate their decisions to
> try to
> allocate the available network resources in a "fair" way. Lots of
> ideas flew around. Lots of experimentation to be done.
>
> Pragmatically, we had an experimental environment suitable for such
> exploration. The Arpanet was the main long-haul US backbone,
> extending
> across the Atlantic to parts of Europe. However, the WideBandNet
> (WBNet) also provided connectivity across the US, using a satellite
> channel. The Arpanet was primarily a terrestrial network of circuits
> running at 56 kilobits/second; the WBNet had a 3 megabits/second
> satellite channel, and of course had much higher latency than the
> Arpanet but could carry much more traffic. SATNET, also satellite
> based,
> covered parts of the US and Europe; MATNET was a clone of SATNET,
> installed on Navy ships. Packet Radio networks existed in
> testbed use
> at various military sites. Since these were funded by ARPA, use was
> restricted to users associated with ARPA projects. The public
> X.25/X.75
> network also provided connectivity between the US and Europe. It was
> available for any use, but incurred costs based on "calls"
> like the
> rest of the telephony system. NSF (and NSFNet) had not yet
> appeared on
> the Internet; Al Gore did however speak at one of our meetings.
>
> All of these networks were in place and connected by gateways to form
> the Internet of the early 1980s. The user scenarios we used to drive
> technical discussions included one where a teleconference is being
> held,
> with participants scattered across the Internet, some connected by
> Arpanet, some on ships connected by satellite, some in motion
> connected
> by Packet Radio, etc. The teleconference was multimedia, involving
> spoken conversations, interactive graphics, shared displays, and
> viewing
> documents. We didn't even imagine video (well, maybe some
> did...) with
> the technology of the day -- but if you use Zoom/Skype/etc today,
> you'll
> get the idea.
>
> Somehow, the Internet was supposed to make all of that "routing"
> work,
> enabling the use of such scenarios where different "types of service"
> were handled by the net to get maximal use of the limited resources.
> Traffic needing low latency should use terrestrial paths. Large
> volumes
> of time-insensitive traffic should go by satellite. Networks with
> rules about who could use them would be happy.
>
> In addition, there were other "gateway issues" that needed
> experimentation.
>
> One was called "Expressway Routing". The name was derived from an
> analogy to the highway system. Many cities have grids of streets
> that
> can extend for miles. They may also have an "Expressway" (Autobahn,
> etc.) that is some distance away but parallels a particular
> street. As
> you leave your building, you make a "routing decision" to select a
> route
> to your destination. In some cities, that destination might be on
> the
> same street you are on now, but many blocks away. So you might make
> the decision to use the local Expressway instead of just driving
> up the
> street you are already on. That might involve going "the wrong
> way" to
> get to an Expressway on-ramp. People know how to make such
> decisions;
> gateways didn't.
>
> That particular situation was endemic to the WBNet at the time. There
> were no "hosts" connected to the WBNet; only gateways were directly
> connected, between the WBNet and Arpanet at various locations.
> With the
> standard routing mechanisms of the time, traffic would never use the
> WBNet. Since both source and destination were on the Arpanet (or
> a LAN
> connected to it), traffic would naturally just use the Arpanet.
>
> Another "gateway issue" was "Multi-Homed Hosts" (MHH). These are
> simply host (users') computers that are somehow connected to more
> than
> one network. That was rare at the time. Network connections were
> quite
> expensive. But we envisioned that such connectivity would become
> more
> available. For example, a "host computer" in a military vehicle
> might
> be connected to a Packet Radio network while in motion, but might be
> able to "plug in" to a terrestrial network (such as Arpanet) when
> it was
> back "at base".
>
> In addition to improving reliability by such redundancy, MHH could
> take
> advantage of multiple connections -- if the networking technology
> knew
> how to do so. One basic advantage would be increased throughput by
> using the capacity of both connections. But there were problems
> to be
> addressed. Each connection would have a unique IP address - how
> do you
> get that to be useful for a single TCP connection?
>
> That may sound like an ancient problem.... But my cell phone
> today has
> both "cell data" and "Wifi" capability. It can only use one at a
> time
> however. It also has a different IP address for each
> connection. At
> best it's a MHH with just a backup capability. We thought we
> could do
> better...
>
> I'm sure there were other "gateway issues". But we recognized the
> limits of the technology of the day. The gateways were severely
> limited
> in memory and computing power. The network speeds would be
> considered
> unusable today. To make routing decisions such as choosing a
> low-latency path for interactive usage required some way to measure
> datagram transit time. But the gateway hardware had no ability to
> measure time.
>
> In the interim, the only viable approach was to base routing on "hop
> counts" while the hardware was improved and the experimentation
> hopefully revealed a viable algorithm to use within the Internet --
> including "gateways" and "half-gateways". We introduced various
> kinds
> of "source routing" so that experimenters could force traffic to
> follow
> routes that the primitive existing routing mechanisms would reject.
> The "next release" after TCP/IP version 4 would hopefully address
> some
> of the issues. I lost track after that; another reorganization
> moved
> the project elsewhere.
>
> All of the above occurred about 45 years ago. AFAIK, the
> specifications for "half" and "full" gateways were never created.
> And it
> seems we're still using hop counts? Perhaps computing and
> communications technology just exploded fast enough so it no
> longer matters.
>
> Except for latency. Physics still rules. The speed of light, and
> digital signals, is still the Law.
>
> Hope this helps,
> Jack Haverty
>
>
>
>
> On 9/30/24 12:43, John Day via Internet-history wrote:
> > I am confused. Could someone clarify for me what all of these
> gateway issues were? Why gateways were such a big deal?
> >
> > Thanks,
> > John
> >
> >> On Sep 30, 2024, at 13:06, Barbara Denny via
> Internet-history <internet-history at elists.isoc.org> wrote:
> >>
> >> I have been trying to remember some things surrounding this
> topic so I did some poking as my knowledge/memory is hazy. I found
> some documents on DTIC which may be of interest to people. It
> seems not all documents in DTIC provide useable links so use the
> document IDs in the search bar on their website.
> >> ADA093135
> >>
> >> This one confirms a long suspicion of mine regarding gateways.
> The gateway functionality/software originally resided in the
> packet radio station. It also mentions getting TCP from SRI and
> porting it to ELF (The packet radio station was an LSI-11 if I
> remember correctly and ELF was the operating system).
> >> You might also be interested in the following report for the
> discussion of Internet and gateway issues. It mentions removing
> support for versions of IP that weren't v4 for example.
> >> ADA099617
> >>
> >> I also remember Jim talking about PMOS which I think stood for
> Portable MOS ( Micro Operating System aka Mathis's Operating
> System). I think Jim's TCP code also ran on the TIU (Terminal
> Interface Unit) using PMOS which was a PDP-11 and was part of the
> packet radio architecture. Not sure how many people used the term
> PMOS though.
> >> For more info see
> >> https://gunkies.org/wiki/MOS_operating_system
> >>
> >> BTW, I have never heard of this website before. It might be a
> little buggy but it certainly strikes familiar chords in my
> memory. BTW the NIU (Network Interface Unit) was a 68000 and ran
> PMOS. This was used for the SURAN project which was a follow on to
> packet radio.
> >> Finally, I also found a description of the IPR (Improved Packet
> Radio) in DTIC. It covers the hardware and the operating system.
> This version of packet radio hardware used 2 processors. I think
> this was due to performance problems with the previous generation
> of packet radio.
> >> https://apps.dtic.mil/sti/citations/ADB075938
> >>
> >> barbara
> >>
> >> On Sunday, September 29, 2024 at 01:33:14 PM PDT, Jack
> Haverty via Internet-history <internet-history at elists.isoc.org> wrote:
> >>
> >> Yeah, the "Stone Age of Computing" was quite different from today.
> >>
> >> The Unix (lack of) IPC was a serious obstacle. I struggled
> with it in
> >> the late 70s when I got the assignment to implement some new thing
> >> called "TCP" for ARPA. I used Jim Mathis's implementation for the
> >> LSI-11s being used in Packet Radio, and shoehorned it into Unix.
> >> Several of us even went to Bell Labs and spent an afternoon
> discussing
> >> networking with Ritchie. All part of all of us learning about
> networking.
> >>
> >> More info on what the "underlying architectures" were like back
> then,
> >> including details of the experience of creating TCP
> implementations for
> >> various Unices:
> >>
> >> http://exbbn.weebly.com/note-47.html
> >>
> https://www.sophiehonerkamp.com/othersite/isoc-internet-history/2016/oct/msg00000.html
> >>
> >> There was a paper ("Interprocess Communications for a Server in
> Unix")
> >> for some IEEE conference in 1978 where we described the
> additions to
> >> Unix to make it possible to write TCP. But I can't find it
> online -
> >> probably the Conference Proceedings are behind a paywall
> somewhere though.
> >>
> >> Jack
> >>
> >>
> >> On 9/29/24 10:42, John Day wrote:
> >>> Good point, Jack. Dave did a lot of good work. I always liked
> his comment when I asked him about his collaboration with
> CYCLADES. He said, it was ’so they wouldn’t make the same mistakes
> we did.’ ;-) Everyone was learning back then.
> >>>
> >>> Perhaps more relevant is that the first Unix system was
> brought up on the ’Net at UIUC in the summer of 1975 on a
> PDP-11/45. It was then stripped down and by the Spring of 1976
> ported to an LSI-11 (a single board PDP-11) for a ‘terminal’ with
> a plasma screen and touch. That was fielded as part of a land-use
> management system for the 6 counties around Chicago and for the
> DoD at various places including CINCPAC.
> >>>
> >>> Unix didn’t have a real IPC facility then. (Pipes were
> blocking and not at all suitable.) Once the first version was up
> and running with NCP in the kernel and Telnet, etc in user mode, a
> true IPC was implemented. (To do Telnet in that early version
> without IPC, there were two processes, one, in-bound and one
> out-bound and stty and gtty were hacked to coordinate them.)
> file_io was hacked for the API, so that to open a connection, it
> was simply “open(ucsd/telnet)”.
> >>>
> >>> Years later there was an attempt to convince Bill Joy to do
> something similar for Berkeley Unix but he was too enamored with
> his Sockets idea. It is too bad because with the original API, the
> Internet could have seamlessly moved away from well-known ports and
> to application-names and no one would have noticed. As it was,
> domain names were nothing more than automating downloading the
> host file from the NIC.
> >>>
> >>> Take care,
> >>> John Day
> >>>
> >>>> On Sep 29, 2024, at 13:16, Jack Haverty via
> Internet-history <internet-history at elists.isoc.org> wrote:
> >>>>
> >>>> On 9/29/24 08:58, Dave Taht via Internet-history wrote:
> >>>>> See:
> >>>>>
> >>>>> https://dmitry.gr/?r=05.Projects&proj=35.%20Linux4004
> >>>>>
> >>>>> While a neat hack and not directly relevant to ih, it
> sparked curiosity in
> >>>>> me as to the characteristics of the underlying architectures
> arpanet was
> >>>>> implemented on.
> >>>>>
> >>>>>
> >>>> For anyone interested in the "underlying architectures
> arpanet was implemented on", I suggest looking at:
> >>>>
> >>>> https://walden-family.com/bbn/imp-code.pdf
> >>>>
> >>>> Dave Walden was one of the original Arpanet programmers. He
> literally wrote the code. This paper describes how the Arpanet
> software and hardware were created. Part 2 of his paper describes
> more recent (2010s) work to resurrect the original IMP code and
> get it running again to create the original 4-node Arpanet network
> as it was in 1970. The code is publicly available - so anyone can
> look at it, and even get it running again on your own modern
> hardware. Check out the rest of the walden-family website.
> >>>>
> >>>> When Arpanet was being constructed, microprocessors such as
> the Intel 4004 did not yet exist. Neither did Unix, the precursor
> to Linux. Computers were quite different - only one processor, no
> cores, threads, or such. Lots of boards, each containing a few
> logic gates, interconnected by wires. Logic operated at speeds of
> perhaps a Megahertz, rather than Gigahertz. Memory was scarce,
> measured in Kilobytes, rather than Gigabytes. Communication
> circuits came in Kilobits per second, not Gigabits. Persistent
> storage (disks, drums) were acquired in Megabytes, not Terabytes.
> Everything also cost a lot more than today.
> >>>>
> >>>> Computing engineering was quite different in 1969 from
> today. Every resource was scarce and expensive. Much effort went
> towards efficiency, getting every bit of work out of the available
> hardware. As technology advanced and the Arpanet evolved into the
> Internet, I often wonder how the attitudes and approaches to
> computing implementations changed over that history. We now have
> the luxury of much more powerful hardware, costing a tiny fraction
> of what a similar system might have cost in the Arpanet era. How
> did hardware and software engineering change over that time?
> >>>>
> >>>> Curiously, my multi-core desktop machine today, with its
> gigabytes of memory, terabytes of storage, and gigabits/second
> network, running the Ubuntu version of Linux, takes longer to
> "boot up" and be ready to work for me than the PDP-10 did, back
> when I used that machine on the Arpanet in the 1970s. I sometimes
> wonder what it's doing while executing those trillions of
> instructions to boot up.
> >>>>
> >>>> Jack Haverty
> >>>>
> >> --
> >> Internet-history mailing list
> >> Internet-history at elists.isoc.org
> >> https://elists.isoc.org/mailman/listinfo/internet-history
>
>