[ih] The history of "This" 0.0.0.0/8 network?

Jack Haverty jack at 3kitty.org
Tue Feb 12 15:14:13 PST 2019


On 2/12/19 1:48 PM, Grant Taylor wrote:
> On 02/12/2019 01:20 PM, Jack Haverty wrote:
>> Well, I was there during the transition from ARPANET to TCP version 
>> 2.x to version 4, and I remember some of the issues and reasoning. 
>> You probably won't find any of this written down in IENs or RFCs.
> I hadn't even thought about TCP/IP prior to version 4.  That sounds like 
> you went through two transitions.  Or was TCP/IP (?) version 2 used in 
> parallel with ARPANET protocols?  (Memory is failing if ARPANET used 
> its own protocols between IMPs / TIPs / Hosts.)

ARPANET existed for about a decade before TCP appeared, with its own
protocols for doing remote login, file transfer, and electronic mail. 
The Internet was built "on top of" the ARPANET, with gateways
interconnecting ARPANET, SATNET, PRNETs, and a few other networks in the
early days.   TCP was essentially another "user-level" protocol that
used the ARPANET, just as Telnet, FTP, and Mail (SMTP) did.  

IP-capable hosts and gateways established connections through the
ARPANET to each other as needed.  Each such connection was essentially a
virtual circuit (reliable, sequenced delivery).  Where a Telnet
connection would carry ascii streams to and from a terminal, IP
connections would carry IP packets.  The ARPANET didn't treat them any
differently.

TCP V2 had a single packet header with the whole Internet address. 
There were quite a few versions as we sorted out issues and tried
different ideas.  I remember for example TCP V2.5, V2.5+epsilon,
V2.5+2epsilon, etc.  There were a lot of such transitions, but there
were only a handful of people involved in implementing TCP on a handful
of hosts, so transitions were easy.

TCP V4 made major structural changes to the TCP V2 world, in particular
splitting the header into separate TCP and IP headers.  This was done
for many reasons, e.g., to enable the use of non-reliable datagrams for
experiments with packet voice, video, etc.

(There was a TCP V3 but it lasted only long enough for one
implementation, which had no one to talk to....)

There was a great push to standardize TCP as a DoD standard, which would
then be mandatory for all DoD computer procurements.  That was what
drove the documentation of TCP/IP V4.  Standardization is like casting
something in concrete, so transitions then got much more difficult.

For several years, the Internet was essentially in a "fuzzy peach" mode,
where the ARPANET was the peach, and all of the LANs that were popping
up represented the fuzz.  At some point along the way, we observed that
a very basic primitive network type was simply a wire, which had only
two addresses - "this end" and "the other end".  That allowed us to
replace the linkages between gateways, which had been ARPANET
connections, with a physical wire connection.  You could even use the
same circuit from the phone company, unplugging it from the IMP and
plugging it directly into a gateway.

So there were many transitions.  The Internet was one big Erector Set. 
(Google it)

>> In the beginning, ... there was the ARPANET, the Packet Radio Net(s), 
>> and SATNET.   All of these had their own addressing scheme, e.g., in 
>> the ARPANET it was a concatenation of IMP# and Host# on that IMP (An IMP 
>> was a packet switch to which phone lines and cables to "host computers" 
>> were attached.)
> ACK
>
> How were those IMP# & Host# typically written down or spoken between people?
>
>> The IP address, expressed in x.x.x.x notation, could be used to specify 
>> both a particular network and a Host address on that network.  …  The 
>> ARPANET addresses could be encoded into 24 bits.  So, for ARPANET (and 
>> some of its clones), an IP address like 10.2.0.5 would mean network #10, 
>> Host #2 on IMP #5.  Host #2 identified a specific physical connector on 
>> the back of the IMP cabinet.
> Did "network #10" have meaning on the ARPANET?  Or was network #10 the 
> representation of ARPANET vs PRNs / SATNET / etc?

Network #10 (or any specific number) identified a particular physical
network.  E.g., one of the clones of the ARPANET was used inside of BBN,
and it was assigned as network #3.   So 10.2.0.5 was a computer attached
to ARPANET IMP 5, while 3.2.0.5 was a different computer, attached to
IMP 5 on the BBN internal net.

Every physical network had its own network number.  One of the neat
features of the IP world was that a particular physical network could
have several network numbers.  Essentially, you could create different
parts of the Internet on top of the same physical network.  E.g., while
debugging new code, we could bring up gateways attached to the ARPANET
but not as net #10.  A gateway might be on IMP 5 as host 2, but with an
IP address of 11.2.0.5, and operate totally disjoint from the rest of
the Internet.  This was very useful for implementing transitions, since
you could operate the "new" system simultaneously with the "old" system,
but using the same physical network underneath.

The Internet components (gateways and IP code in hosts) didn't have any
knowledge of, or way to influence, the underlying network's addresses. 
Those bits were just a "black box" to the IP world, intended for the
underlying network's switches to understand.

>
> I find the Host #2 on IMP #5 a bit odd thinking about the structure of 
> an IPv4 address as we use them today.  I would have expected IMP #2 and 
> Host #5.  At least if using a form of hierarchical routing.
>
> If not, I'd be afraid that you would end up with a site having the 
> following IP addresses:
>
>   - 10.1.0.5 - host #1 on IMP #5
>   - 10.2.0.5 - host #2 on IMP #5
>   - 10.3.0.5 - host #3 on IMP #5
>
> I feel like that wouldn't scale and route nearly as well as:
>
>   - 10.5.0.1 - host #1 on IMP #5
>   - 10.5.0.2 - host #2 on IMP #5
>   - 10.5.0.3 - host #3 on IMP #5
>
> Maybe I'm misunderstanding what you're saying.
It could have been done either way.  There was no hierarchical routing
across address boundaries. The "network part" (e.g., 10) identified a
specific network, but that network did not know or care about that
number.  The "host part" was defined for the specific type of underlying
network in whatever way made sense on that network.  So ARPANETs used
Host.0.IMP but it could have been IMP.0.Host.  But the IP software
didn't know or care about that structure.
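
The decomposition described above can be sketched in a few lines of code.  This is just an illustration of the Host.0.IMP layout for a class-A address as recalled here; the function and field names are made up for the example, and the IP software of the day did not actually parse the host part this way (those 24 bits were opaque to IP and meaningful only to the underlying network):

```python
def parse_arpanet_address(ip: str) -> dict:
    """Decode a class-A address using the ARPANET's Host.0.IMP layout.

    The first octet is the network number.  The remaining 24 bits were
    a black box to IP; on the ARPANET they encoded the host number, an
    unused middle octet, and the IMP number.
    """
    net, host, _unused, imp = (int(octet) for octet in ip.split("."))
    return {"network": net, "host": host, "imp": imp}

# 10.2.0.5: host #2 on IMP #5 of network #10 (the ARPANET)
print(parse_arpanet_address("10.2.0.5"))
# 3.2.0.5: host #2 on IMP #5 of network #3 (the BBN internal net)
print(parse_arpanet_address("3.2.0.5"))
```

The same host/IMP bits under a different network number give a different Internet address, which is exactly the property used above to run a "new" system and an "old" system over the same physical network.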
>
>> Routers (then known as gateways), hosts, and anybody else could take 
>> an IP address, and figure out the network address on that particular 
>> network by simple algorithm. This worked for ARPANET, SATNET, and PRNETs.
> Yep.  I get that.  It makes sense to me.
>
> I occasionally ask interview candidates why the "default gateway" is 
> called "default" and "gateway".  Occasionally, a candidate surprises me 
> and gets it correct.  :-)
Ask them what PING stands for...  Spoiler: Packet InterNet Groper, as
told to me by Dave Mills way back when.
>
>> LANs broke this scheme.  In particular, Ethernet addresses were too big to 
>> be stuffed into even the 24 bits of a class-A IP address.  So algorithmic 
>> translations were not possible with those types of networks.  That led to 
>> the creation of ARP, and the use of broadcast capabilities of Ethernets, 
>> to implement a mechanism for doing translations.
> Then there were things like DECnet Phase IV where the MAC address gets 
> modified to match the protocol address.  I think I've heard of other 
> protocols doing this too.  But not TCP/IP.
>
>> I recall discussions of this in the ICCB/IAB, the Internet Meetings, the 
>> hotel bars, etc.   The goal of the Internet was to be able to integrate 
>> any type of network into the network as long as it met the very basic 
>> requirements of being able to carry packets.  (We even mused about a 
>> "network" based on carrier pigeons).
> I think I've read stories where people have actually made such things 
> work as a Proof of Concept.  TCP/IP over Bongo Drums comes to mind.
>
> I've also heard people use UUCP, admittedly not TCP/IP, via avian 
> carriers (read: homing pigeons) with USB flash drives containing bag 
> files.  }:-)  It's not practical, but it does work.
>
>> The unsolved problem (at the time) was how to deal with networks that 
>> had addresses too big to fit in IP addresses, and that also did not have 
>> any broadcast capability.
> Ew.
>
> I guess things like ATM qualify there.  Needing some sort of out of band 
> mapping.
>
>> I can't remember specifically, but that might have come up while we 
>> implemented the "VAN Gateway", which used the public X.25 network to 
>> interconnect between the US ARPANET and University College London in 
>> Europe.
> I guess X.25 qualifies there too.
>
>> IIRC, we could manually configure the two gateways at either end of the 
>> X.25 path to know each others addresses and be able to use the X.25 path 
>> to carry IP traffic between routers.
> It's my understanding that X.25 is (was) largely a switched serial 
> circuit network.  Meaning that you had some management, similar to 
> dialing a modem, which would establish a virtual circuit between X.25 
> endpoints and then provide an end to end serial path, much like a (null) 
> modem cable.
>
> I'm guessing that the TCP/IP to X.25 gateways collective looked like a 
> single hop in a TCP/IP traceroute.

Yep, the entire public X.25 infrastructure was a single hop to IP. 

The ARPANET operated internally as a virtual circuit kind of network;
later in its lifetime, IMPs supported X.25 as a way of attaching a host
computer to the network.

>
>> But that didn't provide any mechanism for a Host computer to interact 
>> with the gateway, using the X.25 network as a "LAN" of sorts.  So the 
>> entire public X.25 network was a network within the Internet, but we 
>> didn't have any way to connect "host computers" on that network.
> Yep.  Similar to how IPv4 can carry IPv6, or vice versa between 
> endpoints.  But hosts on either side of the gateways (outer IPv{6,4} vs 
> middle IPv{4,6}) don't have a (good) way to communicate directly with 
> each other.  Such is the nature of bridged / tunneled / encapsulated 
> traffic.
>
>> Another unsolved problem was how host computers would find out basic 
>> information that they needed to know how to use the Internet - e.g., 
>> the address of the gateway(s) on their net, their network number, etc. 
>> Manual configuration wasn't too bad when there were only large computers 
>> on the ARPANET, but with the advent of workstations, the administration 
>> became unwieldy.  That led to DHCP.
> ACK
>
> I remember when DHCP was a thing that big networks would set up.  Early 
> in my career, many of the networks that I worked on didn't have DHCP 
> yet.  Or they were IPX or NetBIOS.
>
>> There was a lot of other experimental work going on, e.g., in handling 
>> of voice and video, and the use of multicast capabilities, which led to 
>> implementations such as MBone.  Lots of problems to be solved there.
>>
>> I don't remember any specific discussions about 0.x.x.x, but I suspect 
>> it was reserved as a placeholder for future use - i.e., don't assign 
>> any network as network #0 - it may be useful when someone figures out 
>> how to approach the problems above.
> Was that same mentality extended to include the zeroth (all zero bit) 
> host address?  Or was that something else?
I don't remember anything about that specifically.  There were some
special cases like reserving 255 for broadcast and multicast.  Typically
all-zero would have been reserved because an all-zero address most
likely would be caused by a bug somewhere, so having it be illegal
limited the carnage such a bug might cause.
>
>> Remember, the Internet was always an Experiment (hence the IENs), so 
>> lots of stuff in the Internet technology was there to help create an 
>> experimental platform.
> Yep.
>
>> All of the above occurred in the 1978-1982 or so timeframe.   I assume 
>> that the 0.x.x.x "hook" got used later as the Internet evolved.
> ACK



More information about the Internet-history mailing list