[ih] Correct name for early TCP/IP working group?

Jack Haverty jack at 3kitty.org
Tue Jan 28 08:41:41 PST 2025


[I sent this yesterday, but it hasn't come back to me yet from the 
elists.isoc.org server.   So I'm not sure what happened.   Apologies if 
you already got this.  /Jack]

The design goal of statelessness certainly made the switching fabric of 
routers/gateways simpler and more easily scaled.

But that choice didn't remove the complexity;  it simply moved it from 
the lowest level switches into the computers attached to them.
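
To make that concrete, here's a minimal sketch (in modern Python, purely 
illustrative, not anyone's actual code): a stateless switch can forward 
each datagram using nothing but the packet and a routing table, while 
any per-conversation memory has to live in the attached hosts.

    # Illustrative only: a stateless switch keeps no per-connection memory.
    def forward(packet, routing_table):
        # Each forwarding decision uses only the packet itself plus a
        # routing table that is independent of any connection -- nothing
        # is remembered between packets, so nothing grows with the number
        # of conversations passing through the switch.
        return routing_table[packet["dst"]]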

I just viewed the ICCC'72 video (Thanks, Lars!), which I think I never 
had time to see in 1972.   One of the points it makes is that the 
introduction of another computer (the IMP) as a front end for one or 
several "user" computers had a lot of advantages.  It simplified the 
tasks for each computer's programmer, minimized the workload on those 
expensive machines, and made it much easier to evolve the various 
mechanisms that provided error control, flow control, and the like, by putting 
all that complexity in the IMP under single management.

When I joined BBN in 1977, I didn't know much about how the IMPs 
worked.  But I was surrounded by ARPANET people, who by then had been 
refining the IMP's internal mechanisms for 8 years and had made many 
changes as real-world operation surfaced issues.  I learned a lot by 
osmosis.

My assignment was to implement TCP as part of the "Internet 
Experiment".  I heard plenty of reasons why it was a bad idea to rely on 
"datagrams", and intense reluctance to allow TCPs to use the IMP's 
datagram mode, for fear it would result in a crash of the entire network.

Experiments are classically done to test some theory/hypothesis.   I 
don't recall ever seeing an explanation of the "Theory of the 
Internet".  But I've always assumed it was something like "It is 
possible to construct a global data communications network using simple 
switching computers with no maintenance of state information."

The Internet Experiment was testing that hypothesis.   Today's network 
seems to say "Yes!".  It is possible to build such a network.

But there seem to be some undesired consequences.

For example, the IMP code went through multiple releases over the 
lifetime of the ARPANET.   All were carefully orchestrated to convert 
the entire network from the "old" to the "new" mechanisms, only rarely 
requiring any of the "host" computers to change their code.

In contrast, TCP rapidly went through multiple releases, including the 
major changes from TCP2 to TCP/IP4, over a period of just a few years, 
while it was still controlled by the research community. However, as the 
technology migrated into the commercial environment and grew rapidly, 
such evolution seems to have become much more difficult.  TCP/IP6 was 
defined more than 25 years ago, but TCP/IP4 is still used, which I have 
to believe makes the mechanisms inside the switching machinery more 
complex.
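
One small symptom of that coexistence is visible even at the edge: every 
host or server that wants to talk to both worlds has to carry both 
stacks.  A rough sketch using the standard Python socket API (the 
IPV6_V6ONLY default, and the port number here, are incidental details):

    # Illustrative only: a server today typically must accept both IPv4
    # and IPv6 connections, decades into the transition.
    import socket

    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    # Ask for a dual-stack socket, so IPv4 clients appear as v4-mapped
    # IPv6 addresses; the default setting differs across systems.
    srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    srv.bind(("::", 8080))
    srv.listen()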

Similarly, changes have been made over the years to TCP and related 
mechanisms, all implemented in the "host" computers, which are 
responsible for error control, retransmissions, the algorithms and 
protocols governing their interactions, and the general monitoring and 
management of all that machinery.  But few end-users have the interest 
or capability to watch how their computer systems are working.
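
As a rough sketch of the kind of host-resident machinery involved 
(illustrative Python, far simpler than any real TCP; "transmit" and 
"recv_ack" are hypothetical stand-ins for the network interface): error 
control means the sending host must keep a copy of every unacknowledged 
segment and retransmit it on a timer.

    # Illustrative only: all of this reliability state lives in the host.
    import time

    class StopAndWaitSender:
        def __init__(self, transmit, recv_ack, timeout=1.0):
            self.transmit = transmit    # puts one datagram on the wire
            self.recv_ack = recv_ack    # waits for an ack until a deadline
            self.timeout = timeout
            self.seq = 0

        def send(self, payload):
            # Retransmit from the host's saved copy until acknowledged;
            # the datagram network below made no delivery promises.
            while True:
                self.transmit({"seq": self.seq, "data": payload})
                ack = self.recv_ack(time.time() + self.timeout)  # None on timeout
                if ack == self.seq:
                    self.seq ^= 1    # alternating-bit sequence number
                    return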

The IETF has released thousands of Standards, RFCs, and related 
documents, but it's almost impossible to tell which of them are actually 
implemented, correctly, in a product I might buy.   In my own LAN at 
home, I have more than 60 devices.   I have no idea what Standards they 
implement, how efficiently they use the Internet, or if I should replace 
them with some other device.  My ISP is not helpful.  They just carry 
datagrams.

In addition, there were a number of unsolved research questions 
concerning the switching fabric.  For example, can the network provide 
multiple types of service, such as one for bulk transfer of data and 
another for interactive needs?  At the time, we thought such a 
capability was necessary.
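
For what it's worth, the hooks for that idea are still in the IP header: 
the original "type of service" byte of RFC 791, since reinterpreted as 
DSCP (RFC 2474).  A host can still mark its traffic (a sketch in Python; 
whether any network along the path honors the marking is another 
question entirely):

    # Illustrative only: marking bulk vs interactive traffic.
    import socket

    bulk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    bulk.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x08)   # RFC 791 "high throughput"

    chat = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    chat.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)   # RFC 791 "low delay"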

Now 40 years later, perhaps such research topics are no longer relevant, 
given today's technology such as fiber.  Or maybe not.

Jack Haverty



On 1/28/25 07:53, John Day via Internet-history wrote:
> Right, and at some point (possibly then) length was less important than processing time.
>
> Packets were pretty much all the same size, although message length varied. But that was probably much less important.
>
> John
>
>> On Jan 28, 2025, at 09:04, Vint Cerf <vint at google.com> wrote:
>>
>> basically it was networks of queues with variable length messages - averages and moments as well as optimizations. We used the same mathematics to predict Arpanet behavior. I collected data via the Sigma-7 and other students modeled performance (mostly delay and throughput for message completion). The models were adapted to the Arpanet message/packet practice.
>>
>> v
>>
>>
>> On Tue, Jan 28, 2025 at 9:00 AM John Day <jeanjour at comcast.net> wrote:
>>> Correct.
>>>
>>> But I would think there would have been a big difference in the results, though I guess it would depend on what was being analyzed. If it was just the queue behavior, then there wouldn’t be a big difference.  The difference is between message switching, with long and short messages, and packet switching, with more homogeneous traffic of short messages.
>>>
>>> But there is a big difference in 'completion time' between FCFS (message-switching) and round-robin (packet switching). In this case, completion time is ‘end-to-end’ delay.  ;-)
>>>
>>> Just rambling off the top of my head too early in the morning.  ;-)
>>>
>>> Take care,
>>> John
>>>
>>>> On Jan 28, 2025, at 08:49, Vint Cerf <vint at google.com> wrote:
>>>>
>>>> Roberts would also have been aware of Len Kleinrock's queueing theory analysis of message switching which, mathematically, was not very different from packet switching. They were both at MIT if memory serves and got their Ph.D.s the same year, 1963.
>>>>
>>>> v
>>>>
>>>>
>>>> On Tue, Jan 28, 2025 at 7:50 AM John Day <jeanjour at comcast.net> wrote:
>>>>> Brian,
>>>>> I agree with you. In the Baran reports, he describes something that sounds like a datagram. However, he never explores it much other than to define hot-potato routing. His focus is very centered on survivability and resilience, which makes sense, since it was research for the DoD. There is also the consideration that, so far as I have been able to determine, all of the projects Baran was involved in afterwards were virtual-circuit, as were Roberts's.
>>>>>
>>>>> OTOH, NPL didn’t do military research, so I guessed that their impetus for exploring packet switching had to be different, and indeed it was. We have found a memo Davies wrote (and a similar one by Derek Barber) noting that Davies had attended the IFIP Congress in the US in 1965 and heard lots of papers on timesharing and the time-slicing scheduling that timesharing used. The advantage being that while batch systems did FCFS and short jobs got stuck behind long ones, timesharing interleaved jobs, and while short jobs were still delayed, their *completion* times were shorter. Davies told Derek that that was what they should do with communications, and they did. Their impetus for packet switching was very different. (But there were two problems. ;-) 1) Donald got promoted and had less time for research, ;-) and 2) the GPO got involved and the government directed NPL to concentrate on “practical” projects, so they had to move to virtual-circuits.)
>>>>>
>>>>> Scantlebury told Roberts about packet switching at the Gatlinburg conference and convinced him to use it. When Roberts returned to DC, he found he had Baran’s reports in a stack of documents but hadn’t read it yet. Based on the NPL experience Roger also convinced Roberts not to use 2.4Kbps lines but 50Kbps, which was a large part of the ARPANET success. (Slower speed would have worked but been so slow people would have said it wasn’t practical, etc.)
>>>>>
>>>>> There is much more to be said about all of this. But that seems to be the core of it. I find it very interesting how minor events have major effects.
>>>>>
>>>>> Take care,
>>>>> John Day
>>>>>
>>>>>> On Jan 27, 2025, at 20:47, Brian E Carpenter via Internet-history <internet-history at elists.isoc.org> wrote:
>>>>>>
>>>>>> Vint, and Noel,
>>>>>>
>>>>>> I just glanced through Baran's 1964 paper, and it clearly recognized
>>>>>> statelessness (and a standard packet header) as important for network
>>>>>> survivability and adaptive routing. But although he mentions networks
>>>>>> of intercontinental size, I didn't spot any discussion of scalability
>>>>>> as such.
>>>>>>
>>>>>> Interestingly, exactly the same applies to Dave Clark's 1988 "Design
>>>>>> Philosophy" paper.
>>>>>>
>>>>>> In RFC 1958, we did note as principle 3.3 that "All designs must scale
>>>>>> readily to very many nodes per site and to many millions of sites".
>>>>>> I guess that by then (1996) this was too obvious to ignore, and it was
>>>>>> written when IPv4 address exhaustion was considered inevitable.
>>>>>>
>>>>>> Maybe somebody who knows the early literature better than me can find
>>>>>> something. But it's almost as if the intrinsic scalability of stateless
>>>>>> packet switching was an unnoticed and accidental property.
>>>>>>
>>>>>> Regards
>>>>>>    Brian
>>>>>>
>>>>>> On 27-Jan-25 11:16, Vint Cerf via Internet-history wrote:
>>>>>>> statelessness was an important design choice and was made consciously so
>>>>>>> that paths were not critical to successful transport. For example we did
>>>>>>> not want to have to reassemble along a particular path. Even though we later
>>>>>>> deprecated fragmentation, at the time we thought it was important, and we did
>>>>>>> not want gateway (router) state to be necessary to accomplish reassembly
>>>>>>> regardless of path. I don't know that we recognized the scalability aspect
>>>>>>> but we definitely cared a lot about statelessness of the gateways.
>>>>>>> v
>>>>>>> On Sun, Jan 26, 2025 at 4:25 PM Noel Chiappa via Internet-history <
>>>>>>> internet-history at elists.isoc.org> wrote:
>>>>>>>>      > From: Jack Haverty jack at 3kitty.org
>>>>>>>>
>>>>>>>>      > At the time, the "ARPANET crowd" was skeptical that the "datagram"
>>>>>>>>      > nature of TCP could be made to work. Traditional networks, including
>>>>>>>>      > the ARPANET, had elaborate internal mechanisms to provide a "virtual
>>>>>>>>      > circuit" service to its users.
>>>>>>>>
>>>>>>>> I was thinking about this, and wondering if internetworking was a more
>>>>>>>> fundamental advance than the ARPANET (relegating the latter to a
>>>>>>>> 'ground-breaking experiment'), and I had another thought.
>>>>>>>>
>>>>>>>>
>>>>>>>> Internetworking (following in the track of CYCLADES) made much of the
>>>>>>>> fate-sharing aspect - that the data needed to ensure reliable transmission
>>>>>>>> was co-located with the application. One good reason for that (that we knew at
>>>>>>>> the time) was that it made the network itself simpler.
>>>>>>>>
>>>>>>>> But there's another side to that, one that was even more important, and which
>>>>>>>> I'm not sure was obvious to us at the time (1977-79), which is that because
>>>>>>>> it means the intermediate packet switches in the overall internet carry no
>>>>>>>> state about the connections travelling through them, there's no scaling
>>>>>>>> limit. This, to me, has been the single biggest reason why the Internet has
>>>>>>>> been able to grow to the stupendous size it has.
>>>>>>>>
>>>>>>>> I don't think we could have been thinking 'this aspect of lack of state in
>>>>>>>> the internet packet switches means it will scale indefinitely', because I
>>>>>>>> don't think we had any idea, at that point, about how to do path selection in
>>>>>>>> a global-scale internet - so global-scale internets could not have been in
>>>>>>>> our thinking.
>>>>>>>>
>>>>>>>> Did that infinite scalability turn out to be just a happy accident, a
>>>>>>>> side-effect of good fundamental design (but one whose true complete value
>>>>>>>> wasn't obvious to us at the time), one that moved state out of the internet
>>>>>>>> packet switches?
>>>>>>>>
>>>>>>>>          Noel
>>>>>>>> --
>>>>>>>> Internet-history mailing list
>>>>>>>> Internet-history at elists.isoc.org
>>>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>>>>>>
>>>>>> -- 
>>>>>> Internet-history mailing list
>>>>>> Internet-history at elists.isoc.org
>>>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>>
>>>>
>>>> --
>>>> Please send any postal/overnight deliveries to:
>>>> Vint Cerf
>>>> Google, LLC
>>>> 1900 Reston Metro Plaza, 16th Floor
>>>> Reston, VA 20190
>>>> +1 (571) 213 1346
>>>>
>>>>
>>>> until further notice
>>>>
>>>>
>>>>
>>
>>
>> --
>> Please send any postal/overnight deliveries to:
>> Vint Cerf
>> Google, LLC
>> 1900 Reston Metro Plaza, 16th Floor
>> Reston, VA 20190
>> +1 (571) 213 1346
>>
>>
>> until further notice
>>
>>
>>
