[ih] History from 1960s to 2025 (ARPANET to TCP)
Bill Nowicki
winowicki at yahoo.com
Sun Jan 4 14:38:14 PST 2026
Thanks to the others who have already given very informed answers on the IMP (PSN?) design evolution.
I can offer a few anecdotes about some of the many times I had to deal with the "protocol off-load" idea over the years. One observation is that a "front-end" protocol processor made a lot of sense when the people building the network (or the customers!) did not want to write much code on the main computer, for example when the programming environment on the main computer was much harder to use than the front-end's, for technical, practical, or political reasons. Another argument was that it kept the protocol implementations identical even while the operating systems differed.

Another common situation was that the main computer's cycles were much more expensive, or slower, or often both. In the 80s, for example, one could get cheap microprocessors while "mainframes" were still expensive and behind in technology. That was a temporary situation, though. In the time it took to get the off-load product out the door, the main computer could be replaced by a faster model, while the off-load was frozen at the technology of whenever the off-load board was designed. After a while people realized that they could just run their applications on those commodity processors too, and if they invested in a faster main CPU, then not only the network code but the application would run faster as well.
The killer was that many issues are inherently end-to-end. Flow control, error detection (checksums etc.), and security all need to be handled across the interface between the front-end and the main computer anyway. Like most concepts, off-load went through the hype cycle several times, eventually finding the places where it makes sense: many interfaces now avoid per-byte (as opposed to per-packet) overhead with gather memory access, fragmentation and reassembly, and a few other tricks.
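
To make the per-byte point concrete, here is a minimal sketch of the Internet checksum (RFC 1071), one of the classic per-byte costs that off-load hardware targets. The function name and calling convention are just for illustration, not any particular stack's API:

    #include <stddef.h>
    #include <stdint.h>

    /* Minimal sketch of the Internet checksum (RFC 1071).
     * Every byte of the packet passes through this loop, which is
     * exactly the per-byte cost (as opposed to per-packet cost)
     * that checksum off-load and gather DMA try to avoid. */
    uint16_t inet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                 /* sum 16-bit words */
            sum += ((uint32_t)data[0] << 8) | data[1];
            data += 2;
            len -= 2;
        }
        if (len == 1)                     /* pad a trailing odd byte */
            sum += (uint32_t)data[0] << 8;

        while (sum >> 16)                 /* fold carries back in */
            sum = (sum & 0xffff) + (sum >> 16);

        return (uint16_t)~sum;            /* one's-complement result */
    }

And of course if the front-end computes this for you, the data can still be corrupted on the way between the front-end and the host, which is the end-to-end argument in one sentence.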
Nowadays, with so much of the CPU devoted to running AI, or to security code defending against attackers who use AI, things like the byte-swapping we used to worry about are almost comically small in their demands on compute power.
Bill Nowicki

On Sunday, January 4, 2026 at 01:40:50 PM PST, William Westfield via Internet-history <internet-history at elists.isoc.org> wrote:
>
> Some TCP implementers in the 1980s chose to use a "front end" approach, placing all of the TCP mechanisms in a separate processor somehow attached to their main computer. AFAIK, such implementations have mostly disappeared.
This sort of implementation is still widely used in the “deeply embedded” market, with chips and modules from the likes of WizNet and Espressif allowing small microcontrollers (e.g., 32 KB of memory) to talk to the Internet. (One could debate the logic of a “network processor” with significantly greater resources and performance than the “host”, but… still happening.)
It can be “interesting” how the “bottlenecks” move in such implementations. (Presumably that is what killed the “large computer” networking front ends as well: it is awkward to have your connection to the front end be slower, and perhaps more complex, than the network connection itself.)
BillW
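
For the curious, here is a rough sketch of what that kind of front-end looks like from the microcontroller side, assuming an Espressif module running its stock AT-command firmware over a UART. The uart_* helpers are hypothetical stand-ins for whatever serial routines the host actually has, and the module is assumed to already be in station mode; the AT commands themselves are from Espressif's documented AT command set:

    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical host serial helpers, declared here for the sketch. */
    extern void uart_write_line(const char *line);       /* send line + CRLF */
    extern void uart_write(const char *buf, size_t len); /* send raw bytes   */
    extern void uart_expect(const char *token);          /* block until seen */

    void http_get_via_front_end(void)
    {
        /* Join the access point (credentials are placeholders). */
        uart_write_line("AT+CWJAP=\"my-ssid\",\"my-password\"");
        uart_expect("OK");

        /* The module's own TCP/IP stack opens the connection;
         * the host never touches an IP packet. */
        uart_write_line("AT+CIPSTART=\"TCP\",\"example.com\",80");
        uart_expect("OK");

        /* Announce the byte count, wait for the '>' prompt, then send. */
        const char *req = "GET / HTTP/1.0\r\n\r\n";
        char cmd[32];
        snprintf(cmd, sizeof cmd, "AT+CIPSEND=%d", (int)strlen(req));
        uart_write_line(cmd);
        uart_expect(">");
        uart_write(req, strlen(req));
        uart_expect("SEND OK");
    }

The module's stack does all the TCP work while the host just shuttles lines of text over the UART, which is the 1980s front-end division of labor in miniature, bottleneck included.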