[ih] GOSIP & compliance

Jack Haverty jack at 3kitty.org
Sun Apr 3 12:20:00 PDT 2022


On 4/2/22 18:19, John Day via Internet-history wrote:
> So you send a flurry of UDP packets and some get through and you hope that the ones that do are useful.  Doesn’t sound great.

Very true.   But IMHO it was more complex than that, if you look at the 
overall *system* architecture - which unfortunately may never have been 
written down except maybe as fragments in notes from various meetings in 
the early 80s.   Here are some pieces that I remember....

UDP defined a "datagram mode" somewhat analogous to the ARPANET's 
"uncontrolled packets" ("type 3" IIRC).  The ARPANET operators at BBN 
were staunchly opposed to allowing such packets to be used, for fear 
that they would seriously disrupt the normal "virtual circuit" 
mechanisms internal to the ARPANET structure.  So "datagrams" on the 
ARPANET, while possible, were only rarely permitted, for specific 
experiments, between specific Hosts.

At one point in the early gateway development, I remember trying to get 
permission to have the gateways use type 3 packets on the ARPANET.   
IIRC that was never approved.   The risk of causing serious degradation 
to other users was too high, I guess.

Thinking about the whole system of the Internet, UDP packets were prime 
candidates for different handling by the mesh of gateways that might 
carry them.  In particular, we anticipated that there would be one or 
more additional "Types Of Service" that could be selected, and that such 
different Types Of Service would be treated quite differently by the 
gateways and routing mechanisms.

So, for example, the TTL field in the IP headers, i.e., Time-To-Live, was 
expected to evolve into an actual metric of Time, rather than Hops -- 
once the appropriate hardware was in place.   Then the gateways would, 
somehow, be able to detect that a particular datagram wouldn't likely 
get to its destination before its TTL expired.  Such a datagram could 
therefore be discarded as soon as that death sentence was noticed, 
rather than waiting for it to actually travel through the network and 
get discarded only when its TTL reached zero.   We expected that a lot 
of datagrams would get discarded at the gateways where a megabit-level 
LAN connected to the kilobit-level ARPANET.
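
To make that concrete, here is a minimal sketch of that early-discard
test (Python, purely illustrative; nothing like this existed in the
actual gateways, and both the time-based TTL and the transit-time
estimate are hypothetical):

    # Illustrative sketch only, not actual gateway code.  Assumes TTL
    # is expressed in seconds and that the gateway can estimate the
    # remaining transit time to the destination from its routing data.

    def should_discard_early(ttl_remaining_secs: float,
                             est_transit_secs: float) -> bool:
        """Discard now if the datagram can't reach its destination
        before its time-based TTL expires."""
        return est_transit_secs > ttl_remaining_secs

    # A datagram with 2 seconds of TTL left, facing an estimated
    # 5-second crossing of a kilobit-level path, dies here instead of
    # wasting capacity downstream:
    assert should_discard_early(2.0, 5.0)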

Other "system" aspects might involve routing.  E.g., perhaps there are 
multiple possible paths through the network, one with low latency but 
also low bandwidth, and another with high latency but high bandwidth.   
That situation existed in the early 80s when the US was covered 
coast-to-coast by both the ARPANET (low latency, tens-of-kilobits 
bandwidth) and the WidebandNet (satellite latency but high bandwidth).  
Clever routing decisions could send packets by different paths, 
depending on what they needed.
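
As a rough illustration (a Python sketch with made-up path figures
standing in for the two networks; not how any real router of the era
worked), such a routing choice might look like:

    # Illustrative sketch only.  The numbers are rough stand-ins for
    # the early-80s ARPANET and WidebandNet, not measured values.

    PATHS = {
        "arpanet":  {"latency_ms": 100, "bandwidth_kbps": 50},
        "wideband": {"latency_ms": 600, "bandwidth_kbps": 3000},
    }

    def choose_path(wants_low_delay: bool) -> str:
        """Delay-sensitive traffic takes the fast-but-thin path;
        bulk traffic takes the slow-but-fat path."""
        if wants_low_delay:
            return min(PATHS, key=lambda p: PATHS[p]["latency_ms"])
        return max(PATHS, key=lambda p: PATHS[p]["bandwidth_kbps"])

    print(choose_path(True))    # "arpanet"
    print(choose_path(False))   # "wideband"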

Decisions on how to best use the network resources at the moment, to 
satisfy the users' needs, at the moment, probably couldn't be made by 
humans.   That was where "Automated Network Management" would play a 
role, developing mechanisms and associated protocols for having 
computers make such decisions, adapting to conditions in the network as 
they changed.

Since this email thread still mentions "compliance," I'll add one other 
piece of the system.  Sometime in the early 80s, IIRC, the National 
Bureau of Standards (now NIST), created a set of conformance tests for 
verifying that a TCP implementation actually conformed to the TCP 
specification.  They also created a process by which independent labs 
could be certified as Testers, authorized to perform the tests and 
provide certificates of compliance to the applicants.  Such a 
certificate was mandatory in procurements; if you didn't have a 
certificate, the government would not be allowed to buy your product.  
This was a placeholder for some mechanisms that assured that the 
technology defined by specifications was actually what was deployed in 
the field.

That piece of the system, like many others, was really just a 
placeholder for an important component that would evolve over time.   
There were many others.   The TTL "time" being implemented as "hops" is 
one example.   The "Source Quench" mechanism, where a gateway would 
notify a sender that it had dropped one of that sender's datagrams, was 
another.   It was a placeholder for whatever kind of information would 
have to flow between the network switches and the network users (TCPs or 
even applications in Hosts) in order to implement mechanisms such as 
Congestion Control.  TOS (Type-of-Service) bits in the IP header were 
put in place, but the Services they might select were to be determined.
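
For concreteness, the Source Quench message itself was simple: ICMP
type 4, code 0, carrying the IP header plus the first 64 bits of the
dropped datagram (RFC 792).  A minimal Python sketch of building one:

    # Sketch of an ICMP Source Quench message per RFC 792: type=4,
    # code=0, checksum, 32 unused bits, then the IP header and first
    # 8 data bytes of the datagram that was dropped.
    import struct

    def icmp_checksum(data: bytes) -> int:
        # Standard Internet checksum: one's-complement sum of
        # 16-bit words, with carries folded back in.
        if len(data) % 2:
            data += b"\x00"
        s = sum(struct.unpack("!%dH" % (len(data) // 2), data))
        s = (s >> 16) + (s & 0xFFFF)
        s += s >> 16
        return ~s & 0xFFFF

    def make_source_quench(dropped_prefix: bytes) -> bytes:
        # dropped_prefix: IP header + first 8 bytes of the dropped
        # datagram, as RFC 792 requires.
        unchecked = struct.pack("!BBHI", 4, 0, 0, 0) + dropped_prefix
        csum = icmp_checksum(unchecked)
        return struct.pack("!BBHI", 4, 0, csum, 0) + dropped_prefix

The format was the easy part; the open question was what a TCP should
actually do upon receiving one, which is exactly the congestion-control
hole the placeholder left open.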

IIRC, no one thought that these placeholder mechanisms were 
well-defined, or even that they would work.   There was still much to be 
figured out, and the placeholders replaced with real mechanisms once 
somebody figured out what they should be.

The expectation was that two newly formed groups would work together to 
define and deploy the next generation of TCP/UDP/IP over the next few 
years.   The IRTF would focus the Research efforts, to perform 
experiments, analyze results, and figure out what mechanisms and 
algorithms would solve the issues involving the placeholders.   The IETF 
would take the results of that research and instantiate it into actual 
protocols, formats, message structures, and network services, and deploy 
and refine them into the operational Internet.  That would include such 
issues as how to modify an existing operational system to change its 
underlying mechanisms from Version X to Version X+1. Being able to 
define and deploy next generation mechanisms was an important 
requirement of the overall Internet system.

So, for UDP, we didn't believe at the time that UDP datagrams would be 
any less dangerous in the Internet than they had been in the ARPANET.  But 
we (at least I) believed that such problems would arise only when the 
network approached the limits of its resources. Putting in more and 
faster circuits, and more powerful switching hardware, to keep ahead of 
the user traffic demands would keep the network operating well, while 
the various "placeholder" issues were worked out and the new algorithms 
and protocols were deployed.

Part of the History of the Internet would capture how the technology and 
implementations have actually evolved over the ensuing 40 years. It might 
explain how we got to today's problems such as "buffer bloat", as a 
consequence of pouring more and more memory into the system to keep 
ahead of the demand curve.

Jack Haverty



