[ih] "The Internet runs on Proposed Standards"
Jack Haverty
jack at 3kitty.org
Sat Dec 3 12:37:19 PST 2022
Thanks Andy. That's what I suspected, but I can now only see things from a
User's perspective.
I still have doubts about "The Internet runs on Proposed Standards".
Does anybody know -- is it true? How do you know? Personally, I haven't
found any way, at least as a User, to tell what technology is inside all
the equipment, software, services, protocols, algorithms, et al. that are
operating between my keyboard/screen and yours. It could all be
Standards of some ilk, or it could all be Proprietary. It might
conform to the spec, or have some zero-day flaw. How do you tell?
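About the most a User can do is observe externally visible behavior. As a
purely illustrative sketch (my own, not from any RFC; "example.org" is just
a hypothetical stand-in for any HTTPS server), a few lines of Python will
show which TLS version and cipher suite a remote endpoint negotiates -- and
nothing more:

    import socket
    import ssl

    host = "example.org"   # hypothetical endpoint, purely illustrative
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # We can see which Standards the far end *claims* to speak...
            print("negotiated protocol:", tls.version())    # e.g. 'TLSv1.3'
            print("negotiated cipher:  ", tls.cipher()[0])
    # ...but nothing about whether either end actually conforms to the
    # spec, or what proprietary machinery sits on the path in between.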
Focussing on Internet History, I think there's been an interesting
evolution over the years. The technology has of course advanced, as
evidenced by thousands of RFCs. But the "Process" has also changed
over time.
Back in the 80s, the mantra "Rough Consensus and Running Code" ruled.
Someone had an idea, debated it with other researchers, and eventually
someone wrote the code so the idea could be tried out on the live (if
very small then) Internet. That would reveal problems, revisions would
be made, and more experimentation would follow. Little got written
down except in emails. When I got involved in implementing TCP, the
version was 2.x, and I remember quickly progressing through 2.5,
2.5+epsilon, etc. as experience drove changes.
Vint came into a meeting one day and advised that DoD was declaring TCP
a DoD Standard. That meant somebody had to create the specification as
a document, and Jon Postel took on the task. Our "documentation is in
the code" excuse wasn't acceptable. The rest of us scrambled to try to
describe what exactly our code did, so that Jon could capture it in
writing. We didn't have much time. But Jon produced the spec and
published it as an RFC. It defined the new DoD Standard -- TCP3, the
next obvious "major version". TCP3 became a mandatory requirement for
DoD procurements.
Unfortunately, with the time pressure we quickly found that there were
flaws in the TCP3 spec. It didn't match what the code did. So a
revision was needed and rather quickly TCP4 was published. Some
contractors (e.g., Ford Aerospace IIRC) sadly got caught in the middle
and had to implement TCP3 as required by their contract, only to
discover that there was no one they could communicate with. It was a
rather frenzied period in Internet history.
While this was all happening, other efforts were creating more pieces of
the "process". NBS (now NIST) created a test suite and testing program,
so that contractors implementing TCP to fulfill their contract had a
means of verifying that they had actually met that requirement. It
checked a box on their list of deliverables.
ARPA had decided that the TCP and related Internet technologies would be
open and freely available. Not only the documentation, but also
implementations were made freely available for multiple types of
computers of the era, either to run directly or to serve as guidelines
for new implementations on other computers. IIRC, as NSF got involved
it followed a similar policy.
Educational institutions, seeing the need to add Networking to their
curricula, selected TCP as the vehicle for teaching. It was readily
available and free. Within a few years, a "pipeline" had been created,
producing a steady stream of new graduates who knew about the TCP
technologies and how to use them. Industry quickly adopted TCP since it
could be observed to work, at events such as Interop, and there was a
supply of new technical staff who, even as new grads, already knew how to
use it.
Rough consensus. Running code. Operational experience.
.....Fast forward 40 years.....
I'm not very familiar with how the process works today, or how we got
from there to here. But my impression is that today there are few if
any of those old "process" mechanisms still in place. Technology is
defined in RFCs, but there may not be any open and freely available
implementations for others to use or examine. There seem to be no
mechanisms for any kind of "certification" that an implementation even
exists in whatever hardware/software you might have in front of you.
Few people, even techies, seem to be aware of the technology available
in the RFCs, let alone what it is for or how to use it. Users have
no clue how to use the technology even when it is present (looking at
you, PGP). No one seems to care much about getting a technology into
actual widespread use, except within their own product, service, walled
garden, etc.
My impression is that the role of technology development has changed
a lot over the years. The "deliverable" of the process today seems to
be RFCs, defining technology that is placed on a public "shelf" and
offered for anybody to use as they like. The "process" that causes
technology to be actually deployed into field operation is someone
else's task.
If you look at other infrastructures, there are some parallels to the
Internet, which is arguably a new infrastructure itself. E.g., electricity was
invented and early users experienced fires, explosions, electrocutions,
and other such nasty side-effects. But over time rules were developed,
building codes created, inspectors put in place, grids and procedures
developed, and electricity made much more reliable and safe as an
infrastructure.
Similar evolutions happened with roads, water, fuel, transportation, and
other infrastructures. Perhaps the Internet is just too young as an
infrastructure for similar mechanisms to have been created yet. Maybe
government(s) will step (back) in soon.
One of the reasons I recall for why TCP succeeded where
OSI failed is that the TCP community produced working code while OSI
produced only very expensive paper. The Internet Project of the 80s
produced code and reluctantly also documentation. The focus of
IAB/IESG/IETF/IRTF/etc. in 2022 seems to be limited to documentation.
For Internet Historians: How did we get from there to here? And why?
Perhaps the Internet has simply become OSI.
Jack Haverty
On 12/3/22 06:34, Andrew G. Malis wrote:
> Brian et al,
>
> Having worked for both a bunch of vendors and a major operator, I
> think it's more accurate to say that the Internet runs on a mix of
> IETF Standards, Proposed Standards, internet drafts, and
> various proprietary features from either a single vendor, or several
> cooperating vendors pushed together by a common customer. In addition,
> operators have been known to develop and use their own proprietary HW
> and/or SW as well.
>
> Cheers,
> Andy
>
>
> On Thu, Dec 1, 2022 at 9:16 PM Brian E Carpenter via Internet-history
> <internet-history at elists.isoc.org> wrote:
>
> I'm not sure whether this actually started before RFC1310 (March
> 1992), but certainly since then there have been multiple steps on
> the standards track: Proposed Standard, Draft Standard (no longer
> assigned) and Internet Standard.
>
> (Rumour has it that this started in pure imitation of the ISO
> standards process. Vint can probably speak to the truth of that.)
>
> But, as I first heard from Fred Baker, "The Internet runs on
> Proposed Standards", because most IETFers can't be bothered with
> the bureaucracy to take the next step. Draft Standard was
> abolished for new work to reduce the bureaucracy, but it hasn't
> had much effect. We did advance IPv6 to Internet Standard, but
> most WGs just don't bother.
>
> In any case, the formal "STD" designation doesn't really mean much.
>
> For a current non-IETF effort, I've drawn a diagram about how to
> interpret the status of RFCs. It can be found at
> https://github.com/becarpenter/book6/blob/main/8.%20Further%20Reading/8.%20Further%20Reading.md
>
> Regards
> Brian Carpenter
>
> On 02-Dec-22 09:52, touch at strayalpha.com wrote:
> > On Nov 30, 2022, at 1:36 PM, Jack Haverty <jack at 3kitty.org> wrote:
> >>
> >> Well, maybe...
> >>
> >> RFC5227 describes itself as a proposed standard. Has it
> subsequently become an actual standard? I don't see it in the
> "Official Internet Protocol Standards" maintained at
> rfc-editor.org but maybe it had later
> revisions.
> >
> > That distinction isn’t all that significant. There are a LOT of
> protocols that never progressed beyond the initial “PS” status:
> > https://www.rfc-editor.org/standards#PS
> > Progression requires not only some specific hurdles, but also
> the will and effort of someone to walk the spec through that
> process. The latter is more often the limitation.
> >
> >> If it or a descendant is a Standard, does that prevent the
> creation of "tools" such as the Flakeway I described? RFCs are
> full of "SHOULD" and "MUST" directives, which systems such as
> Flakeway probably violated. If RFC5227 was universally and
> correctly implemented, would it prevent someone from implementing
> a Flakeway-like tool, assuming of course they don't feel the need
> to follow the RFCs' rules?
> >>
> >> If RFC5227 et al do in fact prevent such behavior, how does one
> know whether or not the prescribed mechanisms are actually present
> in one's equipment? I just looked and I have 54 devices on my
> home Ethernet. Some are wired, some are wifi, and from many
> different companies. How do I tell if they've all correctly
> implemented the mechanisms prescribed in the RFCs?
> >
> > The IETF provides no mechanisms for protocol validation. That’s
> true for all MUSTs, SHOULDs, and MAYs for all protocols.
> >
> >> So, is it really "fixed" even today?
> >>
> >> I guess it depends on how you define "fixed”.
> >
> > Doesn’t it always? :-)
> >
> > Joe
> >
> >
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>