[ih] The Internet Crucible (was: Digital pioneer Geoff Huston apologises for bringing the internet to Australia)

the keyboard of geoff goodfellow geoff at iconia.com
Sun Oct 4 13:35:07 PDT 2020


you're most welcome, Andy... and following on the "theme" of Geoff H's
Thursday presentation at the NetThing internet governance conference, yours
truly was kinda reminded of this Geoff's late-1980s presentation, er,
eleemosynary publication: *The Internet Crucible.*

The first issue of *The Internet Crucible* went out in August 1989 and is
reproduced in full below.

Subsequent issues went out in September 1989 and in January and March 1990;
they can be found in *The Internet Crucible* archive at
https://iconia.com/ic/

THE CRUCIBLE                                                  INTERNET EDITION
August, 1989                                                Volume 1 : Issue 1
                                                                 (reprint)
   In this issue:
	A Critical Analysis of the Internet Management Situation

   THE CRUCIBLE is a moderated forum for the discussion of Internet issues.
   Contributions received by the moderator are stripped of all identifying
   headers and signatures and forwarded to a panel of referees.  Materials
   approved for publication will appear in THE CRUCIBLE without attribution.
   This policy encourages consideration of ideas solely on their intrinsic
   merit, free from the influences of authorship, funding sources and
   organizational affiliations.

   THE INTERNET CRUCIBLE is an eleemosynary publication of Geoff Goodfellow.

Mail contributions to:	                         crucible at fernwood.mpk.ca.us

------------------------------------------------------------------------------

	 A Critical Analysis of the Internet Management Situation:
		       The Internet Lacks Governance


				  ABSTRACT

At its July 1989 meeting, the Internet Activities Board made some
modifications in the management structure for the Internet.  An outline of
the new IAB structure was distributed to the Internet engineering community
by Dr. Robert Braden, Executive Director.  In part, the open letter stated:

   "These changes resulted from an appreciation of our successes, especially
    as reflected in the growth and vigor of the IETF, and in rueful
    acknowledgment of our failures (which I will not enumerate).  Many on
    these lists are concerned with making the Internet architecture work in
    the real world."

In this first issue of THE INTERNET CRUCIBLE we will focus on the failures
and shortcomings of the Internet.  Failures contain the lessons one often
needs to achieve success.  Success rarely leads to a search for new
solutions.  Recommendations are made for short- and long-term improvements
to the Internet.


			A Brief History of Networking

	The Internet grew out of the early pioneering work on the ARPANET.
This influence was more than technological; the Internet has also been
significantly influenced by the economic basis of the ARPANET.

	The network resources of the ARPANET (and now the Internet) are
"free".  There are no charges based on usage (unless your Internet
connection is via an X.25 Public Data Network (PDN), in which case you're
well endowed, or had better be).  Whether a site's Internet connection
transfers 1 packet/day or 1M packets/day, the "cost" is the same.
Obviously, someone pays for the leased lines, router hardware, and the
like, but this "someone" is, by and large, not the same "someone" who is
sending the packets.

	In the context of the Research ARPANET, the "free use" paradigm was
an appropriate strategy, and it has paid handsome dividends in the form of
leading-edge packet switching technologies.  Unfortunately, the current
Internet paradigm has a significant side-effect, with both management and
technical ramifications: there is no accountability, in the formal sense of
the word.

	In terms of management, it is difficult to determine who exactly is
responsible for a particular component of the Internet.  On the technical
side, responsible engineering and efficiency have been replaced by the
purchase of T1 links.

	Without an economic basis, further development of short-term
Internet technology has been skewed.  The most interesting innovations in
Internet engineering over the last five years have occurred in resource
poor, not resource rich, environments.

	Some of the best-known examples of innovative Internet efficiency
engineering are John Nagle's tiny-gram avoidance and ICMP source-quench
mechanisms documented in RFC896, Van Jacobson's slow-start algorithms and
Phil Karn's retransmission timer method.

	In the Nagle, Jacobson and Karn environments, it was not possible
or cost effective to solve the performance and resource problems by simply
adding more bandwidth -- some innovative engineering had to be done.
Interestingly enough, their engineering had a dramatic impact on our
understanding of core Internet technology.
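
	To make the flavor of this engineering concrete, here is a minimal
sketch of Karn's rule combined with the Jacobson round-trip-time estimator,
written in present-day Python with illustrative constants (a sketch only,
not code from any of the implementations named above):

class RtoEstimator:
    """Adaptive retransmission timeout: Jacobson's estimator plus Karn's rule."""

    def __init__(self, initial_rto=3.0):
        self.srtt = None        # smoothed round-trip time (seconds)
        self.rttvar = None      # smoothed mean deviation
        self.rto = initial_rto  # current retransmission timeout

    def on_ack(self, rtt_sample, was_retransmitted):
        # Karn's rule: discard samples for retransmitted segments, since the
        # ACK is ambiguous (it may match either transmission).
        if was_retransmitted:
            return
        if self.srtt is None:
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            alpha, beta = 1 / 8, 1 / 4
            self.rttvar = ((1 - beta) * self.rttvar
                           + beta * abs(self.srtt - rtt_sample))
            self.srtt = (1 - alpha) * self.srtt + alpha * rtt_sample
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)

    def on_timeout(self):
        # Back off exponentially on retransmission, capped at 64 seconds.
        self.rto = min(self.rto * 2, 64.0)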

	It should be noted that highly efficient networks are important
when dealing with technologies such as radio, where there is a finite
amount of bandwidth/spectrum to be had.  As in the Nagle, Jacobson and Karn
cases, there are many environments where adding another T1 link cannot
solve the problem.  Unless innovation continues in Internet technology, our
less than optimal protocols will perform poorly in bandwidth- or
resource-constrained environments.

	Developing at roughly the same time as Internet technology have
been the "cost-sensitive" technologies and services, such as the various
X.25-based PDNs and the UUCP and CSNET dial-up networks.  These
technologies are all based on the notion that bandwidth costs money and
the subscriber pays for the resources used.  This has the notable effect
of focusing innovation on controlling costs and maximizing the efficiency
of available resources and bandwidth.  Higher efficiency is achieved by
concentrating on sending the most information through the pipe in the most
efficient manner, thereby making the best use of the available
bandwidth/cost ratio.

	For example, bandwidth conservation in the UUCP dial-up network has
advanced by leaps and bounds thanks to Telebit, the company founded by Paul
Baran (the grandfather of packet switching technology), which manufactures
a 19.2 kbps dial-up modem especially optimized for UUCP and other
well-known transfer protocols.  For another example, although strictly
line-at-a-time terminal sessions are less "user friendly" than
character-oriented sessions, they make highly efficient use of X.25 PDN
network resources, with echoing and editing performed locally on the PAD.

	While few would argue that X.25 and dial-up CSNET and UUCP are
superior technologies, they have proved themselves both to spur innovation
and to be accountable.  The subscribers to such services appreciate the
cost of the services they use, and often such costs form a well-known
"line item" in the subscriber's annual budget.

	Nevertheless, the Internet suite of protocols is eminently
successful, judging solely by the sheer size and rate of growth of both the
Internet and the numerous private internets, both domestically and
internationally.  You can purchase Internet technology with a major credit
card from a mail order catalog.  Internet technology has achieved the
promise of Open Systems, probably a decade before OSI will be able to do
so.


			   Failures of the Internet

	The evolution and growth of Internet technology have provided the
basis for several failures.  We think it is important to examine failures
in detail, so as to learn from them.  History often tends to repeat itself.


Failure 1:- Network Nonmanagement

	The question of responsibility in today's proliferated Internet is
completely open.  For the last three years, the Internet has been suffering
from non-management.  While few would argue that a centralized czar is
necessary (or possible) for the Internet, the fact remains there is little
to be done today besides finger-pointing when a problem arises.

	In the NSFNET, MERIT is in charge of the backbone and each regional
network provider is responsible for its respective area.  However, trying
to debug a networking problem across lines of responsibility, such as
intermittent connectivity, is problematic at best.  Consider three all too
true refrains actually heard from NOC personnel at the helm:

	"You can't ftp from x to y?  Try again tomorrow, it will
	 probably work then."

	"If you are not satisfied with the level of [network]
         service you are receiving you may have it disconnected."

	"The routers for network x are out of table space for routes,
	 which is why hosts on that network can't reach your new
	 (three-month old) network.  We don't know when the routers will
	 be upgraded, but it probably won't be for another year."

	One might argue that the recent restructuring of the IAB may work
towards bringing the Internet under control, and that Dr. Vinton G. Cerf's
recent involvement is a step in the right direction.  Unfortunately, from a
historical perspective, the new IAB structure is not likely to be
successful in achieving a solution.  Now the IAB has two task forces, the
Internet Research Task Force (IRTF) and the Internet Engineering Task Force
(IETF).  The IRTF, responsible for long-term Internet research, is largely
composed of the various task forces which used to sit at the IAB level.
The IETF, responsible for the solution of short-term Internet problems, has
retained its composition.

	The IETF is a voluntary organization and its members participate
out of self-interest only.  The IETF has had past difficulties in solving
some of the Internet's problems (for example, after well over a year it has
yet to produce RFCs for either a Point-to-Point Serial Line IP or Network
Management enhancements).  It is unlikely that the IETF has the resources
to mount a concerted attack against the problems of today's ever expanding
Internet.  As one IETF old-timer put it: "No one's paid to go do these
things, I don't see why they (the IETF management) think they can tell us
what to do" and "No one is paying me, why should I be thinking about these
things?"

	Even if the IETF had the technical resources, many of the
Internet's problems are also due to lack of "hands on" management.
The IETF,

	o  Bites off more than it can chew;
	o  Sometimes fails to understand a problem before making a solution;
	o  Attempts to solve political/marketing problems with technical
	     solutions;
	o  Has very little actual power.

	The IETF has repeatedly demonstrated the lack of focus necessary to
complete engineering tasks in a timely fashion.  Further, the IRTF is
chartered to look at problems on the five-year horizon, so it is out of
the line of responsibility.  Finally, the IAB, per se, is not situated to
resolve these problems, as they are inherent in the current structure of
nonaccountability.

	During this crisis of non-management, the Internet has evolved into
a patch quilt of interconnected networks that depend on lots of
seat-of-the-pants flying to keep interoperating.  It is not an unusual
occurrence for an entire partition of the Internet to remain disconnected
for a week because the person responsible for a key connection went on
vacation and no one else knew how to fix it.  This situation is but one
example of an endemic problem of the global Internet.


Failure 2:- Network Management

	The current fury over network management protocols for TCP/IP is
but a microcosm of the greater Internet vs. OSI debate going on in the
marketplace.  While everyone in the market says they want OSI, anyone
planning on getting any work done today buys Internet technology.  So it is
with network management: the old IAB made the CMOT an Internet standard
despite the lack of a single implementation, while the only non-proprietary
network management protocol in use in the Internet is the SNMP.  The dual
network management standardization blessings will no doubt have the effect
of confusing end-users of Internet technology--making it appear there are
two choices for network management, although only one choice, the SNMP,
has been implemented.  The CMOT choice isn't implemented, doesn't work, or
isn't interoperable.

	To compound matters, after spending a year trying to achieve
consensus on the successor to the current Internet standard SMI/MIB, the
MIB working group was disbanded without ever producing anything: the
political climate prevented them from resolving the matter.  (Many
congratulatory notes were sent to the chair of the group thanking him for
his time.  This is an interesting new trend for the Internet--congratulating
ourselves on our failures.)

	Since a common SMI/MIB could not be advanced, an attempt was made
to de-couple the SNMP and the CMOT (RFC1109).  The likely result of RFC1109
will be that the SNMP camp will continue to refine their experience towards
workable network management systems, whilst the CMOT camp will continue the
never-ending journey of tracking OSI while producing demo systems for
trade-show exhibitions.  Unfortunately the end-user will remain ever
confused because of the IAB's controversial (and technically questionable)
decision to elevate the CMOT prior to implementation.

	While the network management problem is probably too large for the
SNMP camp to solve by themselves, they seem to be the only people who are
making any forward progress.


Failure 3:- Bandwidth Waste

	Both the national and regional backbone providers are fascinated
with T1 (and now T3) as the solution to resource problems.  T1/T3 seems to
have become the Internet panacea of the late '80s.  You never hear anything
from the backbone providers about work being done to get hosts to implement
the latest performance/congestion refinements to IP, TCP, or above.
Instead, you hear about additional T1 links and plans for T3 links.  While
T1 links certainly have more "sex and sizzle" than efficient technology
developments like slow-start, tiny-gram avoidance and linemode TELNET, the
majority of users on the Internet will probably get much more benefit from
properly behaving hosts running over a stable backbone than from the
current situation of misbehaving and semi-behaved hosts over an
intermittent catenet.
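
	The arithmetic behind tiny-gram avoidance shows why host behavior
matters more than raw trunk speed: a single keystroke sent across the net
in its own packet carries roughly 40 bytes of IP and TCP header for 1 byte
of payload, so well under 3% of the bits on that shiny new T1 are useful
data.  A minimal sketch of the RFC896 idea, in present-day Python with an
illustrative segment size (a sketch only, not any host's actual
implementation):

MSS = 536  # maximum segment size in bytes (an illustrative value)

class NagleSender:
    """Coalesces small writes while earlier data is still unacknowledged."""

    def __init__(self, transmit):
        self.transmit = transmit   # callable that puts one segment on the wire
        self.buffer = b""
        self.unacked = 0           # bytes sent but not yet acknowledged

    def write(self, data):
        self.buffer += data
        # Full segments may always go out; a small tail goes out only if
        # nothing is in flight, otherwise it waits for the next ACK.
        while len(self.buffer) >= MSS:
            self._send(MSS)
        if self.buffer and self.unacked == 0:
            self._send(len(self.buffer))

    def ack(self, nbytes):
        self.unacked = max(0, self.unacked - nbytes)
        # An ACK releases whatever small data accumulated in the meantime.
        if self.unacked == 0 and self.buffer:
            self._send(len(self.buffer))

    def _send(self, n):
        segment, self.buffer = self.buffer[:n], self.buffer[n:]
        self.unacked += n
        self.transmit(segment)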


Failure 4:- Routing

	The biggest problem with routing today is that we are still using
phase I (ARPANET) technology, namely EGP.  The EGP is playing the role of
routing glue in providing the coupling between the regional IGP and the
backbone routing information.  It was designed to accommodate only a single
point of attachment to the catenet (which was all DCA could afford with the
PSNs).  With lower line costs, however, one can now build a reasonably
inexpensive network using redundant links.  Yet the EGP does not provide
enough information, nor does the model it is based upon support multiple
connections between autonomous systems.  Work is progressing in the
Interconnectivity WG of the IETF to replace EGP.  They are in the process
of redefining the model to solve some of the current needs.  BGP, the
Border Gateway Protocol (RFC1105), is an attempt to codify some of the
ideas the group is working on.

	Other problems with routing are caused by regionals wanting a
direct back-door connection to another regional.  These connections require
some sort of interface between the two routing systems.  These interfaces
are built by hand to avoid routing loops.  Loops can be caused when
information sent into one regional network is sent back towards the source.
If the source doesn't recognize the information as its own, packets can
flow until their time-to-live field expires.
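
	A toy sketch (present-day Python, hypothetical router names) of the
loop just described: two routers that each believe the other is the way to
a network will bounce a packet back and forth until its time-to-live runs
out.

routes = {
    "A": {"net-X": "B"},   # router A believes B knows the way to net-X
    "B": {"net-X": "A"},   # router B believes A does (a hand-built back door gone wrong)
}

def forward(dest, router, ttl):
    hops = 0
    while ttl > 0:
        next_hop = routes[router].get(dest)
        if next_hop is None:
            return "dropped: no route"
        ttl -= 1
        hops += 1
        router = next_hop
    return f"dropped: TTL expired after {hops} hops"

print(forward("net-X", "A", ttl=32))   # prints: dropped: TTL expired after 32 hops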

	Routing problems are also caused by the interior routing protocol,
or IGP.  This is the routing protocol which is used by the regionals to
pass information to and from their users.  The users themselves can use a
different IGP than the regional.  Depending on the number of connections a
user has to the regional network, routing loops can be an issue.  Some
regionals pass around information about all known networks in the entire
catenet to their users.  This information deluge is a problem with some
IGPs.  Newer IGPs such as the new OSPF from the IETF and IGRP from cisco
attempt to provide some information hiding by adding hierarchy.  OSPF is
the Internet's first attempt at using a Dijkstra-type algorithm as an IGP.
BBN uses such an algorithm to route between its packet switch nodes below
the 1822 or X.25 layer.

	Unstable routing is caused by hardware or host software.  Older
BSD software sets the TTL field in the IP header to a small number.  The
Internet today is growing, and its diameter has exceeded the software's
ability to reach the other side.  This problem is easily fixed by
knowledgeable systems people, but one must be aware of the problem before
one can fix it.
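
	As an illustration of the fix, a host simply needs a default TTL
larger than the network's diameter; in present-day Python the per-socket
knob looks like this (the value 64 is illustrative, shown only to make the
point):

import socket

# Raise the IP time-to-live on a socket so packets can cross a catenet whose
# diameter exceeds an old, small default.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 64)   # 64 hops, an illustrative value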

	Routing problems are also perceived when in fact a serial line
problem or hardware problem is the real cause.  If a serial line is
intermittent or quickly cycles from the up state into the down state and
back again, routing information will not be supplied in a uniform or smooth
manner.  Most current IGPs are Bellman-Ford based and employ some
stabilizing techniques to stem the flow of routing oscillations due to
"flapping" lines.  Often when a route to a network disappears, it may take
several seconds for it to reappear.  This can occur at the source router,
which waits for the route to "decay" from the system.  This pause should be
short enough that active connections persist but long enough that all
routers in the routing system "forget" about routes to that network.  Older
host software with over-active TCP retransmission timers will time out
connections instead of persevering in the face of this problem.  Also,
routers, according to RFC1009, must be able to send ICMP unreachables when
a packet is sent to a route which is not present in their routing database.
Some host products on the market close down connections when a single ICMP
unreachable is received.  This bug flies in the face of the Internet
parable "be generous in what you accept and rigorous in what you send".

	Many of the perceived routing problems are really complex multiple
interactions of differing products.


			    Causes of the Failures

The Internet failures and shortcomings can be traced to several sources:
			
	First and foremost, there is little or no incentive for efficiency
and/or economy in the current Internet.  As a direct result, the resources
of the Internet and its components are limited by factors other than
economics.  When resources wear thin, congestion and poor performance
result.  There is little to no incentive to make things better: if 1 packet
out of 10 gets through, things "sort of work".  It would appear that
Internet technology has found a loophole in the "Tragedy of The Commons"
allegory--things get progressively worse and worse, but eventually
something does get through.

	The research community is interested in technology and not
economics, efficiency or free markets.  While this tack has produced the
Internet suite of protocols, the de facto International Standard for Open
Systems, it has also created an atmosphere of intense in-breeding which is
overly sensitive to criticism and quite hardened against outside influence.
Meanwhile, the outside world goes about developing economically viable and
efficient networking technology without the benefit of direct participation
on the part of the Internet.

	The research community also appears to be spending a lot of its
time trying to hang onto the diminishing number of research dollars
available to it (one problem of being a successful researcher is that
eventually your sponsors want you to be successful at other things).
Despite this, the research community actively shuns foreign technology
(e.g., OSI) but, inexplicably, has not recently produced much innovation in
new Internet technology.  There is also a dearth of new and nifty
innovative applications on the Internet.  Business as usual on the Internet
is mostly FTP, SMTP and Telnet or Rlogin, as it has been for many years.
The most interesting example of a distributed application on the Internet
today is the Domain Name System, which is essentially an administrative
facility, not an end-user service.

	The engineering community must receive equal blame in these
matters.  While there have been some successes on the part of the
engineering community, such as those by Nagle, Jacobson and Karn mentioned
above, the output of the IETF, namely RFCs and corresponding
implementations, has been surprisingly low over its lifetime.

	Finally, the Internet has become increasingly dependent on vendors
for providing implementations of Internet technology.  While this is no
doubt beneficial in the long term, the vendor community, rather than
investing "real" resources when building these products, does little more
than shrink-wrap code written primarily by research assistants at
universities.  This has led to cataclysmic consequences (e.g., the Internet
worm incident, where Sendmail, "debug" command and all, was packaged and
delivered to customers without proper consideration).  Of course, when
problems are found and fixed (either by the vendor's customers or the
software sources), the time to market for these fixes is commonly a year or
longer.  Thus, while vendors are vital to the long-term success of Internet
technology, they certainly don't receive high marks in the short term.


			       Recommendations

Short-term solutions (should happen by year's end):

	In terms of hardware, the vendor community has advanced to the
point where the existing special-purpose technologies (Butterfly, NSSs) can
be replaced by off-the-shelf routers at far less cost and with superior
throughput and reliability.  Obvious candidates for upgrade are both the
NSFNET and ARPANET backbones.  Given the extended unreliability of the
mailbridges, the ARPA core is an immediate candidate (even though the days
of net 10 are numbered).

	In terms of software, ALL devices in the Internet must be network
manageable.  This becomes ever more critical as problems must be resolved
across lines of responsibility.  Since SNMP is the only open network
management protocol functioning in the Internet, all devices must support
SNMP and the Internet standard SMI and MIB.
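
	As a hypothetical illustration of what universal manageability buys
a NOC, polling every device for a single standard MIB variable already
separates "reachable and answering" from "back to finger-pointing".  The
device names and the snmp_get routine below are stand-ins, not any real
product's API (present-day Python):

SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"                # sysUpTime from the Internet-standard MIB
DEVICES = ["gw1.example.net", "gw2.example.net"]    # hypothetical device names

def poll_all(snmp_get):
    # snmp_get(host, oid) is assumed to return a value or raise on timeout;
    # substitute whatever SNMP library a site actually uses.
    for host in DEVICES:
        try:
            uptime = snmp_get(host, SYS_UPTIME_OID)
            print(f"{host}: sysUpTime = {uptime}")
        except Exception:
            print(f"{host}: not answering SNMP")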

	Host implementations must be made to support the not-so-recent TCP
enhancements (e.g., those by Nagle, Jacobson and Karn) and the more recent
linemode TELNET.

	The national and regional providers must coordinate to share
network management information and tools so that user problems can be dealt
with in a predictable and timely fashion.  Network management tools are a
big help, but without the proper personnel support above them, the benefits
cannot be fully leveraged.

	The Internet needs leadership and hands-on guidance.  No one is
seemingly in charge today, and the people who actually care about the net
are pressed into continually fighting the small, immediate problems.

Long-term solutions:

	To promote network efficiency and a free-market system for the
delivery of Internet services, it is proposed to switch the method by which
the network itself is supported.  Rather than a top-down approach where the
money goes from funding agencies to the national backbone or regional
providers, it is suggested that the money go directly to end-users
(campuses), who can then select from among the network service providers
the one that best satisfies their needs and budget.

	This is a strict economic model: by playing by the full set of the
laws of economics, a lot of the second-order problems of the Internet, both
present and on the horizon, can be brought to heel.  The Internet is no
longer a research vehicle; it is a vibrant production facility.  It is time
to acknowledge this by using a realistic economic model in the delivery of
Internet services to the community (member base).

	When Internet sites can vote with their pocketbooks, some new
regionals will be formed; some, those which are non-performant or
uncompetitive, will go away; and, the existing successful ones will grow.
The existing regionals will then be able to use their economic power, as
any consumer would, to ensure that the service providers (e.g., the
national backbone providers) offer responsive service at reasonable prices.
"The Market" is a powerful forcing function: it will be in the best
interests of the national and regional providers to innovate, so as to be
more competitive.  Further, such a scheme would also allow the traditional
telecommunications providers a means for becoming more involved in the
Internet, thus allowing cross-leverage of technologies and experience.

	The transition from top-down to economic model must be handled
carefully, but this is exactly the kind of statesmanship that the Internet
should expect from its leadership.
-------


On Sun, Oct 4, 2020 at 9:58 AM Andrew G. Malis <agmalis at gmail.com> wrote:

> Geoff,
>
> Thanks for forwarding. I've heard Geoff (H.) speak many times, and I can
> hear this in his own voice.
>
> Cheers,
> Andy
>
>
> On Sun, Oct 4, 2020 at 11:44 AM the keyboard of geoff goodfellow via
> Internet-history <internet-history at elists.isoc.org> wrote:
>
>> Huston says the internet is a 'gigantic vanity-reinforcing distorted
>> TikTok
>> selfie' and web security is 'the punchline to some demented sick joke'.
>> But
>> Australia's first Privacy Commissioner thinks he's being optimistic.
>> [...]
>>
>> https://www.zdnet.com/article/digital-pioneer-geoff-huston-apologises-for-bringing-the-internet-to-australia/
>>
>>
-- 
Geoff.Goodfellow at iconia.com
living as The Truth is True


