[ih] Preparing for the splinternet

Miles Fidelman mfidelman at meetinghouse.net
Sun Mar 13 14:23:40 PDT 2022


re.  "An open nature is apparently insufficient.   A strong "forcing 
function" is also insufficient, except in its own silo where its force 
is effective."

OR... in a situation where a strong "anchor" customer (or vendor) drives 
a marketplace.

Hence my mention of Medicare's influence on the standardization of 
reporting formats across the medical industry.  Similarly, one might 
expect that if the folks who enforce HIPAA regulations were to mandate, 
or at least endorse, X.509-based PKI and S/MIME for data exchange, that 
would push back on the variety of proprietary email and record-sharing 
systems in the medical community.
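
(To make concrete what "X.509-based PKI and S/MIME for data exchange" 
would look like in practice, here is a rough sketch of signing a record 
with the Python "cryptography" package.  The certificate/key file names 
and the record payload are purely illustrative assumptions on my part - 
nothing here is mandated by HIPAA or anyone else:

    # Illustrative sketch only: sign a record as S/MIME using an X.509
    # credential.  File names and payload are hypothetical.
    # Requires: pip install cryptography
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    with open("provider_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("provider_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    record = b"Patient: ...\nEncounter: ...\n"   # record being exchanged

    signed = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(record)
        .add_signer(cert, key, hashes.SHA256())
        .sign(serialization.Encoding.SMIME,
              [pkcs7.PKCS7Options.DetachedSignature])
    )
    # "signed" is now an S/MIME message that any standards-compliant
    # mail client or records system could verify against the provider's
    # certificate chain.

The point being: the building blocks are already in commodity tooling 
and in every major mail client; what's missing is the mandate.)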

Or consider what might happen if the IRS and the Social Security 
Administration adopted an open standards policy.

Similarly, we have the ubiquity of Kerberos and Shibboleth in the 
academic community.  And Active Directory in corporate settings. Or, for 
that matter, Login.Gov.

Cheers,

Miles



Jack Haverty via Internet-history wrote:
> X.509 may be used "EVERYWHERE in government" (at least some 
> governments); but it's not used everywhere else, e.g., in the much 
> larger community of Internet users worldwide.
>
> Forcing functions seem to create silos.  TCP/IP was nurtured in such a 
> silo, where a "force" had effect.  It started with the US Defense 
> Department, which mandated it for their world, while at the same time 
> keeping their options open for planned adoption of OSI technology.
>
> TCP/IP broke out of its silo, spread throughout the world, and reduced 
> competing silos (SNA, DECNet, OSI, SPX/IPX, ...) to oblivion.
>
> A few other silos had a similar experience, e.g., DNS and NTP which 
> seem to have no competitors now.   Dave Mills and his crew built the 
> NTP silo.  IIRC, he just needed good clocks to perform some 
> experiments on the neonatal Internet.  So he built them as NTP.  Web 
> technology (HTTP, HTML, URLs) started in Tim Berners-Lee's silo but 
> similarly broke out and became ubiquitous.   Competitors such as 
> Gopher, and even well-funded and pre-existing ones like Lotus Notes, 
> didn't endure.   In the early days of the Web, there were several 
> silos competing to provide security for Electronic Commerce.  But 
> rather quickly HTTPS became dominant and seems ubiquitous today.
>
> People who build silos sometimes build them with fragile materials, 
> easily broken.   E.g., TCP/IP was built in a government silo, but was 
> explicitly made very "open" for anyone to adopt. I've always thought 
> such open nature was important for ubiquity. But while TCP/IP V4 broke 
> out of its silo and became ubiquitous, TCP/IP V6, with presumably the 
> same characteristics, has still not replaced V4.   Internet 
> technologies such as IRC (Internet Relay Chat) provided an open 
> mechanism for people to carry on public discussions.  But that didn't 
> prevent the emergence of myriad social media mechanisms that 
> collectively dominate today as competing silos.   The battle 
> continues, and IRC still exists as a minor contestant, but it's not 
> likely to win.   Similarly, NNTP provided a mechanism for 
> disseminating news across the Internet; there's lots of news today on 
> the 'net, but I don't think it travels using NNTP.
>
> An open nature is apparently insufficient.   A strong "forcing 
> function" is also insufficient, except in its own silo where its force 
> is effective.
>
> Ray Tomlinson's introduction of @ has dominated for decades.  But now 
> it seems more and more likely to show up as part of a Twitter identity 
> than an Internet one.
>
> DNS seems ubiquitous, but I sense that its dominance is waning. There 
> are too many "Acme Plumbing" websites now, making it hard to remember 
> the DNS name for the one in my neighborhood.  Even "Four Seasons" 
> doesn't always get you what you expect...
>
> I find myself now using search engines and browser history to remember 
> where to find things I use, rather than remembering their DNS names.   
> Ubiquity and dominance seem to not be permanent.
>
> As Toerless pointed out, silos (and splinters) enable innovation. A 
> good thing.  They also encourage complexity and walled gardens. Bad 
> things, IMHO.
>
> So why do some silos break open and their technology spreads to become 
> dominant and ubiquitous, while others languish for decades?
>
> That's my question perhaps some Historian can answer someday.   I 
> suspect the answer will be complicated.
>
> Jack Haverty
>
> On 3/13/22 08:01, Miles Fidelman via Internet-history wrote:
>> Jack Haverty via Internet-history wrote:
>>> if you look at history from the Users' perspective, IMHO the problem 
>>> has been a lack of follow-through.  Lots of technology (protocols, 
>>> formats, algorithms) has been created, documented in 1000s of RFCs 
>>> and such.  But unless it gets to the field and becomes an inherent 
>>> and pervasive capability of the Internet, it doesn't really exist 
>>> for the Users, whether they be individuals or corporations or 
>>> governments or network or cloud operators.
>>>
>>> Two good examples of technology beyond basic TCP/IP that have made 
>>> that leap are DNS and NTP.  You can pretty much count on them to be 
>>> available no matter where you connect to the Internet and what kind 
>>> of device you use to make that connection.
>>>
>>> In contrast, many other technologies may "exist" but haven't made 
>>> that leap.
>>>
>>> E.g., X.509 and certificates may exist, but IMHO they aren't widely 
>>> used.   I occasionally see my browser advise me that a certificate 
>>> is invalid.   But the only path forward it offers is to ignore the 
>>> error if I want to continue doing whatever I'm trying to do.  I 
>>> typically say "go ahead", and I suspect most Users do the same. 
>>> Similarly, I have PGP and S/MIME credentials, but I rarely use them, 
>>> and rarely receive any email from others using them.
>>
>> But... they're used EVERYWHERE in government, particularly the 
>> military - where you need to plug a CAC card into your computer, just 
>> to log in.
>>
>> They're used, because you HAVE to use them.
>>
>> Same again for things like Microsoft Active Directory in the 
>> corporate environment, or Shibboleth in the academic world. (Which, 
>> in turn, are based on Kerberos, if memory serves.)
>>
>> If the folks who enforce HIPAA were to pass a regulation requiring a 
>> standard format, and standard protocols, for exchanging medical 
>> records - based on X.509 certificates and S/MIME - it's guaranteed 
>> that every medical systems provider would migrate from their 
>> proprietary formats and protocols to the standard.  (Particularly 
>> since pretty much every mail and web client has the capabilities 
>> built in.)
>>
>>>
>>> Control of Internet content, to provide child protection or other 
>>> constraints, was developed by W3C in the 90s (look up PICS - 
>>> Platform for Internet Content Selection).  It was even implemented 
>>> in popular browsers of the day.  As a rep to W3C I helped get that 
>>> in place as a general mechanism for attaching metadata to Web 
>>> content, but AFAIK it never got any real use in the broad Internet 
>>> and by now seems to have disappeared.
>>>
>>> Perhaps some historian will someday explain why such mechanisms 
>>> don't seem to make it to the field and get widely implemented, 
>>> deployed, and used.  Why are they different from TCP/IP, DNS, NTP 
>>> and maybe a few others which had success in the early stages of the 
>>> Internet?
>>
>> Lack of a forcing function - be it a vacuum demanding to be filled, 
>> or legislation, or buying behavior of a large client, or customer 
>> demand.
>>
>> Miles
>>
>>
>>>
>>> Jack
>>>
>>>
>>> On 3/12/22 22:55, Toerless Eckert via Internet-history wrote:
>>>> Access control would be a lovely topic to take to the IETF: both 
>>>> for something like what Jack described - a review of historic 
>>>> methods to learn from (that would be a very helpful informational 
>>>> RFC, but a lot of work, I guess) - and, from today's perspective, 
>>>> IMHO, for what access control methods could be recommended to 
>>>> avoid the problematic filtering at the network layer.
>>>>
>>>> For example, we just had another incident of a court in Germany 
>>>> issuing blocking orders to German ISPs (blocking that typically 
>>>> operates on DNS), against a porn service that wasn't providing 
>>>> adequate child protection.  How do we get rid of such recurring 
>>>> challenges to the basic Internet infrastructure (at the IP and 
>>>> naming level...)?
>>>>
>>>> Funnily, I am just trying to watch a movie on Disney+ ("All King 
>>>> Man") while in Germany with a USA-based account, and the account 
>>>> only allows me to select <= PG14.  Talked with tech support, and 
>>>> the only solution was to temporarily update the account location 
>>>> to Germany, because (as I figure) it's logically impossible to 
>>>> automate this: in Germany, kids are allowed/disallowed to watch 
>>>> different movies than in the USA, but travelling parents might be 
>>>> caught by surprise (especially on the "allowed" part).  So that's 
>>>> from an arguably kid-friendly global content provider.  Now try to 
>>>> imagine how governments are struggling, given that many parents do 
>>>> expect them to provide some useful degree of protection for kids.  
>>>> If the answer to the problem is "well, we can't figure out how to 
>>>> do this for the Internet at large", then this will only further 
>>>> increase the monopolization of services by those global providers 
>>>> that do.
>>>>
>>>> Sorry - too much current-day text.  The Internet was definitely a 
>>>> lot easier in the 1990s and earlier, when we didn't have enough 
>>>> kids on the Internet to worry about that issue.
>>>>
>>>> How about "The Internet was built for adults"?
>>>>
>>>> Cheers
>>>>      Toerless
>>>>
>>>>
>>>> On Sat, Mar 12, 2022 at 09:14:05PM -0500, Miles Fidelman via 
>>>> Internet-history wrote:
>>>>> A helpful perspective.  Thanks Jack.
>>>>>
>>>>> Not sure I completely agree with all of it (see below) - but 
>>>>> pretty close.
>>>>>
>>>>> Jack Haverty via Internet-history wrote:
>>>>>> IMHO, the Internet has been splintered for decades, going back to 
>>>>>> the
>>>>>> days of DDN, and the introduction of EGP which enabled carving up 
>>>>>> the
>>>>>> Internet into many pieces, each run by different operators.
>>>>>>
>>>>>> But the history is a bit more complex than that.   Back in the 
>>>>>> mid-80s,
>>>>>> I used to give a lot of presentations about the Internet. One of the
>>>>>> points I made was that the first ten years of the Internet were all
>>>>>> about connectivity -- making it possible for every computer on the
>>>>>> planet to communicate with every other computer.  I opined then 
>>>>>> that the
>>>>>> next ten years would be about making it *not* possible for every
>>>>>> computer to talk with every other -- i.e., to introduce 
>>>>>> mechanisms that
>>>>>> made it possible to constrain connectivity, for any of a number of
>>>>>> reasons already mentioned. That was about 40 years ago -- my ten 
>>>>>> year
>>>>>> projection was way off target.
>>>>>>
>>>>>> At the time, the usage model of the Internet was based on the way 
>>>>>> that
>>>>>> computers of that era were typically used.  A person would use a
>>>>>> terminal of some kind (typewriter or screen) and do something to 
>>>>>> connect
>>>>>> it to a computer.  He or she would then somehow "log in" to that
>>>>>> computer with a name and password, and gain the ability to use 
>>>>>> whatever
>>>>>> programs, data, and resources that individual was allowed to 
>>>>>> use.  At
>>>>>> the end of that "session", the user would log out, and that terminal
>>>>>> would no longer be able to do anything until the next user 
>>>>>> repeated the
>>>>>> process.
>>>>>>
>>>>>> In the early days of the Internet, that model was translated into 
>>>>>> the
>>>>>> network realm.  E.g., there was a project called TACACS (TAC Access
>>>>>> Control System) that provided the mechanisms for a human user to 
>>>>>> "log
>>>>>> in" to the Internet, using a name and a password. DDN, for example,
>>>>>> issued DDN Access Cards which had your name and network password 
>>>>>> that
>>>>>> enabled a human user to log in to the DDN as a network.
>>>>>>
>>>>>> Having logged in to the network, you could then still connect to 
>>>>>> your
>>>>>> chosen computer as before.  But you no longer had to log in to that
>>>>>> computer.   The network could tell the computer which user was
>>>>>> associated with the new connection, and, assuming the computer 
>>>>>> manager
>>>>>> trusted the network, the user would be automatically logged in 
>>>>>> and be
>>>>>> able to do whatever that user was allowed to do.   This new 
>>>>>> feature was
>>>>>> termed "Double Login Elimination", since it removed the necessity 
>>>>>> to log
>>>>>> in more than once for a given session, regardless of how many 
>>>>>> computers
>>>>>> you might use.
>>>>>>
>>>>>> Those mechanisms didn't have strong security, but it was 
>>>>>> straightforward
>>>>>> to add it for situations where it was required. The basic model 
>>>>>> was that
>>>>>> network activity was always associated with some user, who was
>>>>>> identified and verified by the network mechanisms. Each computer 
>>>>>> that
>>>>>> the user might use would be told who the user was, and could then 
>>>>>> apply
>>>>>> its own rules about what that user could do.   If the user made a
>>>>>> network connection out to some other computer, the user's 
>>>>>> identity would
>>>>>> be similarly passed along to the other computer.
>>>>>>
>>>>>> At about that time (later 1980s), LANs and PCs began to spread 
>>>>>> through
>>>>>> the Internet, and the user-at-a-terminal model broke down. 
>>>>>> Instead of
>>>>>> users at terminals making connections to the network, now there were
>>>>>> users at microcomputers making connections.   Such computers were
>>>>>> "personal" computers, not under management by the typical "data 
>>>>>> center"
>>>>>> or network operator but rather by individuals.    Rather than 
>>>>>> connecting
>>>>>> to remote computers as "terminals", connections started to also 
>>>>>> be made
>>>>>> by programs running on those personal computers.   The human user 
>>>>>> might
>>>>>> not even be aware that such connections were happening.
>>>>>>
>>>>>> With that evolution of the network/user model, mechanisms such as 
>>>>>> TACACS
>>>>>> became obsolete.  Where it was often reasonable to trust the
>>>>>> identification of a user performed by a mechanism run by the 
>>>>>> network or
>>>>>> a datacenter, it was difficult to similarly trust the word of one 
>>>>>> of the
>>>>>> multitude of microcomputers and software packages that were now
>>>>>> involved.
>>>>>>
>>>>>> So, the notion that a "user" could be identified and then 
>>>>>> constrained in
>>>>>> use of the resources on the Internet was no longer available.
>>>>>>
>>>>>> AFAIK, since that time in the 80s, there hasn't been a new "usage 
>>>>>> model"
>>>>>> developed to deal with the reality of today's Internet. We each have
>>>>>> many devices now, not just one personal computer.   Many of them are
>>>>>> online all of the time; there are no "sessions" now with a human
>>>>>> interacting with a remote computer as in the 80s. When we use a 
>>>>>> website,
>>>>>> what appears on our screen may come from dozens of computers 
>>>>>> somewhere
>>>>>> "out there".   Some of the content on the screen isn't even what we
>>>>>> asked for.   Who is the "user" asking for advertising popups to
>>>>>> appear?   Did I give that user permission to use some of my screen
>>>>>> space?   Who did?
>>>>>>
>>>>>> User interaction with today's network is arguably much more 
>>>>>> complex than
>>>>>> it was 40 years ago.  IMHO, no one has developed a good model of 
>>>>>> network
>>>>>> usage for such a world, that enables the control of the resources
>>>>>> (computing, data) accessed across the Internet.   For mechanisms 
>>>>>> that
>>>>>> have been developed, such as privacy-enhanced electronic mail,
>>>>>> deployment seems to have been very spotty for some reason. We get
>>>>>> email from identified Users, but can we trust that the email 
>>>>>> actually
>>>>>> came from that User? When the Web appeared, the Internet got really
>>>>>> complicated.
>>>>>>
>>>>>> Lacking appropriate mechanisms, users still need some way to 
>>>>>> control who
>>>>>> can utilize what.   So they improvise and generate ad hoc point
>>>>>> solutions.  My bank wants to interact with me safely, so it sets 
>>>>>> up a
>>>>>> separate account on its own computers, with name, password, and 
>>>>>> 2-factor
>>>>>> authentication.   It can't trust the Internet to tell it who I 
>>>>>> am.   It
>>>>>> sends me email when I need to do something, advising me to log in 
>>>>>> to my
>>>>>> account and read its message to me there, where it knows that I'm 
>>>>>> me,
>>>>>> and I know that it's my bank.   It can't trust Internet email for 
>>>>>> more
>>>>>> than advising me to come in to its splinter of the Internet.
>>>>>>
>>>>>> All my vendors do the same.  My newspaper.  My doctors. My media
>>>>>> subscriptions.  Each has its own "silo" where it can interact 
>>>>>> with me
>>>>>> reliably and confidently.   Some of them probably do it to better 
>>>>>> make
>>>>>> money.  But IMHO most of them do it because they have to - the 
>>>>>> Internet
>>>>>> doesn't provide any mechanisms to help.
>>>>> I'm not sure that's really the case.  We do, after all, have things 
>>>>> like
>>>>> X.509 certificates, and various mechanisms defined on top of 
>>>>> them.  Or, in
>>>>> the academic & enterprise worlds, we have IAM mechanisms that work 
>>>>> across
>>>>> multiple institutions (e.g., Shibboleth and the like).
>>>>>> So we get lots of "splintering". IMHO that has at least partially
>>>>>> been driven by the lack of mechanisms within the Internet 
>>>>>> technology to
>>>>>> deal with control of resources in ways that the users require. So 
>>>>>> they
>>>>>> have invented their own individual mechanisms as needs arose.  
>>>>>> It's not
>>>>>> just at the router/ISP level, where splintering can be caused by 
>>>>>> things
>>>>>> like the absence of mechanisms for "policy routing" or "type of 
>>>>>> service"
>>>>>> or "security" that's important to someone.
>>>>> And here, I'll come back to commercial interests as driving the show.
>>>>>
>>>>> In the academic world - where interoperability and 
>>>>> resource/information
>>>>> sharing are a priority - we have a world of identity federations.  
>>>>> Yes, one
>>>>> has to have permissions and such, but one doesn't need multiple 
>>>>> library
>>>>> cards to access multiple libraries, or to make interlibrary 
>>>>> loans.  For that
>>>>> matter, we can do business worldwide, with one bank account or 
>>>>> credit card.
>>>>>
>>>>> But, when it comes to things like, say, distributing medical 
>>>>> records, it
>>>>> took the Medicare administrators to force all doctors' offices, 
>>>>> hospitals,
>>>>> etc. to use the same format for submitting billing records. Meanwhile
>>>>> commercial firms have made a fortune creating and selling portals and
>>>>> private email systems - and convincing folks that the only way 
>>>>> they can meet
>>>>> HIPAA requirements is to use said private systems.  And now 
>>>>> they've started
>>>>> to sell their users on mechanisms to share records between 
>>>>> providers (kind
>>>>> of like the early days of email - "there are more folks on our 
>>>>> system than
>>>>> the other guys, so we're your best option for letting doctors 
>>>>> exchange
>>>>> patient records").  Without a forcing function for 
>>>>> interoperability (be it
>>>>> ARPA funding the ARPANET specifically to enable resource sharing, or
>>>>> Medicare, or some other large institution) - market forces, and 
>>>>> perhaps
>>>>> basic human psychology, push toward finding ways to segment 
>>>>> markets, isolate
>>>>> tribes, carve off market niches, etc.
>>>>>
>>>>> Come to think of it, the same applies to "web services" - we 
>>>>> developed a
>>>>> perfectly good protocol stack, and built RESTful services on top 
>>>>> of it.  But
>>>>> somebody had to go off and reinvent everything, push all the 
>>>>> functions up to
>>>>> the application layer, and make everything incredibly baroque and
>>>>> cumbersome.  And then folks started to come to their senses and 
>>>>> standardize, a bit, on how to do RESTful web services in ways 
>>>>> that sort of
>>>>> work for everyone.  (Of course, there are those who are trying to 
>>>>> repeat the
>>>>> missteps, with "Web 3.0," smart contracts, and all of that stuff.)
>>>>>> "Double Login" momentarily was eliminated, but revived and has 
>>>>>> evolved
>>>>>> into "Continuous Login" since the Internet doesn't provide what's 
>>>>>> needed
>>>>>> by the users in today's complex world.
>>>>> A nice way of putting it.
>>>>>
>>>>> Though, perhaps it's equally useful to view things as "no login." 
>>>>> Everything
>>>>> is a transaction, governed by a set of rules, accompanied by 
>>>>> credentials and
>>>>> currency.
>>>>>
>>>>> And we have models for that that date back millennia - basically 
>>>>> contracts
>>>>> and currency.  Later we invented multi-part forms & checking 
>>>>> accounts.  Now
>>>>> we have a plethora of mechanisms - all doing basically the same 
>>>>> thing - and
>>>>> competing with each other for market share.  (Kind of like 
>>>>> standards, we
>>>>> need a standard way of talking to each other - so let's invent a 
>>>>> new one.)
>>>>>
>>>>> Maybe, we can take a breath, take a step backwards, and start 
>>>>> building on
>>>>> interoperable building blocks that have stood the test of time.  
>>>>> In the same
>>>>> way that e-books "work" a lot better than reading on laptops, and now
>>>>> tablets are merging the form factor in ways that are practical.  
>>>>> Or chat, in
>>>>> the form of SMS & MMS messaging, is pretty much still the standard 
>>>>> for
>>>>> reaching anybody, anywhere, any time.
>>>>>
>>>>> But... absent a major institution pushing things forward (or 
>>>>> together)... it
>>>>> probably will take a concerted effort, by those of us who 
>>>>> understand the
>>>>> issues, and are in positions to specify technology for large 
>>>>> systems, or
>>>>> large groups/organizations, to keep nudging things in the right 
>>>>> direction,
>>>>> when we have the opportunity to do so.
>>>>>
>>>>>> I was involved in operating a "splinternet" corporate internet in 
>>>>>> the
>>>>>> 90s, connected to "the Internet" only by an email gateway. We just
>>>>>> couldn't trust the Internet, so we kept it at arm's length.
>>>>>>
>>>>>> Hope this helps some historian....
>>>>>> Jack Haverty
>>>>> And, perhaps, offer some lessons learned to those who would prefer 
>>>>> not to
>>>>> repeat history!
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Miles
>>>>>
>>>>>
>>>>> -- 
>>>>> In theory, there is no difference between theory and practice.
>>>>> In practice, there is.  .... Yogi Berra
>>>>>
>>>>> Theory is when you know everything but nothing works.
>>>>> Practice is when everything works but no one knows why.
>>>>> In our lab, theory and practice are combined:
>>>>> nothing works and no one knows why.  ... unknown
>>>>>
>>>>> -- 
>>>>> Internet-history mailing list
>>>>> Internet-history at elists.isoc.org
>>>>> https://elists.isoc.org/mailman/listinfo/internet-history
>>>
>>
>>
>


-- 
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



