[ih] Preparing for the splinternet

Miles Fidelman mfidelman at meetinghouse.net
Sat Mar 12 18:14:05 PST 2022


A helpful perspective.  Thanks Jack.

Not sure I completely agree with all of it (see below) - but pretty close.

Jack Haverty via Internet-history wrote:
> IMHO, the Internet has been splintered for decades, going back to the 
> days of DDN, and the introduction of EGP which enabled carving up the 
> Internet into many pieces, each run by different operators.
>
> But the history is a bit more complex than that.   Back in the 
> mid-80s, I used to give a lot of presentations about the Internet. One 
> of the points I made was that the first ten years of the Internet were 
> all about connectivity -- making it possible for every computer on the 
> planet to communicate with every other computer.  I opined then that 
> the next ten years would be about making it *not* possible for every 
> computer to talk with every other -- i.e., to introduce mechanisms 
> that made it possible to constrain connectivity, for any of a number 
> of reasons already mentioned. That was about 40 years ago -- my ten 
> year projection was way off target.
>
> At the time, the usage model of the Internet was based on the way that 
> computers of that era were typically used.  A person would use a 
> terminal of some kind (typewriter or screen) and do something to 
> connect it to a computer.  He or she would then somehow "log in" to 
> that computer with a name and password, and gain the ability to use 
> whatever programs, data, and resources that individual was allowed to 
> use.  At the end of that "session", the user would log out, and that 
> terminal would no longer be able to do anything until the next user 
> repeated the process.
>
> In the early days of the Internet, that model was translated into the 
> network realm.  E.g., there was a project called TACACS (TAC Access 
> Control System) that provided the mechanisms for a human user to "log 
> in" to the Internet, using a name and a password. DDN, for example, 
> issued DDN Access Cards which had your name and network password that 
> enabled a human user to log in to the DDN as a network user.
>
> Having logged in to the network, you could then still connect to your 
> chosen computer as before.  But you no longer had to log in to that 
> computer.   The network could tell the computer which user was 
> associated with the new connection, and, assuming the computer manager 
> trusted the network, the user would be automatically logged in and be 
> able to do whatever that user was allowed to do.   This new feature 
> was termed "Double Login Elimination", since it removed the necessity 
> to log in more than once for a given session, regardless of how many 
> computers you might use.
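The flow Jack describes can be sketched in a few lines of (modern) Python - 
all names hypothetical, just to illustrate the trust model: the network 
authenticates the user once, asserts that identity to each host, and a host 
that trusts the network skips its own login prompt while still applying its 
own authorization rules.

```python
# Sketch of "Double Login Elimination": one network login, identity
# asserted to hosts, per-host authorization still applied locally.
# All accounts/rules below are invented for illustration.

NETWORK_ACCOUNTS = {"alice": "ddn-card-1234"}               # the network's user database
HOST_RULES = {"alice": {"read", "write"}, "bob": {"read"}}  # one host's own rules

def network_login(user, password):
    """One login to the network itself (as with TACACS on a TAC)."""
    if NETWORK_ACCOUNTS.get(user) != password:
        raise PermissionError("network login failed")
    return {"asserted_user": user}      # identity the network will vouch for

def host_accepts(connection, trust_network=True):
    """A host that trusts the network skips its own login prompt,
    but still decides for itself what the user may do."""
    if not trust_network:
        raise PermissionError("host demands its own login")
    user = connection["asserted_user"]
    return user, HOST_RULES.get(user, set())

session = network_login("alice", "ddn-card-1234")
user, perms = host_accepts(session)
print(user, sorted(perms))   # alice ['read', 'write']
```

The key design point - and the one that broke down once PCs arrived - is
that the host's trust in `asserted_user` is only as good as its trust in
whoever runs the asserting network.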
>
> Those mechanisms didn't have strong security, but it was 
> straightforward to add it for situations where it was required. The 
> basic model was that network activity was always associated with some 
> user, who was identified and verified by the network mechanisms.   
> Each computer that the user might use would be told who the user was, 
> and could then apply its own rules about what that user could do.   If 
> the user made a network connection out to some other computer, the 
> user's identity would be similarly passed along to the other computer.
>
> At about that time (later 1980s), LANs and PCs began to spread through 
> the Internet, and the user-at-a-terminal model broke down. Instead of 
> users at terminals making connections to the network, now there were 
> users at microcomputers making connections.   Such computers were 
> "personal" computers, not under management by the typical "data 
> center" or network operator but rather by individuals.    Rather than 
> connecting to remote computers as "terminals", connections started to 
> also be made by programs running on those personal computers.   The 
> human user might not even be aware that such connections were happening.
>
> With that evolution of the network/user model, mechanisms such as 
> TACACS became obsolete.  Where it was often reasonable to trust the 
> identification of a user performed by a mechanism run by the network 
> or a datacenter, it was difficult to similarly trust the word of one 
> of the multitude of microcomputers and software packages that were now 
> involved.
>
> So, the notion that a "user" could be identified and then constrained 
> in use of the resources on the Internet was no longer available.
>
> AFAIK, since that time in the 80s, there hasn't been a new "usage 
> model" developed to deal with the reality of today's Internet.  We 
> each have many devices now, not just one personal computer.   Many of 
> them are online all of the time; there are no "sessions" now with a 
> human interacting with a remote computer as in the 80s. When we use a 
> website, what appears on our screen may come from dozens of computers 
> somewhere "out there".   Some of the content on the screen isn't even 
> what we asked for.   Who is the "user" asking for advertising popups 
> to appear?   Did I give that user permission to use some of my screen 
> space?   Who did?
>
> User interaction with today's network is arguably much more complex 
> than it was 40 years ago.  IMHO, no one has developed a good model of 
> network usage for such a world, that enables the control of the 
> resources (computing, data) accessed across the Internet.   For 
> mechanisms that have been developed, such as privacy-enhanced 
> electronic mail, deployment seems to have been very spotty for some 
> reason.   We get email from identified Users, but can we trust that 
> the email actually came from that User? When the Web appeared, the 
> Internet got really complicated.
>
> Lacking appropriate mechanisms, users still need some way to control 
> who can utilize what.   So they improvise and generate ad hoc point
> solutions.  My bank wants to interact with me safely, so it sets up a 
> separate account on its own computers, with name, password, and 
> 2-factor authentication.   It can't trust the Internet to tell it who 
> I am.   It sends me email when I need to do something, advising me to 
> log in to my account and read its message to me there, where it knows 
> that I'm me, and I know that it's my bank.   It can't trust Internet 
> email for more than advising me to come in to its splinter of the 
> Internet.
>
> All my vendors do the same.  My newspaper.  My doctors.  My media 
> subscriptions.  Each has its own "silo" where it can interact with me 
> reliably and confidently.   Some of them probably do it to better make 
> money.  But IMHO most of them do it because they have to - the 
> Internet doesn't provide any mechanisms to help.
I'm not sure that's really the case.  We do, after all, have things 
like X.509 certificates, and various mechanisms defined on top of them.  
Or, in the academic & enterprise worlds, we have IAM mechanisms that 
work across multiple institutions (e.g., Shibboleth and the like).
>
> So we get lots of "splintering".    IMHO that has at least partially 
> been driven by the lack of mechanisms within the Internet technology 
> to deal with control of resources in ways that the users require. So 
> they have invented their own individual mechanisms as needs arose.  
> It's not just at the router/ISP level, where splintering can be caused 
> by things like the absence of mechanisms for "policy routing" or "type 
> of service" or "security" that's important to someone.

And here, I'll come back to commercial interests as driving the show.

In the academic world - where interoperability and resource/information 
sharing are a priority - we have a world of identity federations.  Yes, 
one has to have permissions and such, but one doesn't need multiple 
library cards to access multiple libraries, or to make interlibrary 
loans.  For that matter, we can do business worldwide, with one bank 
account or credit card.

But, when it comes to things like, say, distributing medical records, it 
took the Medicare administrators to force all doctors' offices, 
hospitals, etc. to use the same format for submitting billing records.  
Meanwhile, commercial firms have made a fortune creating and selling 
portals and private email systems - and convincing folks that the only 
way they can meet HIPAA requirements is to use said private systems.  
And now they've started to sell their users on mechanisms to share 
records between providers (kind of like the early days of email - "there 
are more folks on our system than the other guys', so we're your best 
option for letting doctors exchange patient records").  Without a 
forcing function for interoperability (be it ARPA funding the ARPANET 
specifically to enable resource sharing, or Medicare, or some other 
large institution), market forces, and perhaps basic human psychology, 
push toward finding ways to segment markets, isolate tribes, carve off 
market niches, etc.

Come to think of it, the same applies to "web services" - we developed a 
perfectly good protocol stack, and built RESTful services on top of it.  
But somebody had to go off and reinvent everything (SOAP and the WS-* 
stack), push all the functions up to the application layer, and make 
everything incredibly baroque and cumbersome.  And then folks started to 
come to their senses and standardize, a bit, on how to do RESTful web 
services in ways that sort of work for everyone.  (Of course, there are 
those who are trying to repeat the missteps, with "Web 3.0," smart 
contracts, and all of that stuff.)
>
> "Double Login" momentarily was eliminated, but revived and has evolved 
> into "Continuous Login" since the Internet doesn't provide what's 
> needed by the users in today's complex world.
A nice way of putting it.

Though, perhaps it's equally useful to view things as "no login." 
Everything is a transaction, governed by a set of rules, accompanied by 
credentials and currency.

And we have models for that that date back millennia - basically 
contracts and currency.  Later we invented multi-part forms & checking 
accounts.  Now we have a plethora of mechanisms - all doing basically 
the same thing - and competing with each other for market share.  (Kind 
of like the old joke about standards: we need a standard way of talking 
to each other - so let's invent a new one.)

Maybe we can take a breath, take a step backward, and start building on 
interoperable building blocks that have stood the test of time - in the 
same way that e-books "work" a lot better than reading on laptops, and 
tablets are now merging the form factors in ways that are practical.  
Or the way that chat, in the form of SMS & MMS messaging, is pretty 
much still the standard for reaching anybody, anywhere, any time.

But... absent a major institution pushing things forward (or 
together)... it probably will take a concerted effort, by those of us 
who understand the issues, and are in positions to specify technology 
for large systems, or large groups/organizations, to keep nudging things 
in the right direction, when we have the opportunity to do so.

>
> I was involved in operating a "splinternet" corporate internet in the 
> 90s, connected to "the Internet" only by an email gateway.  We just 
> couldn't trust the Internet so we kept it at arms length.
>
> Hope this helps some historian....
> Jack Haverty
And, perhaps, offer some lessons learned to those who would prefer not 
to repeat history!

Cheers,

Miles


-- 
In theory, there is no difference between theory and practice.
In practice, there is.  .... Yogi Berra

Theory is when you know everything but nothing works.
Practice is when everything works but no one knows why.
In our lab, theory and practice are combined:
nothing works and no one knows why.  ... unknown



