[ih] Preparing for the splinternet

Jack Haverty jack at 3kitty.org
Sat Mar 12 17:23:00 PST 2022


IMHO, the Internet has been splintered for decades, going back to the 
days of DDN and the introduction of EGP, which enabled carving up the 
Internet into many pieces, each run by a different operator.

But the history is a bit more complex than that.   Back in the mid-80s, 
I used to give a lot of presentations about the Internet. One of the 
points I made was that the first ten years of the Internet were all 
about connectivity -- making it possible for every computer on the 
planet to communicate with every other computer.  I opined then that the 
next ten years would be about making it *not* possible for every 
computer to talk with every other -- i.e., to introduce mechanisms that 
made it possible to constrain connectivity, for any of a number of 
reasons already mentioned. That was about 40 years ago -- my ten-year 
projection was way off target.

At the time, the usage model of the Internet was based on the way that 
computers of that era were typically used.  A person would use a 
terminal of some kind (typewriter or screen) and do something to connect 
it to a computer.  He or she would then somehow "log in" to that 
computer with a name and password, and gain the ability to use whatever 
programs, data, and resources that individual was allowed to use.  At 
the end of that "session", the user would log out, and that terminal 
would no longer be able to do anything until the next user repeated the 
process.

In the early days of the Internet, that model was translated into the 
network realm.  E.g., there was a project called TACACS (TAC Access 
Control System) that provided the mechanisms for a human user to "log 
in" to the Internet, using a name and a password.   DDN, for example, 
issued DDN Access Cards which had your name and network password that 
enabled a human user to log in to the DDN as a network.

Having logged in to the network, you could then still connect to your 
chosen computer as before.  But you no longer had to log in to that 
computer.   The network could tell the computer which user was 
associated with the new connection, and, assuming the computer manager 
trusted the network, the user would be automatically logged in and be 
able to do whatever that user was allowed to do.   This new feature was 
termed "Double Login Elimination", since it removed the necessity to log 
in more than once for a given session, regardless of how many computers 
you might use.

Those mechanisms didn't have strong security, but it was straightforward 
to add stronger protection for situations where it was required.  The 
basic model was 
that network activity was always associated with some user, who was 
identified and verified by the network mechanisms.   Each computer that 
the user might use would be told who the user was, and could then apply 
its own rules about what that user could do.   If the user made a 
network connection out to some other computer, the user's identity would 
be similarly passed along to the other computer.
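The flow described above -- log in once to the network, which then 
vouches for your identity to each host you connect to -- can be 
sketched roughly like this (a toy model, not actual TACACS; all names, 
hosts, and credentials here are invented for illustration):

```python
# Toy sketch of the "log in once, network vouches for you" model.
# Not real TACACS -- just the trust relationships described above.

NETWORK_ACCOUNTS = {"jack": "ddn-card-password"}  # hypothetical credentials


class Network:
    def __init__(self):
        self.sessions = {}  # terminal -> authenticated user

    def login(self, terminal, user, password):
        # The network itself verifies name and password, exactly once.
        if NETWORK_ACCOUNTS.get(user) == password:
            self.sessions[terminal] = user
            return True
        return False

    def connect(self, terminal, host):
        # On an onward connection, the network tells the host who the
        # user is -- no second login required ("Double Login Elimination").
        user = self.sessions.get(terminal)
        return host.accept(user) if user else "no network session"


class Host:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = permissions  # user -> allowed actions

    def accept(self, user):
        # The host trusts the network's word, then applies its own
        # local rules about what that user may do.
        allowed = self.permissions.get(user, [])
        return f"{user} logged in to {self.name}; may: {allowed}"


net = Network()
net.login("tty1", "jack", "ddn-card-password")
print(net.connect("tty1", Host("bbn-tenex", {"jack": ["read", "run"]})))
```

The fragility the rest of this note describes lives in `Host.accept`: 
once the machines asserting "this is jack" were unmanaged PCs rather 
than an operator-run network, that trust was no longer reasonable.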

At about that time (later 1980s), LANs and PCs began to spread through 
the Internet, and the user-at-a-terminal model broke down. Instead of 
users at terminals making connections to the network, now there were 
users at microcomputers making connections.   Such computers were 
"personal" computers, not under management by the typical "data center" 
or network operator but rather by individuals.    Rather than connecting 
to remote computers as "terminals", connections started to also be made 
by programs running on those personal computers.   The human user might 
not even be aware that such connections were happening.

With that evolution of the network/user model, mechanisms such as TACACS 
became obsolete.  Where it was often reasonable to trust the 
identification of a user performed by a mechanism run by the network or 
a datacenter, it was difficult to similarly trust the word of one of the 
multitude of microcomputers and software packages that were now involved.

So, the notion that a "user" could be identified, and then constrained 
in their use of resources on the Internet, was no longer tenable.

AFAIK, since that time in the 80s, there hasn't been a new "usage model" 
developed to deal with the reality of today's Internet.  We each have 
many devices now, not just one personal computer.   Many of them are 
online all of the time; there are no "sessions" now with a human 
interacting with a remote computer as in the 80s.  When we use a 
website, what appears on our screen may come from dozens of computers 
somewhere "out there".   Some of the content on the screen isn't even 
what we asked for.   Who is the "user" asking for advertising popups to 
appear?   Did I give that user permission to use some of my screen 
space?   Who did?

User interaction with today's network is arguably much more complex than 
it was 40 years ago.  IMHO, no one has developed a good model of network 
usage for such a world, that enables the control of the resources 
(computing, data) accessed across the Internet.   For mechanisms that 
have been developed, such as privacy-enhanced electronic mail, 
deployment seems to have been very spotty for some reason.   We get 
email from identified Users, but can we trust that the email actually 
came from that User?   When the Web appeared, the Internet got really 
complicated.

Lacking appropriate mechanisms, users still need some way to control who 
can utilize what.   So they improvise and generate ad hoc point 
solutions.  My bank wants to interact with me safely, so it sets up a 
separate account on its own computers, with name, password, and 2-factor 
authentication.   It can't trust the Internet to tell it who I am.   It 
sends me email when I need to do something, advising me to log in to my 
account and read its message to me there, where it knows that I'm me, 
and I know that it's my bank.   It can't trust Internet email for more 
than advising me to come in to its splinter of the Internet.

All my vendors do the same.  My newspaper.  My doctors.  My media 
subscriptions.  Each has its own "silo" where it can interact with me 
reliably and confidently.   Some of them probably do it to better make 
money.  But IMHO most of them do it because they have to - the Internet 
doesn't provide any mechanisms to help.

So we get lots of "splintering".    IMHO that has at least partially 
been driven by the lack of mechanisms within the Internet technology to 
deal with control of resources in ways that the users require. So they 
have invented their own individual mechanisms as needs arose.  It's not 
just at the router/ISP level, where splintering can be caused by things 
like the absence of mechanisms for "policy routing" or "type of service" 
or "security" that's important to someone.

"Double Login" momentarily was eliminated, but revived and has evolved 
into "Continuous Login" since the Internet doesn't provide what's needed 
by the users in today's complex world.

I was involved in operating a "splinternet" corporate internet in the 
90s, connected to "the Internet" only by an email gateway.  We just 
couldn't trust the Internet, so we kept it at arm's length.

Hope this helps some historian....
Jack Haverty


More information about the Internet-history mailing list