[ih] Internet as a Set of Services (IAASOS) -- was Re: "The First Router" on Jeopardy

Jack Haverty jack at 3kitty.org
Tue Nov 23 18:14:10 PST 2021


Changed the name; this has morphed well away from Jeopardy.

I look at this now from an end-user's perspective, which I think is 
quite different from that of a network operator.

Whatever kind of service is implemented in a backbone, several other 
things generally have to happen for it to be visible to the end-user:

1/ Some way of selecting a particular service must be exposed at the 
edge of the network, perhaps in a packet header or protocol exchange.  
E.g., the TOS field.
2/ The end-user's OS must act appropriately to use that ability to 
select the service.  E.g., it must set the TOS field.
3/ If an application has to make the decision about which service to 
select, the OS must expose that choice in its interface to user 
programs, i.e., in its API.
4/ An application that can benefit from the service has to be designed 
to select the appropriate service when interfacing with the OS, and 
possibly also present appropriate choices to the end-user.  E.g., the 
app must be aware of and properly use the OS API to select the 
appropriate service for whatever it is doing.  (A minimal sketch of 
steps 2-4 appears after this list.)

and lastly,

5/ if the end-user is interacting with a remote machine that requires an 
Internet pathway through several network operators, all of them must 
implement the service and pass the appropriate information across the 
boundaries between them.  E.g., EGP/BGP/etc. have to be revised to 
support service parameters, and networks have to use them.
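
To make steps 2-4 concrete: here's a minimal sketch (mine, written 
today, not anything from "The Plan" era) of what an application can do 
on a Linux-style sockets API.  The DSCP value and the address are just 
illustrative:

    import socket

    DSCP_EF = 46              # "Expedited Forwarding" class (RFC 3246)
    tos_byte = DSCP_EF << 2   # DSCP is the top six bits of the old TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Step 3: the OS exposes the choice to the app via setsockopt(IP_TOS)...
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
    # Step 2: the kernel now marks this socket's outgoing packets.
    # Step 5 -- whether the networks along the path honor the marking --
    # is entirely out of the application's hands.
    sock.sendto(b"time-critical payload", ("203.0.113.9", 5004))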

Until all of those steps are taken, it's not surprising that there is no 
market demand, even if some underlying network mechanism is perfectly 
designed and implemented in routers or such devices. There is simply no 
benefit visible to the end-users.   Zero supply means zero demand.

I've found it consistently difficult to even determine whether a 
particular situation meets the criteria above.   Of course I'm just an 
end-user, so I shouldn't have to see such details.   I have 60+ devices 
on my home LAN at the moment.  I wonder how many of them can select 
some specific service from "the network", or, if they can, whether the 
networks I'm using behave any differently.   As the end user, it's hard 
to tell much about what's going on inside all those boxes.
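
For what it's worth, watching the wire is about all an end-user can do.  
A rough sketch, using the third-party scapy library (my tooling choice, 
nothing the devices themselves provide), that reports any LAN traffic 
carrying a non-default TOS/DSCP marking:

    # Needs root and 'pip install scapy'; prints any marked IP packet seen.
    from scapy.all import sniff, IP

    def show_marked(pkt):
        # pkt[IP].tos is the whole TOS/DSCP byte; nonzero means some device
        # actually asked the network for a non-default class of service.
        if IP in pkt and pkt[IP].tos != 0:
            print(f"{pkt[IP].src} -> {pkt[IP].dst}  TOS=0x{pkt[IP].tos:02x}")

    sniff(filter="ip", prn=show_marked, store=False)

Of course, that only shows whether the devices ask; it says nothing 
about whether the networks upstream honor the marking.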

But I can see the aggregate effects of whatever is going on "under the 
covers".   For example, it's common to see video and audio break up, 
pixellate, and otherwise misbehave, even on mass-market television 
channels, especially with events involving participants in remote 
roundtables (think Zoom, Webex, etc.).

A couple of years ago, I helped a friend troubleshoot a "gaming" app he 
was trying to use on an Internet path between the Los Angeles and Reno, 
Nevada areas.  That involved probably 4 or more distinct networks along 
the way.   We couldn't see much except what Wireshark et al. could see 
at his end, but the results were interesting.  Over many tests, 100.0% 
of the packets sent were correctly received.  Not a single packet was 
dropped.

But occasionally, traffic would stop for a while, then resume.  All 
packets would eventually be delivered; some took 30 seconds in 
transit.  So 100% of the packets were delivered, but it's hard to tell 
how many of them were still useful by the time they got there.  We 
couldn't see "inside" the app.  A packet containing audio data that 
should have been sent to the speaker 2 seconds ago isn't useful.   In 
the TOS/TTL design of "The Plan", such packets would have been 
discarded as soon as some router determined that they couldn't get 
there in time to be useful.  It was hard to tell whether the OS, or the 
apps running at each end of the connection, were doing anything special 
to try to select some type of service, or whether the networks even 
made such a choice possible.
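
As I read that TOS/TTL idea, the router-side logic was roughly the 
following.  This is an illustrative sketch with hypothetical names, not 
anyone's actual router code: TTL as a real time budget in seconds, 
where a hop that can't deliver within the remaining budget discards the 
packet rather than waste downstream capacity on it:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        ttl_seconds: float    # remaining useful lifetime, in seconds
        payload: bytes

    def forward_or_drop(pkt: Packet, held_seconds: float,
                        est_seconds_to_dest: float) -> bool:
        """True = worth forwarding, False = discard now."""
        # Original IP defined TTL in seconds; each hop subtracts at least
        # one unit, or the time it actually held the packet if longer.
        pkt.ttl_seconds -= max(1.0, held_seconds)
        # If the remaining budget can't cover the rest of the path, the
        # audio sample (etc.) will be useless on arrival -- drop it here
        # instead of carrying it another 30 seconds.
        return pkt.ttl_seconds > est_seconds_to_dest

    # A packet queued for 3 seconds with only a 2-second budget left:
    print(forward_or_drop(Packet(2.0, b"audio"), held_seconds=3.0,
                          est_seconds_to_dest=0.5))    # -> False (drop)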

Regardless of what was going on, the gaming app was essentially 
unusable.   That customer was not "ultra extreme".   He just wanted to 
play the game.   He followed the Marketing advice to "upgrade his 
service" to higher speeds.  It cost more, but didn't help.   He even 
switched his local ISP.  No effect.

My anecdotal test with the Gamer was just one tiny piece of the Internet 
system.   But that experience seems to be fairly common.    
Gamers are known for demanding "low latency", and I've even heard of 
cases where a Gamer refuses to live somewhere because the Internet 
service there is unusable for gaming.   Even at "gigabit speed".   Even 
though there is a market for it.

When the Internet was just an ARPA research project, "The Plan" was to 
have it capable of supporting such scenarios, assuming of course that we 
could figure out how to do so.  Then the Internet took off, became a DoD 
Standard, and escaped from the lab.   Research was replaced by 
commercial and marketplace concerns.

My tentative conclusion is that there are now simply too many 
uncoordinated and/or competing components in today's "internet system", 
with no one responsible for making the overall system work to meet the 
users' needs.   The original ARPA vision and "The Plan" still seem 
possible, but no one seems to be working on them.   Maybe it just no 
longer matters.

Jack Haverty


On 11/23/21 4:35 PM, Toerless Eckert wrote:
> On Tue, Nov 23, 2021 at 03:52:49PM -0800, Tony Li wrote:
>>> Obviously, i think there could and should be more.
>> That would be nice, but in practice, it hasn't found a market.  Best Effort (or Lousy Effort) typically results in 99.999% of the packets making it to their destination with perfectly acceptable reliability, delay, and jitter. When SPs have attempted to charge a premium for the remaining 0.001%, they have found that almost all customers were not interested, and those that did pay the premium did not feel that it was worthwhile.
> My experience from enterprises and SP-services-for-enterprises such as
> L3VPN is somewhat different. QoS was, and probably still is, a fairly
> well-selling service. Arguably you always need some such service if you
> need to guarantee services; for example, contribution TV is not using
> rate-adaptive video, and certainly should never want to, because at
> the end of the contribution path you want 100% of the input bandwidth,
> and not less on a Friday evening. Aka: any actual "real-time" traffic
> will predominantly have throughput requirements, and arguably many will
> also have latency and, even more so, reliability requirements.
>
> And don't even let me get into the politics of selling better network
> transport features only as part of much more expensive and often badly
> worked-out application services. IP Multicast, for example, was surely
> never made available to OTT providers by access-SPs to their
> subscribers, to protect the SPs' own IPTV service offering against
> competition, and I was often enough involved back when the SPs feared
> regulation that would have forced them to offer such a service.
>
>> The ultra-extreme folks who want the ultimate reliability are not willing to have SPs in the loop in the first place.
> Unless they have a service where on one side you have a larger
> customer base that you don't want/can't wire up yourself.
>
>> But back to the point: the mechanisms are all in place for constrained path computation for various different traffic classes and applications if people choose to turn them on. Some do.
> We have a bunch of 25-year-old weird building blocks, but SPs lack
> the business-concept competency to make something from them.
>
> I do also count mostly on challengers building better networks
> for future services, rather than expecting existing SPs to figure out
> how to make new money from new opportunities.
>
> Cheers
>      Toerless
>
>> T
>>




