[ih] History of Naming on The Internet - is it still relevant?
Karl Auerbach
karl at iwl.com
Thu Aug 14 13:14:50 PDT 2025
Whew, you are opening up a Pandora's box.
I, personally, am not fond of the socket API. But I'm lazy and don't
want to re-invent a wheel that is almost round, or at least round-enough
to be useful.
I do remember a presentation in which someone at a SIGCOMM gathering
advocated certain network operations modeled more as virtual memory
paging operations. Sequence of results was not as important as knowing
that a certain remote access to a block had been completed. It was an
intriguing idea and it was *neither* a socket-like form of access nor a
file-like form of access. My memory has a vague tingling that there was
something like this in Multics.
My present occupation (apart from being a troublemaker) is to build
tools to inject "bad things" into network paths in order to drive
protocol implementations to take those under-tested (often never-tested)
code paths. (It is surprising how many things wobble or fail when
network conditions start to erode - for example, out-of-order packet
delivery.)
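A toy sketch of the flavor of disturbance involved (invented for this note; my actual tools are rather more elaborate):

```python
import random

def reorder(packets, displacement=3, seed=1):
    """Swap nearby items to mimic mild out-of-order packet delivery."""
    rng = random.Random(seed)
    out = list(packets)
    for i in range(len(out) - 1):
        if rng.random() < 0.3:           # disturb roughly 30% of positions
            j = min(i + rng.randrange(1, displacement + 1), len(out) - 1)
            out[i], out[j] = out[j], out[i]
    return out

# A receiver that silently assumes in-order arrival will mis-handle this.
print(reorder(list(range(10))))
```

The point is not the shuffling itself but that a protocol implementation fed such a stream is forced onto its rarely-exercised reassembly and recovery paths.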
To my mind one of the major differences between classic file and network
operations is the error or failure rate. Another major difference is
the time scale.
In most solid code there is a large amount of error catching and
recovery code.
In file operations that error catching/recovery tends to be near the
file operation that triggered the ill-favored event. And because file
errors are relatively infrequent (and persistent), code that doesn't
have the requisite handling often works pretty well in normal practice.
In network code, not only is an error far more likely, that error may be
intermittent and is offset from the triggering event by a period of time
- often a lot of time by computer processor standards.
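A small illustration of that timing difference (the unroutable address below is from the documentation range and is chosen purely for illustration):

```python
import socket

# File errors surface at the triggering call, so the handler sits right there.
try:
    open("/no/such/dir/data.txt")
except OSError as e:
    print("file error, immediately:", e.errno)

# Network errors are often deferred: this connect() to an address in the
# documentation range (203.0.113.0/24) fails only after time has passed,
# well away from the original "cause".
s = socket.socket()
s.settimeout(0.5)
try:
    s.connect(("203.0.113.1", 9))
except OSError as e:               # socket.timeout is an OSError too
    print("network error, later:", type(e).__name__)
finally:
    s.close()
```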
All of this suggests to me that there are two broad classes of APIs.
There are those (such as file operations) in which errors are rare or
highly persistent and are closely proximate in time to the triggering
event. And there are those (typical of network operations) where errors
can be transient and there is a time skew. (Clearly these two classes
overlap in situations such as network file systems, and we have observed
how things like the NFS protocols have had to evolve and yet even with
that evolution NFS hangs [especially on file and record locks] are not
all that infrequent.)
When Steve Casner and I were doing entertainment grade audio/video we
felt a strong need for network APIs to not only give us means to express
quality of service needs (in ways beyond the relatively simple modes we
have today) but also for pushback by the net to say "if you want this
then this is going to cost you". (I am a big fan of the way Dave
Farber's DCS project at UC Irvine negotiated resources: bid, receive
quotations of cost, select among the quotations, bind to the best bid.)
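Schematically, with invented provider names and pricing functions, that negotiation looks something like:

```python
# "bid -> quotations -> select -> bind", in the DCS spirit; the providers
# and their prices below are made up for illustration.
def negotiate(request, providers):
    quotes = [(name, price(request)) for name, price in providers.items()]
    feasible = [q for q in quotes if q[1] is not None]
    if not feasible:
        return None                            # nobody will carry this flow
    return min(feasible, key=lambda q: q[1])   # bind to the cheapest quote

providers = {
    "path-a": lambda req: 10 * req["mbits"],            # cheap, best effort
    "path-b": lambda req: 25 * req["mbits"] + 5,        # pricier, low loss
    "path-c": lambda req: None if req["mbits"] > 50 else 40,  # capacity cap
}
print(negotiate({"mbits": 2}, providers))
```

In a real network the "price" need not be money; it can be loss rate, latency, or any cost the net wants to push back with.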
Not every network interaction needs a big (high bit-rate), low-latency
path. For instance, I work on a lot of IoT stuff, like fire alarms,
that does not need big/fast/low-latency but does need really low loss
rates even during periods of high competing demand.
Casner and I wanted to eventually push that bid/quotation/binding
process up to the user level so that a human user or agent acting on
behalf of the user could get involved with the choice. We see this now
on many entertainment video feeds - some higher priced ones have no
advertising, lower priced options have commercials - and the user can
choose. (This kind of friction might also be a useful tool to deal with
things like the essentially zero cost of email to spammers.)
All of this suggests that the land of network APIs can get complicated
really fast.
And I have not observed that programmers are getting smarter. They just
use big, fat, heavy libraries. AI generated "vibe" code might help with
the boring task of adding error checks and handling, but I have more
faith in the inertia of human laziness than I do in AI based coding.
In a larger view I have great concern about how the net is evolving into
a lifeline grade utility. People think it is one already, but that's a
perception more than a reality. My sense is that making the net more
"lifeline" will have a real impact on code styles and APIs, but I do not
understand what form that will take (apart from my very serious concerns
about security walls getting in the way of detection and repair.)
In a similar vein, long ago, when Dave Kaufman, Frank Heinrich, and I
were at SDC working for one of those three-letter agencies in Maryland
on network and operating system security, we worked with
capability-based computers and operating systems. We had a reasonably
decent handle on building
least-privilege/delegated-privilege based operating systems and
applications on a single computer (we used an extensible hardware
capability system.) But Dave and I wrestled hard with extending that
out over the network so that we could push security constraints and
privileges beyond a single machine; we never found a solid answer (this
was in the days before the discovery, or at least the public
re-discovery, of public key systems.)
(By the way, the present-day Linux "capability" system is a far cry
from, and much weaker than, the kind of machinery that used the
"capability" name back in the 1970s.)
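In toy form, and in a single address space (invented for this note; the class and names are hypothetical), the attenuated-delegation idea looks like this - and the part we never solved was carrying such authority across the network:

```python
class FileCap:
    """Toy object capability: possession of the object *is* the authority."""
    def __init__(self, store, key, rights):
        self._store, self._key, self._rights = store, key, frozenset(rights)

    def read(self):
        if "read" not in self._rights:
            raise PermissionError("no read right")
        return self._store[self._key]

    def write(self, value):
        if "write" not in self._rights:
            raise PermissionError("no write right")
        self._store[self._key] = value

    def attenuate(self, rights):
        """Delegate a weaker capability; rights can only shrink, never grow."""
        return FileCap(self._store, self._key, self._rights & set(rights))

store = {"cfg": "v1"}
rw = FileCap(store, "cfg", {"read", "write"})
ro = rw.attenuate({"read"})   # hand this to a less-trusted party
print(ro.read())
```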
That work (with privileges and capabilities) affected many of our
programmatic APIs. Often the effect was like the Unix environment:
something present but easy not to see. But other times it needed to
become more explicit.
My feeling is that these kinds of needs are going to sooner or later
force us to re-consider things like the socket API. (And I really hope
we can do it better than Linux did with its netlink API into the kernel.)
--karl--
On 8/14/25 7:00 AM, John Day wrote:
> I apologize for bringing up this topic after it has been quiet for a couple of weeks. It sparked my interest, but it was the end of the summer session and I got swamped with all of that.
>
> Why do several of you (apparently all) find the Sockets API so good? This was really surprising.
>
> For me, it is one of the 2 or 3 Internet standards (is it really? or just what everyone uses?) that has done the most damage to the Internet architecture.
>
> It exposes far too much to the application. It has very few of the usual properties of an API that creates a black box. It requires the application to be modified to use the Net. It seems counter to the Unix philosophy of making everything look like a file operation. In fact, when the first Unix was put on the Net in 1975, that was the API, something like, <file-desc> = OPEN(USCD/Telnet).
> The file-desc would be a form of port-id and could be easily extended for other connection specific parameters. There was a meeting at some point to discuss what to put into Berkeley Unix but Joy was too taken with Sockets to adopt a more Unix-consistent API.
> I have been told that the Microsoft transition to IPv6 was delayed two years because every program that used the Net had to be modified.
> A file-oriented API would have allowed a seamless transition to Application-names and getting rid of the well-known port kludge. No one would have noticed.
>
> Hope I haven’t stirred things up too badly.
>
> Take care,
> John Day
>
>> On Jul 23, 2025, at 23:31, Karl Auerbach via Internet-history <internet-history at elists.isoc.org> wrote:
>>
>> On 7/23/25 6:28 PM, Craig Partridge via Internet-history wrote:
>>> On Wed, Jul 23, 2025 at 6:55 PM touch--- via Internet-history <
>>> internet-history at elists.isoc.org> wrote:
>>>
>>>> The user thinks in terms of socket connections; the rest can be opaque.
>>>>
>>> I'd argue that the history of the Internet says that's not quite true.
>>> Users care about the path their traffic takes in the network (various
>>> issues about ensuring certain traffic did not transit certain countries
>>> over the years). Users care about performance, and before carriers sorta
>>> figured out how to ensure good service, we found users playing around with
>>> reaching into the routing layer.
>> The socket abstraction has a meritorious aspect: it is simple.
>>
>> But users need ways to express the kind of service that a socket should provide. And, indeed, there are some simple mechanisms for that.
>>
>> But once one digs into the idea of expressing "what kind of service does this connection need" things can get complicated really fast.
>>
>> Fred Baker and I worked on the RSVP protocol. (I did a fairly extensive client implementation, Fred the in-router hooks.) It was hard. There are so many dimensions of "service", ranging from a simple "bits/second rate" (over what time span?) to some sort of expression of dynamics (burst behavior of the flow). But even with a reasonably extensive means to express connection service, RSVP was largely stuck with a relatively limited ability to affect actual routing path setup (and re-setup). And RSVP did nothing to help a client choose among multiple equivalent service peers (equivalent except for the network path to reach them.)
>>
>> Steve Casner and I were doing network video with potentially many separate streams of video and audio from many sources to many destinations. Lip sync was a huge problem. Beyond the fact that clocks on senders and receivers drift with respect to one another, often our code was faced with the question "do we pause the stream or do we insert a fig-leaf to cover the missing data?" If the fig leaf got large or frequent that coverage could involve switching to an alternative (but equivalent, except for the network path) data source.
>>
>> Our code was on top of UDP, which somewhat simplified things. But even then we never found a good solution except to ask the providers administratively to provision more resources.
>>
>> But sometimes the problem was not the data path itself but rather, the client's choice of which (equivalent) server to select (which would imply a choice among data paths.)
>>
>> I took the effort further and asked whether we could figure out a way to let a client discover an alternative content source, or select among multiple potential sources, within a reasonably short period of time (not much more than a single round trip time) and without seriously burdening the routing fabric or the routers themselves?
>>
>> The method I came up with (circa year 2000) was inspired by Van Jacobson's multicast traceroute (mtrace). I never had a chance to implement it in Cisco IOS (I was going to use the on-again/off-again Java VM engine that was in some IOS experimental versions.)
>>
>> I used a combination of a hypothetical IP header and an Integrated Services T-Spec (RFC 2215) as a means to express the proposed service level.
>>
>> There is a very rough, incomplete draft of the idea up on my website. (It was never sufficiently developed even to reach the level of an Internet Draft.)
>>
>> Fast Path Characterization Protocol (FPCP)
>>
>> https://www.cavebear.com/archive/fpcp/fpcp-sept-19-2000.html
>>
>> --karl--
>>
>> --
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history