[ih] "The Internet runs on Proposed Standards"
John Gilmore
gnu at toad.com
Sun Dec 4 01:33:12 PST 2022
Brian E Carpenter wrote:
> Berners-Lee argued that the two-way hyperlink approach didn't scale,
> required some sort of coordinated database to work at all, and would
> never succeed as a result. Whereas the Web would scale indefinitely,
> needed no centralized anything, didn't need managing, and had a chance
> of world domination.
Interesting history, but perhaps shortsighted on Tim's part. HTTP
already provided most of the tools needed to do 2-way hyperlinks without
any centralized anything, using the misspelled Referer field. The first
time, and every time, that an HTTP client followed a link from A to B,
the B server would receive an HTTP request notifying it that there's a
link from A.
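
(To make that concrete, here is a minimal sketch -- in modern Python,
not anything from that era -- of a toy B server that just looks at the
Referer header on each request. The handler name and port are my own
arbitrary choices.)

from http.server import BaseHTTPRequestHandler, HTTPServer

class RefererAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        referer = self.headers.get("Referer")  # the famously misspelled header
        if referer:
            # B now has a hint that the page at <referer> links to self.path.
            print(f"backlink hint: {referer} -> {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>a page on B</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8000), RefererAwareHandler).serve_forever()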
Early and even modern HTTP servers just dump that info into a logfile
and otherwise ignore it. But they could be caching that info to build
a local database of backreferences at B. The B HTTP server could
actually check the Referer by accessing the cited A page itself, to see
whether the link is really there and thus avoid spoofing. It could then
modify the HTML that it serves up to clients to somehow include the
backreference info. As a simple example, it could use a naming
convention in the URL to offer a metadata page corresponding to each
page it serves up, showing the backlinks it knows about for that
page. Think of the metadata page as something like Wikipedia's "talk
page" for every encyclopedia article.
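
(Again just a sketch, with several assumptions of my own: the backlink
cache is an in-memory dict, spoofing is checked by re-fetching the cited
A page and crudely looking for an <a href> ending in B's path, and the
"naming convention" is a ".backlinks" suffix on the URL. A real server
would persist the cache and do the verification asynchronously.)

import urllib.request
from html.parser import HTMLParser
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKLINKS = {}  # path on this server (B) -> set of verified referring URLs (A pages)

class LinkFinder(HTMLParser):
    """Crude check: does the fetched A page contain a link ending in B's path?"""
    def __init__(self, target_path):
        super().__init__()
        self.target_path = target_path
        self.found = False
    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href") or ""
        if tag == "a" and href.endswith(self.target_path):
            self.found = True

def referer_really_links_here(referer, path):
    # Fetch the cited A page itself to see if the link is really there (anti-spoofing).
    try:
        with urllib.request.urlopen(referer, timeout=5) as resp:
            finder = LinkFinder(path)
            finder.feed(resp.read().decode("utf-8", errors="replace"))
            return finder.found
    except (OSError, ValueError):
        return False

class BacklinkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.endswith(".backlinks"):
            # The metadata page: list the backlinks known for the corresponding page.
            page = self.path[:-len(".backlinks")]
            self.respond("\n".join(sorted(BACKLINKS.get(page, set()))) or "(none known yet)")
            return
        referer = self.headers.get("Referer")
        if referer and referer_really_links_here(referer, self.path):
            BACKLINKS.setdefault(self.path, set()).add(referer)
        self.respond(f"page {self.path} on B; backlinks listed at {self.path}.backlinks")

    def respond(self, text):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(text.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("", 8000), BacklinkHandler).serve_forever()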
Instead, that Referer merely got used by advertisers for spyware (that's
how Google and Meta know which page a "tracking pixel" or "web bug" was
accessed from).
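
(For illustration: this is roughly what a browser does when a page at A
embeds a third-party pixel image -- both URLs below are made up.)

import urllib.request

# The image request carries page A's URL in the Referer header, so the
# tracker learns which page was being viewed.
req = urllib.request.Request(
    "https://tracker.example/pixel.gif",              # hypothetical tracker URL
    headers={"Referer": "https://a.example/article"}, # the page that embedded it
)
# urllib.request.urlopen(req)  # the tracker's access log now ties this hit to page A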
The opportunity for 2-way, cached, distributed web linking is still
available today, if somebody wanted to write a little software!
(But this is a history list, so let's go back to looking backward. ;-)
John