[ih] "The Internet runs on Proposed Standards"

Brian E Carpenter brian.e.carpenter at gmail.com
Sun Dec 4 11:40:02 PST 2022


On 05-Dec-22 04:26, Toerless Eckert wrote:
> The "requires some sort of coordinated database" certainly sounds incorrect.

That is certainly how Hyper-G worked. Not having a recording of Tim's
remarks, I'm working from my memory of ~30 years ago, and he probably
said something more subtle.

Does anyone here know exactly when "Referer" was added to HTTP?
I can't find it in the original specs at
http://info.cern.ch/hypertext/WWW/Protocols/HTTP.html
and I don't know where to look for intermediate versions prior
to RFC1945.

    Brian




> 
> Of course, Google Search Console does provide this information AFAIK, so it
> certainly is possible with a coordinated database.
> 
> For a distributed solution, it would certainly help the ability to track
> referrers to your URL if the web had a mechanism for URLs that can only be
> used from a specific referrer location/URL. That would force the referrer to
> actually keep an active referrer status with the referred-to URL. Not trivial.
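
A rough sketch of such a referrer-restricted check, as a server-side handler
might implement it (the hostnames and the table of allowed referrers are
invented for illustration; this is not an existing mechanism):

    # Hypothetical sketch: serve a restricted URL only when the request's
    # Referer matches the single origin registered for it.
    ALLOWED_REFERRER = {
        "/reports/q3.html": "http://a.example/index.html",  # made-up entry
    }

    def may_serve(path: str, referer: str | None) -> bool:
        """Return True if this request may be served, False for a 403."""
        required = ALLOWED_REFERRER.get(path)
        if required is None:
            return True             # unrestricted URL
        return referer == required  # enforce the registered referrer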
> 
> Short of that, the realities of clients sitting behind firewalls and web
> pages sitting behind world-wide content caching systems unfortunately make
> the HTTP mechanisms John describes rather impractical to rely upon, AFAISI.
> 
> Cheers
>      Toerless
> 
> On Sun, Dec 04, 2022 at 01:33:12AM -0800, John Gilmore via Internet-history wrote:
>> Brian E Carpenter wrote:
>>> Berners-Lee argued that the two-way hyperlink approach didn't scale,
>>> required some sort of coordinated database to work at all, and would
>>> never succeed as a result. Whereas the Web would scale indefinitely,
>>> needed no centralized anything, didn't need managing, and had a chance
>>> of world domination.
>>
>> Interesting history, but perhaps shortsighted on Tim's part.  HTTP
>> already provided most of the tools needed to do 2-way hyperlinks without
>> any centralized anything, using the (misspelled) Referer field.  The first
>> time, and every time, that an HTTP client followed a link from A to B, the
>> B server would get an HTTP request notifying it that there's a link from A.
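
Concretely, following such a link would produce a request to the B server
along these lines (hostnames invented for illustration):

    GET /b.html HTTP/1.1
    Host: b.example
    Referer: http://a.example/a.html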
>>
>> Early and even modern HTTP servers just dump that info into a logfile
>> and otherwise ignore it.  But they could be caching that info to build
>> a local database at B of backreferences.  The B HTTP server could
>> actually check the Referer by fetching the cited A page itself, to see
>> whether the link is really there and thus avoid spoofing.  It could then
>> modify the HTML that it serves up to clients to somehow include the
>> backreference info.  As a simple example, it could use a naming
>> convention in the URL to offer a metadata page corresponding to each
>> page it serves up, showing the backlinks it knows about for that
>> page.  Perhaps think of the metadata page as being like Wikipedia's
>> "talk page" for every encyclopedia article.
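
A minimal sketch of that scheme, assuming a Python server-side handler (the
function names, the in-memory store, and the crude substring check for the
backlink are all invented for illustration; a real server would want
something far more careful):

    # Hypothetical sketch: cache Referer values, verify each one by
    # fetching the referring page, and render a per-page backlinks list.
    import urllib.request
    from collections import defaultdict

    backlinks = defaultdict(set)   # local path -> verified referring URLs

    def record_referer(path: str, referer: str, own_url: str) -> None:
        """Cache `referer` as a backlink for `path` if it really links here."""
        try:
            with urllib.request.urlopen(referer, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except (OSError, ValueError):
            return                 # page unreachable or bogus; don't trust it
        if own_url in html:        # crude anti-spoofing check for a link to us
            backlinks[path].add(referer)

    def metadata_page(path: str) -> str:
        """Render the known backlinks for `path` as a simple HTML page."""
        items = "".join('<li><a href="%s">%s</a></li>' % (r, r)
                        for r in sorted(backlinks[path]))
        return ("<html><body><h1>Backlinks for %s</h1><ul>%s</ul></body></html>"
                % (path, items))

The metadata page could then be exposed under a naming convention, e.g. the
page's URL plus a suffix, much as John suggests.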
>>
>> Instead, that Referer merely got used by advertisers for spyware (that's
>> how Google and Meta know which page a "tracking pixel" or "web bug" was
>> accessed from).
>>
>> The opportunity for 2-way, cached, distributed web linking is still
>> available today, if somebody wanted to write a little software!
>> (But this is a history list, so let's go back to looking backward. ;-)
>>
>> 	John
>> 	
>> -- 
>> Internet-history mailing list
>> Internet-history at elists.isoc.org
>> https://elists.isoc.org/mailman/listinfo/internet-history
> 


