[ih] FTP Design
Dave Crocker
dhc2 at dcrocker.net
Tue Jul 3 11:29:48 PDT 2012
On 7/3/2012 10:29 AM, Tony Finch wrote:
> Noel Chiappa <jnc at mercury.lcs.mit.edu> wrote:
>>
>> I look back on all the work on Archie, Gopher, WAIS, etc, etc and think
>> 'Goodness gracious, how was it not obvious to us that we needed the WWW
>> (with explicit links in documentation)?' There were a lot of smart
>> people working on the Internet at that stage, but nobody saw it.
>
> I read somewhere (but I have forgotten where, sorry) that some of the key
> features of hypertext as seen in the 1980s were come-from links, and some
Hmmm. Since this is a history list, I'll free associate to:
When I first read the following article in 1973, I was just starting
to learn computer science constructs. Structured programming, and the
like, were extremely hot topics, including the goal of deprecating
undisciplined use of GoTo. So I read the article with diligence. It
was not until I got to the last sentence that I realized it was a put-on:
http://www.fortran.com/come_from.html
> kind of "which pages link here" feature. Neither are easy to implement in
> a very loosely coupled distributed system. TBL's great insight was that
> a much simpler hypertext system would still be useful enough provided it
> could link to any existing stuff out there on the net.
Mumble. The linking mechanism in Engelbart's system was similarly
simple. It was not inter-machine, but it was textual and evaluated at
run-time.
I'm going to claim that the innovations in the URL construct were:
1. extensible declaration of service mechanism (http, ftp, ...)
2. rigid requirement for domain name, to specify the place for
evaluating the rest of the string
3. essentially no constraints on the rest of the string.
That is, a simple, common global portion for evaluation, with a
non-standardized local remainder. I claim that this model is a core
construct in good Internet architecture design.
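That three-part split can be sketched with Python's standard-library urlsplit; the sample URLs and the split_url helper are illustrative only, not anything from the original design:

```python
from urllib.parse import urlsplit

# A URL divides into a small global portion and an opaque local remainder:
#   scheme -- extensible service declaration (http, ftp, ...)
#   netloc -- the domain name saying *where* to evaluate the rest
#   rest   -- essentially unconstrained; meaningful only to that host
def split_url(url: str):
    parts = urlsplit(url)
    rest = parts.path
    if parts.query:
        rest += "?" + parts.query
    return parts.scheme, parts.netloc, rest

# The parser needs no knowledge of what the remainder means:
print(split_url("http://example.com/cgi-bin/lookup?id=42"))
print(split_url("ftp://ftp.example.org/pub/docs/readme.txt"))
```

The point of the sketch is that only the scheme and domain are interpreted globally; everything after the host is handed off verbatim for local evaluation.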
The local/public distinction happens to also be a particular win in the
way email addressing was done. (It was in marked contrast with the way
X.400 did things...)
> There is a bit more to it than that, though, because gopher made the same
> simplification of one-way links, but it lacked hypertext and multimedia
> and multiprotocol URLs, and its design was unable to grow beyond the
> constraints of 1980s computing.
However, it was easier to set up a gopher site than a web site, because
the web required specialized documents while gopher ran on plain text.
There was serious competition between the two.
Another major design difference was that gopher provided no useful
information until you reached the leaf, whereas the web could produce an
'interesting' document with every click. That is, the Web permitted a
far sexier experience, of course.
In terms of human factors, that means the user can get a 'reward' with
every click for the web. And, yes, the implied comparison with rats
pressing levers is both intentional and, IMO, valid. Operant
conditioning is our friend.
d/
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net