[ih] Lessons to be learnt from Internet history
jnc at mercury.lcs.mit.edu
Wed Feb 20 09:28:44 PST 2013
> From: John Curran <jcurran at istaff.org>
>> Many of the problems we see now were understood when the Internet was
>> first developed, but we didn't have practical solutions to them.
> our challenge has not been in facing problems beyond solving, but rather
> the tendency to skimp on fully defining problems before moving on to
> solution phase...
> ... in many cases the problems we face in the Internet have and will
> continue to include economic or political aspects which dominate the
> available solution space.
My sense is that the picture is more complicated.
In some cases, our understanding of the issues is indeed now a lot more
complete than it was in the early days of the Internet - we didn't do better
then because we couldn't. Examples of this include congestion (pre-Van), and
routing. More recently, the evolution of HTML is another case where we had to
learn as we went.
In some of these places, we managed to include enough generality that we could
deploy better stuff as our understanding increased - e.g. re-transmission and
congestion. I'm not sure we had an _explicit_ goal of being able to deploy
better algorithms, but given that we were trying different stuff then, I think
it just naturally happened that the thing we deployed had that flexibility.
In other places, we did have knowledge, but we deliberately chose to not do
things: some examples are security, separation of location and identity, and
addressing in general.
Admittedly, security is a complex situation, because we have had some new
tools become available (e.g. public keys) over time. And also I think security
suffered from some of what you allude to with disparate external factors -
e.g. early work on secure email proposed a model that aligned well with one
group of users (military/government), but not the 'ordinary' users, leading to
poor uptake. But we surely could have done better than we did (I speak of
security overall, not just email).
> forever is a long-time and not likely something to serve as a useful
> planning horizon. However, planning for "the foreseeable future", i.e.
> for as long and as well as we can imagine, _is_ quite reasonable.
Yes, but a lot of the time I think it's pure luck whether we get something
with a good lifetime or not. (I think a big part of that luck is the person
who winds up doing the design for a particular newly-needed piece, to be
frank. Some are much better than others.) And the choices are often driven by
short-term considerations, and trying to put out fires.
Take DNS for example. We were lucky there - the design had a lot of room to
grow. But it could easily not have.
The recent discussion of the origins of BGP shows all these factors at work.
We didn't have great knowledge of routing, but we had some. Nonetheless, we
didn't do something that was on the outer limits of what we could do - for
reasons I won't take time to analyze in detail (basically, it was 'Pogoitis' -
"We have met the enemy", etc). I suspect the BGP designers probably wouldn't
have guessed that it would successfully function as well as it does for a
system of this size. And later on we did have a fair amount of work go into
more advanced routing architectures, but they were left to the side (again,
for complex reasons I won't analyze here).
Balancing 'getting it running with the resources available' and 'doing
something with a long lifetime' is still a struggle. I was recently driven to
desperation by the unwillingness of a key LISP protagonist to adopt a packet
format (aka interface semantics) which had more flexibility and adaptability.
The reason? 'It was easier/quicker to do the kludgy hack.'
But I guess all human works are like this - a combination of varying levels of
luck, skill, chance, people and circumstances.