[ih] vm vs. memory

Noel Chiappa jnc at mercury.lcs.mit.edu
Wed Oct 25 06:13:48 PDT 2017


    > From: Paul Vixie

    > an alternative to the pi/pa split

In a very large network, the path-selection system needs names (in the
general sense; 'addresses' are a class of names) which have topological
significance. That's from the math; that's the immovable rock.

If names are needed whose properties conflict with that, you need
another namespace. That's the irresistible force.
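(A toy sketch of the two-namespace idea, purely my illustration - the names
and the mapping function here are invented, not any actual design: a stable
endpoint identifier is bound to a topology-dependent locator, so renumbering
changes the locator but leaves the identity, and anything keyed on it, alone.)

```python
# Two namespaces for one host (all names here are invented for illustration).
# The 'identifier' names the endpoint itself; the 'locator' names where it
# currently attaches to the topology. Only the locator need be topologically
# significant; the identifier can stay stable for the host's lifetime.

# Identifier -> current locator: the binding a mapping system would maintain.
locator_of = {
    "host-A": "provider1.site3.host7",   # topology-dependent address
    "host-B": "provider2.site9.host2",
}

def renumber(identifier: str, new_locator: str) -> None:
    """The host moves, or its provider renumbers: only the binding changes."""
    locator_of[identifier] = new_locator

def send(identifier: str, payload: str) -> str:
    """Transport talks to the stable name; routing sees only the locator."""
    return f"deliver {payload!r} via {locator_of[identifier]}"

renumber("host-A", "provider4.site1.host7")
# The locator changed, but 'host-A' as an identity did not - a session keyed
# on the identifier survives the renumbering.
```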


    > it therefore alarms me mildly to be disagreeing with you here.

I'm not sure we disagree as much as you think...

    > the internet wasn't a wanted system. ...  an airplane that would have to
    > be launched unready, and then continuously rebuilt during flight. ...
    > ... it was therefore never possible, ever, anywhere, to authoritatively
    > "list those as requirements when you sit down"
    > ...
    > my own belief is that whatever we do next (ever), will be done from the
    > network as it then exists, and that "up front" never happened, and never
    > shall happen.

I don't disagree. But that doesn't mean we did as well as we could have - the
above being an example.

It was perfectly obvious _at the time_ that with only one namespace, immovable
objects were going to meet irresistible forces. But people preferred to, I
dunno, put their heads in the sand? I still don't really understand why people
didn't draw the appropriate conclusion - perhaps the pill was too big to
swallow?


(Actual history interjection: shortly before the IPv6 process, a fellow member
of the IESG said something that I took to be an aspersion on my character. Exactly
what it was, I no longer recall [perhaps Craig does, I talked to him about
what I should do]. It was something to the effect that some commercial people
were not keeping the Internet's best interests as their top priority, or
something like that. Anyway, that was kind of the last straw - I was already
terribly stressed out by other things - so I resigned from the IESG to prove
that the accusation was untrue. Needless to say, had I still been on the IESG
at the time of the IPv6 process, that design would _never_ have been accepted
- 'over my dead body'. Here's the kicker: some years later, chatting to the
person, I discovered he hadn't been talking about me at all! On such
accidents does the course of history turn!)


But back to doing architecture on a flying airplane...

The need to evolve a running system does present a severe challenge to the
architect. (Architects are usually given a blank sheet of paper, precisely so
they can draw what's _needed_, after an open-ended analysis of the whole
thing.) But after thinking about this for a while, I think one still needs to
perform much the same process, even for an evolving system: analyze the whole
system at a deep level, taking in mind what it needs to do.

The reason is that if you evolve a system _without_ understanding where you
want to wind up (with a coherent system, designed _as a system_), you'll wind
up with a convoluted cancer that doesn't do a lot of what you want it to.

(There's some aphorism, along the line of 'a journey of a thousand miles
starts with a single step', that catches this idea of 'if you don't know where
you're going, you probably won't get there', but my mind can't recall it at
the moment.)

So, yes, in an evolving system like the Internet, one may not have all the
requirements in hand at any point, _but_ at any point in time, one should have

- i) the requirements, as best they are understood at the time
- ii) an overall system architecture which meets those goals, starting with
  'how many namespaces are there, and what are their semantics'
- iii) a plan for how to evolve the system to get there, from where you are

Of course, the list of requirements will change over time, which will have
ripple-down consequences through the next stages; but note that things like
the second step should be planning 30-50 years out, so if that gets changed,
because of the time lag, you're changing stuff that hasn't happened yet. (And
the third step probably covers a 10-20 year timeframe; the thought being that
at the end of that process, you'll have a system which is good for the 50 year
time-frame.)

Will things happen exactly according to that plan? No, of course things change
(e.g. disruptive technology) in ways one can't foresee. But if one has
correctly understood the fundamentals, those don't change. (We still have
stacks, procedures, etc - because the fundamentals there were correctly
understood.) So the plan has to adapt and evolve. (Corby said something to
this effect in his Multics appraisal paper, about how a system needs a rudder,
so you can change course.)

But unless you know where you're going, you won't get there.

There is no evidence known to me that the I* community ever had anything like
this. I did some on my own, but that's not very effective. I consider this
professional nonfeasance on the part of the Internet engineering community,
but that's water over the dam now.


    > Neither the Internet nor the DNS can be stopped and restarted, nor
    > upgraded as a whole unit.  New features must always be introduced in a
    > way that is backward compatible and thus ``roll out'' incrementally.

Agree 100%.

    > This is similar in concept to rebuilding an airplane while in flight
    > except with multiple teams working on different parts of the airplane at
    > the same time and without an agreed upon plan for the new design.

I can pretty much guarantee you that if you re-build a plane, _without some
overall plan as to what the result will be_, your plane will not work.

	Noel


