[ih] Eric Allman, the Sendmail DEBUG command and "people who are privileged to go into the depths of networks, often across administrative boundaries..." (was GOSIP & compliance)

Toerless Eckert tte at cs.fau.de
Mon Mar 28 01:21:55 PDT 2022


There is actually a decades-long process around remote troubleshooting originating
from experiences like this. On routers, it is the AFAIK still ongoing, onerous
process of requesting all manner of diagnostic output ("show foobar"/"debug something"
and so on). As the remote expert, you had to know how to play four-moves-ahead chess
to anticipate every command that might be requested and so minimize RTTs, which could
save a week or more (a 24-hour RTT for each request/reply was quite common). And that
lookahead of course doesn't really work for actual level-3 support, where the problem
is not a user misconfiguration but a genuinely novel product issue. Whenever one was
therefore, as an expert, in a position of power, one would flat out reject this
official process and request direct access to the nodes of interest.

I don't think there was ever a more structured approach to evolving this process.
For example, I don't think there is any annotation mechanism in YANG that allows
defining "private" elements, such as passwords or other not-to-be-exposed
information, for the purpose of troubleshooting. Instead, this is all still coded
ad hoc, in product-specific fashion.
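
For illustration, here is a minimal sketch (in Python, with a made-up
"__sensitive__" annotation - nothing like it exists in YANG today, which is
exactly the point) of what structured redaction could look like when exporting
a config snapshot for remote troubleshooting:

    # Hypothetical sketch: mask config nodes tagged as sensitive before
    # handing a snapshot to a remote support engineer. The "__sensitive__"
    # annotation is invented for illustration.

    def redact(node):
        """Return a copy of a nested config dict with sensitive leaves masked."""
        if isinstance(node, dict):
            sensitive = set(node.get("__sensitive__", []))
            return {k: ("<redacted>" if k in sensitive else redact(v))
                    for k, v in node.items() if k != "__sensitive__"}
        if isinstance(node, list):
            return [redact(item) for item in node]
        return node

    config = {
        "interface": {"name": "eth0", "mtu": 1500},
        "bgp": {"__sensitive__": ["password"],
                "neighbor": "192.0.2.1", "password": "s3cret"},
    }

    print(redact(config))
    # {'interface': {'name': 'eth0', 'mtu': 1500},
    #  'bgp': {'neighbor': '192.0.2.1', 'password': '<redacted>'}}

With an agreed-upon annotation like that, diagnostic dumps could be shared
across administrative boundaries without today's ad-hoc scrubbing.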

Given the lack of a more structured solution to these troubleshooting problems,
it is no wonder, IMHO, that we continue to see those "backdoors" in products:

Backdoors are what happens when customers indulge in wishful thinking.

Cheers
    Toerless

On Sat, Mar 26, 2022 at 12:07:33PM -1000, the keyboard of geoff goodfellow via Internet-history wrote:
> vis-a-vis On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach, who wrote:
> "... people who are privileged to go into the depths of networks,
> often across administrative boundaries, going where privacy and
> security concerns must be honored..."
> 
> an Internet History anecdote "relating" to the above with regard to the
> "history and purpose" of the Sendmail DEBUG command that was responsible
> for the Robert Tappan Morris Internet Worm Incident in 1988, as excerpted
> from a "Hillside Club Fireside Meeting: Eric Allman" last year starting at
> about 51 mins and 15 secs:
> 
> *Tim Pozar*: *"... geoff goodfellow is back here saying could you give some
> backstory with respect to the Sendmail DEBUG command that led to the Robert
> Tappan Morris 1988 Internet Worm Incident?"*
> 
> *Eric Allman: *"Actually, this is a good story. So for those who don't
> know, I put in a command in Sendmail that I could connect to remotely and
> say DEBUG. And it would give me basically permissions that I wasn't
> supposed to have on that computer. Big permissions on that computer. So why
> did I do something like that? Well, it turns out that this is back when
> Sendmail was being used on campus, and I was at least a part time student,
> and there was a bug on one of the computers, but it was one of the
> computers that was used for administrative computing. And they said, So
> there's this bug. And I said, well, let me come in and look at it. They
> said, oh, we can't let you onto that computer. You're a student. You don't
> have authorization to get to that computer. Well, I can't fix your problem
> then. Oh, no, you have to fix our problem. I can't. You have to. You can't.
> You have to. They wouldn't let me onto the computer, but I did send them
> saying something like, here's a new version (of sendmail). I've done some
> stuff to it. Why don't you install it and try it out? And that gave me the
> access to the machine that I needed to actually fix their problem. And if
> they had never done that, the DEBUG command would never have happened. It
> was like they were unrealistic about the security. Now it is totally my
> fault that I did not immediately remove the DEBUG command. And that was,
> frankly, because, wow, it was so useful there. I might need that again.
> Here we go. And at some point I kind of forgot about it and it was out way
> too far on the net. That was just pure stupidity. I apologize for that."
> 
> https://www.youtube.com/watch?v=j6h-jCxtSDA
> 
> *Hillside Club Fireside Meeting: Eric Allman*
> *"On January 1, 1983, the Internet was born from the ashes of the ARPAnet,
> and sendmail was already there. Written by Eric Allman as a stopgap measure
> in the early 1980s, it grew with the Internet, at one point delivering
> around 90% of all the email on the network.*
> 
> *The early developers of the Internet believed that "universal
> communication" would promote democracy and bring people closer together.
> Things didn't work out that way. Many folks, including Eric, gave away
> their work for free. That changed too. *
> *Arlene Baxter engages Eric Allman in conversation about those early, heady
> days as electronic communication began to be an essential part of all of
> our lives. This conversation will discuss the origins of sendmail, the
> attitudes of the time, and how the Internet grew and changed over the
> years."*
> 
> 
> On Sat, Mar 26, 2022 at 10:13 AM Karl Auerbach via Internet-history <
> internet-history at elists.isoc.org> wrote:
> 
> >
> > On 3/26/22 10:30 AM, Jack Haverty via Internet-history wrote:
> > > SNMP et al are mechanisms for data collection, i.e., retrieving all
> > > sorts of metrics about how things out in the network are behaving. But
> > > has there been much thought and effort about how to *use* all that
> > > data to operate, troubleshoot, plan or otherwise manage all the
> > > technology involved in whatever the users are doing?
> >
> > The short answer is "yes".  I've been thinking about it for a long time,
> > since the 1980's.  I tend to use the phrase "homeostatic networking".
> >
> > I helped with a DARPA project about "smart networks".  (They weren't
> > really "smart" in the switching plane; the "smarts" were in the
> > control plane.)  In that project we fed a bunch of information into a
> > modelling system that produced MPLS paths, including backups so that we
> > could do things like switching over within a few milliseconds. The
> > modelling was done externally; results would be disseminated into the
> > network. The idea was to put somewhat autonomous smarts into the routers
> > so that they could manage themselves (to a very limited degree) in
> > accord with the model by watching things like queue lengths, internal
> > drops, etc., and decide when to switch over to a new path definition.  (I
> > was going to use JVMs in Cisco IOS - someone had already done that -
> > to run this code.)
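
A toy sketch of the kind of local switchover rule described above - the
thresholds and LSP names are invented for illustration:

    # Toy sketch: a node watches its own queue depth and drop counters and
    # falls back to a precomputed backup path when the primary degrades.
    # Thresholds and path names are invented for illustration.

    QUEUE_LIMIT = 800  # queued packets before the path counts as unhealthy
    DROP_LIMIT = 50    # drops per interval before the path counts as unhealthy

    def choose_path(queue_depth, drops,
                    primary="lsp-primary", backup="lsp-backup"):
        """Pick the label-switched path this node should use right now."""
        if queue_depth > QUEUE_LIMIT or drops > DROP_LIMIT:
            return backup  # primary is degrading; fail over locally
        return primary

    print(choose_path(queue_depth=120, drops=3))  # -> lsp-primary
    print(choose_path(queue_depth=950, drops=3))  # -> lsp-backup

A real node would of course hold many precomputed backups from the external
model and add hysteresis so it doesn't flap between paths.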
> >
> > We realized, of course, that we were on thin ice - an error could bring
> > down an otherwise operational network in milliseconds.
> >
> > My part was based on the idea "what we are doing isn't improving things,
> > what do we do now?"  To me that was a sign of one of several possible
> > things:
> >
> >    a) our model was wrong
> >
> >    b) the network topology was different than we thought it was (either
> > due to failure, error, or security penetration)
> >
> >    c) something was not working properly (or had been penetrated)
> >
> >    d) a new thing had arrived in the structure of the net (all kinds of
> > reasons, including security penetration)
> >
> >    etc.
> >
> > In our view that would trigger entry into a "troubleshooting" mode
> > rather than a control/management mode.  That would invoke all kinds of
> > tools, some of which would scare security managers (and thus needed to
> > be carefully wielded by a limited cadre of people.)
> >
> > One of the things that fell out of this is that we lack something that I
> > call a database of network pathology.  It would begin with a collection
> > of anecdotal data about symptoms and the reasoning chain (including
> > tests that would need to be performed) to work backwards towards
> > possible causes.
> >
> > (Back in the 1990's I began some test implementations of pieces of this
> > - originally in Prolog.  I've since taken a teensy tiny part of that and
> > incorporated it into one of our protocol testing products.  But it is
> > just a tiny piece, mainly some data structures to represent the head of
> > a reverse-reasoning chain of logic.)
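
A tiny backward-chaining sketch of what the head of such a reverse-reasoning
chain might look like - the symptoms, causes, and tests are made up:

    # Tiny sketch of a "database of network pathology": work backwards from
    # an observed symptom to candidate causes, listing the test that would
    # confirm each. All rules here are invented for illustration.

    PATHOLOGY = {
        "intermittent packet loss": [
            ("duplex mismatch", "check interface counters for late collisions"),
            ("bufferbloat", "compare latency under load vs. idle"),
        ],
        "slow dns resolution": [
            ("resolver overloaded", "query the resolver directly and time it"),
            ("intermittent packet loss", None),  # a cause can itself be a symptom
        ],
    }

    def diagnose(symptom, depth=0):
        """Print the reverse-reasoning chain from a symptom to possible causes."""
        for cause, test in PATHOLOGY.get(symptom, []):
            line = "  " * depth + f"{symptom} <- {cause}"
            print(line + (f"  [test: {test}]" if test else ""))
            diagnose(cause, depth + 1)  # chain backwards recursively

    diagnose("slow dns resolution")

A Prolog prototype would express the same idea as rules; the essential part is
the direction of reasoning - from observed symptom back to testable causes.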
> >
> > In a broader sense, several other things were revealed.
> >
> > One was that we are deeply underinvesting in our network diagnostic and
> > repair technology.  And as we build ever higher and thicker security
> > walls, we are making it more and more difficult to figure out what is
> > going awry and to correct it.  And that, in turn, raises the question of
> > whether we are going to need to create a kind of network priesthood of
> > people who are privileged to go into the depths of networks, often
> > across administrative boundaries, going where privacy and security
> > concerns must be honored.  As a lawyer who has lived for decades legally
> > bound to such obligations, I do not feel that this is a bad thing, but
> > many others do not feel the same way about a highly privileged
> > class of network repair people.
> >
> > Another thing that I have realized along the way is that we need to look
> > to biology for guidance.  Living things are very robust; they survive
> > changes that would collapse many of our human creations.  How do they do
> > that?  Well, first we have to realize that in biology, death is a useful
> > tool that we often can't accept in our technical systems.
> >
> > But as we dig deeper into why biological things survive while human
> > things don't, we find that evolution usually does not throw out existing
> > solutions to problems, but layers on new solutions. All of these are
> > always active, pulling with and against one another, but the newer ones
> > tend to dominate.  So as a tree faces a 1000-year drought, it first pulls
> > the latest solutions from its genetic bag of tricks, like folding leaves
> > down to reduce evaporation.  But when that doesn't do the job, older
> > solutions start to become top dog and exercise control.
> >
> > It is that competition of solutions in biology that provides
> > robustness.  The goal is survival.  Optimal use of resources comes into
> > play only as an element of survival.
> >
> > But on our networks we too often have exactly one solution.  And if that
> > solution is brittle or does not extend into a new situation then we have
> > a potential failure.  An example of this is how TCP congestion detection
> > and avoidance ran into something new - too many buffers in switching
> > devices - and caused a failure mode: bufferbloat.
> >
> >
> > > We also discovered quite a few bugs in various SNMP implementations,
> > > where the data being provided were actually quite obviously incorrect.
> > > I wondered at the time whether anyone else had ever tried to actually
> > > use the SNMP data, more than just writing it into a log file.
> >
> > I still make a surprisingly large part of my income from helping people
> > find and fix SNMP errors.  It's an amazingly difficult protocol to
> > implement properly.
> >
> > My wife and I wrote a paper back in 1996, "Towards Useful Network
> > Management", which remains even 26 years later, in my opinion, a rather
> > useful guide to some things we need.
> >
> > https://www.iwl.com/idocs/towards-useful-network-management
> >
> >          --karl--
> >
> >
> -- 
> Geoff.Goodfellow at iconia.com
> living as The Truth is True
> -- 
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history

-- 
---
tte at cs.fau.de


