[ih] How Plato Influenced the Internet
Jack Haverty
jack at 3kitty.org
Wed Aug 25 12:13:08 PDT 2021
Hi Steve,
I wasn't worrying about efficiency; I just wanted to see if I could
figure out how to make some ternary logic. IIRC, the class was on
digital design, and the lab assignment was basically "build something
digital, other than the examples we used in the lectures". Rather than
building boring flip-flops and gates, or maybe an adder, I decided to try to
create ternary logic. After all, they said "digital", so it didn't
have to be binary. I always tended to think outside the box.
Your math is probably correct, but it's only a small part of making a
design choice. At about the same time as that lab course, I had a
student job using a PDP-8 to gather and analyze data from experiments
being done at the "Instrumentation Lab". They were designing,
building, testing, and deploying inertial navigation units that were used
in a lot of places, including the Apollo spacecraft. So I actually got
to work with real "rocket scientists". I learned as much from those
engineers and scientists as I did from the classes.
One thing I learned was that much of the mathematical toolbox from the
courses wasn't terribly useful. For example, there were lots of tools
and techniques for minimizing Boolean logic. But reams of data had
shown that the critical issue for reliability (it's hard to fix things
in space) was the mechanical connections involved, e.g., how many pins
on a PC board and corresponding socket were needed in the system
design. More logic circuitry was OK if it meant fewer pins were
needed. None of the tools and techniques taught in the courses even
mentioned the issue of pins, or other such design questions like heat
dissipation.
So, it's possible that some kind of non-binary logic might have required
fewer pins and resulted in more reliable hardware. Or not. The choice
made early on meant we'd never pursue any other path.
Getting back to Plato and the Internet, I can confirm that I never saw
or even heard of Plato during the 60s/70s/80s. So it probably didn't
influence me, at least not directly.
However, I was surprised to just read how Plato was focussed on latency
as a key driver of the users' experience. I ran into that same issue
at Licklider's MIT lab, when we were trying to bring up MazeWars on our
newfangled Imlacs that were used as terminals on the PDP10. I spent a
bit of time tweaking the RS232 TTY interfaces to get the line speed up
to around 100 kb/sec (the typical max was 9.6 kb/sec in those days), and
that made the
Maze game popular. When you "shot" an opponent they died as they
should. With higher latency, they'd often inexplicably get away.
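Just to put rough numbers on it (purely illustrative; I'm assuming a
hypothetical 1,000-byte display update, not a measured Imlac frame, and
counting only the time to clock the bits out the serial line):

    # Back-of-the-envelope serialization delay, ignoring start/stop bits
    # and everything upstream of the serial line itself.
    def serialization_delay_s(update_bytes, line_bps):
        return update_bytes * 8 / line_bps

    for bps in (9_600, 100_000):
        ms = serialization_delay_s(1_000, bps) * 1000
        print(f"{bps:>7} b/s: {ms:.0f} ms per update")

    # Roughly 833 ms at 9.6 kb/sec versus about 80 ms at 100 kb/sec --
    # most of a second per screen update, versus under a tenth of one.

That difference alone is enough to turn "you shot him" into "he got away".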
We tried to convince BBN to upgrade TIPs to run faster, but were
rebuffed. The TIPs supported the "maximum reasonable speed" of 9.6 kb/sec.
Nothing faster was needed.
Later on, circa 1978, while we were rearchitecting TCP to split out TCP
and IP and introduce UDP, I remembered my experience with latency and
pushed for inclusion of "Type Of Service" so that the underlying IP
transport might someday be able to offer both low-latency and
high-throughput services to meet different users' needs. And maybe a
"guaranteed bandwidth" service to better mimic old physical circuits.
Low latency was also important for things like conversational voice, so
the "voice guys" at places like ISI and Lincoln were also interested in
having such a capability in the Internet. I don't recall that the "Plato
guys" were involved, but I bet they would have been proponents as well.
Sadly, although those experiences certainly "influenced the Internet" to
the extent that various header fields and rudimentary mechanisms were
included in the emerging TCP that we still have today, there apparently
wasn't enough pressure and interest to cause low-latency service to
actually get implemented. At least as far as I can tell....
I can't see "inside" the Internet now, just as a user today. But
simply watching the now constant stream of live interviews on TV, and
the pixelization, breaking audio, and such artifacts, makes me conclude
that low-latency service isn't there yet, after 40 years of evolution.
I suspect part of the cause was also the hardware availability, or lack
thereof. Like Plato, Imlacs were not common in "the network community"
and neither were voice-capable terminals. So unless you had one of
those, you didn't understand why things like low latency were needed.
As a result, the "rough consensus" for such mechanisms never emerged.
Plato, and Maze, and Conversational Voice, and no doubt others,
influenced the Internet. But not enough to drive the associated
functionality all the way to deployment.
Sometimes history is about what didn't happen. And why.
/Jack
On 8/23/21 7:47 PM, Steve Crocker wrote:
> Jack,
>
> A classic analysis of bits vs trits says trits are slightly more
> efficient. The analysis is based on assuming that it takes b parts to
> represent a digit in base b. Two parts for a bit, three parts for a
> trit, four parts for a quit(?!). The information content of k digits
> in base b is b^k. The "cost" is b*k. The optimal base is e
> (2.71828...). Bases 2 and 4 are equal. (The information content of 2k
> bits is 2^(2k). The information content of k quits is 4^k. The costs
> are the same, i.e. 4k.)
>
> Trinary is better but not by much. Using six parts, you can make
> three bits or two trits. The information content of three bits is 8.
> The information content of two trits is 9.
>
> A different consideration of using trinary vs binary is the
> representation of integers. As we all know from hard experience, twos
> complement representation of signed integers gives you an asymmetry
> with one more negative number than positive number. Switch to ones
> complement and you wind up with two representations of zero. Trinary
> gives you a naturally symmetric representation of signed integers.
>
> I think that's the end of the advantages of trinary over binary. But
> I'm VERY impressed you took the time and effort to actually build such
> circuits. Bravo!
>
> Steve
>
>
> On Mon, Aug 23, 2021 at 10:29 PM Jack Haverty via Internet-history
> <internet-history at elists.isoc.org> wrote:
>
> Back in the 60s, a lot of computer technology was not yet cast in
> concrete. There were lots of choices. But then someone pursues one
> choice, and if it works reasonably well, others follow the same path.
> It doesn't take very long for the "installed base" to become so large
> that it's unlikely that some other initial choice could easily take
> over. Think about how long it's taken, so far, for IPV6 to
> supplant IPV4.
>
> Sometime around 1968, as a learning experience in some lab course at
> MIT, I decided to make some non-binary logic. At the time, analog
> computers were still around, and digital computers hadn't yet agreed
> even on how many bits were in a byte, or how to encode characters, or
> what order bits should be in a computer memory word. But bits were
> pretty well established.
>
> I figured there must be other choices. So I made some ternary logic.
> Unlike binary, which dealt with 1s and 0s, I used +1, 0, and -1 as the
> three possible states. Electronically it translated into positive,
> negative, or no current. Using transistors and such components, I made
> some basic logic "gates" that operated using three states instead of
> two. Was that a good idea? Probably not, but it was a good way to
> learn about circuits. Instead of bits (binary digits), how about
> manipulating trits (trinary digits)? There's nothing magic about
> 1s and 0s.
>
> Shortly thereafter, binary took over as circuitry went into integrated
> circuits and a whole industry came into being around binary computers.
> If some other kind of approach, ternary, quaternary, or whatever, is
> better than binary, we'll probably never know. I suspect something
> might happen soon with qubits, though, to challenge bits' supremacy.
>
> There are lots of ways to do things, and the one that "wins" might not
> have been the best choice.
>
> Imagine how networking and computing might have evolved with trits
> instead of bits....
>
> /Jack
>
>
> On 8/23/21 12:15 PM, John Day via Internet-history wrote:
> > Agreed. There are only so many ways to do something. ;-)
> >
> >> On Aug 23, 2021, at 14:46, Craig Partridge <craig at tereschau.net> wrote:
> >>
> >>
> >>
> >> On Mon, Aug 23, 2021 at 8:12 AM John Day via Internet-history
> >> <internet-history at elists.isoc.org> wrote:
> >> It is not uncommon in the history of technology (it has been
> observed back several centuries) that it isn’t so much direct
> transfer of technology but more someone brings back a story along
> the lines of, ‘I saw this thing that did thus and so and kind of
> looks like t.’ Which gives someone the idea, that if it exists,
> then how it must work like this.’ It isn’t quite independent
> invention, but it isn’t quite direct influence either.
> >>
> >>
> >>
> >> Related comment -- from my various interactions with historians
> about technology history. If the available technology is limited
> (as it was in the 1950s/60s/70s and early 1980s in many
> dimensions) then your solutions to certain problems are going to
> look rather similar. That doesn't mean that two similar
> solutions influenced each other... The trick in writing tech
> history is figuring out where there was a choice space and where
> there wasn't (much of) one.
> >>
> >> Craig
> >>
> >>
> >> --
> >> *****
> >> Craig Partridge's email account for professional society
> activities and mailing lists.
>
>
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>