[ih] ARPANET pioneer Jack Haverty says the internet was never finished

Louis Mamakos louie at transsys.com
Thu Mar 3 10:49:02 PST 2022


The small amount of multicast address space really isn't a problem in
practice.  For any successful, scalable multicast deployment, you'll end
up with source-rooted trees, and the forwarding state in the routers is
a set of (S,G) tuples.  And multicast only makes sense for a large
number of receivers, because of all the effort required to instantiate
the forwarding state in the control plane of your network.
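
To make that concrete, here's a toy sketch in Python (my own
illustration, not any router's actual code) of (S,G) forwarding state.
The point is that the table is keyed on the source and group pair, so
the 224/4 group space itself is never the scarce resource; the per-tree
state in each router is.

    from collections import defaultdict

    class MulticastFib:
        def __init__(self):
            # (source_ip, group_ip) -> set of outgoing interface names
            self.entries = defaultdict(set)

        def join(self, source, group, downstream_iface):
            # a downstream join adds an interface to the (S,G) entry
            self.entries[(source, group)].add(downstream_iface)

        def prune(self, source, group, downstream_iface):
            self.entries[(source, group)].discard(downstream_iface)
            if not self.entries[(source, group)]:
                # state lives only as long as receivers do
                del self.entries[(source, group)]

        def forward(self, source, group, in_iface, rpf_iface):
            # RPF check: accept only traffic arriving on the interface
            # leading back toward the source, then replicate to the
            # join set (never back out the incoming interface)
            if in_iface != rpf_iface:
                return []
            return sorted(i for i in self.entries.get((source, group), ())
                          if i != in_iface)

    fib = MulticastFib()
    fib.join("192.0.2.1", "233.252.0.1", "ge-0/0/1")
    fib.join("192.0.2.1", "233.252.0.1", "ge-0/0/2")
    print(fib.forward("192.0.2.1", "233.252.0.1", "ge-0/0/0", "ge-0/0/0"))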

The larger problem is that multicast requires a large number of receivers
that want to simultaneously receive the traffic.  This is at odds with
personalized content.

I did a multicast product at UUNET so many years ago now, back when
access meant dial-up users.  How do you sell this?  Content providers want to
reach content consumers everywhere.  So multicast distribution is an
optimization, rather than a central part of the solution to this problem.
The customer that I worked with at the time was essentially in the
"Internet Radio" business.  They selected a subset of all their live
streams for distribution by multicast on our network, with about 250K
multicast-enabled dial-up ports.  Their client software would use some
program guide, distributed out-of-band for their customers to navigate and
select content.  The client software also subscribed to a multicast group
to listen for "beacon" messages to discover if a multicast stream was
possibly available.  (And we just transmitted NTP time announcements on
that group every few seconds.)  The client would attempt to join the
group if possible, or fall back to a unicast stream.
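
That beacon-then-fallback logic is simple to sketch.  Assuming a UDP
beacon group (the address and port below are made up; the real ones are
long forgotten), the client-side decision looked roughly like this:

    import socket
    import struct

    BEACON_GROUP = "233.252.0.5"   # illustrative, not the real group
    BEACON_PORT = 5005

    def multicast_reachable(timeout=5.0):
        # join the beacon group and wait briefly for any datagram (the
        # NTP time announcements); hearing one means multicast delivery
        # works on this path, silence means fall back to unicast
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", BEACON_PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(BEACON_GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        sock.settimeout(timeout)
        try:
            sock.recvfrom(1500)
            return True
        except socket.timeout:
            return False
        finally:
            sock.close()

    use_multicast = multicast_reachable()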

This was completely at odds with the "MBONE" experimentation going on at
the time.  Content announcements were sent to a multicast group by each
source, and some client applications listened for these things.  This
wasn't a great model for commercial adoption if the content provider
wanted to reach the most eyeballs, as it reduced the addressable segment
of their market to a very small subset.
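
For contrast, the MBONE-style discovery from the client side looked
roughly like the sketch below: sit on the well-known SAP announcement
group (224.2.127.254, port 9875, later codified in RFC 2974) and
collect whatever SDP announcements drift by.  The parsing here is a
deliberately crude assumption: no authentication data, IPv4 origin.

    import socket
    import struct

    SAP_GROUP, SAP_PORT = "224.2.127.254", 9875

    def listen_for_sessions():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", SAP_PORT))
        mreq = struct.pack("4s4s", socket.inet_aton(SAP_GROUP),
                           socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            data, _ = sock.recvfrom(4096)
            payload = data[8:]   # skip 4-byte SAP header + IPv4 origin
            if payload.startswith(b"application/sdp\0"):
                payload = payload[len(b"application/sdp\0"):]
            print(payload.decode("utf-8", "replace"))

Of course, you only heard announcements if multicast reached you at
all, which was exactly the problem.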

This was back in the mid-to-late 1990s, when dial-up V.90 modems were the
common means of Internet access for residential end-users.  I spent time
with our finance people trying to figure out costs of running a platform
like this, so we'd have at least something to base retail pricing on and
ideally produce a positive margin.  So it was an exercise to understand the
span and extent of a multicast distribution tree across backbone links for
any given stream from a source, and some hand-waving over the cost of the
forwarding state, back when memory was expensive and you had state based on
both source and destination occupying resources.  At the time, this was not
quite top-of-mind, but something to think hard about, having had to upgrade
CPU boards in many routers as the default-free Internet routing table was
growing quite rapidly in those days.
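
A back-of-envelope version of that exercise, with entirely made-up
numbers (a 20 kbit/s audio stream, about right for V.90-era codecs),
shows why the span of the tree mattered far more than the listener
count:

    STREAM_KBPS = 20

    def unicast_backbone_kbps(listeners, avg_backbone_hops=4):
        # every listener drags a private copy across every backbone hop
        return listeners * avg_backbone_hops * STREAM_KBPS

    def multicast_backbone_kbps(tree_links):
        # one copy per link in the tree, regardless of listener count
        return tree_links * STREAM_KBPS

    # 10,000 listeners vs. a tree spanning ~60 backbone links:
    print(unicast_backbone_kbps(10_000))   # 800,000 kbit/s
    print(multicast_backbone_kbps(60))     # 1,200 kbit/s, plus (S,G) state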

And back then, inter-domain multicast was quite... a hack.  Gluing
together sparse-mode PIM IGP infrastructure wasn't at all obvious at
that time.  Of course BGP got co-opted yet again as the all-purpose
container for carrying routing state, but you still had a problem before
IGMPv3 and being able to specify a source when joining a multicast
group.  So we got wonderful hacks like inter-domain source discovery
protocols (MSDP) to forward discovered sources in groups towards the PIM
RP.  Madness.  IGMPv3 made more of this possible to imagine working,
though I had moved on to other things and stopped following in detail
what happened in the interdomain multicast routing space by then.
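
For what it's worth, here's what that IGMPv3 capability looks like at
the socket level today: a sketch of a source-specific join on Linux.
The fallback constant value and the struct layout are Linux-specific
assumptions; other platforms order the fields differently.

    import socket
    import struct

    # Linux value; Python only exposes the symbol on some builds
    IP_ADD_SOURCE_MEMBERSHIP = getattr(socket,
                                       "IP_ADD_SOURCE_MEMBERSHIP", 39)

    GROUP = "232.1.1.1"    # 232/8 is the source-specific multicast range
    SOURCE = "192.0.2.1"   # illustrative source address
    PORT = 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Linux struct ip_mreq_source: multiaddr, interface, sourceaddr
    mreq = struct.pack("4s4s4s",
                       socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"),
                       socket.inet_aton(SOURCE))
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)
    data, addr = sock.recvfrom(2048)   # blocks until the stream arrives

With an (S,G) join expressible by the receiver itself, the RP and the
source-discovery machinery simply drop out of the picture.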

Louis Mamakos

On Thu, Mar 3, 2022 at 12:42 PM Michael Grant via Internet-history <
internet-history at elists.isoc.org> wrote:

> Jack Haverty via Internet-history wrote:
> > IMHO, many things also happen for non-technical and non-business
> > reasons.  Since multicast was needed for some uses of the 'net, but it
> > didn't actually get deployed widely in the Internet (whatever happened
> > to the Mbone...?), people figured out another way to provide it by
> > putting it in separate boxes (the CDNs) from the switches themselves.
>
> From my memory, there were several different ways of doing multicast
> and it was a bit of a mess.  IGMP, PIM, others, I'm sure someone can
> enumerate them all.  Almost no ISP supported multicast, and among the
> few that did, not all were the same; very few routers supported it.
>
> Then there was the issue that it wasn't global.  You couldn't expect
> to just get something multicast to you from anywhere on the internet.
>
> The address space (224.0.0.0 to 239.255.255.255) was very small, I
> never understood how that was supposed to work in a global context.
>
> You could sort of get it working within a LAN but there was no reason
> to save the bandwidth with switches everywhere.
>
> But technical stuff aside, the final nail in the coffin was that the
> content providers wanted to know who they were broadcasting to, so they
> could advertise to them and get their data and sell it.  Also to be
> able to sell the content behind a paywall.
>
> And then there's content on demand vs live streaming.  You can't pause
> a multicast stream indefinitely.
>
> In the end, trying to save bandwidth using multicasting became harder
> than just using unicast.
>
> Michael Grant
> --
> Internet-history mailing list
> Internet-history at elists.isoc.org
> https://elists.isoc.org/mailman/listinfo/internet-history
>


