[ih] One instance where multicast lost (was Re: Wide Area Multicast deployment [was IPv8...])

Karl Auerbach karl at iwl.com
Tue Apr 21 15:17:49 PDT 2026


At Cisco we worked a bit on what we called "embargoed" multicast.  The 
idea is that we could use multicast to disseminate data to a point in a 
Cisco box that was in close proximity to the subscriber.  We would hold 
that data until the embargo time.  This let us act as a fairly neutral 
party in the dissemination of time-sensitive data, but not anything that 
had a near real-time aspect (because one had to send the embargoed data 
somewhat in advance of the release time).
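
(For the curious, the hold-then-release idea can be sketched in a few
lines of Python.  The group address, port, and function names below are
my own illustration, not anything from the actual product:)

```python
import time

# Illustrative values only -- not from the real system.
EMBARGO_GROUP = "239.1.1.1"   # an administratively scoped multicast group
EMBARGO_PORT = 5007

def release_when_due(payload, release_time, sock,
                     clock=time.time, sleep=time.sleep):
    """Hold `payload` at the edge box until `release_time` (Unix
    seconds), then send it to the multicast group.  Returns the
    timestamp at which the send actually happened."""
    now = clock()
    if now < release_time:
        # The data necessarily arrives early; hold it until the
        # embargo lifts.
        sleep(release_time - now)
    sock.sendto(payload, (EMBARGO_GROUP, EMBARGO_PORT))
    return clock()
```

The point of the design is that the wait happens close to every
subscriber, so the post-release delivery time is short and roughly
uniform.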

It probably would have become a legal nightmare for Cisco to have 
released this; we had no idea how far down into the sub-millisecond (and 
sub-microsecond) world securities and financial instrument traders had gone.

It was sometime around that time that I put "vacuum filled fiber" into 
my catalog of bogus network products - and these days I discover that my 
joke has become reality.  (Vacuum filled fiber - 
https://www.cavebear.com/cb_catalog/current/fastfiber/ )

(I was even more amazed that people believed our 1998 press release 
about Gaganet - where we claimed to move packets faster than the speed 
of light on the Interop Show network.  Yes, people really believed 
this!! https://www.cavebear.com/cb_catalog/techno/gaganet/ )

         --karl--


On 4/21/26 2:48 PM, Andrew Sullivan via Internet-history wrote:
> Hi,
>
> On Wed, Apr 22, 2026 at 09:07:01AM -0500, Brian E Carpenter via 
> Internet-history wrote:
>> be useless. Once broadband capacity to the customer became affordable,
>> the demand was there but it wasn't for simultaneous transmission to
>> many recipients, it was for individual audio or video *on demand* to
>> subscribers.
>
> I think I can tell this story now, though I will keep the names out of 
> it because I know it under NDA.
>
> Some years ago I worked on a contract engagement with a company that 
> offered certain time-sensitive data (you might imagine, say, that 
> getting the data at the right moment could enable decisions that would 
> be extremely profitable or else quite bad, depending on close timing 
> in financial markets).  The solution they had for this product turned 
> out to hinge on multicast: by sending the data multicast they could be 
> sure that nobody could sue them for someone else having received the 
> data first.  (If someone else _did_ receive it first, that was down to 
> effects in the network, so of course every customer that received the 
> data was in the same data centre in racks proximate to the sending 
> node.  But that was the customer's problem to sort out and so absolved 
> this company of liability.  It seems to me, in fact, that without 
> multicast in the first place, the product could never have been built 
> because of the threat of lawsuits.)
>
> What killed this design, to their great chagrin, was NAT. Essentially 
> all the customers had been through multiple rounds of mergers and 
> acquisition, as indeed had the division of the (by the time I advised 
> them) large company who was supplying the data. Getting the multicast 
> packets across the multiple layers of NAT, and being able to prove 
> that that was not causing a meaningful delay for some recipient, 
> effectively meant the redesign of the entire project.  Keep in mind 
> that it is very common in the event of a merger for two former 
> independent companies, now divisions of the same company, to be using 
> the same RFC1918 address ranges. Nobody under such circumstances wants 
> to renumber an entire network.  So the solution is to introduce 
> multiple redundant NAT gateways on-path so that the formerly 
> independent networks can now talk to each other conveniently while 
> using the same address at both ends of the connection.
>
> This got dramatically worse when various corporate edicts came to move 
> things into certain large cloud providers.  For reasons, I am assured, 
> those cloud providers were unable to spell "IPv6" at the outset, and 
> so did everything with IPv4.  As the scale mounted, the use of IPv4 
> squatted on the entire public address space and used a lot of NAT to 
> communicate both outside the data centres and within the centres 
> (between different customers) of said cloud providers.  Combine this 
> with the "internal" networks of a lot of enterprises (including the 
> above-mentioned financial services ones), and you have a problem that 
> eventually made the multicast basis for the product too difficult to 
> support.
>
> In some sense, this is the shadow of the point Dave Crocker was making 
> on-list about installed base.  The installed base effectively 
> guarantees sometimes that you'll lose functionality you might 
> otherwise get, just because of features of the way the installed base 
> actually works.  (It was also a depressing incident for me. It became 
> a clarion example where a really clever technical facility created by 
> some extremely sharp technical minds was going to lose to a design 
> from muppets with spreadsheets and bureaucratic-political skills but 
> no interest in the particulars of the case the designers of this 
> losing facility were trying to address.  Even though a state was not 
> involved, I came to understand it as an example of "legibility" in 
> _Seeing Like a State_.)
>
> Best regards,
>
> A
>
