[Chapter-delegates] ISOC security

Dave Burstein daveb at dslprime.com
Sun Apr 19 06:42:38 PDT 2026


--

It’s the End of the Internet as We Know It
April 15, 2026
[image: An illustration of three people holding up a massive modem over
their heads and struggling.]
Credit: Kyle Ellingson

By Raffi Krikorian

Mr. Krikorian is the chief technology officer at Mozilla.

Last week, Anthropic announced that its newest artificial intelligence
model, Claude Mythos Preview, would not be released
<https://www.nytimes.com/2026/04/07/technology/anthropic-claims-its-new-ai-model-mythos-is-a-cybersecurity-reckoning.html>
to the public, after the company learned it was capable of finding and
exploiting vulnerabilities that have gone undetected in critical software
systems for decades. Instead, Anthropic gave access to Mythos — and $100
million in credits to use it — to more than 50 of the world’s largest
organizations, including Amazon, Apple, Microsoft, Google and JPMorgan
Chase, as part of a defensive cybersecurity initiative called Project
Glasswing.

Even before the announcement, publicly available A.I. models were already
finding security vulnerabilities in commonly used software. Anthropic’s
researchers acknowledged that other labs are six to 18 months from building
something comparable. These capabilities, and the threats they pose to
cybersecurity, will proliferate. From streaming platforms to online banking
services to search engines that answer everyday questions, broad swaths of
the internet could become unusable.

If we don’t respond carefully and decisively, then the millions of people
who stand to gain the most from A.I.’s progress as a programming tool will
also be the ones most exposed to attack. Leaving them to fend for
themselves could erode the internet as we know it.

You might already be familiar with the concept of vibe coding: using A.I.
tools to turn plain-language descriptions into working software. A shop
owner describes the inventory system she needs, and A.I. creates it. A
dentist describes a patient portal, and A.I. delivers it. Millions of
people who never thought of themselves as software developers — small
business owners, clinicians, nonprofit directors — are creating software
for the first time without any training. But these applications are often
written without security review. Potential flaws, increasingly easy to find
as A.I. improves, could let someone access customer data, take over
accounts or shut the entire application down.
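To make the risk concrete, here is a hedged sketch (not from the article) of the kind of flaw that routinely slips into unreviewed, AI-generated applications: a classic SQL injection, where user input is pasted directly into a query. The table, names, and data below are entirely hypothetical.

```python
import sqlite3

# Hypothetical customer table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("alice", 100.0), ("bob", 250.0)],
)

def lookup_unsafe(name):
    # Vulnerable: the user's input becomes part of the SQL text itself,
    # so input like "' OR '1'='1" rewrites the query's logic and dumps
    # every customer row -- exactly the "access customer data" failure.
    query = f"SELECT name, balance FROM customers WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Safe: a parameterized query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT name, balance FROM customers WHERE name = ?", (name,)
    ).fetchall()

malicious = "' OR '1'='1"
print(lookup_unsafe(malicious))  # leaks every row
print(lookup_safe(malicious))    # returns no rows
```

The fix is one line, but someone has to know to look for it; that is the security review these applications often never get.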

For decades, two kinds of scarcity kept the internet safe — or safe enough.
Writing software was hard, so the people who did it were trained, careful
and few. Finding bugs was also hard, so the worst flaws stayed hidden,
sometimes for decades. It wasn’t a great system. But the difficulty on both
sides created a kind of détente that held.

Now, thanks to new A.I. tools, anyone can write code. Soon, bad actors
could use those same tools to find out what’s wrong with code. The détente
is over.

Most of the internet was built from open-source software. For example, much
of the video you stream online is quietly delivered by FFmpeg, a free,
open-source program maintained by volunteers whose combined budget is
modest by any corporate standard. OpenBSD, an operating system that runs
the firewalls and gateways protecting sensitive networks from outside
attack, and which Anthropic calls “one of the most security-hardened
operating systems in the world,” runs on donations. Unlike the proprietary
software developed by the big firms in Project Glasswing, these projects
exist because someone decided the work mattered more than the paycheck.
They are built by people who have given years of their lives to code that
powers products most of us use every day without knowing it.

According to Anthropic, Mythos found
<https://venturebeat.com/security/mythos-detection-ceiling-security-teams-new-playbook>
a 27-year-old vulnerability in OpenBSD and a 16-year-old vulnerability in
FFmpeg, buried in a line of code that, Anthropic says, other automated
security tools had glossed over five million times. (Both organizations say
they have fixed the issues identified.) Even Firefox, the web browser my
own organization builds, wasn’t spared: When Anthropic ran its previous
model against Firefox, it was able to weaponize an already discovered bug
just twice
<https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/>
out of several hundred attempts. When Anthropic ran Mythos, it succeeded nearly
every time. Across all these projects and many more, the model identified
thousands of vulnerabilities in code. These are the types of issues that
can allow ransomware to shut down hospitals. They’re how cyberattacks can
disrupt critical infrastructure. And they’re how foreign intelligence
services can compromise government networks.

Beyond detecting problems in lines of code, Mythos found the seams in the
informal social contract that holds the internet together. It’s long been
understood that developers would share their work openly, help one another
fix what’s broken and maintain the software that all of us depend on — not
for pay, but because that’s how the community has worked. The veteran
programmer who has been patching critical code for 20 years in his spare
time is in the same position as the shop owner who vibe coded her first app
last Tuesday. Both are exposed. Neither has a security team. Neither
currently has access to Mythos.

To its credit, Anthropic is among the first major A.I. companies to decide
the responsible thing was to slow down. The company says it is committing
$4 million to open-source security organizations. That’s more than anyone
else in this industry has done.

And yet the underlying economics haven’t changed; the most valuable
software infrastructure in the world continues to be maintained by people
working for free, while the companies building fortunes on top of it never
had to pay for its upkeep. Now a powerful new capability has arrived — and
as we’ve seen repeatedly in tech, there’s the risk that organizations with
resources will receive it first and learn to protect themselves, while
others are left vulnerable.

The programmer who gave 20 years of his life to maintain code that runs
inside products used by billions of people? He doesn’t have access to
Mythos yet. He should. The organizations that steward open-source
infrastructure know who these maintainers are and how to reach them, and
are ready to help. That’s a short list and a solvable problem. The shop
owner is different. She shouldn't need Mythos or a tool just as powerful to
defend herself from a cyberattack; she needs only the confidence that the
tools she used were built to protect her from the start.

So, let’s change the default. Every company that ships open-source code in
its products — which is most of the technology industry — should invest in
the essential workers who maintain it. That means funding, but it also
means that A.I. firms contribute engineering time, security expertise and
staff to the projects we all depend on. A.I. companies that are building
tools like Mythos, beyond Anthropic, should put them into the hands of
these workers. And all of us who benefit from open-source infrastructure
need to treat it as what it has always been: as critical as any road,
bridge or power line.

And for the millions of new creators building software for the first time,
we need to make it easy for them to build safely. Integrate security into
the tools they’re already using. Make sure the A.I. that writes the code
also protects the code. Not as an add-on and not as a premium feature, but
as a default. The détente is over. The flaws are visible. The creators are
everywhere. The only question is whether we protect all of them — or just
the ones who can afford to protect themselves.


News worth a tweet @AInews_wire, Did two books on broadband but now mostly
AI. Always trying to get closer to the truth. Happy to exchange ideas.