Newsletter
Anthropic: We built a security doomsday machine
Welcome to Runtime! Today: Why Anthropic thinks its next frontier model is too dangerous to release to the general public, AI infrastructure growing pains aren't improving, and the latest funding rounds in enterprise tech.
Please forward this email to a friend or colleague! If it was forwarded to you, sign up here to get Runtime each week, and if you value independent enterprise tech journalism, click the button below and become a Runtime supporter today.
Glass houses
Even as they've embraced AI tools that can help defend complex distributed systems from attackers, security professionals have kept a wary lookout for an influx of attacks generated by those same tools. On Tuesday, Anthropic announced that its latest, unreleased model could unleash a new wave of software exploits capable of overwhelming today's generation of security tools and strategies.
"Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. It also announced the creation of Project Glasswing, a consortium of tech companies and non-profits that will be given special access to the model in order to get an understanding of what they're up against before it is released more widely.
- "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser," Anthropic said (emphasis theirs).
- Anthropic released details about a few of those vulnerabilities, including "a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world" that "allowed an attacker to remotely crash any machine running the operating system just by connecting to it."
- It also said the model could immediately plot a sophisticated exploit upon discovering those vulnerabilities, and could even chain together multiple vulnerabilities into a single attack.
- "We basically need to start, right now, preparing for a world where there is zero lag between discovery and exploitation," Anthropic's Logan Graham told The Wall Street Journal.
Project Glasswing is the start of that effort to prepare for what participants believe is the inevitability of AI-powered cyberattacks, and it contains some heavy hitters. The Big Three cloud providers, The Linux Foundation, security companies like Palo Alto Networks and CrowdStrike, and dozens of other software organizations that work on development and security will all get access to Claude Mythos Preview and usage credits on Anthropic's services to conduct their research.
- Despite its ongoing dispute with the U.S. government, Anthropic said it would also share details about Mythos Preview with federal officials.
- Participants will study how the model could affect their security posture, and Anthropic said it would report the initial findings to the public after 90 days.
- The group will also discuss ways to modernize the software vulnerability reporting system, which has been groaning under the weight of the explosion in new software while facing funding challenges.
Of course, the Claude Mythos Preview disclosure is yet another opportunity for Anthropic to market the powerful capabilities of its models to developers and security pros, and it's not that surprising that OpenAI did not appear on the list of initial participants.
- Project Glasswing will be a real test of the enterprise tech industry's ability to work together while keeping the details under wraps, which could be a challenge given how many companies and organizations are involved.
- Anthropic's Dianne Penn told Bloomberg that there are protections in place to ensure that members of Project Glasswing keep a tight grip on access to the Mythos model, but she declined to share more detail for security reasons.
It's not clear what kind of test will be used to determine when, or if, Mythos Preview can be released, but Anthropic did say that it eventually hopes to "enable our users to safely deploy Mythos-class models at scale — for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring." And given the breadth and depth of the companies and groups involved, it's clear that Anthropic's briefings on Claude Mythos Preview spooked enough people into taking this threat seriously.
- "The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months," Anthropic said.
Down and out
We've covered the AI industry's struggles to deliver reliable services in the face of skyrocketing demand for several months now, and the situation does not seem to be getting better. Over the weekend Anthropic announced that to manage capacity on its platform, monthly subscribers will no longer be able to use third-party clients like OpenClaw as part of subscription packages that "weren't built for the usage patterns of these third-party tools," Anthropic's Boris Cherny said on X.
Meanwhile, GitHub has also been dealing with uptime problems over the last several months, and it decided Monday to shed a little more light on the increase in demand for its services. GitHub's Kyle Daigle revealed on X that users are on track to push at least 14 billion commits in 2026 after sending just 1 billion commits in all of 2025, which illustrates just how much software is being assembled (if not necessarily used in production) right now.
Anthropic announced Monday that it signed a new TPU computing deal with Google Cloud and Broadcom that won't come online until next year, and GitHub is still in the process of moving its servers over to Microsoft Azure after getting absorbed into its Core AI group last year, so it could be some time before users see relief. In the short term, however, the price of AI might have to go up.
Enterprise funding
Firmus raised $505 million in new funding, which values the Australian AI infrastructure provider at $5.5 billion.
Aria Networks scored $125 million in new funding and announced the general availability of its Deep Networking infrastructure platform, which was designed to maximize token production and delivery.
Coder landed $90 million in Series C funding for its platform-as-a-service technology, which allows customers to run development environments in the cloud rather than on local machines.
depthfirst raised $80 million in Series B funding for its security technology, which is based around a custom AI model.
Q-Factor launched with $24 million in seed funding as it attempts to build a quantum computer using neutral atom technology, which is gaining traction among other quantum computing companies.
Trent AI launched with $13 million in seed funding and announced its flagship service, which was designed to secure AI agents with AI agents.
The Runtime roundup
AWS engineers are working "24/7" to stabilize its data centers in the UAE and Bahrain amid ongoing attacks, CEO Matt Garman told CNBC Tuesday.
AI-generated submissions to the curl open-source project have improved dramatically in recent weeks, after their poor quality forced the project to suspend its bug-bounty program in January, according to maintainer Daniel Stenberg.
Thanks for reading — see you Thursday!