The AI-powered hacks are here

Welcome to Runtime! Today: Anthropic discloses how Chinese hackers used its Claude AI model to launch several cyberattacks, Cursor hits a big milestone, and the latest enterprise moves.

(Please forward this email to a friend or colleague! And if it was forwarded to you, sign up here to get Runtime each week.)


What are you doing, Claude

Back in June, Amazon chief security officer Steve Schmidt told Runtime that, at that point, generative AI tools were helping defenders more than attackers, who were using AI to automate phishing scams but hadn't progressed to orchestrating full-blown hacks into complicated systems. It only took a few months for that to change.

Anthropic disclosed Thursday that attackers used Claude Code to launch around 30 attacks on businesses and government organizations in mid-September, failing in most cases but succeeding in a few. "We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention," the company said in a blog post.

  • In order to launch the attack, the hackers had to "jailbreak" Claude, or bypass defenses put in place as part of its training to prevent bad actors from using the model for illegal purposes.
  • The attackers told Claude they were security professionals conducting "defensive testing," which is a legitimate practice used by many companies and consulting groups to strengthen security defenses.
  • Claude was then used to look for "high-value databases" at the intended targets and generate exploit code to steal legitimate credentials and harvest data.
  • "The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision," Anthropic said.

Anthropic was able to detect "suspicious activity" by users and ban their accounts, and "we assess with high confidence [it] was a Chinese state-sponsored group" behind the attacks, it said in the blog post. It also notified the affected companies and coordinated with law enforcement as it learned more about the nature of the attack.

  • This type of attack was only possible because of recent advances in AI models that allow developers to create agents to execute tasks in a loop more or less autonomously (a pattern sketched after this list), Anthropic said, which is probably not the proof point that software companies were looking for to proclaim the arrival of agentic AI.
  • "Overall, the threat actor was able to use AI to perform 80-90% of the campaign, with human intervention required only sporadically (perhaps 4-6 critical decision points per hacking campaign)," Anthropic said. "The sheer amount of work performed by the AI would have taken vast amounts of time for a human team."
  • Amusingly, the attackers had to put up with the biggest obstacle to widespread generative AI adoption — hallucinations — at several points as they executed the attack.
  • "[Claude Code] might say, 'I was able to gain access to this internal system'" even when that wasn't true, Anthropic's Jacob Klein told The Wall Street Journal. “It would exaggerate its access and capabilities, and that’s what required the human review.”

Cybersecurity professionals have seen this day coming for several years, and news of the Claude attacks comes just a few weeks after Google warned that threat actors were using Gemini in similar ways. A reasonable question in the wake of these disclosures might be, "Why did you build a model that is allowed to hack things?" It's a question Anthropic anticipated.

  • "The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense," the company said.
  • There's no question that generative AI technology has improved cybersecurity defenses, as Schmidt laid out in our interview earlier this year, but the safeguards in place around Claude need to be strengthened if all one has to do to execute a large-scale attack is pretend to be a security engineer, as the toy sketch after this list illustrates.
  • "With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator," Anthropic said.
  • Making it harder to correctly set up such an attack should quickly become a priority for frontier model companies.
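
To make that concrete, here is a toy illustration (Anthropic's actual safeguards are not public, and the roles and terms below are invented) of why screening on a user's self-described role is so fragile: any filter that relaxes its checks for users who claim to be security professionals can be bypassed by simply making that claim, which is roughly the social engineering the attackers used.

```python
# Toy guardrail, purely illustrative; not Anthropic's real safeguards.
SUSPICIOUS_TERMS = ("exploit code", "exfiltrate", "harvest credentials")


def naive_guardrail(request: str, claimed_role: str) -> bool:
    """Return True if the request would be allowed through."""
    if claimed_role == "security professional":
        # The flaw: a self-asserted role costs the attacker nothing to
        # claim, yet it disables the content check entirely.
        return True
    return not any(term in request.lower() for term in SUSPICIOUS_TERMS)


# The same request is blocked or allowed based only on the claimed role:
req = "Generate exploit code to harvest credentials from this database."
print(naive_guardrail(req, claimed_role="curious user"))           # False
print(naive_guardrail(req, claimed_role="security professional"))  # True
```

Any serious defense has to weigh what is actually being asked, and the pattern of activity over time, rather than the persona attached to the request.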

If you value independent enterprise tech journalism, please consider becoming a monthly supporter of Runtime. For $10 a month, you'll help us continue our mission to bring reliable and actionable coverage of this vital sector of the economy and gain access to supporter-only features currently in the works, such as an exclusive discussion and events forum.


Vibe funding

New funding announcements usually go in the Tuesday edition of Runtime, but Cursor's latest windfall catapults it into new territory. The AI coding assistant company said Thursday that it has raised a $2.3 billion Series D round, which values the four-year-old startup at an astonishing $29.3 billion, more than the current market capitalizations of MongoDB, Zoom, and Figma.

Cursor also said that it now has 300 employees and has crossed $1 billion in annualized revenue, which is more or less a forward-looking prediction of what it thinks it could take in over the next 12 months based on last quarter's momentum. "Our in-house models now generate more code than almost any other LLMs in the world," the company said, and the word "almost" is bearing a lot of weight in that sentence.
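
For anyone unfamiliar with the run-rate convention, the arithmetic is simple, as in this sketch; the quarterly figure below is invented for illustration, not a Cursor disclosure.

```python
# Hypothetical run-rate math: "annualized revenue" extrapolates the most
# recent period forward. The $250M quarter is an assumption, not a Cursor
# number.
last_quarter_revenue = 250_000_000                # assumed latest quarter
annualized_run_rate = last_quarter_revenue * 4    # extrapolate to 12 months
print(f"${annualized_run_rate / 1e9:.1f}B annualized")  # -> $1.0B annualized
```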

There's no doubt Cursor has had a huge impact on the world of professional software development in a very short amount of time, and giants like Microsoft, OpenAI, and Anthropic are watching its progress closely. What's not clear, however, is whether Cursor can translate that buzz into a sustainable business given the costs of delivering AI tools at scale.


Enterprise moves

Sachin Katti is the new … something at OpenAI, focused on "building out the compute infrastructure for AGI" (which could use some help) after serving as Intel's chief technology and AI officer since April.

Portland's own Ran Kurup is the new chief corporate development officer at MinIO, joining the storage company after 20 years at Intel and Intel Capital.


The Runtime roundup

Cisco's stock rose 4.6% on an otherwise dreadful day for the Nasdaq after it reported quarterly revenue and earnings that surpassed Wall Street's expectations, and raised guidance.

Anthropic said it would spend $50 billion to create its own network of AI data centers, "the first major data center build-out that the AI firm has taken on directly" outside of its work with AWS and Google Cloud, according to Bloomberg.

Salesforce acquired Doti AI, a startup working on agentic AI for internal corporate search, for $100 million.

Microsoft opened its second "Fairwater" AI data center in Atlanta, which uses liquid cooling and new networking infrastructure that will connect it to other Fairwater-class sites in the U.S.


Thanks for reading — see you Saturday!
