What is enterprise AI going to cost?


Welcome to Runtime! Today: Demand for AI continues to reveal cracks in the enterprise tech pricing models that built the cloud, AWS sends more money to its Trainium business through Anthropic, and the latest funding rounds in enterprise tech.

Please forward this email to a friend or colleague! If it was forwarded to you, sign up here to get Runtime each week, and if you value independent enterprise tech journalism, click the button below and become a Runtime supporter today.


No owners, only spenders

For most of the last two decades, enterprise SaaS companies operated on a fairly simple and predictable pricing scheme for their services: customers pay a certain number of dollars per month for every employee who needs to log into the service. That scheme worked when the infrastructure costs required to support each employee's experience were also simple and predictable; it no longer does.

GitHub became the latest enterprise software company Monday to alter its per-seat pricing plans in the face of exploding demand for its services, which has created an industry-wide scramble for computing resources simply to keep up. "Agentic workflows have fundamentally changed Copilot’s compute demands," the company said in a blog post, and GitHub is far from alone in that experience.

  • "Long-running, parallelized sessions now regularly consume far more resources than the original plan structure was built to support," the company said.
  • As a result, GitHub is no longer allowing new users to sign up for its Copilot Pro, Pro+, and Student plans, which were designed for individuals and small teams.
  • Current Pro users will no longer have access to Anthropic's top-of-the-line Opus models, underscoring how much has changed inside a service originally built around OpenAI's models.
  • And the meter is running: Pro and Pro+ users now have daily and weekly consumption limits, although GitHub said it built tools into VS Code and Copilot CLI to help customers understand when they are approaching their limits.
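The caps described above amount to a simple metering problem: track usage against both a daily and a weekly allowance, and warn before either runs out. Here is a minimal, entirely hypothetical sketch of that logic; none of the class names, methods, or limit numbers come from GitHub, and the real implementation is not public.

```python
from dataclasses import dataclass

# Hypothetical consumption meter, loosely modeled on the daily/weekly
# caps described above. All names and numbers here are illustrative.

@dataclass
class UsageMeter:
    daily_limit: int      # requests allowed per day
    weekly_limit: int     # cumulative cap across the week
    daily_used: int = 0
    weekly_used: int = 0

    def record(self, requests: int) -> None:
        # Count each batch of requests against both caps.
        self.daily_used += requests
        self.weekly_used += requests

    def remaining_today(self) -> int:
        # The effective headroom is whichever cap is closer.
        return max(0, min(self.daily_limit - self.daily_used,
                          self.weekly_limit - self.weekly_used))

    def nearing_limit(self, threshold: float = 0.8) -> bool:
        # Warn once usage crosses a fraction of either cap.
        return (self.daily_used >= threshold * self.daily_limit
                or self.weekly_used >= threshold * self.weekly_limit)

meter = UsageMeter(daily_limit=100, weekly_limit=500)
meter.record(85)
print(meter.remaining_today())   # 15
print(meter.nearing_limit())     # True
```

The point of the dual cap is that a heavy day burns weekly headroom too, which is why long-running parallel agent sessions blow through plans designed around a human typing one prompt at a time.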

GitHub's move comes days after Anthropic confirmed it was shifting enterprise users to a similar pay-as-you-go model and Salesforce introduced a "headless" option designed to be used by agents, rather than people. Meanwhile, Adobe told The Information Tuesday that it is shifting to "outcome-based pricing," which was trendy around the time SaaS companies were first starting to talk about agents but before they actually worked.

  • Anthropic's struggles to stay afloat amid surging demand for its services are well documented at this point, and new pricing schemes could bring an end to the brief era of "tokenmaxxing," or throwing as many AI resources as possible at a workflow to get the job done.
  • Forcing companies to think about how they use AI resources could actually improve adoption by finally acknowledging that cramming generative AI into every nook and cranny of the business is not helping productivity, and reserving those tokens for the areas where it really is, like coding or business-process automation.
  • But outcome-based pricing is trickier given that the definition of a successful outcome could differ across different types of customers, and some outcomes are more strategic than others.

It's important to remember that the current surge in demand for these tools is really only six months old, even if it feels like we have been talking about enterprise generative AI since the beginning of time. The November release of new models from Anthropic and OpenAI supercharged coding assistants and outlined a blueprint for building enterprise agents that could execute tasks in parallel, and vendors and buyers alike still don't exactly understand what a fair price for agentic AI services should be.

  • And that's before they've even begun to think about paying on a per-agent basis, a trial balloon recently floated by a Microsoft executive.
  • Most people agree with IDC, which predicted in a recent report "that by 2028, pure seat-based pricing will be obsolete, with 70% of software vendors refactoring their pricing strategies around new value metrics, such as consumption, outcomes, or organizational capability."
  • Most likely, the cost of using the high-end models is going to go way up, which will test the notion that enterprises can build "good-enough" AI agents around cheaper models from frontier companies or open-source alternatives.

Chipping away

As established earlier, Anthropic desperately needs compute, and AWS has been there for the company since the days when almost everybody thought Microsoft and OpenAI were going to lock up the next decade of enterprise tech spending. AWS announced Monday that it will plow as much as $25 billion into the company over the next decade in exchange for a commitment from Anthropic to spend $100 billion on AWS services over the same time.

AWS will immediately add another $5 billion to the $8 billion it has already invested in Anthropic, which in turn will spend a good chunk of that money on AWS's Trainium AI chips and Graviton CPUs. AWS customers will also be able to access the Claude Platform Console directly through their AWS accounts, which reduces a bit of friction.

Of course, Anthropic needs to find a way to come up with the $100 billion in spending commitments on top of what it has already pledged to spend with other compute providers, but ten years is a long time. Anthropic's success in coding and enterprise services has allowed AWS to gain its footing in enterprise AI, and as Sevens Report Research's Tom Essaye put it Monday, "Amazon and Anthropic are sort of directly in competition with the Microsoft and OpenAI partnership, and I think right now AWS and Anthropic are winning that battle."


Enterprise funding

Factory raised $150 million in Series C funding, which values the AI coding company at $1.5 billion.

Artemis launched with $70 million in seed and Series A funding for its threat detection and response software, which promises to help companies respond quickly to security incidents.

NeoCognition launched with $40 million in seed funding for its research toward developing "self-learning AI agents," which used to be called "employees."

Resolve AI raised a $40 million extension to a previous Series A round, which values the AIOps company at $1.5 billion.

Parasail landed $32 million in Series A funding for its AI training and inference cloud platform.

Solidroad scored $25 million in Series A funding for its customer-service software, which helps companies understand how customers are interacting with their business.


The Runtime roundup

That didn't take long: A bunch of Discord users appear to have gained access to Anthropic's Mythos Preview model, according to Bloomberg, which might help the rest of us understand whether Mythos Preview is a security nightmare or a marketing exercise designed to hide Anthropic's capacity constraints.

Speaking of which, Mozilla said access to Mythos Preview allowed it to find and patch 271 bugs in Firefox, according to Wired.

Vercel's security incident over the weekend was the result of a Context.AI employee inadvertently downloading malware after searching for Roblox cheats, according to CyberScoop, which you have to admit is pretty funny.

SpaceX struck a partnership deal with Cursor that gives it the option to buy the AI coding startup for $60 billion after its imminent IPO, which ... sure, fine, whatever.


Thanks for reading — see you Thursday!
