Welcome to Runtime! Today: AWS makes a $4 billion AI bet, the latest update on the massive MOVEit breach, and this week in enterprise startup funding.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Pay to play
A few weeks ago in Seattle, AWS CEO Adam Selipsky explained the company's historical reluctance to make big acquisitions as a positive, in that AWS has been innovating so gosh darn much it has never really needed to consider outside alternatives. He did, however, leave the door "open to not only acquisitions, but also to investments of different types."
At that time Selipsky and team were likely putting the finishing touches on the company's investment deal with Anthropic, which was announced Monday morning. Under the terms of the deal, Anthropic agreed to train and deploy its models on AWS's homegrown AI chips in exchange for up to $4 billion in investment over the next several years.
- Anthropic will also use AWS as its "primary cloud provider," which will increase its use of AWS overall but allow it to maintain its relationship with Google Cloud as a hedge.
- The deal also includes a long-term commitment for Anthropic to make its models available through Amazon Bedrock, the managed service used by AWS customers to access several different types of AI models.
- Amazon will invest $1.25 billion in Anthropic right away and will likely get a great deal of that investment back in cloud computing revenue before too long.
- Anthropic will also help AWS design future generations of its Trainium and Inferentia AI chips, which are currently positioned as the bargain-basement alternative to Nvidia's powerful and scarce AI chips.
With the Anthropic deal, all the major cloud providers have now put substantial investment behind technology invented elsewhere, as they chase what they believe to be the next major breakthrough in the history of computing by funding the startups that are creating it.
- The catalyst that set this year's funding frenzy in motion was Microsoft's $10 billion investment in OpenAI in January, which followed a $1 billion investment a few years back.
- Google sank $300 million into Anthropic earlier this year only to watch it turn around and cut a more substantial partnership with AWS this week.
- Google, however, already has a substantial in-house AI research team in the combination of DeepMind, the AI research group it bought in 2014 for $650 million, and Google Brain, which created the transformer technology that underpins OpenAI's GPT-4.
- Even Oracle is playing a part, joining Nvidia and Salesforce in a $270 million funding round for Cohere.
But most startups fail. Companies like OpenAI, Anthropic, and Cohere are pushing the boundaries of large language models, but it's becoming increasingly clear that a variety of AI models — both commercial and homegrown — are going to shape the future direction of enterprise tech products and services.
- That's been AWS's position all along, which suggests the motivation behind the Anthropic deal is as much about improving its optics with enterprise tech buyers as anything else.
- Even Microsoft is working on in-house and open-source models that will be far less expensive to use than OpenAI's stuff and potentially just as effective for certain applications, The Information reported Tuesday.
- The legal implications of building software and creating content using LLMs trained on public data are far from settled, and a lot of companies will want to use models based on training data that won't get them sued.
With the Anthropic deal, AWS clearly felt the pressure to do something beyond invoking metaphors about long-distance running.
- An investment of $4 billion over several years isn't very expensive considering AWS will likely generate close to $100 billion in revenue this year despite growing at its slowest pace ever.
- AWS attained that position because it made it easy for companies to build and consume the wide variety of software they needed to run their business more reliably than they could on their own.
- Microsoft's exclusive deal with OpenAI means AWS is in trouble if OpenAI's breakthrough models become the standard platform for generative AI applications, but that outcome is far from certain.
No one likes to MOVEit MOVEit
Over 62 million people have now been affected by the breach of Progress Software's MOVEit file-transfer application, after the National Student Clearinghouse disclosed Monday that personal data belonging to students at almost 900 colleges and universities had been compromised.
"Only" 51,000 current and former students were victimized this time around, which just goes to show how widely this particular breach has impacted life in these United States, home to 88% of the affected organizations, according to Emsisoft. And we may never know the true extent of the breach given that several affected organizations might have chosen to pay the ransom.
Security experts quoted by The Record believe the organization behind the attacks "has ended up netting anywhere from $75 million to $100 million just from the MOVEit campaign." It does not appear much progress has been made in identifying the specific individuals behind the attacks.
OpenAI is weighing a plan to let existing employees sell their shares at a valuation approaching $90 billion, more than triple what it was worth at the time of Microsoft's investment, according to the Wall Street Journal.
MotherDuck raised $52.5 million in Series B funding, which values the database startup at $400 million.
Kneron added $49 million to complete a total Series B funding round of $97 million, as it chases Nvidia with AI chips designed to run across edge networks.
The Runtime roundup
The CIA plans to build a ChatGPT-like model to search across intelligence data. What could possibly go wrong.
In other questionable-energy news, Microsoft wants to hire a nuclear engineer to explore using nuclear reactors to power its cloud data centers.
SAP jumped into the generative AI pool with Joule, a new assistant-style application that can search across a customer's SAP data with natural language commands.
AI startup Lamini said AMD's ROCm software development platform "has achieved software parity" with Nvidia's CUDA toolset after training its LLMs on AMD's chips for the past year.
Confluent announced plans to integrate its streaming data service with several of the vector databases used by AI developers, including…
…MongoDB, which separately announced new features for its flagship database that improve the ability of its customers to work with the unstructured data at the heart of many AI applications.
Thanks for reading — see you Thursday!