Welcome to Runtime! Today: AWS's Byron Cook explains how blending automated reasoning with transformer models could fix a lot of generative AI's hallucination woes, IBM adds Confluent to its software portfolio, and the latest funding rounds in enterprise tech.
(Please forward this email to a friend or colleague! And if it was forwarded to you, sign up here to get Runtime each week.)
Sometimes it can be hard to remember that the world's best and brightest minds were researching ways to build artificial intelligence decades before Google researchers outlined the transformer architecture behind the models that led to the launch of ChatGPT in 2022. Generative AI delivered a clear breakthrough in usability, allowing just about anyone to tap into those models with natural-language commands, but companies that want to build applications around them have spent the last three years trying, and often failing, to make them produce reliable results.
Byron Cook, vice president and distinguished scientist at AWS, has been working on infusing "automated reasoning" techniques into the cloud leader's services for more than a decade, and is now putting that experience to work to help customers build AI agents they can actually trust. Last week at AWS re:Invent 2025, the company launched Policy in AgentCore, which allows Bedrock customers building and deploying agentic applications "to set boundaries on what agents can do with tools" using those techniques.
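The general idea behind setting formal boundaries on agent tool use can be sketched in a few lines: before an agent's proposed tool call executes, it is checked against explicit rules. The sketch below is purely illustrative — the tool names, rules, and `ToolCall` type are invented for the example, and this is not AWS's actual Policy in AgentCore API.

```python
# Illustrative sketch of policy-bounded agent tool use (not AWS's API).
# Every proposed tool call must satisfy every rule before it runs.

from dataclasses import dataclass

@dataclass(frozen=True)
class ToolCall:
    tool: str
    args: dict

# Formal policy: each rule is a predicate over a proposed tool call.
POLICY = [
    # Hypothetical allowlist of tools the agent may touch at all.
    lambda c: c.tool in {"search_orders", "send_email"},
    # Hypothetical constraint: email may only go to the company's own domain.
    lambda c: c.tool != "send_email" or c.args.get("to", "").endswith("@example.com"),
]

def permitted(call: ToolCall) -> bool:
    """A call is permitted only if every policy rule holds."""
    return all(rule(call) for rule in POLICY)

print(permitted(ToolCall("search_orders", {"query": "late shipments"})))   # True
print(permitted(ToolCall("delete_database", {})))                          # False
print(permitted(ToolCall("send_email", {"to": "user@evil.com"})))          # False
```

The appeal of the approach is that the rules are checked deterministically, outside the model, so an agent's unreliable output can't talk its way past them.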
A blend of the two approaches, known as neurosymbolic AI, could produce AI agents that are both reliable and easy to use. Neurosymbolic AI is a combination of neural-network techniques like large language models and symbolic AI, which is "based on formal rules and an encoding of the logical relationships between concepts," according to Nature.
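A toy example of that split: a neural model proposes an answer, and a symbolic layer built from explicitly encoded logical relationships accepts or rejects it. The facts and helper functions below are invented for illustration, not drawn from any production system.

```python
# Toy neurosymbolic check: the symbolic side encodes "is-a" relationships
# as formal rules and verifies a (hallucination-prone) model's claim.

FACTS = {("penguin", "bird"), ("bird", "animal")}  # encoded logical relationships

def entails(sub: str, sup: str) -> bool:
    """Symbolic reasoning: is-a is transitive, so follow the chain of facts."""
    if (sub, sup) in FACTS:
        return True
    return any(a == sub and entails(b, sup) for a, b in FACTS)

def verified_answer(model_claim: tuple) -> str:
    # model_claim stands in for a generative model's proposed output
    sub, sup = model_claim
    return "accepted" if entails(sub, sup) else "rejected"

print(verified_answer(("penguin", "animal")))  # accepted
print(verified_answer(("penguin", "fish")))    # rejected
```

Because the symbolic check either derives a claim from the rules or it doesn't, a wrong answer is caught mechanically rather than probabilistically — the property that makes the combination attractive for reliability.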
The combined approach could also address one of the most pressing existential problems associated with generative AI: the energy consumption required to make it all work. "We are orders and orders of magnitude better for energy and cost," Cook told Runtime.
Looking for a way to support independent tech journalism this holiday season? For $10 a month, you'll help us continue our mission to bring reliable and actionable coverage of this vital sector of the economy and gain access to supporter-only features currently in the works, such as an exclusive discussion and events forum.
IBM has been trying to jump-start its software business through a series of acquisitions over the past several years, most notably its $34 billion acquisition of Red Hat and more recently its $6.4 billion deal for HashiCorp. As real-time access to data becomes vital to making sure AI agents can execute their tasks, IBM announced Monday that it had agreed to shell out $11 billion for Confluent, the company behind the open-source Apache Kafka data-streaming project.
Confluent sprang to life in 2014 as three former LinkedIn engineers looked to commercialize their work on Apache Kafka, which lets applications publish, store, and process streams of data in real time. It went public in 2021, and while it saw steady growth, the company never became profitable.
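At the heart of Kafka-style streaming is an append-only log: producers append records, each record gets a stable offset, and consumers read from any offset independently. The in-memory sketch below illustrates only that core model — real Kafka is distributed, partitioned, and durable, and this `Log` class is invented for the example.

```python
# Minimal in-memory sketch of the append-only log model behind
# Kafka-style streaming (illustrative only; not the real thing).

class Log:
    def __init__(self):
        self.records = []

    def append(self, record) -> int:
        """Producers append; each record gets a monotonically increasing offset."""
        self.records.append(record)
        return len(self.records) - 1

    def read(self, offset: int) -> list:
        """Consumers read from any offset, independently of one another."""
        return self.records[offset:]

log = Log()
log.append({"order": 1, "status": "shipped"})
log.append({"order": 2, "status": "delayed"})

print(log.read(0))  # both records, in order
print(log.read(1))  # only the second record
```

Because the log is replayable from any offset, downstream systems — including AI agents that need fresh data — can each keep their own view of the stream without coordinating with one another.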
When it comes to building AI apps, "nobody can live with month-old data, or even week-old data, and Confluent has the most capable technology to unlock the real-time value of data," IBM CEO Arvind Krishna told CNBC. "IBM is taking a now proven strategy to its third iteration: Buy a key piece of technology with an open source flavor — Red Hat, HashiCorp and now Confluent — that enterprises want and need professional services to implement," said Holger Mueller, an analyst with Constellation Research.
Unconventional AI, led by former Databricks and Intel executive Naveen Rao, launched with $475 million in seed funding (!) to build "a more efficient computational substrate specifically for AI."
7AI raised $130 million in Series A funding, with plans to take what it called "the largest cybersecurity A round in history" and expand its army of AI agents for security operations.
Yoodli scored $40 million in Series B funding for its business-training software, which uses AI to help budding salespeople practice closing deals, for example.
Gradial landed $35 million in Series B funding to build agents that can be used to orchestrate marketing campaigns.
Lemurian Labs raised $28 million in Series A funding for its "unified compute fabric," which allows customers to write software that can run across different AI environments and perhaps address some of the reliability concerns that have accompanied the generative AI boom.
Prime Security scored $20 million in Series A funding to build out AI agents for infusing security into the software-development process.
The Linux Foundation announced the new Agentic AI Foundation, which will oversee development of key AI building blocks like Anthropic's MCP, Block's goose, and OpenAI's AGENTS.md and includes basically every enterprise tech company of note as a founding member.
Google Cloud and NextEra Energy said they had agreed to build several new data centers around NextEra power plants, but didn't specify how much power capacity they're actually planning to bring online.
Thanks for reading — see you Thursday!