Old meets new: The promise of neurosymbolic AI


AWS's Byron Cook explains during re:Invent 2025 why running agents you don't trust can be like giving a teenager a credit card: you might wind up with a pony. (Credit: AWS)

Welcome to Runtime! Today: AWS's Byron Cook explains how blending automated reasoning with transformer models could fix a lot of generative AI's hallucination woes, IBM adds Confluent to its software portfolio, and the latest funding rounds in enterprise tech.

(Please forward this email to a friend or colleague! And if it was forwarded to you, sign up here to get Runtime each week.)


Reasoning to believe

Sometimes it can be hard to remember that the world's best and brightest minds were researching ways to build artificial intelligence decades before Google researchers outlined the transformer-based models that led to the launch of ChatGPT in 2022. Generative AI models delivered a clear breakthrough in usability, allowing just about anyone to tap into those models with natural-language commands, but companies that want to build applications around those models have spent the last three years trying, and often failing, to make them produce reliable results.

Byron Cook, vice president and distinguished scientist at AWS, has been working on infusing "automated reasoning" techniques into the cloud leader's services for more than a decade, and is now putting that experience to work to help customers build AI agents they can actually trust. Last week at AWS re:Invent 2025 the company launched Policy in AgentCore, which allows Bedrock customers building and deploying agentic applications "to set boundaries on what agents can do with tools" using those techniques.

  • AWS tools like IAM Access Analyzer and S3 Block Public Access have used automated reasoning for years to mathematically prove the outcome of changing a variable, Cook said in an interview with Runtime.
  • But while automated reasoning is based on logical mathematical principles that have been understood for a very long time, it is extremely difficult to apply to complex systems like cloud infrastructure services, he said.
  • By contrast, generative AI tools "are very easy to get started with, but they don't have the guarantees which really allow you to use them in mission-critical situations, whereas the [automated reasoning] tools were prime for mission critical but now you have to go find five genius PhDs to basically babysit the tools," he said.

A blend of the two approaches, known as neurosymbolic AI, could produce AI agents that are both reliable and easy to use. Neurosymbolic AI is a combination of neural-network techniques like large language models and symbolic AI, which is "based on formal rules and an encoding of the logical relationships between concepts," according to Nature.

  • The generative boom was built on the backs of businesses dazzled by the potential of generative AI models constructed around neural networks to solve problems with natural-language prompts, but there are obvious limitations to that approach.
  • "Until very recently, the consistent move by mainstream machine learning had been to try to derive all that is needed from data, without any recourse at all to symbolic systems such as traditional computer code, databases, etc., aiming to replace explicit representation with black boxes," according to AI researcher Gary Marcus.
  • Symbolic systems like automated reasoning can turn non-deterministic generative AI tools into deterministic applications that can potentially be used across a much wider set of scenarios where "suddenly the validity of the rationale, the logical reasoning, the reasoning about rules, the reasoning about policies, suddenly the correctness of that really matters," Cook said.
  • "If this combined approach … can reduce the potential failure rate of agentic AI projects 'by even a fraction of a percent, it would be worth hundreds of millions of dollars,' say analysts at Gartner," Fast Company reported last week in a detailed profile of Cook's work.

The combined approach could also address one of the most pressing existential problems associated with generative AI: The energy consumption required to make it all work. "We are orders and orders of magnitude better for energy and cost," Cook told Runtime.

  • Systems trained around neurosymbolic AI won't need to process and analyze massive data sets over and over to produce results, because the rules for determining the correctness of an output can be expressed in code, Cook said.
  • They'll also be able to address a wider range of questions over time, because the neural-network approach helps symbolic systems understand which problems people actually want them to solve.
  • "The combination of the tools [is] really helping people solve problems, but actually solve them in the sense that they actually get a correct result, as opposed to some fake answer that people think [is] real, but it's not," Cook said.
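The "generate, then verify" pattern Cook describes can be sketched simply. This is an illustrative toy, not AWS code: the neural side proposes candidate answers, and the symbolic side checks them against rules written as ordinary code, so acceptance is deterministic — and the system refuses rather than returning an unverified answer. The scheduling example and rule are hypothetical.

```python
# Illustrative sketch (not AWS code): a neurosymbolic "generate, then verify" loop.
# A model proposes candidates; a symbolic checker, written as plain code,
# decides deterministically whether each one conforms to the rules.

def symbolic_check(schedule, meetings):
    """Rule, expressed in code: no two meetings may overlap."""
    booked = sorted(schedule[m] for m in meetings)  # (start, end) pairs
    return all(end <= nxt_start
               for (_, end), (nxt_start, _) in zip(booked, booked[1:]))

def neurosymbolic_answer(generate, meetings, max_tries=3):
    """Accept a generated schedule only if the symbolic rules confirm it."""
    for _ in range(max_tries):
        candidate = generate(meetings)
        if symbolic_check(candidate, meetings):
            return candidate          # rule-conforming output
    return None                       # refuse rather than hallucinate

# Stand-in for a model: the first proposal conflicts, the second does not.
proposals = iter([
    {"a": (9, 11), "b": (10, 12)},   # overlapping: rejected by the checker
    {"a": (9, 10), "b": (10, 12)},   # valid: accepted
])
result = neurosymbolic_answer(lambda m: next(proposals), ["a", "b"])
assert result == {"a": (9, 10), "b": (10, 12)}
```

Because the checker is code rather than another model pass, verification costs almost nothing — which is the energy argument in miniature.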

Looking for a way to support independent tech journalism this holiday season? For $10 a month, you'll help us continue our mission to bring reliable and actionable coverage of this vital sector of the economy and gain access to supporter-only features currently in the works, such as an exclusive discussion and events forum.


Islands in the stream

IBM has been trying to jumpstart its software business through a series of acquisitions over the past several years, most notably its $34 billion acquisition of Red Hat and more recently its $6.4 billion deal for HashiCorp. As real-time access to data becomes vital to making sure AI agents can execute their tasks, IBM announced Monday that it had agreed to shell out $11 billion for Confluent, the company behind the open-source Kafka data streaming project.

Confluent sprang to life in 2014 as three former LinkedIn engineers looked to commercialize their work on Apache Kafka, the open-source event-streaming platform that makes it much easier to build applications that react to new data in real time. It went public in 2021, and while it saw steady growth, the company never became profitable.
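Kafka's core abstraction is worth a quick sketch. This is a toy model, not real Kafka client code: producers append events to an ordered log, and each consumer reads from its own offset, so every reader sees events in order and picks up new ones as they arrive. The event payloads here are invented.

```python
# Toy model of Kafka's core abstraction (not real Kafka client code):
# an append-only log that producers write to and consumers read from
# at their own offsets.

class Log:
    def __init__(self):
        self.events = []          # the append-only commit log

    def produce(self, event):
        self.events.append(event)

    def consume(self, offset):
        """Return events at and after `offset`, plus the next offset to resume from."""
        new = self.events[offset:]
        return new, offset + len(new)

topic = Log()
topic.produce({"user": 1, "action": "click"})
topic.produce({"user": 2, "action": "purchase"})

batch, offset = topic.consume(0)       # a consumer reads from the beginning
assert len(batch) == 2 and offset == 2

topic.produce({"user": 1, "action": "logout"})
batch, offset = topic.consume(offset)  # and later picks up only what's new
assert batch == [{"user": 1, "action": "logout"}] and offset == 3
```

That decoupling — writers never wait for readers, and readers track their own position — is what makes the model a fit for feeding AI agents fresh data.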

When it comes to building AI apps, “nobody can live with month-old data, or even week-old data, and Confluent has the most capable technology to unlock the real-time value of data," IBM CEO Arvind Krishna told CNBC. "IBM is taking a now proven strategy to its third iteration: Buy a key piece of technology with an open source flavor — Red Hat, HashiCorp and now Confluent — that enterprises want and need professional services to implement," said Holger Mueller, an analyst with Constellation Research.


Enterprise funding

Unconventional AI, led by former Databricks and Intel executive Naveen Rao, launched with $475 million in seed funding (!) to build "a more efficient computational substrate specifically for AI."

7AI raised $130 million in Series A funding, with plans to take what it called "the largest cybersecurity A round in history" and expand its army of AI agents for security operations.

Yoodli scored $40 million in Series B funding for its business-training software, which uses AI to help budding salespeople practice closing deals, in one example.

Gradial landed $35 million in Series B funding to build agents that can be used to orchestrate marketing campaigns.

Lemurian Labs raised $28 million in Series A funding for its "unified compute fabric," which allows customers to write software that can run across different AI environments and perhaps address some of the reliability concerns that have accompanied the generative AI boom.

Prime Security scored $20 million in Series A funding to build out AI agents for infusing security into the software-development process.


The Runtime roundup

The Linux Foundation announced the new Agentic AI Foundation, which will oversee development of key AI building blocks like Anthropic's MCP, Block's goose, and OpenAI's AGENTS.md and includes basically every enterprise tech company of note as a founding member.

Google Cloud and NextEra Energy said they had agreed to build several new data centers around NextEra power plants, but didn't specify how much power capacity they're actually planning to bring online.


Thanks for reading — see you Thursday!
