Data-center giants launch new networking spec; Anthropic gets dreamy
Today on Product Saturday: six companies you've heard of release a new AI-era networking specification, Anthropic now tells your agents bedtime stories, and the quote of the week.
Welcome to Runtime! Today on Product Saturday: six companies you've heard of release a new AI-era networking specification, Anthropic now tells your agents bedtime stories, and the quote of the week.
Please forward this email to a friend or colleague! If it was forwarded to you, sign up here to get Runtime each week, and if you value independent enterprise tech journalism, click the button below and become a Runtime supporter today.
Network = net worth: The companies building out the infrastructure for the AI boom have had to throw out some old assumptions about piecing together distributed systems and invent new approaches that address the specific characteristics of AI workloads. Six of them — AMD, Broadcom, Intel, Microsoft, Nvidia, and OpenAI — announced this week that they are donating a new networking spec to the Open Compute Project that could solve one of the more pressing bottlenecks in AI data centers.
Multipath Reliable Connection, or MRC, is a new protocol that allows systems that are exchanging data directly from memory to send that data over multiple routes, improving throughput and reliability. "Think of it as replacing a single-lane road spanning a town with a cleverly laid-out street grid system paired with an on-the-fly traffic app, enabling drivers to reroute around slowdowns and road closures," Nvidia said in a blog post.
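To make the street-grid analogy concrete, here's a minimal Python sketch of the general multipath idea — sequenced chunks of one logical message sprayed across several paths with different delays and reassembled in order at the receiver. This is a toy illustration of multipath reliable transport in general, not the actual MRC wire protocol, and all names in it are hypothetical.

```python
import heapq
import random

def send_multipath(message: bytes, num_paths: int = 4, chunk_size: int = 8):
    """Split a message into sequenced chunks, assign each to a path
    round-robin, and return (arrival_time, seq, chunk) tuples in the
    order the network would deliver them (i.e., possibly out of order)."""
    random.seed(42)
    # Each path has its own latency, like streets with different traffic.
    path_delay = [random.uniform(1.0, 5.0) for _ in range(num_paths)]
    chunks = [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]
    arrivals = []
    for seq, chunk in enumerate(chunks):
        path = seq % num_paths                   # spray chunks across paths
        arrival = path_delay[path] + seq * 0.01  # tiny serialization delay
        heapq.heappush(arrivals, (arrival, seq, chunk))
    # Pop in arrival-time order: chunks from fast paths land first.
    return [heapq.heappop(arrivals) for _ in range(len(arrivals))]

def receive_multipath(arrivals) -> bytes:
    """Reassemble by sequence number, regardless of arrival order."""
    return b"".join(c for _, c in sorted((seq, c) for _, seq, c in arrivals))

msg = b"The quick brown fox jumps over the lazy dog"
delivered = send_multipath(msg)
assert receive_multipath(delivered) == msg  # out-of-order arrival, in-order delivery
```

The point of the sketch: because chunks carry sequence numbers, the sender is free to route around a slow or failed path without the receiver ever noticing, which is what single-connection transports can't do.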
Open the gates: Once AI model providers have solved their internal networking problems, they need to use some sort of tool to manage access to inference APIs. This week Baseten announced that it now offers an API gateway built on top of its inference platform that its customers can use to reach their customers.
The Baseten Frontier Gateway "is a managed routing layer that sits natively on top of Baseten Dedicated Inference," which means it can offer lower latency and easier setup for model builders that don't have OpenAI or Anthropic's budget, the company said in a blog post. So far most companies have been content to use AI models from the leading-edge vendors in their AI apps, but that could change if training and serving their own, heavily customized models starts to make economic sense.
Dream until your dreams come true: The anthropomorphization of AI technology will continue until morale improves, so you'd better get used to it. This week Anthropic released a new feature in its Claude Managed Agents product that allows your agents to "dream," which apparently counts as a self-improvement technique in AI circles.
"Dreaming is a scheduled process that reviews your agent sessions and memory stores, extracts patterns, and curates memories so your agents improve over time," Anthropic said in a blog post. The idea is to improve the performance of agents executing multiple tasks over a long period of time or orchestrating teams of agents, and it's also a clever way to rebrand "hallucinations."
Look who's talking: Outside of coding, customer service is probably the largest market for AI agents given that it's a labor-intensive task most companies view as a cost center. But they're equally aware that a bad customer-service interaction can destroy a brand's reputation in a couple of minutes, and this week Twilio unveiled several additions to its flagship platform that promise to improve the quality of those agentic interactions by providing more context.
Twilio Conversation Memory "helps every conversation pick up where the last one left off, so customers never have to repeat themselves and every agent, human or AI, engages at the right point and with the right context," the company said in a press release. A separate tool helps orchestrate customer interactions across agents and people, and another helps companies blend their agents with Twilio's messaging products.
Pure vibes: Efforts to build a personal AI assistant device have gone basically nowhere at this point, with the Humane AI Pin serving as the cautionary tale. Undeterred, this week Vibe launched the Vibe Dot, billed as "a wearable AI assistant built for professionals."
The Vibe Dot aims to turn workplace meetings and conversations into "structured summaries" that "contribute to institutional memory, while voice commands can be sent to connected agents to take action," the company said in a press release. However, given what a generation of tech executives are learning this week about the legal discovery process, it's hard to see the Vibe Dot breaking out from the pack.
As noted above, the AI boom is creating new problems for companies that thought they had a handle on serving their customers through distributed systems only to encounter a new type of workload that doesn't behave quite the same way. According to new research from Akamai, 50% of companies running AI applications struggle to maintain acceptable levels of latency during peak levels of demand, and 43% struggle with bursty, unpredictable demand for AI services.
"Putting AI to work for people is the gateway to economic growth. The global workforce is aging. Birth rates are declining, and the world will face a labor shortage of up to 50 million workers by 2030, but at this exact moment in time, billions of agents and robots are coming online." — ServiceNow CEO Bill McDermott, previewing an uneasy moment in the history of the planet during his keynote address Tuesday at Knowledge 2026.
Some sort of "thermal event" hit an availability zone in AWS's notorious us-east-1 region Thursday evening, causing problems for customers well into Friday and taking down Coinbase for a significant stretch.
Anthropic continues to snap up computing capacity wherever it can find a spare server, signing a $1.8 billion deal with Akamai Friday.
Linux vendors are scrambling to deal with a new zero-day vulnerability called Dirty Frag, which can allow attackers with access to a Linux server to gain full control of the computer.
Canvas, the near-ubiquitous educational software tool, was hit with a ransomware attack Thursday that forced the company to shut down the platform right as colleges are gearing up for finals.
Thanks for reading — see you Tuesday!