Welcome to Runtime! Today on Product Saturday: Red Hat leads a consortium of companies working on an open-source AI inference framework, Google makes Gemini Code Assist generally available, and the quote of the week.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Ship it
Scale 2.0: Back in February, Vercel CEO Guillermo Rauch pointed out how the rise of generative AI apps upended a generation of thinking about infrastructure design that was primarily oriented around speed. This week Red Hat, along with some prominent backers, announced a new open-source project based around Kubernetes that could help enterprises scale AI workloads much more cost effectively than tools designed for a different era.
The project, called llm-d, was designed "to create a well-lit path for anyone to adopt the leading distributed inference optimizations within their existing deployment framework - Kubernetes," Red Hat, IBM, and Google said in a blog post. Nvidia and CoreWeave are also founding contributors to the project, which incorporates the open-source vLLM and Inference Gateway projects.
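For a sense of what sits underneath llm-d: vLLM serves an OpenAI-compatible HTTP API, which in a Kubernetes setup would typically be exposed behind a Service or the Inference Gateway. A minimal sketch of a client request is below; the endpoint URL and model name are assumptions for illustration, not details from the announcement.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint: in an llm-d-style deployment, a vLLM server's
# OpenAI-compatible API would sit behind a Kubernetes Service or gateway.
VLLM_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, model: str = "my-model", max_tokens: int = 64) -> Request:
    """Construct an OpenAI-style completion request for a vLLM server."""
    body = json.dumps(
        {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    ).encode()
    return Request(VLLM_URL, data=body, headers={"Content-Type": "application/json"})

# Actually sending the request requires a running server, e.g.:
# with urlopen(build_request("Hello")) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

Because the API surface is OpenAI-compatible, the same client code works whether the request lands on one replica or is routed across a disaggregated fleet, which is the point of putting the optimization work behind the gateway.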
Full containment: Chainguard grew quickly over the last five years thanks to its ever-expanding repository of verified and secure container images that developers can use to build their apps without worrying about compromising their software supply chains. Docker, which paved the way for the container to take over enterprise infrastructure more than a decade ago, released its own catalog of container images this week to help tackle the same problem.
Docker Hardened Images are "a curated catalog of security-hardened, enterprise-grade container images designed to meet today’s toughest software supply chain challenges," the company said in a press release. Several other companies will support Docker's images in their distribution channels, including NGINX, Microsoft, GitLab, and Wiz.
Up to code: Given how many applications are developed on their cloud infrastructure platforms, the Big Three have no choice but to develop their own AI coding assistants in hopes of encouraging the vibe coding set to work with their tools. Microsoft was of course way ahead of this game with GitHub Copilot, but the free version of Google Cloud's Gemini Code Assist is now generally available and runs on Gemini 2.5, Google's latest large language model.
Unveiled this week at Google I/O, Gemini Code Assist was updated following feedback from the February preview launch with "more ways to customize workflows to fit different project needs, the ability to more easily pick up tasks exactly from where you left off, and new tooling to enforce a team’s coding standards, style guides and architectural patterns," the company said in a blog post. Gemini Code Assist for GitHub, which lets users review and edit existing code, is also now generally available.
Slack jawed: One of the more interesting splits in the rush to deploy AI agents across enterprise software tools is the gap between the vendors that treat agents like software (such as ServiceNow) and the vendors that want you to believe you're now interacting with "digital teammates," which is how Salesforce described the rollout of Agentforce for Slack this week. Originally expected to become generally available in January, Agentforce 2.0 allows Slack users to direct-message agents as if they were colleagues to assign tasks or find information.
Customers can build their own agents, but the company also released several pre-built agent templates that can surface Salesforce data in Slack or help onboard new employees. “Every role and team has a business process or need that can be fulfilled with Agentforce in Slack,” Slack's Rob Seaman told VentureBeat, a statement that is very difficult to believe.
See the light: We're almost halfway through 2025, and dozens of companies are still trying to establish themselves as the preferred platform for agentic AI app development, with no clear front-runner. Glean made its Glean Agents tool generally available this week, betting that its "horizontal, AI-powered index of enterprise knowledge" will help it stand out by providing agents with more context about enterprise data.
"Glean Agents provides unparalleled enterprise agent creation, customization, deployment capabilities, and control over where agents run and which models they use," the company said in a blog post. One interesting twist is that customers can "choose between three 'temperature' settings—factual, balanced, and creative—to suit various business needs," such as setting agents used by the legal team to "factual" and ones used by the marketing team to "creative."
Stat of the week
Most people only notice data-center uptime when something goes wrong, but they're having fewer opportunities these days to complain. According to new research from The Uptime Institute, just 53% of data-center operators reported an outage in the last three years, down from the 78% who reported one in 2020, and "only 9% of reported incidents in 2024 were classified as serious or severe, the lowest level recorded by Uptime to date," Data Center Knowledge noted.
Quote of the week
"There are [now] incentives to staying on the core code base that I think for some forks today, they may find that the differential capability that they're introducing is not worth maintaining a fork, and they should just contribute it back." — Amanda Silver, corporate vice president and head of product for Microsoft's Developer Division, discussing the company's decision to open-source the GitHub Copilot extension in Visual Studio Code, which could impact the startups that are building AI coding editors around VS Code.
The Runtime roundup
Salesforce is once again talking about acquiring Informatica, which ended Friday with a market cap of $6.8 billion, according to Bloomberg.
CISA warned Commvault customers running apps in Microsoft Azure that basic protections may not protect them from attacks following the successful exploitation of a vulnerability in Commvault's backup software.
Thanks for reading — see you Tuesday!