Welcome to Runtime! Today: IBM and Arm strike a partnership to keep customers on mainframes as AI coding agents circle a modernization opportunity, Google drops a new open model, and the latest enterprise moves.
Please forward this email to a friend or colleague! If it was forwarded to you, sign up here to get Runtime each week, and if you value independent enterprise tech journalism, click the button below and become a Runtime supporter today.
For all their talk about embracing modern technologies, an astonishing number of large enterprise companies still rely on mainframes for some of their most important business workloads; as of 2024, that included most banks and airlines, according to IBM. But while the first rule you learn at CIO school is "don't fix something that isn't broken," at some point in the not-too-distant future enterprise tech will talk about mainframes like we talk about punch cards.
IBM and Arm announced a new partnership Thursday that will attempt to bridge the gap between mainframe hardware, which an ever-diminishing number of people understand how to run, and Arm's hardware and software, which is growing very quickly inside cloud providers like AWS. The goal is to "develop new dual-architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security," the companies said in a press release.
The companies want to develop a way to let Arm applications run on IBM's mainframe processors through some sort of virtualization technology while also making sure those applications can meet the strict security and availability standards required by mainframe customers.
Ultimately, IBM and Arm said they want to build hardware that allows enterprise customers to manage workloads across mainframes and Arm processors as they see fit, allowing them to keep up with the Joneses while still hugging Big Iron.
Current mainframe customers "are hesitant to change architectures due to the risk of breaking the ledger, but they face a shrinking pool of legacy specialists," Everest Group's Rachita Rao told Network World. "[The announcement] doesn't change the procurement cycle today, but it de-risks the long-term viability of the LinuxONE or the Z platform as a modern internal cloud."
However, right now cloud providers are salivating over the opportunity to use AI coding agents to drag those mainframe holdouts onto modern platforms in a fraction of the time and effort that mainframe migrations used to take. AWS Transform has been generally available for almost a year, and migration projects that used to take 18 months have been reduced to seven or eight months, said Asa Kalavade, vice president of AWS Transform, in a recent interview with Runtime.
"The most exciting thing that we are very thrilled about is this ability to use a combination of deterministic approaches for understanding your existing application, and then combining that with AI to reimagine the new application," she said.
You can't just tell Claude Code to convert COBOL code into Rust and then hit the bar, but you can tell the coding agent of your choice to analyze a mainframe application and create a detailed specification that actual people can validate, and then turn it loose against that spec.
AI coding agents can also help customers write tests to make sure everything works as it should after the conversion, which in the past took up nearly half the time of a total migration, Kalavade said.
"What we see these customers do is — obviously a single mainframe will have lots of applications — so they're trying a few of them, and getting them to completion within months is giving them huge confidence that they can now start approaching the entire mainframe," she said.
The stakes are high for IBM, which still relies quite a bit on revenue from mainframe customers. Everyone agrees that those companies need a way to tap into the infrastructure and platform technologies that are being created during the AI boom, but nobody knows whether those customers will be content with a pathway to the Arm ecosystem or whether they'd prefer a clean-sheet approach, given how much pain has been taken out of the migration process.
Even IBM was careful to note in the press release that "statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only."
If IBM and Arm's collaboration works, "net-net, IBM mainframe customers will have a lot more software to run on their mainframes," Moor Insights & Strategy's Patrick Moorhead told The Register.
If it doesn't, and AI coding models continue to improve, AWS and its competitors will be happy to help those customers start over.
Four on the floor
More than a year after the AI industry had a collective freakout over the release of DeepSeek's open models, the closed-model companies have reasserted themselves. But there are still a lot of people interested in using models that they can examine and manipulate, and on Thursday Google released an update to its Gemma series of open-weight models that could spark new interest.
Gemma 4 was designed to be run locally on anything from a smartphone to a laptop, and the "larger models deliver state-of-the-art performance for their sizes, with the 31B model currently ranking as the #3 open model in the world on the industry-standard Arena AI text leaderboard, and the 26B model securing the #6 spot," Google said in a blog post. Those two models were designed to run on laptops (and desktops too), while the E2B and E4B models can run on smartphones or edge computing devices like the Raspberry Pi.
Perhaps most notably, Google released Gemma 4 under the permissive Apache 2.0 license, which grants developers far more leeway than the custom Google license that governed earlier releases. Open-weight models don't appear to have had nearly the impact on the AI buildout that open-source software had on the cloud buildout decades ago, but it's still early.
Enterprise moves
Walid Abu-Hadba is the new CEO of Precisely, joining the data-management company after serving as chief product officer at Sage.
The astronauts aboard Artemis II were forced to troubleshoot Microsoft Outlook problems Thursday as they hurtled toward the moon, according to 404 Media, and Redmond, we have a problem.
Thanks for reading — Runtime is off for the weekend so no Product Saturday, see you Tuesday!
Tom Krazit has covered the technology industry for over 20 years, focused on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure and Protocol.