Nvidia wants the whole AI data center

An exterior view of one of Nvidia's headquarters buildings in Santa Clara, Calif.
(Credit: Nvidia)

Welcome to Runtime! Today: Why Nvidia thinks enterprise customers will want to go all-in on its tech to build AI, Workday's once-and-future CEO unveils the results of its acquisition of Sana, and the latest funding rounds in enterprise tech.

Please forward this email to a friend or colleague! If it was forwarded to you, sign up here to get Runtime each week, and if you value independent enterprise tech journalism, click the button below and become a Runtime supporter today.


Soup to nuts

One of the biggest factors behind Apple's stunning comeback from near-disaster 30 years ago was the insight that computers tend to work better when the software and the hardware are designed together. But it's one thing to get everything you need to run a $1,200 laptop from a single company, and quite another to get everything you need to run a $40 billion data center from a single company.

Nvidia CEO Jensen Huang laid out a sweeping vision for the AI era Monday at GTC, one that puts not only the company's chips but also a collection of its software tools at the center of enterprise data centers. "We are a vertically integrated computing company. There is no other way," Huang said as he walked attendees through the scope of that vertical integration.

  • Nvidia has been talking about the next-generation Vera Rubin platform — which has seven types of chips, including the Vera CPU and Rubin GPU — for quite some time, and as expected said it would be available in the second half of this year through cloud providers and server vendors as the successor to the Blackwell platform.
  • Huang also confirmed that Groq's LPUs, which Nvidia acquired the rights to sell late last year for $20 billion to take on agentic inference jobs that GPUs weren't designed to handle, will be part of the platform.
  • "If a lot of your workload wants to be coding and very high-value engineering, [or] token generation, I would add Groq to it," Huang said. "It is designed just for inference."
  • That design uses on-chip memory to reduce the number of times the processing engine needs to leave the chip to get data for processing, which allows it to respond to inference workloads much faster.
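
The on-chip-memory argument in the bullets above can be sketched with a back-of-envelope calculation: when a decode step is memory-bound, single-stream tokens per second is roughly the memory bandwidth the chip can sustain divided by the bytes that must be streamed per token. The model size and bandwidth figures below are illustrative assumptions for the sake of the arithmetic, not published Groq or Nvidia specs.

```python
# Rough model of why keeping data on-chip speeds up inference: each
# generated token requires streaming the model's weights through the
# processing engine, so sustained memory bandwidth bounds tokens/sec.
# All numbers here are illustrative assumptions, not vendor specs.

def decode_tokens_per_sec(weight_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode speed when memory-bound."""
    return bandwidth_bytes_per_sec / weight_bytes

WEIGHTS = 20e9  # assume a 20 GB model resident in memory

off_chip = decode_tokens_per_sec(WEIGHTS, 3e12)   # assume ~3 TB/s off-chip HBM
on_chip = decode_tokens_per_sec(WEIGHTS, 80e12)   # assume ~80 TB/s aggregate on-chip SRAM

print(f"off-chip bound: ~{off_chip:.0f} tok/s, on-chip bound: ~{on_chip:.0f} tok/s")
```

Under these assumed numbers the on-chip path is more than an order of magnitude faster, which is the gap Huang is pointing at when he says the design is "just for inference."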

But while its GPUs are a central part of pretty much every company's AI stack these days, Huang also urged attendees to think of Nvidia as a software company. "The only way for us to accelerate applications going forward and continue to bring tremendous speedup, tremendous cost reduction, is through application or domain-specific acceleration," he said.

  • Nvidia plans to address the inference limitations of its Blackwell chips with a new version of Dynamo, which "splits inference work across GPUs by adding smarter 'traffic control' and the ability to move data between GPUs and lower-cost storage, reducing wasted work and easing memory limits," the company said in a press release.
  • It joined the enterprise software industry at large by announcing a system for managing AI agents, and embraced the OpenClaw phenomenon with the release of NemoClaw, which adds security and governance controls to the open-source AI assistant.
  • "The OpenClaw event cannot be understated. This is as big of a deal as HTML. This is as big of a deal as Linux," Huang said, delivering a line we should all revisit in a year.
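
The two Dynamo ideas described above, "traffic control" that routes requests to hardware already holding their context, and spilling cold cache data to lower-cost storage instead of throwing it away, can be sketched as a toy scheduler. To be clear, this is not Nvidia's actual Dynamo API; every name and data structure here is invented for illustration.

```python
# Toy sketch (invented names, not Nvidia's Dynamo API) of two ideas from
# the press release: spill cold cache blocks to cheap storage rather than
# recomputing them, and route requests to the GPU that already has their
# session cached.

from collections import OrderedDict

class CacheTier:
    """LRU tier: hot blocks stay in fast 'GPU' memory, cold ones spill to 'storage'."""
    def __init__(self, gpu_capacity: int):
        self.gpu_capacity = gpu_capacity
        self.gpu = OrderedDict()   # block_id -> data (hot, scarce)
        self.storage = {}          # block_id -> data (cold, cheap)

    def put(self, block_id, data):
        self.gpu[block_id] = data
        self.gpu.move_to_end(block_id)
        while len(self.gpu) > self.gpu_capacity:
            cold_id, cold = self.gpu.popitem(last=False)
            self.storage[cold_id] = cold   # spill instead of discarding

    def get(self, block_id):
        if block_id in self.gpu:
            self.gpu.move_to_end(block_id)
            return self.gpu[block_id]
        data = self.storage.pop(block_id)  # fetch back: cheaper than recomputing
        self.put(block_id, data)
        return data

def route(session_id, gpu_sessions):
    """'Traffic control': prefer the GPU already caching this session, else least loaded."""
    for gpu_id, sessions in gpu_sessions.items():
        if session_id in sessions:
            return gpu_id
    return min(gpu_sessions, key=lambda g: len(gpu_sessions[g]))
```

The point of the sketch is the economics: evicted cache blocks land in storage that costs far less per byte than GPU memory, and a request that finds its context already cached skips redundant work entirely.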

Nvidia just enjoyed one of the most spectacular runs in the history of American business; it reported $27 billion in full-year revenue for its 2023 fiscal year and $216 billion in full-year revenue for its just-concluded 2026 fiscal year. And Huang doesn't think that run is over, predicting on stage that Blackwell and Rubin would generate $1 trillion in revenue by the end of 2027.

  • But that goal will require a healthy portion of Nvidia's customers to embrace its vertically integrated strategy and put most of their eggs in its basket, which enterprise buyers have historically balked at doing if they can avoid it.
  • And 60% of its revenue comes from the five major hyperscalers, who are doing everything they can to develop their own in-house AI training and inference chips as well as the software needed to make it all work.
  • Huang and Nvidia have been very careful not to actually force customers to make an all-or-nothing decision when it comes to using its technology, which he called "horizontal openness" just in case any antitrust lawyers were listening.
  • "We'll work and integrate Nvidia's technology into whatever platform you would like us to integrate into," Huang said, in the middle of a two-hour presentation that suggested Nvidia's platform is the real answer.

All in a day's work

It's been a little over a month since Workday CEO Aneel Bhusri took over the company he co-founded, which became one of the main targets of the SaaSpocalypse earlier this year. On Tuesday the company rolled out the first major additions to its flagship service since his return to the corner office, built on its acquisition of Sana last year.

Sana from Workday is the new core AI interface for Workday's software, which allows HR and finance teams to manage their workflows. "This is the last piece of software that you might have to learn as an employee, because it's pulling from all of your different apps, doing actions, reading information and so on, becoming this sort of universal interface for you to get work done," said Joel Hellermark, CEO of Sana, during a press briefing last week.

The company also introduced an agent that can find and surface corporate data as well as connectors into other enterprise software tools like calendars and task-management systems. While investors are starting to realize that most companies probably won't build bespoke versions of tools like Workday for themselves, established enterprise software companies still have work to do to make sure they don't lose ground to a new generation of vendors built around AI.


Enterprise funding

Replit raised $400 million in Series D funding, which values the no-code AI software development company at $9 billion.

Aixamatic launched with $54 million in seed and Series A funding for its software, which helps companies manage the process of upgrading to new enterprise systems.

Gumloop scored $50 million in Series B funding for its AI platform, which helps companies build and manage agents.

Qdrant raised $50 million in Series B funding for its vector search technology, which helps people build agents with better accuracy.

Standard Template Labs launched with $49 million in seed funding for its AI service-management platform, which will take on incumbents like ServiceNow and Freshworks.

Native launched with $42 million in seed and Series A funding for its cloud security technology, which uses AI to configure cloud security tools through prompts from customers.


The Runtime roundup

OpenAI is putting the finishing touches on a big pivot toward developers and the enterprise, according to the Wall Street Journal, and it just signed a deal with AWS that will allow it to work with the Pentagon, The Information reported.

Microsoft unified its consumer and enterprise Copilot teams under Jacob Andreou, which will allow Mustafa Suleyman to concentrate on developing AI models, according to CNBC.


Thanks for reading — see you Thursday!
