The vibes are off the rails
Today: the latest in the long-running saga of enterprise tech marketing departments trying and failing to look cool, Oracle customers receiving extortion attempts after a breach, and the latest enterprise moves.
For companies with sensitive data requirements or substantial investments in data centers, a hybrid cloud strategy offers the best of both worlds — and a significant management challenge. Eight members of our Roundtable discussed how to balance cloud and on-premises deployments.
Chief Architect, Flexera
A hybrid cloud strategy, using multiple cloud providers or a combination of cloud and on-prem, gives organizations flexibility and choice, allowing them to select the right tools, services, and locations for their workloads.
Given the complexity of any application or interconnected system deployed across cloud providers or cloud and on-prem, those types of hybrid deployments should be reserved for situations where the benefits or constraints outweigh the drawbacks.
When you do need to span clouds or combine on-prem with cloud in a single connected system, you must ensure that you can properly support and maintain it. That highlights the importance of an observability and monitoring strategy that covers all the connected components and environments. Disaster recovery and business continuity plans also require extra care, because clouds have different availability characteristics than on-prem.
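One concrete way to get that coverage is to tag every component's telemetry with the environment it runs in and ship everything to a single backend. Here is a minimal sketch using the OpenTelemetry Python SDK; the service name, environment label, and collector endpoint are placeholders for illustration:

```python
# Minimal sketch: tag telemetry with its environment so on-prem and
# cloud components show up side by side in one observability backend.
# The service name, environment label, and endpoint are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

resource = Resource.create({
    "service.name": "orders-api",
    "deployment.environment": "on-prem",  # or "aws-us-east-1", etc.
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="collector.example.internal:4317"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout"):
    pass  # application work happens here
```

Running the same instrumentation in every environment, with only the resource attributes changing, is what lets one backend show the whole connected system.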
CEO, Observe Inc.
Most enterprises today have a healthy paranoia about proprietary data: they would like to use it in new AI applications but do not want it used to train public foundation models. Despite reassurances from model providers that data sent through their APIs will not be persisted or used for training, enterprises remain skeptical.
Enterprises are therefore looking to host their own models inside their own security perimeter and do the work needed to bring that data into new agentic AI workflows. Incident management is a good example: enterprises may be comfortable troubleshooting Kubernetes and other infrastructure with public models, and perhaps even with system logs being processed through those models' APIs. But the finer details of the application, such as design documents, runbooks, and architecture documents, they want to host themselves.
The decisions around what enterprises can do where are seemingly ad hoc today but likely will be formalized into company policy in the not-too-distant future.
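A formalized policy could end up looking like a routing table in the inference path: data classes that may leave the perimeter go to a public model API, everything else stays on a self-hosted endpoint. A rough sketch of the idea in Python; the data classes, endpoints, and defaults here are hypothetical, not any vendor's actual API:

```python
# Illustrative only: route prompts to a self-hosted model or a public
# API based on the sensitivity of the data involved. Data classes,
# endpoints, and the default rule are all hypothetical.
from dataclasses import dataclass

# Hypothetical company policy: which classes of data may leave the perimeter.
POLICY = {
    "system-logs": "public-api",    # OK to send to a public model API
    "k8s-events": "public-api",
    "design-docs": "self-hosted",   # must stay inside the perimeter
    "runbooks": "self-hosted",
    "architecture-docs": "self-hosted",
}

ENDPOINTS = {
    "public-api": "https://api.example-public-llm.com/v1/chat",
    "self-hosted": "https://llm.internal.example.com/v1/chat",
}

@dataclass
class Request:
    data_class: str
    prompt: str

def route(req: Request) -> str:
    """Return the endpoint a request is allowed to use; unknown data
    classes default to the self-hosted model."""
    target = POLICY.get(req.data_class, "self-hosted")
    return ENDPOINTS[target]

print(route(Request("system-logs", "why is this pod crash-looping?")))
print(route(Request("runbooks", "summarize the failover procedure")))
```

Defaulting unknown data classes to the self-hosted endpoint keeps the failure mode conservative, which matches the skepticism described above.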
Executive Vice President and GM for Hybrid Cloud and CTO, HPE
The enterprise IT landscape has always been hybrid, and the growing role of AI is deepening that dynamic. We advise organizations to take a strategic approach to application deployment that balances flexibility with control and considers more than just cost or convenience. The focus should be on placing applications where they deliver the most business value and where they make the most sense in terms of data gravity, sovereignty, and long-term goals.
Traditional applications often thrive in self-managed environments, where latency, compliance, and legacy integrations are critical. On the other hand, modern and AI workloads demand high data throughput and scalability, often favoring cloud environments initially. However, as workloads scale, their requirements often shift — and as a result we've seen more organizations go back to private clouds or on-premises infrastructure.
Whether traditional applications or AI workloads, the real key to application deployment is giving organizations the control to optimize for performance, compliance, and cost while maintaining the flexibility to adapt application deployment to changing regulatory, business, and workload demands.
CEO, Hammerspace
The most effective way to operate across clouds and self-managed data centers is to decouple compute from data. This approach allows sensitive datasets to remain on-premises or in designated cloud regions while appearing as one cohesive namespace to any compute cluster. Policies then automate placement, pre-staging, caching, and replication for latency, cost, sovereignty, and energy efficiency, so data location is driven by intent rather than by where a VM happens to reside.
For AI workloads, data should feed GPUs directly from local NVMe to eliminate copy storms and network hops, maximizing utilization. At the same time, the same datasets must seamlessly extend to burst capacity in any cloud using standard file and object protocols. For traditional applications, this delivers consistent access without brittle scripts or migrations.
With centralized governance, organizations achieve faster time to GPU, lower cost per token, greater performance per watt, and the freedom to run workloads anywhere without disruptive infrastructure changes.
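Stripped to its essentials, that kind of policy layer is a set of rules evaluated against dataset metadata. A simplified sketch of the concept in Python; the attributes, location names, and rules are invented for illustration and are not Hammerspace's actual implementation:

```python
# Illustrative sketch of policy-driven data placement: each dataset
# carries metadata, and rules decide where copies live. Attributes,
# location names, and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    sovereign_region: str | None = None  # e.g. "eu" if data must stay in the EU
    latency_sensitive: bool = False
    gpu_training_input: bool = False

def place(ds: Dataset) -> list[str]:
    """Return target locations for a dataset based on placement rules."""
    targets = []
    if ds.sovereign_region:
        targets.append(f"on-prem-{ds.sovereign_region}")  # sovereignty pins data
    if ds.gpu_training_input:
        targets.append("gpu-cluster-local-nvme")          # pre-stage next to GPUs
    if ds.latency_sensitive and not targets:
        targets.append("on-prem-primary")
    if not targets:
        targets.append("cloud-object-store")              # cheap default tier
    return targets

ds = Dataset("training-corpus-v3", sovereign_region="eu", gpu_training_input=True)
print(place(ds))  # ['on-prem-eu', 'gpu-cluster-local-nvme']
```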
CEO, Digitate
When deciding between cloud and on-prem, the business purpose should always come first. Are you trying to accelerate time to market, extend global reach, or manage dynamic workloads? Those drivers often make cloud the preferred choice. But cost of ownership, data sensitivity, regulations, compliance, and skills availability can tilt the decision toward self-managed data centers.
Certain applications lend themselves better to cloud, especially those that improve with AI models, faster compute, and agile infrastructure. On the other hand, stable legacy apps that are coupled with other on-prem tools may be better left untouched. Beyond the applications themselves, other factors also come into play: negotiated contracts, ecosystem effects, financial models such as capex vs. opex, and regulatory requirements such as sovereign data mandates.
In practice, the world is not binary. Most enterprises already operate in some form of hybrid mode, leveraging cloud for agility while anchoring critical systems where control matters most. The challenge, and the opportunity, lies in balancing agility against cost, risk, and governance.
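Those drivers can be made concrete with even a rough per-workload scoring exercise. A hedged sketch in Python; the criteria, weights, and scores are invented for illustration, and the output should be read as a lean rather than a verdict, precisely because the world is not binary:

```python
# Illustrative workload-placement scorecard: score each driver from
# -2 (favors on-prem) to +2 (favors cloud) and weight by importance.
# Criteria, weights, and scores are invented for illustration.

WEIGHTS = {
    "time_to_market": 3,
    "global_reach": 2,
    "workload_elasticity": 3,
    "cost_of_ownership": 2,
    "data_sensitivity": 3,
    "compliance": 3,
    "skills_availability": 1,
}

def placement_lean(scores: dict[str, int]) -> str:
    """Weighted sum of per-driver scores; positive leans cloud,
    negative leans on-prem, near zero means genuinely hybrid."""
    total = sum(WEIGHTS[k] * v for k, v in scores.items())
    if total > 3:
        return f"lean cloud (score {total})"
    if total < -3:
        return f"lean on-prem (score {total})"
    return f"hybrid / case-by-case (score {total})"

legacy_erp = {
    "time_to_market": 0, "global_reach": 0, "workload_elasticity": -1,
    "cost_of_ownership": -1, "data_sensitivity": -2, "compliance": -2,
    "skills_availability": 0,
}
print(placement_lean(legacy_erp))  # lean on-prem (score -17)
```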
CTO, Astronomer
Stop thinking about "cloud versus on-prem" and start thinking about abstraction layers instead. The moment you tie your deployment strategy to a cloud provider or specific bare metal server setups, you've already painted yourself into a corner.
The real goal is making where your stuff runs completely boring, so you can focus all your energy on making what you're building actually interesting: your apps, models, and data pipelines.
Traditional apps and AI workloads don't need separate deployment worlds, just different priorities. Your traditional apps obsess over uptime and SLAs. Your AI workloads care about GPU availability and being able to scale down to zero when nobody's using them.
The solution isn't reinventing everything; it's smart orchestration. Build deployment approaches that can handle both sets of requirements without forcing you to maintain two completely different infrastructure stacks. Think Kubernetes and Terraform-style abstraction, but evolved for the actual hybrid reality most of us live in. That's how you avoid getting locked into vendor-specific thinking while still meeting the distinct needs of different application types.
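One way to picture that kind of abstraction is a single workload spec that renders into whatever substrate sits underneath. A hedged sketch in Python; the spec fields and target names are invented and do not correspond to any particular tool's schema:

```python
# Illustrative sketch of a deployment abstraction: one workload spec,
# multiple rendering targets, so "where it runs" stays boring.
# Field names and targets are invented, not any real tool's schema.
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    name: str
    image: str
    replicas: int = 1
    gpus: int = 0              # AI workloads care about this
    scale_to_zero: bool = False
    min_uptime_sla: str = ""   # traditional apps care about this

def render(spec: WorkloadSpec, target: str) -> dict:
    """Translate one spec into target-specific settings."""
    if target == "k8s-on-prem":
        return {"kind": "Deployment", "replicas": spec.replicas,
                "nodeSelector": {"gpu": "true"} if spec.gpus else {}}
    if target == "cloud-serverless":
        return {"service": spec.name,
                "minInstances": 0 if spec.scale_to_zero else 1,
                "accelerators": spec.gpus}
    raise ValueError(f"unknown target: {target}")

api = WorkloadSpec("orders-api", "registry/orders:1.4", replicas=3,
                   min_uptime_sla="99.9%")
trainer = WorkloadSpec("embedder", "registry/embed:2.0", gpus=4,
                       scale_to_zero=True)
print(render(api, "k8s-on-prem"))
print(render(trainer, "cloud-serverless"))
```

The point is that the traditional app and the AI workload share one spec and one deployment path; only the rendering differs per target.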
Founder and CEO, Backblaze
Enterprises that thoughtfully design hybrid clouds — combining public cloud and owned data centers — can optimize resources and strategy. AI is a great example: neoclouds (vs. traditional hyperscalers) now offer the most accessible and affordable GPUs, leading innovative companies to move data from owned data centers to neoclouds for model training and inference.
In any hybrid strategy, three key considerations apply:
CEO and Co-founder, Pendo
A hybrid cloud strategy offers flexibility and control, so the key is to shift the mindset from "cloud vs. on-prem" to "what drives the best experience and outcome." Whether you're deploying a traditional app needing regulatory compliance or an AI tool with dynamic scaling and access needs, start by understanding how users will interact with the software. That visibility should drive infrastructure decisions, not the other way around.
AI, in particular, demands new thinking: proximity to data, governance, and performance all matter. But so does adaptability. We've seen that organizations that align infrastructure decisions with user success, not just technical requirements, are better positioned to scale, govern, and innovate.