How have you gotten AI agents to work at your company?

After more than a year of hype and promises, companies are starting to settle on best practices for deploying and managing AI agents alongside critical business workflows. Nine members of our Roundtable discussed how they successfully rolled out AI agents for tasks other than software development.


Featuring:
Don Schuerman, Pega
Zach Lloyd, Warp
Kunal Anand, F5
Josh Fecteau, Teradata
Saket Srivastava, Asana
Michael Ameling, SAP
Adrian McDermott, Zendesk
Naveen Zutshi, Databricks
Rob Lee, Pure Storage

Don Schuerman

CTO, Pega

We’ve learned that the key to making AI agents work is simple: anchor them in well-defined workflows, not free-form reasoning at runtime. Most failures stem from expecting large language models to make autonomous decisions on the fly, which results in unpredictability, inconsistency, and spiraling costs. We recommend flipping that script and using AI at design time to map outcomes, reimagine processes, and design efficient workflows. Then, deploy agents within those workflows for deterministic, auditable execution. At runtime, agents act as a semantic layer, routing unstructured requests from customers and employees to the right process to deliver repeatability, compliance, and speed.

But tech alone isn’t enough. Success demands modernized legacy systems, so that business logic and data are available for use in agentic processes. The basics still matter. Get workflows right, connect them to reliable data, and use AI to accelerate design, not replace foundational discipline. Done right, agents don’t just automate tasks; they transform operations with precision and scale.
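The runtime pattern described above, with agents acting as a semantic layer that routes unstructured requests into predefined workflows, can be sketched minimally. The workflow registry and keyword matcher below are illustrative stand-ins for an LLM-based classifier, not Pega's implementation:

```python
# Illustrative sketch: an agent maps free-form requests onto workflows that
# were designed and audited ahead of time, so execution stays deterministic.
# The hard-coded keyword table stands in for a real LLM-based classifier.

WORKFLOWS = {
    "update_address": ["address", "moved", "relocation"],
    "dispute_charge": ["charge", "dispute", "refund"],
}

def route_request(text: str) -> str:
    """Map an unstructured request to a registered workflow, or hand off."""
    lowered = text.lower()
    for workflow, keywords in WORKFLOWS.items():
        if any(k in lowered for k in keywords):
            return workflow      # deterministic, auditable execution path
    return "manual_triage"       # no confident match: route to a person
```

The key design point is that the agent only chooses among predefined workflows rather than improvising actions at runtime, which is what makes the execution path repeatable and auditable.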


Zach Lloyd

Founder and CEO, Warp

I think most teams have treated agents like a quick experiment. They try a few workflows, get mixed results, and move on, but that never builds real learning. What we’ve found works for our team is treating agent output like work from a junior teammate. That means the person who writes the prompt owns the result. “The AI did it” is never an excuse for low quality.

We’ve also mandated that every task an agent can reasonably handle should start with a prompt. That doesn’t mean the agent always finishes the job, but it should always get the first shot. If you’re stuck after about ten minutes, try a different approach. If that still doesn’t work, do it manually. The goal is habit-building, not blind automation.


Kunal Anand

Chief Product Officer, F5

We’ve approached AI agents from two perspectives. First, on the user side, we’ve focused on integration, not experimentation. Agents become useful only when they can safely operate within the tools and systems people already use, which means tight integration with identity, access management, systems of engagement (e.g., Microsoft 365, Confluence), and sources of truth (e.g., Snowflake, JIRA). From the start, we support automation-first agents grounded in identity, with actions scoped, evaluated, audited, and governed.

Second, we actively encourage experimentation at the individual and team levels. That experimentation is bounded by clear guardrails, not left to chance. In parallel, we’ve built a dedicated AI Center of Excellence focused on defining repeatable patterns, shared tooling, and proven methods that deliver results quickly.


Josh Fecteau

Chief Data and AI Officer, Teradata

First, AI governance and enablement must be tied to overall company objectives and metrics to be successful, with buy-in and accountability at all levels. Second, we’ve been able to deploy agents across the enterprise that are fully autonomous, have measurable ROI, and can stand up to CFO scrutiny because they were developed on a centralized enterprise AI and knowledge platform. Agents developed in silos or on top of ungoverned data and processes could not come anywhere close to the value of agents where data pipelines, governance, semantic context, and security were built in. One use case can quickly multiply to dozens with this approach.

The key learning: agents succeed when they have appropriate autonomy boundaries, transparent cost tracking tied to business metrics, and infrastructure that enables reuse across use cases rather than point-to-point integrations for each new agent. We’ve also paused several initiatives where governance overhead eliminated efficiency gains or risk-reward ratios didn’t justify production deployment; that disciplined approach is what makes the successful agents sustainable alongside critical workflows.


Saket Srivastava

CIO, Asana

Most AI agent pilots fail because they’re dropped into workflows that were never designed for them. Without clear owners, success metrics, and structure, they just create more noise.

What we realized next is that not all work should be treated the same. Some processes should run the same way every time — those can be automated. But the work that drives real outcomes is cross-functional, ambiguous, and constantly changing. That’s where agents need to stay in the workflow alongside people, helping them reason, coordinate, and adapt.

In IT support, for example, agents handle the repeatable intake and triage, but also stay involved as cases evolve by flagging risks and dependencies, while humans remain accountable for the outcome.

From day one, we pair this with governance: clear permissions, visibility, and oversight, plus training so employees can use AI responsibly. We’re not trying to replace judgment. We’re using AI to strengthen it inside the flow of work.


Michael Ameling

President of SAP Business Technology Platform, SAP

One of the biggest misconceptions about AI agents is that autonomy is the goal. In practice, that’s where most deployments stall or break. Agents work best when they’re treated as constrained, reliable infrastructure designed to operate quietly in the background rather than adding operational complexity.

Instead of being given broad authority, agents are embedded within existing operational boundaries and focused on coordinating work. They assemble context, route work, and prepare actions across systems that already own execution. This keeps core workflows predictable while still allowing limited adaptive behavior.

The discipline is less about tooling and more about sequencing. Agents are introduced gradually with clear permissions, visible decision paths, and defined handoff points to humans. If they can’t be observed, audited, or rolled back, they don’t ship.

The most effective agents don’t feel intelligent; they feel dependable. That’s when teams start to trust them, and real adoption follows.


Adrian McDermott

CTO, Zendesk

We’ve gotten AI agents to work by staying focused on the goal: not to automate service as it’s been, but to raise service quality and make it consistently exceptional. We follow three key best practices.

First, we utilize confidence thresholds to manage and validate AI agent accuracy. By adjusting threshold settings, we allow customers to determine how certain the AI must be before triggering things like a specific action on a support request. If the AI’s confidence score falls below the set level, it defaults to a clarifying question rather than risking an incorrect resolution.

Second, we maintain a human-in-the-loop model for supervision and quality control. This involves support agents and administrators regularly reviewing AI-generated responses and fine-tuning the AI to improve future performance. 

Finally, we recognize that automation drives escalation. We enable our customers to provide clear escalation paths, so that their customers are always able to transfer to a human agent when needed. These escalations can be driven both by the user requesting a human themselves or by automated escalation paths that trigger based on criteria like poor service or customer-satisfaction scores. 
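A minimal sketch of how the confidence thresholds and escalation paths described above could fit together (the threshold values, `Resolution` type, and `route` function are illustrative assumptions, not Zendesk's API):

```python
# Hypothetical sketch of confidence-threshold routing with escalation paths.
# All names and values here are illustrative, not a real product API.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # admin-tunable: how certain the AI must be to act
CSAT_FLOOR = 3.0             # escalate if rolling satisfaction drops below this

@dataclass
class Resolution:
    action: str   # "resolve", "clarify", or "escalate_to_human"
    message: str

def route(intent: str, confidence: float, user_asked_for_human: bool,
          rolling_csat: float) -> Resolution:
    # Escalation always wins: an explicit request or a poor-service signal.
    if user_asked_for_human or rolling_csat < CSAT_FLOOR:
        return Resolution("escalate_to_human", "Connecting you with an agent.")
    # Below the threshold, ask a clarifying question rather than risk a wrong fix.
    if confidence < CONFIDENCE_THRESHOLD:
        return Resolution("clarify", f"Just to confirm, is this about {intent}?")
    return Resolution("resolve", f"Handling your {intent} request now.")
```

The ordering matters: escalation criteria are checked before the confidence gate, so a human handoff is never blocked by a confident model.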


Naveen Zutshi

CIO, Databricks

The most effective way to ensure agents deliver value is by treating automation as part of the operating model, not a one-off deployment. We found success by targeting “scaling bottlenecks” — areas where demand grows faster than teams can scale — and by embedding agents into the everyday jobs to be done. The goal isn’t efficiency for its own sake, but freeing people to prioritize strategic work as the organization grows.

We’ve applied this in areas like people operations and go-to-market execution. Agents sit atop existing HR and enterprise platforms to create a unified experience for employees without eliminating systems of record. In sales and marketing, agents index customer references, automate tasks for reps, and derive insights to help teams be more effective with prospects. They also aggregate signals across marketing campaigns to forecast which investments are most likely to generate returns.

Equally important is investing in human-agent collaboration. Agents create value when people trust their outputs. We’ve focused on clear guardrails, consistent review patterns, and empowering teams to redesign workflows, keeping human judgment central.


Rob Lee

Chief Technology and Growth Officer, Pure Storage

Pure made AI agents work by anchoring them to a specific, high-value problem: delivering competitive intelligence fast enough to still matter. The information customer-facing teams relied on was constantly changing, fragmented across many sources, and impossible to track effectively at scale. Rather than approaching this as an AI experiment, we built an agent to close a real operational gap by providing timely, actionable competitive insights directly within existing workflows.

From the outset, we recognized that “perfect” results weren’t the objective. Instead, we focused early deployments on areas where speed and insights mattered most. In competitive insights, speed and efficacy matter more than absolute precision. In services, value comes from quickly surfacing unusual patterns, correlations, or events in the data that would otherwise go unnoticed. We focused on areas where fast insights created immediate leverage, then iterated as usage grew, expanding both the agents’ capabilities and their applications.

Early success came from prioritizing speed and usefulness over perfection. The lesson was simple: AI agents succeed when they solve a real problem and are easy for people to use. 
