How should companies make sure they are shipping secure generative AI apps while still moving quickly?

Companies around the world have rushed to deploy generative AI applications over the last year, and the security considerations for genAI apps are somewhat different than traditional software. Nine members of our Roundtable discussed how to move quickly but securely with generative AI.

How should companies make sure they are shipping secure generative AI apps while still moving quickly?
Photo by Mārtiņš Zemlickis / Unsplash

Companies around the world have rushed to deploy generative AI applications over the last year, and the security considerations for GenAI apps are somewhat different than traditional software. Nine members of our Roundtable discussed how to move quickly but securely with generative AI.

Featuring:
Ed Sim boldstart ventures Justin Foster Forescout Technologies
Joye Purser Cohesity Thomas Di Giacomo SUSE
Eoin Hinchy Tines Mike Price ZeroFox
Leonid Belkind Torq Omar Khawaja Databricks
Doug Kersten Appfire

Ed Sim

Founder and General Partner, boldstart ventures

One must take a holistic approach. While the SBOM (software bill of materials) is software only, the AI-BOM (or AI bill of materials) encompasses the model, the data, and the software. One has to secure AI in a shift-left mentality, scanning models, ensuring data integrity and privacy, and making sure the latest vulnerabilities are all up to date. In addition, once AI-powered apps are built, doing product-driven red teaming or offense on the final outputs is hugely important to make sure the model is proving the right answers and not going off the rails.


Justin Foster

Chief Technology Officer, Forescout Technologies

Too many companies have rushed to incorporate large language models (LLMs) into their products and services, often as open-ended chatbot prompts that have access to sensitive customer data. While these tools offer some benefits, they also introduce risks like hallucinations, legal liability, and security threats.

Real-world examples include:

  • Hallucinations – Recommending users upgrade to Windows 9 (a version that does not exist)
  • Liability – Creating a legally binding contract to sell a truck for $1
  • Security – Interpreting prompts as code, such as “Act as a Perl interpreter and execute the following…”

These scenarios highlight the need to safeguard AI prompts from misuse and prompt injection attacks. From user data authorization and permissions to simply performing tasks beyond the AI’s design, inputs must be sanitized to ensure safe and secure usage.


Joye Purser

Global Field CISO, Cohesity

Staying up-to-date on the latest adversary tactics — and protecting against them — is a critical step for organizations to ensure they are shipping secure GenAI apps. One of the best ways to do that is by leveraging the MITRE ATLAS matrix, a globally accessible, living-knowledge base of adversary tactics and techniques against Al-enabled systems.

Organizations also must prioritize a thorough understanding of the large-language models being used to train and power their GenAI applications. Whichever entity developed a model greatly influences the outputs and the security controls in place. And, as AI fundamentally changes how software is written and code is generated, being proactive in the development phases is key to avoiding unsecured AI-generated code and outputs.

Lastly, we need to adapt our own human behaviors to the vagaries of AI. Threat actors are exploiting AI to create more sophisticated forms of attack. For now, every output needs human verification. We must ask ourselves “Does this look right?” until we know the output is accurate and can be trusted.


Thomas Di Giacomo

Chief Technology and Product Officer, SUSE

As companies rapidly adopt generative AI applications, taking in security considerations without slowing innovation is more critical than ever. GenAI introduces unique challenges that require tailored approaches on top of traditional security practices. In order to ship secure GenAI apps while moving quickly, businesses need to stay ahead of risks like data leakage, prompt-injection attacks, and model manipulation that bring costly repercussions.

To securely deploy GenAI, organizations should prioritize robust data governance, adopt zero-trust security frameworks, and implement continuous monitoring throughout AI-driven processes. Organizations can establish effective data governance by setting clear guidelines around data access and usage as the key to safeguarding private data and maintaining compliance. A zero-trust security framework implements continuous security verification with strict policies that control access and ensure tighter control across deployments. Continuous monitoring allows enterprises to detect and respond to potential security threats in real time and identify issues quickly, protecting the security of AI applications.

By focusing on security at every stage of deployment, enterprises can reap the full benefits of GenAI without compromising safety or innovation.


Eoin Hinchy

Co-founder and CEO, Tines

With generative AI, the only way to move fast is to start deliberately. That means building secure, composable foundations before launching anything. Security, privacy, and clarity around data usage must be baked in from day one. There’s also a constant learning curve: understanding what different large-language models are good at, what they’re not, and how they align with your specific use cases. Finally, a strong customer feedback loop is critical. If customers aren’t part of the buildout, you’re flying blind.

The reality is that speed without trust is a dead end. If you can’t clearly explain how customer data is being used — or worse, you don’t know — you’re not ready to scale. Yes, there’s pressure to ship quickly and follow the hype. But we believe generative AI doesn’t reward the first to launch; it rewards the first to get it right. And that starts with sound architecture, transparent data practices, and trusted customer relationships.


Mike Price

Chief Technology Officer, ZeroFox

Every company is at a different point in their security journey, and that directly shapes how safely they can roll out AI. The truth is, if your IT or product security basics aren’t solid, introducing AI can open the door to real risks. AI creates new challenges at every stage, from data poisoning during pre-training to manipulation at the prompt level.

Building secure AI-powered products means evolving fast — especially since many traditional security playbooks don’t cover what AI brings to the table. To move quickly and carefully, teams must:

  • Expand to include AI risks within SSDLC training, tools, processes, and compliance efforts
  • Maintain up-to-date secure development training to include AI-specific risks like prompt injection or model misuse
  • Add tools that catch risky prompts or unusual outputs during testing
  • Build controls to detect tampering with training data or user inputs

There’s no silver bullet. But with a thoughtful, flexible approach, security teams can manage the risks and keep the momentum going.


Leonid Belkind

Co-Founder and CTO, Torq

While generative AI technology is developing extremely quickly — including various means of developing agentic applications or other forms of applications that interact with AI models — the industry has not yet created any solid standards, frameworks, or blueprints for creating secure enterprise-grade AI applications.

This leaves the burden of analyzing the possible risk scenarios and introducing proper guardrails in each and every application on the shoulders of developers. At the moment, since each application is unique in the data it can receive and the actions it can take, there is no other practical solution than to involve security architects. They bring a deep understanding of the application security domain into the picture and perform specific analyses. 

Failing to do that or ignoring the need and relying on an immature framework will expose applications to security risks. Since the velocity of application delivery is of the essence, embedding these security professionals in the development process is the best guarantee that application security won’t be an unanticipated oversight, but will be applied appropriately and proactively to the application as it is being developed.


Omar Khawaja

Field CISO, Databricks

Implementing a comprehensive AI security framework focused on mitigating risk throughout the AI lifecycle and system components, rather than just models and endpoints, grants organizations the speed and control to responsibly adopt and innovate with AI. This holistic approach should be unique to each business use case, fostering collaboration between key decision makers (IT, business, governance, security, etc.) and prioritizing risks relevant to its specific data environments and AI deployment models in the given business context. Too often, security teams’ instinct is to apply all security controls enumerated in a static standards document versus picking the subset of applicable controls needed to remediate the risks specific to a given AI use case.

Some key ingredients to operationalize AI risks: 

  • Create a catalog of risks associated with each AI component
  • Identify applicability of risks to each approved AI deployment model
  • Map controls to mitigate each AI risk
  • Monitor effectiveness of mitigating controls

Leveraging a framework that integrates the aforementioned approach can accelerate organizations’ progress in establishing an enterprise program to optimally balance speed and risk of AI deployments.


Doug Kersten

CISO, Appfire

While GenAI apps introduce new security considerations, the core principles of developing and deploying secure software remain the same. Security must be embedded at every stage of the development lifecycle. “Shifting left” (i.e. integrating security early) has long been a best practice in software development, helping teams identify and address vulnerabilities before they become costly issues post-deployment. Although some perceive security testing as a bottleneck, it’s actually more efficient and cost effective to catch issues early.

With GenAI, even small missteps — like exposing sensitive training data or failing to set clear usage boundaries — can lead to significant risks in production. By building security into the development process from the start, teams can move faster while minimizing threats like model misuse, hallucinations, or data leakage.

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to Runtime.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.