Why security pros are wary — and excited — about the rise of generative AI

Photo by Arlington Research / Unsplash

Improving cybersecurity is one of the most important challenges companies face on a day-to-day basis, according to enterprise technology vendors, who preach that mantra right up until they see a massive trend like generative AI come along and can't help but blurt out "squirrel!" as they dash off in search of a new revenue stream.

In truth, security has often been an afterthought during the rush to embrace new enterprise tech trends over the past decade. Countless companies are still leaving their S3 buckets open to the web, and it took years before many of those same companies realized that they couldn't depend on the same security measures that protected their virtual machine instances when they migrated to containers.
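
To pick on the S3 example: checking for that particular kind of exposure doesn't take much. Here's a minimal sketch, not any vendor's tooling, that uses AWS's boto3 library to flag buckets missing a complete public-access block, assuming boto3 is installed and AWS credentials are already configured:

```python
# Minimal sketch (nobody's production tooling): flag S3 buckets whose
# public-access block settings are missing or incomplete.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(config.values()):
            print(f"{name}: public-access block incomplete: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public-access block configured at all")
        else:
            raise
```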

The rise of generative AI and large language models over the last six months, however, is a little different. In this new world, software using generative AI can be both an attack surface and a defensive asset, creating new security issues for businesses while also giving them powerful new ways to analyze their weaknesses and defend against novel types of intrusions.

"There's some really strong evidence that you really should not be using large language models, especially not the ones that are commercially available right now, for extremely sensitive tasks, which could be financial advice, or legal advice, or potentially even a medical task," said Liz O'Sullivan, CEO of Vera, which helps customers securely implement AI tech. "That may change in the future, but right now, it's a beta-test situation where it's going to take some time for us to figure out what the AI that exists — and what the AI that we're continuing to make — really can and can't do."

And while nothing defines the average mindset of a CISO so much as constant worrying about the unknown, generative AI could also be a huge benefit to security operations.

"I actually look at this as the only people who are going to be truly impacted around generative AI are the people who don't embrace it one way or the other," said Ryan Kovar, distinguished security strategist at Splunk and leader of the company's security research team. "Network defenders also have an incredible opportunity to utilize generative AI in ways that I can't even imagine yet, but they're going to be able to use this to speed (up) their own detection."

Holes in the buckets

There are two basic security concerns that accompany the rise of generative AI in the enterprise: introducing security holes into existing products by connecting powerful large language models (LLMs) to customer or corporate data, and internal employees exposing those same data sets to the outside world through reckless use of generative AI tools to augment their workflows.

Sprinklr, a customer-experience SaaS company, has licensed OpenAI's enterprise product to use alongside its own homegrown research to help improve the quality of the interactions between the users of its product and those users’ own customers, said Gerald Beuchelt, the company's CISO. But the company isn't just slapping a layer of GPT fairy dust on its code and calling it production-ready.

"When we implement an enterprise product like that, we not only make sure that we are working closely with a company — in this case, OpenAI — to put some safeguards in place from a regulatory perspective, but also from a testing and evaluation perspective," Beuchelt said. "But also once we start to utilize their APIs in order to build responses for our customers who are interacting with the platform, we have additional controls from an engineering perspective into that workflow in order to make sure that the answers that are being provided stay within guardrails."

At some point, companies that adopt generative AI and LLMs are going to want to train their own models using internal data for internal product development, but until they have sufficient expertise in house to do that safely and effectively, they're better off following the lead of the more established AI research companies, O'Sullivan said.

"Our position is that it's a fallacy to think that any team can be as good as these multibillion-dollar companies that are training these models at scale; that's all they do," she said.

Back to basics

The other concern about generative AI that has surfaced repeatedly over the last six months is the potential for internal business users to inadvertently expose sensitive corporate data by using LLMs to write a quick memo before heading out on a Friday afternoon.

"Once you upload information into these various tools, there's a very wide range of models and frameworks in terms of how this data can be used," said Boaz Gelbord, senior vice president and chief security officer at Akamai. "Once that data is out there, it's certainly at risk of showing up in all sorts of unexpected ways. Unlike traditional data loss prevention (tools), we don't really have forensic capabilities for data that gets leaked through AI."

However, this is a concern that has come up time and time again as businesses have embraced cloud services, said Royal Hansen, vice president of privacy, safety, and security engineering at Google. If you haven't developed internal policies around how corporate data is stored, shared, and uploaded to third-party services, you've got bigger problems to worry about than the rise of LLMs, he said.

"Security still relies on a lot of the great work that's gone before us, in web servers, databases, let's call it the basics at some level," Hansen said. "It's easy to sometimes forget those basics when the enthusiasm for something occurs; managing the data and the access, knowing what's being used, knowing the configuration of your web servers, baking into your frameworks input validation, all that."

Locking down access to generative AI tools simply won't work; it would be like telling employees they can't use a search engine that makes them more productive, Gelbord said. Still, that's exactly what Samsung did this month, albeit on a "temporary" basis, after sensitive data was leaked to ChatGPT.

"Organizations are going to have to find a way to adapt and put reasonable policies in place, as well as technical safeguards, so that they can use the incredible possibilities that come with generative AI, but at the same time without taking undue risks," Gelbord said.

AI is red, AI is blue

Generative AI is very likely to change the cat-and-mouse game between the criminal hackers looking to steal corporate data or hold it for ransom and the defenders who try to stop them.

On the offensive side, there's no doubt that malicious actors are going to use generative AI tools to craft better phishing attacks, and to distribute them constantly, Splunk's Kovar said. But those types of attacks are already among the biggest security problems for most companies.

"They're already doing malware pretty damn good, if I'm honest. So they might increase that capability, they might increase the efficacy, but it isn't transformational in every aspect," he said.

Gelbord wasn't so sure.

"In the short term, AI is going to be used to create such realistic lures," he said. "There's just this tidal wave of those types of attacks that are coming, where you couldn't reasonably expect a person, even someone who's relatively vigilant about these kinds of things, to be able to spot that."

However, generative AI technologies are also generating a lot of excitement among security "blue" teams, those responsible for detecting intrusions and attacks often launched by internal "red" teams simulating malicious actors in training exercises.

"If people realized how much network defenders were already using ChatGPT they'd be shocked," Kovar said.

Google is way ahead of most companies when it comes to AI expertise, but it has taken years for that long investment to pay off on defense. "The AI is now catching our red team in a higher and higher percentage of those attempts," Hansen said.

Generative AI technologies could allow defenders to write detection scripts and workarounds much, much faster than they could have with older processes, Kovar said. He cited the response to the Log4Shell vulnerability as a scenario that could have really benefited from generative AI tech: the late-2021 bug sent security teams into a panic as they scrambled to find all the places their organizations were using Log4j, a very popular piece of open-source logging software, in their code.

As a global corporation, Splunk was able to work on this problem on behalf of its customers around the clock, but it still took several days to close all the holes as attackers found new vulnerabilities related to Log4Shell. And it often took hours to write new code to help customers locate each new vulnerability, which could be reduced to minutes with generative AI tech, Kovar said.
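
For a sense of what such a detection script looks like, here's a hedged sketch in the spirit of what Kovar describes, though not Splunk's actual code: it scans log files for the telltale JNDI lookup strings used in straightforward Log4Shell (CVE-2021-44228) exploit attempts. Real attackers obfuscated those strings in many ways, so a toy pattern like this catches only the obvious ones.

```python
# Hedged sketch of a Log4Shell (CVE-2021-44228) log-scanning script,
# not Splunk's code: look for the telltale JNDI lookup strings used
# in straightforward exploit attempts. Obfuscated variants need more.
import re
import sys

JNDI_PATTERN = re.compile(r"\$\{jndi:(?:ldaps?|rmi|dns)://", re.IGNORECASE)

def scan(path: str) -> None:
    """Print every log line containing an un-obfuscated JNDI lookup."""
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, start=1):
            if JNDI_PATTERN.search(line):
                print(f"{path}:{lineno}: possible Log4Shell probe: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:  # usage: python scan.py access.log app.log
        scan(path)
```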

"To me, the idea of generative AI is to augment staff quickly," he said. "And it expands the capabilities of a network security team in ways that I don't think we've fully recognized yet."

That's because this trend is moving incredibly fast, even by the standards of the tech industry. Knowledge workers, product managers, and security teams are going to need to move even faster to adapt.

"We need people to get up to speed on the real limitations that these models have," O'Sullivan said. "And we honestly needed that long before they should have been released to the public, but now they're here, the public loves them, and we're playing cleanup as we often do. But there is a possibility here for a really bright future, and it's one that we're really excited about."
