Why AI agents aren't attacking your database (yet)


Welcome to Runtime! Today: Amazon CSO Steve Schmidt discusses how AI is changing, and not changing, cybersecurity strategies, OpenAI reportedly finds new computing power in an unexpected place, and the latest funding rounds in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Three-minute warning

There are very few companies on the planet that are bigger targets for both script kiddies and military hacking units than Amazon, which tracks an enormous amount of data on consumer spending habits and is home to AWS, the oldest and largest cloud infrastructure service provider. As the old saying goes, the teams tasked with keeping all that data under wraps have to be perfect every day, but those trying to steal that data need only get lucky once.

Steve Schmidt was AWS's CISO for 12 years before he was elevated to the top security job at Amazon in 2022, shortly after former AWS CEO Andy Jassy took over as Amazon CEO. His teams were responsible for protecting AWS during a period of extreme growth, when best practices for cloud security were being developed on the fly, and in 2013 they won the trust of the U.S. government by building secure cloud infrastructure for the CIA, arguably the most important customer win in AWS history.

In a recent interview with Runtime, Schmidt discussed the frequency and style of the attacks Amazon fends off every day, the capabilities that cybersecurity defenders can bring to bear thanks to generative AI, and the potential that machine-generated code could lead to the rise of a new class of software vulnerabilities. A few selected excerpts follow below:

On the increase in nation-state attacks:

Schmidt: We operate a threat intelligence collection platform that we call MadPot internally, which is a global honeypot network. So [across] all of our AWS regions around the world, we observe about a billion potential threat interactions every day. And interestingly, when we launch a new sensor into that infrastructure, it takes about 90 seconds before that sensor is discovered by somebody, and within three minutes, we see people trying to break into it.

I think that you see significantly more interactions now that progress into automated exploitation attempts. We would see the same level of scanning many years ago, but people weren't as rapid with their efforts to break into systems. It used to be a day or two days before someone tried to break into a box after they did their sweep and collection of IPs that were responding.
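The kind of sensor Schmidt describes can be sketched in a few lines of Python. This is a hypothetical minimal example, not MadPot's actual implementation: a listener that accepts any inbound connection and records the source and elapsed time of each probe — the raw signal a threat-intelligence platform would aggregate across regions.

```python
# Minimal honeypot sensor sketch (hypothetical; not how MadPot works).
# It listens on a port, accepts any connection, and records who probed
# it and how long after launch the probe arrived.
import socket
import time

def run_sensor(host="0.0.0.0", port=2222, max_events=10):
    events = []
    launched = time.time()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(events) < max_events:
            conn, addr = srv.accept()
            with conn:
                # Schmidt's "90 seconds to discovery" metric is exactly
                # this delta for the first recorded event.
                events.append({
                    "source_ip": addr[0],
                    "seconds_since_launch": time.time() - launched,
                })
    return events
```

In practice a real sensor would also capture what the attacker sends after connecting, which is how scanning traffic is distinguished from the automated exploitation attempts Schmidt mentions.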

On generative AI in cybersecurity:

Schmidt: I think it's really important not to overstate the threat here. There's a tendency on some folks' parts to view AI as some kind of magical capability that suddenly transforms cybersecurity. The reality is a lot more nuanced. It is certainly an accelerant to existing techniques, but I don't think yet that it's something that fundamentally changes the nature of attacks.

We have not yet seen any real end-to-end automated attacks where someone could simply tell an AI system, "I want to attack this target," in plain English and have it execute the complete exploitation chain with any sophistication. AI changes how code is created; it doesn't change how the code works. While certain attacks may be simpler to deploy and therefore more numerous, the foundation of how we detect and respond to these events tends to remain the same right now.

On the security threats from AI-generated code:

Schmidt: It is statistically possible, sure. One of the reasons that we don't internally just have the LLM generate the code and put it into production without a human reviewing it is because LLMs are not perfect, and so we want a human set of eyes on this in review.

The important part about that, though, is not only the fact that there's a human making a review and fixing or approving things, but that you take the output of that review and put it back into the training process, so you get that feedback loop. It says, "okay, here's what should have been produced instead of what was." That gets seeded into the training, and the model gets better in the future.
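The feedback loop Schmidt describes can be sketched as a simple pipeline. The function names and data shapes below are hypothetical, purely to illustrate the flow: the model proposes code, a human reviews and corrects it, and the (generated, corrected) pair is saved as a future training example.

```python
# Hypothetical sketch of the review-and-feedback loop: LLM output is
# never shipped without human review, and every review produces a
# training example pairing what was generated with what should have been.

def review_loop(generate, human_review, training_examples, prompt):
    """Run one generate -> review cycle and record the feedback pair.

    generate(prompt) -> str        : the model's proposed code
    human_review(code) -> str      : the human-approved or human-fixed code
    """
    proposed = generate(prompt)
    corrected = human_review(proposed)
    # Whether the reviewer approved the code as-is or fixed it, the
    # reviewed result becomes a training example: "here's what should
    # have been produced instead of what was."
    training_examples.append({
        "prompt": prompt,
        "generated": proposed,
        "target": corrected,
    })
    # Only reviewed code reaches production.
    return corrected
```

The design choice worth noting is that the review step does double duty: it gates what ships today and generates the supervision signal that improves the model tomorrow.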

Read the rest of the full interview on Runtime here.


The enemy of my enemy is, uh…

Now that it's clear OpenAI can't or won't rely on Microsoft for all the computing power it believes it needs to develop its models, the company is starting to look for help in some interesting places. According to Reuters, OpenAI just finalized a deal to use Google Cloud for excess computing capacity, despite the fact that Google's Gemini models are one of the company's strongest competitors.

"Google and OpenAI discussed an arrangement for months but were previously blocked from signing a deal due to OpenAI's lock-in with Microsoft," Reuters reported. However, the once-close partnership between OpenAI and Microsoft has been fraying for some time, and the chances of OpenAI, Oracle, and SoftBank building out Project Stargate to the extent they envisioned in January seem doubtful.

It's an interesting deal for Google, which will now be helping train the models that pose as much of an existential threat to its search cash cow as anything that has come along in years. But Google also appears to still have a cloud computing deal in place with Anthropic, although OpenAI's main rival uses AWS as its "primary" cloud provider.


Enterprise funding

Glean raised $150 million in Series F funding, which values the enterprise AI hub company at $7.2 billion.

Linear scored $82 million in Series C funding to help bring AI into its product management platform; unlike many of its peers, the company avoided rushing AI into its product before understanding how it made sense to incorporate.

Swimlane landed $45 million in "growth funding" with the goal of expanding sales of its security automation tool around the world.

Maze raised $25 million in Series A funding and launched its flagship product, which uses AI agents to help companies manage vulnerabilities in their cloud infrastructure.

Sema4.ai added $25 million to a previous Series A round to help companies build and deploy AI agents on Snowflake.

Thread AI scored $20 million in Series A funding as it expands its Lemma product, which also helps companies build and deploy AI agents.


The Runtime roundup

Heroku suffered a widespread and, as of publish time, ongoing outage that took down customers such as SolarWinds and even forced its status page offline for some period of time.

OpenAI's ChatGPT was also down, or at least hobbled, for several hours Tuesday, and while it's not clear whether that's related to Heroku's issue, the company was still having problems as of publish time.

IBM claimed it has solved a quantum error correction problem that will allow it to ship a "fault-tolerant" quantum computer by 2029, and as always, Runtime is taking the over.


Thanks for reading — see you Thursday!
