Who gets to decide what "safe" AI means?

Two people on a sidewalk stare up at a bank of video cameras on a wall. (Photo by Matthew Henry / Unsplash)

Welcome to Runtime! Today: The Department of Homeland Security announces a new set of AI advisors focused on critical infrastructure, Microsoft quietly takes aim at the infrastructure-as-code market, and the quote of the week.

(Was this email forwarded to you? Sign up here to get Runtime each week.)

Critical condition

It's always a little fascinating and disturbing to acknowledge how much enterprise technology is sold by tapping into fear; in most cases, the fear that your business will be eclipsed by others who jumped on an emerging technology unless you buy this particular set of tools or services. The AI boom presents a double dose of worries, expanding on old-fashioned FOMO by adding the notion that left unchecked, AI could ruin the world.

Against that backdrop, the Department of Homeland Security unveiled a new AI advisory board on Friday that "will advise DHS on ensuring the safe and responsible deployment of AI technology in [critical infrastructure] sectors in the years to come, and it will look to address threats posed by this technology to these vital services," DHS said in a press release. Several familiar names from the current enterprise AI push made the list.

  • OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella will be on the board, as well as Alphabet CEO Sundar Pichai and AWS CEO Adam Selipsky.
  • The list also, thankfully, includes some non-tech-industry representatives, such as Damon Hewitt of the Lawyers’ Committee for Civil Rights Under Law and Maya Wiley of The Leadership Conference on Civil and Human Rights.
  • The group will meet for the first time in the next couple of weeks, and then hold quarterly meetings to gauge its progress at providing "actionable recommendations" to DHS and critical infrastructure providers.

DHS defines "critical infrastructure" as "sixteen sectors of American industry, including our defense, energy, agriculture, transportation, and internet technology sectors," according to the press release.

  • “AI can be an extraordinarily powerful force to improve the efficiency and quality of all the services that critical infrastructure provides. At the same time, we recognize the tremendously debilitating impact its errant use can have,” DHS Secretary Alejandro Mayorkas said during a press conference Friday, according to Bloomberg.
  • This dichotomy has been a hot topic in cybersecurity circles over the past year, given that generative AI technology unlocks new capabilities for both attackers and defenders.
  • In Mayorkas' scenario, an airline could use AI to detect maintenance problems and reduce downtime, but putting AI in charge of routing decisions could be a very bad idea.
  • The board will be charged with developing "practical guidelines and best practices for safe, secure and responsible AI: Not a board focused on theory," Mayorkas said Friday, according to Axios.

But there are some curious omissions from a group that will influence government policy on AI.

  • Mayorkas said that social media companies like Meta were specifically excluded from the group, which means one of the most prominent open-model providers will not have the opportunity to argue the benefits of that approach.
  • It doesn't include academics who have been critical of how the big tech platform companies represented on the board are advancing AI, such as former Googlers Timnit Gebru and Margaret Mitchell, who now works for Hugging Face.
  • And it also lacks the perspective of an organization like the Allen Institute, which believes AI can have a positive impact on society but that it needs a lot more research in the open to fully understand its power.

DHS noted in its press release that it is scrambling to hire AI experts who will work directly for the agency, and who can hopefully serve as a counterweight to anything recommended by the board.

  • It's hard to imagine the companies that have invested billions chasing AI business are going to recommend substantial restrictions on its use, especially given the spending power of companies in the critical infrastructure sectors.
  • As Gebru put it after seeing the makeup of the list, "Foxes guarding the hen house is an understatement."

AI for IaC

Late Thursday evening, after everyone had focused on Microsoft's blowout earnings quarter, the company quietly announced a new Copilot capability focused on the infrastructure-as-code market. As spotted by Neowin, the Infra Copilot product promises to "revolutionize the way infrastructure is written, addressing the pain points experienced by professionals in the field," Microsoft said in a blog post.

Available to anyone using Visual Studio Code with a GitHub Copilot subscription and PowerShell installed, Infra Copilot will turn prompts into infrastructure code, "allowing professionals to express their requirements in natural language and receive corresponding code suggestions," Microsoft said. It's not clear how much of this tech is naturally baked into GitHub Copilot itself, but using code to provision infrastructure is a different pursuit than building an application.
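Microsoft's post doesn't include sample output, but the workflow it describes will be familiar from other AI coding assistants: a natural-language prompt produces a suggested resource definition. As a purely hypothetical sketch — using Bicep, Microsoft's own infrastructure-as-code language, with made-up resource names — a prompt-and-suggestion pair might look something like this:

```bicep
// Prompt: "Create a geo-redundant storage account for log archives"
// Hypothetical suggested completion:
resource logArchive 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stlogarchive001'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_GRS'
  }
}
```

Getting a suggestion like this right is a different problem than completing application code: the assistant has to know provider schemas, API versions, and the side effects of provisioning real resources, which is presumably why Microsoft is treating IaC as its own target.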

There's another company, Klotho, that appears to be working on a similar tool also called "Infra Copilot," which could get a lawyer or two riled up. Pulumi also has an "experimental" AI tool for generating infrastructure code, and HashiCorp has posted guidelines for using GitHub Copilot and AWS's CodeWhisperer to do the same.

Quote of the week

"This is the first time we've done things in computer science that weren't a mathematical certainty. Everything else we've done in computer science, up until this day, has been deterministic. Every time you run it, it's going to happen that you get the same answer. Now, with generative AI, you'd never quite know what the answer is going to be. And that creates some very interesting issues." — C3 AI CEO Tom Siebel, on the roll of the dice that accompanies the generative AI boom.

The Runtime roundup

A lobbying group representing AWS and Google came out hard against proposed "know your customer" rules for the cloud infrastructure industry, calling the Biden administration's requirements "overly burdensome, not sufficiently targeted, and risk advantaging foreign competitors."

Enterprise private equity vulture Thoma Bravo scooped up Darktrace for $5.6 billion almost two years after an earlier acquisition attempt failed.

GitHub said 95% of users opted into two-factor authentication after the company mandated that step last year.

AWS announced plans to kill WorkDocs, a content-sharing application that was probably used to collaborate on a single-digit number of six-pagers over the years.

Thanks for reading — see you Tuesday!
