Welcome to Runtime! Today: how the Biden administration's AI executive order will affect enterprise tech, why CISOs are on the firing line with the SEC, and the latest funding rounds in enterprise tech.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Regulators, mount up
Compared to other industries that play key roles in our daily lives, the U.S. government has largely allowed the tech industry to do whatever it wants over the last 30 years. After the surge of hype, interest, and real progress in AI technologies this year, those days are over.
The Biden administration's executive order on AI was anticipated for months before it was released Monday, and it appears to strike a solid balance between letting GPT-4 take over the world and forcing all AI development to be approved by federal regulators. The whole thing is quite long (if only technology existed that could quickly summarize it for us) but there are several aspects related to enterprise tech.
First off, cloud infrastructure providers are going to have to get used to a new level of scrutiny.
- Companies training dual-use foundation models will need to provide "information, reports, or records" about "the ownership and possession of the model weights" they're using, though it's not clear whether they'll have to disclose the weights themselves to regulators.
- Cloud infrastructure companies, including startups like CoreWeave and Lambda Labs, will have to report the location and size of all the regions that can be used to train foundation AI models.
- They'll have to inform the federal government whenever "a foreign person transacts with that United States IaaS Provider to train a large AI model," a requirement that extends to foreign resellers as well, and that is a lot of people.
- However, the computing performance standards laid out in the order appear to only cover the largest models currently under development, and not the loads of smaller models that companies are starting to train for cost reasons.
On the security side, model developers will also have to file a series of reports with the federal government.
- They'll need to disclose the security practices they've employed to protect their model weights.
- They'll also have to report the results of any "red-team" exercises that those companies have conducted to test the security of their models.
- Red teams have been part of cybersecurity for a very long time: they test a company's defenses by executing attacks against its systems to find and repair weak points.
- But security experts told Axios that the outsized emphasis on red teams to address AI security risked downplaying other security and accountability efforts.
Regulation-averse Silicon Valley had braced itself for a heavier approach during this push for an executive order, such as requiring companies to get licenses to develop AI models.
- Judging by the dozens of emails I received over the last two days from PR teams offering comments on the order, most people in AI seem satisfied with the results.
- "While the EU leans towards stricter AI regulation, the US is striking a balance between innovation and responsible usage," said Florian Douetteau, CEO of Dataiku.
- "This is the latest in a string of increasingly tech-literate policies issued by this administration and was far more comprehensive and ethics-focused than I anticipated," said Jackie McGuire, senior security strategist at Cribl.
- But it's also clear that the provisions of the order are subject to change over the next year, and that the implementation details will, as always, determine its effectiveness.
In a move that will give cybersecurity executives even more of a headache, the SEC sued SolarWinds and CISO Timothy Brown on Monday, alleging that the company made false statements to investors about its security practices. The company was the victim of one of the most notorious supply-chain security breaches ever in 2020, exposing the inner workings of several government agencies that were using its performance-monitoring software and kicking off the incoming Biden administration's cybersecurity push.
Brown was aware that SolarWinds was not following best cybersecurity practices during his tenure at the company, and by signing off on statements that reassured investors otherwise he "made materially false and misleading statements and omissions," according to the SEC complaint. Internal emails and presentations obtained by the SEC showed that SolarWinds was not enforcing a strong password policy, for example, with key accounts protected by passwords such as "password."
Assigning liability for security breaches, however, is a tricky practice. The SolarWinds case seems fairly egregious judging by the details laid out by the SEC in its complaint, but "I think the main concern is will the SEC and other entities start holding CISOs accountable for breaches that happened from them not getting the resources they need to do the job?" Weave CISO Jessica Sica told Dark Reading.
Anthropic raised $500 million in new funding from Google, which also agreed to provide $1.5 billion in additional funding over the next few years, just one month after AWS announced plans to invest $4 billion in the foundation-model developer over time.
Cranium.ai landed $25 million in Series A funding to further develop its security software for data teams.
The Runtime roundup
AMD beat Wall Street estimates for revenue and profit, and while it warned that fourth-quarter revenue would be lighter than expected, it also predicted strong sales from its data-center group over the next year.
Stanford University's police department was hit by a ransomware attack involving 430GB of data.
Palo Alto Networks acquired Dig, which makes cloud data security software, for $400 million, according to TechCrunch.
Thanks for reading — see you Thursday!