Send lawyers, filters, and money

Welcome to Runtime! Happy New Year! Today: why enterprise vendor promises to indemnify customers against AI lawsuits could be easier said than done, an insider's view on AWS at a crossroads, and the latest funding rounds in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Protection racket

Over the last six months, major enterprise tech vendors hoping to spur adoption of their expensive new generative AI tools have promised customers that they will indemnify them against legal claims arising from the output those tools produce. But generative AI is not an ordinary enterprise tech product.

Most generative AI foundation models were trained on a huge swath of content ingested from the internet, and a lot of that content could be protected by copyright law. Last week The New York Times sued Microsoft and OpenAI for using its content to train the groundbreaking ChatGPT service, a case that seems likely to test the boundaries of copyright law in the 21st century.

  • Perhaps fittingly, Microsoft was the first to promise customers in September that "if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved," Vice Chair and President Brad Smith wrote in a blog post.
  • Google Cloud followed shortly after, and around Thanksgiving OpenAI and AWS also announced plans to offer their enterprise customers some protection against generative AI-related lawsuits.
  • However, despite several attempts over the last few months, none of those companies have been willing to talk on the record about how AI indemnification policies will actually work. 

Enterprise generative AI services rely on a series of filters and controls that help customers keep legally dubious material out of their output (a rough sketch of how those layers might fit together appears after this list).

  • At Microsoft, "these (controls) build on and complement our work to protect digital safety, security, and privacy, based on a broad range of guardrails such as classifiers, metaprompts, content filtering, and operational monitoring and abuse detection, including that which potentially infringes third-party content," Smith said in the blog post.
  • In order to be eligible for AI indemnification, all the major vendors require that customers use their filters as designed.
  • Based on several conversations with people working on these products who were unable to speak on the record, these filters aren't something customers can accidentally uncheck in a settings menu; those people say customers would have to deliberately work around the guardrails to generate output based on copyrighted content.
  • But it's clear from the LAION-5B investigation that some of these companies might not actually know what is in the training data they used to produce their models.
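
Smith's blog post describes these guardrails only in the abstract, but the layering he lists (classifiers in front of the model, content filters behind it) is straightforward to picture in code. The sketch below is purely illustrative: the function names, the word-overlap similarity metric, and the 0.9 threshold are all assumptions made for this example, not any vendor's actual implementation.

```python
# Illustrative guardrail layering: an input classifier ahead of the model
# and an output filter behind it. All names, thresholds, and the toy
# similarity metric are assumptions for this sketch, not a vendor's API.

def classify_prompt(prompt: str) -> bool:
    """Toy input classifier: block prompts that ask for verbatim reproduction."""
    blocked = ("reproduce verbatim", "full text of", "entire lyrics of")
    return not any(phrase in prompt.lower() for phrase in blocked)

def naive_similarity(completion: str, protected_work: str) -> float:
    """Toy word-overlap score; a real filter would use n-gram or embedding indexes."""
    a = set(completion.lower().split())
    b = set(protected_work.lower().split())
    return len(a & b) / len(b) if b else 0.0

def filter_output(completion: str, protected_corpus: list[str]) -> bool:
    """Reject completions that overlap too heavily with a known protected work."""
    return all(naive_similarity(completion, work) < 0.9  # 0.9 is illustrative
               for work in protected_corpus)

def guarded_generate(prompt: str, model_call, protected_corpus: list[str]) -> str:
    """Run the full pipeline: classify the prompt, generate, filter the output."""
    if not classify_prompt(prompt):
        raise ValueError("prompt blocked by input classifier")
    completion = model_call(prompt)
    if not filter_output(completion, protected_corpus):
        raise ValueError("completion blocked by output filter")
    return completion

# Usage with a stub standing in for the real model:
corpus = ["it was the best of times it was the worst of times"]
print(guarded_generate("Summarize the opening.", lambda p: "A tale of contrasts.", corpus))
```

In these terms, the vendors' eligibility requirement amounts to insisting that customers call guarded_generate rather than model_call directly; working around the guardrails means skipping the classifier and the filter.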

Many of these indemnification clauses have loopholes large enough to drive a truck through, said Kate Downing, a lawyer who has written critically about GitHub's indemnification policies for Copilot, in a recent interview.

  • "With Copilot in particular, that's just bad drafting," she said. "That is just straight-up bad lawyering, unless the goal was specifically to be ambiguous."
  • For example, the specific terms of service that accompany GitHub Copilot and address indemnification specify "If your Agreement provides for the defense of third party claims, that provision will apply to your use of GitHub Copilot, including to the Suggestions you receive."
  • However, the general terms of service that cover GitHub specify that the company will only defend paying customers that have used a GitHub product "unmodified as provided by GitHub and not combined with anything else."
  • The entire point of using generative AI tools that make suggestions is to use those suggestions in combination with something else, such as your own custom code or your own marketing materials.

This is very new territory for the legal system and it's far from clear where the law will settle a few years down the road.

  • "It's a very different kind of technology from any other software that's come before," Downing said.
  • She pointed to examples like Salesforce and QuickBooks: when a user enters an input into such software, it's easy to tell what the output should be, and if there is a dispute the code can be audited.
  • With generative AI, the user has no idea what's going to come out on the other end, and the vendor can't really tell you why that output emerged from that input, she said (the toy comparison after this list illustrates the difference).
  • "It's an entirely different way of thinking about technology and technology systems that people were not used to, and I think that's part of what makes it really hard to understand the limitations of these technologies."

Read the full story on Runtime here.


Disagree and leave

In the weeks following AWS re:Invent 2023, it was hard to miss the number of well-known AWS employees announcing plans to leave the company. According to a blog post last week from AWS's Justin Garrison, a senior developer advocate, the company's return-to-office policy was the breaking point for a lot of his colleagues.

"In my small sphere of people there wasn’t a single person under an L7 that didn’t want out," Garrison said, referring to a relatively senior level in the internal hierarchy of AWS employees. In his view, this was a deliberate move by Amazon leadership to cut headcount without resorting to layoffs, and it appears to have worked.

But while return-to-office mandates have rippled through many enterprise tech companies, Garrison's post also pointed to deeper problems he sees within AWS. "Amazon has shifted from a leader to a follower. From my perspective it’s not going well," he wrote, also predicting that there will be "a major AWS outage in 2024" because of all the institutional knowledge that is walking out the door.


Enterprise funding

Canva is preparing an equity event that would allow employees to sell shares at a valuation of $26 billion, according to The Information, although the company itself wouldn't raise any new funds as part of the deal.


The Runtime roundup

2023 was one of the best years for tech stocks in recent memory, thanks in large part to Nvidia's surge atop the generative AI boom.

Oracle Cerner's troubled modernization project with the VA remains a headache, according to NextGov.

Salesforce acquired Spiff, a commission-tracking startup it had previously funded, to close out a year in which it sharply scaled back its M&A activity to focus on its core business.

Keeping track of all the Copilots rolled out by Microsoft last year is a difficult task, but Directions on Microsoft has you covered.


Thanks for reading — see you Thursday!
