AI2's Sophie Lebrecht: We need to better understand how generative AI works

Sophie Lebrecht, the new chief operating officer at the Allen Institute for AI, has spent her career watching AI technologies advance from the perspective of an academic, a startup founder, and an operator inside one of the largest tech companies in the world. Right now, she believes the most important issue in AI is a lack of open models that could allow researchers — who know surprisingly little about how the generative AI craze that upended the tech industry actually works — to set the parameters of the larger discussions around AI regulation.

That's one reason why the Allen Institute, also known as AI2, recently released the training data for OLMo, an open-source large-language model. "Anybody could take [OLMo] and really start to understand the science behind how these models are working," she said in a recent interview.

In some ways, Lebrecht returned to AI2 last year; the non-profit institute founded by the late Paul Allen in 2014 incubated Xnor.ai, a startup where Lebrecht served as senior vice president of strategy and operations before it was acquired by Apple in early 2020. Prior to Xnor, she co-founded Neon Labs, which studied the science behind the way humans react to images to help media companies attract visitors.

Now her work centers on demystifying the models at the heart of the generative AI boom, which she thinks will be crucial for measuring the safety of AI models as governments debate AI regulations. Despite its name, OpenAI has been anything but open about the technology at the heart of its GPT models, and companies like Meta that have released their LLMs under open-style licenses have declined to release their training data.

This interview has been edited and condensed for clarity.

You've been working on these technologies for a very long time, and then in the last 14 months everything has just exploded. What do you make of that?

First of all, I think it's super exciting, right? This is the moment we've been waiting for.

We had [the] very early sort of Turing-style AI, and then I think the rise of GPUs allowed for parallel processing, and that was kind of the beginning of the deep-learning movement. Then we went through the pendulum swing to efficiency. We became really good at object recognition and face recognition, and then we said, "how do we get those things on devices, so that we can use them?" Then we had this burst around large-language models, and I think it's incredibly exciting.

I think [the hype] is the result, though, of the perception that this technology was developed overnight. That's not exactly how it was, right? A lot of these advancements came [through] very small, piece-by-piece development. That is the key reason I wanted to come to AI2: I feel like AI2 is well positioned to be the leader of the open-source AI movement. With community and with open access to models, datasets, training code and training logs, we're going to see an even bigger burst in innovation, both in the development and in the understanding of these models, which is critical.

Before we get into some of the broader issues, tell me a little bit more about this most recent model you released called OLMo.

It stands for open language model. So what was very cool about OLMo was that it was released alongside its pre-training data. What we have seen traditionally is people release models, and they may open source the license to use those models, but we don't get to see what's under the hood. One of the really important things is that in order to really understand and explain how a model is working, why it behaves the way it does, we really need to understand the data that it was trained on.

What AI2 has done in releasing the OLMo framework is we've not just released the model and API access, we've released the model and we've released the training code and training logs. Anybody could take this and really start to understand the science behind how these models are working.
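
To make that concrete, here is a minimal sketch of what taking the OLMo release for a spin can look like, assuming the Hugging Face Hub identifiers AI2 used at launch (allenai/OLMo-7B for the model, allenai/dolma for the pre-training corpus); exact repository names, gating and install steps may differ, so check AI2's repositories for current instructions.

```python
# A minimal sketch of working with the OLMo release via the Hugging Face Hub.
# Assumes the launch-era identifiers ("allenai/OLMo-7B" for the model,
# "allenai/dolma" for the pre-training corpus); the launch version may also
# require `pip install ai2-olmo` and accepting the data license on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", trust_remote_code=True)

# Generate a short completion to confirm the weights are usable end to end.
inputs = tokenizer("Open models matter because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Stream a record from the pre-training data -- the part of the release that
# most model providers have withheld.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)
print(next(iter(dolma)))
```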

There are definitely people in regulatory conversations who are talking about the fact that open-source models shouldn't be allowed, that allowing nearly anyone to get their hands on all of this stuff could have some real adverse consequences. How are you and the institute working on some of those issues?

The first part of that is we actually need to bridge the policy and regulation conversation with science and engineering. Step one is … it's still an open scientific problem of "how do we evaluate generative AI models?" We're talking about how we regulate and control them, but we first need to even understand how to evaluate them.

If you think about AI previously, it was all about prediction. So I want to predict: is this a red ball? It's really easy to check whether it was a red ball. With generative AI, it's more like: was that an appropriate response given the task and given the person asking the question? That's a much more challenging evaluation problem.
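
To make that contrast concrete, here is an illustrative sketch with made-up data (not AI2 code): a classifier's output can be scored by exact match against a single ground truth, while a generated response has no single correct answer, so any automatic score, like the crude word-overlap proxy below, is only an approximation of "appropriateness."

```python
# Illustrative only, with made-up data: the contrast between scoring a
# classifier and scoring a generative model.
import re

# Classification: each input has one ground-truth label, so exact-match
# accuracy yields a single, uncontroversial number.
predictions = ["red ball", "blue cube", "red ball"]
labels = ["red ball", "red ball", "red ball"]
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"classification accuracy: {accuracy:.2f}")  # 0.67

# Generation: many different responses can be "appropriate," so any score is
# a judgment call. This crude word-overlap proxy misses tone, factuality,
# and fit for the person asking the question.
def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap_score(response: str, reference: str) -> float:
    ref = tokens(reference)
    return len(tokens(response) & ref) / len(ref) if ref else 0.0

reference = "The ball in the image is red."
for response in ["It's a red ball.", "A crimson sphere.", "I see a toy."]:
    print(f"{response!r} -> {overlap_score(response, reference):.2f}")
# "A crimson sphere." is a perfectly appropriate answer yet scores 0.00:
# the metric fails here, not the model.
```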

One of the things that's really critical about the need to open up the data and open up the framework is that we can allow our researchers or practitioners or AI developers to start working actively in this space. We're already seeing a huge burst of innovation and conversation around OLMo for just this reason.

I think the second part that's really dangerous about keeping the models closed is if you end up restricting development to only a small sector of companies, we're not going to generate the expertise and the talent that we need to be responsive in these situations. People talk about "what if AI gets into the hands of a bad actor," [and] we need to develop the expertise to be able to respond to these situations. Being open, having an open-source community and having a number of experts who could work with this scale of technology, I think, is actually going to be really important to the safety of AI moving forward.

Are there opportunities for AI2 to charge businesses for any of this technology through API access?

In my career, I have been heavily focused on being at these new waves of technical innovation, and then figuring out what the impact of that is. And I think now is the moment for open source and community. I'm thinking about how we can get people to adopt and use these models, and for AI2 that's really to advance AI for the common good, because I actually think that is the most important problem to solve right now.

What has gone unnoticed about generative AI during the frenzy of the last year?

I think this is maybe one of the first times in history where a technology has been developed that has outpaced our ability to understand it. Usually, we develop something, we make an invention, we fully understand it and then we push for adoption, or we disseminate that technology out there.

I think this is a little bit reversed, where we're seeing these capabilities before we truly understand them. I think this moment is really about opening everything up so that we collectively can start really researching and experimenting and understanding. And I think that is actually what's going to unlock huge potential for really effective and safe use of generative AI.
