Why open GenAI is safe GenAI


Welcome to Runtime! Today: AI2 COO Sophie Lebrecht on the importance of true open-source generative AI models, AWS chooses the nuclear option, and the latest funding rounds in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Show me the data

Sophie Lebrecht, the new chief operating officer at the Allen Institute for AI, has spent her career watching AI technologies advance from the perspective of an academic, a startup founder, and an operator inside one of the largest tech companies in the world. Right now, she believes the most important issue in AI is a lack of open models that could allow researchers — who know surprisingly little about how the generative AI craze that upended the tech industry actually works — to set the parameters of the larger discussions around AI regulation.

That's one reason why the Allen Institute, also known as AI2, recently released the training data for OLMo, an open-source large language model. "Anybody could take [OLMo] and really start to understand the science behind how these models are working," she said in a recent interview.

Excerpts from that interview follow:

On the generative AI boom:

Lebrecht: I think [the hype] is the result, though, of the perception that this technology was developed overnight. That's not exactly how it was, right? A lot of these advancements came [through] very small piece-by-piece development. That is the key reason why I wanted to come to AI2, because I feel like AI2 is well positioned to be the leader of the open-source AI movement. With community and with open access to models, datasets, training code and training logs, we're going to see an even bigger burst in innovation, both in terms of the development and also in the understanding of these models, which are critical.

On open-source AI:

If you think about AI previously, it was all about prediction. So I want to predict, is this a red ball? It's really easy to know that it was a red ball. With generative AI, it's like, was that an appropriate response given the task and given the person asking the question? That's a much more challenging evaluation problem.

One of the things that's really critical about the need to open up the data and open up the framework is that we can allow our researchers or practitioners or AI developers to start working actively in this space. We're already seeing a huge burst of innovation and conversation around OLMo for just this reason.

I think the second part that's really dangerous about keeping the models closed is if you end up restricting development to only a small sector of companies, we're not going to generate the expertise and the talent that we need to be responsive in these situations. People talk about "what if AI gets in the hands of a bad actor," [and] we need to develop the expertise to be able to respond to these situations. Being open, having an open-source community and having a number of experts that could work with this scale of technology, I think it's actually going to be really important to the safety of AI moving forward.

On understanding LLMs:

I think this is maybe one of the first times in history where a technology has been developed that has outpaced our ability to understand it. Usually, we develop something, we make an invention, we fully understand it and then we push for adoption, or we disseminate that technology out there.

I think this is a little bit reversed, where we're seeing these capabilities before we truly understand them. I think this moment is really about opening everything up so that we collectively can start really researching and experimenting and understanding. And I think that is actually what's going to unlock huge potential for a really effective and safe use of generative AI.

Read the full interview on Runtime.


Goin' fission

One much-discussed aspect of the generative AI boom is the increased amount of electrical power GPUs need to make the magic happen. Conventional power sources might not be enough to get the job done, which is one reason AWS decided to move in next door to a nuclear power plant.

Talen Energy announced Monday that it has sold its Cumulus data center, next to the Susquehanna nuclear plant in Salem Township, Pa., to a "major cloud service provider." In a separate presentation for investors, Talen revealed that AWS was the buyer and that the two companies had agreed on a ten-year energy contract to supply power to a new data center AWS will build at that site.

Microsoft has been hiring nuclear experts for the last six months or so as it tries to ramp up its own alternative power supply strategy. But building a new nuclear reactor in the U.S. is a very difficult undertaking, which means cloud companies that want to tap into that type of source might need to partner with existing plants, or look overseas.


Enterprise funding

Dtex Systems raised $50 million in Series E funding for its cybersecurity software, which uses AI to detect potential insider threats.

Taalas launched with $50 million in new funding to build out a new type of AI chip that could run an LLM without the help of external memory, which could dramatically improve performance.

Baseten scored $40 million in Series B funding to help companies run their own LLMs in their production cloud environments.

Ema launched with $25 million in funding to develop the Universal AI Employee, which can supposedly "emulate the capabilities of a human employee in a specific role - it can engage in conversations, comprehend context, take continuous human feedback, reason, and make informed decisions."

Ubicloud raised a $16 million seed round for its open-source infrastructure software that runs on bare-metal cloud servers.

Metaplane landed $13.8 million in Series A funding to expand its data-observability software, which could help companies that are training their own AI models improve data quality.


The Runtime roundup

The federal government stepped in to help healthcare clinics and providers deal with the massive Change Healthcare ransomware attack, which has disrupted medical billing systems around the country for almost two weeks.

Anthropic released Claude 3, its first multimodal model and most powerful to date, according to the company.

AWS will allow customers that want to move data out of its cloud to do so for free, but as with Google Cloud's similar announcement in January, there are catches: customers have to apply for a spot in line, delete their AWS account within 60 days of transferring all their data, and regular everyday egress fees remain in place.

CrowdStrike reported a 36% jump in revenue and announced plans to buy Flow Security for an undisclosed amount.

Most Meta services went down for hours Tuesday after the company experienced what it called a "technical issue," which, duh.

HashiCorp beat Wall Street expectations for revenue and profit, but its guidance was weaker than expected.


Thanks for reading — see you Thursday!
