Cutting cloud costs to find growth
Welcome to Runtime! Today: how Deloitte's point person on AWS is thinking about cost optimization and generative AI, the Big Three cloud providers join forces with the Big Two AI model makers, and the latest funding in enterprise tech.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Advise and consult
Managing modern cloud infrastructure is a challenge for many companies in the traditional economy outside Silicon Valley. Nishita Henry's job involves helping those companies find opportunities and avoid problems while operating on AWS.
Deloitte's point person when it comes to Amazon and AWS, Henry manages both the traditional consulting services that Deloitte provides to Amazon and the practice that advises clients on purchasing cloud services from AWS as well as selling their own software on the AWS Marketplace.
In a recent interview at AWS re:Invent 2023, Henry also touched on cloud migration, cost optimization, and the rise of AI. Excerpts from that interview follow below.
Henry on cloud migration costs:
There were some that said, "let's just get everything migrated," and some of those clients actually saw their costs increase, because they didn't optimize, they didn't retire the old things, they didn't transform their underlying business. And that's where a lot of our customers are now, like, "help me, we have to actually get the cost savings we predicted from our business case, and we have to create opportunities for growth."
That's a lot of what we like to focus on with our customers. It's like, "look, yes, you should get cost optimization, no doubt. At the same time, your actual focus should be on developing new services and products in the market to create your future growth."
On generative AI adoption:
I'd say every one of our clients is absolutely having conversations from the boardroom down. Every one of our clients is doing some sort of kicking the tires, ranging from what I call proof of value to proof of concept.
Proof of value is really being focused on proving to their own internal stakeholders that there's a "there" there; (generative AI) is beneficial to the overall business, it's going to be secure, it's going to be trustworthy, and it's going to truly help their overall strategic objectives.
Then there's ones at the proof of concept stage that are truly using their own data to help them do something better, faster, cheaper. And so we have a range of clients on that spectrum for sure.
On actually making it work:
It takes an enormous amount of data to make this work. So this isn't something you can just put your credit card down and start working on, you have to have an understanding of what data you have, is it ready, what are the underlying structures? And is there bias built into your original data structures that will lead you to bad outcomes?
We always said, bad data in, bad data out. Right now it's like, bad data in, bad data out squared, right? It's just something that people have to worry about far more.
A year of fierce competition and sniping among cloud providers and foundation model makers is closing out on an optimistic note with the announcement of the AI Safety Initiative, a new coalition formed under the guidance of the Cloud Security Alliance. AWS, Anthropic, Google, Microsoft, and OpenAI are the founding members of the new group, which will also involve CISA.
"The AI Safety Initiative is dedicated to crafting and openly sharing reliable guidelines for AI safety and security, initially concentrating on generative AI," the group said in a press release. It appears to be an attempt at deflecting the broad spectrum of AI fears — from job loss to killer robots — in language spoken in both D.C. and Silicon Valley.
These types of industry-wide initiatives often result in nothing more than a lot of press releases and conferences. But here's hoping the AI Safety Initiative evolves into something like the Open Compute Project, which is increasingly focused on sustainability in the AI era; without any kind of industry consensus and agreement on standards, any talk of AI safety is just an academic exercise.
Essential AI launched with $56.5 million in new funding to help two of the co-authors of "Attention Is All You Need," the research paper that laid the groundwork for modern generative AI, build a company that automates enterprise business processes.
Armada scored $55 million in Series A funding for its industrial edge computing hardware and connectivity platform based on the Starlink satellite system.
MaintainX landed $50 million in Series C funding for its maintenance-management software, used by heavy industry and other commercial operations to automate equipment maintenance.
DataCebo launched with $8.5 million in seed funding to help companies building AI models use synthetic data.
The Runtime roundup
Microsoft announced Phi-2, a "small language model" that can run on smartphones and that it claims works about as well as open-source models Mistral and Llama 2.
Oracle missed Wall Street's revenue expectations, and investors sent its stock down 12% on Tuesday amid questions about its cloud momentum and capacity planning.
Docker acquired AtomicJar, a small startup working on improving testing procedures for container-based apps.
Broadcom is starting to Broadcom-up VMware following the close of that acquisition, telling customers that were on VMware's perpetual licenses that they'll likely have to switch to a recurring subscription license.
Don't miss this great report from The Wall Street Journal on the World Excel Championships, the finest spreadsheet competition on the planet. If you run or are involved in a minor-league Excel competition, please reach out.
Thanks for reading — see you Thursday!