Inside Figma's multiplayer infrastructure

Welcome to Runtime! Today: Figma's Abhi Mathur explains how the company created its collaboration tool and overhauled its databases for scale, Red Hat outlines an open-source AI platform strategy, and the latest funding rounds in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Sharding for scale

A decade ago at Facebook, when the metaverse was still a literary device, Abhi Mathur's engineering teams built everything from databases to CDNs rather than rely on third-party vendors. Things work a little differently at Figma, which, like most startups of its age, left the undifferentiated parts of its tech stack to the experts and focused its attention on building and maintaining the technology that made it special.

That approach has served Figma and Mathur, the company's vice president of software engineering, quite well, even if its proposed $20 billion merger with Adobe fell apart last year. Interest in Figma's team-oriented design tools has grown steadily since the product launched in 2016, thanks in large part to a core platform Mathur compared to a "gaming engine," much as Unreal Engine's technology is what makes Fortnite so compelling.

  • It's not really a gaming engine, but "multiplayer editor" is how Figma describes the technology that allows designers to work together on a new website or marketing campaign in the main Figma product, or collaborate on product vision and strategy using FigJam's digital whiteboard.
  • Built in TypeScript and later rewritten in Rust, the platform is designed to let Figma users simultaneously edit design documents without compromising performance, Mathur said (a toy sketch of one way such sync can work follows this list).
  • "We want our experiences to be really snappy. Even on a client like a browser, you want it to feel like a very interactive, thin-client desktop-like experience," he said in a recent interview for Runtime's How We Built It series.

Figma's collaboration platform runs primarily on AWS, although the company has used some of OpenAI's models for its early generative AI experiments.

  • Figma is one of AWS's largest database customers, Mathur said, and for years relied on a single Amazon RDS Postgres database to manage all the data associated with design files.
  • By 2022, an earlier scaling project was pushing the limits of what RDS supported, and Figma decided to implement horizontal sharding across that database to improve performance (a simplified sketch of the routing idea follows this list).
  • The database upgrade took 18 months but has left Figma in a place where it is "theoretically infinitely scalable," Mathur said.
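
Horizontal sharding splits the rows of one logical database across many physical databases, with a shard key deciding which one owns each row. The TypeScript sketch below shows only that generic routing idea; the shard count, connection strings, and the choice of a file ID as the shard key are hypothetical, not Figma's actual schema.

```typescript
// Illustrative sketch only: the shard count, connection strings, and
// shard key below are hypothetical, not Figma's actual schema.
// Horizontal sharding splits one logical table's rows across many
// physical databases; a shard key decides which database owns a row.

import { createHash } from "node:crypto";

const SHARD_COUNT = 4;
const shardConnectionStrings = [
  "postgres://shard0.example.internal/figma",
  "postgres://shard1.example.internal/figma",
  "postgres://shard2.example.internal/figma",
  "postgres://shard3.example.internal/figma",
];

// Hash the shard key so rows spread evenly across shards and the
// same key always routes to the same physical database.
function shardFor(fileId: string): string {
  const digest = createHash("md5").update(fileId).digest();
  const shard = digest.readUInt32BE(0) % SHARD_COUNT;
  return shardConnectionStrings[shard];
}

// Every query for a given file goes to exactly one shard, so each
// database holds and indexes only a fraction of the total rows.
console.log(shardFor("file-abc123")); // deterministic for this id
```

Because each file's rows live on exactly one shard, adding shards adds capacity, which is what makes a setup like this "theoretically infinitely scalable" in principle.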

Figma worked closely with AWS on the database project, which led Mathur to appreciate the benefits of letting partners do some of the heavy lifting rather than keeping every aspect of a project in-house, as Facebook largely did during his tenure there.

  • "For example, we don't want any managed services for the multiplayer because it is so unique and core to [the experience], that we want to have tighter control," he said.
  • "But there are services that are a commodity and can be replaced there by lots of managed services, where I don't have to spend time thinking about those, instead we can think about more creative solutions that are unique to us."

Most enterprise tech companies are still trying to figure out which generative AI experiences are commodities and which will deliver actual differentiated value. Mathur is a big believer that the technology will help improve the Figma experience over time, so long as it is built in a way that works across all of Figma's products and services rather than bolted on in haste.

  • "Our company's mission is to make design accessible to everyone, and I strongly believe AI can support that [mission], it can make designers much more productive and non-designers less scared of design," he said. 
  • Right now, Figma is exploring generative AI at the intersection of design and development, helping customers turn designs built on the service into working code for their own products more quickly and reliably.
  • "How can we make some of these handoffs much easier using Dev Mode? How can we convert designs into more reusable code, which is sort of the direction in which we are moving," Mathur said.

Read the full interview on Runtime here.


Red Hat's open AI approach

Red Hat waded into the debate over closed-source versus open-source AI on Tuesday, unveiling a new enterprise AI development platform called RHEL AI built around IBM Research's Granite large language models.

RHEL AI is a package of AI tools that can be deployed on RHEL servers or OpenShift clusters and promises to make it easier for enterprises to train and fine-tune their own models based on Granite. As part of the launch, IBM is releasing both the model weights and access to the training data behind Granite, and Red Hat will also release the InstructLab software used for tuning.

Given that RHEL AI users will be limited to IBM's models, this approach may not work for companies still searching for the right model for their application or business needs. But it is a good example of how open-source software principles can carry forward into the AI era.


Enterprise funding

Wiz raised $1 billion in Series E funding that will allow the cloud security company to continue rolling up distressed cybersecurity startups.

Legion Technologies landed $50 million in Series D funding, with plans to hire new sales and marketing talent to push its workforce management software.

StrongDM scored $34 million in Series C funding and will establish an engineering center in Poland to improve its zero-trust access-management technology.

Lamini raised $25 million in Series A funding to help companies train LLMs based on their own corporate data.

Espresso AI launched with $11 million in seed funding to develop cloud-cost management tools using AI.


The Runtime roundup

Google Cloud entered the threat-intelligence market Tuesday at RSA with a new product called, well, Google Threat Intelligence.

Also at RSA, DHS Secretary Alejandro Mayorkas said the agency's AI safety board held its first meeting Monday and members were concerned about the "potential perpetuation of implicit bias" as AI-powered systems roll out.

Apple is working on its own AI chips that would run in its data centers, according to the Wall Street Journal.

ServiceNow told financial analysts that revenue from generative AI technology could take a while to show up, according to Bloomberg.

Broadcom will no longer allow AWS to resell VMware Cloud on AWS to VMware customers, the beginning of the end of what had been a mutually beneficial relationship between VMware and AWS over the past eight years.


Thanks for reading — see you Thursday!
