Helping firms deploy AI models more responsibly

Corporations today are incorporating artificial intelligence into every corner of their business. The trend is expected to continue until machine-learning models are incorporated into many of the services we interact with each day.

As those models become a bigger part of our lives, ensuring their integrity becomes more critical. That’s the mission of Verta, a startup that spun out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Verta’s platform helps firms deploy, monitor, and manage machine-learning models safely and at scale. Data scientists and engineers can use Verta’s tools to track different versions of models, audit them for bias, test them before deployment, and monitor their performance in the real world.

“Everything we do is to enable more products to be built with AI, and to do that safely,” Verta founder and CEO Manasi Vartak SM ’14, PhD ’18 says. “We’re already seeing with ChatGPT how AI can be used to generate data, artifacts, you name it, that look correct but aren’t correct. There needs to be more governance and control in how AI is being used, particularly for enterprises providing AI solutions.”

Verta is currently working with large firms in health care, finance, and insurance to help them understand and audit their models’ recommendations and predictions. It’s also working with several high-growth tech firms seeking to speed up deployment of new, AI-enabled solutions while ensuring those solutions are used appropriately.

Vartak says the company has been able to reduce the time it takes customers to deploy AI models by orders of magnitude while ensuring those models are explainable and fair, an especially important factor for firms in highly regulated industries.

Health care firms, for instance, can use Verta to enhance AI-powered patient monitoring and treatment recommendations. Such systems have to be thoroughly vetted for errors and biases before they’re used on patients.

“Whether it’s bias or fairness or explainability, it goes back to our philosophy on model governance and management,” Vartak says. “We think of it like a preflight checklist: Before an airplane takes off, there’s a set of checks you have to do before you get your airplane off the ground. It’s similar with AI models. You need to make sure you’ve done your bias checks, you need to make sure there’s some level of explainability, you need to make sure your model is reproducible. We help with all of that.”
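The checklist idea above can be sketched in a few lines of Python. This is a minimal illustration, not Verta’s actual API: the check names, the demographic-parity metric, and the `max_gap` threshold are all assumptions chosen for the example.

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rate across groups
    (one common, simple bias metric)."""
    rates = {}
    for p, g in zip(preds, groups):
        total, pos = rates.get(g, (0, 0))
        rates[g] = (total + 1, pos + (1 if p == 1 else 0))
    ratios = [pos / total for total, pos in rates.values()]
    return max(ratios) - min(ratios)

def preflight(preds, groups, *, seed_recorded, explained, max_gap=0.1):
    """Run the 'preflight' checks; returns (passed, per-check report)."""
    report = {
        "bias_check": demographic_parity_gap(preds, groups) <= max_gap,
        "explainability": explained,      # e.g., feature attributions produced
        "reproducibility": seed_recorded, # e.g., training seed + data hash logged
    }
    return all(report.values()), report
```

A deployment pipeline would block promotion of any model whose `preflight` call returns `False`, much as a pilot would not take off with an open checklist item.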

From project to product

Before coming to MIT, Vartak worked as a data scientist for a social media company. In one project, after spending weeks tuning machine-learning models that curated content to show in people’s feeds, she learned an ex-employee had already done the same thing. Unfortunately, there was no record of what they did or how it affected the models.

For her PhD at MIT, Vartak decided to build tools to help data scientists develop, test, and iterate on machine-learning models. Working in CSAIL’s Database Group, Vartak recruited a team of graduate students and participants in MIT’s Undergraduate Research Opportunities Program (UROP).

“Verta wouldn’t exist without my work at MIT and MIT’s ecosystem,” Vartak says. “MIT brings together people on the leading edge of tech and helps us build the next generation of tools.”

The team worked with data scientists in the CSAIL Alliances program to decide what features to build and iterated based on feedback from those early adopters. Vartak says the resulting project, named ModelDB, was the first open-source model management system.

Vartak also took several business classes at the MIT Sloan School of Management during her PhD and worked with classmates on startups that recommended clothing and tracked health, spending countless hours in the Martin Trust Center for MIT Entrepreneurship and participating in the center’s delta v summer accelerator.

“What MIT lets you do is take risks and fail in a safe environment,” Vartak says. “MIT afforded me those forays into entrepreneurship and showed me how to go about building products and finding first customers, so by the time Verta came around I had done it on a smaller scale.”

ModelDB helped data scientists train and track models, but Vartak quickly saw the stakes were higher once models were deployed at scale. At that point, attempts to improve (or inadvertently break) models can have major implications for firms and society. That insight led Vartak to begin building Verta.

“At Verta, we help manage models, help run models, and make sure they’re working as expected, which we call model monitoring,” Vartak explains. “All of those pieces have their roots in MIT and my thesis work. Verta really evolved from my PhD project at MIT.”
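One common way to monitor whether a deployed model is “working as expected” is to compare the distribution of its live predictions against a baseline captured at deployment time. The sketch below uses the Population Stability Index, a standard drift statistic; it is an illustrative example of the general technique, not Verta’s monitoring implementation, and the bin count and smoothing floor are assumptions.

```python
import math

def psi(baseline, live, bins=4):
    """Population Stability Index between two samples of model scores.
    Roughly: 0 means the live distribution matches the baseline;
    larger values indicate drift worth investigating."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            # clamp into [0, bins-1] so out-of-range live scores still land in a bin
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # floor each probability at a tiny value to avoid log(0)
        return [max(c / len(xs), 1e-6) for c in counts]

    b, l = hist(baseline), hist(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))
```

A monitor would compute `psi` on a rolling window of production scores and alert when it crosses a threshold, prompting a human to re-vet or retrain the model.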

Verta’s platform helps firms deploy models more quickly, ensure they continue working as intended over time, and manage the models for compliance and governance. Data scientists can use Verta to track different versions of models and understand how they were built, answering questions like how data were used and which explainability or bias checks were run. They can also vet models by running them through deployment checklists and security scans.
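The kind of record that makes those audit questions answerable can be sketched as a small model registry. The field names and methods here are illustrative assumptions for the example, not Verta’s schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class ModelVersion:
    name: str
    version: int
    dataset_hash: str  # records which data were used to train this version
    checks_run: list = field(default_factory=list)  # e.g., ["bias", "explainability"]

class Registry:
    """Keeps one auditable record per (model name, version)."""
    def __init__(self):
        self._versions = {}

    def register(self, mv: ModelVersion):
        self._versions[(mv.name, mv.version)] = mv

    def audit(self, name, version):
        """Answer questions like: which data and which checks went into this version?"""
        mv = self._versions[(name, version)]
        return {"dataset_hash": mv.dataset_hash, "checks_run": mv.checks_run}
```

Because every version carries its data lineage and check history, a compliance reviewer can query the registry instead of reconstructing what a past team member did, the exact gap Vartak ran into as a data scientist.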

“Verta’s platform takes the data science model and adds half a dozen layers to it to transform it into something you can use to power, say, an entire recommendation system on your website,” Vartak says. “That includes performance optimizations, scaling, and cycle time, which is how quickly you can take a model and turn it into a valuable product, as well as governance.”

Supporting the AI wave

Vartak says large firms often use hundreds of different models that influence nearly every part of their operations.

“An insurance company, for example, will use models for everything from underwriting to claims, back-office processing, marketing, and sales,” Vartak says. “So, the variety of models is really high, there’s a large volume of them, and the level of scrutiny and compliance firms need around these models is very high. They need to know things like: Did you use the data you were supposed to use? Who were the people who vetted it? Did you run explainability checks? Did you run bias checks?”

Vartak says firms that don’t adopt AI will be left behind. The firms that ride AI to success, meanwhile, will need well-defined processes in place to manage their ever-growing list of models.

“In the next 10 years, every device we interact with is going to have intelligence built in, whether it’s a toaster or your email program, and it’s going to make your life much, much easier,” Vartak says. “What’s going to enable that intelligence is better models and software, like Verta, that help you integrate AI into all of those applications very quickly.”

