
Our Investment in Openlayer — The ML Debugging Workspace



By Astasia Myers and Matthew Humphrey

ML is on the rise, and there are millions of models available to try. Data scientists pick a model from Hugging Face or GitHub and fine-tune it on their own dataset. Then they test it with scripts they often write themselves. As we’ve seen in the news, ML models can unfortunately make mistakes that lead to inaccurate results.
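To make the pain concrete, here is a minimal sketch of the kind of ad hoc test script described above. The model name, file name, and column names are illustrative assumptions, not details from this post:

```python
# A minimal sketch of the ad hoc evaluation script pattern described above.
# The model name, file name, and column names are illustrative assumptions.
import pandas as pd
from transformers import pipeline

df = pd.read_csv("test_set.csv")  # hypothetical held-out test data with "text" and "label" columns
clf = pipeline("sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english")

# Run the model over the test set and score it with one aggregate number.
preds = [p["label"] for p in clf(df["text"].tolist(), truncation=True)]
accuracy = (pd.Series(preds) == df["label"]).mean()
print(f"Accuracy: {accuracy:.2%}")  # says nothing about when, how, or why the model fails
```

A script like this produces a single headline metric, which is exactly the limitation the next sections describe.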

To solve this problem, data science teams need rigorous error analysis, which helps them understand when, how, and why models fail. Error analysis isolates, observes, and diagnoses erroneous ML predictions. As Stanford professor Andrew Ng puts it, “if you do error analysis well, it can tell you what’s the most efficient use of your time to improve performance.”

ML organizations appreciate that error analysis is an important step in the development process, particularly as the industry shifts to data-centric ML, where the focus is on data instead of code. Because error analysis can identify failure modes to inform data collection and labeling, it enables data-centric ML development. Data is critical for training and fine-tuning, so these insights have an enormous impact on model performance.

Today, error analysis is difficult for teams because they have to write ad hoc scripts or guess why a model is failing. They rely on their own intuition to perform data analysis rather than repeatable, programmatic constructs.
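As an illustration of what repeatable, programmatic constructs can look like, here is a hedged sketch that slices an evaluation set by input length and compares error rates per slice; the file and column names are hypothetical:

```python
# A sketch of repeatable, programmatic error analysis: slice the evaluation
# set and compare error rates per slice. File and column names are hypothetical.
import pandas as pd

results = pd.read_csv("eval_results.csv")  # one row per example: text, label, prediction
results["correct"] = results["prediction"] == results["label"]

# Bucket examples by input length; any metadata column (data source, language,
# label class) works the same way.
results["length_bucket"] = pd.cut(
    results["text"].str.len(),
    bins=[0, 50, 200, 10_000],
    labels=["short", "medium", "long"],
)

# Per-slice accuracy surfaces failure modes (e.g. long inputs) that a single
# aggregate score hides, and the script can be re-run after every retraining.
print(results.groupby("length_bucket", observed=True)["correct"].agg(["mean", "count"]))
```

The point is not this particular slice but the pattern: encode the analysis once so it can be repeated, versioned, and shared, instead of re-deriving it from intuition each time.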

Gabriel Bayomi, Rishab Ramanathan, and Vikas Nair, who have direct AI experience from working together on Apple’s ML teams, founded Openlayer with the ML practitioner in mind. As ML builders, they experienced firsthand how error analysis boosts model performance to make it production-ready. The founders left Apple to solve this problem by building a best-in-class solution for evaluating and improving models.

Openlayer makes collaborative error analysis easy and intuitive. Companies can track and version models, uncover errors, and make informed decisions on data collection and model retraining. Openlayer’s approach stands out because it emphasizes goal-driven development. Onboard your data and models to Openlayer and collaborate with the whole team to align expectations around quality and performance. At a glance, data scientists can see the reason behind failures with root cause analysis and focus on the areas where there is room for improvement.

Openlayer is designed to help teams increase their productivity and efficiency in the development and deployment of ML models. With a focus on the three V’s of ML (versioning, validation, and velocity), their product provides a reproducible and trackable way to build and improve ML models.

Today, we’re honored to announce that Quiet Capital led Openlayer’s $4.8M seed round. We’re joined by angels Gokul Rajaram (Doordash), Max Mullen (Instacart), John Kim (Sendbird), Oliver Cameron (Cruise), Immad Akhund (Mercury), Guillermo Rauch (Vercel), Mike Krieger (Instagram), and Jonathan Swanson (Thumbtack), among others. We’re incredibly excited to support Gabriel, Rishab, Vikas, and the entire Openlayer team in providing tooling that helps make ML models production-ready.

Join the Openlayer community by following them on LinkedIn and Twitter!
