Contextual AI: Making Enterprise-Ready LLMs Available For All

The team at Contextual

Announcing Lightspeed’s latest investment in the AI space

Lightspeed Venture Partners

Large language models (LLMs) have taken the world by storm in recent months, and for good reason — LLMs are uniquely suited to understand and extract insights from unstructured data, outperforming previous machine learning approaches that either didn’t work at all or were extremely brittle.

70% of enterprise data is unstructured in nature (text, images, video, etc.). Further, the rise of remote communication and hybrid work has significantly accelerated the generation of unstructured data via applications like Slack, Zoom, and Google Docs. However, unstructured data is difficult for traditional information processing and business intelligence systems to process and analyze.

At Lightspeed, we believe LLMs represent a major “unlock” for AI in the enterprise, enabling businesses to generate many more insights from their troves of unstructured data. Existing operational and analytical processing systems were only tackling the tip of the iceberg, with the vast bulk of enterprise data hidden beneath the surface. Now, thanks to LLMs, the overwhelming majority of enterprise data has suddenly become addressable.

However, with great power comes great responsibility… and hallucination.

Most LLMs are trained on a general corpus of web data, leading them to generate inaccurate responses to specific fact-based questions. In fact, most foundation models are not specifically trained to retrieve factual information, resulting in poor performance in accuracy-sensitive contexts.

The flexibility and creativity of LLMs is therefore a double-edged sword: if not reined in, they can easily get too creative, quite literally making up plausible-sounding “facts” in response to user queries. This is a non-starter for most serious business use cases and will inevitably slow adoption of generative AI if not addressed.

That’s not all. In the flurry of activity and interest around generative AI, a number of other thorny issues have emerged:

  • How does source attribution work in a generative context?
  • Large language models are, well, large — can the typical enterprise run such a model itself?
  • Facts change. How can we ensure models keep up with an ever-evolving world?
  • AI is incredibly data-hungry — how can it be harnessed in a privacy-preserving way?
  • Are these systems compliant? How would you know?

While most are only now waking up to this harsh reality, a few forward-thinking individuals saw this coming and have been working on practical, research-backed solutions to these problems. Contextual AI’s founders, Douwe Kiela and Amanpreet Singh, have been training sophisticated large language models for much of their professional careers, advancing the state of the art through their well-cited research at places like Meta (Facebook AI Research), Hugging Face, and Stanford University.

In 2020, Douwe’s team at Facebook AI Research published a paper titled “Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,” in which they proposed a simple but powerful idea: LLMs become substantially more powerful when connected to external data sources, so why not make information retrieval a core part of the training regimen? The result is models that ground themselves in underlying data sources and generate factual responses far more reliably.
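
For readers who want the idea in concrete terms, here is a minimal sketch of the retrieve-then-generate pattern. This is an illustration under stated assumptions, not Contextual AI’s system or the paper’s method: TF-IDF similarity stands in for a learned neural retriever, the documents and query are invented, and `build_prompt` marks where a real LLM call would consume the retrieved context.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# NOT Contextual AI's implementation: TF-IDF stands in for a learned
# retriever, and the documents below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Q3 revenue grew 12% year over year, driven by enterprise contracts.",
    "The on-call rotation changes every Monday at 9am Pacific.",
    "Customer logs must be deleted after 90 days under our retention policy.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model answers from evidence,
    not from its parametric memory alone."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do we keep customer logs?"))
```

Grounding generation in retrieved text is also what makes source attribution tractable: the documents that informed a response can be shown alongside it.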

I read a lot of AI research, and I distinctly remember coming across the paper in 2020 and thinking, “this is really interesting — someone should do something with this!” I took copious highlights and made a mental note to return to the idea in the future.

Needless to say, the pitch for Contextual AI immediately clicked when I met Douwe and Aman years later. While their research record alone is impressive, they’re also laser-focused on real-world applications and customer pain points, traits not always seen among researchers.

We’re still in the early days of the AI revolution, and nascent, off-the-shelf LLMs aren’t yet ready for prime time. For AI to achieve its promise, we need a platform for enterprise-ready LLMs.

Contextual AI is that platform. With Contextual AI, businesses will be able to build, train, and deploy their own customized LLMs, all in a compliant, privacy-preserving way and with fundamentally better performance.

At Lightspeed, we’ve backed next-generation enterprise technology businesses since our earliest days, and foundational architectural innovation has always been a core part of the thesis behind our most successful investments. Contextual AI is no different.

We at Lightspeed are excited to partner with Douwe and Aman as investors in their $20M seed round. We are huge believers in their mission to make enterprise-ready AI a reality, and we can’t wait to see what they do.

–Nnamdi Iregbulem, Partner
