
Generative AI is a Gamble Enterprises Should Take in 2024


LLMs today suffer from inaccuracies at scale, but that doesn’t mean you must cede competitive ground by waiting to adopt generative AI.

Building an AI-ready workforce with data.world OWLs, as imagined by OpenAI’s GPT-4

Every enterprise technology has a purpose or it wouldn’t exist. Generative AI’s enterprise purpose is to produce human-usable output from technical, business, and language data rapidly and at scale, driving productivity, efficiency, and business gains. But this primary function of generative AI — to provide a ready answer — is also the source of large language models’ (LLMs) biggest barrier to enterprise adoption: so-called “hallucinations.”

Why do hallucinations occur at all? Because, at their core, LLMs are complex statistical matching systems. They analyze billions of data points to determine patterns and predict the most likely response to any given prompt. But while these models may impress us with the usefulness, depth, and creativity of their answers, seducing us to trust them every time, they are far from reliable. Recent research from Vectara found that chatbots can “invent” new information up to 27% of the time. In an enterprise setting where query complexity can vary greatly, that number climbs even higher. A recent benchmark from data.world’s AI Lab using real business data found that, when deployed as a standalone solution, LLMs return accurate responses to most basic business queries only 25.5% of the time. For intermediate or expert-level queries, which are still well within the bounds of typical, data-driven enterprise queries, accuracy dropped to ZERO percent!

The tendency to hallucinate may be inconsequential for individuals playing around with ChatGPT for small or novelty use cases. But when it comes to enterprise deployment, hallucinations present a systemic risk. The consequences range from inconvenient (a service chatbot sharing irrelevant information in a customer interaction) to catastrophic, such as inputting the wrong figure on an SEC filing.

As it stands, generative AI is still a gamble for the enterprise. However, it’s also a necessary one. As we learned at OpenAI’s first developer conference, 92% of Fortune 500 companies are using OpenAI APIs. The potential of this technology in the enterprise is so transformative that the path forward is resoundingly clear: start adopting generative AI, knowing that the rewards come with serious risks. The alternative is to insulate yourself from the risks, and swiftly fall behind the competition. The inevitable productivity lift is so obvious now that not taking advantage of it could be existential to an enterprise’s survival. So, faced with this illusion of choice, how can organizations go about integrating generative AI into their workflows while simultaneously mitigating risk?

First, you need to prioritize your data foundation. Like any modern enterprise technology, generative AI solutions are only as good as the data they’re built on top of — and according to Cisco’s recent AI Readiness Index, intention is outpacing ability, particularly on the data front. Cisco found that while 84% of companies worldwide believe AI will have a significant impact on their business, 81% lack the data centralization needed to leverage AI tools to their full potential, and only 21% say their network has ‘optimal’ latency to support demanding AI workloads. It’s a similar story when it comes to data governance; just three out of ten respondents currently have comprehensive AI policies and protocols, while only four out of ten have systematic processes for AI bias and fairness corrections.

As benchmarking demonstrates, LLMs have a hard enough time already retrieving factual answers reliably. Combine that with poor data quality, a lack of data centralization and management capabilities, and limited governance policies, and the risk of hallucinations — and their accompanying consequences — skyrockets. Put simply, companies with a strong data architecture have better and more accurate information available to them and, by extension, their AI solutions are equipped to make better decisions. Working with a data catalog or evaluating internal governance and data access processes may not feel like the most exciting part of adopting generative AI. But it’s those considerations — data governance, lineage, and quality — that can make or break the success of a generative AI initiative. A strong data foundation not only enables organizations to deploy enterprise AI solutions faster and more responsibly, but also allows them to keep pace with the market as the technology evolves.
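To make the idea concrete, here is a minimal sketch of a pre-flight quality gate that withholds a table from a model’s context if it is incomplete or stale. The thresholds, table, and function names are hypothetical illustrations, not data.world’s actual governance rules; real deployments would hang such checks off a catalog or pipeline.

```python
# Minimal sketch of a data quality gate, assuming a pandas DataFrame
# as the source table. Thresholds are illustrative, not prescriptive.
from datetime import datetime, timedelta

import pandas as pd


def passes_quality_gate(
    df: pd.DataFrame,
    updated_at: datetime,
    max_null_ratio: float = 0.05,
    max_staleness: timedelta = timedelta(days=1),
) -> bool:
    """Return True only if the table is complete and fresh enough
    to serve as grounding context for an LLM."""
    worst_null_ratio = df.isna().mean().max()  # null share of the worst column
    is_fresh = datetime.utcnow() - updated_at <= max_staleness
    return worst_null_ratio <= max_null_ratio and is_fresh


# Hypothetical revenue table with a missing value: the gate should reject it.
revenue = pd.DataFrame({"region": ["EMEA", "APAC"], "q4_revenue": [1.2, None]})
if not passes_quality_gate(revenue, updated_at=datetime.utcnow()):
    print("Table failed quality checks; withhold it from the model's context.")
```

The specifics will vary by stack; the point is that governance checks run before data ever reaches the model, not after a hallucination surfaces.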

Second, you need to build an AI-educated workforce. Research points to the fact that techniques like advanced prompt engineering can prove useful in identifying and mitigating hallucinations. Other methods, such as fine-tuning, have been shown to dramatically improve LLM accuracy, even to the point of outperforming larger, more advanced general-purpose models. However, employees can only deploy these tactics if they’re empowered with the latest training and education to do so. And let’s be honest: most employees aren’t. We are just over the one-year mark since the launch of ChatGPT on November 30, 2022!
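As one example of the kind of technique that training should cover, here is a minimal sketch of a defensive prompting pattern: constrain the model to answer only from supplied context and to admit when it can’t. It assumes the openai v1 Python SDK and an OPENAI_API_KEY in your environment; the context string and model choice are illustrative.

```python
# Minimal defensive-prompting sketch: ground the model in supplied context
# and give it an explicit, safe way to say "I don't know".
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a careful business analyst. Answer ONLY from the context "
    "provided. If the context does not contain the answer, reply exactly: "
    "'I don't have enough information to answer that.' Never guess."
)

context = "Q4 2023 EMEA revenue: $1.2M. Q4 2023 APAC revenue: $0.9M."
question = "What was Q4 2023 revenue in North America?"

response = client.chat.completions.create(
    model="gpt-4",
    temperature=0,  # reduce variance; don't invite creative answers
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)  # expected: the refusal, not a guess
```

A prompt like this doesn’t eliminate hallucinations, but employees who understand the pattern can spot when an answer wasn’t grounded in the data at all.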

When a major vendor such as Databricks or Snowflake releases new capabilities, organizations flock to webinars, conferences, and workshops to make sure they can take advantage of the latest features. Generative AI should be no different. Create a culture in 2024 where educating your team on AI best practices is your default; for instance, by providing stipends for AI-specific L&D programs or bringing in an outside training consultant, such as the work we’ve done at data.world with Rachel Woods, who serves on our Advisory Board and founded and leads The AI Exchange. We also promoted Brandon Gadoci, our first data.world employee outside of me and my co-founders, to be our VP of AI Operations. The staggering lift we’ve already had in our internal productivity is nothing short of inspirational (I wrote about it in this three-part series). Brandon just reported yesterday that we’ve seen an astounding 25% increase in our team’s productivity through the use of our internal AI tools across all job roles in 2023! Adopting such a culture will go a long way toward ensuring your organization is equipped to understand, recognize, and mitigate the threat of hallucinations.

Third, you need to stay on top of the burgeoning AI ecosystem. As with any new paradigm-shifting tech, AI is surrounded by a proliferation of emerging practices, software, and processes to minimize risk and maximize value. As transformative as LLMs may become, the wonderful truth is that we’re just at the beginning of the long arc of AI’s evolution.

Technologies once foreign to your organization may become critical. The aforementioned benchmark we released found that LLMs backed by a knowledge graph — a decades-old architecture for contextualizing data in three dimensions (mapping and relating data much like a human brain works) — can improve accuracy by 300%! Likewise, technologies like vector databases and retrieval-augmented generation (RAG) have risen to prominence given their ability to help address the hallucination problem with LLMs. Long-term, the ambitions of AI extend far beyond the APIs of the major LLM providers available today, so remain curious and nimble in your enterprise AI investments.
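For the curious, here is a minimal sketch of the RAG pattern under the same assumptions as the earlier example (openai v1 SDK, hypothetical documents): embed a handful of documents, retrieve the closest match to the question by cosine similarity, and ground the model’s answer in it. A production system would swap the in-memory list for a vector database, but the shape of the technique is the same.

```python
# Minimal RAG sketch: embed, retrieve by cosine similarity, then answer
# grounded in the retrieved document. Documents here are hypothetical.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Q4 2023 EMEA revenue was $1.2M, up 8% year over year.",
    "The 2024 hiring plan adds 12 roles, mostly in engineering.",
    "Our SOC 2 Type II audit was completed in October 2023.",
]


def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])


doc_vectors = embed(documents)

question = "How did EMEA do in Q4?"
q_vector = embed([question])[0]

# Cosine similarity = dot product of L2-normalized vectors.
norm_docs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
norm_q = q_vector / np.linalg.norm(q_vector)
best_doc = documents[int(np.argmax(norm_docs @ norm_q))]

answer = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system", "content": "Answer only from the provided context."},
        {"role": "user", "content": f"Context: {best_doc}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

Grounding answers in retrieved records is exactly why RAG, vector databases, and knowledge graphs have become the center of gravity for enterprise LLM architectures.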

Like any new technology, generative AI solutions are not perfect, and their tendency to hallucinate poses a very real threat to their current viability for widespread enterprise deployment. However, these hallucinations shouldn’t stop organizations from experimenting with and integrating these models into their workflows. Quite the opposite, actually, as so eloquently stated by AI pioneer and Wharton entrepreneurship professor Ethan Mollick: “…understanding comes from experimentation.” Rather, the risk hallucinations pose should act as a forcing function for enterprise decision-makers to acknowledge what’s at stake, take steps to mitigate that risk accordingly, and reap the early benefits of LLMs in the process. 2024 is the year that your enterprise should take the leap.
