
Moving Large Language Models (LLM) into Real-World Business Applications

Large language models are everywhere. Every customer conversation or VC pitch raises questions about how ready LLM technology is and how it will drive future applications. I covered some patterns on this in my previous post. Here I'll discuss some real-world patterns for an application in the pharma industry that Persistent Systems worked on.

Large Language Models and Core Strengths

LLMs are good at understanding language; that is their forte. The most common pattern we are seeing in applications is retrieval-augmented generation (RAG), where knowledge is compiled from external data sources and provided in context as a prompt for the LLM to paraphrase into a response. In this case, fast search mechanisms like vector databases and Elasticsearch-based engines serve as a first line of search. The search results are then compiled into a prompt and sent to the LLM, mostly as an API call.
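As a rough illustration, here is a minimal sketch of that RAG round trip in Python. The `vector_store` object and its `search` method are hypothetical stand-ins for whichever retrieval engine you use (a vector database or Elasticsearch), and the LLM call uses the OpenAI chat API purely as an example:

```python
# Minimal RAG sketch: retrieve context, compile a prompt, call the LLM.
# `vector_store` is a hypothetical retrieval client standing in for a
# vector database or Elasticsearch index; swap in your own engine.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_rag(question: str, vector_store, top_k: int = 5) -> str:
    # First line of search: fetch the most relevant passages.
    passages = vector_store.search(question, top_k=top_k)  # hypothetical API
    context = "\n\n".join(p.text for p in passages)

    # Compile the search results into a prompt for the LLM to paraphrase.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```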

Another pattern is generating a query on structured data by feeding the LLM a data model as the prompt along with a specific user question. This pattern can be used to develop an advanced "talk to your data" interface for SQL databases like Snowflake, as well as graph databases like Neo4j.
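A sketch of this second pattern, under the same caveats: the schema string below is made up for illustration (in practice it would be introspected from the database), and the model name is just an example.

```python
# "Talk to your data" sketch: feed the LLM a data model plus a user
# question and get back a candidate SQL query.
from openai import OpenAI

client = OpenAI()

# Made-up schema for illustration only.
SCHEMA = """
patients(patient_id INT, age INT, diagnosis TEXT, enrolled_on DATE)
lab_results(patient_id INT, test_name TEXT, value FLOAT, taken_on DATE)
"""

def question_to_sql(question: str) -> str:
    # The data model goes into the prompt alongside the user question.
    prompt = (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write a single SQL query that answers: {question}\n"
        "Return only the SQL."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(question_to_sql("How many patients over 60 had a lab test this year?"))
```

The returned SQL would typically be validated (or run against a read-only connection) before execution, since the model can produce incorrect queries.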

Leveraging LLM Patterns for Real-World Insights

Persistent Systems recently looked at this pattern for Blast Motion, a sports telemetry company (swing analysis for baseball, golf, etc.), where we analyzed time-series data of player summaries to generate recommendations.

For more complex applications, we often need to chain LLM requests, with processing between the calls. For a pharma company, we developed a smart trials app that filters patients for clinical trials based on criteria extracted from the clinical trial document. Here we used an LLM chain approach. First, an LLM reads the trial PDF document and uses the RAG pattern to extract the inclusion and exclusion criteria.

For this step, a relatively simple LLM like GPT-3.5 Turbo (ChatGPT) was sufficient. We then combined the extracted entities with the data model of the patients SQL database in Snowflake to create a prompt. This prompt, fed to a more powerful LLM like GPT-4, gives us a SQL query to filter patients that is ready to run on Snowflake. Since we use LLM chaining, we can use a different LLM for each step of the chain, which lets us manage cost.
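A condensed sketch of that chain, with the usual caveats (model names are illustrative, the prompts are simplified, and the trial text and schema are assumed to come from earlier retrieval and introspection steps):

```python
# Two-step LLM chain sketch for the clinical trials app.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    # Single LLM call; each link of the chain can use a different model.
    response = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def build_patient_filter(trial_text: str, patient_schema: str) -> str:
    # Step 1: a cheaper model extracts the inclusion/exclusion criteria
    # from the (already retrieved) trial document text.
    criteria = ask(
        "gpt-3.5-turbo",
        "List the inclusion and exclusion criteria in this clinical trial "
        f"document:\n\n{trial_text}",
    )
    # Step 2: a stronger model combines the criteria with the data model
    # to produce a Snowflake-ready SQL query.
    return ask(
        "gpt-4",
        f"Schema of the patients database:\n{patient_schema}\n\n"
        "Write a Snowflake SQL query that selects patients satisfying these "
        f"criteria:\n{criteria}\n\nReturn only the SQL.",
    )
```

Note that the orchestration here is plain, predictable Python; all the intelligence sits inside the two calls. That is the design choice discussed next.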

For now, we decided to keep this chain deterministic for better control. That is, the intelligence sits inside each link of the chain, while the orchestration stays simple and predictable. Each element of the chain is a complex application in itself that would have taken a few months to develop in the pre-LLM days.

Powering More Advanced Use Cases

For a more advanced case, we could use agents like ReAct to prompt the LLM to create step-by-step instructions to follow for a particular user query. This would of course need a high-end LLM like GPT-4, Cohere's models, or Claude 2. However, there is then a risk of the model taking an incorrect step, which would need to be caught with guardrails. It is a trade-off between keeping the intelligence in controllable links of the chain and making the whole chain autonomous.
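To make that trade-off concrete, here is a bare-bones agent-style loop. This is only a sketch of the idea, not ReAct itself (frameworks like LangChain package the full pattern): `ask` is the helper from the chain sketch above, while `is_safe_step` and `execute_step` are hypothetical placeholders for a guardrail check and tool execution.

```python
def run_agent(task: str, max_steps: int = 5) -> str:
    # The LLM proposes each next step; a guardrail verifies it before
    # anything is executed. `is_safe_step` and `execute_step` are
    # hypothetical placeholders.
    history = f"Task: {task}"
    for _ in range(max_steps):
        step = ask("gpt-4", f"{history}\nPropose the next step, or reply DONE.")
        if step.strip().upper().startswith("DONE"):
            break
        if not is_safe_step(step):  # guardrail: verify before acting
            history += f"\nStep rejected by guardrail: {step}"
            continue
        result = execute_step(step)  # run the tool call, query, etc.
        history += f"\nStep: {step}\nResult: {result}"
    return history
```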

Today, as we settle into the age of generative AI for language, the industry is starting to adopt LLM applications with predictable chains. As this adoption grows, we will soon start experimenting with more autonomy for these chains via agents. That is what the debate on AGI is all about, and we are interested to see how all of this evolves over time.
