An LLM can handle general routing. Semantic search can handle private data better. Which one would you choose?
A single prompt cannot handle everything, and a single data source will not be suitable for all the information.
Here’s something you frequently see in production but not in demos:
You want several data sources to retrieve information from: multiple vector stores, a graph database, and even a SQL database. And you want different prompts to handle different tasks, too.
If that's the case, we have a problem. Given unstructured, often ambiguous, and poorly formatted user input, how do we decide which database to retrieve data from?
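The obvious first attempt is to simply ask an LLM to pick the source. Here's a minimal sketch of that kind of router, assuming the OpenAI Python client; the model name and route labels are placeholders, not part of any particular framework:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ROUTER_PROMPT = """You route user questions to exactly one data source.
Reply with a single word:
- vector_store: questions answered by unstructured documents
- graph_db: questions about relationships, paths, or connections
- sql_db: questions about aggregates, counts, or records"""

def route(question: str) -> str:
    # Ask the model to classify the question into one of the three sources.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": ROUTER_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```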
If you still think this is easy, here's an example.
Suppose you have a tour-guiding chatbot, and a traveler asks for the optimal travel schedule between five places. Letting the LLM answer directly risks hallucination, since LLMs aren't good at location-based calculations.
Instead, if you store this information in a graph database, the LLM can generate a query to fetch the shortest travel path between those points…
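For illustration, here's a sketch of what that flow could look like, assuming Neo4j as the graph database and the official neo4j Python driver; the node label, relationship type, and connection details are all placeholders, and the Cypher below is only the kind of query an LLM might generate for two of the stops:

```python
from neo4j import GraphDatabase

# Placeholder connection details for this sketch.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# The kind of Cypher an LLM might generate for two of the traveler's stops.
CYPHER = """
MATCH p = shortestPath(
    (a:Place {name: $from_place})-[:CONNECTS_TO*]-(b:Place {name: $to_place})
)
RETURN [n IN nodes(p) | n.name] AS stops, length(p) AS hops
"""

def shortest_route(from_place: str, to_place: str) -> dict:
    # Run the generated query and hand the structured result back to the LLM
    # so it can phrase the itinerary for the traveler.
    with driver.session() as session:
        record = session.run(CYPHER, from_place=from_place, to_place=to_place).single()
        return {"stops": record["stops"], "hops": record["hops"]}
```

The graph does the path-finding; the LLM only has to turn the structured result into a readable answer.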