RAG

Productionize LLM RAG App in Django — Part I: Celery

Automate Pinecone Daily Upsert Task with Celery and Slack monitoring. It’s been some time since my last LLM post and I’m excited to share that my prototype has been successfully productionized as Outside’s...
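
The excerpt only teases the setup, so here is a minimal sketch of what a daily Pinecone upsert task with Slack monitoring could look like in Celery. The broker URL, index name, webhook variable, and fetch_new_embeddings() helper are illustrative assumptions, not details from the article.

# Hypothetical sketch of a daily Pinecone upsert task with Slack monitoring in Celery.
# Broker URL, index name, webhook URL, and fetch_new_embeddings() are placeholders.
import os
import requests
from celery import Celery
from celery.schedules import crontab
from pinecone import Pinecone

app = Celery("rag_tasks", broker=os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0"))

# Schedule the upsert to run every day at 02:00 (requires a celery beat process)
app.conf.beat_schedule = {
    "daily-pinecone-upsert": {
        "task": "tasks.upsert_new_documents",
        "schedule": crontab(hour=2, minute=0),
    }
}

SLACK_WEBHOOK_URL = os.environ.get("SLACK_WEBHOOK_URL", "")

def notify_slack(text):
    """Post a short status message to a Slack incoming webhook, if one is configured."""
    if SLACK_WEBHOOK_URL:
        requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)

def fetch_new_embeddings():
    """Placeholder: a real pipeline would embed documents published since the last run."""
    return []  # list of (id, vector, metadata) tuples accepted by index.upsert

@app.task(name="tasks.upsert_new_documents", bind=True, max_retries=3)
def upsert_new_documents(self):
    try:
        pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
        index = pc.Index("articles")  # hypothetical index name
        vectors = fetch_new_embeddings()
        if vectors:
            index.upsert(vectors=vectors)
        notify_slack(f"Pinecone daily upsert succeeded: {len(vectors)} vectors")
    except Exception as exc:
        notify_slack(f"Pinecone daily upsert failed: {exc}")
        raise self.retry(exc=exc, countdown=60)

With a beat scheduler and a worker running, the task fires once a day, and every run reports success or failure to Slack so silent failures are caught early.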

Building a RAG chain using LangChain Expression Language (LCEL)

QA RAG with Self Evaluation II. For this variation, we make a change to the evaluation procedure. Along with the question-answer pair, we also pass the retrieved context to the evaluator LLM. To perform this, we...
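
A minimal LCEL-style sketch of that evaluation step, passing the question, the generated answer, and the retrieved context to an evaluator LLM. The prompt wording and model name are assumptions, not taken from the post.

# Sketch of a self-evaluation step: the evaluator LLM sees the question, the
# generated answer, and the retrieved context. Prompt text and model are assumptions.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

eval_prompt = ChatPromptTemplate.from_template(
    "You are grading a RAG answer.\n"
    "Question: {question}\n"
    "Retrieved context: {context}\n"
    "Answer: {answer}\n"
    "Reply with CORRECT if the answer is supported by the context, otherwise INCORRECT."
)

# LCEL: pipe the prompt into the model, then parse the text output
evaluator = eval_prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()

verdict = evaluator.invoke({
    "question": "Who wrote the report?",
    "context": "The 2023 report was authored by the data science team.",
    "answer": "The data science team wrote it.",
})
print(verdict)  # e.g. "CORRECT"

Grading against the retrieved context, rather than the question-answer pair alone, lets the evaluator flag answers that sound plausible but are not actually supported by the retrieved documents.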

RAFT – A Fine-Tuning and RAG Approach to Domain-Specific Question Answering

As the applications of large language models expand into specialized domains, the need for efficient and effective adaptation techniques becomes increasingly important. Enter RAFT (Retrieval Augmented Fine Tuning), a novel approach that combines...

A beginner’s guide to building a Retrieval Augmented Generation (RAG) application from scratch

Learn critical knowledge for building AI apps, in plain English. Retrieval Augmented Generation, or RAG, is all the rage these days because it introduces some serious capabilities to large language models like OpenAI’s GPT-4 — and...

A Silent Evolution in AI: The Rise of Compound AI Systems Beyond Traditional AI Models

As we navigate recent developments in artificial intelligence (AI), a subtle but significant transition is underway, moving from reliance on standalone AI models like large language models (LLMs) to the more nuanced and...

How to Improve LLMs with RAG

Imports. We start by installing and importing the necessary Python libraries:
!pip install llama-index
!pip install llama-index-embeddings-huggingface
!pip install peft
!pip install auto-gptq
!pip install optimum
!pip install bitsandbytes  # if not running on Colab, ensure transformers is installed too
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from...
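
The excerpt cuts off mid-import; a minimal sketch of how such a llama-index setup typically continues, assuming a local "articles" folder and the BAAI/bge-small-en-v1.5 embedding model (both assumptions for illustration, not from the article):

# Minimal RAG retrieval setup with llama-index and a local HuggingFace embedding model.
# The data directory and model name are illustrative assumptions.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use a local HuggingFace model for embeddings instead of an API-based one
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load documents from a folder and build an in-memory vector index
documents = SimpleDirectoryReader("articles").load_data()
index = VectorStoreIndex.from_documents(documents)

# Retrieve the top-3 most similar chunks for a query
retriever = index.as_retriever(similarity_top_k=3)
nodes = retriever.retrieve("What is retrieval augmented generation?")
for node in nodes:
    print(node.node.get_content()[:200])

The retrieved chunks can then be stuffed into the prompt of whichever LLM the article fine-tunes or quantizes with peft, auto-gptq, and bitsandbytes.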

Overcoming LLM Hallucinations Using Retrieval Augmented Generation (RAG)

Large Language Models (LLMs) are revolutionizing how we process and generate language, but they're imperfect. Just as humans might see shapes in clouds or faces on the moon, LLMs can also 'hallucinate,' creating information...

Nomic AI launches open-source embedding model with the longest context, surpassing OpenAI

An open-source text embedding model has emerged that is claimed to perform better than OpenAI's 'text-embedding-ada-002', the best currently available. This is taken as a sign that the open-source large language model...
