Building Retrieval Augmented Generation (RAG) AI applications with LangChain

Experience Level: Beginner

Discover how Retrieval Augmented Generation (RAG) is revolutionizing the way Large Language Models (LLMs) deliver accurate and relevant responses. By adding context to prompts, RAG enhances the quality and precision of model-generated answers.


Retrieval Augmented Generation (RAG) is a powerful method that helps Large Language Models (LLMs) provide better responses by adding relevant context to their prompts. This improves the accuracy and relevance of the answers the models generate.

LangChain is an open-source framework that helps developers build applications using language models. It makes it easier to create RAG applications by connecting LLMs with various sources of context, such as prompt instructions, examples, and extra content. This enables the models to generate more accurate and meaningful responses.
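To make the idea concrete, here is a minimal sketch (deliberately independent of LangChain's actual API) of what "adding context to a prompt" means in RAG: retrieved document chunks are combined with the user's question before the prompt is sent to the LLM. The function name and prompt wording are illustrative, not part of any framework.

```python
def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine retrieved context chunks with the user's question
    into a single prompt for the LLM (illustrative sketch only)."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "What does RAG add to a prompt?",
    ["RAG adds retrieved context to prompts so the LLM can ground its answer."],
)
print(prompt)
```

In a real LangChain application, the retrieved chunks would come from a vector store and the prompt would be built with a prompt template, but the underlying idea is exactly this string composition.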

With over 100k stars on GitHub, LangChain has become one of the most popular tools for developing Generative AI applications. Its modular design and broad set of integrations make it an excellent choice for developers building advanced RAG applications.

In this talk, we will explain how LangChain works and the different components used in RAG, such as chunking, embedding computation and storage, and vector search. We will also show a live demo of how to use LangChain to create a RAG application.
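The components mentioned above can be illustrated with a self-contained toy pipeline. This sketch uses fixed-size character chunking, a bag-of-words count vector as a stand-in "embedding", and cosine similarity for vector search; real applications would use LangChain's text splitters, a neural embedding model, and a proper vector store, so every function here is a simplified assumption, not LangChain's API.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 100, overlap: int = 20) -> list[str]:
    # Fixed-size character chunking with overlap: a simplified stand-in
    # for a real text splitter.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. Real RAG apps use a
    # neural embedding model; this only illustrates the mechanics.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy "vector store": a list of (chunk, embedding) pairs.
docs = [
    "LangChain connects LLMs to external data sources.",
    "Vector search finds the chunks most similar to a query.",
    "Chunking splits long documents into smaller pieces.",
]
store = [(d, embed(d)) for d in docs]

def search(query: str, k: int = 1) -> list[str]:
    # Embed the query and return the k most similar stored chunks.
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(search("How does vector search work?"))
```

The talk's live demo replaces each of these toy pieces with the corresponding LangChain component, but the data flow (chunk, embed, store, search, then prompt the LLM with the results) is the same.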


Christophe Bornet

Christophe is a Senior Software Engineer at DataStax working on AI frameworks. Through his love for open source, he is also a LangChain Community Champion, an Apache Pulsar committer, and a core team member of the JHipster and OpenAPI-generator projects.
