Overview
In this course, you will learn how to build Retrieval-Augmented Generation (RAG) applications using LlamaIndex and JavaScript. Starting with an introduction to the course structure and prerequisites, you’ll dive straight into hands-on learning to build production-ready apps. By the end of the course, you will be able to integrate various data sources, set up an OpenAI account, and use powerful tools such as the RouterQueryEngine to handle advanced queries.

The course starts with setting up your development environment with NodeJS and OpenAI. You'll be introduced to LlamaIndex, explore its core features, and quickly dive into the fundamentals of RAG systems, from data ingestion and indexing to querying and building custom RAG systems. Along the way, you’ll gain in-depth knowledge of how to handle different types of data, such as PDFs, and how to interact with your system using an Express API.

As you progress, you'll tackle more advanced concepts like handling multiple query engines, using agents, and incorporating production-level techniques to make your RAG apps robust and scalable. The course culminates in building a full-stack chatbot app with NextJS and deploying it to Vercel. You’ll leave the course with the ability to develop, deploy, and maintain sophisticated RAG applications.

This course is ideal for developers with a solid understanding of JavaScript and basic web development concepts who wish to learn more about RAG systems and their application in real-world projects. No prior experience with LlamaIndex is necessary, but familiarity with NodeJS and APIs is recommended.
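To give a flavor of the ingest–index–query loop the course builds on, here is a minimal sketch using the llamaindex npm package. It assumes an OPENAI_API_KEY is set in the environment, and exact import paths and method signatures may differ between package releases; it is illustrative only, not the course's reference code.

```typescript
// Minimal RAG pipeline sketch with LlamaIndex.TS.
// Assumes: `npm install llamaindex` and OPENAI_API_KEY set in the environment.
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Ingest: wrap raw text in a Document (a directory or PDF reader could be used instead).
  const document = new Document({
    text: "LlamaIndex.TS lets JavaScript apps index their data and query it with an LLM.",
  });

  // Index: embed the document and store it in an in-memory vector index.
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Query: retrieve relevant chunks and have the LLM answer over them.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What does LlamaIndex.TS let JavaScript apps do?",
  });

  console.log(response.toString());
}

main().catch(console.error);
```

The same query engine can later be exposed behind an Express route or combined with other engines (for example via a router-style engine), which is the direction the later course modules take.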