Demystifying the Next Advancement in LLMs: Retrieval-Augmented Generation (RAG)

Have you ever asked an AI language model like ChatGPT about the latest developments on a certain topic, only to receive this response: "I'm sorry, but as of my last knowledge update in January 2022, I don't have information on the topic at hand."? If you have, you've encountered a fundamental limitation of large language models. You can think of these models as time capsules of knowledge, frozen at the point of their last training. They can only learn new information by going through a retraining process, which is both time-consuming and computationally intensive.

In the fast-paced world of artificial intelligence, a new technology is emerging to tackle this challenge: Retrieval-Augmented Generation, or RAG. This innovative approach is revolutionizing how language models operate, breaking down barriers and opening up new possibilities. But what exactly is RAG? Why is it important? And how does it work? All this and more in this talk.
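To give a flavor of the pattern the talk covers, here is a minimal, hypothetical sketch of the RAG loop: retrieve documents relevant to the user's question, then augment the prompt with that context before handing it to a language model. All names and the toy corpus are invented for illustration; a real system would use vector embeddings and an actual LLM call, but keyword-overlap retrieval keeps the example self-contained.

```python
import re

# Toy document store standing in for an external knowledge base.
DOCUMENTS = [
    "Apache Flink is a framework for stateful stream processing.",
    "Apache Spark is an engine for large-scale data analytics.",
    "RAG combines retrieval with generation to ground LLM answers.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared words with the query (toy retriever)."""
    q = tokens(query)
    scored = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Augment the user question with retrieved context for the LLM."""
    context = "\n".join(context_docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context above."

query = "What is Apache Flink?"
prompt = build_prompt(query, retrieve(query, DOCUMENTS))
# The prompt now carries fresh, retrieved facts, so the model need not
# rely solely on knowledge frozen at training time.
```

The key idea is that updating the document store updates what the model can answer about, with no retraining required.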


About Adi Polak

Adi is an experienced Software Engineer and people manager. For most of her professional life, she has worked with data and machine learning for operations and analytics. As a data practitioner, she developed algorithms to solve real-world problems using machine learning techniques and leveraging expertise in Apache Spark, Kafka, HDFS, and distributed large-scale systems.

Adi has taught Spark to thousands of students and is the author of the successful book Scaling Machine Learning with Spark. Earlier this year, she embarked on a new adventure with data streaming, specifically Flink, and she can't get enough of it.
