Stream Processing with Kafka

Wednesday, 3:15 PM EST - SEASIDE

The toolset for building scalable data systems is maturing, having adapted well to our decades-old paradigm of update-in-place databases. We ingest events, we store them in high-volume OLTP databases, and we have new OLAP systems to analyze them at scale, even when the size of our operation requires dozens or hundreds of servers in a distributed system. But something feels a little dated about the store-and-analyze paradigm, as if we are missing a new architectural insight that might more efficiently distribute the work of storing and computing over the events that happen to our software. That new paradigm is stream processing.

In this workshop, we’ll learn the basics of Kafka as a messaging system, covering the core concepts of topic, producer, consumer, and broker. We’ll look at how topics are partitioned among brokers and see the simple Java APIs for getting data in and out. But more than that, we’ll look at how to extend this scalable messaging system into a streaming data processing system, one that offers significant advantages in scalability and deployment agility while locating computation in your data pipeline precisely where it belongs: in your microservices and applications, and out of costly, high-density systems.
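To give a flavor of the Java client API mentioned above, here is a minimal sketch of getting data into a topic; the broker address, topic name, and key/value below are illustrative placeholders rather than material from the session:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record's key determines which partition of the topic it lands on,
            // which is how a topic is spread across brokers.
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }
    }
}

And as a hedged sketch of the stream processing side, the word-count topology commonly used to introduce the Kafka Streams library shows how computation runs inside an ordinary application rather than a separate processing cluster (again, application and topic names are placeholders):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class WordCountApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "workshop-word-count"); // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read a stream of lines, split them into words, and keep a running count per word.
        KStream<String, String> lines = builder.stream("text-input");
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
                .groupBy((key, word) -> word)
                .count();
        counts.toStream().to("word-counts", Produced.with(Serdes.String(), Serdes.Long()));

        // The topology runs inside this plain Java process; scale out by running more instances.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Because the topology lives in a plain Java application, the computation sits in your services rather than in a dedicated high-density cluster, which is the deployment-agility point the abstract makes.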

Come to this workshop to learn how to do streaming data computation with Apache Kafka!


About Tim Berglund


Tim is a teacher, author, and technology leader with Confluent, where he serves as the Vice President of Developer Relations. He is a regular speaker at conferences and a presence on YouTube, where he explains complex technology topics in an accessible way. He tweets as @tlberglund and blogs every few years at http://timberglund.com. He has three grown children and two grandchildren, a fact about which he is rather excited.
