In this three-day hands-on course, you will learn how to build applications and data pipelines that publish data to, and subscribe to data from, an Apache Kafka cluster. You will learn the role of Kafka in the modern data-distribution pipeline, explore core Kafka architectural concepts and components, and review the Kafka developer APIs. The course also covers other components of the broader Confluent Platform, such as Kafka Connect, Kafka Streams, the REST Proxy, and the Schema Registry.
Who Should Attend?
This course is designed for application developers, ETL (extract, transform, and load) developers, and data scientists who need to interact with Kafka clusters as a source of, or destination for, data.
Attendees should be familiar with developing in Java (preferred) or Python. No prior knowledge of Kafka is required.
This is a three-day training course.
Course Outline
The Motivation for Apache Kafka
Developing with Kafka
More Advanced Kafka Development
Schema Management in Kafka
Kafka Connect for Data Movement
Basic Kafka Installation and Administration
Kafka in the Data Center
An Introduction to Kafka Streams for Data Processing
Throughout the course, Hands-On Exercises reinforce the topics being discussed.
What Is Provided During the Course?
We will provide you with a PDF copy of the course materials. You do not need to bring a computer to the course, as one will be supplied for your use.
Terms and Conditions
By purchasing a ticket for admission to these Course(s), you agree to be bound by the terms of Confluent's Public Training Agreement (the "Terms"), which is set forth at http://confluent.io/public-training-terms.