How Engineers At DraftKings Use Kafka Efficiently
Since 2012, our company’s mission has been to bring best-in-class products to our customers. Our global teams collaborate closely to continually evolve our technologies and systems and deliver the best customer experience.
As a result, it’s important that we empower our engineers to continually grow and augment their skills and knowledge. Recently, our engineers put together a new workshop showing how to use Kafka efficiently in a high-volume production environment like DraftKings. It’s a unique, fun, hands-on workshop where you'll write code (not just run pre-canned commands) to produce and consume messages on a multi-node cluster, and you'll learn how to handle failure scenarios and key use cases.
We’ve broken down the workshop for you into four parts. Go ahead and jump right in!
In the first section, we’ll get your local environment set up for Kafka development and try out a few key use cases to get familiar with how Kafka brokers, producers, and consumers interact.
In the second section, you’ll learn how to set up consumers in a safe and scalable manner to handle messages streamed through Kafka. We’ll cover parallel consumption, deserialization, and caching.
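To give a flavor of the deserialization and caching topics mentioned above, here is a minimal, broker-free sketch in plain Python. The `User` type, the message format, and the cache shape are all illustrative assumptions, not the workshop's actual code; the point is the pattern of turning raw Kafka message bytes into typed objects and feeding a local lookup table.

```python
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class User:
    """Illustrative message type -- not from the workshop itself."""
    user_id: int
    name: str


def deserialize_user(raw: bytes) -> User:
    """Turn a raw Kafka message value (JSON bytes) into a typed object."""
    payload = json.loads(raw.decode("utf-8"))
    return User(user_id=payload["user_id"], name=payload["name"])


# An in-memory cache keyed by user_id, updated as messages arrive.
# In a real consumer, handle_message would be called per record in the poll loop.
cache: dict[int, User] = {}


def handle_message(raw: bytes) -> None:
    user = deserialize_user(raw)
    cache[user.user_id] = user
```

In a real consumer, a handler like this would run once per polled record, so deserialization failures and cache invalidation need explicit handling, which is exactly the kind of ground the workshop covers.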
In the third section, you’ll learn how to set up your producers, produce messages, and optimize for throughput and reliability. We’ll cover some producer internals so you’ll understand the consequences of different configurations, blocking vs. non-blocking sends, serialization, and how to use Kafka for a performant in-memory/distributed cache.
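The throughput-vs-reliability trade-off above largely comes down to producer configuration. As a rough sketch, the settings below (`acks`, `linger.ms`, `batch.size`, `enable.idempotence`, `compression.type`) are real Kafka producer options, but the specific values and the helper function are illustrative, not the workshop's recommendations:

```python
def producer_config(optimize_for: str) -> dict:
    """Build an illustrative Kafka producer config dict.

    'throughput' favors batching and weaker acknowledgements;
    'reliability' favors full acks and idempotent delivery.
    """
    base = {
        "bootstrap.servers": "localhost:9092",  # assumed local broker
        "compression.type": "lz4",
    }
    if optimize_for == "throughput":
        base.update({
            "acks": "1",          # leader-only ack: faster, less durable
            "linger.ms": 20,      # wait briefly to fill larger batches
            "batch.size": 131072, # bigger batches, fewer requests
        })
    elif optimize_for == "reliability":
        base.update({
            "acks": "all",               # wait for in-sync replicas
            "enable.idempotence": True,  # no duplicates on retry
        })
    else:
        raise ValueError(f"unknown profile: {optimize_for}")
    return base
```

A dict like this could be handed to a client such as confluent-kafka's `Producer`; the workshop's producer internals section explains why each of these knobs has the effect it does.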
In our final section, we’ll cover the different cleanup policies and their configurations to help you manage message handling for various use cases. We’ll also take a deep dive into segments, partitions, and compression.
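The cleanup policies mentioned here are controlled by topic-level settings. As a small config fragment (the option names are real Kafka topic configs; the values are illustrative, not the workshop's recommendations):

```properties
# Keep only the latest record per key (log compaction);
# use cleanup.policy=delete for plain time/size-based retention.
cleanup.policy=compact
# Roll to a new log segment once the active one reaches ~100 MB.
segment.bytes=104857600
# With the delete policy, discard records older than 7 days.
retention.ms=604800000
```

Which policy fits depends on the use case: `delete` suits event streams where old data loses value, while `compact` suits changelog-style topics where only the latest value per key matters.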
Are you ready to work in an exciting environment that solves unique problems using the latest technology? Join our Engineering team and discover the impact you'll make.