Sponsor: Confluent

Datacenter downtime and data loss can cost businesses substantial revenue or halt operations entirely. To minimize the downtime and data loss that result from a disaster, enterprises can create business continuity plans and disaster recovery strategies.

Download this white paper for a practical guide to configuring multiple Apache Kafka clusters so that if a disaster scenario strikes, you have a plan for failover, failback, and ultimately successful recovery.
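As a rough illustration of what a multi-cluster setup involves, the sketch below shows a minimal MirrorMaker 2 configuration that replicates topics from a primary cluster to a standby cluster for disaster recovery. The cluster aliases, bootstrap addresses, and topic pattern are assumptions for illustration, not values taken from the white paper.

    # Hypothetical cluster aliases and broker addresses; adjust to your environment.
    clusters = primary, standby
    primary.bootstrap.servers = primary-broker-1:9092
    standby.bootstrap.servers = standby-broker-1:9092

    # Replicate all topics from the primary datacenter to the standby datacenter.
    primary->standby.enabled = true
    primary->standby.topics = .*

    # Assumed replication factor for the mirrored topics on the standby cluster.
    replication.factor = 3

A file like this can be passed to Kafka's connect-mirror-maker.sh script; an actual failover and failback plan also needs decisions about consumer offset translation, client redirection, and when to re-enable the reverse replication flow, which is the ground the white paper covers.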

Learn how to take full advantage of Apache Kafka™, the distributed, publish-subscribe queue for handling real-time data feeds. With this comprehensive book, you’ll understand how Kafka works and how it’s designed.

Authors Neha Narkhede, Gwen Shapira, and Todd Palino show you how to deploy production Kafka clusters; secure, tune, and monitor them; write rock-solid applications that use Kafka; and build scalable stream-processing applications.

Learn how Apache Kafka compares to other queues and where it fits in the big data ecosystem

Dive into Kafka’s internal design

Pick up best practices for developing applications that use Kafka

Understand the best way to deploy Kafka in production, including monitoring, tuning, and maintenance tasks
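The book covers application development in depth; as a taste of what a Kafka application looks like, here is a minimal Java producer that publishes a single record. The broker address, topic name, and record contents are hypothetical placeholders, not examples from the book.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class FeedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");            // assumed broker address
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            props.put("acks", "all");  // wait for all in-sync replicas, a common reliability setting

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // send() is asynchronous; get() blocks until the broker acknowledges the write.
                producer.send(new ProducerRecord<>("page-views", "user-42", "viewed /pricing")).get();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }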

Over the past several years, organizations across many industries have discovered an increasingly important gap in their data infrastructure. It sits at the nexus of big data, data integration, and all of their data stores and applications, and it is being filled by streaming platforms like Apache Kafka.

Confluent has enjoyed a front-row view as companies adopt streaming platforms to create new products, become more responsive to customers, and make business decisions in real time. This survey focuses on why and how companies are using Apache Kafka and streaming data, and on the impact this has on their business.