
How to recover kafka messages?

We are considering using Kafka for distributed development, but we would also like to use it as a database. Specific case: we write to a "transact" topic in Kafka and want to rely on it to store all the transactions. The question is: is a recovery plan needed in this design? Would Kafka lose data due to crashes or disk failures? Or does Kafka have its own recovery mechanics, so the user doesn't need a recovery plan on their side?

asked by baron on Oct 15 '25

1 Answer

Short answer to your question:

Kafka provides durability and fault tolerance; however, you are responsible for configuring the corresponding parameters and for designing an architecture that can handle failovers, in order to ensure that you never lose any data.
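As a concrete starting point, here is a minimal Java producer sketch using two standard durability-oriented settings, acks=all and enable.idempotence. The broker address is a placeholder assumption; "transact" is the topic name from your question:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                // wait until all in-sync replicas acknowledge
        props.put("enable.idempotence", "true"); // avoid duplicates on producer retries

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("transact", "tx-1", "payload"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // The write was not acknowledged as durable; handle or retry
                            exception.printStackTrace();
                        }
                    });
            producer.flush();
        }
    }
}

With acks=all the write is only acknowledged once all in-sync replicas have it; that is the producer-side half of the durability story, and the broker-side half (replication) is explained below.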

Long answer to your question:

I'll answer your question by explaining how Kafka works in general and how it deals with failures.

Every topic is a particular stream of data (similar to a table in a database). Topics are split into partitions (as many as you like), where each message within a partition gets an incremental id, known as its offset, as shown below; a small consumer sketch after the diagrams shows how to read these offsets back.

Partition 0:

+---+---+---+-----+
| 0 | 1 | 2 | ... |
+---+---+---+-----+

Partition 1:

+---+---+---+---+----+
| 0 | 1 | 2 | 3 | .. |
+---+---+---+---+----+
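To make the offsets concrete, here is a minimal Java consumer sketch that prints the partition and offset of every record it reads. The broker address and group id are placeholder assumptions; "transact" is the topic from the question:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class OffsetPrinter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "offset-demo");             // placeholder group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest"); // start from offset 0 if no committed offset

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("transact"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                // Offsets increase monotonically within each partition
                System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
            }
        }
    }
}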

Now a Kafka cluster is composed of multiple brokers. Each broker is identified by an ID and can contain certain topic partitions; a small sketch after the diagrams shows how to list the brokers of a running cluster.

Example of 2 topics (having 3 and 2 partitions respectively):

Broker 1:

+-------------------+
|      Topic 1      |
|    Partition 0    |
|                   |
|                   |
|     Topic 2       |
|   Partition 1     |
+-------------------+

Broker 2:

+-------------------+
|      Topic 1      |
|    Partition 2    |
|                   |
|                   |
|     Topic 2       |
|   Partition 0     |
+-------------------+

Broker 3:

+-------------------+
|      Topic 1      |
|    Partition 1    |
|                   |
|                   |
|                   |
|                   |
+-------------------+

Note that data is distributed (and Broker 3 doesn't hold any data of topic 2).
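You can inspect the brokers of a running cluster programmatically. Here is a minimal sketch using the AdminClient (the broker address is again a placeholder assumption):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.Node;
import java.util.Properties;

public class ClusterInfo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // Each broker in the cluster is identified by a numeric id
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.printf("broker id=%d at %s:%d%n",
                        node.id(), node.host(), node.port());
            }
        }
    }
}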

Topics should have a replication factor > 1 (usually 2 or 3) so that when a broker is down, another one can serve the data of the topic. For instance, assume a topic with 2 partitions and a replication factor set to 2, as shown below:

Broker 1:

+-------------------+
|      Topic 1      |
|    Partition 0    |
|                   |
|                   |
|                   |
|                   |
+-------------------+

Broker 2:

+-------------------+
|      Topic 1      |
|    Partition 0    |
|                   |
|                   |
|     Topic 1       |
|   Partition 1     |
+-------------------+

Broker 3:

+-------------------+
|      Topic 1      |
|    Partition 1    |
|                   |
|                   |
|                   |
|                   |
+-------------------+

Now assume that Broker 2 has failed. Brokers 1 and 3 can still serve the data for topic 1. So a replication factor of 3 is always a good idea, since it allows one broker to be taken down for maintenance and another one to fail unexpectedly at the same time. Therefore, Apache Kafka offers strong durability and fault-tolerance guarantees.

Note about leaders: At any time, only one broker can be the leader of a partition, and only that leader can receive and serve data for that partition. The remaining brokers just synchronize the data (in-sync replicas). Also note that when the replication factor is set to 1, the leader cannot be moved elsewhere when a broker fails. In general, when all replicas of a partition fail or go offline, the leader is automatically set to -1.
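Putting the above together, here is a sketch that creates the "transact" topic from the question with 2 partitions and a replication factor of 3, then prints the leader and in-sync replicas (ISR) of each partition; the broker address is a placeholder assumption:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class ReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // 2 partitions, replication factor 3: tolerates one planned
            // and one unplanned broker outage at the same time
            NewTopic topic = new NewTopic("transact", 2, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();

            // Inspect which broker leads each partition and which replicas are in sync
            Map<String, TopicDescription> descriptions =
                    admin.describeTopics(Collections.singletonList("transact")).all().get();
            for (TopicPartitionInfo p : descriptions.get("transact").partitions()) {
                System.out.printf("partition=%d leader=%s isr=%s%n",
                        p.partition(), p.leader(), p.isr());
            }
        }
    }
}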

Note about the retention period: If you are planning to use Kafka as storage, you also need to be aware of the configurable retention period for every topic. If you don't take care of this setting, you might lose your data. According to the docs:

The Kafka cluster durably persists all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space.
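For the storage use case, one option is to disable time-based deletion by setting retention.ms to -1 on the topic. Here is a sketch using incrementalAlterConfigs (available in Kafka 2.3+; the broker address is a placeholder assumption):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import java.util.Collections;
import java.util.Properties;

public class UnlimitedRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "transact");
            // retention.ms = -1 keeps records forever (no time-based discarding)
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "-1"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Collections.singletonMap(
                    topic, Collections.singletonList(setRetention))).all().get();
        }
    }
}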

answered by Giorgos Myrianthous on Oct 18 '25


