The {name} reads from a Redis Enterprise stream and publishes messages to a Kafka topic.

== Features

The {name} includes the following features:

* <<at-least-once-delivery,At least once delivery>>
* <<tasks,Multiple tasks>>
* <<stream-reader,Stream Reader>>

[[at-least-once-delivery]]
=== At Least Once Delivery

The {name} guarantees that records are delivered to the Kafka topic at least once.

[[tasks]]
=== Multiple Tasks

Use the configuration property `tasks.max` to have the change stream handled by multiple tasks. The connector splits the work based on the number of configured key patterns. When the number of tasks is greater than the number of patterns, the number of patterns is used instead.
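The capping rule above can be sketched in Java. This is a hypothetical illustration of the splitting logic, not the connector's actual code; the class and method names are invented for this example, though the shape mirrors Kafka Connect's `SourceConnector#taskConfigs(int maxTasks)` contract.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: split comma-separated key patterns across tasks,
// capping the effective task count at the number of patterns.
public class TaskSplitSketch {

    static List<List<String>> taskConfigs(String keyPatterns, int maxTasks) {
        List<String> patterns = Arrays.asList(keyPatterns.split(","));
        // When tasks.max exceeds the pattern count, use the pattern count instead.
        int numTasks = Math.min(maxTasks, patterns.size());
        List<List<String>> configs = new ArrayList<>();
        for (int i = 0; i < numTasks; i++) {
            configs.add(new ArrayList<>());
        }
        // Distribute the patterns over the tasks round-robin.
        for (int i = 0; i < patterns.size(); i++) {
            configs.get(i % numTasks).add(patterns.get(i));
        }
        return configs;
    }

    public static void main(String[] args) {
        // 2 patterns but tasks.max=4: only 2 tasks get work.
        System.out.println(taskConfigs("foo:*,bar:*", 4)); // prints [[foo:*], [bar:*]]
    }
}
```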

//
//[[key-reader]]
//=== Key Reader
//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic. The data structure key will be mapped to the record key, and the value will be mapped to the record value.
//
//[IMPORTANT]
//.Supported Data Structures
//====
//The {name} supports the following data structures:
//
//* String: the Kafka record values will be strings
//* Hash: the Kafka record values will be maps (string key/value pairs)
//
//====
//
//[source,properties]
//----
//redis.keys.patterns=<glob> <1>
//topic=<topic> <2>
//----
//
//<1> Key portion of the pattern that will be used to listen to keyspace events. For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc. Use comma-separated values for multiple patterns (`foo:*,bar:*`).
//<2> Name of the destination topic.

[[stream-reader]]
=== Stream Reader

The {name} reads messages from a stream and publishes them to a Kafka topic. Reading is done through a consumer group so that the <<tasks,multiple tasks>> configured via `tasks.max` can consume messages in a round-robin fashion.
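The round-robin behavior can be pictured with a toy Java model. This is not the connector's code and does not touch Redis; it only mimics how a consumer group hands each stream message to exactly one consumer, with all names invented for the example.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of consumer-group delivery: each message goes to exactly one
// consumer, in round-robin order, mimicking tasks sharing a stream.
public class GroupDeliverySketch {

    static Map<String, List<String>> deliver(List<String> messages, List<String> consumers) {
        Map<String, List<String>> delivered = new LinkedHashMap<>();
        for (String c : consumers) {
            delivered.put(c, new ArrayList<>());
        }
        for (int i = 0; i < messages.size(); i++) {
            // A message is delivered to one consumer only, never broadcast.
            delivered.get(consumers.get(i % consumers.size())).add(messages.get(i));
        }
        return delivered;
    }

    public static void main(String[] args) {
        System.out.println(deliver(List.of("m1", "m2", "m3", "m4"),
                                   List.of("consumer-0", "consumer-1")));
        // prints {consumer-0=[m1, m3], consumer-1=[m2, m4]}
    }
}
```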
[source,properties]
----
redis.stream.name=<name> <1>
redis.stream.offset=<offset> <2>
redis.stream.block=<millis> <3>
redis.stream.consumer.group=<group> <4>
redis.stream.consumer.name=<name> <5>
topic=<name> <6>
----

<1> Name of the stream to read from.
<2> https://redis.io/commands/xread#incomplete-ids[Message ID] to start reading from (default: `0-0`).
<3> Maximum https://redis.io/commands/xread[XREAD] wait duration in milliseconds (default: `100`).
<4> Name of the stream consumer group (default: `kafka-consumer-group`).
<5> Name of the stream consumer (default: `consumer-${task}`). May contain `${task}` as a placeholder for the task id. For example, `foo${task}` with task id `123` resolves to the consumer name `foo123`.
<6> Destination topic (default: `${stream}`). May contain `${stream}` as a placeholder for the originating stream name. For example, `redis_${stream}` with stream `orders` resolves to the topic name `redis_orders`.
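The `${task}` and `${stream}` substitutions described in callouts <5> and <6> can be sketched as plain string replacement. The class and method names below are invented for illustration and are not the connector's API.

```java
// Hypothetical sketch of the placeholder substitution described above:
// '${task}' in the consumer-name template is replaced with the task id,
// and '${stream}' in the topic template with the originating stream name.
public class PlaceholderSketch {

    static String consumerName(String template, int taskId) {
        return template.replace("${task}", String.valueOf(taskId));
    }

    static String topicName(String template, String stream) {
        return template.replace("${stream}", stream);
    }

    public static void main(String[] args) {
        System.out.println(consumerName("consumer-${task}", 123)); // prints consumer-123
        System.out.println(topicName("redis_${stream}", "orders")); // prints redis_orders
    }
}
```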

The corresponding options are defined in `RedisEnterpriseSourceConfig` (excerpt):

[source,java]
----
public class RedisEnterpriseSourceConfig extends RedisEnterpriseConfig {

    public static final String TOPIC_DOC = String.format(
            "Name of the destination topic, which may contain '%s' as a placeholder for the originating stream name. "
                    + "For example `redis_%s` for the stream 'orders' will map to the topic name 'redis_orders'.",
            TOKEN_STREAM, TOKEN_STREAM);

    public static final String BATCH_SIZE = "batch.size";

    public static final String KEY_PATTERNS_DOC = "Keyspace glob-style patterns to subscribe to, comma-separated.";

    public static final String STREAM_CONSUMER_NAME_DOC =
            "A format string for the stream consumer, which may contain '" + TOKEN_TASK
                    + "' as a placeholder for the task id.\nFor example, 'consumer-" + TOKEN_TASK
                    + "' for the task id '123' will map to the consumer name 'consumer-123'.";

    public static final String STREAM_BLOCK_DOC =
            "The max amount of time in milliseconds to wait while polling for stream messages (XREAD [BLOCK milliseconds])";

    private final String streamName;
    private final String streamOffset;
    private final String streamConsumerGroup;
    private final String streamConsumerName;
    private final Long batchSize;
    private final Long streamBlock;
    private final String topicName;

    // ...
}
----