Commit fc99e65
Author: Julien Ruaux

feat: Added support for keyspace in Redis keys

1 parent fcf374d

File tree

6 files changed: +143 additions, −103 deletions

README.adoc

Lines changed: 6 additions & 0 deletions

@@ -10,4 +10,10 @@
 
 Kafka Connect source and sink connectors for https://redis.com/redis-enterprise-software/overview/[Redis Enterprise]
 
+== Documentation
+
 Refer to the link:https://{project-owner}.github.io/{project-name}[documentation] for configuration and usage information.
+
+== Docker Example
+
+Run `docker/run.sh` and follow prompts

docker/run.sh

Lines changed: 2 additions & 2 deletions

@@ -12,7 +12,7 @@ echo "Building the Redis Enterprise Kafka Connector"
 (
 cd ..
 ./mvnw clean package
-find ./target/components/packages -type d -name "redis-redis-enterprise-kafka-5.*" -mindepth 2 -maxdepth 2 -exec mv {} ./target/components/packages/redis-enterprise-kafka \;
+find ./target/components/packages -mindepth 2 -maxdepth 2 -type d -name "redis-redis-enterprise-kafka-5.*" -exec mv {} ./target/components/packages/redis-enterprise-kafka \;
 )
 
 echo "Starting docker ."
@@ -150,7 +150,7 @@
 Examine the Redis database:
 - In your shell run: docker-compose exec redis /usr/local/bin/redis-cli
 - List some RedisJSON keys: SCAN 0 TYPE ReJSON-RL
-- Show the JSON value of a given key: JSON.GET 971
+- Show the JSON value of a given key: JSON.GET pageviews:971
 ==============================================================================================================
 
 Use <ctrl>-c to quit'''
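The `find` change above is purely about option ordering: `-mindepth` and `-maxdepth` are global options, and GNU find emits a warning (though it still works) when they appear after tests such as `-type` or `-name`. A small self-contained sketch of the corrected pattern, using a throwaway directory tree with illustrative names:

```shell
#!/bin/sh
set -eu

# Build a throwaway tree shaped like target/components/packages/<vendor>/<component>
tmp=$(mktemp -d)
mkdir -p "$tmp/packages/redis/redis-enterprise-kafka-5.0"

# Global options (-mindepth/-maxdepth) come before the tests (-type/-name),
# mirroring the corrected invocation in docker/run.sh.
found=$(find "$tmp/packages" -mindepth 2 -maxdepth 2 -type d -name "redis-enterprise-kafka-5.*")
echo "$found"

rm -rf "$tmp"
```

With the options in the old order GNU find prints "warning: you have specified the global option -mindepth after the argument -type", which is the noise this commit removes from the build script.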

src/docs/asciidoc/_sink.adoc

Lines changed: 51 additions & 41 deletions

@@ -25,31 +25,22 @@ You can specify the number of tasks with the `tasks.max` configuration property.
 
 [[data-structures]]
 === Redis Data Structures
-
-Record keys and values have different roles depending on the target data structure.
+The {name} supports the following Redis data-structure types as targets:
 
 [[collection-key]]
-==== Collections
-For collections (stream, list, set, sorted set, timeseries) a single key is used which is independent of the record key.
-
-Use the `redis.key` configuration property (default: `${topic}`) to specify a format string for the destination collection, which may contain `${topic}` as a placeholder for the originating topic name.
-
-For example `kafka_${topic}` for the topic `orders` will map to the Redis key `kafka_orders`.
-
-==== Stream
-
-Use the following properties to store Kafka records as Redis stream messages:
-
-[source,properties]
-----
-redis.type=STREAM
-redis.key=<stream key> <1>
-value.converter=<Avro or JSON> <2>
-----
-
-<1> <<collection-key,Stream key>>
-<2> <<avro,Avro>> or <<kafka-json,JSON>>
-
+* Collections: <<sync-stream,stream>>, <<sync-list,list>>, <<sync-set,set>>, <<sync-zset,sorted set>>, <<sync-timeseries,time series>>
++
+Collection keys are generated using the `redis.key` configuration property, which may contain `${topic}` (default) as a placeholder for the originating topic name.
++
+For example with `redis.key = set:${topic}` and topic `orders` the Redis key is `set:orders`.
+
+* <<sync-hash,Hash>>, <<sync-string,string>>, <<sync-json,JSON>>
++
+For other data structures the key is in the form `<keyspace>:<record_key>`, where `keyspace` is generated using the `redis.key` configuration property as above and `record_key` is the sink record key.
++
+For example with `redis.key = ${topic}`, topic `orders`, and sink record key `123` the Redis key is `orders:123`.
+
+[[sync-hash]]
 ==== Hash
 Use the following properties to write Kafka records as Redis hashes:
 
@@ -62,8 +53,9 @@ value.converter=<Avro or JSON> <2>
 
 <1> <<key-string,String>> or <<key-bytes,bytes>>
 <2> <<avro,Avro>> or <<kafka-json,JSON>>.
-If value is null the key is https://redis.io/commands/del[deleted].
+If value is null the key is deleted.
 
+[[sync-string]]
 ==== String
 Use the following properties to write Kafka records as Redis strings:
 
@@ -76,8 +68,38 @@ value.converter=<string or bytes> <2>
 
 <1> <<key-string,String>> or <<key-bytes,bytes>>
 <2> <<value-string,String>> or <<value-bytes,bytes>>.
-If value is null the key is https://redis.io/commands/del[deleted].
+If value is null the key is deleted.
+
+[[sync-json]]
+==== JSON
+Use the following properties to write Kafka records as RedisJSON documents:
+
+[source,properties]
+----
+redis.type=JSON
+key.converter=<string or bytes> <1>
+value.converter=<string or bytes> <2>
+----
 
+<1> <<key-string,String>> or <<key-bytes,bytes>>
+<2> <<value-string,String>> or <<value-bytes,bytes>>.
+If value is null the key is deleted.
+
+[[sync-stream]]
+==== Stream
+Use the following properties to store Kafka records as Redis stream messages:
+
+[source,properties]
+----
+redis.type=STREAM
+redis.key=<stream key> <1>
+value.converter=<Avro or JSON> <2>
+----
+
+<1> <<collection-key,Stream key>>
+<2> <<avro,Avro>> or <<kafka-json,JSON>>
+
+[[sync-list]]
 ==== List
 Use the following properties to add Kafka record keys to a Redis list:
 
@@ -96,6 +118,7 @@ redis.push.direction=<LEFT or RIGHT> <3>
 The Kafka record value can be any format.
 If a value is null then the member is removed from the list (instead of pushed to the list).
 
+[[sync-set]]
 ==== Set
 Use the following properties to add Kafka record keys to a Redis set:
 
@@ -112,6 +135,7 @@ key.converter=<string or bytes> <2>
 The Kafka record value can be any format.
 If a value is null then the member is removed from the set (instead of added to the set).
 
+[[sync-zset]]
 ==== Sorted Set
 Use the following properties to add Kafka record keys to a Redis sorted set:
 
@@ -128,22 +152,8 @@ key.converter=<string or bytes> <2>
 The Kafka record value should be `float64` and is used for the score.
 If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
 
-[[redisjson]]
-==== JSON
-Use the following properties to write Kafka records as RedisJSON documents:
-
-[source,properties]
-----
-redis.type=JSON
-key.converter=<string or bytes> <1>
-value.converter=<string or bytes> <2>
-----
-
-<1> <<key-string,String>> or <<key-bytes,bytes>>
-<2> <<value-string,String>> or <<value-bytes,bytes>>.
-If value is null the key is https://redis.io/commands/del[deleted].
-
-==== TimeSeries
+[[sync-timeseries]]
+==== Time Series
 
 Use the following properties to write Kafka records as RedisTimeSeries samples:
 
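Put together, the reworked docs describe two key patterns: a single configurable key for collections, and `<keyspace><separator><record_key>` for hash, string, and JSON. A hypothetical sink configuration illustrating the non-collection case (the connector class name is inferred from this commit's package layout and the topic name is illustrative; verify both against the project docs):

```properties
# Hypothetical hash-sink configuration for the new keyspace behavior.
# A record on topic "orders" with key "123" lands in the Redis hash
# "orders:123" (keyspace "orders" from ${topic}, separator ":").
connector.class=com.redis.kafka.connect.RedisEnterpriseSinkConnector
topics=orders
redis.type=HASH
redis.key=${topic}
redis.separator=:
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```

Leaving `redis.key` empty enables passthrough, so the record key alone becomes the Redis key.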
src/main/java/com/redis/kafka/connect/sink/RedisEnterpriseSinkConfig.java

Lines changed: 30 additions & 15 deletions

@@ -48,9 +48,14 @@ public enum PushDirection {
 
     public static final String KEY_CONFIG = "redis.key";
     public static final String KEY_DEFAULT = TOKEN_TOPIC;
-    public static final String KEY_DOC = "A format string for the destination stream/set/zset/list key, which may contain '"
-            + TOKEN_TOPIC + "' as a placeholder for the originating topic name.\nFor example, ``kafka_" + TOKEN_TOPIC
-            + "`` for the topic 'orders' will map to the Redis key " + "'kafka_orders'.";
+    public static final String KEY_DOC = "A format string for destination key space, which may contain '" + TOKEN_TOPIC
+            + "' as a placeholder for the originating topic name.\nFor example, ``kafka_" + TOKEN_TOPIC
+            + "`` for the topic 'orders' will map to the Redis key space "
+            + "'kafka_orders'.\nLeave empty for passthrough (only applicable to non-collection data structures).";
+
+    public static final String SEPARATOR_CONFIG = "redis.separator";
+    public static final String SEPARATOR_DEFAULT = ":";
+    public static final String SEPARATOR_DOC = "Separator for non-collection destination keys.";
 
     public static final String MULTIEXEC_CONFIG = "redis.multiexec";
     public static final String MULTIEXEC_DEFAULT = "false";
@@ -78,7 +83,8 @@ public enum PushDirection {
 
     private final Charset charset;
     private final DataType type;
-    private final String keyFormat;
+    private final String keyspace;
+    private final String separator;
     private final PushDirection pushDirection;
     private final boolean multiexec;
     private final int waitReplicas;
@@ -89,7 +95,8 @@ public RedisEnterpriseSinkConfig(Map<?, ?> originals) {
         String charsetName = getString(CHARSET_CONFIG).trim();
         charset = Charset.forName(charsetName);
         type = ConfigUtils.getEnum(DataType.class, this, TYPE_CONFIG);
-        keyFormat = getString(KEY_CONFIG).trim();
+        keyspace = getString(KEY_CONFIG).trim();
+        separator = getString(SEPARATOR_CONFIG).trim();
         pushDirection = ConfigUtils.getEnum(PushDirection.class, this, PUSH_DIRECTION_CONFIG);
         multiexec = Boolean.TRUE.equals(getBoolean(MULTIEXEC_CONFIG));
         waitReplicas = getInt(WAIT_REPLICAS_CONFIG);
@@ -104,8 +111,12 @@ public DataType getType() {
         return type;
     }
 
-    public String getKeyFormat() {
-        return keyFormat;
+    public String getKeyspace() {
+        return keyspace;
+    }
+
+    public String getSeparator() {
+        return separator;
     }
 
     public PushDirection getPushDirection() {
@@ -138,10 +149,13 @@ public RedisEnterpriseSinkConfigDef(ConfigDef base) {
     private void define() {
         define(ConfigKeyBuilder.of(CHARSET_CONFIG, ConfigDef.Type.STRING).documentation(CHARSET_DOC)
                 .defaultValue(CHARSET_DEFAULT).importance(ConfigDef.Importance.HIGH).build());
-        define(ConfigKeyBuilder.of(TYPE_CONFIG, ConfigDef.Type.STRING).documentation(TYPE_DOC).defaultValue(TYPE_DEFAULT)
-                .importance(ConfigDef.Importance.HIGH).validator(Validators.validEnum(DataType.class)).build());
-        define(ConfigKeyBuilder.of(KEY_CONFIG, ConfigDef.Type.STRING).documentation(KEY_DOC).defaultValue(KEY_DEFAULT)
-                .importance(ConfigDef.Importance.MEDIUM).build());
+        define(ConfigKeyBuilder.of(TYPE_CONFIG, ConfigDef.Type.STRING).documentation(TYPE_DOC)
+                .defaultValue(TYPE_DEFAULT).importance(ConfigDef.Importance.HIGH)
+                .validator(Validators.validEnum(DataType.class)).build());
+        define(ConfigKeyBuilder.of(KEY_CONFIG, ConfigDef.Type.STRING).documentation(KEY_DOC)
+                .defaultValue(KEY_DEFAULT).importance(ConfigDef.Importance.MEDIUM).build());
+        define(ConfigKeyBuilder.of(SEPARATOR_CONFIG, ConfigDef.Type.STRING).documentation(SEPARATOR_DOC)
+                .defaultValue(SEPARATOR_DEFAULT).importance(ConfigDef.Importance.MEDIUM).build());
         define(ConfigKeyBuilder.of(PUSH_DIRECTION_CONFIG, ConfigDef.Type.STRING).documentation(PUSH_DIRECTION_DOC)
                 .defaultValue(PUSH_DIRECTION_DEFAULT).importance(ConfigDef.Importance.MEDIUM).build());
         define(ConfigKeyBuilder.of(MULTIEXEC_CONFIG, ConfigDef.Type.BOOLEAN).documentation(MULTIEXEC_DOC)
@@ -186,7 +200,7 @@ public int hashCode() {
         final int prime = 31;
         int result = super.hashCode();
         result = prime * result
-                + Objects.hash(charset, keyFormat, multiexec, pushDirection, type, waitReplicas, waitTimeout);
+                + Objects.hash(charset, keyspace, separator, multiexec, pushDirection, type, waitReplicas, waitTimeout);
         return result;
     }
 
@@ -199,9 +213,10 @@ public boolean equals(Object obj) {
         if (getClass() != obj.getClass())
             return false;
         RedisEnterpriseSinkConfig other = (RedisEnterpriseSinkConfig) obj;
-        return Objects.equals(charset, other.charset) && Objects.equals(keyFormat, other.keyFormat)
-                && multiexec == other.multiexec && pushDirection == other.pushDirection && type == other.type
-                && waitReplicas == other.waitReplicas && waitTimeout == other.waitTimeout;
+        return Objects.equals(charset, other.charset) && Objects.equals(keyspace, other.keyspace)
+                && Objects.equals(separator, other.separator) && multiexec == other.multiexec
+                && pushDirection == other.pushDirection && type == other.type && waitReplicas == other.waitReplicas
+                && waitTimeout == other.waitTimeout;
     }
 
 }
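The new `keyspace` and `separator` settings combine with the record key to form the destination key for non-collection types. A minimal sketch of that construction, assuming the documented behavior (the helper name and the `${topic}` substitution here are illustrative, not the connector's actual code):

```java
public class KeyspaceExample {

    // Illustrative helper: expand the ${topic} placeholder in the configured
    // redis.key value, then join keyspace and record key with the separator.
    // An empty keyspace passes the record key through unchanged.
    static String destinationKey(String keyFormat, String separator, String topic, String recordKey) {
        String keyspace = keyFormat.replace("${topic}", topic).trim();
        if (keyspace.isEmpty()) {
            return recordKey; // passthrough for non-collection structures
        }
        return keyspace + separator + recordKey;
    }

    public static void main(String[] args) {
        // redis.key = ${topic}, redis.separator = ":" (the defaults)
        System.out.println(destinationKey("${topic}", ":", "orders", "123")); // orders:123
        // Empty redis.key: record key used as-is
        System.out.println(destinationKey("", ":", "orders", "123")); // 123
    }
}
```

This mirrors the examples in the updated `_sink.adoc`: topic `orders` with record key `123` yields `orders:123` under the default configuration.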
