Commit 9a31ee5 (root commit): Initial release


43 files changed: 1,641 additions, 0 deletions

.gitignore

Lines changed: 21 additions & 0 deletions
.gradle
**/build/
!src/**/build/

# Ignore Gradle GUI config
gradle-app.setting

# Avoid ignoring Gradle wrapper jar file (.jar files are usually ignored)
!gradle-wrapper.jar

# Avoid ignoring Gradle wrapper properties
!gradle-wrapper.properties

# Cache of project
.gradletasknamecache

# Eclipse Gradle plugin generated files
# Eclipse Core
.project
# JDT-specific (Eclipse Java Development Tools)
.classpath

LICENCE.md

Lines changed: 77 additions & 0 deletions
Business Source License

License text copyright (c) 2020 MariaDB Corporation Ab, All Rights Reserved.
“Business Source License” is a trademark of MariaDB Corporation Ab.

Licensor:
Redis Ltd., on behalf of itself and its affiliates and subsidiaries worldwide

Licensed Work:
The Licensed Work means the Redis Flink Connector described [here](https://github.com/redis-field-engineering/redis-flink-connector-dist)

Additional Use Grant:
You may make production use of the Licensed Work solely in connection with the following products offered by Redis Ltd. or its subsidiaries or affiliates (collectively, “Redis”):
(i) Redis Community Edition as described [here](https://redis.io/docs/latest/get-started/);
(ii) Redis Cloud as described [here](https://redis.io/cloud/); or
(iii) Redis Software as described [here](https://redis.io/enterprise/)
(collectively, the “Redis Services”), provided that such usage is not additionally made in connection or combination with a Competitive Offering.

A “Competitive Offering” is a Product that is offered to third parties on either a paid basis, including through paid support arrangements, or a non-paid basis, that significantly overlaps with the capabilities of the Licensed Work or the Redis Services. If Your Product is not a Competitive Offering when You first make it generally available, it will not become a Competitive Offering later due to Redis releasing a new version of the Licensed Work with additional capabilities.

“Product” means software that is offered to end users to manage in their own environments or offered as a service on a hosted basis.

Change Date: Four years from the date the Licensed Work is published

Change License: MIT

Terms:

The Licensor hereby grants you the right to copy, modify, create derivative works, redistribute, and make non-production use of the Licensed Work. The Licensor may make an Additional Use Grant, above, permitting limited production use.

Effective on the Change Date, or the fourth anniversary of the first publicly available distribution of a specific version of the Licensed Work under this License, whichever comes first, the Licensor hereby grants you rights under the terms of the Change License, and the rights granted in the paragraph above terminate.

If your use of the Licensed Work does not comply with the requirements currently in effect as described in this License, you must purchase a commercial license from the Licensor, its affiliated entities, or authorized resellers, or you must refrain from using the Licensed Work.

All copies of the original and modified Licensed Work, and derivative works of the Licensed Work, are subject to this License. This License applies separately for each version of the Licensed Work and the Change Date may vary for each version of the Licensed Work released by Licensor.

You must conspicuously display this License on each original or modified copy of the Licensed Work. If you receive the Licensed Work in original or modified form from a third party, the terms and conditions set forth in this License apply to your use of that work.

Any use of the Licensed Work in violation of this License will automatically terminate your rights under this License for the current and all other versions of the Licensed Work.

This License does not grant you any right in any trademark or logo of Licensor or its affiliates (provided that you may use a trademark or logo of Licensor as expressly required by this License).

TO THE EXTENT PERMITTED BY APPLICABLE LAW, THE LICENSED WORK IS PROVIDED ON AN “AS IS” BASIS. LICENSOR HEREBY DISCLAIMS ALL WARRANTIES AND CONDITIONS, EXPRESS OR IMPLIED, INCLUDING (WITHOUT LIMITATION) WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, AND TITLE.

README.adoc

Lines changed: 136 additions & 0 deletions
= Redis Flink Connector
:linkattrs:
:name: Redis Flink Connector
:project-owner: redis-field-engineering
:project-name: redis-flink-connector
:project-group: com.redis
:project-version: 0.0.1
:dist-repo-name: redis-flink-connector-dist

The Redis Flink Connector is a highly performant, scalable Flink source and sink connector for Redis. It is designed to provide a simple, scalable means of using Redis as a source and sink for your stream-processing use cases in Flink.

== Partitioned Streams

The Redis Flink Connector supports partitioned streams, allowing you to configure how many separate partitions you want for your stream of data. This lets you scale your stream across a Redis Cluster, while Flink manages the work of coordinating which consumer owns which partition.

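The connector's exact partition-assignment scheme is not documented in this README; as an illustration only, partitioned streams are commonly spread across a cluster by hashing each message key into one of `numPartitions` buckets, with each bucket backed by its own stream key. The names `partitionFor` and `streamKeyFor` below are hypothetical, not connector API:

```java
// Illustrative sketch of key-hash partitioning; not the connector's actual code.
public class PartitionSketch {

    // Map a message key to one of numPartitions buckets.
    static int partitionFor(String messageKey, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(messageKey.hashCode(), numPartitions);
    }

    // Each bucket is backed by its own Redis stream key, so partitions can
    // land on different shards of a Redis Cluster.
    static String streamKeyFor(String topicName, int partition) {
        return topicName + ":" + partition;
    }

    public static void main(String[] args) {
        int numPartitions = 4;
        for (String key : new String[] {"order-1", "order-2", "order-3"}) {
            int p = partitionFor(key, numPartitions);
            System.out.println(key + " -> " + streamKeyFor("events", p));
        }
    }
}
```

Because the mapping is deterministic, all messages with the same key land in the same partition, preserving per-key ordering.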
== Exactly-Once Semantics

The Redis Flink Connector supports exactly-once semantics, tied into Flink's checkpointing mechanism. Note that "exactly once" applies at the checkpoint level: in the case of a failure in your pipeline, messages within a checkpoint may be delivered more than once.

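Because redelivery can happen when a checkpoint is replayed, downstream consumers that need effective exactly-once processing often deduplicate by stream entry ID. A minimal, connector-independent sketch in plain Java (no Flink or Redis dependencies; `dedupByEntryId` is a hypothetical helper, not connector API):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: drop entries whose stream ID was already processed,
// e.g. after a checkpoint replay redelivers part of the stream.
public class DedupSketch {

    // Returns the IDs in arrival order with replayed duplicates removed.
    static List<String> dedupByEntryId(List<String> entryIds) {
        Set<String> seen = new LinkedHashSet<>(entryIds);
        return List.copyOf(seen);
    }

    public static void main(String[] args) {
        // "2-0" and "3-0" arrive twice, as if the last checkpoint was replayed
        List<String> replayed = List.of("1-0", "2-0", "3-0", "2-0", "3-0", "4-0");
        System.out.println(dedupByEntryId(replayed)); // [1-0, 2-0, 3-0, 4-0]
    }
}
```

In a real job, the seen-ID state would itself need to be checkpointed (or bounded to a window) rather than held in an unbounded in-memory set.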
=== Gradle

Add the following to your `build.gradle` file:

[source,groovy]
[subs="attributes"]
.build.gradle
----
dependencies {
    implementation '{project-group}:{project-name}-spring:{project-version}'
}
----

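If you build with Maven instead, the equivalent dependency should look like the following (this assumes the artifact is published under the same coordinates as the Gradle example above):

```xml
<dependency>
    <groupId>{project-group}</groupId>
    <artifactId>{project-name}-spring</artifactId>
    <version>{project-version}</version>
</dependency>
```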
== Using the Stream Source

To use the Flink stream source, create a `RedisSourceConfig`. The following table describes the fields in that class:

[cols="1,1,1,1",options="header"]
|===
| **Field** | **Type** | **Default Value** | **Required**
| `host` | `String` | `"localhost"` | No
| `port` | `int` | `6379` | No
| `password` | `String` | `""` (empty string) | No
| `user` | `String` | `"default"` | No
| `consumerGroup` | `String` | N/A | Yes
| `topicName` | `String` | N/A | Yes
| `numPartitions` | `int` | N/A | Yes
| `useClusterApi` | `boolean` | `false` | No
| `requireAck` | `boolean` | `true` | No
| `startingId` | `StreamEntryID` | `StreamEntryID.XGROUP_LAST_ENTRY` | No
|===

You can then initialize the source builder using:

[source,java]
----
RedisSourceBuilder<RedisMessage> sourceBuilder = new RedisSourceBuilder<>(sourceConfig, new RedisMessageDeserializer());
----

After that, all that's left is to use your environment to add the source to your pipeline:

[source,java]
----
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(globalConfig);
env.enableCheckpointing(5000);
env.setParallelism(4);
TypeInformation<RedisMessage> typeInfo = TypeInformation.of(RedisMessage.class);
String sourceName = "Redis to Redis";
// sinkBuilder is built as shown in the sink section below
env.fromSource(sourceBuilder.build(), WatermarkStrategy.noWatermarks(), sourceName, typeInfo).sinkTo(sinkBuilder.build());
----

== Using the Redis Stream Sink

To use the Redis Stream Sink, initialize a `RedisSinkConfig` object. The following table describes the fields in that class:

[cols="1,1,1,1",options="header"]
|===
| **Field** | **Type** | **Default Value** | **Required**
| `host` | `String` | `"localhost"` | No
| `port` | `int` | `6379` | No
| `password` | `String` | `""` (empty string) | No
| `user` | `String` | `"default"` | No
| `topicName` | `String` | N/A | Yes
| `numPartitions` | `int` | N/A | Yes
|===

You then initialize the builder and sink to it:

[source,java]
----
RedisSinkBuilder<RedisMessage> sinkBuilder = new RedisSinkBuilder<>(new RedisPassthroughSerializer(), sinkConfig);
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().setGlobalJobParameters(globalConfig);
env.enableCheckpointing(5000);
env.setParallelism(4);
TypeInformation<RedisMessage> typeInfo = TypeInformation.of(RedisMessage.class);
String sourceName = "Redis to Redis";
// sourceBuilder is built as shown in the source section above
env.fromSource(sourceBuilder.build(), WatermarkStrategy.noWatermarks(), sourceName, typeInfo).sinkTo(sinkBuilder.build());
----

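The README does not show the serializer interface itself, but conceptually a sink serializer turns each record into the field-value pairs that get written to the stream. A plain-Java sketch (the `Order` record and `toStreamFields` helper are hypothetical, for illustration only):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: map a record to the field-value pairs a stream
// sink would write with XADD. Not the connector's actual serializer API.
public class SerializerSketch {

    record Order(String id, double amount) {}

    static Map<String, String> toStreamFields(Order order) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("id", order.id());
        fields.put("amount", Double.toString(order.amount()));
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(toStreamFields(new Order("o-1", 9.99)));
    }
}
```

A passthrough serializer, as the name suggests, would skip this mapping step and forward already-serialized message fields unchanged.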
== Quick Start

You can run the demo in this repo by running:

[source,bash]
----
docker compose up -d
./example-redis-job.sh
----

This will spin up Redis, a Flink Job Manager and Task Manager, and start a job with Redis as the source and sink.

== Support

{name} is supported by Redis, Inc. for enterprise-tier customers as a 'Developer Tool' under the https://redis.io/legal/software-support-policy/[Redis Software Support Policy.] For non-enterprise-tier customers we provide support for {name} on a good-faith basis. To report bugs, request features, or receive assistance, please https://github.com/{project-owner}/{dist-repo-name}/issues[file an issue].

== License

{name} is licensed under the Business Source License 1.1. Copyright (C) 2024 Redis, Inc. See link:LICENSE.md[LICENSE] for details.

config/flink-conf.yaml

Lines changed: 10 additions & 0 deletions
blob.server.port: 6124
metrics.reporter.prom.port: 9249
taskmanager.memory.process.size: 1024m
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 9020-9030
taskmanager.numberOfTaskSlots: 8
jobmanager.rpc.address: jobmanager
metrics.reporter.prom.factory.class: org.apache.flink.metrics.prometheus.PrometheusReporterFactory
jobmanager.memory.process.size: 1024m
query.server.port: 6125

docker-compose.yml

Lines changed: 70 additions & 0 deletions
version: "2.2"
services:
  jobmanager:
    image: flink:latest
    container_name: jobmanager
    ports:
      - "8081:8081"
    command: jobmanager
    volumes:
      - ./config/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: flink:latest
    container_name: taskmanager
    depends_on:
      - jobmanager
      - redis
      - kafka
      - zookeeper
    command: taskmanager
    scale: 1
    ports:
      - "9249:9249" # Prometheus metrics
      - "9020-9030:9020-9030" # JMX metrics
    volumes:
      - ./config/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml

  redis:
    image: redis/redis-stack-server
    ports:
      - "6379:6379"
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    depends_on:
      - zookeeper
  prometheus:
    image: prom/prometheus:v2.26.1
    ports:
      - 9090:9090
    volumes:
      - prometheus-data:/prometheus
      - ./prometheus:/etc/prometheus
    command: --config.file=/etc/prometheus/prometheus.yml
    links:
      - taskmanager
      - jobmanager

volumes:
  prometheus-data:

example-kafka-redis.sh

Lines changed: 5 additions & 0 deletions
#!/bin/bash

./samples/gradlew -p ./samples clean :kafka-source-redis-sink:shadowJar
docker cp ./samples/jobs/kafka-source-redis-sink/build/libs/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar

example-random-kafka.sh

Lines changed: 5 additions & 0 deletions
#!/bin/bash

./samples/gradlew -p ./samples clean :random-kafka-sink:shadowJar
docker cp ./samples/jobs/random-kafka-sink/build/libs/random-kafka-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/random-kafka-sink-all-1.0-SNAPSHOT.jar

example-redis-job.sh

Lines changed: 12 additions & 0 deletions
#!/bin/bash

./samples/gradlew -p ./samples clean shadowJar

docker cp ./samples/jobs/random-kafka-sink/build/libs/random-kafka-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/random-kafka-sink-all-1.0-SNAPSHOT.jar

docker cp ./samples/jobs/kafka-source-redis-sink/build/libs/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar

docker cp ./samples/jobs/partitioned-stream/build/libs/partitioned-stream-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/partitioned-stream-all-1.0-SNAPSHOT.jar

example-simulate-failure.sh

Lines changed: 12 additions & 0 deletions
#!/bin/bash

./samples/gradlew -p ./samples clean shadowJar

docker cp ./samples/jobs/random-kafka-sink/build/libs/random-kafka-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/random-kafka-sink-all-1.0-SNAPSHOT.jar

docker cp ./samples/jobs/kafka-source-redis-sink/build/libs/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/kafka-source-redis-sink-all-1.0-SNAPSHOT.jar

docker cp ./samples/jobs/intermitent-failure/build/libs/intermitent-failure-all-1.0-SNAPSHOT.jar jobmanager:/opt/flink/examples/streaming
docker exec -it jobmanager ./bin/flink run -d examples/streaming/intermitent-failure-all-1.0-SNAPSHOT.jar

prometheus/prometheus.yml

Lines changed: 19 additions & 0 deletions
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'flink-taskmanager'
    metrics_path: /
    scrape_interval: 5s
    static_configs:
      - targets:
          - 'taskmanager:9249'
  - job_name: 'flink-jobmanager'
    metrics_path: /
    scrape_interval: 5s
    static_configs:
      - targets:
          - 'jobmanager:9249'
