This guide requires the following versions or above:

- Docker 20.10.14
- Git 2.32.0
- npm 8.3.1

Run docker swarm init, if you haven't already done so, to initiate the Swarm orchestrator.
Create a stack.yml file and add Apache Kafka with the following topics:

- task-commands: commands to be processed by the Todo service
- task-replies: replies from the Todo service after processing each command
- task-snapshots: the latest snapshot of each Task

Verify that the topics have been created by checking the example_init-kafka service logs.
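A sketch of those steps; the stack name example is an inference from the example_-prefixed service names used later in this guide:

```
# Deploy (or re-deploy) the stack defined in stack.yml under the name "example"
docker stack deploy -c stack.yml example

# Inspect the init container's logs to confirm the topics were created
docker service logs example_init-kafka
```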
The Todo service is built with Spring Boot + Kafka Streams to process commands and generate relevant output. This Todo service can deliver near real-time updates when a Task is created, renamed, or deleted, and produces a message to the Kafka task-snapshots topic with the updated value.

Using cleanup-policy: compact for the task-snapshots topic causes the topic to behave more like a table, where only the most recent message for each distinct message key is retained. Snapshot messages are produced by the Todo service, setting the Kafka message key to the Task identifier to retain all the distinct Tasks. When a Task is deleted, the Todo service produces a tombstone message (a null value) to the task-snapshots topic, causing that Task identifier to no longer be retained in Kafka.
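For illustration only, a compacted topic can also be created by hand with the standard Kafka CLI; the broker address, partition count, and replication factor below are assumptions, since the guide's stack.yml provisions the topics for you.

```
# Create the task-snapshots topic with log compaction enabled,
# so only the latest message per Task identifier is kept
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic task-snapshots \
  --partitions 1 --replication-factor 1 \
  --config cleanup.policy=compact
```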
Command messages are received from the task-commands topic, and correlated replies are sent to the task-replies topic with the same zilla:correlation-id value that was received with the inbound command message.

The ValidateCommand Kafka Streams processor implements optimistic locking by ensuring that conditional requests using if-match are allowed to proceed only if the latest etag for the Task matches; otherwise the command is rejected.
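Once the Tasks API is exposed through Zilla later in this guide, a conditional update might look like the hedged sketch below; the port, path, JSON body, and placeholder task id and etag values are assumptions.

```
# Rename a Task only if the supplied etag still matches the latest snapshot;
# a stale etag causes the command to be rejected (e.g. 412 Precondition Failed)
curl -v -X PUT http://localhost:8080/tasks/<task-id> \
  -H "Content-Type: application/json" \
  -H "If-Match: <etag>" \
  -d '{"name":"Renamed task"}'
```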
Build the todo-service:latest image. Open the stack.yml file and add the Todo service into the stack. Then deploy todo-service to your existing stack.
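A minimal sketch of the deployment step, again assuming the stack is named example:

```
# Re-deploy the stack; Swarm adds the new todo-service alongside the existing services
docker stack deploy -c stack.yml example

# Confirm the todo-service replica is running
docker service ls
```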
Next, Zilla needs to be configured to send commands to the Todo service via Kafka and to retrieve task queries from Kafka as needed by the Tasks UX.

Zilla configuration is composed of bindings, each representing a step in the pipeline as inbound network traffic is decoded and transformed, then encoded into outbound network traffic as needed. The following bindings are needed to configure Zilla for the Tasks API to interact with the Todo Kafka Streams service via Kafka topics. They are shown in the zilla.json below. To understand each binding type in more detail, please visit Zilla Runtime Configuration.
- tcp_server0 routes to http_server0
- http_server0 routes to http_kafka_proxy0
- http_kafka_proxy0 routes POST, PUT and DELETE Tasks API requests to the task-commands topic, with task-replies as the reply-to topic, via kafka_cache_client0, and routes GET Tasks API requests to the task-snapshots topic via kafka_cache_client0
- kafka_cache_client0 routes to kafka_cache_server0
- kafka_cache_server0 routes to kafka_client0
- kafka_client0 routes to tcp_client0
- tcp_client0 connects to the Kafka brokers
Copy the zilla.json shown below into your local zilla.json file. Next, add the zilla service to the docker stack, mounting the zilla.json configuration, and deploy the zilla service to the existing stack.

Verify the Tasks API using curl, as shown below.
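A hedged sketch of the verification, assuming Zilla listens on port 8080 and that a Task is represented as JSON with a name field:

```
# Create a new Task
curl -v -X POST http://localhost:8080/tasks \
  -H "Content-Type: application/json" \
  -d '{"name":"Write tutorial"}'

# List all Tasks
curl -v http://localhost:8080/tasks
```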
The GET /tasks API works as expected, but you would still need to poll the server to detect changes made by other clients. Let's enhance the zilla.json configuration to use Server-Sent Events (SSE) instead, to support both initial delivery of the current list of all Todo Tasks and incremental updates when a Task is created, renamed, or deleted by the same client or a different client.

Add the following to zilla.json inside the bindings property of the configuration.
Modify the http_server0 binding in zilla.json to route the GET /tasks API to exit to sse_server0 instead of http_kafka_proxy0. Then update the zilla service and force a restart.
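Assuming the stack name example as before, the restart can be forced with:

```
# Force a rolling restart so the updated zilla.json is picked up
docker service update --force example_zilla
```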
Verify the updated Tasks API using curl, as shown below.
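A sketch of the check, with the same port assumption as before; the -N flag keeps curl from buffering, so events are printed as they arrive:

```
# Subscribe to the Tasks stream over Server-Sent Events
curl -v -N -H "Accept: text/event-stream" http://localhost:8080/tasks
```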
The GET /tasks API now delivers a continuous stream of tasks, starting with the initial tasks as expected. Keep the GET /tasks stream open as shown above.
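From a second terminal, create a new Task; the sketch below reuses the earlier port and JSON assumptions.

```
# Create another Task while the SSE stream is still open
curl -v -X POST http://localhost:8080/tasks \
  -H "Content-Type: application/json" \
  -d '{"name":"Watch it arrive over SSE"}'
```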
Notice that the GET /tasks stream automatically receives the update when the new task is created.

Next, we build the Todo app, which is implemented using the Vue.js framework. Run the commands below in the root directory of the Todo app.
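Assuming the standard Vue CLI scripts, the build looks like this:

```
# Install dependencies and produce a production build in ./dist
npm install
npm run build
```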
This will create a dist folder with the necessary artifacts. Now you can configure Zilla to host the app so that both the API and the app can be served under the same hostname and port. Add the following to the bindings property of the configuration.

You also need to mount the dist folder into the Zilla container. Open the stack.yml file and add - ./todo-app/dist:/app/dist:ro to the zilla service volumes.

Make sure the zilla.json config changes got applied after restarting the Zilla service. Check the example_zilla service log.
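A sketch of that check, reusing the forced-restart command from earlier:

```
# Restart Zilla so the new configuration and mounted app are picked up
docker service update --force example_zilla

# Review the Zilla service log to confirm the configuration was applied
docker service logs --tail 100 example_zilla
```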
You do not need to Login, as the Tasks API is initially available to anonymous clients.