Build the Todo Application

In this getting-started exercise, you will create a simple Todo application using the CQRS design pattern, backed by Apache Kafka with Zilla as the event-driven API gateway. Zilla lets you focus on your application and business logic instead of writing large amounts of plumbing code, and this demo helps to ease the complexity of CQRS. This tutorial gives a basic introduction to Zilla and demonstrates some of its core capabilities.
This Todo Application tutorial has the following goals:
  • Provide a list of Todo tasks that is shared by all clients
  • Support optimistic locking with conflict detection when attempting to update a Todo task
  • Deliver updates in near real-time when a Todo task is created, modified, or deleted
  • Demonstrate a user interface driving the Tasks API
  • Support scaling Todo task reads and writes


Prerequisites

  • Docker 20.10.14
  • Git 2.32.0
  • npm 8.3.1 and above

Step 1: Kafka

In this step, you will set up basic infrastructure components for your event-driven architecture.
Run docker swarm init, if you haven't already done so, to initialize the Swarm orchestrator.
Let's create stack.yml and add Apache Kafka.
version: "3"

networks:
  net0:
    driver: overlay

services:
  kafka:
    image: "bitnami/kafka:3.1.0"
    hostname: ""
    networks:
      - net0
    environment:
      - KAFKA_CFG_LOG_DIRS=/tmp/logs
      - KAFKA_CFG_PROCESS_ROLES=broker,controller
    command:
      - 'sh'
      - '-c'
      - '/opt/bitnami/scripts/kafka/ && format --config "$${KAFKA_CONF_FILE}" --cluster-id "lkorDA4qT6W1K_dk0LHvtg" --ignore-formatted && /opt/bitnami/scripts/kafka/' # KRaft-specific initialization
    ports:
      - "9092:9092"

  init-kafka:
    image: "bitnami/kafka:3"
    networks:
      - net0
    deploy:
      restart_policy:
        condition: none
        max_attempts: 0
    depends_on:
      - kafka
    entrypoint: [ '/bin/sh', '-c' ]
    command: |
      # blocks until Kafka becomes reachable
      /opt/bitnami/kafka/bin/ --bootstrap-server --list --topic 'task-.*'

      echo '## Creating the Kafka topics'
      /opt/bitnami/kafka/bin/ --bootstrap-server --create --if-not-exists --topic task-commands --partitions 1
      /opt/bitnami/kafka/bin/ --bootstrap-server --create --if-not-exists --topic task-replies --partitions 1
      /opt/bitnami/kafka/bin/ --bootstrap-server --create --if-not-exists --topic task-snapshots --config cleanup.policy=compact --partitions 1
      echo ''
      echo '## Created the Kafka topics'
      /opt/bitnami/kafka/bin/ --bootstrap-server --list --topic 'task-.*'
Now let's run
docker stack deploy -c stack.yml example --resolve-image never
to spin up Apache Kafka and create the following topics.
  • task-commands: queues commands to be processed by the Todo service
  • task-replies: queues the response from the Todo service after processing each command
  • task-snapshots: captures the latest snapshot of each task entity
Now verify that Kafka has fully started and that the topics have been successfully created.
docker service logs example_init-kafka --follow --raw
Make sure you see this output at the end of the example_init-kafka service logs.
## Creating the Kafka topics
Created topic task-commands.
Created topic task-replies.
Created topic task-snapshots.
## Created the Kafka topics

Step 2: Todo Service

Next, you will build a Todo service implemented with Spring Boot and Kafka Streams to process commands and produce the corresponding output. The Todo service delivers near real-time updates when a Task is created, renamed, or deleted by producing a message with the updated value to the Kafka task-snapshots topic.
Combining this with cleanup.policy=compact for the task-snapshots topic causes the topic to behave more like a table, where only the most recent message for each distinct message key is retained.
This approach is used as the source of truth for the current state of our Todo service, setting the Kafka message key to the Task identifier to retain all the distinct Tasks.
When a Task is deleted, you will produce a tombstone message (null value) to the task-snapshots topic causing that Task identifier to no longer be retained in Kafka.
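The compaction and tombstone behavior described above can be sketched in Python. This is a simplified model of what the broker's log compaction converges to, not its actual implementation:

```python
# Sketch: a compacted topic retains only the latest value per key,
# and a tombstone (None value) removes a key entirely.
def compact(messages):
    """messages: iterable of (key, value) pairs; a value of None is a tombstone."""
    snapshot = {}
    for key, value in messages:
        if value is None:
            snapshot.pop(key, None)   # tombstone: task no longer retained
        else:
            snapshot[key] = value     # newer snapshot replaces the older one
    return snapshot

log = [
    ("task-1", {"name": "Read the docs"}),
    ("task-2", {"name": "Join Slack"}),
    ("task-1", {"name": "Read the Zilla docs"}),  # rename
    ("task-2", None),                             # delete -> tombstone
]
print(compact(log))  # {'task-1': {'name': 'Read the Zilla docs'}}
```

Because the message key is the Task identifier, reading the compacted topic from the beginning always rebuilds the current set of Tasks.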
Commands arrive at the Tasks service via the task-commands topic and correlated replies are sent to the task-replies topic with the same zilla:correlation-id value that was received with the inbound command message.
Implementing the Todo domain using these topics gives us the following Kafka Streams topology.
The ValidateCommand Kafka Streams processor implements optimistic locking by ensuring that conditional requests using if-match are allowed to proceed only if the latest etag for the Task matches, otherwise the command is rejected.
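The check performed by ValidateCommand can be sketched as follows. Field names here are hypothetical (the real processor is part of the Kafka Streams topology in Java); only the comparison logic is the point:

```python
# Sketch of optimistic locking: a conditional command carries the etag it
# last saw (if-match) and may proceed only if that etag still matches the
# latest snapshot; otherwise the command is rejected.
def validate_command(command, snapshots):
    latest = snapshots.get(command["task_id"])
    if_match = command.get("if_match")
    if if_match is not None and (latest is None or latest["etag"] != if_match):
        return "rejected"   # another client modified or deleted the task first
    return "accepted"       # unconditional commands always proceed

snapshots = {"task-1": {"etag": "v2", "name": "Read the docs"}}
print(validate_command({"task_id": "task-1", "if_match": "v2"}, snapshots))  # accepted
print(validate_command({"task_id": "task-1", "if_match": "v1"}, snapshots))  # rejected
```

A rejected command produces a failure reply on task-replies instead of mutating the snapshot, which is how concurrent-update conflicts are surfaced to clients.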
Let's check out and build the service by running the commands below.
git clone && \
cd todo-service && \
./mvnw clean install && \
cd ..
This will check out the code and build the todo-service:latest image.
Open stack.yml file and add the Todo service into the stack:
  todo-service:
    image: "todo-service:latest"
    networks:
      - net0
    environment:
      TASK_COMMANDS_TOPIC: task-commands
      TASK_SNAPSHOTS_TOPIC: task-snapshots
      TASK_REPLIES_TOPIC: task-replies
Run the command below to deploy the todo-service to your existing stack.
docker stack deploy -c stack.yml example --resolve-image never
Creating service example_todo-service
Updating service example_kafka (id: st4hq1bwjsom5r0jxnc6i9rgr)
Now you have a running Todo service that can process incoming commands, send responses, and maintain a snapshot of each task.
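The service's command handling can be sketched as follows. This is a simplified Python model with hypothetical command type names and illustrative reply statuses; the real service is a Kafka Streams topology in Java:

```python
# Simplified model of the Todo service's command handling (hypothetical
# command names): each command updates the task-snapshots state and
# produces a correlated reply for the task-replies topic.
def process_command(command, snapshots):
    """Apply a command to the snapshot store; return an illustrative reply status."""
    task_id = command["task_id"]
    if command["type"] == "CreateTaskCommand":
        snapshots[task_id] = {"name": command["name"]}
        return "201 Created"
    if command["type"] == "RenameTaskCommand":
        snapshots[task_id] = {"name": command["name"]}
        return "204 No Content"
    if command["type"] == "DeleteTaskCommand":
        snapshots[task_id] = None  # tombstone: removed after compaction
        return "204 No Content"
    return "400 Bad Request"

snapshots = {}
process_command({"type": "CreateTaskCommand", "task_id": "t1", "name": "Read the docs"}, snapshots)
process_command({"type": "DeleteTaskCommand", "task_id": "t1"}, snapshots)
print(snapshots)  # {'t1': None}
```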

Step 3: Zilla

Now for the most exciting and most challenging part of implementing the CQRS design pattern: building and deploying an API that responds in real time.
In a traditional approach, you would have to build multiple layers of service implementation to expose an API based on Kafka Streams.
However, Zilla is designed to solve these architectural challenges, requiring only declarative configuration as shown below.
Let's design the Tasks API. You need to define a Tasks API to send commands to the Todo service via Kafka and retrieve task queries from Kafka as needed by the Tasks UX.

Configure Zilla

The Zilla engine configuration defines a flow of named bindings representing each step in the pipeline as inbound network traffic is decoded and transformed then encoded into outbound network traffic as needed.
Let's configure Zilla for the Tasks API to interact with the Todo Kafka Streams service via Kafka topics.
You will add the following bindings to support the Tasks API, as shown in zilla.json below. To understand each binding type in more detail, please visit Zilla Runtime Configuration.
  • tcp_server0: listens on port 8080, routes to http_server0
  • http_server0: decodes HTTP protocol, routes the Tasks API to http_kafka_proxy0
  • http_kafka_proxy0: transforms HTTP to Kafka; routes POST, PUT and DELETE Tasks API requests to the task-commands topic with task-replies as the reply-to topic, and GET Tasks API requests to the task-snapshots topic, via kafka_cache_client0
  • kafka_cache_client0: reads from the local Kafka topic message cache, routes to kafka_cache_server0
  • kafka_cache_server0: writes to the local Kafka topic message cache as new messages arrive from Kafka, routes to kafka_client0
  • kafka_client0: encodes Kafka protocol, routes to tcp_client0
  • tcp_client0: connects to the Kafka brokers
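Purely as an illustration, the http_kafka_proxy0 routing described above might look roughly like the following fragment of zilla.json. The option names here are assumptions for sketching purposes; use the generated configuration rather than this fragment:

```json
{
    "http_kafka_proxy0":
    {
        "type": "http-kafka",
        "kind": "proxy",
        "routes":
        [
            {
                "when": [ { "method": "POST", "path": "/tasks" } ],
                "exit": "kafka_cache_client0",
                "with":
                {
                    "capability": "produce",
                    "topic": "task-commands",
                    "reply-to": "task-replies"
                }
            },
            {
                "when": [ { "method": "GET", "path": "/tasks" } ],
                "exit": "kafka_cache_client0",
                "with":
                {
                    "capability": "fetch",
                    "topic": "task-snapshots"
                }
            }
        ]
    }
}
```

PUT and DELETE requests would be routed with produce routes in the same style as the POST route shown.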
Using Zilla Studio, select the Build the Todo App template from the Load Template dropdown and then press Generate Config to download the corresponding zilla.json configuration file.
Zilla Studio
Alternatively, copy the contents of zilla.json shown below to your local zilla.json file.
Now let's add the zilla service to the docker stack, mounting the zilla.json configuration.
  zilla:
    image: ""
    hostname: "zilla"
    command: ["start", "-v", "-e"]
    volumes:
      - ./zilla.json:/zilla.json:ro
    networks:
      - net0
    ports:
      - "8080:8080"
Run the command below to deploy the zilla service to the existing stack.
docker stack deploy -c stack.yml example --resolve-image never
Updating service example_kafka (id: st4hq1bwjsom5r0jxnc6i9rgr)
Updating service example_todo-service (id: ojbj2kbuft22egy854xqv8yo8)
Creating service example_zilla
Make sure that Zilla has fully started by checking the container logs; you should see started at the end of the log.
Let's verify the Tasks API using curl as shown below.
curl http://localhost:8080/tasks
data:{"name":"Read the docs"}
As you can see, the GET /tasks API delivers a continuous stream of tasks, starting with the initial tasks as expected.
Now create a new Todo task while keeping the GET /tasks stream open as shown above.
curl -X POST http://localhost:8080/tasks \
-H "Idempotency-Key: 5C1A90A3-AEB5-496F-BA00-42D1D805B21B" \
-H "Content-Type: application/json" \
-d "{\"name\":\"Join the Slack community\"}"
The GET /tasks stream automatically receives the update when the new task is created.
data:{"name":"Join the Slack community"}
Each new update arrives automatically, even when changes are made by other clients.

Step 4: Web App

Next, you will build the Todo app, which is implemented using the Vue.js framework. Run the commands below in the root directory.
git clone && \
cd todo-app && \
npm install && \
npm run build && \
cd ..
This will generate a dist folder with the necessary artifacts. Now you can configure Zilla to host the app so that both the API and the app are served under the same hostname and port.
First, add the http_filesystem_proxy0 and filesystem_server0 bindings to zilla.json giving the following updated configuration.
zilla.json (updated)
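As an illustration, the added bindings might look roughly like this sketch. The option names are assumptions, and the exact generated configuration may differ:

```json
{
    "http_filesystem_proxy0":
    {
        "type": "http-filesystem",
        "kind": "proxy",
        "routes":
        [
            {
                "when": [ { "path": "/{path}" } ],
                "exit": "filesystem_server0",
                "with": { "path": "${params.path}" }
            }
        ]
    },
    "filesystem_server0":
    {
        "type": "filesystem",
        "kind": "server",
        "options": { "location": "/app/dist/" }
    }
}
```

With these bindings, requests that do not match the Tasks API routes are served from the mounted dist folder.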
The last step is to mount the dist folder into the Zilla container.
Open the stack.yml file and add - ./todo-app/dist:/app/dist:ro to the zilla service volumes:
  zilla:
    image: ""
    hostname: "zilla"
    command: ["start", "-v", "-e"]
    volumes:
      - ./zilla.json:/zilla.json:ro
      - ./todo-app/dist:/app/dist:ro
Finally, run
docker stack deploy -c stack.yml example --resolve-image never
Make sure the zilla.json configuration changes were applied after the Zilla service restarted by checking the example_zilla service log.

Step 5: Test Drive!

Open the browser and enter http://localhost:8080/ to see the Todo Application.
Notice that there is no need to log in, as the Tasks API is initially available to anonymous clients.
Next up: Secure the Todo Application Tasks API with JWT access tokens.