Quickstart
Get started with Zilla by trying some of its Kafka proxying and API gateway features. You will see how Zilla can operate as an HTTP Kafka Proxy to expose Kafka topics via REST and SSE endpoints. You can interact with Zilla as an MQTT Kafka Proxy to turn Kafka into an MQTT broker. You can leverage the Zilla gRPC Kafka Proxy to deliver protobuf messages from gRPC clients to gRPC servers through Kafka.
Prerequisites
This Quickstart is hosted at quickstart.aklivity.io, meaning you can interact with it using any clients you prefer. The best way to experience the Zilla features in this Quickstart is by using the Zilla Quickstart Postman Workspace. You will need:
- The Postman desktop client to make MQTT and gRPC requests
- A Postman account
- Fork the Postman collections from the Zilla Quickstart Workspace
Warning
The live version of the quickstart is currently down for maintenance, and any requests to quickstart.aklivity.io won't work. Please use the local deploy with Docker Compose setup and select the `Local Zilla Quickstart` environment in the Postman collection.
HTTP Kafka Proxy
The Zilla HTTP Kafka Proxy lets you configure application-centric REST APIs and SSE streams that unlock Kafka event-driven architectures.
- Open the live http-messages Kafka topic, which will have all the JSON messages you create. You can switch the filter from `live` to `newest` to see all of the latest messages on the topic.
- Fork the HTTP Kafka proxy Postman collection.
- Open the live API stream and scroll to the bottom to view messages fetched from a Kafka topic as a Server-sent Events (SSE) stream. SSE is a text stream over HTTP that directly shows the raw output in a browser tab.
- Use the `Create a new message` request to update and submit the JSON in the `Body` tab. The new object will appear in the SSE stream and on the Kafka topic.
- Get your Kafka message key from the http-messages topic and use the `Get message by key` request to fetch only your message, using your key in the `<key-from-kafka-topic>` path variable.
- To interact more with Zilla, use the `Additional features` in the Postman collection or copy the code samples.
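If you prefer a terminal to Postman, the same REST endpoints can be exercised with curl. This is a minimal sketch assuming the quickstart's HTTP proxy is listening on localhost:7114; the `Idempotency-Key` header shown here is assumed to supply the `${idempotencyKey}` value that the POST route uses as the Kafka message key, and `my-key-123` is just an example key.

```shell
BASE=http://localhost:7114

# Produce a JSON message onto the http-messages topic;
# the Idempotency-Key header becomes the Kafka message key.
curl -s -X POST "$BASE/api/items" \
  -H "Content-Type: application/json" \
  -H "Idempotency-Key: my-key-123" \
  -d '{"greeting":"hello, world"}' || true

# Fetch all messages on the topic, merged into a JSON array
curl -s "$BASE/api/items" || true
```

The `|| true` guards only keep the script from aborting when no server is reachable.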
You can easily configure many common RESTful actions, with the added benefit of built-in streaming via an SSE endpoint. The zilla.yaml config has simple, clear syntax for defining each HTTP endpoint.
Create a new message.
```yaml
north_rest_api_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: POST
          path: /api/items
      exit: north_kafka_cache_client
      with:
        capability: produce
        topic: http-messages
        key: ${idempotencyKey}
```
Fetch all messages on the topic.
```yaml
north_rest_api_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: GET
          path: /api/items
      exit: north_kafka_cache_client
      with:
        capability: fetch
        topic: http-messages
        merge:
          content-type: application/json
```
Fetch one message by its key.
```yaml
north_rest_api_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: GET
          path: /api/items/{id}
      exit: north_kafka_cache_client
      with:
        capability: fetch
        topic: http-messages
        filters:
          - key: ${params.id}
```
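The key filter can be exercised with a plain GET. A sketch, assuming the quickstart on localhost:7114 and an example key (`my-key-123`) copied from the http-messages topic:

```shell
BASE=http://localhost:7114
KEY=my-key-123  # replace with a key copied from the http-messages topic

# Fetch only the message stored under that key
curl -s "$BASE/api/items/$KEY" || true
```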
Update a message based on its key.
```yaml
north_rest_api_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: PUT
          path: /api/items/{id}
      exit: north_kafka_cache_client
      with:
        capability: produce
        topic: http-messages
        key: ${params.id}
```
Delete a message by producing a blank message (a tombstone) for its key.
```yaml
north_rest_api_http_kafka_mapping:
  type: http-kafka
  kind: proxy
  routes:
    - when:
        - method: DELETE
          path: /api/items/{id}
      exit: north_kafka_cache_client
      with:
        capability: produce
        topic: http-messages
        key: ${params.id}
```
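Because this route produces an empty message for the key, a DELETE from any REST client becomes a Kafka tombstone for that key. A sketch, again assuming localhost:7114 and an example key:

```shell
BASE=http://localhost:7114
KEY=my-key-123  # key of the message to delete

# Produces a blank (tombstone) message for the key on http-messages
curl -s -X DELETE "$BASE/api/items/$KEY" || true
```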
Stream all of the messages published on a Kafka topic.
```yaml
north_sse_kafka_mapping:
  type: sse-kafka
  kind: proxy
  routes:
    - when:
        - path: /api/stream
      exit: north_kafka_cache_client
      with:
        topic: http-messages
```
Stream messages for a specific key published on a Kafka topic.
```yaml
north_sse_kafka_mapping:
  type: sse-kafka
  kind: proxy
  routes:
    - when:
        - path: /api/stream/{id}
      exit: north_kafka_cache_client
      with:
        topic: http-messages
        filters:
          - key: ${params.id}
```
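Both SSE routes can also be consumed with curl. A sketch assuming the local quickstart; `--max-time` is used here only to end the demo after a few seconds, since the stream is otherwise unbounded, and `my-key-123` is an example key.

```shell
BASE=http://localhost:7114

# Stream every message on the http-messages topic as Server-sent Events;
# -N disables buffering so events print as they arrive.
curl -N -s --max-time 5 "$BASE/api/stream" || true

# Stream only the messages whose Kafka key matches
curl -N -s --max-time 5 "$BASE/api/stream/my-key-123" || true
```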
Full HTTP Proxy zilla.yaml Config
```yaml
name: http-quickstart
bindings:
  # Proxy service entrypoint
  north_tcp_server:
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port:
        - 7114
    routes:
      - when:
          - port: 7114
        exit: north_http_server
    telemetry:
      metrics:
        - stream.*
  north_http_server:
    type: http
    kind: server
    options:
      versions:
        - h2
        - http/1.1
      access-control:
        policy: cross-origin
    routes:
      - when:
          - headers:
              :path: /api/stream
        exit: north_sse_server
      - when:
          - headers:
              :path: /api/stream/*
        exit: north_sse_server
      - when:
          - headers:
              :path: /api/*
        exit: north_rest_api_http_kafka_mapping
    telemetry:
      metrics:
        - stream.*
        - http.*
  # REST proxy endpoints to a Kafka topic
  north_rest_api_http_kafka_mapping:
    type: http-kafka
    kind: proxy
    routes:
      #region rest_create
      - when:
          - method: POST
            path: /api/items
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: http-messages
          key: ${idempotencyKey}
      #endregion rest_create
      #region rest_update
      - when:
          - method: PUT
            path: /api/items/{id}
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: http-messages
          key: ${params.id}
      #endregion rest_update
      #region rest_delete
      - when:
          - method: DELETE
            path: /api/items/{id}
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: http-messages
          key: ${params.id}
      #endregion rest_delete
      #region rest_retrieve_all
      - when:
          - method: GET
            path: /api/items
        exit: north_kafka_cache_client
        with:
          capability: fetch
          topic: http-messages
          merge:
            content-type: application/json
      #endregion rest_retrieve_all
      #region rest_retrieve_id
      - when:
          - method: GET
            path: /api/items/{id}
        exit: north_kafka_cache_client
        with:
          capability: fetch
          topic: http-messages
          filters:
            - key: ${params.id}
      #endregion rest_retrieve_id
  # SSE Server to Kafka topics
  north_sse_server:
    type: sse
    kind: server
    exit: north_sse_kafka_mapping
  north_sse_kafka_mapping:
    type: sse-kafka
    kind: proxy
    routes:
      #region sse_stream_all
      - when:
          - path: /api/stream
        exit: north_kafka_cache_client
        with:
          topic: http-messages
      #endregion sse_stream_all
      #region sse_stream_id
      - when:
          - path: /api/stream/{id}
        exit: north_kafka_cache_client
        with:
          topic: http-messages
          filters:
            - key: ${params.id}
      #endregion sse_stream_id
  # Kafka sync layer
  north_kafka_cache_client:
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server
  south_kafka_cache_server:
    type: kafka
    kind: cache_server
    options:
      bootstrap:
        - http-messages
    exit: south_kafka_client
  # Connect to local Kafka
  south_kafka_client:
    type: kafka
    kind: client
    options:
      servers:
        - ${{env.KAFKA_BOOTSTRAP_SERVER}}
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
telemetry:
  # Desired metrics to track
  metrics:
    - http.active.requests
    - http.request.size
    - http.response.size
    - stream.opens.sent
    - stream.opens.received
    - stream.closes.sent
    - stream.closes.received
    - stream.errors.sent
    - stream.errors.received
    - stream.active.sent
    - stream.active.received
  exporters:
    # Enable Standard Out logs
    stdout_logs_exporter:
      type: stdout
```
Where to learn more
HTTP Kafka proxy Overview and Features | Simple CRUD API Example | Simple SSE Stream Example | Petstore Demo
MQTT Kafka proxy
The Zilla MQTT Kafka Proxy manages MQTT client connections and messages through Kafka topics.
- Open the live mqtt-messages Kafka topic, which will have all of the MQTT messages sent to the broker. You can switch the filter from `live` to `newest` to see all of the latest messages on the topic.
- Fork the MQTT Kafka proxy Postman collection in the Postman Desktop client.
- Connect to the broker with the `Pub/Sub` request. Send one of the saved messages, or send any message on any MQTT topic. Subscribe to topics in the Topics tab.
- Observe the MQTT broker messages on the Kafka topics, with your message in the `body` and the MQTT topic as the `key`.
- Connect to the broker with the `Simulator Topics` request to subscribe to the simulated messages being published to the broker.
A Zilla MQTT broker is defined using three specific Kafka topics. The messages Kafka topic will have all of the MQTT messages sent to the broker, where the MQTT topic is the Kafka message `key` and the MQTT payload is the Kafka message value. Messages published with the `retain` flag set to true also produce a message on the retained Kafka topic. The sessions Kafka topic is used to manage MQTT client connections.
```yaml
north_mqtt_server:
  type: mqtt
  kind: server
  exit: north_mqtt_kafka_mapping
north_mqtt_kafka_mapping:
  type: mqtt-kafka
  kind: proxy
  options:
    topics:
      sessions: mqtt-sessions
      messages: mqtt-messages
      retained: mqtt-retained
```
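Any MQTT client can talk to this broker. As a sketch, assuming the Mosquitto CLI clients are installed and the local quickstart broker is on localhost:7183 (`zilla/demo` is an arbitrary example topic):

```shell
BROKER=localhost
PORT=7183

# Publish a retained message; it lands on the mqtt-messages Kafka topic
# (and on mqtt-retained, because of the -r retain flag)
mosquitto_pub -h "$BROKER" -p "$PORT" -t "zilla/demo" -m "hello from mqtt" -r || true

# Read one message back; -C 1 exits after one message, -W 5 gives up after five seconds
mosquitto_sub -h "$BROKER" -p "$PORT" -t "zilla/demo" -C 1 -W 5 || true
```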
Full MQTT proxy zilla.yaml Config
```yaml
name: mqtt-quickstart
bindings:
  # Proxy service entrypoint
  north_tcp_server:
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port:
        - 7183
    routes:
      - when:
          - port: 7183
        exit: north_mqtt_server
    telemetry:
      metrics:
        - stream.*
  # MQTT Server to Kafka topics
  #region mqtt_broker_mapping
  north_mqtt_server:
    type: mqtt
    kind: server
    exit: north_mqtt_kafka_mapping
  north_mqtt_kafka_mapping:
    type: mqtt-kafka
    kind: proxy
    options:
      topics:
        sessions: mqtt-sessions
        messages: mqtt-messages
        retained: mqtt-retained
    #endregion mqtt_broker_mapping
    exit: north_kafka_cache_client
    telemetry:
      metrics:
        - stream.*
  # Kafka sync layer
  north_kafka_cache_client:
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server
    telemetry:
      metrics:
        - stream.*
  south_kafka_cache_server:
    type: kafka
    kind: cache_server
    options:
      bootstrap:
        - mqtt-messages
        - mqtt-retained
    exit: south_kafka_client
    telemetry:
      metrics:
        - stream.*
  # Connect to local Kafka
  south_kafka_client:
    type: kafka
    kind: client
    options:
      servers:
        - ${{env.KAFKA_BOOTSTRAP_SERVER}}
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
telemetry:
  # Desired metrics to track
  metrics:
    - stream.opens.sent
    - stream.opens.received
    - stream.closes.sent
    - stream.closes.received
    - stream.errors.sent
    - stream.errors.received
    - stream.active.sent
    - stream.active.received
  exporters:
    # Enable Standard Out logs
    stdout_logs_exporter:
      type: stdout
```
Where to learn more
Overview and Features | Set up an MQTT Kafka broker | Taxi Demo
gRPC Kafka proxy
The Zilla gRPC Kafka Proxy lets you implement gRPC service definitions from protobuf files to produce and consume messages via Kafka topics.
- Open the live grpc-request and grpc-response Kafka topics, which will have all of the service methods' request and response messages respectively. You can switch the filter from `live` to `newest` to see all of the latest messages on the topic.
- Fork the gRPC Kafka proxy Postman collection in the Postman Desktop client.
- Invoke the `GetFeature` service method with the default message.
- Observe the request message payload on the Kafka topic, followed by the response message with the `keys` having the same UUID. The gRPC method routing information is captured in the Kafka message `header` values.
- Try out the additional RPC method types in the Postman collection.
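Outside Postman, grpcurl can call the same service through Zilla. A sketch assuming grpcurl is installed, the local quickstart gRPC entrypoint is on localhost:7151, and the route_guide.proto file from the quickstart is available locally (the server is not assumed to expose reflection); the coordinates are sample values from the RouteGuide example:

```shell
ZILLA=localhost:7151

# Invoke the unary GetFeature method through the gRPC Kafka proxy
grpcurl -plaintext \
  -proto protos/route_guide.proto \
  -d '{"latitude": 409146138, "longitude": -746188906}' \
  "$ZILLA" routeguide.RouteGuide/GetFeature || true
```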
Zilla is routing all RouteGuide protobuf messages from any gRPC client to a gRPC server through Kafka. The zilla.yaml config implements all of the RPC methods from the RouteGuide service protobuf definition.
- GetFeature - A simple unary RPC that produces the client request message and the server's response message.
- ListFeatures - Uses `Server-side` streaming to produce the client request message and stream the list of server response messages.
- RecordRoute - Uses `Client-side` streaming to produce a stream of client request messages and the server's response message.
- RouteChat - Uses `Bidirectional` streaming to stream both the client request messages and the server's response messages.
```yaml
catalogs:
  host_filesystem:
    type: filesystem
    options:
      subjects:
        route_guide:
          path: protos/route_guide.proto
...
north_grpc_server:
  type: grpc
  kind: server
  catalog:
    host_filesystem:
      - subject: route_guide
  exit: north_grpc_kafka_mapping
...
north_grpc_kafka_mapping:
  type: grpc-kafka
  kind: proxy
  routes:
    - when:
        - method: routeguide.RouteGuide/*
      exit: north_kafka_cache_client
      with:
        capability: produce
        topic: grpc-request
        acks: leader_only
        reply-to: grpc-response
west_kafka_grpc_remote_server:
  type: kafka-grpc
  kind: remote_server
  entry: north_kafka_cache_client
  options:
    acks: leader_only
  routes:
    - when:
        - topic: grpc-request
          reply-to: grpc-response
          method: routeguide.RouteGuide/*
      with:
        scheme: http
        authority: ${{env.ROUTE_GUIDE_SERVER_HOST}}:${{env.ROUTE_GUIDE_SERVER_PORT}}
...
west_route_guide_tcp_client:
  type: tcp
  kind: client
  options:
    host: ${{env.ROUTE_GUIDE_SERVER_HOST}}
    port: ${{env.ROUTE_GUIDE_SERVER_PORT}}
```
Full gRPC proxy zilla.yaml Config
```yaml
name: grpc-quickstart
#region route_guide_proto
catalogs:
  host_filesystem:
    type: filesystem
    options:
      subjects:
        route_guide:
          path: protos/route_guide.proto
#endregion route_guide_proto
bindings:
  # Proxy service entrypoint
  north_tcp_server:
    type: tcp
    kind: server
    options:
      host: 0.0.0.0
      port:
        - 7151
    routes:
      - when:
          - port: 7151
        exit: north_grpc_http_server
    telemetry:
      metrics:
        - stream.*
  north_grpc_http_server:
    type: http
    kind: server
    options:
      versions:
        - h2
      access-control:
        policy: cross-origin
    exit: north_grpc_server
    telemetry:
      metrics:
        - stream.*
        - http.*
  # gRPC service definition to Kafka topics
  #region route_guide_service_definition
  north_grpc_server:
    type: grpc
    kind: server
    catalog:
      host_filesystem:
        - subject: route_guide
    exit: north_grpc_kafka_mapping
    #endregion route_guide_service_definition
    telemetry:
      metrics:
        - stream.*
        - grpc.*
  #region route_guide_service_mapping
  north_grpc_kafka_mapping:
    type: grpc-kafka
    kind: proxy
    routes:
      - when:
          - method: routeguide.RouteGuide/*
        exit: north_kafka_cache_client
        with:
          capability: produce
          topic: grpc-request
          acks: leader_only
          reply-to: grpc-response
  #endregion route_guide_service_mapping
  # Kafka sync layer
  north_kafka_cache_client:
    type: kafka
    kind: cache_client
    exit: south_kafka_cache_server
  south_kafka_cache_server:
    type: kafka
    kind: cache_server
    options:
      bootstrap:
        - grpc-request
        - grpc-response
    exit: south_kafka_client
  # Connect to local Kafka
  south_kafka_client:
    type: kafka
    kind: client
    options:
      servers:
        - ${{env.KAFKA_BOOTSTRAP_SERVER}}
    exit: south_tcp_client
  south_tcp_client:
    type: tcp
    kind: client
  # Kafka to external gRPC server
  #region route_guide_interface
  west_kafka_grpc_remote_server:
    type: kafka-grpc
    kind: remote_server
    entry: north_kafka_cache_client
    options:
      acks: leader_only
    routes:
      - when:
          - topic: grpc-request
            reply-to: grpc-response
            method: routeguide.RouteGuide/*
        with:
          scheme: http
          authority: ${{env.ROUTE_GUIDE_SERVER_HOST}}:${{env.ROUTE_GUIDE_SERVER_PORT}}
    #endregion route_guide_interface
    exit: west_route_guide_grpc_client
  # gRPC RouteGuide server config
  west_route_guide_grpc_client:
    type: grpc
    kind: client
    exit: west_route_guide_http_client
  west_route_guide_http_client:
    type: http
    kind: client
    options:
      versions:
        - h2
    exit: west_route_guide_tcp_client
  #region route_guide_server
  west_route_guide_tcp_client:
    type: tcp
    kind: client
    options:
      host: ${{env.ROUTE_GUIDE_SERVER_HOST}}
      port: ${{env.ROUTE_GUIDE_SERVER_PORT}}
  #endregion route_guide_server
telemetry:
  # Desired metrics to track
  metrics:
    - http.active.requests
    - http.request.size
    - http.response.size
    - stream.opens.sent
    - stream.opens.received
    - stream.closes.sent
    - stream.closes.received
    - stream.errors.sent
    - stream.errors.received
    - stream.active.sent
    - stream.active.received
    - grpc.active.requests
    - grpc.requests.per.rpc
    - grpc.responses.per.rpc
  exporters:
    # Enable Standard Out logs
    stdout_logs_exporter:
      type: stdout
```
Where to learn more
gRPC Kafka proxy Overview and Features | Simple gRPC Server | Full Route Guide example
Run the Quickstart locally
Download and run the Zilla quickstart cookbook using this install script. It will start Zilla and everything you need for this guide.

```shell
wget -qO- https://raw.githubusercontent.com/aklivity/zilla-examples/main/startup.sh | sh -
```
Note
Alternatively, download the quickstart and follow the README yourself.
The key components this script will set up:
- Configured Zilla instance
- Kafka instance and topics
- Kafka UI for browsing topics & messages
- gRPC Route Guide server
- MQTT message simulator
Now you can select the `Local Zilla Quickstart` environment from the Postman environments dropdown for the collections to work with the Zilla instance running on `localhost`.
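To confirm the stack came up, you can probe the three local entrypoints before opening Postman. A sketch using bash's `/dev/tcp` redirection, with the ports taken from the configs above (7114 HTTP, 7183 MQTT, 7151 gRPC); under a plain POSIX sh the redirection simply fails and the port reports closed.

```shell
# Check that each Zilla entrypoint is accepting TCP connections
for port in 7114 7183 7151; do
  if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```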