Explore Kuma with the Universal demo app
To start learning how Kuma works, you can download and run a simple demo application that consists of two services:
- demo-app: a web application that lets you increment a numeric counter
- redis: the data store for the counter
This guide also introduces some of the tools Kuma provides to help you control and monitor traffic, track resource status, and more.
The demo-app service listens on port 5000. When it starts, it expects to find a zone key in Redis that specifies the name of the datacenter (or cluster) where the Redis instance is running. This name is displayed in the browser.
The zone key is purely static and arbitrary. Different zone values for different Redis instances let you keep track of which Redis instance stores the counter if you manage routes across different zones, clusters, and clouds.
Prerequisites
- Redis installed
- Kuma installed
- Demo app downloaded from GitHub:

  git clone https://github.com/kumahq/kuma-counter-demo.git
To explore traffic metrics with the demo app, you also need to set up Prometheus. See the traffic metrics policy documentation.
Set up
- Run redis as a daemon on port 26379 and set a default zone name:

  redis-server --port 26379 --daemonize yes
  redis-cli -p 26379 set zone local

- Install and start demo-app on the default port 5000:

  npm install --prefix=app/
  npm start --prefix=app/
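To make sure the setup worked, you can read back the zone key we just stored; this is only a sanity check and not a required step of the guide:

# Should print "local", the zone name set above
redis-cli -p 26379 get zone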
Generate tokens
Create a token for Redis and a token for the app (both valid for 30 days):
kumactl generate dataplane-token --tag kuma.io/service=redis --valid-for=720h > kuma-token-redis
kumactl generate dataplane-token --tag kuma.io/service=app --valid-for=720h > kuma-token-app
This action requires authentication unless executed against a control-plane running on localhost.
If kuma-cp is running inside a Docker container, see the Docker authentication docs.
Create a data plane proxy for each service
For Redis:
kuma-dp run \
--cp-address=https://localhost:5678/ \
--dns-enabled=false \
--dataplane-token-file=kuma-token-redis \
--dataplane="
type: Dataplane
mesh: default
name: redis
networking:
  address: 127.0.0.1
  inbound:
  - port: 16379
    servicePort: 26379
    serviceAddress: 127.0.0.1
    tags:
      kuma.io/service: redis
      kuma.io/protocol: tcp
  admin:
    port: 9901"
And for the demo app:
kuma-dp run \
--cp-address=https://localhost:5678/ \
--dns-enabled=false \
--dataplane-token-file=kuma-token-app \
--dataplane="
type: Dataplane
mesh: default
name: app
networking:
  address: 127.0.0.1
  outbound:
  - port: 6379
    tags:
      kuma.io/service: redis
  inbound:
  - port: 15000
    servicePort: 5000
    serviceAddress: 127.0.0.1
    tags:
      kuma.io/service: app
      kuma.io/protocol: http
  admin:
    port: 9902"
Run
Navigate to 127.0.0.1:5000 and increment the counter.
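If you prefer the command line, you can exercise the app with curl. Only the page URL above comes from this guide; the Redis key name used below is a hypothetical assumption about how the demo app stores the counter:

# Load the demo page served by demo-app (same URL as in the browser)
curl -s http://127.0.0.1:5000/ | head -n 5

# Hypothetical check: if the app stores the counter under a key named
# "counter", you could inspect it directly in Redis
redis-cli -p 26379 get counter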
Explore the mesh
You can view the sidecar proxies that are connected to the Kuma control plane:
Kuma ships with a read-only GUI that you can use to retrieve Kuma resources. By default, the GUI listens on the API port and is available at :5681/gui.
You can navigate to 127.0.0.1:5681/meshes/default/dataplanes to see the connected dataplanes.
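You can also list the connected proxies from the command line; the exact output format varies by Kuma version, so treat this as a quick sketch:

# List the dataplanes known to the control plane and their status
kumactl inspect dataplanes

# Or query the HTTP API directly (the same data the GUI reads)
curl -s http://127.0.0.1:5681/meshes/default/dataplanes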
Enable Mutual TLS and Traffic Permissions
By default the network is insecure and not encrypted. We can change this with Kuma by enabling the Mutual TLS policy, which provisions a dynamic Certificate Authority (CA) on the default Mesh resource and automatically assigns TLS certificates to our services (more specifically, to the injected dataplane proxies running alongside the services).
We can enable Mutual TLS with a builtin CA backend by executing:
cat <<EOF | kumactl apply -f -
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
  - name: ca-1
    type: builtin
EOF
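To verify the change, you can list the meshes; the default Mesh should now report mTLS as enabled with the ca-1 backend (the exact columns shown depend on your Kuma version):

# The default mesh should show the builtin backend ca-1 enabled for mTLS
kumactl get meshes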
Once Mutual TLS has been enabled, Kuma no longer allows traffic to flow freely between our services unless we explicitly define a Traffic Permission policy that describes which services can be consumed by which other services. By default, a very permissive traffic permission is created.
For the sake of this demo we will delete it:
kumactl delete traffic-permission allow-all-default
If you try to make requests to the demo application at 127.0.0.1:5000/, you will notice that they no longer work.
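You can reproduce this from the command line; exactly how the failure surfaces (an error page, a timeout) depends on the demo app's error handling, so this is only a sketch:

# With no TrafficPermission in place, traffic between the services is blocked
curl -v --max-time 5 http://127.0.0.1:5000/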
Now let’s add back the default traffic permission:
cat <<EOF | kumactl apply -f -
type: TrafficPermission
name: allow-all-default
mesh: default
sources:
- match:
    kuma.io/service: '*'
destinations:
- match:
    kuma.io/service: '*'
EOF
With the permission back in place, every request we make to the demo application at 127.0.0.1:5000/ not only works again, but is also automatically encrypted and secured.
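You can confirm that the policy is in place again:

# allow-all-default should appear in the list
kumactl get traffic-permissions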
As usual, you can visualize the Mutual TLS configuration and the Traffic Permission policies we have just applied via the GUI, the HTTP API, or kumactl.
Explore Traffic Metrics
One of the most important policies that Kuma provides out of the box is Traffic Metrics.
With Traffic Metrics we can leverage Prometheus and Grafana to provide powerful dashboards that visualize the overall traffic activity of our application and the status of the service mesh.
cat <<EOF | kumactl apply -f -
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
  - name: ca-1
    type: builtin
metrics:
  enabledBackend: prometheus-1
  backends:
  - name: prometheus-1
    type: prometheus
    conf:
      tls:
        mode: disabled
EOF
This will enable the prometheus metrics backend on the default Mesh and automatically collect metrics for all of our traffic.
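With the prometheus backend enabled, each dataplane proxy exposes its metrics over HTTP so that Prometheus can scrape them. Assuming the backend's default settings (port 5670 and path /metrics; both are configurable and may differ in your version), you can check that metrics are being exposed locally:

# Sample the Envoy metrics exposed by a local dataplane proxy
# (5670 and /metrics are the assumed defaults of the prometheus backend)
curl -s http://127.0.0.1:5670/metrics | head -n 20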
Increment the counter to generate traffic, and access the dashboard at 127.0.0.1:3000 with the default credentials: admin for both the username and the password.
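If Grafana is running locally as described in the traffic metrics documentation, you can quickly check that it is reachable before opening the dashboards (this uses a standard Grafana API endpoint, nothing Kuma-specific):

# Should return a small JSON document reporting "database": "ok"
curl -s http://127.0.0.1:3000/api/health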
Kuma automatically installs three dashboards that are ready to use:
- Kuma Mesh: visualizes the status of the overall Mesh.
- Kuma Dataplane: visualizes metrics for a single dataplane.
- Kuma Service to Service: visualizes traffic metrics for our services.
You can now explore the dashboards and see the metrics being populated over time.
Next steps
- Explore the Policies available to govern and orchestrate your service traffic.
- Read the full documentation to learn about all the capabilities of Kuma.
- Chat with us at the official Kuma Slack for questions or feedback.