This section gives an overview of a Kuma service mesh and covers how to start integrating your services into the mesh.
A Kuma mesh consists of two main components:
- The data plane consists of the proxies that run alongside your services. All of your mesh traffic flows through these proxies on its way to its destination. Kuma uses Envoy as its data plane proxy.
- The control plane configures the data plane proxies for handling mesh traffic. However, the control plane runs independently of the data plane and does not interact with mesh traffic directly. Kuma users create policies that the Kuma control plane processes to generate configuration for the data plane proxies.
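As an illustration of this policy model, a minimal traffic-permission policy in Kubernetes mode might look like the following sketch (the allow-all rule and the resource name are illustrative, not from this document):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all-traffic
spec:
  # Allow any service in the mesh to talk to any other service.
  sources:
    - match:
        kuma.io/service: '*'
  destinations:
    - match:
        kuma.io/service: '*'
```

The control plane translates resources like this into Envoy configuration for the affected data plane proxies.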
Multi-mesh: one Kuma control plane deployment can control multiple, isolated data planes using the Mesh resource. Compared to running one control plane per data plane, this option lowers the complexity and operational cost of supporting multiple meshes.
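For example, in Kubernetes mode an additional isolated mesh can be created by applying a Mesh resource (the mesh name here is illustrative):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: staging
```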
This is a high-level visualization of a Kuma service mesh:
Communication happens between the control and data plane as well as between the services and their data plane proxies:
Kuma implements the Envoy xDS APIs so that data plane proxies can retrieve their configuration from the control plane.
A minimal Kuma deployment involves one or more instances of the control plane executable, kuma-cp.
For each service in your mesh, you’ll have one or more instances of the data plane proxy executable, kuma-dp.
Users interact with the control plane via the command-line tool kumactl.
The Kuma control plane can run in one of two modes:
kubernetes: users configure Kuma via Kubernetes resources and Kuma uses the Kubernetes API server as the data store.
universal: users configure Kuma via the Kuma API server and Kuma resources, with PostgreSQL as the data store. This mode works for any infrastructure other than Kubernetes, though you can run a universal control plane on top of a Kubernetes cluster.
When running in Kubernetes mode, Kuma stores all of its state and configuration on the underlying Kubernetes API Server.
The only step necessary to join your Kubernetes services to the mesh is enabling sidecar injection.
For Pods configured with sidecar injection, Kuma adds the kuma-dp sidecar container.
The kuma.io/sidecar-injection label controls this injection.
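For example, labeling a Namespace enables injection for all Pods created in it (the namespace name here is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kuma-example
  labels:
    # Kuma injects the kuma-dp sidecar into Pods in this namespace
    kuma.io/sidecar-injection: enabled
```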
Injection: the data plane proxy documentation covers sidecar injection in more detail.
Annotations: see the complete list of Kubernetes annotations that Kuma supports.
Policies with Kubernetes: when using Kuma in Kubernetes mode, you create policies using Kubernetes resources, for example with kubectl apply.
Pods with a Service
For all Pods associated with a Kubernetes Service resource, the Kuma control plane automatically generates a kuma.io/service: <name>_<namespace>_svc_<port> tag, where <name>, <namespace>, and <port> come from the Service. For example, the following resources generate the tag kuma.io/service: echo-server_kuma-test_svc_80:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo-server
  namespace: kuma-test
  annotations:
    80.service.kuma.io/protocol: http
spec:
  ports:
    - port: 80
      name: http
  selector:
    app: echo-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-server
  namespace: kuma-test
  labels:
    app: echo-server
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: echo-server
  template:
    metadata:
      labels:
        app: echo-server
    spec:
      containers:
        - name: echo-server
          image: nginx
          ports:
            - containerPort: 80
```
Pods without a Service
In some cases Pods don’t belong to a corresponding Service resource, typically because they don’t expose any consumable services. Jobs are a good example of this. In this case, the Kuma control plane generates a kuma.io/service tag with the format <name>_<namespace>_svc, where <name> and <namespace> come from the Pod resource itself. Pods created by the following example Deployment have a tag of the form echo-client_<namespace>_svc:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo-client
  labels:
    app: echo-client
spec:
  selector:
    matchLabels:
      app: echo-client
  template:
    metadata:
      labels:
        app: echo-client
    spec:
      containers:
        - name: alpine
          image: "alpine"
          imagePullPolicy: IfNotPresent
          command: ["sh", "-c", "tail -f /dev/null"]
```
When running in Universal mode, Kuma requires a PostgreSQL database to store its state, replacing the Kubernetes API server. With Universal, you use kumactl to interact directly with the Kuma API server to manage policies.
Read the docs about the PostgreSQL backend for more details.
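As a sketch, the store section of the kuma-cp configuration for a PostgreSQL backend might look like the following (the host and credentials are placeholders, not values from this document):

```yaml
environment: universal
store:
  type: postgres
  postgres:
    host: postgres.example.com
    port: 5432
    user: kuma
    password: kuma
    dbName: kuma
```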