Set up a multi-zone deployment
This is a more advanced deployment mode for Kuma that allows us to support service meshes running across many zones, including hybrid deployments on both Kubernetes and VMs.
- Control plane: There is one global control plane, and many remote control planes. A global control plane only accepts connections from remote control planes.
- Data plane proxies: The data plane proxies connect to the closest remote control plane in the same zone. Additionally, we need to start an ingress data plane proxy in every zone to enable cross-zone communication between data plane proxies in different zones.
- Service Connectivity: Automatically resolved via the built-in DNS resolver that ships with Kuma. When a service wants to consume another service, it resolves the DNS address of the desired service with Kuma, and Kuma responds with a Virtual IP address that corresponds to that service in the Kuma service domain (see the sketch after this list).
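As a minimal sketch of that resolution from inside a mesh-enabled pod; the service name matches the echo-server example used later on this page, and the returned address is illustrative, since the actual Virtual IP is assigned by Kuma:
# Resolve a .mesh name from inside a mesh-enabled pod (illustrative output):
nslookup echo-server_echo-example_svc_1010.mesh
# Name:    echo-server_echo-example_svc_1010.mesh
# Address: 240.0.0.1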
We can support multiple isolated service meshes thanks to Kuma’s multi-tenancy support, and workloads from both Kubernetes and any other supported Universal environment can participate in the Service Mesh across different regions, clouds, and datacenters, without compromising ease of use and while still allowing for end-to-end service connectivity.
When running in multi-zone mode, we introduce the notion of global and remote control planes for Kuma:
- Global: this control plane will be used to configure the global Service Mesh policies that we want to apply to our data plane proxies. Data plane proxies cannot connect directly to a global control plane, but can connect to remote control planes that are deployed on each underlying zone that we want to include as part of the Service Mesh (which can be a Kubernetes cluster, or VM based). Only one deployment of the global control plane is required, and it can be scaled horizontally.
- Remote: we are going to have as many remote control planes as the number of underlying Kubernetes or VM zones that we want to include in a Kuma mesh. Remote control planes accept connections from data plane proxies that are started in the same underlying zone, and they connect to the global control plane to fetch their service mesh policies. Remote control plane policy APIs are read-only: Service Mesh policies cannot be configured on them directly. They can be scaled horizontally within their zone.
In this deployment, a Kuma cluster is made of one global control plane and as many remote control planes as the number of zones that we want to support:
- Zone: A zone identifies a Kubernetes cluster, a VPC, or any other cluster that we want to include in a Kuma service mesh.
In a multi-zone deployment mode, services will be running on multiple platforms, clouds, or Kubernetes clusters (which are identified as zones in Kuma). While all of them will be part of a Kuma mesh by connecting their data plane proxies to the remote control plane in the same zone, implementing service-to-service connectivity would be tricky, since a source service may not know where a destination service is hosted (for instance, in another zone).
To implement easy service connectivity, Kuma ships with:
- DNS Resolver: Kuma provides an out-of-the-box DNS server on every remote control plane that will be used to resolve service addresses when establishing any service-to-service communication. It scales horizontally as we scale the remote control plane.
- Ingress Data Plane: Kuma provides an out-of-the-box ingress data plane proxy mode that will be used to enable traffic to enter a zone from another zone. It can be scaled horizontally. Each zone must have an ingress data plane deployed.
The ingress data plane proxy is specific to internal communication within a mesh and is not to be considered an API gateway. API gateways are supported via Kuma’s gateway mode, which can be deployed in addition to ingress data plane proxies.
The global control plane and the remote control planes communicate with each other via xDS in order to synchronize the resources that are being created to configure Kuma, like policies.
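For example, a TrafficPermission policy applied on the global control plane is synchronized down to every connected remote control plane. A minimal sketch on Kubernetes, using the service tag format that appears elsewhere on this page (the allow-all policy is purely illustrative):
# Applied against the global control plane; Kuma synchronizes the policy
# to every connected remote control plane:
cat <<EOF | kubectl apply -f -
apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  name: allow-all
spec:
  sources:
    - match:
        service: '*'
  destinations:
    - match:
        service: '*'
EOF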
For Kubernetes: The global control plane on Kubernetes must reside on its own Kubernetes cluster, in order to keep the CRDs separate from the ones that the remote control planes will create during the synchronization process.
In order to deploy Kuma in a multi-zone deployment, we must start a global control plane and as many remote control planes as the number of zones that we want to support.
Global control plane
First we start the global control plane and configure the remote control plane connectivity. On Kubernetes, run the global control plane by setting the controlPlane.mode value to global when installing the chart. This can be done on the command line, or in a provided values file:
helm install --version 0.5.7 kuma --namespace kuma-system --set controlPlane.mode=global kuma/kuma
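Equivalently, the same setting can go into a values file passed to Helm (a sketch; the file name is arbitrary):
# values.yaml -- equivalent to --set controlPlane.mode=global
controlPlane:
  mode: global

helm install --version 0.5.7 kuma --namespace kuma-system --values values.yaml kuma/kuma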
Remote control plane
Start the remote control planes in each zone that will be part of the multi-zone Kuma deployment. To install a remote control plane, you need to assign a zone name to each of them and point it to the Global CP:
kumactl install control-plane \
  --mode=remote \
  --zone=<zone name> \
  --ingress-enabled \
  --kds-global-address grpcs://<global-kds-address> | kubectl apply -f -
kumactl install dns | kubectl apply -f -
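As a concrete sketch, an invocation might look like this (the zone name and address are illustrative placeholders; 5685 is the default KDS port):
kumactl install control-plane \
  --mode=remote \
  --zone=us-west \
  --ingress-enabled \
  --kds-global-address grpcs://35.226.196.103:5685 | kubectl apply -f -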
Kuma DNS installation supports several flavors of CoreDNS and kube-dns. We recommend checking the configuration of the Kubernetes cluster after deploying the Kuma remote control plane to ensure everything is as expected.
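For example, on a CoreDNS-based cluster you can verify that kumactl install dns added a stanza for the .mesh domain (a sketch; kube-dns uses a different ConfigMap layout):
kubectl get configmap coredns -n kube-system -o yaml | grep -A 3 mesh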
Verify control plane connectivity
When a remote control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.
You can verify if a remote control plane is connected to the global control plane by inspecting the list of zones in the global control plane GUI (:5681/gui/#/zones) or by using kumactl get zones.
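Illustrative output of the latter, assuming a single connected zone named us-west (the exact columns may vary between Kuma versions):
kumactl get zones
# NAME      AGE
# us-west   1m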
Additionally, if you deployed the remote control plane with Ingress, it should be visible in the Ingress tab of the GUI. Cross-zone communication between services is only available if the Ingress has a public address and public port. Note that on Kubernetes, Kuma automatically tries to pick up the public address and port. Depending on the LB implementation of your Kubernetes provider, you may need to wait a couple of minutes to receive the address.
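One way to watch for that address on Kubernetes is to follow the ingress Service until the load balancer populates it (a sketch; the Service name and namespace depend on how the remote control plane was installed, kuma-ingress in kuma-system is assumed here):
kubectl get service kuma-ingress -n kuma-system -w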
Using the multi-zone deployment
To utilize the multi-zone Kuma deployment, follow the steps below.
To figure out the service names that we can use in the applications for cross-zone communication, we can look at the service tag in the deployed data plane proxies:
kubectl get dataplanes -n echo-example -o yaml | grep service
  service: echo-server_echo-example_svc_1010
On Kubernetes, Kuma uses a transparent proxy. In this mode, kuma-dp listens on port 80 for all the virtual IPs that Kuma DNS assigns to services in the .mesh DNS zone. It also provides an RFC-compatible DNS name, where the underscores in the service name are replaced by dots. Therefore, we have the following ways to consume a service from within the mesh:
<kuma-enabled-pod> curl http://echo-server:1010
<kuma-enabled-pod> curl http://echo-server_echo-example_svc_1010.mesh:80
<kuma-enabled-pod> curl http://echo-server.echo-example.svc.1010.mesh:80
<kuma-enabled-pod> curl http://echo-server_echo-example_svc_1010.mesh
<kuma-enabled-pod> curl http://echo-server.echo-example.svc.1010.mesh
The first method still works, but is limited to endpoints implemented within the same Kuma zone (i.e. the same Kubernetes cluster).
The second and third options allow you to consume a service that is distributed across the Kuma cluster (bound by the same global control plane). For example, there can be an endpoint running in another Kuma zone, in a different datacenter.
Since most HTTP clients (such as curl) default to port 80, the port can be omitted, as in the fourth and fifth options above.
The Kuma DNS service format (e.g. echo-server_kuma-test_svc_1010.mesh) is a composition of the Kubernetes service name (echo-server), the namespace (kuma-test), a fixed string (svc), and the service port (1010). The service is resolvable in the DNS zone .mesh, where the Kuma DNS service is hooked.
Deleting a Zone
To delete a Zone we must first shut down the corresponding Kuma Remote CP instances. As long as the Remote CP is running, this will not be possible, and Kuma will return a validation error like:
zone: unable to delete Zone, Remote CP is still connected, please shut it down first
When the Remote CP is fully disconnected and shut down, the Zone can be deleted. All corresponding resources (like DataplaneInsight) will be deleted automatically as well.
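The deletion itself can then be performed against the global control plane, for example with kumactl (a sketch, assuming a zone named zone-1):
kumactl delete zone zone-1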
In order to disable routing traffic to a specific Zone, we can disable the Zone via the enabled field of the Zone resource:
type: Zone
name: zone-1
spec:
  enabled: true
Changing this value to enabled: false will allow the user to exclude the zone’s Ingress from all other zones, and by doing so prevent traffic from being able to enter the zone. A Zone that has been disabled will show up as “Offline” in the GUI and CLI.
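A minimal sketch of applying that change with kumactl against the global control plane (on Kubernetes, editing the corresponding Zone resource achieves the same effect):
cat <<EOF | kumactl apply -f -
type: Zone
name: zone-1
spec:
  enabled: false
EOF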