Careful!
You are browsing documentation for a version of Kuma that is not the latest release.
Kubernetes
You can find instructions to install on Kubernetes for single-zone or multi-zone deployments. This page covers special steps for some Kubernetes distributions and versions, plus some troubleshooting help.
Helm
Adding the Kuma charts repository
To use Kuma with Helm charts, add the Kuma charts repository locally:
helm repo add kuma https://kumahq.github.io/charts
You can fetch subsequent updates by running helm repo update.
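With the repository added, a typical install runs the chart into its own namespace. A minimal sketch (the kuma-system namespace and the kuma release name are common conventions, not requirements):

```sh
# Refresh the chart index, then install the control plane
# into a dedicated namespace.
helm repo update
helm install kuma kuma/kuma \
  --namespace kuma-system \
  --create-namespace
```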
Helm config
You can find a full reference of the Helm configuration. You can also set any control plane configuration option by using the controlPlane.envVars prefix. Find detailed explanations in the page: control plane configuration.
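As a sketch of how the prefix works, each key under controlPlane.envVars in your values file becomes an environment variable on the control plane (the variable shown here is the mesh owner reference toggle described in the Argo CD section below):

```yaml
# values.yaml -- every key under controlPlane.envVars is passed
# to the control plane deployment as an environment variable.
controlPlane:
  envVars:
    KUMA_RUNTIME_KUBERNETES_SKIP_MESH_OWNER_REFERENCE: "true"
```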
Argo CD
Kuma zones require a certificate to verify a connection between the control plane and a data plane proxy.
The Kuma Helm chart autogenerates a self-signed certificate if one isn't explicitly set.
Argo CD uses helm template to compare and apply Kubernetes YAML. helm template doesn't execute the chart logic that checks whether a certificate is already present, so the certificate is replaced on each Argo CD redeployment.
The solution to this problem is to explicitly set the certificates.
See “Data plane proxy to control plane communication” to learn how to preconfigure Kuma with certificates.
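A sketch of what an explicit configuration can look like, assuming you have pre-created a Secret with the control plane TLS material (the controlPlane.tls.general value names follow the chart's values layout; verify them against the Helm values reference for your version):

```yaml
# values.yaml -- point the chart at an existing certificate Secret
# instead of letting each render autogenerate a new one.
controlPlane:
  tls:
    general:
      secretName: kuma-tls-general    # hypothetical pre-created Secret
      caBundle: "<base64-encoded CA>" # CA data plane proxies use to verify the control plane
```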
If you use Argo Rollouts for blue-green deployments, configure the control plane with KUMA_RUNTIME_KUBERNETES_INJECTOR_IGNORED_SERVICE_SELECTOR_LABELS set to rollouts-pod-template-hash.
This enables traffic shifting between the active and preview Services without traffic interruption.
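Using the controlPlane.envVars prefix described above, this can be sketched in a values file as:

```yaml
# values.yaml -- ignore the Argo Rollouts hash label when the
# injector matches Services to data plane proxies.
controlPlane:
  envVars:
    KUMA_RUNTIME_KUBERNETES_INJECTOR_IGNORED_SERVICE_SELECTOR_LABELS: "rollouts-pod-template-hash"
```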
If you use policies inside Argo-managed entities, you will want to work around argoproj/argo-cd#4764.
To do so, disable the mesh owner reference by setting KUMA_RUNTIME_KUBERNETES_SKIP_MESH_OWNER_REFERENCE=true in your control plane configuration.
Note that with this setting enabled, deleting a mesh will no longer delete the resources attached to it.
Sidecars
Check the notes on DP lifecycle for Kubernetes for important considerations about sidecars with Kuma.
CNI
On Kubernetes there are two ways to redirect traffic to the sidecar:
- Init containers, which need to run with elevated privileges.
- CNI, which requires a little extra setup.
To use the CNI, follow the detailed instructions to configure the Kuma CNI.
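If you install with Helm, the chart exposes a toggle for the CNI. A sketch (cni.enabled is the chart's switch; other CNI settings, such as the CNI binary and config directories, vary per distribution, so check the Helm values reference):

```sh
# Install with the CNI handling traffic redirection
# instead of privileged init containers.
helm install kuma kuma/kuma \
  --namespace kuma-system \
  --create-namespace \
  --set cni.enabled=true
```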
OpenShift
Transparent proxy
Starting from version 4.1, OpenShift uses nftables instead of iptables, so redirecting traffic to the proxy with an init container no longer works; use the kuma-cni instead.
Webhooks on OpenShift 3.11
By default, MutatingAdmissionWebhook and ValidatingAdmissionWebhook are disabled on OpenShift 3.11.
To enable them, add the following pluginConfig to /etc/origin/master/master-config.yaml on the master node:
admissionConfig:
  pluginConfig:
    MutatingAdmissionWebhook:
      configuration:
        apiVersion: apiserver.config.k8s.io/v1alpha1
        kubeConfigFile: /dev/null
        kind: WebhookAdmission
    ValidatingAdmissionWebhook:
      configuration:
        apiVersion: apiserver.config.k8s.io/v1alpha1
        kubeConfigFile: /dev/null
        kind: WebhookAdmission
After updating master-config.yaml, restart the cluster and install the control plane.
GKE Autopilot
By default, GKE Autopilot forbids the NET_ADMIN Linux capability, which Kuma requires to set up the iptables rules that intercept inbound and outbound traffic.
You can authorize the NET_ADMIN capability on a GKE cluster in Autopilot mode by adding the following option to your gcloud command: --workload-policies=allow-net-admin
Full example:
gcloud beta container \
  --project ${GCP_PROJECT} \
  clusters create-auto ${CLUSTER_NAME} \
  --region ${REGION} \
  --release-channel "regular" \
  --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --network "projects/${GCP_PROJECT}/global/networks/default" \
  --subnetwork "projects/${GCP_PROJECT}/regions/${REGION}/subnetworks/default" \
  --no-enable-master-authorized-networks \
  --cluster-ipv4-cidr=/20 \
  --workload-policies=allow-net-admin