In order for traffic to flow through the Kuma data plane, all inbound and outbound traffic for a service needs to go through its data plane proxy. The recommended way of accomplishing this is via transparent proxying. On Kubernetes this is handled automatically by default with the kuma-init initContainer, but this container requires certain privileges.
Another option is to use the Kuma CNI. This frees every Pod in the mesh from requiring said privileges, which can make security compliance easier. The CNI DaemonSet itself requires elevated privileges because it writes executables to the host filesystem as root.
Install the CNI using either kumactl or Helm. The default settings are tuned for OpenShift with Multus. To use it in other environments, set the relevant configuration parameters.
Kuma CNI applies NetworkAttachmentDefinitions to applications in any namespace with the kuma.io/sidecar-injection label. To apply NetworkAttachmentDefinitions to applications not in a Mesh, add the label kuma.io/sidecar-injection with the value disabled to the namespace.
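For example, a namespace opted out of sidecar injection might be labeled like this (the namespace name legacy-apps is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: legacy-apps   # hypothetical namespace name
  labels:
    kuma.io/sidecar-injection: disabled
```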
Installation
Below are the details of how to set up Kuma CNI in different environments using both kumactl and helm.
Cilium
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=05-cilium.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=05-cilium.conflist" \
kuma kuma/kuma
You need to set the Cilium config value cni-exclusive or the corresponding Helm chart value cni.exclusive to false in order to use Cilium and Kuma together. This is necessary starting with the release of Cilium v1.14.
For installing Kuma CNI with Cilium on GKE, you should follow the Google - GKE section.
For Cilium versions earlier than 1.14, use cni.confName=05-cilium.conf, as the file name changed starting with Cilium 1.14.
Calico
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-calico.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-calico.conflist" \
kuma kuma/kuma
For installing Kuma CNI with Calico on GKE, you should follow the Google - GKE section.
K3D
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/var/lib/rancher/k3s/agent/etc/cni/net.d" \
--set "cni.binDir=/bin" \
--set "cni.confName=10-flannel.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/var/lib/rancher/k3s/agent/etc/cni/net.d" \
--set "cni.binDir=/bin" \
--set "cni.confName=10-flannel.conflist" \
kuma kuma/kuma
Kind
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-kindnet.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-kindnet.conflist" \
kuma kuma/kuma
Azure
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-azure.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-azure.conflist" \
kuma kuma/kuma
Azure overlay
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=15-azure-swift-overlay.conflist" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=15-azure-swift-overlay.conflist" \
kuma kuma/kuma
AWS (EKS)
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-aws.conflist" \
--set "controlPlane.envVars.KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_IP_FAMILY_MODE=ipv4" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/opt/cni/bin" \
--set "cni.confName=10-aws.conflist" \
--set "controlPlane.envVars.KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_IP_FAMILY_MODE=ipv4" \
kuma kuma/kuma
Add KUMA_RUNTIME_KUBERNETES_INJECTOR_SIDECAR_CONTAINER_IP_FAMILY_MODE=ipv4, as EKS has IPv6 disabled by default.
Google - GKE
You need to enable network-policy in your cluster (for existing clusters this redeploys the nodes). Define the variable CNI_CONF_NAME according to your CNI, for example: export CNI_CONF_NAME=05-cilium.conflist or export CNI_CONF_NAME=10-calico.conflist
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/home/kubernetes/bin" \
--set "cni.confName=${CNI_CONF_NAME}" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.chained=true" \
--set "cni.netDir=/etc/cni/net.d" \
--set "cni.binDir=/home/kubernetes/bin" \
--set "cni.confName=${CNI_CONF_NAME}" \
kuma kuma/kuma
OpenShift 3.x
- Follow the instructions in OpenShift 3.11 installation to get the MutatingAdmissionWebhook and ValidatingAdmissionWebhook enabled (this is required for regular Kuma installation).
- You need to grant privileged permission to the kuma-cni service account:
oc adm policy add-scc-to-user privileged -z kuma-cni -n kube-system
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.containerSecurityContext.privileged=true" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.containerSecurityContext.privileged=true" \
kuma kuma/kuma
OpenShift 4.x
kumactl install control-plane \
--set "cni.enabled=true" \
--set "cni.containerSecurityContext.privileged=true" \
| kubectl apply -f -
# Before installing Kuma with Helm, configure your local Helm repository:
# https://kuma.io/docs/2.9.x/production/cp-deployment/kubernetes/#helm
helm install \
--create-namespace \
--namespace kuma-system \
--set "cni.enabled=true" \
--set "cni.containerSecurityContext.privileged=true" \
kuma kuma/kuma
Kuma CNI taint controller
To prevent a race condition described in this issue, a new controller was implemented. The controller taints a node with a NoSchedule taint to prevent scheduling before the CNI DaemonSet is running and ready. Once the CNI DaemonSet is running and ready, it removes the taint, allowing other pods to be scheduled onto the node.
To disable the taint controller, use the following env variable:
KUMA_RUNTIME_KUBERNETES_NODE_TAINT_CONTROLLER_ENABLED="false"
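As a sketch, this variable can be passed to the control plane through the same controlPlane.envVars mechanism used in the EKS example above (environment-specific cni.* flags omitted here; keep the ones for your cluster):

```shell
kumactl install control-plane \
  --set "cni.enabled=true" \
  --set "controlPlane.envVars.KUMA_RUNTIME_KUBERNETES_NODE_TAINT_CONTROLLER_ENABLED=false" \
  | kubectl apply -f -
```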
Merbridge CNI with eBPF
To install Merbridge CNI with eBPF, append the following options to the installation command. To use Merbridge CNI with eBPF, your environment has to use kernel >= 5.7 and have cgroup2 available:
--set ... \
--set "cni.enabled=true" \
--set "experimental.ebpf.enabled=true"
Kuma CNI logs
Logs of CNI components are available via kubectl logs.
To enable debug-level logging, set the environment variable CNI_LOG_LEVEL to debug on the CNI DaemonSet kuma-cni. Note that editing the CNI DaemonSet shuts down the currently running CNI pods, so mesh-enabled application pods cannot start or shut down while the DaemonSet restarts. Don't do this in a production environment unless approved.
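For example (a sketch assuming the DaemonSet runs in the kube-system namespace; adjust the namespace to match your installation):

```shell
# Tail the CNI installer logs.
kubectl logs -n kube-system daemonset/kuma-cni

# Enable debug logging; this restarts the CNI pods (see the warning above).
kubectl set env daemonset/kuma-cni -n kube-system CNI_LOG_LEVEL=debug
```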
eBPF CNI currently doesn’t have support for exposing its logs.
Kuma CNI architecture
The CNI DaemonSet kuma-cni consists of two components:
- a CNI installer
- a CNI binary
Involved components collaborate like this:
flowchart LR
subgraph s1["conflist"]
n2["existing-CNIs"]
n3["kuma-cni"]
end
subgraph s2["application pod"]
n4["kuma-sidecar"]
n5["app-container"]
end
A["installer"] -- copy binary and setup conf --> n3
n3 -- configure iptables --> n4
The CNI installer copies the CNI binary kuma-cni to the CNI directory on the host. When chained, the installer also sets up chaining for kuma-cni in the CNI conflist file; when chaining is disabled, the kuma-cni binary is invoked explicitly as per the pod manifest. When correctly installed, the kuma-cni binary is invoked by Kubernetes when a mesh-enabled application pod is being created, so the iptables rules required by the kuma-sidecar container inside the pod are properly set up.
When chained, if the CNI conflist file is unexpectedly changed in a way that excludes kuma-cni, the installer immediately detects this and restarts itself so the chaining installation re-runs and CNI functionality heals automatically.
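To illustrate chaining, here is a simplified sketch of what a chained conflist could look like after installation. The network name and plugin entries are hypothetical and abbreviated; the actual file depends on your primary CNI and contains additional fields:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "calico" },
    { "type": "kuma-cni" }
  ]
}
```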