Performance fine-tuning
Reachable services
By default, when transparent proxying is used, every data plane proxy receives the configuration needed to reach every other data plane proxy in the mesh. In a large mesh, however, a data plane proxy usually connects to just a couple of services. By defining the list of such services for each proxy, we can dramatically improve the performance of Kuma.
The result is that:
- The control plane has to generate a much smaller XDS configuration (just a couple of Clusters/Listeners etc.), saving CPU and memory
- A smaller config is sent over the wire, saving a lot of network bandwidth
- Envoy only has to keep a couple of Clusters/Listeners, which means far fewer statistics and lower memory usage
Follow the transparent proxying docs to learn how to configure it.
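On Kubernetes, for example, reachable services can be listed per workload with the `kuma.io/transparent-proxying-reachable-services` annotation. The sketch below assumes a workload that only talks to two services; the deployment and service names are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  template:
    metadata:
      annotations:
        # Comma-separated list of kuma.io/service names this proxy needs to reach
        # (illustrative values)
        kuma.io/transparent-proxying-reachable-services: "redis_kuma-demo_svc_6379,elastic_kuma-demo_svc_9200"
      [...]
```

The control plane then generates Clusters/Listeners for only those two services for this proxy, instead of for the whole mesh.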
Config trimming by using MeshTrafficPermission
- This feature only works with MeshTrafficPermission. If you’re using TrafficPermission, you need to migrate to MeshTrafficPermission first; otherwise enabling this feature could stop all traffic flow.
- Due to a bug, ExternalServices without Zone Egress won’t work without Traffic Permissions. If you’re using External Services, you need to keep the associated TrafficPermissions, or upgrade Kuma to 2.6.x or newer.
Starting with release 2.5, the problem stated in the reachable services section can also be mitigated by defining MeshTrafficPermissions and configuring a zone control plane with `KUMA_EXPERIMENTAL_AUTO_REACHABLE_SERVICES=true`. Switching on the flag causes the control plane to compute a graph of dependencies between the services and generate XDS configuration that enables communication only between services that are allowed to communicate with each other (their effective action is not `deny`).
For example, if a service `b` can be called only by service `a`:
```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  namespace: kuma-system
  name: mtp-b
spec:
  targetRef:
    kind: MeshService
    name: b
  from:
    - targetRef:
        kind: MeshService
        name: a
      default:
        action: Allow
```
Then there is no reason to compute and distribute the configuration of service `b` to any other services in the mesh, since (even if they wanted to) they wouldn’t be able to communicate with it.
You can combine `autoReachableServices` with reachable services, but reachable services will take precedence.
The sections below highlight the most important aspects of this feature; if you want to dig deeper, take a look at the MADR.
Supported targetRef kinds
The following kinds affect the graph generation and performance:
- all levels of `MeshService`
- top level `MeshSubset` and `MeshServiceSubset` with `k8s.kuma.io/namespace`, `k8s.kuma.io/service-name`, `k8s.kuma.io/service-port` tags
- from level `MeshSubset` and `MeshServiceSubset` with all tags
If you define a MeshTrafficPermission with another kind, like this one:
```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  name: mtp-mesh-to-mesh
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      customTag: true
  from:
    - targetRef:
        kind: Mesh
      default:
        action: Allow
```
it won’t affect performance.
Changes to the communication between services
Requests from services trying to communicate with services they don’t have access to will now fail with a connection closed error like this:
```sh
root@second-test-server:/# curl -v first-test-server:80
* Trying [IP]:80...
* Connected to first-test-server ([IP]) port 80 (#0)
> GET / HTTP/1.1
> Host: first-test-server
> User-Agent: curl/7.81.0
> Accept: */*
>
* Empty reply from server
* Closing connection 0
curl: (52) Empty reply from server
```
instead of getting a `503` error:
```sh
root@second-test-server:/# curl -v first-test-server:80
* Trying [IP]:80...
* Connected to first-test-server ([IP]) port 80 (#0)
> GET / HTTP/1.1
> Host: first-test-server
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< content-length: 118
< content-type: text/plain
< date: Wed, 08 Nov 2023 14:15:24 GMT
< server: envoy
<
* Connection #0 to host first-test-server left intact
upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination/
```
Migration
A recommended migration path is to start with a coarse-grained `MeshTrafficPermission` targeting a `MeshSubset` with `k8s.kuma.io/namespace`, and then drill down to individual services if needed.
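A coarse-grained, namespace-wide policy of this kind might look like the following sketch; the policy name and namespace tag value are illustrative:

```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  name: mtp-namespace-coarse
  namespace: kuma-system
  labels:
    kuma.io/mesh: default
spec:
  targetRef:
    kind: MeshSubset
    tags:
      k8s.kuma.io/namespace: kuma-demo   # illustrative namespace
  from:
    - targetRef:
        kind: MeshSubset
        tags:
          k8s.kuma.io/namespace: kuma-demo
      default:
        action: Allow
```

Once traffic within the namespace is confirmed working, individual `MeshService` policies can replace the namespace-wide allow.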
Postgres
If you choose Postgres as the configuration store for Kuma on Universal, please be aware of the following key settings that affect the performance of the Kuma control plane.
- `KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT`: connection timeout to the Postgres database (default: 5s)
- `KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS`: maximum number of open connections to the Postgres database (default: unlimited)
KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT
The default value will work well in those cases where both `kuma-cp` and the Postgres database are deployed in the same datacenter / cloud region. However, if you’re pursuing a more distributed topology, e.g. by hosting `kuma-cp` on premises and using Postgres as a service in the cloud, the default value might no longer be enough.
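For instance, on Universal you might raise the timeout for a control plane that reaches a managed Postgres in another region; the value below is illustrative, not a recommendation:

```sh
KUMA_STORE_POSTGRES_CONNECTION_TIMEOUT=15s kuma-cp run
```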
KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS
The more dataplanes join your meshes, the more connections to the Postgres database Kuma might need to fetch configurations and update statuses. As of version 1.4.1 the default value is 50. However, if your Postgres database (e.g., as a service in the cloud) only permits a small number of concurrent connections, you will have to adjust the Kuma configuration accordingly.
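For example, to cap the pool below the connection limit of your database plan (the value is illustrative):

```sh
KUMA_STORE_POSTGRES_MAX_OPEN_CONNECTIONS=30 kuma-cp run
```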
Snapshot Generation
This is an advanced topic describing Kuma implementation internals.
The main task of the control plane is to provide config for dataplanes. When a dataplane connects to the control plane, the CP starts a new goroutine. This goroutine runs the reconciliation process at a given interval (1s by default). During this process, all dataplanes and policies are fetched for matching. When matching is done, the Envoy config (including policies and available endpoints of services) for the given dataplane is generated, and it is sent only if there is an actual change.
- `KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL`: interval for re-generating configuration for Dataplanes connected to the Control Plane (default: 1s)
This process can be CPU intensive with a high number of dataplanes, therefore you can control the interval time for a single dataplane. You can increase the interval, sacrificing the latency of new config propagation, to avoid overloading the CP. For example, changing it to 5s means that when you apply a policy (like TrafficPermission), or a new dataplane of a service goes up or down, the CP will generate and send the new config within 5 seconds.
For systems with high traffic, keeping old endpoints for such a long time (5s) may not be acceptable. To solve this, you can use passive or active health checks provided by Kuma.
Additionally, to avoid overloading the underlying storage there is a cache that shares fetch results between concurrent reconciliation processes for multiple dataplanes.
- `KUMA_STORE_CACHE_EXPIRATION_TIME`: expiration time for elements in the cache (default: 1s)
You can also change the expiration time, but it should not exceed `KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL`; otherwise the CP will waste time building the Envoy config with the same data.
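As a sketch, the two settings can be raised together so the cache never outlives the refresh interval; the values below are illustrative:

```sh
# Cache expiration stays <= refresh interval, so each reconciliation
# sees freshly fetched data at most once per cycle.
KUMA_XDS_SERVER_DATAPLANE_CONFIGURATION_REFRESH_INTERVAL=5s \
KUMA_STORE_CACHE_EXPIRATION_TIME=5s \
kuma-cp run
```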
Profiling
Kuma’s control plane ships with pprof endpoints so you can profile and debug the performance of the `kuma-cp` process.
To enable the debugging endpoints, set the `KUMA_DIAGNOSTICS_DEBUG_ENDPOINTS` environment variable to `true` before starting `kuma-cp`, and use one of the following methods to retrieve the profiling information:
You can retrieve the profiling information with Golang’s `pprof` tool, for example:
```sh
go tool pprof http://<IP of the CP>:5680/debug/pprof/profile?seconds=30
```
Then, you can analyze the retrieved profiling data using an application like Speedscope.
After a successful debugging session, please remember to turn off the debugging endpoints, since anybody could execute heap dumps on them, potentially exposing sensitive data.
Kubernetes client
The Kubernetes client uses client-side throttling to avoid overwhelming the kube-api server. In larger deployments (more than 2000 services in a single Kubernetes cluster), the number of resource updates can hit this limit. In most cases it’s safe to increase this limit, as kube-api has its own throttling mechanism. To change the client throttling configuration, you need to update the config:
```yaml
runtime:
  kubernetes:
    clientConfig:
      qps: ... # Qps defines the maximum number of requests the Kubernetes client is allowed to make per second.
      burstQps: ... # BurstQps defines the maximum number of burst requests the Kubernetes client is allowed to make per second.
```
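A filled-in sketch might look like this; the numbers are illustrative starting points for a large cluster, not recommendations:

```yaml
runtime:
  kubernetes:
    clientConfig:
      qps: 100       # illustrative: sustained request rate
      burstQps: 200  # illustrative: short-lived burst allowance
```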
Kubernetes controller manager
Kuma modifies some Kubernetes resources. Kubernetes calls this process of modification reconciliation. Every resource has its own work queue, and the control plane adds reconciliation tasks to that queue. In larger deployments (more than 2000 services in a single Kubernetes cluster), the size of the work queue for pod reconciliation can grow and slow down pod updates. In this situation you can change the number of concurrent pod reconciliation tasks by changing the configuration:
```yaml
runtime:
  kubernetes:
    controllersConcurrency:
      podController: ... # PodController defines the maximum number of concurrent reconciliations of Pod resources.
```
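For example, a large cluster might run more reconciliation workers in parallel; the value below is illustrative:

```yaml
runtime:
  kubernetes:
    controllersConcurrency:
      podController: 20  # illustrative: concurrent Pod reconciliations
```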
Envoy
Envoy concurrency tuning
Envoy allows configuring the number of worker threads used for processing requests. Sometimes it might be useful to change the default number of worker threads, e.g. on a high-CPU machine with low traffic. Depending on the type of deployment, there are different mechanisms in `kuma-dp` to change Envoy’s concurrency level.
By default, Envoy runs with a concurrency level based on the CPU resource limit. For example, if you’ve started the `kuma-dp` container with a CPU resource limit of `7000m`, then concurrency is set to 7. It’s also worth mentioning that on Kubernetes concurrency is clamped to at least 2 and at most 10 worker threads. If a higher concurrency level is required, you can change the setting with the `kuma.io/sidecar-proxy-concurrency` annotation, which sets the concurrency level without these limits.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        kuma.io/sidecar-proxy-concurrency: "55"
    spec:
      [...]
```