Workload
The Workload resource represents a logical grouping of data plane proxies that share the same workload identifier. Kuma automatically creates and manages this resource when data plane proxies have a kuma.io/workload label. On Kubernetes, this label is set via a kuma.io/workload annotation on Pods. On Universal, the label is set directly on the Dataplane resource.
Use Workload resources to:
- Monitor connected and healthy data plane proxies per workload
- Group data plane proxies by workload identifier for observability
- Integrate with MeshIdentity for workload-based identity assignment
Workload resources are automatically managed by Kuma. Manual creation is not supported. The resource is automatically created when data plane proxies with a kuma.io/workload label are deployed, and deleted when no data plane proxies reference it.
namespace-mesh constraint on Kubernetes: All data plane proxies referencing a Workload must belong to the same mesh. On Kubernetes, this is enforced at the namespace level—a single namespace cannot contain pods in multiple meshes.
If Kuma detects pods in multiple meshes within the same namespace, it emits a Kubernetes warning event on the namespace and skips Workload resource generation for the affected workload. The existing Workload resource (if any) is left orphaned but not deleted.
For details on preventing this configuration issue, see the namespace-mesh constraint documentation.
Examples
Workload created automatically
When you deploy a data plane proxy with the kuma.io/workload label, Kuma automatically creates a Workload resource:
Pod annotation:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  annotations:
    kuma.io/workload: demo-workload
```
Automatically created Workload:
```yaml
apiVersion: kuma.io/v1alpha1
kind: Workload
metadata:
  name: demo-workload
  namespace: default
  labels:
    kuma.io/mesh: default
    kuma.io/managed-by: k8s-controller
spec: {}
status:
  dataplaneProxies:
    connected: 3
    healthy: 3
    total: 3
```
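To confirm which Workload a given pod maps to, you can read the annotation directly from the pod; the pod name and namespace below are taken from the example above:

```bash
kubectl get pod demo-app -n default -o jsonpath='{.metadata.annotations.kuma\.io/workload}'
```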
Workload with MeshIdentity
Use Workload with MeshIdentity to assign identity based on the workload identifier:
MeshIdentity:
```yaml
apiVersion: kuma.io/v1alpha1
kind: MeshIdentity
metadata:
  name: workload-identity
  namespace: {{site.mesh_namespace}}
  labels:
    kuma.io/mesh: default
spec:
  selector:
    dataplane:
      matchLabels:
        kuma.io/workload: demo-workload
  spiffeID:
    trustDomain: example.com
    path: "/workload/{{ .Workload }}"
  provider:
    type: Bundled
    bundled:
      meshTrustCreation: Enabled
      insecureAllowSelfSigned: true
      autogenerate:
        enabled: true
```
Result: data plane proxies labeled kuma.io/workload: demo-workload receive the SPIFFE ID spiffe://example.com/workload/demo-workload.
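To spot-check the issued identity, one option is to inspect the workload certificate through Envoy's admin interface. The sketch below assumes the Envoy admin endpoint is reachable on port 9901 (an assumption about the sidecar's admin port; adjust the pod name and port for your deployment):

```bash
# Forward the Envoy admin port of one of the demo-app pods (9901 is assumed here).
kubectl port-forward demo-app -n default 9901:9901

# In another terminal, dump the proxy's certificates; the SPIFFE ID should appear
# among the certificate's URI SANs.
curl -s http://localhost:9901/certs
```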
Checking workload status
Monitor workload health:
```bash
kubectl get workloads -n default
```

```
NAME            MESH      AGE
demo-workload   default   5m
```
Get detailed status:
```bash
kubectl get workload demo-workload -n default -o yaml
```
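If you only need the proxy counters, a jsonpath query works as well; the field names are taken from the status example above:

```bash
kubectl get workload demo-workload -n default \
  -o jsonpath='{.status.dataplaneProxies.connected}/{.status.dataplaneProxies.total}'
```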
Workload label management
The kuma.io/workload label determines which Workload resource a data plane proxy belongs to:
On Kubernetes:
- Automatic assignment: The workload label is automatically derived from pod labels (configurable via runtime.kubernetes.workloadLabels in the control plane configuration)
- Manual assignment: Set via the kuma.io/workload annotation on pods
- Protection: Cannot be manually set as a label on pods; Kuma will reject pod creation or updates with this label
On Universal:
- Set the kuma.io/workload label directly in the Dataplane resource's inbound tags (see the sketch below)
The kuma.io/workload label on data plane proxies must match the Workload resource name exactly, and all data plane proxies referencing a Workload must be in the same mesh.
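For reference, a minimal Universal Dataplane sketch with the label set on the inbound tags; the service name, address, and port are illustrative:

```yaml
type: Dataplane
mesh: default
name: demo-app-1
networking:
  address: 192.168.0.10
  inbound:
    - port: 8080
      tags:
        kuma.io/service: demo-app
        kuma.io/workload: demo-workload
```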
Limitations
- Single mesh: All data plane proxies referencing a workload must belong to the same mesh. On Kubernetes, this is enforced at the namespace level—a single namespace cannot contain pods in multiple meshes. When this constraint is violated, Kuma skips Workload generation and emits a warning event.
- Automatic lifecycle: Cannot be manually created or modified. The resource is fully managed by the control plane.
- Runtime enforcement: To proactively prevent multi-mesh namespaces, enable the runtime.kubernetes.disallowMultipleMeshesPerNamespace flag. When enabled, the admission webhook rejects pod creation or updates if the namespace already contains Dataplanes in a different mesh.
Troubleshooting
Detecting multi-mesh namespace issues
If Workload resources are not being created as expected, check for multi-mesh namespace warnings:
Check namespace events:
```bash
kubectl get events -n <namespace> --field-selector type=Warning
```
Look for events with the message: “Skipping Workload generation: namespace has pods in multiple meshes for workload. This configuration is not supported.”
Identify pods and their meshes:
```bash
kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.kuma\.io/mesh}{"\n"}{end}'
```
Resolving multi-mesh namespace issues
To resolve this configuration issue:
- Identify affected pods: Use the command above to list all pods and their mesh assignments in the namespace.
- Reorganize workloads: Move pods belonging to different meshes into separate namespaces.
- Optional: Enable proactive prevention: Set runtime.kubernetes.disallowMultipleMeshesPerNamespace=true in your control plane configuration to prevent this issue in the future.
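A minimal sketch of that setting in the control plane configuration file, assuming the YAML path mirrors the flag name above:

```yaml
runtime:
  kubernetes:
    disallowMultipleMeshesPerNamespace: true
```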