Policies

Need help? Installing and using Kuma should be as easy as possible. Contact and chat with the community in real-time if you get stuck or need clarifications. We are here to help.

Here you can find the list of Policies that Kuma supports, which allow you to build a modern and reliable Service Mesh.

Applying Policies

Once installed, Kuma can be configured via its policies. You can apply policies with kumactl on Universal, and with kubectl on Kubernetes. Regardless of the environment, you can always read the latest Kuma state with kumactl.
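
For example, the read-only kumactl subcommands let you inspect the current state from either environment (the subcommand names below assume a recent kumactl; check kumactl --help if they differ):

kumactl get meshes
kumactl inspect dataplanes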

Following Kubernetes best practices, you should always change your Kubernetes state with CRDs. That's why Kuma disables kumactl apply [..] when running in Kubernetes environments.

These policies can be applied either by file via the kumactl apply -f [path] or kubectl apply -f [path] syntax, or by using the following command:

echo "
  type: ..
  spec: ..
" | kumactl apply -f -

or - on Kubernetes - by using the equivalent:

echo "
  apiVersion: kuma.io/v1alpha1
  kind: ..
  spec: ..
" | kubectl apply -f -

Below you can find the policies that Kuma supports. In addition to kumactl, you can also retrieve the state via the Kuma HTTP API.
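
For instance, assuming the control plane's HTTP API server listens on its default port 5681, you can list the Meshes with a plain HTTP call (the endpoint path below is the one used internally by kumactl and may change between releases):

curl http://localhost:5681/meshes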

Mesh

This policy allows you to create multiple Service Meshes on top of the same Kuma cluster.

On Universal:

type: Mesh
name: default

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  namespace: kuma-system
  name: default

Mutual TLS

This policy enables automatic encrypted mTLS traffic for all the services in a Mesh.

Kuma ships with a builtin CA (Certificate Authority) that is initialized with an auto-generated root certificate. The root certificate is unique to every Mesh and is used to sign identity certificates for every data-plane.

The mTLS feature is also used for AuthN/Z: each data-plane is assigned a SPIFFE-compatible workload identity certificate. This certificate has a SAN set to spiffe://<mesh name>/<service name> (for example, spiffe://default/backend for the backend service in the default mesh). When Kuma enforces policies that require an identity, like TrafficPermission, it extracts the SAN from the client certificate and uses it for every identity-matching operation.

By default, mTLS is not enabled. You can enable Mutual TLS by updating the Mesh policy with the mtls setting.

On Universal:

type: Mesh
name: default
mtls:
  enabled: true
  ca:
    builtin: {}

You can apply this configuration with kumactl apply -f [file-path].

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  namespace: kuma-system
  name: default
spec:
  mtls:
    enabled: true
    ca:
      builtin: {}

You can apply this configuration with kubectl apply -f [file-path].

Currently, Kuma only supports self-signed certificates (builtin). In the future, we plan to add support for third-party Certificate Authorities.

With mTLS enabled, traffic is restricted by default. Remember to apply a TrafficPermission policy to permit connections between Dataplanes.
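
For example, a permissive TrafficPermission that restores all traffic while you roll out mTLS could look like this on Universal (the policy name allow-all is just an example):

type: TrafficPermission
name: allow-all
mesh: default
sources:
  - match:
      service: '*'
destinations:
  - match:
      service: '*'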

Traffic Permissions

Traffic Permissions allow you to define security rules for services that consume other services via their Tags. It is a very useful policy for increasing security in the Mesh and compliance in the organization.

You can determine what source services are allowed to consume specific destination services. The service field is mandatory in both sources and destinations.

In Kuma 0.3.0 the sources field only allows for service and only service will be enforced. This limitation will disappear in the next version of Kuma.

In the example below, the destinations include not only the service property, but also an additional version tag. You can add arbitrary tags to any Dataplane.

On Universal:

type: TrafficPermission
name: permission-1
mesh: default
sources:
  - match:
      service: backend
destinations:
  - match:
      service: redis
      version: '5.0'

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: TrafficPermission
mesh: default
metadata:
  namespace: default
  name: permission-1
spec:
  sources:
    - match:
        service: backend
  destinations:
    - match:
        service: redis
        version: '5.0'

Match-All: You can match any value of a tag by using *, like version: '*'.
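
For example, to allow backend to consume every version of redis, you could match the destination with a wildcard (the policy name permission-all-versions is just an example):

type: TrafficPermission
name: permission-all-versions
mesh: default
sources:
  - match:
      service: backend
destinations:
  - match:
      service: redis
      version: '*'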

Traffic Route

The TrafficRoute policy allows you to configure routing rules for L4 traffic, e.g. blue/green deployments and canary releases. Traffic is split across the destinations according to their relative weights; in the example below, 90% of the traffic from backend to redis goes to version 1.0 and 10% to version 2.0.

On Universal:

type: TrafficRoute
name: route-1
mesh: default
sources:
  - match:
      service: backend
destinations:
  - match:
      service: redis
conf:
  - weight: 90
    destination:
      service: redis
      version: '1.0'
  - weight: 10
    destination:
      service: redis
      version: '2.0'

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  namespace: default
  name: route-1
spec:
  sources:
  - match:
      service: backend
  destinations:
  - match:
      service: redis
  conf:
    - weight: 90
      destination:
        service: redis
        version: '1.0'
    - weight: 10
      destination:
        service: redis
        version: '2.0'

Traffic Tracing

This is a proposed policy that is not GA yet. You can set up tracing manually by leveraging the ProxyTemplate policy and the low-level Envoy configuration. Join us on Slack to share your tracing requirements.

The proposed policy will enable tracing on the Mesh level by adding a tracing field.

On Universal:

type: Mesh
name: default
tracing:
  enabled: true
  type: zipkin
  address: zipkin.srv:9000

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  namespace: kuma-system
  name: default
spec:
  tracing:
    enabled: true
    type: zipkin
    address: zipkin.srv:9000

Traffic Log

With the TrafficLog policy you can configure access logging on every Envoy data-plane belonging to the Mesh. These logs can then be collected by any agent and inserted into systems like Splunk, ELK and Datadog. The first step is to configure logging backends for the Mesh. A backend can be either a file or a TCP service (like Logstash). The second step is to create a TrafficLog entity to select the connections to log.

On Universal:

type: Mesh
name: default
mtls:
  ca:
    builtin: {}
  enabled: true
logging:
  defaultBackend: file
  backends:
    - name: logstash
      format: |
        {
            "destination": "%UPSTREAM_CLUSTER%",
            "destinationAddress": "%UPSTREAM_LOCAL_ADDRESS%",
            "source": "%KUMA_DOWNSTREAM_CLUSTER%",
            "sourceAddress": "%DOWNSTREAM_REMOTE_ADDRESS%",
            "bytesReceived": "%BYTES_RECEIVED%",
            "bytesSent": "%BYTES_SENT%"
        }
      tcp:
        address: 127.0.0.1:5000
    - name: file
      file:
        path: /tmp/access.log
type: TrafficLog
name: all-traffic
mesh: default
sources:
- match:
    service: '*'
destinations:
- match:
    service: '*'
# if omitted, the default logging backend of that mesh will be used
type: TrafficLog
name: backend-to-database-traffic
mesh: default
sources:
- match:
    service: backend
destinations:
- match:
    service: database
conf:
  backend: logstash

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  namespace: kuma-system
  name: default
spec:
  mtls:
    ca:
      builtin: {}
    enabled: true
  logging:
    defaultBackend: file
    backends:
      - name: logstash
        format: |
          {
              "destination": "%UPSTREAM_CLUSTER%",
              "destinationAddress": "%UPSTREAM_LOCAL_ADDRESS%",
              "source": "%KUMA_DOWNSTREAM_CLUSTER%",
              "sourceAddress": "%DOWNSTREAM_REMOTE_ADDRESS%",
              "bytesReceived": "%BYTES_RECEIVED%",
              "bytesSent": "%BYTES_SENT%"
          }
        tcp:
          address: 127.0.0.1:5000
      - name: file
        file:
          path: /tmp/access.log
apiVersion: kuma.io/v1alpha1
kind: TrafficLog
metadata:
  namespace: kuma-system
  name: all-traffic
spec:
  sources:
  - match:
      service: '*'
  destinations:
  - match:
      service: '*'
  # if omitted, the default logging backend of that mesh will be used
apiVersion: kuma.io/v1alpha1
kind: TrafficLog
metadata:
  namespace: kuma-system
  name: backend-to-database-traffic
spec:
  sources:
  - match:
      service: backend
  destinations:
  - match:
      service: database
  conf:
    backend: logstash

If a backend in TrafficLog is not explicitly specified, the defaultBackend from Mesh will be used.

In the format field, you can use the standard Envoy placeholders for TCP traffic, as well as a few additional Kuma-specific ones:

  • %KUMA_SOURCE_ADDRESS% - source address of the Dataplane
  • %KUMA_SOURCE_SERVICE% - source service from which traffic is sent
  • %KUMA_DESTINATION_SERVICE% - destination service to which traffic is sent
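
For example, a compact single-line format that combines the Kuma placeholders with a standard Envoy one could be declared in a backend like this (the backend name simple-file is just an example):

logging:
  backends:
    - name: simple-file
      format: '%KUMA_SOURCE_SERVICE% to %KUMA_DESTINATION_SERVICE%, %BYTES_SENT% bytes'
      file:
        path: /tmp/access.log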

Proxy Template

With the ProxyTemplate policy you can configure the low-level Envoy resources directly. The policy requires two elements in its configuration:

  • imports: this field lets you import canned ProxyTemplates provided by Kuma.
    • In the current release, the only available canned ProxyTemplate is default-proxy
    • In future releases, more of these will be available, and users will also be able to define their own to re-use across their infrastructure
  • resources: the custom resources that will be applied to every Dataplane that matches the selectors.

On Universal:

type: ProxyTemplate
mesh: default
name: template-1
selectors:
  - match:
      service: backend
conf:
  imports:
    - default-proxy
  resources:
    - ..
    - ..

On Kubernetes:

apiVersion: kuma.io/v1alpha1
kind: ProxyTemplate
mesh: default
metadata:
  namespace: default
  name: template-1
spec:
  selectors:
    - match:
        service: backend
  conf:
    imports:
      - default-proxy
    resources:
      - ..
      - ..

Below you can find an example of what a ProxyTemplate configuration could look like:

  imports:
    - default-proxy
  resources:
    - name: localhost:9901
      version: v1
      resource: |
        '@type': type.googleapis.com/envoy.api.v2.Cluster
        connectTimeout: 5s
        name: localhost:9901
        loadAssignment:
          clusterName: localhost:9901
          endpoints:
          - lbEndpoints:
            - endpoint:
                address:
                  socketAddress:
                    address: 127.0.0.1
                    portValue: 9901
        type: STATIC
    - name: inbound:0.0.0.0:4040
      version: v1
      resource: |
        '@type': type.googleapis.com/envoy.api.v2.Listener
        name: inbound:0.0.0.0:4040
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 4040
        filter_chains:
        - filters:
          - name: envoy.http_connection_manager
            config:
              route_config:
                virtual_hosts:
                - routes:
                  - match:
                      prefix: /stats/prometheus
                    route:
                      cluster: localhost:9901
                  domains:
                  - '*'
                  name: envoy_admin
              codec_type: AUTO
              http_filters:
              - name: envoy.router
              stat_prefix: stats