Kyverno Policy-as-Code on EKS: Validate, Mutate, Generate
Kubernetes RBAC controls who can do what, but it doesn’t control whether the things they do are safe. A developer with namespace-level deploy access can create a Pod without resource limits, pull a public Docker Hub image, skip required labels, and expose a NodePort service — all without any RBAC violation. Kyverno fills this gap: it runs as a validating and mutating admission webhook, intercepting every resource change at the API server before the object is persisted.
Kyverno v1.13 is the release this guide targets. It’s written in Go, speaks native Kubernetes (policies are CRDs, not Rego), and handles four types of rules: validate (block or warn), mutate (add or change fields before admission), generate (create related resources automatically), and verifyImages (check container image signatures). This guide covers all four, with EKS-specific installation notes and the policy patterns that matter most for production clusters.
Why Kyverno Over OPA/Gatekeeper
Both work. The practical difference is language: Gatekeeper uses Rego, which is a query language that most engineers don’t already know and don’t want to learn. Kyverno policies are YAML with JMESPath and Kyverno’s own expression syntax — engineers familiar with Kubernetes can read and write policies without learning a new language.
For teams already invested in Rego, Gatekeeper is fine. For teams that want policies that look like Kubernetes manifests, Kyverno is the better fit. The two tools are functionally comparable for the standard policy use cases.
Installing Kyverno on EKS
Install via Helm. The command below runs the admission controller with three replicas for high availability:
# Add the Kyverno Helm repo
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
# Install Kyverno with HA configuration
helm install kyverno kyverno/kyverno \
  --namespace kyverno \
  --create-namespace \
  --version 3.3.0 \
  --set admissionController.replicas=3 \
  --set backgroundController.replicas=2 \
  --set cleanupController.replicas=2 \
  --set reportsController.replicas=2
# Verify pods are running
kubectl get pods -n kyverno
# NAME READY STATUS RESTARTS AGE
# kyverno-admission-controller-6b8f9d7b5c-xxxxx 1/1 Running 0 2m
# kyverno-admission-controller-6b8f9d7b5c-yyyyy 1/1 Running 0 2m
# kyverno-admission-controller-6b8f9d7b5c-zzzzz 1/1 Running 0 2m
EKS-specific note: Kyverno runs as an admission webhook, which means the EKS-managed control plane must be able to reach the Kyverno service on port 443 (container port 9443). With the default cluster security group this works out of the box; if you have tightened security group rules or run a custom networking setup, make sure inbound traffic from the control plane to the Kyverno pods is allowed.
# Verify the webhook is registered
kubectl get validatingwebhookconfigurations | grep kyverno
kubectl get mutatingwebhookconfigurations | grep kyverno
Validate Policies
Validate policies block or warn on resources that don’t meet a rule. The validationFailureAction controls the behavior: Enforce blocks the request, Audit allows it but logs a PolicyReport violation.
Start with Audit mode for every new policy. Switch to Enforce only after you’ve watched the PolicyReport for a few days and confirmed no legitimate workloads are caught by the rule.
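An audit-first rollout is a one-field change. A minimal illustrative policy (the name and label here are just examples):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label       # illustrative name
spec:
  validationFailureAction: Audit # record violations in PolicyReports, admit anyway
  background: true               # also scan existing resources, not just new requests
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Deployment
    validate:
      message: "Deployments should carry a 'team' label."
      pattern:
        metadata:
          labels:
            team: "?*"
```

Once the PolicyReports stay clean for a few days, flip validationFailureAction to Enforce.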
Require Labels
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: check-team-label
    match:
      any:
      - resources:
          kinds:
          - Deployment
          - StatefulSet
          - DaemonSet
    validate:
      message: "Workloads must have 'team' and 'environment' labels."
      pattern:
        metadata:
          labels:
            team: "?*" # must exist and be non-empty
            environment: "?*"
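For example, a Deployment that carries team but omits environment would be rejected with the message above (all names here are hypothetical):

```yaml
# This Deployment would be blocked: 'team' is present, 'environment' is missing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  labels:
    team: payments
    # environment: <missing> -> admission denied
spec:
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: api
        image: payments-api:1.4.2
```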
Require Resource Limits
CPU and memory limits are optional in Kubernetes. Without them, a single misbehaving pod can consume a node’s CPU and memory and trigger OOM kills in neighboring workloads. This policy blocks pods that don’t set limits:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: check-container-limits
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "All containers must set CPU and memory limits."
      pattern:
        spec:
          containers:
          - name: "*"
            resources:
              limits:
                cpu: "?*"
                memory: "?*"
Disallow Latest Tag
latest is the silent killer of reproducible deployments. An image deployed today with nginx:latest is not the same image deployed next Tuesday — and when something breaks, you have no way to identify which layer changed. The rule below covers both the explicit :latest tag and the implicit case where no tag is specified at all (which also resolves to latest in most runtimes):
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: check-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Images must use a specific tag, not 'latest'."
      foreach:
      - list: "request.object.spec.containers"
        deny:
          conditions:
            any:
            - key: "{{ element.image }}"
              operator: Equals
              value: "*:latest"
            - key: "{{ contains(element.image, ':') }}"
              operator: Equals
              value: false
The second condition catches images with no tag at all — nginx without any tag defaults to latest in Docker and most container runtimes.
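The same two checks can be sketched in plain Python to see exactly what the deny conditions match. One caveat applies to both the policy and this sketch: an untagged image on a registry with a port, like registry:5000/nginx, contains a colon and so slips past the check.

```python
def violates_latest(image: str) -> bool:
    """Mirror the policy's two deny conditions for one image string."""
    # Condition 1: explicit ':latest' tag (the policy uses a '*:latest' wildcard).
    if image.endswith(":latest"):
        return True
    # Condition 2: no tag at all -- the runtime will default it to 'latest'.
    if ":" not in image:
        return True
    return False

print(violates_latest("nginx"))         # True  (no tag)
print(violates_latest("nginx:latest"))  # True  (explicit latest)
print(violates_latest("nginx:1.27.0"))  # False (pinned tag)
```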
Mutate Policies
Mutate policies modify resources as they enter the cluster. The pod gets the mutation transparently; the developer doesn’t need to change their manifests.
Inject Default Resource Requests
Not every team sets resource requests, especially in dev and staging namespaces where Kubernetes scheduling precision matters less. Rather than blocking those pods, mutate them to add sensible defaults on the way in. The +() syntax does the right thing: it sets the field only when it’s missing, so pods that already declare requests aren’t touched:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources
spec:
  rules:
  - name: add-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaces:
          - dev
          - staging
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - (name): "*"            # anchor: apply to every container
            resources:
              requests:
                +(cpu): "100m"     # + means only set if missing
                +(memory): "128Mi"
The +() syntax in Kyverno means “set only if the field doesn’t already exist.” Containers that already declare requests are left alone; containers without them get the defaults injected.
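To make the anchor behavior concrete, here is a hypothetical pod before admission, annotated with what the policy would and would not change:

```yaml
# Submitted to the dev namespace: one container with requests, one without.
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: dev
spec:
  containers:
  - name: app
    image: my-app:1.0
    resources:
      requests:
        cpu: "250m"      # already set -> left untouched by +(cpu)
        memory: "256Mi"  # already set -> left untouched by +(memory)
  - name: sidecar
    image: my-sidecar:1.0
    # no requests -> admitted with cpu: 100m, memory: 128Mi injected
```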
Add Namespace Labels to Pods
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: propagate-ns-labels
spec:
  rules:
  - name: sync-team-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    context:
    - name: nsteam
      apiCall:
        urlPath: "/api/v1/namespaces/{{ request.namespace }}"
        jmesPath: "metadata.labels.team"
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            +(team): "{{ nsteam }}"
This reads the team label from the namespace object and copies it onto the pod — useful for cost allocation tags and for Prometheus relabeling rules that need a team dimension.
Generate Policies
Generate policies watch for resource events and create companion resources automatically. The problem they solve is the namespace provisioning gap: you create a namespace, and for a few minutes it has no NetworkPolicy, no default resource quota, no pull secret — whatever defaults your cluster policy requires. A generate policy closes that window by creating the companion resources the moment the namespace exists:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-network-policy
spec:
  rules:
  - name: create-deny-all-policy
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny-ingress
      namespace: "{{ request.object.metadata.name }}"
      synchronize: true # keep the generated resource in sync; recreate if deleted
      data:
        spec:
          podSelector: {}
          policyTypes:
          - Ingress
With synchronize: true, if someone manually deletes the generated NetworkPolicy, Kyverno recreates it. The policy becomes the source of truth, not the individual namespace.
Generate policies can also clone existing resources into new namespaces — useful for distributing Secrets (like container registry pull secrets) or ConfigMaps across namespaces automatically.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: clone-registry-secret
spec:
  rules:
  - name: clone-pull-secret
    match:
      any:
      - resources:
          kinds:
          - Namespace
    generate:
      apiVersion: v1
      kind: Secret
      name: ecr-pull-secret
      namespace: "{{ request.object.metadata.name }}"
      synchronize: true
      clone:
        namespace: kube-system
        name: ecr-pull-secret # the source secret to copy
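The clone rule assumes the source secret already exists in kube-system. A sketch of what that source object looks like — the credential payload here is a placeholder, in practice it is generated from ECR credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ecr-pull-secret
  namespace: kube-system
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6e319  # base64 of {"auths":{}} -- placeholder only
```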
Image Verification with Cosign
Kyverno’s verifyImages rule integrates with Cosign to check container image signatures before allowing a pod to start. This blocks unsigned or improperly signed images from running in your cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  background: false # image verification must run at admission, not in background
  rules:
  - name: check-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
          namespaces:
          - production
    verifyImages:
    - imageReferences:
      - "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:*"
      attestors:
      - count: 1
        entries:
        - keyless:
            subject: "https://github.com/my-org/my-api/.github/workflows/release.yml@refs/heads/main"
            issuer: "https://token.actions.githubusercontent.com"
This uses keyless signing (Sigstore’s OIDC-based approach): the signature is tied to the GitHub Actions workflow that produced it. Any image that wasn’t signed by that specific workflow is blocked. To sign images in CI:
# In GitHub Actions, after pushing the image
cosign sign \
--yes \
123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:${IMAGE_TAG}
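Keyless signing needs an OIDC identity token, which in GitHub Actions comes from the id-token permission. A sketch of the relevant job configuration — workflow names and versions are illustrative:

```yaml
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for Sigstore keyless signing
      contents: read
    steps:
      - uses: sigstore/cosign-installer@v3
      - name: Sign the pushed image
        run: |
          cosign sign --yes \
            123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:${IMAGE_TAG}
```

The resulting certificate embeds the workflow identity, which is what the policy’s subject and issuer fields match against.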
Policy Exceptions
Sometimes a specific workload legitimately needs to bypass a policy. Kyverno has a PolicyException resource for this rather than requiring you to weaken the policy itself:
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: allow-prometheus-no-limits
  namespace: monitoring
spec:
  exceptions:
  - policyName: require-resource-limits
    ruleNames:
    - check-container-limits
  match:
    any:
    - resources:
        kinds:
        - Pod
        namespaces:
        - monitoring
        names:
        - "prometheus-*"
This exempts the Prometheus pods from the resource limits policy without touching the policy itself. The exception is namespace-scoped, auditable, and reviewable in code review like any other manifest. Note that PolicyExceptions are disabled by default; enable them at install time (in the Helm chart, via features.policyExceptions.enabled=true).
Policy Reports
Kyverno generates PolicyReport (namespace-scoped) and ClusterPolicyReport (cluster-scoped) objects summarizing which resources pass or fail each policy. These are useful for audit mode: deploy the policy in audit, then check the report to see what would have been blocked.
# Check policy reports in a namespace
kubectl get policyreport -n my-api
# NAME PASS FAIL WARN ERROR SKIP AGE
# cpol-require-labels 45 3 0 0 0 2d
# See which specific resources failed
kubectl get policyreport -n my-api -o json | \
python3 -c "
import json, sys
for rep in json.load(sys.stdin)['items']:
    for res in rep.get('results', []):
        if res.get('result') == 'fail':
            for obj in res.get('resources', []):
                print(res.get('policy'), '->', obj.get('kind'), obj.get('name'))
"
# Aggregate failures across all namespaces
kubectl get policyreport -A -o json | \
python3 -c "
import json,sys
reports = json.load(sys.stdin)['items']
for r in reports:
fails = [x for x in r.get('results',[]) if x.get('result') == 'fail']
if fails:
print(f\"{r['metadata']['namespace']}/{r['metadata']['name']}: {len(fails)} failures\")
"
Kyverno CLI for CI/CD
The kyverno CLI applies policies against manifests before they’re deployed. Add it to your CI pipeline to catch policy violations before they reach the cluster:
# Install Kyverno CLI (release assets are versioned; pick the build for your platform)
curl -LO https://github.com/kyverno/kyverno/releases/download/v1.13.0/kyverno-cli_v1.13.0_linux_x86_64.tar.gz
tar -xzf kyverno-cli_v1.13.0_linux_x86_64.tar.gz
sudo mv kyverno /usr/local/bin/
# Test your manifests against policies in CI
kyverno apply ./policies/ --resource ./k8s-manifests/ --detailed-results
# Exit code is 0 if all pass, non-zero if any fail
# Output:
# Applying 4 policy rule(s) to 12 resource(s)...
#
# policy require-labels -> resource Deployment/my-api/backend PASSED
# policy require-resource-limits -> resource Deployment/my-api/backend FAILED
# -> validation rule 'check-container-limits' failed
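In a pipeline this is just a gating step. A hypothetical GitHub Actions fragment, assuming the CLI is already on the runner’s PATH:

```yaml
- name: Validate manifests against Kyverno policies
  run: |
    kyverno apply ./policies/ --resource ./k8s-manifests/ --detailed-results
  # A non-zero exit code fails the job, blocking the merge.
```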
The CLI can also emit its results as a PolicyReport resource, which downstream reporting tooling can consume:
kyverno apply ./policies/ --resource ./k8s-manifests/ --policy-report
Kyverno works alongside the RBAC setup described in the EKS RBAC and security guide — RBAC controls access, Kyverno controls content. For the broader security posture including pod security standards and network policies, the AWS Security Hub guide covers how EKS findings surface in Security Hub for centralized visibility.