EKS RBAC and Security: Access Entries, Pod Identity, and Pod Security Standards
The aws-auth ConfigMap was never a good idea. It’s a plain Kubernetes ConfigMap in the kube-system namespace — editable by anyone with cluster-admin, no audit trail, no AWS-native access controls, and no way to see who added which entry or when. For years it was the only mechanism for granting AWS IAM identities access to EKS clusters. AWS moved to deprecate it in late 2023 when EKS Access Entries shipped, and the gap between the two approaches is wide enough that migrating off the ConfigMap is worth doing even on clusters that are otherwise stable.
This post covers the full EKS security stack: Access Entries for cluster authentication, Kubernetes RBAC for authorization within the cluster, EKS Pod Identity (the newer alternative to IRSA) for workload IAM, and Pod Security Standards for controlling what pods can do once they’re running.
Access Entries: IAM Identities Without the ConfigMap
Access entries live in the EKS API rather than inside Kubernetes. You create them with aws eks create-access-entry, they appear in CloudTrail, and when an IAM principal is deleted, its access entry follows automatically. No stale entries left in a ConfigMap nobody’s audited in two years.
Three authentication modes control how your cluster handles identity resolution.
CONFIG_MAP is the legacy mode — aws-auth ConfigMap handles everything. Use this only if you’re on an older cluster and haven’t migrated yet.
API_AND_CONFIG_MAP runs both in parallel. Access entries take precedence for identities that appear in both. This is the migration path: add access entries for new principals while keeping existing ConfigMap entries working.
API is the target state. The ConfigMap is ignored entirely. Use this for any new cluster you create.
Check and change your cluster’s authentication mode:
```bash
# Check current mode
aws eks describe-cluster \
  --name my-cluster \
  --query 'cluster.accessConfig.authenticationMode'

# Migrate to API mode
# Warning: you cannot go back toward CONFIG_MAP after moving forward
aws eks update-cluster-config \
  --name my-cluster \
  --access-config authenticationMode=API
```
The mode change is directional — CONFIG_MAP → API_AND_CONFIG_MAP → API, no reversal. Plan the migration sequence before starting.
Creating an access entry and attaching an AWS-managed access policy:
```bash
# Create access entry for an IAM role
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/AdminRole \
  --type STANDARD

# Grant cluster-admin via managed policy
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/AdminRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```
AWS provides a set of managed access policies covering the common patterns, including:
| Policy | What It Grants |
|---|---|
| AmazonEKSClusterAdminPolicy | Full cluster-admin |
| AmazonEKSAdminPolicy | Namespace-scoped admin |
| AmazonEKSEditPolicy | Edit most resources, no RBAC changes |
| AmazonEKSViewPolicy | Read-only cluster access |
| AmazonEKSAdminViewPolicy | Read-only including secrets |
| AmazonEKSNamespacedEditPolicy | Edit within specific namespaces |
For namespace-scoped access, set the access scope type to namespace and list the namespaces inside the same --access-scope value:

```bash
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/DeveloperRole \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy \
  --access-scope type=namespace,namespaces=production,staging
```
The developer role gets edit access in production and staging only — no cluster-wide access, no other namespaces. Access entries also support --kubernetes-groups, which maps an IAM role to a Kubernetes RBAC group. That’s where RBAC picks up.
Kubernetes RBAC: Authorization After Authentication
Access entries handle who can connect to the cluster. RBAC handles what they can do once connected. The two layers are independent — a principal with a valid access entry still gets denied by Kubernetes if no RBAC binding permits the requested operation.
Four RBAC objects make up the system.
Role grants permissions within a single namespace and can’t reference resources outside it. ClusterRole grants permissions cluster-wide, or can be applied within a namespace when bound via a RoleBinding. ClusterRoles also hold permissions for non-namespaced resources: nodes, PersistentVolumes, namespaces themselves.
RoleBinding binds a Role or ClusterRole to subjects within a specific namespace. ClusterRoleBinding binds a ClusterRole to subjects cluster-wide.
The most common pattern is ClusterRole plus RoleBinding: one ClusterRole defines the permission set, then a RoleBinding per namespace scopes it to the teams that should have it there.
```yaml
# ClusterRole: read pods and their logs
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding: apply this access only in the production namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-in-production
  namespace: production
subjects:
  - kind: Group
    name: monitoring-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
The monitoring-team group can read pods and logs in production. They can’t delete pods, can’t access other namespaces, and can’t read secrets. Wire up the access entry group mapping:
```bash
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::123456789012:role/MonitoringRole \
  --kubernetes-groups monitoring-team
```
Anyone assuming MonitoringRole gets placed into monitoring-team. The RoleBinding picks that up automatically.
Audit RBAC state with kubectl auth can-i:
```bash
# What can the current identity do?
kubectl auth can-i --list

# Can a specific role delete pods in production?
kubectl auth can-i delete pods --namespace production \
  --as arn:aws:iam::123456789012:role/DeveloperRole

# Find all cluster-admin bindings
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") |
    {name: .metadata.name, subjects: .subjects}'
```
The cluster-admin audit is worth running on any cluster that’s been live for more than a few months. Bindings accumulate that nobody remembers creating, from debugging sessions and one-time operations that were supposed to be temporary.
EKS Pod Identity: Workload IAM Without the OIDC Plumbing
Pods frequently need AWS permissions — reading from S3, writing to DynamoDB, fetching from Secrets Manager. Two mechanisms handle this: IRSA (IAM Roles for Service Accounts) and EKS Pod Identity.
IRSA has been around since 2019. It works by connecting an IAM role’s trust policy to your cluster’s OIDC provider URL. The setup works, but the trust policy embeds the cluster-specific OIDC URL. Rebuild the cluster and every trust policy needs updating.
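To make that coupling concrete, here is roughly what an IRSA trust policy looks like. The account ID, region, and OIDC provider ID below are placeholders; the condition keys follow the documented IRSA pattern of matching the service account subject and the STS audience:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:production:api-server",
        "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
      }
    }
  }]
}
```

Every occurrence of that provider ID is specific to one cluster, which is exactly what makes cluster rebuilds painful under IRSA.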
EKS Pod Identity (November 2023) removes that coupling. The trust policy is fixed, generic, and cluster-agnostic:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Service": "pods.eks.amazonaws.com"
    },
    "Action": ["sts:AssumeRole", "sts:TagSession"]
  }]
}
```
No OIDC URL. The same IAM role works across any EKS cluster. Association is done through the EKS API:
```bash
# Install the Pod Identity Agent add-on (one-time per cluster)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name eks-pod-identity-agent

# Associate role with a service account
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace production \
  --service-account api-server \
  --role-arn arn:aws:iam::123456789012:role/ApiServerRole
```
Pods using the api-server service account in production receive temporary credentials for ApiServerRole. No annotation on the service account, no OIDC provider to set up. The Pod Identity Agent DaemonSet handles credential delivery.
The one constraint: Pod Identity doesn’t work with Fargate pods. For Fargate workloads, IRSA is still required. For clusters mixing Fargate and managed nodes, use IRSA for Fargate and Pod Identity everywhere else — they coexist without conflict.
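On the IRSA side of that split, the role link lives on the service account itself via an annotation. A sketch, assuming a hypothetical FargateApiRole already configured with an IRSA trust policy:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api-server
  namespace: production
  annotations:
    # IRSA: the service account carries the role ARN; the EKS
    # webhook injects web identity credentials into pods using it
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/FargateApiRole
```

Pod Identity needs no annotation at all, which is a quick way to tell at a glance which mechanism a given service account is using.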
The IAM trust mechanics behind both approaches are covered in the AWS IAM roles and policies guide. The trust policy structure for cross-service authentication is the same pattern, just with pods.eks.amazonaws.com as the principal instead of a Lambda or EC2 service endpoint.
Pod Security Standards: What Containers Can Do
Once a pod is running, Pod Security Standards control its security posture at the namespace level. Three levels exist.
Privileged imposes no restrictions. Root containers, host network access, host path mounts — anything goes. Reserve this for node agents that genuinely need it: Falco, certain CSI drivers, the CNI plugin itself.
Baseline blocks the most common privilege escalation vectors. Privileged containers, host network, host process, and hostPath mounts for sensitive paths are all disallowed. Most application workloads pass baseline without code changes.
Restricted implements current hardening best practices. Requires non-root containers, disallows most volume types, requires explicit seccomp profiles. If your application can run under restricted, it should.
Apply a level to a namespace with labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Three modes per label:
- enforce: violating pods are rejected at admission
- audit: violations logged to the API server audit log, pods allowed
- warn: violations surface a user-visible warning, pods allowed
For migrating existing namespaces, the sequence is warn → audit → enforce. Jumping directly to enforce on a live namespace will reject pods that weren’t written with restricted in mind. The warning mode shows you what needs fixing without breaking anything.
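A common intermediate state during that rollout is to enforce the level the namespace already meets while warning and auditing at the level you are moving toward:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Workloads already pass baseline, so enforce it now
    pod-security.kubernetes.io/enforce: baseline
    # Surface restricted violations without blocking anything yet
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Once the warnings dry up, bumping the enforce label to restricted is a one-line change.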
A pod that fails restricted policy usually needs explicit security context settings:
```yaml
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```
capabilities.drop: ["ALL"] removes all Linux capabilities from the container. Most applications don’t need any. Drop everything and add back only what’s required rather than leaving the defaults in place.
Audit Logging
EKS control plane audit logs go to CloudWatch Logs, but they’re not enabled by default:
```bash
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```
The audit log type records every API server request: who made it, what was requested, and whether it succeeded or was denied. For security investigations, this is the primary evidence source. Without it, reconstructing what happened after an incident is guesswork.
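For orientation, here is an abridged sketch of what a single audit event looks like, trimmed to the fields queried below; real events carry many more fields, and the username shape depends on your access entry username mapping:

```json
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "verb": "get",
  "user": {
    "username": "arn:aws:iam::123456789012:role/DeveloperRole",
    "groups": ["monitoring-team"]
  },
  "objectRef": {
    "resource": "secrets",
    "namespace": "production",
    "name": "db-credentials"
  },
  "responseStatus": {
    "code": 403
  }
}
```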
Useful CloudWatch Insights queries:
```
# Failed requests across the cluster
fields @timestamp, user.username, verb, objectRef.resource, responseStatus.code
| filter responseStatus.code >= 400
| sort @timestamp desc
| limit 50
```

```
# Secrets accessed (time range set in the console)
fields @timestamp, user.username, verb, objectRef.name, objectRef.namespace
| filter objectRef.resource = "secrets" and verb in ["get", "list"]
| sort @timestamp desc
```
One cost consideration: the api log type generates high volume — every kubectl command, every controller reconciliation, every webhook call. On a busy cluster, all five log types together produce significant CloudWatch Logs charges. The audit type alone is usually worth the cost for compliance and incident investigation. The others are most valuable when actively debugging specific issues.
Network Policies
By default, all pods in a cluster can communicate with all other pods regardless of namespace. Network Policies restrict this. The VPC CNI plugin has supported Network Policies natively since version 1.14:

```bash
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}'
```
Default-deny ingress for a namespace, with an explicit allow for the specific workload:
```yaml
# Block all ingress to the production namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow ingress controller to reach api-server pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```
Network policies are additive: pods can communicate until you install a default-deny. Once that’s in place, only explicit policies create connectivity. This is exactly the security model you want, but it requires mapping out your inter-service communication before you apply it — otherwise you’ll break things and have no clean way to trace which policy is blocking what.
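One caveat worth mapping out in advance: the examples above restrict Ingress only. If you extend default-deny to Egress, every allowed outbound flow needs an explicit policy, including DNS lookups, or name resolution breaks cluster-wide. A sketch, assuming CoreDNS runs in kube-system; pair it with per-workload egress allows, since this policy isolates egress for every pod it selects:

```yaml
# Allow all pods in production to reach DNS in kube-system;
# all other egress from selected pods stays denied until allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```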
The VPC networking layer that underpins pod-to-pod communication — subnet CIDRs, ENI limits, security groups for pods — is covered in the EKS VPC CNI deep dive.
Security Baseline Checklist
A production EKS cluster security baseline, roughly in implementation order:
- Set authentication mode to API on new clusters
- Create access entries for all IAM principals that need cluster access
- Use Pod Identity for managed node workloads; IRSA for Fargate
- Apply the baseline Pod Security Standard to all namespaces immediately
- Work toward restricted for namespaces running application workloads
- Enable control plane audit logging (audit type minimum)
- Enable VPC CNI network policy support, add default-deny policies per namespace
- Audit ClusterRoleBindings quarterly — especially those pointing at cluster-admin
Access Entries are a strict improvement over the aws-auth ConfigMap. The ConfigMap’s core problem wasn’t the data it held, it was that the data lived inside Kubernetes with no IAM-native audit trail and no lifecycle connection to the IAM identities it referenced. Moving that mapping into the EKS API makes cluster access as auditable as any other AWS resource. If you’re standing up a cluster today, start with API mode — there’s no reason to initialize the ConfigMap at all.
The EKS getting started guide covers initial cluster setup and IRSA configuration — the patterns here build directly on that foundation.