Amazon EKS 1.30 Extended Support Deadline: Upgrade Planning Before July 23, 2026

Written by Bits Lovers

Amazon EKS 1.30 reaches the end of extended support on July 23, 2026. If you still have production clusters on 1.30, the upgrade is no longer a “later this quarter” task. It is a dated platform risk with a control plane auto-upgrade waiting on the other side.

AWS documents the EKS Kubernetes version lifecycle in the Amazon EKS version calendar: 14 months of standard support, then 12 months of extended support. For Kubernetes 1.30, the EKS release date was May 23, 2024, standard support ended July 23, 2025, and extended support ends July 23, 2026.
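To keep the date concrete in planning docs, the remaining runway can be computed directly. A quick sketch, assuming GNU date as found on Linux (macOS needs `date -j -f '%Y-%m-%d'` instead):

```shell
# Days of runway left before EKS 1.30 extended support ends.
# Assumes GNU date (Linux); adjust for BSD/macOS date.
deadline="2026-07-23"
days_left=$(( ( $(date -d "$deadline" +%s) - $(date +%s) ) / 86400 ))
echo "Days until EKS 1.30 end of extended support: $days_left"
```

Subtract your internal buffer (30 days or more, as argued below) to get the real planning deadline.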

The deadline is not only about the control plane. A safe EKS upgrade has to sequence the control plane, Amazon EKS add-ons, managed node groups, self-managed nodes, Fargate pods, Karpenter, PodDisruptionBudgets, deprecated APIs, admission webhooks, and workload rollouts. If you run EKS like a product platform, this is a release program, not a console click.

If you need broader platform context, pair this with the Bits Lovers guides on EKS Auto Mode in production, EKS Karpenter autoscaling, ArgoCD deployments on EKS, and CloudWatch Container Insights for EKS.

The EKS 1.30 Support Dates

Put the dates in front of the team. Deadlines are easier to manage when they are visible.

Kubernetes Version   EKS Release Date     End of Standard Support   End of Extended Support
1.30                 May 23, 2024         July 23, 2025             July 23, 2026
1.31                 September 26, 2024   November 26, 2025         November 26, 2026
1.32                 January 23, 2025     March 23, 2026            March 23, 2027
1.33                 May 29, 2025         July 29, 2026             July 29, 2027

The practical target should not be “upgrade on July 22.” By then you have no room for a failed add-on rollout, a broken webhook, a node drain issue, or an application team that discovers a deprecated API in production.

For most teams, the sensible target is to have production clusters off 1.30 at least 30 days before July 23, 2026. That gives you time to monitor workload behavior, update documentation, and clean up node version skew.

What Happens If You Wait

AWS is clear about the end of extended support. Clusters that complete the full lifecycle cannot stay on that Kubernetes version indefinitely. After the end-of-support date, EKS can automatically update existing control planes to the earliest supported version through a gradual process. AWS also says this automatic update can happen at any time after the end of extended support, without a specific schedule.

That is the wrong way to run a platform.

The control plane is only one part of the cluster. Managed node groups are not automatically upgraded just because the control plane moves. Self-managed nodes are also your responsibility. Fargate pods need to be restarted so they come back with a kubelet aligned to the cluster version. Add-ons such as VPC CNI, CoreDNS, kube-proxy, and EBS CSI need explicit attention.

If EKS moves the control plane and your worker layer stays stale, your incident becomes a version skew cleanup under pressure.
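Skew can be checked mechanically rather than by eyeballing `kubectl get nodes`. A minimal sketch, assuming the upstream Kubernetes skew policy (kubelet may lag kube-apiserver by up to three minor versions as of 1.28; older clusters allow two):

```shell
# Flag kubelets that trail the control plane by more than the supported skew.
# Feed it versions from `kubectl get nodes -o wide` in practice.
minor() { echo "$1" | cut -d. -f2; }

skew_ok() {
  local control_plane="$1" kubelet="$2" max_skew="${3:-3}"
  local diff=$(( $(minor "$control_plane") - $(minor "$kubelet") ))
  [ "$diff" -ge 0 ] && [ "$diff" -le "$max_skew" ]
}

skew_ok 1.31 1.30 && echo "1.30 kubelet under a 1.31 control plane: within skew"
skew_ok 1.34 1.30 || echo "1.30 kubelet under a 1.34 control plane: out of skew"
```

A kubelet newer than the API server is also out of policy, which is why the function rejects negative differences.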

Upgrade Sequencing That Works

The safest EKS upgrade sequence is boring and repeatable.

Phase 1: Inventory clusters and versions. Exit: every 1.30 cluster has an owner, environment, and target version.
Phase 2: Scan deprecated APIs. Exit: no production manifests depend on APIs removed in the target version.
Phase 3: Validate add-ons. Exit: VPC CNI, CoreDNS, kube-proxy, EBS CSI, and controllers have compatible versions.
Phase 4: Upgrade the non-production control plane. Exit: staging cluster on the target version with healthy add-ons.
Phase 5: Upgrade nodes. Exit: managed, self-managed, and Karpenter nodes aligned or within supported skew.
Phase 6: Run workload tests. Exit: deployments, ingress, storage, autoscaling, and observability pass.
Phase 7: Upgrade the production control plane. Exit: production API server healthy and admission webhooks stable.
Phase 8: Roll production nodes. Exit: workloads rescheduled without violating availability targets.
Phase 9: Verify telemetry and rollback plan. Exit: dashboards, alerts, SLOs, and runbooks updated.

Do not combine all phases into one maintenance window unless your environment is small and low risk. The expensive part of EKS upgrades is not the control plane operation. It is discovering that some workload only works because the old cluster tolerated a bad assumption.

Pre-Upgrade Checklist

Run this before touching the first production cluster.

  • Export every cluster version with aws eks list-clusters plus aws eks describe-cluster.
  • Record node kubelet versions with kubectl get nodes -o wide.
  • List add-on versions with aws eks list-addons and aws eks describe-addon.
  • Check Helm releases that install controllers, webhooks, CRDs, and storage drivers.
  • Scan manifests and live resources for deprecated Kubernetes APIs.
  • Confirm all PodDisruptionBudgets allow at least one pod to move during drains.
  • Review topology spread constraints and anti-affinity rules before node replacement.
  • Confirm Cluster Autoscaler or Karpenter supports the target Kubernetes version.
  • Validate admission webhooks have multiple replicas, working certificates, and reasonable timeouts.
  • etcd snapshots are managed by EKS, not by you, but you still need application-level backups for stateful workloads.
  • Confirm CloudWatch, Prometheus, and log pipelines continue working after node replacement.
  • Write a rollback plan for workloads, even though Kubernetes minor version control plane downgrades are not the normal path.

This checklist catches the usual failures: stale nodes, fragile PDBs, incompatible controllers, and old manifests hiding in Git.

Deprecated API Scan

Deprecated APIs are where “the cluster upgraded fine” turns into “deployments are broken.”

Kubernetes removes APIs across minor versions. If your manifests, Helm charts, operators, or CI templates still emit removed API versions, the API server may reject future creates or updates. Existing objects can make this more confusing because something may appear to be running until the next deployment tries to apply it again.

Use more than one method:

  • Run kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis if your metrics path is available.
  • Use kubectl api-resources and compare against the target version.
  • Run tools such as Pluto or kubent (kube-no-trouble) against rendered manifests and live clusters.
  • Search Git for old API versions in Helm values, Kustomize overlays, and CRDs.
  • Check admission controllers and operators, not just application Deployments.

The important habit is scanning rendered manifests, not only source templates. Helm conditionals and environment overlays can hide the real object that reaches the API server.
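A crude but effective backstop is to grep the rendered output for API versions already removed upstream. A sketch with a deliberately non-exhaustive removal list; in practice run it over `helm template` or `kustomize build` output, with a sample manifest standing in here:

```shell
# Grep rendered manifests for API versions removed in recent Kubernetes
# releases (non-exhaustive: PSP/PDB v1beta1, CronJob v1beta1, HPA v2beta2,
# flowcontrol v1beta2).
removed_apis='policy/v1beta1|batch/v1beta1|autoscaling/v2beta2|flowcontrol.apiserver.k8s.io/v1beta2'

scan() { grep -En "apiVersion: *($removed_apis)" || true; }

cat <<'EOF' | scan
apiVersion: batch/v1beta1
kind: CronJob
---
apiVersion: apps/v1
kind: Deployment
EOF
```

Dedicated tools understand version targets better than a grep, but the grep catches manifests those tools never see, such as templates rendered only in CI.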

Add-Ons Before Nodes

EKS add-ons deserve their own phase.

At minimum, review:

  • Amazon VPC CNI
  • CoreDNS
  • kube-proxy
  • Amazon EBS CSI driver
  • AWS Load Balancer Controller, if installed separately
  • EFS CSI driver, if used
  • ExternalDNS, cert-manager, metrics-server, and observability agents

The order depends on your current versions, but the principle is simple: do not roll hundreds of nodes while the networking, DNS, or storage layer is already behind.

CoreDNS issues show up as application timeouts. VPC CNI issues show up as pod scheduling or IP assignment problems. EBS CSI issues show up as stuck StatefulSets. kube-proxy issues show up as weird service routing symptoms. These are not places to improvise during a production control plane upgrade.

If your deployment process is GitOps-based, the ArgoCD on EKS guide is relevant here. Controllers should be pinned, reviewed, and promoted like application releases, not manually nudged in a panic.
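In that spirit, a pinned controller might look like the following ArgoCD Application sketch. The repoURL is the real eks-charts repository, but the project, cluster name, and pinned chart version are illustrative placeholders to adapt:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: aws-load-balancer-controller
  namespace: argocd
spec:
  project: platform                      # hypothetical project name
  source:
    repoURL: https://aws.github.io/eks-charts
    chart: aws-load-balancer-controller
    targetRevision: 1.8.1                # explicit pin, promoted per environment
    helm:
      values: |
        clusterName: payments-prod       # hypothetical cluster name
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
```

The point is the explicit targetRevision: upgrading it is a reviewed Git change that moves through staging before production, not a live edit during the window.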

Managed Nodes, Self-Managed Nodes, and Karpenter

After the control plane upgrade, nodes need a deliberate rollout.

Managed node groups are easier because EKS gives you an update operation, but updates still cause real disruption: drain behavior interacts with PDBs, local storage, topology constraints, and application readiness probes.

Self-managed nodes require more discipline. You own the launch template, AMI, bootstrap configuration, labels, taints, IAM role, and replacement process. Make sure the AMI supports the target Kubernetes version and that user data does not pin old bootstrap flags.

Karpenter adds a different shape. You need to confirm your Karpenter version, CRDs, NodePools or Provisioners, EC2NodeClasses or AWSNodeTemplates, disruption budgets, consolidation behavior, and AMI family settings. If Karpenter is allowed to replace too much capacity too quickly, it can turn an upgrade into a cluster-wide churn event.

For teams using Karpenter heavily, revisit EKS Karpenter autoscaling before the upgrade window. Autoscaling is part of the upgrade, not just background infrastructure.
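Karpenter's disruption budgets are the lever for capping that churn. A sketch against the karpenter.sh/v1 API; the pool name, schedule, and percentages are placeholders to adapt to your environment:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general                          # hypothetical pool name
spec:
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    budgets:
      - nodes: "10%"                     # never disrupt more than 10% of nodes at once
      - schedule: "0 9 * * mon-fri"      # example: freeze voluntary disruption
        duration: 8h                     # during business hours
        nodes: "0"
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
```

With a budget like this, an upgrade-driven AMI change rolls through in controlled slices instead of replacing the fleet at once.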

Gotcha: PDBs Can Block the Whole Plan

PodDisruptionBudgets are supposed to protect availability. Bad PDBs can block every node drain.

Common patterns that cause trouble:

  • minAvailable: 1 on a single-replica workload
  • maxUnavailable: 0 on workloads that need voluntary disruption
  • PDBs that select more pods than intended because labels are too broad
  • workloads with replicas spread across too few nodes or zones
  • stateful applications with slow shutdown or startup times

Before the maintenance window, run:

kubectl get pdb -A
kubectl drain <test-node> --ignore-daemonsets --delete-emptydir-data --dry-run=server

The dry run will not catch every real-world behavior, but it forces the team to inspect disruption policy before the clock starts.
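Blocked drains usually show up as PDBs whose allowed disruptions are already zero. That column can be flagged directly; in practice pipe `kubectl get pdb -A` into this, with sample output standing in here and invented namespace names:

```shell
# Flag PDBs whose ALLOWED DISRUPTIONS column (second to last) is 0;
# these will stall `kubectl drain` until something changes.
blocked_pdbs() {
  awk 'NR > 1 && $(NF-1) == 0 { print $1 "/" $2 " blocks drains" }'
}

cat <<'EOF' | blocked_pdbs
NAMESPACE   NAME      MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
payments    api-pdb   1               N/A               0                     90d
search      web-pdb   N/A             1                 1                     30d
EOF
```

Every flagged PDB needs either more replicas, a relaxed budget, or an explicit owner decision before the maintenance window opens.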

Gotcha: Webhooks Can Break the API Server Experience

Admission webhooks sit in the request path. If a webhook is down, slow, expired, or incompatible, normal Kubernetes operations can fail in surprising ways.

Before upgrading:

  • check webhook pod replica counts
  • verify certificates and expiration dates
  • set sane timeoutSeconds
  • review failurePolicy
  • confirm the webhook supports the target Kubernetes version
  • test create, update, and delete operations in staging

This matters for cert-manager, policy engines, service mesh injectors, security scanners, custom operators, and internal platform webhooks. A broken webhook can make a healthy control plane feel broken to every deployment pipeline.
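As a reference point, these are the fields worth auditing on each configuration. The names below are hypothetical; the fields are standard admissionregistration.k8s.io/v1:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-webhook           # hypothetical name
webhooks:
  - name: validate.example.internal      # hypothetical name
    timeoutSeconds: 5                    # keep low; the default 10s can stall applies
    failurePolicy: Ignore                # Fail blocks matching requests if the webhook is down
    clientConfig:
      service:
        name: policy-webhook             # hypothetical service
        namespace: platform
        path: /validate
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    admissionReviewVersions: ["v1"]
    sideEffects: None
```

The failurePolicy choice is a real trade-off: Ignore keeps the cluster operable when the webhook dies, while Fail keeps the policy enforced. Decide per webhook before the upgrade, not during it.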

Observability During the Upgrade

Do not upgrade blind. At minimum, watch:

  • API server request errors and latency
  • node readiness and kubelet version distribution
  • pod restart rate
  • pending pods
  • CoreDNS error rate and latency
  • VPC CNI IP allocation errors
  • EBS attach and mount failures
  • ingress controller errors
  • Karpenter provisioning and disruption events

CloudWatch Container Insights, Prometheus, and Grafana each cover part of this picture. If you need the CloudWatch side, use the Container Insights for EKS guide. If your stack is Prometheus-first, the Prometheus and Grafana on EKS guide is the better companion.
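If your stack is Prometheus-first, a few starter queries cover the top of that list. These assume kube-apiserver, CoreDNS, and kube-state-metrics metrics are being scraped; exact metric names can vary by component version:

```
# API server 5xx rate
sum(rate(apiserver_request_total{code=~"5.."}[5m]))

# CoreDNS SERVFAIL responses
sum(rate(coredns_dns_responses_total{rcode="SERVFAIL"}[5m]))

# Pods stuck pending (kube-state-metrics)
sum(kube_pod_status_phase{phase="Pending"})
```

Put these on one dashboard before the window so a regression is visible within one scrape interval, not one support ticket.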

Production Rollout Plan

A pragmatic rollout looks like this:

  1. Upgrade a development cluster that actually runs representative workloads.
  2. Upgrade staging and run deployment, scaling, ingress, storage, and rollback tests.
  3. Freeze risky controller changes during the production window.
  4. Upgrade the production control plane.
  5. Upgrade or roll add-ons as required by the target version.
  6. Roll one node group or one Karpenter capacity segment first.
  7. Watch telemetry for at least one normal traffic cycle.
  8. Continue node replacement in controlled batches.
  9. Restart Fargate workloads where applicable so kubelet alignment catches up.
  10. Record final cluster, node, add-on, and controller versions.

The target version decision depends on your organization. Moving from 1.30 to 1.31 buys you runway only until November 26, 2026, when 1.31 extended support ends. Moving farther reduces repeated upgrade pressure but increases the change surface. For many teams, 1.32 or 1.33 is the better planning target in 2026, provided add-ons and controllers are compatible.

Bottom Line

July 23, 2026 is the wrong date to start the EKS 1.30 conversation. By then, you should already have production clusters upgraded, nodes aligned, add-ons current, and application teams through at least one successful rollout on the new version.

Treat the EKS 1.30 deadline as a platform release. Scan deprecated APIs, promote add-ons through environments, upgrade the control plane deliberately, roll nodes in batches, watch PDBs and webhooks, and verify observability before calling it done. The control plane version is the headline. The real work is everything attached to it.
