Karpenter ARC Zonal Shift: EKS AZ Failure Runbook for Platform Teams
AWS added Karpenter support for Amazon Application Recovery Controller zonal shift on May 12, 2026. For EKS operators, that changes an AZ incident from a load-balancer-only discussion into a node provisioning and workload placement discussion.
That is why this post is intentionally practical. It does not try to turn Karpenter ARC zonal shift into a product brochure. It treats the announcement as an operating decision: what should a cloud team change, what can wait, what has to be measured, and which guardrails keep the fix from becoming a new source of downtime.
If you are connecting this to the existing BitsLovers library, start with the related posts: EKS Karpenter autoscaling, Route 53 ARC readiness changes, EKS IAM condition keys, EKS Auto Mode in production, EKS extended support upgrade planning, and CloudWatch Container Insights for EKS. Those articles cover the adjacent platform patterns; this one focuses on EKS availability operations when Karpenter and Amazon Application Recovery Controller work together.

The recommended operating model is a short workflow: start with the signal, scope the blast radius, implement the smallest useful control, verify the result, and then turn the work into a repeatable runbook. It keeps the discussion out of the abstract, and the order matters. A lot of teams jump straight from announcement to tooling. That feels fast, but it usually skips ownership, rollback, and the boring evidence an auditor or incident reviewer will ask for later.
What Changed
Zonal shift and zonal autoshift help move supported traffic away from an impaired Availability Zone. Karpenter support matters because node capacity is part of that recovery story. If a cluster moves traffic but the autoscaler keeps treating zones as interchangeable, the runbook is incomplete.
The date matters here because engineering teams already have plenty of stale guidance in their wikis. Treat this as a May 2026 operating note. If a vendor updates the documentation later, update the runbook and leave a revision note in the post. That is not editorial polish; it is how you keep technical content from becoming another unsafe copy-paste source.
The mechanism is not magic failover. ARC handles zonal shift behavior for supported resources. Karpenter reacts to scheduling and capacity signals. Your workloads still need topology spread, realistic PodDisruptionBudgets, healthy readiness probes, and enough capacity in the remaining zones.
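To make those workload-side prerequisites checkable, here is a minimal sketch using a hypothetical deployment named checkout in a shop namespace; substitute your own names, namespaces, and selectors.
# Does the deployment declare a zone spread constraint at all?
kubectl get deploy checkout -n shop -o jsonpath='{.spec.template.spec.topologySpreadConstraints}{"\n"}'
# Does the PDB leave the scheduler room to move pods? minAvailable equal to the replica count blocks recovery.
kubectl get pdb checkout -n shop -o jsonpath='{.spec.minAvailable}{" minAvailable of "}{.status.expectedPods}{" expected pods\n"}'
# Do containers define readiness probes, so relocated pods only receive traffic once healthy?
kubectl get deploy checkout -n shop -o jsonpath='{.spec.template.spec.containers[*].name}{": "}{.spec.template.spec.containers[*].readinessProbe}{"\n"}'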
Why Platform Teams Should Care
A one-zone impairment is one of the most common cloud failure modes worth practicing. Many teams have multi-AZ architecture on paper and single-AZ behavior in practice. Stateful workloads, bad affinity rules, oversized pods, strict PDBs, and exhausted IP space can all break the recovery path.
This is also where cost and reliability get mixed together. A reliability feature can quietly move risk from the service team to the platform team, and it can add node churn, spare-capacity spend, and operational review effort. Good engineering writing should name that tradeoff.
For Karpenter ARC zonal shift, the practical question is not “is this useful?” It is useful. The better question is where the control should live. If it belongs in a one-off project, document it there. If it belongs in the platform baseline, put it in CI, admission control, IAM, observability, or a shared runbook. Most teams get into trouble when they make that boundary implicit.
Operating Baseline
The minimum baseline is an EKS cluster where critical workloads can tolerate losing one zone. That means services have replicas across zones, persistent workloads have an honest recovery story, and Karpenter can launch replacement nodes in zones that still have room.
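A quick way to see whether Karpenter is even allowed to use the surviving zones is to read the zone requirement on each NodePool; an empty result usually means the zone choice is delegated to the EC2NodeClass subnets. The sketch below assumes the Karpenter v1 resource names and jsonpath filter support in kubectl.
# Zone requirements declared per NodePool (empty means zones come from subnet discovery).
kubectl get nodepools -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.requirements[?(@.key=="topology.kubernetes.io/zone")].values}{"\n"}{end}'
# Subnets (and therefore zones) Karpenter can actually launch into, per EC2NodeClass.
kubectl get ec2nodeclasses -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.subnets[*].zone}{"\n"}{end}'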
| Workload type | Zonal shift stance | Pre-flight check |
|---|---|---|
| Stateless HTTP service | Good candidate | Replicas across at least two healthy zones |
| Queue worker | Usually safe | Backlog tolerance and idempotent processing |
| Stateful database pod | Treat carefully | Storage and quorum model understood |
| Single-replica admin tool | Do not depend on zonal shift | Add replicas or accept downtime |
The table is deliberately opinionated. It gives you a default answer before the exception shows up. Exceptions are fine; hidden exceptions are not. If someone wants to bypass the default, require a reason, an owner, and an expiration date. That one small rule prevents a lot of permanent “temporary” infrastructure.
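Before trusting the "replicas across at least two healthy zones" answer, count where the pods actually run. A minimal sketch; the app=checkout selector and shop namespace are placeholders.
# Zones currently hosting one workload's pods.
kubectl get pods -n shop -l app=checkout -o jsonpath='{range .items[*]}{.spec.nodeName}{"\n"}{end}' \
  | while read -r node; do
      kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}'
    done | sort | uniq -c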
Implementation Pattern
A practice run should capture what Karpenter, the scheduler, and the application do together.
# Baseline snapshot: where pods run today and what protects them from disruption.
kubectl get pods -A -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName'
kubectl get pdb -A
kubectl get nodes -L topology.kubernetes.io/zone
kubectl get nodeclaims -A 2>/dev/null || true   # Karpenter-managed capacity, if the CRD is installed
# During a controlled exercise, watch pending pods and scheduling pressure.
kubectl get pods -A --field-selector=status.phase=Pending
kubectl describe nodes | grep -E 'topology.kubernetes.io/zone|Allocated resources' -A8
The snippet is not meant to be pasted blindly. Use it as the shape of the implementation, then adapt names, account boundaries, tags, and approval gates to your environment. The useful part is the sequence: inspect, constrain, verify, and record evidence. If your process cannot produce evidence, it is not mature enough for production.
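For the ARC side of the same exercise, the shift itself is driven through the zonal shift API. The sketch below assumes the arc-zonal-shift commands in a current AWS CLI and uses placeholder ARNs, AZ IDs, and Region; confirm supported resource types and the identifier format in the ARC documentation before running it against production.
# Which resources in this account and Region ARC can shift.
aws arc-zonal-shift list-managed-resources --region us-east-1
# Start a time-boxed practice shift away from one AZ (identifiers are placeholders).
aws arc-zonal-shift start-zonal-shift \
  --resource-identifier arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example/0123456789abcdef \
  --away-from use1-az1 \
  --expires-in 30m \
  --comment "quarterly AZ evacuation exercise"
# End the exercise early if pending pods or error rates exceed the stop condition.
aws arc-zonal-shift cancel-zonal-shift --zonal-shift-id <shift-id>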
Controls, Metrics, And Evidence
The useful metrics are not just service uptime. You need to know whether the cluster could absorb the shift.
| Signal | Question it answers | Target |
|---|---|---|
| Pending pods by zone | Did the scheduler run out of usable capacity? | Returns to baseline quickly |
| Karpenter provisioning latency | How long did replacement nodes take? | Known and tested per workload class |
| PDB violations | Did disruption policy block recovery? | 0 unexpected blocks |
| SLO burn rate | Did users notice the shift? | Within incident budget |
Notice that the table separates a control from the evidence. A control without evidence is a hope. Evidence without an owner is a screenshot in a ticket that nobody trusts three months later. Tie each signal to a system that already has retention, access control, and review habits.
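During the exercise, the first two signals can be sampled from a terminal before the numbers make it into a dashboard. Thresholds are yours to define; the commands below only assume the standard zone label and Karpenter's NodeClaim resource.
# Ready capacity per zone: did the surviving zones absorb the shifted workloads?
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}{end}' | sort | uniq -c
# Pending pods right now: this count should return to baseline within the tested window.
kubectl get pods -A --field-selector=status.phase=Pending --no-headers | wc -l
# Rough provisioning latency: compare NodeClaim creation time with when its node appeared.
kubectl get nodeclaims -o custom-columns='NAME:.metadata.name,CREATED:.metadata.creationTimestamp,NODE:.status.nodeName' 2>/dev/null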
Rollout Plan
Treat the first shift as an exercise, not a hero move during a live incident.
- Label critical workloads by tier and confirm zone spread before enabling the runbook.
- Run a non-production zonal shift exercise and record timing for pending pods and new nodes (a manual drill sketch follows this list).
- Fix PDBs that block all movement. A PDB that protects availability by preventing recovery is broken.
- Document stateful workloads separately. Do not hide database constraints inside a generic EKS runbook.
- Schedule quarterly practice runs and compare timing against the previous run.
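The drill referenced in the second step can be rehearsed on the Kubernetes side alone by evacuating one zone by hand in a non-production cluster. A sketch, assuming the standard zone label; the zone name is a placeholder, and the PDBs are the thing under test.
ZONE=us-east-1a   # placeholder: the zone being evacuated for the drill
NODES=$(kubectl get nodes -l topology.kubernetes.io/zone="$ZONE" -o jsonpath='{.items[*].metadata.name}')
# Stop new pods from landing in the zone.
for n in $NODES; do kubectl cordon "$n"; done
# Evict pods; blocked evictions here are exactly the PDB problems the runbook should fix.
for n in $NODES; do kubectl drain "$n" --ignore-daemonsets --delete-emptydir-data --timeout=5m; done
# Record how long pending pods take to clear, then end the drill.
for n in $NODES; do kubectl uncordon "$n"; done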
This is where teams often overbuild. Start with the smallest production slice that proves the behavior. One non-critical cluster, one application namespace, or one workload tier is enough. Then widen the blast radius only after you have a rollback path and a metric that proves the change did not make the system worse.
Gotchas
Most zonal shift mistakes come from assuming traffic movement is the same as application recovery.
- ARC zonal shift is not a multi-Region disaster recovery plan. It is an AZ-level maneuver.
- Karpenter cannot fix a pod that is too large for remaining instance types or subnet IP space (see the pre-flight check after this list).
- Topology spread constraints can become hard blockers when one zone is intentionally removed.
- Stateful workloads may need a separate failover runbook, especially if storage is zonal.
- Autoshift should not be enabled without game-day evidence. Automation amplifies bad assumptions.
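The pod-size and IP-space gotcha can be checked before an incident instead of discovered during one. The subnet tag filter below follows the common Karpenter discovery convention and is a placeholder; match it to how your EC2NodeClass actually selects subnets.
# Free IP addresses per Karpenter subnet, per AZ (tag filter is a placeholder).
aws ec2 describe-subnets \
  --filters "Name=tag:karpenter.sh/discovery,Values=my-cluster" \
  --query 'Subnets[].{Subnet:SubnetId,AZ:AvailabilityZone,FreeIPs:AvailableIpAddressCount}' \
  --output table
# CPU and memory requests per pod: scan for anything close to a full node, which is a zonal-shift risk.
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].resources.requests.cpu}{"\t"}{.spec.containers[*].resources.requests.memory}{"\n"}{end}'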
The uncomfortable lesson is simple: new platform features usually fail at the handoff points. The vendor feature works. The identity mapping is incomplete. The backup restores but not the secret. The scanner finds an issue but nobody owns the fix. The autoscaler drains a zone correctly but the application has a bad disruption budget. These are not edge cases. They are where production work lives.
Security, Reliability, And Cost Tradeoffs
The reliability gain is a faster, cleaner AZ evacuation. The cost is extra capacity planning. If you run clusters hot in every zone, zonal shift creates pending pods. If you keep too much spare capacity, you pay for idle headroom. The right answer is workload-tiered headroom.
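One way to make workload-tiered headroom explicit is a dedicated NodePool for the critical tier with its own limits, so the spare-capacity budget is a reviewable number instead of a shared assumption. A minimal sketch, assuming the Karpenter v1 API and placeholder zones, limits, and node class; check the current NodePool schema before applying it.
kubectl apply -f - <<'EOF'
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: critical-tier          # placeholder name for the tier that must survive an AZ loss
spec:
  template:
    spec:
      requirements:
        - key: topology.kubernetes.io/zone
          operator: In
          values: ["us-east-1a", "us-east-1b", "us-east-1c"]   # placeholder zones
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "256"                 # headroom budget for this tier, sized from game-day data
    memory: 1024Gi
EOF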
Use a scorecard before rolling the pattern to every team:
| Question | Good answer | Weak answer |
|---|---|---|
| Can the cluster lose one AZ? | Capacity, IPs, and replicas survive a test | Architecture diagram says multi-AZ |
| Do teams know the rollback? | Runbook names stop condition and restore step | Operator decides during incident |
| Are stateful workloads classified? | Each has a separate recovery statement | They are hidden under ‘Kubernetes’ |
The weak answers are not moral failures. They are just not production answers yet. If your current state is weak, write the gap down, choose the next smallest fix, and keep the change contained until the evidence improves.
First 48 Hours In Practice
The first two days decide whether Karpenter ARC zonal shift becomes a controlled platform improvement or another half-finished note in a chat thread. I would split the work into three windows: the first hour, the first business day, and the first week. The first hour is about scope. Do not change production yet unless the exposure is obvious. Name the owner, capture the source link, list affected systems, and decide whether this is emergency work or scheduled platform work.
By the end of the first business day, the team should have one working example. That could be one non-critical cluster, one application namespace, one EKS node pool, or one workload tier taken through the exercise end to end. The point is to choose a small production-shaped slice, not a toy. A lab that has no secrets, no real users, no deployment pressure, and no monitoring will hide the problems that matter.
The first-week goal is repeatability. If the change worked once because a senior engineer babysat it, you have a useful experiment, not a platform pattern. Turn the successful path into a runbook with commands, screenshots, expected output, rollback steps, and escalation rules. Then test it with someone who did not write the first version. That review will expose missing assumptions faster than another hour of polishing.
For EKS availability operations when Karpenter and Amazon Application Recovery Controller work together, the review meeting should be short and concrete. Ask what changed, which systems are in scope, which systems are intentionally out of scope, what evidence proves the control works, and what would make the team roll back. If the group cannot answer those five questions, the change is not ready to become a default.
| Owner | Responsibility | Concrete action |
|---|---|---|
| Service owner | Confirms scope and business impact | Accepts or rejects the default action for Stateless HTTP service |
| Platform owner | Turns the pattern into a shared control | Publishes the runbook, dashboard, and rollback path for Karpenter ARC zonal shift |
| Security owner | Reviews risk and exception handling | Checks that Pending pods by zone has usable evidence |
| FinOps or operations owner | Checks cost and toil | Watches whether Karpenter provisioning latency creates recurring work |
One practical habit helps a lot: write the rollback criteria before the rollout starts. For Karpenter ARC zonal shift, a rollback may mean cancelling the shift, reverting a Karpenter NodePool or autoscaling change, restoring a prior disruption budget, or returning to the pre-exercise traffic configuration. Whatever the answer is, write it down. Operators make better decisions during incidents when the stop condition is already named.
Runbook Artifacts To Keep
A trustworthy runbook is not a wall of prose. It is a small set of artifacts that prove the system can be operated by more than one person. Keep the procedure, the evidence, and the exception list separate. Procedures change often. Evidence grows during exercises and incidents. Exceptions need owners and expiration dates because otherwise they become the real architecture.
| Artifact | What good looks like | Maintenance rule |
|---|---|---|
| Runbook page | One current procedure with commands, owners, and rollback | Update after every exercise or incident |
| Evidence folder | Screenshots, command output, logs, ticket IDs, and query results | Keep according to audit and incident policy |
| Exception register | Every skipped service, account, cluster, repo, or dataset | Owner plus expiration date required |
| Dashboard link | The live view operators use during rollout | Must show the metric in the control table |
The evidence should be boring enough to survive an audit and specific enough to help an engineer at 2 a.m. A command transcript that answers “did the scheduler run out of usable capacity?” is useful. A dashboard screenshot with no time range is not. A ticket that says “verified” is weak. A ticket with the exact source, system, output, owner, and next review date is much stronger.
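A low-friction way to produce that kind of evidence is to capture the raw command output with a timestamp during every exercise and attach the file to the ticket. The output path and bucket below are placeholders.
STAMP=$(date -u +%Y%m%dT%H%M%SZ)
OUT="zonal-shift-exercise-${STAMP}.txt"
{
  echo "== nodes per zone ==";        kubectl get nodes -L topology.kubernetes.io/zone
  echo "== pending pods ==";          kubectl get pods -A --field-selector=status.phase=Pending
  echo "== disruption budgets ==";    kubectl get pdb -A
  echo "== karpenter nodeclaims ==";  kubectl get nodeclaims 2>/dev/null
} > "$OUT"
# Store it where retention and access control already exist (placeholder bucket).
aws s3 cp "$OUT" "s3://example-evidence-bucket/eks-zonal-shift/${STAMP}/"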
This also keeps trust resources honest. A blog post can point to AWS, Kubernetes, GitLab, or project documentation, but the local runbook has to say how your team interpreted that source. If the official document changes, the local procedure needs a review. If the source disappears, the team needs a replacement. That is why the trusted resources section at the end of this post is not decorative; it is part of the operating model.
Example Review Questions
Use these questions before making Karpenter ARC zonal shift a default pattern:
- What is the smallest system where we proved this works with production-like constraints?
- Which team owns the control after the initial rollout is finished?
- Which metric tells us the change helped instead of simply adding process?
- What is the first rollback action if the shift makes recovery worse, given that ARC zonal shift is an AZ-level maneuver and not a multi-Region disaster recovery plan?
- What exception would we approve, and how long may that exception live?
- Which trusted source would force us to revisit the design if it changed?
Two questions deserve blunt answers. First, does the pattern reduce risk, or does it only move risk to another team? Second, can a new engineer follow the runbook without private context? If the answer to either question is no, keep the rollout narrow.
A Concrete Failure Scenario
Imagine the team accepts the default action for the stateless HTTP service but ignores the queue worker. At first, the rollout looks successful. The dashboard turns green. The announcement is written. Then the first exception arrives. A service owner cannot meet the deadline, a cluster has an unusual constraint, or a workload breaks in a way the shared runbook did not predict. Without an exception register, the team handles that case in a side conversation. Two weeks later nobody remembers whether the exception was temporary.
That is the failure mode this article is trying to avoid. The technology can be good and the rollout can still decay. The fix is not more meetings. The fix is a small operating loop: define the default, record the exception, attach an owner, set an expiration date, and review the evidence. This is simple, but it is not optional for production work.
Karpenter cannot fix a pod that is too large for remaining instance types or subnet IP space. That gotcha should shape the rollout. Put it in the runbook as a check, not as a footnote. If a future operator has to rediscover it during an outage or audit review, the article failed to become operational knowledge.
When To Use This
Use this pattern when you run production EKS workloads where an AZ impairment should degrade gracefully instead of turning into a full application outage.
Do not use it when the workload is single-zone by design or the business has accepted downtime for that service tier. That boundary is important because the wrong abstraction can make a simple system harder to operate. Sometimes the best platform decision is to leave a feature out of the shared baseline and document a local exception instead.
Trusted Resources
These are the sources I would keep next to the runbook:
- AWS Karpenter ARC zonal shift announcement
- Amazon ARC zonal shift docs
- Karpenter documentation
- EKS best practices reliability
- Kubernetes PodDisruptionBudgets
- Kubernetes topology spread constraints
I am intentionally marking one uncertainty: service support and regional availability for ARC and Karpenter integration should be checked against current AWS documentation before enabling production automation. Treat the article as an operating guide, not as a replacement for the vendor documentation. The source links above are the authority when a limit, feature state, or mitigation changes.
The Practical Takeaway
Karpenter support makes zonal shift more useful, but only if your workloads can actually move. Test the shape before the outage.