Kubernetes v1.36: What's New
Kubernetes v1.36 shipped April 22, 2026, with 64 enhancements across the release: 17 graduating to stable, 18 moving to beta, and 24 entering alpha. The headline is sidecar containers reaching GA after two years in beta — a pattern that shows up in nearly every service mesh, log shipper, and observability agent deployed today. The other changes are less visible but matter: in-place pod resource resize reaching beta means you can adjust CPU and memory on a running pod without a rolling restart, and nftables kube-proxy graduating to stable means clusters on Linux 5.2+ can opt into the replacement backend.
This release also contains deprecations worth flagging before you upgrade: kubectl debug changes how it provisions spec.ephemeralContainers, and the node.kubernetes.io/not-ready taint behavior shifts in certain edge cases in 1.36. Check your taint automation before upgrading.
Sidecar Containers — Stable
The sidecar pattern using init containers with restartPolicy: Always graduates to stable in 1.36. This was introduced as alpha in 1.28, beta in 1.29, and has been the recommended way to run lifecycle-aware sidecars since then.
Before 1.28, the only option for a sidecar that needed to stay running alongside the main container was a regular container, which meant Job pods never completed because the sidecar was still running. Init containers with restartPolicy: Always fix this: the kubelet treats them as sidecars that start before the main containers and are terminated after the main containers exit.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  initContainers:
  - name: log-shipper
    image: fluent/fluent-bit:3.0
    restartPolicy: Always  # This makes it a sidecar, not a blocking init container
    resources:
      requests:
        memory: 64Mi
        cpu: 50m
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app
  - name: metrics-collector
    image: prom/prometheus-agent:v0.52.0
    restartPolicy: Always
    resources:
      requests:
        memory: 128Mi
        cpu: 100m
  containers:
  - name: app
    image: my-api:v2.1.0
    volumeMounts:
    - name: log-volume
      mountPath: /var/log/app
  volumes:
  - name: log-volume
    emptyDir: {}
What this solves in practice: Jobs that use Istio, Linkerd, or any sidecar proxy previously required the shareProcessNamespace workaround or a custom entrypoint that sent SIGTERM to the proxy. With proper sidecars, the proxy terminates after the job container exits — no hacks required.
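As a sketch of that pattern, a Job whose pod carries a proxy sidecar now completes once the job container exits; the image names and tags below are placeholders, not recommendations:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  template:
    spec:
      restartPolicy: Never            # applies to the job container
      initContainers:
      - name: proxy                   # sidecar: terminated after 'migrate' exits
        image: istio/proxyv2:1.22.0   # placeholder tag
        restartPolicy: Always
      containers:
      - name: migrate
        image: my-migrations:v3       # placeholder image
```

The Job controller counts the pod as complete when `migrate` exits; the sidecar no longer holds the pod in Running.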
The graduation to stable means the feature gate is removed and the behavior is unconditional. If you’re on 1.28–1.35 with the SidecarContainers=true feature gate explicitly set, remove it; the gate no longer exists in 1.36.
# Verify sidecar container status
kubectl get pod web-app -o jsonpath='{.spec.initContainers[*].restartPolicy}'
# Output: Always Always
# Check sidecar readiness before main container starts
kubectl describe pod web-app | grep -A5 "Init Containers:"
In-Place Pod Resource Resize — Beta
kubectl patch can now resize CPU and memory on a running pod without triggering a restart, for workloads where the resize policy allows it. This graduated to beta in 1.36 after being alpha since 1.27.
apiVersion: v1
kind: Pod
metadata:
  name: api-server
spec:
  containers:
  - name: api
    image: my-api:v2.1.0
    resources:
      requests:
        memory: "256Mi"
        cpu: "500m"
      limits:
        memory: "512Mi"
        cpu: "1000m"
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU can resize without restart
    - resourceName: memory
      restartPolicy: RestartContainer # Memory resize requires restart
With restartPolicy: NotRequired, you can resize CPU live:
# Resize CPU on a running pod (no restart)
kubectl patch pod api-server --subresource resize --patch '
spec:
  containers:
  - name: api
    resources:
      requests:
        cpu: "750m"
      limits:
        cpu: "1500m"
'
# Watch the resize happen
kubectl get pod api-server -o jsonpath='{.status.containerStatuses[0].resources}'
The practical use case is responding to traffic spikes: if a pod’s CPU is saturated and a horizontal scale-out would take 90 seconds (new pod scheduling + image pull + health check delay), an in-place CPU resize completes in seconds. It doesn’t replace HPA, but it fills the gap between “this pod needs more CPU right now” and “the new pod is ready.”
Memory resize with restartPolicy: RestartContainer is less useful during an incident but matters for right-sizing: tools like Compute Optimizer or VPA can adjust memory without forcing a blue/green deployment.
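For the right-sizing case, a VerticalPodAutoscaler object is the usual driver. A minimal sketch, assuming the VPA CRDs are installed in the cluster; whether a given VPA release applies recommendations in place or by recreating pods depends on its version and update mode:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: api-server-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server      # placeholder target
  updatePolicy:
    updateMode: "Auto"    # let VPA apply its recommendations
```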
nftables kube-proxy Backend — Stable
The nftables kube-proxy backend graduates to stable in 1.36. On Linux kernels 5.2+ with the nft userspace tooling installed, you can switch away from iptables, which has known performance issues at scale (10,000+ Services generate tens of thousands of iptables rules that are re-evaluated linearly).
# Check current kube-proxy mode
kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode
# Enable nftables mode (requires kernel 5.2+ and nft binary)
# Edit the kube-proxy DaemonSet or ConfigMap:
kubectl -n kube-system edit configmap kube-proxy
# kube-proxy ConfigMap change:
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "nftables"  # was "iptables" or ""
At 5,000 services, the nftables backend typically shows 20-40% lower CPU usage for kube-proxy compared to iptables on the same node. The improvement comes from nftables’ set-based matching: instead of evaluating thousands of individual rules in order, nftables uses hash sets that look up the destination in O(1). At 500 services it doesn’t matter much; at 5,000 it does.
EKS nodes running Amazon Linux 2023 satisfy the kernel requirement. If you’re on AL2 (kernel 4.14), stay on iptables.
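Before flipping the mode, it’s worth a pre-flight check of each node’s kernel. A sketch of a small helper (`kernel_ok` is a name invented for this post, not a real tool) that compares a kernel version string against the 5.2 floor:

```shell
# Pre-flight sketch: does this kernel support the nftables backend (>= 5.2)?
kernel_ok() {
  min="5.2"
  # sort -V orders version strings; if the minimum sorts first, $1 >= 5.2
  [ "$(printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1)" = "$min" ]
}

# Check the local node; run the same check on each node in the pool
kernel_ok "$(uname -r)" && echo "nftables backend OK on this kernel" \
                        || echo "stay on iptables"
```

Also confirm the nft binary is present on the node image (`command -v nft`).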
VolumeAttributesClass — Stable
VolumeAttributesClass reaches stable, letting you modify dynamic storage parameters on existing PersistentVolumeClaims without deleting and recreating them. The primary use case is changing IOPS or throughput on EBS gp3 volumes.
# Define a VolumeAttributesClass for high-IOPS workload
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: ebs-high-iops
driverName: ebs.csi.aws.com
parameters:
  iops: "6000"
  throughput: "500"
---
# Reference from a PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  volumeAttributesClassName: ebs-high-iops # new field
  resources:
    requests:
      storage: 100Gi
To change IOPS on a running volume, update the PVC’s volumeAttributesClassName field. The CSI driver applies the change without unmounting the volume (EBS supports online modification):
# Switch to high-IOPS class without restarting the pod
kubectl patch pvc db-data -p '{"spec":{"volumeAttributesClassName":"ebs-high-iops"}}'
# Watch the modification progress
kubectl get pvc db-data -w
# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
# db-data Bound pvc-abc123 100Gi RWO gp3 ebs-high-iops 5m
Before VolumeAttributesClass, changing EBS IOPS mid-workload required a separate aws ec2 modify-volume call outside of Kubernetes, with no reconciliation if the change failed. This brings that operation inside the Kubernetes control plane.
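Reverting works through the same mechanism: keep a second class with baseline parameters and patch the PVC back to it. A sketch using the gp3 default values:

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: ebs-baseline
driverName: ebs.csi.aws.com
parameters:
  iops: "3000"        # gp3 default
  throughput: "125"   # gp3 default, MiB/s
```

Then `kubectl patch pvc db-data -p '{"spec":{"volumeAttributesClassName":"ebs-baseline"}}'` walks the volume back down once the high-IOPS window is over.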
Dynamic Resource Allocation — Beta 2
DRA (Dynamic Resource Allocation) — the replacement for device plugins for GPUs, FPGAs, and network accelerators — continues toward stability with significant API refinements in 1.36. The ResourceClaim and ResourceClaimTemplate objects work alongside DeviceClass to describe GPU requests in a way that the scheduler and device driver can negotiate.
# Request a GPU through DRA (beta API)
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: gpu-claim-template
  namespace: ml-workloads
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com  # name of a DeviceClass object
        allocationMode: ExactCount
        count: 1
---
apiVersion: v1
kind: Pod
metadata:
  name: training-job
  namespace: ml-workloads
spec:
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: gpu-claim-template
  containers:
  - name: trainer
    image: pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime
    resources:
      claims:
      - name: gpu
DRA is still beta and the API surfaces continue to evolve. If you’re on NVIDIA device plugin and it’s working, stay on it for now — DRA becomes the recommended path once it reaches GA, currently projected for 1.38 or 1.39.
Notable Deprecations and Removals
flowcontrol.apiserver.k8s.io/v1beta3 removed. The v1beta3 version of API Priority and Flow Control is gone in 1.36 — not deprecated, removed. The stable v1 has been available since 1.29, so this shouldn’t catch anyone off guard. Still, grep your manifests before upgrading: a single FlowSchema referencing the old apiVersion will cause apply to fail after the upgrade.
# Find v1beta3 resources in your manifests
grep -r "flowcontrol.apiserver.k8s.io/v1beta3" ./k8s-manifests/
# Replace with: flowcontrol.apiserver.k8s.io/v1
kubectl run removes the --generator flag. The --generator flag in kubectl run (deprecated since 1.12) is fully removed. If any scripts still pass it, delete the flag; it has been a no-op since 1.21.
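A one-liner can scrub the flag from legacy scripts. A sketch; the sample command line below is illustrative, so run the sed over your real script files instead:

```shell
# Strip the removed --generator flag from a legacy kubectl invocation
line='kubectl run nginx --generator=run-pod/v1 --image=nginx:1.27'
cleaned=$(printf '%s' "$line" | sed -E 's/ --generator=[^ ]*//')
echo "$cleaned"   # kubectl run nginx --image=nginx:1.27
```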
Stable feature gates removed. Several feature gates that graduated to stable in earlier releases (1.28–1.34) are removed from the codebase. If you still pass them in --feature-gates flags on kube-apiserver or kubelet, the process will fail to start in 1.36, because unknown gates are rejected at startup. Run a pre-upgrade check:
# Check for removed feature gates in your cluster config
# On kube-apiserver pod or manifest:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep "feature-gates"
# On kubelet (the KubeletConfiguration file, not the kubeconfig):
cat /var/lib/kubelet/config.yaml | grep "featureGates"
The feature gates InPlacePodVerticalScaling, SidecarContainers, NFTablesProxyMode, and VolumeAttributesClass are all removed from the gate list, since they’re now unconditionally enabled as stable features.
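A scripted version of that check might grep the manifests for the removed gates. A sketch with an inline sample manifest line; on a real control-plane node, point the grep at the files under /etc/kubernetes/manifests/ instead:

```shell
# Pre-upgrade audit sketch: flag gates removed in 1.36
removed='InPlacePodVerticalScaling|SidecarContainers|NFTablesProxyMode|VolumeAttributesClass'
# Sample flag string; in practice, grep the real static pod manifests
manifest='--feature-gates=SidecarContainers=true,PodReadyToStartContainers=true'
if printf '%s\n' "$manifest" | grep -Eq "$removed"; then
  echo "FOUND removed gate(s): clean these up before upgrading"
fi
```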
Upgrade Considerations for EKS
AWS typically makes new Kubernetes versions available in EKS within 1–3 months of the upstream release. v1.36 should be available in EKS by June–July 2026. For upgrade procedure, the EKS Cluster Upgrade Zero-Downtime Playbook covers the node drain, control plane upgrade, and verification sequence. The Gateway API migration guide is relevant if v1.36 prompts you to finally move off ingress-nginx.
# Check available EKS versions
aws eks describe-addon-versions --kubernetes-version 1.36 \
--query 'addons[*].addonName' --output table
# Update EKS control plane to 1.36
aws eks update-cluster-version \
--name my-cluster \
--kubernetes-version 1.36
# Watch upgrade status
aws eks describe-update \
--name my-cluster \
--update-id <update-id> \
--query 'update.status'
One pre-upgrade check specific to 1.36: if you’re using restartPolicy: Always on init containers as a workaround already, verify those pods behave correctly after the feature gate is unconditionally enabled. There were edge cases in beta where the sidecar shutdown ordering differed from the workaround implementations — stable behavior is that sidecars receive SIGTERM after all non-sidecar containers have exited.
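A quick way to find workloads worth re-testing is to search your manifests for the sidecar pattern. A sketch using an inline sample file; swap in your real manifest directory:

```shell
# Audit sketch: find manifests using init containers with restartPolicy: Always
# so their shutdown ordering can be re-tested on 1.36
cat > /tmp/sample-pod.yaml <<'EOF'
spec:
  initContainers:
  - name: proxy
    restartPolicy: Always
EOF
grep -l 'restartPolicy: Always' /tmp/sample-pod.yaml   # prints matching filenames
```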