Kubernetes Gateway API: Migrating Away from ingress-nginx
ingress-nginx is End of Life. CVE-2026-4342 — a configuration injection vulnerability enabling potential code execution — was disclosed in April 2026 against all versions below v1.13.9, v1.14.5, and v1.15.1. The project is in maintenance mode: security patches only, no new features. The Kubernetes community has been directing teams toward the Gateway API for over two years, and the ingress-nginx EOL moves that migration from “recommended” to “necessary.”
The Kubernetes Gateway API’s core resources (GatewayClass, Gateway, HTTPRoute) reached GA with the API’s v1.0 release in late 2023 and have been stable since. It replaces Ingress with a richer, role-oriented model that separates infrastructure concerns (what LoadBalancer to provision) from application concerns (how to route traffic). This guide covers the Gateway API model, migrating existing Ingress resources using ingress2gateway, configuring AWS Load Balancer Controller on EKS, and the gotchas that bite people during migration.
Why Ingress Was Always the Wrong Abstraction
The Kubernetes Ingress object tried to be everything to every team. A single resource was responsible for describing TLS termination, routing rules, backend service selection, and implementation-specific annotations that varied between nginx, Traefik, HAProxy, and AWS ALB. The result was annotation sprawl: dozens of nginx.ingress.kubernetes.io/ annotations embedded in application manifests that tightly coupled the application to a specific Ingress controller.
Gateway API fixes this with a role separation model:
- GatewayClass: cluster infrastructure — which controller implementation to use (one per cluster, set by platform team)
- Gateway: a LoadBalancer instance — which ports, protocols, and TLS certificates (set by ops team per environment)
- HTTPRoute: application routing rules — which paths route to which services (set by application teams)
Application teams no longer touch infrastructure configuration. They write HTTPRoute resources that describe routing intent without knowing what LoadBalancer implements them.
Core Resources
# 1. GatewayClass — defined once per cluster (platform team)
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: alb
spec:
  controllerName: ingress.k8s.aws/alb
---
# 2. Gateway — one per environment or team namespace (ops team)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: infrastructure
spec:
  gatewayClassName: alb
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All # Allow routes from any namespace
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: production-tls-cert
            namespace: infrastructure
      allowedRoutes:
        namespaces:
          from: All
---
# 3. HTTPRoute — per application (application team)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-api
  namespace: my-api
spec:
  parentRefs:
    - name: production-gateway
      namespace: infrastructure
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /orders
      backendRefs:
        - name: orders-service
          port: 8080
    - matches:
        - path:
            type: PathPrefix
            value: /payments
      backendRefs:
        - name: payments-service
          port: 8080
          weight: 100
The HTTPRoute lives in the application namespace alongside the deployment. It references the Gateway in the infrastructure namespace. The platform team controls the Gateway; application teams control their routes. No annotation sharing, no cross-team configuration coupling.
Installing Gateway API on EKS with AWS Load Balancer Controller
AWS Load Balancer Controller v2.8+ supports Gateway API. If you’re on an older version, upgrade first:
# Check current AWS Load Balancer Controller version
kubectl get deployment -n kube-system aws-load-balancer-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# Install Gateway API CRDs (required before the controller)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml

# Verify CRDs installed
kubectl get crd | grep gateway.networking.k8s.io

# Install/upgrade AWS Load Balancer Controller with Gateway API enabled
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set enableGatewayAPI=true \
  --version ">=1.8.0"

# Verify the GatewayClass is available
kubectl get gatewayclass
# NAME   CONTROLLER            ACCEPTED
# alb    ingress.k8s.aws/alb   True
The enableGatewayAPI=true flag is required — it’s off by default to avoid breaking existing setups.
On EKS, that controller upgrade is where a lot of migrations stall. Teams assume Gateway API is just a manifest-conversion exercise, then run into subnet tagging, stale controller versions, or namespace ownership issues. Clean up the cluster networking baseline first; the EKS networking guide covers the controller and CNI groundwork you want in place before production cutover.
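On subnet tagging specifically: the controller discovers which subnets to place the load balancer in via well-known tags, and untagged subnets are a common reason a Gateway never receives an address. A quick audit and fix, with placeholder subnet IDs:

```shell
# Public subnets (internet-facing ALB) need kubernetes.io/role/elb=1;
# private subnets (internal ALB) need kubernetes.io/role/internal-elb=1.
# The subnet IDs below are placeholders — substitute your own.
aws ec2 describe-subnets \
  --subnet-ids subnet-aaa111 subnet-bbb222 \
  --query 'Subnets[].{Id:SubnetId,Tags:Tags}'

aws ec2 create-tags \
  --resources subnet-aaa111 subnet-bbb222 \
  --tags Key=kubernetes.io/role/elb,Value=1
```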
Using ingress2gateway for Migration
ingress2gateway is the Kubernetes SIGs migration tool — it inspects your existing Ingress objects and emits equivalent HTTPRoute manifests. Run it dry first to see what it produces, then redirect output to a directory once you’re satisfied with the results:
# Install ingress2gateway
go install sigs.k8s.io/ingress2gateway@latest
# or: curl -L https://github.com/kubernetes-sigs/ingress2gateway/releases/latest/download/ingress2gateway-linux-amd64 -o ingress2gateway

# Preview the migration (dry run)
ingress2gateway print \
  --namespace my-api \
  --providers ingress-nginx

# Convert and write to files
ingress2gateway print \
  --namespace my-api \
  --providers ingress-nginx \
  --output-dir ./gateway-api-migration/

# Convert all ingresses across all namespaces
ingress2gateway print \
  --all-namespaces \
  --providers ingress-nginx \
  --output-dir ./gateway-api-migration/
The tool handles the common cases: path-based routing, host-based routing, TLS, and basic header matching. It doesn’t automatically convert all nginx annotations — complex annotations (rate limiting, custom timeouts, auth snippets) need manual HTTPRoute equivalents or separate policy resources.
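As a sketch of one manual conversion: an Ingress that uses nginx.ingress.kubernetes.io/rewrite-target to strip a path prefix maps to the HTTPRoute URLRewrite filter. The route name, path, and backend service here are illustrative, and filter support varies by implementation:

```yaml
# nginx equivalent: nginx.ingress.kubernetes.io/rewrite-target: /
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: legacy-rewrite
  namespace: my-api
spec:
  parentRefs:
    - name: production-gateway
      namespace: infrastructure
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /legacy
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: / # /legacy/foo reaches the backend as /foo
      backendRefs:
        - name: legacy-service
          port: 8080
```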
Review the output before applying. The converted manifests often need adjustment:
# Review generated files
ls gateway-api-migration/
# my-api-httproute.yaml
# production-gateway.yaml
# Apply in stages: Gateway first, then HTTPRoutes
kubectl apply -f gateway-api-migration/production-gateway.yaml

# Verify the Gateway is PROGRAMMED before applying routes
kubectl get gateway -n infrastructure
# NAME                 CLASS   ADDRESS                                                PROGRAMMED   AGE
# production-gateway   alb     k8s-infra-prodgatew-xxxx.us-east-1.elb.amazonaws.com   True         30s

# Apply HTTPRoutes
kubectl apply -f gateway-api-migration/my-api-httproute.yaml
Traffic Migration Strategy
The safest approach is running the new Gateway configuration alongside your old Ingress, verifying it on a throwaway hostname first, then cutting DNS over once you’re confident. Dropping the old Ingress and applying the HTTPRoute at the same moment in production is how you end up on a bridge call at 2am:
# Step 1: Deploy HTTPRoute alongside existing Ingress (different hostname)
# Add a test hostname to verify the Gateway configuration
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-api-test
  namespace: my-api
spec:
  parentRefs:
    - name: production-gateway
      namespace: infrastructure
  hostnames:
    - "api-new.example.com" # Test hostname, not production
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-api-service
          port: 8080
# Step 2: Test the new hostname thoroughly. Before DNS exists for it, point
# curl at the new ALB directly with --connect-to (keeps SNI and Host intact;
# assumes the listener cert covers api-new.example.com)
NEW_ALB=$(kubectl get gateway production-gateway -n infrastructure \
  -o jsonpath='{.status.addresses[0].value}')
curl --connect-to api-new.example.com:443:"$NEW_ALB":443 \
  https://api-new.example.com/orders/123
# Step 3: Switch DNS for the real hostname to the new ALB
# Get the new ALB hostname from the Gateway status
kubectl get gateway production-gateway -n infrastructure \
  -o jsonpath='{.status.addresses[0].value}'

# Update Route 53 CNAME record from old nginx ALB to new Gateway ALB
aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [{"Value": "k8s-infra-prodgatew-XXXXX.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'
# Step 4: Monitor for 24-48 hours, then remove old Ingress
kubectl delete ingress my-api -n my-api
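Before that final delete, a quick parity check against both load balancers catches regressions the dashboards miss. This is a sketch; the old ALB hostname and sample paths are placeholders:

```shell
# Compare status codes from the old ingress-nginx ALB and the new Gateway ALB
# for a sample of paths. OLD is a placeholder — substitute your old ALB hostname.
OLD=old-nginx-alb.example.com
NEW=$(kubectl get gateway production-gateway -n infrastructure \
  -o jsonpath='{.status.addresses[0].value}')
for path in / /orders/123 /payments/health; do
  old_code=$(curl -s -o /dev/null -w '%{http_code}' -H "Host: api.example.com" "http://$OLD$path")
  new_code=$(curl -s -o /dev/null -w '%{http_code}' -H "Host: api.example.com" "http://$NEW$path")
  echo "$path old=$old_code new=$new_code"
done
```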
If your platform already deploys through GitOps, make the Gateway move a Git change instead of a kubectl-only migration. That gives you review history, rollback, and one source of truth for the new routing model. The ArgoCD on EKS guide is a better long-term operating model once HTTPRoute becomes part of the application contract.
Common Migration Gotchas
nginx annotations don’t have direct equivalents. Rate limiting, custom error pages, and auth_request configurations have no direct HTTPRoute equivalent. Gateway API covers some cases with built-in HTTPRoute filters (redirects, header modification, URL rewrites) and with ReferenceGrant for cross-namespace access, but features like rate limiting are delegated to implementation-specific policy resources, so complex nginx configurations may require an Envoy-based or Traefik controller that supports richer extension points.
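One concrete mapping that does exist: nginx proxy timeout annotations translate to the rules[].timeouts field introduced in Gateway API v1.1. The values below are illustrative, and implementation support varies, so check your controller’s conformance report:

```yaml
# nginx equivalent: nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
rules:
  - matches:
      - path:
          type: PathPrefix
          value: /reports
    timeouts:
      request: 30s        # total time for the gateway to produce a response
      backendRequest: 15s # per-attempt timeout to the backend
    backendRefs:
      - name: reports-service
        port: 8080
```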
TLS certificate references across namespaces need ReferenceGrant. If your Gateway is in the infrastructure namespace and the TLS Secret is in another namespace, you need a ReferenceGrant to authorize the cross-namespace reference:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-tls
  namespace: my-api # Lives in the TARGET namespace (where the Secret is)
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: Gateway
      namespace: infrastructure # The namespace doing the referencing
  to:
    - group: ""
      kind: Secret
The parentRefs namespace must be explicit. An Ingress always binds within its own namespace, but an HTTPRoute’s parentRefs defaults to the route’s own namespace, so a Gateway that lives elsewhere must be named with an explicit namespace field. Forgetting it causes the route to silently fail to attach to the Gateway.
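A quick way to catch a route that failed to attach is to read the Accepted condition from the route’s status:

```shell
# Each parentRef gets its own status entry; Accepted=False (or no entry at
# all) means the route is not bound to that Gateway.
kubectl get httproute my-api -n my-api \
  -o jsonpath='{range .status.parents[*]}{.parentRef.name}{": "}{.conditions[?(@.type=="Accepted")].status}{"\n"}{end}'
```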
Weighted routing replaces nginx canary annotations. The old approach put a percentage annotation on a second Ingress object; the new approach is cleaner — split traffic directly in the route by assigning weights to each backend ref:
# nginx equivalent: nginx.ingress.kubernetes.io/canary-weight: "20"
rules:
  - backendRefs:
      - name: my-api-stable
        port: 8080
        weight: 80
      - name: my-api-canary
        port: 8080
        weight: 20
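Weight changes can then be rolled out incrementally without reapplying the whole route. A sketch using a JSON patch, where the rule and backendRef indices assume the single-rule layout above:

```shell
# Shift the canary from 20% to 50% by patching the backendRef weights in place.
kubectl patch httproute my-api -n my-api --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/backendRefs/0/weight", "value": 50},
  {"op": "replace", "path": "/spec/rules/0/backendRefs/1/weight", "value": 50}
]'
```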
For Helm-based deployments using the chart described in the Helm Charts on EKS guide, update your chart templates to generate HTTPRoute instead of Ingress resources — the structure is similar enough that the migration is mostly a find-and-replace of resource types and field names. The EKS RBAC and security guide covers the RBAC policies needed to control which teams can create HTTPRoute resources in their namespaces.
DNS deserves the same discipline as the manifest work. If you want a staged production cutover instead of an instant flip, use the weighted and failover approaches in the Route 53 routing policies guide so traffic migration stays reversible.