ArgoCD on EKS: GitOps Continuous Delivery for Kubernetes
GitOps is the practice of using a Git repository as the single source of truth for what should run in your Kubernetes cluster. ArgoCD implements this by watching a Git repo, comparing its contents against the live cluster state, and automatically reconciling any drift. A developer pushes a change to a Helm chart or Kubernetes manifest; ArgoCD sees the change within minutes and applies it to the cluster. No kubectl commands in CI pipelines, no deploy scripts, no “what version is running in production?” questions.
This guide covers installing ArgoCD on EKS, defining applications, configuring automated sync with safety controls, managing multiple environments with ApplicationSets, RBAC for team access control, and integrating with GitHub Actions.
Installing ArgoCD on EKS
```shell
# Create a dedicated namespace
kubectl create namespace argocd

# Install ArgoCD (latest stable)
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for all deployments to be ready
kubectl wait --for=condition=available deployment \
  --all -n argocd --timeout=300s

# Check pod status
kubectl get pods -n argocd
```
By default, ArgoCD uses a ClusterIP service. For EKS production use, expose it through an ALB with AWS Load Balancer Controller:
```yaml
# argocd-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/backend-protocol: HTTPS
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:123456789012:certificate/abc-123
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS13-1-2-2021-06
spec:
  rules:
    - host: argocd.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
```
```shell
kubectl apply -f argocd-ingress.yaml

# Get the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d

# Log in with the argocd CLI
argocd login argocd.internal.example.com \
  --username admin \
  --password "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d)"

# Change the admin password immediately
argocd account update-password
```
Defining Applications
An ArgoCD Application is a CRD that maps a Git repository path to a cluster namespace.
```yaml
# my-api-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-api
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io  # Cascade-delete resources on app deletion
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/k8s-manifests.git
    targetRevision: HEAD
    path: apps/my-api/overlays/production
  destination:
    server: https://kubernetes.default.svc  # This cluster
    namespace: my-api
  syncPolicy:
    automated:
      prune: true        # Delete resources removed from Git
      selfHeal: true     # Revert manual kubectl changes
      allowEmpty: false  # Refuse to sync an empty manifest set (safety)
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```
```shell
kubectl apply -f my-api-app.yaml

# Or create via CLI
argocd app create my-api \
  --repo https://github.com/myorg/k8s-manifests.git \
  --path apps/my-api/overlays/production \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace my-api \
  --sync-policy automated \
  --auto-prune \
  --self-heal

# Check sync status
argocd app get my-api
argocd app sync my-api  # Manual trigger if automated sync is off
```
prune: true deletes Kubernetes resources when you remove them from Git. Without it, removed manifests leave orphaned resources in the cluster. Enable it for production — the alternative is accumulating zombie deployments and services.
selfHeal: true reverts manual cluster changes. If someone runs kubectl scale deployment my-api --replicas=5 directly, ArgoCD resets it back to whatever the Git manifest specifies within minutes. This enforces Git as the only path to change cluster state — which is the whole point of GitOps.
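One caveat: if a HorizontalPodAutoscaler owns the replica count, selfHeal will fight it by resetting replicas after every scale event. A common mitigation is to tell ArgoCD to ignore that field. A minimal sketch, assuming the Deployment is named my-api:

```yaml
# Hypothetical additions to the Application spec above
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      name: my-api
      jsonPointers:
        - /spec/replicas   # let the HPA own replica count
  syncPolicy:
    syncOptions:
      - RespectIgnoreDifferences=true  # honor the ignore rules during sync, not just diffing
```

Without RespectIgnoreDifferences=true, the ignore rules only suppress the drift indicator in the UI; syncs would still overwrite the field.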
Helm Chart Applications
```yaml
# my-api-helm.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-api-helm
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/helm-charts.git
    targetRevision: v1.5.2
    path: charts/my-api
    helm:
      releaseName: my-api
      valueFiles:
        - values/production.yaml
      parameters:
        - name: image.tag
          value: "2.1.4"
        - name: replicaCount
          value: "3"
  destination:
    server: https://kubernetes.default.svc
    namespace: my-api
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
Pinning targetRevision to a Git tag (v1.5.2) rather than HEAD means ArgoCD only syncs when you deliberately cut a new release — not on every commit. For production environments, tag-based tracking is safer than branch tracking. For staging, HEAD of a branch is fine.
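For contrast, a staging counterpart might track a branch so that every merge deploys automatically. A sketch of just the source stanza; the branch and values file names are assumptions:

```yaml
source:
  repoURL: https://github.com/myorg/helm-charts.git
  targetRevision: main            # every merge to main syncs staging
  path: charts/my-api
  helm:
    releaseName: my-api
    valueFiles:
      - values/staging.yaml
```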
App of Apps Pattern
For many applications, define a “root” application that manages other ArgoCD applications:
```yaml
# root-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/k8s-manifests.git
    targetRevision: HEAD
    path: argocd/applications
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
The argocd/applications directory in Git contains Application manifests for all your services. Adding a new application is a Git commit to that directory. The root app picks it up and creates the child Application automatically. Deleting an Application manifest from Git removes the app (and all its resources if prune: true).
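A repository layout that supports this pattern might look like the following (directory and service names are illustrative):

```
k8s-manifests/
├── argocd/
│   └── applications/        # watched by the root app
│       ├── my-api.yaml      # one child Application manifest per service
│       ├── billing.yaml
│       └── frontend.yaml
└── apps/
    ├── my-api/
    │   └── overlays/
    │       └── production/
    └── billing/
```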
ApplicationSets for Multi-Environment
ApplicationSet generates multiple Application objects from a single template, driven by generators:
```yaml
# appset-environments.yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-api-environments
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - env: staging
            cluster: https://staging-cluster.example.com
            namespace: my-api-staging
            imageTag: latest
          - env: production
            cluster: https://prod-cluster.example.com
            namespace: my-api-production
            imageTag: "2.1.4"
  template:
    metadata:
      name: "my-api-{{env}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/myorg/helm-charts.git
        targetRevision: HEAD
        path: charts/my-api
        helm:
          releaseName: my-api
          parameters:
            - name: image.tag
              value: "{{imageTag}}"
            - name: environment
              value: "{{env}}"
      destination:
        server: "{{cluster}}"
        namespace: "{{namespace}}"
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```
This creates my-api-staging and my-api-production Applications from a single definition. A Git generator variant reads environment definitions from files in the repo, so adding a new environment is a Git commit rather than editing the ApplicationSet.
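A sketch of that Git generator variant, assuming one config.json per environment directory; the keys in each file become template parameters:

```yaml
# Replace the list generator above with a git files generator
generators:
  - git:
      repoURL: https://github.com/myorg/k8s-manifests.git
      revision: HEAD
      files:
        - path: "environments/*/config.json"
# Each config.json might contain:
#   { "env": "staging",
#     "cluster": "https://staging-cluster.example.com",
#     "namespace": "my-api-staging",
#     "imageTag": "latest" }
# and the template references {{env}}, {{cluster}}, and so on as before.
```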
RBAC and Projects
ArgoCD Projects scope what a team can deploy and where:
```shell
# Create a project that restricts the platform team
argocd proj create platform-team \
  --description "Platform team applications" \
  --src https://github.com/myorg/k8s-manifests.git \
  --dest https://kubernetes.default.svc,platform-*

# Allow the project to manage Namespace cluster resources
argocd proj allow-cluster-resource platform-team "" Namespace

# Allow all namespaced resource types
argocd proj allow-namespaced-resource platform-team "*" "*"

# Grant a role access to applications in the project
argocd proj role create platform-team developer
argocd proj role add-policy platform-team developer \
  --action get --permission allow --object "platform-team/*"
argocd proj role add-policy platform-team developer \
  --action sync --permission allow --object "platform-team/*"

# Create an API token for CI/CD (scoped to this project role)
argocd proj role create-token platform-team developer
```
Projects restrict:
- Source repos: teams can only deploy from their approved repos
- Destination clusters and namespaces: teams can only deploy to their namespaces
- Resource types: limit which Kubernetes resource types a team can manage
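The CLI commands above can also be expressed declaratively as an AppProject manifest, which keeps the project definition in Git alongside everything else. A sketch matching the restrictions described above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: platform-team
  namespace: argocd
spec:
  description: Platform team applications
  sourceRepos:
    - https://github.com/myorg/k8s-manifests.git
  destinations:
    - server: https://kubernetes.default.svc
      namespace: platform-*     # only platform-* namespaces
  clusterResourceWhitelist:
    - group: ""
      kind: Namespace           # the only cluster-scoped resource allowed
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"                 # any namespaced resource
```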
The RBAC policy in argocd-rbac-cm ConfigMap controls what roles can do:
```shell
kubectl edit configmap argocd-rbac-cm -n argocd
```

```yaml
data:
  policy.csv: |
    p, role:developer, applications, get, */*, allow
    p, role:developer, applications, sync, */*, allow
    p, role:developer, applications, action/*, */*, allow
    g, myorg:platform-team, role:developer
  policy.default: role:readonly
  scopes: '[groups]'
```
OIDC / SSO Integration
For EKS teams using AWS IAM Identity Center, configure ArgoCD OIDC against the IAM Identity Center OIDC endpoint:
```shell
kubectl edit configmap argocd-cm -n argocd
```

```yaml
data:
  url: https://argocd.internal.example.com
  oidc.config: |
    name: AWS SSO
    issuer: https://identitycenter.amazonaws.com/ssooidc/d-1234567890
    clientID: argocd-client-id
    clientSecret: $oidc.aws-sso.clientSecret
    requestedScopes:
      - openid
      - profile
      - email
    requestedIDTokenClaims:
      groups:
        essential: true
```
After OIDC setup, groups from IAM Identity Center map to ArgoCD RBAC roles. Team members log into ArgoCD with their SSO credentials rather than individual ArgoCD accounts.
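Concretely, once the groups claim arrives, a single line in the argocd-rbac-cm ConfigMap maps an SSO group to an ArgoCD role (the group name here is an assumption):

```yaml
data:
  policy.csv: |
    # IAM Identity Center group -> ArgoCD role (group name is illustrative)
    g, PlatformAdmins, role:admin
```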
GitHub Actions Integration
Trigger ArgoCD image updates from GitHub Actions after a successful build:
```yaml
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push Docker image
        run: |
          docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:${{ github.sha }} .
          aws ecr get-login-password | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:${{ github.sha }}

      - name: Update image tag in Git manifests
        run: |
          git config user.email "[email protected]"
          git config user.name "GitHub Actions"
          # Update the image tag in the Helm values file
          sed -i "s/tag: .*/tag: ${{ github.sha }}/" helm/my-api/values/staging.yaml
          git add helm/my-api/values/staging.yaml
          git commit -m "chore: update my-api image to ${{ github.sha }}"
          git push
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
The CI pipeline builds and pushes the image, then updates the Git manifest with the new image tag. ArgoCD detects the manifest change and syncs the cluster. This is the GitOps pattern: CI pushes to Git; ArgoCD pulls from Git to the cluster. The cluster never exposes kubectl access to CI — it only needs Git access.
For the cluster this runs on, the EKS networking VPC CNI guide covers pod networking configuration, and the EKS RBAC and security guide covers how ArgoCD’s service account permissions integrate with EKS IAM roles for service accounts. For Helm charts that ArgoCD deploys, the Helm Charts on EKS guide covers chart structure and values management across environments.