HashiCorp Vault + Workload Identity Federation: Secretless Access for Kubernetes and CI/CD

Written by Bits Lovers

The worst secret in your platform is the one that exists only because the previous secret could not be trusted. That is how teams end up with GitLab variables that contain cloud keys, Kubernetes secrets that hold database passwords, and rotation procedures nobody wants to test. HashiCorp’s February 2026 guidance on Vault plus workload identity federation is interesting because it attacks that chain at the front: stop handing workloads static credentials in the first place.

The core pattern is simple. A workload presents a signed identity token from a trusted issuer. Vault validates that token, maps its claims to a role, and issues a short-lived token or dynamic credential for the exact downstream system the workload needs. No long-lived AWS key in CI. No database password baked into a Helm chart. No secret zero hidden in a different file and pretending to be safer.

If you need the basics on OIDC and temporary AWS credentials first, the AWS STS guide is the right background. If your team is still storing most runtime values in CI settings, revisit the GitLab CI variables guide before you scale that pattern any further. This post is about the next step: using identity, not stored secrets, as the first credential.

What Workload Identity Federation Means In Practice

Workload identity federation is not a single product feature. It is an authentication design. The workload starts with a signed identity token from something the platform already trusts, such as GitLab OIDC, GitHub Actions OIDC, a Kubernetes service account token, or a cloud-native workload identity service. Vault then becomes the policy and brokering layer.

That distinction matters because people often mix three different things together.

The first is authentication. Vault needs proof that the caller is really the CI job or Kubernetes workload it claims to be. That comes from the external identity token.

The second is authorization. Vault decides what that caller is allowed to access by matching claims such as project path, branch, namespace, service account name, or audience.

The third is credential brokering. Instead of returning a static secret from a KV path, Vault can mint short-lived database credentials, AWS credentials, or other scoped tokens. This is where the platform starts to feel cleaner, because the caller gets what it needs for this run, not a permanent secret that lingers until someone forgets about it.
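To make the contrast concrete, here is a hedged sketch of the two patterns side by side. The mount paths kv/ and database/ and the role name platform-api are illustrative, not taken from the guidance:

```shell
# Static pattern: every caller reads the same shared password from KV.
# The secret outlives the job and every caller sees the same value.
vault kv get -field=password kv/prod/platform-api

# Brokered pattern: Vault mints a unique, short-lived database credential
# per call. It expires on its own; nothing permanent is handed out.
vault read database/creds/platform-api
```

The second command only works once a database secrets engine is configured, but the shape of the change is the point: the caller asks for access, not for a stored value.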

HashiCorp’s point in the 2026 guidance is the right one: trusted identity should be the first hop. Static credential distribution should be the exception.

Why Teams Still Get This Wrong

Most pipelines already have an identity. GitLab jobs can emit OIDC tokens. GitHub Actions can do the same. Kubernetes service accounts already produce signed tokens that other systems can validate. But teams still fall back to long-lived secrets because it feels easier to paste a value once than to build a claim mapping model.

That shortcut works until the first real audit or compromise. Now you have to answer hard questions.

Which projects can read the production secret?

How many runners have ever seen the AWS key?

What else can that key do besides the one deployment it was created for?

How quickly can you rotate it without breaking a dozen other pipelines that quietly reuse it?

Vault plus WIF answers those questions with a stricter pattern. The job authenticates with the token it already has. Vault issues a time-bound credential tied to the role and policy you defined. If the token leaks, the blast radius is measured in minutes instead of months.

The Clean GitLab To Vault Flow

GitLab is a good example because the pieces are explicit. Modern GitLab pipelines can request OIDC tokens through id_tokens. Vault can trust GitLab as a JWT or OIDC issuer and map claims like project path, ref, environment, and audience to a role.

The Vault side starts with enabling the JWT auth method and binding it to the GitLab issuer:

vault auth enable jwt

vault write auth/jwt/config \
  oidc_discovery_url="https://gitlab.com" \
  bound_issuer="https://gitlab.com"

vault write auth/jwt/role/gitlab-prod \
  role_type="jwt" \
  user_claim="sub" \
  bound_audiences="https://vault.bitslovers.internal" \
  bound_claims='{"project_path":"team/platform-api","ref":"main","ref_type":"branch"}' \
  token_policies="prod-read" \
  token_ttl="15m"

Then the job authenticates with its own token instead of a stored Vault credential:

stages:
  - deploy

deploy_prod:
  stage: deploy
  id_tokens:
    VAULT_ID_TOKEN:
      aud: https://vault.bitslovers.internal
  script:
    - export VAULT_TOKEN=$(vault write -field=token auth/jwt/login role=gitlab-prod jwt="$VAULT_ID_TOKEN")
    - export DB_PASSWORD=$(vault kv get -field=password kv/prod/platform-api)
    - ./deploy.sh

This is already better than a masked GitLab variable holding a reusable production secret. But the stronger pattern is to stop fetching static values when you do not need to. If the app talks to Postgres, use Vault’s database secrets engine and mint a credential with a short TTL instead of reading one shared password.
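A hedged sketch of that database setup for Postgres follows. The connection details, role name, and grant SQL are placeholders you would adapt to your schema:

```shell
vault secrets enable database

# Point the engine at Postgres. Vault owns this root credential and can
# rotate it; the template fields are filled in by Vault at connect time.
vault write database/config/platform-api-db \
  plugin_name="postgresql-database-plugin" \
  allowed_roles="platform-api" \
  connection_url="postgresql://{{username}}:{{password}}@db.prod.internal:5432/app" \
  username="vault-admin" \
  password="initial-root-password"

# Each read of database/creds/platform-api now creates a fresh Postgres
# role that expires with the 15-minute lease.
vault write database/roles/platform-api \
  db_name="platform-api-db" \
  creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
  default_ttl="15m" \
  max_ttl="1h"
```

With that in place, the pipeline's kv get line becomes vault read -field=password database/creds/platform-api, and the shared production password disappears entirely.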

Kubernetes Is The Same Pattern Wearing Different Clothes

Kubernetes workloads already have an identity surface. A service account token, especially with bound projected tokens, gives the pod something signed that can be verified. Vault can trust the Kubernetes cluster directly with the Kubernetes auth method, or trust an external OIDC issuer depending on how you designed the platform.

The important operational point is not which auth method name you choose. It is the authorization boundary you encode.

A pod in dev should not get production secrets because it knows the path. A pod using the wrong service account should not inherit broad read access because the namespace matched. The role should bind tightly to the service account and namespace that represent the workload.

A minimal role looks like this:

vault auth enable kubernetes

vault write auth/kubernetes/config \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
  token_reviewer_jwt="$TOKEN_REVIEWER_JWT" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

vault write auth/kubernetes/role/payments-api \
  bound_service_account_names="payments-api" \
  bound_service_account_namespaces="prod" \
  token_policies="payments-read" \
  token_ttl="15m"
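From inside the pod, the login step has the same shape as the GitLab flow. A minimal sketch, assuming the default projected token path:

```shell
# The pod's projected service account token is the first credential.
SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Exchange it for a short-lived Vault token scoped by the payments-api role.
VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login \
  role=payments-api jwt="$SA_TOKEN")
export VAULT_TOKEN
```

In practice this exchange is usually handled by the Vault Agent sidecar or the Secrets Operator rather than application code, but the trust path is identical.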

That is not glamorous platform work, but it is the difference between identity-based access and a slightly fancier secret distribution system.

Where Vault Actually Adds Value

A fair criticism is that cloud providers already have identity federation options. AWS IAM roles, IRSA on EKS, and GitHub or GitLab OIDC into STS cover a lot of ground. Sometimes that is enough. The GitHub Actions deploy to AWS guide already shows how far you can get with native OIDC and AWS IAM alone.

Vault becomes worth the extra platform surface when at least one of these is true.

You need one policy layer across more than AWS.

You want dynamic secrets for databases, not just cloud API credentials.

You need a central audit trail for who asked for which secret and why.

You want to issue different downstream credentials from the same authenticated identity without pushing that authorization logic into every target system.

You are trying to remove secret sprawl from both CI and Kubernetes, not just one of them.

This is also why Vault still fits naturally beside Terraform and broader platform engineering. Teams already invested in the HashiCorp stack will recognize the control-plane value quickly, which is one reason the Terraform vs OpenTofu 2026 guide calls out the tighter HashiCorp integration story.

The Gotchas That Hurt In Production

The first one is claim design. If you bind a Vault role to a broad claim set like only project_path, you just authorized every branch in that repository to use the same secret path. That might be fine for dev. It is reckless for prod. Match on branch, environment, namespace, or service account wherever the risk justifies it.

The second is audience drift. OIDC failures caused by a mismatched aud value waste a lot of engineering time because the token looks valid and the issuer is correct. Be explicit about the audience in both the workload token request and the Vault role config.
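A quick way to check is to decode the token payload locally before touching any Vault config. This sketch builds a fake, unsigned token so it can run standalone; in practice you would feed it the real token from the job or pod:

```shell
# Build a fake JWT payload for illustration (base64url, padding stripped).
payload='{"iss":"https://gitlab.com","aud":"https://vault.bitslovers.internal"}'
seg=$(printf '%s' "$payload" | base64 | tr '+/' '-_' | tr -d '=\n')
SAMPLE_JWT="eyJhbGciOiJub25lIn0.${seg}."

# Extract the payload segment, restore base64 padding, decode, read aud.
body=$(printf '%s' "$SAMPLE_JWT" | cut -d. -f2 | tr '_-' '/+')
pad=$(( (4 - ${#body} % 4) % 4 ))
body="$body$(printf '%*s' "$pad" '' | tr ' ' '=')"
aud=$(printf '%s' "$body" | base64 -d | jq -r .aud)
echo "$aud"   # must match both the id_tokens aud and bound_audiences
```

If the printed value does not match the role's bound_audiences exactly, the login will fail no matter how correct the issuer and claims are.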

The third is pretending short TTLs are free. They are safer, but the application and connection pool need to tolerate renewal or re-authentication. Dynamic database users with 5-minute TTLs sound secure until the app opens long-lived connections and starts failing mid-transaction.
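The usual mitigation is to renew the lease before it expires rather than shortening connection lifetimes to match the TTL. A sketch, with a hypothetical lease ID:

```shell
# The lease_id comes back alongside the credential from database/creds/.
LEASE_ID="database/creds/platform-api/AbC123"   # hypothetical value

vault lease lookup "$LEASE_ID"                  # inspect remaining TTL
vault lease renew -increment=15m "$LEASE_ID"    # extend, capped at max_ttl
```

Renewal only works up to the role's max_ttl, so the application still needs a re-authentication path for the day the lease finally expires.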

The fourth is building a federation path and still returning static secrets from KV by habit. That is better than storing them in GitLab variables, but it is not the end state. If the target supports dynamic credentials, use them.

The fifth is forgetting audit review. Vault’s audit devices are part of the value proposition. If nobody reads them, you built a better auth path but not a better detection path.

When I Would Use Native Cloud Identity Instead

Not every team needs Vault.

If you are all-in on AWS, only need AWS credentials, and can model access cleanly through IAM roles and STS, native cloud identity is usually simpler. If you just need a pipeline to deploy to AWS, do that directly. If your only secret need is a small set of application values in Secrets Manager, adding Vault can be needless platform weight.

But if your environment spans Kubernetes, GitLab, multiple databases, and more than one cloud or identity boundary, the simplicity flips. Vault gives you one consistent broker instead of a pile of one-off trust paths.

The Practical Recommendation

Do not roll out Vault plus workload identity federation as a grand migration. Pick one production pipeline and one Kubernetes workload that currently depend on static secrets. Replace the static first credential with OIDC or service-account-based login. Keep the role scope narrow. Keep the TTL short but realistic. Measure what breaks.

If the result is only “we moved a password from GitLab variables to Vault KV,” you are not done. The real payoff starts when the workload gets a time-bound credential minted for that run and then loses it automatically.

That is the bar. Identity first. Short-lived access second. Static secrets only when the target system gives you no better choice.
