How to Execute CloudFormation on GitLab

I wanted to share how I set up CloudFormation templates to run through GitLab CI/CD. If you’ve been writing templates and running them manually from your terminal, moving the whole thing into a pipeline saves time and keeps your deployments consistent. I’ll walk through the setup and point out a few things I ran into along the way.

If you’re interested in running Terraform through GitLab instead, I have a separate post for that.

What is CloudFormation?

CloudFormation is AWS’s built-in tool for defining infrastructure as code. You write a template in JSON or YAML, and CloudFormation reads it to figure out which resources to create – EC2 instances, S3 buckets, Lambda functions, VPCs, whatever you need. Other cloud platforms have their own equivalents (Azure Resource Manager, Google Deployment Manager, OpenStack Heat), but CloudFormation is specific to AWS.

You can deploy templates as “stacks” through the AWS console, the CLI, or the API. One thing I like about stacks is the cleanup: when you delete a stack, every resource inside it gets torn down too. No orphaned resources floating around.
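
For example, tearing down a stack and everything in it is a single CLI call (using gitlab-example, the stack name from later in this post):

# Delete the stack and every resource it created
aws cloudformation delete-stack --stack-name gitlab-example
# Optionally block until the teardown finishes
aws cloudformation wait stack-delete-complete --stack-name gitlab-example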

Stacks can be simple – just an EC2 instance and a security group – or they can describe an entire environment with dozens of interconnected resources.

Why run CloudFormation from GitLab?

There are a few reasons I prefer this over running templates from my laptop.

First, I don’t want to manage local dependencies. If your template uses the AWS CLI, SAM CLI, or a specific Python version, installing and maintaining all of that on every machine gets old fast. GitLab runners handle that in a clean container environment every time.

Second, it gives your team a single source of truth. If you work with other developers or DevOps engineers, having one project that runs the deployments means everyone uses the same parameters, the same versions, and the same process. You can see exactly who deployed what and when.

Third, complex templates often need a bunch of parameters. Storing those in the GitLab project alongside the template keeps everything together. You can tag releases, roll back to previous configurations, and any new team member can deploy without needing a walkthrough.

Creating the .gitlab-ci.yml File

GitLab CI/CD pipelines are defined in a file called .gitlab-ci.yml at the root of your project.

Create that file and add the following. Replace #S3NAME# with the Amazon S3 bucket name you set up for deployment artifacts:

image: python:3.12
stages:
  - deploy
Dev:
  stage: deploy
  before_script:
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
  script:
    - sam build
    - sam package --output-template-file packaged.yaml --s3-bucket #S3NAME#
    - sam deploy --template-file packaged.yaml --stack-name gitlab-example --s3-bucket #S3NAME# --capabilities CAPABILITY_IAM --region us-east-1
  environment: dev

A few notes on what’s happening here:

  • The pipeline runs inside a python:3.12 Docker image. I updated this from the original python:3.8 since Python 3.8 reached end-of-life in October 2024. If your application requires a different runtime, adjust accordingly.
  • The before_script section installs the AWS CLI and SAM CLI.
  • SAM builds the application, packages it (uploading artifacts to S3), and then deploys it as a CloudFormation stack.

A note on --s3-bucket vs --resolve-s3: Newer versions of SAM CLI support the --resolve-s3 flag, which lets SAM create and manage the S3 bucket for you automatically. If you don’t want to manage your own bucket, you can replace --s3-bucket #S3NAME# with --resolve-s3 in both the sam package and sam deploy commands. Either approach works; --resolve-s3 is just simpler.
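
For reference, here’s the same script section with the bucket flags swapped for --resolve-s3 (assumes a reasonably recent SAM CLI; the rest of the job is unchanged):

  script:
    - sam build
    - sam package --output-template-file packaged.yaml --resolve-s3
    - sam deploy --template-file packaged.yaml --stack-name gitlab-example --resolve-s3 --capabilities CAPABILITY_IAM --region us-east-1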

Configuring AWS Credentials with GitLab

The pipeline needs AWS credentials to interact with your account. There are two approaches:

Option 1: Static access keys (simple setup)

Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as CI/CD variables in your GitLab project. Go to Settings > CI/CD > Variables to add them. For more details on how variables work, check out our guide to GitLab CI variables.

The IAM user associated with those keys needs policies that cover every resource your template creates or modifies.
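
As a rough illustration (not a least-privilege policy), a deploy user for the pipeline above needs at least CloudFormation access and write access to the artifact bucket, plus whatever permissions the template’s own resources require:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudformation:*",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::#S3NAME#", "arn:aws:s3:::#S3NAME#/*"]
    }
  ]
}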

Option 2: OIDC authentication (recommended)

GitLab now supports OpenID Connect (OIDC) authentication with AWS. Instead of storing long-lived access keys, you configure an IAM role that trusts GitLab as an identity provider. Each pipeline job gets temporary credentials via a JWT token exchange.

The basic setup looks like this in your .gitlab-ci.yml:

assume-role:
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  script:
    - >
      aws_sts_output=$(aws sts assume-role-with-web-identity
      --role-arn ${AWS_ROLE_ARN}
      --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token ${GITLAB_OIDC_TOKEN}
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text)
    - export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $aws_sts_output)

You also need to create an OIDC identity provider in AWS IAM pointing to gitlab.com (or your self-hosted GitLab URL), then create a role with a trust policy that allows sts:AssumeRoleWithWebIdentity from that provider. This is the approach I recommend for any production setup – no credentials stored anywhere in GitLab.
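
Sketched out with the AWS CLI, that one-time setup looks roughly like this (the account ID and role name are placeholders; in practice you’d also add a condition on the token’s sub claim to restrict which projects and branches can assume the role):

# Register gitlab.com as an OIDC identity provider in IAM
# (older AWS CLI versions may also require --thumbprint-list)
aws iam create-open-id-connect-provider \
  --url https://gitlab.com \
  --client-id-list https://gitlab.com

# Trust policy: allow tokens from that provider to assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Federated": "arn:aws:iam::111111111111:oidc-provider/gitlab.com" },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": { "StringEquals": { "gitlab.com:aud": "https://gitlab.com" } }
  }]
}
EOF

aws iam create-role \
  --role-name gitlab-ci-deploy \
  --assume-role-policy-document file://trust-policy.json

Store the resulting role ARN as the AWS_ROLE_ARN CI/CD variable referenced in the job above.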

Common Issues

If your pipeline fails, check the job log first. The most common problems I’ve seen:

Software version mismatches. The Python version in your Docker image might not match what your Lambda function expects. Fix this by pinning the correct image version in .gitlab-ci.yml. For example, use python:3.12 if your SAM template specifies python3.12 as the runtime.
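
For example, with the python:3.12 image above, the function’s runtime in the SAM template should line up (the handler and function names here are hypothetical):

# template.yaml excerpt: Runtime should match the pipeline's Docker image
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12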

AWS access errors. Double-check your CI/CD variables. Make sure the access key and secret key are correct, and that the associated IAM user or role has the required permissions.

Permission denied on specific resources. Your IAM policy needs to cover every action the template performs – including S3 uploads for the packaging step and CloudFormation stack creation. If you’re using SAM, also check that CAPABILITY_IAM or CAPABILITY_NAMED_IAM is set when the template creates IAM resources.

AWS IAM Roles for GitLab Runner

For self-hosted runners on EC2, I strongly recommend attaching an IAM role to the runner instance instead of using access keys. With an instance role, the runner picks up temporary credentials automatically from the EC2 metadata service. You don’t store any credentials in GitLab, and you can scope the role down to exactly what the runner needs.
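
If the runner instance is already up, attaching a profile is a single call (the instance ID and profile name below are placeholders):

# Attach an existing instance profile to the runner's EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=gitlab-runner-deploy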

If you’re using shared runners on gitlab.com, the OIDC approach described above is your best bet.

Real-World Example

Here’s a working example that uses CloudFormation to deploy a GitLab Runner on AWS Fargate:

https://github.com/clebermasters/gitlab-aws-fargate-runner-template

The project includes a full .gitlab-ci.yml with --parameter-overrides for all the template variables:

variables:
  AWS_REGION: us-east-1
  GIT_SSL_NO_VERIFY: "true"
  MODE: BUILD
  STACK_NAME: 'bitslovers-runner-dev'
  VPC_IP: '' #VpcId
  SUBNET: '' #SubnetId
  GITLAB_URL: 'http://YOUR_GITLAB.COM' #GitLabURL
  GITLAB_TOKEN: '' #GitLabRegistrationToken
  RUNNER_TAG: 'aws-fargate-dev' #RunnerTagList
  DOCKER_IMAGE_DEFAULT: 'alpine:latest' #DockerImageDefault
  DOCKER_IMAGE_RUNNER: 'XXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/ec2-gitlab-runners-fargate:dev' #DockerImageRunner
  AWS_SUBNET_ZONE: 'a' #SubnetZoneEC2Runner
stages:
  - prep
  - deploy
Prep:
  image: docker:latest
  stage: prep
  script:
    - docker build -t build-container .
Deploy:
  image:
    name: build-container:latest
    entrypoint: [""]
  stage: deploy
  script:
    - aws configure set region ${AWS_REGION}
    - sam deploy --template-file template.yml --stack-name $STACK_NAME --capabilities CAPABILITY_NAMED_IAM --region us-east-1 --parameter-overrides VpcId=\"${VPC_IP}\" SubnetId=\"${SUBNET}\" SubnetZoneEC2Runner=\"${AWS_SUBNET_ZONE}\" GitLabURL=\"${GITLAB_URL}\" GitLabRegistrationToken=\"${GITLAB_TOKEN}\" RunnerTagList=\"${RUNNER_TAG}\" DockerImageRunner=\"${DOCKER_IMAGE_RUNNER}\" RunnerIamRole=\"${RUNNER_IAM_PROFILE}\" 

Make sure to override the parameter values before running the pipeline. You can set them directly in the variables section or store them as CI/CD variables in GitLab.

Wrapping Up

Running CloudFormation through GitLab CI/CD takes a bit of upfront work, but it pays off quickly. You get repeatable deployments, a clear audit trail, and your team can trigger deploys from a web browser without needing AWS CLI installed locally. For teams managing multiple environments, this setup removes a lot of friction.

If you found this helpful, check out these related posts:

Effective Cache Management with Maven Projects on GitLab

Pipeline to build Docker in Docker on GitLab

How to Autoscale the GitLab Runner

How to use GitLab CI: Deploy to Elastic Beanstalk

GitLab CI + Terraform IaC Pipeline 2026

GitLab CI Variables guide
