How to Execute CloudFormation on GitLab

The goal of this blog post is to show how to execute CloudFormation on GitLab. You will also learn the benefits of using GitLab to run CloudFormation templates, which go beyond automation and make the whole process easier.

If you would also like to learn how to execute Terraform on GitLab, check out our post.

What is CloudFormation?

CloudFormation is best described as a mechanism for deploying infrastructure as code. The code, written in JSON (or YAML), tells CloudFormation which resources to create. Other platforms offer similar tools: OpenStack uses Heat (which can also interface with other platforms), VMware offers vRealize, and Azure and GCE have their own equivalents, though they are not as mature. Users can compose templates and deploy them as stacks using the CloudFormation GUI, CLI, or API. One exceptional advantage of the tool is that when you destroy a stack, all the resources created by it are cleaned up as well.

Stacks (created from templates) can be quite simple, perhaps a single template that brings up a few resources such as an EC2 instance, or quite complex, building entire environments from the ground up.
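
To make this concrete, here is a minimal sketch of a CloudFormation template in YAML that brings up a single EC2 instance. The AMI ID and instance type are hypothetical placeholders, not values from the original post:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack that launches a single EC2 instance

Resources:
  ExampleInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro            # hypothetical instance type
      ImageId: ami-0123456789abcdef0    # hypothetical AMI ID for your region
```

Deleting the stack built from this template would also terminate the instance it created, which is the cleanup behavior described above.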

Why use GitLab to execute a CloudFormation template?

Sometimes we don’t want to install the dependencies needed to execute a script on our own computer, so we can leverage GitLab to run it for us.

It can also be tricky to handle different versions of dependencies on your computer. For example, some tools require a specific version of Python, so if different applications need different Python versions, maintaining them all locally can be painful.

Another reason: suppose you work for a big company with many developers or DevOps engineers. With GitLab, you can have one project that everybody uses, which keeps track of changes and of who made the latest deployment to a specific environment. It also helps new team members automatically deploy any application without issues, even for complex setups.

Depending on your template’s complexity, many parameters may need to be specified before executing the CloudFormation template. If you have one GitLab project that keeps track of all the parameters, you can roll back or create snapshots (tags) for safety. Also, anyone who needs to deploy a new environment will know which parameters to specify, and their values, without any extra explanation.
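
For instance, the parameters can live as variables right in the project’s .gitlab-ci.yml, so their values are versioned with everything else. The names and values below are hypothetical, just to illustrate the idea:

```yaml
variables:
  STACK_NAME: 'my-app-dev'      # hypothetical stack name
  INSTANCE_TYPE: 't3.micro'     # hypothetical parameter value
  ENVIRONMENT: 'dev'
```

Because these live in Git, every change to a parameter value is recorded, and a tag captures the exact set used for any given deployment.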

Creating the .gitlab-ci.yml File

GitLab CI/CD pipelines are defined in a YAML file called .gitlab-ci.yml inside each project. It describes the structure and composition of the pipelines.

In the root of your project, create a .gitlab-ci.yml file, add the following code, and substitute #S3NAME# with the name of the Amazon S3 bucket you created to store the package:

image: python:3.8

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - pip3 install awscli --upgrade
    - pip3 install aws-sam-cli --upgrade
    - sam build
    - sam package --output-template-file packaged.yaml --s3-bucket #S3NAME#
    - sam deploy --template-file packaged.yaml --stack-name gitlab-example --s3-bucket #S3NAME# --capabilities CAPABILITY_IAM --region us-east-1
  environment: dev

Let’s review the config file more closely:

  • The build uses a Docker image from Docker Hub. An image with Python 3.8 is chosen because the sample application is written in Python 3.8.
  • The AWS Command Line Interface and AWS SAM CLI are installed in the script section.
  • SAM builds, packages, and deploys the application.

Configuring Your AWS Credentials with GitLab

To communicate with your AWS account, the GitLab CI/CD pipelines expect AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be defined in your GitLab project settings, under Settings > CI/CD > Variables. For more information, please see GitLab variables.

The AWS credentials you provide must carry the AWS Identity and Access Management (IAM) policies that grant the access required to provision the AWS resources created by the template.
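
As an illustration, a minimal (and deliberately broad) IAM policy for a pipeline that deploys through CloudFormation and uploads packages to S3 might look like the sketch below. The bucket name is hypothetical, and in practice you should scope both statements down to the exact resources your template creates:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudformation:*"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-sam-bucket",
        "arn:aws:s3:::my-sam-bucket/*"
      ]
    }
  ]
}
```

Remember that CloudFormation creates resources on your behalf, so the credentials also need permissions for every service the template touches (EC2, IAM, and so on).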

CloudFormation Issues You May Have

If your build breaks, take a look at the build log to examine why it failed. Some typical causes are:

  • Conflicting software versions (for instance, the Python runtime version may differ from the Python version on the build machine). Address this by installing the correct versions of the software through the .gitlab-ci.yml file.
  • You may not be able to access your AWS account from GitLab. Check the values of the environment variables you configured with the AWS credentials in GitLab’s CI/CD settings.

You may also lack permission to deploy some specific resources. Double-check that you granted all the permissions required to deploy. Your credentials must also have access to the S3 bucket and permission to create the necessary resources.

AWS IAM Roles for the GitLab Runner

As a best practice, AWS recommends using IAM Roles instead of Access Keys. In a large company, it is challenging to manage and control Access Keys configured as environment variables in your pipeline. Instead, I encourage you to assign a Role to your Runner and specify which resources it can create or manage, so you don’t store any AWS credentials at all.
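
For example, when the Runner lives on an EC2 instance, the role it uses needs a trust policy allowing the EC2 service to assume it; the role is then attached to the instance through an instance profile. A minimal sketch of such a trust policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this in place, the AWS CLI and SAM CLI on the Runner pick up temporary credentials automatically, and no AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY variables are needed in the pipeline.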

Real-World Example Using CloudFormation on GitLab

You can check out a real example of a project that uses CloudFormation to deploy a Runner on AWS Fargate through a pipeline on GitLab:

Always remember to override the CloudFormation parameters before executing the template. You can also do that on GitLab.

You can use --parameter-overrides, as in the example below, to specify all the variables needed.

variables:
  AWS_REGION: us-east-1
  STACK_NAME: 'bitslovers-runner-dev'
  VPC_IP: '' #VpcId
  SUBNET: '' #SubnetId
  GITLAB_TOKEN: '' #GitLabRegistrationToken
  RUNNER_TAG: 'aws-fargate-dev' #RunnerTagList
  DOCKER_IMAGE_DEFAULT: 'alpine:latest' #DockerImageDefault
  DOCKER_IMAGE_RUNNER: '' #DockerImageRunner
  AWS_SUBNET_ZONE: 'a' #SubnetZoneEC2Runner

stages:
  - prep
  - deploy

build:
  image: docker:latest
  stage: prep
  script:
    - docker build -t build-container .

deploy:
  image:
    name: build-container:latest
    entrypoint: [""]
  stage: deploy
  script:
    - aws configure set region ${AWS_REGION}
    - sam deploy --template-file template.yml --stack-name $STACK_NAME --capabilities CAPABILITY_NAMED_IAM --region us-east-1 --parameter-overrides VpcId="${VPC_IP}" SubnetId="${SUBNET}" SubnetZoneEC2Runner="${AWS_SUBNET_ZONE}" GitLabURL="${GITLAB_URL}" GitLabRegistrationToken="${GITLAB_TOKEN}" RunnerTagList="${RUNNER_TAG}" DockerImageRunner="${DOCKER_IMAGE_RUNNER}" RunnerIamRole="${RUNNER_IAM_PROFILE}"
    - echo "Access for more information"
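
The --parameter-overrides argument is just a space-separated list of Key="Value" pairs. As a quick sanity check, that string can be assembled programmatically; here is a small Python sketch with a hypothetical helper name and hypothetical parameter values:

```python
def parameter_overrides(params: dict) -> str:
    """Join parameters into the Key="Value" form that --parameter-overrides expects."""
    return " ".join(f'{key}="{value}"' for key, value in params.items())


# Hypothetical parameter values for illustration only.
overrides = parameter_overrides({
    "VpcId": "vpc-0abc1234",
    "SubnetId": "subnet-0def5678",
    "RunnerTagList": "aws-fargate-dev",
})
print(overrides)
# VpcId="vpc-0abc1234" SubnetId="subnet-0def5678" RunnerTagList="aws-fargate-dev"
```

Keeping the parameters in a single place like this (or in the variables block above) makes it harder to forget one when deploying a new environment.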


Using CloudFormation is easy, and creating a pipeline on GitLab to execute it makes it even more accessible. Sometimes we are not in front of a computer; with a pipeline, we can deploy from any web browser, including on a phone or tablet, from anywhere. Deployments can even be triggered by people who don’t need to understand how everything works.

I hope that this article was helpful to you. Would you mind leaving feedback and sharing it on your social network?

Before you leave, check out our other blog posts in the DevOps area. You may find more content about GitLab and AWS solutions there.

How to use the Gitlab CI Variables

Effective Cache Management with Maven Projects on Gitlab.

Pipeline to build Docker in Docker on Gitlab.

How to Autoscaling the Gitlab Runner.

Execute Terraform on Gitlab CI.

How to use Gitlab CI: Deploy to elastic beanstalk
