GitLab CI Environments and Review Apps in 2026
Review apps changed how my team does code review. Instead of reading diffs, reviewers click a link and see the actual change running. The designer can verify spacing on the real app. The product manager can test the new onboarding flow without setting up a local environment. The engineer writing the review doesn’t have to narrate screenshots.
Before review apps, our merge request process was slow and error-prone. Someone would approve code that looked right in the diff but broke the layout on mobile. Another person would merge a backend change that broke the frontend flow — nobody noticed because nobody ran the full app locally to test it. We only found out in staging, after merge.
Review apps didn’t just speed up code review. They changed what reviewers actually looked at. That’s worth setting up properly.
This post covers GitLab CI environments from basics to production deployment controls, then builds up to a complete pipeline with review apps, staging, and production — including Kubernetes-based review apps that auto-stop when the branch closes.
What GitLab Environments Are
An environment in GitLab CI is a named deployment target. When a job declares environment: production, GitLab tracks that deployment. It stores which commit is deployed, when it happened, who triggered it, and whether it succeeded. You can see this in the Deployments section of your project.
Environments give you:
- A deployment history per environment
- Rollback buttons in the UI that re-run the last successful deployment job
- The ability to open the deployed app directly from GitLab
- Protected environment gates that require approval before deployment
- Deploy freeze windows that block deployments during maintenance periods
Without environments, your pipeline runs and you have no visibility into what’s actually running where. With environments, GitLab becomes your deployment record.
The simplest environment declaration:
```yaml
deploy:staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
```
That url field puts a button in GitLab’s deployment view. Click it and the browser opens your staging URL. Tiny feature, saves a lot of context-switching.
Environment Tiers
GitLab recognizes five environment tiers: production, staging, testing, development, and other. Tiers control how GitLab displays environments and which protections it suggests.
```yaml
deploy:production:
  environment:
    name: production
    url: https://example.com
    deployment_tier: production

deploy:staging:
  environment:
    name: staging
    url: https://staging.example.com
    deployment_tier: staging
```
Tiers affect the Environments page in GitLab. Production environments show at the top. They’re visually distinguished from lower tiers. GitLab also uses tiers in the Protected Environments feature to automatically apply stricter rules to production-tier deployments.
For most projects, you won’t explicitly set the tier — GitLab infers it from the environment name. An environment named production is assumed to be production tier. An environment named staging is staging tier. You only need the explicit deployment_tier key if you’re using non-standard names or need to disambiguate.
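For example, with a non-standard name like prod-eu (a hypothetical name for this sketch), GitLab can't infer the tier, so you set it explicitly:

```yaml
# Hypothetical job: "prod-eu" isn't a name GitLab recognizes,
# so the tier must be declared.
deploy:prod-eu:
  stage: deploy
  script:
    - ./deploy.sh prod-eu
  environment:
    name: prod-eu
    url: https://eu.example.com
    deployment_tier: production
```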
Review Apps: Auto-Deploy Every MR to a Unique URL
Review apps create a live deployment for every merge request. Each MR gets its own URL. When the MR closes, the deployment stops and cleans up.
The key is dynamic environment names using $CI_COMMIT_REF_SLUG. This variable takes the branch name and makes it URL-safe: feature/new-login becomes feature-new-login.
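GitLab computes the slug server-side; roughly, it lowercases the ref, replaces anything outside a-z and 0-9 with hyphens, truncates to 63 bytes, and trims leading and trailing hyphens. A shell approximation of that transformation, for intuition only:

```shell
# Rough sketch of GitLab's ref slugification (not the exact implementation)
branch="feature/new-login"
slug=$(echo "$branch" \
  | tr '[:upper:]' '[:lower:]' \
  | sed 's/[^a-z0-9]/-/g' \
  | cut -c1-63 \
  | sed 's/^-*//; s/-*$//')
echo "$slug"
```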
```yaml
deploy:review:
  stage: deploy
  script:
    - ./deploy-review.sh $CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop:review
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```
The name: review/$CI_COMMIT_REF_SLUG creates a namespaced environment. GitLab groups all review/* environments together in the Environments page. You can see every open MR deployment at a glance.
The on_stop key points to a job that cleans up the environment. That job runs when you close the MR or manually stop the environment from the GitLab UI.
```yaml
stop:review:
  stage: deploy
  script:
    - ./teardown-review.sh $CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
```
The action: stop marks this job as the teardown job for that environment. When the MR closes, GitLab triggers it automatically. When a reviewer wants to clean up early, they click “Stop” in the MR’s environments panel and GitLab runs this job.
Auto-Stop for Cleanup
Review app environments accumulate. If you have 30 open MRs and each has its own environment with compute and storage, the bill adds up. Auto-stop solves this.
```yaml
deploy:review:
  stage: deploy
  script:
    - ./deploy-review.sh $CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    auto_stop_in: 2 days
    on_stop: stop:review
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```
auto_stop_in: 2 days tells GitLab to automatically run the stop job 48 hours after the environment was last deployed. Every new commit to the MR resets the timer. An MR that gets updated daily stays alive. An MR that goes quiet for two days gets cleaned up.
You can also set this at the environment level in GitLab’s project settings as a project-wide default.
To keep the stop job lightweight, set GIT_STRATEGY: none so GitLab skips checking out code the job doesn't need:
```yaml
stop:review:
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - ./teardown-review.sh $CI_COMMIT_REF_SLUG
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
```
GIT_STRATEGY: none means the job doesn’t clone the repo. The teardown script only needs the environment name, which comes from the environment variable. No source code required.
Protected Environments and Approval Gates
Some environments require human approval before deployment. You don’t want a pipeline to deploy to production automatically, even on the main branch. You want an engineer to review the changes and click approve.
Set this up in your project under Settings > CI/CD > Protected environments.
In the UI, you configure:
- Which roles can deploy (typically Maintainer or Owner)
- Which users or groups must approve before deployment proceeds
- How many approvals are required
When a pipeline hits a job targeting a protected environment, it pauses and sends a notification to the required approvers. The pipeline continues only after the required approvals are given. If anyone rejects, the deployment job is blocked.
You can also set this via the API or through your project’s CI configuration for environments that are created dynamically, but the most common pattern is to protect production through the UI.
For staging environments where you want a lighter gate, a manual trigger is enough — no specific approver required:
```yaml
deploy:staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  when: manual
```
when: manual makes the job require a human to click play in the pipeline UI. It doesn’t require a specific approver — anyone with Developer access can run it. That’s enough for staging. For production, use Protected Environments with required approvers.
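One wrinkle worth knowing: a job-level when: manual defaults to allow_failure: true, so later stages can proceed without it. If you want the pipeline to show as blocked until someone actually runs the staging deploy, add allow_failure: false — a sketch:

```yaml
deploy:staging:
  stage: deploy
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  when: manual
  allow_failure: false  # pipeline waits in "blocked" state until this job runs
```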
Deploy Freeze Windows
Some organizations have change freezes: no deployments on Fridays, no deployments during peak business hours, no deployments during the holiday shopping season. GitLab’s deploy freeze feature enforces this automatically.
Set up freeze windows in Settings > CI/CD > Deploy freezes. Define a cron-based start and end time, and a timezone.
During a freeze window, the $CI_DEPLOY_FREEZE variable is set to true. You can use this in your pipeline:
```yaml
deploy:production:
  stage: deploy
  script:
    - |
      if [ "$CI_DEPLOY_FREEZE" == "true" ]; then
        echo "Deploy freeze active. Deployment blocked."
        exit 1
      fi
    - ./deploy.sh production
  environment:
    name: production
    url: https://example.com
```
This makes the deploy job fail loudly during a freeze. The pipeline turns red. Nobody accidentally deploys. The error message tells them why.
An alternative is to check the variable in the job rules so the job doesn’t even start during a freeze:
```yaml
deploy:production:
  stage: deploy
  rules:
    - if: $CI_DEPLOY_FREEZE == "true"
      when: never
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      when: manual
  script:
    - ./deploy.sh production
  environment:
    name: production
```

The first rule skips the job entirely during a freeze window. During normal hours, the job is manual and only runs on the main branch.
Kubernetes Review Apps
Docker Compose-based review apps work fine for simple applications, but Kubernetes review apps are more flexible and easier to scale. GitLab has first-class Kubernetes support through the GitLab Agent for Kubernetes.
The pattern: each MR creates a Kubernetes namespace, deploys the application there, and the namespace is deleted when the MR closes.
First, install the GitLab Agent in your cluster:
```shell
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm upgrade --install gitlab-agent gitlab/gitlab-agent \
  --namespace gitlab-agent \
  --create-namespace \
  --set config.token=YOUR_AGENT_TOKEN \
  --set config.kasAddress=wss://kas.gitlab.com
```
Configure the agent in your repository at .gitlab/agents/my-agent/config.yaml:
```yaml
ci_access:
  projects:
    - id: mygroup/myproject
```
This allows your GitLab CI pipeline to connect to the cluster via the agent. No cluster credentials in CI variables. The agent handles authentication.
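Inside a job, you select the agent's kubeconfig context before running kubectl. The context is named after the project that holds the agent config plus the agent name — here, the mygroup/myproject and my-agent values from above:

```yaml
# Sketch: the agent injects a kubeconfig into CI jobs of allowed projects.
deploy:review:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # Context name format: <path-of-agent-config-project>:<agent-name>
    - kubectl config use-context mygroup/myproject:my-agent
    - kubectl get namespaces
```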
Now the review app pipeline:
```yaml
deploy:review:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - export NAMESPACE="review-${CI_COMMIT_REF_SLUG}"
    - kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
    - |
      cat <<EOF | kubectl apply -f -
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: review-app
        namespace: $NAMESPACE
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: review-app
        template:
          metadata:
            labels:
              app: review-app
          spec:
            containers:
              - name: app
                image: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
                ports:
                  - containerPort: 3000
      EOF
    - |
      cat <<EOF | kubectl apply -f -
      apiVersion: v1
      kind: Service
      metadata:
        name: review-app
        namespace: $NAMESPACE
      spec:
        selector:
          app: review-app
        ports:
          - port: 80
            targetPort: 3000
      EOF
    - |
      cat <<EOF | kubectl apply -f -
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: review-app
        namespace: $NAMESPACE
        annotations:
          nginx.ingress.kubernetes.io/rewrite-target: /
      spec:
        rules:
          - host: ${CI_COMMIT_REF_SLUG}.review.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: review-app
                      port:
                        number: 80
      EOF
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop:review
    auto_stop_in: 2 days
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop:review:
  stage: deploy
  image: bitnami/kubectl:latest
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl delete namespace review-${CI_COMMIT_REF_SLUG} --ignore-not-found
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
```
Each MR creates a namespace named review-feature-my-branch. The Deployment, Service, and Ingress all live in that namespace. The Ingress creates a DNS entry at feature-my-branch.review.example.com. When the MR closes, the entire namespace is deleted.
The --dry-run=client -o yaml | kubectl apply -f - pattern for namespace creation is idempotent. Subsequent pipelines on the same branch won’t fail trying to create a namespace that already exists.
For your DNS to work, you’ll need a wildcard DNS record pointing *.review.example.com to your ingress controller’s IP or load balancer hostname.
Environment Rollback
Rollback in GitLab is a re-run of the last successful deployment job for an environment. No special rollback commands. No rollback scripts. The pipeline just runs again with the same commit SHA that was previously deployed.
In the Deployments page, every successful deployment has a “Re-deploy” button. Clicking it creates a new pipeline that runs the deploy job for that historical commit. The environment URL updates to reflect the rolled-back state.
This works because your deploy job uses $CI_COMMIT_SHA to determine what to deploy:
```yaml
deploy:production:
  stage: deploy
  script:
    - kubectl set image deployment/my-app my-app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA -n production
  environment:
    name: production
    url: https://example.com
```
When you roll back, GitLab re-runs this job with the SHA of the previous deployment. Kubernetes pulls that specific image and rolls the Deployment back.
For AWS ECS, the pattern is the same:
```yaml
deploy:production:
  stage: deploy
  image: amazon/aws-cli:latest
  script:
    - |
      aws ecs update-service \
        --cluster production \
        --service my-app \
        --task-definition my-app:$TASK_DEFINITION_REVISION \
        --force-new-deployment \
        --region $AWS_REGION
  environment:
    name: production
    url: https://example.com
```
You store the task definition revision in a CI/CD variable or artifact and reference it during deployment. Rolling back means re-running the deploy job from the previous pipeline, which deploys the previous task definition revision.
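One way to capture that revision is a small job that reads it from ECS and passes it forward as a dotenv artifact. A sketch, assuming a task definition family named my-app (the job name and variable name are illustrative):

```yaml
# Hypothetical job: records the current task definition revision so
# later jobs (and re-runs of them) deploy exactly that revision.
capture:taskdef:
  stage: deploy
  image: amazon/aws-cli:latest
  script:
    - |
      REV=$(aws ecs describe-task-definition \
        --task-definition my-app \
        --query 'taskDefinition.revision' \
        --output text --region $AWS_REGION)
      echo "TASK_DEFINITION_REVISION=$REV" >> deploy.env
  artifacts:
    reports:
      dotenv: deploy.env  # exposes TASK_DEFINITION_REVISION to downstream jobs
```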
Complete .gitlab-ci.yml
Here’s a complete pipeline: build → deploy review → deploy staging → deploy production. It includes review apps for MRs, manual staging deploys, protected production deploys, and environment-specific variables.
```yaml
stages:
  - build
  - test
  - deploy

variables:
  DOCKER_REGISTRY: $CI_REGISTRY_IMAGE
  AWS_REGION: us-east-1
  ECS_CLUSTER: my-cluster

# Build
build:image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $DOCKER_REGISTRY:$CI_COMMIT_SHA .
    - docker push $DOCKER_REGISTRY:$CI_COMMIT_SHA
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Test
test:unit:
  stage: test
  image: node:20
  needs: [build:image]
  script:
    - npm ci
    - npm test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Review App (MR only)
deploy:review:
  stage: deploy
  image: bitnami/kubectl:latest
  needs: [test:unit]
  script:
    - export NAMESPACE="review-${CI_COMMIT_REF_SLUG}"
    - kubectl create namespace $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
    - |
      kubectl set image deployment/my-app my-app=$DOCKER_REGISTRY:$CI_COMMIT_SHA -n $NAMESPACE || \
        kubectl create deployment my-app --image=$DOCKER_REGISTRY:$CI_COMMIT_SHA -n $NAMESPACE
    - kubectl expose deployment my-app --port=80 --target-port=3000 -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
    - |
      cat <<EOF | kubectl apply -f -
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: my-app
        namespace: $NAMESPACE
      spec:
        rules:
          - host: ${CI_COMMIT_REF_SLUG}.review.example.com
            http:
              paths:
                - path: /
                  pathType: Prefix
                  backend:
                    service:
                      name: my-app
                      port:
                        number: 80
      EOF
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
    on_stop: stop:review
    auto_stop_in: 2 days
    deployment_tier: development
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop:review:
  stage: deploy
  image: bitnami/kubectl:latest
  needs: []
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl delete namespace review-${CI_COMMIT_REF_SLUG} --ignore-not-found
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
    deployment_tier: development
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual

# Staging (main branch, manual)
deploy:staging:
  stage: deploy
  image: amazon/aws-cli:latest
  needs: [test:unit]
  script:
    - aws ecs update-service
        --cluster $ECS_CLUSTER
        --service my-app-staging
        --force-new-deployment
        --region $AWS_REGION
    - aws ecs wait services-stable
        --cluster $ECS_CLUSTER
        --services my-app-staging
        --region $AWS_REGION
  environment:
    name: staging
    url: https://staging.example.com
    deployment_tier: staging
  resource_group: staging
  when: manual
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Production (main branch, manual, protected)
deploy:production:
  stage: deploy
  image: amazon/aws-cli:latest
  needs: [deploy:staging]
  script:
    - |
      if [ "$CI_DEPLOY_FREEZE" == "true" ]; then
        echo "Deploy freeze is active. Deployment blocked."
        exit 1
      fi
    - aws ecs update-service
        --cluster $ECS_CLUSTER
        --service my-app-production
        --force-new-deployment
        --region $AWS_REGION
    - aws ecs wait services-stable
        --cluster $ECS_CLUSTER
        --services my-app-production
        --region $AWS_REGION
  environment:
    name: production
    url: https://example.com
    deployment_tier: production
  resource_group: production
  when: manual
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```
Walking through the key decisions: build and test run for both MR pipelines and the main branch. Review apps deploy only in MR pipelines — the $CI_PIPELINE_SOURCE == "merge_request_event" rule keeps them off the main branch. Staging and production deploy only from the main branch, after tests pass.
The needs: [deploy:staging] on the production job creates an explicit gate: you can’t deploy production without first deploying staging. If staging is broken, the production job has no successful predecessor and won’t queue.
resource_group: production serializes production deploys. If two commits land on main quickly and both pass tests, the second production deploy waits for the first to complete. They don’t race.
For the ECS commands to work, you’ll need an IAM role for the GitLab runner with permissions to call ecs:UpdateService and ecs:DescribeServices. Use OIDC token authentication rather than static IAM access keys. See GitLab CI Variables for how to set that up securely, and GitLab CI Build Docker Image and Push to ECR for the ECR authentication pattern.
Environment-Specific Variables
Different environments need different configuration: staging connects to a test database, production connects to the real one. Set environment-scoped variables in Settings > CI/CD > Variables.
Each variable has an “Environment scope” field. Set it to staging for staging variables and production for production variables. GitLab injects only the variables matching the current deployment environment.
```yaml
deploy:staging:
  stage: deploy
  script:
    - echo "Deploying to staging with DB at $DATABASE_URL"
    - ./deploy.sh
  environment:
    name: staging
```
When this job runs, $DATABASE_URL resolves to the staging value you configured in project settings. The same job on production resolves to the production value. The script doesn’t change.
For review apps, scope variables to review/*:
```
Variable:          DATABASE_URL
Value:             postgres://review-user:<password>@<review-db-host>/review
Environment scope: review/*
```
Every review app environment matches review/* and gets the shared review database URL.
Connecting Review App Links to Merge Requests
When a review app is deployed, GitLab automatically shows a "View app" button in the merge request, next to the pipeline status in the MR widget. It links to the url from your environment block.
You don’t configure this separately. Deploy to a review/$CI_COMMIT_REF_SLUG environment with a URL, and the button appears automatically. Anyone viewing the MR can open the live app in one click.
The link updates on every new commit to the MR branch. If you push a fix at 2pm, the review app redeploys and the link opens the updated version. Reviewers don’t need to know about branches, environments, or Kubernetes namespaces. They click a link.
Monitoring and Deployment History
GitLab tracks every deployment. Under Deployments > Environments, select any environment and click the deployment history. You’ll see:
- Every deployment, with the commit SHA and pipeline link
- Who triggered each deployment
- When it ran and how long it took
- Whether it succeeded or failed
- The deployed commit’s message and author
This is your audit trail. When something breaks in production and you need to know what changed, you look at the deployment history. Find the last healthy deployment. Compare the commit SHAs. That’s your change window.
The rollback button is on every successful deployment row. Click it, GitLab runs the deploy job from that commit, and your environment is back to its previous state. The process takes as long as your deploy job takes. For ECS, typically under two minutes. For Kubernetes, typically under a minute.
What to Check Before Enabling Review Apps
A few things to sort out before you turn this on:
DNS: You need *.review.example.com pointing to your ingress or load balancer. If you’re using AWS Route 53, create a wildcard CNAME record.
TLS: Your ingress controller needs a wildcard certificate for *.review.example.com. Use cert-manager with Let’s Encrypt for automatic certificate provisioning. A wildcard cert covers all review app subdomains without manual renewal.
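With cert-manager installed, the wildcard certificate might look like the sketch below. The ClusterIssuer name letsencrypt-dns is hypothetical, and it must use a DNS-01 solver — Let's Encrypt only issues wildcard certificates via DNS-01 challenges:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: review-wildcard
  namespace: ingress-nginx        # namespace of your ingress controller
spec:
  secretName: review-wildcard-tls # where cert-manager stores the issued cert
  issuerRef:
    name: letsencrypt-dns         # hypothetical ClusterIssuer with a DNS-01 solver
    kind: ClusterIssuer
  dnsNames:
    - "*.review.example.com"
```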
Resource limits: Review app namespaces should have ResourceQuotas. Otherwise, a branch that accidentally creates a large Deployment will consume cluster resources until someone notices.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: review-quota
  namespace: review-feature-my-branch
spec:
  hard:
    requests.cpu: "500m"
    requests.memory: "512Mi"
    limits.cpu: "1000m"
    limits.memory: "1Gi"
    pods: "5"
```
Apply this quota after creating the namespace in your deploy script.
Database isolation: Review apps sharing a single database cause flaky tests and broken reviews. Either use a separate review database per branch (costly) or run a containerized database as a sidecar deployment in the review namespace (better).
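A minimal sketch of the per-namespace database option — a throwaway Postgres deployed alongside the app in the review namespace. The credentials are placeholders, and the data is ephemeral, which is usually acceptable for review apps:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: review-db
  namespace: review-feature-my-branch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: review-db
  template:
    metadata:
      labels:
        app: review-db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: review-only   # placeholder only; never a production secret
          ports:
            - containerPort: 5432
```

The app's DATABASE_URL then points at the review-db Service inside the same namespace, and the whole thing is deleted with the namespace when the MR closes.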
Authentication bypass: If your app requires login, reviewers can’t test it without credentials. Either create a shared review account in your review environment or add a dev-only bypass that skips auth when NODE_ENV=review.
Real Impact
After setting this up on a three-service project with a React frontend, a Node API, and a background worker, here’s what changed:
MR review time dropped because reviewers could see changes immediately instead of asking engineers to explain what the diff did. Product review became possible — non-engineers could test features without a local environment setup.
We caught layout bugs that code review missed. We caught state management issues that only appeared when clicking through the actual UI. We found one regression that only manifested when JavaScript loaded slowly on a throttled connection — something nobody would have noticed reading code.
The infrastructure cost for review apps on Kubernetes was about $40/month for a team of 12 engineers with roughly 20 open MRs at any time. The auto_stop_in: 2 days kept idle environments from accumulating.
Setup took one afternoon. The wildcard DNS and wildcard certificate took most of that time. The GitLab CI changes were straightforward once the cluster was ready.
Putting It Together
Environments give GitLab visibility into your deployments. Tiers organize them. Protected environments add approval gates. Freeze windows block deployments during sensitive periods. Review apps make every MR testable without any setup.
The complete pattern: MRs get review apps via dynamic review/$CI_COMMIT_REF_SLUG environments. Main branch deployments go to staging manually. Production requires staging to succeed and is protected with required approvals. Rollbacks are re-deploys of previous successful pipeline jobs. Everything is tracked in the Deployments page with full history.
See GitLab CI Build Docker Image and Push to ECR for the image build step that feeds into these deployments. See GitLab Elastic Beanstalk Deploy if you’re on Elastic Beanstalk instead of ECS or Kubernetes. See GitLab ArgoCD GitOps EKS 2026 for a GitOps approach to the production deployment side of this pipeline. See GitLab CI Rules for fine-grained control over which jobs run in which scenarios.
The review app URL in a merge request is a small thing that makes a big difference. Engineers stop writing review comments like “I think this will look right on mobile.” They open the review app, check on mobile, and leave a comment with a screenshot. That’s the right kind of code review.