GitLab CI Services: Run Databases in Your Pipeline Tests
The first time I tried running integration tests in GitLab CI, I hardcoded a database connection to localhost and wondered why nothing worked. The job would spin up, find no database on port 5432, and crash in the first second. I was treating CI like my laptop. That was the problem.
GitLab CI runs your jobs inside containers. Your test suite can’t reach a database running on your laptop. It can’t reach anything outside the runner unless you bring it in. That’s exactly what services do: they spin up additional containers alongside your job container, on the same internal network, ready before your job starts running commands.
This post covers how services work, how to configure PostgreSQL, Redis, and Elasticsearch for real test scenarios, and how to avoid the mistakes that waste an hour of your afternoon.
What GitLab CI Services Actually Are
A service in GitLab CI is a Docker container that runs alongside your job container for the duration of that job. When the job starts, GitLab spins up the service containers first. They share the same network namespace as the job. When the job finishes, they get torn down.
The key distinction from just running a container manually in your script: services are ready before your script block runs. GitLab pulls the image, starts the container, and waits for it to be reachable before handing control to your commands. For databases, that removes the race condition of trying to connect to something that hasn’t finished initializing.
Services are defined per job, not globally. You can have different databases for different jobs in the same pipeline. Your unit tests don’t need a database at all. Your integration tests get PostgreSQL. Your search tests get Elasticsearch. Each job only pays for what it uses.
The networking model is simple: the service container’s hostname defaults to the image name with certain characters replaced. If your service image is postgres:16, your code connects to host postgres. If it’s redis:7-alpine, the host is redis. You can override this with an explicit alias, which I’ll show later.
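The derivation can be sketched in a few lines. This is a simplified approximation of the runner's rule, not the exact implementation — the real runner also generates a second alias with slashes replaced by double underscores, and handles registry ports:

```python
def default_service_alias(image: str) -> str:
    """Approximate how GitLab CI derives a service hostname from an image name.

    Simplified sketch: the real runner also creates a second alias with
    slashes replaced by double underscores, and handles registry ports.
    """
    # Drop the tag: "postgres:16" -> "postgres"
    name = image.rsplit(":", 1)[0]
    # Slashes in a repository path are not valid in hostnames,
    # so they are replaced with dashes: "team/pg" -> "team-pg"
    return name.replace("/", "-")

print(default_service_alias("postgres:16"))     # postgres
print(default_service_alias("redis:7-alpine"))  # redis
```

Run it against a private-registry image and you can see why an explicit alias is worth setting: `my-registry.example.com/team/postgres-custom:latest` turns into `my-registry.example.com-team-postgres-custom`.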
PostgreSQL Service: The Full Setup
Here is the minimum viable PostgreSQL service configuration:
```yaml
test:integration:
  image: python:3.12-slim
  services:
    - postgres:16
  variables:
    POSTGRES_DB: testdb
    POSTGRES_USER: testuser
    POSTGRES_PASSWORD: testpass
    DATABASE_URL: "postgresql://testuser:testpass@postgres:5432/testdb"
  script:
    - pip install -r requirements.txt
    - python -m pytest tests/integration/
```
The variables you set in the job are passed to both the job container and the service containers. POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD are the environment variables the official PostgreSQL image reads to initialize itself. Your DATABASE_URL variable uses postgres as the hostname — the default alias derived from the image name.
One thing that tripped me up early: the DATABASE_URL variable needs to match exactly what the service container was told to create. If you set POSTGRES_DB: testdb but connect to myapp_test, you'll get a "database does not exist" error. Keep them consistent.
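One way to keep them from drifting apart is to build the URL from the same variables the service container reads, instead of writing the values twice. A minimal sketch — database_url_from_env is a hypothetical helper, not part of GitLab or Django:

```python
import os

def database_url_from_env(host: str = "postgres", port: int = 5432) -> str:
    """Build a PostgreSQL URL from the same variables the service reads."""
    user = os.environ.get("POSTGRES_USER", "postgres")
    password = os.environ["POSTGRES_PASSWORD"]  # the image requires this one
    # POSTGRES_DB falls back to the user name, mirroring the image's behavior
    db = os.environ.get("POSTGRES_DB", user)
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

os.environ.update({"POSTGRES_USER": "testuser",
                   "POSTGRES_PASSWORD": "testpass",
                   "POSTGRES_DB": "testdb"})
print(database_url_from_env())
# postgresql://testuser:testpass@postgres:5432/testdb
```

With this, the job's variables block only sets the POSTGRES_* values, and the mismatch class of error goes away.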
For a Django app, the variable setup looks slightly different because Django reads database config from settings.py:
```yaml
variables:
  POSTGRES_DB: django_test
  POSTGRES_USER: django
  POSTGRES_PASSWORD: secret
  DB_HOST: postgres
  DB_PORT: "5432"
  DB_NAME: django_test
  DB_USER: django
  DB_PASSWORD: secret
```
Then in settings.py:
```python
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "django_test"),
        "USER": os.environ.get("DB_USER", "django"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "secret"),
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```
Notice DB_HOST is set to postgres in CI, but defaults to localhost for local development. That one environment variable handles both environments without duplicating your settings file.
Redis Service: Cache and Queue Testing
Redis needs almost no configuration because it starts without authentication by default:
```yaml
test:cache:
  image: python:3.12-slim
  services:
    - redis:7-alpine
  variables:
    REDIS_URL: "redis://redis:6379/0"
  script:
    - pip install -r requirements.txt
    - python -m pytest tests/cache/
```
The hostname is redis — derived from the image name. The Alpine variant keeps the runner's memory usage lower, which matters on shared runners with tight limits.
If your app uses Redis for both caching and Celery task queues, you can use different database numbers on the same instance:
```yaml
variables:
  CELERY_BROKER_URL: "redis://redis:6379/0"
  CELERY_RESULT_BACKEND: "redis://redis:6379/1"
  CACHE_URL: "redis://redis:6379/2"
```
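The trailing path segment is the logical database number. If you want to sanity-check the split, the standard library parses these URLs fine — redis_db_number here is an illustrative helper, not part of redis-py:

```python
from urllib.parse import urlparse

def redis_db_number(url: str) -> int:
    """Extract the logical database number from a redis:// URL (0 if omitted)."""
    path = urlparse(url).path.lstrip("/")
    return int(path) if path else 0

for role, url in [("broker", "redis://redis:6379/0"),
                  ("results", "redis://redis:6379/1"),
                  ("cache", "redis://redis:6379/2")]:
    # All three point at the same service container, just different databases
    print(role, redis_db_number(url))
```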
For production-like setups that require Redis authentication, start the server with a password by overriding the container's command:
```yaml
services:
  - name: redis:7-alpine
    command: ["redis-server", "--requirepass", "testpass"]
variables:
  REDIS_URL: "redis://:testpass@redis:6379/0"
```
The command key overrides the container’s default startup command, same as Docker’s CMD override.
Multiple Services in One Job
A single job can have as many services as it needs. The full Django integration test job with PostgreSQL and Redis looks like this:
```yaml
test:integration:
  image: python:3.12-slim
  services:
    - postgres:16
    - redis:7-alpine
  variables:
    POSTGRES_DB: django_test
    POSTGRES_USER: django
    POSTGRES_PASSWORD: secret
    DATABASE_URL: "postgresql://django:secret@postgres:5432/django_test"
    REDIS_URL: "redis://redis:6379/0"
    CELERY_BROKER_URL: "redis://redis:6379/0"
    DJANGO_SETTINGS_MODULE: "myapp.settings.test"
  script:
    - apt-get update -qq && apt-get install -y -qq libpq-dev gcc
    - pip install -r requirements.txt
    - python manage.py migrate --run-syncdb
    - python -m pytest tests/ -v --tb=short
```
All service containers are started before your script block runs, so you don't need to stage or order them yourself. Whether they are ready to accept connections is a separate question, covered below.
Service Aliases and Networking
The default hostname for a service comes from its image name. postgres:16 becomes postgres. redis:7-alpine becomes redis. But my-registry.example.com/team/postgres-custom:latest becomes something ugly and unpredictable.
Use explicit aliases whenever the image name is not a clean hostname:
```yaml
services:
  - name: my-registry.example.com/team/postgres-custom:latest
    alias: postgres
  - name: my-registry.example.com/team/redis-custom:7
    alias: redis
```
Aliases are also useful when running multiple instances of the same image in one job — for example, a primary and replica PostgreSQL setup for replication tests:
```yaml
services:
  - name: postgres:16
    alias: postgres-primary
  - name: postgres:16
    alias: postgres-replica
```
Each instance gets its own container, its own hostname, and its own data. You connect to them as postgres-primary:5432 and postgres-replica:5432 respectively.
Health Checks and Startup Timing
GitLab starts service containers before your script, but “started” is not the same as “ready to accept connections.” PostgreSQL in particular takes a few seconds to initialize its data directory, run initdb, and begin listening on port 5432. If your test script tries to connect immediately, it may fail.
The official PostgreSQL image includes pg_isready. Use it to wait:
```yaml
script:
  - until pg_isready -h postgres -U "$POSTGRES_USER"; do sleep 1; done
  - python manage.py migrate
  - python -m pytest tests/
```
For setups where pg_isready is not available in your job container (it’s not installed in python:3.12-slim), install it or use a TCP check:
```yaml
script:
  - apt-get update -qq && apt-get install -y -qq postgresql-client
  - until pg_isready -h postgres -U "$POSTGRES_USER"; do sleep 1; done
  - python -m pytest tests/
```
Alternatively, the wait-for-it pattern works for any TCP service:
```yaml
before_script:
  - apt-get update -qq && apt-get install -y -qq netcat-openbsd
  - until nc -z postgres 5432; do echo "waiting for postgres..."; sleep 1; done
  - until nc -z redis 6379; do echo "waiting for redis..."; sleep 1; done
```
In practice, PostgreSQL on a healthy runner with a cached Docker image is ready in under five seconds. The sleep loop rarely runs more than two iterations. But skipping it means an occasional flaky test that fails at midnight and wastes someone’s morning.
Complete Django Example
Here is a full .gitlab-ci.yml for a Django project with PostgreSQL and Redis, structured for real-world use:
```yaml
default:
  image: python:3.12-slim

stages:
  - test
  - build

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.pip-cache"
  PYTHONDONTWRITEBYTECODE: "1"
  PYTHONUNBUFFERED: "1"

.test-base:
  stage: test
  cache:
    paths:
      - .pip-cache/
  before_script:
    - apt-get update -qq && apt-get install -y -qq libpq-dev gcc postgresql-client
    - pip install -r requirements.txt --quiet

test:unit:
  extends: .test-base
  script:
    - python -m pytest tests/unit/ -v --tb=short -x

test:integration:
  extends: .test-base
  services:
    - name: postgres:16
      alias: postgres
    - name: redis:7-alpine
      alias: redis
  variables:
    POSTGRES_DB: django_test
    POSTGRES_USER: django_ci
    POSTGRES_PASSWORD: ci_secret
    DATABASE_URL: "postgresql://django_ci:ci_secret@postgres:5432/django_test"
    REDIS_URL: "redis://redis:6379/0"
    CELERY_BROKER_URL: "redis://redis:6379/0"
    DJANGO_SETTINGS_MODULE: "myapp.settings.test"
    SECRET_KEY: "ci-test-secret-key-not-for-production"
  script:
    - until pg_isready -h postgres -U "$POSTGRES_USER"; do sleep 1; done
    - python manage.py migrate --run-syncdb --no-input
    - python -m pytest tests/integration/ -v --tb=short

build:image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```
The .test-base template handles the repeated before_script and cache config. Unit tests run without any services — faster, cheaper on shared runner minutes. Integration tests bring in both databases. The build job only runs on the default branch, keeping feature branches from burning Docker build minutes on every push.
Environment Variables for Service Configuration
All environment variables in a job’s variables: block are injected into every container in that job — the main container and all service containers. This is how you configure services without baking credentials into the image.
For the PostgreSQL image specifically, these are the variables it reads:
- POSTGRES_PASSWORD — required, no default
- POSTGRES_USER — defaults to postgres
- POSTGRES_DB — defaults to the value of POSTGRES_USER
- POSTGRES_INITDB_ARGS — passed directly to initdb
For Redis with a custom config file, you’d mount it via volumes — but that requires a specific GitLab Runner configuration with Docker executor volumes enabled, which shared runners typically do not allow. On shared runners, stick to command overrides.
For Elasticsearch, the critical variable is discovery.type=single-node. Without it, Elasticsearch tries to form a cluster, waits for other nodes, and the service never becomes healthy:
```yaml
services:
  - name: elasticsearch:8.13.0
    alias: elasticsearch
variables:
  discovery.type: single-node
  xpack.security.enabled: "false"
  ES_JAVA_OPTS: "-Xms512m -Xmx512m"
```
The ES_JAVA_OPTS limit is important on shared runners. Elasticsearch defaults to using half the available system memory. On a shared runner with 2GB RAM, that means Elasticsearch grabs 1GB and your job container gets whatever is left.
Common Pitfalls
Service startup timing. Service containers are started before any of your commands run — including before_script — but "started" is not "ready". There is no before_services hook where you could put a wait; the ready-check loop belongs in before_script or at the top of script. If your before_script runs migrations before the database is ready, it fails.
Hostname confusion. Your code’s database host is the service alias, not localhost and not 127.0.0.1. This surprises developers every time they move tests from local Docker Compose to GitLab CI. On your laptop, Compose puts everything on the same network and you use the service name. In GitLab CI, it’s the same idea — but the service name comes from the image name or the explicit alias, not from a Compose service block you named yourself.
Memory limits on shared runners. GitLab’s shared runners on GitLab.com have 3.75GB of RAM per job. Running PostgreSQL, Redis, and Elasticsearch in the same job pushes past that limit. Elasticsearch alone wants 1GB minimum. If you need all three, either split them into separate jobs or use a self-hosted runner with more memory. See GitLab Runner Handbook for details on sizing self-hosted runners.
Image pull limits. Docker Hub rate limits unauthenticated pulls. On shared runners, your job might fail with a 429 Too Many Requests error pulling postgres:16 during peak hours. Pull from the GitLab Container Registry mirror or authenticate to Docker Hub using a GitLab CI variable for the token.
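If your project lives in a GitLab group, one mitigation is GitLab's Dependency Proxy, which caches Docker Hub images on GitLab's side. Assuming it is enabled for your group, the predefined CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX variable points at the cache:

```yaml
services:
  - name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/postgres:16
    alias: postgres
```

The explicit alias matters here: the prefixed image name would otherwise derive an unusable hostname.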
Persistent data between jobs. Services are ephemeral. Every job gets a fresh container with an empty database. This is the correct behavior for tests — you want a clean state. If you find yourself pre-loading data into a service, do it in your test fixtures or migrations, not by sharing a volume between jobs.
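The fixture approach can be sketched like this, with sqlite3 standing in for the Postgres service so the example runs anywhere — in CI you would open a connection to DATABASE_URL instead:

```python
import sqlite3

def fresh_seeded_db() -> sqlite3.Connection:
    """Create and seed a clean database for one test run.

    sqlite3 stands in for the Postgres service here. The point is that
    seeding happens per job, in code, not via a volume shared between jobs.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    conn.commit()
    return conn

conn = fresh_seeded_db()
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 2
```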
Service logs on failure. When a service fails to start, GitLab does not automatically show you its logs, and docker logs won't work from inside the job container. The practical workaround is a before_script health check that times out with a meaningful error message rather than a cryptic connection refused.
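A check along those lines can be a few lines of Python. This is a sketch — wait_for_service is my name for it, and the error text is illustrative:

```python
import socket
import time

def wait_for_service(host: str, port: int, timeout: float = 30.0) -> None:
    """Poll a TCP port until it accepts connections, failing loudly on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # port is accepting connections
        except OSError:
            if time.monotonic() >= deadline:
                raise RuntimeError(
                    f"service {host}:{port} not reachable after {timeout}s -- "
                    "check the service image name, alias, and job variables"
                )
            time.sleep(1)
```

Call it as `wait_for_service("postgres", 5432)` at the top of your test entrypoint; a failed service then produces an error naming the host and port instead of a stack trace from deep inside your database driver.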
Services vs. Docker Compose in CI
The question I get asked most often: why not just use Docker Compose in the pipeline and manage services yourself?
You can. It works. But services are cleaner for most cases.
With services, GitLab handles the lifecycle. Images are pulled in parallel with your job image. Containers are networked automatically. Cleanup happens automatically. You write less YAML and fewer shell commands.
With Docker Compose, you get more control. You can define health checks with precise conditions, set restart policies, share named volumes between containers, and configure complex multi-container topologies that services can’t handle. You can also reference your existing docker-compose.yml instead of duplicating configuration.
The trade-off is that Docker Compose requires Docker-in-Docker (docker:dind service) or the Docker socket to be available in the job container. On shared runners, Docker socket mounting is disabled for security reasons. DinD works but adds overhead and complexity. See Build Docker on GitLab and Push to ECR for how to set up DinD correctly when you need it.
For standard integration tests against PostgreSQL and Redis, use services. They cover 90% of real-world needs with minimal configuration. Reserve Docker Compose for the cases where you genuinely need the extra control — multi-container apps that depend on each other, complex networking setups, or when you want to reuse an existing Compose file directly.
GitLab CI services solve a concrete problem: your tests need databases, your pipeline runs in containers, and those two facts need to coexist cleanly. The service model handles the plumbing so you don’t have to write startup scripts, manage container lifecycles, or hardcode connection strings for a database that doesn’t exist until the pipeline runs.
Start with PostgreSQL, get your integration tests passing, then add Redis when your caching layer needs coverage. Keep the ready-check loop in place even when it feels unnecessary. The five lines it adds to your pipeline have prevented more 2am pages than any monitoring alert I’ve ever written.
For more on managing variables across environments, see GitLab CI Variables. For pipeline artifacts and test reports, see GitLab CI Artifacts. For controlling which jobs run under which conditions, see GitLab CI Rules.