GitLab CI Artifacts
Artifacts let you persist files between CI/CD jobs. If your pipeline produces build outputs, test reports, or any other files you need later, GitLab stores them as artifacts. You can then download them through the UI or the API.
The artifacts keyword
In your .gitlab-ci.yml, add an artifacts block to tell GitLab which files to persist:
build_app:
  stage: build
  script: make build:app
  artifacts:
    paths:
      - bin/
The paths list is relative to the project directory. You can use wildcards and glob patterns here.
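For instance, paths accepts glob patterns as well as plain directories. The job and file names below are illustrative:

```yaml
build_app:
  stage: build
  script: make build:app
  artifacts:
    paths:
      - bin/               # a whole directory
      - "dist/**/*.js"     # recursive glob
      - "logs/*.log"       # wildcard within one directory
```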
How artifacts work across stages
By default, every job downloads artifacts from all jobs in earlier stages. If you want to limit this, use dependencies:
build_app:
  stage: build
  script: make build:app
  artifacts:
    paths:
      - bin/

test:
  stage: test
  script: make test:app
  dependencies:
    - build_app
Here, the test job only downloads artifacts from build_app, skipping anything else in earlier stages.
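An empty dependencies list goes one step further and skips artifact download entirely, which speeds up jobs that do not need any build outputs. The deploy job here is illustrative:

```yaml
deploy:
  stage: deploy
  script: ./deploy.sh
  dependencies: []   # download no artifacts at all
```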
With the needs keyword, you can be even more precise about which jobs provide artifacts to which:
build_app:
  stage: build
  script: make build:app
  artifacts:
    paths:
      - bin/

test:
  stage: test
  script: make test:app
  needs:
    - build_app
Artifacts are collected from successful jobs by default and restored after any cache operations.
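The expanded form of needs also lets you keep the dependency for ordering while skipping the artifact download. A sketch, reusing the job names above:

```yaml
test:
  stage: test
  script: make test:app
  needs:
    - job: build_app
      artifacts: false   # run after build_app, but don't fetch its artifacts
```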
Setting expiration
You do not want old artifacts filling up your storage forever. Use expire_in to auto-delete them:
build_job:
  script: mvn package -U
  artifacts:
    paths:
      - target/*.war
    expire_in: 1 week
The default expiration is set at the instance level. If expire_in is not specified, GitLab falls back to that default (typically 30 days on self-managed instances; GitLab.com has its own policy). You can also set expire_in: never if a specific artifact should persist indefinitely.
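A sketch of both ends of the spectrum, using hypothetical job names:

```yaml
nightly_build:
  script: make build:app
  artifacts:
    paths:
      - bin/
    expire_in: 3 days      # short-lived intermediate output

release_build:
  script: make release
  artifacts:
    paths:
      - release/
    expire_in: never       # keep indefinitely
```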
Artifact storage location (self-managed)
GitLab stores artifacts on the server's filesystem by default. On self-managed Omnibus installations the path is /var/opt/gitlab/gitlab-rails/shared/artifacts. To change it, update gitlab_rails['artifacts_path'] in /etc/gitlab/gitlab.rb and run sudo gitlab-ctl reconfigure for the change to take effect.
Modern GitLab deployments often use object storage (S3, GCS, Azure Blob) instead of local disk, which scales better. Check GitLab’s documentation for your version to see how to configure object storage for artifacts.
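For Omnibus installs, object storage for artifacts is configured in /etc/gitlab/gitlab.rb. The bucket name and credentials below are placeholders, and the exact keys can vary by GitLab version, so treat this as a sketch rather than a definitive configuration:

```ruby
# /etc/gitlab/gitlab.rb -- example S3-backed artifact storage (values are placeholders)
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = "gitlab-artifacts"
gitlab_rails['artifacts_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1',
  'aws_access_key_id' => 'REPLACE_ME',
  'aws_secret_access_key' => 'REPLACE_ME'
}
# Apply with: sudo gitlab-ctl reconfigure
```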
Downloading artifacts via API
You can fetch artifact files directly using the API. This streams the file from the job’s artifact archive:
GET /projects/:id/jobs/artifacts/:ref_name/raw/*artifact_path?job=name
Example:
curl --header "PRIVATE-TOKEN: <your-token>" \
--location "https://gitlab.example.com/api/v4/projects/55/jobs/artifacts/master/raw/path/to/file/post.pdf?job=pdf"
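To grab the whole artifact archive rather than a single file, there is also a download endpoint that returns a zip. The snippet below only assembles the URL; the host, project ID, ref, and job name are placeholders:

```shell
# Build the archive-download URL (all values below are placeholders)
GITLAB_HOST="https://gitlab.example.com"
PROJECT_ID=55
REF="main"
JOB="build_app"
URL="${GITLAB_HOST}/api/v4/projects/${PROJECT_ID}/jobs/artifacts/${REF}/download?job=${JOB}"
echo "$URL"
# Fetch it with your token and unzip:
# curl --header "PRIVATE-TOKEN: <your-token>" --output artifacts.zip "$URL"
# unzip artifacts.zip
```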
Note: parent-child pipelines handle artifact resolution hierarchically. If both parent and child pipelines have a job with the same name, the parent pipeline’s artifact takes precedence.
Size limits
The default maximum artifact size is 100 MB. You can change this at different levels:
- Instance level (Admin Area > Settings > CI/CD)
- Group level (Group > Settings > CI/CD)
- Project level (Project > Settings > CI/CD)
The UI path is generally: Settings → CI/CD → “Maximum artifact size” (the exact labeling varies slightly by GitLab version).
For very large artifacts, consider GitLab's object storage integration: local disk is bounded by the server's filesystem capacity, while object storage scales much further.
A note on the example config
The pre_build example in the original post had a small mismatch worth mentioning. It echoes variables into a file called variables but the paths list references vars_file. Make sure the filename in paths matches what your script actually creates.
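A minimal corrected sketch of such a job (the variable and filename are illustrative); the point is simply that the script writes to the same filename that paths lists:

```yaml
pre_build:
  stage: .pre
  script:
    - echo "BUILD_VERSION=1.0.0" > vars_file   # filename matches paths below
  artifacts:
    paths:
      - vars_file
```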
Artifacts become even more useful with parallel matrix builds, where each matrix job can produce its own artifact set, and with environment-based deployments, where artifacts flow from build stages directly into environment-scoped deploy jobs.
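A sketch of the matrix case, assuming a Makefile that accepts an ARCH variable; note that CI variables are expanded inside artifacts:paths, so each matrix job collects its own output directory:

```yaml
build:
  stage: build
  parallel:
    matrix:
      - ARCH: [amd64, arm64]
  script: make build ARCH=$ARCH
  artifacts:
    paths:
      - "bin/$ARCH/"
```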