  • For a quick introduction to GitLab CI/CD, follow the quick start guide .
  • For a collection of examples, see GitLab CI/CD Examples .
  • To view a large .gitlab-ci.yml file used in an enterprise, see the .gitlab-ci.yml file for gitlab .
  • When you are editing your .gitlab-ci.yml file, you can validate it with the CI Lint tool.

    If you are editing content on this page, follow the instructions for documenting keywords .

    Global keywords that configure pipeline behavior:

  • default : Custom default values for job keywords.
  • include : Import configuration from other YAML files.
  • stages : The names and order of the pipeline stages.
  • variables : Define CI/CD variables for all jobs in the pipeline.
  • workflow : Control what types of pipeline run.

    Job keywords that configure the behavior of individual jobs include:

  • dependencies : Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from.
  • environment : Name of an environment to which the job deploys.
  • except : Control when jobs are not created.
  • extends : Configuration entries that this job inherits from.
  • image : Use Docker images.
  • inherit : Select which global defaults all jobs inherit.
  • interruptible : Defines if a job can be canceled when made redundant by a newer run.
  • needs : Execute jobs earlier than the stage ordering.
  • only : Control when jobs are created.
  • pages : Upload the result of a job to use with GitLab Pages.
  • parallel : How many instances of a job should be run in parallel.
  • release : Instructs the runner to generate a release object.
  • resource_group : Limit job concurrency.
  • retry : When and how many times a job can be auto-retried in case of a failure.
  • rules : List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created.
  • script : Shell script that is executed by a runner.
  • secrets : The CI/CD secrets the job needs.
  • services : Use Docker services images.
  • stage : Defines a job stage.
  • tags : List of tags that are used to select a runner.
  • timeout : Define a custom job-level timeout that takes precedence over the project-wide setting.
  • trigger : Defines a downstream pipeline trigger.
  • variables : Define job variables on a job level.
  • when : When to run job.

    Use default to set global defaults for some keywords. Each default keyword is copied to every job that doesn't already have it defined. The following keywords can be defined in the default section:
  • after_script
  • artifacts
  • before_script
  • cache
  • hooks
  • image
  • interruptible
  • retry
  • services
  • timeout
  • Example of default :

    default:
      image: ruby:3.0
    rspec:
      script: bundle exec rspec
    rspec 2.7:
      image: ruby:2.7
      script: bundle exec rspec

    In this example, ruby:3.0 is the default image value for all jobs in the pipeline. The rspec 2.7 job does not use the default, because it overrides the default with a job-specific image section.

    Additional details :

  • When the pipeline is created, each default is copied to all jobs that don't have that keyword defined.
  • If a job already has one of the keywords configured, the configuration in the job takes precedence and is not replaced by the default.
  • Control inheritance of default keywords in jobs with inherit:default .
  • Moved to GitLab Free in 11.4.

    Use include to include external YAML files in your CI/CD configuration. You can split one long .gitlab-ci.yml file into multiple files to increase readability, or reduce duplication of the same configuration in multiple places.

    You can also store template files in a central repository and include them in projects.

    The include files are:

  • Merged with those in the .gitlab-ci.yml file.
  • Always evaluated first and then merged with the content of the .gitlab-ci.yml file, regardless of the position of the include keyword.
    The time limit to resolve all include files is 30 seconds.

    Keyword type : Global keyword.

    Possible inputs : The include subkeys:

  • include:local
  • include:project
  • include:remote
  • include:template
  • Additional details :

  • Only certain CI/CD variables can be used with include keywords.
  • Use merging to customize and override included CI/CD configurations with local configuration.
  • You can override included configuration by having the same job name or global keyword in the .gitlab-ci.yml file. The two configurations are merged together, and the configuration in the .gitlab-ci.yml file takes precedence over the included configuration (see the sketch after this list).
  • If you rerun a:
  • Job, the include files are not fetched again. All jobs in a pipeline use the configuration fetched when the pipeline was created. Any changes to the source include files do not affect job reruns.
  • Pipeline, the include files are fetched again. If they changed after the last pipeline run, the new pipeline uses the changed configuration.
  • You can have up to 150 includes per pipeline by default, including nested . Additionally:
  • In GitLab 16.0 and later, self-managed users can change the maximum includes value.
  • In GitLab 15.10 and later, you can have up to 150 includes. In nested includes, the same file can be included multiple times, but duplicated includes count towards the limit.
  • From GitLab 14.9 to GitLab 15.9, you can have up to 100 includes. The same file can be included multiple times in nested includes, but duplicates are ignored.
  • In GitLab 14.9 and earlier, you can have up to 100 includes, but the same file cannot be included multiple times.
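    A minimal sketch of the override behavior described above, assuming a hypothetical included file templates/common.yml that defines a smoke-test job:

    # templates/common.yml (hypothetical included file)
    smoke-test:
      stage: test
      script: ./run-smoke-tests.sh

    # .gitlab-ci.yml
    include:
      - local: 'templates/common.yml'

    smoke-test:
      variables:
        SMOKE_SUITE: quick   # merged with the included smoke-test job; keys defined here take precedence

    In the merged result, smoke-test keeps the stage and script from the included file and gains the locally defined variable.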
  • Related topics :

  • Use variables with include .
  • Use rules with include .
    Use include:local to include a file that is in the same repository as the .gitlab-ci.yml file.

    Possible inputs : A full path relative to the root directory ( / ). The YAML file must have the extension .yml or .yaml . Additionally:

  • You can use * and ** wildcards in the file path .
  • You can use certain CI/CD variables .
  • Example of include:local :

    include:
      - local: '/templates/.gitlab-ci-template.yml'

    You can also use shorter syntax to define the path:

    include: '.gitlab-ci-production.yml'

    Additional details :

  • The .gitlab-ci.yml file and the local file must be on the same branch.
  • You can't include local files through Git submodules paths.
  • All nested includes are executed in the scope of the project containing the configuration file with the include keyword, not the project running the pipeline. You can use local, project, remote, or template includes.
  • Including multiple files from the same project introduced in GitLab 13.6. Feature flag removed in GitLab 13.8.

    To include files from another private project on the same GitLab instance, use include:project and include:file .

    Keyword type : Global keyword.

    Possible inputs :

  • include:project : The full GitLab project path.
  • include:file : A full file path, or array of file paths, relative to the root directory ( / ). The YAML files must have the .yml or .yaml extension.
  • include:ref : Optional. The ref to retrieve the file from. Defaults to the HEAD of the project when not specified.
  • You can use certain CI/CD variables .
  • Example of include:project :

    include:
      - project: 'my-group/my-project'
        file: '/templates/.gitlab-ci-template.yml'
      - project: 'my-group/my-subgroup/my-project-2'
        file:
          - '/templates/.builds.yml'
          - '/templates/.tests.yml'

    You can also specify a ref :

    include:
      - project: 'my-group/my-project'
        ref: main                                      # Git branch
        file: '/templates/.gitlab-ci-template.yml'
      - project: 'my-group/my-project'
        ref: v1.0.0                                    # Git Tag
        file: '/templates/.gitlab-ci-template.yml'
      - project: 'my-group/my-project'
        ref: 787123b47f14b552955ca2786bc9542ae66fee5b  # Git SHA
        file: '/templates/.gitlab-ci-template.yml'

    Additional details :

  • All nested includes are executed in the scope of the project containing the configuration file with the nested include keyword. You can use local (relative to the project containing the configuration file with the include keyword), project , remote , or template includes.
  • When the pipeline starts, the .gitlab-ci.yml file configuration included by all methods is evaluated. The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to the referenced .gitlab-ci.yml file configuration until the next pipeline starts.
  • When you include a YAML file from another private project, the user running the pipeline must be a member of both projects and have the appropriate permissions to run pipelines. A not found or access denied error may be displayed if the user does not have access to any of the included files.
    Use include:remote with a full URL to include a file from a different location.

    Possible inputs : A full URL accessible by an HTTP/HTTPS GET request. Authentication with the remote URL is not supported. The YAML file must have the extension .yml or .yaml . Additionally:

  • You can use certain CI/CD variables .
  • Example of include:remote :

    include:
      - remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'

    Additional details :

  • All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the include section of nested includes.
  • Be careful when including a remote CI/CD configuration file. No pipelines or notifications trigger when external CI/CD configuration files change. From a security perspective, this is similar to pulling a third-party dependency.
  • Use include:template to include .gitlab-ci.yml templates .

    Keyword type : Global keyword.

    Possible inputs :

    A CI/CD template :

  • Templates are stored in lib/gitlab/ci/templates . Not all templates are designed to be used with include:template , so check template comments before using one.
  • You can use certain CI/CD variables .
  • Example of include:template :

    # File sourced from the GitLab template collection
    include:
      - template: Auto-DevOps.gitlab-ci.yml

    Multiple include:template files:

    include:
      - template: Android-Fastlane.gitlab-ci.yml
      - template: Auto-DevOps.gitlab-ci.yml

    Additional details :

  • All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the include section of nested includes.
  • Use stages to define stages that contain groups of jobs. Use stage in a job to configure the job to run in a specific stage.

    If stages is not defined in the .gitlab-ci.yml file, the default pipeline stages are:

  • .pre
  • build
  • test
  • deploy
  • .post
  • The order of the items in stages defines the execution order for jobs:

  • Jobs in the same stage run in parallel.
  • Jobs in the next stage run after the jobs from the previous stage complete successfully.
  • If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage. .pre and .post stages can be used in required pipeline configuration to define compliance jobs that must run before or after project pipeline jobs.

    Keyword type : Global keyword.

    Example of stages :

    stages:
      - build
      - test
      - deploy

    In this example:

  • All jobs in build execute in parallel.
  • If all jobs in build succeed, the test jobs execute in parallel.
  • If all jobs in test succeed, the deploy jobs execute in parallel.
  • If all jobs in deploy succeed, the pipeline is marked as passed .
  • If any job fails, the pipeline is marked as failed and jobs in later stages do not start. Jobs in the current stage are not stopped and continue to run.

    Additional details :

  • If a job does not specify a stage , the job is assigned the test stage.
  • If a stage is defined but no jobs use it, the stage is not visible in the pipeline, which can help compliance pipeline configurations :
  • Stages can be defined in the compliance configuration but remain hidden if not used.
  • The defined stages become visible when developers use them in job definitions.
  • Related topics :

  • To make a job start earlier and ignore the stage order, use the needs keyword.
  • Introduced in GitLab 12.5

    Use workflow to control pipeline behavior.

    Related topics :

  • workflow: rules examples
  • Switch between branch pipelines and merge request pipelines
  • Introduced in GitLab 15.5 with a flag named pipeline_name . Disabled by default. Enabled on GitLab.com and self-managed in GitLab 15.7. Generally available in GitLab 15.8. Feature flag pipeline_name removed.

    You can use name in workflow: to define a name for pipelines.

    All pipelines are assigned the defined name. Any leading or trailing spaces in the name are removed.

    Possible inputs :

  • A string.
  • CI/CD variables .
  • A combination of both.
  • Examples of workflow:name :

    A simple pipeline name with a predefined variable:

    workflow:
      name: 'Pipeline for branch: $CI_COMMIT_BRANCH'

    A configuration with different pipeline names depending on the pipeline conditions:

    variables:
      PROJECT1_PIPELINE_NAME: 'Default pipeline name'  # A default is not required.
    workflow:
      name: '$PROJECT1_PIPELINE_NAME'
      rules:
        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
          variables:
            PROJECT1_PIPELINE_NAME: 'MR pipeline: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
        - if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-in-ruby3/'
          variables:
            PROJECT1_PIPELINE_NAME: 'Ruby 3 pipeline'

    Additional details :

  • If the name is an empty string, the pipeline is not assigned a name. A name consisting of only CI/CD variables could evaluate to an empty string if all the variables are also empty.
  • workflow:rules:variables become global variables available in all jobs, including trigger jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
  • Use a unique variable name in every project's pipeline configuration, like PROJECT1_PIPELINE_NAME .
  • Use inherit:variables in the trigger job and list the exact variables you want to forward to the downstream pipeline.
  • The rules keyword in workflow is similar to rules defined in jobs , but controls whether or not a whole pipeline is created.

    When no rules evaluate to true, the pipeline does not run.

    Possible inputs : You can use some of the same keywords as job-level rules :

  • rules: if
  • rules: changes
  • rules: exists
  • when , can only be always or never when used with workflow .
  • variables .

    Example of workflow:rules :

    workflow:
      rules:
        - if: $CI_COMMIT_TITLE =~ /-draft$/
          when: never
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

    In this example, pipelines run if the commit title (first line of the commit message) does not end with -draft and the pipeline is for either:

  • A merge request
  • The default branch.
  • Additional details :

  • If your rules match both branch pipelines (other than the default branch) and merge request pipelines, duplicate pipelines can occur.
  • Related topics :

  • You can use the workflow:rules templates to import a preconfigured workflow: rules entry.
  • Common if clauses for workflow:rules .
  • Use rules to run merge request pipelines .

    Use variables in workflow:rules to define variables for specific pipeline conditions.

    When the condition matches, the variable is created and can be used by all jobs in the pipeline. If the variable is already defined at the global level, the workflow variable takes precedence and overrides the global variable.

    Keyword type : Global keyword.

    Possible inputs : Variable name and value pairs:

  • The name can use only numbers, letters, and underscores ( _ ).
  • The value must be a string.
  • Example of workflow:rules:variables :

    variables:
      DEPLOY_VARIABLE: "default-deploy"
    workflow:
      rules:
        - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
          variables:
            DEPLOY_VARIABLE: "deploy-production"  # Override globally-defined DEPLOY_VARIABLE
        - if: $CI_COMMIT_REF_NAME =~ /feature/
          variables:
            IS_A_FEATURE: "true"                  # Define a new variable.
        - when: always                            # Run the pipeline in other cases
    job1:
      variables:
        DEPLOY_VARIABLE: "job1-default-deploy"
      rules:
        - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
          variables:                                   # Override DEPLOY_VARIABLE defined
            DEPLOY_VARIABLE: "job1-deploy-production"  # at the job level.
        - when: on_success                             # Run the job in other cases
      script:
        - echo "Run script with $DEPLOY_VARIABLE as an argument"
        - echo "Run another script if $IS_A_FEATURE exists"
    job2:
      script:
        - echo "Run script with $DEPLOY_VARIABLE as an argument"
        - echo "Run another script if $IS_A_FEATURE exists"

    When the branch is the default branch:

  • job1's DEPLOY_VARIABLE is job1-deploy-production .
  • job2's DEPLOY_VARIABLE is deploy-production .
  • When the branch is feature :

  • job1's DEPLOY_VARIABLE is job1-default-deploy , and IS_A_FEATURE is true .
  • job2's DEPLOY_VARIABLE is default-deploy , and IS_A_FEATURE is true .
  • When the branch is something else:

  • job1's DEPLOY_VARIABLE is job1-default-deploy .
  • job2's DEPLOY_VARIABLE is default-deploy .
  • Additional details :

    workflow:rules:variables become global variables available in all jobs, including trigger jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
  • Use unique variable names in every project's pipeline configuration, like PROJECT1_VARIABLE_NAME .
  • Use inherit:variables in the trigger job and list the exact variables you want to forward to the downstream pipeline (see the sketch after this list).
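    A minimal sketch of the second option, assuming a hypothetical downstream project my-group/my-downstream-project and the PROJECT1_PIPELINE_NAME variable from the earlier example:

    trigger-downstream:
      inherit:
        variables:
          - PROJECT1_PIPELINE_NAME   # forward only this variable to the downstream pipeline
      trigger: my-group/my-downstream-project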
    Use after_script to define an array of commands that run after each job, including failed jobs.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : An array including:

  • Single line commands.
  • Long commands split over multiple lines .
  • YAML anchors .

    CI/CD variables are supported .

    Example of after_script :

    job:
      script:
        - echo "An example script section."
      after_script:
        - echo "Execute this command after the `script` section completes."

    Additional details :

    Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. As a result, they:

  • Have the current working directory set back to the default (according to the variables which define how the runner processes Git requests ).
  • Don't have access to changes done by commands defined in the before_script or script , including:
  • Command aliases and variables exported in script scripts.
  • Changes outside of the working tree (depending on the runner executor), like software installed by a before_script or script script.
  • Have a separate timeout, which is hard-coded to 5 minutes .
  • Don't affect the job's exit code. If the script section succeeds and the after_script times out or fails, the job exits with code 0 ( Job Succeeded ).
  • If a job times out or is cancelled, the after_script commands do not execute. An issue exists to add support for executing after_script commands for timed-out or cancelled jobs.

    Related topics :

    Use after_script with default to define a default array of commands that should run after all jobs.
  • You can ignore non-zero exit codes .
  • Use color codes with after_script to make job logs easier to review.
  • Create custom collapsible sections to simplify job log output.

    Use allow_failure to determine whether a pipeline should continue running when a job fails.

    The default value of allow_failure is:

  • true for manual jobs .
  • false for jobs that use when: manual inside rules .
  • false in all other cases.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

    true or false .

    Example of allow_failure :

    job1:
      stage: test
      script:
        - execute_script_1
    job2:
      stage: test
      script:
        - execute_script_2
      allow_failure: true
    job3:
      stage: deploy
      script:
        - deploy_to_staging
      environment: staging

    In this example, job1 and job2 run in parallel:

  • If job1 fails, jobs in the deploy stage do not start.
  • If job2 fails, jobs in the deploy stage can still start.
  • Additional details :

  • You can use allow_failure as a subkey of rules .
  • If allow_failure: true is set, the job is always considered successful, and later jobs with when: on_failure don't start if this job fails.
  • You can use allow_failure: false with a manual job to create a blocking manual job . A blocked pipeline does not run any jobs in later stages until the manual job is started and completes successfully.
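    A minimal sketch of a blocking manual job, using a hypothetical deployment script:

    deploy_prod:
      stage: deploy
      script:
        - ./deploy-to-prod.sh   # hypothetical deployment script
      when: manual
      allow_failure: false      # the pipeline blocks here until this job is run and succeeds
      environment: production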
  • Use artifacts to specify which files to save as job artifacts . Job artifacts are a list of files and directories that are attached to the job when it succeeds, fails, or always .

    The artifacts are sent to GitLab after the job finishes. They are available for download in the GitLab UI if the size is smaller than the maximum artifact size .

    By default, jobs in later stages automatically download all the artifacts created by jobs in earlier stages. You can control artifact download behavior in jobs with dependencies .

    When using the needs keyword, jobs can only download artifacts from the jobs defined in the needs configuration.

    Job artifacts are only collected for successful jobs by default, and artifacts are restored after caches .

    Read more about artifacts .

    Use artifacts:paths to define which files or directories to add to the job artifacts.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of file paths, relative to the project directory.
  • You can use wildcards that use glob patterns and:
  • In GitLab Runner 13.0 and later , doublestar.Glob .
  • In GitLab Runner 12.10 and earlier, filepath.Match .
  • Example of artifacts:paths :

    job:
      artifacts:
        paths:
          - binaries/
          - .config

    This example creates an artifact with .config and all the files in the binaries directory.

    Additional details :

  • If not used with artifacts:name , the artifacts file is named artifacts , which becomes artifacts.zip when downloaded.
  • Related topics :

  • To restrict which jobs a specific job fetches artifacts from, see dependencies .
  • Create job artifacts .

    Use artifacts:exclude to prevent files from being added to an artifacts archive.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of file paths, relative to the project directory.
  • You can use wildcards that use glob or doublestar.PathMatch patterns.
  • Example of artifacts:exclude :

    artifacts:
      paths:
        - binaries/
      exclude:
        - binaries/**/*.o

    This example stores all files in binaries/ , but not *.o files located in subdirectories of binaries/ .

    Additional details :

    artifacts:exclude paths are not searched recursively.
  • Files matched by artifacts:untracked can be excluded using artifacts:exclude too.
  • Related topics :

    Exclude files from job artifacts .

    Introduced in GitLab 13.0 behind a disabled feature flag, the latest job artifacts are kept regardless of expiry time. Made default behavior in GitLab 13.4. Introduced in GitLab 13.8, keeping latest job artifacts can be disabled at the project level. Introduced in GitLab 13.9, keeping latest job artifacts can be disabled instance-wide.

    Use expire_in to specify how long job artifacts are stored before they expire and are deleted. The expire_in setting does not affect:

  • Artifacts from the latest job, unless keeping the latest job artifacts is disabled at the project level or instance-wide .

    After their expiry, artifacts are deleted hourly by default (using a cron job), and are not accessible anymore.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : The expiry time. If no unit is provided, the time is in seconds. Valid values include:

  • 42 seconds
  • 3 mins 4 sec
  • 2 hrs 20 min
  • 2h20min
  • 6 mos 1 day
  • 47 yrs 6 mos and 4d
  • 3 weeks and 2 days
  • never
  • Example of artifacts:expire_in :

    job:
      artifacts:
        expire_in: 1 week

    Additional details :

  • The expiration time period begins when the artifact is uploaded and stored on GitLab. If the expiry time is not defined, it defaults to the instance wide setting .
  • To override the expiration date and protect artifacts from being automatically deleted:
  • Select Keep on the job page.
  • In GitLab 13.3 and later , set the value of expire_in to never .

    Use the artifacts:expose_as keyword to expose job artifacts in the merge request UI .

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • The name to display in the merge request UI for the artifacts download link. Must be combined with artifacts:paths .
  • Example of artifacts:expose_as :

    test:
      script: ["echo 'test' > file.txt"]
      artifacts:
        expose_as: 'artifact 1'
        paths: ['file.txt']

    Additional details :

  • If artifacts:paths uses CI/CD variables , the artifacts do not display in the UI.
  • A maximum of 10 job artifacts per merge request can be exposed.
  • Glob patterns are unsupported.
  • If a directory is specified and there is more than one file in the directory, the link is to the job artifacts browser .
  • If GitLab Pages is enabled, GitLab automatically renders the artifact when it is a single file with one of these extensions:
  • .html or .htm
  • .json
  • Related topics :

    Expose job artifacts in the merge request UI .

    Use the artifacts:name keyword to define the name of the created artifacts archive.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • The name of the artifacts archive. CI/CD variables are supported . Must be combined with artifacts:paths .
  • Example of artifacts:name :

    To create an archive with a name of the current job:

    job:
      artifacts:
        name: "job1-artifacts-file"
        paths:
          - binaries/

    Related topics :

    Use CI/CD variables to define the artifacts name .

    Introduced in GitLab 13.8 with a flag named non_public_artifacts , disabled by default. Updated in GitLab 15.10. Artifacts created with artifacts:public before 15.10 are not guaranteed to remain private after this update.

    WARNING: On self-managed GitLab, by default this feature is not available. To make it available, an administrator can enable the feature flag named non_public_artifacts . On GitLab.com, this feature is not available. Due to issue 413822 , the keyword can be used when the feature flag is disabled, but the feature does not work. Do not attempt to use this feature when the feature flag is disabled, and always test with non-production data first.

    Use artifacts:public to determine whether the job artifacts should be publicly available.

    When artifacts:public is true (default), the artifacts in public pipelines are available for download by anonymous and guest users.

    To deny read access for anonymous and guest users to artifacts in public pipelines, set artifacts:public to false :

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    true (default if not defined) or false .

    Example of artifacts:public :

    job:
      artifacts:
        public: false

    Use artifacts:reports to collect artifacts generated by included templates in jobs.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • See list of available artifacts reports types .
  • Example of artifacts:reports :

    rspec:
      stage: test
      script:
        - bundle install
        - rspec --format RspecJunitFormatter --out rspec.xml
      artifacts:
        reports:
          junit: rspec.xml

    Additional details :

  • Combining reports in parent pipelines using artifacts from child pipelines is not supported. Track progress on adding support in this issue .
  • To be able to browse the report output files, include the artifacts:paths keyword. This uploads and stores the artifact twice.
  • Artifacts created for artifacts: reports are always uploaded, regardless of the job results (success or failure). You can use artifacts:expire_in to set an expiration date for the artifacts.
    Use artifacts:untracked to add all Git untracked files as artifacts (along with the paths defined in artifacts:paths ).

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    true or false (default if not defined).

    Example of artifacts:untracked :

    Save all Git untracked files:

    job:
      artifacts:
        untracked: true

    Related topics :

    Add untracked files to artifacts .

    Use artifacts:when to upload artifacts on job failure or despite the failure.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • on_success (default): Upload artifacts only when the job succeeds.
  • on_failure : Upload artifacts only when the job fails.
  • always : Always upload artifacts (except when jobs time out). For example, when uploading artifacts required to troubleshoot failing tests.

    Example of artifacts:when :

    job:
      artifacts:
        when: on_failure

    Additional details :

  • The artifacts created for artifacts:reports are always uploaded, regardless of the job results (success or failure). artifacts:when does not change this behavior.
    Use before_script to define an array of commands that should run before each job's script commands, but after artifacts are restored.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : An array including:

  • Single line commands.
  • Long commands split over multiple lines .
  • YAML anchors .

    CI/CD variables are supported .

    Example of before_script :

    job:
      before_script:
        - echo "Execute this command before any 'script:' commands."
      script:
        - echo "This command executes after the job's 'before_script' commands."

    Additional details :

  • Scripts you specify in before_script are concatenated with any scripts you specify in the main script . The combined scripts execute together in a single shell.
  • Using before_script at the top level, but not in the default section, is deprecated .
  • Related topics :

    Use before_script with default to define a default array of commands that should run before the script commands in all jobs.
  • You can ignore non-zero exit codes .
  • Use color codes with before_script to make job logs easier to review.
  • Create custom collapsible sections to simplify job log output.

    Introduced in GitLab 15.0, caches are not shared between protected and unprotected branches.

    Use cache to specify a list of files and directories to cache between jobs. You can only use paths that are in the local working copy.

    Caches are:

  • Shared between pipelines and jobs.
  • By default, not shared between protected and unprotected branches.
  • Restored before artifacts .
  • Limited to a maximum of four different caches .
  • You can disable caching for specific jobs , for example to override:

  • A default cache defined with default .
  • The configuration for a job added with include .
  • For more information about caches, see Caching in GitLab CI/CD .

    Use the cache:paths keyword to choose which files or directories to cache.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of paths relative to the project directory ( $CI_PROJECT_DIR ). You can use wildcards that use glob patterns:
  • In GitLab Runner 13.0 and later , doublestar.Glob .
  • In GitLab Runner 12.10 and earlier, filepath.Match .
  • Example of cache:paths :

    Cache all files in binaries that end in .apk and the .config file:

    rspec:
      script:
        - echo "This job uses a cache."
      cache:
        key: binaries-cache
        paths:
          - binaries/*.apk
          - .config

    Additional details :

  • The cache:paths keyword includes files even if they are untracked or in your .gitignore file.
  • Related topics :

  • See the common cache use cases for more cache:paths examples.
    Use the cache:key keyword to give each cache a unique identifying key. All jobs that use the same cache key use the same cache, including in different pipelines. If not set, the default key is default .

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A string.
  • A predefined CI/CD variable .
  • A combination of both.
  • Example of cache:key :

    cache-job:
      script:
        - echo "This job uses a cache."
      cache:
        key: binaries-cache-$CI_COMMIT_REF_SLUG
        paths:
          - binaries/

    Additional details :

    If you use Windows Batch to run your shell scripts you must replace $ with % . For example: key: %CI_COMMIT_REF_SLUG%

    The cache:key value can't contain:

  • The / character, or the equivalent URI-encoded %2F .
  • Only the . character (any number), or the equivalent URI-encoded %2E .
  • The cache is shared between jobs, so if you're using different paths for different jobs, you should also set a different cache:key . Otherwise cache content can be overwritten.

    Related topics :

  • You can specify a fallback cache key to use if the specified cache:key is not found.
  • You can use multiple cache keys in a single job.
  • See the common cache use cases for more cache:key examples.
  • Introduced in GitLab 12.5.

    Use the cache:key:files keyword to generate a new key when one or two specific files change. cache:key:files lets you reuse some caches, and rebuild them less often, which speeds up subsequent pipeline runs.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of one or two file paths.
  • Example of cache:key:files :

    cache-job:
      script:
        - echo "This job uses a cache."
      cache:
        key:
          files:
            - Gemfile.lock
            - package.json
        paths:
          - vendor/ruby
          - node_modules

    This example creates a cache for Ruby and Node.js dependencies. The cache is tied to the current versions of the Gemfile.lock and package.json files. When one of these files changes, a new cache key is computed and a new cache is created. Any future job runs that use the same Gemfile.lock and package.json with cache:key:files use the new cache, instead of rebuilding the dependencies.

    Additional details :

  • The cache key is a SHA computed from the most recent commits that changed each listed file. If neither file is changed in any commits, the fallback key is default .
  • Introduced in GitLab 12.5.

    Use cache:key:prefix to combine a prefix with the SHA computed for cache:key:files .

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A string.
  • A predefined CI/CD variable .
  • A combination of both.
  • Example of cache:key:prefix :

    rspec:
      script:
        - echo "This rspec job uses a cache."
      cache:
        key:
          files:
            - Gemfile.lock
          prefix: $CI_JOB_NAME
        paths:
          - vendor/ruby

    For example, adding a prefix of $CI_JOB_NAME causes the key to look like rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5 . If a branch changes Gemfile.lock , that branch has a new SHA checksum for cache:key:files . A new cache key is generated, and a new cache is created for that key. If Gemfile.lock is not found, the prefix is added to default , so the key in the example would be rspec-default .

    Additional details :

  • If no file in cache:key:files is changed in any commits, the prefix is added to the default key.
    Use untracked: true to cache all files that are untracked in your Git repository. Untracked files include files that are:

  • Ignored because of the .gitignore configuration .
  • Created, but not added to the checkout with git add .

    Caching untracked files can create unexpectedly large caches if the job downloads:

  • Dependencies, like gems or node modules, which are usually untracked.
  • Artifacts from a different job. Files extracted from the artifacts are untracked by default.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    true or false (default).

    Example of cache:untracked :

    rspec:
      script: test
      cache:
        untracked: true

    Additional details :

    You can combine cache:untracked with cache:paths to cache all untracked files, as well as files in the configured paths. Use cache:paths to cache any specific files, including tracked files, or files that are outside of the working directory, and use cache: untracked to also cache all untracked files. For example:

    rspec:
      script: test
      cache:
        untracked: true
        paths:
          - binaries/

    In this example, the job caches all untracked files in the repository, as well as all the files in binaries/ . If there are untracked files in binaries/ , they are covered by both keywords.

    Introduced in GitLab 15.8.

    Use cache:unprotect to set a cache to be shared between protected and unprotected branches.

    WARNING: When set to true , users without access to protected branches can read and write to cache keys used by protected branches.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    true or false (default).

    Example of cache:unprotect :

    rspec:
      script: test
      cache:
        unprotect: true

    Introduced in GitLab 13.5 and GitLab Runner v13.5.0.

    Use cache:when to define when to save the cache, based on the status of the job.

    Must be used with cache: paths , or nothing is cached.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • on_success (default): Save the cache only when the job succeeds.
  • on_failure : Save the cache only when the job fails.
  • always : Always save the cache.

    Example of cache:when :

    rspec:
      script: rspec
      cache:
        paths:
          - rspec/
        when: 'always'

    This example stores the cache whether the job fails or succeeds.

    To change the upload and download behavior of a cache, use the cache:policy keyword. By default, the job downloads the cache when the job starts, and uploads changes to the cache when the job ends. This caching style is the pull-push policy (the default).

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • pull
  • push
  • pull-push (default)
  • CI/CD variables .

    Example of cache:policy :

    prepare-dependencies-job:
      stage: build
      cache:
        key: gems
        paths:
          - vendor/bundle
        policy: push
      script:
        - echo "This job only downloads dependencies and builds the cache."
        - echo "Downloading dependencies..."
    faster-test-job:
      stage: test
      cache:
        key: gems
        paths:
          - vendor/bundle
        policy: pull
      script:
        - echo "This job script uses the cache, but does not update it."
        - echo "Running tests..."

    Related topics :

  • You can use a variable to control a job's cache policy .
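    A minimal sketch of controlling the cache policy with a variable, assuming the default branch should update the cache and other branches should only download it (support for a CI/CD variable in cache:policy may depend on your GitLab version):

    conditional-cache-job:
      stage: build
      rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
          variables:
            CACHE_POLICY: pull-push   # update the cache on the default branch
        - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
          variables:
            CACHE_POLICY: pull        # only download the cache on other branches
      cache:
        key: gems
        policy: $CACHE_POLICY
        paths:
          - vendor/bundle
      script:
        - echo "This job pulls, and optionally pushes, the gems cache."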
    Use cache:fallback_keys to specify a list of keys to try to restore cache from if there is no cache found for the cache:key . Caches are retrieved in the order specified in the fallback_keys section.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of cache keys
  • Example of cache:fallback_keys :

    rspec:
      script: rspec
      cache:
        key: gems-$CI_COMMIT_REF_SLUG
        paths:
          - rspec/
        fallback_keys:
          - gems
        when: 'always'
    Use coverage with a custom regular expression to configure how code coverage is extracted from the job output. The coverage is shown in the UI if at least one line in the job output matches the regular expression. For more information, see Code Coverage .

    Possible inputs : An RE2 regular expression. Must start and end with / .

    Additional details (a sample configuration follows this list):
  • If there is more than one matched line in the job output, the last line is used (the first result of reverse search).
  • If there are multiple matches in a single line, the last match is searched for the coverage number.
  • If there are multiple coverage numbers found in the matched fragment, the first number is used.
  • Leading zeros are removed.
  • Coverage output from child pipelines is not recorded or displayed. Check the related issue for more details.
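    A minimal sketch of coverage ; the regular expression is illustrative and must match how your own test tool prints the coverage number:

    job1:
      script: rspec
      coverage: '/Code coverage: \d+\.\d+/'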
  • Introduced in GitLab 14.1.

    Use the dast_configuration keyword to specify a site profile and scanner profile to be used in a CI/CD configuration. Both profiles must first have been created in the project. The job's stage must be dast .

    Keyword type : Job keyword. You can use only as part of a job.

    Possible inputs : One each of site_profile and scanner_profile .

  • Use site_profile to specify the site profile to be used in the job.
  • Use scanner_profile to specify the scanner profile to be used in the job.
  • Example of dast_configuration :

    stages:
      - build
      - dast
    include:
      - template: DAST.gitlab-ci.yml
    dast:
      dast_configuration:
        site_profile: "Example Co"
        scanner_profile: "Quick Passive Test"

    In this example, the dast job extends the dast configuration added with the include keyword to select a specific site profile and scanner profile.

    Additional details :

  • Settings contained in either a site profile or scanner profile take precedence over those contained in the DAST template.
  • Related topics :

    Site profile . Scanner profile .

    Use dependencies to define a list of jobs to fetch artifacts from. You can also set a job to download no artifacts at all.

    If you do not use dependencies , all artifacts from previous stages are passed to each job.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • The names of jobs to fetch artifacts from.
  • An empty array ( [] ), to configure the job to not download any artifacts.
  • Example of dependencies :

    build osx:
      stage: build
      script: make build:osx
      artifacts:
        paths:
          - binaries/
    build linux:
      stage: build
      script: make build:linux
      artifacts:
        paths:
          - binaries/
    test osx:
      stage: test
      script: make test:osx
      dependencies:
        - build osx
    test linux:
      stage: test
      script: make test:linux
      dependencies:
        - build linux
    deploy:
      stage: deploy
      script: make deploy
      environment: production

    In this example, two jobs have artifacts: build osx and build linux . When test osx is executed, the artifacts from build osx are downloaded and extracted in the context of the build. The same thing happens for test linux and artifacts from build linux .

    The deploy job downloads artifacts from all previous jobs because of the stage precedence.

    Additional details :

  • The job status does not matter. If a job fails or it's a manual job that isn't triggered, no error occurs.
  • If the artifacts of a dependent job are expired or deleted , then the job fails.
  • Use environment to define the environment that a job deploys to.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : The name of the environment the job deploys to, in one of these formats:

  • Plain text, including letters, digits, spaces, and these characters: - , _ , / , $ , { , } .
  • CI/CD variables, including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can't use variables defined in a script section.
  • Example of environment :

    deploy to production:
      stage: deploy
      script: git push production HEAD:main
      environment: production

    Additional details :

  • If you specify an environment and no environment with that name exists, an environment is created.
  • Use environment:name to set a name for an environment .

    Common environment names are qa , staging , and production , but you can use any name.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : The name of the environment the job deploys to, in one of these formats:

  • Plain text, including letters, digits, spaces, and these characters: - , _ , / , $ , { , } .
  • CI/CD variables , including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can't use variables defined in a script section.

    Example of environment:name :

    deploy to production:
      stage: deploy
      script: git push production HEAD:main
      environment:
        name: production

    Use environment:url to set a URL for an environment .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : A single URL, in one of these formats:

  • Plain text, like https://prod.example.com .
  • CI/CD variables , including predefined, project, group, instance, or variables defined in the .gitlab-ci.yml file. You can't use variables defined in a script section.

    Example of environment:url :

    deploy to production:
      stage: deploy
      script: git push production HEAD:main
      environment:
        name: production
        url: https://prod.example.com

    Additional details :

  • After the job completes, you can access the URL by selecting a button in the merge request, environment, or deployment pages.
  • Closely related to environment:action , the on_stop keyword defined in an environment can be used to declare the job that stops the environment. See environment:action for more details and an example.
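    A minimal sketch of on_stop , pairing a review-app deployment with the stop_review_app job shown in the environment:action example that follows:

    review_app:
      stage: deploy
      script: make deploy-app
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        url: https://$CI_ENVIRONMENT_SLUG.example.com
        on_stop: stop_review_app   # names the job that stops this environment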
    Use the action keyword to specify how the job interacts with the environment.

    Possible inputs : One of the following keywords:

  • start (default): Indicates that the job starts the environment. The deployment is created after the job starts.
  • prepare : Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments .
  • stop : Indicates that the job stops an environment. Read more about stopping an environment .
  • verify : Indicates that the job is only verifying the environment. It does not trigger deployments. Read more about verifying environments .
  • access : Indicates that the job is only accessing the environment. It does not trigger deployments. Read more about accessing environments .

    Example of environment:action :

    stop_review_app:
      stage: deploy
      variables:
        GIT_STRATEGY: none
      script: make delete-app
      when: manual
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        action: stop

    Introduced in GitLab 12.8.

    The auto_stop_in keyword specifies the lifetime of the environment. When an environment expires, GitLab automatically stops it.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : A period of time written in natural language. For example, these are all equivalent:

  • 168 hours
  • 7 days
  • one week
  • never
  • Example of environment:auto_stop_in :

    review_app:
      script: deploy-review-app
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        auto_stop_in: 1 day

    When the environment for review_app is created, the environment's lifetime is set to 1 day . Every time the review app is deployed, that lifetime is also reset to 1 day .

    Related topics :

    Environments auto-stop documentation .

    Introduced in GitLab 12.6.

    Use the kubernetes keyword to configure deployments to a Kubernetes cluster that is associated with your project.

    Keyword type : Job keyword. You can use it only as part of a job.

    Example of environment:kubernetes :

    deploy:
      stage: deploy
      script: make deploy-app
      environment:
        name: production
        kubernetes:
          namespace: production

    This configuration sets up the deploy job to deploy to the production environment, using the production Kubernetes namespace .

    Additional details :

  • Kubernetes configuration is not supported for Kubernetes clusters managed by GitLab .
  • Related topics :

    Available settings for kubernetes .

    Introduced in GitLab 13.10.

    Use the deployment_tier keyword to specify the tier of the deployment environment.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : One of the following:

  • production
  • staging
  • testing
  • development
  • other
  • Example of environment:deployment_tier :

    deploy:
      script: echo
      environment:
        name: customer-portal
        deployment_tier: production

    Additional details :

  • Environments created from this job definition are assigned a tier based on this value.
  • Existing environments don't have their tier updated if this value is added later. Existing environments must have their tier updated via the Environments API .
  • Related topics :

    Deployment tier of environments .

    Use CI/CD variables to dynamically name environments.

    For example:

    deploy as review app:
      stage: deploy
      script: make deploy
      environment:
        name: review/$CI_COMMIT_REF_SLUG
        url: https://$CI_ENVIRONMENT_SLUG.example.com/

    The deploy as review app job is marked as a deployment to dynamically create the review/$CI_COMMIT_REF_SLUG environment. $CI_COMMIT_REF_SLUG is a CI/CD variable set by the runner. The $CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable for inclusion in URLs. If the deploy as review app job runs in a branch named pow , this environment would be accessible with a URL like https://review-pow.example.com/ .

    The common use case is to create dynamic environments for branches and use them as Review Apps. You can see an example that uses Review Apps at https://gitlab.com/gitlab-examples/review-apps-nginx/ .

    Use extends to reuse configuration sections. It's an alternative to YAML anchors and is a little more flexible and readable.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • The name of another job in the pipeline.
  • A list (array) of names of other jobs in the pipeline.
  • Example of extends :

    .tests:
      script: rake test
      stage: test
      only:
        refs:
          - branches
    rspec:
      extends: .tests
      script: rake rspec
      only:
        variables:
          - $RSPEC

    In this example, the rspec job uses the configuration from the .tests template job. When creating the pipeline, GitLab:

  • Performs a reverse deep merge based on the keys.
  • Merges the .tests content with the rspec job.
  • Doesn't merge the values of the keys.
  • The result is this rspec job:

    rspec:
      script: rake rspec
      stage: test
      only:
        refs:
          - branches
        variables:
          - $RSPEC

    Additional details :

  • In GitLab 12.0 and later, you can use multiple parents for extends (see the sketch after this list).
  • The extends keyword supports up to eleven levels of inheritance, but you should avoid using more than three levels.
  • In the example above, .tests is a hidden job , but you can extend configuration from regular jobs as well.
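    A minimal sketch of multiple parents, assuming two hidden jobs .base and .unit-tests :

    .base:
      before_script:
        - bundle install

    .unit-tests:
      stage: test

    rspec-unit:
      extends:
        - .base          # inherits before_script
        - .unit-tests    # inherits stage
      script: rake rspec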
  • Related topics :

    Reuse configuration sections by using extends .
  • Use extends to reuse configuration from included configuration files .
  • Introduced in GitLab 15.6 with a flag named ci_hooks_pre_get_sources_script . Disabled by default. Generally available in GitLab 15.10. Feature flag ci_hooks_pre_get_sources_script removed.

    Use hooks to specify lists of commands to execute on the runner at certain stages of job execution, like before retrieving the Git repository.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A hash of hooks and their commands. Available hooks: pre_get_sources_script .
  • Introduced in GitLab 15.6 with a flag named ci_hooks_pre_get_sources_script . Disabled by default. Generally available in GitLab 15.10. Feature flag ci_hooks_pre_get_sources_script removed.

    Use hooks:pre_get_sources_script to specify a list of commands to execute on the runner before retrieving the Git repository and any submodules. You can use it to adjust the Git client configuration first, for example.

    Related topics :

  • GitLab Runner configuration
  • Example of hooks:pre_get_sources_script :

    job1:
      hooks:
        pre_get_sources_script:
          - echo 'hello job1 pre_get_sources_script'
      script: echo 'hello job1 script'

    Introduced in GitLab 15.7.

    Use id_tokens to create JSON web tokens (JWT) to authenticate with third party services. All JWTs created this way support OIDC authentication. The required aud sub-keyword is used to configure the aud claim for the JWT.

    Possible inputs :

  • Token names with their aud claims. aud supports:
  • A single string.
  • An array of strings.
  • CI/CD variables .

    Example of id_tokens :

    job_with_id_tokens:
      id_tokens:
        ID_TOKEN_1:
          aud: https://gitlab.com
        ID_TOKEN_2:
          aud:
            - https://gcp.com
            - https://aws.com
        SIGSTORE_ID_TOKEN:
          aud: sigstore
      script:
        - command_to_authenticate_with_gitlab $ID_TOKEN_1
        - command_to_authenticate_with_aws $ID_TOKEN_2

    Related topics :

    Keyless signing with Sigstore .

    Use image to specify a Docker image that the job runs in.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : The name of the image, including the registry path if needed, in one of these formats:

  • <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>
  • CI/CD variables are supported .

    Example of image :

    default:
      image: ruby:3.0
    rspec:
      script: bundle exec rspec
    rspec 2.7:
      image: registry.example.com/my-group/my-project/ruby:2.7
      script: bundle exec rspec

    In this example, the ruby:3.0 image is the default for all jobs in the pipeline. The rspec 2.7 job does not use the default, because it overrides the default with a job-specific image section.

    Related topics :

    Run your CI/CD jobs in Docker containers .

    The name of the Docker image that the job runs in. Similar to image used by itself.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : The name of the image, including the registry path if needed, in one of these formats:

  • <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>
  • Example of image:name :

    image:
      name: "registry.example.com/my/image:latest"

    Related topics :

    Run your CI/CD jobs in Docker containers .

    Command or script to execute as the container's entry point. When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option. The syntax is similar to the Dockerfile ENTRYPOINT directive , where each shell token is a separate string in the array.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A string.
  • Example of image:entrypoint :

    image:
      name: super/sql:experimental
      entrypoint: [""]

    Related topics :

    Override the entrypoint of an image .

    Introduced in GitLab 15.1 with a flag named ci_docker_image_pull_policy . Disabled by default. Enabled on GitLab.com and self-managed in GitLab 15.2. Generally available in GitLab 15.4. Feature flag ci_docker_image_pull_policy removed. Requires GitLab Runner 15.1 or later.

    Use image:pull_policy to specify the pull policy that the runner uses to fetch the Docker image.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A single pull policy, or multiple pull policies in an array. Can be always , if-not-present , or never .
  • Examples of image:pull_policy :

    job1:
      script: echo "A single pull policy."
      image:
        name: ruby:3.0
        pull_policy: if-not-present
    job2:
      script: echo "Multiple pull policies."
      image:
        name: ruby:3.0
        pull_policy: [always, if-not-present]

    Additional details :

  • If the runner does not support the defined pull policy, the job fails with an error similar to: ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never]) .
  • Related topics :

    Run your CI/CD jobs in Docker containers . How runner pull policies work . Using multiple pull policies .

    Introduced in GitLab 12.9.

    Use inherit to control inheritance of default keywords and variables .

    Use inherit:default to control the inheritance of default keywords .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • true (default) or false to enable or disable the inheritance of all default keywords.
  • A list of specific default keywords to inherit.
  • Example of inherit:default :

    default:
      retry: 2
      image: ruby:3.0
      interruptible: true
    job1:
      script: echo "This job does not inherit any default keywords."
      inherit:
        default: false
    job2:
      script: echo "This job inherits only the two listed default keywords. It does not inherit 'interruptible'."
      inherit:
        default:
          - retry
          - image

    Additional details :

  • You can also list default keywords to inherit on one line: default: [keyword1, keyword2]

    Use inherit:variables to control the inheritance of global variables keywords.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • true (default) or false to enable or disable the inheritance of all global variables.
  • A list of specific variables to inherit.
  • Example of inherit:variables :

    variables:
      VARIABLE1: "This is variable 1"
      VARIABLE2: "This is variable 2"
      VARIABLE3: "This is variable 3"
    job1:
      script: echo "This job does not inherit any global variables."
      inherit:
        variables: false
    job2:
      script: echo "This job inherits only the two listed global variables. It does not inherit 'VARIABLE3'."
      inherit:
        variables:
          - VARIABLE1
          - VARIABLE2

    Additional details :

  • You can also list global variables to inherit on one line: variables: [VARIABLE1, VARIABLE2]

    Introduced in GitLab 12.3.

    Use interruptible if a job should be canceled when a newer pipeline starts before the job completes.

    This keyword has no effect if automatic cancellation of redundant pipelines is disabled. When enabled, a running job with interruptible: true is cancelled when starting a pipeline for a new change on the same branch.

    You can't cancel subsequent jobs after a job with interruptible: false starts.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    true or false (default).
  • Example of interruptible :

    stages:
      - stage1
      - stage2
      - stage3
    step-1:
      stage: stage1
      script:
        - echo "Can be canceled."
      interruptible: true
    step-2:
      stage: stage2
      script:
        - echo "Can not be canceled."
    step-3:
      stage: stage3
      script:
        - echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
      interruptible: true

    In this example, a new pipeline causes a running pipeline to be:

  • Canceled, if only step-1 is running or pending.
  • Not canceled, after step-2 starts.
  • Additional details :

  • Only set interruptible: true if the job can be safely canceled after it has started, like a build job. Deployment jobs usually shouldn't be cancelled, to prevent partial deployments.
  • To completely cancel a running pipeline, all jobs must have interruptible: true , or interruptible: false jobs must not have started.
  • Introduced in GitLab 12.2.
  • In GitLab 12.3, maximum number of jobs in needs array raised from five to 50.
  • Introduced in GitLab 12.8, needs: [] lets jobs start immediately. Introduced in GitLab 14.2, you can refer to jobs in the same stage as the job you are configuring.

    Use needs to execute jobs out-of-order. Relationships between jobs that use needs can be visualized as a directed acyclic graph .

    You can ignore stage ordering and run some jobs without waiting for others to complete. Jobs in multiple stages can run concurrently.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • An array of jobs.
  • An empty array ( [] ), to set the job to start as soon as the pipeline is created.
  • Example of needs :

    linux:build:
      stage: build
      script: echo "Building linux..."
    mac:build:
      stage: build
      script: echo "Building mac..."
    lint:
      stage: test
      needs: []
      script: echo "Linting..."
    linux:rspec:
      stage: test
      needs: ["linux:build"]
      script: echo "Running rspec on linux..."
    mac:rspec:
      stage: test
      needs: ["mac:build"]
      script: echo "Running rspec on mac..."
    production:
      stage: deploy
      script: echo "Running production..."
      environment: production

    This example creates four paths of execution:

  • Linter: The lint job runs immediately without waiting for the build stage to complete because it has no needs ( needs: [] ).
  • Linux path: The linux:rspec job runs as soon as the linux:build job finishes, without waiting for mac:build to finish.
  • macOS path: The mac:rspec job runs as soon as the mac:build job finishes, without waiting for linux:build to finish.
  • The production job runs as soon as all previous jobs finish: linux:build , linux:rspec , mac:build , mac:rspec .
  • Additional details :

  • The maximum number of jobs that a single job can have in the needs array is limited:
  • For GitLab.com, the limit is 50. For more information, see our infrastructure issue .
  • For self-managed instances, the default limit is 50. This limit can be changed .
  • If needs refers to a job that uses the parallel keyword, it depends on all jobs created in parallel, not just one job. It also downloads artifacts from all the parallel jobs by default. If the artifacts have the same name, they overwrite each other and only the last one downloaded is saved.
  • To have needs refer to a subset of parallelized jobs (and not all of the parallelized jobs), use the needs:parallel:matrix keyword.
  • In GitLab 14.1 and later you can refer to jobs in the same stage as the job you are configuring. This feature is enabled on GitLab.com and ready for production use. On self-managed GitLab 14.2 and later this feature is available by default.
  • In GitLab 14.0 and older, you can only refer to jobs in earlier stages. Stages must be explicitly defined for all jobs that use the needs keyword, or are referenced in a job's needs section.
  • In GitLab 13.9 and older, if needs refers to a job that might not be added to a pipeline because of only , except , or rules , the pipeline might fail to create. In GitLab 13.10 and later, use the needs:optional keyword to resolve a failed pipeline creation.
  • If a pipeline has jobs with needs: [] and jobs in the .pre stage, they all start as soon as the pipeline is created. Jobs with needs: [] start immediately, ignoring any stage configuration.
  • Introduced in GitLab 12.6.

    When a job uses needs , it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. With needs you can only download artifacts from the jobs listed in the needs configuration.

    Use artifacts: true (default) or artifacts: false to control when artifacts are downloaded in jobs that use needs .

    Keyword type : Job keyword. You can use it only as part of a job. Must be used with needs:job .

    Possible inputs :

    true (default) or false .

    Example of needs:artifacts :

    test-job1:
      stage: test
      needs:
        - job: build_job1
          artifacts: true
    test-job2:
      stage: test
      needs:
        - job: build_job2
          artifacts: false
    test-job3:
      needs:
        - job: build_job1
          artifacts: true
        - job: build_job2
        - build_job3

    In this example:

  • The test-job1 job downloads the build_job1 artifacts
  • The test-job2 job does not download the build_job2 artifacts.
  • The test-job3 job downloads the artifacts from all three build_jobs , because artifacts is true , or defaults to true , for all three needed jobs.
  • Additional details :

  • In GitLab 12.6 and later, you can't combine the dependencies keyword with needs .
  • Introduced in GitLab 12.7.

    Use needs:project to download artifacts from up to five jobs in other pipelines. The artifacts are downloaded from the latest successful specified job for the specified ref. To specify multiple jobs, add each as separate array items under the needs keyword.

    If there is a pipeline running for the ref, a job with needs:project does not wait for the pipeline to complete. Instead, the artifacts are downloaded from the latest successful run of the specified job.

    needs:project must be used with job , ref , and artifacts .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • needs:project : A full project path, including namespace and group.
  • job : The job to download artifacts from.
  • ref : The ref to download artifacts from.
  • artifacts : Must be true to download artifacts.

    Examples of needs:project :

    build_job:
      stage: build
      script:
        - ls -lhR
      needs:
        - project: namespace/group/project-name
          job: build-1
          ref: main
          artifacts: true
        - project: namespace/group/project-name-2
          job: build-2
          ref: main
          artifacts: true

    In this example, build_job downloads the artifacts from the latest successful build-1 and build-2 jobs on the main branches in the group/project-name and group/project-name-2 projects.

    In GitLab 13.3 and later, you can use CI/CD variables in needs:project , for example:

    build_job:
      stage: build
      script:
        - ls -lhR
      needs:
        - project: $CI_PROJECT_PATH
          job: $DEPENDENCY_JOB_NAME
          ref: $ARTIFACTS_DOWNLOAD_REF
          artifacts: true

    Additional details :

  • To download artifacts from a different pipeline in the current project, set project to be the same as the current project, but use a different ref than the current pipeline. Concurrent pipelines running on the same ref could override the artifacts.
  • The user running the pipeline must have at least the Reporter role for the group or project, or the group/project must have public visibility.
  • You can't use needs:project in the same job as trigger .
  • When using needs:project to download artifacts from another pipeline, the job does not wait for the needed job to complete. Directed acyclic graph behavior is limited to jobs in the same pipeline. Make sure that the needed job in the other pipeline completes before the job that needs it tries to download the artifacts.
  • You can't download artifacts from jobs that run in parallel .
  • Support for CI/CD variables in project , job , and ref was introduced in GitLab 13.3. Feature flag removed in GitLab 13.4.
  • Related topics :

  • To download artifacts between parent-child pipelines , use needs:pipeline:job .
  • Introduced in GitLab 13.7.

    A child pipeline can download artifacts from a job in its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • needs:pipeline : A pipeline ID. Must be a pipeline present in the same parent-child pipeline hierarchy.
  • job : The job to download artifacts from.

    Example of needs:pipeline:job :

    Parent pipeline ( .gitlab-ci.yml ):

    create-artifact:
      stage: build
      script: echo "sample artifact" > artifact.txt
      artifacts:
        paths: [artifact.txt]
    child-pipeline:
      stage: test
      trigger:
        include: child.yml
        strategy: depend
      variables:
        PARENT_PIPELINE_ID: $CI_PIPELINE_ID

    In this example, the create-artifact job in the parent pipeline creates some artifacts. The child-pipeline job triggers a child pipeline, and passes the CI_PIPELINE_ID variable to the child pipeline as a new PARENT_PIPELINE_ID variable. The child pipeline can use that variable in needs:pipeline to download artifacts from the parent pipeline.
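    For illustration, a minimal sketch of what the child pipeline configuration ( child.yml ) could look like, assuming the parent pipeline above; the use-artifact job name is hypothetical:

    use-artifact:
      script: cat artifact.txt   # Read the artifact created by the parent pipeline
      needs:
        - pipeline: $PARENT_PIPELINE_ID
          job: create-artifact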

    Additional details :

  • The pipeline attribute does not accept the current pipeline ID ( $CI_PIPELINE_ID ). To download artifacts from a job in the current pipeline, use needs .
    To need a job that sometimes does not exist in the pipeline, add optional: true to the needs configuration. If not defined, optional: false is the default. Jobs that use rules , only , or except and that are added with include might not always be added to a pipeline. GitLab checks the needs relationships before starting a pipeline:

  • If the needs entry has optional: true and the needed job is present in the pipeline, the job waits for it to complete before starting.
  • If the needed job is not present, the job can start when all other needs requirements are met.
  • If the needs section contains only optional jobs, and none are added to the pipeline, the job starts immediately (the same as an empty needs entry: needs: [] ).
  • If a needed job has optional: false , but it was not added to the pipeline, the pipeline fails to start with an error similar to: 'job1' job needs 'job2' job, but it was not added to the pipeline .
  • Keyword type : Job keyword. You can use it only as part of a job.

    Example of needs:optional :

    build-job:
      stage: build
    test-job1:
      stage: test
    test-job2:
      stage: test
      rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    deploy-job:
      stage: deploy
      needs:
        - job: test-job2
          optional: true
        - job: test-job1
      environment: production
    review-job:
      stage: deploy
      needs:
        - job: test-job2
          optional: true
      environment: review

    In this example:

    build-job , test-job1 , and test-job2 start in stage order.
  • When the branch is the default branch, test-job2 is added to the pipeline, so: deploy-job waits for both test-job1 and test-job2 to complete.
  • review-job waits for test-job2 to complete.
  • When the branch is not the default branch, test-job2 is not added to the pipeline, so: deploy-job waits for only test-job1 to complete, and does not wait for the missing test-job2 .
  • review-job has no other needed jobs and starts immediately (at the same time as build-job ), like needs: [] .

    You can mirror the pipeline status from an upstream pipeline to a job by using the needs:pipeline keyword. The latest pipeline status from the default branch is replicated to the job.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • A full project path, including namespace and group. If the project is in the same group or namespace, you can omit them from the project keyword. For example: project: group/project-name or project: project-name .
  • Example of needs:pipeline :

    upstream_status:
      stage: test
      needs:
        pipeline: other/project

    Additional details :

  • If you add the job keyword to needs:pipeline , the job no longer mirrors the pipeline status. The behavior changes to needs:pipeline:job .
  • Introduced in GitLab 16.3.

    Jobs can use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.

    Use needs:parallel:matrix to execute jobs out-of-order depending on parallelized jobs.

    Keyword type : Job keyword. You can use it only as part of a job. Must be used with needs:job .

    Possible inputs : An array of hashes of variables:

  • The variables and values must be selected from the variables and values defined in the parallel:matrix job.
  • Example of needs:parallel:matrix :

    linux:build:
      stage: build
      script: echo "Building linux..."
      parallel:
        matrix:
          - PROVIDER: aws
            STACK:
              - monitoring
              - app1
              - app2
    linux:rspec:
      stage: test
      needs:
        - job: linux:build
          parallel:
            matrix:
              - PROVIDER: aws
                STACK: app1
      script: echo "Running rspec on linux..."

    The above example generates the following jobs:

    linux:build: [aws, monitoring]
    linux:build: [aws, app1]
    linux:build: [aws, app2]
    linux:rspec

    The linux:rspec job runs as soon as the linux:build: [aws, app1] job finishes.

    Related topics :

    Specify a parallelized job using needs with multiple parallelized jobs .

    rules is the preferred keyword to control when to add jobs to pipelines.

    You can use only and except to control when to add jobs to pipelines.

  • Use only to define when a job runs.
  • Use except to define when a job does not run.
  • See specify when jobs run with only and except for more details and examples.

    rules:if is the preferred keyword when using refs, regular expressions, or variables to control when to add jobs to pipelines.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : An array including any number of:

    Branch names, for example main or my-feature-branch .

    Regular expressions that match against branch names, for example /^feature-.*/ .

    The following keywords:

  • external_pull_requests : When an external pull request on GitHub is created or updated (See Pipelines for external pull requests ).
  • merge_requests : For pipelines created when a merge request is created or updated. Enables merge request pipelines , merged results pipelines , and merge trains .
  • pipelines : For multi-project pipelines created by using the API with CI_JOB_TOKEN , or the trigger keyword.
  • pushes : For pipelines triggered by a git push event, including for branches and tags.
  • schedules : For scheduled pipelines .
  • tags : When the Git reference for a pipeline is a tag.
  • triggers : For pipelines created by using a trigger token .
  • web : For pipelines created by selecting Run pipeline in the GitLab UI, from the project's Build > Pipelines section.

    Scheduled pipelines run on specific branches, so jobs configured with only: branches run on scheduled pipelines too. Add except: schedules to prevent jobs with only: branches from running on scheduled pipelines.
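    For example, a minimal sketch of a job that runs for branch pipelines but is excluded from scheduled pipelines (the job name is hypothetical):

    branch-only-job:
      script: echo "Runs for branches, but not for scheduled pipelines."
      only:
        - branches
      except:
        - schedules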

    only or except used without any other keywords are equivalent to only: refs or except: refs . For example, the following two job configurations have the same behavior:

    job1:
      script: echo
      only:
        - branches
    job2:
      script: echo
      only:
        refs:
          - branches

    If a job does not use only , except , or rules , then only is set to branches and tags by default.

    For example, job1 and job2 are equivalent:

    job1:
      script: echo "test"
    job2:
      script: echo "test"
      only:
        - branches
        - tags

    Use only:variables or except:variables to control when to add jobs to a pipeline, based on the status of CI/CD variables .

    only:variables and except:variables are not being actively developed. rules:if is the preferred keyword when using refs, regular expressions, or variables to control when to add jobs to pipelines.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • An array of CI/CD variable expressions .
  • Example of only:variables :

    deploy:
      script: cap staging deploy
      only:
        variables:
          - $RELEASE == "staging"
          - $STAGING

    Related topics :

    only:variables and except:variables examples .

    only:changes and except:changes are not being actively developed. rules:changes is the preferred keyword when using changed files to control when to add jobs to pipelines.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : An array including any number of:

  • Paths to files.
  • Wildcard paths for single directories, for example path/to/directory/* , or a directory and all its subdirectories, for example path/to/directory/**/* .
  • Wildcard glob paths for all files with the same extension or multiple extensions, for example *.md or path/to/directory/*.{rb,py,sh} . See the Ruby fnmatch documentation for the supported syntax list.
  • Wildcard paths to files in the root directory, or all directories, wrapped in double quotes. For example "*.json" or "**/*.json" .
  • Example of only:changes :

    docker build:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      only:
        refs:
          - branches
        changes:
          - Dockerfile
          - docker/scripts/*
          - dockerfiles/**/*
          - more_scripts/*.{rb,py,sh}
          - "**/*.json"

    Additional details :

    changes resolves to true if any of the matching files are changed (an OR operation).
  • If you use refs other than branches , external_pull_requests , or merge_requests , changes can't determine if a given file is new or old and always returns true .
  • If you use only: changes with other refs, jobs ignore the changes and always run.
  • If you use except: changes with other refs, jobs ignore the changes and never run.
  • Related topics :

    only: changes and except: changes examples .
  • If you use changes with only allow merge requests to be merged if the pipeline succeeds , you should also use only:merge_requests .
  • Jobs or pipelines can run unexpectedly when using only: changes .

    only:kubernetes and except:kubernetes are not being actively developed. Use rules:if with the CI_KUBERNETES_ACTIVE predefined CI/CD variable to control if jobs are added to the pipeline when the Kubernetes service is active in the project.

    Keyword type : Job-specific. You can use it only as part of a job.

    Possible inputs :

  • The kubernetes strategy accepts only the active keyword.
  • Example of only:kubernetes :

    deploy:
      only:
        kubernetes: active

    In this example, the deploy job runs only when the Kubernetes service is active in the project.
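    A minimal sketch of the equivalent configuration with rules:if , assuming the same deploy job and a placeholder script:

    deploy:
      script: echo "Deploying because the Kubernetes service is active."
      rules:
        - if: $CI_KUBERNETES_ACTIVE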

    Use pages to define a GitLab Pages job that uploads static content to GitLab. The content is then published as a website.

    You must:

  • Define artifacts with a path to the content directory, which is public by default.
  • Use publish if you want to use a different content directory.
  • Keyword type : Job name.

    Example of pages :

    pages:
      stage: deploy
      script:
        - mv my-html-content public
      artifacts:
        paths:
          - public
      rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      environment: production

    This example moves all files from a my-html-content/ directory to the public/ directory. This directory is exported as an artifact and published with GitLab Pages.

    Introduced in GitLab 16.1.

    Use publish to configure the content directory of a pages job .

    Keyword type : Job keyword. You can use it only as part of a pages job.

    Possible inputs : A path to a directory containing the Pages content.

    Example of publish :

    pages:
      stage: deploy
      script:
        - npx @11ty/eleventy --input=path/to/eleventy/root --output=dist
      artifacts:
        paths:
          - dist
      publish: dist
      rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      environment: production

    This example uses Eleventy to generate a static website and output the generated HTML files into the dist/ directory. This directory is exported as an artifact and published with GitLab Pages.

    Introduced in GitLab 15.9, the maximum value for parallel is increased from 50 to 200.

    Use parallel to run a job multiple times in parallel in a single pipeline.

    Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.

    Parallel jobs are named sequentially from job_name 1/N to job_name N/N .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • A numeric value from 1 to 200 .
  • Example of parallel :

    test:
      script: rspec
      parallel: 5

    This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5 .

    Additional details :

  • Every parallel job has a CI_NODE_INDEX and CI_NODE_TOTAL predefined CI/CD variable set.
  • A pipeline with jobs that use parallel might:
  • Create more jobs running in parallel than available runners. Excess jobs are queued and marked pending while waiting for an available runner.
  • Create too many jobs, and the pipeline fails with a job_activity_limit_exceeded error. The maximum number of jobs that can exist in active pipelines is limited at the instance-level .
  • Related topics :

    Parallelize large jobs .
  • The job naming style was improved in GitLab 13.4 .
  • Introduced in GitLab 15.9, the maximum number of permutations is increased from 50 to 200.

    Use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.

    Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : An array of hashes of variables:

  • The variable names can use only numbers, letters, and underscores ( _ ).
  • The values must be either a string, or an array of strings.
  • The number of permutations cannot exceed 200.
  • Example of parallel:matrix :

    deploystacks:
      stage: deploy
      script:
        - bin/deploy
      parallel:
        matrix:
          - PROVIDER: aws
            STACK:
              - monitoring
              - app1
              - app2
          - PROVIDER: ovh
            STACK: [monitoring, backup, app]
          - PROVIDER: [gcp, vultr]
            STACK: [data, processing]
      environment: $PROVIDER/$STACK

    The example generates 10 parallel deploystacks jobs, each with different values for PROVIDER and STACK :

    deploystacks: [aws, monitoring]
    deploystacks: [aws, app1]
    deploystacks: [aws, app2]
    deploystacks: [ovh, monitoring]
    deploystacks: [ovh, backup]
    deploystacks: [ovh, app]
    deploystacks: [gcp, data]
    deploystacks: [gcp, processing]
    deploystacks: [vultr, data]
    deploystacks: [vultr, processing]

    Additional details :

    parallel:matrix jobs add the variable values to the job names to differentiate the jobs from each other, but large values can cause names to exceed limits :
  • Job names must be 255 characters or fewer .
  • When using needs , job names must be 128 characters or fewer.
  • Related topics :

    Run a one-dimensional matrix of parallel jobs . Run a matrix of triggered parallel jobs . Select different runner tags for each parallel matrix job .

    Introduced in GitLab 13.2.

    Use release to create a release .

    The release job must have access to the release-cli , which must be in the $PATH .

    If you use the Docker executor , you can use this image from the GitLab Container Registry: registry.gitlab.com/gitlab-org/release-cli:latest

    If you use the Shell executor or similar, install release-cli on the server where the runner is registered.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : The release subkeys:

  • tag_name
  • tag_message (optional)
  • name (optional)
  • description
  • ref (optional)
  • milestones (optional)
  • released_at (optional)
  • assets:links (optional)

    Example of release keyword :

    release_job:
      stage: release
      image: registry.gitlab.com/gitlab-org/release-cli:latest
      rules:
        - if: $CI_COMMIT_TAG                  # Run this job when a tag is created manually
      script:
        - echo "Running the release job."
      release:
        tag_name: $CI_COMMIT_TAG
        name: 'Release $CI_COMMIT_TAG'
        description: 'Release created using the release-cli.'

    This example creates a release:

  • When you push a Git tag.
  • When you add a Git tag in the UI at Code > Tags .
  • Additional details :

    All release jobs, except trigger jobs, must include the script keyword. A release job can use the output from script commands. If you don't need the script, you can use a placeholder:

    script:
      - echo "release job"

    An issue exists to remove this requirement.

    The release section executes after the script keyword and before the after_script .

    A release is created only if the job's main script succeeds.

    If the release already exists, it is not updated and the job with the release keyword fails.

    Related topics :

    CI/CD example of the release keyword . Create multiple releases in a single pipeline . Use a custom SSL CA certificate authority .

    Use release:tag_name to specify the Git tag for the release. CI/CD variables are supported .

    Example of release:tag_name :

    To create a release when a new tag is added to the project:

  • Use the $CI_COMMIT_TAG CI/CD variable as the tag_name .
  • Use rules:if to configure the job to run only for new tags.
    job:
      script: echo "Running the release job for the new tag."
      release:
        tag_name: $CI_COMMIT_TAG
        description: 'Release description'
      rules:
        - if: $CI_COMMIT_TAG

    To create a release and a new tag at the same time, your rules should not configure the job to run only for new tags. A semantic versioning example:

    job:
      script: echo "Running the release job and creating a new tag."
      release:
        tag_name: ${MAJOR}_${MINOR}_${REVISION}
        description: 'Release description'
      rules:
        - if: $CI_PIPELINE_SOURCE == "schedule"

    Introduced in GitLab 15.3. Supported by release-cli v0.12.0 or later.

    If the tag does not exist, the newly created tag is annotated with the message specified by tag_message . If omitted, a lightweight tag is created.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • A text string.
  • Example of release:tag_message :

      release_job:
        stage: release
        release:
          tag_name: $CI_COMMIT_TAG
          description: 'Release description'
          tag_message: 'Annotated tag message'
    Use release:description to specify the long description of the release.

    Possible inputs :

  • A string with the long description.
  • The path to a file that contains the description ( introduced in GitLab 13.7 ).
  • The file location must be relative to the project directory ( $CI_PROJECT_DIR ).
  • If the file is a symbolic link, it must be in the $CI_PROJECT_DIR .
  • The ./path/to/file and filename can't contain spaces.
  • Example of release:description :

    job:
      release:
        tag_name: ${MAJOR}_${MINOR}_${REVISION}
        description: './path/to/CHANGELOG.md'

    Additional details :

  • The description is evaluated by the shell that runs release-cli . You can use CI/CD variables to define the description, but some shells use different syntax to reference variables. Similarly, some shells might require special characters to be escaped. For example, backticks ( ` ) might need to be escaped with a backslash ( \ ).
  • Introduced in GitLab 13.12.

    Use release:assets:links to include asset links in the release.

    Requires release-cli version v0.4.0 or later.

    Example of release:assets:links :

    assets:
      links:
        - name: 'asset1'
          url: 'https://example.com/assets/1'
        - name: 'asset2'
          url: 'https://example.com/assets/2'
          filepath: '/pretty/url/1' # optional
          link_type: 'other' # optional
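    For context, a minimal sketch of how assets:links could sit inside a full release job, reusing the release_job pattern from the earlier example:

    release_job:
      stage: release
      image: registry.gitlab.com/gitlab-org/release-cli:latest
      rules:
        - if: $CI_COMMIT_TAG
      script:
        - echo "Creating a release with asset links."
      release:
        tag_name: $CI_COMMIT_TAG
        description: 'Release with asset links.'
        assets:
          links:
            - name: 'asset1'
              url: 'https://example.com/assets/1'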

    Introduced in GitLab 12.7.

    Use resource_group to create a resource group that ensures a job is mutually exclusive across different pipelines for the same project.

    For example, if multiple jobs that belong to the same resource group are queued simultaneously, only one of the jobs starts. The other jobs wait until the resource_group is free.

    Resource groups behave similar to semaphores in other programming languages.

    You can define multiple resource groups per environment. For example, when deploying to physical devices, you might have multiple physical devices. Each device can be deployed to, but only one deployment can occur per device at any given time.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • Only letters, digits, - , _ , / , $ , { , } , . , and spaces. It can't start or end with / . CI/CD variables are supported .
  • Example of resource_group :

    deploy-to-production:
      script: deploy
      resource_group: production

    In this example, two deploy-to-production jobs in two separate pipelines can never run at the same time. As a result, you can ensure that concurrent deployments never happen to the production environment.
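    As an illustration of one resource group per physical device, a minimal sketch; the job name, deploy script, and DEVICE values are hypothetical, and it assumes CI/CD variables expand in resource_group as described above:

    deploy-to-device:
      script: ./deploy.sh "$DEVICE"   # Hypothetical deployment script
      parallel:
        matrix:
          - DEVICE: [device-1, device-2]
      resource_group: $DEVICE   # Only one deployment per device runs at a time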

    Related topics :

    Pipeline-level concurrency control with cross-project/parent-child pipelines .

    Use retry to configure how many times a job is retried if it fails. If not defined, it defaults to 0 and jobs do not retry. Use retry:when to select which failures to retry on.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

    0 (default), 1 , or 2 .

    Example of retry :

    test:
      script: rspec
      retry: 2

    Use retry:when with retry:max to retry jobs for only specific failure cases. retry:max is the maximum number of retries, like retry , and can be 0 , 1 , or 2 .

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A single failure type, or an array of one or more failure types:
  • always : Retry on any failure (default).
  • unknown_failure : Retry when the failure reason is unknown.
  • script_failure : Retry when:
  • The script failed.
  • The runner failed to pull the Docker image. For docker , docker+machine , kubernetes executors .
  • api_failure : Retry on API failure.
  • stuck_or_timeout_failure : Retry when the job got stuck or timed out.
  • runner_system_failure : Retry if there is a runner system failure (for example, job setup failed).
  • runner_unsupported : Retry if the runner is unsupported.
  • stale_schedule : Retry if a delayed job could not be executed.
  • job_execution_timeout : Retry if the script exceeded the maximum execution time set for the job.
  • archived_failure : Retry if the job is archived and can't be run.
  • unmet_prerequisites : Retry if the job failed to complete prerequisite tasks.
  • scheduler_failure : Retry if the scheduler failed to assign the job to a runner.
  • data_integrity_failure : Retry if there is a structural integrity problem detected.

    Example of retry:when (single failure type):

    test:
      script: rspec
      retry:
        max: 2
        when: runner_system_failure

    If there is a failure other than a runner system failure, the job is not retried.

    Example of retry:when (array of failure types):

    test:
      script: rspec
      retry:
        max: 2
        when:
          - runner_system_failure
          - stuck_or_timeout_failure

    Related topics :

    You can specify the number of retry attempts for certain stages of job execution using variables.
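    For example, a minimal sketch that sets job-level variables for source fetching, cache restore, and artifact download attempts:

    test:
      script: rspec
      variables:
        GET_SOURCES_ATTEMPTS: 3           # Retry fetching the repository up to 3 times
        RESTORE_CACHE_ATTEMPTS: 2         # Retry restoring the cache up to 2 times
        ARTIFACT_DOWNLOAD_ATTEMPTS: 2     # Retry downloading artifacts up to 2 times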

    Introduced in GitLab 12.3.

    Use rules to include or exclude jobs in pipelines.

    Rules are evaluated when the pipeline is created, and evaluated in order until the first match. When a match is found, the job is either included or excluded from the pipeline, depending on the configuration.

    You cannot use dotenv variables created in job scripts in rules, because rules are evaluated before any jobs run.

    rules replaces only/except and they can't be used together in the same job. If you configure one job to use both keywords, GitLab returns a key may not be used with rules error.

    rules accepts an array of rules defined with:

  • if
  • changes
  • exists
  • allow_failure
  • needs
  • variables
  • when
  • You can combine multiple keywords together for complex rules .

    The job is added to the pipeline:

  • If an if , changes , or exists rule matches and also has when: on_success (default), when: delayed , or when: always .
  • If a rule is reached that is only when: on_success , when: delayed , or when: always .
  • The job is not added to the pipeline:

  • If no rules match.
  • If a rule matches and has when: never .
  • You can use !reference tags to reuse rules configuration in different jobs.
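    For example, a minimal sketch of reusing a rules configuration with !reference ; the hidden .standard-rules job is hypothetical:

    .standard-rules:
      rules:
        - if: $CI_PIPELINE_SOURCE == "schedule"
          when: never
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    job1:
      script: echo "This job reuses the rules from .standard-rules."
      rules:
        - !reference [.standard-rules, rules]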

    Use rules:if clauses to specify when to add a job to a pipeline. if clauses are evaluated based on the values of CI/CD variables or predefined CI/CD variables , with some exceptions .

    Keyword type : Job-specific and pipeline-specific. You can use it as part of a job to configure the job behavior, or with workflow to configure the pipeline behavior.

    Possible inputs :

  • A CI/CD variable expression .
  • Example of rules:if :

    job:
      script: echo "Hello, Rules!"
      rules:
        - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != $CI_DEFAULT_BRANCH
          when: never
        - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/
          when: manual
          allow_failure: true
        - if: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME

    Additional details :

  • If a rule matches and has no when defined, the rule uses the when defined for the job, which defaults to on_success if not defined.
  • In GitLab 14.5 and earlier, you can define when once per rule, or once at the job-level, which applies to all rules. You can't mix when at the job-level with when in rules.
  • In GitLab 14.6 and later, you can mix when at the job-level with when in rules . when configuration in rules takes precedence over when at the job-level.
  • Unlike variables in script sections, variables in rules expressions are always formatted as $VARIABLE .
  • You can use rules:if with include to conditionally include other configuration files .
  • CI/CD variables on the right side of =~ and !~ expressions are evaluated as regular expressions .
  • Related topics :

    Common if expressions for rules . Avoid duplicate pipelines . Use rules to run merge request pipelines .
    Use rules:changes to specify when to add a job to a pipeline by checking for changes to specific files.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : An array including any number of:

  • Paths to files. File paths can include variables . A file path array can also be in rules:changes:paths .
  • Wildcard paths for:
  • Single directories, for example path/to/directory/* .
  • A directory and all its subdirectories, for example path/to/directory/**/* .
  • Wildcard glob paths for all files with the same extension or multiple extensions, for example *.md or path/to/directory/*.{rb,py,sh} . See the Ruby fnmatch documentation for the supported syntax list.
  • Wildcard paths to files in the root directory, or all directories, wrapped in double quotes. For example "*.json" or "**/*.json" .
  • Example of rules:changes :

    docker build:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          changes:
            - Dockerfile
          when: manual
          allow_failure: true
    In this example:

  • If the pipeline is a merge request pipeline, check Dockerfile for changes.
  • If Dockerfile has changed, add the job to the pipeline as a manual job, and the pipeline continues running even if the job is not triggered ( allow_failure: true ).
  • If Dockerfile has not changed, do not add the job to any pipeline (same as when: never ).

    Additional details :

  • rules: changes works the same way as only: changes and except: changes .
  • rules:changes:paths is the same as rules:changes without any subkeys.
  • A maximum of 50 patterns or file paths can be defined per rules:changes section.
  • You can use when: never to implement a rule similar to except:changes .
  • changes resolves to true if any of the matching files are changed (an OR operation).

    Related topics :

    Jobs or pipelines can run unexpectedly when using rules: changes .

    Introduced in GitLab 15.2.

    Use rules:changes to specify that a job only be added to a pipeline when specific files are changed, and use rules:changes:paths to specify the files.

    rules:changes:paths is the same as using rules:changes without any subkeys. All additional details and related topics are the same.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • An array of file paths. File paths can include variables .
  • Example of rules:changes:paths :

    docker-build-1:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          changes:
            - Dockerfile
    docker-build-2:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          changes:
            paths:
              - Dockerfile

    In this example, both jobs have the same behavior.

    Introduced in GitLab 15.3 with a flag named ci_rules_changes_compare . Enabled by default. Generally available in GitLab 15.5. Feature flag ci_rules_changes_compare removed.

    Use rules:changes:compare_to to specify which ref to compare against for changes to the files listed under rules:changes:paths .

    Keyword type : Job keyword. You can use it only as part of a job, and it must be combined with rules:changes:paths .

    Possible inputs :

  • A branch name, like main , branch1 , or refs/heads/branch1 .
  • A tag name, like tag1 or refs/tags/tag1 .
  • A commit SHA, like 2fg31ga14b .
  • Example of rules:changes:compare_to :

    docker build:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - if: $CI_PIPELINE_SOURCE == "merge_request_event"
          changes:
            paths:
              - Dockerfile
            compare_to: 'refs/heads/branch1'

    In this example, the docker build job is only included when the Dockerfile has changed relative to refs/heads/branch1 and the pipeline source is a merge request event.

    Use rules:exists to run a job when certain files exist in the repository.

    Possible inputs :

  • An array of file paths. Paths can use glob patterns and CI/CD variables .
  • Example of rules:exists :

    job:
      script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
      rules:
        - exists:
            - Dockerfile

    In this example, the job runs if a Dockerfile exists anywhere in the repository.

    Additional details :

  • Glob patterns are interpreted with Ruby File.fnmatch with the flags File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB .
  • For performance reasons, GitLab performs a maximum of 10,000 checks against exists patterns or file paths. After the 10,000th check, rules with patterned globs always match. In other words, the exists rule always assumes a match in projects with more than 10,000 files, or if there are fewer than 10,000 files but the exists rules are checked more than 10,000 times.
  • A maximum of 50 patterns or file paths can be defined per rules:exists section.
  • exists resolves to true if any of the listed files are found (an OR operation).

    Introduced in GitLab 12.8.

    Use allow_failure: true in rules to allow a job to fail without stopping the pipeline.

    You can also use allow_failure: true with a manual job. The pipeline continues running without waiting for the result of the manual job. allow_failure: false combined with when: manual in rules causes the pipeline to wait for the manual job to run before continuing.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

    true or false . Defaults to false if not defined.

    Example of rules:allow_failure :

    job:
      script: echo "Hello, Rules!"
      rules:
        - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH
          when: manual
          allow_failure: true

    If the rule matches, then the job is a manual job with allow_failure: true .

    Additional details :

  • The rule-level rules:allow_failure overrides the job-level allow_failure , and only applies when the specific rule triggers the job.
  • Introduced in GitLab 16.0 with a flag named introduce_rules_with_needs . Disabled by default. Generally available in GitLab 16.2. Feature flag introduce_rules_with_needs removed.

    Use needs in rules to update a job's needs for specific conditions. When a condition matches a rule, the job's needs configuration is completely replaced with the needs in the rule.

    Keyword type : Job-specific. You can use it only as part of a job.

    Possible inputs :

  • An array of job names as strings.
  • A hash with a job name, optionally with additional attributes.
  • An empty array ( [] ), to set the job needs to none when the specific condition is met.
  • Example of rules:needs :

    build-dev:
      stage: build
      rules:
        - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH
      script: echo "Feature branch, so building dev version..."
    build-prod:
      stage: build
      rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      script: echo "Default branch, so building prod version..."
    specs:
      stage: test
      needs: ['build-dev']
      rules:
        - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
          needs: ['build-prod']
        - when: on_success # Run the job in other cases
      script: echo "Running dev specs by default, or prod specs when default branch..."

    In this example:

  • If the pipeline runs on a branch that is not the default branch, the specs job needs the build-dev job (default behavior).
  • If the pipeline runs on the default branch, and therefore the rule matches the condition, the specs job needs the build-prod job instead.
  • Additional details :

  • needs in rules override any needs defined at the job-level. When overridden, the behavior is the same as job-level needs .
  • needs in rules can accept artifacts and optional .

    Use variables in rules to define variables for specific conditions.

    Keyword type : Job-specific. You can use it only as part of a job.

    Possible inputs :

  • A hash of variables in the format VARIABLE-NAME: value .
  • Example of rules:variables :

    job:
      variables:
        DEPLOY_VARIABLE: "default-deploy"
      rules:
        - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
          variables:                              # Override DEPLOY_VARIABLE defined
            DEPLOY_VARIABLE: "deploy-production"  # at the job level.
        - if: $CI_COMMIT_REF_NAME =~ /feature/
          variables:
            IS_A_FEATURE: "true"                  # Define a new variable.
      script:
        - echo "Run script with $DEPLOY_VARIABLE as an argument"
        - echo "Run another script if $IS_A_FEATURE exists"

    Use script to specify commands for the runner to execute. All jobs except trigger jobs require a script keyword.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : An array including:

  • Single line commands.
  • Long commands split over multiple lines .
  • YAML anchors .

    CI/CD variables are supported .

    Example of script :

    job1:
      script: "bundle exec rspec"
    job2:
      script:
        - uname -a
        - bundle exec rspec
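    For longer commands, a minimal sketch that splits a single script entry over multiple lines with a YAML literal block (the job name is hypothetical):

    job3:
      script:
        - |
          echo "First line of a multi-line command."
          echo "Second line of the same script entry."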

    Additional details :

  • When you use these special characters in script , you must use single quotes ( ' ) or double quotes ( " ) .
  • Related topics :

  • You can ignore non-zero exit codes .
  • Use color codes with script to make job logs easier to review. Create custom collapsible sections to simplify job log output.

    Introduced in GitLab 13.4.

    Use secrets to specify CI/CD secrets to:

  • Retrieve from an external secrets provider.
  • Make available in the job as CI/CD variables ( file type by default).
  • Introduced in GitLab 13.4 and GitLab Runner 13.4.

    Use secrets:vault to specify secrets provided by a HashiCorp Vault .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • engine:name : Name of the secrets engine.
  • engine:path : Path to the secrets engine.
  • path : Path to the secret.
  • field : Name of the field where the password is stored.

    Example of secrets:vault :

    To specify all details explicitly and use the KV-V2 secrets engine:

    job:
      secrets:
        DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
          vault:  # Translates to secret: `ops/data/production/db`, field: `password`
            engine:
              name: kv-v2
              path: ops
            path: production/db
            field: password

    You can shorten this syntax. With the short syntax, engine:name and engine:path both default to kv-v2 :

    job:
      secrets:
        DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
          vault: production/db/password  # Translates to secret: `kv-v2/data/production/db`, field: `password`

    To specify a custom secrets engine path in the short syntax, add a suffix that starts with @ :

    job:
      secrets:
        DATABASE_PASSWORD:  # Store the path to the secret in this CI/CD variable
          vault: production/db/password@ops  # Translates to secret: `ops/data/production/db`, field: `password`

    Introduced in GitLab 16.3 and GitLab Runner 16.3.

    Use secrets:azure_key_vault to specify secrets provided by an Azure Key Vault .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • name : Name of the secret.
  • version : Version of the secret.

    Example of secrets:azure_key_vault :

    job:
      secrets:
        DATABASE_PASSWORD:
          azure_key_vault:
            name: 'test'
            version: 'test'

    Related topics :

    Use Azure Key Vault secrets in GitLab CI/CD .

    Introduced in GitLab 14.1 and GitLab Runner 14.1.

    Use secrets:file to configure the secret to be stored as either a file or variable type CI/CD variable.

    By default, the secret is passed to the job as a file type CI/CD variable. The value of the secret is stored in the file and the variable contains the path to the file.

    If your software can't use file type CI/CD variables, set file: false to store the secret value directly in the variable.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

    true (default) or false .

    Example of secrets:file :

    job:
      secrets:
        DATABASE_PASSWORD:
          vault: production/db/password@ops
          file: false

    Additional details :

  • The file keyword is a setting for the CI/CD variable and must be nested under the CI/CD variable name, not in the vault section.
  • Introduced in GitLab 15.8.

    Use secrets:token to explicitly select a token to use when authenticating with Vault by referencing the token's CI/CD variable.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • The name of an ID token
  • Example of secrets:token :

    job:
      id_tokens:
        AWS_TOKEN:
          aud: https://aws.example.com
        VAULT_TOKEN:
          aud: https://vault.example.com
      secrets:
        DB_PASSWORD:
          vault: gitlab/production/db
          token: $VAULT_TOKEN

    Additional details :

  • When the token keyword is not set, the first ID token is used to authenticate.
  • In GitLab 15.8 to 15.11, you must enable Limit JSON Web Token (JWT) access for this keyword to be available.
  • When Limit JSON Web Token (JWT) access is disabled, the token keyword is ignored and the CI_JOB_JWT CI/CD variable is used to authenticate.
    Use services to specify any additional Docker images that your scripts require to run successfully. The services image is linked to the image specified in the image keyword.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : The name of the services image, including the registry path if needed, in one of these formats:

    <image-name> (Same as using <image-name> with the latest tag)
  • <image-name>:<tag>
  • <image-name>@<digest>
  • CI/CD variables are supported , but not for alias .

    Example of services :

    default:
      image:
        name: ruby:2.6
        entrypoint: ["/bin/bash"]
      services:
        - name: my-postgres:11.7
          alias: db-postgres
          entrypoint: ["/usr/local/bin/db-postgres"]
          command: ["start"]
      before_script:
        - bundle install
    test:
      script:
        - bundle exec rake spec

    In this example, GitLab launches two containers for the job:

  • A Ruby container that runs the script commands.
  • A PostgreSQL container. The script commands in the Ruby container can connect to the PostgreSQL database at the db-postgres hostname.
  • Related topics :

    Available settings for services . Define services in the .gitlab-ci.yml file . Run your CI/CD jobs in Docker containers . Use Docker to build Docker images .

    Introduced in GitLab 15.1 with a flag named ci_docker_image_pull_policy . Disabled by default. Enabled on GitLab.com and self-managed in GitLab 15.2. Generally available in GitLab 15.4. Feature flag ci_docker_image_pull_policy removed. Requires GitLab Runner 15.1 or later.

    The pull policy that the runner uses to fetch the Docker image.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • A single pull policy, or multiple pull policies in an array. Can be always , if-not-present , or never .
  • Examples of services:pull_policy :

    job1:
      script: echo "A single pull policy."
      services:
        - name: postgres:11.6
          pull_policy: if-not-present
    job2:
      script: echo "Multiple pull policies."
      services:
        - name: postgres:11.6
          pull_policy: [always, if-not-present]

    Additional details :

  • If the runner does not support the defined pull policy, the job fails with an error similar to: ERROR: Job failed (system failure): the configured PullPolicies ([always]) are not allowed by AllowedPullPolicies ([never]) .
  • Related topics :

    Run your CI/CD jobs in Docker containers . How runner pull policies work . Using multiple pull policies .

    Use stage to define which stage a job runs in. Jobs in the same stage can execute in parallel (see Additional details ).

    If stage is not defined, the job uses the test stage by default.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs : A string, which can be a:

    Default stage .
  • User-defined stages.
  • Example of stage :

    stages:
      - build
      - test
      - deploy
    job1:
      stage: build
      script:
        - echo "This job compiles code."
    job2:
      stage: test
      script:
        - echo "This job tests the compiled code. It runs when the build stage completes."
    job3:
      script:
        - echo "This job also runs in the test stage".
    job4:
      stage: deploy
      script:
        - echo "This job deploys the code. It runs when the test stage completes."
      environment: production

    Additional details :

  • Jobs can run in parallel if they run on different runners.
  • If you have only one runner, jobs can run in parallel if the runner's concurrent setting is greater than 1 .
  • Introduced in GitLab 12.4.

    Use the .pre stage to make a job run at the start of a pipeline. .pre is always the first stage in a pipeline. User-defined stages execute after .pre . You do not have to define .pre in stages .

    If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage.

    Keyword type : You can only use it with a job's stage keyword.

    Example of stage: .pre :

    stages:
      - build
      - test
    job1:
      stage: build
      script:
        - echo "This job runs in the build stage."
    first-job:
      stage: .pre
      script:
        - echo "This job runs in the .pre stage, before all other stages."
    job2:
      stage: test
      script:
        - echo "This job runs in the test stage."

    Introduced in GitLab 12.4.

    Use the .post stage to make a job run at the end of a pipeline. .post is always the last stage in a pipeline. User-defined stages execute before .post . You do not have to define .post in stages .

    If a pipeline contains only jobs in the .pre or .post stages, it does not run. There must be at least one other job in a different stage.

    Keyword type : You can only use it with a job's stage keyword.

    Example of stage: .post :

    stages:
      - build
      - test
    job1:
      stage: build
      script:
        - echo "This job runs in the build stage."
    last-job:
      stage: .post
      script:
        - echo "This job runs in the .post stage, after all other stages."
    job2:
      stage: test
      script:
        - echo "This job runs in the test stage."

    Additional details:

  • If a pipeline has jobs with needs: [] and jobs in the .pre stage, they will all start as soon as the pipeline is created. Jobs with needs: [] start immediately, ignoring any stage configuration.
  • A limit of 50 tags per job enabled on GitLab.com in GitLab 14.3.
  • A limit of 50 tags per job enabled on self-managed in GitLab 14.3.
  • Use tags to select a specific runner from the list of all runners that are available for the project.

    When you register a runner, you can specify the runner's tags, for example ruby , postgres , or development . To pick up and run a job, a runner must be assigned every tag listed in the job.

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs :

  • An array of tag names.
  • CI/CD variables are supported in GitLab 14.1 and later.
  • Example of tags :

    job:
      tags:
        - ruby
        - postgres

    In this example, only runners with both the ruby and postgres tags can run the job.
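    A minimal sketch of using a CI/CD variable in tags (GitLab 14.1 and later); the RUNNER_TAG variable, its value, and the job name are hypothetical:

    variables:
      RUNNER_TAG: "ruby"
    variable-tag-job:
      tags:
        - $RUNNER_TAG   # Resolves to "ruby" when the job is created
      script: echo "Runs on a runner tagged 'ruby'."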

    Additional details :

  • In GitLab 14.3 and later, the number of tags must be less than 50 .
  • Related topics :

    Use tags to control which jobs a runner can run . Select different runner tags for each parallel matrix job .

    Introduced in GitLab 12.3.

    Use timeout to configure a timeout for a specific job. If the job runs for longer than the timeout, the job fails.

    The job-level timeout can be longer than the project-level timeout , but can't be longer than the runner's timeout .

    Keyword type : Job keyword. You can use it only as part of a job or in the default section .

    Possible inputs : A period of time written in natural language. For example, these are all equivalent:

  • 3600 seconds
  • 60 minutes
  • one hour
  • Example of timeout :

    build:
      script: build.sh
      timeout: 3 hours 30 minutes
    test:
      script: rspec
      timeout: 3h 30m

    Use trigger to declare that a job is a "trigger job" which starts a downstream pipeline that is either:

  • A multi-project pipeline .
  • A child pipeline .

    Trigger jobs can use only a limited set of GitLab CI/CD configuration keywords. The keywords available for use in trigger jobs are:

  • allow_failure .
  • extends .
  • needs , but not needs:project .
  • only and except .
  • rules .
  • stage .
  • trigger .
  • variables .
  • when (only with a value of on_success , on_failure , or always ).

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • For multi-project pipelines, the path to the downstream project. CI/CD variables are supported in GitLab 15.3 and later, but not job-level persisted variables . Alternatively, use trigger:project .
  • For child pipelines, use trigger:include .
  • Example of trigger :

    trigger-multi-project-pipeline:
      trigger: my-group/my-project

    Additional details :

  • You cannot use the API to start when:manual trigger jobs .
  • In GitLab 13.5 and later , you can use when:manual in the same job as trigger . In GitLab 13.4 and earlier, using them together causes the error jobs:#{job-name} when should be on_success, on_failure or always .
  • You cannot manually specify CI/CD variables before running a manual trigger job.
  • Manual pipeline variables and scheduled pipeline variables are not passed to downstream pipelines by default. Use trigger:forward to forward these variables to downstream pipelines. Job-level persisted variables are not available in trigger jobs.

    Related topics :

    Multi-project pipeline configuration examples .
  • To run a pipeline for a specific branch, tag, or commit, you can use a trigger token to authenticate with the pipeline triggers API . The trigger token is different than the trigger keyword.
    Use trigger:include to declare that a job is a "trigger job" which starts a child pipeline .

    Use trigger:include:artifact to trigger a dynamic child pipeline .

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • The path to the child pipeline's configuration file.
  • Example of trigger:include :

    trigger-child-pipeline:
      trigger:
        include: path/to/child-pipeline.gitlab-ci.yml

    Related topics :

    Child pipeline configuration examples .

    Use trigger:project to declare that a job is a "trigger job" which starts a multi-project pipeline .

    By default, the multi-project pipeline triggers for the default branch. Use trigger:branch to specify a different branch.

    Keyword type : Job keyword. You can use it only as part of a job.

    Possible inputs :

  • The path to the downstream project. CI/CD variables are supported in GitLab 15.3 and later, but not job-level persisted variables .
  • Example of trigger:project :

    trigger-multi-project-pipeline:
      trigger:
        project: my-group/my-project

    Example of trigger:project for a different branch :

    trigger-multi-project-pipeline:
      trigger:
        project: my-group/my-project
        branch: development

    Related topics :

  • Multi-project pipeline configuration examples .
  • To run a pipeline for a specific branch, tag, or commit, you can also use a trigger token to authenticate with the pipeline triggers API . The trigger token is different than the trigger keyword.
  • Optional manual jobs in the downstream pipeline do not affect the status of the downstream pipeline or the upstream trigger job. The downstream pipeline can complete successfully without running any optional manual jobs.
  • Blocking manual jobs in the downstream pipeline must run before the trigger job is marked as successful or failed. The trigger job shows pending if the downstream pipeline status is waiting for manual action due to manual jobs. By default, jobs in later stages do not start until the trigger job completes.
  • If the downstream pipeline has a failed job, but the job uses allow_failure: true , the downstream pipeline is considered successful and the trigger job shows success .
  • Introduced in GitLab 14.9 with a flag named ci_trigger_forward_variables . Disabled by default. Enabled on GitLab.com and self-managed in GitLab 14.10. Generally available in GitLab 15.1. Feature flag ci_trigger_forward_variables removed.

    Use trigger:forward to specify what to forward to the downstream pipeline. You can control what is forwarded to both parent-child pipelines and multi-project pipelines .

    Possible inputs :

  • yaml_variables : true (default), or false . When true , variables defined in the trigger job are passed to downstream pipelines.
  • pipeline_variables : true or false (default). When true , manual pipeline variables and scheduled pipeline variables are passed to downstream pipelines.

    Example of trigger:forward :

    Run this pipeline manually , with the CI/CD variable MYVAR = my value :

    variables: # default variables for each job
      VAR: value
    # Default behavior:
    # - VAR is passed to the child
    # - MYVAR is not passed to the child
    child1:
      trigger:
        include: .child-pipeline.yml
    # Forward pipeline variables:
    # - VAR is passed to the child
    # - MYVAR is passed to the child
    child2:
      trigger:
        include: .child-pipeline.yml
        forward:
          pipeline_variables: true
    # Do not forward YAML variables:
    # - VAR is not passed to the child
    # - MYVAR is not passed to the child
    child3:
      trigger:
        include: .child-pipeline.yml
        forward:
          yaml_variables: false
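
    trigger:forward works the same way for multi-project pipelines. A minimal sketch, with a hypothetical downstream project path:

    deploy-downstream:
      trigger:
        project: my-group/deployment-project   # hypothetical downstream project
        forward:
          yaml_variables: true       # keep forwarding YAML-defined variables (the default)
          pipeline_variables: true   # also forward manual and scheduled pipeline variables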

    Use variables to define CI/CD variables for jobs.

    Keyword type : Global and job keyword. You can use it at the global level, and also at the job level.

    If you define variables as a global keyword , it behaves like default variables for all jobs. Each variable is copied to every job configuration when the pipeline is created. If the job already has that variable defined, the job-level variable takes precedence .

    Variables defined at the global level cannot be used as inputs for other global keywords like include . These variables can only be used at the job level: in script , before_script , and after_script sections, and as inputs in some job keywords like rules .

    Possible inputs : Variable name and value pairs:

  • The name can use only numbers, letters, and underscores ( _ ). In some shells, the first character must be a letter.
  • The value must be a string.
  • CI/CD variables are supported .

    Examples of variables :

    variables:
      DEPLOY_SITE: "https://example.com/"
    deploy_job:
      stage: deploy
      script:
        - deploy-script --url $DEPLOY_SITE --path "/"
      environment: production
    deploy_review_job:
      stage: deploy
      variables:
        REVIEW_PATH: "/review"
      script:
        - deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
      environment: production
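
    To illustrate the precedence rule, a minimal sketch where a job-level variable overrides the global value (the review URL is hypothetical):

    variables:
      DEPLOY_SITE: "https://example.com/"

    deploy_review_job:
      variables:
        DEPLOY_SITE: "https://review.example.com/"   # job-level value takes precedence
      script:
        - deploy-review-script --url $DEPLOY_SITE    # uses https://review.example.com/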

    Additional details :

  • All YAML-defined variables are also set to any linked Docker service containers .
  • YAML-defined variables are meant for non-sensitive project configuration. Store sensitive information in protected variables or CI/CD secrets .
  • Manual pipeline variables and scheduled pipeline variables are not passed to downstream pipelines by default. Use trigger:forward to forward these variables to downstream pipelines.

    Related topics :

    Predefined variables are variables the runner automatically creates and makes available in the job.
  • You can configure runner behavior with variables .
  • Introduced in GitLab 13.7.

    Use the description keyword to define a description for a pipeline-level (global) variable. The description displays with the prefilled variable name when running a pipeline manually .

    Keyword type : Global keyword. You cannot use it for job-level variables.

    Possible inputs :

  • A string.
  • Example of variables:description :

    variables:
      DEPLOY_NOTE:
        description: "The deployment note. Explain the reason for this deployment."

    Additional details :

  • When used without value , the variable exists in pipelines that were not triggered manually, and the default value is an empty string ( '' ).
  • Introduced in GitLab 13.7.

    Use the value keyword to define a pipeline-level (global) variable's value. When used with variables: description , the variable value is prefilled when running a pipeline manually .

    Keyword type : Global keyword. You cannot use it for job-level variables.

    Possible inputs :

  • A string.
  • Example of variables:value :

    variables:
      DEPLOY_ENVIRONMENT:
        value: "staging"
        description: "The deployment target. Change this variable to 'canary' or 'production' if needed."

    Additional details :

  • If used without variables: description , the behavior is the same as variables .
  • Introduced in GitLab 15.7.

    Use variables:options to define an array of values that are selectable in the UI when running a pipeline manually .

    Must be used with variables: value . The string defined for value :

  • Must also be one of the strings in the options array.
  • Is the default selection.

    If there is no description , this keyword has no effect.

    Keyword type : Global keyword. You cannot use it for job-level variables.

    Possible inputs :

  • An array of strings.
  • Example of variables:options :

    variables:
      DEPLOY_ENVIRONMENT:
        value: "staging"
        options:
          - "production"
          - "staging"
          - "canary"
        description: "The deployment target. Set to 'staging' by default."
    Introduced in GitLab 15.6 with a flag named ci_raw_variables_in_yaml_config . Disabled by default. Enabled on GitLab.com in GitLab 15.6. Enabled on self-managed in GitLab 15.7. Generally available in GitLab 15.8. Feature flag ci_raw_variables_in_yaml_config removed.

    Use the expand keyword to configure a variable to be expandable or not.

    Keyword type : Global and job keyword. You can use it at the global level, and also at the job level.

    Possible inputs :

  • true (default): The variable is expandable.
  • false : The variable is not expandable.

    Example of variables:expand :

    variables:
      VAR1: value1
      VAR2: value2 $VAR1
      VAR3:
        value: value3 $VAR1
        expand: false

    In this example:

  • The result of VAR2 is value2 value1 .
  • The result of VAR3 is value3 $VAR1 .

    Additional details :

  • The expand keyword can only be used with the global and job-level variables keywords. You can't use it with rules:variables or workflow:rules:variables .
    Use when to configure the conditions for when jobs run. If not defined in a job, the default value is when: on_success .

    Keyword type : Job keyword. You can use it as part of a job. when: always and when: never can also be used in workflow:rules .

    Possible inputs :

  • on_success (default): Run the job only when no jobs in earlier stages fail or have allow_failure: true .
  • on_failure : Run the job only when at least one job in an earlier stage fails. A job in an earlier stage with allow_failure: true is always considered successful.
  • never : Don't run the job regardless of the status of jobs in earlier stages. Can only be used in a rules section or workflow: rules .
  • always : Run the job regardless of the status of jobs in earlier stages. Can also be used in workflow:rules .
  • manual : Run the job only when triggered manually .
  • delayed : Delay the execution of a job for a specified duration, as in the sketch after this list.
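
    For when: delayed , the delay is set with start_in . A minimal sketch (the job name and script are hypothetical):

    rollout_canary:
      stage: deploy
      script:
        - ./rollout.sh canary   # hypothetical rollout script
      when: delayed
      start_in: 30 minutes      # the job starts automatically after the delay
      environment: production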

    Example of when :

    stages:
      - build
      - cleanup_build
      - test
      - deploy
      - cleanup
    build_job:
      stage: build
      script:
        - make build
    cleanup_build_job:
      stage: cleanup_build
      script:
        - cleanup build when failed
      when: on_failure
    test_job:
      stage: test
      script:
        - make test
    deploy_job:
      stage: deploy
      script:
        - make deploy
      when: manual
      environment: production
    cleanup_job:
      stage: cleanup
      script:
        - cleanup after jobs
      when: always

    In this example, the script:

  • Executes cleanup_build_job only when build_job fails.
  • Always executes cleanup_job as the last step in the pipeline, regardless of success or failure.
  • Executes deploy_job when you run it manually in the GitLab UI.
  • Additional details :

  • In GitLab 13.5 and later , you can use when:manual in the same job as trigger . In GitLab 13.4 and earlier, using them together causes the error jobs:#{job-name} when should be on_success, on_failure or always .
  • The default behavior of allow_failure changes to true with when: manual . However, if you use when: manual with rules , allow_failure defaults to false .
  • Related topics :

  • when can be used with rules for more dynamic job control.
  • when can be used with workflow to control when a pipeline can start.

    Defining image , services , cache , before_script , and after_script globally is deprecated. Use default instead. For example:

    default:
      image: ruby:3.0
      services:
        - docker:dind
      cache:
        paths: [vendor/]
      before_script:
        - bundle config set path vendor/bundle
        - bundle install
      after_script:
        - rm -rf tmp/