Software projects rarely work in isolation. Projects often rely on reusable functionality from libraries. Some projects organize unrelated functionality into separate parts of a modular system.
Dependency management is an automated technique for declaring, resolving, and using functionality required by a project.
Here is an example declaring these dependencies in both DSLs.

Kotlin:
    dependencies {
        implementation("com.google.guava:guava:32.1.2-jre")
        testImplementation("junit:junit:4.13.2")
    }

Groovy:
    dependencies {
        implementation 'com.google.guava:guava:32.1.2-jre'
        testImplementation 'junit:junit:4.13.2'
    }

Here we declare remote and local repositories for dependency locations.
You can declare repositories to tell Gradle where to fetch local or remote dependencies. In this example, Gradle fetches dependencies from the Maven Central and Google repositories.
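A minimal sketch of that declaration in the Kotlin DSL, using the built-in shorthands:

    repositories {
        mavenCentral()
        google()
    }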
During a build, Gradle locates and downloads the dependencies, a process called dependency resolution. Gradle then stores resolved dependencies in a local cache called the dependency cache. Subsequent builds use this cache to avoid unnecessary network calls and speed up the build process.
You can add code to your Java project from an external library such as com.google.common.base (a Guava package), which becomes a dependency. In this example, the theoretical project uses Guava version 32.1.2-jre and JUnit 4.13.2 as dependencies. A build engineer can declare dependencies for different scopes. For example, you can declare dependencies that are only used at compile time. Gradle calls the scope of a dependency a configuration.
Repositories offer dependencies in multiple formats. For information about the formats supported by Gradle, see dependency types.
Metadata describes dependencies. Examples of metadata include the coordinates for finding a dependency in a repository (group, name, version) and the list of its own transitive dependencies.
You can customize Gradle’s handling of transitive dependencies based on the requirements of a project.
Projects with hundreds of declared dependencies can be difficult to debug. Gradle provides tools to visualize and analyze a project’s dependency graph (i.e. dependency tree). You can use a Build Scan™ or built-in tasks.
Gradle can resolve dependencies from one or many repositories based on Maven, Ivy or flat directory formats. Check out the full reference on all types of repositories for more information.
Organizations building software may want to leverage public binary repositories to download and consume open source dependencies. Popular public repositories include Maven Central and the Google Android repository. Gradle provides built-in shorthand notations for these widely-used repositories.
Under the covers Gradle resolves dependencies from the respective URL of the public repository defined by the shorthand notation. All shorthand notations are available via the RepositoryHandler API. Alternatively, you can spell out the URL of the repository for more fine-grained control.
Maven Central is a popular repository hosting open source libraries for consumption by Java projects.
To declare the Maven Central repository for your build add this to your script:
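Using the built-in shorthand in the Kotlin DSL:

    repositories {
        mavenCentral()
    }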
The Google repository hosts Android-specific artifacts including the Android SDK. For usage examples, see the relevant Android documentation.
To declare the Google Maven repository add this to your build script:
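Again using the built-in shorthand (Kotlin DSL):

    repositories {
        google()
    }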
Most enterprise projects set up a binary repository available only within an intranet. In-house repositories enable teams to publish internal binaries, set up user management and security measures, and ensure uptime and availability. Specifying a custom URL is also helpful if you want to declare a less popular, but publicly-available repository.
Repositories with custom URLs can be specified as Maven or Ivy repositories by calling the corresponding methods available on the RepositoryHandler API. Gradle supports other protocols than http or https as part of the custom URL, e.g. file, sftp or s3. For full coverage, see the section on supported repository types. You can also define your own repository layout by using ivy { } repositories, as they are very flexible in terms of how modules are organised in a repository.
You can define more than one repository for resolving dependencies. Declaring multiple repositories is helpful if some dependencies are only available in one repository but not the other. You can mix any type of repository described in the reference section.
This example demonstrates how to declare various named and custom URL repositories for a project:
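A Kotlin DSL sketch; the custom URLs are placeholders standing in for real repositories:

    repositories {
        mavenCentral()
        maven {
            url = uri("https://repo.mycompany.com/maven2") // placeholder URL
        }
        ivy {
            url = uri("https://repo.mycompany.com/ivy") // placeholder URL
        }
    }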
The order of declaration determines how Gradle will check for dependencies at runtime. If Gradle finds a module descriptor in a particular repository, it will attempt to download all of the artifacts for that module from the same repository. You can learn more about the inner workings of dependency downloads.
Maven POM metadata can reference additional repositories. These will be ignored by Gradle, which will only use the repositories declared in the build itself.
Gradle supports a wide range of sources for dependencies, both in terms of format and in terms of connectivity. You may resolve dependencies from:
Some projects might prefer to store dependencies on a shared drive or as part of the project source code instead of a binary repository product. If you want to use a (flat) filesystem directory as a repository, simply type:
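A minimal Kotlin DSL sketch; the directory names are illustrative:

    repositories {
        flatDir {
            dirs("lib")
        }
        flatDir {
            dirs("lib1", "lib2")
        }
    }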
This type of repository does not support any meta-data formats like Ivy XML or Maven POM files. Instead, Gradle will dynamically generate a module descriptor (without any dependency information) based on the presence of artifacts.
As Gradle prefers to use modules whose descriptor has been created from real meta-data rather than being generated, flat directory repositories cannot be used to override artifacts with real meta-data from other repositories declared in the build.
For example, if Gradle finds only jmxri-1.2.1.jar in a flat directory repository, but jmxri-1.2.1.pom in another repository that supports meta-data, it will use the second repository to provide the module.
For the use case of overriding remote artifacts with local ones consider using an Ivy or Maven repository instead whose URL points to a local directory.
The following sections describe the Maven and Ivy repository formats. Both can be declared as local repositories, using a local filesystem path to access them. The difference from a flat directory repository is that they respect a format and contain metadata. When such a repository is configured, Gradle totally bypasses its dependency cache for it, as there can be no guarantee that content does not change between executions. Because of that limitation, they can have a performance impact. They also make build reproducibility much harder to achieve, and their use should be limited to tinkering or prototyping.
Many organizations host dependencies in an in-house Maven repository only accessible within the company’s network. Gradle can declare Maven repositories by URL.
For adding a custom Maven repository you can do:
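A minimal Kotlin DSL sketch (the URL is a placeholder):

    repositories {
        maven {
            url = uri("http://repo.mycompany.com/maven2")
        }
    }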
Sometimes a repository will have the POMs published to one location, and the JARs and other artifacts published at another location. To define such a repository, you can do:
Kotlin:
    maven {
        // Look for POMs and artifacts, such as JARs, here
        url = uri("http://repo2.mycompany.com/maven2")
        // Look for artifacts here if not found at the above location
        artifactUrls("http://repo.mycompany.com/jars")
        artifactUrls("http://repo.mycompany.com/jars2")
    }

Groovy:
    maven {
        // Look for POMs and artifacts, such as JARs, here
        url "http://repo2.mycompany.com/maven2"
        // Look for artifacts here if not found at the above location
        artifactUrls "http://repo.mycompany.com/jars"
        artifactUrls "http://repo.mycompany.com/jars2"
    }

You can specify credentials for Maven repositories secured by different types of authentication.
See Supported repository transport protocols for authentication options.
Gradle can consume dependencies available in the local Maven repository. Declaring this repository is beneficial for teams that publish to the local Maven repository with one project and consume the artifacts with Gradle in another project.
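Declaring it takes a single line in the Kotlin DSL:

    repositories {
        mavenLocal()
    }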
Gradle uses the same logic as Maven to identify the location of your local Maven cache.
If a local repository location is defined in a settings.xml, this location will be used. The settings.xml in <home directory of the current user>/.m2 takes precedence over the settings.xml in M2_HOME/conf. If no settings.xml is available, Gradle uses the default location <home directory of the current user>/.m2/repository.
As general advice, you should avoid adding mavenLocal() as a repository. There are different issues with using mavenLocal() that you should be aware of. If you do use it, restrict it to the following use cases:
- Project A is built with Maven, project B is built with Gradle, and you need to share the artifacts during development. It is always preferable to use an internal full-featured repository instead; in case this is not possible, you should limit this to local builds only.
- In a multi-repository world, you want to check that changes to project A work with project B. It is preferable to use composite builds for this use case; if for some reason neither composite builds nor a full-featured repository are possible, then mavenLocal() is a last resort option.
After all these warnings, if you end up using mavenLocal(), consider combining it with a repository filter. This will make sure it only provides what is expected and nothing else.
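A sketch of such a filter in the Kotlin DSL; the group com.mycompany is a hypothetical internal group:

    repositories {
        mavenLocal {
            content {
                // assumption: only this internal group is expected from the local repository
                includeGroup("com.mycompany")
            }
        }
    }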
Organizations might decide to host dependencies in an in-house Ivy repository. Gradle can declare Ivy repositories by URL.
To declare an Ivy repository using the standard layout, no additional customization is needed. You just declare the URL.
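For example (Kotlin DSL, placeholder URL):

    repositories {
        ivy {
            url = uri("http://repo.mycompany.com/repo")
        }
    }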
You can specify that your repository conforms to the Ivy or Maven default layout by using a named layout.
To define an Ivy repository with a non-standard layout, you can define a pattern layout for the repository:
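A Kotlin DSL sketch with an illustrative pattern:

    repositories {
        ivy {
            url = uri("http://repo.mycompany.com/repo") // placeholder URL
            patternLayout {
                artifact("[module]/[revision]/[type]/[artifact].[ext]")
            }
        }
    }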
To define an Ivy repository which fetches Ivy files and artifacts from different locations, you can define separate patterns to use to locate the Ivy files and artifacts:
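A Kotlin DSL sketch; both patterns and the URL are illustrative:

    repositories {
        ivy {
            url = uri("http://repo.mycompany.com/repo")
            patternLayout {
                // pattern used to locate artifact files
                artifact("3rd-party-artifacts/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]")
                // pattern used to locate ivy descriptor files
                ivy("ivy-files/[organisation]/[module]/[revision]/ivy.xml")
            }
        }
    }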
Each artifact or ivy specified for a repository adds an additional pattern to use. The patterns are used in the order that they are defined. Optionally, a repository with pattern layout can have its 'organisation' part laid out in Maven style, with forward slashes replacing dots as separators. For example, the organisation my.company would then be represented as my/company.
You can specify credentials for Ivy repositories secured by basic authentication.
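A Kotlin DSL sketch with placeholder URL and credentials:

    repositories {
        ivy {
            url = uri("http://repo.mycompany.com")
            credentials {
                username = "user"
                password = "password"
            }
        }
    }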
Gradle exposes an API to declare what a repository may or may not contain. There are different use cases for it:
This is even more important when considering that the declared order of repositories matters.
It is possible to filter either by explicit group, module or version, either strictly or using regular expressions. When using a strict version, it is possible to use a version range, using the format supported by Gradle. In addition, there are filtering options by resolution context: configuration name or even configuration attributes. See RepositoryContentDescriptor for details.
Filters declared using the repository-level content filter are not exclusive. This means that declaring that a repository includes an artifact doesn’t mean that the other repositories can’t have it either: you must declare exhaustively what every repository contains.
Alternatively, Gradle provides an API which lets you declare that a repository exclusively includes an artifact. If you do so:
Kotlin:
    repositories {
        // This repository will _not_ be searched for artifacts in my.company
        // despite being declared first
        mavenCentral()
        exclusiveContent {
            forRepository {
                maven {
                    url = uri("https://repo.mycompany.com/maven2")
                }
            }
            filter {
                // this repository *only* contains artifacts with group "my.company"
                includeGroup("my.company")
            }
        }
    }

Groovy:
    repositories {
        // This repository will _not_ be searched for artifacts in my.company
        // despite being declared first
        mavenCentral()
        exclusiveContent {
            forRepository {
                maven {
                    url "https://repo.mycompany.com/maven2"
                }
            }
            filter {
                // this repository *only* contains artifacts with group "my.company"
                includeGroup "my.company"
            }
        }
    }
It is possible to filter either by explicit group, module or version, either strictly or using regular expressions.
See InclusiveRepositoryContentDescriptor for details.
If you leverage exclusive content filtering in the pluginManagement section of the settings.gradle(.kts), it becomes illegal to add more repositories through the project buildscript.repositories. In that case, the build configuration will fail. Your options are either to declare all repositories in settings or to use non-exclusive content filtering.
Maven repository filtering
For Maven repositories, it’s often the case that a repository would either contain releases or snapshots.
Gradle lets you declare what kind of artifacts are found in a repository using this DSL:
Example 17. Splitting snapshots and releases
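A Kotlin DSL sketch of such a split, with placeholder URLs:

    repositories {
        maven {
            url = uri("https://repo.mycompany.com/releases")
            mavenContent {
                releasesOnly()
            }
        }
        maven {
            url = uri("https://repo.mycompany.com/snapshots")
            mavenContent {
                snapshotsOnly()
            }
        }
    }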
Supported metadata sources
When searching for a module in a repository, Gradle, by default, checks for supported metadata file formats in that repository.
In a Maven repository, Gradle looks for a .pom file, in an Ivy repository it looks for an ivy.xml file, and in a flat directory repository it looks directly for .jar files as it does not expect any metadata. Starting with 5.0, Gradle also looks for .module (Gradle module metadata) files. However, if you define a customized repository you might want to configure this behavior. For example, you can define a Maven repository without .pom files but only jars. To do so, you can configure metadata sources for any repository. You can specify multiple sources to tell Gradle to keep looking if a file was not found. In that case, the order of checking for sources is predefined.
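For example, a Kotlin DSL sketch of a Maven repository where Gradle should check POMs first and fall back to the artifact itself (URL is a placeholder):

    repositories {
        maven {
            url = uri("http://repo.mycompany.com/repo")
            metadataSources {
                mavenPom()
                artifact()
            }
        }
    }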
The following metadata sources are supported:
Table 1. Supported metadata sources
The defaults for Ivy and Maven repositories changed with Gradle 6.0. Before 6.0, artifact() was included in the defaults, leading to some inefficiency when modules are missing completely. To restore this behavior, for example for Maven Central, you can use:

    mavenCentral { metadataSources { mavenPom(); artifact() } }

In a similar way, you can opt into the new behavior in older Gradle versions using:

    mavenCentral { metadataSources { mavenPom() } }
Since Gradle 5.3, when parsing a metadata file, be it Ivy or Maven, Gradle will look for a marker indicating that a matching Gradle Module Metadata file exists. If it is found, it will be used instead of the Ivy or Maven file. Starting with Gradle 5.6, you can disable this behavior by adding ignoreGradleMetadataRedirection() to the metadataSources declaration.
Plugin repositories vs. build repositories
Gradle will use repositories at two different phases during your build.
The first phase is when configuring your build and loading the plugins it applied.
To do that Gradle will use a special set of repositories.
The second phase is during dependency resolution.
At this point Gradle will use the repositories declared in your project, as shown in the previous sections.
Plugin repositories
By default Gradle will use the Gradle plugin portal to look for plugins.
However, for different reasons, there are plugins available in other, public or not, repositories.
When a build requires one of these plugins, additional repositories need to be specified so that Gradle knows where to search.
As the way to declare the repositories and what they are expected to contain depends on the way the plugin is applied, it is best to refer to Custom Plugin Repositories.
Repositories used by convention in every subproject can be declared in the settings.gradle(.kts) file:
Example 20. Declaring a Maven repository in settings
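A minimal Kotlin DSL sketch of such a settings file:

    dependencyResolutionManagement {
        repositories {
            mavenCentral()
        }
    }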
The dependencyResolutionManagement repositories block accepts the same notations as in a project. This includes Maven or Ivy repositories, with or without credentials, etc.
By default, repositories declared by a project in build.gradle(.kts) will override whatever is declared in settings.gradle(.kts):
Example 21. Preferring project repositories
PREFER_PROJECT
Any repository declared on a project will cause the project to use the repositories declared by the project, ignoring those declared in settings.
Useful when teams need to use different repositories not common among subprojects.
PREFER_SETTINGS
Any repository declared directly in a project, either directly or via a plugin, will be ignored.
Useful for enforcing that large teams use only approved repositories, without failing the build when a project or plugin declares a repository.
FAIL_ON_PROJECT_REPOS
Any repository declared directly in a project, either directly or via a plugin, will trigger a build error.
Useful for enforcing that large teams use only approved repositories.
You can change the behavior to prefer the repositories in the settings.gradle(.kts) file by using repositoriesMode:
Example 22. Preferring settings repositories
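A minimal Kotlin DSL sketch, mirroring the FAIL_ON_PROJECT_REPOS example below:

    dependencyResolutionManagement {
        repositoriesMode = RepositoriesMode.PREFER_SETTINGS
    }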
You can force Gradle to fail the build if you want to enforce that only settings repositories are used:
Example 23. Enforcing settings repositories
The declaration is identical in both DSLs:

    dependencyResolutionManagement {
        repositoriesMode = RepositoriesMode.FAIL_ON_PROJECT_REPOS
    }
Supported repository transport protocols
Maven and Ivy repositories support the use of various transport protocols. At the moment the following protocols are supported:
Table 2. Repository transport protocols
Username and password should never be checked in plain text into version control as part of your build file.
You can store the credentials in a local gradle.properties
file and use one of the open source Gradle plugins for encrypting and consuming credentials e.g. the credentials plugin.
The transport protocol is part of the URL definition for a repository.
The following build script demonstrates how to create HTTP-based Maven and Ivy repositories:
Example 24. Declaring a Maven and Ivy repository
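A minimal Kotlin DSL sketch with placeholder URLs:

    repositories {
        maven {
            url = uri("http://repo.mycompany.com/maven2")
        }
        ivy {
            url = uri("http://repo.mycompany.com/repo")
        }
    }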
For details on HTTP related authentication, see the section HTTP(S) authentication schemes configuration.
When using an AWS S3 backed repository you need to authenticate using AwsCredentials, providing an access key and a secret key. The following example shows how to declare an S3 backed repository and provide AWS credentials:
Example 26. Declaring an S3 backed Maven and Ivy repository
    maven {
        url = uri("s3://myCompanyBucket/maven2")
        credentials(AwsCredentials::class) {
            accessKey = "someKey"
            secretKey = "someSecret"
            // optional
            sessionToken = "someSTSToken"
        }
    }
    ivy {
        url = uri("s3://myCompanyBucket/ivyrepo")
        credentials(AwsCredentials::class) {
            accessKey = "someKey"
            secretKey = "someSecret"
            // optional
            sessionToken = "someSTSToken"
        }
    }
You can also delegate all credentials to the AWS sdk by using the AwsImAuthentication. The following example shows how:
Kotlin:
    maven {
        url = uri("s3://myCompanyBucket/maven2")
        authentication {
            create<AwsImAuthentication>("awsIm") // load from EC2 role or env var
        }
    }
    ivy {
        url = uri("s3://myCompanyBucket/ivyrepo")
        authentication {
            create<AwsImAuthentication>("awsIm")
        }
    }

Groovy:
    maven {
        url "s3://myCompanyBucket/maven2"
        authentication {
            awsIm(AwsImAuthentication) // load from EC2 role or env var
        }
    }
    ivy {
        url "s3://myCompanyBucket/ivyrepo"
        authentication {
            awsIm(AwsImAuthentication)
        }
    }
For details on AWS S3 related authentication, see the section AWS S3 repositories configuration.
When using a Google Cloud Storage backed repository default application credentials will be used with no further configuration required:
Example 28. Declaring a Google Cloud Storage backed Maven and Ivy repository using default application credentials
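A minimal Kotlin DSL sketch; the bucket name is a placeholder borrowed from the S3 examples above:

    repositories {
        maven {
            url = uri("gcs://myCompanyBucket/maven2")
        }
        ivy {
            url = uri("gcs://myCompanyBucket/ivyrepo")
        }
    }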
HTTP(S) authentication schemes configuration
When configuring a repository using HTTP or HTTPS transport protocols, multiple authentication schemes are available. By default, Gradle will attempt to use all schemes that are supported by the Apache HttpClient library, documented here. In some cases, it may be preferable to explicitly specify which authentication schemes should be used when exchanging credentials with a remote server. When explicitly declared, only those schemes are used when authenticating to a remote repository.
You can specify credentials for Maven repositories secured by basic authentication using PasswordCredentials.
Example 29. Accessing password-protected Maven repository
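A Kotlin DSL sketch with placeholder URL and credentials (in practice these should be externalized, as described under Handling credentials below):

    repositories {
        maven {
            url = uri("https://repo.mycompany.com/maven2")
            credentials {
                username = "user"
                password = "password"
            }
        }
    }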
The following example shows how to configure a repository to use only DigestAuthentication:
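A Kotlin DSL sketch (URL and credentials are placeholders):

    repositories {
        maven {
            url = uri("https://repo.mycompany.com/maven2")
            credentials {
                username = "user"
                password = "password"
            }
            authentication {
                create<DigestAuthentication>("digest")
            }
        }
    }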
BasicAuthentication
Basic access authentication over HTTP. When using this scheme, credentials are sent preemptively.
DigestAuthentication
Digest access authentication over HTTP.
HttpHeaderAuthentication
Authentication based on any custom HTTP header, e.g. private tokens, OAuth tokens, etc.
Using preemptive authentication
Gradle’s default behavior is to only submit credentials when a server responds with an authentication challenge in the form of an HTTP 401 response.
In some cases, the server will respond with a different code (ex. for repositories hosted on GitHub a 404 is returned) causing dependency resolution to fail.
To get around this behavior, credentials may be sent to the server preemptively.
To enable preemptive authentication simply configure your repository to explicitly use the BasicAuthentication scheme:
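A Kotlin DSL sketch (URL and credentials are placeholders):

    repositories {
        maven {
            url = uri("https://repo.mycompany.com/maven2")
            credentials {
                username = "user"
                password = "password"
            }
            authentication {
                create<BasicAuthentication>("basic")
            }
        }
    }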
Using HTTP header authentication
You can specify any HTTP header for secured Maven repositories requiring token, OAuth2 or other HTTP header based authentication using HttpHeaderCredentials with HttpHeaderAuthentication.
Example 32. Accessing header-protected Maven repository
Kotlin:
    maven {
        url = uri("http://repo.mycompany.com/maven2")
        credentials(HttpHeaderCredentials::class) {
            name = "Private-Token"
            value = "TOKEN"
        }
        authentication {
            create<HttpHeaderAuthentication>("header")
        }
    }

Groovy:
    maven {
        url "http://repo.mycompany.com/maven2"
        credentials(HttpHeaderCredentials) {
            name = "Private-Token"
            value = "TOKEN"
        }
        authentication {
            header(HttpHeaderAuthentication)
        }
    }
org.gradle.s3.endpoint
Used to override the AWS S3 endpoint when using a non-AWS, S3-API-compatible storage service.
org.gradle.s3.maxErrorRetry
Specifies the maximum number of times to retry a request in the event that the S3 server responds with an HTTP 5xx status code. When not specified, a default value of 3 is used.
When a region-specific endpoint is not specified for buckets requiring V4 Signatures, Gradle will use the default AWS region (us-east-1) and the following warning will appear on the console:
Attempting to re-send the request to .... with AWS V4 authentication. To avoid this warning in the future, use region-specific endpoint to access buckets located in regions that require V4 signing.
Failing to specify the region-specific endpoint for buckets requiring V4 signatures means:
AWS S3 Cross Account Access
Some organizations may have multiple AWS accounts, e.g. one for each team. The AWS account of the bucket owner is often different from the artifact publisher and consumers. The bucket owner needs to be able to grant the consumers access, otherwise the artifacts will only be usable by the publisher’s account. This is done by adding the bucket-owner-full-control canned ACL to the uploaded objects. Gradle will do this in every upload. Make sure the publisher has the required IAM permission, PutObjectAcl (and PutObjectVersionAcl if bucket versioning is enabled), either directly or via an assumed IAM Role (depending on your case). You can read more at AWS S3 Access Permissions.
org.gradle.gcs.endpoint
Used to override the Google Cloud Storage endpoint when using a non-Google Cloud Platform, Google Cloud Storage API compatible, storage service.
org.gradle.gcs.servicePath
Used to override the Google Cloud Storage root service path which the Google Cloud Storage client builds requests from; defaults to /.
GCS URL formats
Google Cloud Storage URLs are 'virtual-hosted-style' and must be in the following format: gcs://<bucketName>/<objectKey>, e.g. gcs://myBucket/maven/release.
Handling credentials
Repository credentials should never be part of your build script but rather be kept external.
Gradle provides an API in artifact repositories that allows you to declare only the type of required credentials. Credential values are looked up from the Gradle Properties during the build that requires them.
For example, given repository configuration:
Example 33. Externalized repository credentials
Kotlin:
    maven {
        name = "mySecureRepository"
        credentials(PasswordCredentials::class)
        // url = uri(<<some repository url>>)
    }

Groovy:
    maven {
        name = 'mySecureRepository'
        credentials(PasswordCredentials)
        // url = uri(<<some repository url>>)
    }
The username and password will be looked up from the mySecureRepositoryUsername and mySecureRepositoryPassword properties.
Note that the configuration property prefix - the identity - is determined from the repository name. Credentials can then be provided in any of the ways supported for Gradle Properties - the gradle.properties file, command line arguments, environment variables or a combination of those options.
Also, note that credentials will only be required if the invoked build requires them. If, for example, a project is configured to publish artifacts to a secured repository, but the build does not invoke the publishing task, Gradle will not require publishing credentials to be present. On the other hand, if the build needs to execute a task that requires credentials at some point, Gradle will check for credential presence up front and will not start running any tasks if it knows that the build will fail at a later point because of missing credentials.
Here is a downloadable sample that demonstrates the concept in more detail.
Lookup is only supported for credentials listed in Table 3.
Table 3. Credentials that support value lookup and their corresponding properties
Before looking at dependency declarations themselves, the concept of dependency configuration needs to be defined.
What are dependency configurations
Every dependency declared for a Gradle project applies to a specific scope.
For example some dependencies should be used for compiling source code whereas others only need to be available at runtime.
Gradle represents the scope of a dependency with the help of a Configuration.
Every configuration can be identified by a unique name.
Many Gradle plugins add pre-defined configurations to your project.
The Java plugin, for example, adds configurations to represent the various classpaths it needs for source code compilation, executing tests and the like.
See the Java plugin chapter for an example.
For more examples on the usage of configurations to navigate, inspect and post-process metadata and artifacts of assigned dependencies, have a look at the resolution result APIs.
Configuration inheritance and composition
A configuration can extend other configurations to form an inheritance hierarchy.
Child configurations inherit the whole set of dependencies declared for any of their superconfigurations.
Configuration inheritance is heavily used by Gradle core plugins like the Java plugin.
For example, the testImplementation configuration extends the implementation configuration. The configuration hierarchy has a practical purpose: compiling tests requires the dependencies of the source code under test on top of the dependencies needed to write the test class. A Java project that uses JUnit to write and execute test code also needs Guava if its classes are imported in the production source code. Under the covers, the testImplementation and implementation configurations form an inheritance hierarchy by calling the method Configuration.extendsFrom(org.gradle.api.artifacts.Configuration[]).
A configuration can extend any other configuration irrespective of its definition in the build script or a plugin.
Let’s say you wanted to write a suite of smoke tests. Each smoke test makes an HTTP call to verify a web service endpoint. As the underlying test framework the project already uses JUnit. You can define a new configuration named smokeTest that extends from the testImplementation configuration to reuse the existing test framework dependency.
Kotlin:
    val smokeTest by configurations.creating {
        extendsFrom(configurations.testImplementation.get())
    }

    dependencies {
        testImplementation("junit:junit:4.13")
        smokeTest("org.apache.httpcomponents:httpclient:4.5.5")
    }

Groovy:
    configurations {
        // mirror of the Kotlin declaration above
        smokeTest.extendsFrom testImplementation
    }

    dependencies {
        testImplementation 'junit:junit:4.13'
        smokeTest 'org.apache.httpcomponents:httpclient:4.5.5'
    }
Resolvable and consumable configurations
Configurations are a fundamental part of dependency resolution in Gradle.
In the context of dependency resolution, it is useful to distinguish between a consumer and a producer. Along these lines, configurations have at least 3 different roles:
- to declare dependencies
- as a consumer, to resolve a set of dependencies to files
- as a producer, to expose artifacts and their dependencies for consumption by other projects (such consumable configurations usually represent the variants the producer offers to its consumers)
For example, to express that an application app depends on library lib, at least one configuration is required:
Example 35. Configurations are used to declare dependencies
Kotlin:
    // declare a "configuration" named "someConfiguration"
    val someConfiguration by configurations.creating

    dependencies {
        // add a project dependency to the "someConfiguration" configuration
        someConfiguration(project(":lib"))
    }

Groovy:
    configurations {
        // declare a "configuration" named "someConfiguration"
        someConfiguration
    }

    dependencies {
        // add a project dependency to the "someConfiguration" configuration
        someConfiguration project(":lib")
    }
Configurations can inherit dependencies from other configurations by extending from them.
Now, notice that the code above doesn’t tell us anything about the intended consumer of this configuration.
In particular, it doesn’t tell us how the configuration is meant to be used.
Let’s say that lib is a Java library: it might expose different things, such as its API, implementation, or test fixtures. It might be necessary to change how we resolve the dependencies of app depending upon the task we’re performing (compiling against the API of lib, executing the application, compiling tests, etc.).
To address this problem, you’ll often find companion configurations, which are meant to unambiguously declare the usage:
Kotlin:
    configurations {
        // declare a configuration that is going to resolve the compile classpath of the application
        compileClasspath {
            extendsFrom(someConfiguration)
        }
        // declare a configuration that is going to resolve the runtime classpath of the application
        runtimeClasspath {
            extendsFrom(someConfiguration)
        }
    }

Groovy:
    configurations {
        // declare a configuration that is going to resolve the compile classpath of the application
        compileClasspath.extendsFrom(someConfiguration)
        // declare a configuration that is going to resolve the runtime classpath of the application
        runtimeClasspath.extendsFrom(someConfiguration)
    }
someConfiguration declares the dependencies of my application. It is simply a collection of dependencies. compileClasspath and runtimeClasspath are configurations meant to be resolved: when resolved they should contain the compile classpath and the runtime classpath of the application respectively. This distinction is represented by the canBeResolved flag in the Configuration type.
A configuration that can be resolved is a configuration for which we can compute a dependency graph, because it contains all the necessary information for resolution to happen.
That is to say we’re going to compute a dependency graph, resolve the components in the graph, and eventually get artifacts.
A configuration which has canBeResolved set to false is not meant to be resolved. Such a configuration is there only to declare dependencies. The reason is that, depending on the usage (compile classpath, runtime classpath), it can resolve to different graphs. It is an error to try to resolve a configuration which has canBeResolved set to false. To some extent, this is similar to an abstract class (canBeResolved = false) which is not supposed to be instantiated, and a concrete class extending the abstract class (canBeResolved = true). A resolvable configuration will extend at least one non-resolvable configuration (and may extend more than one).
On the other end, at the library project side (the producer), we also use configurations to represent what can be consumed.
For example, the library may expose an API or a runtime, and we would attach artifacts to either one, the other, or both.
Typically, to compile against lib, we need the API of lib, but we don’t need its runtime dependencies. So the lib project will expose an apiElements configuration, which is aimed at consumers looking for its API. Such a configuration is consumable, but is not meant to be resolved. This is expressed via the canBeConsumed flag of a Configuration:
Example 37. Setting up configurations
Kotlin:
    configurations {
        // A configuration meant for consumers that need the API of this component
        create("exposedApi") {
            // This configuration is an "outgoing" configuration, it's not meant to be resolved
            isCanBeResolved = false
            // As an outgoing configuration, explain that consumers may want to consume it
            assert(isCanBeConsumed)
        }
        // A configuration meant for consumers that need the implementation of this component
        create("exposedRuntime") {
            isCanBeResolved = false
            assert(isCanBeConsumed)
        }
    }

Groovy:
    configurations {
        // A configuration meant for consumers that need the API of this component
        exposedApi {
            // This configuration is an "outgoing" configuration, it's not meant to be resolved
            canBeResolved = false
            // As an outgoing configuration, explain that consumers may want to consume it
            assert canBeConsumed
        }
        // A configuration meant for consumers that need the implementation of this component
        exposedRuntime {
            canBeResolved = false
            assert canBeConsumed
        }
    }
In short, a configuration’s role is determined by the canBeResolved and canBeConsumed flag combinations:
Table 4. Configuration roles
Choosing the right configuration for dependencies
The choice of the configuration where you declare a dependency is important. However, there is no fixed rule for which configuration a dependency must go into. It mostly depends on the way the configurations are organised, which is most often a property of the applied plugin(s). For example, in the java plugin, the created configurations are documented and should serve as the basis for determining where to declare a dependency, based on its role for your code.
As a recommendation, plugins should clearly document the way their configurations are linked together and should strive as much as possible to isolate their roles.
Deprecated configurations
Configurations are intended to be used for a single role: declaring dependencies, performing resolution, or defining consumable variants.
In the past, some configurations did not define which role they were intended to be used for.
A deprecation warning is emitted when a configuration is used in a way that was not intended.
To fix the deprecation, you will need to stop using the configuration in the deprecated role.
The exact changes required depend on how the configuration is used and if there are alternative configurations that should be used instead.
Defining custom configurations
You can define configurations yourself, so-called custom configurations.
A custom configuration is useful for separating the scope of dependencies needed for a dedicated purpose.
Let’s say you wanted to declare a dependency on the Jasper Ant task for the purpose of pre-compiling JSP files that should not end up in the classpath for compiling your source code.
It’s fairly simple to achieve that goal by introducing a custom configuration and using it in a task.
Example 38. Declaring and using a custom configuration
Kotlin:
    tasks.register("preCompileJsps") {
        // "jasper" is a custom configuration declared elsewhere in the build script
        val jasperClasspath = jasper.asPath
        val projectLayout = layout
        doLast {
            ant.withGroovyBuilder {
                "taskdef"("classname" to "org.apache.jasper.JspC",
                          "name" to "jasper",
                          "classpath" to jasperClasspath)
                "jasper"("validateXml" to false,
                         "uriroot" to projectLayout.projectDirectory.file("src/main/webapp").asFile,
                         "outputDir" to projectLayout.buildDirectory.file("compiled-jsps").get().asFile)
            }
        }
    }

Groovy:
    tasks.register('preCompileJsps') {
        def jasperClasspath = configurations.jasper.asPath
        def projectLayout = layout
        doLast {
            ant.taskdef(classname: 'org.apache.jasper.JspC',
                        name: 'jasper',
                        classpath: jasperClasspath)
            ant.jasper(validateXml: false,
                       uriroot: projectLayout.projectDirectory.file('src/main/webapp').asFile,
                       outputDir: projectLayout.buildDirectory.file("compiled-jsps").get().asFile)
        }
    }
You can manage project configurations with a configurations object. Configurations have a name and can extend each other.
To learn more about this API have a look at ConfigurationContainer.
Different kinds of dependencies
Module dependencies
Module dependencies are the most common dependencies. They refer to a module in a repository.
Example 39. Module dependencies
Kotlin:
    dependencies {
        runtimeOnly(group = "org.springframework", name = "spring-core", version = "2.5")
        runtimeOnly("org.springframework:spring-aop:2.5")
        runtimeOnly("org.hibernate:hibernate:3.0.5") {
            isTransitive = true
        }
        runtimeOnly(group = "org.hibernate", name = "hibernate", version = "3.0.5") {
            isTransitive = true
        }
    }

Groovy:
    dependencies {
        runtimeOnly group: 'org.springframework', name: 'spring-core', version: '2.5'
        runtimeOnly 'org.springframework:spring-core:2.5',
                'org.springframework:spring-aop:2.5'
        runtimeOnly(
            [group: 'org.springframework', name: 'spring-core', version: '2.5'],
            [group: 'org.springframework', name: 'spring-aop', version: '2.5']
        )
        runtimeOnly('org.hibernate:hibernate:3.0.5') {
            transitive = true
        }
        runtimeOnly group: 'org.hibernate', name: 'hibernate', version: '3.0.5', transitive: true
        runtimeOnly(group: 'org.hibernate', name: 'hibernate', version: '3.0.5') {
            transitive = true
        }
    }
See the DependencyHandler class in the API documentation for more examples and a complete reference.
Gradle provides different notations for module dependencies. There is a string notation and a map notation. A module dependency has an API which allows further configuration. Have a look at ExternalModuleDependency to learn all about the API. This API provides properties and configuration methods. Via the string notation you can define a subset of the properties. With the map notation you can define all properties. To have access to the complete API, either with the map or with the string notation, you can assign a single dependency to a configuration together with a closure.
If you declare a module dependency, Gradle looks for a module metadata file (.module, .pom or ivy.xml) in the repositories. If such a module metadata file exists, it is parsed and the artifacts of this module (e.g. hibernate-3.0.5.jar) as well as its dependencies (e.g. cglib) are downloaded. If no such module metadata file exists, as of Gradle 6.0, you need to configure metadata sources definitions to look for an artifact file called hibernate-3.0.5.jar directly.
File dependencies
Projects sometimes do not rely on a binary repository product e.g. JFrog Artifactory or Sonatype Nexus for hosting and resolving external dependencies.
It’s common practice to host those dependencies on a shared drive or check them into version control alongside the project source code.
Those dependencies are referred to as file dependencies, the reason being that they represent a file without any metadata (like information about transitive dependencies, the origin or its author) attached to them.
The following example resolves file dependencies from the directories ant, libs and tools.
Example 40. Declaring multiple file dependencies
Kotlin:
    dependencies {
        "antContrib"(files("ant/antcontrib.jar"))
        "externalLibs"(files("libs/commons-lang.jar", "libs/log4j.jar"))
        "deploymentTools"(fileTree("tools") { include("*.exe") })
    }

Groovy:
    dependencies {
        antContrib files('ant/antcontrib.jar')
        externalLibs files('libs/commons-lang.jar', 'libs/log4j.jar')
        deploymentTools(fileTree('tools') { include '*.exe' })
    }
As you can see in the code example, every dependency has to define its exact location in the file system.
The most prominent methods for creating a file reference are Project.files(java.lang.Object…), ProjectLayout.files(java.lang.Object…) and Project.fileTree(java.lang.Object).
Alternatively, you can also define the source directory of one or many file dependencies in the form of a flat directory repository.
The order of the files in a FileTree is not stable, even on a single computer. It means that a dependency configuration seeded with such a construct may produce a resolution result which has a different ordering, possibly impacting the cacheability of tasks using the result as an input. Using the simpler files instead is recommended where possible.
File dependencies allow you to directly add a set of files to a configuration, without first adding them to a repository. This can be useful if you cannot, or do not want to, place certain files in a repository. Or if you do not want to use any repositories at all for storing your dependencies.
To add some files as a dependency for a configuration, you simply pass a file collection as a dependency:
Example 41. File dependencies
Kotlin:
    dependencies {
        runtimeOnly(files("libs/a.jar", "libs/b.jar"))
        runtimeOnly(fileTree("libs") { include("*.jar") })
    }

Groovy:
    dependencies {
        runtimeOnly files('libs/a.jar', 'libs/b.jar')
        runtimeOnly fileTree('libs') { include '*.jar' }
    }
File dependencies are not included in the published dependency descriptor for your project.
However, file dependencies are included in transitive project dependencies within the same build.
This means they cannot be used outside the current build, but they can be used within the same build.
You can declare which tasks produce the files for a file dependency.
You might do this when, for example, the files are generated by the build.
Example 42. Generated file dependencies
Kotlin:
    dependencies {
        implementation(files(layout.buildDirectory.dir("classes")) {
            builtBy("compile")
        })
    }

    tasks.register("compile") {
        doLast {
            println("compiling classes")
        }
    }

    tasks.register("list") {
        val compileClasspath: FileCollection = configurations["compileClasspath"]
        dependsOn(compileClasspath)
        doLast {
            println("classpath = ${compileClasspath.map { file: File -> file.name }}")
        }
    }

Groovy:
    dependencies {
        implementation files(layout.buildDirectory.dir('classes')) {
            builtBy 'compile'
        }
    }

    tasks.register('compile') {
        doLast {
            println 'compiling classes'
        }
    }

    tasks.register('list') {
        FileCollection compileClasspath = configurations.compileClasspath
        dependsOn compileClasspath
        doLast {
            println "classpath = ${compileClasspath.collect { File file -> file.name }}"
        }
    }
Versioning of file dependencies
It is recommended to clearly express the intention and a concrete version for file dependencies.
File dependencies are not considered by Gradle’s version conflict resolution.
Therefore, it is extremely important to assign a version to the file name to indicate the distinct set of changes shipped with it.
For example, commons-beanutils-1.3.jar lets you track the changes of the library by the release notes.
As a result, the dependencies of the project are easier to maintain and organize.
It is much easier to uncover potential API incompatibilities by the assigned version.
Project dependencies
Software projects often break up software components into modules to improve maintainability and prevent strong coupling.
Modules can define dependencies between each other to reuse code within the same project.
Gradle can model dependencies between modules.
Those dependencies are called project dependencies because each module is represented by a Gradle project.
Example 43. Project dependencies
At runtime, the build automatically ensures that project dependencies are built in the correct order and added to the classpath for compilation.
The chapter Authoring Multi-Project Builds discusses how to set up and configure multi-project builds in more detail.
For more information see the API documentation for ProjectDependency.
The following example declares dependencies on the utils and api projects from the web-service project. The method Project.project(java.lang.String) creates a reference to a specific subproject by path.
Example 44. Declaring project dependencies
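A Kotlin DSL sketch of these declarations:

    dependencies {
        implementation(project(":utils"))
        implementation(project(":api"))
    }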
Type-safe project accessors are an incubating feature which must be enabled explicitly.
Implementation may change at any time.
To add support for type-safe project accessors, add this to your settings.gradle(.kts) file:
enableFeaturePreview("TYPESAFE_PROJECT_ACCESSORS")
One issue with the project(":some:path") notation is that you have to remember the path to every project you want to depend on.
In addition, changing a project path requires you to change all places where the project dependency is used, but it is easy to miss one or more occurrences (because you have to rely on search and replace).
Since Gradle 7, Gradle offers an experimental type-safe API for project dependencies.
The same example as above can now be rewritten as:
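A Kotlin DSL sketch using the type-safe accessors for the same utils and api projects:

    dependencies {
        implementation(projects.utils)
        implementation(projects.api)
    }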
The type-safe API has the advantage of providing IDE completion so you don’t need to figure out the actual names of the projects.
If you add or remove a project that uses the Kotlin DSL, build script compilation fails if you forget to update a dependency.
The project accessors are mapped from the project path. For example, if a project path is :commons:utils:some:lib, then the project accessor will be projects.commons.utils.some.lib (which is the short-hand notation for projects.getCommons().getUtils().getSome().getLib()). A project name with kebab case (some-lib) or snake case (some_lib) will be converted to camel case in accessors: projects.someLib.
Local forks of module dependencies
A module dependency can be substituted by a dependency to a local fork of the sources of that module, if the module itself is built with Gradle.
This can be done by utilising composite builds.
This allows you, for example, to fix an issue in a library you use in an application by using, and building, a locally patched version instead of the published binary version.
The details of this are described in the section on composite builds.
Gradle distribution-specific dependencies
Gradle API dependency
You can declare a dependency on the API of the current version of Gradle by using the DependencyHandler.gradleApi() method. This is useful when you are developing custom Gradle tasks or plugins.
Example 46. Gradle API dependencies
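A minimal Kotlin DSL sketch:

    dependencies {
        implementation(gradleApi())
    }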
Gradle TestKit dependency
You can declare a dependency on the TestKit API of the current version of Gradle by using the DependencyHandler.gradleTestKit() method. This is useful for writing and executing functional tests for Gradle plugins and build scripts.
Example 47. Gradle TestKit dependencies
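A minimal Kotlin DSL sketch:

    dependencies {
        testImplementation(gradleTestKit())
    }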
Local Groovy dependency
You can declare a dependency on the Groovy that is distributed with Gradle by using the DependencyHandler.localGroovy() method. This is useful when you are developing custom Gradle tasks or plugins in Groovy.
Example 48. Gradle’s Groovy dependencies
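A minimal Kotlin DSL sketch:

    dependencies {
        implementation(localGroovy())
    }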
Documenting dependencies
When you declare a dependency or a dependency constraint, you can provide a custom reason for the declaration.
This makes the dependency declarations in your build script and the dependency insight report easier to interpret.
Kotlin:
    dependencies {
        implementation("org.ow2.asm:asm:7.1") {
            because("we require a JDK 9 compatible bytecode generator")
        }
    }

Groovy:
    dependencies {
        implementation('org.ow2.asm:asm:7.1') {
            because 'we require a JDK 9 compatible bytecode generator'
        }
    }
Example: Using the dependency insight report with custom reasons
Output of gradle -q dependencyInsight --dependency asm
> gradle -q dependencyInsight --dependency asm
org.ow2.asm:asm:7.1
Variant compile:
| Attribute Name | Provided | Requested |
|--------------------------------|----------|--------------|
| org.gradle.status | release | |
| org.gradle.category | library | library |
| org.gradle.libraryelements | jar | classes |
| org.gradle.usage | java-api | java-api |
| org.gradle.dependency.bundling | | external |
| org.gradle.jvm.environment | | standard-jvm |
| org.gradle.jvm.version | | 11 |
Selection reasons:
- Was requested: we require a JDK 9 compatible bytecode generator
org.ow2.asm:asm:7.1
\--- compileClasspath
A web-based, searchable dependency report is available by adding the --scan option.
Resolving specific artifacts from a module dependency
Whenever Gradle tries to resolve a module from a Maven or Ivy repository, it looks for a metadata file and the default artifact file, a JAR. The build fails if none of these artifact files can be resolved. Under certain conditions, you might want to tweak the way Gradle resolves artifacts for a dependency.
- The dependency only provides a non-standard artifact without any metadata, e.g. a ZIP file.
- The module metadata declares more than one artifact, e.g. as part of an Ivy dependency descriptor.
- You only want to download a specific artifact without any of the transitive dependencies declared in the metadata.
Gradle is a polyglot build tool and not limited to just resolving Java libraries. Let’s assume you wanted to build a web application using JavaScript as the client technology. Most projects check in external JavaScript libraries into version control. An external JavaScript library is no different than a reusable Java library so why not download it from a repository instead?
Google Hosted Libraries is a distribution platform for popular, open-source JavaScript libraries. With the help of the artifact-only notation you can download a JavaScript library file, e.g. JQuery. The @ character separates the dependency’s coordinates from the artifact’s file extension.
Kotlin:
    repositories {
        ivy {
            url = uri("https://ajax.googleapis.com/ajax/libs")
            patternLayout {
                artifact("[organization]/[revision]/[module].[ext]")
            }
            metadataSources {
                artifact()
            }
        }
    }

    configurations {
        create("js")
    }

    dependencies {
        "js"("jquery:jquery:3.2.1@js")
    }

Groovy:
    repositories {
        ivy {
            url 'https://ajax.googleapis.com/ajax/libs'
            patternLayout {
                artifact '[organization]/[revision]/[module].[ext]'
            }
            metadataSources {
                artifact()
            }
        }
    }

    configurations {
        js
    }

    dependencies {
        js 'jquery:jquery:3.2.1@js'
    }
Some modules ship different "flavors" of the same artifact or they publish multiple artifacts that belong to a specific module version but have a different purpose. It’s common for a Java library to publish the artifact with the compiled class files, another one with just the source code in it and a third one containing the Javadocs.
In JavaScript, a library may exist as uncompressed or minified artifact. In Gradle, a specific artifact identifier is called classifier, a term generally used in Maven and Ivy dependency management.
Let’s say we wanted to download the minified artifact of the JQuery library instead of the uncompressed file. You can provide the classifier min as part of the dependency declaration.
Kotlin:
    repositories {
        ivy {
            url = uri("https://ajax.googleapis.com/ajax/libs")
            patternLayout {
                artifact("[organization]/[revision]/[module](.[classifier]).[ext]")
            }
            metadataSources {
                artifact()
            }
        }
    }

    configurations {
        create("js")
    }

    dependencies {
        "js"("jquery:jquery:3.2.1:min@js")
    }

Groovy:
    repositories {
        ivy {
            url 'https://ajax.googleapis.com/ajax/libs'
            patternLayout {
                artifact '[organization]/[revision]/[module](.[classifier]).[ext]'
            }
            metadataSources {
                artifact()
            }
        }
    }

    configurations {
        js
    }

    dependencies {
        js 'jquery:jquery:3.2.1:min@js'
    }
Supported Metadata formats
External module dependencies require module metadata (so that, typically, Gradle can figure out the transitive dependencies of a module).
To do so, Gradle supports different metadata formats.
You can also tweak which format will be looked up in the repository definition.
Gradle Module Metadata files
Gradle Module Metadata has been specifically designed to support all features of Gradle’s dependency management model and is hence the preferred format.
You can find its specification here.
POM files
Gradle natively supports Maven POM files.
It’s worth noting that by default Gradle will first look for a POM file, but if this file contains a special marker, Gradle will use Gradle Module Metadata instead.
Ivy files
Similarly, Gradle supports Apache Ivy metadata files.
Again, Gradle will first look for an ivy.xml file, but if this file contains a special marker, Gradle will use Gradle Module Metadata instead.
Producers vs consumers
A key concept in dependency management with Gradle is the difference between consumers and producers.
When you build a library, you are effectively on the producer side: you are producing artifacts which are going to be consumed by someone else, the consumer.
A big problem with traditional build systems is that they don’t make the difference between a producer and a consumer. A consumer needs to be understood in the large sense: for example, a project that depends on another project is a consumer, and so is a configuration that resolves a set of dependencies.
Producer variants
A producer may want to generate different artifacts for different kinds of consumers: for the same source code, different binaries are produced.
Or, a project may produce artifacts which are for consumption by other projects (same repository) but not for external use.
A typical example in the Java world is the Guava library which is published in different versions: one for Java projects, and one for Android projects.
However, it’s the consumer’s responsibility to tell what version to use, and it’s the dependency management engine’s responsibility to ensure consistency of the graph (for example making sure that you don’t end up with both Java and Android versions of Guava on your classpath).
This is where the variant model of Gradle comes into play.
In Gradle, producer variants are exposed via consumable configurations.
Strong encapsulation
In order for a producer to compile a library, it needs all its implementation dependencies on the compile classpath.
There are dependencies which are only required as an implementation detail of the library and there are libraries which are effectively part of the API.
However, a library depending on this produced library only needs to "see" the public API of your library and therefore the dependencies of this API.
It’s a subset of the compile classpath of the producer: this is strong encapsulation of dependencies.
The consequence is that a dependency which is assigned to the implementation configuration of a library does not end up on the compile classpath of the consumer. On the other hand, a dependency which is assigned to the api configuration of a library would end up on the compile classpath of the consumer.
At runtime, however, all dependencies are required.
Gradle makes the difference between different kinds of consumer even within a single project: the Java compile task, for example, is a different consumer than the Java exec task.
More details on the segregation of API and runtime dependencies in the Java world can be found here.
Being respectful of consumers
Whenever, as a developer, you decide to include a dependency, you must understand that there are consequences for your consumers.
For example, if you add a dependency to your project, it becomes a transitive dependency of your consumers, and therefore may participate in conflict resolution if the consumer needs a different version.
A lot of the problems Gradle handles are about fixing the mismatch between the expectations of a consumer and a producer.
However, some projects are easier than others:
if you are at the end of the consumption chain, that is to say you build an application, then there are effectively no consumers of your project (apart from final customers): adding exclusions will have no other consequence than fixing your problem.
however, if you are a library, adding exclusions may prevent consumers from working properly, because they would exercise a path of the code that you don’t exercise yourself.
Always keep in mind that the solution you choose to fix a problem can "leak" to your consumers.
This documentation aims at guiding you to find the right solution to the right problem, and more importantly, make decisions which help the resolution engine to take the right decisions in case of conflicts.
Gradle provides tooling to navigate dependency graphs and mitigate dependency hell.
Users can render the full graph of dependencies as well as identify the selection reason and origin for a dependency.
Dependencies can originate through build script declared dependencies or transitive dependencies.
You can visualize dependencies with:
List Project Dependencies
Gradle provides the built-in dependencies task to render a dependency tree from the command line. By default, the dependency tree renders dependencies for all configurations within a single project. The dependency tree indicates the selected version of each dependency. It also displays information about dependency conflict resolution. The dependencies task can be especially helpful for issues related to transitive dependencies. Your build file lists direct dependencies, but the dependencies task can help you understand which transitive dependencies resolve during your build.
(*): Indicates repeated occurrences of a transitive dependency subtree. Gradle expands transitive dependency subtrees only once per project; repeat occurrences only display the root of the subtree, followed by this annotation.
(c): This element is a dependency constraint, not a dependency. Look for the matching dependency elsewhere in the tree.
(n): A dependency or dependency configuration that cannot be resolved.
Specify a Dependency Configuration
To focus on the information about one dependency configuration, provide the optional parameter --configuration.
Just like project and task names, Gradle accepts abbreviated names to select a dependency configuration.
For example, you can specify tRC instead of testRuntimeClasspath if the abbreviation matches a single dependency configuration.
Both of the following examples show dependencies in the testRuntimeClasspath
dependency configuration of a Java project:
> gradle -q dependencies --configuration testRuntimeClasspath
To see a list of all the configurations available in a project, including those added by any plugins, you can run a resolvableConfigurations
report.
For more info, see that plugin’s documentation (for instance, the Java Plugin is documented here).
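For example (the resolvableConfigurations task name assumes a recent Gradle version; output elided):

> gradle -q resolvableConfigurations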
Example
Consider a project that uses the JGit library to execute Source Control Management (SCM) operations for a release process.
You can declare dependencies for external tooling with the help of a custom dependency configuration.
This avoids polluting other contexts, such as the compilation classpath for your production source code.
The following example declares a custom dependency configuration named "scm" that contains the JGit dependency:
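The declaration itself is not reproduced here; a minimal Kotlin DSL sketch consistent with the report below would be:

configurations {
    create("scm")
}

dependencies {
    "scm"("org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r")
}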
> gradle -q dependencies --configuration scm
------------------------------------------------------------
Root project 'dependencies-report'
------------------------------------------------------------
\--- org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r
+--- com.jcraft:jsch:0.1.54
+--- com.googlecode.javaewah:JavaEWAH:1.1.6
+--- org.apache.httpcomponents:httpclient:4.3.6
| +--- org.apache.httpcomponents:httpcore:4.3.3
| +--- commons-logging:commons-logging:1.1.3
| \--- commons-codec:commons-codec:1.6
\--- org.slf4j:slf4j-api:1.7.2
A web-based, searchable dependency report is available by adding the --scan option.
Identify the Dependency Version Selected
A project may request two different versions of the same dependency either directly or transitively.
Gradle applies version conflict resolution to ensure that only one version of the dependency exists in the dependency graph.
The following example introduces a conflict with commons-codec:commons-codec
, added both as a direct dependency and a transitive dependency of JGit:
dependencies {
    "scm"("org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r")
    "scm"("commons-codec:commons-codec:1.7")
}

dependencies {
    scm 'org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r'
    scm 'commons-codec:commons-codec:1.7'
}
Dependency Insights
Gradle provides the built-in dependencyInsight
task to render a dependency insight report from the command line.
Dependency insights provide information about a single dependency within a single configuration.
Given a dependency, you can identify the selection reason and origin.
dependencyInsight
accepts the following parameters:
--dependency <dependency>
(mandatory)
The dependency to investigate.
You can supply a complete group:name, or part of it.
If multiple dependencies match, Gradle generates a report covering all matching dependencies.
--configuration <name>
(mandatory)
The dependency configuration which resolves the given dependency.
This parameter is optional for projects that use the Java plugin, since the plugin provides a default value of compileClasspath.
--single-path
(optional)
Render only a single path to the dependency.
> gradle -q dependencyInsight --dependency commons-codec --configuration scm
commons-codec:commons-codec:1.7
Variant default:
| Attribute Name | Provided | Requested |
|-------------------|----------|-----------|
| org.gradle.status | release | |
Selection reasons:
- By conflict resolution: between versions 1.7 and 1.6
commons-codec:commons-codec:1.7
\--- scm
commons-codec:commons-codec:1.6 -> 1.7
\--- org.apache.httpcomponents:httpclient:4.3.6
\--- org.eclipse.jgit:org.eclipse.jgit:4.9.2.201712150930-r
\--- scm
A web-based, searchable dependency report is available by adding the --scan option.
For more information about configurations, see the dependency configuration documentation.
Selection Reasons
The "Selection reasons" section of the dependency insight report lists the reasons why a dependency was selected.
Have a look at the table below to understand the meaning of the different terms used:
Table 5. Terminology
Was requested : <text>
The dependency appears in the graph, and its inclusion came with a because text.
Was requested : didn’t match versions <versions>
The dependency appears with a dynamic version which did not include the listed versions. May be followed by a because text.
Was requested : reject version <versions>
The dependency appears with a rich version containing one or more reject constraints. May be followed by a because text.
By conflict resolution : between versions <versions>
The dependency appeared multiple times in the graph with different version requests; conflict resolution selected the most appropriate version.
By constraint
A dependency constraint participated in the version selection. May be followed by a because text.
By ancestor
There is a rich version with a strictly constraint which enforces the version of this dependency.
Selected by rule
A dependency resolution rule overruled the default selection process. May be followed by a because text.
Rejection : <version> by rule because <text>
A ComponentSelection.reject rule rejected the given version of the dependency.
Rejection : version <version> : <attributes information>
The dependency has a dynamic version, and some versions did not match the requested attributes.
Forced
The build enforces the version of the dependency through an enforced platform or resolution strategy.
Variant Selection Errors
Sometimes a selection error happens at the variant selection level.
Have a look at the dedicated section to understand these errors and how to resolve them.
Unsafe Configuration Resolution Errors
Resolving a configuration can have side effects on Gradle’s project model.
As a result, Gradle must manage access to each project’s configurations.
There are a number of ways a configuration might be resolved unsafely.
For example:
A task from one project directly resolves a configuration in another project in the task’s action.
A task specifies a configuration from another project as an input file collection.
A build script for one project resolves a configuration in another project during evaluation.
Project configurations are resolved in the settings file.
Gradle produces a deprecation warning for each unsafe access.
Unsafe access can cause indeterminate errors.
You should fix unsafe access warnings in your build.
In most cases, you can resolve unsafe accesses by creating a cross-project dependency on the other project.
See the documentation for sharing outputs between projects for more information.
If you find a use case that can’t be resolved using these techniques, please let us know by filing a GitHub Issue.
This chapter covers the way dependency resolution works inside Gradle.
After covering how you can declare repositories and dependencies, it makes sense to explain how these declarations come together during dependency resolution.
Dependency resolution is a process that consists of two phases, which are repeated until the dependency graph is complete:
When a new dependency is added to the graph, perform conflict resolution to determine which version should be added to the graph.
When a specific dependency, that is a module with a version, is identified as part of the graph, retrieve its metadata so that its dependencies can be added in turn.
The following section will describe what Gradle identifies as conflicts and how it can resolve them automatically.
After that, the retrieval of metadata will be covered, explaining how Gradle can follow dependency links.
How Gradle handles conflicts
When doing dependency resolution, Gradle handles two types of conflicts:
Version conflicts
That is when two or more dependencies require a given dependency but with different versions.
Implementation conflicts
That is when the dependency graph contains multiple modules that provide the same implementation, or capability in Gradle terminology.
The following sections will explain in detail how Gradle attempts to resolve these conflicts.
The dependency resolution process is highly customizable to meet enterprise requirements.
For more information, see the chapter on Controlling transitive dependencies.
Version conflict resolution
A version conflict occurs when two components depend on the same module but require different versions of it.
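For illustration (the versions here are hypothetical), a build might introduce such a conflict like this:

dependencies {
    implementation("com.google.guava:guava:20.0")    // direct dependency on guava 20.0
    implementation("com.google.inject:guice:4.2.2")  // pulls in a newer guava transitively
}

This is the kind of graph referred to in the Maven example below: guava is requested both directly and transitively, with different versions.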
Resolution strategy
Given the conflict above, there exist multiple ways to handle it, either by selecting a version or failing the resolution.
Different tools that handle dependency management have different ways of handling these types of conflicts.
Maven will take the shortest path to a dependency and use that version.
In case there are multiple paths of the same length, the first one wins.
This means that in the example above, the version of guava
will be 20.0
because the direct dependency is closer than the guice
dependency.
The main drawback of this method is that it is ordering-dependent.
Keeping order in a very large graph can be a challenge.
For example, what if the new version of a dependency ends up having its own dependency declarations in a different order than the previous version?
With Maven, this could have unwanted impact on resolved versions.
Apache Ivy is a very flexible dependency management tool.
It offers the possibility to customize dependency resolution, including conflict resolution.
This flexibility comes at the price of making resolution behavior hard to reason about.
Gradle will consider all requested versions, wherever they appear in the dependency graph.
Out of these versions, it will select the highest one. More information on version ordering
here.
As you have seen, Gradle supports a concept of rich version declaration, so what is the highest version depends on the way versions were declared:
If there is a non-range version that falls within the specified ranges or is higher than their upper bound, it will be selected.
If there are only ranges, the selection will depend on the intersection of ranges:
If all the ranges intersect, then the highest existing version of the intersection will be selected.
If there is no clear intersection between all the ranges, the highest existing version will be selected from the highest range. If there is no version available for the highest range, the resolution will fail.
Note that in the case where ranges come into play, Gradle requires metadata to determine which versions do exist for the considered range.
This causes an intermediate lookup for metadata, as described in How Gradle retrieves dependency metadata.
Qualifiers
There is a caveat to comparing versions when it comes to selecting the highest one.
All the rules of version ordering still apply, but the conflict resolver
has a bias towards versions without qualifiers.
The "qualifier" of a version, if it exists, is the tail end of the version string, starting at the first non-dot separator
found in it. The other (first) part of the version string is called the "base form" of the version. Here are some examples
to illustrate:
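For instance (illustrative examples):

1.2.3    — base form 1.2.3, no qualifier
1.2-3    — base form 1.2, qualifier 3
1_alpha  — base form 1, qualifier alpha
1.2b3    — base form 1.2, qualifier b3
abc      — base form abc, no qualifier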
As you can see, separators are any of the ., -, _, + characters, plus the empty string when a numeric and a non-numeric part of the version are next to each other.
Broadly, when resolving the conflict between competing versions, the version with the highest base form wins; when base forms are equal, a version without a qualifier is preferred over one with a qualifier.
Implementation conflict resolution
Gradle uses variants and capabilities to identify what a module provides.
This is a unique feature that deserves its own chapter to understand what it means and enables.
A conflict occurs the moment two modules either attempt to select incompatible variants, or declare the same capability.
How Gradle retrieves dependency metadata
Gradle requires metadata about the modules included in your dependency graph.
That information is required for two main purposes: determining which versions of a module exist when a dynamic version is declared, and discovering a module’s own dependencies so they can be added to the graph in turn.
When listing the versions that exist for a dynamic version, each repository is inspected; Gradle does not stop at the first one returning some metadata.
When multiple repositories are defined, they are inspected in the order they were added.
For Maven repositories, Gradle will use the maven-metadata.xml
which provides information about the available versions.
For Ivy repositories, Gradle will resort to directory listing.
This process results in a list of candidate versions that are then matched to the dynamic version expressed.
At this point, version conflict resolution is resumed.
Note that Gradle caches the version information, more information can be found in the section Controlling dynamic version caching.
Obtaining module metadata
Given a required dependency, with a version, Gradle attempts to resolve the dependency by searching for the module the dependency points at.
Depending on the type of repository, Gradle looks for metadata files describing the module (.module
, .pom
or ivy.xml
file) or directly for artifact files.
Modules that have a module metadata file (.module
, .pom
or ivy.xml
file) are preferred over modules that have an artifact file only.
Once a repository returns a metadata result, subsequent repositories are ignored.
All of the artifacts for the module are then requested from the same repository that was chosen in the process above.
All of that data, including the repository source and potential misses, is then stored in the dependency cache.
The penultimate point above (artifacts are fetched from the same repository as the metadata) is what can make the integration with Maven Local problematic.
As it is a cache for Maven, it will sometimes miss some artifacts of a given module.
If Gradle is sourcing such a module from Maven Local, it will consider the missing artifacts to be missing altogether.
Repository disabling
When Gradle fails to retrieve information from a repository, it will disable it for the duration of the build and fail all dependency resolution.
That last point is important for reproducibility.
If the build was allowed to continue, ignoring the faulty repository, subsequent builds could have a different result once the repository is back online.
HTTP Retries
Gradle will make several attempts to connect to a given repository before disabling it.
If the connection fails, Gradle will retry on certain errors which have a chance of being transient, increasing the wait time between each retry.
The repository is disabled when it cannot be contacted, either because of a permanent error or because the maximum number of retries was reached.
The Dependency Cache
Gradle contains a highly sophisticated dependency caching mechanism, which seeks to minimise the number of remote requests made in dependency resolution, while striving to guarantee that the results of dependency resolution are correct and reproducible.
The Gradle dependency cache consists of two storage types located under $GRADLE_USER_HOME/caches
:
A file-based store of downloaded artifacts, including binaries like jars as well as raw downloaded meta-data like POM files and Ivy files.
The storage path for a downloaded artifact includes the SHA1 checksum, meaning that two artifacts with the same name but different content can easily be cached.
A binary store of resolved module metadata, including the results of resolving dynamic versions, module descriptors, and artifacts.
The Gradle cache does not allow the local cache to hide problems or create other mysterious and difficult-to-debug behavior.
Gradle enables reliable and reproducible enterprise builds with a focus on bandwidth and storage efficiency.
Separate metadata cache
Gradle keeps a record of various aspects of dependency resolution in binary format in the metadata cache.
The information stored in the metadata cache includes:
The resolved module metadata for a particular module, including module artifacts and module dependencies.
The resolved artifact metadata for a particular artifact, including a pointer to the downloaded artifact file.
The absence of a particular module or artifact in a particular repository, eliminating repeated attempts to access a resource that does not exist.
Repository caches are independent
As described above, for each repository there is a separate metadata cache.
A repository is identified by its URL, type and layout.
If a module or artifact has not been previously resolved from this repository, Gradle will attempt to resolve the module against the repository.
This will always involve a remote lookup on the repository, however in many cases no download will be required.
Dependency resolution will fail if the required artifacts are not available in any repository specified by the build, even if the local cache has a copy of this artifact which was retrieved from a different repository.
Repository independence allows builds to be isolated from each other in an advanced way that no build tool has done before.
This is a key feature to create builds that are reliable and reproducible in any environment.
Artifact reuse
Before downloading an artifact, Gradle tries to determine the checksum of the required artifact by downloading the sha file associated with that artifact.
If the checksum can be retrieved, the artifact is not downloaded again if an artifact with the same id and checksum already exists in the cache.
If the checksum cannot be retrieved from the remote server, the artifact will be downloaded (and ignored if it matches an existing artifact).
As well as considering artifacts downloaded from a different repository, Gradle will also attempt to reuse artifacts found in the local Maven Repository.
If a candidate artifact has been downloaded by Maven, Gradle will use this artifact if it can be verified to match the checksum declared by the remote server.
Checksum based storage
It is possible for different repositories to provide a different binary artifact in response to the same artifact identifier.
This is often the case with Maven SNAPSHOT artifacts, but can also be true for any artifact which is republished without changing its identifier.
By caching artifacts based on their SHA1 checksum, Gradle is able to maintain multiple versions of the same artifact.
This means that when resolving against one repository Gradle will never overwrite the cached artifact file from a different repository.
This is done without requiring a separate artifact file store per repository.
Cache Locking
The Gradle dependency cache uses file-based locking to ensure that it can safely be used by multiple Gradle processes concurrently.
The lock is held whenever the binary metadata store is being read or written, but is released for slow operations such as downloading remote artifacts.
This concurrent access is only supported if the different Gradle processes can communicate with each other. This is usually not the case for containerized builds.
Cache Cleanup
Gradle keeps track of which artifacts in the dependency cache are accessed.
Using this information, the cache is periodically (at most every 24 hours) scanned for artifacts that have not been used for more than 30 days.
Obsolete artifacts are then deleted to ensure the cache does not grow indefinitely.
Dealing with ephemeral builds
It’s a common practice to run builds in ephemeral containers.
A container is typically spawned to only execute a single build before it is destroyed.
This can become a practical problem when a build depends on a lot of dependencies which each container has to re-download.
To help with this scenario, Gradle provides a couple of options:
Copying and reusing the cache
The dependency cache, both the file and metadata parts, is fully encoded using relative paths.
This means that it is perfectly possible to copy a cache around and see Gradle benefit from it.
The path that can be copied is $GRADLE_USER_HOME/caches/modules-<version>.
The only constraint is placing it using the same structure at the destination, where the value of GRADLE_USER_HOME
can be different.
Do not copy the *.lock or gc.properties files if they exist.
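A minimal shell sketch (the paths and use of rsync are illustrative, not prescribed by Gradle) that copies the cache while skipping those files:

rsync -a --exclude='*.lock' --exclude='gc.properties' \
  "$GRADLE_USER_HOME/caches/modules-2/" \
  "$TARGET_GRADLE_USER_HOME/caches/modules-2/"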
Note that creating the cache and consuming it should be done using compatible Gradle versions, as shown in the table below.
Otherwise, the build might still require some interactions with remote repositories to complete missing information, which might be available in a different version.
If multiple incompatible Gradle versions are in play, all should be used when seeding the cache.
Table 6. Dependency cache compatibility
Sharing the dependency cache with other Gradle instances
Instead of copying the dependency cache into each container, it’s possible to mount a shared, read-only directory that will act as a dependency cache for all containers.
This cache, unlike the classical dependency cache, is accessed without locking, making it possible for multiple builds to read from the cache concurrently. It’s important that the read-only cache
is not written to when other builds may be reading from it.
When using the shared read-only cache, Gradle looks for dependencies (artifacts or metadata) in both the writable cache in the local Gradle User Home directory and the shared read-only cache.
If a dependency is present in the read-only cache, it will not be downloaded.
If a dependency is missing from the read-only cache, it will be downloaded and added to the writable cache.
In practice, this means that the writable cache will only contain dependencies that are unavailable in the read-only cache.
The read-only cache should be sourced from a Gradle dependency cache that already contains some of the required dependencies.
The cache can be incomplete; however, an empty shared cache will only add overhead.
The first step in using a shared dependency cache is to create one by copying an existing local cache.
For this you need to follow the instructions above.
Then set the GRADLE_RO_DEP_CACHE
environment variable to point to the directory containing the cache:
$GRADLE_RO_DEP_CACHE
|-- modules-2 : the read-only dependency cache, should be mounted with read-only privileges
$GRADLE_HOME
|-- caches
|-- modules-2 : the container specific dependency cache, should be writable
|-- ...
|-- ...
In a CI environment, it’s a good idea to have one build which "seeds" a Gradle dependency cache, which is then copied to a different directory.
This directory can then be used as the read-only cache for other builds.
You shouldn’t use an existing Gradle installation cache as the read-only cache, because this directory may contain locks and may be modified by the seeding build.
Beyond the built-in reports, the results of dependency resolution can be accessed programmatically. This is useful, for example:
for tasks generating a visual representation (image, .dot file, …) of a dependency graph
for tasks providing diagnostics (similar to the dependencyInsight
task)
for tasks which need to perform dependency resolution at execution time (e.g, download files on demand)
Several APIs give access to these results:
the ResolutionResult API gives access to a resolved dependency graph, whether the resolution was successful or not.
the artifacts API provides a simple access to the resolved artifacts, untransformed, but with lazy download of artifacts (they would only be downloaded on demand).
the artifact view API provides an advanced, filtered view of artifacts, possibly transformed.
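As a hedged sketch (assuming the Java plugin’s runtimeClasspath configuration; the task name is illustrative), a task can walk the resolved graph via the ResolutionResult API:

tasks.register("printResolvedComponents") {
    // Obtain the lazy resolution result; actual resolution happens at execution time
    val resolution = configurations.getByName("runtimeClasspath").incoming.resolutionResult
    doLast {
        // Print every component that participated in the resolved graph
        resolution.allComponents.forEach { component ->
            println(component.id.displayName)
        }
    }
}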
Working with external dependencies and plugins published on third-party repositories puts your build at risk.
In particular, you need to be aware of what binaries are brought in transitively and whether they are legitimate.
To mitigate the security risks and avoid integrating compromised dependencies in your project, Gradle supports dependency verification.
Dependency verification is, by nature, an inconvenient feature to use.
It means that whenever you’re going to update a dependency, builds are likely to fail.
It means that merging branches is going to be harder because each branch can have different dependencies.
It means that you will be tempted to switch it off.
So why should you bother?
Dependency verification is about trust in what you get and what you ship.
Without dependency verification it’s easy for an attacker to compromise your supply chain.
There are many real world examples of tools compromised by adding a malicious dependency.
Dependency verification is meant to protect yourself from those attacks, by forcing you to ensure that the artifacts you include in your build are the ones that you expect.
It is not meant, however, to prevent you from including vulnerable dependencies.
Finding the right balance between security and convenience is hard but Gradle will try to let you choose the "right level" for you.
Gradle supports both checksum and signature verification out of the box but performs no dependency verification by default.
This section will guide you into configuring dependency verification properly for your needs.
This feature can be used to verify both the integrity (via checksums) and the provenance (via signatures) of the dependencies and plugins your build uses.
Currently the only source of dependency verification metadata is this XML configuration file.
Future versions of Gradle may include other sources (for example via external services).
Dependency verification is automatically enabled once the configuration file for dependency verification is discovered.
This configuration file is located at $PROJECT_ROOT/gradle/verification-metadata.xml.
This file minimally consists of the following:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>false</verify-signatures>
</configuration>
</verification-metadata>
With this configuration, Gradle will verify all artifacts using checksums, but will not verify signatures.
Gradle will verify any artifact downloaded using its dependency management engine, which includes, but is not limited to, artifact files (for example jars) and metadata files (for example POM files), as well as plugins and their dependencies.
Gradle will not verify changing dependencies (in particular SNAPSHOT
dependencies) nor locally produced artifacts (typically jars produced during the build itself) as by nature their checksums and signatures would always change.
With such a minimal configuration file, a project using any external dependency or plugin would immediately start failing because it doesn’t contain any checksum to verify.
Scope of the dependency verification
A dependency verification configuration is global: a single file is used to configure verification of the whole build.
In particular, the same file is used for both the (sub)projects and buildSrc
.
If an included build is used:
the verification configuration of the current (composite) build is used
so if the included build itself uses verification, its configuration is ignored in favor of the current one
which means that including a build works similarly to upgrading a dependency: it may require you to update your current verification metadata
Configuring the console output
By default, if dependency verification fails, Gradle will generate a small summary about the verification failure as well as an HTML report containing the full information about the failures.
If your environment prevents you from reading this HTML report file (for example if you run a build on CI and it’s not easy to fetch the remote artifacts), Gradle provides a way to opt in to a verbose console report.
For this, you need to add this Gradle property to your gradle.properties
file:
org.gradle.dependency.verification.console=verbose
Bootstrapping dependency verification
It’s worth mentioning that while Gradle can generate a dependency verification file for you, you should always check whatever Gradle generated for you because your build may already contain compromised dependencies without you knowing about it.
Please refer to the appropriate checksum verification or signature verification section for more information.
If you plan on using signature verification, please also read the corresponding section of the docs.
Bootstrapping can be used either to create a file from scratch or to update an existing file with new information.
Therefore, it’s recommended to always use the same parameters once you have started bootstrapping.
The dependency verification file can be generated with the following CLI instructions:
gradle --write-verification-metadata sha256 help
The write-verification-metadata flag requires the list of checksum algorithms that you want to generate, or pgp for signatures.
Executing this command line will cause Gradle to:
compute the requested checksums, and possibly verify signatures, depending on what you asked for
generate, at the end of the build, the configuration file containing the inferred verification metadata
There are dependencies that Gradle cannot discover this way.
In particular, you will notice that the CLI above uses the help
task.
If you don’t specify any task, Gradle will automatically run the default task and generate a configuration file at the end of the build too.
The difference is that Gradle may discover more dependencies and artifacts depending on the tasks you execute.
As a matter of fact, Gradle cannot automatically discover detached configurations. These are dependency graphs resolved as an internal implementation detail of the execution of a task: in particular, they are not declared as an input of the task, because they effectively depend on the configuration of the task at execution time.
A good way to start is just to use the simplest task, help
, which will discover as much as possible, and if subsequent builds fail with a verification error, you can re-execute generation with the appropriate tasks to "discover" more dependencies.
Gradle won’t verify either checksums or signatures of plugins which use their own HTTP clients.
Only plugins which use the infrastructure provided by Gradle for performing requests will see their requests verified.
Using generation for incremental updates
The verification file generated by Gradle has a strict ordering for all its content.
It also uses the information from the existing state to limit changes to the strict minimum.
This means that generation is actually a convenient tool for updating a verification file:
Checksum entries generated by Gradle will have a clear origin
that starts with "Generated by Gradle", which is a good indicator that an entry needs to be reviewed,
Entries added by hand will immediately be accounted for, and appear at the right location after writing the file,
The header comments of the file will be preserved, i.e. comments before the root XML node.
This allows you to have a license header or instructions on which tasks and which parameters to use for generating that file.
Using dry mode
By default, bootstrapping is incremental, which means that if you run it multiple times, information is added to the file and in particular you can rely on your VCS to check the diffs.
There are situations where you would just want to see what the generated verification metadata file would look like without actually changing the existing one or overwriting it.
For this purpose, you can just add --dry-run
:
gradle --write-verification-metadata sha256 help --dry-run
Then instead of generating the verification-metadata.xml file, a new file will be generated, called verification-metadata.dryrun.xml.
Disabling metadata verification
By default, Gradle will not only verify artifacts (jars, …) but also the metadata associated with those artifacts (typically POM files).
Verifying this ensures the maximum level of security: metadata files typically tell what transitive dependencies will be included, so a compromised metadata file may cause the introduction of undesired dependencies in the graph.
However, because all artifacts are verified, a compromised dependency introduced this way would in general be discovered quickly, because it would cause a checksum verification failure (its checksums would be missing from the verification metadata).
Because metadata verification can significantly increase the size of your configuration file, you may want to disable verification of metadata.
If you understand the risks of doing so, set the <verify-metadata>
flag to false
in the configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>false</verify-metadata>
<verify-signatures>false</verify-signatures>
</configuration>
<!-- the rest of this file doesn't need to declare anything about metadata files -->
</verification-metadata>
Verifying dependency checksums
Checksum verification allows you to ensure the integrity of an artifact.
This is the simplest thing that Gradle can do for you to make sure that the artifacts you use have not been tampered with.
Gradle supports MD5, SHA1, SHA-256 and SHA-512 checksums.
However, only SHA-256 and SHA-512 checksums are considered secure nowadays.
Adding the checksum for an artifact
External components are identified by GAV coordinates, then each of the artifacts by their file names.
To declare the checksums of an artifact, you need to add the corresponding section in the verification metadata file.
For example, to declare checksums for Apache PDFBox, whose GAV coordinates are org.apache.pdfbox:pdfbox:2.0.17 and whose downloaded artifacts are pdfbox-2.0.17.jar and pdfbox-2.0.17.pom, add the following:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>false</verify-signatures>
</configuration>
<components>
<component group="org.apache.pdfbox" name="pdfbox" version="2.0.17">
<artifact name="pdfbox-2.0.17.jar">
<sha512 value="7e11e54a21c395d461e59552e88b0de0ebaf1bf9d9bcacadf17b240d9bbc29bf6beb8e36896c186fe405d287f5d517b02c89381aa0fcc5e0aa5814e44f0ab331" origin="PDFBox Official site (https://pdfbox.apache.org/download.cgi)"/>
</artifact>
<artifact name="pdfbox-2.0.17.pom">
<sha512 value="82de436b38faf6121d8d2e71dda06e79296fc0f7bc7aba0766728c8d306fd1b0684b5379c18808ca724bf91707277eba81eb4fe19518e99e8f2a56459b79742f" origin="Generated by Gradle"/>
</artifact>
</component>
</components>
</verification-metadata>
Where to get checksums from?
In general, checksums are published alongside artifacts on public repositories.
However, if a dependency is compromised in a repository, it’s likely its checksum will be too, so it’s a good practice to get the checksum from a different place, usually the website of the library itself.
In fact, it’s a good security practice to publish the checksums of artifacts on a different server than the server where the artifacts themselves are hosted: it’s harder to compromise a library both on the repository and the official website.
In the example above, the checksum was published on the website for the JAR, but not the POM file.
This is why it’s usually easier to let Gradle generate the checksums and verify by reviewing the generated file carefully.
In this example, not only could we check that the checksum was correct, but we could also find it on the official website, which is why we changed the value of the origin attribute on the sha512 element from Generated by Gradle to PDFBox Official site.
Changing the origin gives users a sense of how trustworthy your build is.
Interestingly, using pdfbox will require much more than those two artifacts, because it will also bring in transitive dependencies.
If the dependency verification file only included the checksums for the main artifacts you used, the build would fail with an error like this one:
Execution failed for task ':compileJava'.
> Dependency verification failed for configuration ':compileClasspath':
- On artifact commons-logging-1.2.jar (commons-logging:commons-logging:1.2) in repository 'MavenRepo': checksum is missing from verification metadata.
- On artifact commons-logging-1.2.pom (commons-logging:commons-logging:1.2) in repository 'MavenRepo': checksum is missing from verification metadata.
What this indicates is that your build requires commons-logging
when executing compileJava
, however the verification file doesn’t contain enough information for Gradle to verify the integrity of the dependencies, meaning you need to add the required information to the verification metadata file.
See troubleshooting dependency verification for more insights on what to do in this situation.
What checksums are verified?
If a dependency verification metadata file declares more than one checksum for a dependency, Gradle will verify all of them and fail if any of them fails.
For example, the following configuration would check both the md5
and sha1
checksums:
<component group="org.apache.pdfbox" name="pdfbox" version="2.0.17">
<artifact name="pdfbox-2.0.17.jar">
<md5 value="c713a8e252d0add65e9282b151adf6b4" origin="official site"/>
<sha1 value="b5c8dff799bd967c70ccae75e6972327ae640d35" origin="official site" reason="Additional check for this artifact"/>
</artifact>
</component>
There are multiple reasons why you’d like to do so:
an official site doesn’t publish secure checksums (SHA-256, SHA-512) but publishes multiple insecure ones (MD5, SHA1).
While it’s easy to fake an MD5 checksum and hard but possible to fake a SHA1 checksum, it’s harder to fake both of them for the same artifact.
you might want to add generated checksums to the list above
when updating dependency verification file with more secure checksums, you don’t want to accidentally erase checksums
Verifying dependency signatures
In addition to checksums, Gradle supports verification of signatures.
Signatures are used to assess the provenance of a dependency (it tells who signed the artifacts, which usually corresponds to who produced it).
As enabling signature verification usually means a higher level of security, you might want to replace checksum verification with signature verification.
Signatures can also be used to assess the integrity of a dependency similarly to checksums.
Signatures are computed from the hash of an artifact, not from the artifact itself.
This means that if the signature is done on an unsafe hash (even SHA1), then you’re not correctly assessing the integrity of the file.
For this reason, if you care about both, you need to add both signatures and checksums to your verification metadata.
However:
Gradle only supports verification of signatures published on remote repositories as ASCII-armored PGP files
not all artifacts are published with signatures
a good signature doesn’t mean that the signatory was legitimate
As a consequence, signature verification will often be used alongside checksum verification.
About expired keys
It’s very common to find artifacts which are signed with an expired key.
This is not a problem for verification: key expiry is mostly used to avoid signing with a stolen key.
If an artifact was signed before expiry, it’s still valid.
Enabling signature verification
Because verifying signatures is more expensive (both I/O and CPU wise) and harder to check manually, it’s not enabled by default.
Enabling it requires you to change the configuration option in the verification-metadata.xml
file:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-signatures>true</verify-signatures>
</configuration>
</verification-metadata>
That is to say, Gradle’s verification mechanism is much stronger when signature verification is enabled than with checksum verification alone.
In particular:
if an artifact is signed with multiple keys, all of them must pass validation or the build will fail
if an artifact passes verification, any additional checksum configured for the artifact will also be checked
However, it’s not because an artifact passes signature verification that you can trust it: you need to trust the keys.
In practice, it means you need to list the keys that you trust for each artifact, which is done by adding a pgp
entry instead of a sha1
for example:
<component group="com.github.javaparser" name="javaparser-core" version="3.6.11">
<artifact name="javaparser-core-3.6.11.jar">
<pgp value="8756c4f765c9ac3cb6b85d62379ce192d401ab61"/>
</artifact>
</component>
For the pgp and trusted-key elements, Gradle requires full fingerprint IDs (e.g. b801e2f8ef035068ec1139cc29579f18fa8fd93b instead of a long ID like 29579f18fa8fd93b).
This minimizes the chance of a collision attack.
At the time of writing, V4 key fingerprints are 160 bits (40 characters) long. Longer fingerprints are accepted to be future-proof in case a longer format is introduced.
In ignore-key
elements, either fingerprints or long (64-bit) IDs can be used. A shorter ID can only result in a bigger range of exclusion, therefore, it’s safe to use.
This effectively means that you trust com.github.javaparser:javaparser-core:3.6.11 if it’s signed with the key 8756c4f765c9ac3cb6b85d62379ce192d401ab61.
Without this, the build would fail with this error:
> Dependency verification failed for configuration ':compileClasspath':
- On artifact javaparser-core-3.6.11.jar (com.github.javaparser:javaparser-core:3.6.11) in repository 'MavenRepo': Artifact was signed with key '8756c4f765c9ac3cb6b85d62379ce192d401ab61' (Bintray (by JFrog) <****>) and passed verification but the key isn't in your trusted keys list.
The key IDs that Gradle shows in error messages are the key IDs found in the signature file it tries to verify.
It doesn’t mean that it’s necessarily the keys that you should trust.
In particular, if the signature is correct but done by a malicious entity, Gradle wouldn’t tell you.
Trusting keys globally
Signature verification has the advantage that it can make the configuration of dependency verification easier, since you don’t have to explicitly list every artifact as you do for checksum-only verification.
In fact, it’s common that the same key can be used to sign several artifacts.
If this is the case, you can move the trusted key from the artifact level to the global configuration block:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>true</verify-signatures>
<trusted-keys>
<trusted-key id="8756c4f765c9ac3cb6b85d62379ce192d401ab61" group="com.github.javaparser"/>
</trusted-keys>
</configuration>
<components/>
</verification-metadata>
The configuration above means that for any artifact belonging to the group com.github.javaparser
, we trust it if it’s signed with the 8756c4f765c9ac3cb6b85d62379ce192d401ab61
fingerprint.
The trusted-key
element works similarly to the trusted-artifact element:
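For illustration (the fingerprint, coordinates and version below are hypothetical), a key can be trusted only for a specific artifact, and even only for specific versions of it:

<trusted-keys>
   <trusted-key id="b801e2f8ef035068ec1139cc29579f18fa8fd93b" group="com.example" name="some-library" version="1.0"/>
</trusted-keys>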
This means you can trust a key for one artifact (perhaps only up to the version released before the key was stolen) but not for another.
Remember that anybody can put an arbitrary name when generating a PGP key, so never trust the key solely based on the key name.
Verify if the key is listed at the official site.
For example, Apache projects typically provide a KEYS.txt file that you can trust.
Specifying key servers and ignoring keys
Gradle will automatically download the public keys required to verify a signature.
For this it uses a list of well known and trusted key servers (the list may change between Gradle versions, please refer to the implementation to figure out what servers are used by default).
You can explicitly set the list of key servers that you want to use by adding them to the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>true</verify-signatures>
<key-servers>
<key-server uri="hkp://my-key-server.org"/>
<key-server uri="https://my-other-key-server.org"/>
</key-servers>
</configuration>
</verification-metadata>
Despite this, it’s possible that a key is not available, in which case you can ignore it in the configuration:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>true</verify-signatures>
<ignored-keys>
<ignored-key id="abcdef1234567890" reason="Key is not available in any key server"/>
</ignored-keys>
</configuration>
</verification-metadata>
As soon as a key is ignored, it will not be used for verification, even if the signature file mentions it.
However, if the signature cannot be verified with at least one other key, Gradle will mandate that you provide a checksum.
Exporting keys for faster verification
Gradle automatically downloads the required keys but this operation can be quite slow and requires everyone to download the keys.
To avoid this, Gradle offers the ability to use a local keyring file containing the required public keys.
Note that only public key packets and a single userId per key are stored and used.
All other information (user attributes, signatures, etc.) is stripped from downloaded or exported keys.
Gradle supports two different file formats for keyrings: a binary format (.gpg file) and a plain text format (.keys), also known as ASCII-armored format.
There are pros and cons for each of the formats: the binary format is more compact and can be updated directly via GPG commands, but is completely opaque (binary).
By contrast, the ASCII-armored format is human-readable, can easily be updated by hand, and makes code reviews easier thanks to readable diffs.
You can configure which file type will be used by adding the keyring-format configuration option:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<verify-metadata>true</verify-metadata>
<verify-signatures>true</verify-signatures>
<keyring-format>armored</keyring-format>
</configuration>
</verification-metadata>
Available options for keyring format are armored and binary.
Without keyring-format, if the gradle/verification-keyring.gpg or gradle/verification-keyring.keys file is present, Gradle will search for keys there first.
The plain text file will be ignored if there’s already a .gpg file (the binary version takes precedence).
You can ask Gradle to export all keys it used for verification of this build to the keyring during bootstrapping:
./gradlew --write-verification-metadata pgp,sha256 --export-keys
Unless keyring-format
is specified, this command will generate both the binary version and the ASCII-armored file.
Use this option to choose the preferred format.
You should only pick one for your project.
It’s a good idea to commit this file to VCS (as long as you trust your VCS).
If you use git and use the binary version, make sure to make it treat this file as binary, by adding this to your .gitattributes
file:
*.gpg binary
You can also ask Gradle to export all trusted keys without updating the verification metadata file:
./gradlew --export-keys
Signature verification bootstrapping takes an optimistic point of view that signature verification is enough.
Therefore, if you also care about integrity, you must first bootstrap using checksum verification, then with signature verification.
Similarly to bootstrapping for checksums, Gradle provides a convenience for bootstrapping a configuration file with signature verification enabled.
For this, just add the pgp
option to the list of verifications to generate.
However, because there might be verification failures, missing keys or missing signature files, you must provide a fallback checksum verification algorithm:
./gradlew --write-verification-metadata pgp,sha256
This means that Gradle will verify the signatures and fall back to SHA-256 checksums when there’s a problem.
When bootstrapping, Gradle performs optimistic verification and therefore assumes a sane build environment.
It will therefore:
automatically add ignored keys for keys which couldn’t be downloaded from public key servers.
See here how to manually add keys if needed
automatically generate checksums for artifacts without signatures or ignored keys
If, for some reason, verification fails during the generation, Gradle will automatically generate an ignored key entry but warn you that you must absolutely check what happened.
This situation is common, as explained in this section: a typical case is when the POM file for a dependency differs from one repository to the other (often in a non-meaningful way).
In addition, Gradle will try to group keys automatically and generate the trusted-keys block, which reduces the configuration file size as much as possible.
Forcing use of local keyrings only
The local keyring files (.gpg or .keys) can be used to avoid reaching out to key servers whenever a key is required to verify an artifact.
However, it may be that the local keyring doesn’t contain a key, in which case Gradle would use the key servers to fetch the missing key.
If the local keyring file isn’t regularly updated via key export, your CI builds, for example, may reach out to key servers too often (especially if you use disposable containers for builds).
To avoid this, Gradle offers the ability to disallow use of key servers altogether: only the local keyring file would be used, and if a key is missing from this file, the build will fail.
To enable this mode, you need to disable key servers in the configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<key-servers enabled="false"/>
</configuration>
</verification-metadata>
Disabling verification or making it lenient
Dependency verification can be expensive, or sometimes verification could get in the way of day to day development (because of frequent dependency upgrades, for example).
Alternatively, you might want to enable verification on CI servers but not on local machines.
Gradle actually provides three different verification modes:
strict, which is the default. Verification fails as early as possible, in order to avoid the use of compromised dependencies during the build.
lenient, which will run the build even if there are verification failures. The verification errors will be displayed during the build without causing a build failure.
off, which disables verification entirely.
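The mode can be selected on the command line; for example (assuming the --dependency-verification flag of the Gradle CLI):

> gradle --dependency-verification lenient build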
Disabling dependency verification for some configurations only
In order to provide the strongest security level possible, dependency verification is enabled globally.
This will ensure, for example, that you trust all the plugins you use.
However, the plugins themselves may need to resolve additional dependencies that it doesn’t make sense to ask the user to accept.
For this purpose, Gradle provides an API which allows disabling dependency verification on some specific configurations.
Disabling dependency verification, if you care about security, is not a good idea.
This API mostly exists for cases where it doesn’t make sense to check dependencies.
However, in order to be on the safe side, Gradle will systematically print a warning whenever verification has been disabled for a specific configuration.
As an example, a plugin may want to check if there are newer versions of a library available and list those versions.
It doesn’t make sense, in this context, to ask the user to put the checksums of the POM files of the newer releases because by definition, they don’t know about them.
So the plugin might need to run its code independently of the dependency verification configuration.
To do this, you need to call the ResolutionStrategy#disableDependencyVerification
method:
Example 52. Disabling dependency verification
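The code for this example is not reproduced above; a minimal Kotlin DSL sketch (the configuration name is illustrative) would be:

configurations {
    named("myPluginClasspath") {
        resolutionStrategy {
            // Skip dependency verification for this configuration only;
            // Gradle will print a warning whenever this is used
            disableDependencyVerification()
        }
    }
}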
It’s also possible to disable verification on detached configurations like in the following example:
Example 53. Disabling dependency verification
tasks.register("checkDetachedDependencies") {
val detachedConf: FileCollection = configurations.detachedConfiguration(dependencies.create("org.apache.commons:commons-lang3:3.3.1")).apply {
resolutionStrategy.disableDependencyVerification()
doLast {
println(detachedConf.files)
tasks.register("checkDetachedDependencies") {
def detachedConf = configurations.detachedConfiguration(dependencies.create("org.apache.commons:commons-lang3:3.3.1"))
detachedConf.resolutionStrategy.disableDependencyVerification()
doLast {
println(detachedConf.files)
Trusting some particular artifacts
You might want to trust some artifacts more than others.
For example, it’s legitimate to think that artifacts produced in your company and found in your internal repository only are safe, but you want to check every external component.
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<trusted-artifacts>
<trust group="com.mycompany" reason="We trust mycompany artifacts"/>
</trusted-artifacts>
</configuration>
</verification-metadata>
This means that all components whose group is com.mycompany will automatically be trusted.
Trusted means that Gradle will not perform any verification whatsoever.
The trust element accepts the following attributes:
group, the group of the artifact to trust
name, the name of the artifact to trust
version, the version of the artifact to trust
file, the name of the artifact file to trust
regex, a boolean indicating whether the group, name, version and file attributes should be interpreted as regular expressions (defaults to false)
reason, an optional reason why the matched artifacts are trusted
In the example above, this means that the trusted artifacts would be artifacts in com.mycompany but not com.mycompany.other.
To trust all artifacts in com.mycompany
and all subgroups, you can use:
<?xml version="1.0" encoding="UTF-8"?>
<verification-metadata xmlns="https://schema.gradle.org/dependency-verification"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="https://schema.gradle.org/dependency-verification https://schema.gradle.org/dependency-verification/dependency-verification-1.3.xsd">
<configuration>
<trusted-artifacts>
<trust group="^com[.]mycompany($|([.].*))" regex="true" reason="We trust all mycompany artifacts"/>
</trusted-artifacts>
</configuration>
</verification-metadata>
Trusting multiple checksums for an artifact
It’s quite common to have different checksums for the same artifact in the wild.
How is that possible?
Despite progress, it’s often the case that developers publish, for example, to Maven Central and another repository separately, using different builds.
In general, this is not a problem but sometimes it means that the metadata files would be different (different timestamps, additional whitespaces, …).
Add to this that your build may use several repositories or repository mirrors and it makes it quite likely that a single build can "see" different metadata files for the same component!
In general, it’s not malicious (but you must verify that the artifact is actually correct), so Gradle lets you declare the additional artifact checksums.
For example:
<component group="org.apache" name="apache" version="13">
<artifact name="apache-13.pom">
<sha256 value="2fafa38abefe1b40283016f506ba9e844bfcf18713497284264166a5dbf4b95e">
<also-trust value="ff513db0361fd41237bef4784968bc15aae478d4ec0a9496f811072ccaf3841d"/>
</sha256>
</artifact>
</component>
You can have as many also-trust
entries as needed, but in general you shouldn’t have more than 2.
Skipping Javadocs and sources
By default Gradle will verify all downloaded artifacts, which includes Javadocs and sources.
In general this is not a problem but you might face an issue with IDEs which automatically try to download them during import: if you didn’t set the checksums for those too, importing would fail.
To avoid this, you can configure Gradle to automatically trust all javadocs/sources:
<trusted-artifacts>
<trust file=".*-javadoc[.]jar" regex="true"/>
<trust file=".*-sources[.]jar" regex="true"/>
</trusted-artifacts>
Adding keys to the ASCII-armored keyring
The added key must be ASCII-armored formatted and can be simply added at the end of the file.
If you already downloaded the key in the right format, you can simply append it to the file.
Or you can amend an existing KEYS file by issuing the following commands:
$ gpg --no-default-keyring --keyring /tmp/keyring.gpg --recv-keys 8756c4f765c9ac3cb6b85d62379ce192d401ab61
gpg: keybox '/tmp/keyring.gpg' created
gpg: key 379CE192D401AB61: public key "Bintray (by JFrog) <****>" imported
gpg: Total number processed: 1
gpg: imported: 1
# Write its ASCII-armored version
$ gpg --keyring /tmp/keyring.gpg --export --armor 8756c4f765c9ac3cb6b85d62379ce192d401ab61 > gradle/verification-keyring.keys
Once done, make sure to run the generation command again so that the key is processed by Gradle.
This will do the following:
$ gpg --no-default-keyring --keyring gradle/verification-keyring.gpg --recv-keys 8756c4f765c9ac3cb6b85d62379ce192d401ab61
gpg: keybox 'gradle/verification-keyring.gpg' created
gpg: key 379CE192D401AB61: public key "Bintray (by JFrog) <****>" imported
gpg: Total number processed: 1
gpg: imported: 1
$ gpg --no-default-keyring --keyring gradle/verification-keyring.gpg --recv-keys 6f538074ccebf35f28af9b066a0975f8b1127b83
gpg: key 0729A0AFF8999A87: public key "Kotlin Release <****>" imported
gpg: Total number processed: 1
gpg: imported: 1
Dealing with a verification failure
Dependency verification can fail in different ways, this section explains how you should deal with the various cases.
Missing verification metadata
The simplest failure you can have is when verification metadata is missing from the dependency verification file.
This is the case for example if you use checksum verification, then you update a dependency and new versions of the dependency (and potentially its transitive dependencies) are brought in.
Gradle will tell you what metadata is missing:
Execution failed for task ':compileJava'.
> Dependency verification failed for configuration ':compileClasspath':
- On artifact commons-logging-1.2.jar (commons-logging:commons-logging:1.2) in repository 'MavenRepo': checksum is missing from verification metadata.
Here the missing module group is commons-logging, its artifact name is commons-logging and its version is 1.2.
The corresponding artifact is commons-logging-1.2.jar, so you need to add the following entry to the verification file:
<component group="commons-logging" name="commons-logging" version="1.2">
<artifact name="commons-logging-1.2.jar">
<sha256 value="daddea1ea0be0f56978ab3006b8ac92834afeefbd9b7e4e6316fca57df0fa636" origin="official distribution"/>
</artifact>
</component>
Alternatively, you can ask Gradle to generate the missing information by using the bootstrapping mechanism: existing information in the metadata file will be preserved, Gradle will only add the missing verification metadata.
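For instance, a minimal sketch of the bootstrap invocation, assuming SHA-256 checksums and using the innocuous help task to trigger resolution:
$ ./gradlew --write-verification-metadata sha256 help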
Incorrect checksums
A more problematic issue is when the actual checksum verification fails:
Execution failed for task ':compileJava'.
> Dependency verification failed for configuration ':compileClasspath':
- On artifact commons-logging-1.2.jar (commons-logging:commons-logging:1.2) in repository 'MavenRepo': expected a 'sha256' checksum of '91f7a33096ea69bac2cbaf6d01feb934cac002c48d8c8cfa9c240b40f1ec21df' but was 'daddea1ea0be0f56978ab3006b8ac92834afeefbd9b7e4e6316fca57df0fa636'
This time, Gradle tells you what dependency is at fault, what was the expected checksum (the one you declared in the verification metadata file) and the one which was actually computed during verification.
Such a failure indicates that a dependency may have been compromised.
At this stage, you must perform manual verification and check what happens.
Several things can happen:
a dependency was tampered with in the local dependency cache of Gradle. This is usually harmless: erase the file from the cache and Gradle will re-download the dependency.
a dependency is available in multiple sources with slightly different binaries (additional whitespace, …): please inform the maintainers of the library that they have such an issue; you can use also-trust to accept the additional checksums.
Note that a variation of a compromised library is often name squatting, when a hacker uses GAV coordinates which look legitimate but are actually different by one character, or repository shadowing, when a dependency with the official GAV coordinates is published in a malicious repository which comes first in your build.
Untrusted signatures
If you have signature verification enabled, Gradle will perform verification of the signatures but will not trust them automatically:
> Dependency verification failed for configuration ':compileClasspath':
- On artifact javaparser-core-3.6.11.jar (com.github.javaparser:javaparser-core:3.6.11) in repository 'MavenRepo': Artifact was signed with key '379ce192d401ab61' (Bintray (by JFrog) <****>) and passed verification but the key isn't in your trusted keys list.
In this case you need to check for yourself whether the key that was used for verification (and therefore the signature) can be trusted; if so, refer to this section of the documentation to figure out how to declare trusted keys.
Failed signature verification
If Gradle fails to verify a signature, you will need to take action and verify artifacts manually because this may indicate a compromised dependency.
If such a thing happens, Gradle will fail with:
> Dependency verification failed for configuration ':compileClasspath':
- On artifact javaparser-core-3.6.11.jar (com.github.javaparser:javaparser-core:3.6.11) in repository 'MavenRepo': Artifact was signed with key '379ce192d401ab61' (Bintray (by JFrog) <****>) but signature didn't match
There are several options:
the signature was wrong in the first place, which happens frequently with dependencies published on different repositories
the signature is correct but the artifact has been compromised (either in the local dependency cache or remotely)
The right approach here is to go to the official site of the dependency and see if they publish signatures for their artifacts.
If they do, verify that the signature that Gradle downloaded matches the one published.
If you have checked that the dependency is not compromised and that it’s "only" the signature which is wrong, you should declare an artifact level key exclusion:
<components>
<component group="com.github.javaparser" name="javaparser-core" version="3.6.11">
<artifact name="javaparser-core-3.6.11.pom">
<ignored-keys>
<ignored-key id="379ce192d401ab61" reason="internal repo has corrupted POM"/>
</ignored-keys>
</artifact>
</component>
</components>
However, if you only do so, Gradle will still fail because all keys for this artifact will be ignored and you didn’t provide a checksum:
<components>
<component group="com.github.javaparser" name="javaparser-core" version="3.6.11">
<artifact name="javaparser-core-3.6.11.pom">
<ignored-keys>
<ignored-key id="379ce192d401ab61" reason="internal repo has corrupted POM"/>
</ignored-keys>
<sha256 value="a2023504cfd611332177f96358b6f6db26e43d96e8ef4cff59b0f5a2bee3c1e1"/>
</artifact>
</component>
</components>
Manual verification of a dependency
You will likely face a dependency verification failure (either checksum verification or signature verification) and will need to figure out if the dependency has been compromised or not.
In this section, we give an example of how you can manually check whether a dependency was compromised.
For this we will take this example failure:
> Dependency verification failed for configuration ':compileClasspath':
- On artifact j2objc-annotations-1.1.jar (com.google.j2objc:j2objc-annotations:1.1) in repository 'MyCompany Mirror': Artifact was signed with key '29579f18fa8fd93b' but signature didn't match
This error message gives us the GAV coordinates of the problematic dependency, as well as an indication of where the dependency was fetched from.
Here, the dependency comes from MyCompany Mirror, which is a repository declared in our build.
The first thing to do is therefore to download the artifact and its signature manually from the mirror:
$ curl https://my-company-mirror.com/repo/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar --output j2objc-annotations-1.1.jar
$ curl https://my-company-mirror.com/repo/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar.asc --output j2objc-annotations-1.1.jar.asc
Then we can use the key information provided in the error message to import the key locally:
$ gpg --recv-keys B801E2F8EF035068EC1139CC29579F18FA8FD93B
And perform verification:
$ gpg --verify j2objc-annotations-1.1.jar.asc
gpg: assuming signed data in 'j2objc-annotations-1.1.jar'
gpg: Signature made Thu 19 Jan 2017 12:06:51 AM CET
gpg: using RSA key 29579F18FA8FD93B
gpg: BAD signature from "Tom Ball <****>" [unknown]
What this tells us is that the problem is not on the local machine: the repository already contains a bad signature.
The next step is to do the same by downloading what is actually on Maven Central:
$ curl https://repo.maven.apache.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar --output central-j2objc-annotations-1.1.jar
$ curl https://repo.maven.apache.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar.asc --output central-j2objc-annotations-1.1.jar.asc
And we can now check the signature again:
$ gpg --verify central-j2objc-annotations-1.1.jar.asc
gpg: assuming signed data in 'central-j2objc-annotations-1.1.jar'
gpg: Signature made Thu 19 Jan 2017 12:06:51 AM CET
gpg: using RSA key 29579F18FA8FD93B
gpg: Good signature from "Tom Ball <****>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: B801 E2F8 EF03 5068 EC11 39CC 2957 9F18 FA8F D93B
This indicates that the dependency is valid on Maven Central.
At this stage, we already know that the problem lives in the mirror, it may have been compromised, but we need to verify.
A good idea is to compare the 2 artifacts, which you can do with a tool like diffoscope.
We then figure out that the intent wasn’t malicious but that somehow a build has been overwritten with a newer version (the version in Central is newer than the one in our repository).
In this case, you can decide to:
ignore the signature for this artifact and trust the different possible checksums (both for the old artifact and the new version)
or cleanup your mirror so that it contains the same version as in Maven Central
It’s worth noting that if you choose to delete the version from your repository, you will also need to remove it from the local Gradle cache.
This is facilitated by the fact that the error message tells you where the file is located:
> Dependency verification failed for configuration ':compileClasspath':
- On artifact j2objc-annotations-1.1.jar (com.google.j2objc:j2objc-annotations:1.1) in repository 'MyCompany Mirror': Artifact was signed with key '29579f18fa8fd93b' but signature didn't match
This can indicate that a dependency has been compromised. Please carefully verify the signatures and checksums.
For your information here are the path to the files which failed verification:
- GRADLE_USER_HOME/caches/modules-2/files-2.1/com.google.j2objc/j2objc-annotations/1.1/976d8d30bebc251db406f2bdb3eb01962b5685b3/j2objc-annotations-1.1.jar (signature: GRADLE_USER_HOME/caches/modules-2/files-2.1/com.google.j2objc/j2objc-annotations/1.1/82e922e14f57d522de465fd144ec26eb7da44501/j2objc-annotations-1.1.jar.asc)
GRADLE_USER_HOME = /home/jiraya/.gradle
You can safely delete the artifact file as Gradle would automatically re-download it:
rm -rf ~/.gradle/caches/modules-2/files-2.1/com.google.j2objc/j2objc-annotations/1.1
Cleaning up the verification file
If you do nothing, the dependency verification metadata will grow over time as you add new dependencies or change versions: Gradle will not automatically remove unused entries from this file.
The reason is that there’s no way for Gradle to know upfront if a dependency will effectively be used during the build or not.
As a consequence, adding dependencies or changing dependency versions can easily lead to more entries in the file, while leaving stale entries behind.
One option to clean up the file is to move the existing verification-metadata.xml file to a different location and call Gradle with the --dry-run mode: while not perfect (it will not notice dependencies only resolved at configuration time), it generates a new file that you can compare with the existing one.
We need to move the existing file because both the bootstrapping mode and the dry-run mode are incremental: they copy information from the existing metadata verification file (in particular, trusted keys).
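A hedged sketch of that procedure (the backup path is illustrative; with --dry-run, Gradle writes its output next to the regular file with a .dryrun suffix):
$ mv gradle/verification-metadata.xml /tmp/verification-metadata.backup.xml
$ ./gradlew --write-verification-metadata sha256 help --dry-run
$ diff /tmp/verification-metadata.backup.xml gradle/verification-metadata.dryrun.xml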
Refreshing missing keys
Gradle caches missing keys for 24 hours, meaning it will not attempt to re-download the missing keys for 24 hours after failing.
If you want to retry immediately, you can run with the --refresh-keys CLI flag:
./gradlew build --refresh-keys
See the section on adding keys to the ASCII-armored keyring above if Gradle keeps failing to download them.
The symbol ] can be used instead of ( for an exclusive lower bound, and [ instead of ) for an exclusive upper bound, e.g. ]1.0, 2.0[.
An upper bound exclude acts as a prefix exclude.
This means that [1.0, 2.0[ will also exclude all versions starting with 2.0 that are smaller than 2.0.
For example, versions like 2.0-dev1 or 2.0-SNAPSHOT are not included in the range.
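For example, a sketch of such a range declaration (org.example:lib is a hypothetical module):
dependencies {
    // ]1.0, 2.0[ excludes both bounds; the upper bound also excludes
    // prefix matches such as 2.0-dev1 and 2.0-SNAPSHOT
    implementation("org.example:lib:]1.0, 2.0[")
}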
Determine which version is 'newest' when performing conflict resolution (watch out though, conflict resolution uses "base versions").
Any part that contains both digits and letters is split into separate parts for each: 1a1 == 1.a.1
Only the parts of a version are compared. The actual separator characters are not significant: 1.a.1 == 1-a+1 == 1.a-1 == 1a1 (watch out though, in the context of conflict resolution there are exceptions to this rule).
If both parts are non-numeric, they are compared alphabetically, in a case-sensitive manner: 1.A < 1.B < 1.a < 1.b
A version with an extra numeric part is considered higher than a version without (even when it's zero): 1.1 < 1.1.0
A version with an extra non-numeric part is considered lower than a version without: 1.1.a < 1.1
dev is considered lower than any other non-numeric part: 1.0-dev < 1.0-ALPHA < 1.0-alpha < 1.0-rc.
The strings rc, snapshot, final, ga, release and sp are considered higher than any other string part (sorted in this order): 1.0-zeta < 1.0-rc < 1.0-snapshot < 1.0-final < 1.0-ga < 1.0-release < 1.0-sp < 1.0.
These special values are NOT case sensitive, as opposed to regular string parts, and they do not depend on the separator used around them: 1.0-RC-1 == 1.0.rc.1
Simple version declaration semantics
When you declare a version using the short-hand notation, for example:
Example 54. A simple declaration
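A minimal sketch, consistent with the version discussed below:
dependencies {
    implementation("org.slf4j:slf4j-api:1.7.15")
}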
Then the version is considered a required version, which means that it should minimally be 1.7.15 but can be upgraded by the engine (optimistic upgrade).
There is, however, a shorthand notation for strict versions, using the !! notation:
Example 55. Shorthand notation for strict dependencies
dependencies {
    // short-hand notation with !!
    implementation("org.slf4j:slf4j-api:1.7.15!!")
    // is equivalent to
    implementation("org.slf4j:slf4j-api") {
        version {
            strictly("1.7.15")
        }
    }
    // or...
    implementation("org.slf4j:slf4j-api:[1.7, 1.8[!!1.7.25")
    // is equivalent to
    implementation("org.slf4j:slf4j-api") {
        version {
            strictly("[1.7, 1.8[")
            prefer("1.7.25")
        }
    }
}

dependencies {
    // short-hand notation with !!
    implementation('org.slf4j:slf4j-api:1.7.15!!')
    // is equivalent to
    implementation('org.slf4j:slf4j-api') {
        version {
            strictly '1.7.15'
        }
    }
    // or...
    implementation('org.slf4j:slf4j-api:[1.7, 1.8[!!1.7.25')
    // is equivalent to
    implementation('org.slf4j:slf4j-api') {
        version {
            strictly '[1.7, 1.8['
            prefer '1.7.25'
        }
    }
}
A strict version cannot be upgraded and overrides whatever transitive dependencies originating from this dependency provide.
It is recommended to use ranges for strict versions.
The notation [1.7, 1.8[!!1.7.25 above is equivalent to:
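strictly("[1.7, 1.8[")
prefer("1.7.25")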
Declaring a dependency without version
A recommended practice for larger projects is to declare dependencies without versions and use dependency constraints for version declaration.
The advantage is that dependency constraints allow you to manage versions of all dependencies, including transitive ones, in one place.
Example 56. Declaring a dependency without version
dependencies {
    implementation("org.springframework:spring-web")
}

dependencies {
    constraints {
        implementation("org.springframework:spring-web:5.0.2.RELEASE")
    }
}
Gradle supports a rich model for declaring versions, which allows combining different levels of version information.
The terms and their meaning are explained below, from the strongest to the weakest:
strictly
Any version not matched by this version notation will be excluded.
This is the strongest version declaration.
On a declared dependency, a strictly can downgrade a version.
When on a transitive dependency, it will cause dependency resolution to fail if no version acceptable by this clause can be selected.
See overriding dependency version for details.
This term supports dynamic versions.
When defined, this overrides any previous require declaration and clears previous reject.
require
Implies that the selected version cannot be lower than what require accepts but could be higher through conflict resolution, even if the higher version has an exclusive upper bound.
This is what a direct dependency translates to.
This term supports dynamic versions.
When defined, this overrides any previous strictly declaration and clears previous reject.
prefer
This is a very soft version declaration.
It applies only if there is no stronger non-dynamic opinion on a version for the module.
This term does not support dynamic versions.
Definition can complement strictly or require.
When defined, this overrides any previous prefer declaration and clears previous reject.
There is also an additional term outside of the level hierarchy:
reject
Declares that specific version(s) are not accepted for the module.
This will cause dependency resolution to fail if the only versions selectable are also rejected.
This term supports dynamic versions.
The following table illustrates a number of use cases and how to combine the different terms for rich version declaration:
Table 7. Rich version use cases
Which version(s) of this dependency are acceptable? | strictly | require | prefer | rejects | Selection result
Tested with 1.5; believe all future versions should work | | 1.5 | | | Any version starting from 1.5, equivalent of org:foo:1.5. An upgrade to 2.4 is accepted.
Tested with 1.5, soft constraint upgrades according to semantic versioning 🔒 | | [1.0, 2.0[ | 1.5 | | Any version between 1.0 and 2.0, 1.5 if nobody else cares. An upgrade to 2.4 is accepted.
Tested with 1.5, but follows semantic versioning 🔒 | [1.0, 2.0[ | | 1.5 | | Any version between 1.0 and 2.0 (exclusive), 1.5 if nobody else cares. Overwrites versions from transitive dependencies.
Same as above, with 1.4 known broken 🔒 | [1.0, 2.0[ | | 1.5 | 1.4 | Any version between 1.0 and 2.0 (exclusive) except for 1.4, 1.5 if nobody else cares. Overwrites versions from transitive dependencies.
No opinion, works with 1.5 | | | 1.5 | | 1.5 if no other opinion, any otherwise.
No opinion, prefer latest release 🔒 | | | latest.release | | The latest release at build time.
On the edge, latest release, no downgrade 🔒 | | latest.release | | | The latest release at build time.
No other version than 1.5 | 1.5 | | | | 1.5, or failure if another strict or higher require constraint disagrees. Overwrites versions from transitive dependencies.
1.5 or a patch version of it exclusively 🔒 | [1.5,1.6[ | | | | Latest 1.5.x patch release, or failure if another strict or higher require constraint disagrees. Overwrites versions from transitive dependencies.
Lines annotated with a lock (🔒) indicate that leveraging dependency locking makes sense in this context.
Another concept that relates to rich version declaration is the ability to publish resolved versions instead of declared ones.
Using strictly, especially for a library, must be a well thought-out process as it has an impact on downstream consumers.
At the same time, used correctly, it will help consumers understand what combinations of libraries do not work together in their context.
See overriding dependency version for more information.
Rich version information will be preserved in the Gradle Module Metadata format.
However, conversion to Ivy or Maven metadata formats will be lossy.
The highest level will be published, that is strictly or require over prefer.
In addition, any reject will be ignored.
Rich version declaration is accessed through the version DSL method on a dependency or constraint declaration, which gives access to MutableVersionConstraint.
Example 57. Rich version declaration
add("implementation", "org.springframework:spring-core") {
version {
require("4.2.9.RELEASE")
reject("4.3.16.RELEASE")
There are many situations when you want to use the latest version of a particular module dependency, or the latest in a range of versions.
This can be a requirement during development, or you may be developing a library that is designed to work with a range of dependency versions.
You can easily depend on these constantly changing dependencies by using a dynamic version.
A dynamic version can be either a version range (e.g. 2.+) or a placeholder for the latest version available, e.g. latest.integration.
Alternatively, the module you request can change over time even for the same version, a so-called changing version.
An example of this type of changing module is a Maven SNAPSHOT
module, which always points at the latest artifact published.
In other words, a standard Maven snapshot is a module that is continually evolving, it is a "changing module".
Declaring a dynamic version
Projects might adopt a more aggressive approach for consuming dependencies to modules.
For example you might want to always integrate the latest version of a dependency to consume cutting edge features at any given time.
A dynamic version allows for resolving the latest version or the latest version of a version range for a given module.
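For example, a minimal sketch using a prefix range:
dependencies {
    // resolves the latest available 5.x version of spring-web at build time
    implementation("org.springframework:spring-web:5.+")
}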
By default, Gradle caches dynamic versions of dependencies for 24 hours. Within this time frame, Gradle does not try to resolve newer versions from the declared repositories.
The threshold can be configured as needed for example if you want to resolve new versions earlier.
Declaring a changing version
A team might decide to implement a series of features before releasing a new version of the application or library. A common strategy to allow consumers to integrate an unfinished version of their artifacts early and often is to release a module with a so-called changing version.
A changing version indicates that the feature set is still under active development and hasn’t released a stable version for general availability yet.
In Maven repositories, changing versions are commonly referred to as snapshot versions.
Snapshot versions contain the suffix -SNAPSHOT.
The following example demonstrates how to declare a snapshot version on the Spring dependency.
Example 59. Declaring a dependency with a changing version
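A minimal sketch (the exact snapshot version is illustrative):
dependencies {
    implementation("org.springframework:spring-web:5.0.3.BUILD-SNAPSHOT")
}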
By default, Gradle caches changing versions of dependencies for 24 hours.
Within this time frame, Gradle does not try to resolve newer versions from the declared repositories. The threshold can be configured as needed for example if you want to resolve new snapshot versions earlier.
Gradle is flexible enough to treat any version as a changing version, e.g. if you wanted to model snapshot behavior for an Ivy module.
All you need to do is to set the property ExternalModuleDependency.setChanging(boolean) to true.
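A sketch in the Kotlin DSL (org.example:lib is a hypothetical module):
dependencies {
    implementation("org.example:lib:1.1") {
        isChanging = true
    }
}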
Controlling dynamic version caching
By default, Gradle caches dynamic versions and changing modules for 24 hours. During that time frame Gradle does not contact any of the declared, remote repositories for new versions. If you want Gradle to check the remote repository more frequently or with every execution of your build, then you will need to change the time to live (TTL) threshold.
Controlling dependency caching programmatically
You can fine-tune certain aspects of caching programmatically using the ResolutionStrategy for a configuration. The programmatic approach is useful if you would like to change the settings permanently.
By default, Gradle caches dynamic versions for 24 hours. To change how long Gradle will cache the resolved version for a dynamic version, use:
Example 60. Dynamic version cache control
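A minimal sketch (the 10-minute threshold is illustrative):
configurations.all {
    // check for newer matching versions after 10 minutes instead of 24 hours
    resolutionStrategy.cacheDynamicVersionsFor(10, "minutes")
}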
By default, Gradle caches changing modules for 24 hours. To change how long Gradle will cache the meta-data and artifacts for a changing module, use:
Example 61. Changing module cache control
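A minimal sketch (the 4-hour threshold is illustrative):
configurations.all {
    // re-check changing modules (e.g. snapshots) after 4 hours instead of 24
    resolutionStrategy.cacheChangingModulesFor(4, "hours")
}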
Controlling dependency caching from the command line
Avoiding network access with offline mode
The --offline command line switch tells Gradle to always use dependency modules from the cache, regardless of whether they are due to be checked again. When running with offline, Gradle will never attempt to access the network to perform dependency resolution. If required modules are not present in the dependency cache, build execution will fail.
Refreshing dependencies
You can control the behavior of dependency caching for a distinct build invocation from the command line.
Command line options are helpful for making a selective, ad-hoc choice for a single execution of the build.
At times, the Gradle Dependency Cache can become out of sync with the actual state of the configured repositories.
Perhaps a repository was initially misconfigured, or perhaps a "non-changing" module was published incorrectly.
To refresh all dependencies in the dependency cache, use the --refresh-dependencies option on the command line.
The --refresh-dependencies option tells Gradle to ignore all cached entries for resolved modules and artifacts.
A fresh resolve will be performed against all configured repositories, with dynamic versions recalculated, modules refreshed, and artifacts downloaded.
However, where possible Gradle will check if the previously downloaded artifacts are valid before downloading again.
This is done by comparing published SHA1 values in the repository with the SHA1 values for existing downloaded artifacts.
it will perform HTTP HEAD requests on metadata files but will not re-download them if they are identical
it will perform HTTP HEAD requests on artifact files but will not re-download them if they are identical
In other words, refreshing dependencies only has an impact if you actually use dynamic dependencies or if you have changing dependencies that you were not aware of (in which case it is your responsibility to declare them correctly to Gradle as changing dependencies).
It's a common misconception to think that using --refresh-dependencies will force download of dependencies.
This is not the case: Gradle will only perform what is strictly required to refresh the dynamic dependencies.
This may involve downloading new listing or metadata files, or even artifacts, but if nothing changed, the impact is minimal.
Using component selection rules
Component selection rules may influence which component instance should be selected when multiple versions are available that match a version selector.
Rules are applied against every available version and allow the version to be explicitly rejected by rule.
This allows Gradle to ignore any component instance that does not satisfy conditions set by the rule.
Examples include:
For a dynamic version like 1.+, certain versions may be explicitly rejected from selection.
For a static version like 1.4, an instance may be rejected based on extra component metadata such as the Ivy branch attribute, allowing an instance from a subsequent repository to be used.
Rules are configured via the ComponentSelectionRules object.
Each rule configured will be called with a ComponentSelection object as an argument which contains information about the candidate version being considered.
Calling ComponentSelection.reject(java.lang.String) causes the given candidate version to be explicitly rejected, in which case the candidate will not be considered for the selector.
The following example shows a rule that disallows a particular version of a module but allows the dynamic version to choose the next best candidate.
Example 62. Component selection rule
configurations {
    create("rejectConfig") {
        resolutionStrategy {
            componentSelection {
                // Accept the highest version matching the requested version that isn't '1.5'
                all {
                    if (candidate.group == "org.sample" && candidate.module == "api" && candidate.version == "1.5") {
                        reject("version 1.5 is broken for 'org.sample:api'")
                    }
                }
            }
        }
    }
}

dependencies {
    "rejectConfig"("org.sample:api:1.+")
}

configurations {
    rejectConfig {
        resolutionStrategy {
            componentSelection {
                // Accept the highest version matching the requested version that isn't '1.5'
                all { ComponentSelection selection ->
                    if (selection.candidate.group == 'org.sample' && selection.candidate.module == 'api' && selection.candidate.version == '1.5') {
                        selection.reject("version 1.5 is broken for 'org.sample:api'")
                    }
                }
            }
        }
    }
}

dependencies {
    rejectConfig 'org.sample:api:1.+'
}
Note that version selection is applied starting with the highest version first.
The version selected will be the first version found that all component selection rules accept.
A version is considered accepted if no rule explicitly rejects it.
Similarly, rules can be targeted at specific modules.
Modules must be specified in the form of group:module.
Example 63. Component selection rule with module target
withModule("org.sample:api") {
if (candidate.version == "1.5") {
reject("version 1.5 is broken for 'org.sample:api'")
resolutionStrategy {
componentSelection {
withModule("org.sample:api") { ComponentSelection selection ->
if (selection.candidate.version == "1.5") {
selection.reject("version 1.5 is broken for 'org.sample:api'")
Component selection rules can also consider component metadata when selecting a version.
Possible additional metadata that can be considered are ComponentMetadata and IvyModuleDescriptor.
Note that this extra information may not always be available and thus should be checked for null values.
Example 64. Component selection rule with metadata
resolutionStrategy {
    componentSelection {
        // Reject any versions with a status of 'experimental'
        all {
            if (candidate.group == "org.sample" && metadata?.status == "experimental") {
                reject("don't use experimental candidates from 'org.sample'")
            }
        }
        // Accept the highest version with either a "release" branch or a status of 'milestone'
        withModule("org.sample:api") {
            if (getDescriptor(IvyModuleDescriptor::class)?.branch != "release" && metadata?.status != "milestone") {
                reject("'org.sample:api' must be a release branch or have milestone status")
            }
        }
    }
}

resolutionStrategy {
    componentSelection {
        // Reject any versions with a status of 'experimental'
        all { ComponentSelection selection ->
            if (selection.candidate.group == 'org.sample' && selection.metadata?.status == 'experimental') {
                selection.reject("don't use experimental candidates from 'org.sample'")
            }
        }
        // Accept the highest version with either a "release" branch or a status of 'milestone'
        withModule('org.sample:api') { ComponentSelection selection ->
            if (selection.getDescriptor(IvyModuleDescriptor)?.branch != "release" && selection.metadata?.status != 'milestone') {
                selection.reject("'org.sample:api' must be a release branch or have milestone status")
            }
        }
    }
}
Use of dynamic dependency versions (e.g. 1.+ or [1.0,2.0)) makes builds non-deterministic.
This causes builds to break without any obvious change, and worse, can be caused by a transitive dependency that the build author has no control over.
To achieve reproducible builds, it is necessary to lock versions of dependencies and transitive dependencies such that a build with the same inputs will always resolve the same module versions.
This is called dependency locking.
It enables, amongst others, the following scenarios:
Companies dealing with multiple repositories no longer need to rely on -SNAPSHOT or changing dependencies, which sometimes result in cascading failures when a dependency introduces a bug or incompatibility.
Now dependencies can be declared against a major or minor version range, making it possible to test with the latest versions on CI while leveraging locking for stable developer builds.
Teams that want to always use the latest of their dependencies can use dynamic versions, locking their dependencies only for releases.
The release tag will contain the lock states, allowing that build to be fully reproducible when bug fixes need to be developed.
Combined with publishing resolved versions, you can also replace the declared dynamic version part at publication time.
Consumers will instead see the versions that your release resolved.
Locking is enabled per dependency configuration.
Once enabled, you must create an initial lock state.
It will cause Gradle to verify that resolution results do not change, resulting in the same selected dependencies even if newer versions are produced.
Modifications to your build that would impact the resolved set of dependencies will cause it to fail.
This makes sure that changes, either in published dependencies or build definitions, do not alter resolution without adapting the lock state.
Dependency locking makes sense only with dynamic versions.
It will have no impact on changing versions (like -SNAPSHOT) whose coordinates remain the same, though the content may change.
Gradle will even emit a warning when persisting lock state and changing dependencies are present in the resolution result.
Enabling locking on configurations
Locking of a configuration happens through the ResolutionStrategy:
Example 65. Locking a specific configuration
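A minimal sketch, activating locking on compileClasspath:
configurations.compileClasspath {
    resolutionStrategy.activateDependencyLocking()
}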
You can also disable locking on a specific configuration.
This can be useful if a plugin configured locking on all configurations but you happen to add one that should not be locked.
Example 67. Unlocking a specific configuration
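A minimal sketch, deactivating locking on compileClasspath:
configurations.compileClasspath {
    resolutionStrategy.deactivateDependencyLocking()
}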
Locking buildscript classpath configuration
If you apply plugins to your build, you may want to leverage dependency locking there as well.
In order to lock the classpath configuration used for script plugins, do the following:
Example 68. Locking buildscript classpath configuration
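A minimal sketch:
buildscript {
    configurations.classpath {
        resolutionStrategy.activateDependencyLocking()
    }
}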
Generating and updating dependency locks
In order to generate or update lock state, you specify the --write-locks command line argument in addition to the normal tasks that would trigger configurations to be resolved.
This will cause the creation of lock state for each resolved configuration in that build execution.
Note that if lock state existed previously, it is overwritten.
Lock all configurations in one build execution
When locking multiple configurations, you may want to lock them all at once, during a single build execution.
For this, you have two options:
Run gradle dependencies --write-locks.
This will effectively lock all resolvable configurations that have locking enabled.
Note that in a multi-project setup, dependencies is executed on only one project, the root one in this case.
Declare a custom task that resolves all configurations. This does not work for Android projects.
tasks.register("resolveAndLockAll") {
notCompatibleWithConfigurationCache("Filters configurations at execution time")
doFirst {
require(gradle.startParameter.isWriteDependencyLocks) { "$path must be run from the command line with the `--write-locks` flag" }
doLast {
configurations.filter {
// Add any custom filtering on the configurations to be resolved
it.isCanBeResolved
}.forEach { it.resolve() }
tasks.register('resolveAndLockAll') {
notCompatibleWithConfigurationCache("Filters configurations at execution time")
doFirst {
assert gradle.startParameter.writeDependencyLocks : "$path must be run from the command line with the `--write-locks` flag"
doLast {
configurations.findAll {
// Add any custom filtering on the configurations to be resolved
it.canBeResolved
}.each { it.resolve() }
Lock state location and format
Lock state will be preserved in a file located at the root of the project or subproject directory.
Each file is named gradle.lockfile.
The one exception to this rule is the lock file for the buildscript itself, which is named buildscript-gradle.lockfile.
The lockfile will have the following content:
gradle.lockfile
# This is a Gradle generated file for dependency locking.
# Manual edits can break the build and are not advised.
# This file is expected to be part of source control.
org.springframework:spring-beans:5.0.5.RELEASE=compileClasspath, runtimeClasspath
org.springframework:spring-core:5.0.5.RELEASE=compileClasspath, runtimeClasspath
org.springframework:spring-jcl:5.0.5.RELEASE=compileClasspath, runtimeClasspath
empty=annotationProcessor
Configuring the per project lock file name and location
When using the single lock file per project, you can configure its name and location.
The main reason for providing this is to enable having a file name that is determined by some project properties, effectively allowing a single project to store different lock state for different execution contexts.
One trivial example in the JVM ecosystem is the Scala version that is often found in artifact coordinates.
Example 71. Changing the lock file name
val scalaVersion = "2.12"
dependencyLocking {
    lockFile = file("$projectDir/locking/gradle-${scalaVersion}.lockfile")
}

def scalaVersion = "2.12"
dependencyLocking {
    lockFile = file("$projectDir/locking/gradle-${scalaVersion}.lockfile")
}
Running a build with lock state present
The moment a build needs to resolve a configuration that has locking enabled and it finds a matching lock state, it will use it to verify that the given configuration still resolves the same versions.
A successful build indicates that the same dependencies are used as stored in the lock state, regardless of whether new versions matching the dynamic selector have been produced.
The complete validation is as follows:
Existing entries in the lock state must be matched in the build; a version mismatch or a missing resolved module causes the build to fail.
The resolution result must not contain extra modules that are absent from the lock state.
Strict mode
In this mode, in addition to the validations above, dependency locking will fail if a configuration marked as locked does not have lock state associated with it.
Lenient mode
In this mode, dependency locking will still pin dynamic versions but otherwise changes to the dependency resolution are no longer errors.
Selectively updating lock state entries
In order to update only specific modules of a configuration, you can use the --update-locks command line flag.
It takes a comma (,) separated list of module notations.
In this mode, the existing lock state is still used as input to resolution, filtering out the modules targeted by the update.
❯ gradle classes --update-locks org.apache.commons:commons-lang3,org.slf4j:slf4j-api
Wildcards, indicated with *, can be used in the group or module name. They can be the only character or appear at the end of the group or module respectively.
The following wildcard notation examples are valid:
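For instance (the coordinates are illustrative):
org.apache.commons:* lets all modules in group org.apache.commons update
*:commons-lang3 lets all modules named commons-lang3 update
org.apache.*:commons-lang3 lets modules named commons-lang3 in any group starting with org.apache update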
Make sure that the configuration for which you no longer want locking is not configured with locking.
The next time you update and persist the lock state, Gradle will automatically clean up all stale entries from it.
Ignoring specific dependencies from the lock state
Dependency locking can be used in cases where reproducibility is not the main goal.
As a build author, you may want to have different frequency of dependency version updates, based on their origin for example.
In that case, it might be convenient to ignore some dependencies because you always want to use the latest version for those.
An example is the internal dependencies in an organization which should always use the latest version as opposed to third party dependencies which have a different upgrade cycle.
The notation is a <group>:<name> dependency notation, where * can be used as a trailing wildcard.
See the description on updating lock files for more details.
Note that the value *:* is not accepted as it is equivalent to disabling locking.
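A sketch of the corresponding configuration in the dependencyLocking project extension (com.example:* is an illustrative notation):
dependencyLocking {
    ignoredDependencies.add("com.example:*")
}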
Ignoring dependencies will have the following effects:
An ignored dependency applies to all locked configurations. The setting is project scoped.
Ignoring a dependency does not mean lock state ignores its transitive dependencies.
There is no validation that an ignored dependency is present in any configuration resolution.
If the dependency is present in lock state, loading it will filter out the dependency.
If the dependency is present in the resolution result, it will be ignored when validating that resolution matches the lock state.
Finally, if the dependency is present in the resolution result and the lock state is persisted, it will be absent from the written lock state.
direct dependencies are directly required by the component.
A direct dependency is also referred to as a first level dependency.
For example, if your project source code requires Guava, Guava should be declared as a direct dependency.
transitive dependencies are dependencies that your component needs, but only because another dependency needs them.
It’s quite common that issues with dependency management are about transitive dependencies.
Often developers incorrectly fix transitive dependency issues by adding direct dependencies.
To avoid this, Gradle provides the concept of dependency constraints.
Adding constraints on transitive dependencies
Dependency constraints allow you to define the version or the version range of both dependencies declared in the build script and transitive dependencies.
It is the preferred method to express constraints that should be applied to all dependencies of a configuration.
When Gradle attempts to resolve a dependency to a module version, all dependency declarations with version, all transitive dependencies and all dependency constraints for that module are taken into consideration.
The highest version that matches all conditions is selected.
If no such version is found, Gradle fails with an error showing the conflicting declarations.
If this happens you can adjust your dependencies or dependency constraints declarations, or make other adjustments to the transitive dependencies if needed.
Similar to dependency declarations, dependency constraint declarations are scoped by configurations and can therefore be selectively defined for parts of a build.
If a dependency constraint influenced the resolution result, any type of dependency resolve rules may still be applied afterwards.
Example 74. Define dependency constraints
implementation("org.apache.httpcomponents:httpclient")
constraints {
implementation("org.apache.httpcomponents:httpclient:4.5.3") {
because("previous versions have a bug impacting this application")
implementation("commons-codec:commons-codec:1.11") {
because("version 1.9 pulled from httpclient has bugs affecting this application")
implementation 'org.apache.httpcomponents:httpclient'
constraints {
implementation('org.apache.httpcomponents:httpclient:4.5.3') {
because 'previous versions have a bug impacting this application'
implementation('commons-codec:commons-codec:1.11') {
because 'version 1.9 pulled from httpclient has bugs affecting this application'
In the example, all versions are omitted from the dependency declaration.
Instead, the versions are defined in the constraints block.
The version definition for commons-codec:1.11 is only taken into account if commons-codec is brought in as a transitive dependency, since commons-codec is not defined as a dependency in the project.
Otherwise, the constraint has no effect.
Dependency constraints can also define a rich version constraint and support strict versions to enforce a version even if it contradicts with the version defined by a transitive dependency (e.g. if the version needs to be downgraded).
Overriding transitive dependency versions
Gradle resolves any dependency version conflicts by selecting the latest version found in the dependency graph.
Some projects might need to divert from the default behavior and enforce an earlier version of a dependency e.g. if the source code of the project depends on an older API of a dependency than some of the external libraries.
Forcing a version of a dependency requires a conscious decision.
Changing the version of a transitive dependency might lead to runtime errors if external libraries do not properly function without them.
Consider upgrading your source code to use a newer version of the library as an alternative approach.
In all situations, this is best expressed by saying that your code strictly depends on a version of a transitive dependency.
Using strict versions, you will effectively depend on the version you declare, even if a transitive dependency says otherwise.
strict dependencies don’t suffer an ordering problem: they are applied transitively to the subgraph, and it doesn’t matter in which order dependencies are declared.
conflicting strict dependencies will trigger a build failure that you have to resolve
strict dependencies can be used with rich versions, meaning that it’s better to express the requirement in terms of a strict range combined with a single preferred version.
Let’s say a project uses the HttpClient library for performing HTTP calls. HttpClient pulls in Commons Codec as transitive dependency with version 1.10.
However, the production source code of the project requires an API from Commons Codec 1.9 which is not available in 1.10 anymore.
A dependency version can be enforced by declaring it as strict in the build script:
Example 75. Setting a strict version
dependencies {
    implementation("org.apache.httpcomponents:httpclient:4.5.4")
    implementation("commons-codec:commons-codec") {
        version {
            strictly("1.9")
        }
    }
}

dependencies {
    implementation 'org.apache.httpcomponents:httpclient:4.5.4'
    implementation('commons-codec:commons-codec') {
        version {
            strictly '1.9'
        }
    }
}
Consequences of using strict versions
Using a strict version must be carefully considered, in particular by library authors.
For the producer, a strict version effectively behaves like a force: the version declaration takes precedence over whatever is found in the transitive dependency graph.
In particular, a strict version will override any other strict version on the same module found transitively.
However, for consumers, strict versions are still considered globally during graph resolution and may trigger an error if the consumer disagrees.
For example, imagine that your project B strictly depends on C:1.0.
Now, a consumer, A, depends on both B and C:1.1.
Then this would trigger a resolution error because A says it needs C:1.1 but B, within its subgraph, strictly needs 1.0.
This means that if you choose a single version in a strict constraint, then the version can no longer be upgraded, unless the consumer also sets a strict version constraint on the same module.
In the example above, A would have to say it strictly depends on 1.1.
For this reason, a good practice is that if you use strict versions, you should express them in terms of ranges and a preferred version within this range.
For example, B might say, instead of strictly 1.0, that it strictly depends on the [1.0, 2.0[ range, but prefers 1.0.
Then if a consumer chooses 1.1 (or any other version in the range), the build will no longer fail (constraints are resolved).
Forced dependencies vs strict dependencies
If the project requires a specific version of a dependency at the configuration-level this can be achieved by calling the method ResolutionStrategy.force(java.lang.Object[]).
configurations {
    "compileClasspath" {
        resolutionStrategy.force("commons-codec:commons-codec:1.9")
    }
}
dependencies {
    implementation("org.apache.httpcomponents:httpclient:4.5.4")
}

configurations {
    compileClasspath {
        resolutionStrategy.force 'commons-codec:commons-codec:1.9'
    }
}
dependencies {
    implementation 'org.apache.httpcomponents:httpclient:4.5.4'
}
Similar to forcing a version of a dependency, excluding a dependency completely requires a conscious decision.
Excluding a transitive dependency might lead to runtime errors if external libraries do not properly function without them.
If you use excludes, make sure that you do not utilise any code path requiring the excluded dependency, and verify this with sufficient test coverage.
Transitive dependencies can be excluded on the level of a declared dependency.
Exclusions are spelled out as a key/value pair via the attributes group and/or module as shown in the example below.
For more information, refer to ModuleDependency.exclude(java.util.Map).
dependencies {
    implementation("commons-beanutils:commons-beanutils:1.9.4") {
        exclude(group = "commons-collections", module = "commons-collections")
    }
}

dependencies {
    implementation('commons-beanutils:commons-beanutils:1.9.4') {
        exclude group: 'commons-collections', module: 'commons-collections'
    }
}
In this example, we add a dependency on commons-beanutils but exclude the transitive dependency commons-collections.
In our code, shown below, we only use one method from the beanutils library, PropertyUtils.setSimpleProperty().
Using this method for existing setters does not require any functionality from commons-collections, as we verified through test coverage.
Example 78. Using a utility from the beanutils library
public class Main {
    public static void main(String[] args) throws Exception {
        Object person = new Person();
        PropertyUtils.setSimpleProperty(person, "name", "Bart Simpson");
        PropertyUtils.setSimpleProperty(person, "age", 38);
    }
}
Effectively, we are expressing that we only use a subset of the library, which does not require the commons-collections library.
This can be seen as implicitly defining a feature variant that has not been explicitly declared by commons-beanutils itself.
However, the risk of breaking an untested code path is increased by doing this.
For example, here we use the setSimpleProperty() method to modify properties defined by setters in the Person class, which works fine.
If we were to attempt to set a property not existing on the class, we should get an error like Unknown property on class Person.
However, because the error handling path uses a class from commons-collections, the error we now get is NoClassDefFoundError: org/apache/commons/collections/FastHashMap.
So if our code would be more dynamic, and we would forget to cover the error case sufficiently, consumers of our library might be confronted with unexpected errors.
This is only an example to illustrate potential pitfalls.
In practice, larger libraries or frameworks can bring in a huge set of dependencies.
If those libraries fail to declare features separately and can only be consumed in a "all or nothing" fashion, excludes can be a valid method to reduce the library to the feature set actually required.
On the upside, Gradle’s exclude handling is, in contrast to Maven, taking the whole dependency graph into account.
So if there are multiple dependencies on a library, excludes are only exercised if all dependencies agree on them.
For example, if we add opencsv as another dependency to our project above, which also depends on commons-beanutils, commons-collections is no longer excluded as opencsv itself does not exclude it.
dependencies {
    implementation("commons-beanutils:commons-beanutils:1.9.4") {
        exclude(group = "commons-collections", module = "commons-collections")
    }
    implementation("com.opencsv:opencsv:4.6") // depends on 'commons-beanutils' without exclude and brings back 'commons-collections'
}

dependencies {
    implementation('commons-beanutils:commons-beanutils:1.9.4') {
        exclude group: 'commons-collections', module: 'commons-collections'
    }
    implementation 'com.opencsv:opencsv:4.6' // depends on 'commons-beanutils' without exclude and brings back 'commons-collections'
}
If we still want to have commons-collections excluded, because our combined usage of commons-beanutils and opencsv does not need it, we need to exclude it from the transitive dependencies of opencsv as well.
dependencies {
    implementation("commons-beanutils:commons-beanutils:1.9.4") {
        exclude(group = "commons-collections", module = "commons-collections")
    }
    implementation("com.opencsv:opencsv:4.6") {
        exclude(group = "commons-collections", module = "commons-collections")
    }
}

dependencies {
    implementation('commons-beanutils:commons-beanutils:1.9.4') {
        exclude group: 'commons-collections', module: 'commons-collections'
    }
    implementation('com.opencsv:opencsv:4.6') {
        exclude group: 'commons-collections', module: 'commons-collections'
    }
}
Historically, excludes were also used as a band-aid to fix other issues not supported by some dependency management systems.
Gradle, however, offers a variety of features that might be better suited to solve a certain use case.
You may consider to look into the following features:
Update or downgrade dependency versions:
If versions of dependencies clash, it is usually better to adjust the version through a dependency constraint, instead of attempting to exclude the dependency with the undesired version.
Component Metadata Rules:
If a library’s metadata is clearly wrong, for example if it includes a compile time dependency which is never needed at compile time, a possible solution is to remove the dependency in a component metadata rule.
By this, you tell Gradle that a dependency between two modules is never needed — i.e. the metadata was wrong — and therefore should never be considered.
If you are developing a library, you have to be aware that this information is not published, and so sometimes an exclude can be the better alternative.
Resolving mutually exclusive dependency conflicts:
Another situation that you often see solved by excludes is that two dependencies cannot be used together because they represent two implementations of the same thing (the same capability).
Some popular examples are clashing logging API implementations (like log4j and log4j-over-slf4j) or modules that have different coordinates in different versions (like com.google.collections and guava).
In these cases, if this information is not known to Gradle, it is recommended to add the missing capability information via component metadata rules as described in the declaring component capabilities section.
Even if you are developing a library, and your consumers will have to deal with resolving the conflict again, it is often the right solution to leave the decision to the final consumers of libraries.
I.e. you as a library author should not have to decide which logging implementation your consumers use in the end.
Using a version catalog
A version catalog is a list of dependencies, represented as dependency coordinates, that a user can pick from when declaring dependencies in a build script.
For example, instead of declaring a dependency using a string notation, the dependency coordinates can be picked from a version catalog:
Example 81. Using a library declared in a version catalog
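A minimal sketch, assuming the libs catalog declared below:
dependencies {
    implementation(libs.groovy.core)
}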
For each catalog, Gradle generates type-safe accessors so that you can easily add dependencies with autocompletion in the IDE.
Each catalog is visible to all projects of a build. It is a central place to declare a version of a dependency and to make sure that a change to that version applies to every subproject.
Catalogs can declare dependency bundles, which are "groups of dependencies" that are commonly used together.
Catalogs can separate the group and name of a dependency from its actual version and use version references instead, making it possible to share a version declaration between multiple dependencies.
A version catalog doesn't enforce the version of a dependency: like a regular dependency notation, it declares the requested version or a rich version.
That version is not necessarily the version that is selected during conflict resolution.
Declaring a version catalog
Version catalogs can be declared in the settings.gradle(.kts) file.
In the example above, in order to make groovy available via the libs catalog, we need to associate an alias with GAV (group, artifact, version) coordinates:
Example 82. Declaring a version catalog
dependencyResolutionManagement {
    versionCatalogs {
        create("libs") {
            library("groovy-core", "org.codehaus.groovy:groovy:3.0.5")
            library("groovy-json", "org.codehaus.groovy:groovy-json:3.0.5")
            library("groovy-nio", "org.codehaus.groovy:groovy-nio:3.0.5")
            library("commons-lang3", "org.apache.commons", "commons-lang3").version {
                strictly("[3.8, 4.0[")
                prefer("3.9")
            }
        }
    }
}

dependencyResolutionManagement {
    versionCatalogs {
        libs {
            library('groovy-core', 'org.codehaus.groovy:groovy:3.0.5')
            library('groovy-json', 'org.codehaus.groovy:groovy-json:3.0.5')
            library('groovy-nio', 'org.codehaus.groovy:groovy-nio:3.0.5')
            library('commons-lang3', 'org.apache.commons', 'commons-lang3').version {
                strictly '[3.8, 4.0['
                prefer '3.9'
            }
        }
    }
}
Aliases and their mapping to type safe accessors
Aliases must consist of a series of identifiers separated by a dash (-, recommended), an underscore (_) or a dot (.).
Identifiers themselves must consist of ASCII characters, preferably lowercase, optionally followed by numbers.
For example: guava, groovy-core, commons-lang3 and androidx.awesome.lib are all valid aliases.
Type-safe accessors are then generated for each subgroup.
For example, given the aliases guava, groovy-core, groovy-xml, groovy-json and androidx.awesome.lib in a version catalog named libs, we would generate the libs.guava, libs.groovy.core, libs.groovy.xml, libs.groovy.json and libs.androidx.awesome.lib type-safe accessors.
In case you want to avoid the generation of a subgroup accessor, we recommend relying on case to differentiate.
For example, the aliases groovyCore, groovyJson and groovyXml would be mapped to the libs.groovyCore, libs.groovyJson and libs.groovyXml accessors respectively.
When declaring aliases, it’s worth noting that any of the -, _ and . characters can be used as separators, but the generated catalog will have all of them normalized to .: for example, foo-bar as an alias is converted to foo.bar automatically.
Dependencies with same version numbers
In the first example in declaring a version catalog, we can see that we declare 3 aliases for various components of the groovy library and that all of them share the same version number.
Instead of repeating the same version number, we can declare a version and reference it:
Example 83. Declaring versions separately from libraries
version("groovy", "3.0.5")
version("checkstyle", "8.37")
library("groovy-core", "org.codehaus.groovy", "groovy").versionRef("groovy")
library("groovy-json", "org.codehaus.groovy", "groovy-json").versionRef("groovy")
library("groovy-nio", "org.codehaus.groovy", "groovy-nio").versionRef("groovy")
library("commons-lang3", "org.apache.commons", "commons-lang3").version {
strictly("[3.8, 4.0[")
prefer("3.9")
version('groovy', '3.0.5')
version('checkstyle', '8.37')
library('groovy-core', 'org.codehaus.groovy', 'groovy').versionRef('groovy')
library('groovy-json', 'org.codehaus.groovy', 'groovy-json').versionRef('groovy')
library('groovy-nio', 'org.codehaus.groovy', 'groovy-nio').versionRef('groovy')
library('commons-lang3', 'org.apache.commons', 'commons-lang3').version {
strictly '[3.8, 4.0['
prefer '3.9'
Versions declared separately are also available via type-safe accessors, making them usable for more use cases than dependency versions, in particular for tooling:
Example 84. Using a version declared in a version catalog
build.gradle.kts
checkstyle {
    // will use the version declared in the catalog
    toolVersion = libs.versions.checkstyle.get()
}
build.gradle
checkstyle {
    // will use the version declared in the catalog
    toolVersion = libs.versions.checkstyle.get()
}
If the alias of a declared version is also a prefix of some more specific alias, as in libs.versions.zinc and libs.versions.zinc.apiinfo, then the value of the more generic version is available via asProvider() on the type-safe accessor:
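A minimal sketch, assuming hypothetical zinc and zinc-apiinfo version aliases are declared in the libs catalog:
build.gradle.kts
// the shorter "zinc" alias would otherwise clash with the generated subgroup
// accessor, so its value is reached through asProvider()
val zincVersion = libs.versions.zinc.asProvider().get()   // alias "zinc"
val zincApiInfo = libs.versions.zinc.apiinfo.get()        // alias "zinc-apiinfo"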
Dependencies declared in a catalog are exposed to build scripts via an extension corresponding to their name.
In the example above, because the catalog declared in settings is named libs, the extension is available via the name libs in all build scripts of the current build.
Declaring dependencies using the following notation…
Example 86. Dependency notation correspondence
build.gradle.kts
dependencies {
    implementation(libs.groovy.core)
    implementation(libs.groovy.json)
    implementation(libs.groovy.nio)
}
build.gradle
dependencies {
    implementation libs.groovy.core
    implementation libs.groovy.json
    implementation libs.groovy.nio
}
…has exactly the same effect as this notation:
build.gradle.kts
dependencies {
    implementation("org.codehaus.groovy:groovy:3.0.5")
    implementation("org.codehaus.groovy:groovy-json:3.0.5")
    implementation("org.codehaus.groovy:groovy-nio:3.0.5")
}
build.gradle
dependencies {
    implementation 'org.codehaus.groovy:groovy:3.0.5'
    implementation 'org.codehaus.groovy:groovy-json:3.0.5'
    implementation 'org.codehaus.groovy:groovy-nio:3.0.5'
}
Dependency bundles
Because some dependencies are frequently used together in different projects, a version catalog offers the concept of a "dependency bundle".
A bundle is basically an alias for several dependencies.
For example, instead of declaring 3 individual dependencies like above, you could write:
Example 88. Using a dependency bundle
settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        create("libs") {
            version("groovy", "3.0.5")
            version("checkstyle", "8.37")
            library("groovy-core", "org.codehaus.groovy", "groovy").versionRef("groovy")
            library("groovy-json", "org.codehaus.groovy", "groovy-json").versionRef("groovy")
            library("groovy-nio", "org.codehaus.groovy", "groovy-nio").versionRef("groovy")
            library("commons-lang3", "org.apache.commons", "commons-lang3").version {
                strictly("[3.8, 4.0[")
                prefer("3.9")
            }
            bundle("groovy", listOf("groovy-core", "groovy-json", "groovy-nio"))
        }
    }
}
settings.gradle
dependencyResolutionManagement {
    versionCatalogs {
        libs {
            version('groovy', '3.0.5')
            version('checkstyle', '8.37')
            library('groovy-core', 'org.codehaus.groovy', 'groovy').versionRef('groovy')
            library('groovy-json', 'org.codehaus.groovy', 'groovy-json').versionRef('groovy')
            library('groovy-nio', 'org.codehaus.groovy', 'groovy-nio').versionRef('groovy')
            library('commons-lang3', 'org.apache.commons', 'commons-lang3').version {
                strictly '[3.8, 4.0['
                prefer '3.9'
            }
            bundle('groovy', ['groovy-core', 'groovy-json', 'groovy-nio'])
        }
    }
}
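The bundle can then be consumed from a build script through the generated libs.bundles accessor; a minimal sketch:
build.gradle.kts
dependencies {
    // equivalent to declaring groovy-core, groovy-json and groovy-nio individually
    implementation(libs.bundles.groovy)
}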
In addition to libraries, a version catalog supports declaring plugin versions.
While libraries are represented by their group, artifact and version coordinates, Gradle plugins are identified by their id and version only.
Therefore, they need to be declared separately:
settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        create("libs") {
            plugin("versions", "com.github.ben-manes.versions").version("0.45.0")
        }
    }
}
settings.gradle
dependencyResolutionManagement {
    versionCatalogs {
        libs {
            plugin('versions', 'com.github.ben-manes.versions').version('0.45.0')
        }
    }
}
Then the plugin is accessible in the plugins block and can be consumed in any project of the build using:
Example 91. Using a plugin declared in a catalog
build.gradle
plugins {
    id 'java-library'
    id 'checkstyle'
    // Use the plugin `versions` as declared in the `libs` version catalog
    alias(libs.plugins.versions)
}
Using multiple catalogs
Aside from the conventional libs catalog, you can declare any number of catalogs through the Settings API.
This allows you to separate dependency declarations into multiple sources in a way that makes sense for your projects.
Example 92. Using a custom catalog
settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        create("testLibs") {
            val junit5 = version("junit5", "5.7.1")
            library("junit-api", "org.junit.jupiter", "junit-jupiter-api").versionRef(junit5)
            library("junit-engine", "org.junit.jupiter", "junit-jupiter-engine").versionRef(junit5)
        }
    }
}
settings.gradle
dependencyResolutionManagement {
    versionCatalogs {
        testLibs {
            def junit5 = version('junit5', '5.7.1')
            library('junit-api', 'org.junit.jupiter', 'junit-jupiter-api').versionRef(junit5)
            library('junit-engine', 'org.junit.jupiter', 'junit-jupiter-engine').versionRef(junit5)
        }
    }
}
Each catalog will generate an extension applied to all projects for accessing its content.
As such, it makes sense to reduce the chance of collisions by picking an extension name that is unlikely to clash; for example, one option is to pick a name that ends with Libs.
The libs.versions.toml file
In addition to the settings API above, Gradle offers a conventional file to declare a catalog.
If a libs.versions.toml file is found in the gradle subdirectory of the root build, then a catalog will be automatically declared with the contents of this file.
Declaring a libs.versions.toml file doesn’t make it the single source of truth for dependencies: it’s a conventional location where dependencies can be declared.
As soon as you start using catalogs, it’s strongly recommended to declare all your dependencies in a catalog and not hardcode group/artifact/version strings in build scripts.
Be aware that plugins may add dependencies, which are dependencies defined outside of this file.
Just like src/main/java is a convention for finding Java sources that doesn’t prevent additional source directories from being declared (either in a build script or a plugin), the presence of the libs.versions.toml file doesn’t prevent the declaration of dependencies elsewhere.
The presence of this file does, however, suggest that most dependencies, if not all, will be declared in this file.
Therefore, updating a dependency version, for most users, should only consist of changing a line in this file.
By default, the libs.versions.toml file will be an input to the libs catalog.
It is possible to change the name of the default catalog, for example if you already have an extension with the same name:
Example 93. Changing the default extension name
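A minimal settings sketch using the defaultLibrariesExtensionName property of dependency resolution management (the projectLibs name is just an illustration):
settings.gradle.kts
dependencyResolutionManagement {
    // the catalog generated from libs.versions.toml will be exposed as `projectLibs`
    defaultLibrariesExtensionName.set("projectLibs")
}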
The TOML file consists of four major sections:
the [versions] section is used to declare versions which can be referenced by dependencies
the [libraries] section is used to declare the aliases to coordinates
the [bundles] section is used to declare dependency bundles
the [plugins] section is used to declare plugins
For example:
[versions]
groovy = "3.0.5"
checkstyle = "8.37"
[libraries]
groovy-core = { module = "org.codehaus.groovy:groovy", version.ref = "groovy" }
groovy-json = { module = "org.codehaus.groovy:groovy-json", version.ref = "groovy" }
groovy-nio = { module = "org.codehaus.groovy:groovy-nio", version.ref = "groovy" }
commons-lang3 = { group = "org.apache.commons", name = "commons-lang3", version = { strictly = "[3.8, 4.0[", prefer = "3.9" } }
[bundles]
groovy = ["groovy-core", "groovy-json", "groovy-nio"]
[plugins]
versions = { id = "com.github.ben-manes.versions", version = "0.45.0" }
Versions can be declared either as a single string, in which case they are interpreted as a required version, or as a rich version:
[versions]
my-lib = { strictly = "[1.0, 2.0[", prefer = "1.2" }
Supported members of a version declaration are: require, strictly, prefer, reject and rejectAll.
[libraries]
my-lib = "com.mycompany:mylib:1.4"
my-other-lib = { module = "com.mycompany:other", version = "1.4" }
my-other-lib2 = { group = "com.mycompany", name = "alternate", version = "1.4" }
mylib-full-format = { group = "com.mycompany", name = "alternate", version = { require = "1.4" } }
[plugins]
short-notation = "some.plugin.id:1.4"
long-notation = { id = "some.plugin.id", version = "1.4" }
reference-notation = { id = "some.plugin.id", version.ref = "common" }
In case you want to reference a version declared in the [versions] section, you should use the version.ref property:
[versions]
some = "1.4"
[libraries]
my-lib = { group = "com.mycompany", name="mylib", version.ref="some" }
The TOML file format is very lenient and lets you write "dotted" properties as shortcuts to full object declarations.
For example, this:
a.b.c="d"
is equivalent to:
a.b = { c = "d" }
a = { b = { c = "d" } }
See the TOML specification for details.
A catalog can also be queried programmatically, for example from a plugin, using the VersionCatalogsExtension:
build.gradle.kts
val versionCatalog = extensions.getByType<VersionCatalogsExtension>().named("libs")
println("Library aliases: ${versionCatalog.libraryAliases}")
dependencies {
    versionCatalog.findLibrary("groovy-json").ifPresent {
        implementation(it)
    }
}
build.gradle
def versionCatalog = extensions.getByType(VersionCatalogsExtension).named("libs")
println "Library aliases: ${versionCatalog.libraryAliases}"
dependencies {
    versionCatalog.findLibrary("groovy-json").ifPresent {
        implementation(it)
    }
}
Sharing catalogs
Version catalogs are used in a single build (possibly a multi-project build) but may also be shared between builds.
For example, an organization may want to create a catalog of dependencies that different projects, from different teams, may use.
Importing a catalog from a TOML file
The version catalog builder API supports including a model from an external file.
This makes it possible to reuse the catalog of the main build for buildSrc, if needed.
For example, the buildSrc/settings.gradle(.kts) file can include this file using:
Example 94. Sharing the dependency catalog with buildSrc
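A minimal sketch, assuming the main build’s catalog lives in the conventional gradle/libs.versions.toml file of the root build:
buildSrc/settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        create("libs") {
            // reuse the TOML file of the main build
            from(files("../gradle/libs.versions.toml"))
        }
    }
}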
Only a single file will be accepted when using the VersionCatalogBuilder.from(Object dependencyNotation) method.
This means that notations like Project.files(java.lang.Object…) must refer to a single file, otherwise the build will fail.
If a more complicated structure is required (version catalogs imported from multiple files), it’s advisable to use a code-based approach instead of a TOML file.
settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        // declares an additional catalog, named 'testLibs', from the 'test-libs.versions.toml' file
        create("testLibs") {
            from(files("gradle/test-libs.versions.toml"))
        }
    }
}
settings.gradle
dependencyResolutionManagement {
    versionCatalogs {
        // declares an additional catalog, named 'testLibs', from the 'test-libs.versions.toml' file
        testLibs {
            from(files('gradle/test-libs.versions.toml'))
        }
    }
}
The version catalog plugin
While importing catalogs from local files is convenient, it doesn’t solve the problem of sharing a catalog within an organization or with external consumers.
One option to share a catalog is to write a settings plugin, publish it on the Gradle Plugin Portal or an internal repository, and let the consumers apply the plugin on their settings file.
Alternatively, Gradle offers a version catalog plugin, which provides the ability to declare, then publish, a catalog.
To do this, you need to apply the version-catalog plugin:
Example 96. Applying the version catalog plugin
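A sketch of the corresponding plugins block; maven-publish is applied here as well because the catalog is published further below:
build.gradle.kts
plugins {
    `version-catalog`
    `maven-publish`
}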
This plugin will then expose the catalog extension that you can use to declare a catalog:
Example 97. Definition of a catalog
build.gradle.kts
catalog {
    // declare the aliases, bundles and versions in this block
    versionCatalog {
        library("my-lib", "com.mycompany:mylib:1.2")
    }
}
build.gradle
catalog {
    // declare the aliases, bundles and versions in this block
    versionCatalog {
        library('my-lib', 'com.mycompany:mylib:1.2')
    }
}
Such a catalog can then be published by applying either the maven-publish or ivy-publish plugin and configuring the publication to use the versionCatalog component:
Example 98. Publishing a catalog
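A minimal sketch of such a publication, assuming the maven-publish plugin is applied:
build.gradle.kts
publishing {
    publications {
        create<MavenPublication>("maven") {
            // the `versionCatalog` component is created by the version-catalog plugin
            from(components["versionCatalog"])
        }
    }
}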
Importing a published catalog
A catalog produced by the version catalog plugin can be imported via the settings API:
Example 99. Using a published catalog
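A minimal settings sketch, where com.mycompany:catalog:1.0 stands for the hypothetical coordinates the catalog was published under:
settings.gradle.kts
dependencyResolutionManagement {
    versionCatalogs {
        create("libs") {
            from("com.mycompany:catalog:1.0")
        }
    }
}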
Overwriting catalog versions
In case a catalog declares a version, you can overwrite the version when importing the catalog:
Example 100. Overwriting versions declared in a published catalog
create("amendedLibs") {
from("com.mycompany:catalog:1.0")
// overwrite the "groovy" version declared in the imported catalog
version("groovy", "3.0.6")
amendedLibs {
from("com.mycompany:catalog:1.0")
// overwrite the "groovy" version declared in the imported catalog
version("groovy", "3.0.6")
Again, overwriting a version doesn’t mean that the actual resolved dependency version will be the same: this only changes what is imported, that is to say, what is used when declaring a dependency.
The actual version will be subject to the usual conflict resolution, if any.
Using a platform to control transitive versions
A platform is a special software component which can be used to control transitive dependency versions.
In most cases it’s exclusively composed of dependency constraints which will either suggest dependency versions or enforce some versions.
As such, this is a perfect tool whenever you need to share dependency versions between projects.
In this case, a project will typically be organized this way:
a platform project which defines constraints for the various dependencies found in the different sub-projects
a number of sub-projects which depend on the platform and declare dependencies without a version
It’s also common to find platforms published as Maven BOMs, which Gradle supports natively.
A dependency on a platform is created using the platform keyword:
Example 101. Getting versions declared in a platform
build.gradle.kts
dependencies {
    // get recommended versions from the platform project
    api(platform(project(":platform")))
    // no version required
    api("commons-httpclient:commons-httpclient")
}
build.gradle
dependencies {
    // get recommended versions from the platform project
    api platform(project(':platform'))
    // no version required
    api 'commons-httpclient:commons-httpclient'
}
This notation does two things under the hood:
it sets the org.gradle.category attribute to platform, which means that Gradle will select the platform component of the dependency.
it sets the endorseStrictVersions behavior by default, meaning that if the platform declares strict dependencies, they will be enforced.
This means that, by default, a dependency on a platform triggers the inheritance of all strict versions defined in that platform, which can be useful for platform authors to make sure that all consumers respect their decisions in terms of dependency versions.
This can be turned off by explicitly calling the doNotEndorseStrictVersions method.
Importing Maven BOMs
Gradle provides support for importing bill of materials (BOM) files, which are effectively .pom files that use <dependencyManagement> to control the dependency versions of direct and transitive dependencies.
The BOM support in Gradle works similarly to using <scope>import</scope> when depending on a BOM in Maven.
In Gradle, however, it is done via a regular dependency declaration on the BOM:
build.gradle.kts
dependencies {
    // import a BOM
    implementation(platform("org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE"))
    // define dependencies without versions
    implementation("com.google.code.gson:gson")
    implementation("dom4j:dom4j")
}
build.gradle
dependencies {
    // import a BOM
    implementation platform('org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE')
    // define dependencies without versions
    implementation 'com.google.code.gson:gson'
    implementation 'dom4j:dom4j'
}
In the example, the versions of gson and dom4j are provided by the Spring Boot BOM.
This way, if you are developing for a platform like Spring Boot, you do not have to declare any versions yourself but can rely on the versions the platform provides.
Gradle treats all entries in the <dependencyManagement> block of a BOM similarly to Gradle’s dependency constraints.
This means that any version defined in the <dependencyManagement> block can impact the dependency resolution result.
In order to qualify as a BOM, a .pom file needs to have <packaging>pom</packaging> set.
Often, however, BOMs are not only providing versions as recommendations, but are also meant to override any other version found in the graph.
You can enable this behavior by using the enforcedPlatform keyword instead of platform when importing the BOM:
build.gradle.kts
dependencies {
    // import a BOM. The versions used in this file will override any other version found in the graph
    implementation(enforcedPlatform("org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE"))
    // define dependencies without versions
    implementation("com.google.code.gson:gson")
    implementation("dom4j:dom4j")
    // this version will be overridden by the one found in the BOM
    implementation("org.codehaus.groovy:groovy:1.8.6")
}
build.gradle
dependencies {
    // import a BOM. The versions used in this file will override any other version found in the graph
    implementation enforcedPlatform('org.springframework.boot:spring-boot-dependencies:1.5.8.RELEASE')
    // define dependencies without versions
    implementation 'com.google.code.gson:gson'
    implementation 'dom4j:dom4j'
    // this version will be overridden by the one found in the BOM
    implementation 'org.codehaus.groovy:groovy:1.8.6'
}
Using enforcedPlatform needs to be considered with care if your software component can be consumed by others.
This declaration is effectively transitive, and so will apply to the dependency graph of your consumers.
Unfortunately, they will have to use exclude if they happen to disagree with one of the forced versions.
Instead, if your reusable software component has a strong opinion on some third-party dependency versions, consider using a rich version declaration with strictly:
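A minimal sketch of such a strict declaration (the Groovy coordinates and version are just an illustration):
build.gradle.kts
dependencies {
    implementation("org.codehaus.groovy:groovy") {
        version {
            // declare a strict version requirement for this dependency
            strictly("3.0.5")
        }
    }
}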
Should I use a platform or a catalog?
Because platforms and catalogs both deal with dependency versions and can both be used to share dependency versions in a project, there may be confusion about which one to use and whether one is preferable to the other.
In short, you should:
use catalogs only to define dependencies and their versions for projects and to generate type-safe accessors
use platforms to apply versions to the dependency graph and to affect dependency resolution
A catalog helps with centralizing dependency versions and is only, as its name implies, a catalog of dependencies you can pick from.
We recommend using it to declare the coordinates of your dependencies, in all cases.
Gradle will use it to generate type-safe accessors and shorthand notations for external dependencies, and it makes it easy to share those coordinates between different projects.
Using a catalog will not have any kind of consequence for downstream consumers: it’s transparent to them.
A platform is a more heavyweight construct: it’s a component of a dependency graph, like any other library.
If you depend on a platform, that platform is itself a component in the graph.
It means, in particular, that:
Constraints defined in a platform can influence transitive dependencies, not only the direct dependencies of your project.
A platform is versioned, and a transitive dependency in the graph can depend on a different version of the platform, causing various dependency upgrades.
A platform can tie components together, and in particular can be used as a construct for aligning versions.
A dependency on a platform is "inherited" by the consumers of your dependency: it means that a dependency on a platform can influence what versions of libraries would be used by your consumers even if you don’t directly, or transitively, depend on components the platform references.
In summary, using a catalog is always a good engineering practice as it centralizes common definitions and allows sharing of dependency and plugin versions, but it is an "implementation detail" of the build: it is not visible to consumers, and unused elements of a catalog are simply ignored.
A platform is meant to influence the dependency resolution graph, for example by adding constraints on transitive dependencies: it’s a solution for structuring a dependency graph and influencing the resolution result.
In practice, your project can both use a catalog and declare a platform which itself uses the catalog:
Example 104. Using a catalog within a platform definition
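For illustration, a minimal sketch of a platform project whose constraints are declared via catalog accessors (the my-lib alias, mapped to libs.my.lib, is assumed to exist in the catalog):
build.gradle.kts
plugins {
    `java-platform`
}
dependencies {
    constraints {
        // the coordinates come from the catalog; the constraint comes from the platform
        api(libs.my.lib)
    }
}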
Dependency version alignment allows different modules belonging to the same logical group (a platform) to have identical versions in a dependency graph.
Handling inconsistent module versions
Gradle supports aligning versions of modules which belong to the same "platform".
It is often preferable, for example, that the API and implementation modules of a component use the same version.
However, because of transitive dependency resolution, it is possible that different modules belonging to the same platform end up using different versions.
For example, your project may depend on the jackson-databind and vert.x libraries, as illustrated below:
Example 105. Declaring dependencies
build.gradle.kts
dependencies {
    // a dependency on Jackson Databind
    implementation("com.fasterxml.jackson.core:jackson-databind:2.8.9")
    // and a dependency on vert.x
    implementation("io.vertx:vertx-core:3.5.3")
}
build.gradle
dependencies {
    // a dependency on Jackson Databind
    implementation 'com.fasterxml.jackson.core:jackson-databind:2.8.9'
    // and a dependency on vert.x
    implementation 'io.vertx:vertx-core:3.5.3'
}
It’s easy to end up with a set of versions which do not work well together.
To fix this, Gradle supports dependency version alignment, which builds on the concept of platforms.
A platform represents a set of modules which "work well together": either because they are actually published as a whole (when one of the members of the platform is published, all other modules are also published with the same version), or because someone tested the modules and indicated that they work well together (typically, the Spring Platform).
Aligning versions natively with Gradle
Gradle natively supports alignment of modules produced by Gradle.
This is a direct consequence of the transitivity of dependency constraints.
So if you have a multi-project build, and you wish that consumers get the same version of all your modules, Gradle provides a simple way to do this using the Java Platform Plugin.
For example, if you have a project that consists of 3 modules:
a lib library
a utils library
a core library, which depends on lib and utils
And a consumer that declares:
a dependency on core version 1.0
a dependency on lib version 1.1
Then by default resolution would select core:1.0 and lib:1.1, because lib has no dependency on core.
We can fix this by adding a new module in our project, a platform, that will add constraints on all the modules of your project:
Example 106. The platform module
build.gradle.kts
dependencies {
    // The platform declares constraints on all components that
    // require alignment
    constraints {
        api(project(":core"))
        api(project(":lib"))
        api(project(":utils"))
    }
}
build.gradle
dependencies {
    // The platform declares constraints on all components that
    // require alignment
    constraints {
        api(project(":core"))
        api(project(":lib"))
        api(project(":utils"))
    }
}
Once this is done, we need to make sure that all modules now depend on the platform, like this:
Example 107. Declaring a dependency on the platform
build.gradle.kts
dependencies {
    // Each project has a dependency on the platform
    api(platform(project(":platform")))
    // And any additional dependency required
    implementation(project(":lib"))
    implementation(project(":utils"))
}
build.gradle
dependencies {
    // Each project has a dependency on the platform
    api(platform(project(":platform")))
    // And any additional dependency required
    implementation(project(":lib"))
    implementation(project(":utils"))
}
It is important that the platform contains a constraint on all the components, and also that each component has a dependency on the platform.
By doing this, whenever Gradle adds a dependency on a module of the platform to the graph, it will also include constraints on the other modules of the platform.
This means that if we see another module belonging to the same platform, we will automatically upgrade to the same version.
In our example, it means that we first see core:1.0, which brings in platform 1.0 with constraints on lib:1.0 and utils:1.0.
Then we add lib:1.1, which has a dependency on platform:1.1.
By conflict resolution, we select the 1.1 platform, which has a constraint on core:1.1.
Then we conflict-resolve between core:1.0 and core:1.1, which means that core and lib are now properly aligned.
Aligning versions of modules not published with Gradle
Whenever the publisher doesn’t use Gradle, like in our Jackson example, we can explain to Gradle that all Jackson modules "belong to" the same platform and benefit from the same behavior as with native alignment.
There are two options to express that a set of modules belongs to a platform:
A platform is published as a BOM and can be used: for example, com.fasterxml.jackson:jackson-bom can be used as the platform. The information missing for Gradle in that case is that the platform should be added to the dependencies if one of its members is used.
No existing platform can be used; instead, a virtual platform should be created by Gradle: in this case, Gradle builds up the platform itself based on all the members that are used.
To provide the missing information to Gradle, you can define component metadata rules as explained in the following.
Align versions of modules using a published BOM
Example 108. A dependency version alignment rule
build.gradle.kts
abstract class JacksonBomAlignmentRule: ComponentMetadataRule {
    override fun execute(ctx: ComponentMetadataContext) {
        ctx.details.run {
            if (id.group.startsWith("com.fasterxml.jackson")) {
                // declare that Jackson modules belong to the platform defined by the Jackson BOM
                belongsTo("com.fasterxml.jackson:jackson-bom:${id.version}", false)
            }
        }
    }
}
build.gradle
abstract class JacksonBomAlignmentRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext ctx) {
        ctx.details.with {
            if (id.group.startsWith("com.fasterxml.jackson")) {
                // declare that Jackson modules belong to the platform defined by the Jackson BOM
                belongsTo("com.fasterxml.jackson:jackson-bom:${id.version}", false)
            }
        }
    }
}
By using belongsTo with false (not virtual), we declare that all modules belong to the same published platform.
In this case, the platform is com.fasterxml.jackson:jackson-bom and Gradle will look for it, as for any other module, in the declared repositories.
Example 109. Making use of a dependency version alignment rule
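The rule can then be applied to all components; a minimal sketch:
build.gradle.kts
dependencies {
    // apply the alignment rule to every resolved component
    components.all<JacksonBomAlignmentRule>()
}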
Using the rule, the versions in the example above align to whatever the selected version of com.fasterxml.jackson:jackson-bom defines.
In this case, com.fasterxml.jackson:jackson-bom:2.9.5 will be selected, as 2.9.5 is the highest version of a selected module.
In that BOM, the following versions are defined and will be used: jackson-core:2.9.5, jackson-databind:2.9.5 and jackson-annotations:2.9.0.
The lower version of jackson-annotations here might be the desired result, as it is what the BOM recommends.
Align versions of modules using a virtual platform
Example 110. A dependency version alignment rule
build.gradle.kts
abstract class JacksonAlignmentRule: ComponentMetadataRule {
    override fun execute(ctx: ComponentMetadataContext) {
        ctx.details.run {
            if (id.group.startsWith("com.fasterxml.jackson")) {
                // declare that Jackson modules all belong to the Jackson virtual platform
                belongsTo("com.fasterxml.jackson:jackson-virtual-platform:${id.version}")
            }
        }
    }
}
build.gradle
abstract class JacksonAlignmentRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext ctx) {
        ctx.details.with {
            if (id.group.startsWith("com.fasterxml.jackson")) {
                // declare that Jackson modules all belong to the Jackson virtual platform
                belongsTo("com.fasterxml.jackson:jackson-virtual-platform:${id.version}")
            }
        }
    }
}
By using belongsTo without a further parameter (the platform is virtual), we declare that all modules belong to the same virtual platform, which is treated specially by the engine.
A virtual platform will not be retrieved from a repository.
The identifier, in this case com.fasterxml.jackson:jackson-virtual-platform, is something you as the build author define yourself.
The "content" of the platform is then created by Gradle on the fly, by collecting all belongsTo statements pointing at the same virtual platform.
Example 111. Making use of a dependency version alignment rule
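Again, the rule is applied to all components; a minimal sketch:
build.gradle.kts
dependencies {
    // apply the virtual platform alignment rule to every resolved component
    components.all<JacksonAlignmentRule>()
}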
Using the rule, all versions in the example above would align to 2.9.5.
In this case, jackson-annotations:2.9.5 will also be taken, as that is how we defined our local virtual platform.
For both published and virtual platforms, Gradle lets you override the version choice of the platform itself by specifying an enforced dependency on the platform:
Example 112. Forceful platform downgrade
build.gradle.kts
dependencies {
    // Forcefully downgrade the virtual Jackson platform to 2.8.9
    implementation(enforcedPlatform("com.fasterxml.jackson:jackson-virtual-platform:2.8.9"))
}
build.gradle
dependencies {
    // Forcefully downgrade the virtual Jackson platform to 2.8.9
    implementation enforcedPlatform('com.fasterxml.jackson:jackson-virtual-platform:2.8.9')
}
Introduction to component capabilities
Often a dependency graph would accidentally contain multiple implementations of the same API.
This is particularly common with logging frameworks, where multiple bindings are available and one library chooses a binding while another transitive dependency chooses a different one.
Because those implementations live at different GAV coordinates, the build tool usually has no way to find out that there’s a conflict between those libraries.
To solve this, Gradle provides the concept of capability.
It’s illegal to find two components providing the same capability in a single dependency graph.
Intuitively, it means that if Gradle finds two components that provide the same thing on the classpath, it’s going to fail with an error indicating what modules are in conflict.
In our example, it means that different bindings of a logging framework provide the same capability.
Capability coordinates
A capability is defined by a (group, module, version) triplet.
Each component defines an implicit capability corresponding to its GAV coordinates (group, artifact, version).
For example, the org.apache.commons:commons-lang3:3.8 module has an implicit capability with group org.apache.commons, name commons-lang3 and version 3.8.
It is important to realize that capabilities are versioned.
Declaring component capabilities
By default, Gradle will fail if two components in the dependency graph provide the same capability.
Because most modules are currently published without Gradle Module Metadata, capabilities are not always automatically discovered by Gradle.
It is however useful to declare component capabilities via rules, in order to discover conflicts as early as possible, during the build instead of at runtime.
A typical example is whenever a component is relocated to different coordinates in a new release.
For example, the ASM library lived at the asm:asm coordinates until version 3.3.1, then changed to org.ow2.asm:asm since 4.0.
It is illegal to have both ASM <= 3.3.1 and 4.0+ on the classpath, because they provide the same feature; the component has simply been relocated.
Because each component has an implicit capability corresponding to its GAV coordinates, we can "fix" this with a rule declaring that the asm:asm module provides the org.ow2.asm:asm capability:
Example 113. Conflict resolution by capability
build.gradle.kts
class AsmCapability : ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) = context.details.run {
        if (id.group == "asm" && id.name == "asm") {
            allVariants {
                withCapabilities {
                    // Declare that ASM provides the org.ow2.asm:asm capability, but with an older version
                    addCapability("org.ow2.asm", "asm", id.version)
                }
            }
        }
    }
}
build.gradle
@CompileStatic
class AsmCapability implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        context.details.with {
            if (id.group == "asm" && id.name == "asm") {
                allVariants {
                    it.withCapabilities {
                        // Declare that ASM provides the org.ow2.asm:asm capability, but with an older version
                        it.addCapability("org.ow2.asm", "asm", id.version)
                    }
                }
            }
        }
    }
}
At this stage, such a rule will only make more builds fail.
It will not automatically fix the problem for you, but it helps you realize that you have a problem.
It is recommended to write such rules in plugins which are then applied to your builds.
Then, users have to express their preferences, if possible, or fix the problem of having incompatible things on the classpath, as explained in the following section.
Selecting between candidates
At some point, a dependency graph is going to include either incompatible modules or modules which are mutually exclusive.
For example, you may have different logger implementations and need to choose one binding.
Capabilities help you realize that you have a conflict, and Gradle also provides tools to express how to solve it.
Selecting between different capability candidates
In the relocation example above, Gradle was able to tell you that you have two versions of the same API on the classpath: an "old" module and a "relocated" one.
Now we can solve the conflict by automatically choosing the component which has the highest capability version:
Example 114. Conflict resolution by capability versioning
build.gradle.kts
configurations.all {
    resolutionStrategy.capabilitiesResolution.withCapability("org.ow2.asm:asm") {
        selectHighestVersion()
    }
}
build.gradle
configurations.all {
    resolutionStrategy.capabilitiesResolution.withCapability('org.ow2.asm:asm') {
        selectHighestVersion()
    }
}
However, fixing conflicts by selecting the highest capability version is not always suitable.
For a logging framework, for example, it doesn’t matter which version of the logging framework we use; we should always select Slf4j.
In this case, we can fix the conflict by explicitly selecting slf4j as the winner:
Example 115. Substitute log4j with slf4j
build.gradle.kts
configurations.all {
    resolutionStrategy.capabilitiesResolution.withCapability("log4j:log4j") {
        val toBeSelected = candidates.firstOrNull { it.id.let { id -> id is ModuleComponentIdentifier && id.module == "log4j-over-slf4j" } }
        if (toBeSelected != null) {
            select(toBeSelected)
        }
        because("use slf4j in place of log4j")
    }
}
build.gradle
configurations.all {
    resolutionStrategy.capabilitiesResolution.withCapability("log4j:log4j") {
        def toBeSelected = candidates.find { it.id instanceof ModuleComponentIdentifier && it.id.module == 'log4j-over-slf4j' }
        if (toBeSelected != null) {
            select(toBeSelected)
        }
        because 'use slf4j in place of log4j'
    }
}
Note that this approach also works well if you have multiple Slf4j bindings on the classpath: bindings are basically different logger implementations, and you only need one.
However, the selected implementation may depend on the configuration being resolved.
For example, for tests, slf4j-simple may be enough, but for production, a binding such as slf4j-log4j12 may be better.
The select method only accepts a module found among the current candidates.
If the module you want to select is not part of the conflict, you can abstain from performing a selection, effectively leaving this conflict unresolved: another conflict in the graph for the same capability may have the module you want to select among its candidates.
If no resolution is given for any conflict on a given capability, the build will fail, given that the module chosen for resolution was not part of the graph at all.
In addition, select(null) will result in an error and so should be avoided.
Each module that is pulled from a repository has metadata associated with it, such as its group, name and version, as well as the different variants it provides with their artifacts and dependencies.
Sometimes, this metadata is incomplete or incorrect.
To manipulate such incomplete metadata from within the build script, Gradle offers an API to write component metadata rules.
These rules take effect after a module’s metadata has been downloaded, but before it is used in dependency resolution.
Basics of writing a component metadata rule
Component metadata rules are applied in the components (ComponentMetadataHandler) section of the dependencies block (DependencyHandler) of a build script or in the settings script.
The rules can be defined in two different ways:
inline, as an action, directly where they are applied in the components section
as an isolated class implementing the ComponentMetadataRule interface
While defining rules inline as an action can be convenient for experimentation, it is generally recommended to define rules as separate classes.
Rules that are written as isolated classes can be annotated with @CacheableRule to cache the results of their application, such that they do not need to be re-executed each time dependencies are resolved.
Example 116. Example of a configurable component metadata rule
build.gradle.kts
@CacheableRule
abstract class TargetJvmVersionRule @Inject constructor(val jvmVersion: Int) : ComponentMetadataRule {
    @get:Inject abstract val objects: ObjectFactory

    override fun execute(context: ComponentMetadataContext) {
        context.details.withVariant("compile") {
            attributes {
                attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, jvmVersion)
                attribute(Usage.USAGE_ATTRIBUTE, objects.named(Usage.JAVA_API))
            }
        }
    }
}
dependencies {
    components {
        withModule<TargetJvmVersionRule>("commons-io:commons-io") {
            params(7)
        }
        withModule<TargetJvmVersionRule>("commons-collections:commons-collections") {
            params(8)
        }
    }
    implementation("commons-io:commons-io:2.6")
    implementation("commons-collections:commons-collections:3.2.2")
}
build.gradle
@CacheableRule
abstract class TargetJvmVersionRule implements ComponentMetadataRule {
    final Integer jvmVersion
    @Inject TargetJvmVersionRule(Integer jvmVersion) {
        this.jvmVersion = jvmVersion
    }
    @Inject abstract ObjectFactory getObjects()

    void execute(ComponentMetadataContext context) {
        context.details.withVariant("compile") {
            attributes {
                attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, jvmVersion)
                attribute(Usage.USAGE_ATTRIBUTE, objects.named(Usage, Usage.JAVA_API))
            }
        }
    }
}
dependencies {
    components {
        withModule("commons-io:commons-io", TargetJvmVersionRule) {
            params(7)
        }
        withModule("commons-collections:commons-collections", TargetJvmVersionRule) {
            params(8)
        }
    }
    implementation("commons-io:commons-io:2.6")
    implementation("commons-collections:commons-collections:3.2.2")
}
As can be seen in the examples above, component metadata rules are defined by implementing ComponentMetadataRule, which has a single execute method receiving an instance of ComponentMetadataContext as parameter.
In this example, the rule is also further configured through an ActionConfiguration.
This is supported by having a constructor in your implementation of ComponentMetadataRule accepting the parameters that were configured and the services that need injecting.
Gradle enforces isolation of instances of ComponentMetadataRule.
This means that all parameters must be Serializable or known Gradle types that can be isolated.
In addition, Gradle services can be injected into your ComponentMetadataRule.
Because of this, the moment you have a constructor, it must be annotated with @javax.inject.Inject.
A commonly required service is ObjectFactory, used to create instances of strongly typed value objects, like a value for setting an Attribute.
A service which is helpful for advanced usage of component metadata rules with custom metadata is the RepositoryResourceAccessor.
A component metadata rule can be applied to all modules — all(rule) — or to a selected module — withModule(groupAndName, rule).
Usually, a rule is specifically written to enrich the metadata of one specific module, and hence the withModule API should be preferred.
Declaring rules in a central place
Instead of declaring rules for each subproject individually, it is possible to declare rules in the settings.gradle(.kts) file for the whole build.
Rules declared in settings are the conventional rules applied to each project: if the project doesn’t declare any rules, the rules from the settings script will be used.
Example 117. Declaring a rule in settings
settings.gradle.kts
dependencyResolutionManagement {
    components {
        withModule<GuavaRule>("com.google.guava:guava")
    }
}
settings.gradle
dependencyResolutionManagement {
    components {
        withModule("com.google.guava:guava", GuavaRule)
    }
}
By default, rules declared in a project will override whatever is declared in settings.
It is possible to change this default, for example to always prefer the settings rules:
Example 118. Preferring rules declared in settings
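A minimal settings sketch using the rulesMode property with the RulesMode.PREFER_SETTINGS value:
settings.gradle.kts
dependencyResolutionManagement {
    rulesMode.set(RulesMode.PREFER_SETTINGS)
}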
If this method is called and a project or plugin declares rules, a warning will be issued.
You can make this a failure instead by using this alternative:
Example 119. Enforcing rules declared in settings
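A minimal settings sketch using the RulesMode.FAIL_ON_PROJECT_RULES value:
settings.gradle.kts
dependencyResolutionManagement {
    rulesMode.set(RulesMode.FAIL_ON_PROJECT_RULES)
}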
Which parts of metadata can be modified?
The component metadata rules API is oriented at the features supported by Gradle Module Metadata and the dependencies API in build scripts.
The main difference between writing rules and defining dependencies and artifacts in the build script is that component metadata rules, following the structure of Gradle Module Metadata, operate directly on variants.
In contrast, in build scripts you often influence the shape of multiple variants at once (e.g. an api dependency is added to both the api and runtime variants of a Java library, and the artifact produced by the jar task is also added to these two variants).
Variants can be addressed for modification through the following methods:
withVariant(name) { } to modify a single variant, identified by its name
allVariants { } to modify all variants of a component
addVariant(name) { } or addVariant(name, base) { } to add a variant
The following variant details can then be adjusted:
The attributes that identify the variant — attributes {} block
The dependencies of the variant, including rich versions — withDependencies {} block
The dependency constraints of the variant, including rich versions — withDependencyConstraints {} block
The location of the published files that make up the actual content of the variant — withFiles { } block
In addition, a few details of the whole component can be changed:
The component level attributes; currently the only meaningful attribute there is org.gradle.status
The status scheme to influence interpretation of the org.gradle.status attribute during version selection
The belongsTo property for version alignment through virtual platforms
If the module has Gradle Module Metadata, the data structure the rule operates on is very similar to what you find in the module’s .module file.
If the module was published only with .pom metadata, a number of fixed variants is derived, as explained in the mapping of POM files to variants section.
If the module was published only with an ivy.xml file, the Ivy configurations defined in the file can be accessed instead of variants.
Their dependencies, dependency constraints and files can be modified.
Additionally, the addVariant(name, baseVariantOrConfiguration) { } API can be used to derive variants from Ivy configurations if desired (for example, compile and runtime variants for the Java library plugin can be defined with this).
When to use Component Metadata Rules?
In general, if you consider using component metadata rules to adjust the metadata of a certain module, you should first check if that module was published with Gradle Module Metadata (a .module file) or with traditional metadata only (.pom or ivy.xml).
If a module was published with Gradle Module Metadata, the metadata is likely complete, although there can still be cases where something is just plainly wrong.
For these modules you should only use component metadata rules if you have clearly identified a problem with the metadata itself.
If you have an issue with the dependency resolution result, you should first check if you can solve the issue by declaring dependency constraints with rich versions.
In particular, if you are developing a library that you publish, you should remember that dependency constraints, in contrast to component metadata rules, are published as part of the metadata of your own library.
So with dependency constraints, you automatically share the solution of dependency resolution issues with your consumers, while component metadata rules are only applied to your own build.
If a module was published with traditional metadata (.pom or ivy.xml only, no .module file), it is more likely that the metadata is incomplete, as features such as variants or dependency constraints are not supported in these formats.
Still, conceptually such modules can contain different variants, or might have dependency constraints which they just omitted (or wrongly defined as dependencies).
In the next sections, we explore a number of existing OSS modules with such incomplete metadata and the rules for adding the missing metadata information.
As a rule of thumb, you should consider whether the rule you are writing also works out of the context of your build.
That is, does the rule still produce a correct and useful result if applied in any other build that uses the module(s) it affects?
Fixing wrong dependency details
Let’s consider as an example the publication of the Jaxen XPath Engine on Maven central.
The pom of version 1.1.3 declares a number of dependencies in the compile scope which are not actually needed for compilation.
These have been removed in the 1.1.4 pom.
Assuming that we need to work with 1.1.3 for some reason, we can fix the metadata with the following rule:
Example 121. Rule to remove unused dependencies of Jaxen metadata
build.gradle.kts
@CacheableRule
abstract class JaxenDependenciesRule: ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        context.details.allVariants {
            withDependencies {
                removeAll { it.group in listOf("dom4j", "jdom", "xerces", "maven-plugins", "xml-apis", "xom") }
            }
        }
    }
}
build.gradle
@CacheableRule
abstract class JaxenDependenciesRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        context.details.allVariants {
            withDependencies {
                removeAll { it.group in ["dom4j", "jdom", "xerces", "maven-plugins", "xml-apis", "xom"] }
            }
        }
    }
}
Within the withDependencies block you have access to the full list of dependencies and can use all methods available on the Java collection interface to inspect and modify that list.
In addition, there are add(notation, configureAction) methods accepting the usual notations, similar to declaring dependencies in the build script.
Dependency constraints can be inspected and modified the same way in the withDependencyConstraints block.
If we take a closer look at the Jaxen 1.1.4 pom, we observe that the dom4j, jdom and xerces dependencies are still there but marked as optional.
Optional dependencies in poms are not automatically processed by Gradle or Maven.
The reason is that they indicate that there are optional feature variants provided by the Jaxen library which require one or more of these dependencies, but the information about what these features are, and which dependency belongs to which, is missing.
Such information cannot be represented in pom files, but it can in Gradle Module Metadata, through variants and capabilities.
Hence, we can add this information in a rule as well.
Example 122. Rule to add optional feature to Jaxen metadata
build.gradle.kts
@CacheableRule
abstract class JaxenCapabilitiesRule: ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        context.details.addVariant("runtime-dom4j", "runtime") {
            withCapabilities {
                removeCapability("jaxen", "jaxen")
                addCapability("jaxen", "jaxen-dom4j", context.details.id.version)
            }
            withDependencies {
                add("dom4j:dom4j:1.6.1")
            }
        }
    }
}
build.gradle
@CacheableRule
abstract class JaxenCapabilitiesRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        context.details.addVariant("runtime-dom4j", "runtime") {
            withCapabilities {
                removeCapability("jaxen", "jaxen")
                addCapability("jaxen", "jaxen-dom4j", context.details.id.version)
            }
            withDependencies {
                add("dom4j:dom4j:1.6.1")
            }
        }
    }
}
Here, we first use the addVariant(name, baseVariant) method to create an additional variant, which we identify as a feature variant by defining a new capability, jaxen-dom4j, to represent the optional dom4j integration feature of Jaxen.
This works similarly to defining optional feature variants in build scripts.
We then use one of the add methods for adding dependencies to define which dependencies this optional feature needs.
In the build script, we can then add a dependency on the optional feature, and Gradle will use the enriched metadata to discover the correct transitive dependencies.
Example 123. Applying and utilising rules for Jaxen metadata
build.gradle.kts
dependencies {
    components {
        withModule<JaxenDependenciesRule>("jaxen:jaxen")
        withModule<JaxenCapabilitiesRule>("jaxen:jaxen")
    }
    implementation("jaxen:jaxen:1.1.3")
    runtimeOnly("jaxen:jaxen:1.1.3") {
        capabilities { requireCapability("jaxen:jaxen-dom4j") }
    }
}
build.gradle
dependencies {
    components {
        withModule("jaxen:jaxen", JaxenDependenciesRule)
        withModule("jaxen:jaxen", JaxenCapabilitiesRule)
    }
    implementation("jaxen:jaxen:1.1.3")
    runtimeOnly("jaxen:jaxen:1.1.3") {
        capabilities { requireCapability("jaxen:jaxen-dom4j") }
    }
}
Making variants published as classified jars explicit
While in the previous example all variants, "main variants" and optional features, were packaged in one jar file, it is common to publish certain variants as separate files.
In particular, this is done when the variants are mutually exclusive: i.e. they are not feature variants, but different variants offering alternative choices.
One example that all pom-based libraries already have are the runtime and compile variants, where Gradle can choose only one depending on the task at hand.
Another such alternative, often found in the Java ecosystem, is jars targeting different Java versions.
As an example, we look at version 0.7.9 of the asynchronous programming library Quasar, published on Maven central.
If we inspect the directory listing, we discover that a quasar-core-0.7.9-jdk8.jar was published, in addition to quasar-core-0.7.9.jar.
Publishing additional jars with a classifier (here jdk8) is common practice in Maven repositories.
And while both Maven and Gradle allow you to reference such jars by classifier, they are not mentioned at all in the metadata.
Thus, there is no information that these jars exist, nor whether there are any other differences, like different dependencies, between the variants represented by such jars.
In Gradle Module Metadata, this variant information would be present, and for the already published Quasar library, we can add it using the following rule:
Example 124. Rule to add JDK 8 variants to Quasar metadata
build.gradle.kts
@CacheableRule
abstract class QuasarRule: ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        listOf("compile", "runtime").forEach { base ->
            context.details.addVariant("jdk8${base.capitalize()}", base) {
                attributes {
                    attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 8)
                }
                withFiles {
                    removeAllFiles()
                    addFile("${context.details.id.name}-${context.details.id.version}-jdk8.jar")
                }
            }
            context.details.withVariant(base) {
                attributes {
                    attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 7)
                }
            }
        }
    }
}
build.gradle
@CacheableRule
abstract class QuasarRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        ["compile", "runtime"].each { base ->
            context.details.addVariant("jdk8${base.capitalize()}", base) {
                attributes {
                    attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 8)
                }
                withFiles {
                    removeAllFiles()
                    addFile("${context.details.id.name}-${context.details.id.version}-jdk8.jar")
                }
            }
            context.details.withVariant(base) {
                attributes {
                    attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 7)
                }
            }
        }
    }
}
In this case, it is pretty clear that the classifier stands for a target Java version, which is a known Java ecosystem attribute.
Because we also need both a compile and a runtime variant for Java 8, we create two new variants but use the existing compile and runtime variants as base.
This way, all other Java ecosystem attributes are already set correctly and all dependencies are carried over.
Then we set the TARGET_JVM_VERSION_ATTRIBUTE to 8 for both variants, remove any existing file from the new variants with removeAllFiles(), and add the jdk8 jar file with addFile().
The removeAllFiles() is needed because the reference to the main jar, quasar-core-0.7.9.jar, is copied from the corresponding base variant.
We also enrich the existing compile and runtime variants with the information that they target Java 7 — attribute(TARGET_JVM_VERSION_ATTRIBUTE, 7).
Now, we can request Java 8 versions for all of our dependencies on the compile classpath in the build script, and Gradle will automatically select the best fitting variant for each library.
In the case of Quasar, this will now be the jdk8Compile variant exposing quasar-core-0.7.9-jdk8.jar.
Example 125. Applying and utilising rule for Quasar metadata
configurations["compileClasspath"].attributes {
attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 8)
dependencies {
components {
withModule<QuasarRule>("co.paralleluniverse:quasar-core")
implementation("co.paralleluniverse:quasar-core:0.7.9")
configurations.compileClasspath.attributes {
attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 8)
dependencies {
components {
withModule("co.paralleluniverse:quasar-core", QuasarRule)
implementation("co.paralleluniverse:quasar-core:0.7.9")
Making variants encoded in versions explicit
Another solution to publishing multiple alternatives for the same library is the use of a versioning pattern, as done by the popular Guava library.
Here, each new version is published twice, by appending a classifier to the version instead of the jar artifact.
In the case of Guava 28 for example, we can find a 28.0-jre (Java 8) and a 28.0-android (Java 6) version on Maven central.
The advantage of using this pattern when working only with pom metadata is that both variants are discoverable through the version.
The disadvantage is that there is no information about what the different version suffixes mean semantically.
So in the case of a conflict, Gradle would just pick the highest version when comparing the version strings.
Turning this into proper variants is a bit trickier, as Gradle first selects a version of a module and then selects the best fitting variant.
So the concept that variants are encoded in versions is not supported directly.
However, since both variants are always published together, we can assume that the files are physically located in the same repository.
And since they are published with Maven repository conventions, we know the location of each file if we know the module name and version.
We can write the following rule:
Example 126. Rule to add JDK 6 and JDK 8 variants to Guava metadata
build.gradle.kts
@CacheableRule
abstract class GuavaRule: ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        val variantVersion = context.details.id.version
        val version = variantVersion.substring(0, variantVersion.indexOf("-"))
        listOf("compile", "runtime").forEach { base ->
            mapOf(6 to "android", 8 to "jre").forEach { (targetJvmVersion, jarName) ->
                context.details.addVariant("jdk$targetJvmVersion${base.capitalize()}", base) {
                    attributes {
                        attributes.attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, targetJvmVersion)
                    }
                    withFiles {
                        removeAllFiles()
                        addFile("guava-$version-$jarName.jar", "../$version-$jarName/guava-$version-$jarName.jar")
                    }
                }
            }
        }
    }
}
build.gradle
@CacheableRule
abstract class GuavaRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        def variantVersion = context.details.id.version
        def version = variantVersion.substring(0, variantVersion.indexOf("-"))
        ["compile", "runtime"].each { base ->
            [6: "android", 8: "jre"].each { targetJvmVersion, jarName ->
                context.details.addVariant("jdk$targetJvmVersion${base.capitalize()}", base) {
                    attributes {
                        attributes.attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, targetJvmVersion)
                    }
                    withFiles {
                        removeAllFiles()
                        addFile("guava-$version-${jarName}.jar", "../$version-$jarName/guava-$version-${jarName}.jar")
                    }
                }
            }
        }
    }
}
Similar to the previous example, we add runtime and compile variants for both Java versions.
In the withFiles block, however, we now also specify a relative path for the corresponding jar file, which allows Gradle to find the file no matter whether it has selected a -jre or an -android version.
The path is always relative to the location of the metadata (in this case pom) file of the selected module version.
With this rule, both Guava 28 "versions" carry both the jdk6 and jdk8 variants, so it does not matter to which one Gradle resolves.
The variant, and with it the correct jar file, is determined based on the requested TARGET_JVM_VERSION_ATTRIBUTE value.
Example 127. Applying and utilising rule for Guava metadata
configurations["compileClasspath"].attributes {
attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 6)
dependencies {
components {
withModule<GuavaRule>("com.google.guava:guava")
// '23.3-android' and '23.3-jre' are now the same as both offer both variants
implementation("com.google.guava:guava:23.3+")
configurations.compileClasspath.attributes {
    attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 6)
}

dependencies {
    components {
        withModule("com.google.guava:guava", GuavaRule)
    }
    // '23.3-android' and '23.3-jre' are now the same as both offer both variants
    implementation("com.google.guava:guava:23.3+")
}
Adding variants for native jars
Jars with classifiers are also used to separate parts of a library for which multiple alternatives exist, for example native code, from the main artifact.
This is done, for example, by the Lightweight Java Game Library (LWJGL), which publishes several platform-specific jars to Maven Central; in addition to the main jar, exactly one of them is needed at runtime.
It is not possible to convey this information in pom metadata, as there is no concept of putting multiple artifacts in relation through the metadata.
In Gradle Module Metadata, each variant can have arbitrarily many files, and we can leverage that by writing the following rule:
Example 128. Rule to add native runtime variants to LWJGL metadata
@CacheableRule
abstract class LwjglRule: ComponentMetadataRule {
    data class NativeVariant(val os: String, val arch: String, val classifier: String)

    private val nativeVariants = listOf(
        NativeVariant(OperatingSystemFamily.LINUX, "arm32", "natives-linux-arm32"),
        NativeVariant(OperatingSystemFamily.LINUX, "arm64", "natives-linux-arm64"),
        NativeVariant(OperatingSystemFamily.WINDOWS, "x86", "natives-windows-x86"),
        NativeVariant(OperatingSystemFamily.WINDOWS, "x86-64", "natives-windows"),
        NativeVariant(OperatingSystemFamily.MACOS, "x86-64", "natives-macos")
    )

    @get:Inject abstract val objects: ObjectFactory

    override fun execute(context: ComponentMetadataContext) {
        context.details.withVariant("runtime") {
            attributes {
                attributes.attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named("none"))
                attributes.attribute(MachineArchitecture.ARCHITECTURE_ATTRIBUTE, objects.named("none"))
            }
        }
        nativeVariants.forEach { variantDefinition ->
            context.details.addVariant("${variantDefinition.classifier}-runtime", "runtime") {
                attributes {
                    attributes.attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named(variantDefinition.os))
                    attributes.attribute(MachineArchitecture.ARCHITECTURE_ATTRIBUTE, objects.named(variantDefinition.arch))
                }
                withFiles {
                    addFile("${context.details.id.name}-${context.details.id.version}-${variantDefinition.classifier}.jar")
                }
            }
        }
    }
}
@CacheableRule
abstract class LwjglRule implements ComponentMetadataRule {
    private def nativeVariants = [
        [os: OperatingSystemFamily.LINUX, arch: "arm32", classifier: "natives-linux-arm32"],
        [os: OperatingSystemFamily.LINUX, arch: "arm64", classifier: "natives-linux-arm64"],
        [os: OperatingSystemFamily.WINDOWS, arch: "x86", classifier: "natives-windows-x86"],
        [os: OperatingSystemFamily.WINDOWS, arch: "x86-64", classifier: "natives-windows"],
        [os: OperatingSystemFamily.MACOS, arch: "x86-64", classifier: "natives-macos"]
    ]

    @Inject abstract ObjectFactory getObjects()

    void execute(ComponentMetadataContext context) {
        context.details.withVariant("runtime") {
            attributes {
                attributes.attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named(OperatingSystemFamily, "none"))
                attributes.attribute(MachineArchitecture.ARCHITECTURE_ATTRIBUTE, objects.named(MachineArchitecture, "none"))
            }
        }
        nativeVariants.each { variantDefinition ->
            context.details.addVariant("${variantDefinition.classifier}-runtime", "runtime") {
                attributes {
                    attributes.attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named(OperatingSystemFamily, variantDefinition.os))
                    attributes.attribute(MachineArchitecture.ARCHITECTURE_ATTRIBUTE, objects.named(MachineArchitecture, variantDefinition.arch))
                }
                withFiles {
                    addFile("${context.details.id.name}-${context.details.id.version}-${variantDefinition.classifier}.jar")
                }
            }
        }
    }
}
This rule is quite similar to the Quasar library example above.
This time, however, we add five different runtime variants and do not need to change anything for the compile variant.
The runtime variants are all based on the existing runtime variant, and we do not change any existing information.
All Java ecosystem attributes, the dependencies and the main jar file stay part of each of the runtime variants.
We only set the additional attributes OPERATING_SYSTEM_ATTRIBUTE and ARCHITECTURE_ATTRIBUTE, which are defined as part of Gradle's native support.
And we add the corresponding native jar file so that each runtime variant now carries two files: the main jar and the native jar.
In the build script, we can now request a specific variant, and Gradle will fail with a selection error if more information is needed to make a decision.
Example 129. Applying and utilising rule for LWJGL metadata
configurations["runtimeClasspath"].attributes {
attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named("windows"))
dependencies {
components {
withModule<LwjglRule>("org.lwjgl:lwjgl")
implementation("org.lwjgl:lwjgl:3.2.3")
configurations["runtimeClasspath"].attributes {
attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE, objects.named(OperatingSystemFamily, "windows"))
dependencies {
components {
withModule("org.lwjgl:lwjgl", LwjglRule)
implementation("org.lwjgl:lwjgl:3.2.3")
Gradle fails to select a variant because a machine architecture needs to be chosen:
> Could not resolve all files for configuration ':runtimeClasspath'.
   > Could not resolve org.lwjgl:lwjgl:3.2.3.
     Required by:
         project :
      > Cannot choose between the following variants of org.lwjgl:lwjgl:3.2.3:
          - natives-windows-runtime
          - natives-windows-x86-runtime
Making different flavors of a library available through capabilities
Because it is difficult to model optional feature variants as separate jars with pom metadata, libraries sometimes compose different jars with different feature sets.
That is, instead of composing your flavor of the library from different feature variants, you select one of the pre-composed variants (offering everything in one jar).
One such library is the well-known dependency injection framework Guice, published on Maven Central, which offers a complete flavor (the main jar) and a reduced variant without aspect-oriented programming support (guice-4.2.2-no_aop.jar).
That second variant with a classifier is not mentioned in the pom metadata.
With the following rule, we create compile and runtime variants based on that file and make it selectable through a capability named com.google.inject:guice-no_aop.
Example 130. Rule to add no_aop feature variant to Guice metadata
@CacheableRule
abstract class GuiceRule: ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        listOf("compile", "runtime").forEach { base ->
            context.details.addVariant("noAop${base.capitalize()}", base) {
                withCapabilities {
                    addCapability("com.google.inject", "guice-no_aop", context.details.id.version)
                }
                withFiles {
                    removeAllFiles()
                    addFile("guice-${context.details.id.version}-no_aop.jar")
                }
                withDependencies {
                    removeAll { it.group == "aopalliance" }
                }
            }
        }
    }
}
@CacheableRule
abstract class GuiceRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        ["compile", "runtime"].each { base ->
            context.details.addVariant("noAop${base.capitalize()}", base) {
                withCapabilities {
                    addCapability("com.google.inject", "guice-no_aop", context.details.id.version)
                }
                withFiles {
                    removeAllFiles()
                    addFile("guice-${context.details.id.version}-no_aop.jar")
                }
                withDependencies {
                    removeAll { it.group == "aopalliance" }
                }
            }
        }
    }
}
The new variants also have the dependency on the standardized AOP interfaces library aopalliance:aopalliance removed, as this is clearly not needed by these variants.
Again, this is information that cannot be expressed in pom metadata.
We can now select a guice-no_aop variant and will get the correct jar file and the correct dependencies.
Example 131. Applying and utilising rule for Guice metadata
withModule<GuiceRule>("com.google.inject:guice")
implementation("com.google.inject:guice:4.2.2") {
capabilities { requireCapability("com.google.inject:guice-no_aop") }
withModule("com.google.inject:guice", GuiceRule)
implementation("com.google.inject:guice:4.2.2") {
capabilities { requireCapability("com.google.inject:guice-no_aop") }
Adding missing capabilities to detect conflicts
Another usage of capabilities is to express that two different modules, for example log4j and log4j-over-slf4j, provide alternative implementations of the same thing.
By declaring that both provide the same capability, Gradle only accepts one of them in a dependency graph.
This example, and how it can be tackled with a component metadata rule, is described in detail in the feature modelling section.
Making Ivy modules variant-aware
Modules with Ivy metadata do not have variants by default.
However, Ivy configurations can be mapped to variants, as addVariant(name, baseVariantOrConfiguration) accepts any Ivy configuration that was published as base.
This can be used, for example, to define runtime and compile variants.
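A minimal sketch of such a rule is shown below. It assumes the Ivy module was published with 'compile' and 'default' configurations; the rule and variant names are illustrative, not part of any Gradle API:
@CacheableRule
abstract class IvyVariantRule : ComponentMetadataRule {
    @get:Inject abstract val objects: ObjectFactory

    override fun execute(context: ComponentMetadataContext) {
        // Derive an API variant from the published 'compile' configuration
        context.details.addVariant("apiElements", "compile") {
            attributes {
                attribute(Usage.USAGE_ATTRIBUTE, objects.named(Usage::class.java, Usage.JAVA_API))
            }
        }
        // Derive a runtime variant from the published 'default' configuration
        context.details.addVariant("runtimeElements", "default") {
            attributes {
                attribute(Usage.USAGE_ATTRIBUTE, objects.named(Usage::class.java, Usage.JAVA_RUNTIME))
            }
        }
    }
}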
Details of Ivy configurations (e.g. dependencies and files) can also be modified using the withVariant(configurationName) API.
However, modifying attributes or capabilities on Ivy configurations has no effect.
For very Ivy-specific use cases, the component metadata rules API also offers access to other details only found in Ivy metadata.
These are available through the IvyModuleDescriptor interface and can be accessed using getDescriptor(IvyModuleDescriptor) on the ComponentMetadataContext.
Example 132. Ivy component metadata rule
@CacheableRule
abstract class IvyComponentRule : ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        val descriptor = context.getDescriptor(IvyModuleDescriptor::class)
        if (descriptor != null && descriptor.branch == "testing") {
            context.details.status = "rc"
        }
    }
}
@CacheableRule
abstract class IvyComponentRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        def descriptor = context.getDescriptor(IvyModuleDescriptor)
        if (descriptor != null && descriptor.branch == "testing") {
            context.details.status = "rc"
        }
    }
}
Filter using Maven metadata
For Maven-specific use cases, the component metadata rules API also offers access to other details only found in POM metadata.
These are available through the PomModuleDescriptor interface and can be accessed using getDescriptor(PomModuleDescriptor) on the ComponentMetadataContext.
Example 133. Access pom packaging type in component metadata rule
@CacheableRule
abstract class MavenComponentRule : ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        val descriptor = context.getDescriptor(PomModuleDescriptor::class)
        if (descriptor != null && descriptor.packaging == "war") {
            // ...
        }
    }
}
@CacheableRule
abstract class MavenComponentRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        def descriptor = context.getDescriptor(PomModuleDescriptor)
        if (descriptor != null && descriptor.packaging == "war") {
            // ...
        }
    }
}
Modifying metadata on the component level for alignment
While all the examples above made modifications to variants of a component, there is also a limited set of modifications that can be done to the metadata of the component itself.
This information can influence the version selection process for a module during dependency resolution, which is performed before one or multiple variants of a component are selected.
The first API available on the component is belongsTo()
to create virtual platforms for aligning versions of multiple modules without Gradle Module Metadata.
It is explained in detail in the section on aligning versions of modules not published with Gradle.
Modifying metadata on the component level for version selection based on status
Gradle and Gradle Module Metadata also allow attributes to be set on the whole component instead of a single variant.
Each of these attributes carries special semantics, as they influence version selection, which is done before variant selection.
While variant selection can handle any custom attribute, version selection only considers attributes for which specific semantics are implemented.
At the moment, the only attribute with meaning here is org.gradle.status.
It is therefore recommended to only modify this attribute, if any, on the component level.
A dedicated API, setStatus(value), is available for this.
To modify another attribute for all variants of a component, withAllVariants { attributes {} } should be used instead.
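The following sketch illustrates both APIs; the module coordinates and the custom attribute name are placeholders, not real modules or attributes:
dependencies {
    components {
        // "com.example:some-module" is a placeholder, not a real module
        withModule("com.example:some-module") {
            // component-level status, with special semantics for version selection
            status = "milestone"
            // any other attribute should instead be set on all variants
            withAllVariants {
                attributes {
                    attribute(Attribute.of("com.example.my-attribute", String::class.java), "some-value")
                }
            }
        }
    }
}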
A module's status is taken into consideration when a latest version selector is resolved.
Specifically, latest.someStatus will resolve to the highest module version that has status someStatus or a more mature status.
For example, latest.integration will select the highest module version regardless of its status (because integration is the least mature status as explained below), whereas latest.release will select the highest module version with status release.
The interpretation of the status can be influenced by changing a module's status scheme through the setStatusScheme(valueList) API.
This concept models the different levels of maturity that a module transitions through over time with different publications.
The default status scheme, ordered from least to most mature status, is integration, milestone, release.
The org.gradle.status attribute must be set to one of the values in the component's status scheme.
Thus each component always has a status, which is determined from the metadata as follows:
Gradle Module Metadata: the value that was published for the org.gradle.status attribute on the component
Ivy metadata: the status defined in the ivy.xml, defaulting to integration if missing
Pom metadata: integration for modules with a SNAPSHOT version, release for all others
The following example demonstrates latest
selectors based on a custom status scheme declared in a component metadata rule that applies to all modules:
Example 134. Custom status scheme
@CacheableRule
abstract class CustomStatusRule : ComponentMetadataRule {
    override fun execute(context: ComponentMetadataContext) {
        context.details.statusScheme = listOf("nightly", "milestone", "rc", "release")
        if (context.details.status == "integration") {
            context.details.status = "nightly"
        }
    }
}

dependencies {
    components {
        all<CustomStatusRule>()
    }
    implementation("org.apache.commons:commons-lang3:latest.rc")
}
@CacheableRule
abstract class CustomStatusRule implements ComponentMetadataRule {
    void execute(ComponentMetadataContext context) {
        context.details.statusScheme = ["nightly", "milestone", "rc", "release"]
        if (context.details.status == "integration") {
            context.details.status = "nightly"
        }
    }
}

dependencies {
    components {
        all(CustomStatusRule)
    }
    implementation("org.apache.commons:commons-lang3:latest.rc")
}
Compared to the default scheme, the rule inserts a new status rc and replaces integration with nightly.
Existing modules with the status integration are mapped to nightly.
This section covers mechanisms Gradle offers to directly influence the behavior of the dependency resolution engine.
In contrast to the other concepts covered in this chapter, like dependency constraints or component metadata rules, which are all inputs to resolution, the following mechanisms allow you to write rules which are directly injected into the resolution engine.
Because of this, they can be seen as brute-force solutions that may hide future problems (e.g. if new dependencies are added).
Therefore, the general advice is to only use the following mechanisms if other means are not sufficient.
If you are authoring a library, you should always prefer dependency constraints as they are published for your consumers.
Using dependency resolve rules
A dependency resolve rule is executed for each resolved dependency, and offers a powerful API for manipulating a requested dependency prior to that dependency being resolved.
The feature currently offers the ability to change the group, name and/or version of a requested dependency, allowing a dependency to be substituted with a completely different module during resolution.
Dependency resolve rules provide a very powerful way to control the dependency resolution process, and can be used to implement all sorts of advanced patterns in dependency management.
Some of these patterns are outlined below.
For more information and code samples see the ResolutionStrategy class in the API documentation.
Implementing a custom versioning scheme
In some corporate environments, the list of module versions that can be declared in Gradle builds is maintained and audited externally.
Dependency resolve rules provide a neat implementation of this pattern:
In the build script, the developer declares dependencies with the module group and name, but uses a placeholder version, for example: default.
The default version is resolved to a specific version via a dependency resolve rule, which looks up the version in a corporate catalog of approved modules.
This rule implementation can be neatly encapsulated in a corporate plugin, and shared across all builds within the organisation.
Example 135. Using a custom versioning scheme
configurations.all {
    resolutionStrategy.eachDependency {
        if (requested.version == "default") {
            val version = findDefaultVersionInCatalog(requested.group, requested.name)
            useVersion(version.version)
            because(version.because)
        }
    }
}

data class DefaultVersion(val version: String, val because: String)

fun findDefaultVersionInCatalog(group: String, name: String): DefaultVersion {
    // some custom logic that resolves the default version into a specific version
    return DefaultVersion(version = "1.0", because = "tested by QA")
}
configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.version == 'default') {
            def version = findDefaultVersionInCatalog(details.requested.group, details.requested.name)
            details.useVersion version.version
            details.because version.because
        }
    }
}

def findDefaultVersionInCatalog(String group, String name) {
    // some custom logic that resolves the default version into a specific version
    [version: "1.0", because: 'tested by QA']
}
Denying a particular version with a replacement
Dependency resolve rules provide a mechanism for denying a particular version of a dependency and providing a replacement version.
This can be useful if a certain dependency version is broken and should not be used; a dependency resolve rule can then replace that version with a known good one.
One example of a broken module is one that declares a dependency on a library that cannot be found in any of the public repositories, but there are many other reasons why a particular module version is unwanted and a different version is preferred.
In the example below, imagine that version 1.2.1 contains important fixes and should always be used in preference to 1.2.
The rule provided will enforce just this: any time version 1.2 is encountered it will be replaced with 1.2.1.
Note that this is different from a forced version as described above, in that any other versions of this module are not affected.
This means that the 'newest' conflict resolution strategy would still select version 1.3 if this version was also pulled in transitively.
Example 136. Blacklisting a version with a replacement
configurations.all {
    resolutionStrategy.eachDependency {
        if (requested.group == "org.software" && requested.name == "some-library" && requested.version == "1.2") {
            useVersion("1.2.1")
            because("fixes critical bug in 1.2")
        }
    }
}
configurations.all {
    resolutionStrategy.eachDependency { DependencyResolveDetails details ->
        if (details.requested.group == 'org.software' && details.requested.name == 'some-library' && details.requested.version == '1.2') {
            details.useVersion '1.2.1'
            details.because 'fixes critical bug in 1.2'
        }
    }
}
There's a difference compared to using the reject directive of rich version constraints: rich versions will cause the build to fail if a rejected version is found in the graph, or select a non-rejected version when using dynamic dependencies.
Here, we manipulate the requested version in order to select a different one when we find a rejected version.
In other words, this is a fix for rejected versions, while rich version constraints allow declaring the intent (this version should not be used).
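For comparison, the same intent expressed with a rich version constraint might look like the following sketch, reusing the hypothetical org.software:some-library module from the example above:
dependencies {
    implementation("org.software:some-library") {
        version {
            require("1.+")
            // fail resolution (or pick another version of the dynamic selector) instead of silently replacing 1.2
            reject("1.2")
        }
    }
}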
Using module replacement rules
It is preferable to express module conflicts in terms of capability conflicts.
However, if no such rule is declared, or you are working with versions of Gradle which do not support capabilities, Gradle provides tooling to work around these issues.
Module replacement rules allow a build to declare that a legacy library has been replaced by a new one.
A good example of a new library replacing a legacy one is the google-collections -> guava migration.
The team that created google-collections decided to change the module name from com.google.collections:google-collections to com.google.guava:guava.
This is a legitimate scenario in the industry: teams need to be able to change the names of products they maintain, including the module coordinates. Renaming the module coordinates has an impact on conflict resolution.
To explain the impact on conflict resolution, let's consider the google-collections -> guava scenario.
It may happen that both libraries are pulled into the same dependency graph.
For example, our project depends on guava but some of our dependencies pull in a legacy version of google-collections.
This can cause runtime errors, for example during test or application execution.
Gradle does not automatically resolve the google-collections -> guava conflict because it is not considered a version conflict.
That is because the module coordinates for both libraries are completely different, and conflict resolution is activated when group and module coordinates are the same but there are different versions available in the dependency graph (for more info, refer to the section on conflict resolution).
Traditional remedies to this problem are:
Declare an exclusion rule to avoid pulling google-collections into the graph. It is probably the most popular approach.
Avoid dependencies that pull in legacy libraries.
Upgrade the dependency version if the new version no longer pulls in a legacy library.
Downgrade to google-collections. This is not recommended, just mentioned for completeness.
Traditional approaches work but they are not general enough.
For example, an organisation may want to resolve the google-collections -> guava conflict in all projects.
It is possible to declare that a certain module has been replaced by another.
This enables organisations to include the information about module replacement in the corporate plugin suite and resolve the problem holistically for all Gradle-powered projects in the enterprise.
Example 137. Declaring a module replacement
modules {
    module("com.google.collections:google-collections") {
        replacedBy("com.google.guava:guava", "google-collections is now part of Guava")
    }
}

modules {
    module("com.google.collections:google-collections") {
        replacedBy("com.google.guava:guava", "google-collections is now part of Guava")
    }
}
For more examples and detailed API, refer to the DSL reference for ComponentModuleMetadataHandler.
What happens when we declare that google-collections is replaced by guava?
Gradle can use this information for conflict resolution. Gradle will consider every version of guava newer/better than any version of google-collections.
Also, Gradle will ensure that only the guava jar is present in the classpath / resolved file list.
Note that if only google-collections appears in the dependency graph (e.g. no guava), Gradle will not eagerly replace it with guava.
Module replacement is information that Gradle uses for resolving conflicts.
If there is no conflict (e.g. only google-collections or only guava in the graph), the replacement information is not used.
Currently it is not possible to declare that a given module is replaced by a set of modules.
However, it is possible to declare that multiple modules are replaced by a single module.
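For instance, a sketch of several legacy coordinates replaced by one module; the second legacy coordinate below is made up for illustration:
dependencies {
    modules {
        // both legacy modules are replaced by the same target module
        module("com.google.collections:google-collections") {
            replacedBy("com.google.guava:guava", "google-collections is now part of Guava")
        }
        module("com.example:legacy-collections") {
            replacedBy("com.google.guava:guava", "replaced by Guava as well")
        }
    }
}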
Using dependency substitution rules
Dependency substitution rules work similarly to dependency resolve rules.
In fact, many capabilities of dependency resolve rules can be implemented with dependency substitution rules.
They allow project and module dependencies to be transparently substituted with specified replacements.
Unlike dependency resolve rules, dependency substitution rules allow project and module dependencies to be substituted interchangeably.
Adding a dependency substitution rule to a configuration changes the timing of when that configuration is resolved.
Instead of being resolved on first use, the configuration is instead resolved when the task graph is being constructed.
This can have unexpected consequences if the configuration is being further modified during task execution, or if the configuration relies on modules that are published during execution of another task.
To explain:
A Configuration can be declared as an input to any Task, and that configuration can include project dependencies when it is resolved.
If a project dependency is an input to a Task (via a configuration), then tasks to build the project artifacts must be added to the task dependencies.
In order to determine the project dependencies that are inputs to a task, Gradle needs to resolve the Configuration inputs.
Because the Gradle task graph is fixed once task execution has commenced, Gradle needs to perform this resolution prior to executing any tasks.
In the absence of dependency substitution rules, Gradle knows that an external module dependency will never transitively reference a project dependency.
This makes it easy to determine the full set of project dependencies for a configuration through simple graph traversal.
With this functionality, Gradle can no longer make this assumption, and must perform a full resolve in order to determine the project dependencies.
Substituting an external module dependency with a project dependency
One use case for dependency substitution is to use a locally developed version of a module in place of one that is downloaded from an external repository.
This could be useful for testing a local, patched version of a dependency.
The module to be replaced can be declared with or without a version specified.
Example 138. Substituting a module with a project
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(module("org.utils:api"))
            .using(project(":api")).because("we work with the unreleased development version")
        substitute(module("org.utils:util:2.5")).using(project(":util"))
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute module("org.utils:api") using project(":api") because "we work with the unreleased development version"
        substitute module("org.utils:util:2.5") using project(":util")
    }
}
Note that a project that is substituted must be included in the multi-project build (via settings.gradle).
Dependency substitution rules take care of replacing the module dependency with the project dependency and wiring up any task dependencies, but do not implicitly include the project in the build.
Substituting a project dependency with a module replacement
Another way to use substitution rules is to replace a project dependency with a module in a multi-project build.
This can be useful to speed up development with a large multi-project build, by allowing a subset of the project dependencies to be downloaded from a repository rather than being built.
The module to be used as a replacement must be declared with a version specified.
Example 139. Substituting a project with a module
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(project(":api"))
            .using(module("org.utils:api:1.3")).because("we use a stable version of org.utils:api")
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute project(":api") using module("org.utils:api:1.3") because "we use a stable version of org.utils:api"
    }
}
When a project dependency has been replaced with a module dependency, that project is still included in the overall multi-project build.
However, tasks to build the replaced dependency will not be executed in order to resolve the depending Configuration.
Conditionally substituting a dependency
A common use case for dependency substitution is to allow more flexible assembly of sub-projects within a multi-project build.
This can be useful for developing a local, patched version of an external dependency or for building a subset of the modules within a large multi-project build.
The following example uses a dependency substitution rule to replace any module dependency with the group org.example
, but only if a local project matching the dependency name can be located.
Example 140. Conditionally substituting a dependency
configurations.all {
    resolutionStrategy.dependencySubstitution.all {
        requested.let {
            if (it is ModuleComponentSelector && it.group == "org.example") {
                val targetProject = findProject(":${it.module}")
                if (targetProject != null) {
                    useTarget(targetProject)
                }
            }
        }
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution.all { DependencySubstitution dependency ->
        if (dependency.requested instanceof ModuleComponentSelector && dependency.requested.group == "org.example") {
            def targetProject = findProject(":${dependency.requested.module}")
            if (targetProject != null) {
                dependency.useTarget targetProject
            }
        }
    }
}
Note that a project that is substituted must be included in the multi-project build (via settings.gradle).
Dependency substitution rules take care of replacing the module dependency with the project dependency, but do not implicitly include the project in the build.
Substituting a dependency with another variant
Gradle's dependency management engine is variant-aware, meaning that for a single component, the engine may select different artifacts and transitive dependencies.
What to select is determined by the attributes of the consumer configuration and the attributes of the variants found on the producer side.
It is, however, possible that some specific dependencies override attributes from the configuration itself.
This is typically the case when using the Java Platform plugin: this plugin builds a special kind of component called a "platform", which can be addressed by setting the component category attribute to platform, in contrast to typical dependencies, which target libraries.
Therefore, you may face situations where you want to substitute a platform dependency with a regular dependency, or the other way around.
Substituting a dependency with attributes
Let’s imagine that you want to substitute a platform dependency with a regular dependency.
This means that the library you are consuming declared something like this:
Example 141. An incorrect dependency on a platform
dependencies {
    // This is a platform dependency but you want the library
    implementation(platform("com.google.guava:guava:28.2-jre"))
}

dependencies {
    // This is a platform dependency but you want the library
    implementation platform('com.google.guava:guava:28.2-jre')
}
The platform keyword is actually a short-hand notation for a dependency with attributes.
If we want to substitute this dependency with a regular dependency, then we need to select precisely the dependencies which have the platform attribute.
This can be done by using a substitution rule:
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(platform(module("com.google.guava:guava:28.2-jre")))
            .using(module("com.google.guava:guava:28.2-jre"))
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(platform(module('com.google.guava:guava:28.2-jre'))).
            using module('com.google.guava:guava:28.2-jre')
    }
}
The same rule without the platform keyword would try to substitute regular dependencies with a regular dependency, which is not what you want, so it's important to understand that substitution rules apply to a dependency specification: they match the requested dependency (substitute XXX) with a substitute (using YYY).
You can have attributes on both the requested dependency and the substitute, and the substitution is not limited to platform: you can actually specify the whole set of dependency attributes using the variant notation.
The following rule is strictly equivalent to the rule above:
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(variant(module("com.google.guava:guava:28.2-jre")) {
            attributes {
                attribute(Category.CATEGORY_ATTRIBUTE, objects.named(Category.REGULAR_PLATFORM))
            }
        }).using(module("com.google.guava:guava:28.2-jre"))
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute variant(module('com.google.guava:guava:28.2-jre')) {
            attributes {
                attribute(Category.CATEGORY_ATTRIBUTE, objects.named(Category, Category.REGULAR_PLATFORM))
            }
        } using module('com.google.guava:guava:28.2-jre')
    }
}
In composite builds, the rule that you have to match the exact requested dependency attributes is not applied: when using composites, Gradle will automatically match the requested attributes.
In other words, it is implicit that if you include another build, you are substituting all variants of the substituted module with an equivalent variant in the included build.
Substituting a dependency with a dependency with capabilities
Similar to attribute substitution, Gradle lets you substitute a dependency with or without capabilities with another dependency with or without capabilities.
For example, let’s imagine that you need to substitute a regular dependency with its test fixtures instead.
You can achieve this by using the following dependency substitution rule:
Example 144. Substitute a dependency with its test fixtures
configurations.testCompileClasspath {
    resolutionStrategy.dependencySubstitution {
        substitute(module("com.acme:lib:1.0")).using(variant(module("com.acme:lib:1.0")) {
            capabilities {
                requireCapability("com.acme:lib-test-fixtures")
            }
        })
    }
}
configurations.testCompileClasspath {
    resolutionStrategy.dependencySubstitution {
        substitute(module('com.acme:lib:1.0'))
            .using variant(module('com.acme:lib:1.0')) {
                capabilities {
                    requireCapability('com.acme:lib-test-fixtures')
                }
            }
    }
}
Capabilities which are declared in a substitution rule on the requested dependency constitute part of the dependency match specification, and therefore dependencies which do not require the capabilities will not be matched.
Please refer to the Substitution DSL API docs for a complete reference of the variant substitution API.
Substituting a dependency with a classifier or artifact
While external modules are generally addressed via their group/artifact/version coordinates, such modules are commonly published with additional artifacts that you may want to use in place of the main artifact.
This is typically the case for classified artifacts, but you may also need to select an artifact with a different file type or extension.
Gradle discourages the use of classifiers in dependencies and prefers to model such artifacts as additional variants of a module.
There are many advantages to using variants instead of classified artifacts, including, but not limited to, a different set of dependencies for those artifacts.
However, in order to help bridge the two models, Gradle provides means to change or remove a classifier in a substitution rule.
Example 145. Dependencies which will lead to a resolution error
dependencies {
    implementation("com.google.guava:guava:28.2-jre")
    implementation("co.paralleluniverse:quasar-core:0.8.0")
    implementation(project(":lib"))
}

dependencies {
    implementation 'com.google.guava:guava:28.2-jre'
    implementation 'co.paralleluniverse:quasar-core:0.8.0'
    implementation project(':lib')
}
Execution failed for task ':resolve'.
> Could not resolve all files for configuration ':runtimeClasspath'.
   > Could not find quasar-core-0.8.0-jdk8.jar (co.paralleluniverse:quasar-core:0.8.0).
     Searched in the following locations:
         https://repo1.maven.org/maven2/co/paralleluniverse/quasar-core/0.8.0/quasar-core-0.8.0-jdk8.jar
That's because there's a dependency on another project, lib, which itself depends on a different version of quasar-core:
Example 146. A "classified" dependency
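Based on the description below, lib declares its dependency with the jdk8 classifier, roughly like this (a sketch using the standard group:name:version:classifier notation):
dependencies {
    // the jdk8 classifier is requested explicitly, but only exists for the 0.7.x releases
    implementation("co.paralleluniverse:quasar-core:0.7.10:jdk8")
}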
What happens is that Gradle performs conflict resolution between quasar-core 0.8.0 and quasar-core 0.7.10.
Because 0.8.0 is higher, we select this version, but the dependency in lib has a classifier, jdk8, and this classifier doesn't exist anymore in release 0.8.0.
To fix this problem, you can ask Gradle to resolve both dependencies without classifier:
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute(module("co.paralleluniverse:quasar-core"))
            .using(module("co.paralleluniverse:quasar-core:0.8.0"))
            .withoutClassifier()
    }
}
configurations.all {
    resolutionStrategy.dependencySubstitution {
        substitute module('co.paralleluniverse:quasar-core') using module('co.paralleluniverse:quasar-core:0.8.0') withoutClassifier()
    }
}
This rule effectively replaces any dependency on quasar-core found in the graph with a dependency without classifier.
Alternatively, it's possible to select a dependency with a specific classifier or, for more specific use cases, substitute with a very specific artifact (type, extension and classifier).
For more information, please refer to the API documentation for DependencySubstitutions.
Disabling transitive resolution
By default Gradle resolves all transitive dependencies specified by the dependency metadata.
Sometimes this behavior may not be desirable e.g. if the metadata is incorrect or defines a large graph of transitive dependencies.
You can tell Gradle to disable transitive dependency management for a dependency by setting ModuleDependency.setTransitive(boolean) to false
.
As a result only the main artifact will be resolved for the declared dependency.
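For example, a minimal sketch using Guava as a stand-in dependency:
dependencies {
    implementation("com.google.guava:guava:23.0") {
        // only guava's own jar will be resolved, none of its transitive dependencies
        isTransitive = false
    }
}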
A project can decide to disable transitive dependency resolution completely.
You either don’t want to rely on the metadata published to the consumed repositories or you want to gain full control over the dependencies in your graph.
For more information, see Configuration.setTransitive(boolean).
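A minimal sketch of disabling transitive resolution for every configuration of a project:
configurations.all {
    // no transitive dependencies will be resolved for any configuration
    isTransitive = false
}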
Changing configuration dependencies prior to resolution
At times, a plugin may want to modify the dependencies of a configuration before it is resolved.
The withDependencies method permits dependencies to be added, removed or modified programmatically.
Example 150. Modifying dependencies on a configuration
create("implementation") {
withDependencies {
val dep = this.find { it.name == "to-modify" } as ExternalModuleDependency
dep.version {
strictly("1.2")
configurations {
    implementation {
        withDependencies { DependencySet dependencies ->
            ExternalModuleDependency dep = dependencies.find { it.name == 'to-modify' } as ExternalModuleDependency
            dep.version {
                strictly "1.2"
            }
        }
    }
}
Setting default configuration dependencies
A configuration can be configured with default dependencies to be used if no dependencies are explicitly set for the configuration.
A primary use case of this functionality is for developing plugins that make use of versioned tools that the user might override.
By specifying default dependencies, the plugin can use a default version of the tool only if the user has not specified a particular version to use.
Example 151. Specifying default dependencies on a configuration
create("pluginTool") {
defaultDependencies {
add(project.dependencies.create("org.gradle:my-util:1.0"))
configurations {
    pluginTool {
        defaultDependencies { dependencies ->
            dependencies.add(project.dependencies.create("org.gradle:my-util:1.0"))
        }
    }
}
Excluding a dependency from a configuration completely
Similar to excluding a dependency in a dependency declaration, you can exclude a transitive dependency for a particular configuration completely by using Configuration.exclude(java.util.Map).
This will automatically exclude the transitive dependency for all dependencies declared on the configuration.
configurations {
    "implementation" {
        exclude(group = "commons-collections", module = "commons-collections")
    }
}

dependencies {
    implementation("commons-beanutils:commons-beanutils:1.9.4")
    implementation("com.opencsv:opencsv:4.6")
}
configurations {
    implementation {
        exclude group: 'commons-collections', module: 'commons-collections'
    }
}

dependencies {
    implementation 'commons-beanutils:commons-beanutils:1.9.4'
    implementation 'com.opencsv:opencsv:4.6'
}
Matching dependencies to repositories
Gradle exposes an API to declare what a repository may or may not contain.
This feature offers fine-grained control over which repository serves which artifacts, which can be one way of controlling the source of dependencies.
Head over to the section on repository content filtering to learn more about this feature.
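As a quick sketch of what this looks like (the internal repository URL and the group are hypothetical):
repositories {
    maven {
        url = uri("https://repo.example.com/maven2") // hypothetical internal repository
        content {
            // this repository only serves this group
            includeGroup("com.example")
        }
    }
    mavenCentral {
        content {
            // this group must always come from the repository above
            excludeGroup("com.example")
        }
    }
}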
Enabling Ivy dynamic resolve mode
Gradle's Ivy repository implementations support the equivalent to Ivy's dynamic resolve mode.
Normally, Gradle will use the rev attribute for each dependency definition included in an ivy.xml file.
In dynamic resolve mode, Gradle will instead prefer the revConstraint attribute over the rev attribute for a given dependency definition.
If the revConstraint attribute is not present, the rev attribute is used instead.
To enable dynamic resolve mode, you need to set the appropriate option on the repository definition.
A couple of examples are shown below.
Note that dynamic resolve mode is only available for Gradle's Ivy repositories.
It is not available for Maven repositories, or custom Ivy DependencyResolver implementations.
Example 153. Enabling dynamic resolve mode
// Can use a rule instead to enable (or disable) dynamic resolve mode for all repositories
repositories.withType<IvyArtifactRepository> {
    resolve.isDynamicMode = true
}
// Can use a rule instead to enable (or disable) dynamic resolve mode for all repositories
repositories.withType(IvyArtifactRepository) {
    resolve.dynamicMode = true
}
Capabilities as first-level concept
Components provide a number of features which are often orthogonal to the software architecture used to provide those features.
For example, a library may include several features in a single artifact.
However, such a library would be published at single GAV (group, artifact and version) coordinates.
This means that different "features" of a component may potentially co-exist at a single set of coordinates.
With Gradle it becomes interesting to explicitly declare what features a component provides.
For this, Gradle provides the concept of capability.
A feature is often built by combining different capabilities.
In an ideal world, components shouldn't declare dependencies on explicit GAVs, but rather express their requirements in terms of capabilities.
Declaring capabilities for external modules
It's worth noting that Gradle supports declaring capabilities not only for components you build, but also for external components in case they don't declare them themselves.
For example, if your build file contains the following dependencies:
dependencies {
    // This dependency will bring log4j:log4j transitively
    implementation("org.apache.zookeeper:zookeeper:3.4.9")
    // We use log4j over slf4j
    implementation("org.slf4j:log4j-over-slf4j:1.7.10")
}

dependencies {
    // This dependency will bring log4j:log4j transitively
    implementation 'org.apache.zookeeper:zookeeper:3.4.9'
    // We use log4j over slf4j
    implementation 'org.slf4j:log4j-over-slf4j:1.7.10'
}
As is, it's pretty hard to figure out that you will end up with two logging frameworks on the classpath.
In fact, zookeeper will bring in log4j, whereas what we want to use is log4j-over-slf4j.
We can preemptively detect the conflict by adding a rule which will declare that both logging frameworks provide the same capability:
dependencies {
    // Activate the "LoggingCapability" rule
    components.all(LoggingCapability::class.java)
}

class LoggingCapability : ComponentMetadataRule {
    val loggingModules = setOf("log4j", "log4j-over-slf4j")

    override fun execute(context: ComponentMetadataContext) = context.details.run {
        if (loggingModules.contains(id.name)) {
            allVariants {
                withCapabilities {
                    // Declare that both log4j and log4j-over-slf4j provide the same capability
                    addCapability("log4j", "log4j", id.version)
                }
            }
        }
    }
}
@CompileStatic
class LoggingCapability implements ComponentMetadataRule {
    final static Set<String> LOGGING_MODULES = ["log4j", "log4j-over-slf4j"] as Set<String>

    void execute(ComponentMetadataContext context) {
        context.details.with {
            if (LOGGING_MODULES.contains(id.name)) {
                allVariants {
                    it.withCapabilities {
                        // Declare that both log4j and log4j-over-slf4j provide the same capability
                        it.addCapability("log4j", "log4j", id.version)
                    }
                }
            }
        }
    }
}
> Could not resolve all files for configuration ':compileClasspath'.
   > Could not resolve org.slf4j:log4j-over-slf4j:1.7.10.
     Required by:
         project :
      > Module 'org.slf4j:log4j-over-slf4j' has been rejected:
           Cannot select module with conflict on capability 'log4j:log4j:1.7.10' also provided by [log4j:log4j:1.2.16(compile)]
   > Could not resolve log4j:log4j:1.2.16.
     Required by:
         project : > org.apache.zookeeper:zookeeper:3.4.9
      > Module 'log4j:log4j' has been rejected:
           Cannot select module with conflict on capability 'log4j:log4j:1.2.16' also provided by [org.slf4j:log4j-over-slf4j:1.7.10(compile)]
See the capabilities section of the documentation to figure out how to fix capability conflicts.
Declaring additional capabilities for a local component
All components have an implicit capability corresponding to the same GAV coordinates as the component.
However, it is also possible to declare additional explicit capabilities for a component.
This is convenient whenever a library published at different GAV coordinates is an alternate implementation of the same API:
Example 156. Declaring capabilities of a component
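Based on the publication warning shown below, the declaration looks roughly like this (a reconstruction assuming the apiElements and runtimeElements configurations created by the java-library plugin):
configurations {
    named("apiElements") {
        outgoing {
            // the implicit capability matching the component's own GAV coordinates
            capability("com.acme:my-library:1.0")
            // an additional, explicit capability
            capability("com.other:module:1.1")
        }
    }
    named("runtimeElements") {
        outgoing {
            capability("com.acme:my-library:1.0")
            capability("com.other:module:1.1")
        }
    }
}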
Capabilities must be attached to outgoing configurations, which are consumable configurations of a component.
This example shows that we declare two capabilities:
1. the capability corresponding to the library itself, matching its GAV coordinates (com.acme:my-library:1.0)
2. an additional capability (com.other:module:1.1)
It's worth noting we need to do 1. because as soon as you start declaring explicit capabilities, then all capabilities need to be declared, including the implicit one.
The second capability can be specific to this library, or it can correspond to a capability provided by an external component.
In that case, if com.other:module appears in the same dependency graph, the build will fail and consumers will have to choose which module to use.
Capabilities are published to Gradle Module Metadata.
However, they have no equivalent in POM or Ivy metadata files.
As a consequence, when publishing such a component, Gradle will warn you that this feature is only for Gradle consumers:
Maven publication 'maven' contains dependencies that cannot be represented in a published pom file.
  - Declares capability com.acme:my-library:1.0
  - Declares capability com.other:module:1.1
Gradle supports the concept of features: it’s often the case that a single library can be split up into multiple related yet distinct libraries, where each feature can be used alongside the main library.
Features allow a component to expose multiple related libraries, each of which can declare its own dependencies.
These libraries are exposed as variants, similar to how the main library exposes variants for its API and runtime.
This allows for a number of different scenarios (list is non-exhaustive):
a main library is built with support for different mutually-exclusive implementations of runtime features; the user must choose one, and only one, implementation of each such feature
a main library is built with support for optional runtime features, each of which requires a different set of dependencies
a main library comes with supplementary features like test fixtures
a main library comes with a main artifact, and enabling an additional feature requires additional artifacts
Selection of features via capabilities
Declaring a dependency on a component is usually done by providing a set of coordinates (group, artifact, version, also known as GAV coordinates).
This allows the engine to determine the component we're looking for, but such a component may provide different variants.
A variant is typically chosen based on the usage. For example, we might choose a different variant for compiling against a component (in which case we need the API of the component) or when executing code (in which case we need the runtime of the component).
All variants of a component provide a number of capabilities, which are denoted similarly using GAV coordinates.
By default, a variant provides a capability corresponding to the GAV coordinates of its component.
No two variants in a dependency graph can provide the same capability.
Multiple variants of a single component may be selected as long as they provide different capabilities.
A typical component will only provide variants with the default capability.
A Java library, for example, exposes two variants (API and runtime) which provide the same capability.
As a consequence, it is an error to have both the API and runtime of a single component in a dependency graph.
However, imagine that you need the runtime and the test fixtures runtime of a component.
This is allowed as long as the runtime and test fixtures runtime variants of the library declare different capabilities.
If we do so, a consumer would then have to declare two dependencies: one on the "regular" runtime of the component, and one on the test fixtures runtime by requiring its capability, as sketched below.
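Reusing the com.acme:lib test fixtures capability from the substitution example above, such a pair of declarations could look like this:
dependencies {
    // the "regular" runtime of the component
    testImplementation("com.acme:lib:1.0")
    // the test fixtures runtime, selected via its capability
    testImplementation("com.acme:lib:1.0") {
        capabilities {
            requireCapability("com.acme:lib-test-fixtures")
        }
    }
}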
Features can be declared by applying the java-library plugin.
The following code illustrates how to declare a feature named mongodbSupport:
Example 157. Registering a feature
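A minimal sketch of the registration, assuming a dedicated source set backs the feature (as in the javadoc/sources example further below):
sourceSets {
    create("mongodbSupport")
}

java {
    registerFeature("mongodbSupport") {
        usingSourceSet(sourceSets["mongodbSupport"])
    }
}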
Gradle will automatically set up a number of things for you, in a very similar way to how the Java Library Plugin sets up configurations.
Dependency scope configurations are created in the same manner as for the main feature:
the configuration mongodbSupportImplementation, used to declare implementation dependencies for this feature
the configuration mongodbSupportRuntimeOnly, used to declare runtime-only dependencies for this feature
the configuration mongodbSupportCompileOnly, used to declare compile-only dependencies for this feature
the configuration mongodbSupportCompileOnlyApi, used to declare compile-only API dependencies for this feature
the configuration mongodbSupportApiElements, used by consumers to fetch the artifacts and API dependencies of this feature
the configuration mongodbSupportRuntimeElements, used by consumers to fetch the artifacts and runtime dependencies of this feature
Most users will only need to care about the dependency scope configurations, to declare the specific dependencies of this feature:
Example 158. Declaring dependencies of a feature
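For example, declaring a MongoDB driver dependency for the feature only (the driver version is reused from the mutually-exclusive variants example further below):
dependencies {
    // only the mongodbSupport feature depends on the MongoDB driver
    "mongodbSupportImplementation"("org.mongodb:mongodb-driver-sync:3.9.1")
}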
By convention, Gradle maps the feature name to a capability whose group and version are the same as the group and version of the main component, respectively, but whose name is the main component name followed by a - followed by the kebab-cased feature name.
For example, if the component's group is org.gradle.demo, its name is provider, its version is 1.0, and the feature is named mongodbSupport, the feature's variants will have the org.gradle.demo:provider-mongodb-support:1.0 capability.
If you choose the capability name yourself or add more capabilities to a variant, it is recommended to follow the same convention.
How features are published depends on the metadata format used:
using Gradle Module Metadata, everything is published and consumers will get the full benefit of features
using POM metadata (Maven), features are published as optional dependencies and artifacts of features are published with different classifiers
using Ivy metadata, features are published as extra configurations, which are not extended by the default configuration
Publishing features is supported using the maven-publish and ivy-publish plugins only.
The Java Library Plugin will take care of registering the additional variants for you, so there’s no additional configuration required, only the regular publications:
Example 159. Publishing a component with features
publishing {
    publications {
        create("myLibrary", MavenPublication::class.java) {
            from(components["java"])
        }
    }
}
Adding javadoc and sources JARs
Similar to the main Javadoc and sources JARs, you can configure the added feature so that it produces JARs for the Javadoc and sources.
Example 160. Producing javadoc and sources JARs for features
registerFeature("mongodbSupport") {
usingSourceSet(sourceSets["mongodbSupport"])
withJavadocJar()
withSourcesJar()
registerFeature('mongodbSupport') {
usingSourceSet(sourceSets.mongodbSupport)
withJavadocJar()
withSourcesJar()
A consumer can specify that it needs a specific feature of a producer by declaring required capabilities.
For example, if a producer declares a "MySQL support" feature like this:
Example 161. A library declaring a feature to support MySQL
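The producer side might look like the following sketch, consistent with the mutually-exclusive variants example further below:
sourceSets {
    create("mysqlSupport")
}

java {
    registerFeature("mysqlSupport") {
        usingSourceSet(sourceSets["mysqlSupport"])
    }
}

dependencies {
    "mysqlSupportImplementation"("mysql:mysql-connector-java:8.0.14")
}

A consumer can then require the feature's capability: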
dependencies {
    // This project requires the main producer component
    implementation(project(":producer"))
    // But we also want to use its MySQL support
    runtimeOnly(project(":producer")) {
        capabilities {
            requireCapability("org.gradle.demo:producer-mysql-support")
        }
    }
}
dependencies {
    // This project requires the main producer component
    implementation(project(":producer"))
    // But we also want to use its MySQL support
    runtimeOnly(project(":producer")) {
        capabilities {
            requireCapability("org.gradle.demo:producer-mysql-support")
        }
    }
}
This will automatically add the mysql-connector-java dependency to the runtime classpath.
If there were more than one dependency, all of them would be brought in, meaning that a feature can be used to group together dependencies which contribute to that feature.
Similarly, if an external library with features was published with Gradle Module Metadata, it is possible to depend on a feature provided by that library:
dependencies {
    // This project requires the main producer component
    implementation("org.gradle.demo:producer:1.0")
    // But we also want to use its MongoDB support
    runtimeOnly("org.gradle.demo:producer:1.0") {
        capabilities {
            requireCapability("org.gradle.demo:producer-mongodb-support")
        }
    }
}
dependencies {
    // This project requires the main producer component
    implementation('org.gradle.demo:producer:1.0')
    // But we also want to use its MongoDB support
    runtimeOnly('org.gradle.demo:producer:1.0') {
        capabilities {
            requireCapability("org.gradle.demo:producer-mongodb-support")
        }
    }
}
Handling mutually exclusive variants
The main advantage of using capabilities as a way to handle features is that you can precisely handle compatibility of variants.
The rule is simple: no two variants in a dependency graph can provide the same capability.
We can leverage this to ensure that Gradle fails whenever the user mis-configures dependencies.
Consider a situation where your library supports MySQL, Postgres and MongoDB, but only one of them may be chosen at a time.
We can model this restriction by ensuring each feature also provides the same capability, thus making it impossible for these features to be used together in the same graph.
registerFeature("mysqlSupport") {
usingSourceSet(sourceSets["mysqlSupport"])
capability("org.gradle.demo", "producer-db-support", "1.0")
capability("org.gradle.demo", "producer-mysql-support", "1.0")
registerFeature("postgresSupport") {
usingSourceSet(sourceSets["postgresSupport"])
capability("org.gradle.demo", "producer-db-support", "1.0")
capability("org.gradle.demo", "producer-postgres-support", "1.0")
registerFeature("mongoSupport") {
usingSourceSet(sourceSets["mongoSupport"])
capability("org.gradle.demo", "producer-db-support", "1.0")
capability("org.gradle.demo", "producer-mongo-support", "1.0")
dependencies {
"mysqlSupportImplementation"("mysql:mysql-connector-java:8.0.14")
"postgresSupportImplementation"("org.postgresql:postgresql:42.2.5")
"mongoSupportImplementation"("org.mongodb:mongodb-driver-sync:3.9.1")
registerFeature('mysqlSupport') {
    usingSourceSet(sourceSets.mysqlSupport)
    capability('org.gradle.demo', 'producer-db-support', '1.0')
    capability('org.gradle.demo', 'producer-mysql-support', '1.0')
}
registerFeature('postgresSupport') {
    usingSourceSet(sourceSets.postgresSupport)
    capability('org.gradle.demo', 'producer-db-support', '1.0')
    capability('org.gradle.demo', 'producer-postgres-support', '1.0')
}
registerFeature('mongoSupport') {
    usingSourceSet(sourceSets.mongoSupport)
    capability('org.gradle.demo', 'producer-db-support', '1.0')
    capability('org.gradle.demo', 'producer-mongo-support', '1.0')
}

dependencies {
    mysqlSupportImplementation 'mysql:mysql-connector-java:8.0.14'
    postgresSupportImplementation 'org.postgresql:postgresql:42.2.5'
    mongoSupportImplementation 'org.mongodb:mongodb-driver-sync:3.9.1'
}
Then if the consumer tries to get both the postgres-support and mysql-support features (this also works transitively):
dependencies {
    // This project requires the main producer component
    implementation(project(":producer"))
    // Let's try to ask for both MySQL and Postgres support
    runtimeOnly(project(":producer")) {
        capabilities {
            requireCapability("org.gradle.demo:producer-mysql-support")
        }
    }
    runtimeOnly(project(":producer")) {
        capabilities {
            requireCapability("org.gradle.demo:producer-postgres-support")
        }
    }
}
dependencies {
    // This project requires the main producer component
    implementation(project(':producer'))

    // Let's try to ask for both MySQL and Postgres support
    runtimeOnly(project(':producer')) {
        capabilities {
            requireCapability("org.gradle.demo:producer-mysql-support")
        }
    }

    runtimeOnly(project(':producer')) {
        capabilities {
            requireCapability("org.gradle.demo:producer-postgres-support")
        }
    }
}
Dependency resolution fails with an error similar to:

Cannot choose between
   org.gradle.demo:producer:1.0 variant mysqlSupportRuntimeElements and
   org.gradle.demo:producer:1.0 variant postgresSupportRuntimeElements
   because they provide the same capability: org.gradle.demo:producer-db-support:1.0
In other dependency management engines, like Apache Maven™, dependencies and artifacts are bound to a component that is published at particular GAV (group-artifact-version) coordinates.
The set of dependencies for this component is always the same, regardless of which artifact is used from the component.
If the component has multiple artifacts, each one is identified by a cumbersome classifier.
There are no common semantics associated with classifiers, which makes it difficult to guarantee a globally consistent dependency graph.
This means that nothing prevents multiple artifacts for a single component (e.g., jdk7 and jdk8 classifiers) from appearing on a classpath and causing hard-to-diagnose problems.
(Figure: the Maven component model)
In addition to a component, Gradle has the concept of variants of a component.
Variants correspond to the different ways a component can be used, such as Java compilation, native linking, or documentation.
Artifacts are attached to a variant and each variant can have a different set of dependencies.
How does Gradle know which variant to choose when there’s more than one?
Variants are matched by use of attributes, which provide semantics to the variants and help the engine to produce a consistent resolution result.
Gradle differentiates between two kinds of components: local components (such as projects), built from sources, and external components, published to repositories.
For local components, variants are mapped to consumable configurations.
For external components, variants are defined by published Gradle Module Metadata or are derived from Ivy/Maven metadata.
Variants vs configurations
Variants and configurations are sometimes used interchangeably in the documentation, DSL or API for historical reasons.
All components provide variants and those variants may be backed by a consumable configuration.
Not all configurations are variants because they may be used for declaring or resolving dependencies.
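To make the distinction concrete, here is a minimal Kotlin DSL sketch of the two roles; the configuration names exampleElements and examplePath are hypothetical:

val exampleElements by configurations.creating {
    // A consumable configuration: it backs a variant that producers expose to consumers
    isCanBeConsumed = true
    isCanBeResolved = false
}

val examplePath by configurations.creating {
    // A resolvable configuration: the consumer side uses it to resolve a dependency graph
    isCanBeConsumed = false
    isCanBeResolved = true
}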
Variant attributes
Attributes are type-safe key-value pairs that are defined by the consumer (for a resolvable configuration) and the producer (for each variant).
The consumer can define any number of attributes.
Each attribute helps narrow the possible variants that can be selected.
Attribute values do not need to be exact matches.
The variant can also define any number of attributes.
The attributes should describe how the variant is intended to be used.
For example, Gradle uses an attribute named org.gradle.usage to describe how a component is used by the consumer (for compilation, for runtime execution, etc.).
It is not unusual for a variant to have more attributes than the consumer needs to provide to select it.
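As an illustration, here is a minimal Kotlin DSL sketch of a consumer declaring attributes on a resolvable configuration; the configuration name exampleClasspath is hypothetical:

val exampleClasspath by configurations.creating {
    isCanBeConsumed = false
    isCanBeResolved = true
    attributes {
        // Ask producers for the runtime usage of library components
        attribute(Usage.USAGE_ATTRIBUTE, objects.named(Usage.JAVA_RUNTIME))
        attribute(Category.CATEGORY_ATTRIBUTE, objects.named(Category.LIBRARY))
    }
}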
Variant attribute matching
About producer variants
The variant name is mostly used for debugging purposes and in error messages.
The name does not participate in variant matching; only a variant's attributes do.
There are no restrictions on the number of variants a component can define.
Usually, a component has at least an implementation variant, but it could also expose test fixtures, documentation or source code.
A component may also expose different variants for different consumers for the same usage. For example, when compiling, a component could have different headers for Linux vs Windows vs macOS.
Gradle performs variant aware selection by matching the attributes requested by the consumer against attributes defined by the producer. The selection algorithm is detailed in another section.
A simple example
Let’s consider an example where a consumer is trying to use a library for compilation.
First, the consumer needs to explain how it’s going to use the result of dependency resolution. This is done by setting attributes on the resolvable configuration of the consumer.
The consumer wants to resolve a variant that matches: org.gradle.usage=java-api
Second, the producer needs to expose the different variants of the component.
The producer component exposes two variants:
- its API (named apiElements) with attribute org.gradle.usage=java-api
- its runtime (named runtimeElements) with attribute org.gradle.usage=java-runtime
Gradle matches the requested attributes against the producer's variants and selects apiElements, the only variant whose org.gradle.usage attribute matches the consumer's request.
A more complicated example
In the real world, consumers and producers have more than one attribute.
A Java Library project in Gradle will involve several different attributes:
- org.gradle.dependency.bundling, which describes how the variant handles dependencies (shadow jar vs fat jar vs regular jar)
- org.gradle.libraryelements, which describes the packaging of the variant (classes or jar)
- org.gradle.jvm.version, which describes the minimal version of Java this variant targets
- org.gradle.jvm.environment, which describes the type of JVM this variant targets
Let’s consider an example where the consumer wants to run tests with a library on Java 8 and the producer supports two different Java versions (Java 8 and Java 11).
First, the consumer needs to explain which version of Java it needs.
The consumer wants to resolve a variant that:
- can be used at runtime (org.gradle.usage=java-runtime)
- can run on Java 8 (org.gradle.jvm.version=8)
Second, the producer exposes four variants of the component:
- its API for Java 8 consumers (named apiJava8Elements) with attributes org.gradle.usage=java-api and org.gradle.jvm.version=8
- its runtime for Java 8 consumers (named runtime8Elements) with attributes org.gradle.usage=java-runtime and org.gradle.jvm.version=8
- its API for Java 11 consumers (named apiJava11Elements) with attributes org.gradle.usage=java-api and org.gradle.jvm.version=11
- its runtime for Java 11 consumers (named runtime11Elements) with attributes org.gradle.usage=java-runtime and org.gradle.jvm.version=11
Gradle then matches the request against the producer's variants:
- the consumer wants a variant with attributes compatible with org.gradle.usage=java-runtime and org.gradle.jvm.version=8
- the variants runtime8Elements and runtime11Elements have org.gradle.usage=java-runtime
- the variants apiJava8Elements and apiJava11Elements are incompatible
- the variant runtime8Elements is compatible because it can run on Java 8
- the variant runtime11Elements is incompatible because it cannot run on Java 8
Gradle provides the artifacts and dependencies from the runtime8Elements variant to the consumer.
Compatibility of variants
What if the consumer sets org.gradle.jvm.version to 7?
Dependency resolution would fail with an error message explaining that there is no suitable variant: Gradle recognizes that the consumer wants a Java 7 compatible library, but the minimal version of Java available on the producer is 8.
If the consumer requested org.gradle.jvm.version=15, then Gradle knows that either the Java 8 or the Java 11 variant could work. Gradle selects the highest compatible Java version (11).
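To illustrate, the consumer's requested Java version is itself just an attribute on its resolvable configuration. A minimal Kotlin DSL sketch, assuming the runtimeClasspath configuration created by the Java plugins:

configurations.named("runtimeClasspath") {
    attributes {
        // Ask for variants that target at most Java 8
        attribute(TargetJvmVersion.TARGET_JVM_VERSION_ATTRIBUTE, 8)
    }
}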
Variant selection can fail in two ways:
- when more than one variant from the producer matches the consumer attributes (ambiguity error)
- when no variants from the producer match the consumer attributes (incompatibility error)
Dealing with ambiguous variant selection errors
An ambiguous variant selection error looks like the following:
> Could not resolve all files for configuration ':compileClasspath'.
> Could not resolve project :lib.
Required by:
project :ui
> Cannot choose between the following variants of project :lib:
- feature1ApiElements
- feature2ApiElements
All of them match the consumer attributes:
- Variant 'feature1ApiElements' capability org.test:test-capability:1.0:
- Unmatched attribute:
- Found org.gradle.category 'library' but wasn't required.
- Compatible attributes:
- Provides org.gradle.dependency.bundling 'external'
- Provides org.gradle.jvm.version '11'
- Required org.gradle.libraryelements 'classes' and found value 'jar'.
- Provides org.gradle.usage 'java-api'
- Variant 'feature2ApiElements' capability org.test:test-capability:1.0:
- Unmatched attribute:
- Found org.gradle.category 'library' but wasn't required.
- Compatible attributes:
- Provides org.gradle.dependency.bundling 'external'
- Provides org.gradle.jvm.version '11'
- Required org.gradle.libraryelements 'classes' and found value 'jar'.
- Provides org.gradle.usage 'java-api'
All compatible candidate variants are displayed with their attributes.
Unmatched attributes are presented first, as they might be the missing piece in selecting the proper variant.
Compatible attributes are presented second as they indicate what the consumer wanted and how these variants do match that request.
There will not be any incompatible attributes as the variant would not be considered a candidate.
In the example above, the fix does not lie in attribute matching but in capability matching, which is shown next to the variant names.
Because these two variants effectively provide the same attributes and capabilities, they cannot be disambiguated.
So in this case, the fix is most likely to provide different capabilities on the producer side (project :lib) and to express a capability choice on the consumer side (project :ui).
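A minimal Kotlin DSL sketch of that consumer-side choice; the capability coordinates are hypothetical and would have to match what project :lib actually declares:

dependencies {
    implementation(project(":lib")) {
        capabilities {
            // Pick the feature1 variant by requiring its (hypothetical) capability
            requireCapability("org.test:test-capability-feature1")
        }
    }
}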
Dealing with no matching variant errors
A no matching variant error looks like the following:
> No variants of project :lib match the consumer attributes:
- Configuration ':lib:compile':
- Incompatible attribute:
- Required artifactType 'dll' and found incompatible value 'jar'.
- Other compatible attribute:
- Provides usage 'api'
- Configuration ':lib:compile' variant debug:
- Incompatible attribute:
- Required artifactType 'dll' and found incompatible value 'jar'.
- Other compatible attributes:
- Found buildType 'debug' but wasn't required.
- Provides usage 'api'
- Configuration ':lib:compile' variant release:
- Incompatible attribute:
- Required artifactType 'dll' and found incompatible value 'jar'.
- Other compatible attributes:
- Found buildType 'release' but wasn't required.
- Provides usage 'api'
or like:
> No variants of project : match the consumer attributes:
- Configuration ':myElements' declares attribute 'color' with value 'blue':
- Incompatible because this component declares attribute 'artifactType' with value 'jar' and the consumer needed attribute 'artifactType' with value 'dll'
- Configuration ':myElements' variant secondary declares attribute 'color' with value 'blue':
- Incompatible because this component declares attribute 'artifactType' with value 'jar' and the consumer needed attribute 'artifactType' with value 'dll'
depending upon the stage in the variant selection algorithm where the error occurs.
All potentially compatible candidate variants are displayed with their attributes.
Incompatible attributes are presented first, as they usually are the key in understanding why a variant could not be selected.
Other attributes are presented second; these include requested and compatible ones, as well as all extra producer attributes that are not requested by the consumer.
Dealing with incompatible variant errors
An incompatible variant error looks like the following example, where a consumer wants to select a variant with color=green:
> Could not resolve all task dependencies for configuration ':resolveMe'.
> Could not resolve project :.
Required by:
project :
> Configuration 'mismatch' in project : does not match the consumer attributes
Configuration 'mismatch':
- Incompatible because this component declares attribute 'color' with value 'blue' and the consumer needed attribute 'color' with value 'green'
It occurs when Gradle cannot select a single variant of a dependency because an explicitly requested attribute value does not match (and is not compatible with) the value of that attribute on any of the variants of the dependency.
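If the producer's value is in fact acceptable to the consumer, one option is to declare an attribute compatibility rule. A minimal Kotlin DSL sketch, assuming a custom String attribute named color as in the error above:

// Hypothetical rule: producers with color=blue satisfy consumers asking for color=green
abstract class BlueSatisfiesGreen : AttributeCompatibilityRule<String> {
    override fun execute(details: CompatibilityCheckDetails<String>) {
        if (details.consumerValue == "green" && details.producerValue == "blue") {
            details.compatible()
        }
    }
}

dependencies {
    attributesSchema {
        attribute(Attribute.of("color", String::class.java)) {
            compatibilityRules.add(BlueSatisfiesGreen::class.java)
        }
    }
}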
A sub-type of this failure occurs when Gradle successfully selects multiple variants of the same component, but the selected variants are incompatible with each other.
This looks like the following, where a consumer wants to select two different variants of a component, each supplying different capabilities, which is acceptable. Unfortunately one variant has color=blue and the other has color=green:
> Could not resolve all task dependencies for configuration ':resolveMe'.
> Could not resolve project :.
Required by:
project :
> Multiple incompatible variants of org.example:nyvu:1.0 were selected:
- Variant org.example:nyvu:1.0 variant blueElementsCapability1 has attributes {color=blue}
- Variant org.example:nyvu:1.0 variant greenElementsCapability2 has attributes {color=green}
> Could not resolve project :.
Required by:
project :
> Multiple incompatible variants of org.example:pi2e5:1.0 were selected:
- Variant org.example:pi2e5:1.0 variant blueElementsCapability1 has attributes {color=blue}
- Variant org.example:pi2e5:1.0 variant greenElementsCapability2 has attributes {color=green}
Dealing with ambiguous transformation errors
Artifact transforms can be used to transform artifacts from one type to another, changing their attributes.
Variant selection can then treat the attributes produced by an artifact transform as those of a candidate variant.
If a project registers multiple artifact transforms, needs to use an artifact transform to produce a matching variant for a consumer's request, and multiple registered transforms could each be used to accomplish this, then Gradle will fail with an ambiguous transformation error like the following:
> Could not resolve all task dependencies for configuration ':resolveMe'.
> Found multiple transforms that can produce a variant of project : with requested attributes:
- color 'red'
- shape 'round'
Found the following transforms:
- From 'configuration ':roundBlueLiquidElements'':
- With source attributes:
- color 'blue'
- shape 'round'
- state 'liquid'
- Candidate transform(s):
- Transform 'BrokenTransform' producing attributes:
- color 'red'
- shape 'round'
- state 'gas'
- Transform 'BrokenTransform' producing attributes:
- color 'red'
- shape 'round'
- state 'solid'
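For background, registering an artifact transform looks roughly like the following Kotlin DSL sketch; the PassThrough class and the processed-jar attribute value are illustrative, not taken from the error above:

val artifactType = Attribute.of("artifactType", String::class.java)

// A do-nothing transform that only changes the artifactType attribute
abstract class PassThrough : TransformAction<TransformParameters.None> {
    @get:InputArtifact
    abstract val inputArtifact: Provider<FileSystemLocation>

    override fun transform(outputs: TransformOutputs) {
        val input = inputArtifact.get().asFile
        // A real transform would repackage the input; this sketch just forwards it
        input.copyTo(outputs.file(input.name), overwrite = true)
    }
}

dependencies {
    registerTransform(PassThrough::class.java) {
        from.attribute(artifactType, "jar")
        to.attribute(artifactType, "processed-jar")
    }
}

Ambiguity arises when two such registrations can both map the source attributes onto the requested ones.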
Outgoing variants report
The report task outgoingVariants shows the list of variants available for selection by consumers of the project. It displays the capabilities, attributes and artifacts for each variant.
This task is similar to the dependencyInsight reporting task.
By default, outgoingVariants prints information about all variants.
It offers the optional parameter --variant <variantName> to select a single variant to display.
It also accepts the --all flag to include information about legacy and deprecated configurations, or --no-all to exclude this information.
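For example, to display a single variant only:

gradle outgoingVariants --variant runtimeElements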
Here is the output of the outgoingVariants task on a freshly generated java-library project:
> Task :outgoingVariants
--------------------------------------------------
Variant apiElements
--------------------------------------------------
API elements for the 'main' feature.
Capabilities
- new-java-library:lib:unspecified (default capability)
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = jar
- org.gradle.usage = java-api
Artifacts
- build/libs/lib.jar (artifactType = jar)
Secondary Variants (*)
--------------------------------------------------
Secondary Variant classes
--------------------------------------------------
Description = Directories containing compiled class files for main.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = classes
- org.gradle.usage = java-api
Artifacts
- build/classes/java/main (artifactType = java-classes-directory)
--------------------------------------------------
Variant mainSourceElements (i)
--------------------------------------------------
Description = List of source directories contained in the Main SourceSet.
Capabilities
- new-java-library:lib:unspecified (default capability)
Attributes
- org.gradle.category = verification
- org.gradle.dependency.bundling = external
- org.gradle.verificationtype = main-sources
Artifacts
- src/main/java (artifactType = directory)
- src/main/resources (artifactType = directory)
--------------------------------------------------
Variant runtimeElements
--------------------------------------------------
Runtime elements for the 'main' feature.
Capabilities
- new-java-library:lib:unspecified (default capability)
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = jar
- org.gradle.usage = java-runtime
Artifacts
- build/libs/lib.jar (artifactType = jar)
Secondary Variants (*)
--------------------------------------------------
Secondary Variant classes
--------------------------------------------------
Description = Directories containing compiled class files for main.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = classes
- org.gradle.usage = java-runtime
Artifacts
- build/classes/java/main (artifactType = java-classes-directory)
--------------------------------------------------
Secondary Variant resources
--------------------------------------------------
Description = Directories containing the project's assembled resource files for use at runtime.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = resources
- org.gradle.usage = java-runtime
Artifacts
- build/resources/main (artifactType = java-resources-directory)
--------------------------------------------------
Variant testResultsElementsForTest (i)
--------------------------------------------------
Description = Directory containing binary results of running tests for the test Test Suite's test target.
Capabilities
- new-java-library:lib:unspecified (default capability)
Attributes
- org.gradle.category = verification
- org.gradle.testsuite.name = test
- org.gradle.testsuite.target.name = test
- org.gradle.testsuite.type = unit-test
- org.gradle.verificationtype = test-results
Artifacts
- build/test-results/test/binary (artifactType = directory)
(i) Configuration uses incubating attributes such as Category.VERIFICATION.
(*) Secondary variants are variants created via the Configuration#getOutgoing(): ConfigurationPublications API which also participate in selection, in addition to the configuration itself.
From this you can see the two main variants that are exposed by a java library, apiElements and runtimeElements.
Notice that the main difference is on the org.gradle.usage attribute, with values java-api and java-runtime.
As they indicate, this is where the difference is made between what needs to be on the compile classpath of consumers, versus what's needed on the runtime classpath.
It also shows secondary variants, which are exclusive to Gradle projects and not published.
For example, the secondary variant classes from apiElements is what allows Gradle to skip the JAR creation when compiling against a java-library project.
Information about invalid consumable configurations
A project cannot have multiple configurations with the same attributes and capabilities; if it does, the build will fail.
To make such issues visible, the outgoing variants report handles these errors in a lenient fashion, which allows the report to display information about the problem.
Resolvable configurations report
Gradle also offers a complementary report task called resolvableConfigurations that displays the resolvable configurations of a project, which are those that can have dependencies added and be resolved. The report lists their attributes and any configurations that they extend. It also summarizes any attributes that will be affected by Compatibility Rules or Disambiguation Rules during resolution.
By default, resolvableConfigurations prints information about all purely resolvable configurations.
These are configurations that are marked resolvable but not marked consumable.
Though some resolvable configurations are also marked consumable, these are legacy configurations that should not have dependencies added in build scripts.
This report offers the optional parameter --configuration <configurationName> to select a single configuration to display.
It also accepts the --all flag to include information about legacy and deprecated configurations, or --no-all to exclude this information.
Finally, it accepts the --recursive flag to list, in the extended configurations section, those configurations that are extended transitively rather than directly; alternatively, --no-recursive can be used to exclude this information.
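For example, to inspect just the main compile classpath:

gradle resolvableConfigurations --configuration compileClasspath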
Here is the output of the resolvableConfigurations task on a freshly generated java-library project:
> Task :resolvableConfigurations
--------------------------------------------------
Configuration annotationProcessor
--------------------------------------------------
Description = Annotation processors and their dependencies for source set 'main'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.libraryelements = jar
- org.gradle.usage = java-runtime
--------------------------------------------------
Configuration compileClasspath
--------------------------------------------------
Description = Compile classpath for source set 'main'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = classes
- org.gradle.usage = java-api
Extended Configurations
- compileOnly
- implementation
--------------------------------------------------
Configuration runtimeClasspath
--------------------------------------------------
Description = Runtime classpath of source set 'main'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = jar
- org.gradle.usage = java-runtime
Extended Configurations
- implementation
- runtimeOnly
--------------------------------------------------
Configuration testAnnotationProcessor
--------------------------------------------------
Description = Annotation processors and their dependencies for source set 'test'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.libraryelements = jar
- org.gradle.usage = java-runtime
--------------------------------------------------
Configuration testCompileClasspath
--------------------------------------------------
Description = Compile classpath for source set 'test'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = classes
- org.gradle.usage = java-api
Extended Configurations
- testCompileOnly
- testImplementation
--------------------------------------------------
Configuration testRuntimeClasspath
--------------------------------------------------
Description = Runtime classpath of source set 'test'.
Attributes
- org.gradle.category = library
- org.gradle.dependency.bundling = external
- org.gradle.jvm.environment = standard-jvm
- org.gradle.jvm.version = 11
- org.gradle.libraryelements = jar
- org.gradle.usage = java-runtime
Extended Configurations
- testImplementation
- testRuntimeOnly
--------------------------------------------------
Compatibility Rules
--------------------------------------------------
Description = The following Attributes have compatibility rules defined.
- org.gradle.dependency.bundling
- org.gradle.jvm.environment
- org.gradle.jvm.version
- org.gradle.libraryelements
- org.gradle.plugin.api-version
- org.gradle.usage
--------------------------------------------------
Disambiguation Rules
--------------------------------------------------
Description = The following Attributes have disambiguation rules defined.
- org.gradle.category
- org.gradle.dependency.bundling
- org.gradle.jvm.environment
- org.gradle.jvm.version
- org.gradle.libraryelements
- org.gradle.plugin.api-version
- org.gradle.usage
From this you can see the two main configurations used to resolve dependencies, compileClasspath and runtimeClasspath, as well as their corresponding test configurations.
Mapping from Maven/Ivy to Gradle variants
Neither Maven nor Ivy has the concept of variants, which are natively supported only by Gradle Module Metadata.
Gradle can still work with Maven and Ivy by using different variant derivation strategies.
Relationship with Gradle Module Metadata
Gradle Module Metadata is a metadata format for modules published on Maven, Ivy and other kinds of repositories.
It is similar to the pom.xml or ivy.xml metadata file, but this format contains details about variants.
See the Gradle Module Metadata specification for more information.
Mapping of Maven POM metadata to variants
Modules published on a Maven repository are automatically converted into variant-aware modules.
Since there is no way for Gradle to know which kind of component was published, each Maven module is mapped to a fixed set of variants:
- the compile variant maps the <scope>compile</scope> dependencies. This variant is equivalent to the apiElements variant of the Java Library plugin. All dependencies of this scope are considered API dependencies.
- the runtime variant maps both the <scope>compile</scope> and <scope>runtime</scope> dependencies. This variant is equivalent to the runtimeElements variant of the Java Library plugin. All dependencies of those scopes are considered runtime dependencies.
- in both cases, the <dependencyManagement> dependencies are not converted to constraints
- the platform-compile variant maps the <scope>compile</scope> dependency management dependencies as dependency constraints.
- the platform-runtime variant maps both the <scope>compile</scope> and <scope>runtime</scope> dependency management dependencies as dependency constraints.
- the enforced-platform-compile variant is similar to platform-compile, but all the constraints are forced
- the enforced-platform-runtime variant is similar to platform-runtime, but all the constraints are forced
You can understand more about the use of platform and enforced platform variants by looking at the importing BOMs section of the manual.
By default, whenever you declare a dependency on a Maven module, Gradle looks for the library variants.
However, when you use the platform or enforcedPlatform keyword, Gradle looks for one of the "platform" variants instead, which allows you to import the constraints from the POM files rather than the dependencies.
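For example, here is a minimal Kotlin DSL sketch; the BOM coordinates com.example:sample-bom:1.0 are hypothetical:

dependencies {
    // Selects a platform variant of the BOM: its <dependencyManagement> entries
    // become dependency constraints instead of dependencies
    implementation(platform("com.example:sample-bom:1.0"))
}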
Mapping of Ivy files to variants
Gradle has no built-in derivation strategy implemented for Ivy files.
Ivy is a flexible format that allows you to publish arbitrary files and can be heavily customized.
If you want to implement a derivation strategy for compile and runtime variants for Ivy, you can do so with a component metadata rule.
The component metadata rules API allows you to access Ivy configurations and create variants based on them.
If you know that all the Ivy modules you are consuming have been published with Gradle without further customizations of the ivy.xml file, you can add the following rule to your build:
build.gradle.kts
abstract class IvyVariantDerivationRule @Inject internal constructor(objectFactory: ObjectFactory) : ComponentMetadataRule {
    private val jarLibraryElements: LibraryElements
    private val libraryCategory: Category
    private val javaRuntimeUsage: Usage
    private val javaApiUsage: Usage

    init {
        jarLibraryElements = objectFactory.named(LibraryElements.JAR)
        libraryCategory = objectFactory.named(Category.LIBRARY)
        javaRuntimeUsage = objectFactory.named(Usage.JAVA_RUNTIME)
        javaApiUsage = objectFactory.named(Usage.JAVA_API)
    }

    override fun execute(context: ComponentMetadataContext) {
        // This filters out any non-Ivy module
        if (context.getDescriptor(IvyModuleDescriptor::class) == null) {
            return
        }

        context.details.addVariant("runtimeElements", "default") {
            attributes {
                attribute(LibraryElements.LIBRARY_ELEMENTS_ATTRIBUTE, jarLibraryElements)
                attribute(Category.CATEGORY_ATTRIBUTE, libraryCategory)
                attribute(Usage.USAGE_ATTRIBUTE, javaRuntimeUsage)
            }
        }
        context.details.addVariant("apiElements", "compile") {
            attributes {
                attribute(LibraryElements.LIBRARY_ELEMENTS_ATTRIBUTE, jarLibraryElements)
                attribute(Category.CATEGORY_ATTRIBUTE, libraryCategory)
                attribute(Usage.USAGE_ATTRIBUTE, javaApiUsage)
            }
        }
    }
}

dependencies {
    components { all<IvyVariantDerivationRule>() }
}
build.gradle
abstract class IvyVariantDerivationRule implements ComponentMetadataRule {
    final LibraryElements jarLibraryElements
    final Category libraryCategory
    final Usage javaRuntimeUsage
    final Usage javaApiUsage

    @Inject
    IvyVariantDerivationRule(ObjectFactory objectFactory) {
        jarLibraryElements = objectFactory.named(LibraryElements, LibraryElements.JAR)
        libraryCategory = objectFactory.named(Category, Category.LIBRARY)
        javaRuntimeUsage = objectFactory.named(Usage, Usage.JAVA_RUNTIME)
        javaApiUsage = objectFactory.named(Usage, Usage.JAVA_API)
    }

    void execute(ComponentMetadataContext context) {
        // This filters out any non-Ivy module
        if (context.getDescriptor(IvyModuleDescriptor) == null) {
            return
        }

        context.details.addVariant("runtimeElements", "default") {
            attributes {
                attribute(LibraryElements.LIBRARY_ELEMENTS_ATTRIBUTE, jarLibraryElements)
                attribute(Category.CATEGORY_ATTRIBUTE, libraryCategory)
                attribute(Usage.USAGE_ATTRIBUTE, javaRuntimeUsage)
            }
        }
        context.details.addVariant("apiElements", "compile") {
            attributes {
                attribute(LibraryElements.LIBRARY_ELEMENTS_ATTRIBUTE, jarLibraryElements)
                attribute(Category.CATEGORY_ATTRIBUTE, libraryCategory)
                attribute(Usage.USAGE_ATTRIBUTE, javaApiUsage)
            }
        }
    }
}

dependencies {
    components { all(IvyVariantDerivationRule) }
}