The NVIDIA® CUDA® Toolkit enables developers to build NVIDIA GPU-accelerated compute applications for everything from desktop computers and enterprise data centers to hyperscalers. It consists of the CUDA compiler toolchain, including the CUDA runtime (cudart), and various CUDA libraries and tools. To build an application, a developer needs to install only the CUDA Toolkit and the libraries required for linking.
To run a CUDA application, the system must have a CUDA-enabled GPU and an NVIDIA display driver that is compatible with the CUDA Toolkit used to build the application. If the application relies on dynamic linking for libraries, the system must also have the correct versions of those libraries.
Figure 1. Components of CUDA
Every CUDA toolkit also ships with an NVIDIA display driver package for convenience. This
driver supports all the features introduced in that version of the CUDA Toolkit. Please
check the toolkit and driver version mapping in the
release notes
. The driver package
includes both the user mode CUDA driver (libcuda.so) and kernel mode components
necessary to run the application.
Typically, upgrading a CUDA Toolkit involves upgrading both the toolkit and the driver to
get the bleeding edge toolkit and driver capabilities.
Figure 2. CUDA Upgrade Path
However, this is not always required. CUDA compatibility guarantees allow certain components to be upgraded independently, and that is the focus of the rest of this document: how the upgrade to a new CUDA Toolkit can be simplified so that a full system upgrade is not always required.
From CUDA 11 onwards, applications compiled with a CUDA Toolkit release from within a CUDA major release family can run, with a limited feature set, on systems having at least the minimum required driver version as indicated below. This minimum required driver can be different from the driver packaged with the CUDA Toolkit but should belong to the same major release. Refer to the CUDA Toolkit Release Notes for the complete table.
Table 1. Example CUDA Toolkit Minimum Required Driver Versions (Refer to CUDA Release Notes)

CUDA Toolkit | Linux x86_64 Minimum Required Driver Version | Windows Minimum Required Driver Version
CUDA 12.x    | >= 525.60.13                                 | >= 527.41
CUDA 11.x    | >= 450.80.02*                                | >= 452.39*

* CUDA 11.0 was released with an earlier driver version, but by upgrading to Tesla Recommended Drivers 450.80.02 (Linux) / 452.39 (Windows) as indicated, minor version compatibility is possible across the CUDA 11.x family of toolkits.
While applications built against any of the older CUDA Toolkits have always continued to function on newer drivers due to binary backward compatibility, before CUDA 11, applications built against newer CUDA Toolkit releases were not supported on older drivers without the forward compatibility package (see Forward Compatibility Support Across Major Toolkit Versions).
For CUDA 10.x, the minimum required driver version was the same as the driver packaged as part of that toolkit release. Consequently, the minimum required driver version changed for every new CUDA Toolkit minor release until CUDA 11.1, and system administrators always had to upgrade drivers in order to support applications built against CUDA Toolkits from the 10.x releases.
CUDA Toolkit | Linux x86_64 Minimum Required Driver Version | Windows Minimum Required Driver Version
CUDA 10.2    | >= 440.33                                    | >= 441.22
CUDA 10.1    | >= 418.39                                    | >= 418.96
CUDA 10.0    | >= 410.48                                    | >= 411.31
With minor version compatibility, upgrading to CUDA 11.1 is possible on older drivers from within the same major release family, such as the 450.80.02 driver that shipped with CUDA 11.0, as shown below:
$ nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
...<snip>...
$ samples/bin/x86_64/linux/release/deviceQuery
samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Tesla T4"
CUDA Driver Version / Runtime Version 11.0 / 11.1
CUDA Capability Major/Minor version number: 7.5
...<snip>...
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.0, CUDA Runtime Version = 11.1, NumDevs = 1
Result = PASS
Minimum required driver version guidance can be found in the
release notes
. Note that if the minimum
required driver version is not installed in the system, applications will return an
error as shown
below.
$ samples/bin/x86_64/linux/release/deviceQuery
samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
cudaGetDeviceCount returned 3
-> initialization error
Result = FAIL
Developers and system admins should note two important caveats when relying on minor version compatibility. If either of these caveats is limiting, then a CUDA driver from the same minor version as the toolkit the application was built with, or later, is required.
Limited feature set
Sometimes, features introduced in a CUDA Toolkit version may span both the toolkit and the driver. In such cases, an application that relies on features introduced in a newer version of the toolkit and driver may return the error cudaErrorCallRequiresNewerDriver when run on older drivers. As mentioned earlier, admins should then upgrade the installed driver. Application developers can avoid running into this problem by having the application explicitly check for the availability of the features it needs (see the sketch after this list). Refer to the CUDA Compatibility Developers Guide for more details.
Applications using PTX will see runtime issues
Applications that compile device code to PTX will not work on older drivers. If the application requires PTX, then admins have to upgrade the installed driver. PTX developers should refer to the CUDA Compatibility Developers Guide and the PTX programming guide in the CUDA C++ Programming Guide for details on this limitation.
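The following is a minimal sketch, not taken from this guide, of such an explicit feature check using the CUDA runtime API. It assumes an application built with a CUDA 11.2 or later toolkit that wants to use stream-ordered memory allocation, one example of a feature that needs driver support; querying the corresponding device attribute lets the application fall back gracefully instead of failing on an older driver within the same major release.
// feature_check.cu - minimal sketch: probe the driver for a CUDA 11.2 feature
// (stream-ordered memory pools) before using it. Build with an 11.2+ toolkit.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device = 0;
    int poolsSupported = 0;
    // Ask the driver whether cudaMallocAsync/cudaFreeAsync are usable on this device.
    cudaError_t err = cudaDeviceGetAttribute(&poolsSupported,
                                             cudaDevAttrMemoryPoolsSupported, device);
    if (err == cudaSuccess && poolsSupported) {
        printf("Stream-ordered allocation is supported by the installed driver.\n");
        // ... use cudaMallocAsync/cudaFreeAsync ...
    } else {
        printf("Feature unavailable on this driver; using the cudaMalloc fallback path.\n");
        // ... use cudaMalloc/cudaFree instead ...
    }
    return 0;
}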
As described, applications that directly rely only on the CUDA runtime can be deployed in
the following two scenarios:
1. The CUDA driver that is installed on the system is newer than the runtime.
2. The CUDA runtime is newer than the CUDA driver on the system, but they are from the same major release of the CUDA Toolkit.
In scenario 2, system admins should be aware of the aforementioned limitations and should be able to tell why an application may be failing if they run into any issues.
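As a minimal sketch, not part of this guide, an application or a support script can report which of these scenarios applies by comparing the versions reported by the CUDA runtime API (both values are encoded as 1000*major + 10*minor):
// version_report.cu - minimal sketch: report the deployment scenario at startup.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);    // 0 if no CUDA driver is installed
    cudaRuntimeGetVersion(&runtimeVersion);  // e.g. 11010 for the CUDA 11.1 runtime
    printf("CUDA driver version %d, CUDA runtime version %d\n", driverVersion, runtimeVersion);
    if (driverVersion >= runtimeVersion) {
        printf("Scenario 1: the installed driver is at least as new as the runtime.\n");
    } else if (driverVersion / 1000 == runtimeVersion / 1000) {
        printf("Scenario 2: older driver from the same major release (minor version compatibility).\n");
    } else {
        printf("The driver is from an older major release; forward compatibility is required.\n");
    }
    return 0;
}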
Minor version compatibility has another benefit that offers flexibility in the use and
deployment of libraries. Applications that use libraries that support minor version
compatibility can be deployed on systems with a different version of the toolkit and
libraries without recompiling the application for the difference in the library version.
This holds true for both older and newer versions of the libraries provided they are all
from the same major release family. Note that libraries themselves have
interdependencies that should be considered. For example, each cuDNN version requires a
certain version of cuBLAS.
Figure 3. NVRTC supports minor version compatibility from CUDA 11.3 onwards
However, if an application is unable to leverage the minor version compatibility due
to any of the aforementioned reasons, then the Forward Compatibility model can be used
as an alternative even though Forward Compatibility is mainly intended for compatibility
across major toolkit versions.
Increasingly, data centers and enterprises may not want to update the NVIDIA GPU driver across major release versions due to the rigorous testing and validation that happens before any system-level driver installation is done.
To support such scenarios, CUDA introduced a Forward Compatibility upgrade path in CUDA 10.0.
Figure 4. Forward Compatibility Upgrade Path
Forward Compatibility is applicable only for systems with NVIDIA Data Center GPUs or select NGC Server Ready SKUs of RTX cards. It is mainly intended to allow applications built with newer CUDA Toolkits to run on systems with an older NVIDIA Linux GPU driver from a different major release family. This forward-compatible upgrade path requires the use of a special package called the "CUDA compat package".
The CUDA compat package is available in the local installers and the CUDA network repositories provided by NVIDIA, for example as cuda-compat-12-1.
Install the package on the system using the package installer.
On Ubuntu, for example:
$ sudo apt-get install -y cuda-compat-12-1
The compat package will then be installed to the versioned toolkit location in the toolkit directory. For example, for 12.1 it will be found in /usr/local/cuda-12.1/.
The cuda-compat package consists of the following files:
libcuda.so.* - the CUDA driver
libnvidia-nvvm.so.* - JIT LTO (CUDA 11.5 and later only)
libnvidia-ptxjitcompiler.so.* - the JIT (just-in-time) compiler for PTX files
libcudadebugger.so.* - GPU debugging support for the CUDA driver (CUDA 11.8 and later only)
These files should be kept together, as the CUDA driver is dependent on the libnvidia-ptxjitcompiler.so.* of the same version.
Note:
This package only provides the files, and does not configure the system.
Example:
CUDA Compatibility is installed and the application can now run successfully as shown below.
In this example, the user sets
LD_LIBRARY_PATH
to include the files installed
by the
cuda-compat-12-1
package.
$ sudo apt-get install -y cuda-compat-12-1
Selecting previously unselected package cuda-compat-12-1.
(Reading database ... 339974 files and directories currently installed.)
Preparing to unpack .../cuda-compat-12-1_530.30-1_amd64.deb ...
Unpacking cuda-compat-12-1 (530.30-1) ...
Setting up cuda-compat-12-1 (530.30-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9.2) ...
Check the files installed under
/usr/local/cuda/compat
:
$ ls -l /usr/local/cuda/compat
total 145676
lrwxrwxrwx 1 root root 12 Jun 3 00:45 libcuda.so -> libcuda.so.1
lrwxrwxrwx 1 root root 17 Jun 3 00:45 libcuda.so.1 -> libcuda.so.530.30
-rw-r--r-- 1 root root 26255520 Jun 2 20:55 libcuda.so.530.30
lrwxrwxrwx 1 root root 25 Jun 3 00:45 libcudadebugger.so.1 -> libcudadebugger.so.530.30
-rw-r--r-- 1 root root 10938424 May 27 20:49 libcudadebugger.so.530.30
lrwxrwxrwx 1 root root 19 Jun 3 00:45 libnvidia-nvvm.so -> libnvidia-nvvm.so.4
lrwxrwxrwx 1 root root 24 Jun 3 00:45 libnvidia-nvvm.so.4 -> libnvidia-nvvm.so.530.30
-rw-r--r-- 1 root root 92017376 Jun 2 21:21 libnvidia-nvvm.so.530.30
lrwxrwxrwx 1 root root 34 Jun 3 00:45 libnvidia-ptxjitcompiler.so.1 -> libnvidia-ptxjitcompiler.so.530.30
-rw-r--r-- 1 root root 19951576 Jun 2 20:38 libnvidia-ptxjitcompiler.so.530.30
The user can set
LD_LIBRARY_PATH
to include the files installed before
running the CUDA 12.1 application:
$ LD_LIBRARY_PATH=/usr/local/cuda/compat:$LD_LIBRARY_PATH samples/bin/x86_64/linux/release/deviceQuery
samples/bin/x86_64/linux/release/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "Tesla T4"
CUDA Driver Version / Runtime Version 12.1 / 12.1
CUDA Capability Major/Minor version number: 7.5
...<snip>...
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.1, CUDA Runtime Version = 12.1, NumDevs = 1
Result = PASS
The cuda-compat package files can also be extracted from the appropriate datacenter driver
‘runfile’ installers (
.run
) available in
NVIDIA driver downloads
. To do this:
Download the latest NVIDIA Data Center GPU driver, and extract the .run file using the -x option.
Copy the four CUDA compatibility upgrade files, listed at the start of this section,
into a user- or root-created directory.
Follow your system’s guidelines for making sure that the system linker picks up the new
libraries.
Note:
Symlinks under
/usr/local/cuda/compat
need to be created manually when
using the runfile installer.
CUDA forward compat packages should be used only when forward compatibility across major releases is required.
The CUDA compat package is named after the highest toolkit version that it can support. For example, if you are on the R418 driver but require CUDA 11.2 application support, install the cuda-compat package for 11.2. When performing a full system upgrade, that is, installing both the toolkit and the driver, remove any forward compatibility packages present on the system.
For example, if you are upgrading the driver to 450.80.02, which is the minimum required driver version for the 11.2 toolkit, then the cuda-compat package is not required in most cases: 10.x and 11.0 applications will be supported due to backward compatibility, and future 11.x applications will be supported due to minor version compatibility. But minor version compatibility has feature restrictions that may make this option less desirable for your scenario, for example, applications requiring PTX JIT compilation support. Unlike minor version compatibility, which is defined between the CUDA runtime and the CUDA driver, forward compatibility is defined between the kernel mode driver and the CUDA driver, and hence such restrictions do not apply to it. To circumvent the limitation, a forward compatibility package may therefore be used in such scenarios as well.
[Table: CUDA Forward Compatibility support matrix across kernel mode driver branches and CUDA Toolkit releases. Legend: C - compatible; X - not compatible.]
* New Feature Branches (such as 495.xx) are not supported targets for CUDA Forward Compatibility.
Branches R515, R510, R465, R460, R455, R440, R418, R410, R396, R390 are end of life and are not supported targets for compatibility.
There are specific features in the CUDA driver that require kernel-mode support and will
only work with a newer kernel mode driver. A few features depend on other user-mode
components and are therefore also unsupported.
[1] This relies on
CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED
and
CU_DEVICE_ATTRIBUTE_VIRTUAL_ADDRESS_MANAGEMENT_SUPPORTED
, which
should be queried if you intend to use the full range of this functionality.
[2] Supported on Red Hat Enterprise Linux operating system version 8.1 or higher.
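For illustration, the following minimal sketch, which is not part of this guide, uses the CUDA driver API to query the two attributes referenced in note [1]; it must be linked against libcuda (-lcuda):
// attr_check.cpp - minimal sketch: query the device attributes referenced in note [1].
#include <cstdio>
#include <cuda.h>

int main() {
    if (cuInit(0) != CUDA_SUCCESS) {
        printf("cuInit failed\n");
        return 1;
    }
    CUdevice dev;
    cuDeviceGet(&dev, 0);

    int vmmSupported = 0, fdHandleSupported = 0;
    cuDeviceGetAttribute(&vmmSupported,
                         CU_DEVICE_ATTRIBUTE_VIRTUAL_ADDRESS_MANAGEMENT_SUPPORTED, dev);
    cuDeviceGetAttribute(&fdHandleSupported,
                         CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED, dev);

    printf("Virtual address management (cuMemMap) supported: %d\n", vmmSupported);
    printf("POSIX file descriptor shareable handles supported: %d\n", fdHandleSupported);
    return 0;
}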
In addition to the CUDA driver and certain compiler components, there are other drivers
in the system installation stack (for example, OpenCL) that remain on the old version.
The forward-compatible upgrade path is for CUDA only.
A well-written application should use the following error codes to determine whether CUDA Forward Compatible Upgrade is supported. System administrators should also be aware of these error codes to diagnose errors in the deployment.
CUDA_ERROR_SYSTEM_DRIVER_MISMATCH = 803
. This error indicates that
there is a mismatch between the versions of the display driver and the CUDA driver.
CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE = 804
. This error
indicates that the system was upgraded to run with forward compatibility but the
visible hardware detected by CUDA does not support this configuration.
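As a minimal sketch, assuming a small health-check utility built against the CUDA driver API (linked with -lcuda), these error codes can be surfaced at initialization time:
// compat_check.cpp - minimal sketch: surface the forward compatibility error codes from cuInit.
#include <cstdio>
#include <cuda.h>

int main() {
    CUresult res = cuInit(0);
    if (res == CUDA_ERROR_SYSTEM_DRIVER_MISMATCH) {                 // error 803
        printf("Mismatch between the kernel mode driver and the CUDA (compat) driver.\n");
    } else if (res == CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE) {  // error 804
        printf("Forward compatibility is not supported on the GPUs visible to CUDA.\n");
    } else if (res != CUDA_SUCCESS) {
        printf("cuInit failed with error %d.\n", (int)res);
    } else {
        printf("CUDA initialized successfully.\n");
    }
    return 0;
}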
There are two models of deployment for the CUDA compat package. We recommend the 'shared' deployment mode.
Shared deployment: allows sharing the same compat package across the toolkits installed on the system. Download and extract the latest forward compatibility package, the one with the highest toolkit version in its name, and point to it using RPATH, LD_LIBRARY_PATH, or an automatic loader configuration (for example, ld.so.conf).
The user can set
LD_LIBRARY_PATH
to include the files installed before
running the CUDA 11.1
application:
$ LD_LIBRARY_PATH=/usr/local/cuda/compat:$LD_LIBRARY_PATH
Per-application deployment: individual applications can choose a compat package of their choice and deploy it as part of the Modules system, tied to the toolkit and the libraries. Using the Modules system, the admin or the user can set up 'module' scripts for each version of each toolkit package, and then load the module script for the toolkit as needed.
$ module load cuda/11.0
There is an important consideration with the per-application deployment approach. Older forward compatibility packages are not supported on newer driver versions, so the module load scripts should proactively query the system to check whether the compatibility package can be used before loading the files. This is especially critical after a full system upgrade. In cases where the module script cannot use the CUDA compatible upgrade, a fallback path to the system's default installed CUDA driver provides a more consistent experience; this can be achieved using RPATH.
The CUDA driver maintains backward compatibility to continue supporting applications built on older toolkits. With minor version compatibility, applications built on CUDA Toolkit 11 and newer are supported on any driver from within the corresponding major release. With the CUDA Forward Compatibility package, system administrators can run applications built using a newer toolkit even when an older driver that does not satisfy the minimum required driver version is installed on the system. This forward compatibility allows CUDA deployments in data centers and enterprises to benefit from the faster release cadence and the latest features and performance of the CUDA Toolkit.
CUDA compatibility helps users by enabling:
Faster upgrades to the latest CUDA releases:
Enterprises or data centers with NVIDIA GPUs have complex workflows and require advance planning for NVIDIA driver rollouts. Not having to update the driver for newer CUDA releases means that new versions of the software can be made available to users faster.
Faster upgrades of the CUDA libraries:
Users can upgrade to the latest software libraries and applications built on top of CUDA (for example, math libraries or deep learning frameworks) without upgrading the entire CUDA Toolkit or driver. This is possible because these libraries and frameworks do not have a direct dependency on the CUDA runtime, compiler, or driver.
This section includes some FAQs related to CUDA compatibility.
What is the difference between CUDA forward compatible upgrade and CUDA minor
version compatibility? When should users use these features?
When to use
Forward Compatibility: if you cannot upgrade the kernel mode driver but need to use the latest CUDA Toolkit.
Minor Version Compatibility: if you want to support newer applications on older drivers within the same major release family.

GPUs supported
Forward Compatibility: the 11.4 UMD (user mode driver) and later extend forward compatibility support to select NGC Ready NVIDIA RTX boards; prior to that, forward compatibility is supported only on NVIDIA Data Center cards.
Minor Version Compatibility: all GPU products are supported.

OS distributions supported
Forward Compatibility: Linux only.
Minor Version Compatibility: Windows and Linux.

Features supported
Forward Compatibility: some features (such as CUDA-GL interop, POWER9 ATS, and the cuMemMap APIs) are not supported because they depend on a new kernel mode driver. These are explicitly called out in the documentation.
Minor Version Compatibility: all existing CUDA features from older minor releases work. Users may have to incorporate checks in their application when using new features in the minor release (that require a new driver) to ensure graceful errors. More information can be found in the "Handling New CUDA Features" section.

CUDA releases supported
Forward Compatibility: all CUDA releases supported through the lifetime of the data center driver branch. For example, R418 (CUDA 10.1) reaches end of life in March 2022, so all CUDA versions released (including major releases) during this timeframe are supported.
Minor Version Compatibility: only works within a major release family (for example, 11.0 through 11.x). Compatibility is not supported across major CUDA releases.

Application includes PTX or uses NVRTC
Forward Compatibility: no additional workflow required.
Minor Version Compatibility: users should use the new PTX static library to rebuild binaries. Refer to the workflow section for more details.

Requires administrator involvement
Forward Compatibility: depends on the deployment. Users can also set up LD_LIBRARY_PATH with the new libraries from the cuda-compat-* package.
Minor Version Compatibility: not required.
Do CUDA forward compatible upgrades work intra-branch?
Users can upgrade the kernel mode driver within the same branch. Sometimes this may require updating the cuda-compat* package. This use case is supported only for drivers on LLB and LTS branches of the driver, for select GPUs.
Which GPUs are supported by the driver?
The CUDA compatible upgrade is meant to ease the management of large production systems for enterprise customers. The 11.4 UMD (user mode driver) and later extend forward compatibility support to select NGC Ready NVIDIA RTX boards; prior to that, forward compatibility is supported only on NVIDIA Data Center cards. It is important to note that hardware support is defined by the kernel mode driver, and as such, newer CUDA drivers on their own will not enable new hardware support. Refer to the following table to determine which hardware is supported by your system.
The developer is using PTX code in the application and seeing some errors or issues. What should we do?
PTX and application compatibility information can be found in the CUDA Compatibility Developer's Guide and the CUDA C++ Programming Guide.
If we build our CUDA application using CUDA 11.0, can it continue to be used with newer NVIDIA drivers (e.g., CUDA 11.1/R455, 11.x, etc.)? Or is it only the other way around?
Drivers have always been backward compatible with CUDA. This means that a CUDA 11.0 application will be compatible with R450 (11.0), R455 (11.1), and beyond. CUDA applications typically statically link the libraries they need (for example, cudart and CUDA math libraries such as cuBLAS and cuFFT), so they should work on new drivers or CUDA Toolkit installations.
In other words, since CUDA is backward compatible, existing CUDA applications can continue to be used with newer CUDA versions.
What is the minimum CUDA 11.x driver that supports the CUDA minor version
compatibility?
The minimum driver version required is 450.80.02.
What about new features introduced in minor releases of CUDA? How does an application built using a newer CUDA Toolkit (e.g., 11.x) work on a system with a CUDA 11.0 driver (R450)?
By using new CUDA versions, users can benefit from new CUDA programming model
APIs, compiler optimizations and math library features.
A subset of CUDA APIs do not need a new driver and can be used without any driver dependency. For example, the async copy APIs introduced in 11.1 do not need a new driver.
To use other CUDA APIs introduced in a minor release (that require a new driver), one would have to implement fallbacks or fail gracefully. This is no different from what developers do today, using macros to compile out features based on CUDA versions (a minimal sketch of this pattern follows this answer). Users should refer to the CUDA headers and documentation for new CUDA APIs introduced in a release.
There are
some issues that admins can advise the application developers to
accommodate in their code. Please refer to the
Best Practices Guide
for further information.
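As a minimal sketch of the macro-plus-fallback pattern mentioned above (the helper name and the choice of cudaMallocAsync are illustrative, not taken from this guide), the newer API is compiled in only when the application is built with a new enough toolkit, and the code falls back to the portable path when the installed driver cannot service the call:
// alloc_compat.cu - minimal sketch: compile out and fall back on a minor-release feature.
#include <cuda_runtime.h>

cudaError_t allocateBuffer(void** ptr, size_t bytes, cudaStream_t stream) {
#if CUDART_VERSION >= 11020   // cudaMallocAsync is only declared in CUDA 11.2+ toolkits
    cudaError_t err = cudaMallocAsync(ptr, bytes, stream);
    // If the installed driver cannot service the newer API, fall through to the portable
    // path; the exact error code may vary, so both likely codes trigger the fallback.
    if (err != cudaErrorCallRequiresNewerDriver && err != cudaErrorNotSupported) {
        return err;
    }
#endif
    (void)stream;                   // unused on the fallback path
    return cudaMalloc(ptr, bytes);  // works on any driver within the same major release
}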
Does CUDA compatibility work with containers?
Yes. CUDA minor version compatibility and CUDA forward compatible upgrade both work when using either NGC Deep Learning Framework containers or containers that are based on the official CUDA base images. The images include the CUDA compatibility upgrade libraries, and the NVIDIA Container Toolkit (nvidia-docker2) has logic to correctly load the required libraries.
I’m running an NGC container and see this error: “This container was built for NVIDIA Driver Release 450.51 or later, but version 418.126.02 was detected and compatibility mode is UNAVAILABLE.” What could be wrong?
It is possible that you are either running the wrong version of the NVIDIA driver on the system or that your system does not have an NVIDIA Data Center GPU.
This document is provided for information
purposes only and shall not be regarded as a warranty of a
certain functionality, condition, or quality of a product.
NVIDIA Corporation (“NVIDIA”) makes no representations or
warranties, expressed or implied, as to the accuracy or
completeness of the information contained in this document
and assumes no responsibility for any errors contained
herein. NVIDIA shall have no liability for the consequences
or use of such information or for any infringement of
patents or other rights of third parties that may result
from its use. This document is not a commitment to develop,
release, or deliver any Material (defined below), code, or
functionality.
NVIDIA reserves the right to make corrections, modifications,
enhancements, improvements, and any other changes to this
document, at any time without notice.
Customer should obtain the latest relevant information before
placing orders and should verify that such information is
current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and
conditions of sale supplied at the time of order
acknowledgement, unless otherwise agreed in an individual
sales agreement signed by authorized representatives of
NVIDIA and customer (“Terms of Sale”). NVIDIA hereby
expressly objects to applying any customer general terms and
conditions with regards to the purchase of the NVIDIA
product referenced in this document. No contractual
obligations are formed either directly or indirectly by this
document.
NVIDIA products are not designed, authorized, or warranted to be
suitable for use in medical, military, aircraft, space, or
life support equipment, nor in applications where failure or
malfunction of the NVIDIA product can reasonably be expected
to result in personal injury, death, or property or
environmental damage. NVIDIA accepts no liability for
inclusion and/or use of NVIDIA products in such equipment or
applications and therefore such inclusion and/or use is at
customer’s own risk.
NVIDIA makes no representation or warranty that products based on
this document will be suitable for any specified use.
Testing of all parameters of each product is not necessarily
performed by NVIDIA. It is customer’s sole responsibility to
evaluate and determine the applicability of any information
contained in this document, ensure the product is suitable
and fit for the application planned by customer, and perform
the necessary testing for the application in order to avoid
a default of the application or the product. Weaknesses in
customer’s product designs may affect the quality and
reliability of the NVIDIA product and may result in
additional or different conditions and/or requirements
beyond those contained in this document. NVIDIA accepts no
liability related to any default, damage, costs, or problem
which may be based on or attributable to: (i) the use of the
NVIDIA product in any manner that is contrary to this
document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA
patent right, copyright, or other NVIDIA intellectual
property right under this document. Information published by
NVIDIA regarding third-party products or services does not
constitute a license from NVIDIA to use such products or
services or a warranty or endorsement thereof. Use of such
information may require a license from a third party under
the patents or other intellectual property rights of the
third party, or a license from NVIDIA under the patents or
other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if
approved in advance by NVIDIA in writing, reproduced without
alteration and in full compliance with all applicable export
laws and regulations, and accompanied by all associated
conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE
BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND OTHER
DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING
PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED,
IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE
MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF
NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A
PARTICULAR PURPOSE. TO THE EXTENT NOT PROHIBITED BY LAW, IN
NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES, INCLUDING
WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL,
INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER
CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING
OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding
any damages that customer might incur for any reason
whatsoever, NVIDIA’s aggregate and cumulative liability
towards customer for the products described herein shall be
limited in accordance with the Terms of Sale for the
product.
VESA DisplayPort
DisplayPort and DisplayPort Compliance Logo, DisplayPort Compliance Logo for
Dual-mode Sources, and DisplayPort Compliance Logo for Active Cables are
trademarks owned by the Video Electronics Standards Association in the United
States and other countries.
Trademarks
NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.