Choosing the Right ABI
Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two mutually incompatible options: the pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that the most popular distribution of PyTorch (wheels downloaded directly from pytorch.org/PyPI) uses the pre-cxx11-abi, while most other distributions you might encounter (e.g. NGC containers from NVIDIA, builds for Jetson, certain libtorch builds, and most likely any PyTorch you build from source) use the cxx11-abi. Torch-TensorRT must be compiled with the ABI that matches your PyTorch distribution in order to function properly. Below is a table pairing common PyTorch distribution sources with the recommended build commands:
PyTorch whl file from PyTorch.org
    python -m pip install .
    bazel build //:libtorchtrt -c opt --config pre_cxx11_abi

libtorch-shared-with-deps-*.zip from PyTorch.org
    python -m pip install .
    bazel build //:libtorchtrt -c opt --config pre_cxx11_abi

libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org
    python setup.py bdist_wheel --use-cxx11-abi
    bazel build //:libtorchtrt -c opt

PyTorch preinstalled in an NGC container
    python setup.py bdist_wheel --use-cxx11-abi
    bazel build //:libtorchtrt -c opt

PyTorch from the NVIDIA Forums for Jetson
    python setup.py bdist_wheel --use-cxx11-abi
    bazel build //:libtorchtrt -c opt

PyTorch built from Source
    python setup.py bdist_wheel --use-cxx11-abi
    bazel build //:libtorchtrt -c opt
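If you are unsure which ABI your installed PyTorch uses, you can query it directly. The sketch below uses torch.compiled_with_cxx11_abi(), a standard PyTorch function; the bazel_command helper is this sketch's own, not part of Torch-TensorRT:

```python
# Sketch: pick the bazel invocation that matches the ABI of the
# PyTorch installation found in the current environment.
def bazel_command(uses_cxx11_abi: bool) -> str:
    """Illustrative helper mapping the ABI flag to the table above."""
    if uses_cxx11_abi:
        return "bazel build //:libtorchtrt -c opt"
    return "bazel build //:libtorchtrt -c opt --config pre_cxx11_abi"

try:
    import torch
    # True  -> distribution was built with the cxx11-abi
    # False -> distribution uses the pre-cxx11-abi
    print(bazel_command(torch.compiled_with_cxx11_abi()))
except ImportError:
    print("torch is not installed in this environment")
```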
Build steps
Open the app “x64 Native Tools Command Prompt for VS 2022” - note that Admin privileges may be necessary
Ensure Bazelisk (the Bazel launcher) is installed on your machine and available on the command line. Package managers such as Chocolatey can be used to install Bazelisk.
Install the latest version of Torch (e.g. with pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124)
Clone the Torch-TensorRT repository and navigate to its root directory
Run pip install ninja wheel setuptools
Run pip install --pre -r py/requirements.txt
Run set DISTUTILS_USE_SDK=1
Run python setup.py bdist_wheel
Run pip install dist/*.whl
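The steps above can be collected into a single batch session. This is a sketch, not an official script: the repository URL and CUDA wheel index are the ones referenced in the steps, and the final for loop stands in for the dist/*.whl glob, which the Windows command prompt does not expand (in a .bat file, double the % signs):

```bat
:: Run inside "x64 Native Tools Command Prompt for VS 2022"
python -m pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu124
git clone https://github.com/pytorch/TensorRT.git
cd TensorRT
pip install ninja wheel setuptools
pip install --pre -r py\requirements.txt
set DISTUTILS_USE_SDK=1
python setup.py bdist_wheel
for %f in (dist\*.whl) do pip install %f
```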
Advanced setup and Troubleshooting
In the WORKSPACE file, the cuda_win, libtorch_win, and tensorrt_win are Windows-specific modules which can be customized. For instance, if you would like to build with a different version of CUDA, or your CUDA installation is in a non-standard location, update the path in the cuda_win module.
Similarly, if you would like to use a different version of PyTorch or TensorRT, customize the urls in the libtorch_win and tensorrt_win modules, respectively.
Local versions of these packages can also be used on Windows. See toolchains\ci_workspaces\WORKSPACE.win.release.tmpl for an example of using a local version of TensorRT on Windows.
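As an illustration of customizing the urls, an override of the tensorrt_win module might look like the following http_archive rule. All attribute values here are placeholders to fill in for your chosen version; consult the existing WORKSPACE entry for the exact build_file label and archive layout:

```python
# Sketch of a WORKSPACE (Starlark) override for the TensorRT archive.
# <version> and the download URL are placeholders, not real values.
http_archive(
    name = "tensorrt_win",
    build_file = "@//third_party/tensorrt/archive:BUILD",
    strip_prefix = "TensorRT-<version>",
    urls = [
        "https://developer.nvidia.com/<path to your TensorRT zip>",
    ],
)
```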
Alternative Build Systems
Building with CMake (TorchScript Only)
It is possible to build the API libraries (in cpp/) and the torchtrtc executable using CMake instead of Bazel.
Currently, the python API and the tests cannot be built with CMake.
Begin by installing CMake. The latest releases of CMake and installation instructions for different platforms are available on their website: https://cmake.org/download/
A few useful CMake options include:
CMake finders for TensorRT are provided in cmake/Modules. In order for CMake to use them, pass -DCMAKE_MODULE_PATH=cmake/Modules when configuring the project with CMake.
Libtorch provides its own CMake finder. In case CMake doesn't find it, pass the path to your install of libtorch with -DTorch_DIR=<path to libtorch>/share/cmake/Torch
If TensorRT is not found with the provided CMake finder, specify -DTensorRT_ROOT=<path to TensorRT>
Finally, configure and build the project in a build directory of your choice with the following command
from the root of Torch-TensorRT project:
cmake -S. -B<build directory> \
[-DCMAKE_MODULE_PATH=cmake/Modules] \
[-DTorch_DIR=<path to libtorch>/share/cmake/Torch] \
[-DTensorRT_ROOT=<path to TensorRT>] \
[-DCMAKE_BUILD_TYPE=Debug|Release]
cmake --build <build directory>
Prerequisites
Install or compile a build of PyTorch/LibTorch for aarch64
NVIDIA hosts builds of the latest release branch for Jetson here:
https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048
Environment Setup
To build natively on the aarch64-linux-gnu platform, configure the WORKSPACE with locally available dependencies:
Replace WORKSPACE with the corresponding WORKSPACE file in //toolchains/jp_workspaces
Configure the correct paths to the directory roots containing local dependencies in the new_local_repository rules:
NOTE: If you installed PyTorch using a pip package, the correct path is the path to the root of the python torch package. If you installed with sudo pip install, this will be /usr/local/lib/python3.8/dist-packages/torch. If you installed with pip install --user, this will be $HOME/.local/lib/python3.8/site-packages/torch.

If you are using NVIDIA-compiled pip packages, set the path for both libtorch sources to the same path. This is because, unlike PyTorch on x86_64, NVIDIA's aarch64 PyTorch uses the CXX11-ABI. If you compiled from source using the pre_cxx11_abi and only want to use that library, set both paths to the same path, but when you compile make sure to add the flag --config=pre_cxx11_abi
new_local_repository(
    name = "libtorch",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
    name = "libtorch_pre_cxx11_abi",
    path = "/usr/local/lib/python3.8/dist-packages/torch",
    build_file = "third_party/libtorch/BUILD"
)
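If you are unsure where pip placed the torch package, you can locate its root programmatically. package_root below is an illustrative helper, not part of any tooling; it works for any installed package:

```python
# Locate an installed package's root directory without importing it.
# Pass "torch" to get the path to use in the new_local_repository rules.
import importlib.util

def package_root(name):
    spec = importlib.util.find_spec(name)
    if spec is not None and spec.submodule_search_locations:
        return list(spec.submodule_search_locations)[0]
    return None

# Prints the torch package root (e.g. /usr/local/lib/python3.8/
# dist-packages/torch), or None if torch is not installed here.
print(package_root("torch"))
```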
Compile C++ Library and Compiler CLI
NOTE: Due to shifting dependency locations between Jetpack 4.5 and 4.6, there is now a flag to inform bazel of the Jetpack version: --platforms //toolchains:jetpack_x.x
Compile the Torch-TensorRT library using the following bazel command:
bazel build //:libtorchtrt --platforms //toolchains:jetpack_5.0
Compile Python API
NOTE: Due to shifting dependency locations between Jetpack 4.5 and newer Jetpack versions, there is now a flag for setup.py which sets the Jetpack version (default: 5.0)
Compile the Python API using the following command from the //py directory: