Welcome to xInfer! This guide will walk you through the process of setting up your environment and building the library from source.
Prerequisites
xInfer is a high-performance library that sits on top of the NVIDIA software stack. Before you begin, you must have the following components installed on your system.
1. NVIDIA Driver
You need a recent NVIDIA driver that supports your GPU and CUDA Toolkit version. You can check your driver version by running:
```bash
nvidia-smi
```
2. NVIDIA CUDA Toolkit
xInfer is built on CUDA. We recommend CUDA 11.8 or newer.
Verification: Check if nvcc is installed and in your PATH:
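```bash
nvcc --version
```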
3. NVIDIA cuDNN
TensorRT requires the cuDNN library to accelerate deep learning primitives. We recommend cuDNN 8.6 or newer.
Verification: Check for the cuDNN header file:
```bash
ls /usr/include/cudnn.h
```
Installation: Download and install it from the NVIDIA cuDNN Archive. Make sure to follow the installation instructions for copying the header and library files to your CUDA Toolkit directory.
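For a tar-archive installation, the copy step typically looks like the sketch below. The paths assume a default CUDA install under /usr/local/cuda and the layout of the extracted cuDNN directory varies between releases, so adjust them to match your system:

```bash
# Run from inside the extracted cuDNN directory (layout varies by release).
# Copies the cuDNN headers and libraries into the CUDA Toolkit tree.
sudo cp include/cudnn*.h /usr/local/cuda/include
sudo cp -P lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
```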
4. NVIDIA TensorRT
TensorRT is the core optimization engine used by xInfer. We recommend TensorRT 8.6 or newer.
Verification: Check if the TensorRT header NvInfer.h exists:
```bash
ls /usr/include/x86_64-linux-gnu/NvInfer.h
```
(The path may vary depending on your installation method).
Installation: Download and install it from the NVIDIA TensorRT page. The .deb or .rpm package installation is the easiest method.
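If you used the Debian packages, you can also confirm what was installed through the package manager (this check only applies to .deb-based installs):

```bash
# Lists the installed TensorRT packages and their versions (.deb installs only)
dpkg -l | grep -i nvinfer
```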
5. xTorch Library
The xInfer::builders and zoo modules are designed to work seamlessly with models trained or defined in xTorch.
Installation: You must build and install xTorch first. Please follow the instructions at the xTorch GitHub Repository.
```bash
git clone https://github.com/your-username/xtorch.git
cd xtorch
mkdir build && cd build
cmake ..
make -j
sudo make install
```
6. Other Dependencies (Compiler, CMake, OpenCV)
You will need a modern C++ compiler, CMake, and OpenCV. These can be installed from your distribution's packages, as shown below.
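On Debian/Ubuntu systems, the usual packages look like this (package names assume Ubuntu; adapt for your distribution):

```bash
# Compiler toolchain, CMake, and OpenCV development headers (Ubuntu package names)
sudo apt-get update
sudo apt-get install -y build-essential cmake libopencv-dev
```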
Step 2: Configure with CMake
Create a build directory and run CMake. This command will find all the dependencies you installed and prepare the build files.
```bash
mkdir build
cd build
cmake ..
```
!!! tip "Troubleshooting Dependency Issues"
    If CMake has trouble finding a specific library (especially TensorRT if you installed it from a .tar/.zip archive), you can give it a hint with the -D flag. For example:
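    The exact cache variable depends on how the project's CMakeLists.txt looks up TensorRT; the variable name `TENSORRT_ROOT` and the path below are only illustrative assumptions.

    ```bash
    # Hypothetical example: point CMake at a TensorRT archive extracted to /opt
    cmake .. -DTENSORRT_ROOT=/opt/TensorRT-8.6.1
    ```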
Step 3: Build the Library
Use make to compile the library. This will create the libxinfer.so shared library and all the example executables.
```bash
# Use -j to specify the number of parallel jobs to speed up compilation
make -j$(nproc)
```
Step 4: (Optional) Run Tests
After a successful build, you can run the example executable to verify that everything is working.
```bash
# From the build directory
./xinfer_example
```
You should see the output from the example programs, indicating a successful build.
Step 5: (Optional) Install System-Wide
If you want to use `xInfer` as a dependency in other C++ projects on your system, you can install it. This will copy the library files and headers to system directories (like `/usr/local/lib` and `/usr/local/include`).
```bash
sudo make install
```
After this step, other CMake projects can find and use your library with a simple `find_package(xinfer REQUIRED)` command.
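As a sketch, a downstream project's CMakeLists.txt might look like the following. The imported target name `xinfer::xinfer` and the executable name are assumptions; check the installed CMake package configuration for the exact target exported by the library.

```cmake
# Minimal consumer project (the xinfer::xinfer target name is assumed)
cmake_minimum_required(VERSION 3.18)
project(my_app LANGUAGES CXX)

find_package(xinfer REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE xinfer::xinfer)
```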
Next Steps
Congratulations! You have successfully built and installed xInfer.
Check out the 🚀 Quickstart guide to run your first high-performance inference pipeline.
Explore the Model Zoo API to see all the pre-packaged solutions you can use out-of-the-box.