How to Install TensorRT on Ubuntu

Sushrut Ashtikar · Published in Level Up Coding · Mar 16, 2023 · 2 min read


Photo by Resul Kaya on Unsplash

TensorRT is a high-performance deep-learning inference engine developed by NVIDIA. It is designed to optimize and accelerate the inference of deep neural networks on NVIDIA GPUs. In this blog, I will guide you through installing TensorRT on your Ubuntu-based system.

Step 1: Check Prerequisites

Before installing TensorRT, you need to ensure that your system meets the following requirements:

  • NVIDIA GPU with Compute Capability 3.0 or higher
  • Ubuntu 16.04, 18.04, or 20.04
  • NVIDIA CUDA Toolkit 10.2, 11.0, or 11.1
  • NVIDIA cuDNN Library 7.6, 8.0, or 8.1
  • Python 3.6, 3.7, 3.8, or 3.9
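
A quick way to sanity-check these prerequisites is a small shell snippet. This is only a sketch: it reports what is on your PATH, and the cuDNN header location (`/usr/include/cudnn_version.h`) is an assumption that may differ on your system:

```shell
# Report whether the tools TensorRT depends on are present
for cmd in nvidia-smi nvcc python3; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done

# CUDA toolkit version, if nvcc is available
command -v nvcc >/dev/null 2>&1 && nvcc --version | grep release

# cuDNN version, if the header is in the usual place (path may vary)
[ -f /usr/include/cudnn_version.h ] && \
  grep -m1 CUDNN_MAJOR /usr/include/cudnn_version.h
```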

Step 2: Download TensorRT from the NVIDIA website.
(This blog references version 7.0.)

This is a very important step: if the incorrect TensorRT version is selected, it will crash while converting the weights.
Version Specifications:

| Cuda Version  | TensorRT Version  |
| ------------- | ----------------- |
| 10.2 | 7.0 |
| 11.1 | 7.2.2 |
| 11.2 | 7.2.2 |

You can download the TensorRT package from the NVIDIA website. To download the package, you need to sign up for the NVIDIA Developer Program.

Once you have signed up, navigate to the TensorRT download page and select the version that matches your system requirements. You will need to accept the license agreement to download the package.
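
To pick the right download, first check which CUDA version is installed so it can be matched against the table above. A minimal sketch, assuming the toolkit lives under /usr/local/cuda:

```shell
# Print the installed CUDA version (match it against the version table above)
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -o 'release [0-9.]*'
elif [ -f /usr/local/cuda/version.json ]; then
  # newer CUDA toolkits ship version.json instead of version.txt
  grep -m1 '"version"' /usr/local/cuda/version.json
elif [ -f /usr/local/cuda/version.txt ]; then
  cat /usr/local/cuda/version.txt
else
  echo "CUDA toolkit not found under /usr/local/cuda"
fi
```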

Step 3: Install TensorRT

Install the TensorRT package using the following commands:

sudo dpkg -i nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.0.0.11-ga-20191216_1-1_amd64.deb
sudo apt update
sudo apt install tensorrt libnvinfer7
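
After installing, it is worth verifying that the packages and the Python bindings are visible. A quick check — note that the Python import only succeeds if the optional Python bindings package was also installed:

```shell
# List the TensorRT-related packages dpkg knows about
dpkg -l | grep -i tensorrt || echo "no TensorRT packages found"

# Check the Python bindings, if installed
python3 -c "import tensorrt; print(tensorrt.__version__)" 2>/dev/null \
  || echo "tensorrt Python module not found"
```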

(Optional step)

If you’re facing some kind of path issue, add these lines at the bottom of the .bashrc file located in /home/$USER/:

export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export C_INCLUDE_PATH=$CUDA_HOME/include:$C_INCLUDE_PATH
export CPLUS_INCLUDE_PATH=$CUDA_HOME/include:$CPLUS_INCLUDE_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export LD_RUN_PATH=$CUDA_HOME/lib64:$LD_RUN_PATH
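
Once the lines are added, reload the file and confirm the variables resolved. A quick check, assuming bash and the default ~/.bashrc location:

```shell
# Reload .bashrc in the current shell so the new exports take effect
[ -f ~/.bashrc ] && . ~/.bashrc || true

# Confirm the variables resolved
echo "CUDA_HOME=${CUDA_HOME:-unset}"
echo "nvcc on PATH: $(command -v nvcc || echo 'not found')"
```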

Conclusion

In this blog, I have provided a step-by-step guide on how to install TensorRT on your system. By following these steps, you should be able to install TensorRT and start using it to accelerate the inference of your deep neural networks on NVIDIA GPUs.

If you enjoy my blog and would like to support my work, consider buying me a coffee. Your contribution will help me keep the blog running and creating valuable content. Thank you!
