How to Easily Install PyTorch on Jetson Orin Nano running JetPack 6.2
NVIDIA Jetson devices offer powerful AI inference capabilities at the edge, making them ideal for running deep learning models efficiently. This guide provides step-by-step instructions to set up an NVIDIA® Jetson Orin Nano™ Developer Kit for deep learning with PyTorch and torchvision. The installation process will be performed via a terminal from a host machine connected through USB or a serial connection. 🚀
Software and Package Versions
This guide involves the following software and package versions for setting up PyTorch on the Jetson Orin Nano:
- JetPack - 6.2 (also compatible with 6.1)
- CUDA - 12.6
- Python - 3.10
- cuSPARSELt - 0.7.0
- torch - 2.5.0a0+872d972e41 (NVIDIA wheel)
- torchvision - 0.20.0 (built from source)
Tips for Headless Setup - WiFi Connection and IP Address
If your Jetson device runs headless, here are some instructions to connect to WiFi using nmcli:
- Check available WiFi networks:
nmcli device wifi list
- Connect to a WiFi network:
nmcli device wifi connect "SSID_NAME" password "YOUR_PASSWORD"
- Verify the Connection:
nmcli connection show --active
Note: If you encounter an unauthorized error, prepend sudo to these commands.
After connecting to WiFi, you can retrieve the IP address assigned to the Jetson device. Check the active connections with the nmcli connection show --active command above, which outputs their names, UUIDs, types, and devices. Note the device name of the wifi-type connection and use it with the command ip a show <DEVICE_NAME>. From another device on the same network, e.g., the host machine, verify the connection to the Jetson by running ping on this IP address. With this connection established, you can easily share files, servers, and CLI sessions between the Jetson device and the host machine.
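If ping is unavailable on the host, a quick Python check using only the standard library can serve the same purpose; this is a minimal sketch (the host value shown is a placeholder) that tests whether the Jetson's SSH port accepts connections:

```python
import socket

def reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

# Example: replace with your Jetson's IP address
# print(reachable("192.168.1.42"))
```

Unlike ping (ICMP), this confirms that the SSH service itself is listening, which is what the later steps rely on.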
Step 1 - Installing JetPack and SDK Components
The Jetson device must be flashed with JetPack, which includes the Jetson Linux OS and necessary GPU computing libraries such as CUDA and cuDNN. Follow the official NVIDIA JetPack installation guide for instructions. Ensure the firmware is updated to version 36+ for JetPack 6 and later.
To flash Jetson Orin Nano, install NVIDIA SDK Manager on a Linux host machine and follow the setup steps. During flashing, select Jetson SDK Components to install CUDA toolkit in the second step. If CUDA is missing post-installation, follow the JetPack package management guide.
CUDA installation can be verified with -
$ nvcc --version
or
$ nvidia-smi
which should show the same CUDA version.
For additional verification, compile and run a CUDA sample from its GitHub repository:
# Clone suitable version for testing CUDA 12.6
$ git clone --branch v12.5 https://github.com/NVIDIA/cuda-samples.git
$ cd cuda-samples/Samples/1_Utilities/deviceQuery
$ make
$ ./deviceQuery
Step 2 - Installing PyTorch and torchvision
Jetson devices use integrated GPUs (iGPUs), while the default CUDA backend of PyTorch is optimized for discrete GPUs (dGPUs). To enable PyTorch with GPU acceleration on Jetson, follow the custom installation steps from the NVIDIA instructions and NVIDIA forums. The supporting package torchvision also has to be built from source on the Jetson device.
Installing cuSPARSELt
For PyTorch versions 24.06+ (see the Compatibility Matrix), cuSPARSELt is required. Install it by following these instructions, selecting Linux OS, aarch64-jetson architecture, and Ubuntu distribution:
$ wget https://developer.download.nvidia.com/compute/cusparselt/0.7.0/local_installers/cusparselt-local-tegra-repo-ubuntu2204-0.7.0_1.0-1_arm64.deb
$ sudo dpkg -i cusparselt-local-tegra-repo-ubuntu2204-0.7.0_1.0-1_arm64.deb
$ sudo cp /var/cusparselt-local-tegra-repo-ubuntu2204-0.7.0/cusparselt-*-keyring.gpg /usr/share/keyrings/
$ sudo apt-get update
$ sudo apt-get -y install libcusparselt0 libcusparselt-dev
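To confirm the library is discoverable at runtime, a short Python check can try to load it. This is a stdlib-only sketch (the fallback soname libcusparseLt.so.0 matches the libcusparselt0 package above); it simply returns False on systems where cuSPARSELt is not installed:

```python
import ctypes
import ctypes.util

def cusparselt_available() -> bool:
    """Return True if the cuSPARSELt shared library can be located and loaded."""
    name = ctypes.util.find_library("cusparseLt") or "libcusparseLt.so.0"
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

print(cusparselt_available())  # should print True after a successful install
```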
Installing PyTorch
(Optional) Create a virtual environment:
$ sudo apt-get install virtualenv
$ cd <target-directory>
$ python3 -m virtualenv -p python3 <venv-name>
$ source <venv-name>/bin/activate
Install PyTorch with a custom wheel built by NVIDIA:
- Check compatibility from NVIDIA Jetson PyTorch matrix.
- Select a suitable wheel from the list of released wheels by navigating to v$JP_VERSION (JetPack version) > pytorch > $PYT_VERSION ... .whl (PyTorch version).
- Install with pip -
$ pip3 install --no-cache https://developer.download.nvidia.com/compute/redist/jp/v$JP_VERSION/pytorch/$PYT_VERSION ... .whl
In this tutorial, PyTorch version 2.5 for JetPack 6.1 (still compatible with 6.2) will be installed with:
$ pip3 install --no-cache https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
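The download URLs above follow one fixed pattern, so a small helper (illustrative only) can assemble them from the JetPack tag and the wheel filename listed on the download page:

```python
def wheel_url(jp_tag: str, wheel_name: str) -> str:
    """Build the NVIDIA redist URL for a Jetson PyTorch wheel.

    jp_tag: JetPack tag without dots, e.g. "61" for JetPack 6.1.
    wheel_name: full .whl filename as listed on the download page.
    """
    base = "https://developer.download.nvidia.com/compute/redist/jp"
    return f"{base}/v{jp_tag}/pytorch/{wheel_name}"

print(wheel_url(
    "61",
    "torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl",
))
```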
Verify with Python Terminal:
$ python3
>>> import torch
>>> print(torch.__version__)
>>> print('CUDA available: ' + str(torch.cuda.is_available())) # Should be True
>>> print('cuDNN version: ' + str(torch.backends.cudnn.version()))
>>> a = torch.cuda.FloatTensor(2).zero_()
>>> print('Tensor a = ' + str(a))
>>> b = torch.randn(2).cuda()
>>> print('Tensor b = ' + str(b))
>>> c = a + b
>>> print('Tensor c = ' + str(c))
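The NVIDIA wheel carries a local version tag after the + sign (e.g. 2.5.0a0+872d972e41.nv24.08...). Assuming torch.__version__ mirrors the wheel name, a quick string check can confirm that the Jetson build, not an upstream one, is loaded; this heuristic is a sketch:

```python
def is_jetson_build(torch_version: str) -> bool:
    """Heuristic: NVIDIA's Jetson wheels carry an ".nv" marker in the local
    version tag (the part after "+"), e.g. "2.5.0a0+872d972e41.nv24.08"."""
    local = torch_version.partition("+")[2]
    return ".nv" in local

# With the wheel installed above, torch.__version__ should match this pattern:
print(is_jetson_build("2.5.0a0+872d972e41.nv24.08"))  # True
print(is_jetson_build("2.5.0"))                       # False (upstream build)
```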
Installing torchvision
torchvision must be built from source. Select a compatible version of torchvision from the PyTorch releases and replace $VERSION in the following commands:
$ sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev
$ git clone --branch release/0.$VERSION https://github.com/pytorch/vision torchvision
$ cd torchvision
$ export BUILD_VERSION=0.$VERSION.0
$ python3 setup.py install --user # remove --user if installing in virtualenv
In this tutorial, torchvision version 0.20.0 is installed by replacing $VERSION with 20.
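For the torch 2.x series, the matching torchvision minor version has consistently been torch's minor plus 15 (torch 2.5 → torchvision 0.20, torch 2.4 → 0.19). A tiny helper encoding that rule of thumb (always confirm against the official compatibility table before building):

```python
def torchvision_branch(torch_minor: int) -> str:
    """Rule of thumb for the torch 2.x line: torchvision minor = torch minor + 15.
    E.g. torch 2.5 -> torchvision release/0.20. Verify against the official
    compatibility table before building."""
    return f"release/0.{torch_minor + 15}"

print(torchvision_branch(5))  # release/0.20, the branch cloned above
```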
Verify with Python terminal:
$ python3
>>> import torchvision
>>> print(torchvision.__version__)
Troubleshooting
Here are some known issues and tips for troubleshooting:
- If a warning about the numpy version is shown, downgrade numpy with pip install 'numpy<2'
- If you encounter the runtime error RuntimeError: operator torchvision::nms does not exist, follow the instructions in this NVIDIA forum post. Specifically, install these pre-compiled binaries of torch and torchvision:
$ pip install http://jetson.webredirect.org/jp6/cu126/+f/5cf/9ed17e35cb752/torch-2.5.0-cp310-cp310-linux_aarch64.whl#sha256=5cf9ed17e35cb7523812aeda9e7d6353c437048c5a6df1dc6617650333049092
$ pip install http://jetson.webredirect.org/jp6/cu126/+f/5f9/67f920de3953f/torchvision-0.20.0-cp310-cp310-linux_aarch64.whl#sha256=5f967f920de3953f2a39d95154b1feffd5ccc06b4589e51540dc070021a9adb9
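pip already verifies the #sha256= fragment appended to those URLs, but a downloaded wheel can also be checked by hand against that digest. A stdlib sketch:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage: compare the result with the #sha256=... fragment in the wheel URL
# print(sha256_of("torch-2.5.0-cp310-cp310-linux_aarch64.whl"))
```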
Step 3 - (Optional) Setting up Jupyter Notebook for Remote Access via SSH tunnel
If your Jetson device is connected to the same network as your other machines—or directly via USB or a serial cable—you can start a remote Jupyter server from the Jetson device for convenient headless operations. First, find the IP address of your Jetson device:
$ ip a
Look for an IP address similar to 192.168.x.x or 10.x.x.x.
(Optional) Control the Jetson terminal via SSH. Log in with the Jetson username (the Linux user on the Jetson) and enter the password when prompted.
$ ssh <jetson-username>@<jetson-ip>
Install JupyterLab or Jupyter Notebook from the Jetson terminal (or SSH session):
$ pip install jupyterlab
$ pip install notebook
Start server from Jetson terminal (or SSH session):
$ jupyter notebook --no-browser --port=8888 --ip=0.0.0.0
From your local PC, forward port 8888 using SSH:
$ ssh -N -L 8888:localhost:8888 <jetson-username>@<jetson-ip>
Then access the running Jupyter server on your PC at http://localhost:8888/
Conclusion
By following this guide, you’ve successfully installed PyTorch and torchvision on your Jetson Orin Nano running JetPack 6.2. With this setup, your device is now prepared for deep learning tasks, AI model deployment, and edge computing applications. This installation is part of our comprehensive walkthrough on running ResNet18 with FastAI on the Jetson Orin Nano—check out the full tutorial here if you’re interested!