
Getting Started with Roboflow Inference on NVIDIA® Jetson Devices

This wiki guide explains how to easily deploy AI models to NVIDIA Jetson devices using the Roboflow inference server. Here we will use Roboflow Universe to select an already trained model, deploy the model to the Jetson device, and perform inference on a live webcam stream!

Roboflow Inference is the easiest way to use and deploy computer vision models, providing an HTTP API through which you can run inference. Roboflow Inference supports:

  • Object detection
  • Image segmentation
  • Image classification

and foundation models like CLIP and SAM.
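
As a quick illustration of that HTTP interface, below is a minimal sketch using Roboflow's inference-sdk pip package (a separate install: pip install inference-sdk) to query an inference server already running locally on its default port 9001. The model ID and API key placeholders are the ones used later in this guide; consult the Roboflow documentation for the exact client API.

from inference_sdk import InferenceHTTPClient

# Point the client at a locally running Roboflow inference server
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="your_key_here",  # your Roboflow Private API Key
)

# Run the people detection model on a local image file
result = client.infer("image.jpg", model_id="people-detection-general/7")
print(result)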

Prerequisites

  • Ubuntu Host PC (native or VM using VMware Workstation Player)
  • reComputer Jetson or any other NVIDIA Jetson device
note

This wiki has been tested and verified on a reComputer J4012 and a reComputer Industrial J4012, both powered by the NVIDIA Jetson Orin NX 16GB module.

Flash JetPack to Jetson

Now you need to make sure that the Jetson device is flashed with a JetPack system. You can use either the NVIDIA SDK Manager or the command line to flash JetPack to the device.

For flashing guides for Seeed Jetson-powered devices, please refer to the links below:

note

Make sure to flash JetPack version 5.1.1, because that is the version we have verified for this wiki.

Tap into 50,000+ Models at Roboflow Universe

Roboflow offers 50,000+ ready-to-use AI models so that everyone can get started with computer vision deployment in the fastest way. You can explore them all at Roboflow Universe. Roboflow Universe also offers 200,000+ datasets that you can use to train a model on Roboflow cloud servers; alternatively, bring your own dataset, annotate it with Roboflow's online image annotation tool, and start training.

  • Step 1: We will use a people detection model from Roboflow Universe as a reference

  • Step 2: Here the model name follows the format "model_name/version". In this case, it is people-detection-general/7. We will use this model name later in this wiki when we run inference.

Obtain Roboflow API Key

Now we need to obtain a Roboflow API key for the Roboflow inference server to work.

  • Step 1: Sign up for a new Roboflow account by entering your credentials

  • Step 2: Sign in to the account, navigate to Projects > Workspaces > <your_workspace_name> > Roboflow API, and click Copy next to the "Private API Key" section

Keep this private key safe, because we will need it later.

Running Roboflow Inference Server

You can get started with Roboflow inference on an NVIDIA Jetson in three different ways.

  1. Using pip package - Using the pip package is the fastest way to get started; however, you will need to install the SDK components (CUDA, cuDNN, TensorRT) along with JetPack.
  2. Using Docker Hub - Using Docker Hub will be a little slower, because it first pulls a Docker image of around 19GB; however, you do not need to install the SDK components because the Docker image already includes them (see the sketch after this list).
  3. Using a local Docker build - Using a local Docker build is an extension of the Docker Hub method, where you can change the source code for the Docker image to suit your application (such as enabling TensorRT with INT8 precision).
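
For the Docker Hub route, the server is typically started with a single docker run command. Here is a minimal sketch; the image name and tag assume the JetPack 5.1.1 build, so check Roboflow's Docker Hub page for the image matching your JetPack version.

sudo docker run --privileged --net=host --runtime=nvidia \
    roboflow/roboflow-inference-server-jetson-5.1.1:latest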

Before moving on to running the Roboflow inference server, you need an AI model to run inference with and a Roboflow API key; we obtained both in the sections above.

Using pip Package

  • Step 1: If you flashed the Jetson device with only NVIDIA L4T (i.e., without the SDK components), you need to install the SDK components first
sudo apt update
sudo apt install nvidia-jetpack -y
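After the installation completes, you can optionally confirm that the SDK components are in place with a quick check (exact package versions will vary by JetPack release):
dpkg -l | grep -E "cuda|cudnn|tensorrt"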
  • Step 2: Execute the commands below in a terminal to install the Roboflow inference server pip package
sudo apt update
sudo apt install python3-pip -y
pip install inference-gpu
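You can verify that the package installed correctly by importing it; if the command below prints nothing and exits without errors, the installation succeeded:
python3 -c "import inference"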
  • Step 3: Execute the command below, replacing the value with the Roboflow Private API Key that you obtained earlier
export ROBOFLOW_API_KEY=your_key_here
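The export above only lasts for the current terminal session. If you want the key to persist across reboots and new terminals, you can append it to your shell profile instead:
echo 'export ROBOFLOW_API_KEY=your_key_here' >> ~/.bashrc
source ~/.bashrc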
  • Step 4: Connect a webcam to the Jetson device and execute the following Python script to run an open-source people detection model on your webcam stream
webcam.py
import cv2
import inference
import supervision as sv

# Annotator for drawing predicted bounding boxes on each frame
annotator = sv.BoxAnnotator()

inference.Stream(
    source="webcam",  # or an RTSP stream URL / video file path
    model="people-detection-general/7",  # model ID from Roboflow Universe

    output_channel_order="BGR",
    use_main_thread=True,  # OpenCV windows must run on the main thread

    # Called for every frame: print the raw predictions, then show the
    # annotated frame in an OpenCV window
    on_prediction=lambda predictions, image: (
        print(predictions),

        cv2.imshow(
            "Prediction",
            annotator.annotate(
                scene=image,
                detections=sv.Detections.from_roboflow(predictions),
            ),
        ),
        cv2.waitKey(1),
    ),
)
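
Save the script as webcam.py and run it with:

python3 webcam.py

A window should open showing the annotated webcam stream; press Ctrl+C in the terminal to stop it.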

Finally, you will see the result as follows:


Learn more

Roboflow offers very detailed and comprehensive documentation, so it is highly recommended to check it out here.

Tech Support & Product Discussion

Thank you for choosing our products! We are here to provide you with the support you need to ensure that your experience with our products is as smooth as possible. We offer several communication channels to cater to different preferences and needs.
