
Train your own data set with Yolov5 and Deploy on reTerminal DM


Object detection, a fundamental aspect of computer vision, involves identifying objects within images and finds application in numerous fields such as surveillance, healthcare, and self-driving cars. Single-Stage Object Detectors are a specific category of models that simplify this task by directly predicting object categories and bounding box coordinates without requiring an initial region proposal stage. This streamlined approach is especially valuable in real-time applications. Notably, the YOLO (You Only Look Once) family of models exemplifies this efficiency, offering fast object detection without compromising accuracy.

Getting Started

Before you start this project, you may need to prepare your hardware and software in advance as described here.

Hardware preparation

reTerminal DM
Coral USB Accelerator

Software preparation

We recommend installing the latest version of the 64-bit Raspberry Pi OS from the official website. If you need to install a fresh copy of the OS, please follow the steps outlined in this guide.



Roboflow is a comprehensive platform for computer vision that offers a wide range of tools and services to streamline the process of developing and deploying computer vision models. One of its standout features is its robust support for custom datasets with annotations. Users can easily upload their own datasets, complete with annotations, to train and fine-tune computer vision models. Create a Roboflow account before proceeding.



Ultralytics is a cutting-edge platform and research organization that specializes in computer vision and deep learning. They are renowned for their contributions to the field, particularly in the development of the YOLO (You Only Look Once) family of object detection models, such as YOLOv5. Ultralytics offers a range of state-of-the-art tools and open-source software for object detection, image classification, and more, making advanced computer vision accessible to researchers and developers. Their dedication to innovation and performance-driven solutions has garnered significant attention and acclaim in the computer vision community, making Ultralytics a go-to resource for those seeking to push the boundaries of what's possible in the realm of deep learning and visual recognition.

Choose a data set from Roboflow

  • Step 1 Please access Roboflow and navigate to the dataset of your choice using the search function.


  • Step 2 After you select a dataset, click Download this Dataset.


  • Step 3 Select the YOLOv5 download format.


  • Step 4 Select Show download code and press Continue.


  • Step 5 In the "Jupyter" section, you'll find a code snippet. Please copy this snippet to your clipboard.
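For background, the YOLOv5 format you selected in Step 3 stores annotations as one plain-text label file per image, with one line per object: the class index followed by the box center, width, and height, all normalized to the 0–1 range. The helper below is an illustrative sketch of how a pixel-space bounding box maps to that format (the `to_yolo` name is hypothetical, not part of Roboflow or YOLOv5):

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max)
    to YOLOv5's normalized (x_center, y_center, width, height)."""
    x_min, y_min, x_max, y_max = box
    x_c = (x_min + x_max) / 2 / img_w   # box center, as a fraction of image width
    y_c = (y_min + y_max) / 2 / img_h   # box center, as a fraction of image height
    w = (x_max - x_min) / img_w         # box width,  normalized
    h = (y_max - y_min) / img_h         # box height, normalized
    return (x_c, y_c, w, h)

# A 100x200 box with top-left corner at (50, 100) in a 640x640 image:
print(to_yolo((50, 100, 150, 300), 640, 640))  # → (0.15625, 0.3125, 0.15625, 0.3125)
```

Roboflow writes these values for you; the sketch is only to clarify what the downloaded label files contain.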


Train your custom dataset

  • Step 1 Please access the provided GitHub link and click on Open in Colab.



Before deploying a model on resource-constrained devices like the Raspberry Pi, it's often essential to conduct model conversion and quantization to ensure optimal performance. This process involves several steps: converting a PyTorch model (in .pt format) to a TensorFlow Lite (TFLite) model with quantization, specifically to the uint8 data type. You can train your custom dataset and convert it into a TFLite model using this Colab notebook. We have outlined a step-by-step process for training within the Colab environment. Please follow these instructions, obtain the data.yaml file and best-int8.tflite file, and return to this wiki for further guidance.
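For intuition about what the uint8 quantization step does, full-integer quantization maps each float tensor onto 8-bit values through an affine transform, q = round(x / scale) + zero_point, with the scale and zero point chosen so the tensor's range (including zero) fits into 0–255. The following is a minimal pure-Python sketch of that mapping; the function names are illustrative and not part of the TFLite API:

```python
def quantize_uint8(values):
    """Affine (asymmetric) quantization of floats to uint8,
    as used by TFLite full-integer quantization.
    Assumes the values do not all equal zero."""
    # The representable range must include 0 so that padding stays exact.
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale)          # uint8 value that represents 0.0
    q = [min(255, max(0, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from quantized values."""
    return [(v - zero_point) * scale for v in q]

q, scale, zp = quantize_uint8([-1.0, 0.0, 0.5, 1.0])
print(q, zp)                 # quantized values and the zero point
print(dequantize(q, scale, zp))  # close to the original floats, within one scale step
```

The Colab notebook performs this conversion for the whole network via the TFLite converter; the sketch only shows why a representative range matters for accuracy after quantization.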

Prepare your reTerminal DM

  • Step 1 On Terminal execute these commands one by one.
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
sudo apt-get install python3-tflite-runtime
  • Step 2 Paste the data.yaml file and the best-int8.tflite file inside the yolov5 folder.
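For reference, a data.yaml exported by Roboflow for YOLOv5 typically looks like the fragment below. The paths and class names here are placeholders; your file will contain the classes of the dataset you chose:

```yaml
train: ../train/images
val: ../valid/images

nc: 2                  # number of classes
names: ['cat', 'dog']  # class names, indexed in label-file order
```

The inference script reads this file to map predicted class indices back to human-readable names.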


Inference with detect.py

  • Step 1 Open a terminal on the reTerminal DM and navigate to the yolov5 folder.
cd yolov5
  • Step 2 Run inference with detect.py.
python detect.py --weights best-int8.tflite --img 224 --source <your source> --nosave --view-img --data data.yaml

You can explore the official Ultralytics YOLOv5 GitHub repository to learn how to use the detect.py script and discover the various sources you can utilize for feeding images or video streams into the YOLOv5 model.
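Under the hood, the detection pipeline filters the model's raw predictions with non-maximum suppression (NMS): overlapping candidate boxes for the same object are collapsed to the highest-scoring one. The following is a simplified pure-Python sketch of the greedy algorithm (it ignores per-class handling and confidence thresholds that the real pipeline also applies):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: keep boxes in descending score order,
    dropping any box that overlaps an already-kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the two overlapping boxes collapse to one
```

The 0.45 default mirrors a common IoU threshold for YOLO-style detectors; raising it keeps more overlapping detections, lowering it suppresses more aggressively.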

Run on Edge TPU

The deployment of the YOLOv5n model on an Edge TPU represents a dynamic synergy between state-of-the-art object detection and high-performance edge computing. This amalgamation empowers applications in edge AI, such as real-time object recognition in resource-constrained environments, making it invaluable for tasks like security surveillance, retail analytics, and autonomous systems. YOLOv5n's efficient design harmonizes seamlessly with Edge TPU's hardware acceleration, delivering rapid and accurate object detection at the edge of the network, where low latency and real-time processing are paramount.

  • Inference with detect.py
python detect.py --weights best-int8_edgetpu.tflite --img 224 --source <your source> --nosave --view-img --data data.yaml


Tech support

Thank you for choosing our products! We are here to provide you with different support to ensure that your experience with our products is as smooth as possible. We offer several communication channels to cater to different preferences and needs.
