Getting started with StarAI Robot Arm with LeRobot

(Figures: Follower Viola, Leader Violin, Follower Cello)

Product Introduction

  1. Open-Source & Developer-Friendly: an open-source, developer-friendly 6+1 DoF robotic arm solution from Fishion Star Technology Limited.
  2. Integration with LeRobot: designed for integration with the LeRobot platform, which provides PyTorch models, datasets, and tools for imitation learning on real-world robotic tasks, including data collection, simulation, training, and deployment.
  3. Comprehensive Learning Resources: provides open-source learning resources such as assembly and calibration guides and example custom grasping tasks, helping users get started quickly and develop robotic applications.
  4. Compatible with NVIDIA: supports deployment on the reComputer Mini J4012 Orin NX 16GB platform.

Main Features

  • Ready to Go — No Assembly Required. Just Unbox and Dive into the World of AI.
  • 6+1 Degrees of Freedom and a 470mm Reach — Built for Versatility and Precision.
  • Powered by Dual Brushless Bus Servos — Smooth, Silent, and Strong with up to 300g Payload.
  • Parallel Gripper with 66mm Maximum Opening — Modular Fingertips for Quick-Replace Flexibility.
  • Exclusive Hover Lock Technology — Instantly Freeze Leader Arm at Any Position with a Single Press.

Specifications

| Item | Follower Arm (Viola) | Leader Arm (Violin) | Follower Arm (Cello) |
| --- | --- | --- | --- |
| Degrees of Freedom | 6+1 | 6+1 | 6+1 |
| Reach | 470mm | 470mm | 670mm |
| Span | 940mm | 940mm | 1340mm |
| Repeatability | 2mm | - | 1mm |
| Working Payload | 300g (at 70% reach) | - | 750g (at 70% reach) |
| Servos | RX8-U50H-M x2, RA8-U25H-M x4, RA8-U26H-M x1 | RX8-U50H-M x2, RA8-U25H-M x4, RA8-U26H-M x1 | RX18-U100H-M x3, RX8-U50H-M x3, RX8-U51H-M x1 |
| Parallel Gripper Kit | Yes | - | Yes |
| Wrist Rotate | Yes | Yes | Yes |
| Hold at any Position | Yes | Yes (with handle button) | Yes |
| Wrist Camera Mount | Provides reference 3D printing files | - | Provides reference 3D printing files |
| Works with LeRobot | Yes | Yes | Yes |
| Works with ROS 2 | Yes | Yes | Yes |
| Works with MoveIt2 | Yes | Yes | Yes |
| Works with Gazebo | | | |
| Communication Hub | UC-01 | UC-01 | UC-01 |
| Power Supply | 12V 10A / 120W XT30 | 12V 10A / 120W XT30 | 12V 25A / 300W XT60 |

For more information about the servo motors, please visit the following links:

  • RA8-U25H-M
  • RX18-U100H-M
  • RX8-U50H-M

Initial environment setup

For Ubuntu x86:

  • Ubuntu 22.04
  • CUDA 12+
  • Python 3.10
  • Torch 2.6

For Jetson Orin:

  • Jetson JetPack 6.0+
  • Python 3.10
  • Torch 2.6

Installation and Debugging

Install LeRobot

Packages such as PyTorch and torchvision need to be installed to match your CUDA version.

  1. Install Miniconda: For Jetson:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
chmod +x Miniconda3-latest-Linux-aarch64.sh
./Miniconda3-latest-Linux-aarch64.sh
source ~/.bashrc

Or, for x86 Ubuntu 22.04:

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
source ~/miniconda3/bin/activate
conda init --all
  2. Create and activate a fresh conda environment for lerobot:
conda create -y -n lerobot python=3.10 && conda activate lerobot
  3. Clone the lerobot-starai repository:
git clone https://github.com/Seeed-Projects/lerobot-starai.git ~/lerobot

Switch to the starai-arm-develop branch:

cd ~/lerobot
git checkout starai-arm-develop
  4. When using miniconda, install ffmpeg in your environment:
conda install ffmpeg -c conda-forge
tip

This usually installs ffmpeg 7.X for your platform compiled with the libsvtav1 encoder. If libsvtav1 is not supported (check supported encoders with ffmpeg -encoders), you can:

  • [On any platform] Explicitly install ffmpeg 7.X using:
conda install ffmpeg=7.1.1 -c conda-forge
  • [On Linux only] Install the ffmpeg build dependencies and compile ffmpeg from source with libsvtav1 enabled, then verify with which ffmpeg that the binary on your PATH is the one you built.
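If you prefer to run the encoder check from Python instead of the shell, here is a minimal sketch (it only inspects whatever ffmpeg binary is currently on your PATH):

import shutil
import subprocess

# Locate the active ffmpeg (should resolve inside your activated conda environment).
ffmpeg = shutil.which("ffmpeg")
print("ffmpeg binary:", ffmpeg)
if ffmpeg:
    encoders = subprocess.run([ffmpeg, "-encoders"], capture_output=True, text=True).stdout
    print("libsvtav1 supported:", "libsvtav1" in encoders)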
  5. Install LeRobot with dependencies for the starai motors:
cd ~/lerobot && pip install -e ".[starai]"
sudo apt remove brltty

For Jetson JetPack devices (please make sure the GPU builds of PyTorch and torchvision are installed before executing this step; see step 6):

conda install -y -c conda-forge "opencv>=4.10.0.84"  # Install OpenCV and other dependencies through conda, this step is only for Jetson Jetpack 6.0+
conda remove opencv # Uninstall OpenCV
pip3 install opencv-python==4.10.0.84 # Then install opencv-python via pip3
conda install -y -c conda-forge ffmpeg
conda uninstall numpy
pip3 install numpy==1.26.0 # This should match torchvision
  6. Check PyTorch and torchvision

Since installing the LeRobot environment via pip can uninstall your existing GPU builds of PyTorch and torchvision and replace them with CPU-only versions, you should verify them from Python:

import torch
print(torch.cuda.is_available())

If the printed result is False, you need to reinstall Pytorch and Torchvision according to the official website tutorial.

If you are using a Jetson device, install Pytorch and Torchvision according to this tutorial.
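For a slightly fuller check, the sketch below also prints the installed versions and the detected GPU; it assumes both torch and torchvision are already installed:

import torch
import torchvision

print("torch:", torch.__version__)             # a "+cpu" suffix indicates the CPU-only build
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))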

Unboxing the Robotic Arm

Robotic Arm Kit Includes

  • Leader arm
  • Follower arm
  • Controller (handle)
  • Parallel gripper
  • Installation tools (screws, hex wrench)
  • Power supply ×2
  • C-clamp ×2
  • UC-01 debugging board ×2

UC-01 debugging board switch:

Configure Arm Port

Enter the ~/lerobot directory:

cd ~/lerobot

Run the following command in the terminal to find the USB ports associated with your arms:

lerobot-find-port
tip

When prompted, unplug the USB cable of the arm you are identifying; the port that disappears from the list is the one associated with that arm.

For example:

  1. Example output when identifying the leader arm's port (e.g., /dev/tty.usbmodem575E0031751 on Mac, or possibly /dev/ttyUSB0 on Linux):
  2. Example output when identifying the follower arm's port (e.g., /dev/tty.usbmodem575E0032081 on Mac, or possibly /dev/ttyUSB1 on Linux):
tip

If the ttyUSB0 serial port cannot be identified, try the following solutions:

List all USB devices:

lsusb

Once identified, check the kernel messages for the ttyUSB device:

sudo dmesg | grep ttyUSB

The last line indicates a disconnection because brltty is occupying the USB. Removing brltty will resolve the issue.

sudo apt remove brltty

Finally, grant access to the USB ports by running:

sudo chmod 666 /dev/ttyUSB*

If that is not sufficient, you can grant full permissions instead:

sudo chmod 777 /dev/ttyUSB*
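You can also enumerate serial ports from Python with pyserial (pulled in by LeRobot's motor dependencies; otherwise pip install pyserial); a minimal sketch:

from serial.tools import list_ports  # pyserial

# Prints each serial device with its description, e.g. /dev/ttyUSB0.
for port in list_ports.comports():
    print(port.device, "-", port.description)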

Calibrate

For Initial Calibration

Please rotate each joint left and right to the corresponding positions.

For Re-Calibration

Follow the on-screen prompt: enter the letter "c" and press the Enter key.

Below are the reference values. Under normal circumstances, the actual limit reference values should fall within the range of ±10° of these references.

| Servo ID | Lower Angle Limit (°) | Upper Angle Limit (°) | Notes |
| --- | --- | --- | --- |
| motor_0 | -180 | 180 | Rotate to the limit position |
| motor_1 | -90 | 90 | Rotate to the limit position |
| motor_2 | -90 | 90 | Rotate to the limit position |
| motor_3 | -180 | 180 | No limit; rotate to the reference angle limits |
| motor_4 | -90 | 90 | Rotate to the limit position |
| motor_5 | -180 | 180 | No limit; rotate to the reference angle limits |
| motor_6 | | 100 | Rotate to the limit position |
tip

Taking PC (Linux) and Jetson board as examples, the first USB device inserted will be mapped to ttyUSB0, and the second USB device inserted will be mapped to ttyUSB1.

Please pay attention to the mapping interfaces of the leader and follower before running the code.

Leader Robotic Arm

Connect the leader to /dev/ttyUSB0, or modify the --teleop.port parameter, and then execute:

lerobot-calibrate \
--teleop.type=starai_violin \
--teleop.port=/dev/ttyUSB0 \
--teleop.id=my_awesome_staraiviolin_arm

Follower Robotic Arm

Connect the follower to /dev/ttyUSB1, or modify the --robot.port parameter, and then execute:

lerobot-calibrate \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm

After running the command, you need to manually move the robotic arm to allow each joint to reach its limit position. The terminal will display the recorded range data. Once this operation is completed, press Enter.

tip

The calibration files will be saved to the following paths: ~/.cache/huggingface/lerobot/calibration/robots and ~/.cache/huggingface/lerobot/calibration/teleoperators.
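The calibration files are plain JSON, so you can inspect what was recorded. A minimal sketch (the exact file names depend on the --robot.id and --teleop.id values you chose):

import json
from pathlib import Path

calib_root = Path.home() / ".cache/huggingface/lerobot/calibration"
for path in calib_root.rglob("*.json"):
    print(path)
    print(json.dumps(json.loads(path.read_text()), indent=2)[:400])  # preview the first entries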

Dual-Arm Calibration Setup


Leader Robotic Arm

Connect left_arm_port to /dev/ttyUSB0 and right_arm_port to /dev/ttyUSB2, or modify the --teleop.left_arm_port and --teleop.right_arm_port parameters, and then execute:

lerobot-calibrate \
--teleop.type=bi_starai_leader \
--teleop.left_arm_port=/dev/ttyUSB0 \
--teleop.right_arm_port=/dev/ttyUSB2 \
--teleop.id=bi_starai_leader

Follower Robotic Arm

Connect left_arm_port to /dev/ttyUSB1 and right_arm_port to /dev/ttyUSB3, or modify the --robot.left_arm_port and --robot.right_arm_port parameters, and then execute:

lerobot-calibrate \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower
tip

The difference between single-arm and dual-arm setups lies in the --teleop.type and --robot.type parameters. Additionally, dual-arm setups require separate USB ports for the left and right arms, totaling four USB ports: --teleop.left_arm_port, --teleop.right_arm_port, --robot.left_arm_port, and --robot.right_arm_port.

If using a dual-arm setup, you need to adjust the arm types --teleop.type and --robot.type, as well as the USB ports --teleop.left_arm_port, --teleop.right_arm_port, --robot.left_arm_port, and --robot.right_arm_port, in the teleoperation, data collection, training, and evaluation commands accordingly.

Teleoperate

Move the arm to the position shown in the diagram and set it to standby.

Then you are ready to teleoperate your robot (this will not display the cameras). Run this simple script:

lerobot-teleoperate \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--teleop.type=starai_violin \
--teleop.port=/dev/ttyUSB0 \
--teleop.id=my_awesome_staraiviolin_arm
Dual-Arm
lerobot-teleoperate \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower \
--teleop.type=bi_starai_leader \
--teleop.left_arm_port=/dev/ttyUSB0 \
--teleop.right_arm_port=/dev/ttyUSB2 \
--teleop.id=bi_starai_leader

The teleoperation command will automatically:

  1. Detect any missing calibrations and initiate the calibration procedure.
  2. Connect the robot and the teleoperation device and start teleoperation.

After the program starts, the Hover Lock Technology remains functional.

Add cameras

After inserting your two USB cameras, run the following script to check their port numbers. Do not connect the cameras through a USB hub; plug them directly into the device, as a hub's limited bandwidth can make image data unreadable.

lerobot-find-cameras opencv # or realsense for Intel Realsense cameras

The terminal will print out the following information. For example, the laptop camera is index 2, and the USB camera is index 4.

--- Detected Cameras ---
Camera #0:
Name: OpenCV Camera @ /dev/video2
Type: OpenCV
Id: /dev/video2
Backend api: V4L2
Default stream profile:
Format: 0.0
Width: 640
Height: 480
Fps: 30.0
--------------------
Camera #1:
Name: OpenCV Camera @ /dev/video4
Type: OpenCV
Id: /dev/video4
Backend api: V4L2
Default stream profile:
Format: 0.0
Width: 640
Height: 360
Fps: 30.0
--------------------

Finalizing image saving...
Image capture finished. Images saved to outputs/captured_images

You can find the images captured by each camera in the outputs/captured_images directory and use them to verify which index corresponds to the camera at each position.
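To double-check a camera before wiring it into the teleoperation command, you can grab a single frame with OpenCV. A minimal sketch, using /dev/video2 as an example path (replace it with your own index or path):

import cv2

cap = cv2.VideoCapture("/dev/video2")    # on Linux, an integer index such as 2 also works
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)  # request the resolution you plan to record at
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
ok, frame = cap.read()
print("frame received:", ok, "shape:", frame.shape if ok else None)
cap.release()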

After confirming the external cameras, replace the camera information below with your actual camera information, and you will be able to display the cameras on your computer during remote operation:

lerobot-teleoperate \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--teleop.type=starai_violin \
--teleop.port=/dev/ttyUSB0 \
--teleop.id=my_awesome_staraiviolin_arm \
--display_data=true

Dual-Arm
lerobot-teleoperate \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--teleop.type=bi_starai_leader \
--teleop.left_arm_port=/dev/ttyUSB0 \
--teleop.right_arm_port=/dev/ttyUSB2 \
--teleop.id=bi_starai_leader \
--display_data=true
tip

If you encounter an error from the rerun visualizer at this point, you can downgrade the rerun version to resolve the issue:

pip3 install rerun-sdk==0.23

Record the dataset

Once you're familiar with teleoperation, you can record your first dataset.

If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the Hugging Face settings:

huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential

Store your Hugging Face repository name in a variable to run these commands:

HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
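The same check can be done from Python with huggingface_hub, which is installed alongside LeRobot; a minimal sketch:

from huggingface_hub import whoami

# Requires a prior `huggingface-cli login`; prints the username that $HF_USER holds above.
print(whoami()["name"])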

Record 10 episodes and upload your dataset to the hub:

lerobot-record \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--teleop.type=starai_violin \
--teleop.port=/dev/ttyUSB0 \
--teleop.id=my_awesome_staraiviolin_arm \
--display_data=true \
--dataset.repo_id=${HF_USER}/record-test \
--dataset.episode_time_s=30 \
--dataset.reset_time_s=30 \
--dataset.num_episodes=10 \
--dataset.push_to_hub=True \
--dataset.single_task="Grab the black cube"
Dual-Arm
lerobot-record \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower \
--teleop.type=bi_starai_leader \
--teleop.left_arm_port=/dev/ttyUSB0 \
--teleop.right_arm_port=/dev/ttyUSB2 \
--teleop.id=bi_starai_leader \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--display_data=true \
--dataset.repo_id=${HF_USER}/record-test_bi_arm \
--dataset.episode_time_s=30 \
--dataset.reset_time_s=30 \
--dataset.num_episodes=10 \
--dataset.push_to_hub=True \
--dataset.single_task="Grab the black cube"
tip

To differentiate between single-arm and dual-arm setups, the dual-arm --dataset.repo_id is suffixed with _bi_arm, e.g., record-test_bi_arm.

tip

If you do not want to use the Hugging Face Hub dataset upload feature, set --dataset.push_to_hub=False. Also, replace --dataset.repo_id=${HF_USER}/record-test with a custom local name, for example, --dataset.repo_id=starai/record-test. The data will be stored in ~/.cache/huggingface/lerobot under the system's home directory.

Not uploading to the Hub (recommended; the following tutorial steps work with local data):

lerobot-record \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--teleop.type=starai_violin \
--teleop.port=/dev/ttyUSB0 \
--teleop.id=my_awesome_staraiviolin_arm \
--display_data=true \
--dataset.repo_id=starai/record-test \
--dataset.episode_time_s=30 \
--dataset.reset_time_s=30 \
--dataset.num_episodes=10 \
--dataset.push_to_hub=False \
--dataset.single_task="Grab the black cube"
Dual-Arm
lerobot-record \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower \
--teleop.type=bi_starai_leader \
--teleop.left_arm_port=/dev/ttyUSB0 \
--teleop.right_arm_port=/dev/ttyUSB2 \
--teleop.id=bi_starai_leader \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--display_data=true \
--dataset.repo_id=starai/record-test_bi_arm \
--dataset.episode_time_s=30 \
--dataset.reset_time_s=30 \
--dataset.num_episodes=10 \
--dataset.push_to_hub=False \
--dataset.single_task="Grab the black cube"

  • The record command provides a set of tools for capturing and managing data during robot operation:

1. Data Storage

  • Data is stored in the LeRobotDataset format and saved to disk during the recording process.

2. Checkpointing and Resuming

  • Checkpoints are automatically created during recording.
  • If an issue occurs, you can resume by rerunning the same command with --resume=true. When resuming recording, you must set --dataset.num_episodes to the additional number of episodes to record, not the target total number of episodes in the dataset!
  • To start recording from scratch, manually delete the dataset directory.

3. Recording Parameters

Set the data recording workflow using command-line parameters:

Parameter Description
- warmup-time-s: initialization time before recording starts.
- episode-time-s: duration of each data-collection episode.
- reset-time-s: preparation time between episodes.
- num-episodes: the number of episodes to collect.
- push-to-hub: whether to upload the data to the Hugging Face Hub.

4. Keyboard Controls During Recording

Use keyboard shortcuts to control the data recording workflow:

  • Press right arrow key (→): End the current episode (or the reset period) early and move on to the next one.
  • Press left arrow key (←): Cancel the current episode and re-record it.
  • Press ESC: Immediately stop the session, encode the video, and upload the dataset.
tip

On Linux, if the left and right arrow keys and the escape key have no effect during data recording, ensure that the $DISPLAY environment variable is set. See pynput limitations.

Once you are familiar with data recording, you can create a larger dataset for training. A good starting task is to grasp an object at different positions and place it in a small box. We recommend recording at least 50 episodes, with 10 episodes per location. Keep the camera fixed and maintain consistent grasping behavior throughout the recording. Also, ensure that the object you are manipulating is visible in the camera. A good rule of thumb is that you should be able to complete the task by looking only at the camera image.
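Before training on a dataset, it can be worth confirming that the recording actually landed on disk. A minimal sketch listing the local dataset directory (the exact sub-layout may differ between LeRobot versions; adjust the repo id to the one you recorded):

from pathlib import Path

# Local datasets live under ~/.cache/huggingface/lerobot/<repo_id>.
root = Path.home() / ".cache/huggingface/lerobot" / "starai/record-test"
for path in sorted(root.rglob("*"))[:20]:  # preview the first few entries
    print(path.relative_to(root))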

Replay an episode

Now try to replay the first episode on your robot:

lerobot-replay \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--dataset.repo_id=starai/record-test \
--dataset.episode=0 # choose the episode you want to replay
Dual-Arm
lerobot-replay \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.id=bi_starai_follower \
--dataset.repo_id=starai/record-test_bi_arm \
--dataset.episode=0 # choose the episode you want to replay

Train policy

To train a policy to control your robot, here is an example command:

lerobot-train \
--dataset.repo_id=starai/record-test \
--policy.type=act \
--output_dir=outputs/train/act_viola_test \
--job_name=act_viola_test \
--policy.device=cuda \
--wandb.enable=False \
--policy.repo_id=starai/my_policy \
--steps=200000
Dual-Arm
lerobot-train \
--dataset.repo_id=starai/record-test_bi_arm \
--policy.type=act \
--output_dir=outputs/train/act_bi_viola_test \
--job_name=act_bi_viola_test \
--policy.device=cuda \
--wandb.enable=False \
--policy.repo_id=starai/my_policy \
--steps=200000
  1. --policy.type=act selects the policy; diffusion, pi0, and pi0fast are also supported.
  2. We pass the dataset as a parameter: dataset.repo_id=starai/record-test.
  3. The configuration will be loaded from configuration_act.py. Importantly, this policy automatically adapts to the number of motor states, motor actions, and cameras of your robot, which are saved in your dataset.
  4. Set wandb.enable=true to use Weights and Biases for visualizing training charts. This is optional, but if you use it, make sure you have logged in by running wandb login.

Resume training from a specific checkpoint:

lerobot-train \
--config_path=outputs/train/act_bi_viola_test/checkpoints/last/pretrained_model/train_config.json \
--resume=true \
--steps=400000
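To see which checkpoints are available to resume from, you can list the training output directory from Python; a minimal sketch (assuming the dual-arm output path used above; adjust it to your own --output_dir):

from pathlib import Path

# Checkpoints are written under <output_dir>/checkpoints; "last" points at the newest one.
ckpt_dir = Path("outputs/train/act_bi_viola_test/checkpoints")
for ckpt in sorted(ckpt_dir.iterdir()):
    print(ckpt.name)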
To train a SmolVLA policy, first install the SmolVLA dependencies:

pip install -e ".[smolvla]"

Training

lerobot-train \
--policy.path=lerobot/smolvla_base \ # <- start from the pretrained SmolVLA base model
--dataset.repo_id=${HF_USER}/mydataset \
--batch_size=64 \
--steps=20000 \
--output_dir=outputs/train/my_smolvla \
--job_name=my_smolvla_training \
--policy.device=cuda \
--wandb.enable=true

Evaluate

lerobot-record \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.id=my_awesome_staraiviola_arm \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--dataset.single_task="Grasp a lego block and put it in the bin." \ # <- Use the same task description you used in your dataset recording
--dataset.repo_id=${HF_USER}/eval_DATASET_NAME_test \
--dataset.episode_time_s=50 \
--dataset.num_episodes=10 \
# <- Teleop optional if you want to teleoperate in between episodes \
# --teleop.type=starai_violin \
# --teleop.port=/dev/ttyUSB0 \
# --teleop.id=my_awesome_staraiviolin_arm \
--policy.path=${HF_USER}/FINETUNE_MODEL_NAME # <- Use your fine-tuned model
Training a policy with LIBERO:

LIBERO is a benchmark designed to study lifelong robot learning. The idea is that robots won’t just be pretrained once in a factory, they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called lifelong learning in decision making (LLDM), and it’s a key step toward building robots that become truly personalized helpers.

LIBERO includes five task suites:

  • LIBERO-Spatial (libero_spatial) – tasks that require reasoning about spatial relations.

  • LIBERO-Object (libero_object) – tasks centered on manipulating different objects.

  • LIBERO-Goal (libero_goal) – goal-conditioned tasks where the robot must adapt to changing targets.

  • LIBERO-90 (libero_90) – 90 short-horizon tasks from the LIBERO-100 collection.

  • LIBERO-Long (libero_10) – 10 long-horizon tasks from the LIBERO-100 collection.

Together, these suites cover 130 tasks, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.

Training with LIBERO

lerobot-train \
--policy.type=smolvla \
--policy.repo_id=${HF_USER}/libero-test \
--dataset.repo_id=HuggingFaceVLA/libero \
--env.type=libero \
--env.task=libero_10 \
--output_dir=./outputs/ \
--steps=100000 \
--batch_size=4 \
--eval.batch_size=1 \
--eval.n_episodes=1 \
--eval_freq=1000

Evaluating with LIBERO

To install LIBERO, after following the official LeRobot installation instructions, run:

pip install -e ".[libero]"

Single-suite evaluation

lerobot-eval \
--policy.path="your-policy-id" \
--env.type=libero \
--env.task=libero_object \
--eval.batch_size=2 \
--eval.n_episodes=3
  • --env.task picks the suite (libero_object, libero_spatial, etc.).

  • --eval.batch_size controls how many environments run in parallel.

  • --eval.n_episodes sets how many episodes to run in total.

Multi-suite evaluation

lerobot-eval \
--policy.path="your-policy-id" \
--env.type=libero \
--env.task=libero_object,libero_spatial \
--eval.batch_size=1 \
--eval.n_episodes=2
  • Pass a comma-separated list to --env.task for multi-suite evaluation.

Evaluate your policy

Run the following command to record 10 evaluation episodes:

lerobot-record  \
--robot.type=starai_viola \
--robot.port=/dev/ttyUSB1 \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--robot.id=my_awesome_staraiviola_arm \
--display_data=false \
--dataset.repo_id=starai/eval_record-test \
--dataset.single_task="Put lego brick into the transparent box" \
--policy.path=outputs/train/act_viola_test/checkpoints/last/pretrained_model
# <- Teleop optional if you want to teleoperate in between episodes \
# --teleop.type=starai_violin \
# --teleop.port=/dev/ttyUSB0 \
# --teleop.id=my_awesome_leader_arm \
Dual-Arm
lerobot-record  \
--robot.type=bi_starai_follower \
--robot.left_arm_port=/dev/ttyUSB1 \
--robot.right_arm_port=/dev/ttyUSB3 \
--robot.cameras="{ up: {type: opencv, index_or_path: /dev/video2, width: 1280, height: 720, fps: 30},front: {type: opencv, index_or_path: /dev/video4, width: 1280, height: 720, fps: 30}}" \
--robot.id=bi_starai_follower \
--display_data=false \
--dataset.repo_id=starai/eval_record-test_bi_arm \
--dataset.single_task="test" \
--policy.path=outputs/train/act_bi_viola_test/checkpoints/last/pretrained_model

As you can see, this is almost the same as the command previously used to record the training dataset, with a few changes:

  1. The --policy.path parameter, which indicates the path to your trained policy weights file (for example, outputs/train/act_viola_test/checkpoints/last/pretrained_model). If you have uploaded your model weights to the Hub, you can also use the model repository (for example, ${HF_USER}/starai).

  2. The name of the evaluation dataset dataset.repo_id starts with eval_. This operation will record videos and data specifically for the evaluation phase, which will be saved in a folder starting with eval_, such as starai/eval_record-test.

  3. If you encounter File exists: 'home/xxxx/.cache/huggingface/lerobot/xxxxx/starai/eval_xxxx' during the evaluation phase, please delete the folder starting with eval_ and run the program again.

  4. If you encounter mean is infinity. You should either initialize with stats as an argument or use a pretrained model, ensure that the camera keys such as up and front in the --robot.cameras parameter are strictly consistent with those used during the data collection phase.

FAQ

  • If you are using the tutorial in this document, please git clone the recommended GitHub repository: https://github.com/servodevelop/lerobot.git.

  • If teleoperation works normally but teleoperation with a Camera does not display the image interface, please refer to here.

  • If you encounter a libtiff issue during dataset teleoperation, please update the libtiff version.

    conda install libtiff==4.5.0  # for Ubuntu 22.04, use libtiff==4.5.1
  • Installing LeRobot may automatically uninstall the GPU version of PyTorch, so you need to manually reinstall the GPU build of PyTorch.

  • For Jetson, please first install PyTorch and Torchvision before running conda install -y -c conda-forge ffmpeg, otherwise, there will be a version mismatch issue when compiling torchvision.

  • Training 50 episodes of ACT data on a 3060 8GB laptop takes approximately 6 hours, while training 50 episodes on a 4090 or A100 computer takes about 2-3 hours.

  • During data collection, ensure the stability of the camera position and angle, as well as the environmental lighting, and minimize the unstable background and pedestrians captured by the camera. Otherwise, significant changes in the deployment environment may cause the robotic arm to fail to grasp objects normally.

  • Set num-episodes in the data collection command so that enough data is collected in a single run, and do not pause collection midway: the mean and variance of the data are computed only after collection completes, and both are required for training.

  • If the program prompts that it cannot read the USB camera image data, please ensure that the USB camera is not connected through a Hub. The USB camera must be directly connected to the device to ensure fast image transmission rates.

Citation

StarAI Robot Arm ROS2 Moveit2: star-arm-moveit2

lerobot-starai github: lerobot-starai

STEP: STEP

URDF: URDF

Huggingface Project: Lerobot

ACT or ALOHA: Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware

VQ-BeT: VQ-BeT: Behavior Generation with Latent Actions

Diffusion Policy: Diffusion Policy

TD-MPC: TD-MPC

Tech Support & Product Discussion

Thank you for choosing our products! We are here to provide you with different support to ensure that your experience with our products is as smooth as possible. We offer several communication channels to cater to different preferences and needs.
