LeRobot Quickstart Guide

Get started with Hugging Face LeRobot — install, connect a robot arm, record 50 demonstrations, and train your first manipulation policy. All in under 2 hours.

Level: Beginner–Intermediate · Time: 1–2 hours · Updated April 2026

Prerequisites

  • A supported robot arm (see step 2 for the full list)
  • Ubuntu 22.04 or 24.04 (macOS works for some features)
  • Python 3.10+
  • NVIDIA GPU with 8+ GB VRAM (for training; not needed for recording)
  • USB webcam or RealSense camera

What you will build

In this quickstart, you will install LeRobot, connect and calibrate a robot arm, record 50 demonstration episodes of a pick-and-place task, train an ACT (Action Chunking with Transformers) policy, run it on your robot, and share your dataset to HuggingFace Hub. By the end, your robot will autonomously pick up objects and place them — learned entirely from your demonstrations.

1. Install LeRobot

LeRobot installs as a Python package. We recommend using a virtual environment to keep things clean.

```
# Create a virtual environment
python3 -m venv ~/lerobot_env
source ~/lerobot_env/bin/activate

# Install LeRobot — the base package
pip install lerobot

# Or install with all hardware extras
pip install "lerobot[all]"

# Or install with specific hardware support
pip install "lerobot[dynamixel]"   # for SO-100, Koch, OpenArm
pip install "lerobot[realsense]"   # for Intel RealSense cameras

# Verify it works
lerobot --version
```

LeRobot v0.8.1
Quick check: If you see a version number, you are good to go. If you get an error about missing libusb or serial ports, install system dependencies: sudo apt install -y libusb-1.0-0-dev libudev-dev
2. Supported Hardware

LeRobot works with a growing ecosystem of affordable robot arms. Here are the most popular options:

SO-100

6-DOF desktop arm, Dynamixel servos. The most popular LeRobot arm.

~$200 (DIY kit)

OpenArm

6-DOF arm designed for data collection. Built-in wrist camera mount.

$2,400

Koch v1.1

5-DOF low-cost arm. Great for learning, limited for complex tasks.

~$150 (DIY)

Aloha

Dual-arm bimanual setup. The gold standard for dexterous manipulation.

$20,000+

Moss

Lightweight 6-DOF arm with integrated controller.

~$500

WidowX 250

Professional-grade research arm from Trossen Robotics.

$3,500

Need a robot arm for LeRobot?

OpenArm is designed from the ground up for data collection — pre-calibrated, wrist camera mount, leader-follower ready. Ships in 3–5 business days.

3. Connect and Calibrate Your Arm

Plug in your robot arm via USB and run the calibration routine. This establishes communication and maps the physical joint limits.

```
# Find your robot's serial port
ls /dev/ttyUSB* /dev/ttyACM*

# Set permissions (only needed once, or add a udev rule)
sudo chmod 666 /dev/ttyUSB0

# Run calibration
lerobot calibrate \
  --robot-type=so100 \
  --port=/dev/ttyUSB0
```

The calibration prompts you to move the arm through its range of motion:

```
Calibrating SO-100 on /dev/ttyUSB0...
Move joint 1 (base) to its minimum position, then press Enter.
Move joint 1 (base) to its maximum position, then press Enter.
...
Calibration complete! Saved to ~/.lerobot/calibration/so100.json
```
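Those recorded min/max values are what let software turn raw servo readings into meaningful joint positions. Here is a minimal sketch of that mapping — the dictionary layout and tick values are illustrative assumptions, not the actual contents of LeRobot's calibration file:

```python
# Hypothetical sketch: using per-joint calibrated min/max to normalize raw
# Dynamixel tick readings into [-1, 1]. Structure and numbers are made up.
calibration = {
    "base": {"min": 1024, "max": 3072},      # raw ticks at the joint limits
    "gripper": {"min": 1900, "max": 2400},
}

def normalize(joint: str, raw: int) -> float:
    """Map a raw servo reading to [-1, 1] using the calibrated limits."""
    lo, hi = calibration[joint]["min"], calibration[joint]["max"]
    return 2.0 * (raw - lo) / (hi - lo) - 1.0

print(normalize("base", 2048))  # midpoint of the range -> 0.0
```

Without this step, the same tick count could mean very different physical poses on two arms, which is why calibration must be rerun after reassembling servos.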

If you have a leader arm for teleoperation, calibrate it too:

```
lerobot calibrate \
  --robot-type=so100 \
  --port=/dev/ttyUSB0 \
  --leader-port=/dev/ttyUSB1
```
Important: If calibration hangs, check that no other program is using the serial port (close any Arduino IDE, screen sessions, etc.). Also verify the baud rate matches your servos — Dynamixel defaults to 1000000.
4. Record 50 Demo Episodes

Time to teach your robot. Set up a simple task — we recommend "pick up a block and place it in a bowl" as a first task. Then record 50 demonstrations.

```
# Start recording with leader-arm teleoperation
lerobot record \
  --robot-type=so100 \
  --port=/dev/ttyUSB0 \
  --leader-port=/dev/ttyUSB1 \
  --fps=30 \
  --task="pick_block_place_bowl" \
  --num-episodes=50 \
  --output-dir=~/datasets/my_first_dataset

# Keyboard controls:
#   Enter — start/stop an episode
#   Space — pause/resume
#   r     — redo (discard and restart current episode)
#   q     — quit and save all completed episodes
```

Tips for recording good demonstrations:

  • Be consistent: Start from the same home position each episode
  • Move smoothly: Avoid jerky or rushed movements
  • Vary positions: Place the block in slightly different spots each time
  • Succeed every time: Only keep successful demonstrations
  • Take breaks: Fatigue leads to sloppy demos — record in batches of 10–15
50 episodes is enough for a simple pick-and-place task with an ACT policy. The whole recording session takes about 20–30 minutes; each episode is typically 5–15 seconds long.
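To set expectations for dataset size, a quick back-of-the-envelope calculation using the numbers above (30 fps, ~5–15 s per episode, so ~10 s on average) — plain arithmetic, no robot or LeRobot install needed:

```python
# Rough estimate of what a 50-episode recording session produces.
fps = 30
num_episodes = 50
avg_episode_s = 10  # midpoint of the 5-15 s range quoted above

frames = num_episodes * avg_episode_s * fps
minutes = num_episodes * avg_episode_s / 60

print(f"~{frames} frames total")            # ~15000 frames
print(f"~{minutes:.1f} min of robot time")  # ~8.3 min
```

Fifteen thousand camera frames plus joint states is a small dataset by deep-learning standards, which is why policies like ACT that are sample-efficient matter here.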
5. Visualize Your Dataset

Before training, inspect your recorded episodes to make sure they look correct.

```
# Launch the built-in visualizer
lerobot visualize-dataset \
  --dataset-path=~/datasets/my_first_dataset

# Or view a specific episode
lerobot visualize-dataset \
  --dataset-path=~/datasets/my_first_dataset \
  --episode=0
```

This opens a browser window showing camera views alongside joint position plots. Check for:

  • Camera images are clear (not black, not frozen)
  • Joint trajectories are smooth (no sudden spikes)
  • Episodes start and end cleanly
  • The task is completed successfully in each episode
```
# Quick stats about your dataset
python3 -c "
from lerobot.common.datasets import LeRobotDataset
ds = LeRobotDataset('~/datasets/my_first_dataset')
print(f'Episodes: {ds.num_episodes}')
print(f'Total frames: {len(ds)}')
print(f'FPS: {ds.fps}')
print(f'Keys: {list(ds[0].keys())}')
"
```
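If you prefer an automated version of the "no sudden spikes" check, here is a minimal sketch that flags large frame-to-frame jumps in a joint trajectory. In practice you would load positions from your dataset; the trajectory and threshold below are hand-made examples:

```python
# Flag indices where a joint jumped more than `max_step` between consecutive
# frames — a crude but effective glitch detector for recorded trajectories.
def find_spikes(positions, max_step=0.5):
    """Return frame indices whose step from the previous frame exceeds max_step."""
    return [i for i in range(1, len(positions))
            if abs(positions[i] - positions[i - 1]) > max_step]

trajectory = [0.0, 0.1, 0.2, 1.5, 0.3, 0.35]  # one obvious glitch at index 3
print(find_spikes(trajectory))  # [3, 4] — the jump in and the jump back out
```

A spike usually means a dropped serial packet or a bumped leader arm; episodes containing them are good candidates for re-recording.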
6. Train an ACT Policy

Now the fun part — train a neural network policy on your demonstrations. We will use ACT (Action Chunking with Transformers), which works great for manipulation tasks and trains fast.
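The "chunking" in ACT means the policy predicts a short sequence of future actions from one observation, and overlapping predictions for the same timestep are blended at inference time (temporal ensembling). Here is a toy sketch of that blending step — illustrative only, not the LeRobot implementation, with the weighting constant `m` an assumed example value:

```python
# Conceptual sketch of ACT-style temporal ensembling: several action chunks
# overlap on the same timestep; blend them with exponential weights.
import math

def ensemble(predictions, m=0.1):
    """predictions: actions predicted for the SAME timestep, oldest first.
    Weight w_i = exp(-m * i) favours older predictions, as in the ACT paper."""
    weights = [math.exp(-m * i) for i in range(len(predictions))]
    return sum(w * a for w, a in zip(weights, predictions)) / sum(weights)

# If three chunks agree on the action for timestep t, blending changes nothing:
print(ensemble([1.0, 1.0, 1.0]))  # 1.0
```

The practical upshot: chunking lets the robot act smoothly at 30 fps without running the full network every single frame.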

```
# Train an ACT policy on your dataset
lerobot train \
  --policy=act \
  --dataset-path=~/datasets/my_first_dataset \
  --output-dir=~/policies/my_first_policy \
  --num-epochs=100 \
  --batch-size=8 \
  --lr=1e-4

# Training output:
# Epoch 1/100   | Loss: 0.4521 | Time: 12s
# Epoch 10/100  | Loss: 0.1234 | Time: 11s
# Epoch 50/100  | Loss: 0.0312 | Time: 11s
# Epoch 100/100 | Loss: 0.0187 | Time: 11s
# Training complete! Policy saved to ~/policies/my_first_policy
```

Training time depends on your GPU and dataset size:

  • RTX 3060 (12 GB): ~30 minutes for 100 epochs on 50 episodes
  • RTX 4090 (24 GB): ~10 minutes
  • CPU only: ~3 hours (not recommended)
When to stop training: Watch the loss curve. If loss has not decreased for 20+ epochs, you can stop early. Overfitting is less of a concern with small datasets and ACT — the policy generalizes well even when training loss is very low.
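The stopping rule above is easy to script if you are logging losses yourself. A minimal sketch (the loss list stands in for your real training history):

```python
# Patience-based early stopping: stop once the best loss is older than
# `patience` epochs — the "no improvement for 20+ epochs" rule.
def should_stop(loss_history, patience=20):
    """True once the lowest loss seen is more than `patience` epochs old."""
    if len(loss_history) <= patience:
        return False
    best_epoch = min(range(len(loss_history)), key=lambda i: loss_history[i])
    return len(loss_history) - 1 - best_epoch >= patience

print(should_stop([1.0, 0.9, 0.8]))  # False — still improving
```

This is the standard patience heuristic; the only judgment call is the patience value, and 20 epochs matches the guidance above.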
7. Evaluate Your Policy

Run the trained policy on your robot and see how it performs. Make sure your task setup matches the training environment.

```
# Run policy on robot
lerobot eval \
  --policy-path=~/policies/my_first_policy \
  --robot-type=so100 \
  --port=/dev/ttyUSB0 \
  --num-rollouts=10

# Output:
# Rollout 1/10: SUCCESS
# Rollout 2/10: SUCCESS
# Rollout 3/10: FAILURE — gripper missed object
# Rollout 4/10: SUCCESS
# ...
# Success rate: 7/10 (70%)
# Average episode length: 8.3s
```

A 70% success rate on your first attempt is a solid result. To improve:

  • Collect more demos for the specific failure cases (object at edge, different orientation)
  • Increase training epochs to 200 if loss was still decreasing
  • Try Diffusion Policy for tasks where the robot takes different paths to the same goal
```
# Want to try Diffusion Policy instead?
lerobot train \
  --policy=diffusion \
  --dataset-path=~/datasets/my_first_dataset \
  --output-dir=~/policies/my_first_diffusion \
  --num-epochs=200
```
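Keep sample size in mind when comparing policies: 10 rollouts gives a noisy estimate of the true success rate. A quick binomial standard-error calculation (plain math, nothing robot-specific) shows how wide the error bar is on a 7/10 result:

```python
# Binomial standard error of an observed success rate.
import math

def success_rate_se(successes, trials):
    """Return (rate, standard error) for a run of Bernoulli trials."""
    p = successes / trials
    return p, math.sqrt(p * (1 - p) / trials)

p, se = success_rate_se(7, 10)
print(f"{p:.0%} ± {se:.0%}")  # 70% ± 14%
```

With an error bar that wide, a jump from 70% to 80% over 10 rollouts may be pure luck; run more rollouts before concluding one policy beats another.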
8. Share to HuggingFace Hub

Push your dataset and trained policy to HuggingFace Hub so others can use it — or so you can access it from other machines.

```
# Log in to Hugging Face (one-time setup)
huggingface-cli login

# Push your dataset
lerobot push-to-hub \
  --dataset-path=~/datasets/my_first_dataset \
  --repo-id="your-username/my-first-lerobot-dataset"

# Push your trained policy
lerobot push-to-hub \
  --policy-path=~/policies/my_first_policy \
  --repo-id="your-username/my-first-act-policy"
```

Your dataset is now available at https://huggingface.co/datasets/your-username/my-first-lerobot-dataset and your policy at https://huggingface.co/your-username/my-first-act-policy.

Congratulations — you did it.

You installed LeRobot, connected a robot, recorded demonstrations, trained a policy, and deployed it. This is the foundation of modern robot learning. From here, you can scale up to larger datasets, more complex tasks, and VLA models. Check out the tutorials below to go deeper.

Troubleshooting

Serial port permission denied

Run sudo chmod 666 /dev/ttyUSB0 or add your user to the dialout group: sudo usermod -aG dialout $USER and log out/in. For a permanent fix, create a udev rule.
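A permanent udev rule might look like the following — the vendor/product IDs shown are examples for a common FTDI adapter and are an assumption; read your adapter's actual IDs from `lsusb` first:

```
# /etc/udev/rules.d/99-lerobot.rules (example — substitute your adapter's IDs)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6014", MODE="0666"
```

After saving the file, reload the rules with `sudo udevadm control --reload-rules && sudo udevadm trigger` and replug the arm.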

Robot arm not detected

Check USB connection. Run ls /dev/ttyUSB* to see available ports. Try a different USB port or cable. Some USB hubs cause issues — connect directly to the computer.

Training loss not decreasing

Check that your dataset is not empty or corrupt: lerobot visualize-dataset --dataset-path=~/datasets/my_first_dataset. Try reducing learning rate to 5e-5. Make sure episodes contain actual movement (not just the arm sitting still).

Policy runs but robot does not move

Verify the robot is in the correct control mode. Check that the action dimensions match your robot (6 joints + gripper = 7D). Run lerobot eval --verbose to see the raw actions being sent.
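A dimension mismatch is easy to catch before actions ever reach the servos. Here is an illustrative sanity check for the "6 joints + gripper = 7D" point — the function and constant names are made up, not LeRobot APIs:

```python
# Guard against policy/robot config mismatches by validating action length
# before sending commands to the arm.
EXPECTED_ACTION_DIM = 7  # 6 joint targets + 1 gripper command for SO-100

def check_action(action):
    """Raise if the action vector does not match the robot's expected shape."""
    if len(action) != EXPECTED_ACTION_DIM:
        raise ValueError(
            f"Expected {EXPECTED_ACTION_DIM}-D action, got {len(action)}-D: "
            "policy and robot configs may disagree.")
    return True

print(check_action([0.0] * 7))  # True — safe to send
```

A silent mismatch (e.g. a policy trained on a 6-D dataset driving a 7-D robot) often shows up exactly as "runs but does not move", so failing loudly here saves debugging time.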

Camera feed is black in recordings

Check the camera index with v4l2-ctl --list-devices. Make sure no other application is using the camera. Verify with python3 -c "import cv2; cap = cv2.VideoCapture(0); print(cap.read()[0])" — should print True.

Frequently Asked Questions

Which robot arms does LeRobot support?

LeRobot supports SO-100, Koch v1.1, Aloha, Moss, OpenArm, WidowX, and UR series robots. The framework is extensible — you can add support for any robot by writing a simple Python driver that implements the LeRobot Robot interface.

How many demonstrations do I need?

For simple tasks like pick-and-place with an ACT policy, 50 demonstrations is enough to get a working policy. For more complex tasks, aim for 100–200. The key is quality over quantity — 50 clean demos outperform 200 sloppy ones.

What is the difference between ACT and Diffusion Policy?

ACT (Action Chunking with Transformers) predicts a sequence of future actions from a single observation, making it fast and effective for manipulation tasks. Diffusion Policy uses a denoising diffusion process to generate actions and often handles multi-modal behavior better. ACT trains faster; Diffusion Policy is more robust to ambiguous situations. Start with ACT for your first project.

Can I try LeRobot without a physical robot?

Yes. LeRobot includes simulation environments and example datasets you can use to learn the framework. You can train policies on community datasets from HuggingFace Hub and evaluate in simulation before deploying on hardware.

What GPU do I need for training?

A GPU with at least 8 GB VRAM is strongly recommended. ACT policy training on 50 episodes takes about 30 minutes on an RTX 3060 and 10 minutes on an RTX 4090. CPU training is possible but takes 5–10x longer, making iteration painfully slow.

