Teleoperation Platform

From hardware to trained policy. Control any robot, record demonstrations, and train AI models — all from one platform.

17+ Robots · 4 Input Methods · Real-Time Control · Automatic Data Recording

Three Tiers — Pick the Level That Fits

Whether you want raw hardware, a recording-ready kit, or a fully managed data-to-policy pipeline, we have a tier for you.

Self-Service

Free with hardware purchase

Hardware Cost Only
  • Buy hardware from us
  • Get SDK (rc connect) free
  • Set up your own teleop and recording
  • Best for: teams with ML experience
Browse Hardware
DataKit (Most Popular)

Recording-ready out of the box

$2,500 one-time setup
  • Everything in Self-Service, plus:
  • Camera synchronization configured
  • Multi-stream recording ready
  • Quality scoring enabled
  • LeRobot / OpenPI export
  • 10 sample episodes included

Best for: teams that want to start collecting data immediately

Get DataKit

Full Platform

End-to-end data-to-policy

$5,000–$15,000 setup
+ $2,000–$3,000/month
  • Everything in DataKit, plus:
  • Cloud data upload
  • Automatic quality scoring on every episode
  • Model training (ACT, Diffusion Policy, VLA)
  • Deployment back to robot
  • Dedicated support

Best for: teams building production robot AI

Contact Sales

Supported Hardware

We support the broadest range of robots, hands, and input devices in the industry. Buy or rent from us, or bring your own.

Robot Arms

AgileX Piper

6-DOF — from $3,000

AgileX NERO

7-DOF — from $8,000

OpenArm 101

8-DOF — from $4,500 (we manufacture)

Mobile ALOHA

Bimanual — from $35,000 (we manufacture)

Dobot CR5/CR10

from $8,500

Universal Robots UR5/UR10

Industrial collaborative

Flexiv Rizon

Adaptive force control

UFactory xArm

Cost-effective 6/7-DOF

Dexterous Hands

Wuji Hand

20-DOF, tactile sensing

BrainCo Revo2

6-finger dexterous

Inspire Hand

Research-grade manipulation

Sharpa Hand

Anthropomorphic design

LinkerBot O6

Compact 6-finger

Input Devices

Meta Quest 3 VR

Full 6-DOF hand tracking

Juqiao Glove (RC G1 Tactile)

Electronic skin feedback

Manus VR Gloves Pro

Haptic feedback

Keyboard / Gamepad

Basic control

Humanoids

Unitree G1

Compact humanoid

Unitree H1

Full-size humanoid

Booster K1

Research humanoid

Booster T1

Task-focused humanoid

See the full hardware catalog for specs and availability. Leasing available on all platforms.

How It Works

Four steps from zero to a trained policy running on your robot.

1

Buy or Rent Hardware

Pick a robot arm, dexterous hand, or humanoid from our store. Hardware ships in days. Leasing available for teams that want to try before they commit.

2

Install the SDK

pip install centeros-cli && centeros connect
One command connects your robot, cameras, and input device. Auto-discovers hardware on the network.

3

Teleoperate and Record Demonstrations

Control the robot with VR, gloves, gamepad, or a leader arm. Every session is automatically recorded with synchronized video, joint states, and actions.

4

Train Model and Deploy Back to Robot

Upload your data, select a training recipe (ACT, Diffusion Policy, VLA), and get a trained policy. Deploy it back to the same robot with one click.

Data Pipeline

Raw teleop recordings become deployment-ready policies through a six-stage pipeline.

Record → Quality Score → Clean → Export → Train → Deploy
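The six stages can be sketched as a linear pipeline. In this minimal Python illustration only the stage order comes from this page; the function bodies, episode IDs, and quality threshold are placeholders, not the platform's actual API:

```python
# Six-stage pipeline sketch: each stage is a function, chained in order.
# Everything below the signatures is illustrative placeholder logic.

def record():
    return ["ep_001", "ep_002", "ep_003"]          # raw teleop episodes

def quality_score(episodes):
    # Pretend ep_002 contained a stall and scored poorly.
    return {e: (0.9 if e != "ep_002" else 0.3) for e in episodes}

def clean(scores, threshold=0.5):
    # Drop episodes below the quality threshold.
    return [e for e, s in scores.items() if s >= threshold]

def export(episodes, fmt="lerobot"):
    return [f"{e}.{fmt}" for e in episodes]        # export to one container format

def train(files):
    return f"policy_trained_on_{len(files)}_episodes"

def deploy(policy):
    return f"deployed:{policy}"

result = deploy(train(export(clean(quality_score(record())))))
print(result)  # → deployed:policy_trained_on_2_episodes
```

The low-quality episode is filtered out at the Clean stage, so only two of the three recordings reach training.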

Export Formats

MCAP, HDF5, LeRobot (Parquet), and OpenPI — export to the format your training pipeline needs. No conversion scripts required.
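All four containers carry the same per-timestep content: a shared timestamp, the robot's joint state, the commanded action, and references to the synchronized camera frames. A minimal sketch of such a record in Python, with hypothetical field names loosely following LeRobot's `observation.*` / `action` key convention:

```python
# Illustrative per-timestep record; field names are hypothetical,
# not the platform's actual export schema.
import json

def make_step(t_ns, joints, action, camera_frames):
    return {
        "timestamp_ns": t_ns,                  # shared clock across all streams
        "observation.state": joints,           # joint positions (rad)
        "action": action,                      # commanded joint targets (rad)
        "observation.images": camera_frames,   # per-camera frame references
    }

episode = [
    make_step(0,          [0.00, 0.10], [0.00, 0.12], {"wrist": "frame_0000.jpg"}),
    make_step(33_000_000, [0.00, 0.12], [0.00, 0.14], {"wrist": "frame_0001.jpg"}),
]

# LeRobot stores rows like these as Parquet; HDF5 packs the same content
# into arrays, and MCAP into timestamped channels.
print(json.dumps(episode[0], indent=2))
```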

Automatic Quality Scoring

Auto-detect stalls, failures, resets, and out-of-distribution episodes. Every recording gets a quality score so you only train on clean data.

Only 37% of Raw Teleop Data Is Usable for Supervised Fine-Tuning (SFT)

Most teleop sessions include false starts, operator hesitation, and recovery motions that actively harm policy training. Our quality scorer automatically identifies the good demonstrations so you do not waste compute training on noise.
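As one illustration of the kind of check such a scorer can run, here is a toy stall detector over a joint-position trace. The epsilon, minimum run length, and scoring rule are made up for the example and are not the platform's actual heuristics:

```python
# Toy quality check: measure the fraction of an episode spent in "stalls",
# i.e. runs of at least min_run consecutive steps with near-zero joint motion.

def stall_fraction(joint_positions, eps=1e-3, min_run=5):
    """Fraction of transitions inside a near-zero-motion run of >= min_run steps."""
    n = len(joint_positions)
    if n < 2:
        return 0.0
    # True where the arm barely moved between consecutive timesteps.
    still = [
        all(abs(a - b) < eps for a, b in zip(joint_positions[i], joint_positions[i + 1]))
        for i in range(n - 1)
    ]
    stalled, run = 0, 0
    for s in still:
        if s:
            run += 1
        else:
            if run >= min_run:
                stalled += run      # count only runs long enough to be a stall
            run = 0
    if run >= min_run:
        stalled += run              # trailing stall at the end of the episode
    return stalled / (n - 1)

# A single-joint trace that moves for 5 steps, then freezes for 8:
trace = [[0.1 * i] for i in range(5)] + [[0.4]] * 8
print(round(stall_fraction(trace), 2))  # → 0.67
```

An episode-level score could then combine this with similar detectors for failures, resets, and out-of-distribution states, and only episodes above a threshold would pass to training.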

Training Models

Upload your data, pick an architecture, get a trained policy. We support the models that matter for robot manipulation.

ACT

Action Chunking with Transformers. Fast to train, works well with 50–200 demonstrations for single-arm tasks. The default for most teams starting out.

Diffusion Policy

Denoising diffusion for action generation. Handles multi-modal action distributions — ideal for contact-rich and bimanual tasks.

Tactile-VLA

Vision-Language-Action model with tactile input. Language-conditioned multi-task policies for robots with force and touch sensing.

Fine-tune Pi0 / Spirit

Start from pre-trained open-source foundation models and fine-tune on your data. Faster convergence, less data required.

See our AI Models page for architecture details and benchmarks.

Ready to Start?

Tell us about your robot, your task, and your timeline. We will scope the right tier and get you moving.

Email us directly: contact@roboticscenter.ai