The Lab

The SVRC Palo Alto lab is located at 654 High Street, Palo Alto — two blocks from the Stanford Research Park boundary. The facility covers approximately 3,500 square feet, organized into four zones: 12 active data collection stations, a GPU compute cluster, a hardware integration and testing area, and a conference and remote collaboration space.

The location adjacent to the Stanford research corridor is intentional: we maintain research relationships with several Stanford groups and host visiting researchers regularly. Being in Palo Alto rather than a lower-cost location reflects the value of proximity to the robotics research community — a significant fraction of our academic partnerships originate from personal connections with Stanford faculty and graduate students.

Robot Fleet

The active robot fleet includes 20+ robots across several platform categories:

  • OpenArm 7-DOF fleet (8 stations): Our standard manipulation platform for single-arm data collection. 7-DOF serial chain, 5kg payload, 850mm reach, equipped with wrist-mounted Intel RealSense D405 and ATI Mini45 force/torque sensor. These stations handle the majority of standard manipulation task collection.
  • Bimanual ALOHA-style stations (2 stations): Full bimanual setups based on the Stanford ALOHA configuration. 14-DOF total, overhead camera + wrist cameras on each arm. Used for bimanual assembly, garment handling, and two-handed manipulation tasks.
  • Dexterous hand setups (2 stations): Single-arm plus multi-finger dexterous hand for in-hand manipulation, tool use, and high-dexterity tasks. Currently configured with Inspire Hands RH56DP (6 actuated DOF, tactile sensing).
  • Mobile manipulation platforms (2 stations): Hello Robot Stretch 3 for navigation + manipulation tasks, and a mobile ALOHA configuration for bimanual mobile manipulation research.
  • Specialty platforms: Unitree G1 humanoid (1 unit), kitchen manipulation station with Fanuc CR-7iA/L, and a dedicated glove teleoperation station for high-dexterity tasks.

Data Collection Infrastructure

  • 3-camera synchronized setup per station: Each station runs a standard camera configuration: overhead wide-angle (Intel RealSense D435i), wrist-mounted close-up (RealSense D405), and side-view context camera (Basler acA1920-40gc). All three cameras synchronized to within 1ms via hardware trigger. Frame rate: 30fps standard, 60fps available for high-speed tasks.
  • Custom HDF5 pipeline: Episode data stored as HDF5 with synchronized observations (joint states, camera frames, F/T readings) and action sequences. Compatible with LeRobot, RLDS, and custom loaders. Average episode size: 50–200MB depending on duration and camera count.
  • 50TB+ stored dataset: Accumulated across 3 years of collection. Indexed in PostgreSQL by task type, robot platform, operator, date, and success label. Available to customers with appropriate data access agreements.
  • Automated quality classifier: Fine-tuned ResNet-50 on the final 10 frames of each episode. Achieves 92% accuracy on held-out test sets for binary success classification. All new episodes pass through the classifier; borderline predictions (confidence 0.4–0.7) are queued for human review.
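The episode format described above can be sketched as a small HDF5 writer/reader. This is an illustrative layout only, assuming h5py and NumPy; the actual group and dataset names in the SVRC pipeline may differ, and array shapes here are toy-sized.

```python
import os
import tempfile

import h5py
import numpy as np


def write_episode(path, joint_states, frames, ft_readings, actions, success):
    """Write one episode of synchronized observations and actions to HDF5.

    Group/dataset names are hypothetical, chosen to mirror the description
    above (joint states, camera frames, F/T readings, action sequence).
    """
    with h5py.File(path, "w") as f:
        obs = f.create_group("observations")
        obs.create_dataset("joint_states", data=joint_states)  # (T, 7)
        # Nested path creates the intermediate "images" group automatically.
        obs.create_dataset("images/overhead", data=frames, compression="gzip")  # (T, H, W, 3)
        obs.create_dataset("ft_wrench", data=ft_readings)  # (T, 6)
        f.create_dataset("actions", data=actions)  # (T, 7)
        f.attrs["success"] = success  # binary label from the quality classifier


T = 30  # one second of data at the standard 30 fps
path = os.path.join(tempfile.mkdtemp(), "episode_0000.hdf5")
write_episode(
    path,
    joint_states=np.zeros((T, 7), dtype=np.float32),
    frames=np.zeros((T, 48, 64, 3), dtype=np.uint8),
    ft_readings=np.zeros((T, 6), dtype=np.float32),
    actions=np.zeros((T, 7), dtype=np.float32),
    success=True,
)

with h5py.File(path, "r") as f:
    shape = f["observations/joint_states"].shape
    success = bool(f.attrs["success"])
print(shape)  # (30, 7)
```

Storing observations and actions as parallel time-indexed datasets in one file is what makes the format easy to map into LeRobot- or RLDS-style loaders.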

GPU Cluster

The compute infrastructure consists of 8× NVIDIA A100 80GB GPUs across two servers, with 25GbE interconnect to the NAS storage array. The cluster is shared between internal research and customer training jobs, managed via a job queue with SLA tiers:

  • Standard queue: Training start within 24 hours, best-effort scheduling. Appropriate for most policy training jobs.
  • Priority queue: Training start within 4 hours. Available to enterprise customers and time-sensitive research projects.
  • Reserved access: Dedicated GPU allocation for ongoing projects. Available via monthly agreement.

Typical training turnaround for standard policy sizes (ACT at 200 demos, diffusion policy at 500 demos) is 4–12 hours on a single A100. Large foundation model fine-tuning runs (100K+ demos, large backbone) are scoped individually.

Collaboration Models

  • Academic: University research groups with publication plans. Terms: free data collection and compute in exchange for dataset sharing and acknowledgment.
  • Startup: Seed to Series A robotics companies. Terms: discounted service rates for advisory equity or extended engagement.
  • Enterprise: Series B+ companies or large deployment programs. Terms: custom SLA, dedicated capacity, full IP ownership of collected data.

We're actively looking for research partners in dexterous manipulation, mobile manipulation, and foundation model fine-tuning. If you're interested in visiting the lab or starting a collaboration, see our join page or reach out via the contact form.