Palo Alto · 200–300 builders, researchers, operators, investors

Robotics Data Summit

A high-signal gathering on the next bottleneck in robotics: what data is still missing, what teams need to collect next, and how to connect hardware, teleoperation, annotation, evaluation, and deployment into one real operating loop.

200–300 expected attendees
Hardware + data live demos and working stacks
Experts + professors on what robotics still cannot learn well

Robotics does not only need better models. It needs better data loops.

We want one room where labs, startups, operators, and system builders can compare notes on what is still missing from embodied AI datasets: failure data, tactile signals, edge cases, recovery traces, human corrections, fleet feedback, and domain-specific workflows.

What is missing today

Long-horizon household sequences, multi-camera manipulation traces, tactile-rich grasp failures, intervention data, and evaluation datasets tied to real deployment constraints.

What experts will discuss

Which data modalities matter most now, how far simulation can go alone, where annotation standards still break, and what new benchmarks the ecosystem actually needs.

What attendees will leave with

A clearer map of the robotics data stack, practical collection and annotation patterns, and concrete ways to turn hardware access into training-ready datasets.

Hands-on systems people can actually inspect

Not just slides. Real robots, data collection rigs, and operator workflows on the floor.


OpenArm data collection rigs

Accessible, affordable, and practical for collecting manipulation demonstrations, operator interventions, and rapid task iteration.


Humanoids and whole-body capture

What high-value humanoid data looks like beyond isolated clips: intent, balance, contact, recovery, and supervision cost.


Embodied platforms for operations

How teams move from demos to repeatable data pipelines with teleop, policy evaluation, annotation QA, and real-world feedback.

What we want industry experts and professors to challenge us on

Which robotics data is under-collected?

Failure traces, uncertainty labels, retries, tactile interactions, and sequences where humans intervene or correct a policy.

What is still too expensive to annotate?

Long videos, dense manipulation states, multimodal alignment, policy intent, and event labels that matter operationally rather than visually.

What would make datasets more reusable?

Shared schemas, richer metadata, cross-robot transfer assumptions, teleop provenance, environment tags, and benchmark-ready structure.
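To make the reuse question concrete, here is a minimal sketch of what a shared per-episode metadata record could look like. Every field name here (`robot_model`, `teleop_operator`, `environment_tags`, and so on) is a hypothetical example, not an existing standard:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class EpisodeMeta:
    """Illustrative per-episode metadata record; all fields are hypothetical."""
    episode_id: str
    robot_model: str                  # cross-robot transfer: which embodiment produced this
    teleop_operator: str              # teleop provenance
    environment_tags: list = field(default_factory=list)
    schema_version: str = "0.1"       # shared schema, versioned for reuse
    benchmark_splits: list = field(default_factory=list)

meta = EpisodeMeta(
    episode_id="ep_0001",
    robot_model="openarm-v1",
    teleop_operator="op_07",
    environment_tags=["kitchen", "low_light"],
)
record = asdict(meta)  # plain dict, ready to serialize alongside the raw media
```

The point of a record like this is not the exact fields but that provenance and environment context travel with the episode, so a second team can filter and re-split the data without guessing.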

What is still missing from evaluation?

Offline QA tied to deployment risk, benchmark coverage for edge cases, and loops that connect rollout evidence back into collection priorities.

How SVRC connects hardware, operators, annotation, and deployment feedback

We do not want a summit that stops at “data is important.” We want to show the full stack teams can use: hardware access, teleoperation, multimodal capture, structured annotation, QA, and a platform that keeps the loop moving.

1

Access real hardware

OpenArm, humanoids, hands, and mobile systems that generate meaningful interaction data instead of toy-scale traces.

2

Capture richer operator signals

Teleop, camera views, state streams, intervention logs, and session metadata that tell you how a task actually unfolds.

3

Structure annotation and QA

Task rubrics, reviewer roles, reject reasons, versioned annotations, and a cleaner path from raw media to learning-ready assets.
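As a sketch of what "structured annotation and QA" can mean in practice, here is one hypothetical shape for a review record, with an explicit reject-reason taxonomy and a version number instead of silent overwrites. The enum values and field names are illustrative assumptions, not a real platform API:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RejectReason(Enum):
    # Hypothetical reject taxonomy; real rubrics would be task-specific
    OCCLUDED = "occluded"
    WRONG_TIMESTAMP = "wrong_timestamp"
    RUBRIC_MISMATCH = "rubric_mismatch"

@dataclass(frozen=True)
class AnnotationReview:
    annotation_id: str
    version: int                      # versioned annotations: bump, don't overwrite
    reviewer_role: str                # e.g. "annotator" vs "qa_lead"
    accepted: bool
    reject_reason: Optional[RejectReason] = None

review = AnnotationReview(
    annotation_id="ann_42",
    version=2,
    reviewer_role="qa_lead",
    accepted=False,
    reject_reason=RejectReason.OCCLUDED,
)
```

Recording *why* an annotation was rejected, not just that it was, is what turns QA output into training signal for annotators and into filters for downstream dataset builds.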

4

Feed deployment back into data ops

Use platform telemetry, episode history, and failure review to decide what to collect next instead of waiting for intuition alone.
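One way to make "decide what to collect next" mechanical rather than intuitive is a simple scoring pass over telemetry. The heuristic below is a toy assumption for illustration (not SVRC's actual method): rank task categories by failure rate, weighted so that frequently deployed tasks are not drowned out by rare ones:

```python
# Hypothetical telemetry: task -> (episodes_seen, failures)
telemetry = {
    "drawer_open": (400, 60),
    "cable_insert": (120, 54),
    "bin_pick": (900, 45),
}

def collection_priority(episodes: int, failures: int) -> float:
    """Toy heuristic: failure rate, up-weighted by sqrt of deployment volume."""
    failure_rate = failures / episodes
    return failure_rate * episodes ** 0.5

ranked = sorted(
    telemetry,
    key=lambda task: collection_priority(*telemetry[task]),
    reverse=True,
)
# ranked[0] is the task most in need of new demonstrations
```

Even a crude score like this closes the loop: rollout evidence feeds a ranked collection queue instead of an ad-hoc wishlist.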

A summit agenda designed to stay useful

09:30

Doors open + hardware walk-in

Demo floor opens with robot stations, data collection examples, and platform walkthroughs.

10:30

Opening panel: What robotics data still lacks

Researchers and operators unpack the gap between current datasets and real deployment needs.

12:00

Lightning talks from labs and industry

Short, concrete talks on tactile data, teleop supervision, evaluation gaps, and annotation bottlenecks.

14:00

Working session: From robot to training-ready dataset

An integrated session on capture, schema, QA, annotation, and storage patterns teams can adopt immediately.

16:00

Roundtable: What should the ecosystem build next?

Benchmarks, shared formats, missing modalities, and what could make the next year of robotics data genuinely better.

17:30

Networking + founder / lab matching

Meet researchers, hardware teams, data operators, and companies building embodied AI stacks.

Want to attend, speak, or bring a demo?

We are curating a room for founders, professors, students, robotics operators, and data teams who care about what robotics must learn next.