Communication & system architecture

“Communication structure design” is where buses, middleware, and APIs meet: you need consistent time bases, clear failure modes, and logging that preserves what the policy actually saw.

ROS 2 graphs, APIs, and system timing

Learning outcomes

  • Map physical buses to middleware and application layers for one robot you know.
  • Identify where time sync and command-rate limits matter for teleop.
  • Name three integration bugs that look like “AI failure” but are wiring or timing.
Learn

Layered stack: electrical → drivers → ROS/HTTP → apps.

Practice

Trace one command from joystick to motor; note clocks and drops.

Challenge

Document your graph in a diagram; invite review on the Forum.

Facilitation: Whiteboard the stack as a class — students add failure arrows (noise, latency, version skew).

Self-check

Why does “good policy, bad behavior” happen?
Often an observation or action mismatch across time bases or coordinate frames.
What to log first?
Versions, timestamps, and joint commands before expanding sensors.

STEM alignment: systems & networks, debugging, communicating technical design.

1. Layered stack

  • Physical / electrical — CAN, EtherCAT, RS-485; termination, grounding, EMI.
  • Device drivers — motor frames, encoder packets, camera MIPI/USB.
  • Middleware — ROS 2 / DDS: topics, QoS, discovery.
  • Application — planners, policies, teleop UI, loggers.
  • Edge / cloud — training jobs, fleet analytics (policy & privacy).
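A useful habit when debugging across these layers is to stamp a command at each boundary so you can see where latency accumulates. A minimal sketch (pure Python; the layer names and payload fields are illustrative, not a real driver API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class CommandTrace:
    """A teleop command carrying a per-layer timestamp trail."""
    payload: dict
    stamps: dict = field(default_factory=dict)

    def stamp(self, layer: str) -> None:
        # monotonic clock: immune to NTP jumps, good for latency deltas
        self.stamps[layer] = time.monotonic()

    def latency(self, src: str, dst: str) -> float:
        return self.stamps[dst] - self.stamps[src]

cmd = CommandTrace(payload={"joint_vel": [0.1, 0.0]})
cmd.stamp("app")         # teleop UI emits the command
cmd.stamp("middleware")  # serialized onto a topic
cmd.stamp("driver")      # translated to a motor frame
print(f"app→driver: {cmd.latency('app', 'driver') * 1e3:.3f} ms")
```

Comparing deltas between layers quickly tells you whether a slow loop lives in the application, the middleware, or the driver.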

2. Reference teleop flow

Forward and feedback paths should share a time reference; under delay, prefer slowing motion over chasing aggressive gains.
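"Slow down instead of chasing gains" can be made concrete by scaling commanded velocity toward zero as feedback grows stale. A sketch under assumed thresholds (the 50 ms / 250 ms values are illustrative, not tuned for any specific robot):

```python
def delay_scaled_velocity(v_cmd: float, feedback_age_s: float,
                          soft_s: float = 0.05, hard_s: float = 0.25) -> float:
    """Linearly scale commanded velocity to zero as feedback ages.

    soft_s: age below which full speed is allowed
    hard_s: age at or above which motion stops
    (both thresholds are illustrative placeholders)
    """
    if feedback_age_s <= soft_s:
        return v_cmd
    if feedback_age_s >= hard_s:
        return 0.0
    scale = 1.0 - (feedback_age_s - soft_s) / (hard_s - soft_s)
    return v_cmd * scale

print(delay_scaled_velocity(1.0, 0.02))  # fresh feedback → full speed
print(delay_scaled_velocity(1.0, 0.15))  # stale → half speed
print(delay_scaled_velocity(1.0, 0.40))  # too stale → stop
```

The same idea extends to impedance or gain scheduling: degrade authority gracefully rather than letting a fixed high-gain loop fight old data.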

3. ROS 2 & QoS

Separate best-effort sensor streams from reliable control commands where needed. Mismatched QoS is a common cause of the “I see images but the arm never moves” failure. Namespace robots clearly when multiple arms or sim+real coexist.
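The silent-failure mode comes from DDS's request-offered matching rule for the reliability policy: a RELIABLE subscription only connects to a RELIABLE publisher, while a BEST_EFFORT subscription accepts either. A stdlib-only sketch of just that rule (not a real rclpy call):

```python
def qos_compatible(pub_reliability: str, sub_reliability: str) -> bool:
    """DDS request-offered rule for the reliability QoS policy:
    a 'reliable' subscription matches only a 'reliable' publisher;
    a 'best_effort' subscription matches either offer."""
    if sub_reliability == "reliable":
        return pub_reliability == "reliable"
    return True

# The classic mismatch: a sensor-style best-effort publisher never
# connects to a control-style reliable subscriber — no error, no data.
assert not qos_compatible("best_effort", "reliable")
assert qos_compatible("reliable", "best_effort")
```

In ROS 2, `ros2 topic info --verbose <topic>` shows each endpoint's QoS, which is usually the fastest way to confirm this mismatch on a live graph.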

4. Web & HTTP APIs

Fast iteration often uses a localhost HTTP or WebSocket bridge for teleop and logging. Harden with auth, rate limits, and input validation before any wide-area deployment.
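The three hardening steps can each be a few lines. A stdlib-only sketch (the token, rate, and velocity limit are hypothetical placeholders; in practice load the secret from the environment, not source):

```python
import hmac
import time

API_TOKEN = "change-me"  # hypothetical shared secret; never hard-code in practice

def authorized(header_token: str) -> bool:
    # constant-time compare avoids leaking the token via timing
    return hmac.compare_digest(header_token, API_TOKEN)

class TokenBucket:
    """Simple rate limiter: at most `rate` commands/s, burst of `burst`."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def validate(cmd: dict, v_max: float = 0.5) -> dict:
    # clamp velocities so a malformed request can't exceed actuator limits
    return {"joint_vel": [max(-v_max, min(v_max, v)) for v in cmd["joint_vel"]]}
```

Wire these in front of whatever HTTP or WebSocket handler you use: reject unauthorized requests, drop commands over the rate limit, and clamp everything that reaches the actuators.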

5. Alignment with SVRC’s data loop

Our approach emphasizes capture → evaluation → failure replay → retraining. Architecturally, that means record close to the sensor timestamp source, version your software stack per dataset, and make evaluation jobs reproducible — themes echoed on the Data Platform page.
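"Record close to the sensor timestamp source" and "version your stack per dataset" translate to a log record that carries both clocks and the stack versions alongside every sample. A minimal JSONL sketch (the version keys and payload fields are illustrative):

```python
import json
import time

STACK_VERSIONS = {          # illustrative: capture these once per dataset
    "policy": "2024.06.1",
    "driver": "1.4.2",
    "ros_distro": "humble",
}

def make_record(sensor_stamp_ns: int, obs: dict, action: dict) -> dict:
    return {
        "t_sensor_ns": sensor_stamp_ns,  # clock closest to the sensor
        "t_logged_ns": time.time_ns(),   # when the logger saw it
        "versions": STACK_VERSIONS,      # ties the sample to the exact stack
        "obs": obs,
        "action": action,
    }

rec = make_record(123_456_789, {"img_id": 7}, {"joint_vel": [0.1]})
line = json.dumps(rec)  # one JSONL line per sample
```

Keeping both timestamps lets failure replay distinguish "the policy saw stale data" from "the logger fell behind", and the embedded versions make an evaluation job reproducible against the same stack.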

Checklist:

  • End-to-end latency budget?
  • Command rate vs. actuator bandwidth?
  • Safe behavior on timeout?
  • Logs include timestamps and software version?
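"Safe behavior on timeout" usually means a command watchdog: hold the last command only while it is fresh, and fall back to a safe stop when it goes stale. A stdlib-only sketch (the 200 ms default is an illustrative placeholder):

```python
import time

class CommandWatchdog:
    """Holds the last command; returns a safe stop once it goes stale."""
    def __init__(self, timeout_s: float = 0.2, safe_cmd=None):
        self.timeout_s = timeout_s
        self.safe_cmd = safe_cmd if safe_cmd is not None else [0.0]
        self._cmd = self.safe_cmd
        self._stamp = time.monotonic()

    def update(self, cmd) -> None:
        self._cmd, self._stamp = cmd, time.monotonic()

    def output(self):
        if time.monotonic() - self._stamp > self.timeout_s:
            return self.safe_cmd  # fail safe: stop, don't replay stale motion
        return self._cmd
```

Run the watchdog in the actuator-side loop so the robot stops even when the link, not the teleop client, is what failed.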