Communication & system architecture
“Communication structure design” is where buses, middleware, and APIs meet: you need consistent time bases, clear failure modes, and logging that preserves what the policy actually saw.
Learning outcomes
- Map physical buses to middleware and application layers for one robot you know.
- Identify where time sync and command-rate limits matter for teleop.
- Name three integration bugs that look like “AI failure” but are actually wiring or timing problems.
Layered stack: electrical → drivers → ROS/HTTP → apps.
Trace one command from joystick to motor; note clocks and drops.
Document your graph in a diagram; invite review on the Forum.
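Tracing a command from joystick to motor works best if you stamp it with the same monotonic clock at every hop and carry a sequence number so drops are visible. A minimal sketch of that trace (the hop names and the `Hop` record are illustrative, not part of any particular stack):

```python
import time
from dataclasses import dataclass

@dataclass
class Hop:
    """One timestamped observation of a command crossing a layer."""
    name: str
    seq: int        # sequence number carried with the command
    t_mono: float   # monotonic clock, seconds

def trace_deltas(hops):
    """Per-hop latencies (seconds) for one command's journey."""
    return {f"{a.name}->{b.name}": b.t_mono - a.t_mono
            for a, b in zip(hops, hops[1:])}

def find_drops(seqs_sent, seqs_received):
    """Sequence numbers that never arrived at the next layer."""
    return sorted(set(seqs_sent) - set(seqs_received))

# Example: the same command stamped at three layers.
t0 = time.monotonic()
hops = [Hop("joystick", 42, t0),
        Hop("host_bridge", 42, t0 + 0.004),
        Hop("motor_driver", 42, t0 + 0.011)]
print(trace_deltas(hops))                  # per-hop latency in seconds
print(find_drops([40, 41, 42], [40, 42]))  # -> [41]
```

Note the use of `time.monotonic()` rather than wall-clock time: wall clocks can jump (NTP corrections), which corrupts latency measurements.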
Self-check
Why does “good policy, bad behavior” happen?
What should you log first when debugging?
STEM alignment: systems & networks, debugging, communicating technical design.
1. Layered stack
- Physical / electrical — CAN, EtherCAT, RS-485; termination, grounding, EMI.
- Device drivers — motor frames, encoder packets, camera MIPI/USB.
- Middleware — ROS 2 / DDS: topics, QoS, discovery.
- Application — planners, policies, teleop UI, loggers.
- Edge / cloud — training jobs, fleet analytics (policy & privacy).
2. Reference teleop flow
[ Operator input ] VR, gamepad, keyboard, space mouse
│
▼ USB / Bluetooth / WebSocket
[ Host bridge ] Retargeting, filtering, workspace limits
│
├── HTTP / WebSocket ──► [ Policy / recorder ]
│
▼
[ ROS 2 graph ] Perception · state estimation · controllers
│
▼
[ Fieldbus ] CAN / EtherCAT frames
│
▼
[ Actuators ] Current / position / torque mode
▲
│
[ Proprioception ] Encoders, IMU, F/T (if equipped)
Forward and feedback paths should share a time reference; under delay, prefer slowing motion over chasing aggressive gains.
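One way to apply the "slow down rather than chase gains" rule is to scale commanded velocity by measured feedback delay. A minimal sketch, where the `delay_soft`/`delay_hard` thresholds are illustrative and would be tuned per robot:

```python
def delay_scaled_velocity(v_cmd, delay_s, delay_soft=0.05, delay_hard=0.25):
    """Scale a commanded velocity down as feedback delay grows.

    Below delay_soft the command passes through; above delay_hard motion
    stops; in between it ramps down linearly. Thresholds are illustrative.
    """
    if delay_s <= delay_soft:
        return v_cmd
    if delay_s >= delay_hard:
        return 0.0
    scale = 1.0 - (delay_s - delay_soft) / (delay_hard - delay_soft)
    return v_cmd * scale

print(delay_scaled_velocity(0.5, 0.02))   # -> 0.5 (fresh feedback, full speed)
print(delay_scaled_velocity(0.5, 0.15))   # ~0.25 (stale, half speed)
print(delay_scaled_velocity(0.5, 0.30))   # -> 0.0 (too stale, stop)
```

The same shape works for workspace limits or gain scheduling: degrade gracefully as the data you act on gets older, instead of acting confidently on stale state.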
3. ROS 2 & QoS
Separate best-effort sensor streams from reliable control commands where needed. Mismatched QoS is a common cause of the “I see images but the arm never moves” failure: endpoints whose reliability settings are incompatible silently fail to match, with no error raised. Namespace robots clearly when multiple arms or sim+real coexist.
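The underlying rule is DDS request-offered matching: a publisher's offered QoS must be at least as strong as the subscriber's requested QoS, so a best-effort publisher never connects to a reliable subscriber. A pure-Python sketch of that compatibility check (the enum stands in for the real rclpy/DDS types):

```python
from enum import Enum

class Reliability(Enum):
    BEST_EFFORT = 1
    RELIABLE = 2

def qos_compatible(offered: Reliability, requested: Reliability) -> bool:
    """DDS request-offered matching for the reliability policy:
    the publisher's offer must be at least as strong as the
    subscriber's request, or the endpoints silently never connect."""
    return offered.value >= requested.value

# The classic silent failure: a camera driver offers BEST_EFFORT,
# your node subscribes RELIABLE -> no match, no callbacks, no error.
print(qos_compatible(Reliability.BEST_EFFORT, Reliability.RELIABLE))  # False
print(qos_compatible(Reliability.RELIABLE, Reliability.BEST_EFFORT))  # True
```

In ROS 2 the same mismatch is diagnosable with `ros2 topic info --verbose`, which shows each endpoint's QoS.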
4. Web & HTTP APIs
Fast iteration often uses a localhost HTTP or WebSocket bridge for teleop and logging. Harden with auth, rate limits, and input validation before any wide-area deployment.
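Two of those hardening layers fit in a few lines each: a token-bucket rate limiter and a validator that rejects malformed or out-of-range commands before they reach the robot. A minimal sketch (the message fields and limits are illustrative, not a real bridge's schema):

```python
import time

class TokenBucket:
    """Simple rate limiter: allow at most `rate` commands/second,
    with short bursts up to `burst`."""
    def __init__(self, rate: float, burst: int):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def validate_command(msg: dict, max_speed: float = 0.5) -> bool:
    """Reject malformed or out-of-range teleop messages.
    Field names and the speed limit are illustrative."""
    if not isinstance(msg.get("joint"), str):
        return False
    v = msg.get("velocity")
    return isinstance(v, (int, float)) and abs(v) <= max_speed

print(validate_command({"joint": "elbow", "velocity": 0.2}))  # True
print(validate_command({"joint": "elbow", "velocity": 9.0}))  # False
```

Both checks belong in the bridge, not the robot firmware: by the time a bad command reaches the fieldbus it is too late to reject it cheaply.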
5. Alignment with SVRC’s data loop
Our approach emphasizes capture → evaluation → failure replay → retraining. Architecturally, that means record close to the sensor timestamp source, version your software stack per dataset, and make evaluation jobs reproducible — themes echoed on the Data Platform page.
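Concretely, "record close to the sensor timestamp source" and "version your stack per dataset" can mean that every logged sample carries both the sensor's own timestamp and the versions of the software that produced it. A minimal sketch, assuming a JSON-lines log format and placeholder version strings:

```python
import json
import time

SOFTWARE_VERSIONS = {          # illustrative: pin the stack per dataset
    "bridge": "0.3.1",
    "policy": "git:<commit>",  # placeholder, not a real commit id
}

def make_record(sensor_ts: float, payload: dict) -> str:
    """Serialize one sample keyed by the *sensor's* timestamp, tagged
    with the software versions that produced it, so evaluation jobs
    can replay failures against the exact stack that ran."""
    record = {
        "t_sensor": sensor_ts,    # clock closest to the data source
        "t_logged": time.time(),  # wall clock at write time, for audit
        "versions": SOFTWARE_VERSIONS,
        "data": payload,
    }
    return json.dumps(record, sort_keys=True)

line = make_record(1712345678.901, {"joint_pos": [0.1, 0.2]})
print(line)
```

Keeping `t_sensor` and `t_logged` separate makes transport and logging latency measurable after the fact, which is exactly what failure replay needs.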