Definition

Diffusion Policy applies denoising diffusion probabilistic models (DDPMs) to action generation. Instead of predicting a single action, the model iteratively denoises a random sample into an action trajectory. This enables capturing multimodal distributions over possible behaviors — critical for contact-rich manipulation where multiple valid strategies exist. Diffusion Policy has shown strong results on bimanual tasks, tool use, and cloth folding. It typically operates on action chunks (sequences of 8–32 future actions) rather than single-step predictions.
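The iterative denoising described above can be sketched with the standard DDPM reverse process. This is a minimal illustration, not the reference implementation: `eps_theta` is a placeholder for the learned noise-prediction network (in a real Diffusion Policy it is an observation-conditioned U-Net or transformer), and the horizon, action dimension, and noise schedule are arbitrary example values.

```python
import numpy as np

def eps_theta(actions, t):
    # Placeholder for the trained noise-prediction network.
    # A real policy would condition on the current observation;
    # predicting zero noise keeps this sketch self-contained.
    return np.zeros_like(actions)

def ddpm_sample_actions(horizon=16, action_dim=7, num_steps=50, seed=0):
    """Denoise a Gaussian sample into an action chunk of shape
    (horizon, action_dim) via the DDPM reverse process."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, num_steps)      # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal((horizon, action_dim))  # start from pure noise
    for t in reversed(range(num_steps)):
        eps = eps_theta(x, t)                       # predicted noise at step t
        # DDPM posterior mean update
        x = (x - (betas[t] / np.sqrt(1.0 - alpha_bars[t])) * eps) / np.sqrt(alphas[t])
        if t > 0:                                   # re-noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

actions = ddpm_sample_actions()
print(actions.shape)  # (16, 7): a chunk of 16 future 7-DoF actions
```

Note that the output is a whole trajectory segment, not a single action; in deployment the robot typically executes part of the chunk and then re-queries the policy with a fresh observation.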

Why It Matters for Robot Teams

Understanding Diffusion Policy is essential for teams building real-world robot systems. Because the model represents a distribution over action chunks, demonstration datasets containing multiple valid strategies can be used directly, without the mode-averaging that degrades unimodal behavior-cloning policies. Its trade-offs also shape system design: iterative denoising adds inference latency that must fit the control loop, and chunk length determines how often the policy is re-queried and how reactive the system is to new observations.