Neural Policy

A neural policy is a robot control policy parameterized by a neural network that maps observations (images, proprioception, language) directly to actions (joint positions, Cartesian deltas, gripper commands). In contrast to classical motion planning pipelines, neural policies learn the mapping end-to-end from data without hand-engineered intermediate representations. Modern neural policies use convolutional encoders for vision, transformers for sequence modeling, and architectures like ACT, Diffusion Policy, or VLA backbones for action generation. A key property of neural policies is that they can be trained from demonstrations or reward signals, enabling them to handle tasks too complex for hand-coded controllers.
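The core idea, a parameterized function that maps an observation vector to an action vector, can be sketched as a tiny pure-Python MLP. This is a hypothetical illustration with made-up dimensions; real policies use convolutional or transformer encoders and far larger networks:

```python
import math
import random

class TinyNeuralPolicy:
    """Minimal MLP policy: observation vector -> action vector.

    A hypothetical sketch only -- production policies (ACT, Diffusion
    Policy, VLA backbones) are far larger and trained from data.
    """

    def __init__(self, obs_dim, act_dim, hidden=32, seed=0):
        rng = random.Random(seed)
        # Small random weights stand in for learned parameters.
        self.w1 = [[rng.gauss(0.0, 0.1) for _ in range(obs_dim)] for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [[rng.gauss(0.0, 0.1) for _ in range(hidden)] for _ in range(act_dim)]
        self.b2 = [0.0] * act_dim

    def act(self, obs):
        # Hidden layer with tanh nonlinearity.
        h = [math.tanh(sum(w * o for w, o in zip(row, obs)) + b)
             for row, b in zip(self.w1, self.b1)]
        # Output squashed to [-1, 1], e.g. normalized joint-position deltas.
        return [math.tanh(sum(w * x for w, x in zip(row, h)) + b)
                for row, b in zip(self.w2, self.b2)]

# Example: a 4-D proprioceptive observation mapped to a 2-D action.
policy = TinyNeuralPolicy(obs_dim=4, act_dim=2)
action = policy.act([0.1, -0.3, 0.5, 0.0])
```

In practice the weights would be fit by imitation learning (e.g. mean-squared error against demonstrated actions) or reinforcement learning rather than initialized randomly; the control loop then simply calls `act` on each new observation.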
Related terms: Policy, Deep Learning
