SAC

Soft Actor-Critic — an off-policy RL algorithm that maximizes both expected return and policy entropy, with a temperature parameter balancing the two. The entropy bonus encourages exploration while the off-policy replay buffer keeps training sample-efficient. SAC is popular for robot manipulation in simulation due to this sample efficiency and its relative robustness to hyperparameter choices; the entropy regularization also prevents premature convergence to deterministic policies and improves robustness to environment variations.
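The "return plus entropy" objective shows up concretely in SAC's soft Bellman backup: the value of the next state is the expected Q-value plus an entropy bonus weighted by the temperature α. Below is a minimal numpy sketch for a toy discrete-action case (all numbers are illustrative assumptions; real SAC uses continuous actions, neural Q-functions, and a learned or tuned α):

```python
import numpy as np

alpha = 0.2   # entropy temperature (illustrative value)
gamma = 0.99  # discount factor

# Assumed Q-values for the 3 actions available in the next state.
q_next = np.array([1.0, 2.0, 0.5])

# The entropy-maximizing policy for fixed Q-values is a softmax over Q / alpha.
logits = q_next / alpha
pi = np.exp(logits - logits.max())  # subtract max for numerical stability
pi /= pi.sum()

# Soft state value: expected Q plus the alpha-weighted policy entropy,
# V(s') = sum_a pi(a|s') * (Q(s', a) - alpha * log pi(a|s')).
v_next = np.sum(pi * (q_next - alpha * np.log(pi)))

# Soft Bellman target for Q(s, a) given an observed reward r.
r = 1.0
target = r + gamma * v_next
```

Note how the entropy term makes `v_next` slightly exceed the best raw Q-value: a deterministic greedy policy would score `max(q_next)`, but keeping some stochasticity earns the entropy bonus, which is exactly what discourages premature collapse to a deterministic policy.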

Robot Learning · RL
