Dropout
A regularization technique that randomly sets a fraction of neural network activations to zero during training, preventing co-adaptation of neurons. Dropout reduces overfitting and improves generalization. At inference time, all neurons are active; to keep expected activations consistent, outputs are scaled by the keep probability (or, with the common "inverted dropout" variant, surviving activations are scaled up during training so that no scaling is needed at inference). Dropout is commonly applied in policy networks to prevent overfitting to limited robot demonstration data.
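The mechanism above can be sketched as a small NumPy function; this is a minimal illustration of the inverted-dropout variant, with the function name and signature chosen for the example rather than taken from any library:

```python
import numpy as np

def dropout(x, p, training=True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability p and scale survivors by 1/(1-p) so the expected
    activation is unchanged. At inference, return x untouched."""
    if not training or p == 0.0:
        return x  # inference path: all neurons active, no scaling needed
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)

# During training, roughly half the units are zeroed and the rest doubled,
# so the mean activation stays close to the original value of 1.0.
x = np.ones((1000,))
y = dropout(x, p=0.5, rng=np.random.default_rng(0))
```

Frameworks such as PyTorch (`torch.nn.Dropout`) implement this same inverted scheme, which is why calling `model.eval()` turns dropout into a no-op at inference time.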