DK1 data collection gets messy when teams are not consistent about where an episode begins, where it ends, and which runs should be excluded before training.
How are you defining episode boundaries and filtering low-quality demonstrations in your DK1 workflow?
Please share practical checks for resets, idle frames, partial failures, and operator hesitation, plus what your QA pass looks like before a dataset is considered usable.
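To anchor the discussion, here is the naive baseline I'd start from, not a recommendation: split on an explicit operator reset event, then reject runs that are too short or mostly idle. The frame schema (`reset`, `joint_velocities`) and every threshold here are assumptions for illustration, not anything DK1 ships.

```python
import numpy as np

# Hypothetical frame schema: each frame is a dict with joint velocities
# and a "reset" flag from the teleop rig. All thresholds are guesses to tune.
IDLE_VEL_THRESHOLD = 0.01   # rad/s; below this, a frame counts as idle
MAX_IDLE_RATIO = 0.4        # reject runs that are mostly idle
MIN_EPISODE_FRAMES = 90     # ~3 s at 30 Hz; shorter runs are likely aborted

def split_episodes(frames):
    """Split a raw recording into episodes at operator reset events."""
    episodes, current = [], []
    for frame in frames:
        if frame.get("reset", False):
            if current:
                episodes.append(current)
            current = []
        else:
            current.append(frame)
    if current:
        episodes.append(current)
    return episodes

def is_idle(frame):
    """A frame is idle if every joint velocity is near zero."""
    vel = np.asarray(frame["joint_velocities"])
    return bool(np.all(np.abs(vel) < IDLE_VEL_THRESHOLD))

def qa_flags(episode):
    """Return a list of reasons to reject an episode (empty = keep)."""
    flags = []
    if len(episode) < MIN_EPISODE_FRAMES:
        flags.append("too_short")
    idle_ratio = sum(is_idle(f) for f in episode) / len(episode)
    if idle_ratio > MAX_IDLE_RATIO:
        flags.append("mostly_idle")
    return flags
```

This says nothing about partial failures or hesitation mid-trajectory, which is exactly where I'd like to hear what rules people actually use.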
If you reply, include one exact QA rule that caught a bad run and one rule that turned out to be too strict.