[TRLC-DK1] Episode boundaries and dataset QA for builders labs (intermediate)

How are you defining usable DK1 demonstrations so your team does not waste time training on low-quality episodes?

Post

DK1 data collection gets messy when teams are not consistent about where an episode begins, where it ends, and which runs should be excluded before training.

How are you defining episode boundaries and filtering low-quality demonstrations in your DK1 workflow?

Please share practical checks for resets, idle frames, partial failures, and operator hesitation, plus what your QA pass looks like before a dataset is considered usable.
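To make the ask concrete, here is a minimal sketch of the kind of per-episode QA pass the post is asking about. This is not DK1 tooling; the field names, thresholds, and flag strings are all hypothetical, chosen only to illustrate checks for short runs, idle frames, missing resets, and task failures.

```python
from dataclasses import dataclass

# Hypothetical per-episode summary; field names are illustrative,
# not part of any DK1 API.
@dataclass
class EpisodeStats:
    n_frames: int
    idle_frames: int          # frames with near-zero joint velocity
    reset_pose_reached: bool  # did the episode end in a clean reset?
    task_succeeded: bool

def qa_flags(ep: EpisodeStats,
             min_frames: int = 30,          # assumed threshold
             max_idle_ratio: float = 0.4):  # assumed threshold
    """Return a list of QA failure reasons; an empty list means the episode passes."""
    flags = []
    if ep.n_frames < min_frames:
        flags.append("too-short")
    if ep.n_frames and ep.idle_frames / ep.n_frames > max_idle_ratio:
        flags.append("excessive-idle")
    if not ep.reset_pose_reached:
        flags.append("no-clean-reset")
    # Failed runs can still be kept as informative failures,
    # but flag them so they can be routed to a separate split.
    if not ep.task_succeeded:
        flags.append("task-failure")
    return flags

print(qa_flags(EpisodeStats(n_frames=200, idle_frames=20,
                            reset_pose_reached=True, task_succeeded=True)))
# → []
```

A rule set like this makes "one rule that caught a bad run / one rule that was too strict" easy to report: each flag string is a candidate answer.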

If you reply, include one exact QA rule that caught a bad run and one rule that turned out to be too strict.

Module: TRLC-DK1 · Audience: builders-labs · Type: question

Tags: dk1, dataset-qa, episode-boundaries, demonstrations

Comment 1

The best replies here explain how a team distinguishes informative failures from noisy, unusable recordings.

Comment 2

Episode boundaries become much easier to search and reuse when people share exact examples such as "reset pose reached", "object released", or "task timeout hit".
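Named events like those make boundary detection a mechanical step. A minimal sketch, assuming frames are dicts that may carry an optional `event` tag (the event names and frame layout are assumptions for illustration, not a DK1 format):

```python
def split_episodes(frames, is_boundary):
    """Split a frame stream into episodes wherever is_boundary(frame) fires."""
    episodes, current = [], []
    for frame in frames:
        current.append(frame)
        if is_boundary(frame):
            episodes.append(current)  # boundary frame closes the episode
            current = []
    if current:
        episodes.append(current)      # trailing partial episode; QA may drop it
    return episodes

# Hypothetical frame stream with boundary events named as in the comment above.
frames = [{"t": 0}, {"t": 1, "event": "reset_pose_reached"},
          {"t": 2}, {"t": 3, "event": "task_timeout_hit"}]
episodes = split_episodes(frames, lambda f: "event" in f)
print(len(episodes))  # → 2
```

The point is that once a team agrees on explicit boundary events, the split function is trivial and the debate moves to where it belongs: which events count as boundaries.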

Comment 3

If you have a lightweight QA checklist before dataset export, post it. That is exactly what other labs are trying to find.