Embracing Noise: How Data Corruption Can Make Models Smarter
2025-12-10
Machine learning often assumes clean, high-quality data. Yet the real world is noisy, incomplete, and messy, and models trained only on sanitized datasets become brittle. This talk explores the counterintuitive idea that deliberately corrupting data during training can make models more robust. By adding structured noise, masking inputs, or flipping labels, we can prevent overfitting, improve generalization, and build systems that survive real-world conditions. Attendees will leave with a clear understanding of why “bad data” can sometimes lead to better models.
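The three corruption strategies named above can be sketched as simple training-time augmentations. This is a minimal NumPy illustration, not code from the talk; the function names, the noise scale `sigma`, and the corruption rates `p` are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma=0.1):
    """Perturb features with zero-mean Gaussian noise of scale sigma."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def mask_inputs(x, p=0.2):
    """Zero out roughly a fraction p of input features (dropout-style masking)."""
    keep = rng.random(x.shape) >= p
    return x * keep

def flip_labels(y, n_classes, p=0.05):
    """Replace roughly a fraction p of labels with uniformly random classes."""
    flip = rng.random(y.shape) < p
    random_labels = rng.integers(0, n_classes, size=y.shape)
    return np.where(flip, random_labels, y)

# Apply the corruptions to a toy batch before each training step.
X = rng.normal(size=(8, 4))
y = rng.integers(0, 3, size=8)

X_noisy = add_gaussian_noise(X)
X_masked = mask_inputs(X_noisy)
y_flipped = flip_labels(y, n_classes=3)
```

In practice these would run inside the data-loading pipeline so each epoch sees a freshly corrupted copy, while the clean originals are kept for evaluation.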