Microsoft Research | Final intern talk: Distilling Self-Supervised-Learning-Based Speech Quality Assessment into Compact | Uploaded August 2024 | Updated October 2024.
Speaker: Benjamin Stahl
Host: Hannes Gamper
In this talk, we explore advancements in computational models for speech quality assessment. Self-supervised learning models have emerged as powerful front-ends, outperforming supervised-only models. However, their large size renders them impractical for production tasks. We discuss strategies to distill self-supervised learning-based models into more compact forms using unlabeled data, achieving significant size reduction while maintaining an advantage over supervised-only models.
See more at microsoft.com/en-us/research/video/final-intern-talk-distilling-self-supervised-learning-based-speech-quality-assessment-into-compact
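The distillation strategy summarized above can be illustrated with a minimal sketch: a frozen "teacher" stands in for the large SSL-based quality predictor, and a much smaller "student" is trained to match the teacher's predicted scores on unlabeled audio features, so no human quality labels are required. All names and model shapes here are illustrative assumptions, not the talk's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_predict(features):
    # Stand-in for a frozen, large SSL-based quality predictor (assumed):
    # maps features to MOS-like scores in the open interval (1, 5).
    w = np.linspace(0.1, 1.0, features.shape[1])
    return 1.0 + 4.0 / (1.0 + np.exp(-(features @ w)))

# Unlabeled "speech features" -- no human MOS labels are needed;
# the teacher supplies pseudo-labels instead.
X = rng.normal(size=(512, 16))
targets = teacher_predict(X)

# Tiny student: a linear model fit by gradient descent on an MSE
# distillation loss between student and teacher predictions.
w_s = np.zeros(16)
b_s = 0.0
lr = 0.01
for _ in range(2000):
    pred = X @ w_s + b_s
    err = pred - targets
    w_s -= lr * (X.T @ err) / len(X)
    b_s -= lr * err.mean()

mse = float(np.mean((X @ w_s + b_s - targets) ** 2))
print(f"distillation MSE: {mse:.4f}")
```

In practice the student would be a compact neural network and the features would come from raw speech, but the training loop has the same shape: the student regresses onto the teacher's outputs over unlabeled data.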