Final intern talk: Distilling Self-Supervised-Learning-Based Speech Quality Assessment into Compact  @MicrosoftResearch
Microsoft Research | Uploaded August 2024 | Updated October 2024.
Speaker: Benjamin Stahl
Host: Hannes Gamper

In this talk, we explore advancements in computational models for speech quality assessment. Self-supervised learning models have emerged as powerful front-ends, outperforming supervised-only models. However, their large size renders them impractical for production tasks. We discuss strategies to distill self-supervised learning-based models into more compact forms using unlabeled data, achieving significant size reduction while maintaining an advantage over supervised-only models.
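The distillation idea described above can be illustrated with a minimal sketch: a compact "student" model is trained to match a large "teacher" model's quality predictions on unlabeled data, so no human MOS labels are required. Everything below (the teacher function, feature shapes, the linear student) is an illustrative assumption, not the actual models from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_predict(x):
    """Hypothetical stand-in for a large SSL-based quality model:
    maps per-utterance features to a MOS-like score around 3."""
    w = np.array([0.8, -0.5, 0.3])
    return 3.0 + np.tanh(x @ w)

# "Unlabeled" utterance features: the teacher's predictions
# serve as soft targets, so no human ratings are needed.
X = rng.normal(size=(512, 3))
y_teacher = teacher_predict(X)

# Compact student: a linear model trained with MSE against the
# teacher's outputs (the output-level distillation objective).
w_s = np.zeros(3)
b_s = 0.0
lr = 0.05
for _ in range(500):
    y_student = X @ w_s + b_s
    err = y_student - y_teacher          # distillation residual
    w_s -= lr * (X.T @ err) / len(X)     # gradient of MSE w.r.t. weights
    b_s -= lr * err.mean()               # gradient of MSE w.r.t. bias

mse = float(np.mean((X @ w_s + b_s - y_teacher) ** 2))
print(f"student-teacher MSE on unlabeled data: {mse:.3f}")
```

The student here is orders of magnitude smaller than the teacher yet tracks its scores closely, which is the core trade-off the talk explores at realistic model scales.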

See more at microsoft.com/en-us/research/video/final-intern-talk-distilling-self-supervised-learning-based-speech-quality-assessment-into-compact
