Title: UME: Upcycling Mixture-of-Experts for Scalable and Efficient Automatic Speech Recognition
Authors: Li Fu, Shanyong Yu, Siqi Li, Lu Fan, Youzheng Wu, Xiaodong He
Published: 23 December 2024
Link: http://arxiv.org/abs/2412.17507v1
Abstract
Recent advancements in scaling up models have significantly improved performance in Automatic Speech Recognition (ASR) tasks. However, training large ASR models from scratch remains costly. To address this issue, we introduce UME, a novel method that efficiently Upcycles pretrained dense ASR checkpoints into larger Mixture-of-Experts (MoE) architectures. Initially, feed-forward networks are converted into MoE layers. By reusing the pretrained weights, we establish a robust foundation for the expanded model, significantly reducing optimization time. Then, layer freezing and expert balancing strategies are employed to continue training the model, further enhancing performance. Experiments on a mixture of 170k-hour Mandarin and English datasets show that UME: 1) surpasses the pretrained baseline by a margin of 11.9% relative error rate reduction while maintaining comparable latency; 2) reduces training time by up to 86.7% and achieves superior accuracy compared to training models of the same size from scratch.
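To make the upcycling step concrete, below is a minimal PyTorch-style sketch of the core idea described in the abstract: each expert in the new MoE layer is initialized from the pretrained dense feed-forward weights, while only the router is randomly initialized. The class names (`FeedForward`, `UpcycledMoELayer`), the top-k routing, and the Switch-Transformer-style auxiliary balancing loss are illustrative assumptions, not the paper's exact implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeedForward(nn.Module):
    """A standard Transformer/Conformer feed-forward block (assumed pretrained)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_out(F.relu(self.w_in(x)))


class UpcycledMoELayer(nn.Module):
    """MoE layer whose experts are all initialized from one pretrained dense FFN."""

    def __init__(self, pretrained_ffn: FeedForward, num_experts: int, top_k: int = 2):
        super().__init__()
        d_model = pretrained_ffn.w_in.in_features
        # Each expert starts as an exact copy of the dense checkpoint's FFN,
        # so the expanded model begins from the dense model's solution.
        self.experts = nn.ModuleList(
            [copy.deepcopy(pretrained_ffn) for _ in range(num_experts)]
        )
        # The router is new and therefore randomly initialized.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (batch, time, d_model); route each frame to its top-k experts.
        logits = self.router(x)                              # (B, T, E)
        probs = F.softmax(logits, dim=-1)
        topk_p, topk_i = probs.topk(self.top_k, dim=-1)
        topk_p = topk_p / topk_p.sum(dim=-1, keepdim=True)   # renormalize weights

        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_i[..., k] == e                   # frames routed to expert e
                if mask.any():
                    out[mask] += topk_p[..., k][mask].unsqueeze(-1) * expert(x[mask])

        # Generic load-balancing auxiliary loss (Switch-Transformer style):
        # fraction of top-1 frames per expert times mean router probability per expert.
        top1_assign = topk_i[..., 0].flatten()
        frac = torch.bincount(top1_assign, minlength=len(self.experts)).float()
        frac = frac / frac.sum()
        mean_prob = probs.mean(dim=(0, 1))
        aux_loss = len(self.experts) * torch.sum(frac * mean_prob)
        return out, aux_loss
```

In the paper, continued training additionally relies on layer freezing and expert balancing strategies; the sketch only shows a generic auxiliary balancing loss to indicate where such a term would attach.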
Core research question: Can we efficiently scale up models by reusing existing small ASR models as an optimal starting point, thereby reducing training overhead without significantly impacting the Real-Time Factor (RTF)?
Upcycling pretrained ASR models for scaling up typically involves two main applications:
- One has access to an existing ASR model and aims to enhance its performance by upscaling it to a larger size.
- One plans to train a large ASR model but is hindered by high training costs and tuning complexity. An alternative is to first train a small model and then upcycle it to a larger one once the small model saturates (see Whisper: Robust Speech Recognition via Large-Scale Weak Supervision).
MoE for ASR - Related Work
…research on shared embedding networks has improved expert routing mechanisms [19]-[21]. To enhance multilingual ASR performance, language-based routing has been further examined [22]-[24]. To simplify model architecture and improve scalability, Hu et al. [25] introduced a Conformer MoE model for multilingual ASR, notably without using shared embeddings. More recently, Song et al. [26] presented a unified MoE model that integrates streaming and non-streaming capabilities, achieving consistent latency levels when scaling a 200M-dense model to a 1B-MoE variant. However, the primary focus of existing research has been developing novel MoE architectures, which often incur high costs when training models from scratch. In contrast, our proposed UME approach emphasizes efficient scaling by upcycling pretrained, smaller dense checkpoints into larger MoE models.