Title: Towards Better Disentanglement in Non-Autoregressive Zero-Shot Expressive Voice Conversion
Authors: Seymanur Akti, Tuan Nam Nguyen, Alexander Waibel
Published: 4 June 2025
Link: http://arxiv.org/abs/2506.04013v1

Abstract

Expressive voice conversion aims to transfer both speaker identity and expressive attributes from a target speech to a given source speech. In this work, we improve upon a self-supervised, non-autoregressive framework based on a conditional variational autoencoder, focusing on reducing source timbre leakage and improving linguistic-acoustic disentanglement for better style transfer. To minimize style leakage, we use multilingual discrete speech units for content representation and reinforce the embeddings with an augmentation-based similarity loss and mix-style layer normalization. To enhance expressivity transfer, we incorporate local F0 information via cross-attention and extract style embeddings enriched with global pitch and energy features. Experiments show that our model outperforms baselines in emotion and speaker similarity, demonstrating superior style adaptation and reduced source style leakage.
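
The mix-style layer normalization mentioned in the abstract can be pictured with a short PyTorch-style sketch. The module below is a hypothetical illustration, not the authors' implementation: the class name, hyperparameters (`alpha`, `p`), and tensor layout are assumptions. It normalizes content features and then re-styles them with channel statistics randomly mixed across utterances in the batch, the general MixStyle idea for discouraging residual speaker-style cues in the content branch.

```python
import torch
import torch.nn as nn


class MixStyleLayerNorm(nn.Module):
    """Minimal sketch (assumed design, not the paper's code): layer-normalize
    content features, then perturb their per-utterance channel statistics by
    mixing them with those of a randomly chosen partner in the batch."""

    def __init__(self, channels: int, alpha: float = 0.1, p: float = 0.5):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.beta_dist = torch.distributions.Beta(alpha, alpha)
        self.p = p  # probability of applying statistic mixing during training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) content features
        x = self.norm(x)
        if not self.training or torch.rand(1).item() > self.p:
            return x
        # Per-utterance channel statistics (mean/std over time).
        mu = x.mean(dim=1, keepdim=True)
        sigma = x.std(dim=1, keepdim=True) + 1e-6
        # Mix each utterance's statistics with those of a shuffled partner.
        perm = torch.randperm(x.size(0), device=x.device)
        lam = self.beta_dist.sample((x.size(0), 1, 1)).to(x.device)
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sigma_mix = lam * sigma + (1 - lam) * sigma[perm]
        # Re-standardize, then apply the mixed style statistics.
        return (x - mu) / sigma * sigma_mix + mu_mix
```

In a setup like the one described, such a layer would sit inside the content encoder, so the decoder has to rely on the explicit style embedding (global pitch and energy features) rather than on speaker statistics leaking through the content path.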