Title: HiFi-GAN: High-Fidelity Denoising and Dereverberation Based on Speech Deep Features in Adversarial Networks
Authors: Jiaqi Su, Zeyu Jin, Adam Finkelstein
Published: 10 June 2020
Link: http://arxiv.org/abs/2006.05694v2

Abstract

Real-world audio recordings are often degraded by factors such as noise, reverberation, and equalization distortion. This paper introduces HiFi-GAN, a deep learning method to transform recorded speech to sound as though it had been recorded in a studio. We use an end-to-end feed-forward WaveNet architecture, trained with multi-scale adversarial discriminators in both the time domain and the time-frequency domain. It relies on the deep feature matching losses of the discriminators to improve the perceptual quality of enhanced speech. The proposed model generalizes well to new speakers, new speech content, and new environments. It significantly outperforms state-of-the-art baseline methods in both objective and subjective experiments.
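The "deep feature matching losses" mentioned in the abstract are typically computed as a distance between the discriminator's intermediate activations for real vs. enhanced audio, summed over layers. As a minimal sketch (not the authors' code; the function name, plain-list activations, and L1 distance are assumptions for illustration):

```python
# Hedged sketch of a feature-matching loss: mean absolute difference
# between per-layer discriminator activations for real vs. enhanced
# (generated) audio, summed over layers. Real implementations would
# operate on framework tensors; plain lists are used here to keep the
# example self-contained.

def feature_matching_loss(real_feats, fake_feats):
    """real_feats, fake_feats: lists of per-layer activations,
    each a flat list of floats from one discriminator layer."""
    total = 0.0
    for real_layer, fake_layer in zip(real_feats, fake_feats):
        # Mean L1 distance within this layer's activations.
        layer_dist = sum(abs(r - f) for r, f in zip(real_layer, fake_layer))
        total += layer_dist / len(real_layer)
    return total
```

In training, this loss would be added to the adversarial loss so the generator is pushed to match the discriminator's internal representations of clean studio speech, not just to fool its final real/fake output.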


This is the paper from Princeton and Adobe that is confusingly also named “HiFi-GAN”; it should not be confused with the other HiFi-GAN, “HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis” from Kakao Enterprise. This paper appeared on arXiv in June 2020, roughly four months before the Kakao paper, which was first posted in October 2020.

This paper appeared at Interspeech 2020.