Title: Acoustic BPE for Speech Generation with Discrete Tokens
Authors: Feiyu Shen, Yiwei Guo, Chenpeng Du, Xie Chen, Kai Yu
Published: 23 October 2023
Link: http://arxiv.org/abs/2310.14580v4

Abstract

Discrete audio tokens derived from self-supervised learning models have gained widespread usage in speech generation. However, the current practice of directly utilizing audio tokens poses challenges for sequence modeling due to the length of the token sequence. Additionally, this approach places the burden on the model to establish correlations between tokens, further complicating the modeling process. To address this issue, we propose acoustic BPE which encodes frequent audio token patterns by utilizing byte-pair encoding. Acoustic BPE effectively reduces the sequence length and leverages the prior morphological information present in the token sequence, which alleviates the modeling challenges of token correlation. Through comprehensive investigations on a speech language model trained with acoustic BPE, we confirm the notable advantages it offers, including faster inference and improved syntax capturing capabilities. In addition, we propose a novel rescore method to select the optimal synthetic speech among multiple candidates generated by a rich-diversity TTS system. Experiments prove that rescore selection aligns closely with human preference, which highlights acoustic BPE's potential for other speech generation tasks.


  • They do classic BPE on DSUs, i.e. exactly what we proposed to do with BPE (my thesis), except that they represent each token with a (presumably single-codepoint) Chinese Unicode character and, IIRC, use HuBERT's final layer and a smaller codebook size (see sketch 1 at the end of these notes).
  • They don't look at ASR, though; they focus instead on "syntax capturing":
    • if I understood correctly, this is tested by scrambling the words in a sentence, generating the scrambled text synthetically with TTS, and then classifying these nonsense utterances against the correct, grammatical ones (sketch 2 below).
  • Questions / things I still need to understand:
    1. their rescoring method (some kind of decoding/reranking thing, like what the people in the Sardine lab work on a lot?); sketch 3 below is my guess; and
    2. the entropy informativeness point they make (sketch 4 below is how I would probe it).
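
Sketch 1, a minimal reconstruction of the acoustic BPE pipeline as I understand it (not their released code): map each discrete unit ID to a single CJK codepoint, so an off-the-shelf BPE trainer treats every unit as one character. The codebook size, BPE vocab size, and the toy corpus are all my assumptions.

```python
import random
from tokenizers import Tokenizer, models, trainers

CJK_BASE = 0x4E00      # CJK Unified Ideographs block; one codepoint per unit ID
CODEBOOK_SIZE = 500    # assumed size of the k-means codebook over HuBERT features

def units_to_chars(units):
    """Map discrete unit IDs (0..K-1) to a string, one CJK character per unit."""
    return "".join(chr(CJK_BASE + u) for u in units)

def chars_to_units(text):
    """Invert the mapping back to unit IDs."""
    return [ord(c) - CJK_BASE for c in text]

# Stand-in corpus: random units with frame-like repetition, to mimic the
# consecutive duplicates that real HuBERT unit streams tend to have.
random.seed(0)
def fake_utterance(n=50):
    units = []
    for u in random.choices(range(CODEBOOK_SIZE), k=n):
        units.extend([u] * random.randint(1, 3))
    return units_to_chars(units)

corpus = [fake_utterance() for _ in range(1000)]

# Train BPE over the character-mapped corpus; trainer stops early if the
# requested vocab size cannot be reached.
tokenizer = Tokenizer(models.BPE())
tokenizer.train_from_iterator(
    corpus, trainer=trainers.BpeTrainer(vocab_size=2000, show_progress=False)
)

enc = tokenizer.encode(corpus[0])
print(f"{len(corpus[0])} units -> {len(enc.tokens)} aBPE tokens")
```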
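
Sketch 2, my reading of the syntax-capturing probe: encode the grammatical utterance and its word-scrambled TTS counterpart into acoustic BPE tokens, then check which one the speech LM assigns the higher length-normalised log-likelihood. The `lm` interface here is a placeholder (an HF-style causal LM returning `.logits`).

```python
import torch

def sequence_logprob(lm, token_ids):
    """Total log-probability of a token sequence under an autoregressive LM
    (assumed HF-style: lm(input_ids).logits with shape (batch, time, vocab))."""
    ids = torch.tensor([token_ids])
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[:, :-1], dim=-1)  # predict token t+1 from prefix
    targets = ids[:, 1:].unsqueeze(-1)
    return logp.gather(-1, targets).sum().item()

def prefers_grammatical(lm, good_ids, scrambled_ids):
    """True if the LM scores the grammatical utterance higher (length-normalised,
    so the shorter sequence does not win by default)."""
    good = sequence_logprob(lm, good_ids) / len(good_ids)
    bad = sequence_logprob(lm, scrambled_ids) / len(scrambled_ids)
    return good > bad
```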
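
Sketch 3, my guess at the rescoring method (to be checked against the paper): draw several candidates from a high-diversity TTS system, score each with the speech LM over its acoustic BPE tokens, and keep the best. `tts_sample` and `encode_abpe` are hypothetical stand-ins; `sequence_logprob` is the helper from sketch 2.

```python
def rescore_select(text, lm, tts_sample, encode_abpe, n_candidates=8):
    """Return the TTS candidate the speech LM scores highest.
    tts_sample(text) -> waveform, one stochastic draw from a diverse TTS system;
    encode_abpe(wav) -> acoustic BPE token ids (waveform -> DSUs -> aBPE).
    Both are hypothetical stand-ins for whatever the paper actually uses."""
    best_wav, best_score = None, float("-inf")
    for _ in range(n_candidates):
        wav = tts_sample(text)
        ids = encode_abpe(wav)
        score = sequence_logprob(lm, ids) / len(ids)  # length-normalised LM score
        if score > best_score:
            best_wav, best_score = wav, score
    return best_wav
```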
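
Sketch 4, how I would sanity-check the entropy/informativeness point: compare the empirical unigram entropy per token of raw DSU streams against acoustic BPE streams; if each BPE token packs in more information, per-token entropy should rise. Purely my interpretation of the claim, not their measurement.

```python
import math
from collections import Counter

def unigram_entropy(sequences):
    """Empirical unigram entropy in bits per token over a list of token sequences."""
    counts = Counter(tok for seq in sequences for tok in seq)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# usage, with dsu_seqs / abpe_seqs as lists of token-ID sequences:
#   unigram_entropy(dsu_seqs)   # bits/token before BPE
#   unigram_entropy(abpe_seqs)  # bits/token after BPE (expect this to be higher)
```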