Very Low Complexity Speech Synthesis Using Framewise Autoregressive GAN (FARGAN) with Pitch Prediction

Xiph.Org Foundation      Amazon Web Services

Abstract

Neural vocoders are now being used in a wide range of speech processing applications. In many of those applications, the vocoder can be the most complex component, so finding lower complexity algorithms can lead to significant practical benefits. In this work, we propose FARGAN, an autoregressive vocoder that takes advantage of long-term pitch prediction to synthesize high-quality speech in small subframes, without the need for teacher-forcing. Experimental results show that the proposed 600 MFLOPS FARGAN vocoder can achieve both higher quality and lower complexity than existing low-complexity vocoders. The quality even matches that of existing higher-complexity vocoders.
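
To make the framewise autoregressive idea concrete, the sketch below is a minimal, purely illustrative toy (not the FARGAN architecture itself; the module names, layer sizes, and the 40-sample subframe length are assumptions). Each subframe is generated from the acoustic features, the previously generated subframe, and a long-term pitch prediction copied one pitch period back in the already-generated signal, so the generator conditions on its own past output rather than on teacher-forced ground truth.

    # Illustrative sketch only, NOT the authors' implementation: a framewise
    # autoregressive generator that synthesizes one small subframe at a time,
    # conditioning on (a) acoustic features, (b) the previously generated
    # subframe, and (c) a long-term pitch prediction taken one pitch period
    # back in the signal generated so far. All sizes below are assumptions.
    import torch
    import torch.nn as nn

    SUBFRAME = 40          # assumed subframe length in samples (2.5 ms at 16 kHz)
    FEATURE_DIM = 20       # assumed per-frame acoustic feature dimension

    class ToySubframeGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # input: features + previous subframe + pitch-predicted subframe
            self.net = nn.Sequential(
                nn.Linear(FEATURE_DIM + 2 * SUBFRAME, 256),
                nn.Tanh(),
                nn.Linear(256, SUBFRAME),
                nn.Tanh(),
            )

        def forward(self, features, history, pitch_period):
            # features: (FEATURE_DIM,) conditioning for the current subframe
            # history:  (T,) samples generated so far
            # pitch_period: integer lag (in samples) for long-term prediction
            prev = history[-SUBFRAME:]
            # long-term pitch prediction: copy the subframe one period back
            start = history.shape[0] - pitch_period
            pitch_pred = history[start:start + SUBFRAME]
            x = torch.cat([features, prev, pitch_pred])
            return self.net(x)

    def synthesize(gen, frame_features, pitch_periods, subframes_per_frame=4):
        # zero history long enough to cover the largest pitch lag
        history = torch.zeros(max(pitch_periods) + SUBFRAME)
        for feat, period in zip(frame_features, pitch_periods):
            for _ in range(subframes_per_frame):
                sub = gen(feat, history, period)
                # autoregression: the new subframe becomes part of the history
                history = torch.cat([history, sub])
        return history

    # usage with random features, purely to show the calling convention
    gen = ToySubframeGenerator()
    feats = [torch.randn(FEATURE_DIM) for _ in range(5)]
    periods = [120] * 5
    audio = synthesize(gen, feats, periods)

The real generator is of course far richer; the point of the sketch is only how the pitch lag turns a short subframe history into useful long-term context for the autoregressive loop.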

Comparison Between Different Models (listening via headset is recommended)

  • Original (reference)
  • FARGAN, 600 MFLOPS (proposed)
  • HiFi-GAN (V1), 38.5 GFLOPS
  • CARGAN, 64.47 GFLOPS
  • Framewise WaveGAN, 1.2 GFLOPS
  • LPCNet, 2.8 GFLOPS
  • HiFi-GAN (V3), 2.8 GFLOPS

Ablations (listening via headset is recommended)

  • Original (reference)
  • FARGAN (baseline)
  • FARGAN w/o pitch prediction
  • FARGAN w/o autoregression

Out-of-domain Singing & Noisy Samples (listening via headset is recommended)

  • This part shows how FARGAN performs compared to other models when reconstructing out-of-domain samples such as singing and noisy speech.

  • None of the models were trained on singing or noisy speech datasets.

  • The original versions of the following demo samples were obtained from the freely available datasets described in the following references:

    • 4 singing voice items from the NUS-48E dataset:
      Zhiyan Duan, Haotian Fang, Bo Li, Khe Chai Sim and Ye Wang. “The NUS Sung and Spoken Lyrics Corpus: A Quantitative Comparison of Singing and Speech.” Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2013 (APSIPA ASC 2013).
    • 1 singing voice item from the demo page of the following paper:
      Takahashi, Naoya, Mayank Kumar, and Yuki Mitsufuji. "Hierarchical diffusion models for singing voice neural vocoder." ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023.
    • 5 noisy speech items from the dataset provided by Valentini et al.:
      Valentini-Botinhao, Cassia, et al. "Speech enhancement for a noise-robust text-to-speech synthesis system using deep recurrent neural networks." Interspeech 2016. 2016.
  • Compared models: Original (reference), FARGAN, HiFi-GAN (V1), CARGAN, LPCNet, HiFi-GAN (V3)

BibTeX


        @ARTICLE{10632624,
          author={Valin, Jean-Marc and Mustafa, Ahmed and Büthe, Jan},
          journal={IEEE Signal Processing Letters}, 
          title={Very Low Complexity Speech Synthesis Using Framewise Autoregressive GAN (FARGAN) With Pitch Prediction}, 
          year={2024},
          volume={31},
          pages={2115-2119},
          doi={10.1109/LSP.2024.3440956}}