Ahmed Mustafa, Jean-Marc Valin, Jan Büthe, Paris Smaragdis, Mike Goodwin
Amazon Web Services; University of Illinois at Urbana-Champaign
Abstract:
GAN vocoders are currently among the state-of-the-art methods for building high-quality neural waveform generative models. However, most of their architectures require dozens of billions of floating-point operations per second (GFLOPS) to generate speech waveforms in a samplewise manner. This makes GAN vocoders still challenging to run on normal CPUs without accelerators or parallel computers. In this work, we propose a new architecture for GAN vocoders that mainly depends on recurrent and fully-connected networks to directly generate the time-domain signal in a framewise manner. This results in a considerable reduction of the computational cost and enables very fast generation on both GPUs and low-complexity CPUs. Experimental results show that our Framewise-WaveGAN vocoder achieves significantly higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet at a very low complexity of 1.2 GFLOPS. This makes GAN vocoders more practical on edge and low-power devices.
Accepted at ICASSP 2023 (arXiv).
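The key architectural idea described in the abstract, generating an entire frame of samples per step from recurrent and fully-connected layers instead of one sample at a time, can be sketched roughly as follows. This is a minimal PyTorch-style illustration, not the authors' implementation; the feature dimension, hidden size, frame length, and layer choices are placeholder assumptions.

```python
import torch
import torch.nn as nn

class FramewiseGenerator(nn.Module):
    """Illustrative framewise vocoder generator: per-frame acoustic
    features -> recurrent state -> one whole frame of waveform samples."""

    def __init__(self, feat_dim=20, hidden_dim=256, frame_size=160):
        super().__init__()
        # A recurrent network conditions each frame on the past frames.
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Fully-connected layers map the recurrent state to a full frame
        # of time-domain samples at once (framewise, not samplewise).
        self.fc = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, frame_size),
            nn.Tanh(),
        )

    def forward(self, features):
        # features: (batch, n_frames, feat_dim)
        states, _ = self.gru(features)   # (batch, n_frames, hidden_dim)
        frames = self.fc(states)         # (batch, n_frames, frame_size)
        # Concatenate the frames into one waveform per batch element.
        return frames.flatten(start_dim=1)

# Example: 100 frames of 20-dim features -> 16000 samples
# (1 second at 16 kHz with 160-sample frames).
wav = FramewiseGenerator()(torch.randn(1, 100, 20))
print(wav.shape)  # torch.Size([1, 16000])
```

In the actual model such a generator would be trained adversarially against discriminators; the sketch only shows why framewise generation keeps the per-sample cost low.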
Before you listen:
- The normal voice samples in this demo are obtained from publicly available datasets:
  - The VCTK corpus, provided under a Creative Commons license.
  - The CMU ARCTIC database, © Carnegie Mellon University, 2003, All Rights Reserved (license).

  Neither dataset was used to train the vocoders in this demo.
- The singing voice samples in this demo are obtained from the official demo of this paper. We did not train the vocoders on any singing voice data; these samples are included only to show the pitch consistency of the two vocoders when generating unseen expressive speech.
Normal Voice Samples (listening with headphones is recommended):
| LPCNet 1.2 GFLOPS | Framewise WaveGAN 1.2 GFLOPS (Proposed) | Original |
|---|---|---|
| *(audio samples)* | *(audio samples)* | *(audio samples)* |
Singing Voice Samples:
| LPCNet 1.2 GFLOPS | Framewise WaveGAN 1.2 GFLOPS (Proposed) | Original |
|---|---|---|
| *(audio samples)* | *(audio samples)* | *(audio samples)* |