
Multi-GradSpeech: Towards Diffusion-based Multi-Speaker Text-to-speech Using Consistent Diffusion Models

Abstract

Although imperfect score matching causes a drift between the training and sampling distributions of diffusion models, recent advances in diffusion-based acoustic models have revolutionized data-sufficient single-speaker Text-to-Speech (TTS), with Grad-TTS being a prime example. However, the sampling drift problem causes these approaches to struggle in multi-speaker scenarios in practice, because the target data distribution is more complex than in the single-speaker case. In this paper, we present Multi-GradSpeech, a multi-speaker diffusion-based acoustic model that introduces the Consistent Diffusion Model (CDM) as its generative modeling approach. We enforce the consistency property of CDM during training to alleviate sampling drift at inference time, resulting in significant improvements in multi-speaker TTS performance. Our experimental results corroborate that the proposed approach improves the performance of the different speakers involved in multi-speaker TTS compared to Grad-TTS, even outperforming a fine-tuning approach. Audio samples are available at https://welkinyang.github.io/multi-gradspeech/

Index Terms—  Text-to-speech, multi-speaker modeling, diffusion models

1 Introduction

Text-to-Speech (TTS) systems that use deep learning rely on datasets with one or more speakers to train models that can produce high-quality speech. This approach consists of two key components: an acoustic model that converts text into frame-level acoustic features, such as mel-spectrograms, and a vocoder that converts these features into waveforms. The acoustic model plays an essential role in controlling factors such as prosody, speaker identity, and style, ultimately contributing to the naturalness and expressiveness of the generated speech. While it is straightforward to train a single-speaker acoustic model to generate the target speaker’s voice, supporting multiple speakers is critical both for building an economical and efficient multi-speaker TTS system and for providing pre-trained models that can be fine-tuned to improve the quality of the generated speech. At the same time, improving multi-speaker acoustic modeling remains a widely researched topic because the target data distribution is more complex in multi-speaker scenarios than in single-speaker scenarios.

Researchers have long sought to investigate generative modeling approaches [1, 2, 3, 4] to improve multi-speaker acoustic modeling. Auto-regressive models based on RNNs or Transformers have been used as generative acoustic models [5, 6, 7]. However, these models suffer from issues such as accumulated error and exposure bias [8, 9] during the inference phase, which can reduce the quality of multi-speaker acoustic modeling. Beyond this, some studies [10, 11, 12] have used non-autoregressive models optimized by L1 or L2 losses as acoustic models for multi-speaker acoustic modeling. However, these models often produce over-smoothed speech, resulting from the assumption that data noise is drawn from a Laplace or zero-mean Gaussian distribution [13], which may not be accurate. In recent years, generative adversarial networks (GANs) and normalizing flows (NFs) have been applied as generative modeling approaches to address the over-smoothing problem in multi-speaker acoustic modeling [14, 15].

In recent years, diffusion models have emerged as a potent class of generative networks, particularly in the domain of image generation. These models have also been applied to acoustic modeling, producing impressive speech synthesis quality, as reported in recent studies [16, 17, 18]. However, current efforts primarily focus on single-speaker scenarios, and the sampling drift problem of diffusion models [19] has limited the full potential of these approaches in multi-speaker TTS scenarios, where the target data distribution is more complex. Specifically, these approaches suffer from quality degradation when moving from single-speaker to multi-speaker scenarios even when ample data is available for the target speakers. Furthermore, over-smoothing persists in the generated results when target speaker data is scarce.

This paper proposes Multi-GradSpeech, a novel solution to address the challenges mentioned above. Multi-GradSpeech utilizes the Consistent Diffusion Model (CDM) [19] for generative modeling, which tackles the sampling drift problem during the inference phase to enhance the performance of diffusion-based multi-speaker acoustic modeling. Experiments conducted on a Mandarin multi-speaker dataset demonstrate that Multi-GradSpeech performs better than Grad-TTS [16] for diffusion-based multi-speaker modeling without the need for fine-tuning.

Fig. 1: The overall architecture of Multi-GradSpeech, where the dashed line appears only in training.

2 Background

Diffusion models are a class of generative models that gradually add noise to real data through a forward process, while a reverse process, which can be approximated with a neural network, removes noise starting from a simple distribution. In the context of stochastic differential equations (SDEs) [20], defining $p_{0}$ as the data distribution, the forward and backward processes are each driven by an SDE. The forward SDE is defined as:

dx_{t}=g(t)\,dB_{t},\quad x_{0}\sim p_{0},\quad x_{t}\sim N\left(x_{0},\sigma_{t}^{2}I_{d}\right).  (1)

and the backward SDE is defined as:

dx_{t}=-g(t)^{2}\nabla_{x}\log p\left(x_{t},t\right)dt+g(t)\,d\bar{B}_{t}  (2)

where $t\in[0,T]$, $\sigma_{t}$ is the noise schedule of the diffusion process, $B_{t}$ is a Brownian motion, $g(t)^{2}=\frac{d\sigma_{t}^{2}}{dt}$, and $p\left(x_{t},t\right)$ is the probability density of $x_{t}$. It is worth noting that for every backward SDE there exists a corresponding deterministic process satisfying the following ordinary differential equation (ODE):

dx_{t}=-\frac{1}{2}g(t)^{2}\nabla_{x}\log p\left(x_{t},t\right)dt.  (3)

Since this ODE and the corresponding backward SDE share identical marginal probability densities, we can generate samples by solving the ODE with a first-order Euler solver or other higher-order solvers. However, for both the backward SDE and the ODE, $\nabla_{x}\log p\left(x_{t},t\right)$, also known as the score function, is intractable. Several studies have proposed to approximate it with a time-conditional denoiser, that is, a network that predicts the true data $x_{0}$ from corrupted data $x_{t}$. Specifically, we aim to use a neural network to learn the function $h:\mathbb{R}^{d}\times[0,1]\rightarrow\mathbb{R}^{d}$:

h(x,t)=\mathbb{E}\left[x_{0}\mid x_{t}=x\right]  (4)

where the expectation is over $x_{0}$ obtained by running the backward process with the initial condition $x_{t}=x$. By leveraging Tweedie’s formula, we can establish the relationship between the score function and the denoiser $h$:

\nabla_{x}\log p(x,t)=\frac{h(x,t)-x}{\sigma_{t}^{2}}  (5)

Substituting Eq. (5) into Eq. (2) and Eq. (3), we can use the trained denoiser for sampling. The denoising score matching loss is commonly used to train $h$:

L_{DSM}=\mathbb{E}_{x_{0}\sim p_{0},\,x_{t}\sim N\left(x_{0},\sigma_{t}^{2}I_{d}\right)}\left\|h_{\theta}\left(x_{t},t\right)-x_{0}\right\|^{2},  (6)

where $h_{\theta}$ is a neural network with parameters $\theta$.
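
For concreteness, the denoising score matching objective in Eq. (6) can be sketched in PyTorch as follows; the denoiser `h_theta(x_t, t)` and the noise schedule callable `sigma(t)` are placeholders assumed for illustration, not the paper's implementation.

```python
# Minimal sketch of Eq. (6), assuming a PyTorch denoiser h_theta(x_t, t) and a
# callable noise schedule sigma(t); names and shapes are illustrative only.
import torch

def dsm_loss(h_theta, x0, sigma, T=1.0):
    """L_DSM: train the denoiser to predict clean data x0 from x_t = x0 + sigma_t * eps."""
    b = x0.shape[0]
    t = torch.rand(b, device=x0.device) * T                  # t ~ U[0, T]
    sig = sigma(t).view(b, *([1] * (x0.dim() - 1)))          # broadcast over feature dims
    x_t = x0 + sig * torch.randn_like(x0)                    # x_t ~ N(x0, sigma_t^2 I)
    return ((h_theta(x_t, t) - x0) ** 2).mean()
```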

Perfectly learning the real denoiser $h^{*}$ is generally unattainable, which leads to the sampling drift problem. Specifically, during the sampling stage $x_{t}$ is derived from an imperfect $h_{\theta}$, and each time step introduces a drift with respect to the true distribution $p^{*}_{t}$; the accumulated error ultimately leads to samples far from the true distribution. To mitigate sampling drift, Daras et al. proposed the Consistent Diffusion Model (CDM). In [19], a denoiser function $h$ is said to be consistent if for all $t\in[0,T]$ and all $x\in\mathbb{R}^{d}$,

h(x,t)=\mathbb{E}_{h}\left[x_{0}\mid x_{t}=x\right]  (7)

where the expectation is over $x_{0}$ obtained by solving Eq. (2) or Eq. (3) with the function $h$, starting at $x_{t}=x$. Further, a consistency loss is added to the original loss to enforce the consistency property of the denoiser:

L_{CDM}=\mathbb{E}_{x_{t}\sim p_{t}}\,\mathbb{E}_{t^{\prime}\sim\mathcal{U}[t-\epsilon,t]}\left\|\,\mathbb{E}_{\theta}\left[h_{\theta}\left(x_{t^{\prime}},t^{\prime}\right)\mid x_{t}=x\right]-h_{\theta}(x,t)\right\|^{2}/2  (8)

where the innermost expectation is over $x_{t^{\prime}}$. To reduce the computation time of this loss, $t^{\prime}$ is generally chosen to be close to $t$, and the backward process from $x_{t}$ to $x_{t^{\prime}}$ is discretized into a small number of steps.
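
The consistency loss can be sketched in the same style: the short backward run from $t$ to $t^{\prime}$ uses a first-order Euler–Maruyama discretization of Eq. (2), with the score obtained from the denoiser via Eq. (5). This is a hedged illustration, not the reference implementation of [19]; in particular, how gradients are handled along the simulated trajectory is left open.

```python
import torch

def cdm_loss(h_theta, x_t, t, sigma, g, eps=0.05, n_steps=6):
    """Sketch of Eq. (8): after a short backward SDE run from t to t' ~ U[t-eps, t],
    the prediction h_theta(x_t', t') should match h_theta(x_t, t)."""
    shape = (x_t.shape[0],) + (1,) * (x_t.dim() - 1)
    t_prime = t - eps * torch.rand_like(t)                   # t' ~ U[t - eps, t]
    target = h_theta(x_t, t)                                 # h_theta(x, t) in Eq. (8)

    # Discretized backward SDE (Eq. 2) from t down to t', score via Tweedie (Eq. 5).
    x, s = x_t, t
    dt = ((t - t_prime) / n_steps).view(shape)
    for _ in range(n_steps):
        score = (h_theta(x, s) - x) / sigma(s).view(shape) ** 2
        g_s = g(s).view(shape)
        x = x + g_s ** 2 * score * dt + g_s * dt.sqrt() * torch.randn_like(x)
        s = s - dt.view(-1)

    # Single-sample estimate of the inner expectation in Eq. (8); whether to detach
    # the trajectory or the target is an implementation choice not specified here.
    return 0.5 * ((h_theta(x, t_prime) - target) ** 2).mean()
```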

3 Multi-GradSpeech

Although diffusion-based acoustic models, such as Grad-TTS, show good results in single-speaker TTS when adequate data is available, their performance in multi-speaker TTS still has room for improvement. In this study, we propose a CDM-based multi-speaker acoustic modeling approach, Multi-GradSpeech, to enhance the performance of multi-speaker acoustic models. The proposed approach comprises three modules: a prior encoder, an aligner, and a CDM-based decoder (shown in Fig. 1). We describe Multi-GradSpeech’s training and inference in detail.

3.1 Training

During the training phase, Multi-GradSpeech is optimized with a prior loss, a duration loss, and diffusion-based losses. The architecture of Multi-GradSpeech is illustrated in Fig. 1. Multi-GradSpeech takes phonemes and a speaker id as inputs and encodes them with phoneme and speaker embeddings, respectively. The encoded inputs are fed into the prior encoder, which generates phoneme-level intermediate representations with the same number of channels as the real mel-spectrogram (denoted as $\mu$). The phoneme-level $\mu$ and the speaker embedding are then fed into the aligner to predict the duration and the frame-level $\mu$, and mean squared error (MSE) losses are computed between the predicted and real durations and between the frame-level $\mu$ and the real mel-spectrogram (denoted as $x_{0}$), respectively.

Next, we feed the corrupted mel-spectrogram $x_{t}$, the frame-level $\mu$, the time step (denoted as $t$), and the speaker embedding into the CDM-based decoder (denoted as $h_{\theta}$) to obtain $h_{\theta}(x_{t},t)$, where the conditioning inputs are omitted from the notation for simplicity. The denoiser $h_{\theta}$ is trained by computing the MSE loss between $x_{0}$ and $h_{\theta}(x_{t},t)$, which corresponds to $L_{DSM}$ in Eq. (6).

While the denoising score matching loss $L_{DSM}$ can optimize $h_{\theta}$, the sampling drift problem in the inference phase degrades the performance of diffusion-based acoustic models in multi-speaker scenarios. Therefore, we further incorporate the consistency loss $L_{CDM}$ of Eq. (8) to enforce the consistency property of $h_{\theta}$. Specifically, we first sample $t^{\prime}\sim\mathcal{U}(t-\epsilon,t)$, where $\epsilon$ determines how close $t^{\prime}$ is to $t$. Then, a first-order backward SDE is solved over several steps with $t$ as the starting point and $t^{\prime}$ as the end point to obtain $x_{t^{\prime}}$. Finally, $x_{t^{\prime}}$ and $t^{\prime}$ are fed into $h_{\theta}$ to obtain $h_{\theta}(x_{t^{\prime}},t^{\prime})$, and the MSE loss with $h_{\theta}(x_{t},t)$ is calculated. Thus, the final loss for Multi-GradSpeech is as follows, where $\lambda$ controls the scale of $L_{CDM}$:

L_{final}=L_{duration}+L_{prior}+L_{DSM}+\lambda\,\mathbb{E}_{t}\left[L_{CDM}\right]  (9)
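
The four terms combine directly; a trivial sketch is shown below, with $\lambda=2$ taken from the setting reported in Sec. 4.2 and the expectation over $t$ estimated by the time steps sampled for the current batch (e.g. via the `dsm_loss` and `cdm_loss` sketches above).

```python
def final_loss(l_duration, l_prior, l_dsm, l_cdm, lam=2.0):
    """Eq. (9): per-batch estimate of L_final; lam scales the consistency term."""
    return l_duration + l_prior + l_dsm + lam * l_cdm
```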

3.2 Inference

Similar to the training phase, Multi-GradSpeech first predicts the frame-level $\mu$, except that the duration is predicted by the duration predictor instead of using the real duration. Then $x_{0}$ is progressively recovered by solving the ODE in Eq. (3) with the stochastic solver proposed in [21]. Note that $\nabla_{x}\log p\left(x_{t},t\right)$ in Eq. (3) is approximated by $h_{\theta}\left(x_{t},t\right)$ through Eq. (5).
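
To illustrate how Eq. (3) and Eq. (5) combine at inference time, the following is a deterministic first-order (Euler) sketch; the paper instead uses the stochastic solver of [21] (see Sec. 4.2), and the starting point `x_T` is left to the caller since the initialization is not restated here.

```python
import torch

@torch.no_grad()
def sample_ode_euler(h_theta, x_T, sigma, g, T=1.0, n_steps=18):
    """Euler solve of the probability-flow ODE (Eq. 3), with the score recovered
    from the denoiser via Tweedie's formula (Eq. 5). A simplified stand-in for the
    stochastic solver of [21] that the paper actually uses."""
    b = x_T.shape[0]
    shape = (b,) + (1,) * (x_T.dim() - 1)
    x = x_T
    ts = torch.linspace(T, 0.0, n_steps + 1, device=x_T.device)
    for i in range(n_steps):
        t_cur = ts[i].expand(b)
        dt = ts[i] - ts[i + 1]                                # positive step toward t = 0
        score = (h_theta(x, t_cur) - x) / sigma(t_cur).view(shape) ** 2
        x = x + 0.5 * g(t_cur).view(shape) ** 2 * score * dt  # Euler step of Eq. (3)
    return x
```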

4 Experiments

4.1 Data Setups

In our study, we conducted experiments on a Mandarin Chinese dataset featuring multiple speakers. The dataset comprises 50 Mandarin Chinese speakers, each with between 500 and 10,000 paired audio and transcript files, amounting to a total of 123.73 hours. The original sampling rate of the data was 48 kHz, and we resampled it to 16 kHz. We extracted mel-spectrograms with a hop size of 200 and a window size of 800 for acoustic modeling. We used the Montreal Forced Aligner to obtain phoneme durations.
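
A hedged preprocessing sketch matching the stated setup (48 kHz audio resampled to 16 kHz, hop size 200, window size 800) is shown below; the number of mel bands (80) and the log compression are common defaults assumed here rather than values reported in the paper.

```python
import librosa
import numpy as np

def extract_mel(path, n_mels=80):
    """Load audio, resample to 16 kHz, and compute a log-mel spectrogram with
    hop 200 and window 800 (n_fft set equal to the window size)."""
    wav, sr = librosa.load(path, sr=None)                     # keep the original rate
    wav = librosa.resample(wav, orig_sr=sr, target_sr=16000)
    mel = librosa.feature.melspectrogram(
        y=wav, sr=16000, n_fft=800, hop_length=200, win_length=800, n_mels=n_mels)
    return np.log(np.clip(mel, 1e-5, None))                   # shape: [n_mels, frames]
```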

4.2 Implementation Details

Multi-GradSpeech uses the same architectural configurations for the prior encoder, duration predictor, and CDM-based decoder as Grad-TTS. Solving the first-order backward SDE from time $t$ to $t^{\prime}$ is necessary to compute $L_{CDM}$. Following [19], we take six steps in the backward SDE, with $\epsilon$ set to 0.05 and $\lambda$ set to 2 to impose the consistency constraint. In the inference phase, we set the parameters $S_{churn}$, $S_{min}$, $S_{max}$, and $S_{noise}$ of the stochastic sampler to 11, 0.05, 15, and 1.003, respectively, and take 18 backward steps. We use the modified RefineGAN proposed in [22] as the vocoder, except that we modified the parameters of the up-sampling network so that the output has a sampling rate of 48 kHz.
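
The stochastic sampler of [21] with the parameter values above can be sketched as follows (first-order variant; the second-order correction step of [21] is omitted for brevity). The callable `denoise(x, sigma)` stands in for $h_{\theta}$ evaluated at noise level $\sigma$, and `sigmas` is a decreasing noise schedule ending at 0 with 18 steps; both are assumptions made for illustration.

```python
import torch

@torch.no_grad()
def stochastic_sampler(denoise, x, sigmas, s_churn=11.0, s_min=0.05,
                       s_max=15.0, s_noise=1.003):
    """First-order version of the stochastic sampler in [21], using the S_* values
    from this section; sigmas is a decreasing list of noise levels ending in 0."""
    n = len(sigmas) - 1
    for i in range(n):
        sig, sig_next = sigmas[i], sigmas[i + 1]
        # "Churn": temporarily raise the noise level within [S_min, S_max].
        gamma = min(s_churn / n, 2 ** 0.5 - 1) if s_min <= sig <= s_max else 0.0
        sig_hat = sig * (1 + gamma)
        if gamma > 0:
            x = x + (sig_hat ** 2 - sig ** 2) ** 0.5 * s_noise * torch.randn_like(x)
        d = (x - denoise(x, sig_hat)) / sig_hat               # ODE slope dx/dsigma
        x = x + (sig_next - sig_hat) * d                      # Euler step to the next level
    return x
```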

4.3 Evaluation

4.3.1 Target speakers

In order to evaluate the effectiveness of Multi-GradSpeech, we selected two sets of speakers that differ greatly in the amount of available data. Each set includes one male and one female speaker to ensure gender balance. The first set, which we refer to as Speakers-S, has sufficient data for single-speaker acoustic modeling (9.09 and 10.93 hours). The second set, Speakers-I, has 1.47 and 1.13 hours of data, which is generally considered insufficient for single-speaker acoustic modeling. Our aim is to conduct a comparative analysis of Multi-GradSpeech’s performance on these two sets of speakers.

4.3.2 Metrics and comparison approaches

We conducted subjective and objective evaluations to measure the quality of samples generated by Multi-GradSpeech as compared to ground truth recordings and Grad-TTS. The subjective evaluation consisted of a Mean Opinion Score (MOS) and Similarity MOS (SMOS) test, while the objective evaluation used Word Error Rate (WER) as a metric.

Table 1: Subjective and objective evaluations on different speakers. MOS (↑) / SMOS (↑) / WER (↓) are reported for each set of target speakers.

Model | Training method | Speakers-S: MOS / SMOS / WER | Speakers-I: MOS / SMOS / WER
Grad-TTS | Single-speaker | 3.50±0.13 / 4.20±0.02 / 5.13 | 2.13±0.12 / 3.47±0.06 / 27.80
Grad-TTS | Multi-speaker | 3.35±0.12 / 4.16±0.02 / 5.14 | 2.73±0.12 / 3.80±0.05 / 8.94
Grad-TTS | Finetune | 3.43±0.13 / 4.19±0.02 / 6.02 | 2.80±0.11 / 3.84±0.05 / 9.07
Multi-GradSpeech | Single-speaker | 3.92±0.12 / 4.22±0.02 / 3.99 | 2.80±0.13 / 3.69±0.06 / 17.85
Multi-GradSpeech | Multi-speaker | 3.90±0.13 / 4.19±0.02 / 3.10 | 3.23±0.13 / 3.82±0.05 / 6.06
Multi-GradSpeech | Finetune | 3.91±0.12 / 4.21±0.02 / 3.75 | 2.98±0.14 / 3.74±0.05 / 8.57
Multi-GradSpeech (w/o $L_{CDM}$) | Multi-speaker | 3.88±0.12 / 4.18±0.02 / 3.34 | 2.94±0.14 / 3.72±0.06 / 6.57
Recording | – | MOS (↑): 4.66±0.12, WER (↓): 1.72

To evaluate MOS, we selected 20 sentences from the open-source Chinese Standard Mandarin Speech Corpus that were not part of the training data. The generated results were rated by 15 listeners on a scale of 1 to 5. For the WER evaluation, we employed a pre-trained ASR model from the FunASR framework [23]. To produce a multi-speaker version of Grad-TTS, we added speaker embeddings to its prior encoder, duration predictor, and decoder to control speaker identity. Additionally, we used the maximum likelihood SDE solver with 20 discretization steps for Grad-TTS. The experimental results are shown in Table 1.
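
For reference, WER can be computed from the ASR transcripts with a standard edit-distance routine such as the self-contained sketch below; the pre-trained FunASR recognizer that produces the hypotheses is not reproduced here, and whether tokens are words or characters for Mandarin is a choice of the evaluation setup rather than something stated in the paper.

```python
def word_error_rate(reference_tokens, hypothesis_tokens):
    """Edit distance (substitutions + deletions + insertions) over reference length."""
    r, h = list(reference_tokens), list(hypothesis_tokens)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                                           # delete all reference tokens
    for j in range(len(h) + 1):
        d[0][j] = j                                           # insert all hypothesis tokens
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                    # deletion
                          d[i][j - 1] + 1,                    # insertion
                          d[i - 1][j - 1] + cost)             # substitution
    return d[len(r)][len(h)] / max(len(r), 1)
```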

Fig. 2: Visualizations of generated mel-spectrograms by different approaches.

4.3.3 Analysis of subjective and objective results

In this study, we first trained single-speaker Grad-TTS and Multi-GradSpeech models using data from Speakers-I and Speakers-S. The single-speaker versions of both models demonstrated strong performance on both subjective and objective metrics when tested on Speakers-S. However, we observed a significant drop in performance when testing on Speakers-I, suggesting that the amount of data available for training an individual speaker model was insufficient. The Multi-GradSpeech model outperformed Grad-TTS on all metrics, indicating its superior modeling capabilities.

We then trained multi-speaker versions of both models using the whole dataset and observed the performance on the two sets of speakers. We found that, on Speakers-S, Grad-TTS showed a slight decrease in MOS and no change in WER, while Multi-GradSpeech maintained its MOS and achieved a significant decrease in WER. Moreover, both models showed significant improvements in MOS and WER on Speakers-I, indicating the benefits of multi-speaker modeling.

We also fine-tuned the multi-speaker models for the four target speakers, freezing the parameters of the prior encoder and updating the other parts, but observed no improvements in performance. While this may be due to the need for more careful choices of fine-tuning hyperparameters and epochs, it suggests that Multi-GradSpeech can build a strong multi-speaker TTS system without fine-tuning. Notably, all versions of both models demonstrated high speaker-similarity performance on Speakers-S, although SMOS values were lower on Speakers-I.

Finally, we trained a version of Multi-GradSpeech without the consistency constraint (i.e., removing $L_{CDM}$) to demonstrate its importance. The resulting model exhibited noticeable performance degradation across all metrics, highlighting the crucial role of CDM in multi-speaker TTS.

4.3.4 Analysis of visualization results

In order to uncover the reasons for the disparate performance of certain approaches in the subjective and objective experiments, we examined mel-spectrograms generated by these approaches for a test case, shown in Fig. 2. Specifically, we selected individual speakers from Speakers-S and Speakers-I, which we denote as Speaker-S and Speaker-I. We synthesized this test case using both the single-speaker and multi-speaker versions of Grad-TTS and Multi-GradSpeech. Initially, we compared the results generated by Grad-TTS on Speaker-S, noting that there was no significant difference between the single-speaker and multi-speaker versions, with the exception of a small section in the red box. We discovered that the silence within this part had been over-smoothed in the multi-speaker version, giving it a metallic and unnatural sound. This helps explain why the WER did not decrease when Grad-TTS switched from the single-speaker version to the multi-speaker version on Speakers-S, while the MOS value decreased.

By contrast, when we applied Grad-TTS to Speaker-I and compared the single-speaker and multi-speaker versions, we found that the multi-speaker version did lead to more complete articulation. However, the lack of spectral detail in the overall sound persisted, leading to an over-smoothed result. This discrepancy explains why there was a significant decrease in WER when Grad-TTS transitioned from the single-speaker version to the multi-speaker version on Speakers-I, while the MOS value was still only about 2.7.

Finally, when considering the single-speaker and multi-speaker versions of Multi-GradSpeech on Speaker-S, no significant differences were observed. Furthermore, we found that the over-smoothing problem did not appear in the multi-speaker version of Multi-GradSpeech on Speaker-I, and adequate detail was present in the middle and high frequencies.

5 Conclusion and Future Work

We introduce Multi-GradSpeech, a diffusion-based acoustic model for multi-speaker modeling. By utilizing Consistent Diffusion Models, we demonstrate significant improvements in our approach compared to Grad-TTS. We anticipate that Multi-GradSpeech will serve as an alternative to prior diffusion-based acoustic models, particularly in large-scale TTS.

References

  • [1] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu, “Pixel recurrent neural networks,” in International conference on machine learning. PMLR, 2016, pp. 1747–1756.
  • [2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio, “Generative adversarial nets,” in NIPS 2014.
  • [3] Danilo Jimenez Rezende and Shakir Mohamed, “Variational inference with normalizing flows,” in ICML 2015.
  • [4] Jonathan Ho, Ajay Jain, and Pieter Abbeel, “Denoising diffusion probabilistic models,” in NeurIPS 2020.
  • [5] Yuxuan Wang, R. J. Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc V. Le, Yannis Agiomyrgiannakis, Rob Clark, and Rif A. Saurous, “Tacotron: Towards end-to-end speech synthesis,” in Interspeech 2017.
  • [6] Mingjian Chen, Xu Tan, Yi Ren, Jin Xu, Hao Sun, Sheng Zhao, and Tao Qin, “Multispeech: Multi-speaker text to speech with transformer,” in Interspeech 2020.
  • [7] Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, and Dong Yu, “DurIAN: Duration Informed Attention Network for Speech Synthesis,” in Proc. Interspeech 2020.
  • [8] Haohan Guo, Frank K. Soong, Lei He, and Lei Xie, “A new gan-based end-to-end TTS training algorithm,” in Interspeech 2019.
  • [9] Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, and Haizhou Li, “Teacher-student training for robust tacotron-based TTS,” in ICASSP 2020.
  • [10] Chung-Ming Chien, Jheng-Hao Lin, Chien-yu Huang, Po-Chun Hsu, and Hung-yi Lee, “Investigating on incorporating pretrained and learnable speaker representations for multi-speaker multi-style text-to-speech,” in ICASSP 2021.
  • [11] Song Li, Beibei Ouyang, Lin Li, and Qingyang Hong, “Light-tts: Lightweight multi-speaker multi-lingual text-to-speech,” in ICASSP 2021.
  • [12] Song Li, Beibei Ouyang, Lin Li, and Qingyang Hong, “Lightspeech: Lightweight non-autoregressive multi-speaker text-to-speech,” in SLT 2021.
  • [13] Ahmad Reza Heravi and Ghosheh Abed Hodtani, “Where does minimum error entropy outperform minimum mean square error? A new and closer look,” IEEE Access.
  • [14] Jinhyeok Yang, Jae-Sung Bae, Taejun Bak, Young-Ik Kim, and Hoon-Young Cho, “Ganspeech: Adversarial training for high-fidelity multi-speaker speech synthesis,” in Interspeech 2021.
  • [15] Jaehyeon Kim, Sungwon Kim, Jungil Kong, and Sungroh Yoon, “Glow-tts: A generative flow for text-to-speech via monotonic alignment search,” in NeurIPS 2020.
  • [16] Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail A. Kudinov, “Grad-tts: A diffusion probabilistic model for text-to-speech,” in ICML 2021, Marina Meila and Tong Zhang, Eds.
  • [17] Myeonghun Jeong, Hyeongju Kim, Sung Jun Cheon, Byoung Jin Choi, and Nam Soo Kim, “Diff-TTS: A Denoising Diffusion Model for Text-to-Speech,” in Proc. Interspeech 2021, 2021, pp. 3605–3609.
  • [18] Songxiang Liu, Dan Su, and Dong Yu, “Diffgan-tts: High-fidelity and efficient text-to-speech with denoising diffusion gans,” arXiv preprint arXiv:2201.11972, 2022.
  • [19] Giannis Daras, Yuval Dagan, Alexandros G. Dimakis, and Constantinos Daskalakis, “Consistent diffusion models: Mitigating sampling drift by learning to be consistent,” CoRR, vol. abs/2302.09057, 2023.
  • [20] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole, “Score-based generative modeling through stochastic differential equations,” in ICLR 2021.
  • [21] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine, “Elucidating the design space of diffusion-based generative models,” in Proc. NeurIPS, 2022.
  • [22] Heyang Xue, Xinsheng Wang, Yongmao Zhang, Lei Xie, Pengcheng Zhu, and Mengxiao Bi, “Learn2sing 2.0: Diffusion and mutual information-based target speaker SVS by learning from singing teacher,” in Interspeech 2022, Hanseok Ko and John H. L. Hansen, Eds.
  • [23] Zhifu Gao, Zerui Li, Jiaming Wang, Haoneng Luo, Xian Shi, Mengzhe Chen, Yabin Li, Lingyun Zuo, Zhihao Du, Zhangyu Xiao, and Shiliang Zhang, “Funasr: A fundamental end-to-end speech recognition toolkit,” in INTERSPEECH, 2023.