Self-supervised Multi-view Learning for Disentangled Music Audio Representations
Abstract
Self-supervised learning (SSL) offers a powerful way to learn robust, generalizable representations without labeled data. In music, where labeled data is scarce, existing SSL methods typically use generated supervision and multi-view redundancy to create pretext tasks. However, these approaches often produce entangled representations and lose view-specific information. We propose a novel self-supervised multi-view learning framework for audio designed to incentivize separation between private and shared representation spaces. A case study on audio disentanglement in a controlled setting demonstrates the effectiveness of our method.
1 Introduction
SSL uses pretext tasks to uncover patterns from unlabeled data. In single-view SSL, a model learns from one perspective of the input via information restoration [1]. Multi-view SSL, however, utilizes distinct views to generate supervision, assuming shared information across views suffices for downstream tasks [2]. These methods align and contrast information across views for learning.
Recent multi-view SSL studies use contrastive learning to treat audio segments or augmentations as transformed views [3, 4]. However, these approaches neglect the intrinsic structure of music audio, producing representations in which attributes such as timbre, frequency, and amplitude remain entangled. They also focus only on the information shared across views, missing task-relevant, view-specific details.
Previous research has primarily focused on separating pitch and timbre to encode disentangled music representations, often designing dedicated encoders for each attribute. Some approaches train generative models with explicit supervision on pitch and timbre latents [5], while others reduce supervision by applying auxiliary metric-based regularization in the latent spaces [6, 7, 8, 9].
We propose a novel self-supervised multi-view learning framework for music audio, inspired by multi-view/multimodal disentanglement [10, 11], which explicitly separates shared and private representations. This approach preserves the uniqueness of each view while capturing common latent factors. As a case study, we tackle music audio disentanglement from a multi-view learning perspective with self-supervision. We validate our method using SynTone [12], a dataset with controlled variations in music attributes such as timbre and frequency.
2 Method
Our method, for the case of two views, is illustrated in the block diagram of our architecture. The model receives as input a pair of normalized log-mel spectrograms, denoted as $(x_1, x_2)$, which are characterized by identical timbre (waveform class) but distinct frequency. Factorized latents $z_1$ and $z_2$ are inferred from a shared encoder parameterized by $\phi$. To separate the private and shared spaces, we split each latent into $z_i^{p}$ and $z_i^{s}$, where $z_i^{p}$ are the private latents of each view and $z_i^{s}$ correspond to the private-shared latents. Following [13], we use a PoE-based consistency model to approximate the true posterior, so that we effectively make $q_\phi(z^{s} \mid x_1, x_2) \propto q_\phi(z_1^{s} \mid x_1)\, q_\phi(z_2^{s} \mid x_2)$. For every continuous latent variable $z$, we assume a diagonal Gaussian posterior $q_\phi(z \mid x) = \mathcal{N}\!\big(\mu_\phi(x), \operatorname{diag}(\sigma_\phi^2(x))\big)$.
We set the latent dimension to $d$ and sample from the parameterized distributions to arrive at three $d$-dimensional latent variables: $z_1^{p}$, $z_2^{p}$, and $z^{s}$, corresponding to the $x_1$-private, $x_2$-private, and shared representations, respectively. We concatenate each private latent with the shared latent as input to the shared decoder parameterized by $\theta$.
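A minimal PyTorch-style sketch of this forward pass is given below. The module names, MLP layer sizes, and the closed-form product of two diagonal Gaussian experts (without an explicit prior expert) are our own illustrative choices made for exposition; they are not the exact implementation.

```python
import torch
import torch.nn as nn

class MultiViewVAE(nn.Module):
    """Sketch: shared encoder/decoder with a private-shared latent split and PoE fusion."""

    def __init__(self, in_dim: int, latent_dim: int = 16, hidden: int = 512):
        super().__init__()
        # Shared encoder q_phi: mean and log-variance for [private | shared] latents.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 4 * latent_dim),
        )
        # Shared decoder p_theta: reconstructs a view from [private ; shared].
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Split each view's posterior parameters into private and shared parts.
        (mu_p, mu_s), (lv_p, lv_s) = mu.chunk(2, -1), logvar.chunk(2, -1)
        return (mu_p, lv_p), (mu_s, lv_s)

    @staticmethod
    def poe(mus, logvars):
        # Product of diagonal Gaussian experts: precision-weighted combination.
        precisions = [torch.exp(-lv) for lv in logvars]
        var = 1.0 / sum(precisions)
        mu = var * sum(m * p for m, p in zip(mus, precisions))
        return mu, torch.log(var)

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x1, x2):
        (mu_p1, lv_p1), (mu_s1, lv_s1) = self.encode(x1)
        (mu_p2, lv_p2), (mu_s2, lv_s2) = self.encode(x2)
        # Consistency: both views contribute to a single shared posterior.
        mu_s, lv_s = self.poe([mu_s1, mu_s2], [lv_s1, lv_s2])
        z_p1 = self.reparameterize(mu_p1, lv_p1)
        z_p2 = self.reparameterize(mu_p2, lv_p2)
        z_s = self.reparameterize(mu_s, lv_s)
        # Each view is decoded from its private latent concatenated with the shared one.
        x1_hat = self.decoder(torch.cat([z_p1, z_s], dim=-1))
        x2_hat = self.decoder(torch.cat([z_p2, z_s], dim=-1))
        return x1_hat, x2_hat
```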
Following DMVAE [10] and $\beta$-TCVAE [14], the tractable evidence lower bound of our model assumes the following form for each sample pair $(x_1, x_2)$ and views $i \in \{1, 2\}$ of the data:

$$\mathcal{L}(x_1, x_2) = \sum_{i=1}^{2} \mathbb{E}_{q_\phi(z_i^{p}, z^{s} \mid x_i)}\!\left[\log p_\theta(x_i \mid z_i^{p}, z^{s})\right] - D_{\mathrm{KL}}\!\left(q_\phi(z_i^{p}, z^{s} \mid x_i)\,\|\,p(z_i^{p}, z^{s})\right) \qquad (2)$$
The objective to maximize is, for each view $i$, the sum of the reconstruction accuracy compensated by the KL divergence. In particular, each KL term in Eq. (2) is decomposed into a mutual information term, a total correlation term, and a dimension-wise KL term, penalized by the respective weights $\alpha$, $\beta$, and $\gamma$.
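For reference, the decomposition of [14] that this relies on, with notation adapted to our setting (where $z = [z_i^{p}; z^{s}]$ collects a view's full latent, $z_j$ denotes its $j$-th dimension, and $x_i$ is treated as a uniformly sampled training example), averages the per-view KL term over the data and splits it exactly into the three penalized components:

$$\mathbb{E}_{p(x_i)}\!\left[D_{\mathrm{KL}}\!\left(q_\phi(z \mid x_i)\,\|\,p(z)\right)\right] = \underbrace{I_q(z; x_i)}_{\text{mutual information}} + \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z)\,\|\,\textstyle\prod_j q_\phi(z_j)\right)}_{\text{total correlation}} + \underbrace{\sum_j D_{\mathrm{KL}}\!\left(q_\phi(z_j)\,\|\,p(z_j)\right)}_{\text{dimension-wise KL}},$$

so that the penalty applied during training is $\alpha\,\mathrm{MI} + \beta\,\mathrm{TC} + \gamma\,\text{dimension-wise KL}$.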
3 Experimental Design
Data Synthesis: We generate a controlled dataset of 1-second audio samples and corresponding normalized log-mel spectrograms with varying timbre and frequency factors, similar to the data generation of SynTone [12]. We randomly pair log-mel spectrograms that share the same timbre but differ in frequency, incentivizing our model to capture timbre as shared information and frequency as view-specific information.
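The snippet below illustrates this pairing scheme. The specific waveform classes, sample rate, frequency range, and mel settings are assumptions chosen to mirror SynTone-style controlled synthesis, not our exact generator configuration.

```python
import numpy as np
import librosa
from scipy import signal

SR, DUR = 16000, 1.0                      # 1-second samples (sample rate assumed)
TIMBRES = ["sine", "square", "sawtooth"]  # waveform classes (illustrative)

def synth(timbre: str, freq: float) -> np.ndarray:
    """Generate a 1-second waveform of a given timbre (waveform class) and frequency."""
    t = np.linspace(0.0, DUR, int(SR * DUR), endpoint=False)
    if timbre == "sine":
        return np.sin(2 * np.pi * freq * t)
    if timbre == "square":
        return signal.square(2 * np.pi * freq * t)
    return signal.sawtooth(2 * np.pi * freq * t)

def log_mel(y: np.ndarray) -> np.ndarray:
    """Normalized log-mel spectrogram of a waveform."""
    m = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=64)
    m = librosa.power_to_db(m, ref=np.max)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

def sample_pair(rng: np.random.Generator):
    """Two views: same timbre (shared factor), different frequencies (private factor)."""
    timbre = rng.choice(TIMBRES)
    f1, f2 = rng.uniform(110.0, 880.0, size=2)   # frequency range assumed
    return log_mel(synth(timbre, f1)), log_mel(synth(timbre, f2))

rng = np.random.default_rng(0)
x1, x2 = sample_pair(rng)
```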
Model Training: We train our model for 100 epochs using the Adam optimizer. The penalty weights $\alpha$, $\beta$, and $\gamma$ on the decomposed KL divergence in Eq. (2) are set based on the ablation explained in Sec. 4.
Evaluation Design: We use mutual information matrices between the latents and the factors to evaluate how much information about each factor is contained in each subspace and within the latent dimensions. For downstream classification, we use our trained disentanglement model as a feature extractor and train a lightweight classifier to predict timbre and quantized frequency. We compare the downstream performance of different latents on both tasks to understand the information contained in each subspace.
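A sketch of this evaluation is shown below, assuming the latents and discrete factor labels have already been extracted into arrays; the use of scikit-learn's MI estimator and of logistic regression as the lightweight classifier are our own illustrative choices.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def mi_matrix(latents: np.ndarray, factors: np.ndarray) -> np.ndarray:
    """Mutual information between each latent dimension and each discrete factor.

    latents: (N, D) array of inferred latent means.
    factors: (N, F) array of discrete factor labels (e.g. timbre class,
             quantized frequency bin).
    Returns a (D, F) matrix of MI estimates.
    """
    return np.stack(
        [mutual_info_classif(latents, factors[:, f]) for f in range(factors.shape[1])],
        axis=1,
    )

def downstream_accuracy(latents: np.ndarray, labels: np.ndarray) -> float:
    """Train a lightweight classifier on frozen latents and report test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(latents, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Example: compare subspaces (z_private, z_shared assumed to be extracted features).
# acc_timbre_private = downstream_accuracy(z_private, timbre_labels)
# acc_timbre_shared  = downstream_accuracy(z_shared, timbre_labels)
```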
4 Results and Discussion
In Figure 1, we show that, in terms of the mutual information between factors and latent dimensions, our model clearly disentangles frequency information into the private subspaces and timbre information into the shared subspace. This observation supports our hypothesis that a multi-view framework, combined with a paired dataset containing shared and private information, can learn separate embedding subspaces pertaining to different factors.
Our downstream classification performance in Table 1 shows similar trends to the above: when using the private latent or a combination of the private and shared, our model performs very well at predicting frequency, and when using the shared latent or combination of both, we are able to classify timbre successfully.

Table 1: Downstream classification performance for timbre and quantized-frequency prediction using each latent subspace.

| Latent Used | Timbre | Frequency |
|---|---|---|
| Private | | |
| Shared | | |
| Both | | |
We also perform an ablation of the weights $\alpha$, $\beta$, and $\gamma$ on the decomposed KL-divergence terms in Eq. (2). Surprisingly, we found that higher weights on $\alpha$ (MI) and $\beta$ (TC) negatively affect private-shared disentanglement, while increasing $\gamma$ as a simple regularizer on each latent dimension has the biggest impact on improving performance.
5 Conclusion and Future Work
We present a novel architecture for disentangling view-specific and shared latent subspaces in a controlled audio setting. Our hypothesis holds: view-unique frequency information is encoded predominantly in the view-specific representations, while common timbre information is captured best in the shared latents. In the future, we will experiment with expanding the objective function to further improve subspace-level latent disentanglement, and we also plan to apply our method to real music samples.
References
- [1] R. Balestriero, M. Ibrahim, V. Sobal, A. S. Morcos, S. Shekhar, T. Goldstein, F. Bordes, A. Bardes, G. Mialon, Y. Tian, A. Schwarzschild, A. G. Wilson, J. Geiping, Q. Garrido, P. Fernandez, A. Bar, H. Pirsiavash, Y. LeCun, and M. Goldblum, “A cookbook of self-supervised learning,” arXiv preprint arXiv:2304.12210, 2023. [Online]. Available: https://api.semanticscholar.org/CorpusID:258298825
- [2] P. P. Liang, Z. Deng, M. Q. Ma, J. Y. Zou, L.-P. Morency, and R. Salakhutdinov, “Factorized contrastive learning: Going beyond multi-view redundancy,” Advances in Neural Information Processing Systems, vol. 36, 2024.
- [3] A. Saeed, D. Grangier, and N. Zeghidour, “Contrastive learning of general-purpose audio representations,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 3875–3879.
- [4] J. Spijkervet and J. A. Burgoyne, “Contrastive learning of musical representations,” in Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR 2021, Online, November 7-12, 2021, 2021.
- [5] Y. Luo, K. Agres, and D. Herremans, “Learning disentangled representations of timbre and pitch for musical instrument sounds using gaussian mixture variational autoencoders,” in Proceedings of the 20th International Society for Music Information Retrieval Conference, ISMIR 2019, Delft, The Netherlands, November 4-8, 2019, A. Flexer, G. Peeters, J. Urbano, and A. Volk, Eds., 2019, pp. 746–753. [Online]. Available: http://archives.ismir.net/ismir2019/paper/000091.pdf
- [6] Y.-J. Luo, K. W. Cheuk, T. Nakano, M. Goto, and D. Herremans, “Unsupervised disentanglement of pitch and timbre for isolated musical instrument sounds.” in ISMIR, 2020, pp. 700–707.
- [7] K. Tanaka, R. Nishikimi, Y. Bando, K. Yoshii, and S. Morishima, “Pitch-timbre disentanglement of musical instrument sounds based on vae-based metric learning,” in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 111–115.
- [8] Y.-J. Luo, S. Ewert, and S. Dixon, “Towards robust unsupervised disentanglement of sequential data – a case study using music audio,” 2022. [Online]. Available: https://arxiv.org/abs/2205.05871
- [9] ——, “Unsupervised pitch-timbre disentanglement of musical instruments using a jacobian disentangled sequential autoencoder,” in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 1036–1040.
- [10] M. Lee and V. Pavlovic, “Private-shared disentangled multimodal vae for learning of hybrid latent representations,” arXiv preprint arXiv:2012.13024, 2020.
- [11] J. Xu, Y. Ren, H. Tang, X. Pu, X. Zhu, M. Zeng, and L. He, “Multi-vae: Learning disentangled view-common and view-peculiar visual representations for multi-view clustering,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9214–9223.
- [12] Y. Brima, U. Krumnack, S. Pika, and G. Heidemann, “Learning disentangled audio representations through controlled synthesis,” 2024. [Online]. Available: https://arxiv.org/abs/2402.10547
- [13] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural computation, vol. 14, no. 8, pp. 1771–1800, 2002.
- [14] R. T. Chen, X. Li, R. B. Grosse, and D. K. Duvenaud, “Isolating sources of disentanglement in variational autoencoders,” Advances in neural information processing systems, vol. 31, 2018.