Few-shot musical source separation

Abstract

Deep learning-based approaches to musical source separation are often limited to the instrument classes the models are trained on and do not generalize to unseen instruments. To address this, we propose a few-shot musical source separation paradigm. We condition a generic U-Net source separation model using a few audio examples of the target instrument. We train a few-shot conditioning encoder jointly with the U-Net to encode the audio examples into a conditioning vector that configures the U-Net via feature-wise linear modulation (FiLM). We evaluate the trained models on real musical recordings in the MUSDB18 and MedleyDB datasets, and show that the proposed few-shot conditioning paradigm outperforms a baseline one-hot instrument-class-conditioned model for both seen and unseen instruments. We further experiment with conditioning examples of different characteristics, including examples from different recordings, multi-source examples, and negative conditioning examples, to show the potential of applying the proposed few-shot approach to a wider variety of real-world scenarios.
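To make the conditioning mechanism concrete, the following is a minimal PyTorch sketch of FiLM applied to a U-Net feature map. It is an illustration of the general FiLM technique under stated assumptions, not the paper's implementation: the module and variable names (FewShotFiLM, to_gamma, to_beta, encoder) are hypothetical, and the actual architecture details may differ.

    # Minimal sketch of FiLM-based few-shot conditioning (assumes PyTorch).
    # All names here are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class FewShotFiLM(nn.Module):
        """Maps a conditioning vector to per-channel scale (gamma) and
        shift (beta), then modulates a U-Net feature map."""
        def __init__(self, cond_dim: int, num_channels: int):
            super().__init__()
            self.to_gamma = nn.Linear(cond_dim, num_channels)
            self.to_beta = nn.Linear(cond_dim, num_channels)

        def forward(self, features: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            # features: (batch, channels, freq, time); cond: (batch, cond_dim)
            gamma = self.to_gamma(cond)[:, :, None, None]  # broadcast over freq/time
            beta = self.to_beta(cond)[:, :, None, None]
            return gamma * features + beta

    # In the few-shot setting, the conditioning vector could be obtained by
    # averaging the jointly trained encoder's embeddings of the few target-
    # instrument examples (a hypothetical 'encoder', shown for illustration):
    # cond = encoder(examples).mean(dim=0, keepdim=True)

In this sketch, FiLM scales and shifts each channel of an intermediate U-Net feature map according to the conditioning vector, which is how a single generic separation network can be steered toward different target sources without retraining.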
