Few-shot musical source separation

Abstract

Deep learning-based approaches to musical source separation are often limited to the instrument classes that the models are trained on and do not generalize to separate unseen instruments. To address this, we propose a few-shot musical source separation paradigm. We condition a generic U-Net source separation model using a few audio examples of the target instrument. We train a few-shot conditioning encoder jointly with the U-Net to encode the audio examples into a conditioning vector that configures the U-Net via feature-wise linear modulation (FiLM). We evaluate the trained models on real musical recordings in the MUSDB18 and MedleyDB datasets. We show that our proposed few-shot conditioning paradigm outperforms the baseline one-hot instrument-class conditioned model for both seen and unseen instruments. We further experiment with different conditioning example characteristics, including examples from different recordings, multi-sourced examples, and negative conditioning examples, to show the potential of applying the proposed few-shot approach to a wider variety of real-world scenarios.
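The FiLM conditioning mechanism mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the layer shapes, the linear projections producing gamma and beta, and the pooling of example embeddings into a single conditioning vector are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def film(features, cond, w_gamma, b_gamma, w_beta, b_beta):
    """Feature-wise linear modulation (FiLM) of one U-Net feature map.

    features: (channels, time, freq) activation from a U-Net layer
    cond:     (d,) conditioning vector, e.g. from a few-shot encoder
    gamma and beta are affine functions of cond, applied per channel.
    """
    gamma = w_gamma @ cond + b_gamma          # (channels,) scale
    beta = w_beta @ cond + b_beta             # (channels,) shift
    return gamma[:, None, None] * features + beta[:, None, None]

channels, d = 16, 32
features = rng.standard_normal((channels, 8, 8))

# Hypothetical few-shot conditioning: mean-pool per-example embeddings
example_embeddings = rng.standard_normal((4, d))  # 4 audio examples
cond = example_embeddings.mean(axis=0)

out = film(features, cond,
           rng.standard_normal((channels, d)), np.zeros(channels),
           rng.standard_normal((channels, d)), np.zeros(channels))
assert out.shape == features.shape
```

Because gamma and beta are broadcast over the time and frequency axes, the conditioning vector re-scales and re-shifts each channel of the feature map without changing its shape, which is what lets a single generic U-Net be steered toward different target instruments.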
