Singing Voice Separation with Deep U-Net Convolutional Networks

Abstract

The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources. We propose a novel application of the U-Net architecture, initially developed for medical imaging, to the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. Through both quantitative evaluation and subjective assessment, experiments demonstrate that the proposed algorithm achieves state-of-the-art performance.
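As a rough illustration of the spectrogram-masking formulation this line of work is built on, the sketch below assumes the network predicts a soft ratio mask over the mixture magnitude spectrogram, and that both source estimates reuse the mixture phase. The `separate` function and the uniform toy mask are illustrative stand-ins, not the paper's trained model.

```python
import numpy as np

def separate(mixture_mag, mixture_phase, mask):
    """Apply a soft mask (values in [0, 1]), as a U-Net-style model
    might predict, to the mixture magnitude spectrogram, then
    reattach the mixture phase to obtain complex spectrograms."""
    vocal_mag = mask * mixture_mag             # estimated vocal magnitude
    backing_mag = (1.0 - mask) * mixture_mag   # remainder = backing track
    vocal = vocal_mag * np.exp(1j * mixture_phase)
    backing = backing_mag * np.exp(1j * mixture_phase)
    return vocal, backing

# Toy example: a random 4x4 "spectrogram" with a uniform 0.5 mask.
rng = np.random.default_rng(0)
mag = rng.random((4, 4))
phase = rng.uniform(-np.pi, np.pi, size=(4, 4))
mask = np.full((4, 4), 0.5)
vocal, backing = separate(mag, phase, mask)

# With complementary masks, the two estimates sum back to the mixture.
assert np.allclose(vocal + backing, mag * np.exp(1j * phase))
```

In practice the mask would come from the trained network and the complex estimates would be inverted with an inverse STFT to recover waveforms.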
