Singing Voice Separation with Deep U-Net Convolutional Networks

Abstract

The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources. We propose a novel application of the U-Net architecture — initially developed for medical imaging — to the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. Experiments combining quantitative evaluation with subjective assessment demonstrate that the proposed algorithm achieves state-of-the-art performance.
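As a minimal illustration of the separation step described above, the sketch below applies a soft ratio mask to a mixture magnitude spectrogram to obtain vocal and backing estimates. The mask would be produced by the U-Net in the proposed system; here it is simply an input, and the function name and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def separate_with_mask(mix_mag, vocal_mask):
    """Split a mixture magnitude spectrogram into vocal and backing
    estimates using a soft ratio mask with values in [0, 1].

    In a U-Net-based separator, `vocal_mask` would be the network's
    output for the mixture; this sketch only shows the masking step.
    """
    vocal_mask = np.clip(vocal_mask, 0.0, 1.0)   # keep the mask in [0, 1]
    vocals = vocal_mask * mix_mag                # vocal magnitude estimate
    backing = (1.0 - vocal_mask) * mix_mag       # complementary backing estimate
    return vocals, backing

# Toy example: a 2x2 "spectrogram" and an arbitrary mask.
mix = np.array([[2.0, 4.0],
                [6.0, 8.0]])
mask = np.array([[0.5, 0.25],
                 [1.0, 0.0]])
vocals, backing = separate_with_mask(mix, mask)
```

Because the two masks are complementary, the estimates sum back to the mixture magnitude exactly; a time-domain signal would then be recovered by pairing each magnitude estimate with the mixture's phase.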
