Learning a large scale vocal similarity embedding for music

Abstract

This work describes an approach to modeling the singing voice at scale by learning low-dimensional vocal embeddings from large collections of recorded music. We derive embeddings for different representations of the voice using genre labels, and we evaluate them on both objective (ranked retrieval) and subjective (perceptual evaluation) tasks. We conclude with a summary of our ongoing effort to crowdsource vocal style tags to refine our model.
