Learning a large scale vocal similarity embedding for music

Abstract

This work describes an approach for modeling singing voice at scale by learning low-dimensional vocal embeddings from large collections of recorded music. We derive embeddings from different representations of the voice, using genre labels as supervision. We evaluate on both objective (ranked retrieval) and subjective (perceptual evaluation) tasks. We conclude with a summary of our ongoing effort to crowdsource vocal style tags to refine our model.
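To make the general recipe concrete, the following is a minimal sketch, not the authors' implementation: a small network maps per-track vocal features to a low-dimensional embedding, is trained with genre labels as supervision, and is then evaluated with ranked retrieval (cosine-similarity nearest neighbors). The feature and embedding dimensions, network layers, and synthetic data are illustrative assumptions.

```python
# Hedged sketch: low-dimensional vocal embedding trained with genre labels,
# evaluated by ranked retrieval. All sizes and data below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_TRACKS, FEAT_DIM, EMB_DIM, N_GENRES = 1024, 128, 32, 10  # assumed sizes

class VocalEmbedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FEAT_DIM, 256), nn.ReLU(),
            nn.Linear(256, EMB_DIM),                 # low-dimensional embedding layer
        )
        self.genre_head = nn.Linear(EMB_DIM, N_GENRES)  # genre labels supervise training

    def forward(self, x):
        z = self.encoder(x)
        return z, self.genre_head(z)

# Synthetic stand-ins for per-track vocal features and genre labels.
features = torch.randn(N_TRACKS, FEAT_DIM)
genres = torch.randint(0, N_GENRES, (N_TRACKS,))

model = VocalEmbedder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):                              # toy training loop
    z, logits = model(features)
    loss = loss_fn(logits, genres)
    opt.zero_grad(); loss.backward(); opt.step()

# Objective evaluation sketch: rank tracks by cosine similarity in the
# embedding space; a query's nearest neighbors should share its genre.
with torch.no_grad():
    emb = F.normalize(model(features)[0], dim=1)
    sims = emb @ emb.T                               # pairwise cosine similarities
    ranks = sims.argsort(dim=1, descending=True)[:, 1:]  # drop the self-match
    precision_at_5 = (genres[ranks[:, :5]] == genres[:, None]).float().mean()
    print(f"toy precision@5: {precision_at_5:.3f}")
```

In the paper's setting the embedding would be learned from audio representations of the singing voice rather than random features, but the structure of the evaluation (embed, rank by similarity, check label agreement) is the same.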
