Learning a large scale vocal similarity embedding for music

Abstract

This work describes an approach for modeling the singing voice at scale by learning low-dimensional vocal embeddings from large collections of recorded music. We derive embeddings for several representations of the voice, trained with genre labels, and evaluate them on both an objective task (ranked retrieval) and a subjective task (perceptual evaluation). We conclude with a summary of our ongoing effort to crowdsource vocal style tags to refine our model.
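
As a rough illustration of the ranked-retrieval evaluation mentioned above, the sketch below ranks a catalog of tracks by cosine similarity to a query track's vocal embedding and scores the result against shared genre labels. The embedding dimensionality, catalog size, and label set here are placeholders for illustration only, not the actual setup used in this work.

import numpy as np

def rank_by_similarity(query, catalog):
    # Return catalog indices sorted by cosine similarity to the query embedding.
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    return np.argsort(-(c @ q))

def precision_at_k(query_label, ranked_labels, k=10):
    # Fraction of the top-k retrieved tracks that share the query's genre label.
    return float(np.mean(ranked_labels[:k] == query_label))

# Toy data: 500 tracks with hypothetical 32-dimensional vocal embeddings
# and one of 5 genre labels each.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 32))
labels = rng.integers(0, 5, size=500)

ranking = rank_by_similarity(embeddings[0], embeddings[1:])
print(precision_at_k(labels[0], labels[1:][ranking], k=10))

In practice the retrieval quality would be summarized over many queries (for example as mean precision at k or mean average precision) rather than for a single track as shown here.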
