Ziang Xiao, Sarah Mennicken, Bernd Huber, Adam Shonkoff, Jennifer Thom
Learning a large scale vocal similarity embedding for music
This work describes an approach to modeling the singing voice at scale by learning low-dimensional vocal embeddings from large collections of recorded music. We derive embeddings for several representations of the voice using genre labels, and evaluate them on both an objective task (ranked retrieval) and a subjective one (perceptual evaluation). We conclude with a summary of our ongoing effort to crowdsource vocal style tags to refine our model.
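The ranked-retrieval evaluation mentioned above can be illustrated with a minimal sketch: given low-dimensional embeddings, rank a catalog of tracks by cosine similarity to a query embedding. The function name `rank_by_similarity` and the toy 3-dimensional embeddings below are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

def rank_by_similarity(query, catalog):
    """Return catalog row indices ranked by cosine similarity to the query.

    query:   1-D embedding vector.
    catalog: 2-D array, one embedding per row.
    """
    q = query / np.linalg.norm(query)
    c = catalog / np.linalg.norm(catalog, axis=1, keepdims=True)
    sims = c @ q                 # cosine similarity of each row to the query
    return np.argsort(-sims)     # indices sorted from most to least similar

# Toy example: four tracks with hypothetical 3-dimensional vocal embeddings.
catalog = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
query = np.array([1.0, 0.05, 0.0])
print(rank_by_similarity(query, catalog).tolist())  # → [0, 1, 2, 3]
```

In a real evaluation one would compare such rankings against ground-truth relevance (e.g. same artist or vocalist) with a metric like mean average precision.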
Judith Bütepage, Lucas Maystre, Mounia Lalmas
Brianna Richardson, Jean Garcia-Gathright, Samuel F. Way, Jennifer Thom, Henriette Cramer