Neural Music Synthesis for Flexible Timbre Control

Abstract

The recent success of raw audio waveform synthesis models like WaveNet motivates a new approach for music synthesis, in which the entire process — creating audio samples from a score and instrument information — is modeled using generative neural networks. This paper describes a neural music synthesis model with flexible timbre controls, which consists of a recurrent neural network conditioned on a learned instrument embedding followed by a WaveNet vocoder. The learned embedding space successfully captures the diverse variations in timbres within a large dataset and enables timbre control and morphing by interpolating between instruments in the embedding space. The synthesis quality is evaluated both numerically and perceptually, and an interactive web demo is presented.
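To make the conditioning and morphing ideas concrete, here is a minimal PyTorch sketch, not the paper's implementation: the `Mel2MelSketch` class name, the layer sizes, the FiLM-style scale-and-shift conditioning, the 2-dimensional embedding, and the 88-pitch piano-roll input are all assumptions for illustration. The score-to-spectrogram stage is shown end to end; the WaveNet vocoder that renders the predicted mel spectrogram to audio is only indicated in a comment.

```python
import torch
import torch.nn as nn

class Mel2MelSketch(nn.Module):
    """Score-to-spectrogram model conditioned on a learned instrument embedding.

    Hypothetical sketch: layer sizes and the FiLM-style conditioning are
    assumptions, not the paper's exact architecture.
    """

    def __init__(self, num_instruments, num_pitches=88, embed_dim=2,
                 hidden_dim=256, num_mels=80):
        super().__init__()
        self.instrument = nn.Embedding(num_instruments, embed_dim)
        # Project the embedding to a per-channel scale and shift (FiLM-style).
        self.film = nn.Linear(embed_dim, 2 * hidden_dim)
        self.rnn = nn.LSTM(num_pitches, hidden_dim, batch_first=True)
        self.to_mel = nn.Linear(hidden_dim, num_mels)

    def forward(self, piano_roll, instrument_ids=None, embedding=None):
        # piano_roll: (batch, frames, num_pitches). Conditioning comes from
        # either an instrument id or an explicit embedding vector, so that
        # interpolated embeddings can be fed in directly for morphing.
        if embedding is None:
            embedding = self.instrument(instrument_ids)
        h, _ = self.rnn(piano_roll)
        scale, shift = self.film(embedding).unsqueeze(1).chunk(2, dim=-1)
        h = h * scale + shift
        return self.to_mel(h)  # predicted mel spectrogram frames


model = Mel2MelSketch(num_instruments=10)
roll = torch.zeros(1, 400, 88)            # a 400-frame piano roll
mel = model(roll, torch.tensor([3]))      # synthesize as instrument 3

# Timbre morphing: interpolate between two instrument embeddings.
z = 0.5 * (model.instrument(torch.tensor([3])) +
           model.instrument(torch.tensor([7])))
mel_morphed = model(roll, embedding=z)
# A pretrained WaveNet vocoder would then render the mel frames to audio.
```

Accepting a raw embedding vector in `forward` is the key design point: because timbre is represented as a point in a continuous space, anything between two learned instruments is also a valid conditioning signal, which is what enables the morphing described in the abstract.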
