Shift-Invariant Kernel Additive Modelling for Audio Source Separation

Abstract

A major goal in blind source separation is to model the inherent characteristics of sources in order to identify and separate them. While most state-of-the-art approaches are supervised methods trained on large datasets, interest in non-data-driven approaches such as Kernel Additive Modelling (KAM) remains high due to their interpretability and adaptability. KAM separates a given source by applying robust statistics to the time-frequency bins selected by a source-specific kernel function, commonly the K-NN function. This choice assumes that the source of interest repeats in both time and frequency. In practice, this assumption does not always hold. Therefore, we introduce a shift-invariant kernel function capable of identifying similar spectral content even under frequency shifts. This considerably increases the amount of suitable sound material available to the robust statistics. While this improves separation performance, a basic formulation is computationally expensive. We therefore additionally present acceleration techniques that lower the overall computational complexity.
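To make the idea concrete, below is a minimal sketch (not the paper's implementation) of a KAM-style median filter with a shift-invariant neighbour selection: for each spectrogram frame, candidate frames are compared under a range of frequency shifts, the best-matching shift is kept, and the robust statistic (here an element-wise median) is taken over the aligned K nearest neighbours. The function name, parameters, and the simple brute-force search are illustrative assumptions; the quadratic cost over frames also reflects why a basic formulation is expensive.

```python
import numpy as np

def shift_invariant_knn_median(V, k=10, max_shift=8):
    """Illustrative KAM-style estimate of a repeating source.

    V: non-negative magnitude spectrogram, shape (n_freq, n_frames).
    For each frame, find the k nearest-neighbour frames where similarity
    is taken as the best match over frequency shifts in [-max_shift, max_shift],
    align the neighbours to that frame, and take the element-wise median.
    """
    n_freq, n_frames = V.shape
    estimate = np.zeros_like(V)

    for t in range(n_frames):
        dists = np.full(n_frames, np.inf)
        best_shift = np.zeros(n_frames, dtype=int)
        for u in range(n_frames):
            # Shift-invariant distance: best over all tested frequency shifts.
            for s in range(-max_shift, max_shift + 1):
                d = np.linalg.norm(V[:, t] - np.roll(V[:, u], s))
                if d < dists[u]:
                    dists[u] = d
                    best_shift[u] = s
        dists[t] = np.inf  # exclude the frame itself
        neighbours = np.argsort(dists)[:k]
        # Align the selected neighbours before applying the robust statistic.
        aligned = np.stack([np.roll(V[:, u], best_shift[u]) for u in neighbours], axis=1)
        estimate[:, t] = np.median(aligned, axis=1)

    # The estimated source magnitude should not exceed the mixture magnitude.
    return np.minimum(estimate, V)
```

In practice such an estimate would typically be turned into a soft time-frequency mask (e.g. estimate / (V + eps)) and applied to the complex mixture spectrogram before inverting back to the time domain.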

Related

July 2020 | IJCAI - International Joint Conference on Artificial Intelligence

Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling

Daniel Stoller, Mi Tian, Sebastian Ewert, and Simon Dixon

July 2020 | WCCI/IJCNN - IEEE World Congress on Computational Intelligence / International Joint Conference on Neural Networks

Using a Neural Network Codec Approximation Loss to Improve Source Separation Performance in Limited Capacity Networks

Ishwarya Ananthabhotla, Sebastian Ewert, and Joseph A. Paradiso