Contrastive Learning-based Audio to Lyrics Alignment for Multiple Languages
Simon Durand, Daniel Stoller, Sebastian Ewert
The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.
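To make the subtasks named above more concrete, the following is a minimal, hypothetical Python sketch. It assumes the librosa library and is not the method of any paper listed here; it simply illustrates onset detection, beat and tempo tracking, and a crude stand-in for multipitch estimation on a single audio file.

```python
# Hypothetical illustration only (not from the papers listed here):
# a few AMT subtasks sketched with librosa.
import librosa

def rough_amt_features(audio_path):
    """Return onset times, beat times, tempo, and framewise pitch candidates."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)

    # Onset detection: times where note events are likely to begin.
    onset_frames = librosa.onset.onset_detect(y=y, sr=sr)
    onset_times = librosa.frames_to_time(onset_frames, sr=sr)

    # Beat and tempo tracking.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    # Crude proxy for multipitch estimation: salient spectral peaks per frame.
    pitches, magnitudes = librosa.piptrack(y=y, sr=sr)

    return onset_times, beat_times, tempo, pitches, magnitudes
```

A full transcription system would additionally need instrument recognition, note offset detection, and score typesetting, which this sketch does not attempt.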
M Iftekhar Tanveer, Diego Casabuena, Jussi Karlgren, Rosie Jones
Katariina Martikainen, Jussi Karlgren, Khiet Truong