Deep Learning for Audio-Based Music Classification and Tagging

Abstract

Over the last decade, music-streaming services have grown dramatically. Pandora pioneered and popularized music streaming by successfully deploying the Music Genome Project [1] (https://www.pandora.com/about/mgp), which is based on human-annotated content analysis. Spotify, another major player, had a catalog of over 40 million songs and over 180 million users as of mid-2018 (https://press.spotify.com/us/about/), making it a leading music service provider worldwide. Technology giants such as Apple, Google, and Amazon have also been strengthening their music service platforms. Furthermore, artificial-intelligence speakers such as the Amazon Echo are gaining popularity, giving listeners a new and easily accessible way to listen to music.
