Classification Of Spontaneous And Scripted Speech For Multilingual Audio
Shahar Elisha, Andrew McDowell, Mariano Beguerisse-Díaz, Emmanouil Benetos
Spotify listeners use language to express their needs, whether by typing queries or speaking the names of songs they would like to hear. Songs and podcasts also contain language that we can understand, classify, and match to listener interests. We conduct research on all aspects of language technologies applicable to audio streaming, helping Spotify understand listeners through conversational, multilingual, and interactive systems. We also learn the semantics of audio content and creators from language descriptions, including a knowledge graph of entities, ensuring our methods are scalable and include approaches to developing and maintaining shared vocabularies and ontologies. Our research ranges from computational linguistics, natural language processing, and speech applications to machine learning applied to all aspects of language.