Let Me Ask You This: How Can a Voice Assistant Elicit Explicit User Feedback?


Voice assistants offer users access to an increasing variety of personalized functionalities. The researchers and engineers who build these experiences rely on various signals from users to create the machine learning models powering them. One such signal is explicit in-situ feedback. While collecting explicit in-situ user feedback via voice assistants would help improve and inspect the underlying models, from a user's perspective it can be disruptive to the overall experience, and users might not feel compelled to respond. However, careful design can help alleviate this friction. In this paper, we explore the opportunities and the design space for voice assistant feedback elicitation. First, we present four usage categories of explicit in-situ feedback for model evaluation and improvement, derived from interviews with machine learning practitioners. Then, using realistic scenarios generated for each category and based on examples from the interviews, we conducted an online study to evaluate multiple voice assistant designs. Our results reveal that when the voice assistant is framed as a learner or a collaborator, users are more willing to respond to its request for feedback and perceive the experience as less disruptive. In addition, giving users instructions on how to initiate feedback themselves can reduce the perceived disruptiveness compared to asking users for feedback directly in the form of a question. Based on our findings, we discuss the implications and potential future directions for designing voice assistants that elicit user feedback for personalized voice experiences.


June 2023 | ICASSP

Contrastive Learning-based Audio to Lyrics Alignment for Multiple Languages

Simon Durand, Daniel Stoller, Sebastian Ewert

May 2023 | CHI

Minimizing change aversion through mixed methods research: a case study of redesigning Spotify’s Your Library

Ingrid Pettersson, Carl Fredriksson, Raha Dadgar, John Richardson, Lisa Shields, Duncan McKenzie

March 2023 | CLeaR - Causal Learning and Reasoning

Non-parametric identifiability and sensitivity analysis of synthetic control models

Jakob Zeitler, Athanasios Vlontzos, Ciarán Mark Gilligan-Lee