Towards Fairness in Practice: A Practitioner-Oriented Rubric for Evaluating Fair ML Toolkits

Abstract

To support fairness-forward thinking by machine learning (ML) practitioners, fairness researchers have created toolkits that aim to transform state-of-the-art research contributions into easily accessible APIs. Despite these efforts, recent research points to a disconnect between practitioners' needs and the tools that fairness research offers. By engaging 20 ML practitioners in a simulated scenario in which they use fairness toolkits to make critical decisions, this work draws on practitioner feedback to inform recommendations for the design of fair ML toolkits. Our survey and interview results indicate that although fair ML toolkits strongly influence users' decision-making, much is left to be desired in how fairness results are presented and explained. To support the future development and evaluation of toolkits, this work offers a rubric that can be used to identify the critical components of fair ML toolkits.
