Consumption-based approaches in proactive detection for content moderation
Shahar Elisha, John N. Pougué-Biyong, Mariano Beguerisse-Díaz
Potential negative outcomes of machine learning and algorithmic bias have gained deserved attention. However, there are still relatively few standardized processes for assessing and addressing algorithmic bias in industry practice, and practical tools that integrate into engineers’ workflows are needed. As a case study, we present two tooling efforts intended to help teams address algorithmic bias in practice, both of which aim to increase understanding of data, models, and outcome-measurement decisions. We describe the development of 1) a prototype checklist based on frameworks from the existing literature, and 2) dashboards for quantitatively assessing outcomes at scale. We share the technical and organizational lessons learned about checklist perceptions, data challenges, and interpretation pitfalls.
Amar Ashar, Karim Ginena, Maria Cipollone, Renata Barreto, Henriette Cramer