Model Selection for Production System via Automated Online Experiments


A challenge that machine learning practitioners in industry face is selecting the best model to deploy in production. Because a model is often an intermediate component of a production system, online controlled experiments such as A/B tests yield the most reliable estimate of the effectiveness of the whole system, but can only compare two or a few models due to budget constraints. We propose an automated online experimentation mechanism that can efficiently perform model selection from a large pool of models with a small number of online experiments. We derive the probability distribution of the metric of interest, which captures model uncertainty, from a Bayesian surrogate model trained on historical logs. Our method efficiently identifies the best model by sequentially selecting and deploying models from the candidate set in a way that balances exploration and exploitation. Using simulations based on real data, we demonstrate the effectiveness of our method on two different tasks.
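The sequential select-and-deploy loop described in the abstract can be illustrated with a Thompson-sampling-style sketch: a Gaussian-process surrogate over the candidate models gives a posterior over the online metric, and each round deploys the model whose sampled metric is highest. This is a minimal toy illustration, not the paper's actual method; the one-dimensional model features, the RBF kernel, and the synthetic `true_metric` function are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate pool: each model summarized by a 1-D feature
# (e.g. a statistic computed from historical logs).
features = np.linspace(0.0, 1.0, 20)
true_metric = np.sin(3.0 * features)  # unknown online metric (toy stand-in)


def rbf(a, b, lengthscale=0.2):
    """Squared-exponential kernel between two 1-D feature arrays."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)


def gp_posterior(x_obs, y_obs, x_all, noise=1e-2):
    """Posterior mean and covariance of a zero-mean GP at x_all."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_all, x_obs)
    Kss = rbf(x_all, x_all)
    mean = Ks @ np.linalg.solve(K, y_obs)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov


# Thompson sampling: each round, sample one metric function from the
# posterior and deploy the model that maximizes the sample. The sampled
# draw naturally trades off exploration (high variance) against
# exploitation (high mean).
deployed, observed = [], []
for t in range(8):
    if deployed:
        mean, cov = gp_posterior(features[deployed], np.array(observed), features)
        sample = rng.multivariate_normal(mean, cov + 1e-6 * np.eye(len(features)))
    else:
        sample = rng.standard_normal(len(features))  # no data yet: pick at random
    pick = int(np.argmax(sample))
    deployed.append(pick)
    # "Online experiment": observe the metric for the deployed model, with noise.
    observed.append(true_metric[pick] + 0.01 * rng.standard_normal())

best = deployed[int(np.argmax(observed))]
print(f"deployed models: {deployed}, best so far: {best}")
```

In this sketch each loop iteration stands in for one online experiment; with a large candidate pool, the surrogate lets most models be ruled out without ever deploying them.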


October 2021 | CSCW

Let Me Ask You This: How Can a Voice Assistant Elicit Explicit User Feedback?

Ziang Xiao, Sarah Mennicken, Bernd Huber, Adam Shonkoff, Jennifer Thom

September 2021 | ECML-PKDD

Gaussian Process Encoders: VAEs with Reliable Latent-Space Uncertainty

Judith Bütepage, Lucas Maystre, Mounia Lalmas

May 2021 | CHI

Towards Fairness in Practice: A Practitioner-Oriented Rubric for Evaluating Fair ML Toolkits

Brianna Richardson, Jean Garcia-Gathright, Samuel F. Way, Jennifer Thom, Henriette Cramer