Workshop on Stein's Method in Machine Learning and Statistics
International Conference on Machine Learning 2019
Date: Fri. 14th or Sat. 15th June 2019.
Location: Long Beach Convention Center.
Stein's method is a technique from probability theory for bounding the distance between probability measures using differential and difference operators. Although the method was initially designed as a technique for proving central limit theorems, it has recently caught the attention of the machine learning (ML) community and has been used for a variety of practical tasks. Recent applications include goodness-of-fit testing, generative modeling, global non-convex optimisation, variational inference, de novo sampling, constructing powerful control variates for Monte Carlo variance reduction, and measuring the quality of Markov chain Monte Carlo algorithms.
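The differential operators mentioned above are known as Stein operators. The canonical example: a random variable X follows the standard normal distribution if and only if E[f'(X) − X f(X)] = 0 for all sufficiently regular f, and the size of this expectation under another distribution quantifies its departure from normality. As a minimal sketch (not taken from the workshop materials; the choice f = sin is purely illustrative), the identity can be checked by Monte Carlo:

```python
import math
import random

def stein_residual(x, f, f_prime):
    """Stein operator for the standard normal applied to f at x:
    (A f)(x) = f'(x) - x * f(x). Its expectation vanishes when X ~ N(0, 1)."""
    return f_prime(x) - x * f(x)

def monte_carlo_stein(n=200_000, seed=0):
    """Estimate E[f'(X) - X f(X)] for X ~ N(0, 1), with f = sin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        total += stein_residual(x, math.sin, math.cos)
    return total / n

if __name__ == "__main__":
    # The estimate should be close to zero, up to Monte Carlo error.
    print(f"Stein identity residual: {monte_carlo_stein():.4f}")
```

Replacing the Gaussian samples with draws from any other distribution would yield a residual bounded away from zero, which is the basic mechanism behind Stein-based goodness-of-fit tests and sample quality measures.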
Although Stein's method has already had significant impact in ML, most of the applications only scratch the surface of this rich area of research in probability theory. Significant gains could be made by encouraging both communities to interact directly, and this workshop aims to facilitate this discussion.
The workshop will begin with an introduction to Stein's method that is accessible to researchers in machine learning who are unfamiliar with the topic. The remainder of the day will alternate between invited talks from machine learning researchers and experts in Stein's method, highlighting both foundational topics and applications in machine learning and statistics. The workshop will also include a session for contributed posters and will conclude with a panel discussion to elicit a concise summary of the state of the field.
Call for Contributed Posters
The workshop is looking for contributed posters on all aspects of Stein's method, including foundational work and its application to any topic relating to machine learning or statistics. Submissions can either present novel work or be based on recently published papers. We particularly welcome submissions that aim to spark discussion on novel uses of Stein's method in machine learning, statistics and related topics.
To contribute to the workshop, please send a half-page abstract in PDF format to steinworkshop [at] gmail.com. Submissions will be reviewed by a panel consisting of the workshop organisers, and accepted based on their relevance to the workshop as described above.
This workshop aims to be inclusive, and we therefore particularly encourage submissions from under-represented communities.
Note that four complimentary workshop registrations are available. Please indicate whether you would like to be considered for these when submitting your proposal. Complimentary registrations will be prioritised for early-career researchers.
Deadline for submissions: 10th May 2019
Accepted posters announced: 17th May 2019
Registration for this workshop is through the main ICML website. We encourage participants to register as soon as possible as places are limited and often fill up quickly.
- Anima Anandkumar (California Institute of Technology, US). Anima is Bren Professor in the Department of Computing and Mathematical Sciences at the California Institute of Technology and Director of Machine Learning Research at NVIDIA. She has written several papers establishing connections between Stein's method and both discriminative learning and tensor methods.
- Lawrence Carin (Duke University, US). Lawrence is James L. Meriam Distinguished Professor and Vice-Provost for Research at Duke University. Amongst other contributions, Lawrence has contributed methodology using Stein's method in the context of variational autoencoders.
- Louis Chen (National University of Singapore, Singapore). Louis is Emeritus Professor in the Department of Mathematics at the National University of Singapore. In addition to his dozens of papers on Stein's method, he has co-authored a book and edited two others on the topic.
- Andrew Duncan (Imperial College London, UK). Andrew is RAEng Assistant Professor in the Department of Mathematics at Imperial College London and group leader for the Data-Centric Engineering Programme at the Alan Turing Institute. Andrew's research applies Stein's method to a variety of problems in ML, including assessing the convergence of MCMC samplers and learning models with unnormalised likelihoods.
- Arthur Gretton (University College London, UK). Arthur is Professor at the Gatsby Computational Neuroscience Unit at University College London. He is the author of several papers at the intersection of kernel methods and Stein's method for hypothesis testing, and received a NeurIPS 2017 Best Paper award for a novel approach to goodness-of-fit testing using Stein's method.
- Susan Holmes (Stanford University, US). Susan is Professor in the Statistics Department at Stanford University. She has contributed several papers on Stein's method including novel methodology and connections to Markov chain theory, as well as applications to the bootstrap.
- Francois-Xavier Briol (University of Cambridge, UK). Francois-Xavier is a research associate in the Department of Engineering at the University of Cambridge and a visiting researcher within the Data-Centric Engineering programme at the Alan Turing Institute. His research focuses on inference and computation for probabilistic models, and he has worked on applications of Stein's method to Bayesian computation.
- Lester Mackey (Microsoft Research, US). Lester is a Researcher at Microsoft Research New England and an Adjunct Professor of Statistics at Stanford University. He has been actively developing Stein's method tools for a variety of problems in machine learning and probabilistic inference including global non-convex optimization, de novo sampling, hypothesis testing, causal inference, and Markov chain Monte Carlo parameter tuning and sampler selection.
- Chris J. Oates (Newcastle University, UK). Chris holds the Chair in Statistics at Newcastle University and is a group leader for the Data-Centric Engineering programme at the Alan Turing Institute. His research interests include using Stein's method for posterior computation in the Bayesian statistical context.
- Qiang Liu (UT Austin, US). Qiang is an associate professor of computer science at UT Austin. His research focuses on probabilistic learning and approximate inference, including applications of Stein's method to algorithmic challenges in approximate inference and learning.
- Larry Goldstein (University of Southern California, US). Larry is Professor of Mathematics at the University of Southern California. His research interests center on Stein's method for distributional approximation and its applications in high-dimensional statistical contexts. He is a co-author of the Springer monograph 'Normal Approximation by Stein's Method'.