
Workshop on Stein's Method in Machine Learning and Statistics

International Conference on Machine Learning 2019

Date: Saturday 15th June 2019.

Location: Long Beach Convention Center - Room 104A.

Outline

Stein's method is a technique from probability theory for bounding the distance between probability measures using differential and difference operators. Although the method was initially designed as a technique for proving central limit theorems, it has recently caught the attention of the machine learning (ML) community and has been used for a variety of practical tasks. Recent applications include goodness-of-fit testing, generative modeling, global non-convex optimisation, variational inference, de novo sampling, constructing powerful control variates for Monte Carlo variance reduction, and measuring the quality of Markov chain Monte Carlo algorithms.
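As a flavour of the kind of computation several of these applications build on, the sketch below estimates a (squared) kernel Stein discrepancy between a set of samples and a target density known only through its score function. This is an illustrative sketch, not code from any of the talks: the function name, the RBF kernel choice, and the bandwidth are our own assumptions.

```python
import numpy as np

def ksd_vstat(x, score, h=1.0):
    """V-statistic estimate of the squared kernel Stein discrepancy
    between the empirical distribution of 1-D samples x and a target
    density p, supplied via its score function s(x) = d/dx log p(x).
    Uses an RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    x = np.asarray(x, dtype=float)
    d = x[:, None] - x[None, :]            # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))         # kernel matrix k(x_i, x_j)
    s = score(x)                           # score at each sample
    dkx = -d / h**2 * k                    # d/dx k(x, y)
    dky = d / h**2 * k                     # d/dy k(x, y)
    dkxy = (1 / h**2 - d**2 / h**4) * k    # d^2/(dx dy) k(x, y)
    # Langevin Stein kernel: u_p(x, y) = s(x)s(y)k + s(x)dk/dy + s(y)dk/dx + d2k/dxdy
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * dky
         + s[None, :] * dkx
         + dkxy)
    return u.mean()

rng = np.random.default_rng(0)
score = lambda x: -x                       # score of a standard normal target
good = ksd_vstat(rng.normal(0, 1, 500), score)  # samples match the target
bad = ksd_vstat(rng.normal(2, 1, 500), score)   # samples are shifted
```

Because the target enters only through its score, the estimator applies to unnormalised models; samples far from the target (here, shifted by two standard deviations) yield a visibly larger discrepancy than samples drawn from the target itself.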

Although Stein's method has already had significant impact in ML, most of the applications only scratch the surface of this rich area of research in probability theory. Significant gains could be made by encouraging both communities to interact directly, and this workshop aims to facilitate this discussion.

The workshop will begin with an introduction to Stein's method that is accessible to machine learning researchers unfamiliar with the topic. It will then alternate between invited talks from machine learning researchers and from experts in Stein's method, highlighting both foundational topics and applications in machine learning and statistics. The workshop will also include a contributed poster session and will conclude with a panel discussion to elicit a concise summary of the state of the field.

Registration: Registration for this workshop is through the main ICML website. We encourage participants to register as soon as possible as places are limited and often fill up quickly.

Timetable

Speakers

List of Accepted Posters

  1. Stein's Method for Error Analysis of Multivariate Density Estimation using Models with Normalizing Flow.
  2. Stein methods for Robot Navigation.
  3. Estimation and Sampling of Unnormalized Statistical Models with Stein Score Matching.
  4. Stein’s Method for Policy Gradients Methods in Deep Reinforcement Learning.
  5. Spectral Estimators for Gradient Fields of Log-Densities.
  6. Relative Kernel Stein Discrepancy for Multiple Model Comparison.
  7. Stein Point Markov Chain Monte Carlo.
  8. Stein’s Lemma for the Reparameterization Trick with Gaussian Mixtures.
  9. A Stein-Papangelou Goodness-of-Fit Test for Point Processes.
  10. Adaptive MCMC via combining local samplers.
  11. To choose or not to choose the Prior. That’s the question!
  12. Stein Variational Online Changepoint Detection with Applications to Hawkes Processes and Neural Networks.
  13. Active Domain Randomization.
  14. Sobolev Descent.
  15. Yet Another Look at SVGD.
  16. Multi-Agent Learning Using Malliavin-Stein Variational Gradient Descent.
  17. Stein's method for computing inverse operators.
  18. Using Stein's method to find bounds for the multivariate normal approximation of the group sequential maximum likelihood estimator.

Organisation