Learning-Rate-Free Optimisation on the Space of Probability Measures
RCLW03 - Accelerating statistical inference and experimental design with machine learning

The task of sampling from a target probability distribution whose density is only known up to a normalisation constant is of fundamental importance to computational statistics and machine learning. There are various popular approaches to this task, including Markov chain Monte Carlo (MCMC) and variational inference (VI). More recently, there has been growing interest in developing hybrid sampling methods which combine the non-parametric nature of MCMC with the parametric approach used in VI. In particular, particle-based variational inference (ParVI) methods approximate the target distribution using an ensemble of interacting particles, which are deterministically updated by iteratively minimising a metric such as the Kullback-Leibler divergence. Unfortunately, such methods invariably depend on hyperparameters such as the learning rate, which must be carefully tuned by practitioners in order to ensure convergence to the target distribution at a suitable rate.

In this talk, we introduce a suite of new sampling algorithms which are entirely learning-rate free. Our approach leverages the perspective of sampling as an optimisation problem over the space of probability measures, together with existing ideas from convex optimisation. We discuss how to establish the convergence of our algorithms under assumptions on the target distribution. We then illustrate the performance of our approach on a range of numerical examples, including several high-dimensional models and datasets, demonstrating performance comparable to existing state-of-the-art sampling algorithms, but with no need to tune a learning rate. Finally, we discuss how our approach can be adapted to two related problems: (i) sampling on constrained domains, and (ii) inference and learning in latent variable models.
This talk is part of the Isaac Newton Institute Seminar Series.