How quickly does the Gibbs sampler converge for log-concave distributions?

Thursday, November 7, 2024 - 11:00 to 12:00
Hugo Lavenant, Assistant Professor, Department of Decision Sciences, Bocconi University
Statistics Seminar
ESB 4192 / Zoom

To join this seminar virtually, please request Zoom connection details from ea [at] stat.ubc.ca.

Abstract: The Gibbs sampler is a Markov chain Monte Carlo algorithm, widely used in computational statistics, that iteratively samples from the conditional distributions of a probability measure of interest. For its random-scan version, under the assumption of log-concavity, we provide a sharp bound on the speed of convergence in relative entropy. Assuming that evaluating conditionals is cheap compared to evaluating the joint density, our results imply that the number of full evaluations required for the Gibbs sampler to mix grows linearly with the condition number and is independent of the dimension. This contrasts with gradient-based methods, whose mixing time typically increases with the dimension. Our techniques also allow us to analyze Metropolis-within-Gibbs schemes, as well as the Hit-and-Run algorithm. This is joint work with Filippo Ascolani and Giacomo Zanella.
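
As a concrete illustration (not taken from the talk), the sketch below shows a random-scan Gibbs sampler on a simple log-concave target: a standard bivariate Gaussian with correlation rho, whose full conditionals are themselves Gaussian and can be sampled exactly. The choice of target, the parameter rho, and all variable names are ours, chosen purely for illustration.

```python
import numpy as np

# Minimal sketch of a random-scan Gibbs sampler on a log-concave target:
# a standard bivariate Gaussian with correlation rho. For this target each
# full conditional is Gaussian, so one Gibbs step is an exact draw.
# Illustrative assumptions: target, rho, and iteration count are our own.

rng = np.random.default_rng(0)
rho = 0.9          # correlation; as rho -> 1 the covariance becomes ill-conditioned
n_iter = 10_000

x = np.zeros(2)                  # current state of the chain
samples = np.empty((n_iter, 2))

for t in range(n_iter):
    i = rng.integers(2)          # random scan: pick a coordinate uniformly at random
    j = 1 - i
    # Full conditional of x[i] given x[j]: N(rho * x[j], 1 - rho**2)
    x[i] = rho * x[j] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    samples[t] = x

print("empirical mean:", samples.mean(axis=0))
print("empirical correlation:", np.corrcoef(samples.T)[0, 1])
```

Increasing rho toward 1 worsens the condition number of the covariance matrix, and the chain visibly takes longer to mix; this is the regime where a bound that grows linearly in the condition number, as described in the abstract, becomes informative.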