To join this seminar virtually: Please request Zoom connection details from ea [at] stat.ubc.ca.
Abstract: The Gibbs sampler is a Markov chain Monte Carlo algorithm that iteratively samples from the conditional distributions of a probability measure of interest and is widely used in computational statistics. For its random-scan version, under the assumption of log-concavity, we provide a sharp bound on the speed of convergence in relative entropy. Assuming that evaluating conditionals is cheap compared to evaluating the joint density, our results imply that the number of full evaluations required for the Gibbs sampler to mix grows linearly with the condition number and is independent of the dimension. This contrasts with gradient-based methods, whose mixing time typically increases with the dimension. Our techniques also allow us to analyze Metropolis-within-Gibbs schemes, as well as the Hit-and-Run algorithm. This is joint work with Filippo Ascolani and Giacomo Zanella.
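For readers unfamiliar with the algorithm named in the abstract, below is a minimal illustrative sketch of a random-scan Gibbs sampler, using a zero-mean Gaussian target as a simple log-concave example where the coordinate conditionals are available in closed form. The function name, the Gaussian target, and all parameter choices are illustrative assumptions, not material from the talk.

```python
import numpy as np

def random_scan_gibbs(Sigma, n_iter, rng=None):
    """Random-scan Gibbs sampler for a zero-mean Gaussian N(0, Sigma).

    At each step one coordinate i is chosen uniformly at random and
    resampled from its conditional distribution given the others, which
    for a Gaussian target is itself Gaussian and available in closed form.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = Sigma.shape[0]
    Q = np.linalg.inv(Sigma)          # precision matrix
    x = np.zeros(d)                   # arbitrary starting point
    samples = np.empty((n_iter, d))
    for t in range(n_iter):
        i = rng.integers(d)           # random scan: pick a coordinate at random
        others = np.delete(np.arange(d), i)
        # Conditional of x_i given the rest: mean -Q[i, -i] @ x[-i] / Q[i, i],
        # variance 1 / Q[i, i]
        cond_mean = -Q[i, others] @ x[others] / Q[i, i]
        cond_var = 1.0 / Q[i, i]
        x[i] = rng.normal(cond_mean, np.sqrt(cond_var))
        samples[t] = x
    return samples

# Example: a 2-dimensional correlated Gaussian target
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
draws = random_scan_gibbs(Sigma, n_iter=5000)
print(draws.mean(axis=0))
print(np.cov(draws.T))
```

Each iteration only requires sampling from a one-dimensional conditional, which is the sense in which conditional updates can be cheap relative to full joint-density evaluations.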