Approximate posterior inference for Bayesian nonparametrics with guarantees

Friday, March 21, 2025 - 12:30 to 13:30
Xinglong Li, UBC Statistics Ph.D. student
ESB 4192 / Zoom

To join this seminar virtually: Please request Zoom connection details from ea [at] stat.ubc.ca

Abstract: Bayesian nonparametric (BNP) models provide a flexible and powerful framework for statistical modeling by allowing the number of features or subgroups within a population to grow with the volume of data. However, posterior inference in BNP models is challenging because of the infinitely many parameters involved, and the lack of general, efficient inference procedures impedes their practical application. Exact posterior inference methods either analytically marginalize out the infinitely many parameters or introduce auxiliary variables to adaptively adjust the model size during inference. The former approach relies on conjugacy between priors and likelihoods and suffers from high computational costs; the latter is also computationally demanding, as it requires numerical integration during sampling for nonconjugate models. A classical instance of the marginalization approach is sketched below.
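To make the marginalization idea concrete, the following minimal Python sketch simulates the Chinese restaurant process, the predictive process obtained by integrating out the infinitely many weights of a Dirichlet process. It is illustrative only, not the speaker's method: the values of alpha and n are arbitrary, and a full collapsed Gibbs sampler would additionally require conjugate likelihood terms.

    # Minimal sketch: the Chinese restaurant process (CRP), the prior
    # predictive obtained by marginalizing out a Dirichlet process.
    # `alpha` and `n` are illustrative choices, not from the thesis.
    import numpy as np

    rng = np.random.default_rng(1)

    def crp_assignments(n=20, alpha=1.0):
        """Sequentially assign n items to clusters under the CRP prior."""
        assignments, counts = [], []
        for i in range(n):
            # Item i joins existing cluster c with prob counts[c] / (i + alpha)
            # and opens a new cluster with prob alpha / (i + alpha).
            probs = np.array(counts + [alpha], dtype=float)
            probs /= probs.sum()
            c = rng.choice(len(probs), p=probs)
            if c == len(counts):
                counts.append(1)   # new cluster
            else:
                counts[c] += 1     # existing cluster grows
            assignments.append(c)
        return assignments

    print(crp_assignments())

The sequential probabilities above are exactly why this route needs conjugacy: in a collapsed sampler, each term must be multiplied by a marginal likelihood that is tractable only for conjugate prior-likelihood pairs.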

A common alternative practice in fitting BNP models is to approximate the nonparametric model with a parametric one and then apply a standard inference algorithm. While practical, such parametric truncation can introduce significant, unknown posterior approximation error, particularly for heavy-tailed BNP models that capture power-law behavior in the population. Previous work on truncated inference in BNP models has chosen the truncation level by analyzing the forward generative model, which does not accurately reflect the approximation error of the target posterior distribution. This thesis aims to develop approximate inference algorithms that can be used directly for posterior inference in general BNP models. We propose truncated inference methods and provide estimates of the posterior truncation error. Rather than setting the truncation level based on prior approximation error, we specify a desired posterior truncation error and let the algorithm adapt the truncation level until that target is met; a schematic of this adaptive loop is sketched below. The proposed algorithms are general in that they apply to a wide range of BNP models with completely random measure (CRM) priors. We have applied them to edge-exchangeable network models, where the feature assignment variables are observed, and to latent feature models, where those variables are latent.
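The Python sketch below shows only the shape of such an adaptive-truncation loop; it is not the thesis's algorithm. For self-containedness, the error estimator here computes the Monte Carlo tail mass of Dirichlet-process stick-breaking weights sampled from the prior, whereas the thesis's contribution is precisely to estimate this error from the posterior. The names posterior_stick_samples, tol, and alpha are placeholder assumptions.

    # Schematic sketch of adaptive truncation: grow the truncation level K
    # until an estimated truncation error falls below a target tolerance.
    # The error estimator is a PLACEHOLDER (prior tail mass of DP sticks);
    # the thesis replaces it with a posterior-based estimate.
    import numpy as np

    rng = np.random.default_rng(0)

    def posterior_stick_samples(K, alpha=1.0, n_samples=2000):
        """Placeholder: stick proportions v_k ~ Beta(1, alpha).

        In the thesis these would come from (approximate) posterior
        inference; here we sample the prior to keep the sketch runnable.
        """
        return rng.beta(1.0, alpha, size=(n_samples, K))

    def tail_mass_estimate(K, alpha=1.0, n_samples=2000):
        """Estimate E[1 - sum_{k<=K} w_k], the mass lost to truncation."""
        v = posterior_stick_samples(K, alpha, n_samples)
        # With w_k = v_k * prod_{j<k} (1 - v_j), the untruncated
        # remainder after K sticks is prod_{k<=K} (1 - v_k).
        return np.mean(np.prod(1.0 - v, axis=1))

    def adapt_truncation(tol=1e-3, K=4, K_max=4096):
        """Double K until the estimated truncation error meets `tol`."""
        while K <= K_max:
            err = tail_mass_estimate(K)
            if err <= tol:
                return K, err
            K *= 2
        raise RuntimeError("tolerance not reached within K_max")

    K, err = adapt_truncation()
    print(f"chosen truncation level K={K}, estimated error {err:.2e}")

The design point the abstract emphasizes is the stopping rule: the tolerance is stated in terms of posterior truncation error, so the truncation level is an output of inference rather than a quantity fixed in advance from the prior.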