Workshop on New Directions in Optimization, Statistics and Machine Learning

Evaluating Lossy Compression Rates of Deep Generative Models

Implicit generative models such as GANs have achieved remarkable progress at generating convincing fake images, but how well do they really match the target distribution? Log-likelihood has been used extensively to evaluate generative models whenever it is convenient to do so, but measuring log-likelihoods for implicit generative models presents computational challenges. Furthermore, in order to obtain a density at all, one needs to smooth the model's distribution with a noise model (typically Gaussian), and this choice is hard to motivate. We take a different approach: viewing log-likelihood as a measure of lossless compression, we instead evaluate the lossy compression rates of the generative model, thereby removing the need for a noise distribution.
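To see why the noise model is needed: an implicit model concentrates all of its probability mass on the low-dimensional manifold traced out by its decoder, so it only admits a density after being convolved with noise. In standard notation (ours, not the talk's), with prior $p(z)$ and decoder $g$, the smoothed density is

$$p_\sigma(x) = \int p(z)\,\mathcal{N}\!\big(x;\, g(z),\, \sigma^2 I\big)\, dz,$$

and the reported log-likelihood $\log p_\sigma(x)$ depends on the arbitrary noise bandwidth $\sigma$.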

We show how a single run of annealed importance sampling (AIS) can be used to upper bound the entire rate-distortion curve of an implicit generative model. Interestingly, with a Euclidean distortion metric, this computation is nearly identical to the one used to estimate a single scalar log-likelihood, yet the rate-distortion curve gives a far more complete picture of the model's performance. We estimate rate-distortion curves for VAEs, GANs, and adversarial autoencoders, and arrive at insights not obtainable from log-likelihoods alone.
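To make the recipe concrete, below is a minimal, self-contained sketch on a toy linear decoder; all names, the geometric annealing schedule, and the simple Metropolis-Hastings transitions are illustrative assumptions, not the exact estimator from the talk. AIS anneals through intermediate targets $p_\beta(z) \propto p(z)\exp(-\beta\, d(x, g(z)))$, and a single pass over the schedule yields a (rate, distortion) estimate at every $\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit model: a fixed linear "decoder" g(z) = W z + b, standing in
# for a trained GAN/VAE decoder. W, b, and all other names are illustrative.
d_z, d_x = 2, 4
W = rng.normal(size=(d_x, d_z))
b = rng.normal(size=d_x)

def g(z):
    return z @ W.T + b

def distortion(z, x):
    # Euclidean distortion d(x, g(z)) for each chain.
    return np.sum((g(z) - x) ** 2, axis=-1)

def ais_rate_distortion(x, betas, n_chains=500, n_mh=5, step=0.2):
    """One AIS run over the annealing schedule `betas`.

    Intermediate targets p_beta(z) ∝ p(z) exp(-beta * d(x, g(z))) with a
    standard-normal prior p(z). A single run yields a (rate, distortion)
    estimate at every beta in the schedule.
    """
    z = rng.normal(size=(n_chains, d_z))   # exact samples from p(z) (beta = 0)
    log_w = np.zeros(n_chains)             # accumulated AIS log-weights
    beta_prev = 0.0
    curve = []
    for beta in betas:
        # AIS weight update: log-ratio of consecutive unnormalized targets.
        log_w += -(beta - beta_prev) * distortion(z, x)
        # A few Metropolis-Hastings sweeps leaving p_beta invariant.
        for _ in range(n_mh):
            prop = z + step * rng.normal(size=z.shape)
            log_alpha = (-0.5 * np.sum(prop**2, -1) - beta * distortion(prop, x)) \
                      - (-0.5 * np.sum(z**2, -1) - beta * distortion(z, x))
            accept = np.log(rng.uniform(size=n_chains)) < log_alpha
            z[accept] = prop[accept]
        beta_prev = beta
        # log Z_beta estimate: log-mean-exp of the weights (Z_0 = 1).
        log_Z = np.logaddexp.reduce(log_w) - np.log(n_chains)
        # Self-normalized estimate of the distortion E_{p_beta}[d].
        w = np.exp(log_w - log_w.max())
        D = float(w @ distortion(z, x) / w.sum())
        # For any encoder q: KL(q || p) + beta * E_q[d] >= -log Z_beta, with
        # equality at q = p_beta, so this recovers a point on the curve.
        R = -log_Z - beta * D
        curve.append((R, D))
    return curve

betas = np.geomspace(1e-3, 50.0, num=30)
x_obs = g(rng.normal(size=d_z)) + 0.1 * rng.normal(size=d_x)
for beta, (R, D) in zip(betas, ais_rate_distortion(x_obs, betas)):
    print(f"beta={beta:8.3f}   rate ≈ {R:6.2f} nats   distortion ≈ {D:8.3f}")
```

Since the AIS weights give an unbiased estimate of the partition function $Z_\beta$, the log of the estimate is in expectation a lower bound on $\log Z_\beta$, so the resulting rate estimates upper-bound the true curve on average.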

Date & Time

April 15, 2020 | 4:30pm – 5:30pm

Location

Virtual

Affiliation

Member, School of Mathematics
