Abstract
This paper explores key generative models—Variational Auto-Encoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DDPMs)—along with the metrics used to evaluate their performance. We provide a detailed overview of each model’s structure, training process, and objective function. We then critically assess commonly used evaluation metrics such as the Inception Score (IS) and Fréchet Inception Distance (FID), highlighting their limitations; to address these limitations, we also discuss newer metrics such as Memorization-Informed FID (MiFID) and Feature Likelihood Divergence (FLD). Our aim is to offer a practical guide to understanding these models, their objective functions, and the evaluation metrics, focusing on their relevance in current generative modeling research.
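The FID discussed in the abstract reduces to the Fréchet (2-Wasserstein) distance between two Gaussians fitted to Inception-v3 feature statistics of real and generated images. A minimal sketch of that core computation, assuming the feature means and covariances have already been estimated (the function name `frechet_distance` is ours, not from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}).
    FID applies this to Inception feature statistics of real vs. generated data."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    # The matrix square root can pick up tiny imaginary components from
    # numerical noise; discard them.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

For identical statistics the distance is zero, and shifting one mean by a vector `d` with equal covariances yields `||d||^2`, which matches the closed form above.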
Field | Value
---|---
Original language | English
Pages (from-to) | 235-248
Number of pages | 14
Journal | Communications for Statistical Applications and Methods
Volume | 32
Issue number | 2
DOIs | 
State | Published - 2025
Bibliographical note
Publisher Copyright: © 2025 The Korean Statistical Society, and Korean International Statistical Society.
Keywords
- diffusion process
- feature likelihood divergence
- Fréchet inception distance
- generative model
- inception score
- model evaluation
- performance metric