Toward a Deeper Understanding of Generative Adversarial Networks
Speaker:
Dr. Farzan FARNIA
Postdoctoral Research Associate
Laboratory for Information and Decision Systems, MIT
Abstract:
While modern adversarial learning frameworks achieve state-of-the-art performance on benchmark image, sound, and text datasets, we still lack a solid understanding of their robustness, generalization, and convergence behavior. In this talk, we aim to bridge this gap between theory and practice through a principled analysis of these frameworks via the lens of optimal transport and information theory. We focus specifically on the Generative Adversarial Network (GAN) framework, which casts learning the distribution of data as a game between two machine players. In the first half of the talk, we study equilibrium in GAN games and show that a classical Nash equilibrium may not exist. We then introduce a new equilibrium notion for GAN problems, called proximal equilibrium, through which we develop a GAN training algorithm with improved stability. We provide several numerical results on large-scale datasets supporting our proposed training method for GANs. In the second half of the talk, we attempt to understand why GANs often fail to learn multi-modal distributions. We focus our study on benchmark Gaussian mixture models and demonstrate the failures of standard GAN architectures on this simple class of multi-modal distributions. Leveraging optimal transport theory, we design a novel architecture for the GAN players that is tailored to mixtures of Gaussians. We show, both theoretically and numerically, the significant gain achieved by our designed GAN architecture in learning multi-modal distributions. We conclude the talk by discussing open research challenges in adversarial learning.
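As background for the abstract above (standard material, not part of the speaker's announcement), the GAN game between the two players, a generator G and a discriminator D, is commonly written as the min-max problem

\min_{G} \max_{D} \; \mathbb{E}_{x \sim P_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim P_{z}}\big[\log\big(1 - D(G(z))\big)\big],

where the generator maps noise z drawn from P_z to synthetic samples and the discriminator tries to distinguish real samples from generated ones. The first half of the talk concerns when such a game admits an equilibrium, in the Nash or the proposed proximal sense, and how this affects training stability.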
Biography:
Farzan Farnia is a postdoctoral research associate at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, where he is co-supervised by Professor Asu Ozdaglar and Professor Ali Jadbabaie. Prior to joining MIT, Farzan received his master’s and PhD degrees in electrical engineering from Stanford University and his bachelor’s degrees in electrical engineering and mathematics from Sharif University of Technology. At Stanford, he was a graduate research assistant at the Information Systems Laboratory, advised by Professor David Tse. Farzan’s research interests include statistical learning theory, optimal transport theory, information theory, and convex optimization. He received the Stanford Graduate Fellowship (Sequoia Capital Fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in Stanford’s electrical engineering PhD qualifying exams in 2014.
Join Zoom Meeting:
https://cuhk.zoom.us/j/99476583146?pwd=QVdsaTJLYU1ab2c0ODV0WmN6SzN2Zz09
Enquiries: Miss Caroline TAI at Tel. 3943 8440
For more information, please refer to http://www.cse.cuhk.edu.hk/seminar-archive/