DAMSL-273 Introduction to Deep Generative Modelling
Type
Elective
Course Code
DAMSL-273
Teaching Semester
B semester
ECTS Credits
10
Syllabus
- Probability foundations: random variables, conditional probability, the chain rule, Bayes' theorem, the central limit theorem, the multivariate Gaussian, the PDF of a transformed random variable, inverse transform sampling (see the sampling sketch after this list)
- Mixture models and sampling: Gaussian mixture models, the Expectation-Maximization algorithm, maximum likelihood estimation, basics of Monte Carlo methods, the Metropolis algorithm and Markov chain Monte Carlo (see the EM sketch after this list)
- Deep autoregressive (DAR) models: definition, time-series generation as the benchmark example, capturing long-range correlations, neural network architectures (dilated CNNs, RNNs and transformers), optimization via MLE, discrete vs. continuous state spaces, conditional DAR models, and presentation of the WaveNet, WaveRNN and PixelRNN architectures together with their training algorithm (see the causal-convolution sketch after this list)
- Variational autoencoders: definition of an autoencoder and of the VAE, the variational approximation to MLE, training algorithm, denoising VAEs, beta-VAE, applications in scientific discovery (see the ELBO sketch after this list)
- Normalizing flows: definition, invertible transformations, tractable exact MLE, derivation of the general change-of-variables formula, training algorithm, and how flows extend the VAE model (see the affine-flow sketch after this list)
- Information theory and divergences: what is information?, Shannon entropy, divergences (KL, f-, alpha- and Rényi divergences), variational representation/duality formulas (which scale well with dimension, whereas density-ratio methods collapse on high-dimensional datasets), probability distances (Wasserstein, MMD), examples with Gaussians and/or the exponential family of distributions (see the KL sketch after this list)
- Generative adversarial networks: formulation as a minimization problem that is intractable because the PDFs are unknown, use of variational formulas, minimax optimization, the vanilla GAN and its training algorithm (stochastic gradient descent/ascent), basic properties, Wasserstein GAN, conditional GAN, DCGAN, BigGAN, MelGAN, InfoGAN, and CycleGAN as a more general form of adversarial learning (see the training-loop sketch after this list)
- Energy-based models: definition, architectures, training algorithms (score matching, contrastive estimation), product of experts (see the score-matching sketch after this list)
- Diffusion probabilistic models: definition, forward/reverse processes, MLE approximation, denoising DPMs, applications, and the DALL-E 2 and Imagen models for text-to-image generation (see the forward-process sketch after this list)
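A minimal sketch of inverse transform sampling, assuming an Exponential(rate) target whose inverse CDF is known in closed form (the variable names and the rate value are illustrative, not course-provided code):

```python
import numpy as np

# Inverse transform sampling: if U ~ Uniform(0, 1) and F is the target CDF,
# then F^{-1}(U) follows the target distribution. For Exponential(rate),
# F^{-1}(u) = -ln(1 - u) / rate.
rng = np.random.default_rng(0)
rate = 2.0
u = rng.uniform(size=10_000)
x = -np.log1p(-u) / rate          # log1p(-u) = ln(1 - u), numerically stable
print(x.mean())                   # should be close to 1 / rate = 0.5
```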
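A compact sketch of the EM algorithm for a two-component 1-D Gaussian mixture; the synthetic data, the initialization and the iteration count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])  # toy data

w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibility of each component for each data point.
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form parameter updates that increase the log-likelihood.
    n = r.sum(axis=0)
    w, mu = n / len(x), (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
print(mu)  # the two means should approach -2 and 3
```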
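A sketch of the dilated causal convolutions behind WaveNet-style DAR models, written in PyTorch; the layer sizes and the 256-way output (mirroring mu-law quantization) are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1-D convolution that sees only past time steps (left-only padding)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        return super().forward(F.pad(x, (self.left_pad, 0)))

# Stack of dilated causal layers; the receptive field grows with the dilation.
model = nn.Sequential(
    CausalConv1d(1, 32, 2, dilation=1), nn.ReLU(),
    CausalConv1d(32, 32, 2, dilation=2), nn.ReLU(),
    CausalConv1d(32, 256, 2, dilation=4),        # per-step logits over 256 symbols
)
x = torch.randn(8, 1, 128)                       # batch of 8 sequences, length 128
logits = model(x)                                # shape (8, 256, 128)
# MLE training: cross-entropy between logits at step t and the symbol at t+1.
```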
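A sketch of a vanilla VAE and its negative-ELBO loss, i.e. the variational approximation to MLE; the architecture sizes and the Bernoulli likelihood are illustrative assumptions, and setting beta > 1 would give a beta-VAE:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal Gaussian-latent VAE for flattened 28x28 inputs."""
    def __init__(self, d=784, z=16, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, 2 * z))
        self.dec = nn.Sequential(nn.Linear(z, h), nn.ReLU(), nn.Linear(h, d))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization trick
        return self.dec(z), mu, log_var

def neg_elbo(x, logits, mu, log_var, beta=1.0):
    # Bernoulli reconstruction term plus beta-weighted KL to the N(0, I) prior.
    rec = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum()
    return (rec + beta * kl) / x.shape[0]

x = torch.rand(32, 784)            # stand-in batch of pixel intensities in [0, 1]
loss = neg_elbo(x, *VAE()(x))
loss.backward()                    # training repeats this step under an optimizer
```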
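A sketch of exact MLE for a normalizing flow via the change-of-variables formula, using the simplest possible flow (a learnable elementwise affine map); the data distribution and learning rate are illustrative:

```python
import torch

# Flow z = f(x) = exp(s) * x + t has log|det df/dx| = sum(s), so the exact
# log-likelihood log p(x) = log N(f(x); 0, I) + sum(s) is tractable.
d = 2
s = torch.zeros(d, requires_grad=True)
t = torch.zeros(d, requires_grad=True)
base = torch.distributions.Normal(0.0, 1.0)

def log_prob(x):
    z = x * s.exp() + t
    return base.log_prob(z).sum(-1) + s.sum()   # change-of-variables formula

x = torch.randn(256, d) * 3.0 + 1.0             # toy data: N(1, 3^2) per dimension
opt = torch.optim.Adam([s, t], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    (-log_prob(x).mean()).backward()            # gradient descent on the exact NLL
    opt.step()
print(s.exp().detach(), t.detach())             # scale ~1/3, shift ~-1/3
```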
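A sketch comparing the closed-form KL divergence between two univariate Gaussians with a Monte Carlo estimate of the same quantity (the parameter values are arbitrary):

```python
import numpy as np

# KL(N(mu1, s1^2) || N(mu2, s2^2)) has the closed form
# log(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2 s2^2) - 1/2.
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0
kl_exact = np.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

rng = np.random.default_rng(0)
x = rng.normal(mu1, s1, 100_000)               # samples from the first Gaussian
log_ratio = (-0.5 * ((x - mu1) / s1) ** 2 - np.log(s1)) \
          - (-0.5 * ((x - mu2) / s2) ** 2 - np.log(s2))
print(kl_exact, log_ratio.mean())              # the two estimates should agree
```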
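A sketch of the vanilla GAN training loop on a 1-D toy target, alternating a discriminator ascent step with a (non-saturating) generator descent step; the network sizes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # target distribution N(3, 0.5^2)
    fake = G(torch.randn(64, 8))
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step (non-saturating loss): push D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())     # should drift toward 3
```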
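A sketch of one concrete score-matching variant, denoising score matching: a network is regressed onto the score of a Gaussian perturbation kernel, approximating the gradient of the smoothed log-density (the toy data, noise level and architecture are illustrative):

```python
import torch
import torch.nn as nn

score = nn.Sequential(nn.Linear(2, 64), nn.Softplus(), nn.Linear(64, 2))
opt = torch.optim.Adam(score.parameters(), lr=1e-3)
sigma = 0.1

for step in range(2000):
    x = torch.randn(128, 2) + torch.tensor([2.0, -1.0])   # toy 2-D data
    noise = torch.randn_like(x)
    x_noisy = x + sigma * noise
    # Score of the perturbation kernel q(x_noisy | x) is -(x_noisy - x) / sigma^2.
    target = -noise / sigma
    loss = ((score(x_noisy) - target) ** 2).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# After training, score(x) approximates the gradient of the log-density of the
# sigma-smoothed data, usable for Langevin-type sampling.
```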
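A sketch of the closed-form forward (noising) process of a DDPM-style diffusion model; the linear beta schedule and tensor shapes are illustrative assumptions:

```python
import torch

T = 1000
beta = torch.linspace(1e-4, 0.02, T)             # variance schedule
alpha_bar = torch.cumprod(1.0 - beta, dim=0)     # cumulative signal retention

def q_sample(x0, t):
    # Marginal of the forward process:
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I).
    eps = torch.randn_like(x0)
    return alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * eps, eps

x0 = torch.randn(4, 3)                 # stand-in data batch
xt, eps = q_sample(x0, t=500)          # noisy sample midway through the chain
# A denoising network would be trained to predict eps from (xt, t); the reverse
# process then iterates that prediction to generate new samples.
```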
Learning Outcomes
- Having attended and succeeded in the course, the student is able to describe the probabilistic foundations of deep generative models and explain the various model architectures, training algorithms and their underlying principles.
- Having attended and succeeded in the course, the student is able to comprehend the application areas of deep generative models in fields such as computer vision and language and speech processing.
- Having attended and succeeded in the course, the student is capable of applying the learned concepts to implement and train deep generative models and to utilize these models for tasks such as synthetic tabular data generation, time-series synthesis and generative image processing.
- Having attended and succeeded in the course, the student is able to analyze and compare different deep generative models, understand their strengths and limitations, and critically assess the performance of these models in various scenarios.
- Having attended and succeeded in the course, the student is able to develop new approaches in deep generative modelling and distill information from research papers and practical demonstrations to create innovative solutions.
- Having attended and succeeded in the course, the student is able to critically evaluate the effectiveness of deep generative models in real-world applications and assess their impact in advancing the field of generative AI.
Student Performance Evaluation
Homework and/or Lab Assignments, Final Exam and/or Project
Prerequisite Courses
Linear Algebra, Probability