In GANs, there are two players with different goals:

- The generator network $G$ tries to fool the discriminator by producing samples $G(z)$ that look like real data.
- The discriminator network $D$ tries to distinguish real data $x$ from the generator's fake samples.

This creates a minimax game in which the two networks are trained jointly, alternating between discriminator and generator updates.

$$ \min_{\theta_g} \max_{\theta_d} V(\theta_d, \theta_g) = \frac{1}{2} \mathbb{E}_{x \sim p_{data}} [\log D(x)] + \frac{1}{2} \mathbb{E}_{z \sim p(z)} [\log (1 - D(G(z)))] $$
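As a concrete illustration, the objective can be estimated by Monte Carlo averaging over minibatches. The sketch below uses a hypothetical one-parameter logistic discriminator and linear generator (both assumptions for illustration, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-ins for the two networks (assumptions, not from the text):
def D(x, theta_d):
    return 1.0 / (1.0 + np.exp(-theta_d * x))   # logistic discriminator, output in (0, 1)

def G(z, theta_g):
    return theta_g * z                          # linear generator

def V(theta_d, theta_g, x_real, z):
    # Monte Carlo estimate: (1/2) E[log D(x)] + (1/2) E[log(1 - D(G(z)))]
    real_term = np.mean(np.log(D(x_real, theta_d)))
    fake_term = np.mean(np.log(1.0 - D(G(z, theta_g), theta_d)))
    return 0.5 * real_term + 0.5 * fake_term

x_real = rng.normal(2.0, 1.0, size=1000)   # samples standing in for p_data
z = rng.normal(0.0, 1.0, size=1000)        # samples from the noise prior p(z)
print(V(1.0, 0.5, x_real, z))
```

Both expectations are logs of probabilities in $(0, 1)$, so any estimate of $V$ is negative; the discriminator's ascent pushes it toward 0 from below.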

Discriminator: using gradient ascent, it maximizes the objective $V$ with respect to the discriminator parameters $\theta_d$:

$$ \theta_d \leftarrow \theta_d + \alpha \nabla_{\theta_d} V(\theta_d, \theta_g) $$

Generator: using gradient descent, it minimizes the objective $V$ with respect to the generator parameters $\theta_g$:

$$ \theta_g \leftarrow \theta_g - \alpha \nabla_{\theta_g} V(\theta_d, \theta_g) $$
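The two update rules above can be sketched as an alternating loop. This is a minimal toy, assuming a hypothetical one-parameter discriminator and generator and a finite-difference gradient standing in for backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)
x_real = rng.normal(2.0, 1.0, size=2000)   # "real" data, assumed ~ N(2, 1)
z = rng.normal(0.0, 1.0, size=2000)        # noise from the prior p(z)

# Hypothetical one-parameter networks (assumptions for illustration):
def D(x, theta_d):
    return 1.0 / (1.0 + np.exp(-theta_d * x))   # logistic discriminator

def G(z, theta_g):
    return theta_g + z                          # generator shifts the noise

def V(theta_d, theta_g):
    # Monte Carlo estimate of the minimax objective
    real = np.mean(np.log(D(x_real, theta_d) + 1e-12))
    fake = np.mean(np.log(1.0 - D(G(z, theta_g), theta_d) + 1e-12))
    return 0.5 * real + 0.5 * fake

def dV(f, t, eps=1e-5):
    # central finite difference, standing in for backprop
    return (f(t + eps) - f(t - eps)) / (2.0 * eps)

theta_d, theta_g, alpha = 0.5, 0.0, 0.05
for step in range(100):
    # discriminator step: gradient ascent on V
    theta_d = theta_d + alpha * dV(lambda t: V(t, theta_g), theta_d)
    # generator step: gradient descent on V
    theta_g = theta_g - alpha * dV(lambda t: V(theta_d, t), theta_g)

print(theta_d, theta_g, V(theta_d, theta_g))
```

Each discriminator step locally increases $V$ and each generator step locally decreases it; in practice the two updates are interleaved exactly like this, with backprop computing the gradients.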

(Figure: start of training)

(Figure: mid training)