GANs are a neural network architecture comprising two networks, a generator and a discriminator, that are pitted against each other. original paper
One way to think about GANs is as a competition between a cop and a counterfeiter: each keeps learning new tricks to gain an advantage over the other. This produces a double feedback loop.
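Formally (from the original paper), this competition is a minimax game between the generator G and the discriminator D, where p_data is the data distribution and p_z the noise prior:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]
```

D is trained to assign high probability to real samples and low probability to generated ones; G is trained to make D misclassify its outputs.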
Great intro from deeplearning4j
Samples generated from a model trained on the CelebA dataset (paper by NVIDIA)
(Tends to produce blurry-ish images, typically attributed to the difference in objective function)
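A toy illustration (hypothetical, not from the source) of why a pixel-wise objective such as mean squared error tends toward blur: when several sharp outputs are equally valid for the same input, the MSE-optimal prediction is their average, which looks like none of them.

```python
import numpy as np

# Two equally likely sharp "images" (1-D for simplicity) that could both
# be valid outputs for the same input: an edge on the left or on the right.
mode_a = np.array([0.0, 1.0])
mode_b = np.array([1.0, 0.0])

def expected_mse(pred):
    """Expected pixel-wise MSE against the two equally likely modes."""
    return 0.5 * np.mean((pred - mode_a) ** 2) + 0.5 * np.mean((pred - mode_b) ** 2)

# The MSE-optimal prediction is the mean of the modes: a uniform gray.
blur = 0.5 * (mode_a + mode_b)
print(blur)                  # [0.5 0.5]
print(expected_mse(blur))    # 0.25 -- lower than either sharp output...
print(expected_mse(mode_a))  # 0.5  -- ...so MSE prefers the blur
# An adversarial loss instead penalizes the gray image for looking unlike
# any real sample, pushing the generator toward one of the sharp modes.
```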
```python
import tensorflow as tf
tfgan = tf.contrib.gan
slim = tf.contrib.slim

# Assumes generator_fn, discriminator_fn, images, batch_size, noise_dims
# and num_steps are defined elsewhere.
# The generator maps random noise to images; the discriminator scores
# real vs. generated images.
gan_model = tfgan.gan_model(
    generator_fn,
    discriminator_fn,
    real_data=images,
    generator_inputs=tf.random_normal([batch_size, noise_dims]))

# The original minimax GAN loss.
vanilla_gan_loss = tfgan.gan_loss(
    gan_model,
    generator_loss_fn=tfgan.losses.minimax_generator_loss,
    discriminator_loss_fn=tfgan.losses.minimax_discriminator_loss)

# Separate optimizers (and learning rates) for the two networks.
generator_optimizer = tf.train.AdamOptimizer(0.001, beta1=0.5)
discriminator_optimizer = tf.train.AdamOptimizer(0.0001, beta1=0.5)
gan_train_ops = tfgan.gan_train_ops(
    gan_model,
    vanilla_gan_loss,  # use the loss defined above
    generator_optimizer,
    discriminator_optimizer)

# Alternate generator and discriminator update steps.
global_step = tf.train.get_or_create_global_step()
train_step_fn = tfgan.get_sequential_train_steps()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with slim.queues.QueueRunners(sess):
        for i in range(num_steps):
            cur_loss, _ = train_step_fn(
                sess, gan_train_ops, global_step, train_step_kwargs={})
```
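The minimax losses used above can be written out directly. A hypothetical numpy sketch of what the two loss functions compute on discriminator logits (up to constants; not TF-GAN's actual implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def minimax_discriminator_loss(real_logits, fake_logits):
    # D maximizes log D(x) + log(1 - D(G(z))); negated here to act as a loss.
    return (-np.mean(np.log(sigmoid(real_logits)))
            - np.mean(np.log(1.0 - sigmoid(fake_logits))))

def minimax_generator_loss(fake_logits):
    # G minimizes log(1 - D(G(z))): the original, often-saturating form.
    return np.mean(np.log(1.0 - sigmoid(fake_logits)))

# When D confidently separates real (positive logits) from fake
# (negative logits), its loss is small and G's loss saturates near 0,
# which is why alternatives to the minimax loss are common in practice.
real_logits = np.array([4.0, 5.0])
fake_logits = np.array([-4.0, -5.0])
print(minimax_discriminator_loss(real_logits, fake_logits))  # ~0.025
print(minimax_generator_loss(fake_logits))                   # ~-0.012, near 0
```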
# just some potentially useful animations
#![dec](images/deconvolution.gif)