Z. QIU

Simple GAN network demo using TF2

I came across this video on the YouTube homepage by chance today: https://www.youtube.com/watch?v=yYUN_k36u5Q. The tutorial walks through building a GAN (Generative Adversarial Network) with PyTorch, and it is quite simple and interesting. So I cloned the author's GitHub source file and updated it a little so that it runs with TensorFlow 2.

In the example in this post, a REAL artist draws wonderful "paintings" in his own style: parabolas that lie inside the gray area in the figure below, parallel to the blue and red bounding parabolas.
[Figure: the artist's parabolas lie in the gray region between the two bounding curves]
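Concretely, the artist's style (as defined by the code further down) is the family of curves y = a·x² + (a − 1) with a drawn uniformly from [1, 2]. Here is a minimal sketch, not part of the original tutorial, that just draws the two bounding curves and the region between them:

```python
# A minimal sketch (not from the original tutorial) of the artist's "style":
# every real painting is y = a*x^2 + (a - 1) with a drawn uniformly from [1, 2],
# so a = 1 gives the lower bound x^2 and a = 2 gives the upper bound 2*x^2 + 1.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 15)
plt.plot(x, 2 * x**2 + 1, c='#74BCFF', label='upper bound (a = 2)')
plt.plot(x, 1 * x**2 + 0, c='#FF9359', label='lower bound (a = 1)')
plt.fill_between(x, x**2, 2 * x**2 + 1, color='gray', alpha=0.2, label='real paintings live here')
plt.legend()
plt.show()
```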

Then we set up a GAN consisting of a generator and a discriminator. The Generator model tries to imitate the artist's works as closely as possible. Its structure is shown below:
[Figure: Generator model structure]

The Discriminator model, on the contrary, does its best to distinguish the generated paintings from the REAL ones. Its structure is shown below:
[Figure: Discriminator model structure]

Now let's begin to code.

First, the dependencies. You should have TensorFlow 2, NumPy and Matplotlib installed.

```python
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = "2"   # suppress TensorFlow INFO and WARNING logs

import tensorflow as tf2
import tensorflow.keras as keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow.keras.backend as K

import numpy as np
import matplotlib.pyplot as plt

print("TF version: ", tf2.__version__)
```


Some important hyperparameters.

```python
# Hyperparameters
BATCH_SIZE = 32
LR_G = 0.0001           # learning rate for the generator
LR_D = 0.0001           # learning rate for the discriminator
N_IDEAS = 5             # think of this as the number of ideas the Generator starts from for one art work
ART_COMPONENTS = 15     # total number of points G can draw on the canvas
PAINT_POINTS = np.vstack([np.linspace(-1, 1, ART_COMPONENTS) for _ in range(BATCH_SIZE)])
```
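As a quick sanity check (my addition, not from the original tutorial): `PAINT_POINTS` simply repeats the same x-axis for every sample in the batch, so each row is the x-coordinates of one canvas.

```python
# Each of the 32 rows holds the same 15 x-coordinates, one row per painting in the batch.
print(PAINT_POINTS.shape)  # (32, 15) == (BATCH_SIZE, ART_COMPONENTS)
```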

This function returns paintings from the real artist:

```python
# return paintings of the famous artist (the real target)
def artist_works():
    a = np.random.uniform(1, 2, size=BATCH_SIZE)[:, np.newaxis]
    paintings = a * np.power(PAINT_POINTS, 2) + (a - 1)
    # print("Shape of paintings: ", paintings.shape)
    return paintings
```
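To get a feeling for the target distribution, here is a small optional snippet (not in the original tutorial) that draws a few real paintings returned by `artist_works`:

```python
# Optional: visualize a few "real" paintings from the artist.
real_paintings = artist_works()            # shape: (BATCH_SIZE, ART_COMPONENTS)
for i in range(3):
    plt.plot(PAINT_POINTS[0], real_paintings[i])
plt.title("Three real paintings")
plt.show()
```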

Now the models and their optimizers. Note that in Keras the optimizer constructor only takes the learning rate; the model variables are passed in later via `apply_gradients`.

```python
model_G = Sequential([
    Dense(128, activation="relu", input_shape=(N_IDEAS,)),   # maps N_IDEAS random numbers...
    Dense(ART_COMPONENTS)                                     # ...to the ART_COMPONENTS points of a painting
])

model_D = Sequential([
    Dense(128, activation="relu", input_shape=(ART_COMPONENTS,)),  # takes a painting...
    Dense(1, activation="sigmoid"),                                 # ...and outputs the probability that it is real
])

opt_D = keras.optimizers.Adam(learning_rate=LR_D)
opt_G = keras.optimizers.Adam(learning_rate=LR_G)
```
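Before training, a quick forward pass (again my addition, just for clarity) confirms that the shapes line up: G maps a batch of N_IDEAS-dimensional noise vectors to paintings of ART_COMPONENTS points, and D maps a batch of paintings to one probability each.

```python
# Sanity check: one forward pass through both models.
noise = np.random.randn(BATCH_SIZE, N_IDEAS).astype(np.float32)
fake_paintings = model_G(noise)        # shape: (32, 15)
fake_probs = model_D(fake_paintings)   # shape: (32, 1), in (0, 1) thanks to the sigmoid
print(fake_paintings.shape, fake_probs.shape)
```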

Finally, the training loop and the plotting code.
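The two losses computed inside the `GradientTape` below correspond to the original minimax GAN objective (Goodfellow et al., 2014), written here for reference:

$$
\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
$$

In the code, `D_loss` is the negative of that sum (so minimizing it maximizes the objective for D), and `G_loss` is the second expectation alone, which G minimizes directly.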


```python
plt.ion()   # interactive mode for continuous plotting

for step in range(10000):
    artist_paintings = artist_works()                     # real paintings from the artist
    G_ideas = np.random.randn(BATCH_SIZE, N_IDEAS).astype(np.float32)
    with tf2.GradientTape(persistent=True) as tape:
        G_paintings = model_G(G_ideas)                    # fake paintings from model G (random ideas)
        prob_artist1 = model_D(G_paintings)               # D's probability that the fake paintings are real
        G_loss = K.mean(K.log(1. - prob_artist1))         # loss function of the G model

        prob_artist0 = model_D(artist_paintings)          # D tries to push this prob towards 1 for real paintings
        # D also tries to push prob_artist1 towards 0 for the generated paintings
        D_loss = - K.mean(K.log(prob_artist0) + K.log(1. - prob_artist1))   # loss function of the D model

    G_grads = tape.gradient(G_loss, model_G.variables)
    opt_G.apply_gradients(grads_and_vars=zip(G_grads, model_G.variables))

    D_grads = tape.gradient(D_loss, model_D.variables)
    opt_D.apply_gradients(grads_and_vars=zip(D_grads, model_D.variables))
    del tape  # the tape is persistent, so release it explicitly

    if step % 50 == 0:  # plotting
        plt.cla()
        plt.plot(PAINT_POINTS[0], G_paintings.numpy()[0], c='#4AD631', lw=3, label='Generated painting')
        plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + 1, c='#74BCFF', lw=3, label='upper bound')
        plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + 0, c='#FF9359', lw=3, label='lower bound')
        plt.text(-.5, 2.3, 'D accuracy=%.2f (0.5 for D to converge)' % K.mean(prob_artist0), fontdict={'size': 13})
        plt.text(-.5, 2, 'D score= %.2f (-1.38 for G to converge)' % -D_loss, fontdict={'size': 13})
        plt.ylim((0, 3)); plt.legend(loc='upper right', fontsize=10); plt.draw(); plt.pause(0.01)
        print("Current step: {}".format(step))

plt.ioff()
plt.show()
```

See how the simulated painting evolves:

[Figure: evolution of the generated painting during training]

As we can see, as the number of training steps increases, the "generated painting" in green becomes more and more realistic: it ends up roughly parallel to the upper and lower bounding parabolas and stays inside the limits.
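If you want to play with the trained generator once the loop has finished, a minimal sketch like this (assuming the variables from the script above are still in scope) draws a few fresh fakes against the bounds:

```python
# Sample a few paintings from the trained generator and compare them to the bounds.
ideas = np.random.randn(5, N_IDEAS).astype(np.float32)
fakes = model_G(ideas).numpy()
for fake in fakes:
    plt.plot(PAINT_POINTS[0], fake, c='#4AD631', alpha=0.6)
plt.plot(PAINT_POINTS[0], 2 * np.power(PAINT_POINTS[0], 2) + 1, c='#74BCFF', label='upper bound')
plt.plot(PAINT_POINTS[0], 1 * np.power(PAINT_POINTS[0], 2) + 0, c='#FF9359', label='lower bound')
plt.legend()
plt.show()
```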
