Nobody has the intention to build a GAN.

A vanilla GAN consists of two models, typically realized as neural networks: the Generator and the Discriminator.

The most reduced input for the Generator is a single scalar value, in other words a 1-dimensional vector. It can be drawn from a uniform distribution, e.g. U(-1, 1).

To realize that, only a few lines of imports in Python and PyTorch are needed. To actually make a GAN happen, a single import of torch would be sufficient. To increase readability, and because they are used frequently, having nn and Tensor directly available without the torch prefix justifies two more lines of code. The modules os and matplotlib help with visualization during debugging.

              
              import os

              import torch
              from torch import nn
              from torch import Tensor
              import matplotlib.pyplot as plt
              
            
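With these imports in place, the latent input described above can already be sampled. A minimal sketch, assuming a batch of 8 scalar values drawn from U(-1, 1); since torch.rand samples from U(0, 1), the values are scaled and shifted:

              # Draw a batch of 8 latent scalars from U(-1, 1).
              # torch.rand yields U(0, 1), so scale by 2 and shift by -1.
              z = torch.rand(8, 1) * 2 - 1
              print(z.shape)  # torch.Size([8, 1])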

As described in the introduction, a mapping from a 1D input vector to an output is intended. A very simple transformation consists in mapping from a uniform distribution to a Gaussian distribution. Therefore, the output vector corresponds to another 1D vector. Further, PyTorch models usually inherit from nn.Module. So as a first step, a Generator model written in Python based on PyTorch needs an initialization of the object, a forward method and, optionally, the training with the backpropagation in the train method.

              
              class Generator(nn.Module):
                """ Maps from Uniform distribution
                    to Gaussian distribution.
                    Latent space of 1D vector in input
                    to 1D vector in output.
                """
                def __init__(self):
                    super(Generator, self).__init__()
                    self.input_size = 1
                    self.output_size = 1

                def forward(self, x: Tensor) -> Tensor:
                    # Forward pass through the layers, defined further below.
                    pass

                def train(self) -> Tensor:
                    # Training step; note that this shadows nn.Module.train(mode).
                    pass
              
            
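The target the Generator should learn to imitate is the Gaussian distribution mentioned above. Under the assumption of a standard normal N(0, 1) as target, the "real" samples the Discriminator will later see can be drawn directly; a minimal sketch:

              # "Real" samples come straight from the target distribution N(0, 1).
              real_samples = torch.randn(8, 1)
              print(real_samples.shape)  # torch.Size([8, 1])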

Before defining a Discriminator through a similar sequence of steps, the network for the Generator has to be formulated through a bunch of layers. This definition is somewhat trial-and-error based, but it turns out that a negative slope of 0.1 for the LeakyReLU activation function fits best. The first layer is basically a matrix of size [1, 8], followed by [8, 16] and [16, 1], finally mapping back to a 1-dimensional output. As the print of the object generator.fc2 shows, it is not a simple matrix, or tensor, object, because PyTorch extends it with the bias and all the backpropagation functionality.

              
                generator = Generator(batch_size=8)
                print(generator.fc2)
                print(type(generator.fc2))

                Linear(in_features=8, out_features=16, bias=True)
                <class 'torch.nn.modules.linear.Linear'>
              
            

Besides that, the model is constructed with an additional parameter batch_size. An input tensor, which in this case contains a single value drawn from a uniform distribution U(-1, 1), is forwarded through a chain of those layers whose output is iteratively activated through LeakyReLU.

              
            class Generator(nn.Module):
              """ Maps from Uniform distribution to Gaussian distribution.
                  Latent space of 1D vector input to 1D vector output.
              """
              def __init__(self, batch_size: int):
                  super(Generator, self).__init__()
                  self.batch_size = batch_size
                  self.lr = 0.0005
                  self.input_size = 1
                  self.output_size = 1

                  # Fully connected layers: 1 -> 8 -> 16 -> 1
                  self.fc = nn.Linear(self.input_size, 8)
                  self.fc2 = nn.Linear(8, 16)
                  self.fc3 = nn.Linear(16, self.output_size)
                  self.lrelu = nn.LeakyReLU(negative_slope=0.1)

              def forward(self, x: Tensor) -> Tensor:
                  # Each linear layer is followed by a LeakyReLU activation.
                  output = self.fc(x)
                  output = self.lrelu(output)
                  output = self.fc2(output)
                  output = self.lrelu(output)
                  output = self.fc3(output)
                  output = self.lrelu(output)

                  return output
              
            
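To check that this chain of layers behaves as intended, the Generator can be fed a batch of latent values; a minimal sketch (the shapes follow from the layer definitions above):

              generator = Generator(batch_size=8)

              # One latent scalar per sample, drawn from U(-1, 1).
              z = torch.rand(8, 1) * 2 - 1
              fake_samples = generator(z)

              print(z.shape)             # torch.Size([8, 1])
              print(fake_samples.shape)  # torch.Size([8, 1])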

Similarly, the Discriminator has to be deduced. The overall aim will be to determine whether an incoming sample comes from a Gaussian distribution or not. Therefore, an output in the form of a probability value between 0 and 1 seems appropriate. A value close to 1 means it is very probable that the sample comes from the Gaussian, whereas a value close to 0 indicates a low probability of being Gaussian. Sigmoid is therefore used as the final activation function of the forward path through the network. As the Generator outputs a 1D tensor, the Discriminator's input is of the same size. Distinguishing is expected to be a simpler task than generating, so the Discriminator's forward method is built from a much more reduced stack of layers than the Generator's. Additionally, the Discriminator gets another property k, which is a training hyperparameter: the Discriminator will be trained k times on the given batch, which means potentially k times more often than the Generator.

              
                class Discriminator(nn.Module):
                  """ Decides if a sample is drawn from
                      Gaussian distribution.
                  """
                  def __init__(self, batch_size: int):
                      super(Discriminator, self).__init__()
                      self.batch_size = batch_size
                      self.lr = 0.0005
                      self.input_size = 1
                      self.output_size = 1
                      # Train the Discriminator k times per batch.
                      self.k = 2

                      self.fc = nn.Linear(self.input_size, 8)
                      self.fc2 = nn.Linear(8, self.output_size)
                      self.lrelu = nn.LeakyReLU(negative_slope=0.1)
                      self.sigmoid = nn.Sigmoid()

                  def forward(self, x: Tensor) -> Tensor:
                      output = self.fc(x)
                      output = self.lrelu(output)
                      output = self.fc2(output)
                      # Sigmoid maps the output to a probability in (0, 1).
                      output = self.sigmoid(output)

                      return output
              
            
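Fed with either real or generated samples, the Discriminator returns one probability per sample; a small sketch of both cases, reusing the generator from above:

                discriminator = Discriminator(batch_size=8)

                real_samples = torch.randn(8, 1)                   # from the target Gaussian
                fake_samples = generator(torch.rand(8, 1) * 2 - 1)

                print(discriminator(real_samples).shape)  # torch.Size([8, 1]), values in (0, 1)
                print(discriminator(fake_samples).shape)  # torch.Size([8, 1]), values in (0, 1)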

As the GAN contains both networks, a combined object holds a Generator and a Discriminator instance, here self._generator and self._discriminator. Within this class, the iterative training process can be executed. Besides that, the batch_size and the number of iterations for the alternating training of Generator and Discriminator, given by n_iterations, are defined here.

              
              class SimpleGan():
                def __init__(self):
                  self.n_iterations = 20000
                  self.batch_size = 512

                  # Submodels
                  self._generator = Generator(self.batch_size)
                  self._discriminator = Discriminator(self.batch_size)

              
            
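The train methods of both networks are not shown here, but the alternating scheme they implement can be outlined. The following is only a sketch of how such a training loop could look, not the original implementation: binary cross-entropy as loss, Adam as optimizer and real/fake labels of 1 and 0 are assumptions rather than details taken from the code above.

              # Sketch only: assumes BCELoss, Adam, real label = 1, fake label = 0.
              def train_gan(gan: SimpleGan):
                  criterion = nn.BCELoss()
                  opt_g = torch.optim.Adam(gan._generator.parameters(), lr=gan._generator.lr)
                  opt_d = torch.optim.Adam(gan._discriminator.parameters(), lr=gan._discriminator.lr)

                  for _ in range(gan.n_iterations):
                      # Train the Discriminator k times per iteration.
                      for _ in range(gan._discriminator.k):
                          real = torch.randn(gan.batch_size, 1)      # samples from the target Gaussian
                          z = torch.rand(gan.batch_size, 1) * 2 - 1  # latent input from U(-1, 1)
                          fake = gan._generator(z).detach()          # no Generator update in this step

                          d_loss = criterion(gan._discriminator(real), torch.ones(gan.batch_size, 1)) \
                                 + criterion(gan._discriminator(fake), torch.zeros(gan.batch_size, 1))
                          opt_d.zero_grad()
                          d_loss.backward()
                          opt_d.step()

                      # Train the Generator once: try to make the Discriminator output 1.
                      z = torch.rand(gan.batch_size, 1) * 2 - 1
                      g_loss = criterion(gan._discriminator(gan._generator(z)),
                                         torch.ones(gan.batch_size, 1))
                      opt_g.zero_grad()
                      g_loss.backward()
                      opt_g.step()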
The outcome of interest when training a GAN is most often the trained Generator. The Discriminator provides a function approximation for an otherwise unavailable estimate of how plausible, or "real", a given sample is.
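Once training has run, the Generator can be inspected with the matplotlib import from the beginning: pushing many latent values through it and plotting a histogram should, if training succeeded, resemble the target Gaussian. A small sketch of this check, reusing the hypothetical train_gan from above:

              gan = SimpleGan()
              train_gan(gan)  # training sketch from above

              # Visual check: the histogram of generated samples should look Gaussian.
              with torch.no_grad():
                  z = torch.rand(10000, 1) * 2 - 1
                  samples = gan._generator(z).squeeze().numpy()

              plt.hist(samples, bins=100)
              plt.title("Samples from the trained Generator")
              plt.show()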