Final Year Project by Harry Bond-Preston
Optimising random noise, as proposed by Gatys et al.: a single pretrained convolutional network, used as a fixed feature extractor, can synthesise a texture from any source image. The demonstration shows the noise image in real time as it is optimised over 150 iterations. It speeds up as it progresses, since the image changes less with each iteration.
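The core of the Gatys-style approach is matching Gram matrices of feature maps between the noise image and the source. The sketch below is a heavily simplified, hypothetical stand-in: it treats a small random matrix as a "feature map" and runs plain gradient descent on the Gram-matrix loss, rather than back-propagating through a real pretrained network. All names and sizes here are illustrative.

```python
import numpy as np

def gram(x):
    # x: (channels, pixels) feature map; the Gram matrix captures
    # second-order texture statistics, discarding spatial layout
    return x @ x.T

def style_loss(x, target_gram):
    return np.sum((gram(x) - target_gram) ** 2)

def style_grad(x, target_gram):
    # analytic gradient of ||G(x) - G_target||_F^2 with respect to x
    return 4.0 * (gram(x) - target_gram) @ x

rng = np.random.default_rng(0)
target = rng.standard_normal((3, 64))   # stand-in "feature map" of the source image
x = rng.standard_normal((3, 64))        # start from random noise
G_t = gram(target)

losses = []
lr = 1e-4
for _ in range(150):                    # mirrors the 150-iteration demo
    losses.append(style_loss(x, G_t))
    x -= lr * style_grad(x, G_t)
```

In the real method the feature maps come from several layers of a pretrained CNN and the gradient reaches the image via backpropagation, but the loss being minimised has this same Gram-matrix form.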
Generative Adversarial Networks (GANs). Here the generators are pretrained for 42k iterations on random crops of the source image, with a checkpoint of the generator saved at regular intervals. The demonstration generates an image from each checkpoint in turn to show the learning process. Again it speeds up as it progresses, since the generator's improvement slows as it becomes more accurate.
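The checkpoint-replay idea can be sketched independently of any particular GAN. The toy below is purely illustrative (the real project trains PSGAN/DCGAN generators): a stand-in "generator" with a single weight is nudged toward a target during a fake training loop, a deep copy is saved every `save_every` steps, and the saved checkpoints are then replayed on the same input to reconstruct the learning process.

```python
import copy

class ToyGenerator:
    """Hypothetical stand-in generator: one scalar weight."""
    def __init__(self):
        self.w = 0.0

    def generate(self, z):
        return self.w * z

gen = ToyGenerator()
target_w = 1.0
save_every = 1000
checkpoints = []

for step in range(1, 5001):
    # toy "training": move the weight a small fraction toward its target
    gen.w += 0.001 * (target_w - gen.w)
    if step % save_every == 0:
        checkpoints.append(copy.deepcopy(gen))

# replay: generate from the same input with each saved checkpoint;
# successive frames show the generator's output converging
frames = [ckpt.generate(2.0) for ckpt in checkpoints]
```

In a real training script the checkpoints would be serialised model weights on disk rather than in-memory copies, but the replay loop has the same shape.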
The two GAN models are the Periodic Spatial GAN (PSGAN) and the Deep Convolutional GAN (DCGAN) respectively. DCGAN is limited to 64x64 images.
By interpolating between random noise vectors and feeding these into the trained GAN generators, it is possible to create decent-looking, tileable animated textures from a single image, as seen below.
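One common way to interpolate between Gaussian latent vectors is spherical interpolation (slerp), which tends to keep intermediate vectors at a plausible norm. The sketch below is an assumption about how such frames might be produced; the `slerp` helper and the vector sizes are illustrative, and each resulting latent would be fed to the trained generator to render one animation frame.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:
        # vectors are nearly parallel: fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    return np.sin((1.0 - t) * omega) / so * z0 + np.sin(t * omega) / so * z1

rng = np.random.default_rng(1)
z0, z1 = rng.standard_normal(64), rng.standard_normal(64)

# one latent per animation frame; sweeping t from 0 to 1 and back
# again gives a seamlessly looping animated texture
frames = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 30)]
```

Linear interpolation also works, but for high-dimensional Gaussian latents slerp is often preferred because most of the distribution's mass lies near a hypersphere shell.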