Results
Dataset
For all my experiments I use the monet2photo dataset, which consists of Claude Monet paintings and landscape photographs from Flickr. All examples below are generated with the photo-to-painting generator.
Training the teacher
The first task was implementing CycleGAN. I used Catalyst with callbacks to separate the generator and discriminator training phases.
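As a framework-agnostic illustration of what those separate phases do, here is a minimal sketch of alternating GAN updates in plain PyTorch. The model sizes, losses, and data are placeholders, not the actual CycleGAN setup:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the generator and discriminator (placeholder sizes).
G = nn.Sequential(nn.Linear(8, 8), nn.Tanh())
D = nn.Sequential(nn.Linear(8, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(4, 8)  # a toy "batch" of real samples

# Discriminator phase: G's output is detached, D learns to separate real from fake.
fake = G(torch.randn(4, 8)).detach()
loss_D = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Generator phase: no step on opt_D, G learns to fool the discriminator.
fake = G(torch.randn(4, 8))
loss_G = bce(D(fake), torch.ones(4, 1))
opt_G.zero_grad()
loss_G.backward()
opt_G.step()
```

In Catalyst this alternation is expressed with callbacks that switch which optimizer steps on each phase, rather than written inline as above.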
I trained the teacher network for about 60 hours (62k iterations) to get relatively good results; here is an example of the output images:
I will say more about the artifacts later.
Distillation
I used the scheme proposed in the previous section and found that the student converges extremely fast (about three hours and 30k iterations) to results on par with the teacher.
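The layer-transfer part of that scheme can be sketched as copying every teacher weight whose name and shape match into the student, leaving the rest randomly initialized. The module layout below is hypothetical, not the actual architectures:

```python
import torch
import torch.nn as nn

# Hypothetical teacher and a smaller student sharing the first layer's shape.
teacher = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 8)
)
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))

# Transfer parameters with matching names and shapes; others keep random init.
t_state = teacher.state_dict()
s_state = student.state_dict()
transferred = {
    k: v for k, v in t_state.items() if k in s_state and v.shape == s_state[k].shape
}
s_state.update(transferred)
student.load_state_dict(s_state)
```

After this the student is fine-tuned as usual; only the copied layers start from the teacher's weights.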
Reliability of the results
After this experiment I thought that was it, but to check whether the result was actually meaningful I also trained the student network without layer transfer, initializing the whole network with random weights. Convergence was only slightly worse than the teacher's training process: it took about two days, and the resulting images were quite good, though with slightly more artifacts.
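For this baseline the student starts entirely from random weights. One minimal way to reset a PyTorch module, shown here as an illustration rather than the exact code used:

```python
import torch.nn as nn

def reset_all(module: nn.Module) -> None:
    """Re-initialize every submodule that defines reset_parameters()."""
    for m in module.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

# Hypothetical student architecture; every layer gets fresh random weights.
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
reset_all(student)
```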
Using a pre-trained network
Artifacts
I faced two types of artifacts. The first is red or green dots in the dark parts of the images:
The second is a checkerboard artifact in the sky:
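The checkerboard pattern is a well-known side effect of transposed convolutions whose kernel size is not divisible by the stride. A common mitigation, not necessarily what was used here, is to replace the transposed convolution with nearest-neighbor upsampling followed by an ordinary convolution:

```python
import torch
import torch.nn as nn

# Transposed conv with kernel 3, stride 2: overlap is uneven, prone to checkerboard.
up_transposed = nn.ConvTranspose2d(
    64, 32, kernel_size=3, stride=2, padding=1, output_padding=1
)

# Mitigation: resize first, then convolve, so every output pixel is covered evenly.
up_resize = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
)

x = torch.randn(1, 64, 16, 16)
# Both variants double the spatial resolution to (1, 32, 32, 32).
y1 = up_transposed(x)
y2 = up_resize(x)
```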