The Contextual Loss
Roey Mechrez*
Itamar Talmi*
Firas Shama
Lihi Zelnik-Manor
Technion – Israel Institute of Technology
ECCV 2018 (Oral) [paper 1] [Supplementary 1]
arXiv 2018 [paper 2] [Supplementary 2]
Code [GitHub]
Abstract
Maintaining natural image statistics is a crucial factor in the restoration and generation of realistic-looking images.
When training CNNs, photorealism is usually attempted through adversarial training (GAN), which pushes the output images to lie on the manifold of natural images.
GANs are very powerful, but not perfect: they are hard to train, and the results still often suffer from artifacts.
The contextual loss is a complementary approach, whose goal is to train a feed-forward CNN to maintain natural internal statistics.
We look explicitly at the distribution of features in an image and train the network to generate images with natural feature distributions.
Our approach reduces by orders of magnitude the number of images required for training and achieves state-of-the-art results on both single-image super-resolution and high-resolution surface normal estimation.
We also show that with the contextual loss it is possible to train a CNN to maintain natural image statistics.
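To make the idea concrete, below is a minimal NumPy sketch of the contextual loss as described in the paper: pairwise cosine distances between feature sets are normalized per row, converted to similarities with a bandwidth `h`, and each target feature is matched to its best source feature. This is an illustrative reimplementation, not the authors' released code; the bandwidth value and `eps` are assumed defaults.

```python
import numpy as np

def contextual_loss(X, Y, h=0.5, eps=1e-5):
    """Contextual loss between two feature sets (illustrative sketch).

    X, Y: (N, C) arrays of N feature vectors (e.g. pretrained CNN
    activations) from the generated and target image, respectively.
    """
    # Center both sets by the mean of the target features (per the paper)
    mu = Y.mean(axis=0, keepdims=True)
    Xc, Yc = X - mu, Y - mu
    # Normalize to unit length so the dot product is cosine similarity
    Xn = Xc / (np.linalg.norm(Xc, axis=1, keepdims=True) + eps)
    Yn = Yc / (np.linalg.norm(Yc, axis=1, keepdims=True) + eps)
    # Pairwise cosine distance d_ij between x_i and y_j
    d = 1.0 - Xn @ Yn.T                                  # (N_x, N_y)
    # Normalize each row by its minimum distance
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)
    # Exponentiate into similarities and normalize into CX_ij
    w = np.exp((1.0 - d_tilde) / h)
    cx_ij = w / w.sum(axis=1, keepdims=True)
    # For each target feature, keep its best match; average, then -log
    cx = cx_ij.max(axis=0).mean()
    return -np.log(cx + eps)
```

Matching two identical feature sets yields a loss near zero, while mismatched distributions score higher, which is what lets the loss push generated statistics toward natural ones.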
Applications
[single-image animation]
[Puppet Control Video]
Gender Translation (no GAN):
[male2female]
[female2male]
Domain Transfer (with GAN):
[horse2zebra]
Perceptual Super Resolution Results:
[PIRM]
[BSD100]
[DIV2K]
[set14]
[set5]
[urban100]
Papers
Try Our Code
Code to reproduce the experiments described in our paper is available on [GitHub]
Recent Related Work
Template Matching with Deformable Diversity Similarity
Itamar Talmi*, Roey Mechrez*, Lihi Zelnik-Manor
In IEEE CVPR, 2017. [ProjectPage]