Blind Image Super-Resolution with Spatially Variant Degradations

publication
ACM SIGGRAPH Asia 2019
authors
Victor Cornillère, Abdelaziz Djelouah, Wang Yifan, Olga Sorkine-Hornung, Christopher Schroers


Upscaling results with spatially varying degradation. Handling spatially variant degradations is critical when dealing with composited content. In this case, the spaceship was composited onto the background image. The two regions were downscaled with different kernels, and as a result, no single kernel can upscale the entire image without artifacts. Our method avoids these problems by automatically adapting the degradation kernel locally. Photo credits: derivative of Spaceship by Francois Grassard (CC-BY).

abstract

Existing deep learning approaches to single image super-resolution have achieved impressive results, but they mostly assume a setting with fixed pairs of high-resolution (HR) and low-resolution (LR) images. However, robustly addressing realistic upscaling scenarios, where the relation between the HR and LR images is unknown, requires blind image super-resolution. To this end, we propose a solution that relies on three components: First, we use a degradation-aware SR network to synthesize the HR image given a low-resolution image and the corresponding blur kernel. Second, we train a kernel discriminator to analyze the generated HR image and predict the errors that arise when an incorrect blur kernel is provided to the generator. Finally, we present an optimization procedure that recovers both the degradation kernel and the HR image by minimizing the error predicted by our kernel discriminator. We also show how to extend our approach to spatially variant degradations, which typically arise in visual effects pipelines when compositing content from different sources, and how to enable both local and global user interaction in the upscaling process.
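
To make the third component more concrete, the sketch below illustrates the kind of kernel-recovery loop the abstract describes: the blur kernel is treated as a free variable and optimized so that the kernel discriminator predicts as little error as possible in the upscaled result. This is a minimal illustration, not the authors' code; the module interfaces (`sr_net`, `kernel_disc`), the Gaussian kernel parameterization, and all hyper-parameters are assumptions made for the example.

```python
# Minimal sketch (not the authors' implementation) of blind kernel recovery:
# optimize a low-dimensional blur-kernel parameterization so that the kernel
# discriminator predicts minimal error in the generated HR image.
import torch


def recover_kernel_and_upscale(lr_image, sr_net, kernel_disc,
                               kernel_size=21, steps=200, step_size=1e-2):
    """lr_image: (1, 3, h, w) tensor.
    sr_net(lr, kernel) -> HR image (assumed interface of the SR network).
    kernel_disc(hr, kernel) -> per-pixel error map (assumed interface)."""
    # Assumed parameterization: anisotropic Gaussian given by log std-devs
    # along two axes and a rotation angle.
    params = torch.zeros(3, requires_grad=True)  # [log_sx, log_sy, theta]
    opt = torch.optim.Adam([params], lr=step_size)

    coords = torch.arange(kernel_size).float() - kernel_size // 2
    ys, xs = torch.meshgrid(coords, coords, indexing="ij")

    def make_kernel(p):
        sx, sy, theta = p[0].exp(), p[1].exp(), p[2]
        # Rotate the coordinate grid, then evaluate a normalized Gaussian.
        xr = xs * torch.cos(theta) + ys * torch.sin(theta)
        yr = -xs * torch.sin(theta) + ys * torch.cos(theta)
        k = torch.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))
        return k / k.sum()

    for _ in range(steps):
        opt.zero_grad()
        kernel = make_kernel(params)
        hr = sr_net(lr_image, kernel)       # degradation-aware upscaling
        err = kernel_disc(hr, kernel)       # predicted kernel-mismatch error
        loss = err.abs().mean()             # minimize the predicted error
        loss.backward()
        opt.step()

    with torch.no_grad():
        kernel = make_kernel(params)
        return sr_net(lr_image, kernel), kernel
```

For spatially variant degradations, the same idea would be applied per region or per pixel rather than with a single global kernel, which is what allows composited content to be handled without a one-kernel compromise.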

downloads