DeepCFL: Deep Contextual Features Learning from a Single Image
Abstract
Recently, there has been vast interest in developing image feature learning methods that are independent of the training data, such as deep image prior [35], InGAN [28, 29], SinGAN [27], and DCIL [8]. These methods perform various tasks, such as image restoration, image editing, and image synthesis. In this work, we propose a new training data-independent framework, called Deep Contextual Features Learning (DeepCFL), to perform image synthesis and image restoration based on the semantics of the input image. The contextual features are the high-dimensional vectors representing the semantics of the given image. DeepCFL is a single image GAN framework that learns the distribution of the context vectors from the input image. We show the performance of contextual learning in various challenging scenarios: outpainting, inpainting, and restoration of randomly removed pixels. DeepCFL is applicable when the input source image and the generated target image are not aligned. We illustrate image synthesis using DeepCFL for the task of image resizing.
1 Introduction
Recently, there has been remarkable success with image restoration and image synthesis methods that do not use training data [35, 30, 7, 28, 27, 31, 8]. One of the major challenges for the deep feature learning methods above is the limited contextual understanding in the absence of feature learning from training samples [7]. Contextual learning is mostly studied for image inpainting [25] and image transformation tasks [23], where many pairs of source and target images are used to learn the image context.







Restoration of missing pixels in an image is a classical inverse problem [46, 18, 5, 33, 44, 4, 6, 10, 32]. It addresses various applications such as image editing, restoration of damaged paintings, image completion, and image outpainting. The image transformation model allows formulation for a variety of tasks such as style transfer, single image animation, and domain transfer [23].
Traditionally, image restoration is formulated as an optimization problem, where the objective function includes a loss term and an image prior term, e.g., sparse [1, 9] and low-rank [13] priors. The desired image is reconstructed by finding the solution of the optimization problem. Deep learning models have shown an ability to capture image priors implicitly by minimizing the loss over the training samples [17, 25, 19, 43, 3, 38, 36, 37, 40]. However, training data-based methods have their limitations, such as generalizability to new images [35, 7].
Recently, there has been growing interest in developing methods that are independent of training data to perform image restoration and image synthesis tasks [35, 30, 7, 28, 27, 31, 8]. Ulyanov et al. proposed deep image prior (DIP) [35], which shows that the handcrafted structure of a convolutional neural network (CNN) provides an implicit image prior. However, image prior learning using a pixel-to-pixel loss as in [35] is limited to tasks that have a spatial correspondence between the pixels of the source image and the target image [23]. One approach when the source and the target images are not aligned is to learn the internal patch distribution from the input image.
The single image GAN frameworks show applications where the spatial mapping between the source and the target images is not well-defined [27, 28, 29, 30]. Shocher et al. proposed an internal learning (IL) framework to synthesize realistic image patches using an image-conditional GAN, called InGAN [28, 29]. Shaham et al. showed an unconditional generative model for image synthesis, named SinGAN [27]. Mastan et al. presented a single image GAN framework for denoising super-resolution and image resizing [8].
The pixel-to-pixel loss framework in [35] and the internal patch distribution learning frameworks in [27, 28, 29] do not perform image reconstruction by considering the context of the objects. An image can be considered as a collection of high-dimensional context vectors [23]. These high-dimensional vectors are the image statistics captured at the intermediate layers of a feature extractor such as the VGG19 network [23, 22]. An interesting question is: given an incomplete image summary, can we synthesize new context vectors and use them to reconstruct the image? The context of an image is critical to perform image restoration and image synthesis tasks (Fig. 1 and Fig. 8) [35, 7, 30, 28]. We present a single image GAN framework (DeepCFL) which studies the contextual features in the image. The problem is novel as it aims to learn the distribution of the contextual features (contextual learning) in the image instead of the internal patch distribution, as in the case of InGAN [28, 29] and SinGAN [27].
We have shown a pictorial representation of DeepCFL in Fig. 3. The aim is to utilize the image features of the original image $x$ which are present in the corrupted image $\hat{x}$. We generate a restored image $y$ which utilizes the image features from $\hat{x}$. We use an encoder-decoder network to generate $y$. Then, we iteratively minimize the total loss (TL) between the corrupted image and the restored image. TL is a combination of the contextual features loss (CFL) and the reconstruction loss (RL). Fig. 3 shows that CFL allows feature learning using two different tools: the contextual adversarial loss (CAL) and the context vectors loss (CVL). The detailed description of each component of the framework and the formal definitions of the loss functions are given in Sec. 3.
CAL performs distribution matching in the adversarial framework to synthesize new context vectors for the corrupted image $\hat{x}$. CVL computes the direct difference between the context vectors extracted from the corrupted image $\hat{x}$ and the restored image $y$. Therefore, within CFL, CAL generates new context vectors and CVL refines them. RL is a pixel-to-pixel loss (i.e., mean squared error), which ensures the preservation of image features in the restored images. Intuitively, the main idea is to generate new context vectors using CFL and map them to the image features implicitly through a pixel-based comparison using RL.
We have studied the performance of DeepCFL for the following tasks: image outpainting, inpainting of arbitrary holes, and restoration of pixels missing in the corrupted image. We also show the applications in the presence of non-aligned image data using image resizing. The key contributions of this work are summarized below.
- DeepCFL investigates image reconstruction considering the contextual features. Contextual features learning is useful for applications that use only a single image as input. We show the generalizability of DeepCFL by performing multiple applications (Sec. 4).
2 Related work
Deep feature learning captures good image features by using the strong internal data repetitions (self-similarity prior) [11, 15, 45, 30, 42], hand-crafted structure [35, 7], and explicit regularizers [21]. DeepCFL is a single image GAN setup, which is different from the feature learning frameworks proposed earlier [37, 34, 39, 26, 19, 41, 24, 23]. Single image GAN frameworks perform a variety of tasks such as image editing [27], retargeting [29], denoising super-resolution [8], and video inpainting [42, 16]. Our contextual learning framework is related to [35, 7, 28, 8]. InGAN [29, 28] and SinGAN [27] are single image GAN frameworks for learning the internal patch distribution. DCIL leverages internal learning with the contextual loss [8]. Unlike [37], DeepCFL does not employ a masked patch discriminator for CAL; it also does not use a feature expansion network and instead relies on the feature reconstruction capabilities of the encoder-decoder network.
3 Our Framework
DeepCFL is a single image GAN framework to synthesize new context vectors that are consistent with the semantics of the input source image. The task is to extract features from the source image and synthesize a new target image. The source image could be a clean or a corrupted image. The target image could be of the same size as the source image or a different size. For example, in the case of image restoration, we use a corrupted source image with missing pixel values. The contextual features are used to fill the missing regions of the corrupted image. For image synthesis, a clean image is used to synthesize new images of different sizes. Below, we discuss image restoration and context vectors before we describe the DeepCFL framework.
Let $\mathcal{X}$ denote the set of original images, $\hat{\mathcal{X}}$ denote the set of corrupted images, and $\mathcal{Y}$ denote the set of restored images. Let $\hat{x}$ denote a corrupted image, i.e., $\hat{x} \in \hat{\mathcal{X}}$. $\hat{x}$ is computed by removing pixels from an original image $x \in \mathcal{X}$ using a binary mask $m$ as follows: $\hat{x} = x \odot m$, where $\odot$ is the Hadamard product and $m \in \{0, 1\}^{H \times W}$. The mask $m$ defines the underlying image restoration application. For example, in image outpainting of 20% of the pixels, the mask removes 10% of the pixels each along the right side and the left side of the image. For the restoration of randomly missing pixels, the mask contains zeros at random locations. For image inpainting, the mask contains arbitrary shapes. The objective is to restore the image details in $\hat{x}$ which were removed by $m$.
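To make the corruption model concrete, the sketch below (PyTorch; the helper names and mask fractions are our own, not from the paper) builds $\hat{x} = x \odot m$ for the outpainting and random pixel-removal masks.

```python
import torch

def outpainting_mask(h, w, fraction=0.2):
    """Zero out `fraction` of the columns, split between the left and right sides."""
    m = torch.ones(1, h, w)
    side = int(w * fraction / 2)
    m[..., :side] = 0
    m[..., w - side:] = 0
    return m

def random_pixel_mask(h, w, keep=0.5):
    """Keep each pixel independently with probability `keep` (an inpainting mask
    would instead zero out arbitrary hole shapes)."""
    return (torch.rand(1, h, w) < keep).float()

# x: original image as a (3, H, W) tensor in [0, 1]
x = torch.rand(3, 256, 256)
m = outpainting_mask(256, 256, fraction=0.2)
x_hat = x * m   # Hadamard product: corrupted source image
```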
Image restoration procedure. The task is to generate a new image $y$, which contains the restored pixels. Here, $G$ is the generator network which maps the corrupted image $\hat{x}$ to a restored image $y$, i.e., $y = G(\hat{x})$. The corrupted image $\hat{x}$ can be considered as a source image as it contains the features from the original image $x$. The main intuition is to estimate the context for the masked regions of $y$ based on the image features present at the unmasked regions of $\hat{x}$ (Fig. 3). The image restoration process iteratively minimizes the loss computed between $\hat{x}$ and $y$.
What are context vectors? The context vectors of an image are the image statistics present at the intermediate layers of a feature extractor $\phi$. VGG19 has been widely used to extract such image statistics. Formally, given an image $x$, let $\phi(x) = \{\phi^{l}(x)\}_{l=1}^{L}$ denote the set of context vectors extracted from $x$. Here, $\phi$ is the pre-trained VGG19 network [12] which maps the image $x$ to its context vectors $\phi(x)$. $\phi^{l}(x)$ denotes the features extracted from the $l^{th}$ layer of $\phi$ and $L$ is the number of layers in $\phi$.
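A minimal sketch of such a context-vector extractor follows. It assumes the torchvision VGG19 (torchvision ≥ 0.13 weights API) and an illustrative layer choice (relu4_2); the exact layer used by the paper is not specified in this text.

```python
import torch
import torchvision.models as models

class ContextExtractor(torch.nn.Module):
    """Extract context vectors (intermediate VGG19 feature maps) from an image."""

    def __init__(self, layer_ids=(22,)):   # index 22 = relu4_2 in torchvision's vgg19.features
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        for p in self.parameters():
            p.requires_grad_(False)        # the feature extractor stays frozen

    def forward(self, img):
        feats = []
        h = img
        for i, layer in enumerate(self.vgg):
            h = layer(h)
            if i in self.layer_ids:
                feats.append(h)            # one context-vector tensor per selected layer
        return feats

phi = ContextExtractor()
context_vectors = phi(torch.rand(1, 3, 256, 256))
```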
Why are context vectors important? Fig. 1 and Fig. 8 show that the contextual learning framework allows image restoration and image synthesis based on the semantics of the input (refer to Fig. 4 and Fig. 6 for more examples). For example, in the case of restoration of missing pixels, the key idea is to improve the masked regions in the restored image $y$ using the unmasked regions in the corrupted image $\hat{x}$. It is done by matching the distribution of the contextual features of the corrupted image and the contextual features of the restored image (Sec. 3.2).
DeepCFL. We now discuss the DeepCFL framework shown in Fig. 3. It consists of a generator $G$, a discriminator $D$, and a feature extractor $\phi$. The corrupted image $\hat{x}$ is fed into $G$. The generator outputs an image $y$. Next, we feed $\hat{x}$ and $y$ into $\phi$ to compute $\phi(\hat{x})$ and $\phi(y)$. Then we minimize the total loss (TL) computed between $\hat{x}$ and $y$ (Eq. 1). The two primary components of TL are the contextual features loss (CFL) and the reconstruction loss (RL). CFL synthesizes new context vectors for the masked regions in $y$, where the feature learning procedure is assisted by the contextual features in $\hat{x}$. $D$ is used for computing CFL. RL is computed between the unmasked regions of $\hat{x}$ and $y$ to provide image feature consistency in $y$.
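The following sketch shows how these pieces could fit together in one optimization loop. The networks here are tiny stand-ins so the snippet runs end-to-end; the real $G$, $D$, and $\phi$ are described in Sec. 3.1, the loss coefficients are illustrative, and the CVL term of Sec. 3.2.1 is omitted for brevity.

```python
import torch
import torch.nn as nn

# Tiny stand-ins so the loop runs end-to-end; the real G, D, and phi are the
# depth-5 encoder-decoder, multi-scale discriminator, and VGG19 of Sec. 3.1.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
phi = nn.Conv2d(3, 8, 3, padding=1)   # frozen feature-extractor stand-in
D = nn.Conv2d(8, 1, 1)                # critic over context vectors
for p in phi.parameters():
    p.requires_grad_(False)

x_hat = torch.rand(1, 3, 64, 64)                 # corrupted source image
m = (torch.rand(1, 1, 64, 64) > 0.2).float()     # binary mask (1 = observed pixel)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
lam_cfl, lam_rl = 1.0, 10.0                      # illustrative loss coefficients

for step in range(200):
    # Discriminator update (LSGAN): real context vectors -> 1, synthesized -> 0.
    with torch.no_grad():
        y = G(x_hat)
    d_loss = ((D(phi(x_hat)) - 1) ** 2).mean() + (D(phi(y)) ** 2).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial CFL term plus masked reconstruction loss RL.
    y = G(x_hat)
    cal = ((D(phi(y)) - 1) ** 2).mean()          # fool the critic on context vectors
    rl = ((m * (y - x_hat)) ** 2).mean()         # compare only the observed pixels
    g_loss = lam_cfl * cal + lam_rl * rl
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```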
3.1 Network Design
Generator. The generator $G$ maps the source image $\hat{x}$ to the target image $y$. $G$ is a depth-5 encoder-decoder network without skip connections (ED). The ED architecture works as an implicit regularizer to stabilize the image feature learning [7, 35]. It exploits the inherent self-similarity present in the source image. We use context normalization [37] to maximize feature learning. Intuitively, DeepCFL is unsupervised in the sense that no training data are used to train the generator network for any of the tasks. It is a single image GAN framework which uses the pre-trained VGG19 as the feature extractor. VGG19 is widely used in style transfer works for defining losses in the VGG feature space. The feature extractor distills a strong prior into the framework [8].
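A minimal sketch of such a depth-5 encoder-decoder without skip connections is given below; the channel widths are illustrative, and we substitute instance normalization for context normalization [37] for simplicity.

```python
import torch
import torch.nn as nn

def down(cin, cout):
    # stride-2 convolution halves the spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):
    # nearest-neighbour upsampling followed by a convolution restores the resolution
    return nn.Sequential(nn.Upsample(scale_factor=2, mode='nearest'),
                         nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))

class EncoderDecoder(nn.Module):
    """Depth-5 encoder-decoder without skip connections; channel widths are illustrative."""

    def __init__(self, channels=(32, 64, 128, 128, 128)):
        super().__init__()
        enc, cin = [], 3
        for c in channels:
            enc.append(down(cin, c))
            cin = c
        dec = []
        for c in list(reversed(channels))[1:] + [32]:
            dec.append(up(cin, c))
            cin = c
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec)
        self.to_rgb = nn.Sequential(nn.Conv2d(cin, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x_hat):
        return self.to_rgb(self.decoder(self.encoder(x_hat)))

G = EncoderDecoder()
y = G(torch.rand(1, 3, 256, 256))   # restored image has the same size as the input
```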
Discriminator. The discriminator $D$ maps the context vectors $\phi(y)$ to a discriminator map $D(\phi(y))$, where each entry denotes the probability of the corresponding context vector coming from the distribution of the contextual features of the original image. Fig. 3 illustrates the discriminator task of distinguishing the context vectors $\phi(\hat{x})$ and $\phi(y)$. The generator $G$ learns the context vectors through its interaction with $D$. We use a multi-scale discriminator (MSD), where the output is a weighted average of the outputs of several discriminators (illustrated using a single CNN for simplicity in Fig. 3). Note that the discriminators in MSD resize the context vectors.
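A possible realization of a multi-scale discriminator over context vectors is sketched below; the number of scales, the critic architecture, and the plain averaging of the score maps are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextDiscriminator(nn.Module):
    """A small fully-convolutional critic that maps context vectors to a score map."""

    def __init__(self, cin):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cin, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 1))            # per-location realness score

    def forward(self, f):
        return self.net(f)

class MultiScaleDiscriminator(nn.Module):
    """Average the score maps of several critics, each seeing resized context vectors."""

    def __init__(self, cin, scales=(1.0, 0.5, 0.25)):
        super().__init__()
        self.scales = scales
        self.critics = nn.ModuleList([ContextDiscriminator(cin) for _ in scales])

    def forward(self, f):
        outs = []
        for s, critic in zip(self.scales, self.critics):
            fs = f if s == 1.0 else F.interpolate(f, scale_factor=s, mode='bilinear',
                                                  align_corners=False)
            out = critic(fs)
            # bring every score map back to a common resolution before averaging
            outs.append(F.interpolate(out, size=f.shape[-2:], mode='bilinear',
                                      align_corners=False))
        return torch.stack(outs).mean(dim=0)

D = MultiScaleDiscriminator(cin=512)        # e.g., 512-channel VGG19 relu4_2 features
score_map = D(torch.rand(1, 512, 32, 32))
```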
3.2 Loss Function
The goal of the loss function is to maximize the feature learning from the source $\hat{x}$ by comparing it with the generated image $y$. The total loss (TL) is defined in Eq. 1.
$$\mathcal{L}_{TL} = \lambda_{CFL}\,\mathcal{L}_{CFL} + \lambda_{RL}\,\mathcal{L}_{RL} \qquad (1)$$
Here, $\mathcal{L}_{CFL}$ denotes CFL and $\mathcal{L}_{RL}$ denotes RL. The terms $\lambda_{CFL}$ and $\lambda_{RL}$ are the coefficients of CFL and RL. We have pictorially shown CFL and RL in Fig. 3. The total loss described in Eq. 1 compares the image features in two ways: CFL and RL. CFL provides new image features to $y$ which are consistent with the object context of $\hat{x}$. RL maximizes the likelihood of the observed pixels starting from randomly initialized network weights.
3.2.1 Contextual Features Loss (CFL)
The purpose of CFL is to learn the distribution of context vectors to synthesize image features in $y$ based on the semantics of the input $\hat{x}$. We extract the context vectors $\phi(\hat{x})$ and $\phi(y)$ and then minimize the loss described in Eq. 2.
$$\mathcal{L}_{CFL} = \lambda_{CAL}\,\mathcal{L}_{CAL} + \lambda_{CVL}\,\mathcal{L}_{CVL} \qquad (2)$$
Here, $\mathcal{L}_{CFL}$ denotes CFL, $\mathcal{L}_{CAL}$ denotes CAL, and $\mathcal{L}_{CVL}$ denotes CVL. $\lambda_{CAL}$ and $\lambda_{CVL}$ are the coefficients of CAL and CVL. Eq. 2 shows that CFL compares the context vectors in two ways: (1) context vector comparison in the adversarial framework using CAL; (2) contextual feature comparison by computing the cosine distance in CVL. CAL is an adversarial loss computed using the generator $G$ and the discriminator $D$. It aims to synthesize new contextual features that are indistinguishable from the features of the source image. CVL computes the difference between contextually similar vectors to make the synthesized features of $y$ similar to the features of $\hat{x}$.
Context Adversarial Loss (CAL). We have used the LSGAN [20] variant of the adversarial learning framework.
$$G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{CAL}(G, D) \qquad (3)$$
Here, $G^{*}$ is the generator with optimal parameters. The loss $\mathcal{L}_{CAL}(G, D)$ is defined in Eq. 4.
$$\mathcal{L}_{CAL}(G, D) = \mathbb{E}\big[(D(\phi(\hat{x})) - 1)^{2}\big] + \mathbb{E}\big[D(\phi(G(\hat{x})))^{2}\big] \qquad (4)$$
Eq. 4 shows the distribution matching between the context vectors $\phi(y)$ of the restored image and the context vectors $\phi(\hat{x})$ of the corrupted image. The discriminator tries to determine whether the context vectors come from $\hat{x}$ or from $y$ (see Fig. 3). Intuitively, this helps fill in the context of the masked regions of $y$ by learning the context of the objects in the unmasked areas of $\hat{x}$. We have described $G$, $D$, and $\phi$ in Sec. 3.1.
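A hedged sketch of the LSGAN-style CAL terms follows; it splits Eq. 4 into the usual discriminator and generator objectives and assumes $G$, $D$, and $\phi$ as in the earlier sketches.

```python
import torch
import torch.nn as nn

def cal_discriminator_loss(D, f_real, f_fake):
    """LSGAN critic objective: push real context vectors towards 1, synthesized towards 0."""
    return ((D(f_real) - 1) ** 2).mean() + (D(f_fake.detach()) ** 2).mean()

def cal_generator_loss(D, f_fake):
    """LSGAN generator objective: make the synthesized context vectors look real to the critic."""
    return ((D(f_fake) - 1) ** 2).mean()

# Toy usage with a stand-in critic; in DeepCFL, f_real = phi(x_hat) and f_fake = phi(G(x_hat)).
D = nn.Conv2d(8, 1, 1)
f_real = torch.rand(1, 8, 16, 16)
f_fake = torch.rand(1, 8, 16, 16, requires_grad=True)
d_loss = cal_discriminator_loss(D, f_real, f_fake)
g_loss = cal_generator_loss(D, f_fake)
```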
Context Vector Loss (CVL). The main purpose of CVL is to improve the quality of the contextual features in $y$ learned by CAL. $\mathcal{L}_{CVL}$ is the sum of the contextual loss [23] computed at each layer $l$ in $\phi$. We define CVL for layer $l$ in Eq. 5.
$$\mathcal{L}_{CVL}^{\,l}(y, \hat{x}) = -\log\Big(CX\big(\phi^{l}(y), \phi^{l}(\hat{x})\big)\Big) \qquad (5)$$
Here, $CX$ is the contextual similarity defined using the cosine distance between the features contained in $\phi^{l}(y)$ and $\phi^{l}(\hat{x})$. Note that $CX$ is computed by finding, for each feature of one feature set, the most similar feature in the other set, and then aggregating over all matches. Fig. 3 illustrates the matched context vectors of $\hat{x}$ and $y$ by an arrow. Intuitively, the feature matching performed between the context vectors of the masked regions of $y$ and the context vectors of the unmasked regions of $\hat{x}$ refines the new context vectors created by CAL. We compute the context vectors at a higher layer of $\phi$ because the higher layers capture the high-level content in terms of object structure [12]. It is interesting to note that CVL is different from the perceptual loss, which computes the feature difference without using a contextual similarity criterion.
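The snippet below sketches a simplified contextual similarity of this kind; the full contextual loss [23] additionally normalizes the pairwise distances and applies a bandwidth-controlled softmax, which we omit here.

```python
import torch
import torch.nn.functional as F

def contextual_similarity(f_y, f_xhat):
    """Simplified contextual similarity between two feature maps of shape (B, C, H, W).

    Each spatial location is one context vector. For every vector of f_xhat we find
    the most similar (cosine) vector of f_y, then average the matches.
    """
    b, c = f_y.shape[:2]
    vy = F.normalize(f_y.reshape(b, c, -1), dim=1)      # (B, C, Ny) unit vectors
    vx = F.normalize(f_xhat.reshape(b, c, -1), dim=1)   # (B, C, Nx)
    cos = torch.bmm(vy.transpose(1, 2), vx)             # (B, Ny, Nx) pairwise cosine similarities
    best = cos.max(dim=1).values                         # best match in f_y for each vector of f_xhat
    return best.mean(dim=1)                              # one similarity score per batch element

def cvl_loss(f_y, f_xhat):
    """Context vector loss for one layer: -log of the (clamped) contextual similarity."""
    cx = contextual_similarity(f_y, f_xhat).clamp(min=1e-5)
    return -torch.log(cx).mean()

loss = cvl_loss(torch.rand(1, 512, 16, 16), torch.rand(1, 512, 16, 16))
```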
3.2.2 Reconstruction Loss (RL).
RL aims to preserve image features and is computed between the corrupted image $\hat{x}$ and the restored image $y$ (Fig. 3). Let $\mathcal{L}_{RL}$ denote RL. We define $\mathcal{L}_{RL}$ in Eq. 6.
$$\mathcal{L}_{RL} = \big\| m \odot \big(G(\hat{x}) - \hat{x}\big) \big\|_{2}^{2} \qquad (6)$$
Eq. 6 compares the unmasked regions of $y$ with the unmasked regions of $\hat{x}$. The unmasked regions in $\hat{x}$ contain image features from $x$, and the masked regions in $\hat{x}$ are corrupted due to the mask, i.e., $\hat{x} = x \odot m$. RL is a pixel-wise loss and it imposes a strong self-similarity prior [35].
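A minimal sketch of this masked reconstruction loss follows; the normalization by the number of observed pixels is our choice, not stated in the paper.

```python
import torch

def reconstruction_loss(y, x_hat, m):
    """Masked MSE between the restored image y and the corrupted image x_hat.

    m is the binary mask (1 = observed pixel); only the unmasked regions are
    compared, so the loss never penalizes the synthesized content inside the holes.
    """
    diff = m * (y - x_hat)
    return (diff ** 2).sum() / m.sum().clamp(min=1.0)

y = torch.rand(1, 3, 64, 64)
x = torch.rand(1, 3, 64, 64)
m = (torch.rand(1, 1, 64, 64) > 0.3).float()
x_hat = x * m
rl = reconstruction_loss(y, x_hat, m)
```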






4 Applications
Here, we discuss the following applications of DeepCFL. (1) Image outpainting: extension of an image along the sides. (2) Image inpainting of irregular holes in the image. (3) Content-aware image resizing: synthesis of new objects when we resize an image. (4) Restoration in the presence of a high degree of corruption: missing pixels spread across the image (Sec. 4.4).¹

¹We have used the original implementations of DIP [35], MEDS [7], and DCIL [8]. We implemented image restoration using the internal learning of InGAN [28]. We have provided the implementation details in the supplementary material.
4.1 Image Outpainting.
Image outpainting relates to image extension, which creates new features while maintaining the semantics of the scene. Image extension methods use training data to learn the image context and then generate the complete scene given partial information [37, 34, 39, 25]. Our outpainting task does not use any training samples and synthesizes features using only the corrupted image. For convenience, we refer to image extension as outpainting.
A good image outpainting approach fills in image features based on the semantics of the objects present at the boundaries. The ability of the generator to synthesize new contextual features over a large spatial extent along the sides depends upon contextual learning. Unlike the pixel-to-pixel loss, the context vector based loss CFL (Eq. 2) aims to fill new features into the masked regions of the restored image which are semantically similar to the unmasked regions of the corrupted image (refer to Sec. 3).
In Fig. 4, we show outpainting of 20% missing pixels, where the corrupted image is generated by removing 10% of the pixels along the right side and the left side. DIP [35], MEDS [7], and InGAN [28] do not perform contextual features learning. Image outpainting is better achieved using the semantics of the objects in the contextual learning-based DeepCFL framework. Table 1 shows the quantitative comparison on the standard datasets from [14] (SD) and the Set5 and Set14 datasets [7]. It can be observed that DeepCFL outperforms the other methods for outpainting. We have provided more details in the supplementary material.
| | DIP [35] | MEDS [7] | InGAN [28] | DeepCFL |
| --- | --- | --- | --- | --- |
| SD | | | | |
| Set14 | | | | |
| Set5 | | | | |






4.2 Image Inpainting.
In the inpainting task, the input image has non-uniform corrupted regions spread across the entire image. It is a natural way in which an image gets corrupted [19, 26]. The critical property for performing inpainting without using training data is to utilize the internal self-similarity of natural images [35, 42]. Computing the MSE between the generator output and the corrupted image tends to capture a strong self-similarity prior [35]. DeepCFL leverages this learning by incorporating the context vector comparison. The feature learning procedure for inpainting is similar to that for outpainting, described in Sec. 4.1.
Fig. 5 shows the visual results for arbitrary hole inpainting. It can be observed that the contextual learning of DeepCFL minimizes the feature spillover between different objects and fills the arbitrary holes considering the semantics of the image. The quantitative comparison (SSIM) for inpainting is as follows: DIP [35]: 0.90, MEDS [7]: 0.88, InGAN [29]: 0.90, and DeepCFL (ours): 0.91. We have provided more comparisons of generated images in the supplementary material. DeepCFL performs comparably to the other frameworks. The estimation of the parameters from a single image is highly sensitive to the hyper-parameters (e.g., learning rate) [35, 7]. We believe that the restoration quality of our method and the other methods could be improved further using a hyper-parameter search.
4.3 Image Resize
We have discussed image outpainting, which is different from content-aware image resizing, where the task is to resize the image while preserving its salient objects [29]. DeepCFL is able to synthesize new objects when resizing the input image (Fig. 6). The source image is scaled along the height and the width; therefore, the pixel correspondence between the source and the generated target images is not well defined. Image resizing is done by using the generator to scale the input and then computing the adversarial loss in a cycle-consistent way.
Fig. 8 shows the challenging scenario of object synthesis for various single image GAN frameworks. Inspired by InGAN [29], our framework DeepCFL studies deep contextual features. DeepCFL is different from DCIL [8] as it uses the adversarial framework in the VGG feature space for image outpainting, whereas DCIL uses the adversarial framework in the image space for denoising super-resolution. We believe that the results of the various single image GAN frameworks in Fig. 8 could be improved further.
4.4 Restoration of Missing Pixels.
To investigate contextual features learning in the presence of a high degree of corruption, we restore missing pixels spread uniformly at random across the entire image. This is a different setup from outpainting and inpainting, where one has to fill in a missing region (i.e., a contiguous block of pixels). We further increase the task difficulty by using a corrupted image containing a word cloud. We denote the above setup as RestoreWC 50% (WC denotes word cloud). It is a challenging setup because the small fonts present in the corrupted image require filling in fine image feature details.
We show image restoration in the RestoreWC 50% setup in Fig. 7. The quantitative comparison (SSIM) for RestoreWC 50% is as follows: DIP [35]: 0.92, MEDS [7]: 0.93, InGAN [29]: 0.92, and DeepCFL (ours): 0.92. It can be observed that DeepCFL performs comparably to the other frameworks. This might be because the image features computed from the highly corrupted image are not sufficient for restoration in the single image GAN framework; therefore, contextual learning is less effective here. We note, however, that a pixel-based loss alone would not have the object synthesis abilities of the single image GAN frameworks (Fig. 8).
5 Ablation Studies and Limitations
We show the usefulness of contextual learning in the adversarial framework in Fig. 9. The restored image features are highlighted in the cropped images. It could be observed that the single image GAN framework (DeepCFL) synthesizes image features for image restoration.
In Fig. 10, we show an ablation study to disentangle the reconstruction using context vector loss (CVL), context adversarial loss (CAL), and contextual features loss (CFL) as defined in Sec. 3.2. The CFL setup performs better as it uses adversarial learning and context vector learning together.
Fig. 12 shows the restoration with two discriminator architectures: a single-scale discriminator (SSD) and a multi-scale discriminator (MSD). InGAN [29] shows that MSD significantly improves performance for image synthesis. We observed that the higher model capacity did not significantly improve image restoration, similar to [7]: the masked SSIM for the SSD setup (0.971) is close to that of the MSD setup (0.976). The visual improvement is likely because the MSD setup enforces image statistics consistency at multiple levels, which is harder than solving at a single scale in the SSD setup; our intuition is that solving a harder problem helps learn better image features [7]. The quantitative gain, however, is small. Our interpretation is as follows. MSD in DeepCFL operates on the context vectors. Scaling the context vectors in the MSD of DeepCFL and scaling the image in [29, 27, 8] are completely different operations, and scaling the context vectors might not be very effective for image restoration.
Fig. 13 shows the reconstruction when the information in the corrupted image is not sufficient to fill the missing regions. The limitation is due to the lack of feature learning from the training samples in the single image GAN framework. A similar limitation has also been reported for image manipulation tasks [27]. Restoration of an object which is partially present in the image would also be exciting. However, it is not within the scope of this work.
Fig. 14 shows the restoration of 90% missing pixels using image features learned from the remaining 10% of the pixels. It can be observed that it is difficult to understand the semantics of the scene from 10% of the pixels. The experiment confirms our observation that the adversarial learning of image context is less effective for a high degree of corruption. We show more results in the supplementary material.


6 Discussion
DeepCFL is a single image GAN framework. Data-driven supervised feature learning setups use paired examples of ground truth (GT) and corrupted images: the corrupted images are fed into the network and the generated outputs are matched with the GT images. DeepCFL is not trained by showing training samples of GT and corrupted images. DeepCFL can be fairly compared only with training data-independent methods, as they also do not use training samples. Training-based methods can synthesize image feature details that are not present in the input image, which is not possible in the training data-independent setups (Fig. 13 and Fig. 14). The feature extractor VGG19 contains layers at different scales, where each layer captures a different level of abstraction. We believe that combining features from various VGG19 layers would be helpful; however, it would also increase the model complexity. The scope of DeepCFL is limited to the contextual features present in a single layer. We propose as future work to study how to use more VGG19 layers for feature comparison while minimizing the computational overhead.
7 Conclusion
We investigate deep contextual features learning (CFL) in the single image GAN framework for image restoration and image synthesis. The main challenge in accomplishing the above tasks arises when the information contained in the input image is not sufficient for synthesizing the necessary image features. DeepCFL synthesizes image features based on the semantics of the input to perform outpainting, inpainting, restoration of missing pixels, and image resizing. It would be interesting to study the performance of the single image GAN framework in the setting of videos, similar to [42, 16].
Acknowledgments. Indra Deep Mastan was supported by Visvesvaraya Ph.D. fellowship. Shanmuganathan Raman was supported by SERB Core Research Grant and SERB MATRICS.
References
- [1] Michal Aharon, Michael Elad, Alfred Bruckstein, et al. K-svd: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on signal processing, 54(11):4311, 2006.
- [2] Shai Avidan and Ariel Shamir. Seam carving for content-aware image resizing. In ACM Transactions on graphics (TOG), volume 26, page 10. ACM, 2007.
- [3] Siavash Arjomand Bigdeli and Matthias Zwicker. Image restoration using autoencoding priors. arXiv preprint arXiv:1703.09964, 2017.
- [4] Antoni Buades, Bartomeu Coll, and J-M Morel. A non-local algorithm for image denoising. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 2, pages 60–65. IEEE, 2005.
- [5] Harold C Burger, Christian J Schuler, and Stefan Harmeling. Image denoising: Can plain neural networks compete with bm3d? In 2012 IEEE conference on computer vision and pattern recognition, pages 2392–2399. IEEE, 2012.
- [6] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on image processing, 16(8):2080–2095, 2007.
- [7] Indra Deep Mastan and Shanmuganathan Raman. Multi-level encoder-decoder architectures for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
- [8] Indra Deep Mastan and Shanmuganathan Raman. Dcil: Deep contextual internal learning for image restoration and image retargeting. WACV, 2020.
- [9] Weisheng Dong, Lei Zhang, Guangming Shi, and Xin Li. Nonlocally centralized sparse representation for image restoration. IEEE transactions on Image Processing, 22(4):1620–1630, 2012.
- [10] Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In Proceedings of the seventh IEEE international conference on computer vision, volume 2, pages 1033–1038. IEEE, 1999.
- [11] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image processing, 15(12):3736–3745, 2006.
- [12] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
- [13] Shuhang Gu, Qi Xie, Deyu Meng, Wangmeng Zuo, Xiangchu Feng, and Lei Zhang. Weighted nuclear norm minimization and its applications to low level vision. International journal of computer vision, 121(2):183–208, 2017.
- [14] Felix Heide, Wolfgang Heidrich, and Gordon Wetzstein. Fast and flexible convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5135–5143, 2015.
- [15] Daniel Glasner, Shai Bagon, and Michal Irani. Super-resolution from a single image. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, pages 349–356, 2009.
- [16] Dahun Kim, Sanghyun Woo, Joon-Young Lee, and In So Kweon. Deep video inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5792–5801, 2019.
- [17] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- [18] Anat Levin. Blind motion deblurring using image statistics. In Advances in Neural Information Processing Systems, pages 841–848, 2007.
- [19] Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.
- [20] Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794–2802, 2017.
- [21] Gary Mataev, Peyman Milanfar, and Michael Elad. Deepred: Deep image prior powered by red. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019.
- [22] Roey Mechrez, Itamar Talmi, Firas Shama, and Lihi Zelnik-Manor. Learning to maintain natural image statistics. arXiv preprint arXiv:1803.04626, 2018.
- [23] Roey Mechrez, Itamar Talmi, and Lihi Zelnik-Manor. The contextual loss for image transformation with non-aligned data. European Conference on Computer Vision (ECCV), 2018.
- [24] Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Z Qureshi, and Mehran Ebrahimi. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint arXiv:1901.00212, 2019.
- [25] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2536–2544, 2016.
- [26] Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H Li, Shan Liu, and Ge Li. Structureflow: Image inpainting via structure-aware appearance flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 181–190, 2019.
- [27] Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. Singan: Learning a generative model from a single natural image. The IEEE International Conference on Computer Vision (ICCV), 2019.
- [28] Assaf Shocher, Shai Bagon, Phillip Isola, and Michal Irani. Internal distribution matching for natural image retargeting. arXiv preprint arXiv:1812.00231, 2018.
- [29] Assaf Shocher, Shai Bagon, Phillip Isola, and Michal Irani. Ingan: Capturing and remapping the “dna” of a natural image. In International Conference on Computer Vision (ICCV), 2019.
- [30] Assaf Shocher, Nadav Cohen, and Michal Irani. “zero-shot” super-resolution using deep internal learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3118–3126, 2018.
- [31] Oleksii Sidorov and Jon Yngve Hardeberg. Deep hyperspectral prior: Single-image denoising, inpainting, super-resolution. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019.
- [32] Denis Simakov, Yaron Caspi, Eli Shechtman, and Michal Irani. Summarizing visual data using bidirectional similarity. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.
- [33] Jian Sun, Zongben Xu, and Heung-Yeung Shum. Image super-resolution using gradient profile prior. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE, 2008.
- [34] Piotr Teterwak, Aaron Sarna, Dilip Krishnan, Aaron Maschinot, David Belanger, Ce Liu, and William T Freeman. Boundless: Generative adversarial networks for image extension. arXiv preprint arXiv:1908.07007, 2019.
- [35] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
- [36] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
- [37] Yi Wang, Xin Tao, Xiaoyong Shen, and Jiaya Jia. Wide-context semantic image extrapolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1399–1408, 2019.
- [38] Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multi-scale neural patch synthesis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 3, 2017.
- [39] Zongxin Yang, Jian Dong, Ping Liu, Yi Yang, and Shuicheng Yan. Very long natural scenery image prediction by outpainting. In Proceedings of the IEEE International Conference on Computer Vision, pages 10561–10570, 2019.
- [40] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505–5514, 2018.
- [41] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Free-form image inpainting with gated convolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 4471–4480, 2019.
- [42] Haotian Zhang, Long Mai, Ning Xu, Zhaowen Wang, John Collomosse, and Hailin Jin. An internal learning approach to video inpainting. In Proceedings of the IEEE International Conference on Computer Vision, pages 2720–2729, 2019.
- [43] Kai Zhang, Wangmeng Zuo, Shuhang Gu, and Lei Zhang. Learning deep cnn denoiser prior for image restoration. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
- [44] Lei Zhang and Wangmeng Zuo. Image restoration: From sparse and low-rank priors to deep priors [lecture notes]. IEEE Signal Processing Magazine, 34(5):172–179, 2017.
- [45] Maria Zontak and Michal Irani. Internal statistics of a single natural image. In CVPR 2011, pages 977–984. IEEE, 2011.
- [46] Daniel Zoran and Yair Weiss. From learning models of natural image patches to whole image restoration. In 2011 International Conference on Computer Vision, pages 479–486. IEEE, 2011.