
Attention-Guided Multi-scale Interaction Network for Face Super-Resolution

Xujie Wan, Wenjie Li, Guangwei Gao,  Huimin Lu,  Jian Yang,  and Chia-Wen Lin,  This work was supported in part by the foundation of the Key Laboratory of Artificial Intelligence of Ministry of Education under Grant AI202404, and in part by the Open Fund Project of Provincial Key Laboratory for Computer Information Processing Technology (Soochow University) under Grant KJS2274. (Xujie Wan and Wenjie Li contributed equally to this work.) (Corresponding author: Guangwei Gao.)Xujie Wan and Guangwei Gao are with the Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing 210046, China, Key Laboratory of Artificial Intelligence, Ministry of Education, Shanghai 200240, China, and also with the Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou 215006, China (e-mail: wanxujie991205@gmail.com, csggao@gmail.com).Wenjie Li is with the Pattern Recognition and Intelligent System Laboratory, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100080, China (e-mail: lewj2408@gmail.com).Huimin Lu is with the School of Automation, Southeast University, Nanjing 210096, China (e-mail: dr.huimin.lu@ieee.org).Jian Yang is with the School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing 210094, China (e-mail: csjyang@njust.edu.cn).Chia-Wen Lin is with the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan 30013, R.O.C. (e-mail: cwlin@ee.nthu.edu.tw).
Abstract

Recently, hybrid networks combining CNNs and Transformers have demonstrated excellent performance in face super-resolution (FSR) tasks. Since hybrid networks produce numerous features at different scales, how to fuse these multi-scale features and promote their complementarity is crucial for enhancing FSR. However, existing hybrid network-based FSR methods ignore this issue and simply combine the Transformer and CNN. To address this, we propose an Attention-Guided Multi-scale Interaction Network (AMINet), which performs both local-global feature interactions and encoder-decoder feature interactions. Specifically, we propose a Local and Global Feature Interaction Module (LGFI) to promote the fusion of global features and the local features of different receptive fields extracted by our Residual Depth Feature Extraction Module (RDFE). Additionally, we propose a Selective Kernel Attention Fusion Module (SKAF) to adaptively select and fuse different features within LGFI and between the encoder and decoder stages. This design allows multi-scale features to flow freely within modules and between the encoder and decoder, promoting the complementarity of features at different scales to enhance FSR. Comprehensive experiments confirm that our method consistently performs well with lower computational consumption and faster inference.

Index Terms:
Face super-resolution, Hybrid networks, Multi-scale interaction, Attention-guided.

I Introduction

Face super-resolution (FSR), also known as face hallucination, aims at restoring high-resolution (HR) face images from low-resolution (LR) face images [1]. In contrast to standard image super-resolution, the primary objective of FSR is to reconstruct as many facial structural features as possible (i.e., the shape and contour of facial components). In practical scenarios, a range of face-specific tasks such as face detection [2] and face recognition [3] require HR face images. However, the quality of captured face images is frequently degraded by variations in the hardware configuration, positioning, and shooting angles of imaging devices, seriously affecting the downstream tasks above. Therefore, FSR has gained increased attention in recent years.

Figure 1: Model complexity studies for ×8 FSR on the CelebA test set [4]. Our AMINet achieves an excellent balance between model size, model performance, and inference speed.

Recently, hybrid networks [5] of CNNs and Transformers have shown clear advantages in FSR, so this type of method has gained increased attention. Specifically, CNN-based FSR methods [6] generally do not require large computational consumption and specialize in extracting local details, such as local facial texture and color, but they are unable to model long-range feature interactions, such as the global profile of the face. Transformer-based FSR methods [7] model global dependencies well, but their computational consumption is huge. Hybrid FSR methods leverage the strengths of both CNNs and Transformers, enabling models to extract local and global facial features while maintaining a manageable computational cost, which is not possible with a CNN or Transformer alone. The impressive performance of hybrid FSR methods comes from the numerous features extracted at different scales inside their networks, such as global features from self-attention, local features from convolution, and features from different stages of the encoder-decoder, which help models refine local facial details and global facial contours.

However, while existing hybrid FSR methods consider utilizing features from different scales to improve FSR, they ignore the problem of how to fuse these multi-scale features so that their properties better complement each other. For example, Faceformer [8] simply places CNN modules and window-based Transformer [9] modules in parallel. SCTANet [10] likewise only juxtaposes spatial attention-based residual blocks and multi-head self-attention in its designed modules. CTCNet [5] simply connects the CNN module in tandem with the Transformer module. None of the above methods recognize the importance of effectively blending features at different scales and facilitating their free flow within modules to refine facial details.

To address this problem, we propose an Attention-Guided Multi-scale Interaction Network (AMINet) for FSR. AMINet fuses multi-scale features in two main ways: fusing features obtained from self-attention and convolution, and fusing features from different stages of the encoder-decoder. Specifically, we design a Local and Global Feature Interaction Module (LGFI) to adaptively fuse global facial features with local features of different receptive fields obtained by convolutions. In LGFI, self-attention extracts global features, our proposed Residual Depth Feature Extraction Module (RDFE) extracts local features at different scales using separable convolutional kernels of different sizes, and our proposed Selective Kernel Attention Fusion Module (SKAF) performs a weighted fusion of these two parts so that our model can adaptively and selectively fuse them during training. In addition, we also use SKAF as the crucial fusion module in our Encoder and Decoder Feature Fusion Module (EDFF) to further promote feature communication by fusing features at different scales from the encoding and decoding processes.

Our above design greatly enhances the flow and exchange of features at different scales within the model and improves the representation of our model. As a result, our method can obtain a more powerful feature representation than existing FSR methods. As shown in Fig. 1, our method can achieve the best FSR performance with a smaller size and faster inference speed, demonstrating our method’s effectiveness. In summary, the main contributions are as follows:

  • We design an LGFI that differs from the traditional Transformer by allowing free flow and adaptive selective fusion of local and global features within the module.

  • We design an RDFE, which enables better refinement of facial details by fusing and refining local features extracted by convolutional kernels of different sizes.

  • We design the SKAF to help selectively fuse features of different scales within LGFI and EDFF by selecting appropriate convolutional kernels.

  • Comprehensive experiments confirm that our method consistently achieves good FSR performance with less cost by exploring multiple levels of feature interactions.

In the following sections, we will introduce related work and discuss their disadvantages in Section II. Then, we will introduce our method and proposed core modules in Section III. Next, we will evaluate our method and analyze our ablation experiments in Section IV. Finally, we will conclude this paper according to our experiments in Section V.


Figure 2: Network structure of the proposed AMINet.

II Related Work

II-A Face Super-Resolution

Early deep learning approaches focused on leveraging facial priors as guidance to enhance FSR [11] accuracy. For instance, Chen et al. [12] developed an end-to-end prior-based network that utilized facial landmarks and heatmaps to generate FSR images. Similarly, Kim et al. [13] employed a face alignment network for landmark extraction in conjunction with a progressive training technique to produce realistic face images. Ma et al. [14] introduced DICNet, which integrates facial landmark priors iteratively to enhance image quality at each step. Hu et al. [15] explored the use of 3D shape priors to better capture and define sharp facial structures. While these methods have advanced FSR, they require additional labeling of training datasets. Moreover, inaccuracies in prior estimation can significantly diminish FSR performance, especially when dealing with highly blurred face images.

Attention-based FSR methods have been proposed to promote FSR while avoiding the adverse effects of inaccurate prior estimates. Zhang et al. [16] proposed a supervised pixel-wise generative adversarial network to improve face recognition performance during FSR. Chen et al. [17] proposed SPARNet, which adaptively focuses on important facial structure features by using spatial attention in residual blocks. Lu et al. [18] proposed a split attention mechanism to enhance the consistency of the fidelity of facial detail and facial structure. Bao et al. [19] introduced an equalization texture enhancement module to enhance facial texture details through histogram equalization. Wang et al. [20] introduced the Fourier transform into FSR, fully exploring the correlation between spatial-domain and frequency-domain features. Shi et al. [21] designed a two-branch network that introduces convolution based on local variation to enhance the capability of convolution. Bao et al. [10] improved the interaction between regional and global features through designed hybrid attention modules. Li et al. [22] designed a wavelet-based network to reduce the downsampling loss in the encoder-decoder. Although the above methods can reconstruct reasonable FSR images, they cannot promote the efficient fusion of local features with global features or of features from different stages of the encoder-decoder, affecting FSR's efficiency and accuracy.

II-B Attention-based Super-Resolution

The attention mechanism can improve the super-resolution accuracy of models due to its flexibility in focusing on key areas of facial features. In the super-resolution [23] task, different variants of the attention mechanism include self-attention, spatial attention, channel attention, and hybrid attention.

Zhang et al. [24] inserted channel attention into residual blocks to enhance model representation. Xin et al. [25] utilized channel attention with residual mechanisms combined with a multi-level information fusion strategy. Chen et al. [17] enhanced FSR by utilizing an improved facial spatial attention cooperating with an hourglass structure. Gao et al. [26] performed shuffling on hybrid attention. Wang et al. [27] constructed a simplified feed-forward network using spatial attention to reduce parameters and computational complexity. To model long-range feature interactions, the self-attention in Transformers [28, 29] has been widely used in super-resolution. Gao et al. [30] reduced costs by applying a recursive mechanism to self-attention. Li et al. [31, 32] combined self-attention and convolutions to complement each other's required features. Zeng et al. [33] introduced a self-attention network that investigates the relationships among features at various levels. Shi et al. [21] enhanced FSR by mitigating the adverse effects of inaccurate prior estimates through a parallel self-attention mechanism, effectively capturing both local and non-local dependencies. To combine the advantages of different attention types, Yang et al. [34] integrated channel attention with spatial attention to enhance feature acquisition and correlation modeling. Bao et al. [10] and Gao et al. [5] employed spatial attention and self-attention to capture facial structure and details. Zhang et al. [35] employed a hybrid attention module that combines self-attention, spatial attention, and channel attention to optimize fine-grained facial details and the broad facial structure. Unlike the above attention-based methods that significantly enhance model representation, we utilize attention to learn feature maps from different receptive fields, allowing our network to adaptively select the appropriate convolutional kernel size for multi-scale feature fusion. This design enables our network to effectively perform multi-scale feature extraction. Furthermore, it improves the integration of features across various scales, leading to enhanced performance and greater adaptability.

III Proposed Method

In this section, we first describe the overall architecture of the proposed Attention-Guided Multi-scale Interaction Network (AMINet). Then, we introduce our local and global feature interaction module (LGFI) and encoder and decoder feature fusion module (EDFF) in our AMINet in detail. Finally, since we provide a GAN version of our AMINet, called AMIGAN, we further introduce the loss functions used to supervise AMIGAN during training.


Figure 3: The architecture of (a) Residual Depth Feature Extraction Module (RDFE), (b) Self-attention (SA), respectively.

III-A Overview of AMINet

As illustrated in Fig. 2, our proposed AMINet features a U-shaped CNN-Transformer hierarchical architecture with three distinct stages: encoding, bottleneck, and decoding. For an LR input face image $I_{LR}\in\mathbb{R}^{3\times H\times W}$, the encoding stage extracts features at different scales and captures multi-scale representations of the input image to obtain the facial feature $F_{3}\in\mathbb{R}^{8C\times\frac{H}{8}\times\frac{W}{8}}$. Then, the bottleneck stage continues to refine the feature $F_{3}$ and provides a more informative representation, yielding the refined feature $F_{4}$ for the subsequent reconstruction phase. In the decoding stage, the network focuses on feature upsampling and facial detail reconstruction. Meanwhile, interactive connections between the encoding and decoding stages ensure that features are fully integrated throughout the network. Through the above operations, we obtain the reconstructed face feature $F_{7}$ with rich facial details. Finally, through a convolution with reduced channel dimensions plus a residual connection, we obtain the super-resolved output face image $I_{SR}\in\mathbb{R}^{3\times H\times W}$.
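To make the overall data flow concrete, the following PyTorch sketch outlines the U-shaped encoder-bottleneck-decoder skeleton described above. The LGFI and EDFF blocks are replaced by placeholders (identity blocks and concat + 1×1 convolutions), and the strided/transposed convolutions, base channel width, and residual connection to the input are our assumptions for illustration rather than the exact implementation.

```python
import torch
import torch.nn as nn

class AMINetSkeleton(nn.Module):
    """Schematic U-shaped pipeline of AMINet (Fig. 2), with placeholder blocks."""
    def __init__(self, C=32, block=nn.Identity):
        super().__init__()
        chs = [C, 2 * C, 4 * C, 8 * C]
        self.head = nn.Conv2d(3, C, 3, padding=1)                 # shallow feature extraction
        # encoder: LGFI placeholder + strided conv that halves size and doubles channels
        self.enc = nn.ModuleList(
            nn.Sequential(block(), nn.Conv2d(chs[i], chs[i + 1], 3, 2, 1)) for i in range(3))
        self.bottleneck = nn.Sequential(block(), block())         # two LGFIs in the paper
        # decoder: upsampling that halves channels + EDFF placeholder (concat + 1x1 conv)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(chs[3 - i], chs[2 - i], 2, 2) for i in range(3))
        self.fuse = nn.ModuleList(
            nn.Conv2d(2 * chs[2 - i], chs[2 - i], 1) for i in range(3))
        self.tail = nn.Conv2d(C, 3, 3, padding=1)                 # channel reduction to RGB

    def forward(self, x):                                         # x: LR input, 3 x H x W
        f, skips = self.head(x), []
        for enc in self.enc:
            skips.append(f)                                       # keep feature before downsampling
            f = enc(f)                                            # F1, F2, F3
        f = self.bottleneck(f)                                    # F4
        for up, fuse, skip in zip(self.up, self.fuse, reversed(skips)):
            f = fuse(torch.cat([up(f), skip], dim=1))             # F5, F6, F7 with skip fusion
        return self.tail(f) + x                                   # residual connection
```

In the actual network, the concat + 1×1 stand-in in the decoder would be replaced by the EDFF described in Section III-C, and the placeholder blocks by LGFI.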

III-A1 Encoding stage

The encoding stage in our network aims to extract facial features at different scales. Given a degraded face image $I_{LR}\in\mathbb{R}^{3\times H\times W}$, a $3\times 3$ convolution first extracts shallow facial features. Then, the extracted features are further refined by three encoder stages. Each encoder comprises our designed Local and Global Feature Interaction Module (LGFI) and a downsampling operator. After each encoder, the channel count of the facial feature is doubled and its spatial size is halved. As shown in Fig. 2, the features obtained after the three encoders are $F_{1}\in\mathbb{R}^{2C\times\frac{H}{2}\times\frac{W}{2}}$, $F_{2}\in\mathbb{R}^{4C\times\frac{H}{4}\times\frac{W}{4}}$, and $F_{3}\in\mathbb{R}^{8C\times\frac{H}{8}\times\frac{W}{8}}$.

III-A2 Bottleneck stage

In the bottleneck stage between the encoding and decoding stages, the obtained encoding features are further refined, producing $F_{4}\in\mathbb{R}^{8C\times\frac{H}{8}\times\frac{W}{8}}$. In this stage, we use two LGFIs to refine and enhance the encoding features so that they can be better utilized in the decoding stage. After this stage, our model can continuously enhance information about the facial structure at different scales, thus improving the perception of facial details.

III-A3 Decoding stage

The decoding stage comprises three decoders. Here, we focus on multi-scale feature fusion, aiming to reconstruct high-quality face images. As depicted in Fig. 2, each decoder includes an upsampling operation, an EDFF, and an LGFI. Each upsampling operator halves the channel count of the input feature while doubling its width and height. Compared to the encoding stage, the decoding stage additionally uses our proposed SKAF to adaptively and selectively fuse features of different scales from the encoder and decoder stages. Through this design, features at different scales can interact to recover more detailed face features. The features obtained after the three decoders are $F_{5}\in\mathbb{R}^{4C\times\frac{H}{4}\times\frac{W}{4}}$, $F_{6}\in\mathbb{R}^{2C\times\frac{H}{2}\times\frac{W}{2}}$, and $F_{7}\in\mathbb{R}^{C\times H\times W}$. Finally, a $3\times 3$ convolution transforms the obtained deep facial feature into the output FSR image $I_{SR}$.

As for the loss of our AMINet, given a dataset $\{I_{LR}^{i}, I_{HR}^{i}\}_{i=1}^{N}$, we optimize AMINet by minimizing the pixel-level loss function:

$\mathcal{L}(\Theta)=\frac{1}{N}\sum_{i=1}^{N}\left\|F_{AMINet}(I_{LR}^{i},\Theta)-I_{HR}^{i}\right\|_{1}$,   (1)

where $N$ denotes the number of paired training face images, $I_{LR}^{i}$ and $I_{HR}^{i}$ are the LR and HR face images of the $i$-th pair, respectively, and $F_{AMINet}(\cdot)$ and $\Theta$ denote AMINet and its parameters, respectively.
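As a minimal illustration of Eq. (1), the L1 objective can be computed in PyTorch as below; note that `l1_loss` with the default mean reduction averages over all pixels rather than only over the N images, which differs from Eq. (1) only by a constant factor.

```python
import torch.nn.functional as F

def pixel_loss(model, lr_batch, hr_batch):
    """L1 pixel loss of Eq. (1): mean absolute error between the SR output and the HR target."""
    sr_batch = model(lr_batch)
    return F.l1_loss(sr_batch, hr_batch)
```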

III-B Local and Global Feature Interaction Module (LGFI)

In our AMINet, LGFI is mainly used for local and global facial feature extraction. As shown in Fig. 2, LGFI consists of Self-attention (SA), a Residual Depth Feature Extraction Module (RDFE), and a Selective Kernel Attention Fusion Module (SKAF), which together handle local and global feature extraction, fusion, and interaction. The SA is designed to extract global features, while the RDFE extracts local features at different scales and enriches local facial details through multiple convolutional kernels with various receptive fields. It is worth mentioning that the outputs of the local and global branches pass through SKAF, which computes the corresponding weights from the input facial information, separates them, and multiplies them with the two branches separately, better promoting the interaction of channel information and improving face restoration performance.

III-B1 Self-attention (SA)

We utilize self-attention (SA) to extract global facial features, which can effectively model the relationships between distant features. Meanwhile, through the multi-head mechanism in SA, features can be captured from different subspaces, improving the robustness and generalization ability of the model. As illustrated in Fig. 3 (b), we start by applying a $1\times 1$ convolutional layer followed by a $3\times 3$ depth-wise convolutional layer to combine pixel-level cross-channel information and extract channel-level spatial context. From this spatial context, we then generate $Q, K, V\in\mathbb{R}^{C\times H\times W}$. For an input facial feature $X\in\mathbb{R}^{C\times H\times W}$, this process can be described as:

$Q=F_{dw3}(F_{conv1}(X))$,   (2)
$K=F_{dw3}(F_{conv1}(X))$,   (3)
$V=F_{dw3}(F_{conv1}(X))$,   (4)

where $F_{conv1}(\cdot)$ is the $1\times 1$ pointwise convolution and $F_{dw3}(\cdot)$ is the $3\times 3$ depthwise convolution.

Next, we reshape $Q$, $K$, and $V$ into $\hat{Q}\in\mathbb{R}^{C\times HW}$, $\hat{K}\in\mathbb{R}^{HW\times C}$, and $\hat{V}\in\mathbb{R}^{C\times HW}$, respectively. The scaled dot product of $\hat{Q}$ and $\hat{K}$ is then multiplied by $\hat{V}$ to obtain the weighted feature $X_{weighted}\in\mathbb{R}^{C\times HW}$, which facilitates capturing important context in SA. Finally, we rearrange $X_{weighted}$ back into $\mathbb{R}^{C\times H\times W}$ and apply a $1\times 1$ convolution. The above operations can be expressed as:

$X_{weighted}=\operatorname{Softmax}(\hat{Q}\cdot\hat{K}/\sqrt{d})\cdot\hat{V}$,   (5)
$X_{sa}=F_{conv1}(R(X_{weighted}))$,   (6)

where $\sqrt{d}$ is a factor used to scale the dot product of $\hat{Q}$ and $\hat{K}$, $R(\cdot)$ denotes the rearrange operation, and $X_{sa}$ denotes the output of SA.
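A minimal PyTorch sketch of this channel-wise self-attention is given below. The exact scaling factor and projection layout are assumptions based on Eqs. (2)-(6) and Fig. 3 (b), not the released implementation.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Sketch of SA: 1x1 + 3x3 depthwise projections, then channel-wise attention (Eqs. 2-6)."""
    def __init__(self, dim):
        super().__init__()
        def proj():  # 1x1 pointwise conv followed by 3x3 depthwise conv
            return nn.Sequential(nn.Conv2d(dim, dim, 1),
                                 nn.Conv2d(dim, dim, 3, padding=1, groups=dim))
        self.to_q, self.to_k, self.to_v = proj(), proj(), proj()
        self.project_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x).reshape(b, c, h * w)                     # Q_hat: C x HW
        k = self.to_k(x).reshape(b, c, h * w).transpose(1, 2)     # K_hat: HW x C
        v = self.to_v(x).reshape(b, c, h * w)                     # V_hat: C x HW
        attn = torch.softmax(q @ k / (h * w) ** 0.5, dim=-1)      # C x C map; sqrt(d)=sqrt(HW) assumed
        out = (attn @ v).reshape(b, c, h, w)                      # rearrange back to C x H x W
        return self.project_out(out)                              # Eq. (6)
```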

III-B2 Residual depth feature extraction module (RDFE)

As shown in Fig. 3 (a), we design RDFE to extract local facial features at different scales. Compared with the traditional feed-forward network (FFN), our RDFE can flexibly process more complex and multi-scale features. Specifically, for the input feature $X\in\mathbb{R}^{C\times H\times W}$, we use $3\times 3$, $5\times 5$, and $7\times 7$ depthwise convolutions in parallel to extract facial features at three scales; the depthwise convolutions reduce the computational complexity of the model, while the different kernel sizes effectively extract rich facial details. The above operations can be expressed as:

$f_{1}, f_{2}, f_{3}=F_{dw3}(X), F_{dw5}(X), F_{dw7}(X)$,   (7)

where $F_{dw3}$, $F_{dw5}$, and $F_{dw7}$ are the $3\times 3$, $5\times 5$, and $7\times 7$ depthwise convolutions, respectively. Additionally, we use an attention unit $F_{au}$ to compute a feature weight from the fused features of the three branches. We then multiply this weight with the three branch features at different scales. This operation enables our model to adaptively allocate weights to extract important facial information under different receptive fields, improving its performance and generalization ability. Next, we use $3\times 3$, $5\times 5$, and $7\times 7$ depthwise convolutions to further reconstruct the selected important facial features. The above operations can be expressed as:

$f^{\prime}=F_{au}(H_{cat}(f_{1}, f_{2}, f_{3}))$,   (8)
$f_{1}^{\prime}, f_{2}^{\prime}, f_{3}^{\prime}=f_{1}(X)\cdot f^{\prime}, f_{2}(X)\cdot f^{\prime}, f_{3}(X)\cdot f^{\prime}$,   (9)

where $H_{cat}$ is the concatenation operator. Then, we aggregate the features of the three branches to combine facial detail information under different receptive fields. This process can be described as:

$f^{\prime\prime}=H_{cat}(F_{conv1}(f_{1}^{\prime}, f_{2}^{\prime}, f_{3}^{\prime}))+X$,   (10)

where $F_{conv1}(\cdot)$ represents a $1\times 1$ convolution. Finally, we utilize a Feature Refinement Module (FRM) to refine the features obtained from the previous branches. Specifically, we first apply normalization and multiple $3\times 3$ convolutional layers to refine the local facial context. Afterward, an hourglass block further integrates multi-scale information to capture global and local relationships. The above operations can be expressed as:

$f^{\prime\prime\prime}=F_{frm}(f^{\prime\prime})$,   (11)

where $F_{frm}(\cdot)$ denotes the feature refinement module.
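The following sketch captures the computation of Eqs. (7)-(11). The exact forms of the attention unit and the feature refinement module (in particular the hourglass block) are simplified assumptions.

```python
import torch
import torch.nn as nn

class RDFE(nn.Module):
    """Sketch of the Residual Depth Feature Extraction Module (Fig. 3a), simplified."""
    def __init__(self, dim):
        super().__init__()
        self.dw3 = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.dw5 = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.dw7 = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        # attention unit: squeeze the concatenated branches into a shared weight map (assumed form)
        self.au = nn.Sequential(nn.Conv2d(3 * dim, dim, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(3 * dim, dim, 1)
        # feature refinement: norm + 3x3 convs; the hourglass block is omitted for brevity
        self.frm = nn.Sequential(nn.BatchNorm2d(dim),
                                 nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))

    def forward(self, x):
        f1, f2, f3 = self.dw3(x), self.dw5(x), self.dw7(x)        # Eq. (7)
        w = self.au(torch.cat([f1, f2, f3], dim=1))               # Eq. (8)
        f1, f2, f3 = f1 * w, f2 * w, f3 * w                       # Eq. (9)
        f = self.fuse(torch.cat([f1, f2, f3], dim=1)) + x         # Eq. (10)
        return self.frm(f)                                        # Eq. (11)
```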

III-B3 Selective Kernel Attention Fusion Module (SKAF)

Inspired by SKNet [36] and LSKNet [37], as shown in Fig. 2, we design the SKAF module to give our model the ability to select and fuse the local and global features required for reconstruction. Specifically, given the global feature $X_{1}\in\mathbb{R}^{C\times H\times W}$ obtained by SA and the local feature $X_{2}\in\mathbb{R}^{C\times H\times W}$ obtained by RDFE, we first fuse the local and global features with a $5\times 5$ convolution and a $7\times 7$ convolution to get a hybrid feature $X$. This operation can be expressed as:

$X_{1}^{\prime}, X_{2}^{\prime}=F_{conv5}(X_{1}, X_{2}), F_{conv7}(X_{1}, X_{2})$,   (12)
$X=H_{cat}(X_{1}^{\prime}, X_{2}^{\prime})$,   (13)

where $F_{conv5}(\cdot)$ represents the $5\times 5$ convolution, $F_{conv7}(\cdot)$ represents the $7\times 7$ convolution, and $H_{cat}(\cdot)$ denotes concatenation along the channel dimension. Then, we apply pooling to learn the weight of the obtained hybrid feature, where the weight reflects the importance of features under different receptive fields. The process of obtaining the weight for selecting the required facial features is as follows:

$X=H_{sig}(H_{cat}(H_{avp}(X), H_{map}(X)))$,   (14)

where $H_{avp}(\cdot)$ and $H_{map}(\cdot)$ denote the average pooling and max pooling operations, respectively, and $H_{sig}(\cdot)$ denotes the sigmoid function. Finally, we multiply the weights obtained from the above calculations with the local and global features, respectively. In this way, our SKAF can adaptively select the important local and global information required for reconstruction. The process of obtaining the important local and global features $X^{\prime}$ and $X^{\prime\prime}$ by adaptive weight selection can be expressed as:

$X^{\prime}, X^{\prime\prime}=H_{cs}(X)$,   (15)

where $H_{cs}(\cdot)$ denotes the feature separation operation along the channel dimension. Through the above operations, we obtain the adaptively selected local and global features.
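A minimal sketch of SKAF following Eqs. (12)-(15) is given below, assuming LSKNet-style spatial selection: the fused input is processed by 5×5 and 7×7 convolutions, pooled along the channel axis, and passed through a sigmoid gate whose output is split into two selection weights. How the two branch features are combined before entering SKAF (summation in LGFI; concatenation plus a 1×1 convolution in EDFF, Eq. (16)) is our assumption.

```python
import torch
import torch.nn as nn

class SKAF(nn.Module):
    """Sketch of the Selective Kernel Attention Fusion module (Eqs. 12-15), simplified."""
    def __init__(self, dim):
        super().__init__()
        self.conv5 = nn.Conv2d(dim, dim // 2, 5, padding=2)       # 5x5 branch, Eq. (12)
        self.conv7 = nn.Conv2d(dim, dim // 2, 7, padding=3)       # 7x7 branch, Eq. (12)
        self.gate = nn.Conv2d(2, 2, 7, padding=3)                 # maps pooled stats to two gates

    def forward(self, x):                                         # x: fused local/global feature
        u = torch.cat([self.conv5(x), self.conv7(x)], dim=1)      # Eq. (13)
        avg = u.mean(dim=1, keepdim=True)                         # channel-average pooling
        mx, _ = u.max(dim=1, keepdim=True)                        # channel-max pooling
        w = torch.sigmoid(self.gate(torch.cat([avg, mx], dim=1))) # Eq. (14)
        return w[:, 0:1], w[:, 1:2]                               # Eq. (15): two selection weights
```

In LGFI, the two returned weights would then be multiplied with the SA and RDFE outputs, respectively, before the branches are merged.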

Figure 4: Comparison of LGFI and Transformer structures, where SA is self-attention, RDFE and FFN are CNN parts, and SKAF is our feature fusion module.

III-C Encoder and Decoder Feature Fusion Module (EDFF)

To fully utilize the multi-scale features extracted in the encoding and decoding stages, we introduce the EDFF to fuse different features, equipping our AMINet with better feature propagation and representation capabilities. As shown in Fig. 2, our EDFF mainly utilizes the proposed SKAF to fuse and select the features of different scales required for reconstruction. Given the feature $X_{E}\in\mathbb{R}^{C\times H\times W}$ from the encoding stage and the feature $X_{D}\in\mathbb{R}^{C\times H\times W}$ from the decoding stage, we first concatenate them along the channel dimension. Then, a $1\times 1$ convolution is used to reduce the channel count and the computational cost before obtaining two weights through our SKAF. These operations can be expressed as:

$X^{\prime}, X^{\prime\prime}=F_{skaf}(F_{conv1}(H_{cat}(X_{E}, X_{D})))$,   (16)

where $F_{skaf}(\cdot)$ represents the Selective Kernel Attention Fusion Module, $F_{conv1}(\cdot)$ stands for the $1\times 1$ convolutional layer, and $H_{cat}(\cdot)$ denotes concatenation along the channel dimension. Next, we multiply the obtained two weights with the two branches. Through this operation, we obtain the selected facial features from the hybrid feature formed by fusing the encoding and decoding features. Finally, we add the features of the two branches:

$X_{ED}=X_{E}\cdot X^{\prime}+X_{D}\cdot X^{\prime\prime}$,   (17)

through the above operations, we complete the adaptive fusion of the encoding and decoding features.
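Reusing the SKAF sketch above, the fusion of Eqs. (16)-(17) can be written as follows; the reduced channel width after the 1×1 convolution is an assumption.

```python
import torch
import torch.nn as nn

class EDFF(nn.Module):
    """Sketch of the Encoder and Decoder Feature Fusion module (Eqs. 16-17)."""
    def __init__(self, dim):
        super().__init__()
        self.reduce = nn.Conv2d(2 * dim, dim, 1)   # channel reduction after concatenation
        self.skaf = SKAF(dim)                      # SKAF sketch from Section III-B3

    def forward(self, x_e, x_d):                   # encoder feature, decoder feature
        hybrid = self.reduce(torch.cat([x_e, x_d], dim=1))   # inner part of Eq. (16)
        w_e, w_d = self.skaf(hybrid)                         # two selection weights
        return x_e * w_e + x_d * w_d                         # Eq. (17)
```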

III-D Model Extension

Since GAN-based methods [38, 39] can achieve better perceptual quality, we extend our AMINet to AMIGAN to generate higher-quality SR results. The loss function used in training AMIGAN consists of the following three parts:

III-D1 Pixel loss

Pixel-level loss is used to reduce the pixel difference between the SR and HR images. This loss is expressed as:

$\mathcal{L}_{pix}=\frac{1}{N}\sum_{i=1}^{N}\left\|G(I_{LR}^{i})-I_{HR}^{i}\right\|_{1}$,   (18)

where $G$ denotes the AMIGAN generator.

III-D2 Perceptual loss

To enhance the visual quality of super-resolution images, we apply perceptual loss. This involves using a pre-trained VGG19 [40] model to extract facial features from both the HR images and our generated FSR images. Then, we compare the obtained perceptual features of HR and FSR images to constrain the generation of FSR features. Therefore, the perceptual loss can be described as:

$\mathcal{L}_{pcp}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L_{VGG}}\frac{1}{M_{VGG}^{l}}\left\|f_{VGG}^{l}\left(I_{SR}^{i}\right)-f_{VGG}^{l}\left(I_{HR}^{i}\right)\right\|_{1}$,   (19)

where $f_{VGG}^{l}$ represents the feature map from the $l$-th layer of the VGG network, $L_{VGG}$ is the total number of VGG layers used, and $M_{VGG}^{l}$ indicates the number of elements in that feature map.
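A hedged sketch of Eq. (19) using torchvision's pre-trained VGG19 is shown below (requires a recent torchvision for the weights enum). The chosen feature-layer indices and the omission of ImageNet input normalization are our simplifications, not the paper's exact setting.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PerceptualLoss(nn.Module):
    """Sketch of the VGG19 perceptual loss in Eq. (19); the layer choice is an assumption."""
    def __init__(self, layer_ids=(2, 7, 16, 25, 34)):
        super().__init__()
        self.vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
        self.layer_ids = set(layer_ids)
        for p in self.vgg.parameters():
            p.requires_grad = False               # frozen feature extractor

    def forward(self, sr, hr):
        loss, x, y = 0.0, sr, hr
        for idx, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if idx in self.layer_ids:
                loss = loss + torch.abs(x - y).mean()   # element-normalized L1, as in Eq. (19)
        return loss
```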

TABLE I: Verify the effectiveness of LGFI (CelebA, ×8).
Methods PSNR↑ SSIM↑ VIF↑ LPIPS↓
LGFI w/o SA 27.75 0.7944 0.4652 0.1886
LGFI w/o RDFE 27.51 0.7840 0.4495 0.2085
LGFI w/o SKAF 27.74 0.7932 0.4611 0.1979
LGFI 27.83 0.7961 0.4725 0.1821
TABLE II: Quantitative comparison between LGFI and the traditional Transformer shown in Fig. 4 (CelebA, ×8).
Methods Parameters PSNR↑ SSIM↑ VIF↑ LPIPS↓
Transformer 11.32M 27.73 0.7952 0.4511 0.1878
LGFI 12.62M 27.83 0.7961 0.4725 0.1821

III-D3 Adversarial loss

GANs have been shown to be effective in reconstructing photorealistic images [38, 39]. A GAN generates FSR results through its generator while using a discriminator to distinguish between the ground truth and the FSR results, which ultimately enables the generator to produce realistic FSR results through constant adversarial training. This process is:

$\mathcal{L}_{dis}=-\mathbb{E}\left[\log\left(D\left(I_{HR}\right)\right)\right]-\mathbb{E}\left[\log\left(1-D\left(G\left(I_{LR}\right)\right)\right)\right]$,   (20)

additionally, the generator tries to minimize:

$\mathcal{L}_{adv}=-\mathbb{E}\left[\log\left(D\left(G\left(I_{LR}\right)\right)\right)\right]$,   (21)

thus, AMIGAN is refined by minimizing the following total objective function:

$\mathcal{L}=\lambda_{pix}\mathcal{L}_{pix}+\lambda_{pcp}\mathcal{L}_{pcp}+\lambda_{adv}\mathcal{L}_{adv}$,   (22)

where $\lambda_{pix}$, $\lambda_{pcp}$, and $\lambda_{adv}$ represent the weighting factors for the corresponding pixel loss, perceptual loss, and adversarial loss, respectively.
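The following sketch assembles Eqs. (18)-(22) into a single AMIGAN training step, assuming the discriminator outputs logits and that the generator G, discriminator D, their optimizers, and the perceptual loss module are already constructed.

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, perceptual_loss, lr_img, hr_img,
               lam_pix=1.0, lam_pcp=0.01, lam_adv=0.01):
    # discriminator update, Eq. (20)
    with torch.no_grad():
        sr_detached = G(lr_img)
    d_real, d_fake = D(hr_img), D(sr_detached)
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # generator update, Eqs. (18), (19), (21), (22)
    sr = G(lr_img)
    loss_pix = F.l1_loss(sr, hr_img)                                                  # Eq. (18)
    loss_pcp = perceptual_loss(sr, hr_img)                                            # Eq. (19)
    d_fake = D(sr)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))    # Eq. (21)
    loss_g = lam_pix * loss_pix + lam_pcp * loss_pcp + lam_adv * loss_adv             # Eq. (22)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_g.item(), loss_d.item()
```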

TABLE III: Quantitative comparison between RDFE and FFN (CelebA, ×8).
Methods Parameters PSNR↑ SSIM↑ VIF↑ LPIPS↓
FFN 12.11M 27.72 0.7931 0.4578 0.1922
RDFE 12.62M 27.83 0.7961 0.4725 0.1821
TABLE IV: Ablation study on our RDFE (CelebA, ×8).
Methods PSNR↑ SSIM↑ VIF↑ LPIPS↓
Single path (3×3 dw) 27.73 0.7934 0.4619 0.1915
Single path (5×5 dw) 27.71 0.7912 0.4587 0.1944
Single path (7×7 dw) 27.72 0.7926 0.4602 0.1922
RDFE w/o AU 27.76 0.7951 0.4673 0.1846
RDFE w/o FRM 27.75 0.7941 0.4643 0.1928
RDFE 27.83 0.7961 0.4725 0.1821
TABLE V: Ablation study on our SKAF (CelebA, ×8).
5×5 conv 7×7 conv Avgpool Maxpool PSNR↑ SSIM↑
× × × × 27.69 0.7919
× √ √ √ 27.76 0.7955
√ × √ √ 27.73 0.7946
× √ × × 27.74 0.7931
× √ √ × 27.79 0.7951
× √ × √ 27.78 0.7946
√ √ √ √ 27.83 0.7961
TABLE VI: Ablation study on our EDFF (CelebA, ×8).
Methods | Baseline: Parameters, PSNR↑, SSIM↑ | + Our EDFF: Parameters, PSNR↑, SSIM↑
SPARNet [17] | 10.6M, 27.73, 0.7949 | 12.1M, 27.85, 0.7964
SFMNet [20] | 8.6M, 27.96, 0.7996 | 10.7M, 28.07, 0.8011

IV Experiments

This section will introduce the experimental setting, our ablation studies, and comparisons with current advanced FSR methods to thoroughly demonstrate the advantages of our approach on both synthetic and real test datasets.

IV-A Datasets and Evaluation Metrics

In our studies, we utilize the CelebA [4] dataset for training and evaluate on the CelebA [4], Helen [41], and SCface [42] datasets. We center-crop the aligned face images and resize them to 128×128 pixels to obtain the high-resolution (HR) versions. These HR images are then downsampled to 16×16 pixels using bicubic interpolation to produce the corresponding low-resolution (LR) images. For our experiments, we randomly choose 18,000 CelebA images for training and 1,000 for testing. In addition, we also utilize the SCface test set as a real-world evaluation dataset. To measure the quality of the FSR results, we use five metrics: PSNR [43], SSIM [43], LPIPS [44], VIF [45], and FID [46].
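For reference, a minimal sketch of the LR/HR pair preparation described above is given below using PIL; the file handling and the exact alignment and cropping details are placeholders.

```python
from PIL import Image

def make_lr_hr_pair(path, hr_size=128, lr_size=16):
    """Center-crop an aligned face, resize to 128x128 (HR), then bicubic-downsample to 16x16 (LR)."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)
    box = ((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2)
    hr = img.crop(box).resize((hr_size, hr_size), Image.BICUBIC)
    lr = hr.resize((lr_size, lr_size), Image.BICUBIC)
    return lr, hr
```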

IV-B Implementation details

We implement our model using the PyTorch framework on an NVIDIA GeForce RTX 3090. The network is optimized using the Adam optimizer with $\beta_{1}=0.9$ and $\beta_{2}=0.99$. The initial learning rate is $2\times 10^{-4}$; for AMIGAN, the learning rates of the generator and discriminator are set to $1\times 10^{-4}$ and $4\times 10^{-4}$, respectively. The loss function weights are configured as $\lambda_{pix}=1$, $\lambda_{pcp}=0.01$, and $\lambda_{adv}=0.01$.
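In code, this configuration amounts to the following sketch; the model constructors are placeholders, and the reading that the $2\times 10^{-4}$ rate applies to the plain AMINet is our assumption.

```python
import torch

net = AMINetSkeleton()                                        # placeholder for the full AMINet
optimizer = torch.optim.Adam(net.parameters(), lr=2e-4, betas=(0.9, 0.99))

# AMIGAN variant: separate learning rates for the generator and discriminator
# opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.9, 0.99))
# opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.9, 0.99))
lambda_pix, lambda_pcp, lambda_adv = 1.0, 0.01, 0.01          # loss weights from Section IV-B
```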

TABLE VII: Quantitative comparisons of our method and existing FSR methods for ×8 FSR on the CelebA and Helen test sets.
Methods CelebA Helen
PSNR↑ SSIM↑ VIF↑ LPIPS↓ PSNR↑ SSIM↑ VIF↑ LPIPS↓
Bicubic 23.61 0.6779 0.1821 0.4899 22.95 0.6762 0.1745 0.4912
SAN [47] 27.43 0.7826 0.4553 0.2080 25.46 0.7360 0.4029 0.3260
RCAN [24] 27.45 0.7824 0.4618 0.2205 25.50 0.7383 0.4049 0.3437
HAN [48] 27.47 0.7838 0.4673 0.2087 25.40 0.7347 0.4074 0.3274
SwinIR [9] 27.88 0.7967 0.4590 0.2001 26.53 0.7856 0.4398 0.2644
FSRNet [12] 27.05 0.7714 0.3852 0.2127 25.45 0.7364 0.3482 0.3090
DICNet [14] - - - - 26.15 0.7717 0.4085 0.2158
FACN [49] 27.22 0.7802 0.4366 0.1828 25.06 0.7189 0.3702 0.3113
SPARNet [17] 27.73 0.7949 0.4505 0.1995 26.43 0.7839 0.4262 0.2674
SISN [18] 27.91 0.7971 0.4785 0.2005 26.64 0.7908 0.4623 0.2571
AD-GNN [50] 27.82 0.7962 0.4470 0.1937 26.57 0.7886 0.4363 0.2432
Restormer-M [51] 27.94 0.8027 0.4624 0.1933 26.91 0.8013 0.4595 0.2258
LAAT [52] 27.91 0.7994 0.4624 0.1879 26.89 0.8005 0.4569 0.2255
ELSFace [53] 27.41 0.7922 0.4451 0.1867 26.04 0.7873 0.4193 0.2811
SFMNet [20] 27.96 0.7996 0.4644 0.1937 26.86 0.7987 0.4573 0.2322
SPADNet [54] 27.82 0.7966 0.4589 0.1987 26.47 0.7857 0.4295 0.2654
AMINet 28.26 0.8091 0.4893 0.1755 27.01 0.8042 0.4694 0.2067


Figure 5: Visual comparisons for ×8 FSR on the CelebA test set. Our method can recover accurate face images.

IV-C Ablation Studies

IV-C1 Study of LGFI

LGFI is proposed to extract local features and global relationships of images, representing a new attempt at interacting local and global information. To verify the reasonableness of our LGFI design, we compare the full LGFI with three ablation models, as shown in Table I. The first model removes SA, labeled "LGFI w/o SA". The second removes RDFE, labeled "LGFI w/o RDFE". The third removes SKAF, labeled "LGFI w/o SKAF". We have the following observations: (a) introducing SA and RDFE individually improves model performance, because these two modules capture global and local features that promote facial feature reconstruction, including facial details and overall contours; (b) model performance is further increased by introducing SKAF to capture the relationship between local and global facial features, because SKAF promotes interaction between SA and RDFE, integrating richer information and providing supplementary cues for the final FSR reconstruction. In our LGFI, using only one module cannot achieve optimal results, validating the effectiveness of LGFI.

IV-C2 Comparison between LGFI and Transformer

As shown in Fig. 4, LGFI uses a dual-branch structure to interact with and represent the local and global features. In contrast, the traditional Transformer in Restormer [55] uses a serial structure to link the local and global features. To verify the effectiveness of LGFI, we replace all LGFIs in the network with Transformers and conduct comparative experiments between the two models under similar parameter counts. From Table II, we can see that the network using LGFI performs better when the two networks have similar parameters. This is because LGFI lets the local and global branches interact, facilitating the communication of multi-scale facial information.

IV-C3 Comparison between RDFE and FFN

The feed-forward network (FFN) performs independent nonlinear transformations of the inputs at each position to help the Transformer capture local features, but it lacks the ability to extract multi-scale features, which is not favorable for accurate FSR. In contrast, our RDFE can extract multi-scale local features well. To compare RDFE and FFN, we replace RDFE with FFN while keeping the parameters of the two models similar. As shown in Table III, since FFN’s ability to capture feature interactions is limited compared to our RDFE that utilizes multiple branches to capture different receptive field facial features, our RDFE performs much better than FFN with similar computational consumption.

IV-C4 Effectiveness of RDFE

In RDFE, a three-branch network guided by an attention mechanism is used for deep feature extraction, and the feature refinement module is used to enrich the feature representation. To verify the effectiveness of RDFE, we conduct multiple ablation experiments with five modified models. The first adopts a single-branch structure with 3×3 depthwise convolution, labeled "Single path (3×3 dw)". The second adopts a single-branch structure with 5×5 depthwise convolution, labeled "Single path (5×5 dw)". The third adopts a single-branch structure with 7×7 depthwise convolution, labeled "Single path (7×7 dw)". The fourth removes the attention unit, labeled "RDFE w/o AU". The fifth removes the feature refinement module, labeled "RDFE w/o FRM". From Table IV, we have the following observations: (a) comparing the first three rows with the last row shows that multi-scale branching benefits the model, owing to its ability to extract face features at different levels; (b) comparing the "RDFE w/o AU" row with the last row shows that using the attention unit (AU) to guide the three-branch feature extraction enables the model to adaptively allocate weights and enhance the representation of important facial information, thus improving performance; (c) from the last two rows, we conclude that the feature refinement module (FRM) further integrates multi-scale information and refines the fused multi-scale features, thus improving performance.


Figure 6: Visual comparisons for ×8 FSR on the Helen test set. Our method can recover accurate face images.

IV-C5 Effectiveness of SKAF

SKAF is an important component of LGFI, facilitating information exchange between the local and global branches. We perform a series of ablation experiments to validate the impact of our SKAF module and assess the practicality of the combined design. Since SKAF consists of dual-branch convolutional layers, a max pooling layer, and an average pooling layer, we verify the effectiveness of each component of SKAF. From Table V, we have the following observations: (a) from the last three rows of the table, using a single pooling branch reduces performance, and using average pooling alone yields different performance from using max pooling alone; this is because salient facial features are the key to face recovery, with max pooling focusing on salient facial information while average pooling focuses on the overall information of the face; (b) comparing the second and third rows with the last row shows that using both the 5×5 and 7×7 convolutions improves performance and fully utilizes key facial information under different receptive fields.

IV-C6 Study of EDFF

This section presents a set of experiments to validate the effectiveness of our EDFF, a module tailored for fusing multi-scale features. We add EDFF to SPARNet [17], using it to connect the encoding and decoding stages and feed the fused features to the next decoding stage. We also add EDFF to SFMNet [20] in the same manner. From the results in Table VI, we can see that although the parameters of both models increase slightly, their performance improves, which proves that EDFF helps fuse features from the encoding and decoding stages.

TABLE VIII: Quantitative comparison of our method with GAN-based methods for ×8 FSR on the Helen test set.
Methods PSNR↑ SSIM↑ VIF↑ FID↓
FSRGAN [12] 25.02 0.7279 0.3400 146.55
DICGAN [14] 25.59 0.7398 0.3925 144.25
SPARGAN [17] 25.86 0.7518 0.3932 149.54
SFMGAN [20] 25.96 0.7618 0.4019 141.23
AMIGAN (Ours) 26.35 0.7769 0.4101 122.43

IV-D Comparison with Other Methods

This section compares our AMINet and its GAN-based variant with leading FSR methods currently available, including SAN [47], RCAN [24], HAN [48], SwinIR [9], FSRNet [12], DICNet [14], FACN [49], SPARNet [17], SISN [18], AD-GNN [50], Restormer-M [51], LAAT [52], ELSFace [53], SFMNet [20] and SPADNet [54].

Figure 7: Visual comparison of existing GAN-based FSR methods on the Helen test set. Obviously, our AMIGAN can reconstruct high-quality face images with clear facial components.


Figure 8: Visual comparisons for ×8 FSR on the SCface test set. Our method can recover clearer face images than existing methods.

IV-D1 Comparison on CelebA dataset

We conduct a quantitative comparison of AMINet against existing FSR methods on the CelebA test set, as detailed in Table VII. Our AMINet outperforms all other methods on every evaluation metric, including PSNR, SSIM, LPIPS, and VIF, which fully demonstrates its efficiency and strongly validates its effectiveness. Additionally, the visual comparison in Fig. 5 reveals that previous FSR methods struggle to accurately reproduce facial features such as the eyes and mouth. In contrast, AMINet excels at preserving the facial structure and producing more precise results, proving its effectiveness.

IV-D2 Comparison on Helen dataset

We evaluate our method on the Helen test set to further assess AMINet's versatility. Table VII provides a quantitative comparison of ×8 FSR results, where AMINet achieves the best performance. Visual comparisons in Fig. 6 indicate that existing FSR methods struggle to maintain accuracy, leading to blurred shapes and a loss of facial details. In contrast, AMINet successfully preserves facial contours and details, reinforcing its effectiveness and adaptability across different datasets.

IV-D3 Comparison with GAN-based methods

We present AMIGAN as an innovative approach to bolster the visual fidelity of image restoration tasks. To substantiate its superiority, we have conducted a rigorous comparison of AMIGAN against state-of-the-art GAN-based methodologies, namely FSRGAN [12], DICGAN [14], SPARGAN [17], and SFMGAN [20]. As a complementary assessment metric, we introduce the FID [46] to quantitatively evaluate the GANs’ performance. The outcomes presented in Table VIII, derived from tests on the Helen dataset, reveal that AMIGAN outpaces its competitors considerably. Furthermore, the visual inspection illustrated in Fig. 7 underscores AMIGAN’s exceptional capabilities. Unlike existing FSR methods, which exhibit visible artifacts in generated facial images, AMIGAN meticulously restores critical facial features and intricate texture details around the mouth and nose, which underscores AMIGAN’s prowess in facial texture restoration, resulting in a notable enhancement in clarity and overall visual realism.

IV-D4 Comparison on Real-world surveillance faces

All the above comparisons are performed on synthetic test sets, which cannot accurately simulate real-world scenarios. To further evaluate our model's performance under real-world conditions, we also conduct experiments using low-quality face images from the SCface dataset [42]. As shown in Fig. 8, we visually compare the reconstruction results. From this figure, we find that the reconstruction results of face prior-based methods are not satisfactory. The challenge lies in accurately estimating priors from real-world LR facial images; incorrect prior information can mislead the reconstruction process. In contrast, our AMINet restores clearer face details and more faithful face structures. This result fully demonstrates our method's effectiveness in real scenarios.

IV-E Model Complexity Analysis

In addition to the performance indicators mentioned earlier, the number of model parameters and inference time are crucial factors in evaluating performance. As shown in Fig. 1, we compare our model with existing ones in terms of parameters, PSNR values, and inference speed. We can see that AMINet still performs well while maintaining a fast inference time and a small parameter count.

V Conclusions

This work proposes an Attention-Guided Multi-scale Interaction Network (AMINet) for face super-resolution. Specifically, we design an LGFI, which allows free communication between global features obtained from self-attention and local features obtained from our designed RDFE. To enhance the diversity of local features in RDFE, we employ multi-scale depthwise separable convolutional kernels coupled with attention mechanisms to extract and refine local features. Furthermore, to adaptively fuse features at different scales, we propose SKAF, which utilizes an attention mechanism to select convolutional kernels of appropriate size to promote feature fusion. Extensive experiments on synthetic and real test sets show that our designed modules significantly improve the communication of features at different scales within modules, allowing our proposed method to outperform existing methods in terms of FSR performance, model size, and inference speed.

References

  • [1] L. Liu, R. Lan, and Y. Wang, “Discriminative face hallucination via locality-constrained and category embedding representation,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 12, pp. 7314–7325, 2021.
  • [2] D. Mamieva, A. B. Abdusalomov, M. Mukhiddinov, and T. K. Whangbo, “Improved face detection method via learning small faces on hard images based on a deep learning approach,” Sensors, vol. 23, no. 1, p. 502, 2023.
  • [3] G. Hu, Y. Yang, D. Yi, J. Kittler, W. Christmas, S. Z. Li, and T. Hospedales, “When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition,” in ICCVW, 2015, pp. 142–150.
  • [4] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in ICCV, 2015, pp. 3730–3738.
  • [5] G. Gao, Z. Xu, J. Li, J. Yang, T. Zeng, and G.-J. Qi, “Ctcnet: A cnn-transformer cooperation network for face image super-resolution,” IEEE Transactions on Image Processing, vol. 32, pp. 1978–1991, 2023.
  • [6] E. Zhou, H. Fan, Z. Cao, Y. Jiang, and Q. Yin, “Learning face hallucination in the wild,” in AAAI, vol. 29, no. 1, 2015.
  • [7] J. Shi, Y. Wang, S. Dong, X. Hong, Z. Yu, F. Wang, C. Wang, and Y. Gong, “Idpt: Interconnected dual pyramid transformer for face super-resolution.” in IJCAI, 2022, pp. 1306–1312.
  • [8] Y. Wang, T. Lu, Y. Zhang, Z. Wang, J. Jiang, and Z. Xiong, “Faceformer: Aggregating global and local representation for face hallucination,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 6, pp. 2533–2545, 2022.
  • [9] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, “Swinir: Image restoration using swin transformer,” in ICCV, 2021, pp. 1833–1844.
  • [10] Q. Bao, Y. Liu, B. Gang, W. Yang, and Q. Liao, “Sctanet: A spatial attention-guided cnn-transformer aggregation network for deep face image super-resolution,” IEEE Transactions on Multimedia, vol. 25, pp. 8554–8565, 2023.
  • [11] W. Li, M. Wang, K. Zhang, J. Li, X. Li, Y. Zhang, G. Gao, W. Deng, and C.-W. Lin, “Survey on deep face restoration: From non-blind to blind and beyond,” arXiv:2309.15490, 2023.
  • [12] Y. Chen, Y. Tai, X. Liu, C. Shen, and J. Yang, “Fsrnet: End-to-end learning face super-resolution with facial priors,” in CVPR, 2018, pp. 2492–2501.
  • [13] D. Kim, M. Kim, G. Kwon, and D.-S. Kim, “Progressive face super-resolution via attention to facial landmark,” arXiv:1908.08239, 2019.
  • [14] C. Ma, Z. Jiang, Y. Rao, J. Lu, and J. Zhou, “Deep face super-resolution with iterative collaboration between attentive recovery and landmark estimation,” in CVPR, 2020, pp. 5569–5578.
  • [15] X. Hu, W. Ren, J. LaMaster, X. Cao, X. Li, Z. Li, B. Menze, and W. Liu, “Face super-resolution guided by 3d facial priors,” in ECCV.   Springer, 2020, pp. 763–780.
  • [16] M. Zhang and Q. Ling, “Supervised pixel-wise gan for face super-resolution,” IEEE Transactions on Multimedia, vol. 23, pp. 1938–1950, 2020.
  • [17] C. Chen, D. Gong, H. Wang, Z. Li, and K.-Y. K. Wong, “Learning spatial attention for face super-resolution,” IEEE Transactions on Image Processing, vol. 30, pp. 1219–1231, 2020.
  • [18] T. Lu, Y. Wang, Y. Zhang, Y. Wang, L. Wei, Z. Wang, and J. Jiang, “Face hallucination via split-attention in split-attention network,” in ACMMM, 2021, pp. 5501–5509.
  • [19] Q. Bao, R. Zhu, B. Gang, P. Zhao, W. Yang, and Q. Liao, “Distilling resolution-robust identity knowledge for texture-enhanced face hallucination,” in ACMMM, 2022, pp. 6727–6736.
  • [20] C. Wang, J. Jiang, Z. Zhong, and X. Liu, “Spatial-frequency mutual learning for face super-resolution,” in CVPR, 2023, pp. 22 356–22 366.
  • [21] J. Shi, Y. Wang, Z. Yu, G. Li, X. Hong, F. Wang, and Y. Gong, “Exploiting multi-scale parallel self-attention and local variation via dual-branch transformer-cnn structure for face super-resolution,” IEEE Transactions on Multimedia, vol. 26, pp. 2608–2620, 2023.
  • [22] W. Li, H. Guo, X. Liu, K. Liang, J. Hu, Z. Ma, and J. Guo, “Efficient face super-resolution via wavelet-based feature enhancement network,” in ACMMM, 2024.
  • [23] J. Li, Z. Pei, W. Li, G. Gao, L. Wang, Y. Wang, and T. Zeng, “A systematic survey of deep learning-based single-image super-resolution,” ACM Computing Surveys, vol. 56, no. 10, pp. 1–40, 2024.
  • [24] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image super-resolution using very deep residual channel attention networks,” in ECCV, 2018, pp. 286–301.
  • [25] J. Xin, N. Wang, X. Gao, and J. Li, “Residual attribute attention network for face image super-resolution,” in AAAI, vol. 33, no. 01, 2019, pp. 9054–9061.
  • [26] G. Gao, W. Li, J. Li, F. Wu, H. Lu, and Y. Yu, “Feature distillation interaction weighting network for lightweight image super-resolution,” in AAAI, vol. 36, no. 1, 2022, pp. 661–669.
  • [27] Y. Wang, Y. Li, G. Wang, and X. Liu, “Multi-scale attention network for single image super-resolution,” in CVPR, 2024, pp. 5950–5960.
  • [28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in NeurIPs, 2017, pp. 5998–6008.
  • [29] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv:1810.04805, 2018.
  • [30] G. Gao, Z. Wang, J. Li, W. Li, Y. Yu, and T. Zeng, “Lightweight bimodal network for single-image super-resolution via symmetric cnn and recursive transformer,” arXiv:2204.13286, 2022.
  • [31] W. Li, J. Li, G. Gao, W. Deng, J. Zhou, J. Yang, and G.-J. Qi, “Cross-receptive focused inference network for lightweight image super-resolution,” IEEE Transactions on Multimedia, vol. 26, pp. 864–877, 2023.
  • [32] W. Li, J. Li, G. Gao, W. Deng, J. Yang, G.-J. Qi, and C.-W. Lin, “Efficient image super-resolution with feature interaction weighted hybrid network,” arXiv:2212.14181, 2022.
  • [33] K. Zeng, Z. Wang, T. Lu, J. Chen, J. Wang, and Z. Xiong, “Self-attention learning network for face super-resolution,” Neural Networks, vol. 160, pp. 164–174, 2023.
  • [34] Y. Yang and Y. Qi, “Image super-resolution via channel attention and spatial graph convolutional network,” Pattern Recognition, vol. 112, p. 107798, 2021.
  • [35] Z. Zhang and C. Qi, “Feature maps need more attention: A spatial-channel mutual attention-guided transformer network for face super-resolution,” Applied Sciences, vol. 14, no. 10, p. 4066, 2024.
  • [36] X. Li, W. Wang, X. Hu, and J. Yang, “Selective kernel networks,” in CVPR, 2019, pp. 510–519.
  • [37] Y. Li, Q. Hou, Z. Zheng, M.-M. Cheng, J. Yang, and X. Li, “Large selective kernel network for remote sensing object detection,” in ICCV, 2023, pp. 16 794–16 805.
  • [38] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” in CVPR, 2017, pp. 4681–4690.
  • [39] X. Wang, K. Yu, S. Wu, J. Gu, Y. Liu, C. Dong, Y. Qiao, and C. Change Loy, “Esrgan: Enhanced super-resolution generative adversarial networks,” in ECCVW, 2018, pp. 1–16.
  • [40] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv:1409.1556, 2014.
  • [41] V. Le, J. Brandt, Z. Lin, L. Bourdev, and T. S. Huang, “Interactive facial feature localization,” in ECCV, 2012, pp. 679–692.
  • [42] M. Grgic, K. Delac, and S. Grgic, “Scface–surveillance cameras face database,” Multimedia Tools and Applications, vol. 51, no. 3, pp. 863–879, 2011.
  • [43] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [44] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in CVPR, 2018, pp. 586–595.
  • [45] H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430–444, 2006.
  • [46] A. Obukhov and M. Krasnyanskiy, “Quality assessment method for gan based on modified metrics inception score and fréchet inception distance,” in CoMeSySo, 2020, pp. 102–114.
  • [47] T. Dai, J. Cai, Y. Zhang, S.-T. Xia, and L. Zhang, “Second-order attention network for single image super-resolution,” in CVPR, 2019, pp. 11 065–11 074.
  • [48] B. Niu, W. Wen, W. Ren, X. Zhang, L. Yang, S. Wang, K. Zhang, X. Cao, and H. Shen, “Single image super-resolution via a holistic attention network,” in ECCV.   Springer, 2020, pp. 191–207.
  • [49] J. Xin, N. Wang, X. Jiang, J. Li, X. Gao, and Z. Li, “Facial attribute capsules for noise face super resolution,” in AAAI, vol. 34, no. 07, 2020, pp. 12 476–12 483.
  • [50] Q. Bao, B. Gang, W. Yang, J. Zhou, and Q. Liao, “Attention-driven graph neural network for deep face super-resolution,” IEEE Transactions on Image Processing, vol. 31, pp. 6455–6470, 2022.
  • [51] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in CVPR, 2022, pp. 5728–5739.
  • [52] G. Li, J. Shi, Y. Zong, F. Wang, T. Wang, and Y. Gong, “Learning attention from attention: Efficient self-refinement transformer for face super-resolution.” in IJCAI, 2023, pp. 1035–1043.
  • [53] H. Qi, Y. Qiu, X. Luo, and Z. Jin, “An efficient latent style guided transformer-cnn framework for face super-resolution,” IEEE Transactions on Multimedia, vol. 26, pp. 1589–1599, 2024.
  • [54] C. Wang, J. Jiang, K. Jiang, and X. Liu, “Structure prior-aware dynamic network for face super-resolution,” IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024.
  • [55] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, and M.-H. Yang, “Restormer: Efficient transformer for high-resolution image restoration,” in CVPR, 2022, pp. 5728–5739.