
Template-Free Try-on Image Synthesis via Semantic-guided Optimization

Chien-Lung Chou, Chieh-Yun Chen, Chia-Wei Hsieh, Hong-Han Shuai, Jiaying Liu, and Wen-Huang Cheng C.-L. Chou, C.-W. Hsieh, and H.-H. Shuai are with the Department of Electrical and Computer Engineering, National Chiao Tung University. E-mail: {chienlung.eed04,maggie1209.tem04,hhshuai}@nctu.edu.tw. C.-Y. Chen and W.-H. Cheng are with the Institute of Electronics, National Chiao Tung University. W.-H. Cheng is also with the Artificial Intelligence and Data Science Program, National Chung Hsing University. E-mail: {cychen.ee09g,whcheng}@nctu.edu.tw. Jiaying Liu is with the Wangxuan Institute of Computer Technology, Peking University. E-mail: liujiaying@pku.edu.cn.
Abstract

The virtual try-on task has drawn considerable attention in the field of computer vision. However, presenting three-dimensional (3D) physical characteristics (e.g., pleats and shadows) based on a 2D image is very challenging. Although there have been several previous studies on 2D-based virtual try-on, most 1) require user-specified target poses, which are not user-friendly and may not be the best for the target clothing, and 2) fail to address some problematic cases, including facial details, clothing wrinkles, and body occlusions. To address these two challenges, in this paper, we propose an innovative template-free try-on image synthesis (TF-TIS) network. TF-TIS first synthesizes the target pose according to the user-specified in-shop clothing. Afterward, given an in-shop clothing image, a user image, and a synthesized pose, we propose a novel model for synthesizing a human try-on image with the target clothing in the best fitting pose. Both qualitative and quantitative experiments indicate that the proposed TF-TIS outperforms the state-of-the-art methods, especially for difficult cases.

Index Terms:
Virtual try-on, image synthesis, pose transfer, semantic-guided learning, cross-modal learning

I Introduction

SHOPPING in brick-and-mortar stores takes considerable time to find one satisfactory item of clothing because it usually requires entering a store and trying on several clothing candidates. In contrast, shopping online is expected to be a much faster purchase journey because finding relevant products is facilitated by online search and recommendation technology. However, although the usage rate of online shopping is rapidly increasing, it is still overshadowed by brick-and-mortar stores because e-commerce platforms cannot provide sufficient information for consumers. Among the many promising approaches [1, 2, 3, 4, 5, 6, 7, 8] bridging the gap between online and offline shopping, virtual try-on is regarded as a key technology for the online fashion industry to burgeon.

Figure 1: Examples of our template-free try-on image synthesis (TF-TIS) network, which takes only the in-shop clothing image and user image as input without a defined pose. Our goal is to generate a realistic try-on image according to the synthesized pose from the referential clothing image, which can reduce the cost of hiring photographers. No other work has achieved this.

To realize virtual try-on services, a recent line of studies has used clothing warping to transform the in-shop clothing and then paste the warped clothing onto user images [9, 1], which preserves the details of clothes, including patterns and decorative designs. Nonetheless, the quality of the results significantly decreases when an occlusion (e.g., the user's arm is in front of the chest, obscuring the garment in the source image) or a dramatic pose change (e.g., the limbs go from wide open to crossed) occurs. To solve these challenging cases, our previous work [10] introduced a semantic-guided method, which uses semantic parsing to learn the relationship between different poses. However, the clothing part of the virtual try-on results still has some artifacts (e.g., missing details such as small buttons, and local inconsistencies such as distorted plaid), which are important for a try-on service. Moreover, although state-of-the-art virtual try-on applications [11, 12], including our previous work [10], have demonstrated try-on results in arbitrary poses, they require users to assign the target poses instead of directly recommending suitable poses based on the clothing style. Therefore, to create a convenient and practical virtual try-on service, an application that automatically synthesizes a suitable pose corresponding to the target clothing is desirable.

Based on the above observations, in this paper, we propose a novel virtual try-on network, namely, the template-free try-on image synthesis (TF-TIS) framework, for synthesizing high-quality try-on images with automatically synthesized poses. In addition, Fig. 1 illustrates examples of the template-free virtual try-on. Given a source user image and an in-shop clothing, the goal is to first synthesize the target pose automatically, which is further leveraged to generate the try-on image. Fig. 2 presents the TF-TIS framework comprising four modules: 1) cloth2pose, which synthesizes a suitable pose from the in-shop clothing (Column 3 in Fig. 1), 2) the pose-guided parsing translator, which translates the source pose to semantic segmentation according to the synthesized poses (Column 4 in Fig. 1), 3) segmentation region coloring, which renders the clothing and human information on the semantic segmentation (Column 5 in Fig. 1), and 4) salient region refinement, which polishes the important regions, such as faces and logos (the last column in Fig. 1).

Specifically, given an in-shop clothing for try-on, we first aim to synthesize a suitable corresponding pose represented as keypoints (each pose is specified by 18 keypoints, and each keypoint represents one human body joint). One basic approach is to use the images of mannequins wearing the corresponding in-shop clothes as the target poses. However, some in-shop clothes may not have corresponding images. Another approach is to cluster the in-shop clothes first and assign the most frequent poses in the cluster as the target pose. Nevertheless, such an approach highly depends on the clustering results, and rare or unseen clothes may not find appropriate poses. Therefore, we propose a novel cloth2pose network to directly learn the relationship between the in-shop garment and the target pose, which leverages deep features from a pretrained model and then uses a regressor to fit the joint map (i.e., keypoints). To the best of our knowledge, this is the first work to generate suitable poses for corresponding in-shop clothes.

Afterward, given a source user image and a synthesized pose, the goal is to synthesize a realistic try-on image. An intuitive method to tackle difficult cases of the body occlusion or the dramatic pose transfer is offering the body parsing information to the current try-on networks. However, this method is not compatible with existing try-on models because most of the previous try-on works have focused on directly warping the clothing item and pasting the warped clothing onto the users. Therefore, the pose-guided parsing translator is proposed by constructing a deep convolutional network to transform a pose into a semantic segmentation form to guide the learning of the next stage. Semantic segmentation plays a critical role in solving difficult cases. For example, limb parsing provides information for solving dramatic pose changes, whereas limb and clothing parsing offer clues addressing body occlusion issues.

Moreover, to present realistic try-on images to users, we color the transformed semantic segmentation with the appearance of the human and clothes by using a conditional generative adversarial network (CGAN) in segmentation region coloring. Finally, salient region refinement focuses on two salient regions for try-on services (i.e., the face and clothing) and enhances these regions with details to achieve better virtual try-on images. For clothing refinement, we constructed a detail-retaining network, which adopts two encoders to extract relatively important features and global and local discriminators to retain the consistency of images, especially the clothing.

Our previous work is called FashionOn [10]. We have made several changes in this work, and the contributions are summarized as follows.

  • We designed a new pose synthesis framework, which directly learns the relationship between in-shop clothes and try-on poses to synthesize a suitable try-on pose. The automatically synthesized poses can facilitate a user-friendly platform without the extra effort of uploading a target pose and exhibit better virtual try-on results to attract customers. To the best of our knowledge, TF-TIS is the first virtual try-on network to provide a suitable pose for the corresponding clothing image.

  • We redesigned the clothing refinement generator (Section III-D-2) to be composed of two distinct encoders, because sharing the same encoder parameters for the two input features (in-shop clothing and warped coarse clothing) led to unsatisfying results. One encoder encodes the warped coarse clothing, and we integrate its output with the highly detailed features of the in-shop clothing extracted by the other encoder. In addition, we adopted a UNet-like architecture to avoid losing the warped coarse clothing information, such as shape and color.

  • To enhance the consistency and quality of the generated image, we propose global and local discriminators for our ClothingGAN (Section III-D-2). The local discriminator forces the generator to synthesize images with more natural details, whereas the global discriminator forces the generator to produce a realistic picture overall.

II Related work

II-A Virtual Try-on

Existing virtual try-on approaches can be roughly categorized into 3D-based methods (e.g., 3D body shape) and 2D-based methods (e.g., clothing warping). We first introduce these two approaches and then compare them with TF-TIS.

II-A1 3D-based Try-on

To generate more realistic results, numerous approaches [13, 14, 15] have used users' 3D body shape measurements and 4D sequences (e.g., video) to provide more information. For example, with high-resolution videos, Pons-Moll et al. [13] first captured the geometry of clothing on a body to obtain rough body meshes and then aligned predefined clothing templates to the garments of the input scans to generate more realistic and body-fitting clothes.

Given the high cost of physics-based simulation to accurately drape a 3D garment on a 3D body, Gundogdu et al. [15] implemented 3D cloth draping using neural networks. Specifically, they used a PointNet-like model to derive the user information and encoded the garment meshes to obtain the point-wise, patch-wise, and global features for the fitted garment estimation.

In summary, although 3D-based approaches can produce try-on videos, the collection of measurement data can be costly, requiring extensive manual labeling or expensive equipment. Therefore, many scholars have resorted to using rich 2D images, which can easily be found online, to achieve the virtual try-on task. Moreover, the proposed TF-TIS only requires a source image and an in-shop clothing image to synthesize a try-on image with a suitable pose.

II-A2 2D-based Try-on

To synthesize try-on images, it is necessary to transform the in-shop clothing to fit the user's pose. Therefore, spline-based approaches have been introduced for this task. Among them, the thin plate spline (TPS) [16] has been widely adopted and predominates in the nonrigid transfer of images, as opposed to direct generation using neural networks. For example, Han et al. [1] presented an image-based virtual try-on network (VITON) that warps in-shop clothes through TPS and cascades a refinement network to combine the warped-clothing details with the coarse-grained person image. However, some details are still missing after the refinement network [1].

To correct the deficiencies in [1], Wang et al. [9] constructed a two-stage framework, CP-VTON, which combines the generated person with the warped clothes through a generated composition mask without adopting the refinement network. Moreover, Zheng et al. advanced CP-VTON [9] and proposed Virtually Trying on New Clothing with Arbitrary Poses (VTNCAP) [11], adopting a bidirectional GAN and an attention mechanism, which take the place of the generated composition mask in CP-VTON, to focus more on the clothing region.

Nevertheless, these methods still neglect that the facial region is also an important factor in determining the quality of virtual try-on, and they cannot preserve detailed clothing information (e.g., pleats and shadows) that follows the human pose. In contrast, TF-TIS was developed as a semantic segmentation-based method that avoids these issues. In addition, TF-TIS preserves the comprehensive details of the in-shop clothes (e.g., patterns and texture) and the realistic human appearance (e.g., hair color and facial features) in accordance with human poses and different body shapes. A visual comparison between TF-TIS and the other mentioned methods is illustrated in Fig. 3.

Figure 2: Training overview. Stage I (cloth2pose) exploits the correlation of clothes and poses and synthesizes a pose from the in-shop clothes via sequential convolution blocks. Stage II (pose-guided parsing translator) transfers the human semantic segmentation to $M_g$ according to $M'_s$, $M_c$, and $P$. Stage III (segmentation region coloring) fills the clothing information and the user's appearance into the segmentation to synthesize a realistic try-on image $I_g$. Stage IV (salient region refinement) consists of two parts: FacialGAN and ClothingGAN. FacialGAN generates high-frequency details as a residual output and adds them directly onto the facial region of $I_g$. ClothingGAN extracts fine information from the in-shop clothing image and uses the features for the details of the refined clothes $C_r$.

II-B Pose Transfer

Research on human pose transfer [17, 18, 19, 20, 21, 22, 23] has attracted increasing attention recently because of its numerous prospective applications. The process of human pose transfer comprises two stages: pose estimation and image generation. The first stage can be divided into two categories (i.e., keypoint estimation [24, 25, 26] and human semantic parsing [27, 28, 29]). For example, Hidalgo et al. [26] trained a single-stage network through multi-task learning to determine the keypoints of the whole body simultaneously. Gong et al. [28] used the part grouping network (PGN) to reformulate instance-level human parsing as two twinned sub-tasks that can be jointly learned and mutually refined.

For the second stage, with the advances in GANs, image generation has received considerable attention and has been widely adopted [18, 21, 23] to generate realistic images. Among the existing pose transfer research, most works constructed novel architectures and successfully transferred the pose of a given human image based on the human joint points. For instance, Ma et al. [17] separated this task into two stages: pose integration, which generates initial but blurry images, and image refinement, which refines images by training a refinement network in an adversarial way. Siarohin et al. [20] used geometric affine transformations to mitigate the misalignment problem between different poses. However, most of the previous works did not extend pose transfer to virtual try-on. By infusing pose transfer, virtual try-on services provide consumers with more opportunities to see how they would appear in new clothes from multiple viewpoints and induce them to buy clothes. Hence, by converting semantic segmentation, TF-TIS seamlessly integrates virtual try-on with pose transfer to generate multi-view try-on images for customers.

II-C Cross-modal Learning

The association between different fields has been studied and exploited recently [30, 31, 32, 33, 34, 35, 36, 37] (e.g., the cross-modal matching between audio and visual signals [30, 31, 32], image and text [33, 35, 36]). Castrejón et al. [38] used the identical network architecture with different weights as encoders to extract low-level features from different modalities (e.g., sketches and natural images) and then inputted them into the shared cross-modal representation network to learn the representation for scenes. Tae-Hyun et al. [39] attempted to learn voice-face correlation in a self-supervised manner (i.e., directly capturing the dominant facial traits of the person correlated with the input speech instead of synthesizing the face from the attributes). Inspired by the cross-modal learning, which can find hidden details from an inconspicuous part of data or align the embedding from one domain to another, we used a similar concept to learn the correlation between clothes and human poses to synthesize the image of the virtual try-on with a suitable pose for the user from the corresponding clothing.

TABLE I: Notation Table
Symbols | Definitions
$C_t$ | in-shop clothing image
$M_c$ | in-shop clothing mask
$C_d$ | detailed clothing representation
$C_w$ | warped-clothing representation
$C_r$ | refined clothing
$P$ | keypoint tensor (suitable pose)
$F$ | in-shop clothing feature map
$I_s$ | source user image
$I'_s$ | source user image without clothes
$I_t$ | target user image
$I_g$ | generated try-on image
$M_s$ | source body semantic segmentation
$M'_s$ | source body semantic segmentation without clothing part
$M_g$ | generated body semantic segmentation
$M_t$ | target body semantic segmentation
$M^{fg}_i$ | foreground channels of $M_i$, $i \in \{s, g, t\}$
$M^{face}_i$ | facial channels of $M_i$, $i \in \{s, g, t\}$
$M^{clothing}_i$ | clothing channels of $M_i$, $i \in \{s, g, t\}$
$I^{face}_i$ | facial part of $I_i$, $i \in \{s, g, t\}$
$I^{clothing}_i$ | clothing part of $I_i$, $i \in \{s, g, t\}$
$d$ | high-frequency residual face details
$N_{c2p}$ | number of convolution blocks in cloth2pose
$\otimes$ | pixel-wise multiplication

III Proposed Method

As illustrated in Fig. 2, given an in-shop clothing image $C_t$ and a source user image $I_s$, the goal of TF-TIS is to generate the try-on image $I_g$ with an automatically synthesized suitable pose such that the personal appearance and the clothing texture are retained. To achieve this goal, we developed a four-stage framework in TF-TIS: (I) cloth2pose, which derives a suitable pose $P$ based on the in-shop clothing $C_t$ by exploiting the correlation between poses and clothes; (II) the pose-guided parsing translator, which transforms the body semantic segmentation $M_s$ into a new one, $M_g$, according to the derived pose; (III) segmentation region coloring, which takes $I_s$, $C_t$, and $M_g$ as input and synthesizes a coarse try-on image $I_g$ by rendering the personal appearance and clothing information into the segmentation regions; and (IV) salient region refinement, which refines the salient but blurry regions of the try-on result $I_g$ generated from the previous stage (i.e., FacialGAN refines facial regions and ClothingGAN refines clothing regions). The notation used throughout the paper is summarized in Table I.

III-A Cloth2pose

A virtual try-on service usually requires three inputs [11, 9, 12]: 1) a user image, 2) an in-shop clothing image, and 3) a target pose. One potential improvement is to automatically generate the target pose according to the in-shop clothing because it reduces the users' efforts. Moreover, a suitable pose better demonstrates the in-shop clothing, which may stimulate consumption. For example, plain T-shirts in a sideways pose can mostly show muscle lines. To synthesize the target pose directly from in-shop clothing, cloth2pose uses pairs of in-shop clothes and mannequin photos on the online shopping site for training. Specifically, cloth2pose first derives keypoints of mannequin photos by existing models, e.g., [24, 40, 41] (in our experiments, we use the OpenPose model [24], a 2D pose estimation model pretrained on the large-scale human pose datasets COCO [42] and MPII [43]). The following keypoints are used: nose, eyes, ears, neck, shoulders, elbows, wrists, hips, knees, and ankles. Let $x_k$ denote the 2D position of the $k^{th}$ keypoint on the image ($I_t$). Because it is difficult to regress the clothing features to a single point, we convert the keypoint position $x_k$ into the pose map $P_k$ by applying a 2D Gaussian distribution for each keypoint. The values at position $p \in \mathbb{R}^2$ in $P_k$ are defined as follows:

P_k(p) = \exp\left(-\frac{\lVert p - x_k \rVert^2_2}{\sigma^2}\right), (1)

where $\sigma$ determines the spread of the peak. After constructing the 2D keypoint map for each keypoint, we stack all the 2D keypoint maps together as a keypoint tensor, denoted as $P$.
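A minimal sketch of Eq. (1) is given below: converting 2D keypoint coordinates into a stacked Gaussian heatmap tensor $P$. The image size, keypoint coordinates, and sigma value are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of Eq. (1): build one Gaussian map per keypoint and stack them into P.
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=6.0):
    """keypoints: array of shape (K, 2) holding (x, y) joint positions."""
    ys, xs = np.mgrid[0:height, 0:width]            # pixel grid over the image
    maps = []
    for (kx, ky) in keypoints:
        sq_dist = (xs - kx) ** 2 + (ys - ky) ** 2   # ||p - x_k||_2^2
        maps.append(np.exp(-sq_dist / sigma ** 2))  # Eq. (1)
    return np.stack(maps, axis=0)                   # keypoint tensor P, shape (K, H, W)

# Example usage with 18 hypothetical joints on a 256x192 image.
P = keypoints_to_heatmaps(np.random.rand(18, 2) * [192, 256], 256, 192)
```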

Afterward, cloth2pose extracts features of the in-shop clothes by using the first 10 layers of VGG-19 [44], denoted as $\phi_0$. Let $C_t$ denote the image of in-shop clothing. The clothing feature map $F$ is obtained as $\phi_0(C_t)$. Here, cloth2pose exploits a progressive refinement architecture as illustrated in Fig. 2. Specifically, at the first block, the network produces a set of keypoint information only from the clothing feature map: $P^1 = \phi_1(F)$, where $\phi_1$ refers to the first convolutional block. For the succeeding convolutional blocks, we employ five convolutional layers with a $7\times 7$ kernel and two with a $1\times 1$ kernel to generate the keypoint tensor, and each layer is followed by a ReLU. Each convolutional block takes the concatenation of $F$ and the prediction from the previous block as input to predict the refined keypoint tensor:

P^i = \phi_i(F, P^{i-1}), \quad \forall\, 2 \leq i \leq N_{c2p}, (2)

where $\phi_i$ represents the $i^{th}$ convolutional block and $N_{c2p}$ is the total number of convolutional blocks in cloth2pose.

An intuitive choice for the loss function is the $L_2$ distance between the keypoint tensor extracted from the pose estimation model ($P$) and that estimated from cloth2pose ($P^{N_{c2p}}$), i.e., $\lVert P^{N_{c2p}} - P \rVert^2_2$. However, using only the $L_2$ loss is likely to generate many responses at various locations for one joint, because the $L_2$ loss penalizes spurious responses in nonjoint areas less than an $L_1$ loss would. We therefore employ a sparsity constraint to limit the number of candidates. The final loss is as follows:

\mathcal{L}_{c2p} = \sum_{i=1}^{N_{c2p}} \lVert P^i - P \rVert^2_2 + \lambda \lVert P^{N_{c2p}} \rVert_1, (3)

where $\lambda = 0.00008$ is a hyperparameter for striking a balance between multiple candidates and keypoint vanishing. If the value of $\lambda$ is too high (e.g., $\lambda = 0.001$), the output $P^{N_{c2p}}$ contains no candidates. Conversely, if the value is too low (e.g., $\lambda = 0.00001$), the sparsity constraint becomes ineffective, and the output still has more than one candidate per joint.
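The sketch below shows one way Eq. (3) could be computed in PyTorch; the tensor shapes and variable names are assumptions for illustration, not the released implementation.

```python
# Hedged sketch of Eq. (3): an L2 term over all intermediate predictions plus
# an L1 sparsity penalty on the final block's output.
import torch

def cloth2pose_loss(predictions, target, lam=8e-5):
    """predictions: list of keypoint tensors P^1..P^{N_c2p}; target: P."""
    l2 = sum(torch.sum((p - target) ** 2) for p in predictions)
    sparsity = torch.sum(predictions[-1].abs())   # ||P^{N_c2p}||_1
    return l2 + lam * sparsity
```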

III-B Pose-guided Parsing Translator

Because human body segmentation explicitly shows the corresponding area of each body part, it is employed to synthesize realistic human images. Accordingly, the goal of the pose-guided parsing translator is to translate the source body semantic segmentation $M_s$ to the target body semantic segmentation $M_t$ according to the target pose $P$. We first use the PGN [28], which is pretrained on the Crowd Instance-level Human Parsing dataset, to produce semantic parsing labels. The labels contain 20 categories, including left hand, top clothes, and face. Afterward, to precisely map each item to the new position according to the pose $P$, we use one-hot encoding to constitute a 20-channel tensor $M \in \mathbb{R}^{20\times W\times H}$, where each channel is a binary mask representing one category. Because the clothing channel of $M_s$ is unnecessary, we replace it with the original in-shop clothing mask $M_c$. This replacement provides the in-shop clothing shape needed to realize the virtual try-on service (a sketch of this construction is given below).
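A minimal sketch of building the 20-channel one-hot tensor from a parsing label map and swapping the clothing channel for the in-shop clothing mask $M_c$. The channel index of the clothing label is a hypothetical placeholder.

```python
# Sketch: one-hot encode a parsing map and replace the clothing channel with M_c.
import torch
import torch.nn.functional as F

NUM_CLASSES = 20
CLOTHING_CHANNEL = 5   # assumed index of the top-clothes label (hypothetical)

def build_segmentation_tensor(parsing_labels, in_shop_mask):
    """parsing_labels: LongTensor (H, W); in_shop_mask: FloatTensor (H, W)."""
    one_hot = F.one_hot(parsing_labels, NUM_CLASSES)   # (H, W, 20)
    M = one_hot.permute(2, 0, 1).float()               # (20, H, W)
    M_in_seg = M.clone()
    M_in_seg[CLOTHING_CHANNEL] = in_shop_mask          # swap in the in-shop mask M_c
    return M_in_seg
```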

Adapted from pix2pix [45], the pose-guided parsing translator consists of two downsampling layers, nine residual blocks, and two upsampling layers. Each residual block is composed of convolutional layers and a highway connection that concatenates the input and the output of the corresponding block. The objective of the translator $G_t$ adopts a CGAN as follows:

\mathcal{L}^{G_t}_{GAN}(G_t, D_t) = \mathbb{E}_{M_{in}, M_t}[\log D_t(M_{in}, M_t)] + \mathbb{E}_{M_{in}}[\log(1 - D_t(M_{in}, G_t(M_{in})))], (4)

where $G_t$ minimizes the objective against $D_t$, which maximizes it (i.e., $\arg\min_{G_t}\max_{D_t}\mathcal{L}^{G_t}_{GAN}(G_t, D_t)$), and $M_{in}$ represents the concatenation of $M'_s$, $P$, and $M_c$.

To accurately classify each pixel into the corresponding channel, we integrate a pixel-wise binary cross-entropy loss of $G_t$, denoted as $\mathcal{L}^{G_t}_{BCE}$, into our CGAN objective, while the discriminator stays the same:

\mathcal{L}^{G_t}_{BCE}(G_t) = -\sum_{n_c} \left[ M_t \log(G_t(M_{in})) + (1 - M_t)\log(1 - G_t(M_{in})) \right], (5)

where $n_c$ denotes the total number of channels of the human parsing masks. In summary, the objective of the pose-guided parsing translator is derived as follows:

\arg\min_{G_t}\max_{D_t} \mathcal{L}^{G_t}_{GAN}(G_t, D_t) + \lambda_{bce}\mathcal{L}^{G_t}_{BCE}(G_t). (6)
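A hedged sketch of the generator-side objective in Eq. (6): the adversarial term from Eq. (4) plus the pixel-wise BCE of Eq. (5). It assumes $G_t$ ends with a sigmoid/softmax so its output lies in [0, 1] and $D_t$ returns a probability; the weight value and function names are placeholders.

```python
# Sketch of the translator's generator loss (Eqs. 4-6), PyTorch style.
import torch
import torch.nn.functional as F

def translator_generator_loss(G_t, D_t, M_in, M_t, lambda_bce=10.0):
    M_g = G_t(M_in)                                   # predicted segmentation in [0, 1]
    # Adversarial term: push D_t to label the (M_in, M_g) pair as real.
    fake_score = D_t(M_in, M_g)
    adv = F.binary_cross_entropy(fake_score, torch.ones_like(fake_score))
    # Pixel-wise BCE against the target segmentation M_t (Eq. 5).
    bce = F.binary_cross_entropy(M_g, M_t)
    return adv + lambda_bce * bce
```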

III-C Segmentation Region Coloring

Having obtained the target semantic segmentation from the previous stage, denoted as $M_g = G_t(M_{in})$, segmentation region coloring aims to synthesize a coarse try-on result by rendering information into the segmentation regions. Given the great success of applying GANs to various image generation tasks, we adopt the architecture of a CGAN [46] to synthesize the results. Specifically, we propose a coloring generator $G_c$ that renders the personal information into the body semantic segmentation $M_g$ according to $I_s$ and $C_t$ (i.e., the appearance of the source person and the in-shop clothing texture). Because it is difficult to derive a significant number of training images, we train our network to change the source person. To avoid supplying $G_c$ with the original clothing information, we remove the clothing information from $I_s$. In other words, $G_c$ takes as input 1) the in-shop clothing $C_t \in \mathbb{R}^{3\times W\times H}$, 2) the source person image without clothing information $I'_s \in \mathbb{R}^{3\times W\times H}$, and 3) the target semantic segmentation $M_g \in \mathbb{R}^{20\times W\times H}$.

Fig. 2 illustrates the architecture of TF-TIS. We adopted the UNet architecture with highway connections, combining the input and processed information. Highway connections were employed to avoid the vanishing gradient [47]. Six residual blocks were implemented between the encoder and the decoder of $G_c$. For each residual block, two convolutional layers and a ReLU were stacked to integrate $M_g$, $I'_s$, and $C_t$ from small local regions to broader regions so that the appearance information of $I'_s$ and $C_t$ can be extracted.

Because the background information is less important and easily distracts the generator from synthesizing try-on images, we filter it out to force $G_c$ to concentrate on generating the correct human part of the image rather than the whole image. Specifically, the background of the generation result $I_g = G_c(C_t, I'_s, M_g)$ is filtered out with $M^{fg}_g$, and so is the ground truth $I_t$ with $M^{fg}_t$, where $M^{fg}_g$ and $M^{fg}_t$ represent $M_g$ and $M_t$ without the background channel, respectively. Afterward, global structural information and other low-frequency features are obtained by calculating the L1 distance:

\mathcal{L}^{G_c}_{L1} = \sum_{W}\sum_{H} \left\lVert I_g \otimes M^{fg}_g - I_t \otimes M^{fg}_t \right\rVert_1, (7)

where $\otimes$ represents pixel-wise multiplication.
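The foreground-masked L1 term of Eq. (7) could be implemented as in the following minimal sketch; tensor shapes are assumptions for illustration.

```python
# Sketch of Eq. (7): L1 distance computed only on the foreground regions.
import torch

def masked_l1_loss(I_g, I_t, M_g_fg, M_t_fg):
    """Images: (3, H, W); masks: binary foreground masks broadcastable to the images."""
    return torch.sum(torch.abs(I_g * M_g_fg - I_t * M_t_fg))
```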

For the discriminator, we constructed the coloring discriminator $D_c$ against $G_c$ to distinguish two pairs: one including $I_t$ and $I_s$, and the other including $I_g$ and $I_s$. With the additional real image $I_s$, $D_c$ impels $G_c$ to generate more realistic images. Moreover, because this is a binary classification problem (i.e., the image is real or fake), we employed the binary cross-entropy loss as the GAN loss to compare the generated images:

\mathcal{L}^{G_c}_{GAN} = \mathcal{L}_{BCE}(D_c(G_c(C_t, I'_s, M_g), I_s), 1), (8)
\mathcal{L}^{D_c}_{GAN} = \mathcal{L}_{BCE}(D_c(G_c(C_t, I'_s, M_g), I_s), 0) + \mathcal{L}_{BCE}(D_c(I_t, I_s), 1), (9)

where $G_c$ attempts to deceive $D_c$ into recognizing the synthesized image as a real image; thus, the target of $\mathcal{L}_{BCE}$ in $\mathcal{L}^{G_c}_{GAN}$ is set to 1. In contrast, because $D_c$ must classify the generated and real images correctly, the targets of $\mathcal{L}_{BCE}$ in $\mathcal{L}^{D_c}_{GAN}$ are set to 0 and 1, respectively. In summary, the overall loss function of segmentation region coloring is as follows:

\mathcal{L}^{G_c} = \mathcal{L}^{G_c}_{GAN} + \lambda \mathcal{L}^{G_c}_{L1}. (10)

III-D Salient Region Refinement

Because users care most about the characteristics of products, the performance of a virtual try-on service is highly dependent on the salient regions of the synthesized image, for example, the user (e.g., facial details and body shape), clothing features (e.g., buttons and bow ties), and 3D physical characteristics (e.g., pleats and shadows). Hence, in the fourth stage, we propose two networks to refine the facial and clothing regions separately.

III-D1 FacialGAN

Modeling faces and hair is challenging but essential in synthesizing try-on images. To simplify this complicated work, our network generates residual face details instead of the whole face. Precisely, for the facial refinement network $G_{rf}$, we adjusted the model of segmentation region coloring ($G_c$) to the facial refinement task by excluding the fully connected layer to avoid losing input details during compression. To force $G_{rf}$ to concentrate on facial details, $M^{face}_g$ and $M^{face}_s$ were introduced to filter out the facial region from $I_g$ and $I_s$, respectively, where $M^{face}_g$ denotes the parsing channels representing the head (including the face, neck, and hair). As such, $G_{rf}$ generates the high-frequency details as the residual output $d = G_{rf}(I^{face}_g, I^{face}_s)$, where $I^{face}_g = I_g \otimes M^{face}_g$ and $I^{face}_s = I_s \otimes M^{face}_s$. After processing images through $G_{rf}$, the fine-tuned result is obtained by adding $d$ to $I_g$.

In addition, inspired by [48, 49], the perceptual loss was exploited to produce images that have a similar feature representation even though the pixel-wise accuracy is not high. Let $(d+I_g)^{face}$ and $I^{face}_t$ denote the regions within $M^{face}_g$ of $(d+I_g)$ and $I_t$, respectively. In addition to calculating the pixel-wise loss $\lVert (d+I_g)^{face} - I^{face}_t \rVert_1$, we computed the perceptual loss by mapping both $(d+I_g)^{face}$ and $I^{face}_t$ into the perceptual feature space through different layers ($\phi_i$) of the VGG-19 model. This additional loss allows the model to better reconstruct details and edges.

\mathcal{L}^{G_{rf}}_{vgg}((d+I_g)^{face}, I^{face}_t) = \sum_{i}\lambda_i \left\lVert \phi_i((d+I_g)^{face}) - \phi_i(I^{face}_t) \right\rVert_1, (11)

where $\phi_i$ represents the feature map retrieved from the $i^{th}$ layer of the pretrained VGG-19 model [44]. Furthermore, as in the previous stages, we integrated the GAN loss as follows:

\mathcal{L}^{G_{rf}}_{GAN} = \mathcal{L}_{BCE}(D_{rf}((d+I_g)^{face}, I^{face}_s), 1), (12)
\mathcal{L}^{D_{rf}}_{GAN} = \mathcal{L}_{BCE}(D_{rf}(I^{face}_s, (d+I_g)^{face}), 0) + \mathcal{L}_{BCE}(D_{rf}(I^{face}_s, I^{face}_t), 1). (13)

The overall loss function of FacialGAN is as follows:

\mathcal{L}^{G_{rf}} = \lambda_{f1}\mathcal{L}^{G_{rf}}_{GAN} + \lambda_{f2}\mathcal{L}^{G_{rf}}_{vgg}((d+I_g)^{face}, I^{face}_t) + \lambda_{f3}\sum_{W}\sum_{H} \left\lVert (d+I_g)^{face} - I^{face}_t \right\rVert_1 + \lambda_{f4}\sum_{W}\sum_{H} \left\lVert (d+I_g)\otimes M^{fg}_g - I_t \otimes M^{fg}_t \right\rVert_1, (14)

where each $\lambda_{fi}$ denotes the weight of the corresponding loss term.
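A hedged sketch of the VGG-19 perceptual term used in Eq. (11) (and again in Eq. (17)) is shown below. The chosen feature-layer indices, per-layer weights, and the torchvision weight-loading argument are illustrative assumptions, not the settings used in the paper.

```python
# Sketch of a VGG-19 perceptual (feature-matching) loss with L1 distance.
import torch
import torchvision

class VGGPerceptualLoss(torch.nn.Module):
    def __init__(self, layer_ids=(3, 8, 17, 26), weights=(1.0, 1.0, 1.0, 1.0)):
        super().__init__()
        # "IMAGENET1K_V1" works for torchvision >= 0.13; older versions use pretrained=True.
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg, self.layer_ids, self.weights = vgg, layer_ids, weights

    def forward(self, x, y):
        loss = 0.0
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                w = self.weights[self.layer_ids.index(i)]
                loss = loss + w * torch.mean(torch.abs(x - y))  # L1 in feature space
            if i >= max(self.layer_ids):
                break
        return loss
```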

III-D2 ClothingGAN

Most state-of-the-art virtual try-on networks [1, 9, 50, 11] preserve detailed clothing information by fusing the prewarped clothes onto the try-on images directly. However, these approaches encounter the problems of limb occlusion or incorrect warping patterns of clothes. To solve these problems, in our previous work (FashionOn) [10], we implemented the virtual try-on framework by 1) transforming the human pose into the semantic segmentation form through $G_t$, 2) coloring the clothing textures and human appearance through $G_c$, and 3) processing images through refinement networks.

Although FashionOn restores most of the clothing information, some tiny but important details (e.g., the neckline or buttons) are missing, and the generated images are not sufficiently realistic. Hence, we modify the previous clothing refinement generator and construct a new one ($G_{rc}$) that retrieves clothing features directly from the in-shop clothing $C_t$ and renders them into the clothing region of $I_g$. Inputting the concatenation of the in-shop clothing and warped clothing into the Clothing UNet in our previous work [10] improved the details, but the generated clothing region still lacks fine details, such as the neckline and buttons. The unsatisfactory results are caused by sharing the same encoder parameters for the two input features of in-shop clothing and warped clothing. Moreover, the subtle differences in the details are neglected by the discriminator.

Based on these observations, the proposed ClothingGAN $G_{rc}$ contains four parts: (a) a detail encoder ($E_D$), (b) a warped-clothing encoder ($E_W$), (c) a decoder ($Dec$), and (d) a context discriminator ($D_{rc}$). The generator exploits the detailed information on the in-shop clothing and the warped clothing obtained from $E_D$ and $E_W$, respectively, which is then input into $Dec$ to generate an image of refined clothing. Next, $D_{rc}$, which consists of local and global discriminators, differentiates whether the refined clothing is real or fake by comparing the local and global consistency with real images.

Detail Encoder ($E_D$). The objective of $E_D$ is to learn the detailed and neglected information (i.e., information missing from the previous stage) from an in-shop clothing image ($C_t$). To extract detailed visual features, we use seven convolutional layers, each followed by an instance normalization (IN) layer [51] together with LeakyReLU [52] as the activation function; this is more layers than in $E_W$ because detailed information, such as texture and logos, must be extracted from the original in-shop clothing. After training, $E_D$ can generate a detailed clothing representation, denoted as $C_d = E_D(C_t)$, which is further employed by the decoder to complement the details and synthesize the refined clothing.

Warped-Clothing Encoder ($E_W$). As depicted in Fig. 2, we use the UNet architecture to encode $I^{clothing}_g = I_g \otimes M^{clothing}_g$, where $M^{clothing}_g \in \mathbb{R}^{W\times H}$ is the clothing part of $M_g$. The encoder includes five downsampling convolutional layers with a kernel size of 5, and each layer is followed by an IN layer with LeakyReLU. Each layer of the UNet encoder is connected to the corresponding layer of the UNet decoder through highway connections to produce high-level features. Finally, we obtain the warped-clothing representation $C_w = E_W(I^{clothing}_g)$. In the following, we present how the outputs of $E_D$ and $E_W$ are further employed in the decoder network.

Decoder ($Dec$). To generate refined clothing via the decoder, we concatenate the encoded features $C_d$ and $C_w$ obtained from $E_D$ and $E_W$, respectively, as input. From layer to layer in the decoder, we first take the features from the previous layer together with the precomputed feature maps of $E_W$ connected through a highway connection. Next, we upsample the feature map with a $2\times 2$ bicubic operation. After upsampling, a $3\times 3$ convolution and a ReLU operation are applied. Using the highway connections with $E_W$ allows the network to align the detailed clothing features with the warped-clothing features obtained by the UNet encoder ($E_W$). In other words, the generator can be written as follows:

C_r = G_{rc}(C_t, I^{clothing}_g) = Dec(E_D(C_t), E_W(I^{clothing}_g)). (15)
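A simplified sketch of the two-encoder design in Eq. (15) follows: $E_D$ reads the in-shop clothing, $E_W$ reads the coarse clothing region, and the decoder fuses both. Layer counts and channel widths are illustrative assumptions, and the UNet highway connections from $E_W$ into the decoder are omitted for brevity.

```python
# Sketch of a two-encoder generator for clothing refinement (Eq. 15).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=5, stride=stride, padding=2),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class ClothingGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Detail encoder E_D and warped-clothing encoder E_W (separate parameters).
        self.E_D = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))
        self.E_W = nn.Sequential(conv_block(3, 64), conv_block(64, 128), conv_block(128, 256))
        # Decoder: bicubic upsampling followed by 3x3 convolutions.
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False),
            nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, C_t, I_g_clothing):
        # Both inputs are assumed to have the same spatial size.
        C_d = self.E_D(C_t)                    # detailed clothing features
        C_w = self.E_W(I_g_clothing)           # warped-clothing features
        return self.dec(torch.cat([C_d, C_w], dim=1))   # refined clothing C_r
```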

To bridge the difference between the refined clothing $C_r$ and the target clothing region $I^{clothing}_t = I_t \otimes M^{clothing}_t$, where $M^{clothing}_t$ represents the clothing channel of $M_t$, we introduced the L1 loss ($\mathcal{L}^{G_{rc}}_{L_1}$) and the perceptual loss ($\mathcal{L}^{G_{rc}}_{vgg}$) to refine the clothing as follows:

\mathcal{L}^{G_{rc}}_{L_1}(C_r, I^{clothing}_t) = \sum_{W}\sum_{H} \left\lVert C_r - I^{clothing}_t \right\rVert_1, (16)
\mathcal{L}^{G_{rc}}_{vgg}(C_r, I^{clothing}_t) = \sum_{i=1}^{5}\lambda_i \left\lVert \phi_i(C_r) - \phi_i(I^{clothing}_t) \right\rVert_1, (17)

where $\phi_i(C)$ represents the feature map of the clothing $C$ at the $i^{th}$ layer of the VGG-19 model [44]. By exploiting the L1 loss instead of the L2 loss here, we mitigate the problem of blurry generated images. To further avoid misalignment, the refined clothing $C_r$ is integrated into $I_g$, whose clothing region is removed, to synthesize a refined human image $I_{rg} = C_r \otimes M^{clothing}_g + I_g \otimes (1 - M^{clothing}_g)$. The parsing mask $M^{clothing}_g$ is used to select the clothing region, which facilitates excluding limbs in front of the clothing when fusing the clothing. The loss for the refined try-on result is defined as follows:

\mathcal{L}^{G_{rc}}_{fullbody}(I_{rg}, I_t) = \sum_{W}\sum_{H} \left\lVert I_{rg} - I_t \right\rVert_1. (18)

Context Discriminator. To make the refined clothing more realistic, we also employed the GAN loss $\mathcal{L}^{G_{rc}}_{GAN}$ by adopting a context discriminator comprising global and local discriminators that classify the refined clothing as real or fake by comparing the local and the global consistency with real images. Both discriminators are based on a convolutional network that compresses the images into small feature tensors. A fully connected layer is applied to the concatenation of the output feature tensors and predicts a value between 0 and 1, which represents the probability that the refined clothing is real.

The global discriminator takes as input the bounding box of the clothing part cropped from the result and resized, using bilinear interpolation, to $128\times 128$. It consists of five two-stride convolutional layers with a kernel size of 5 and a fully connected layer that outputs a 1024-dimensional vector. The local discriminator follows a similar pattern, except that the last two convolutional layers are single-stride with a kernel size of 3 and the input size is $64\times 64$. The input of the local discriminator is generated by randomly sampling a $16\times 16$ patch from the bounding box and resizing it to $64\times 64$.

After deriving the outputs from the global and the local discriminators, we build a fully connected layer, followed by a sigmoid function to process the concatenation of two vectors (a 2048-dimensional vector). The output value ranges from 0 to 1, representing the probability that the refined clothing is real, rather than generated. The GAN loss is defined as follows:

\mathcal{L}^{G_{rc}}_{GAN} = \mathcal{L}_{BCE}(D_{rc}(r^f_{local}, r^f_{global}), 1), (19)
\mathcal{L}^{D_{rc}}_{GAN} = \mathcal{L}_{BCE}(D_{rc}(r^f_{local}, r^f_{global}), 0) + \mathcal{L}_{BCE}(D_{rc}(r^t_{local}, r^t_{global}), 1), (20)

where $r$ is the resized result, the subscript $global$ or $local$ denotes the whole or sub-sampled result, respectively, and the superscript $t$ or $f$ indicates whether the result is true or fake (generated). The overall loss function of ClothingGAN is defined as follows:

\mathcal{L}^{G_{rc}} = \lambda_{c1}\mathcal{L}^{G_{rc}}_{vgg} + \lambda_{c2}\mathcal{L}^{G_{rc}}_{L_1} + \lambda_{c3}\mathcal{L}^{G_{rc}}_{fullbody} + \lambda_{c4}\mathcal{L}^{G_{rc}}_{GAN}, (21)

where $\lambda_{ci}$ $(i = 1, 2, 3, 4)$ denotes the weight of the corresponding loss.
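A hedged sketch of the context discriminator is shown below: a global branch over the resized clothing bounding box and a local branch over a random crop, whose feature vectors are concatenated and mapped to a single real/fake probability. Layer counts, channel widths, and the expected input sizes are illustrative assumptions.

```python
# Sketch of a global+local context discriminator for clothing refinement.
import torch
import torch.nn as nn

class ContextDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        def branch(n_layers):
            layers, ch = [], 3
            for i in range(n_layers):
                out_ch = 64 * 2 ** min(i, 3)
                layers += [nn.Conv2d(ch, out_ch, 5, stride=2, padding=2),
                           nn.LeakyReLU(0.2, inplace=True)]
                ch = out_ch
            return nn.Sequential(*layers), ch
        self.global_net, g_ch = branch(5)          # expects a 128x128 crop -> 4x4 map
        self.local_net, l_ch = branch(4)           # expects a 64x64 patch -> 4x4 map
        self.global_fc = nn.Linear(g_ch * 4 * 4, 1024)
        self.local_fc = nn.Linear(l_ch * 4 * 4, 1024)
        self.head = nn.Sequential(nn.Linear(2048, 1), nn.Sigmoid())

    def forward(self, local_patch, global_img):
        g = self.global_net(global_img).flatten(1)
        l = self.local_net(local_patch).flatten(1)
        feats = torch.cat([self.local_fc(l), self.global_fc(g)], dim=1)
        return self.head(feats)                    # probability the clothing is real
```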

IV Experiments

The dataset and implementation details are described below. Afterward, we conduct qualitative and quantitative experiments against the state-of-the-art methods and our previous work FashionOn [10] to demonstrate the effectiveness of TF-TIS.

Figure 3: Visual detail comparison. To compare the details of generated images between different models, we excluded the cloth2pose module from our network. The leftmost three columns are the input, and the rest of the columns are the output of different models and the local enlargement of them. Our TF-TIS has the best performance regarding details, such as the neckline of polo shirts and clothing pattern, and retains global and local consistency.

IV-A Dataset

To train and evaluate the proposed TF-TIS, a dataset containing two different poses and one clothing image for each person is required. However, most of the existing datasets provide either only one pose for each person with the corresponding clothing image [1, 9] or multiple poses for each person but without clothing images [53]. Therefore, we collected a new large-scale dataset containing 10,895 in-shop clothes with the corresponding images of mannequins wearing the in-shop clothes in two different poses (please refer to the images in https://github.com/fashion-on/FashionOn.github.io). In addition, the DeepFashion dataset [53], with an image size of $288\times 192$, is also adopted to broaden the diversity of the data. After removing the incomplete image pairs and wrapping one in-shop clothing image and two human images into each triplet, 11,283 triplets were created. Finally, we randomly split the dataset into a training set and a testing set with 9,590 and 1,693 triplets, respectively.

Figure 4: Pose retrieval examples of cloth2pose. We queried our training dataset by comparing the features extracted via cloth2pose with all clothing features in the dataset. For each query, we present the top five retrieved samples. To focus on the pose information, we eliminate the human information, such as skin or hair color. The leftmost two columns are the input clothes and the translated parsing, generated via Stage II from the derived pose. Although some retrieved samples differ from the query, results visually close to the query can generally be found.

IV-B Implementation Details

Cloth2pose. We initialize the first 10 layers with those of the VGG-19 [44] and fine-tune them to generate a set of clothing feature maps $F$ from the in-shop clothing. Each of the following convolutional blocks contains five convolutional layers with a $7\times 7$ kernel and two with a $1\times 1$ kernel, and each layer is followed by a ReLU. In this stage, we set the number of convolutional blocks to $N_{c2p} = 4$.

Pose-guided Parsing Translator. Based on the ResNet framework, we implement two downsampling layers, nine residual blocks, and two upsampling layers. Specifically, each residual block consists of two single-stride convolutional layers with a $3\times 3$ kernel and one highway connection combining the input and the output of the block.

Segmentation Region Coloring. The architecture is composed of an encoder and a decoder with six residual blocks between them. Except for the last residual block and one fully connected layer, each block contains two single-stride convolutional layers with a $3\times 3$ kernel and one downsampling two-stride convolutional layer with a $3\times 3$ kernel. The number of filters of the convolutional layers increases linearly in the encoder and decreases linearly in the decoder.

Salient Region Refinement. The generator of FacialGAN ($G_{rf}$) is similar to $G_c$ but without the fully connected layer. In addition, $G_{rf}$ has four residual blocks, each containing two convolutional layers and one downsampling convolutional layer. For ClothingGAN, the generator ($G_{rc}$) comprises two different encoders and one decoder. The detail encoder ($E_D$) consists of four downsampling convolutional layers and three convolutional layers, and the warped-clothing encoder ($E_W$) consists of four downsampling convolutional layers and one convolutional layer. All downsampling convolutional layers have a $4\times 4$ kernel and a $2\times 2$ stride, and the other convolutional layers have a $3\times 3$ kernel and a $1\times 1$ stride. Both kinds of convolutional layers are followed by an IN layer and LeakyReLU. The decoder ($Dec$) consists of five $3\times 3$ convolutional layers, and each layer is followed by one upsampling layer, one IN layer, and one ReLU.

For the context discriminator ($D_{rc}$), we adopt two discriminators: (1) the global discriminator, which consists of four downsampling convolutional layers and outputs a 1024-dimensional vector representing the global consistency, and (2) the local discriminator, which consists of three downsampling convolutional layers and outputs a 1024-dimensional vector representing the local consistency. A fully connected layer and a sigmoid function are applied to the concatenation of the two vectors to differentiate whether the image is real or generated.

We used Adam [54] with $\beta_1 = 0.5$ and $\beta_2 = 0.999$ as the optimizer for all stages. The learning rates of the pose-guided parsing translator and the other stages are 2e-4 and 2e-5, respectively.
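The optimizer setup described above could look like the following minimal sketch; the generator module names are placeholders for the stage networks.

```python
# Sketch: Adam optimizers matching the stated betas and learning rates.
import torch

def make_optimizers(G_t, G_c, G_rf, G_rc):
    """G_t: parsing translator; G_c, G_rf, G_rc: coloring and refinement generators."""
    adam = lambda m, lr: torch.optim.Adam(m.parameters(), lr=lr, betas=(0.5, 0.999))
    return {"translator": adam(G_t, 2e-4),
            "coloring":   adam(G_c, 2e-5),
            "facial":     adam(G_rf, 2e-5),
            "clothing":   adam(G_rc, 2e-5)}
```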

IV-C Qualitative Results

Figure 5: Qualitative results sampled from our testing dataset. For every example (a group of six images), we show, from left to right: the input clothing, the generated segmentation image with the pose synthesized by TF-TIS, the try-on result with the synthesized pose, the generated segmentation image with the defined pose in our dataset, the try-on result with the defined pose, and the real try-on image.

Several try-on results are depicted in Figs. 3, 5, 6, and 7.

IV-C1 Evaluation of Virtual Try-on

As Fig. 3 reveals, we compare TF-TIS with the state-of-the-art clothing warping-based method (VTNCAP [11]) and our previous work (FashionOn [10]), which adopts a coarse-to-fine strategy. In addition, because CP-VTON does not include pose transfer, we combine the state-of-the-art pose transfer method GFLA [55] with CP-VTON [9] as an additional baseline (GFLA+CP-VTON). The results indicate that all methods accomplish the task of virtual try-on with arbitrary poses. However, the results of VTNCAP and GFLA+CP-VTON contain some artifacts, while the results of FashionOn lose some details and local consistency. Several cases worth mentioning are listed below.

Neglecting Tiny but Essential Details. Fig. 3 illustrates that ClothingGAN ($G_{rc}$) does generate detailed information. From left to right, the results are from the state-of-the-art works (VTNCAP, GFLA+CP-VTON, and FashionOn) and the ablation studies for $G_{rc}$ in TF-TIS (TF-TIS without $G_{rc}$, TF-TIS without the local discriminator, and TF-TIS). The approaches without $G_{rc}$ (two encoders), including VTNCAP, GFLA+CP-VTON, and FashionOn, produce an erroneous neckline and miss the small button, as revealed in Rows 1 and 3. FashionOn neglects the neckline and the small button because it uses only one encoder to extract information from the concatenation of the in-shop clothing and warped clothing, which degrades the focus on both images. In contrast, the local discriminator of TF-TIS discerns tiny clothing details, and the global discriminator is applied to retain the consistency of the entire image. As a result, TF-TIS generates the neckline and the small button based on more comprehensive information about the warped clothing, yielding an appearance that is closer to the in-shop clothing images.

Wrong Warping Pattern. As depicted in Row 2 of Fig. 3, FashionOn and TF-TIS successfully resolve the wrong warping pattern problem of VTNCAP. Because warping clothes through TPS [16] only considers the deformation of clothes in two dimensions, the warped clothes are unrealistic. Although GFLA+CP-VTON preserves the neckline and the button and generates smooth plaid in Rows 1 and 2, it misses the shading and averages the clothing color in Row 4. In Row 6, GFLA+CP-VTON mistakenly places the red pocket on the right side. In contrast, we predict the warped-clothing mask based on the in-shop clothing mask and the warped body segmentation, which consider the correlation between body parts. Moreover, the proposed TF-TIS retains the consistency of clothes, such as the pattern shape, which makes plaid shirts more realistic, because we adopt global and local discriminators to discern the clothing details and retain consistency.

Average Face. VTNCAP often synthesizes an average face, as depicted in the fourth column of Fig. 3, because it simply uses the whole body as a mask and renders the human information into it. In contrast, we treat human parsing using 18 channels and render the information for each body part into the corresponding region, which is more specific for every part. Additionally, our work employs the FacialGAN to refine the facial part, making it more distinctive instead of synthesizing an average face.

Clothing Color Degradation. In Rows 2, 4, 5, and 6 of Fig. 3, the clothing color of the results derived by VTNCAP deviates from the color of the in-shop clothing. In contrast, FashionOn and TF-TIS successfully preserve the color of the in-shop clothing, which is important for virtual try-on services.

Figure 6: Visual comparison of AdaIN [56] and IN [51] for ClothingGAN.

Human Limbs Occlusion. Rows 5 and 6 in Fig. 3 reveal that the proposed TF-TIS can solve the human limbs occlusion problems in VTNCAP. Rather than simply warping it through TPS, we simultaneously warp the clothing and the body segmentation, then render the human appearance and the clothing information sequentially. Hence, GcG_{c} can easily render the appearance based on all semantic segmentation, preserving the natural correlation between clothes and humans.

Dropping the Detailed Logo. In Fig. 3, the rightmost two columns form the ablation study for the local discriminator within the context discriminator. Row 4 shows that the local discriminator helps generate the full logo: with the local discriminator (rightmost column), the "PARIS" logo is evident with almost all five characters, whereas without it, only three characters are generated.

Comparison of AdaIN and IN for $G_{rc}$. We replace the IN layers in the two encoders of ClothingGAN with adaptive instance normalization (AdaIN) layers to evaluate whether AdaIN helps preserve the clothing details, as shown in Fig. 6. Equation (15) for AdaIN becomes the following:

C_r = Dec\big((1-\gamma)\, E_W(I^{clothing}_g) + \gamma\, AdaIN(E_W(I^{clothing}_g), E_D(C_t))\big), (22)

where $\gamma$ is a hyperparameter for the content-style trade-off. We used $\gamma = 0.25$ and $0.75$ to evaluate the difference and demonstrate the visual comparison in Fig. 6. AdaIN is defined as follows:

AdaIN(x, y) = \alpha(y)\left(\frac{x - \mu(x)}{\alpha(x)}\right) + \mu(y), (23)

where $x$ represents the content input, $y$ is the style input, and $\alpha(y)$ denotes the standard deviation of $y$. AdaIN simply scales the normalized content input with $\alpha(y)$ and shifts it using $\mu(y)$. Fig. 6 reveals that AdaIN tends to generate global features for the clothing information and fails to generate robust details. For example, as presented in Row 4, AdaIN fails to synthesize the robust edge of the suspenders. Moreover, as displayed in Row 5, AdaIN tends to generate blurry flowers. When increasing the hyperparameter $\gamma$ to include a higher proportion of features from $C_t$, the $G_{rc}$ adopting AdaIN generates more robust but still blurrier results than using IN.
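A minimal sketch of Eq. (23) on feature maps is shown below; the epsilon term is an added assumption for numerical stability.

```python
# Sketch of AdaIN (Eq. 23): normalize the content features per channel, then
# rescale with the style features' standard deviation and shift with their mean.
import torch

def adain(content, style, eps=1e-5):
    """content, style: feature maps of shape (B, C, H, W)."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std  = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std  = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean
```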

Figure 7: Qualitative results sampled from the testing dataset.

IV-C2 Evaluation of cloth2pose

Because no previous research can generate target poses according to in-shop clothes, we evaluate the performance of cloth2pose by determining whether it can learn the relationship between the in-shop clothing and the try-on pose. Specifically, in the testing phase, given the in-shop clothing, we use cloth2pose to derive the synthesized pose and generate the translated parsing (second column in Fig. 4). Afterward, we compute the L2 distances between the in-shop clothing feature and all clothing features in the training dataset and retrieve the top five try-on poses with the smallest clothing distance. The in-shop clothing features are extracted using the first 10 layers of the VGG-19 [44].

Fig. 4 presents several examples. The retrieved results reveal that the synthesized poses are very close to some real poses in the top five results (e.g., the fourth sample in Row 2, the first sample in Row 5, and the first sample in Row 6). Moreover, our retrieved examples also demonstrate that different poses should be synthesized in accordance with the in-shop clothing to better present the clothing. For example, T-shirts, like the clothes in Rows 1 to 3, are demonstrated in the front views to show the logo or with one hand in the pocket to show the muscles. However, the camisole tops in Rows 4 to 6, are demonstrated with people standing sideways to show their body shapes, facing the right or left.

Moreover, the qualitative results on our testing dataset presented in Fig. 5 indicate that our model can synthesize a better pose to display the clothing. For each example, we present the input clothing ($C_t$), the user ($I_s$), the translated human parsing with the pose synthesized via the cloth2pose module and its generated image, the human parsing with the defined pose and its generated image, and the ground truth image of the defined pose. Although appearing a little different from the image with the defined pose, the cloth2pose results capture the key information about the human, such as the direction they face. Moreover, we synthesize suitable poses for the clothes. For instance, 1) in Row 3, we derive the pose in the front view to show the pattern of the clothing, and 2) in Rows 4 to 5, we synthesize a sideways pose to show the upper arms and shoulders. Therefore, our model understands the relation between clothes and poses and can synthesize better poses to present better try-on results, which induces users to buy clothes.

IV-D Quantitative Results

Because the structural similarity (SSIM) [57] and the inception score (IS) [58] are standard metrics that assess the overall quality of generated images rather than performing a pixel-wise comparison, we calculated them on the reconstructed try-on results of our dataset. SSIM measures the similarity between the generated and original images in terms of structural information, whereas IS indicates whether the generated results are visually diverse and semantically meaningful.
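A minimal sketch of how these two metrics can be computed (assuming scikit-image >= 0.19 for SSIM and torchmetrics with torch-fidelity installed for IS; the function and variable names are our own, not the exact evaluation code) is given below.

```python
import numpy as np
import torch
from skimage.metrics import structural_similarity
from torchmetrics.image.inception import InceptionScore  # requires torch-fidelity

def mean_ssim(generated, references):
    """Average SSIM over paired (generated, ground-truth) uint8 RGB images of shape (H, W, 3)."""
    scores = [structural_similarity(g, r, channel_axis=2, data_range=255)
              for g, r in zip(generated, references)]
    return float(np.mean(scores))

def inception_score(generated, splits=10):
    """Inception score (mean, std) over a list of uint8 RGB images of shape (H, W, 3)."""
    metric = InceptionScore(splits=splits)
    imgs = torch.from_numpy(np.stack(generated).transpose(0, 3, 1, 2))  # NHWC -> NCHW
    metric.update(imgs)
    mean, std = metric.compute()
    return mean.item(), std.item()
```

Reporting IS as a mean and a standard deviation over the splits matches the "value ± std" format used in Table II.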

As revealed in Table II, our method outperforms the other virtual try-on systems (i.e., VTNCAP, CP-VTON, GFLA+CP-VTON, and FashionOn) in terms of both IS and SSIM. Specifically, TF-TIS outperforms VTNCAP and CP-VTON in terms of IS by 18.9% and 8.14%, respectively, and in terms of SSIM by 19.8% and 11.5%, respectively. Although TF-TIS surpasses FashionOn by less than 1% in both metrics, it recovers the important details and the local and global consistency that FashionOn lacks, as demonstrated in Fig. 3.
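For reference, these relative gains follow directly from the scores in Table II; for instance, the SSIM improvement over VTNCAP is

$\dfrac{0.8725 - 0.7282}{0.7282} \approx 19.8\%,$

and the corresponding IS improvement is $(3.0777 - 2.5874)/2.5874 \approx 18.9\%$.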

TABLE II: Comparison on the virtual try-on testing dataset. We randomly sampled 1,300 samples from the testing dataset.
Method IS SSIM
VTNCAP [11] 2.5874 ± 0.0965 0.7282
CP-VTON [9] 2.8495 ± 0.0832 0.7824
GFLA [55] + CP-VTON [9] 3.0266 ± 0.1740 0.8070
FashionOn (w/o refine) 3.0679 ± 0.1247 0.8689
FashionOn (w/ refine) [10] 3.0693 ± 0.1560 0.8724
TF-TIS (Ours) 3.0777 ± 0.1143 0.8725
Real Data 3.2350 ± 0.1282 1

Note: IS: inception score; SSIM: structural similarity. The higher the score, the better the result.

Runtime. We evaluated the efficiency of the proposed TF-TIS by reporting the running time of the four modules separately. The runtime was measured on an NVIDIA GTX 1080 Ti GPU and averaged over 2000 randomly selected image sets. The runtime of each module is as follows: cloth2pose (1.3 ms), pose-guided parsing translator (2.6 ms), segmentation region coloring (3.1 ms), and salient region refinement ($G_{rf}$: 1.9 ms, $G_{rc}$: 2.6 ms). The results indicate that the proposed TF-TIS not only reduces the cost of hiring photographers but also provides a real-time try-on service for fashion e-commerce platforms.
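Per-module latencies of this kind can be obtained with a measurement loop along the following lines; `module` stands for any of the four sub-networks, and reusing a fixed input batch here is a simplification of averaging over 2000 randomly selected image sets.

```python
import torch

@torch.no_grad()
def gpu_latency_ms(module, inputs, warmup=10, iters=2000):
    """Average forward-pass latency of `module` in milliseconds, measured with CUDA events."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(warmup):              # warm up CUDA kernels / cuDNN autotuning
        module(*inputs)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        module(*inputs)
    end.record()
    torch.cuda.synchronize()             # wait for all queued kernels to finish
    return start.elapsed_time(end) / iters
```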

V Conclusion and Future Work

In this paper, we present a part-level learning network (TF-TIS) for virtual try-on with automatically synthesized poses. Whereas previous works require a user-specified target pose for try-on, TF-TIS generates try-on images with poses synthesized from the clothing characteristics, which better showcases the clothes. The experimental results indicate that TF-TIS significantly outperforms the state-of-the-art virtual try-on approaches on various clothing types, produces more lifelike results, and recommends poses that encourage customers to purchase the clothes. Moreover, as shown in the experiments, TF-TIS captures the relation between clothes and poses and synthesizes poses that present users with better try-on results. In addition, with the proposed global and local discriminators in the clothing refinement network, TF-TIS retains image consistency and preserves critical human information and clothing characteristics, thereby resolving many challenging problems (e.g., generating tiny but essential details and preserving detailed logos). In the future, we plan to extend our approach to learn how different garment sizes deform on a real body in images via transfer learning from 3D human model methods.

Acknowledgments

This work was supported in part by the Ministry of Science and Technology of Taiwan under Grants MOST-109-2221-E-009-114-MY3, MOST-109-2218-E-009-025, MOST-109-2221-E-009-097, MOST-109-2218-E-009-016, MOST-109-2223-E-009-002-MY3, and MOST-109-2221-E-001-015, in part by the National Natural Science Foundation of China under Grant 61772043, in part by the Fundamental Research Funds for the Central Universities, and in part by the Beijing Natural Science Foundation under Contract 4192025.

References

  • [1] X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis, “Viton: An image-based virtual try-on network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
• [2] H.-J. Chen, K. M. Hui, S. Y. Wang, L.-W. Tsao, H.-H. Shuai, and W.-H. Cheng, “Beautyglow: On-demand makeup transfer framework with reversible generative network,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [3] S. C. Hidayati, K.-L. Hua, W.-H. Cheng, and S.-W. Sun, “What are the fashion trends in new york?” in ACM International Conference on Multimedia (ACMMM) Grand Challenges, 2014.
  • [4] S. C. Hidayati, K.-L. Hua, Y. Tsao, H.-H. Shuai, J. Liu, and W.-H. Cheng, “Garment detectives: Discovering clothes and its genre in consumer photos,” in IEEE Workshop on Artificial Intelligence for Art Creation (AIArt), 2019.
  • [5] B. Wu, T. Mei, W.-H. Cheng, and Y. Zhang, “Unfolding temporal dynamics: Predicting social media popularity using multi-scale temporal decomposition,” in AAAI, 2016.
  • [6] S. C. Hidayati, T. W. Goh, J.-S. G. Chan, C.-C. Hsu, J. See, L.-K. Wong, K.-L. Hua, Y. Tsao, and W.-H. Cheng, “Dress with style: Learning style from joint deep embedding of clothing styles and body shapes,” IEEE Transactions on Multimedia, vol. 23, pp. 365–377, 2021.
  • [7] L. Lo, C.-L. Liu, R.-A. Lin, B. Wu, H.-H. Shuai, and W.-H. Cheng, “Dressing for attention: Outfit based fashion popularity prediction,” in 2019 IEEE International Conference on Image Processing, 2019.
• [8] S. C. Hidayati, C.-C. Hsu, Y.-T. Chang, K.-L. Hua, J. Fu, and W.-H. Cheng, “What dress fits me best? fashion recommendation on the clothing style for personal body shape,” in ACM International Conference on Multimedia (ACMMM), 2018.
• [9] B. Wang, H. Zheng, X. Liang, Y. Chen, L. Lin, and M. Yang, “Toward characteristic-preserving image-based virtual try-on network,” in European Conference on Computer Vision (ECCV), 2018.
  • [10] C.-W. Hsieh, C.-Y. Chen, C.-L. Chou, H.-H. Shuai, J. Liu, and W.-H. Cheng, “FashionOn: Semantic-guided image-based virtual try-on with detailed human and clothing information,” in ACM International Conference on Multimedia (ACMMM), 2019.
  • [11] N. Zheng, X. Song, Z. Chen, L. Hu, D. Cao, and L. Nie, “Virtually trying on new clothing with arbitrary poses,” in ACM International Conference on Multimedia (ACMMM), 2019.
  • [12] C.-W. Hsieh, C.-Y. Chen, C.-L. Chou, H.-H. Shuai, and W.-H. Cheng, “Fit-me: Image-based virtual try-on with arbitrary poses,” in IEEE International Conference on Image Processing (ICIP), 2019.
  • [13] G. Pons-Moll, S. Pujades, S. Hu, and M. Black, “Clothcap: Seamless 4d clothing capture and retargeting,” ACM Transactions on Graphics (TOG), 2017.
  • [14] T. Y. Wang, D. Ceylan, J. Popovic, and N. J. Mitra, “Learning a shared shape space for multimodal garment design,” ACM Transactions on Graphics (TOG), 2018.
  • [15] E. Gundogdu, V. Constantin, A. Seifoddini, M. Dang, M. Salzmann, and P. Fua, “Garnet: A two-stream network for fast and accurate 3d cloth draping,” in IEEE International Conference on Computer Vision (ICCV), 2019.
  • [16] S. J. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2002.
  • [17] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool, “Pose guided person image generation,” in Advances in Neural Information Processing Systems (NIPS), 2017.
  • [18] C. Si, W. Wang, L. Wang, and T. Tan, “Multistage adversarial losses for pose-based human image synthesis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [19] A. Pumarola, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer, “Unsupervised person image synthesis in arbitrary poses,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [20] A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe, “Deformable gans for pose-based human image generation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [21] S. Song, W. Zhang, J. Liu, and T. Mei, “Unsupervised person image generation with semantic parsing transformation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [22] Y. Li, C. Huang, and C. C. Loy, “Dense intrinsic appearance flow for human pose transfer,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [23] Z. Zhu, T. Huang, B. Shi, M. Yu, B. Wang, and X. Bai, “Progressive pose attention transfer for person image generation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [24] Z. Cao, G. Hidalgo, T. Simon, S. Wei, and Y. Sheikh, “Openpose: Realtime multi-person 2d pose estimation using part affinity fields,” arXiv preprint arXiv:1812.08008, 2018.
• [25] G. Rogez, P. Weinzaepfel, and C. Schmid, “LCR-Net++: Multi-person 2d and 3d pose detection in natural images,” IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019.
  • [26] G. Hidalgo, Y. Raaj, H. Idrees, D. Xiang, H. Joo, T. Simon, and Y. Sheikh, “Single-network whole-body pose estimation,” in IEEE International Conference on Computer Vision (ICCV), 2019.
  • [27] M. M. Kalayeh, E. Basaran, M. Gokmen, M. E. Kamasak, and M. Shah, “Human semantic parsing for person re-identification,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [28] K. Gong, X. Liang, Y. Li, Y. Chen, M. Yang, and L. Lin, “Instance-level human parsing via part grouping network,” in European Conference on Computer Vision (ECCV), 2018.
  • [29] J. Guo, Y. Yuan, L. Huang, C. Zhang, J.-G. Yao, and K. Han, “Beyond human parts: Dual part-aligned representations for person re-identification,” in IEEE International Conference on Computer Vision (ICCV), 2019.
  • [30] A. Senocak, T.-H. Oh, J. Kim, M.-H. Yang, and I. S. Kweon, “Learning to localize sound source in visual scenes,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [31] A. Owens, P. Isola, J. McDermott, A. Torralba, E. H. Adelson, and W. T. Freeman, “Visually indicated sounds,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [32] A. Owens and A. A. Efros, “Audio-visual scene analysis with self-supervised multisensory features,” in European Conference on Computer Vision (ECCV), 2018.
  • [33] Y. Zhang and H. Lu, “Deep cross-modal projection learning for image-text matching,” in European Conference on Computer Vision (ECCV), 2018.
  • [34] J. Sanchez-Riera, K.-L. Hua, Y.-S. Hsiao, T. Lim, S. C. Hidayati, and W.-H. Cheng, “A comparative study of data fusion for rgb-d based visual recognition,” Pattern Recognition Letters, vol. 73, pp. 1–6, 2016.
  • [35] S. Li, T. Xiao, H. Li, W. Yang, and X. Wang, “Identity-aware textual-visual matching with latent co-attention,” in IEEE International Conference on Computer Vision (ICCV), 2017.
  • [36] Y. Liu, Y. Guo, E. M. Bakker, and M. S. Lew, “Learning a recurrent residual fusion network for multimodal matching,” in IEEE International Conference on Computer Vision (ICCV), 2017.
  • [37] J. Sanchez-Riera, K. Srinivasan, K.-L. Hua, W.-H. Cheng, M. A. Hossain, and M. F. Alhamid, “Robust rgb-d hand tracking using deep learning priors,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 9, pp. 2289–2301, 2018.
  • [38] L. Castrejón, Y. Aytar, C. Vondrick, H. Pirsiavash, and A. Torralba, “Learning aligned cross-modal representations from weakly aligned data,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [39] T.-H. Oh, T. Dekel, C. Kim, I. Mosseri, W. T. Freeman, M. Rubinstein, and W. Matusik, “Speech2face: Learning the face behind a voice,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • [40] Y.-C. Lin, M.-C. Hu, W.-H. Cheng, Y.-H. Hsieh, and H.-M. Chen, “Human action recognition and retrieval using sole depth information,” in ACM International Conference on Multimedia (ACMMM), 2012.
  • [41] S. C. Hidayati, C.-W. You, W.-H. Cheng, and K.-L. Hua, “Learning and recognition of clothing genres from full-body images,” IEEE Transactions on Cybernetics, vol. 48, no. 5, pp. 1647–1659, 2018.
• [42] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft coco: Common objects in context,” in European Conference on Computer Vision (ECCV), 2014.
  • [43] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2d human pose estimation: New benchmark and state of the art analysis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
  • [44] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations (ICLR), 2015.
  • [45] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [46] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint, 2014.
  • [47] Y. Bengio, P. Y. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” IEEE Transactions on Neural Networks (TNN), 1994.
  • [48] M. S. M. Sajjadi, B. Schölkopf, and M. Hirsch, “Enhancenet: Single image super-resolution through automated texture synthesis,” in IEEE International Conference on Computer Vision (ICCV), 2017.
  • [49] H.-X. Xie, L. Lo, H.-H. Shuai, and W.-H. Cheng, “Au-assisted graph attention convolutional network for micro-expression recognition,” in ACM International Conference on Multimedia (ACMMM), 2020.
  • [50] Z. Wu, G. Lin, Q. Tao, and J. Cai, “M2E-try on net: Fashion from model to everyone,” in ACM International Conference on Multimedia (ACMMM), 2019.
  • [51] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky, “Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [52] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in International Conference on Machine Learning Workshops (ICMLW), 2013.
  • [53] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang, “Deepfashion: Powering robust clothes recognition and retrieval with rich annotations,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • [54] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in International Conference on Learning Representations (ICLR), 2015.
  • [55] Y. Ren, X. Yu, J. Chen, T. H. Li, and G. Li, “Deep image spatial transformation for person image generation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
  • [56] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in IEEE International Conference on Computer Vision (ICCV), 2017.
• [57] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing (TIP), 2004.
• [58] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” in Advances in Neural Information Processing Systems (NIPS), 2016.
Chien-Lung Chou received the B.S. degree from the Department of Electrical and Computer Engineering, National Chiao Tung University (NCTU), Hsinchu, Taiwan, in 2019. He is now a master's student in the Department of Electrical and Computer Engineering, University of Michigan. His research interests include artificial intelligence, deep learning, and computer vision.
Chieh-Yun Chen received the B.S. degree from the Department of Electrophysics, National Chiao Tung University (NCTU), Hsinchu, Taiwan, in 2020. She is now a master's student in the Institute of Electronics, NCTU. Her research interests are mainly in artificial intelligence, deep learning, and computer vision.
Chia-Wei Hsieh received the B.S. degree from the Department of Electrical and Computer Engineering, National Chiao Tung University (NCTU), Hsinchu, Taiwan. She is now a master's student in Electrical and Computer Engineering - Machine Learning and Data Science, University of California, San Diego (UCSD). Her research interests include machine learning and computer vision.
Hong-Han Shuai received the B.S. degree from the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, R.O.C., in 2007, the M.S. degree in computer science from NTU in 2009, and the Ph.D. degree from the Graduate Institute of Communication Engineering, NTU, in 2015. He is now an associate professor at NCTU. His research interests are in the areas of multimedia processing, machine learning, social network analysis, and data mining. His works have appeared in top-tier conferences such as MM, CVPR, AAAI, KDD, WWW, ICDM, CIKM and VLDB, and top-tier journals such as TKDE, TMM and JIOT. Moreover, he has served as a PC member for international conferences including MM, AAAI, IJCAI, and WWW, and as an invited reviewer for journals including TKDE, TMM, JVCI and JIOT.
Jiaying Liu (M’10-SM’17) is currently an Associate Professor with the Wangxuan Institute of Computer Technology, Peking University. She received the Ph.D. degree (Hons.) in computer science from Peking University, Beijing, China, in 2010. She has authored over 100 technical articles in refereed journals and proceedings and holds 42 granted patents. Her current research interests include multimedia signal processing, compression, and computer vision. Dr. Liu is a Senior Member of IEEE/CCF/CSIG. She was a Visiting Scholar with the University of Southern California, Los Angeles, from 2007 to 2008. She was a Visiting Researcher with Microsoft Research Asia in 2015, supported by the Star Track Young Faculties Award. She has served as a member of the Multimedia Systems & Applications Technical Committee (MSA-TC), the Visual Signal Processing and Communications Technical Committee (VSPC), and the Education and Outreach Technical Committee (EO-TC) of the IEEE Circuits and Systems Society, and as a member of the Image, Video, and Multimedia (IVM) Technical Committee of APSIPA. She has served as an Associate Editor for IEEE Transactions on Image Processing and Elsevier JVCI. She has also served as the Technical Program Chair of IEEE VCIP-2019/ACM ICMR-2021, the Publicity Chair of IEEE ICME-2020/ICIP-2019/VCIP-2018, and the Area Chair of ECCV-2020/ICCV-2019. She was an APSIPA Distinguished Lecturer (2016-2017).
Wen-Huang Cheng is a Professor with the Institute of Electronics, National Chiao Tung University (NCTU), Hsinchu, Taiwan. He is also a Jointly Appointed Professor with the Artificial Intelligence and Data Science Program, National Chung Hsing University (NCHU), Taichung, Taiwan. Before joining NCTU, he led the Multimedia Computing Research Group at the Research Center for Information Technology Innovation (CITI), Academia Sinica, Taipei, Taiwan, from 2010 to 2018. His current research interests include multimedia, artificial intelligence, computer vision, and machine learning. He has actively participated in international events and played important leading roles in prestigious journals, conferences, and professional organizations, such as Associate Editor for IEEE Transactions on Multimedia, General Co-Chair for IEEE ICME (2022) and ACM ICMR (2021), Chair-Elect for the IEEE MSA technical committee, and governing board member for IAPR. He has received numerous research and service awards, including the 2018 MSRA Collaborative Research Award, the 2017 Ta-Yu Wu Memorial Award from Taiwan’s Ministry of Science and Technology (the highest national research honor for young Taiwanese researchers under age 42), the 2017 Significant Research Achievements of Academia Sinica, the Top 10% Paper Award from the 2015 IEEE MMSP, and the K. T. Li Young Researcher Award from the ACM Taipei/Taiwan Chapter in 2014. He is an IET Fellow and an ACM Distinguished Member.