
DiffCL: A Diffusion-Based Contrastive Learning Framework with Semantic Alignment for Multimodal Recommendations

Qiya Song, Jiajun Hu, Lin Xiao, Bin Sun, Xieping Gao, Shutao Li
Abstract

Multimodal recommendation systems integrate diverse multimodal information into the feature representations of both items and users, thereby enabling a more comprehensive modeling of user preferences. However, existing methods are hindered by data sparsity and the inherent noise within multimodal data, which impede the accurate capture of users' interest preferences. Additionally, discrepancies in the semantic representations of items across different modalities can adversely impact the prediction accuracy of recommendation models. To address these challenges, we introduce a novel diffusion-based contrastive learning framework (DiffCL) for multimodal recommendation. DiffCL employs a diffusion model to generate contrastive views that effectively mitigate the impact of noise during the contrastive learning phase. Furthermore, it improves semantic consistency across modalities by aligning distinct visual and textual semantic information through stable ID embeddings. Finally, the introduction of the Item-Item graph enhances multimodal feature representations, thereby alleviating the adverse effects of data sparsity on the overall system performance. We conduct extensive experiments on three public datasets, and the results demonstrate the superiority and effectiveness of DiffCL.

Index Terms:
Multimodal Recommender Systems, Self-supervised Learning, Diffusion Model, Graph Contrastive Learning.

I Introduction

Recommender systems (RSs), widely applied across various online platforms [1, 2, 3], represent a category of information filtering technologies designed to anticipate user preferences for various items and subsequently deliver tailored recommendations. These systems analyze users' historical behavioral patterns alongside relevant information to identify items or content that may align with their interests. Early RSs predominantly relied on user interaction data, leveraging historical interactions to reflect behavioral similarities among users. However, as user needs become increasingly complex and diverse, these traditional systems have begun to reveal their limitations. In response to this challenge, multimodal recommender systems (MRSs) have emerged, emphasizing the integration of interaction information between users and items while incorporating diverse multimodal features. This approach enables MRSs to capture user preferences in a more comprehensive manner. MRSs have demonstrated considerable success in domains such as e-commerce and short video recommendation, where the richness of multimodal information enhances their ability to deliver personalized recommendations effectively.

Figure 1: Two methods for constructing graph contrastive learning views: edge dropout and random noise addition. (a) Some edges are randomly selected and removed from the graph according to a predefined dropout rate. (b) Random uniform noise or Gaussian noise is added to the feature embeddings after they have been processed by the Graph Encoder.

Building upon conventional recommendation methods [4, 5], current MRSs employ diverse strategies to integrate information from multiple modalities. For instance, VBPR, as an extension of matrix factorization techniques [5], introduces visual modal information into the feature representation of items for the first time, thereby effectively addressing the data sparsity challenge in RSs. Nevertheless, early methods primarily concentrated on unimodal data, resulting in models that are incapable of comprehensively modeling user preferences and item representations. As deep learning technology [6, 7, 8] advances and gains widespread application, RSs leveraging deep learning have attracted significant attention from researchers. These methods [9, 10] are capable of learning the underlying features of raw data and the complex nonlinear correlations between items and users, thereby effectively modeling users' complex preferences. ACNE [11] is a deep learning approach, together with its enhanced version (ACNE-ST), for modeling overlapping communities in network embedding, demonstrating superior performance in vertex classification tasks. CRL [12] is based on matrix factorization, simultaneously considering the complementarity of global and local information, and collaboratively learning both topics and network embeddings. At the same time, deep learning can represent different modal features in the same embedding space, which provides the conditions for the rapid development of MRSs. To capture higher-order features of user-item interactions, Graph Neural Networks (GNNs) are utilized in RSs [13, 14, 15]. A GNN updates the representation of the current node by accumulating information from adjacent nodes within the graph, thereby integrating modal information through the message-passing process to yield a more comprehensive item representation. Existing GNN-based recommendation methods [16, 17, 18] require a significant amount of high-quality interaction data for modeling user-item relationships. Nonetheless, in practical applications of recommendation systems, interaction data is frequently sparse, which constrains the ability of recommendation models to produce accurate results.

Recently, self-supervised learning (SSL), which generates supervision signals from unlabeled data, has provided a new way of addressing the data sparsity issue in RSs [19]. For instance, NCL [20] and HCCF [21] combine SSL with collaborative filtering to model user-item interactions, yet they are not adapted to specific multimodal recommendation scenarios. On the other hand, SGL [22] employs dropout techniques to randomly eliminate interaction edges, subsequently constructing contrastive views to enhance item representations through contrastive learning. However, these methods concentrate exclusively on the enhancement of interaction data, neglecting multimodal information during the data augmentation process, which limits the model's capacity to represent multimodal information. Recent research has sought to address this gap by integrating multimodal modeling with self-supervised techniques [23, 24] to enhance the accuracy of RSs. MMGCL [25] and SLMRec [23] construct contrastive views for contrastive learning by perturbing the final modal features with added random noise. MMSSL [26] enriches the multimodal feature embeddings of items based on SSL with interaction information to achieve better recommendation results. Nevertheless, as shown in Figure 1, these methods are generally based on intuitive cross-view contrastive learning or simple random augmentation techniques, which can introduce noise information irrelevant to the recommendation results.

Drawing inspiration from the success of diffusion models in the field of data generation, we propose a novel framework based on diffusion models for MRSs. Specifically, we first utilize graph convolutional networks to process pre-extracted raw multimodal features in order to capture higher-order multimodal feature representations. Subsequently, to construct contrastive views distinct from those of previous methods, we introduce diffusion models into the graph contrastive learning phase, utilizing both the forward and reverse processes of the diffusion model to generate contrastive views, as opposed to merely adding random noise or employing dropout techniques. This approach effectively mitigates the impact of noise introduced during self-supervised learning. Finally, an Item-Item graph is introduced to augment the embeddings of items, addressing the impact of data sparsity in RSs. Alongside this, we implement an ID-guided semantic alignment task to align semantic information across different modalities, enhancing semantic consistency. This alignment, guided by ID features, leverages their stability and uniqueness to ensure that the semantics of an item remain consistent, irrespective of the modality perspective. The contributions of this paper are briefly summarized as follows:

  • We propose a novel Diffusion-Based Contrastive Learning framework (DiffCL) for multi-modal recommendation, which enhances the semantic representation of items by introducing Item-Item graphs to mitigate the effects of data sparsity.

  • We introduce a diffusion model to generate contrastive views during the graph contrastive learning phase, reducing the impact of noisy information in graph contrastive learning tasks.

  • We utilize stable ID embeddings to guide semantic alignment, enhancing consistency across different modalities and thereby enabling effective complementary learning between the visual and textual modalities.

  • We conduct extensive experiments on three public datasets to validate the superiority and effectiveness of the DiffCL.

II Related Work

II-A Multimodal Recommendation

Multimodal recommendation systems introduce multiple data sources as auxiliary information, extract the semantic information corresponding to these auxiliary sources, and integrate them through multimodal fusion to obtain the user's multimodal preferences and the item's multimodal representation, which are used in the final recommendation stage to improve recommendation accuracy. Early research focused on a single modality. For example, DUIF [27] utilizes additional user information to further enhance the user's feature representation. ACF [28] utilizes an attention network to adaptively learn the weights of user preferences for items. In realistic scenarios, items carry information from multiple modalities; therefore, utilizing different modal information at the same time allows for better modeling of user preferences. CKE [29] enriches the feature representation of an item by combining its image and text features through a knowledge graph built on matrix factorization. Wei et al. [16] use multiple graph convolutional networks to process different modal information as a way to extract cues about a user's preference for a particular modality. Zhang et al. [30] provide a more comprehensive representation of items by building item semantic graphs to reveal hidden relationships between items. In this study, we endeavor to comprehensively leverage the multimodal features of items to model user interest preferences. To this end, we enhance item feature representation through the construction of an Item-Item graph, which uncovers latent relationships among items and establishes a more robust foundation for recommendations.

II-B Graph-based Models for Recommendation

Graph Convolutional Networks (GCNs) have unique advantages in processing graph-structured data: they aggregate information from neighboring nodes and facilitate the extraction of higher-order features [31, 32], and are therefore widely used in recommender systems. NGCF [9] fuses the GCN architecture with matrix factorization, pursuing explicit encoding of higher-order collaborative signals to improve the performance of RSs. Several studies suggest that the nonlinear structures in GCNs are ineffective for extracting collaborative signals between users and items. Based on this, a lightweight GCN recommendation framework, LightGCN, was proposed, which removes the original weight matrix and nonlinear activation function in GCN and achieves better recommendation results. Beyond collaborative signals, GCNs can also capture popularity features; to match users' sensitivity to popularity, JMPGCF [33] utilizes the graph Laplacian norm to capture popularity features at multiple granularities simultaneously. Recently, contrastive learning methods have been introduced into GCN-based recommender systems. SGL [22] generates different contrastive views through various dropout operations and performs contrastive learning based on them. MMSSL [26] introduces an inter-modal contrastive learning task to retain semantic commonality across modalities, reducing the impact of noisy information on recommendation results. Building on these approaches, this work leverages graph convolutional networks to model user preferences and item multimodal features effectively. Furthermore, by incorporating a diffusion model into the graph contrastive learning phase, we aim to reduce the influence of multimodal noise, thereby achieving more robust and accurate recommendation outcomes.

II-C Diffusion Models for Recommendation

In recent years, inspired by the wide application of diffusion models (DMs) [34, 35, 36] in the generative domain, some research has combined DMs with recommender systems to seek better recommendation results. For example, PDRec [37] leverages diffusion-based user preferences to improve the performance of sequential recommendation models. DreamRec [38] introduces the DM to explore the latent connections of items in the item space and uses the user's sequential behavior as a guide to generate the final recommendations. DiffRec [39] uses the DM in the denoising process to generate collaborative information that is similar to the global signal yet personalized. Unlike applications of the DM in the continuous item embedding space, LD4MRec [40] applies the DM to discrete item indices and combines it with multimodal sequential information to guide the prediction of recommendation results. DiffMM [41] enhances the user's representation by combining cross-modal contrastive learning with a modality-aware graph diffusion model to better model collaborative signals and align multimodal feature information. In this work, we propose a novel graph contrastive learning component grounded in the diffusion model. Unlike conventional methods that rely on adding random noise to generate contrastive views, our approach harnesses the strong generative capabilities of the DM to construct more informative and meaningful contrastive views, thereby enhancing the effectiveness of the learning process.

III Preliminary

In this section, we present some preliminary knowledge related to the paper.

III-A Graph Neural Networks (GNNs)

GNNs [42] are used to learn relationships from graph-structured data, where nodes denote entities and edges signify the relationships between these entities. The abstract structure of a GNN can be understood in terms of message passing, aggregation, and update functions.

The message passing from neighbor $j$ to node $k$ is represented by Equation 1. Each node $k$ collects information from its neighbors. The formula is as follows:

m_{k}^{(l)}=\sum_{j\in\mathcal{N}(k)}M^{(l)}\left(h_{j}^{(l-1)},h_{k}^{(l-1)},e_{jk}\right), \quad (1)

here, $m_{k}^{(l)}$ is the aggregated message for node $k$ at layer $l$, $\mathcal{N}(k)$ denotes the set of neighbors of $k$, $h_{j}^{(l-1)}$ and $h_{k}^{(l-1)}$ are the feature vectors of nodes $j$ and $k$ from the previous layer, and $e_{jk}$ denotes the edge features. The update step combines these messages to form a new representation for the node: $h_{k}^{(l)}=U^{(l)}(h_{k}^{(l-1)},m_{k}^{(l)})$, where $U^{(l)}$ is an update function that can take various forms, such as summation, mean, or a learnable neural network. The foundational structure of GNNs centers on the interplay of message passing, aggregation, and updating mechanisms, facilitating effective learning from complex relational data. This framework is highly versatile and can be adapted to address a wide range of tasks.
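To make the message-passing abstraction concrete, the following is a minimal sketch in PyTorch of one such layer with sum aggregation and a learnable update function; the class name, the adjacency-list input format, and the omission of edge features are illustrative assumptions rather than the design of any specific GNN discussed here.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    # One message-passing layer: build messages from (h_j, h_k), sum them, then update h_k.
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)     # M^(l): message from neighbor and center features
        self.update = nn.Linear(2 * dim, dim)  # U^(l): combine h_k^(l-1) with the aggregated message

    def forward(self, h, neighbors):
        # h: [num_nodes, dim] node features from the previous layer
        # neighbors: dict mapping node k -> list of neighbor indices j (edge features omitted)
        new_h = torch.zeros_like(h)
        for k, neigh in neighbors.items():
            if not neigh:
                new_h[k] = h[k]
                continue
            h_j = h[neigh]                      # neighbor features h_j^(l-1)
            h_k = h[k].expand_as(h_j)           # broadcast the center node's features
            m_k = self.msg(torch.cat([h_j, h_k], dim=-1)).sum(dim=0)   # Eq. (1) with sum aggregation
            new_h[k] = self.update(torch.cat([h[k], m_k], dim=-1))     # h_k^(l) = U(h_k^(l-1), m_k^(l))
        return new_h

# Toy usage on a 3-node graph:
layer = MessagePassingLayer(dim=8)
out = layer(torch.randn(3, 8), {0: [1, 2], 1: [0], 2: [0]})
```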

III-B Multimodal Fusion

Multimodal fusion [43] is a powerful approach that integrates information from various modalities or sources to enhance the performance of machine learning models. By utilizing multiple types of data, multimodal fusion allows the system to offer a more holistic perspective of the underlying information. The process of multimodal fusion is divided into three main phases: feature extraction, alignment, and combination. During the feature extraction stage, pertinent features are identified and retrieved from each modality and converted into a unified representation. Alignment involves matching or correlating features across modalities to ensure that the information is contextually relevant. Finally, in the combination phase, the aligned features are merged using techniques such as early fusion, late fusion, or hybrid methods to generate a final representation that captures the strengths of each modality.

III-C Graph Contrastive Learning

Graph contrastive learning is a self-supervised learning algorithm for graph data that optimizes graph embeddings by constructing pairs of positive and negative samples. The core idea is to maximize the similarity between positive samples and the difference between negative samples to enhance the learning of graph representations. The most important part of graph contrastive learning is the graph augmentation strategy, which generates different views by randomly transforming the original graph (e.g., node dropping, edge perturbation, attribute masking, and subgraph sampling) to increase the robustness and generalization ability of the model.
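As a point of reference for the two augmentation styles in Figure 1, the following is a minimal sketch of edge dropout and noise-based feature perturbation; both functions are illustrative and are not the augmentation used by DiffCL, which replaces them with diffusion-generated views.

```python
import torch
import torch.nn.functional as F

def edge_dropout(edge_index, drop_rate=0.2):
    # edge_index: [2, num_edges]; randomly keep a subset of edges (Figure 1a)
    keep = torch.rand(edge_index.size(1)) >= drop_rate
    return edge_index[:, keep]

def add_feature_noise(embeddings, eps=0.1):
    # Perturb graph-encoder outputs with bounded uniform noise (Figure 1b)
    noise = torch.rand_like(embeddings) - 0.5
    return embeddings + eps * F.normalize(noise, dim=-1)
```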

IV Methodology

In this section, we provide an in-depth exploration of the DiffCL framework, covering the key components of DiffCL and a mathematical description of each. The detailed workflow of DiffCL is illustrated in Figure 2.

Figure 2: The overall architecture of our DiffCL framework. DiffCL consists of three modules. (a) The Graph Encoder is used to extract higher-order user preference cues and collaborative signals. $G=\left\{G_{m}\mid G_{v},G_{t},G_{id}\right\}$ represents three different user-item graphs. $E_{m}$ are the feature embeddings of different modalities obtained by GCNs, $m\in\left\{v,t,id\right\}$. (b) Diffusion Graph Contrastive Learning introduces a diffusion model to construct contrastive views for contrastive learning. $E^{1}_{m}$ and $E^{2}_{m}$ are feature embeddings obtained by the diffusion model, $m\in\left\{v,t\right\}$. (c) Enhancement and Alignment implements item semantic enhancement and cross-modal semantic alignment. The ID embedding includes $E^{u}_{id}$ and $E^{i}_{id}$.

IV-A Problem Formulation

In the recommendation process, relying solely on user-item interaction data may lead to insufficient information, failing to fully reflect users’ interests and preferences. Multimodal recommendation systems can capture users’ needs more comprehensively by integrating data from different modalities, thus providing more personalized and accurate recommendations.

The process of a multimodal recommender system is as follows. Given $U=\left\{u\right\}$ and $I=\left\{i\right\}$ as the sets of users and items, respectively, their total counts are $|U|$ and $|I|$. We process the original user-item ID interactions to obtain the ID embedding $E_{id}$, and then the visual modal features $E_{v}$ and textual modal features $E_{t}$ are obtained through different encoders. After a series of enhancement and alignment operations, we obtain the final item embedding $e_{i}$ and user embedding $e_{u}$. The inner product of the two is calculated to obtain the predicted score $\hat{y}_{u,i}$ of user $u$ for the target item $i$, with the following formula:

\hat{y}_{u,i}=e_{u}\cdot e_{i}^{T}. \quad (2)

The multimodal recommender system calculates the user’s predicted scores for different items and orders them from highest to lowest score, taking the top-K items as the final recommended list.
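The scoring and ranking step of Eq. (2) can be sketched as follows; the function name and the masking of already-interacted items are illustrative assumptions.

```python
import torch

def recommend_topk(user_emb, item_emb, interacted, k=20):
    # user_emb: [dim] final user embedding e_u; item_emb: [num_items, dim] final item embeddings e_i
    scores = item_emb @ user_emb                # \hat{y}_{u,i} = e_u · e_i^T for every item
    scores[list(interacted)] = float('-inf')    # exclude items the user has already interacted with
    return torch.topk(scores, k).indices        # indices of the top-K recommended items

# Usage: recommend_topk(torch.randn(64), torch.randn(1000, 64), interacted={3, 17})
```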

IV-B Graph Encoder

Some research on recommender systems based on GCNs has demonstrated that constructing item-user heterogeneous graphs and processing them with graph neural networks can better capture the user’s preference cues, which in turn improves the recommendation accuracy of the whole system. Inspired by these works, we propose a graph encoder component that consists of three different GCNs to capture higher-order features of different modalities.

First, we use ResNet50 [44] and BERT [47] to extract the image information and text information from the raw data, respectively, and encode these modalities into features. After that, the graph encoder component is utilized to capture the higher-order features of user-item interactions, image features, and text features, respectively.

Anchored in the interaction information in the raw data and the multimodal information of the items, we construct three user-item graphs $G=\left\{G_{m}\mid G_{v},G_{t},G_{id}\right\}$. We construct the interaction matrix $J$ to represent the interaction information, such that $j_{ui}=1$ if an interaction exists between user $u$ and item $i$, and $j_{ui}=0$ otherwise. $G_{m}=\left\{n,e\right\}$ denotes a user-item graph, $m\in\left\{v,t,id\right\}$, where $n$ represents the set of nodes and $e$ represents the set of edges in the graph. The features after the $l$-th layer of the convolutional network are denoted as $E_{m}^{(l)}$ and the final embedded features are denoted as $E_{m}$; they are mathematically expressed as follows:

E_{m}^{(l)}=\sum_{i\in N_{u}}\frac{1}{\sqrt{\left|N_{u}\right|}\sqrt{\left|N_{i}\right|}}E_{m}^{(l-1)}, \quad (3)

where $N_{i}$ denotes the single-hop neighbors of $i$ in graph $G_{m}$, and $N_{u}$ the single-hop neighbors of $u$ in graph $G_{m}$.

E_{m}=\sum_{l=0}^{L}E^{(l)}_{m}, \quad (4)

where $L$ is the number of graph convolution layers and $E_{m}^{(0)}$ is the original feature after initial feature extraction.
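A minimal sketch of the propagation in Eqs. (3)-(4) is given below, assuming the user-item graph $G_{m}$ is stored as a sparse symmetric-normalized adjacency matrix with users and items stacked into one node set; the helper name and storage format are assumptions.

```python
import torch

def graph_encode(adj_norm, emb0, num_layers=3):
    # adj_norm: sparse [N, N] adjacency with entries 1 / (sqrt(|N_u|) * sqrt(|N_i|))
    # emb0:     [N, dim] initial embeddings E_m^(0) (user rows stacked above item rows)
    emb = emb0
    layer_outputs = [emb0]
    for _ in range(num_layers):
        emb = torch.sparse.mm(adj_norm, emb)          # Eq. (3): one propagation layer
        layer_outputs.append(emb)
    return torch.stack(layer_outputs).sum(dim=0)      # Eq. (4): E_m = sum over layers of E_m^(l)
```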

IV-C Diffusion Graph Contrastive Learning

Over the last few years, diffusion models (DMs) have excelled in the field of data generation, as they can generate data that is highly consistent with the original data. Motivated by this, we propose DiffCL, a new multimodal recommendation method based on the DM, in which diffusion graph contrastive learning is the most important component. We introduce the DM into the graph contrastive learning phase and use it to generate two similar but distinct contrastive views to enhance the representations of items and users. Specifically, we gradually add Gaussian noise to the original user-item graph to destroy the original interaction information, and then restore the original interactions by predicting the original state of the data through a probabilistic diffusion process.

IV-C1 Graph Diffusion Forward Process

The higher-order feature captured by the Graph Encoder is $E_{m}$, which is represented in the following mathematical form:

E_{m}=\begin{bmatrix}e_{m}^{u}&e_{m}^{i}\end{bmatrix}, \quad (5)

where $m\in\left\{v,t,id\right\}$, and $e_{m}^{u}$ and $e_{m}^{i}$ denote the user embedding and item embedding in a particular modality, respectively.

Our graph diffusion process involves only the visual and textual modalities; here we take the visual modality as an example to describe the diffusion process mathematically. We consider the visual modality embedding $E_{v}=\begin{bmatrix}e_{v}^{u}&e_{v}^{i}\end{bmatrix}$ and initialize the diffusion process with $x_{0}=\begin{bmatrix}e_{v}^{u}&e_{v}^{i}\end{bmatrix}$. The forward process is a Markov chain that constructs $\boldsymbol{x}_{1:T}$ by gradually adding Gaussian noise at each time step $t$. Specifically, the process from $x_{t-1}$ to $x_{t}$ is expressed as follows:

q\left(\boldsymbol{x}_{t}\mid\boldsymbol{x}_{t-1}\right)=\mathcal{N}\left(\boldsymbol{x}_{t};\sqrt{1-\beta_{t}}\boldsymbol{x}_{t-1},\beta_{t}\boldsymbol{I}\right), \quad (6)

where $\mathcal{N}$ represents a Gaussian distribution, and $\beta_{t}$ is a noise scale that regulates the increase of Gaussian noise at every time step $t$. As $t\to\infty$, $x_{t}$ converges to a standard Gaussian distribution. Since independent Gaussian noise distributions are additive, it follows that we can obtain $x_{t}$ directly from $x_{0}$. This process is expressed by the formula:

q\left(\boldsymbol{x}_{t}\mid\boldsymbol{x}_{0}\right)=\mathcal{N}\left(\boldsymbol{x}_{t};\sqrt{\bar{\gamma}_{t}}\boldsymbol{x}_{0},\left(1-\bar{\gamma}_{t}\right)\boldsymbol{I}\right). \quad (7)

We utilize two parameters, $\gamma_{t}$ and $\bar{\gamma}_{t}$, to control the total amount of noise added during the process from $x_{0}$ to $x_{t}$. Their mathematical representation is as follows:

\gamma_{t}=1-\beta_{t}, \quad (8)
\bar{\gamma}_{t}=\prod_{t^{\prime}=1}^{t}\gamma_{t^{\prime}}. \quad (9)

Then $x_{t}$ can be re-parameterized as:

x_{t}=\sqrt{\bar{\gamma}_{t}}x_{0}+\sqrt{1-\bar{\gamma}_{t}}\varepsilon, \quad (10)

where $\varepsilon\sim\mathcal{N}\left(0,\boldsymbol{I}\right)$. We employ a linear noise scheduler for $1-\bar{\gamma}_{t}$ to control the amount of noise in $x_{0:T}$:

1-\bar{\gamma}_{t}=s\cdot\left[\gamma_{\min}+\frac{t-1}{T-1}\left(\gamma_{\max}-\gamma_{\min}\right)\right], \quad (11)

where $t\in\left\{1,2,\cdots,T\right\}$, $s$ is a noise scale with $s\in\left[0,1\right]$, and $\gamma_{\min}$ and $\gamma_{\max}$ represent the lower and upper limits of the added noise, respectively.
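The forward process can be sketched as below, using the linear schedule of Eq. (11) and the closed-form sampling of Eq. (10); the default values for T, s, gamma_min, and gamma_max are illustrative placeholders rather than the settings used in the paper.

```python
import torch

def linear_noise_schedule(T=50, s=0.1, gamma_min=1e-4, gamma_max=2e-2):
    t = torch.arange(1, T + 1, dtype=torch.float32)
    one_minus_gamma_bar = s * (gamma_min + (t - 1) / (T - 1) * (gamma_max - gamma_min))  # Eq. (11)
    return 1.0 - one_minus_gamma_bar            # \bar{gamma}_t for t = 1..T

def q_sample(x0, t, gamma_bar):
    # x0: [N, dim] clean embeddings (e.g., x_0 = E_v); t: 1-indexed diffusion step
    eps = torch.randn_like(x0)
    gb = gamma_bar[t - 1]
    x_t = torch.sqrt(gb) * x0 + torch.sqrt(1.0 - gb) * eps    # Eq. (10)
    return x_t, eps

# Usage:
gamma_bar = linear_noise_schedule()
x_t, eps = q_sample(torch.randn(8, 64), t=10, gamma_bar=gamma_bar)
```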

IV-C2 Graph Diffusion Reverse Process

The reverse process's objective is to eliminate the noise introduced during the process from $x_{0}$ to $x_{t}$ and to recover $x_{0}$. This process generates a pseudo-feature similar to the original visual representation. The reverse transformation begins at $x_{t}$ and gradually recovers $x_{0}$ through denoising steps. The mathematical expression for the reverse process is as follows:

p_{\theta}\left(x_{t-1}\mid x_{t}\right)=\mathcal{N}\left(x_{t-1};\mu_{\theta}\left(x_{t},t\right),\Sigma_{\theta}\left(x_{t},t\right)\right), \quad (12)

where $\mu_{\theta}\left(x_{t},t\right)$ and $\Sigma_{\theta}\left(x_{t},t\right)$ denote the predicted mean and variance of the Gaussian distribution of the next state, respectively, which we obtain from two neural networks with learnable parameters.
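One reverse step can be sketched as follows, assuming a small MLP predicts the mean of Eq. (12) and, for simplicity, the variance is held fixed; the network architecture and step embedding are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    def __init__(self, dim, t_dim=16, max_steps=1000):
        super().__init__()
        self.t_embed = nn.Embedding(max_steps, t_dim)   # learned embedding of the diffusion step
        self.net = nn.Sequential(nn.Linear(dim + t_dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x_t, t):
        t_emb = self.t_embed(t).expand(x_t.size(0), -1)
        return self.net(torch.cat([x_t, t_emb], dim=-1))   # predicted mean mu_theta(x_t, t)

def p_sample_step(model, x_t, t, sigma=0.01):
    # Draw x_{t-1} ~ N(mu_theta(x_t, t), sigma^2 I); no noise is added at the final step
    mu = model(x_t, torch.tensor([t]))
    noise = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
    return mu + sigma * noise
```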

IV-C3 Graph Contrastive Learning

Our framework employs a widely used enhancement strategy that utilizes contrastive learning to enhance modality-specific feature representations. Specifically, we utilize the diffusion model to generate contrastive views. Taking the visual modality as an example, the input is the representation $E_{v}$ obtained after the graph convolution operation. Letting $x_{0}=E_{v}$, we obtain a representation $E_{v}^{1}$ similar to $E_{v}$ through the diffusion model; repeating the operation yields another representation $E_{v}^{2}$. We then perform graph contrastive learning based on the InfoNCE [45] loss function with the following formulas:

\mathcal{L}_{u}^{v}=\sum_{u_{1}\in U}-\log\frac{\exp\left(s\left(e_{u_{1},v}^{1},e_{u_{1},v}^{2}\right)/\tau\right)}{\sum_{u_{2}\in U}\exp\left(s\left(e_{u_{2},v}^{1},e_{u_{2},v}^{2}\right)/\tau\right)}, \quad (13)
\mathcal{L}_{i}^{v}=\sum_{i_{1}\in I}-\log\frac{\exp\left(s\left(e_{i_{1},v}^{1},e_{i_{1},v}^{2}\right)/\tau\right)}{\sum_{i_{2}\in I}\exp\left(s\left(e_{i_{2},v}^{1},e_{i_{2},v}^{2}\right)/\tau\right)}. \quad (14)
\mathcal{L}_{cl}^{v}=\mathcal{L}_{u}^{v}+\mathcal{L}_{i}^{v}, \quad (15)

here, $s\left(\cdot\right)$ represents the cosine similarity function, and $\tau$ is a temperature hyperparameter that controls the rate of model convergence.

The same reasoning leads to:

\mathcal{L}_{cl}^{t}=\mathcal{L}_{u}^{t}+\mathcal{L}_{i}^{t}. \quad (16)

The final graph contrastive learning loss is as follows:

\mathcal{L}_{cl}=\lambda_{\mathrm{cl}}(\mathcal{L}_{cl}^{v}+\mathcal{L}_{cl}^{t}), \quad (17)

where $\lambda_{\mathrm{cl}}$ is a hyperparameter used to control the graph contrastive learning loss.
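A minimal sketch of the InfoNCE objective in Eqs. (13)-(14) for one modality is shown below, assuming the two diffusion-generated views are row-aligned (row k in both matrices corresponds to the same user or item); the cross-entropy form averages rather than sums over the batch, which differs from Eq. (13) only by a constant factor.

```python
import torch
import torch.nn.functional as F

def infonce(view1, view2, tau=0.4):
    # view1, view2: [N, dim] contrastive views (e.g., E_v^1 and E_v^2) for users or items
    z1 = F.normalize(view1, dim=-1)
    z2 = F.normalize(view2, dim=-1)
    logits = z1 @ z2.t() / tau                  # cosine similarities s(., .) scaled by temperature tau
    labels = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, labels)      # mean of -log softmax terms over the batch

# L_cl^v = lambda_cl * (infonce(user_view1, user_view2) + infonce(item_view1, item_view2))  # Eqs. (15), (17)
```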

IV-D Multimodal Feature Enhancement and Alignment

IV-D1 Multimodal Feature Enhancement

To extract the semantic connections among various items, we build the Item-Item graph (I-I graph) using the KNN algorithm. Specifically, we calculate the similarity scores $S_{i,j}^{m}$ between different item pairs $\left(i,j\right)$ separately based on different modal features to obtain I-I graphs for specific modalities. The similarity score is calculated by the following equation:

S^{m}_{i,j}=\frac{\left(f_{i}^{m}\right)^{\top}f_{j}^{m}}{\left\|f_{i}^{m}\right\|\left\|f_{j}^{m}\right\|}, \quad (18)

where $i$ and $j$ denote two distinct items, $f_{i}^{m}$ and $f_{j}^{m}$ denote the original features of items $i$ and $j$ in the specific modality, respectively, and $m\in\left\{v,t\right\}$ represents the modality.

To reduce the effect of redundant data on model accuracy, we selectively discard the obtained similarity scores. We keep only the $K$ neighbors whose similarity scores are ranked in the top-$K$ and assign them a value of 1, which can be expressed as follows:

S^{m}_{i,j}=\begin{cases}1&\text{ if }S^{m}_{i,j}\in\text{ top-}K\left(S^{m}_{i,j}\right)\\ 0&\text{ otherwise }\end{cases}. \quad (19)

When $S^{m}_{i,j}$ is 1, it represents a potential connection between the item pair $\left(i,j\right)$. We fix $K=10$. We normalize $S^{m}$ using the following formula:

\widehat{S}^{m}=\left(D^{m}\right)^{-\frac{1}{2}}S^{m}\left(D^{m}\right)^{-\frac{1}{2}}, \quad (20)

where $D^{m}$ is the diagonal degree matrix of $S^{m}$ and $D^{m}\in\mathbb{R}^{N\times N}$. This produces a symmetric, normalized matrix that eliminates the influence of node degrees on the results, making subsequent aggregation operations more stable. Each diagonal entry $D^{m}_{ii}$ is calculated as follows:

D^{m}_{ii}=\sum_{j}S^{m}_{i,j}. \quad (21)

Then, we aggregate multi-layer neighbor information based on the obtained modality-aware adjacency matrix:

A_{m}^{(l)}=\sum_{j\in N_{i}}\widehat{S}_{i,j}^{m}A_{j_{m}}^{(l-1)}, \quad (22)

where $j$ is a first-order neighbor of $i$, $A_{j_{m}}$ denotes the embedding of item $j$ in modality $m$, and $m\in\left\{v,t\right\}$.

To better utilize the various modal information to mine user preferences, we enhance the final embedding $E_{m}$ using the embeddings $A_{m}^{(l)}$ of the modality-specific I-I graph, expressed by the formula:

E_{m}=\begin{bmatrix}e_{m}^{u}&e_{m}^{i}+A_{m}^{(l)}\end{bmatrix}. \quad (23)
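A minimal sketch of Eqs. (18)-(23), building the modality-aware I-I graph with cosine similarity, top-K sparsification, symmetric normalization, and one round of neighbor aggregation to enhance the item embeddings, is given below; the dense matrices and helper names are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def build_ii_graph(features, k=10):
    # features: [num_items, d] raw modality features f_i^m
    z = F.normalize(features, dim=-1)
    sim = z @ z.t()                                              # Eq. (18): cosine similarity
    topk = torch.topk(sim, k, dim=-1).indices
    S = torch.zeros_like(sim).scatter_(1, topk, 1.0)             # Eq. (19): keep top-K neighbors
    deg = S.sum(dim=-1)                                          # Eq. (21): degree of each item
    d_inv_sqrt = torch.diag(deg.clamp(min=1e-12).pow(-0.5))
    return d_inv_sqrt @ S @ d_inv_sqrt                           # Eq. (20): symmetric normalization

def enhance_items(item_emb, S_hat, layers=1):
    A = item_emb
    for _ in range(layers):
        A = S_hat @ A                                            # Eq. (22): aggregate neighbors
    return item_emb + A                                          # Eq. (23): e_m^i + A_m^(l)
```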

IV-D2 Multimodal Feature Fusion

Different modalities carry distinct modal information, which is both relevant and complementary. To more comprehensively capture user behavior preferences, we implement feature-level fusion for visual and textual features, resulting in the following fused feature representation:

E_{vt}=\mu\times E_{v}+(1-\mu)\times E_{t}, \quad (24)

where $E_{v}$ and $E_{t}$ denote the visual features and textual features, respectively, and $\mu$ is a trainable parameter with an initial value of 0.5.

In the phase of feature fusion, we do not fuse the ID modality with other multimodal features. This is because, in multimodal recommendation systems, the ID modality possesses uniqueness and stability. Therefore, we only use it to align multimodal features and calculate the final predicted scores.
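The fusion of Eq. (24) can be sketched with a single trainable weight initialized to 0.5; clamping the weight to [0, 1] is an added assumption to keep the combination convex.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.tensor(0.5))   # trainable fusion weight, initialized to 0.5

    def forward(self, E_v, E_t):
        mu = self.mu.clamp(0.0, 1.0)
        return mu * E_v + (1.0 - mu) * E_t          # Eq. (24): E_vt = mu * E_v + (1 - mu) * E_t
```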

IV-D3 Multimodal Semantic Alignment

In multimodal recommendation systems, the feature distributions of different modalities are generally inconsistent, and the fusion process often retains a lot of noise information. Moreover, some existing modal semantic alignment methods disrupt historical interaction information, which adversely affects the final predictions. Therefore, we propose a cross-modal alignment method that uses stable ID features as guidance, effectively leveraging the ID embedding to better align the semantic information of different modalities and ensure semantic consistency among the information from various modalities. Inspired by PPMDR [46], we parameterize the final ID modality feature $E_{id}$, the visual modality feature $E_{v}$, and the textual modality feature $E_{t}$ using Gaussian distributions. Then, we calculate the distances between the ID modality feature distribution and the visual and textual modality feature distributions, respectively, as losses. The formulas are as follows:

E_{id}\sim N\left(\mu_{id},\sigma_{id}^{2}\right), \quad (25)
\left\{\begin{aligned} E_{v}&\sim N\left(\mu_{v},\sigma_{v}^{2}\right),\\ E_{t}&\sim N\left(\mu_{t},\sigma_{t}^{2}\right),\end{aligned}\right. \quad (26)
\left\{\begin{aligned} \mathcal{L}_{align_{1}}&=|\mu_{id}-\mu_{v}|+|\sigma_{id}-\sigma_{v}|,\\ \mathcal{L}_{align_{2}}&=|\mu_{id}-\mu_{t}|+|\sigma_{id}-\sigma_{t}|.\end{aligned}\right. \quad (27)

The final alignment loss is calculated as follows:

\mathcal{L}_{align}=\lambda_{align}(\mathcal{L}_{align_{1}}+\mathcal{L}_{align_{2}}), \quad (28)

where $\lambda_{align}$ is a hyperparameter used to balance the alignment loss.
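A minimal sketch of the ID-guided alignment loss in Eqs. (25)-(28) is given below, assuming each modality's Gaussian is summarized by the per-dimension mean and standard deviation of its embedding matrix and that the absolute differences are averaged over dimensions; these reductions are assumptions.

```python
import torch

def gaussian_stats(E):
    # Summarize an embedding matrix by per-dimension mean and standard deviation (Eqs. (25)-(26))
    return E.mean(dim=0), E.std(dim=0)

def alignment_loss(E_id, E_v, E_t, lambda_align=0.4):
    mu_id, sd_id = gaussian_stats(E_id)
    mu_v, sd_v = gaussian_stats(E_v)
    mu_t, sd_t = gaussian_stats(E_t)
    l1 = (mu_id - mu_v).abs().mean() + (sd_id - sd_v).abs().mean()   # Eq. (27): ID vs. visual
    l2 = (mu_id - mu_t).abs().mean() + (sd_id - sd_t).abs().mean()   # Eq. (27): ID vs. textual
    return lambda_align * (l1 + l2)                                  # Eq. (28)
```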

IV-E Model Optimization

In recommendation tasks, Bayesian Personalized Ranking (BPR) is a commonly used optimization method. The basic idea of BPR is to increase the gap between the predicted scores of positive and negative samples, as it assumes users are more likely to prefer the items they have interacted with. We construct a triplet $(u,p,n)\in D$ for calculating the BPR loss, where $u$ represents a user, $p$ denotes an item that $u$ has interacted with, and $n$ denotes an item that $u$ has not interacted with. The formula is as follows:

\mathcal{L}_{BPR}=\sum_{(u,p,n)\in D}-\log(\sigma(y_{u,p}-y_{u,n})), \quad (29)

where $y_{u,p}$ represents the predicted score of $u$ for $p$, while $y_{u,n}$ denotes the predicted score for $n$ by the same user. Additionally, $\sigma$ refers to the sigmoid function.

$y_{u,p}$ and $y_{u,n}$ are calculated by the following equations:

y_{u,p}=\left(e_{vt}^{u}\right)^{T}\cdot e_{vt}^{p}+\left(e_{id}^{u}\right)^{T}\cdot e_{id}^{p}, \quad (30)
y_{u,n}=\left(e_{vt}^{u}\right)^{T}\cdot e_{vt}^{n}+\left(e_{id}^{u}\right)^{T}\cdot e_{id}^{n}, \quad (31)

where $e_{vt}^{u}$ and $e_{vt}^{p}$ signify the representations of $u$ and $p$ following modal fusion, respectively, while $e_{id}^{u}$ and $e_{id}^{p}$ refer to the ID embeddings of $u$ and $p$.

Finally, we combine BPR loss, diffusion graph contrastive learning loss, and cross-modal alignment loss to calculate the total loss, as shown in the following formula:

\mathcal{L}=\mathcal{L}_{\mathrm{cl}}+\mathcal{L}_{align}+\mathcal{L}_{\mathrm{BPR}}+\mathcal{L}_{\mathrm{E}}, \quad (32)

where $\mathcal{L}_{\mathrm{E}}$ is the regularization loss, calculated as follows:

\mathcal{L}_{E}=\lambda_{E}\left(\left\|E_{v}\right\|_{2}^{2}+\left\|E_{t}\right\|_{2}^{2}\right), \quad (33)

where $\lambda_{E}$ is a hyperparameter that regulates the impact of the $L_{2}$ regularization.
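The optimization objective of Eqs. (29)-(33) can be sketched as follows, assuming batched index tensors for the (u, p, n) triplets and that each loss term already carries its own weight as defined above; the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def bpr_loss(e_vt_u, e_vt_i, e_id_u, e_id_i, users, pos, neg):
    # users, pos, neg: index tensors for the sampled triplets (u, p, n)
    y_up = (e_vt_u[users] * e_vt_i[pos]).sum(-1) + (e_id_u[users] * e_id_i[pos]).sum(-1)  # Eq. (30)
    y_un = (e_vt_u[users] * e_vt_i[neg]).sum(-1) + (e_id_u[users] * e_id_i[neg]).sum(-1)  # Eq. (31)
    return -F.logsigmoid(y_up - y_un).sum()                                               # Eq. (29)

def total_loss(l_bpr, l_cl, l_align, E_v, E_t, lambda_E=1e-4):
    l_reg = lambda_E * (E_v.pow(2).sum() + E_t.pow(2).sum())   # Eq. (33): L2 regularization
    return l_cl + l_align + l_bpr + l_reg                      # Eq. (32): total training loss
```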

TABLE I: Specific Data Distribution Of The Experimental Dataset
Dataset #User #Item #Interaction Sparsity
Baby 19,445 7,050 160,792 99.88%
Sports 35,598 18,357 296,337 99.96%
Video 24,303 10,672 231,780 99.91%

V Experiments

To fairly appraise the performance of DiffCL, we conduct a large number of experiments. First, we compare DiffCL with other state-of-the-art multimodal recommendation methods under the same dataset processing. In addition, we construct different variants of DiffCL to validate the effect of each component and to confirm that every component effectively improves the final recommendation accuracy. Finally, we set different hyperparameters for the model to search for the optimal hyperparameter settings. The experiments are designed to answer the following three questions:

  • RQ1: How does the DiffCL’s performance on different datasets compare to general RSs and MRSs?

  • RQ2: How do the different components that make up DiffCL affect its overall recommendation performance?

  • RQ3: How does setting different hyperparameters change the performance of the DiffCL?

V-A Experimental Settings

In this paper, the dataset used is the Amazon review dataset, which is extensively used in MRSs. The initial dataset contains information about users' interactions with items, textual descriptions and images of the items, the text of comments about each item from interacting users, and other information such as item prices. For all comparative models, we process the dataset in the same way. Specifically, we use 5-core filtering to filter the raw data and optimize the data quality. Prior to model training, we pre-extract items' visual and textual features for the subsequent recommendation task by utilizing the pre-trained ResNet50 [44] and BERT [47], with initial dimensions of 4096 and 384 for the visual and textual features, respectively. The datasets are as follows: (1) Baby, (2) Video, (3) Sports and Outdoors (denoted as Sports). Table I shows the details of the three datasets. The training, validation, and test sets are divided in the ratio of 8:1:1.

V-A1 Baselines

In this subsection, we demonstrate the superiority of DiffCL by comparing it with current state-of-the-art recommendation methods. The comparative models used for the experiments include general recommendation models as well as multimodal recommendation models. Our purpose is to use comparative experiments to demonstrate that DiffCL has advantages over other models in the recommendation task. In addition, experiments on real-world datasets provide sufficient support for its application in the real world.

The general recommendation models and multimodal recommendation models used for the comparative experiments are as follows:

(a) general recommendation methods:

BPR [48]: This is a recommendation algorithm for implicit feedback that uses a Bayesian approach to model user preferences for items and randomly selects negative samples during training to improve the model's generalization.

LightGCN: It removes some unnecessary components from the graph convolutional network and improves the training efficiency of the model.

(b) multimodal recommendation methods:

VBPR: This method is based on an extension of BPR that introduces visual features of items to improve RS’s performance for multimodal recommendation scenarios that include visual data.

MMGCN [16]: This method utilizes a graph structure to capture complicated relationships between users and items. It also designs specialized mechanisms to integrate information from various modalities, ensuring that information from different modalities can effectively complement each other.

DualGNN [18] : This method models the relationships between users and items simultaneously using GNN, capturing multi-level relational information to improve the accuracy and personalization of recommendations.

SLMRec [49] : This method utilizes self-supervised learning by designing tasks to generate labels and adopts a contrastive learning strategy to optimize the model by constructing positive and negative sample pairs.

BM3 [24] : This method simplifies the self-supervision task in multimodal recommender systems.

MGCN [17] : This method is based on the GCN, which utilizes item information to purify modal features. A behavior-aware fuser is also designed that can adaptively learn different modal features.

DiffMM [41]: This method is based on the diffusion model, which enhances the user's representation by combining cross-modal contrastive learning and modality-aware graph diffusion models for more accurate recommendation results.

Freedom [50] : This method is based on freezing the U-I graph and the I-I graph, and a degree-sensitive edge pruning method is designed to delete possible noisy edges.

V-A2 Evaluation protocols

In this subsection, we introduce the evaluation metrics used in the experiments. The evaluation metrics employed in this experiment are Recall@$K$ (R@$K$) and NDCG@$K$ (N@$K$). Recall represents the recall rate, while NDCG stands for Normalized Discounted Cumulative Gain. We set $K=\left\{10,20\right\}$, indicating the number of items in the final recommendation list.

V-A3 Details

This subsection provides a detailed description of the hyperparameter settings of DiffCL on different datasets. To ensure the fairness of the evaluation, we use MMRec [51] to implement all the comparative baselines and also execute a grid search over the hyperparameters of these models to determine their optimal settings. In addition, the Adam optimizer is employed to optimize DiffCL and the other models.

We set the learning rate of DiffCL to 0.001, the dropout rate to 0.5, and $\tau$ in graph contrastive learning to 0.4. Besides, the values of $\lambda_{\mathrm{cl}}$, $\lambda_{align}$, and $\lambda_{E}$ differ across datasets. Specifically, we set three different sets of loss weights for the Baby, Video, and Sports datasets, respectively: $\left\{0.1,0.4,0.7\right\}$, $\left\{0.01,1.0,1.0\right\}$, and $\left\{0.7,0.4,0.9\right\}$.

TABLE II: The Comparison Of Different Baselines And The DiffCL Performance On Three Datasets
Datasets Baby Video Sports
Model R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20
BPR 0.0268 0.0441 0.0144 0.0188 0.0722 0.1106 0.0386 0.0486 0.0306 0.0465 0.0169 0.0210
LightGCN 0.0402 0.0644 0.0274 0.0375 0.0873 0.1351 0.0475 0.0599 0.0423 0.0642 0.0229 0.0285
DiffCL 0.0641 0.0987 0.0343 0.0433 0.1421 0.2069 0.0804 0.0974 0.0754 0.1095 0.0421 0.0509
Improv. 32.50% 42.70% 14.23% 5.60% 59.45% 50.55% 62.11% 56.43% 64.78% 60.59% 65.06% 64.91%
VBPR 0.0397 0.0665 0.0210 0.0279 0.1198 0.1796 0.0647 0.0802 0.0509 0.0765 0.0274 0.0340
MMGCN 0.0397 0.0641 0.0206 0.0269 0.0843 0.1323 0.0440 0.0565 0.0380 0.0610 0.0206 0.0266
DualGNN 0.0518 0.0820 0.0273 0.0350 0.1200 0.1807 0.0656 0.0814 0.0583 0.0865 0.0320 0.0393
SLMRec 0.0529 0.0775 0.0290 0.0353 0.1187 0.1767 0.0642 0.0792 0.0663 0.0990 0.0365 0.0450
BM3 0.0539 0.0848 0.0283 0.0362 0.1166 0.1772 0.0636 0.0793 0.0632 0.0940 0.0346 0.0426
MGCN 0.0608 0.0927 0.0333 0.0415 0.1345 0.1997 0.0740 0.0910 0.0713 0.1060 0.0392 0.0489
Freedom 0.0622 0.0948 0.0330 0.0414 0.1226 0.1858 0.0662 0.0827 0.0722 0.1062 0.0394 0.0484
DiffMM 0.0619 0.0947 0.0326 0.0394 0.0683 0.1019 0.0374 0.0455
DiffCL 0.0641 0.0987 0.0343 0.0433 0.1421 0.2069 0.0804 0.0974 0.0754 0.1095 0.0421 0.0509
Improv. 3.05% 4.11% 3.93% 4.58% 5.65% 3.60% 8.64% 7.03% 4.43% 3.11% 6.85% 5.16%
TABLE III: The Performance Comparison Of Different Variants
Variants Metrics Datasets
Baby Video Sports
DiffCLbaseline R@20 0.0854 0.1907 0.0956
N@20 0.0364 0.0856 0.0428
DiffCLdiff R@20 0.0925 0.1978 0.1095
N@20 0.0396 0.0895 0.0509
DiffCLalign R@20 0.0907 0.1965 0.0960
N@20 0.0392 0.0893 0.0428
DiffCLh R@20 0.0986 0.1921 0.1099
N@20 0.0430 0.0872 0.0494
DiffCLdiff+align R@20 0.0911 0.1904 0.1093
N@20 0.0403 0.0866 0.0506
DiffCLdiff+h R@20 0.0986 0.1940 0.1102
N@20 0.0430 0.0885 0.0495
DiffCLalign+h R@20 0.0993 0.1968 0.1114
N@20 0.0432 0.0896 0.0496
DiffCL R@20 0.0987 0.2069 0.1095
N@20 0.0433 0.0974 0.0509

V-B Comparative Experiments (RQ1)

The experimental results are comprehensively summarized in Table II. This table presents the specific performance of DiffCL alongside all comparative models. Bold numbers represent the results for DiffCL, underlined numbers denote the results for the best comparative model, and Improv. denotes the percentage improvement of DiffCL over the best comparative model.

The results of this experiment show that most multimodal recommendation models perform significantly better than general recommendation models. This is due to the ability of multimodal recommendation methods to integrate multimodal information about items, thus enabling better capture of user preference cues. Compared with the general recommendation models, DiffCL performs best on the Sports dataset, improving by 64.78%, 60.59%, 65.06%, and 64.91% on the four evaluation metrics R@10, R@20, N@10, and N@20, respectively. In contrast, DiffCL shows a smaller boost on the Baby dataset, improving only 32.50%, 42.70%, 14.23%, and 5.60% under the same four metrics. This result suggests that on the Baby dataset, the multimodal information of the items has little effect on users' preferences, and users may be more concerned about other factors such as the quality and price of the items.

Figure 3: The performance of DiffCL under various $\lambda_{diff}$ settings.

Compared to other multimodal recommendation models, DiffCL consistently outperforms the leading models across all scenarios. According to the experimental results, DiffCL demonstrates superior performance on the Video dataset, with improvements of 5.65%, 3.60%, 8.64%, and 7.03% across the four evaluation metrics R@10, R@20, N@10, and N@20, respectively. These results validate the effectiveness of DiffCL in improving recommendation accuracy. Overall, the findings indicate that DiffCL enhances recommendation performance by constructing contrastive views for graph-based contrastive learning through the diffusion model, utilizing the Item-Item (I-I) graph for data augmentation, and employing ID modality-guided inter-modal alignment.

Figure 4: The performance of DiffCL under various $\lambda_{align}$ settings.

V-C Ablation Study (RQ2)

In this section, a large number of ablation experiments are performed to verify the effectiveness of the various components that make up DiffCL. Specifically, our ablation experiments include the following variants:

  • DiffCLbaseline: Remove all components.

  • DiffCLdiff: Retain only the diffusion graph contrastive learning task.

  • DiffCLalign: Retain only the ID modality-guided cross-modal semantic alignment task.

  • DiffCLh: Retain only the feature enhancement task.

  • DiffCLdiff+align: Retain both the diffusion graph contrastive learning task and the ID modality-guided cross-modal semantic alignment task.

  • DiffCLdiff+h: Retain both the diffusion graph contrastive learning task and the feature enhancement task.

  • DiffCLalign+h: Retain both the ID modality-guided cross-modal semantic alignment task and the feature enhancement task.

Table III shows the results of the final ablation experiments. Bold numbers denote the best results and underlined numbers denote sub-optimal results. According to the final results, each component of DiffCL is individually effective in improving the performance of the whole system. In addition, models consisting of any combination of two components also achieve better recommendation results than models with a single component.

Figure 5: The performance of DiffCL under various $\lambda_{E}$ settings.

V-D Hyperparameter Effects (RQ3)

In this section, we investigate the effect of varying loss weights on the performance of diffusion graph contrastive learning and ID-guided multimodal semantic alignment. Specifically, we conduct a series of hyperparameter experiments to analyze how the values of these weights impact model performance. The loss weights are searched over $\lambda_{diff},\lambda_{align},\lambda_{E}\in\{0.01,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0\}$. The experimental results across the three datasets are presented in Figures 3, 4, and 5. The results reveal that the values of the loss weights significantly influence the model's performance. Although the three parameters, $\lambda_{diff}$, $\lambda_{align}$, and $\lambda_{E}$, are searched over the same range, the optimal value for each weight varies depending on the dataset and task. Tuning these weights appropriately is crucial for achieving the best performance. The findings emphasize the importance of hyperparameter selection in optimizing the robustness and accuracy of DiffCL across different tasks.

VI Conclusion

In this paper, a diffusion-based contrastive learning (DiffCL) framework is proposed for multimodal recommendation. This method generates high-quality contrastive views by introducing a diffusion model during the graph contrastive learning stage, effectively addressing the issue of reduced recommendation accuracy caused by noise in self-supervised tasks. Furthermore, it employs stable ID embeddings to guide semantic alignment across different modalities, significantly enhancing the semantic consistency of items. To comprehensively evaluate the performance of DiffCL, we conduct a series of experiments on multiple real-world datasets and compare it with various recommendation models. The experimental results demonstrate the effectiveness of each component of DiffCL and its superiority in recommendation performance.

In future research, we aim to optimize the integration of diffusion models within recommender systems, extending their application beyond specific stages of the recommendation process. By leveraging the powerful generative capabilities of the diffusion model, we intend to perform data augmentation from multiple perspectives to achieve superior recommendation outcomes.

References

  • [1] D. Wang, X. Zhang, D. Yu, G. Xu, and S. Deng, “Came: Content- and context-aware music embedding for recommendation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 3, pp. 1375–1388, 2021.
  • [2] Y. Zheng, J. Qin, P. Wei, Z. Chen, and L. Lin, “Cipl: Counterfactual interactive policy learning to eliminate popularity bias for online recommendation,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2023.
  • [3] J. Chen, G. Zou, P. Zhou, W. Yirui, Z. Chen, H. Su, H. Wang, and Z. Gong, “Sparse enhanced network: An adversarial generation method for robust augmentation in sequential recommendation,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 8, pp. 8283–8291, 2024.
  • [4] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural collaborative filtering,” in Proceedings of the 26th international conference on world wide web, 2017, pp. 173–182.
  • [5] S. Rendle, “Factorization machines,” in 2010 IEEE International conference on data mining.   IEEE, 2010, pp. 995–1000.
  • [6] Q. Song, R. Dian, B. Sun, J. Xie, and S. Li, “Multi-scale conformer fusion network for multi-participant behavior analysis,” in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 9472–9476.
  • [7] Q. Song, B. Sun, and S. Li, “Multimodal sparse transformer network for audio-visual speech recognition,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 12, pp. 10 028–10 038, 2023.
  • [8] J. Han, L. Zheng, Y. Xu, B. Zhang, F. Zhuang, P. S. Yu, and W. Zuo, “Adaptive deep modeling of users and items using side information for recommendation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 3, pp. 737–748, 2020.
  • [9] X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua, “Neural graph collaborative filtering,” in Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval, 2019, pp. 165–174.
  • [10] Y. Lei, Z. Wang, W. Li, H. Pei, and Q. Dai, “Social attentive deep q-networks for recommender systems,” IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 5, pp. 2443–2457, 2020.
  • [11] J. Chen, Z. Gong, J. Mo, W. Wang, W. Wang, C. Wang, X. Dong, W. Liu, and K. Wu, “Self-training enhanced: Network embedding and overlapping community detection with adversarial learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 11, pp. 6737–6748, 2022.
  • [12] J. Chen, Z. Gong, W. Wang, W. Liu, and X. Dong, “Crl: Collaborative representation learning by coordinating topic modeling and network embeddings,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 8, pp. 3765–3777, 2022.
  • [13] P. Qiao, Z. Zhang, Z. Li, Y. Zhang, K. Bian, Y. Li, and G. Wang, “Tag: Joint triple-hierarchical attention and gcn for review-based social recommender system,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 10, pp. 9904–9919, 2022.
  • [14] J. Yu, H. Yin, J. Li, M. Gao, Z. Huang, and L. Cui, “Enhancing social recommendation with adversarial graph convolutional networks,” IEEE Transactions on knowledge and data engineering, vol. 34, no. 8, pp. 3727–3739, 2020.
  • [15] J. Yu, X. Xia, T. Chen, L. Cui, N. Q. V. Hung, and H. Yin, “Xsimgcl: Towards extremely simple graph contrastive learning for recommendation,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 2, pp. 913–926, 2023.
  • [16] Y. Wei, X. Wang, L. Nie, X. He, R. Hong, and T.-S. Chua, “Mmgcn: Multi-modal graph convolution network for personalized recommendation of micro-video,” in Proceedings of the 27th ACM international conference on multimedia, 2019, pp. 1437–1445.
  • [17] P. Yu, Z. Tan, G. Lu, and B.-K. Bao, “Multi-view graph convolutional network for multimedia recommendation,” in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 6576–6585.
  • [18] Q. Wang, Y. Wei, J. Yin, J. Wu, X. Song, and L. Nie, “Dualgnn: Dual graph neural network for multimedia recommendation,” IEEE Transactions on Multimedia, vol. 25, pp. 1074–1084, 2021.
  • [19] J. Yu, H. Yin, X. Xia, T. Chen, J. Li, and Z. Huang, “Self-supervised learning for recommender systems: A survey,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 1, pp. 335–355, 2023.
  • [20] Z. Lin, C. Tian, Y. Hou, and W. X. Zhao, “Improving graph collaborative filtering with neighborhood-enriched contrastive learning,” in Proceedings of the ACM web conference, 2022, pp. 2320–2329.
  • [21] L. Xia, C. Huang, Y. Xu, J. Zhao, D. Yin, and J. Huang, “Hypergraph contrastive collaborative filtering,” in Proceedings of the 45th International ACM SIGIR conference on research and development in information retrieval, 2022, pp. 70–79.
  • [22] J. Wu, X. Wang, F. Feng, X. He, L. Chen, J. Lian, and X. Xie, “Self-supervised graph learning for recommendation,” in Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval, 2021, pp. 726–735.
  • [23] J. Yu, H. Yin, J. Li, Q. Wang, N. Q. V. Hung, and X. Zhang, “Self-supervised multi-channel hypergraph convolutional network for social recommendation,” in Proceedings of the web conference 2021, 2021, pp. 413–424.
  • [24] X. Zhou, H. Zhou, Y. Liu, Z. Zeng, C. Miao, P. Wang, Y. You, and F. Jiang, “Bootstrap latent representations for multi-modal recommendation,” in Proceedings of the ACM Web Conference 2023, 2023, pp. 845–854.
  • [25] Z. Yi, X. Wang, I. Ounis, and C. Macdonald, “Multi-modal graph contrastive learning for micro-video recommendation,” in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 1807–1811.
  • [26] W. Wei, C. Huang, L. Xia, and C. Zhang, “Multi-modal self-supervised learning for recommendation,” in Proceedings of the ACM Web Conference, 2023, pp. 790–800.
  • [27] X. Geng, H. Zhang, J. Bian, and T.-S. Chua, “Learning image and user features for recommendation in social networks,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 4274–4282.
  • [28] J. Chen, H. Zhang, X. He, L. Nie, W. Liu, and T.-S. Chua, “Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention,” in Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, 2017, pp. 335–344.
  • [29] F. Zhang, N. J. Yuan, D. Lian, X. Xie, and W.-Y. Ma, “Collaborative knowledge base embedding for recommender systems,” in Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 2016, pp. 353–362.
  • [30] J. Zhang, Y. Zhu, Q. Liu, S. Wu, S. Wang, and L. Wang, “Mining latent structures for multimedia recommendation,” in Proceedings of the 29th ACM international conference on multimedia, 2021, pp. 3872–3880.
  • [31] Y. Fang, H. Wu, Y. Zhao, L. Zhang, S. Qin, and X. Wang, “Diversifying collaborative filtering via graph spreading network and selective sampling,” IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 10, pp. 13 860–13 873, 2024.
  • [32] B. Wu, X. He, Q. Zhang, M. Wang, and Y. Ye, “Gcrec: Graph-augmented capsule network for next-item recommendation,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 12, pp. 10 164–10 177, 2023.
  • [33] K. Liu, F. Xue, X. He, D. Guo, and R. Hong, “Joint multi-grained popularity-aware graph convolution collaborative filtering for recommendation,” IEEE Transactions on Computational Social Systems, vol. 10, no. 1, pp. 72–83, 2022.
  • [34] F.-A. Croitoru, V. Hondru, R. T. Ionescu, and M. Shah, “Diffusion models in vision: A survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 9, pp. 10 850–10 869, 2023.
  • [35] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” Advances in neural information processing systems, vol. 33, pp. 6840–6851, 2020.
  • [36] M. W. Lam, J. Wang, R. Huang, D. Su, and D. Yu, “Bilateral denoising diffusion models,” arXiv preprint arXiv:2108.11514, 2021.
  • [37] H. Ma, R. Xie, L. Meng, X. Chen, X. Zhang, L. Lin, and Z. Kang, “Plug-in diffusion model for sequential recommendation,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 8, 2024, pp. 8886–8894.
  • [38] Z. Yang, J. Wu, Z. Wang, X. Wang, Y. Yuan, and X. He, “Generate what you prefer: Reshaping sequential recommendation via guided diffusion,” Advances in Neural Information Processing Systems, vol. 36, 2024.
  • [39] W. Wang, Y. Xu, F. Feng, X. Lin, X. He, and T.-S. Chua, “Diffusion recommender model,” in Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2023, pp. 832–841.
  • [40] P. Yu, Z. Tan, G. Lu, and B.-K. Bao, “Ld4mrec: Simplifying and powering diffusion model for multimedia recommendation,” arXiv preprint arXiv:2309.15363, 2023.
  • [41] Y. Jiang, L. Xia, W. Wei, D. Luo, K. Lin, and C. Huang, “Diffmm: Multi-modal diffusion model for recommendation,” arXiv preprint arXiv:2406.11781, 2024.
  • [42] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” arXiv preprint arXiv:1609.02907, 2016.
  • [43] T. Baltrušaitis, C. Ahuja, and L.-P. Morency, “Multimodal machine learning: A survey and taxonomy,” IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 2, pp. 423–443, 2018.
  • [44] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
  • [45] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 9729–9738.
  • [46] W. Liu, C. Chen, X. Liao, M. Hu, J. Yin, Y. Tan, and L. Zheng, “Federated probabilistic preference distribution modelling with compactness co-clustering for privacy-preserving multi-domain recommendation.” in IJCAI, 2023, pp. 2206–2214.
  • [47] J. Devlin, “Bert: Pre-training of deep bidirectional transformers for language understanding,” arXiv preprint arXiv:1810.04805, 2018.
  • [48] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “Bpr: Bayesian personalized ranking from implicit feedback,” arXiv preprint arXiv:1205.2618, 2012.
  • [49] Z. Tao, X. Liu, Y. Xia, X. Wang, L. Yang, X. Huang, and T.-S. Chua, “Self-supervised learning for multimedia recommendation,” IEEE Transactions on Multimedia, 2022.
  • [50] X. Zhou and Z. Shen, “A tale of two graphs: Freezing and denoising graph structures for multimodal recommendation,” in Proceedings of the 31st ACM International Conference on Multimedia, 2023, pp. 935–943.
  • [51] H. Zhou, X. Zhou, Z. Zeng, L. Zhang, and Z. Shen, “A comprehensive survey on multimodal recommender systems: Taxonomy, evaluation, and future directions,” arXiv preprint arXiv:2302.04473, 2023.