
Opinion Optimization in Directed Social Networks

Haoxin Sun, Zhongzhi Zhang
Abstract

Shifting social opinions has far-reaching implications in various domains, such as public health campaigns, product marketing, and political campaigns. In this paper, we study a problem of opinion optimization based on the popular Friedkin-Johnsen (FJ) model for opinion dynamics in an unweighted directed social network with $n$ nodes and $m$ edges. In the FJ model, the internal opinion of every node lies in the closed interval $[0,1]$, with 0 and 1 being polar opposites of opinions about a certain issue. Concretely, we focus on the problem of selecting a small number of $k\ll n$ nodes and changing their internal opinions to 0, in order to minimize the average opinion at equilibrium. We first design an algorithm that returns the optimal solution to the problem in $O(n^{3})$ time. To speed up the computation, we further develop a fast algorithm by sampling spanning forests, whose time complexity is $O(ln)$, where $l$ is the number of samplings. Finally, we execute extensive experiments on various real directed networks, which show that the two algorithms are similarly effective, and both outperform several baseline strategies for node selection. Moreover, our fast algorithm is more efficient than the exact one and scales to massive graphs with more than twenty million nodes.

1 Introduction

As an important part of our lives, online social networks and social media have dramatically changed the way people propagate, exchange, and formulate opinions (Ledford 2020). Increasing evidence indicates that in contrast to traditional communications and interaction, in the current digital age online communications and discussions have significantly influenced human activity in an unprecedented way, leading to universality, criticality and complexity of information propagation (Notarmuzi et al. 2022). In order to understand mechanisms for opinion propagation and shaping, a variety of mathematical models for opinion dynamics have been established (Jia et al. 2015; Proskurnikov and Tempo 2017; Dong et al. 2018; Anderson and Ye 2019). Among different models, the Friedkin-Johnsen (FJ) model (Friedkin and Johnsen 1990) is a popular one, which has been applied to many aspects (Bernardo et al. 2021; Friedkin et al. 2016). For example, the concatenated FJ model has been recently adapted to capture and reproduce the complex dynamics behind the Paris Agreement negotiation process, which explains why consensus was achieved in these multilateral international negotiations (Bernardo et al. 2021).

A fundamental quantity for opinion dynamics is the overall or average opinion, which reflects public opinion about certain topics of interest. In the past years, the subject of modifying opinions in a graph has attracted considerable attention in the scientific community (Gionis, Terzi, and Tsaparas 2013; Abebe et al. 2018; Chan, Liang, and Sozio 2019; Xu et al. 2020), since it has important implications in diverse realistic situations, such as commercial marketing, political elections, and public health campaigns. For example, previous work has formulated and studied the problem of optimizing the overall or average opinion for the FJ model in undirected graphs by changing certain attributes of some chosen nodes, including their internal opinions (Xu et al. 2020), external opinions (Gionis, Terzi, and Tsaparas 2013), and susceptibility to persuasion (Abebe et al. 2018; Chan, Liang, and Sozio 2019). Thus far, most existing studies on modifying opinions have focused on undirected graphs. In this paper, we study the problem of minimizing or maximizing the average opinion in directed graphs (digraphs), since they can better mimic realistic networks. Moreover, because previous algorithms for undirected graphs do not carry over to digraphs, we propose an efficient linear-time approximation algorithm to solve the problem.

We adopt the discrete-time FJ model in a social network modeled by a digraph $\mathcal{G}=(V,E)$ with $n$ nodes and $m$ arcs. In the model, each node $i\in V$ is endowed with an internal/innate opinion $s_{i}$ in the interval $[0,1]$, where 0 and 1 are two polar opposing opinions regarding a certain topic. Moreover, each node $i\in V$ has an expressed opinion $z_{i}(t)$ at time $t$. During the opinion evolution process, the internal opinions of all nodes never change, while the expressed opinion $z_{i}(t+1)$ of any node $i$ at time $t+1$ evolves as a weighted average of $s_{i}$ and the expressed opinions of $i$'s neighbors at time $t$. For sufficiently large $t$, the expressed opinion $z_{i}(t)$ of every node $i$ converges to an equilibrium opinion $z_{i}$. We address the following optimization problem OpinionMin (or OpinionMax): given a digraph $\mathcal{G}=(V,E)$ and a positive integer $k\ll n$, how to choose $k$ nodes and change their internal opinions to 0 (or 1), so that the average steady-state opinion is minimized (or maximized).

The main contributions of our work are as follows. We formalize the problem OpinionMin (or OpinionMax) of optimizing the average equilibrium opinion by optimally selecting $k$ nodes and modifying their internal opinions to 0 (or 1), and show that the two problems are equivalent to each other. We prove that the OpinionMin problem has an optimal solution and give an exact algorithm, which returns the optimal solution in $O(n^{3})$ time. We then provide an interpretation of the average equilibrium opinion from the perspective of spanning converging forests, based on which, together with Wilson's algorithm, we propose a sampling-based fast algorithm. The fast algorithm has an error guarantee for the main quantity concerned, and has a time complexity of $O(ln)$, where $l$ is the number of samplings. Finally, we perform extensive experiments on various real networks, which show that our fast algorithm is almost as effective as the exact one, and both outperform several natural baselines. Furthermore, compared with the exact algorithm, our fast algorithm is more efficient and scales to massive graphs with more than twenty million nodes.

2 Related Work

In this section, we briefly review the existing work related to ours.

Establishing mathematical models is a key step for understanding opinion dynamics and various models have been developed in the past years (Jia et al. 2015; Proskurnikov and Tempo 2017; Dong et al. 2018; Anderson and Ye 2019). Among existing models, the FJ model (Friedkin and Johnsen 1990) is a classic one, which is a significant extension of the DeGroot model (Degroot 1974). Due to its theoretical and practical significance, the FJ model has received much interest since its development. A sufficient condition for stability of the FJ model was obtained in (Ravazzi et al. 2015), its average innate opinion was inferred in (Das et al. 2013), and the vector of its expressed opinions at equilibrium was derived in (Das et al. 2013; Bindel, Kleinberg, and Oren 2015). Moreover, some explanations of the FJ model were also provided (Ghaderi and Srikant 2014; Bindel, Kleinberg, and Oren 2015). Finally, in recent years many variants or extensions of the FJ model have been introduced and studied by incorporating different factors affecting opinion formation, such as peer pressure (Semonsen et al. 2019), cooperation and competition (He et al. 2020; Xu et al. 2020), and interactions among higher-order nearest neighbors (Zhang et al. 2020).

In addition to the properties, interpretations and extensions of the FJ model itself, some social phenomena have been quantified based on the FJ model, such as disagreement (Musco, Musco, and Tsourakakis 2018), conflict (Chen, Lijffijt, and De Bie 2018), polarization (Matakos, Terzi, and Tsaparas 2017; Musco, Musco, and Tsourakakis 2018), and controversy (Chen, Lijffijt, and De Bie 2018), and a randomized algorithm approximately computing polarization and disagreement was designed in (Xu, Bao, and Zhang 2021), which was later used in (Tu and Neumann 2022). Also, many optimization problems for these quantities in the FJ model have been proposed and analyzed, including minimizing polarization (Musco, Musco, and Tsourakakis 2018; Matakos, Terzi, and Tsaparas 2017), disagreement (Musco, Musco, and Tsourakakis 2018), and conflict (Chen, Lijffijt, and De Bie 2018; Zhu and Zhang 2022), by different strategies such as modifying nodes' internal opinions (Matakos, Terzi, and Tsaparas 2017), allocating edge weights (Musco, Musco, and Tsourakakis 2018) and adding edges (Zhu, Bao, and Zhang 2021). In order to solve these problems, different algorithms were designed by leveraging some mathematical tools, such as semidefinite programming (Chen, Lijffijt, and De Bie 2018) and Laplacian solvers (Zhu, Bao, and Zhang 2021).

Apart from polarization, disagreement, and conflict, another important optimization objective for opinion dynamics is the overall or average opinion at equilibrium. For example, based on the FJ model, maximizing or minimizing the overall opinion has been considered via different node-based schemes, such as changing nodes' internal opinions (Xu et al. 2020), external opinions (Gionis, Terzi, and Tsaparas 2013), and susceptibility to persuasion (Abebe et al. 2018; Chan, Liang, and Sozio 2019). On the other hand, for the DeGroot model of opinion dynamics in the presence of leaders, optimizing the overall or average opinion has also been heavily studied (Luca et al. 2014; Yi, Castiglia, and Patterson 2021; Zhou and Zhang 2021). An identical problem was also considered for a voter model (Yildiz et al. 2013), the asymptotic mean opinion of which is similar to that in the extended DeGroot model (Yi, Castiglia, and Patterson 2021). The vast majority of previous studies concentrated on undirected graphs, with the exception of a few works (Ahmadinejad et al. 2015; Yi, Castiglia, and Patterson 2021), which addressed opinion optimization problems in digraphs and developed approximation algorithms with a time complexity of at least $O(n^{2.373})$. In comparison, our fast algorithm is more efficient, since it has linear time complexity.

3 Preliminary

This section is devoted to a brief introduction to some useful notations and tools, in order to facilitate the description of problem formulation and algorithms.

3.1 Directed Graph and Its Laplacian Matrix

Let $\mathcal{G}=(V,E)$ denote an unweighted simple directed graph (digraph) with $n=|V|$ nodes (vertices) and $m=|E|$ directed edges (arcs), where $V=\{v_{1},v_{2},\cdots,v_{n}\}$ is the set of nodes and $E=\{(v_{i},v_{j})\in V\times V\}$ is the set of directed edges. The existence of arc $(v_{i},v_{j})\in E$ means that there is an arc pointing from node $v_{i}$ to node $v_{j}$. In what follows, $v_{i}$ and $i$ are used interchangeably to represent node $v_{i}$ if no confusion arises. An isolated node is a node with no arcs pointing to or coming from it. Let $N(i)$ denote the set of nodes that can be directly accessed by node $i$, that is, $N(i)=\{j:(i,j)\in E\}$. A path $P$ from node $v_{1}$ to node $v_{j}$ is an alternating sequence of nodes and arcs $v_{1},(v_{1},v_{2}),v_{2},\cdots,v_{j-1},(v_{j-1},v_{j}),v_{j}$, in which the nodes are distinct and every arc $(v_{i},v_{i+1})$ points from $v_{i}$ to $v_{i+1}$. A loop is a path plus an arc from the ending node to the starting node. A digraph is (strongly) connected if for any pair of nodes $v_{x}$ and $v_{y}$, there is a path from $v_{x}$ to $v_{y}$ and a path from $v_{y}$ to $v_{x}$. A digraph is called weakly connected if it is connected when every directed edge $(i,j)$ is replaced with the two directed edges $(i,j)$ and $(j,i)$ in opposite directions. A tree is a weakly connected graph with no loops. An isolated node is considered a tree. A forest is a graph that is a disjoint union of trees.

The connections of digraph $\mathcal{G}=(V,E)$ are encoded in its adjacency matrix $\bm{\mathit{A}}=(a_{ij})_{n\times n}$, with the element $a_{ij}$ at row $i$ and column $j$ being 1 if $(v_{i},v_{j})\in E$ and $a_{ij}=0$ otherwise. For a node $i$ in digraph $\mathcal{G}$, its in-degree is defined as $d^{+}_{i}=\sum_{j=1}^{n}a_{ji}$, and its out-degree is defined as $d^{-}_{i}=\sum_{j=1}^{n}a_{ij}$. In the sequel, we use $d_{i}$ to represent the out-degree $d_{i}^{-}$. The diagonal out-degree matrix of digraph $\mathcal{G}$ is defined as $\bm{\mathit{D}}={\rm diag}(d_{1},d_{2},\ldots,d_{n})$, and the Laplacian matrix of digraph $\mathcal{G}$ is defined to be $\bm{\mathit{L}}=\bm{\mathit{D}}-\bm{\mathit{A}}$. Let $\mathbf{1}$ and $\mathbf{0}$ be the two $n$-dimensional vectors with all entries being ones and zeros, respectively. Then, by definition, the sum of all entries in each row of $\bm{\mathit{L}}$ is equal to 0, that is, $\bm{\mathit{L}}\mathbf{1}=\mathbf{0}$. Let $\bm{\mathit{I}}$ be the $n$-dimensional identity matrix.

In a digraph $\mathcal{G}$, if for any arc $(i,j)$ the reverse arc $(j,i)$ also exists, $\mathcal{G}$ reduces to an undirected graph. When $\mathcal{G}$ is undirected, $a_{ij}=a_{ji}$ holds for an arbitrary pair of nodes $i$ and $j$, and thus $d^{+}_{i}=d^{-}_{i}$ holds for any node $i\in V$. Moreover, in an undirected graph $\mathcal{G}$ both the adjacency matrix $\bm{\mathit{A}}$ and the Laplacian matrix $\bm{\mathit{L}}$ are symmetric, and the Laplacian still satisfies $\bm{\mathit{L}}\mathbf{1}=\mathbf{0}$.

3.2 Friedkin-Johnsen Model on Digraphs

The Friedkin-Johnsen (FJ) model (Friedkin and Johnsen 1990) is a popular model for opinion evolution and formation. For the FJ opinion model on a digraph $\mathcal{G}=(V,E)$, each node/agent $i\in V$ is associated with two opinions: one is the internal opinion $s_{i}$, the other is the expressed opinion $z_{i}(t)$ at time $t$. The internal opinion $s_{i}$ is in the closed interval $[0,1]$, reflecting the intrinsic position of node $i$ on a certain topic, where 0 and 1 are polar opposites of opinions regarding the topic. A higher value of $s_{i}$ signifies that node $i$ is more favorable toward the topic, and vice versa. During the process of opinion evolution, the internal opinion $s_{i}$ remains constant, while the expressed opinion evolves at time $t+1$ as follows:

z_{i}(t+1)=\frac{s_{i}+\sum_{j\in N(i)}a_{ij}z_{j}(t)}{1+\sum_{j\in N(i)}a_{ij}}. \qquad (1)

Let $\bm{\mathit{s}}=(s_{1},s_{2},\cdots,s_{n})^{\top}$ denote the vector of internal opinions, and let $\bm{\mathit{z}}(t)=(z_{1}(t),z_{2}(t),\cdots,z_{n}(t))^{\top}$ denote the vector of expressed opinions at time $t$.

Lemma 3.1

(Bindel, Kleinberg, and Oren 2015) As $t$ approaches infinity, $\bm{\mathit{z}}(t)$ converges to an equilibrium vector $\bm{\mathit{z}}=(z_{1},z_{2},\cdots,z_{n})^{\top}$ satisfying $\bm{\mathit{z}}=(\bm{\mathit{I}}+\bm{\mathit{L}})^{-1}\bm{\mathit{s}}$.

Let $\omega_{ij}$ be the element at the $i$-th row and the $j$-th column of the matrix $\mathbf{\Omega}\triangleq(\bm{\mathit{I}}+\bm{\mathit{L}})^{-1}$, which is called the fundamental matrix of the FJ model for opinion dynamics (Gionis, Terzi, and Tsaparas 2013). The fundamental matrix has many good properties (Chebotarev and Shamis 1997, 1998). It is row stochastic, since $\sum_{j=1}^{n}\omega_{ij}=1$. Moreover, $0\leq\omega_{ji}<\omega_{ii}\leq 1$ for any pair of nodes $i$ and $j$. The equality $\omega_{ji}=0$ holds if and only if $j\neq i$ and there is no path from node $j$ to node $i$; and $\omega_{ii}=1$ holds if and only if the out-degree $d_{i}$ of node $i$ is 0. Then, according to Lemma 3.1, for every node $i\in V$, its equilibrium expressed opinion is $z_{i}=\sum^{n}_{j=1}\omega_{ij}s_{j}$, a convex combination of the internal opinions of all nodes.
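To make Lemma 3.1 and the fundamental matrix concrete, the following minimal Julia sketch builds a toy 3-node digraph (the graph, the opinion values, and all variable names are illustrative assumptions, not data from the paper), iterates update (1), and checks the closed form and the row-stochasticity of $\mathbf{\Omega}$.

```julia
using LinearAlgebra

# Toy 3-node digraph (an assumed example): a_ij = 1 iff there is an arc (i, j).
A = [0 1 1; 0 0 1; 1 0 0]
s = [0.9, 0.2, 0.6]                       # internal opinions in [0, 1]
d = vec(sum(A, dims=2))                   # out-degrees d_i
L = Diagonal(d) - A                       # Laplacian L = D - A
Ω = inv(I + L)                            # fundamental matrix of the FJ model
z_closed = Ω * s                          # equilibrium opinions from Lemma 3.1

z = copy(s)                               # iterate the FJ update (1)
for _ in 1:200
    z .= (s .+ A * z) ./ (1 .+ d)         # z_i(t+1) = (s_i + Σ_j a_ij z_j(t)) / (1 + d_i)
end

@assert z ≈ z_closed                      # the iteration converges to (I + L)^{-1} s
@assert all(isapprox.(sum(Ω, dims=2), 1)) # Ω is row stochastic
```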

4 Problem Formulation

An important quantity for opinion dynamics is the overall expressed opinion or the average expressed opinion at equilibrium, whose optimization for the FJ model has been addressed under different constraints (Gionis, Terzi, and Tsaparas 2013; Ahmadinejad et al. 2015; Abebe et al. 2018; Xu et al. 2020; Yi, Castiglia, and Patterson 2021). In this section, we propose a problem of minimizing the average expressed opinion for the FJ opinion dynamics model in a digraph, and design an exact algorithm that solves the problem optimally.

4.1 Average Opinion and Structure Centrality

For the FJ model in digraph $\mathcal{G}=(V,E)$, the overall expressed opinion is defined as the sum $z_{\rm sum}$ of the equilibrium expressed opinions $z_{i}$ over all nodes $i\in V$. By Lemma 3.1, $z_{i}=\sum^{n}_{j=1}\omega_{ij}s_{j}$ and $z_{\rm sum}=\sum_{i=1}^{n}z_{i}=\sum_{i=1}^{n}\sum_{j=1}^{n}\omega_{ji}s_{i}$. Given the vector $\bm{\mathit{z}}$ of equilibrium expressed opinions, we use $g(\bm{\mathit{z}})$ to denote the average expressed opinion. By definition,

g(\bm{\mathit{z}})=\frac{1}{n}z_{\rm sum}=\frac{1}{n}\sum_{i=1}^{n}z_{i}=\sum_{i=1}^{n}\frac{\sum_{j=1}^{n}\omega_{ji}}{n}s_{i}. \qquad (2)

Since $g(\bm{\mathit{z}})=z_{\rm sum}/n$, related problems and algorithms for $g(\bm{\mathit{z}})$ and $z_{\rm sum}$ are equivalent to each other. In what follows, we focus on the quantity $g(\bm{\mathit{z}})$.

Equation (2) tells us that the average expressed opinion $g(\bm{\mathit{z}})$ is determined by two aspects: the internal opinion of every node, and the network structure characterizing the interactions between nodes encoded in matrix $\mathbf{\Omega}$, which together constitute the social structure of the opinion system for the FJ model. The former is an intrinsic property of each node, while the latter is a structural property of the network, and together they determine the opinion dynamics system. Concretely, for the equilibrium expressed opinion $z_{i}=\sum_{j=1}^{n}\omega_{ij}s_{j}$ of node $i$, $\omega_{ij}$ is the convex combination coefficient, or contribution, of the internal opinion of node $j$. The average of the $j$-th column of $\mathbf{\Omega}$, denoted by $\rho_{j}\triangleq\frac{1}{n}\sum_{i=1}^{n}\omega_{ij}$, measures the contribution of the internal opinion of node $j$ to $g(\bm{\mathit{z}})$. We call $\rho_{j}$ the structure centrality (Friedkin 2011) of node $j$ in opinion dynamics modelled by the FJ model, since it captures the long-run structural influence of node $j$ on the average expressed opinion. Since matrix $\mathbf{\Omega}$ is row stochastic and $0\leq\omega_{ij}\leq 1$ for any pair of nodes $i$ and $j$, we have $0\leq\rho_{j}\leq 1$ for every node $j\in V$ and $\sum_{j=1}^{n}\rho_{j}=1$.

Using structure centrality, the average expressed opinion is written as $g(\bm{\mathit{z}})=\sum_{i=1}^{n}\rho_{i}s_{i}$, which shows that $g(\bm{\mathit{z}})$ is a convex combination of the internal opinions of all nodes, with the weight of $s_{i}$ being the structure centrality $\rho_{i}$ of node $i$.
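Continuing the toy Julia sketch above (reusing the assumed $\mathbf{\Omega}$, $\bm{\mathit{s}}$, and adjacency matrix defined there), the structure centralities and the convex-combination form of $g(\bm{\mathit{z}})$ can be checked directly.

```julia
# Structure centralities ρ_j = (1/n) Σ_i ω_ij and g(z) = Σ_i ρ_i s_i (same toy data as above).
n = size(A, 1)
ρ = vec(sum(Ω, dims=1)) ./ n
@assert sum(ρ) ≈ 1              # the ρ_j form a probability vector
g = sum(ρ .* s)
@assert g ≈ sum(Ω * s) / n      # equals the average equilibrium expressed opinion
```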

4.2 Problem Statement

As shown above, for a given digraph $\mathcal{G}=(V,E)$, the structure centrality of every node remains fixed. For the FJ model on $\mathcal{G}=(V,E)$ with initial vector $\bm{\mathit{s}}=(s_{1},s_{2},\cdots,s_{n})^{\top}$ of internal opinions, if we choose a set $T\subset V$ of $k$ nodes and persuade them to change their internal opinions to 0, the average equilibrium opinion, denoted by $g_{T}(\bm{\mathit{z}})$, will decrease. It is clear that for $T=\emptyset$, $g_{\emptyset}(\bm{\mathit{z}})=g(\bm{\mathit{z}})$. Moreover, for two node sets $H$ and $T$ with $T\subset H\subset V$, we have $g_{T}(\bm{\mathit{z}})\geq g_{H}(\bm{\mathit{z}})$. Then the problem OpinionMin of opinion minimization arises naturally: how to optimally select a set $T$ of a small number $k$ of nodes and change their internal opinions to 0, so that their influence on the overall equilibrium opinion is maximized. Mathematically, it is formally stated as follows.

Problem 1 (OpinionMin)

Given a digraph $\mathcal{G}=(V,E)$, a vector $\bm{\mathit{s}}$ of internal opinions, and an integer $k\ll n$, find the set $T\subseteq V$ with $|T|=k$ nodes and change the internal opinions of these chosen $k$ nodes to 0, so that the average equilibrium opinion is minimized. That is,

T=\arg\min_{U\subseteq V,|U|=k}g_{U}(\bm{\mathit{z}}). \qquad (3)

Similarly, we can define the problem OpinionMax of maximizing the average equilibrium opinion by optimally selecting a set $T$ of $k$ nodes and changing their internal opinions to 1. The goal of problem OpinionMin is to drive the average equilibrium opinion $g_{T}(\bm{\mathit{z}})$ towards the polar value 0, while the goal of problem OpinionMax is to drive $g_{T}(\bm{\mathit{z}})$ towards the polar value 1. Although the definitions and formulations of problems OpinionMin and OpinionMax are different, we can prove that they are equivalent to each other. In the sequel, we only consider the OpinionMin problem.

4.3 Optimal Solution

Although the OpinionMin problem is seemingly combinatorial, we next show that there is an exact algorithm optimally solving the problem in $O(n^{3})$ time.

Theorem 4.1

The optimal solution to the OpinionMin problem is the set $T$ of $k$ nodes with the largest product of structure centrality and internal opinion. That is, for any node $i\in T$ and any node $j\in V\setminus T$, $\rho_{i}s_{i}\geq\rho_{j}s_{j}$.

Proof.  Since modifying the internal opinions does not change the structure centrality $\rho_{i}$ of any node $i$, the optimal set $T$ of nodes for the OpinionMin problem satisfies

T=\arg\min_{U\subseteq V,|U|=k}\sum_{i\notin U}\rho_{i}s_{i}=\arg\max_{U\subseteq V,|U|=k}\sum_{i\in U}\rho_{i}s_{i},

which finishes the proof.  \Box

Theorem 4.1 shows that the key to solving Problem 1 is to compute $\rho_{i}$ for every node $i$. In Algorithm 1, we present an algorithm Exact, which computes $\rho_{i}$ exactly. The algorithm first computes the inverse $\mathbf{\Omega}$ of matrix $\bm{\mathit{I}}+\bm{\mathit{L}}$, which takes $O(n^{3})$ time. Based on the obtained $\mathbf{\Omega}=(\omega_{ij})_{n\times n}$, the algorithm then computes $\rho_{i}s_{i}$ for each $i\in V$ in $O(n^{2})$ time, using the relation $\rho_{i}s_{i}=\frac{1}{n}\sum_{j=1}^{n}\omega_{ji}s_{i}$. Finally, Algorithm 1 constructs the set $T$ of the $k$ nodes with the largest values of $\rho_{i}s_{i}$, which takes $O(kn)$ time. Therefore, the total time complexity of Algorithm 1 is $O(n^{3})$.

Due to its high computational complexity, Algorithm 1 is infeasible for large graphs. In the next section, we give a fast algorithm for Problem 1, which is scalable to graphs with more than twenty million nodes.

Input: A digraph $\mathcal{G}=(V,E)$; an internal opinion vector $\bm{\mathit{s}}$; an integer $k$ obeying $1\leq k\leq|V|$
Output: $T$: a subset of $V$ with $|T|=k$
1  Initialize solution $T=\emptyset$
2  Compute $\mathbf{\Omega}=(\bm{\mathit{I}}+\bm{\mathit{L}})^{-1}$
3  Compute $\rho_{i}s_{i}=\frac{1}{n}\sum_{j=1}^{n}\omega_{ji}s_{i}$ for each $i\in V$
4  for $t=1$ to $k$ do
5      Select $i$ such that $i\leftarrow\arg\max_{i\in V\setminus T}\rho_{i}s_{i}$
6      Update solution $T\leftarrow T\cup\{i\}$
7  return $T$
Algorithm 1: Exact$(\mathcal{G},\bm{\mathit{s}},k)$
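A compact Julia sketch of Algorithm 1 follows; the function name, the dense adjacency-matrix input, and the use of `partialsortperm` are our own illustrative choices, not the paper's implementation.

```julia
using LinearAlgebra

# Exact(G, s, k): the k nodes with the largest ρ_i s_i, computed from Ω = (I + L)^{-1}.
function exact(A::AbstractMatrix, s::AbstractVector, k::Int)
    n = size(A, 1)
    L = Diagonal(vec(sum(A, dims=2))) - A          # out-degree Laplacian
    Ω = inv(Matrix(I + L))                         # O(n^3) dense inverse
    ρ = vec(sum(Ω, dims=1)) ./ n                   # structure centralities
    return partialsortperm(ρ .* s, 1:k, rev=true)  # indices of the top-k values of ρ_i s_i
end
```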

5 Fast Sampling Algorithm

In this section, we develop a linear-time algorithm to approximately evaluate the structure centrality of every node and thereby approximately solve the OpinionMin problem, by exploiting the connection between the fundamental matrix $\mathbf{\Omega}$ and spanning converging forests. Our fast algorithm is based on sampling spanning converging forests, the key ingredient of which is an extension of Wilson's algorithm (Wilson 1996; Wilson and Propp 1996).

5.1 Interpretation of Structure Centrality

For a digraph $\mathcal{G}=(V,E)$, a spanning subgraph of $\mathcal{G}$ is a subgraph of $\mathcal{G}$ whose node set is $V$ and whose edge set is a subset of $E$. A converging tree is a weakly connected digraph in which one node, called the root, has out-degree 0 and all other nodes have out-degree 1. An isolated node is considered a converging tree with the root being itself. A spanning converging forest of digraph $\mathcal{G}$ is a spanning subdigraph of $\mathcal{G}$ all of whose weakly connected components are converging trees. A spanning converging forest is in fact an in-forest in the sense of (Agaev and Chebotarev 2001; Chebotarev and Agaev 2002).

Let $\mathcal{F}$ be the set of all spanning converging forests of digraph $\mathcal{G}$. For a spanning converging forest $\phi\in\mathcal{F}$, let $V_{\phi}$ and $E_{\phi}$ denote its node set and arc set, respectively. By definition, for each node $i\in V_{\phi}$, there is at most one node $j\in V_{\phi}$ obeying $(i,j)\in E_{\phi}$. For a spanning converging forest $\phi$, define $\mathcal{R}(\phi)=\{i:(i,j)\notin\phi,\forall j\in V_{\phi}\}$, which is the set of roots of the converging trees that constitute $\phi$. Since each node $i$ in $\phi$ belongs to a certain converging tree, we define the function $r_{\phi}(i):V\rightarrow\mathcal{R}(\phi)$ mapping node $i$ to the root of the converging tree containing $i$. Thus, if $r_{\phi}(i)=j$, then $j\in\mathcal{R}(\phi)$, and nodes $i$ and $j$ belong to the same converging tree in $\phi$. Define $\mathcal{F}_{ij}$ to be the set of spanning converging forests in which nodes $i$ and $j$ are in the same converging tree rooted at node $j$, that is, $\mathcal{F}_{ij}=\{\phi:r_{\phi}(i)=j,\phi\in\mathcal{F}\}$. Then, $\mathcal{F}_{ii}=\{\phi:i\in\mathcal{R}(\phi),\phi\in\mathcal{F}\}$.

Spanning converging forests have a close connection with the fundamental matrix of the FJ model, which is in fact the in-forest matrix of the digraph $\mathcal{G}$ introduced in (Chebotarev and Shamis 1997, 1998). Using the approach in (Chaiken 1982), it is easy to derive that the entry $\omega_{ij}$ of the fundamental matrix $\mathbf{\Omega}$ can be written as $\omega_{ij}=|\mathcal{F}_{ij}|/|\mathcal{F}|$.

With the notions mentioned above, we now provide an interpretation and another expression of the structure centrality $\rho_{i}$ for any node $i$. For convenience of description, we introduce some more notation. For a node $i\in V$ and a spanning converging forest $\phi\in\mathcal{F}$ of digraph $\mathcal{G}=(V,E)$, let $M(\phi,i)$ be the set $M(\phi,i)=\{j:r_{\phi}(j)=i\}$. By definition, for any $\phi\in\mathcal{F}$, if $i\notin\mathcal{R}(\phi)$, then $M(\phi,i)=\emptyset$; if $i\in\mathcal{R}(\phi)$, then $|M(\phi,i)|$ equals the number of nodes in the converging tree of $\phi$ whose root is node $i$. For two nodes $i$ and $j$ and a spanning converging forest $\phi$, define $s(\phi,j,i)$ as a function taking two values, 0 or 1:

s(\phi,j,i)=\begin{cases}1&\text{if }r_{\phi}(j)=i,\\ 0&\text{if }r_{\phi}(j)\neq i.\end{cases} \qquad (4)

Then, the structure centrality $\rho_{i}$ of node $i$ is recast as

\rho_{i}=\frac{1}{n}\sum_{j=1}^{n}\omega_{ji}=\frac{1}{n|\mathcal{F}|}\sum_{j=1}^{n}|\mathcal{F}_{ji}|=\frac{1}{n|\mathcal{F}|}\sum_{j=1}^{n}\sum_{\phi\in\mathcal{F}}s(\phi,j,i)=\frac{1}{n|\mathcal{F}|}\sum_{\phi\in\mathcal{F}}\sum_{j=1}^{n}s(\phi,j,i)=\frac{1}{n|\mathcal{F}|}\sum_{\phi\in\mathcal{F}}|M(\phi,i)|, \qquad (5)

which indicates that $\rho_{i}$ is the average, over all $\phi\in\mathcal{F}$, of the number of nodes in the converging tree rooted at node $i$, divided by $n$.
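As a sanity check of this forest interpretation, the following Julia sketch enumerates all spanning converging forests of a tiny, assumed 3-node digraph by brute force and verifies both $\omega_{ij}=|\mathcal{F}_{ij}|/|\mathcal{F}|$ and expression (5); it is illustrative only, since the enumeration is exponential in the graph size.

```julia
using LinearAlgebra

n = 3
arcs = [(1, 2), (2, 3), (3, 1), (1, 3)]                 # an assumed toy digraph
out = [[v for (u, v) in arcs if u == i] for i in 1:n]   # out-neighbours

# A spanning converging forest keeps at most one outgoing arc per node and has no directed cycle.
forests = Vector{Vector{Int}}()                         # nxt[i] = successor of i, 0 if i is a root
for c1 in [0; out[1]], c2 in [0; out[2]], c3 in [0; out[3]]
    nxt = [c1, c2, c3]
    acyclic = true
    for i in 1:n                                        # reaching a root takes at most n steps
        u, steps = i, 0
        while u != 0 && steps <= n
            u = nxt[u]; steps += 1
        end
        acyclic &= (u == 0)
    end
    acyclic && push!(forests, nxt)
end

root(nxt, i) = nxt[i] == 0 ? i : root(nxt, nxt[i])      # r_ϕ(i)
ω_forest = [count(f -> root(f, i) == j, forests) / length(forests) for i in 1:n, j in 1:n]
ρ_forest = [sum(count(j -> root(f, j) == i, 1:n) for f in forests) / (n * length(forests)) for i in 1:n]

A = zeros(Int, n, n); for (u, v) in arcs; A[u, v] = 1; end
Ω = inv(Matrix(I + Diagonal(vec(sum(A, dims=2))) - A))
@assert ω_forest ≈ Ω                                    # ω_ij = |F_ij| / |F|
@assert ρ_forest ≈ vec(sum(Ω, dims=1)) ./ n             # expression (5)
```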

5.2 An Expansion of Wilson’s Algorithm

We first give a brief introduction to the loop-erasure operation on a random walk (Lawler 1980), which is the process obtained from the random walk by erasing its loops in chronological order. Concretely, for a random walk $P=v_{1},(v_{1},v_{2}),v_{2},\ldots,v_{j-1},(v_{j-1},v_{j}),v_{j}$, the loop-erasure $P_{\rm LE}$ of $P$ is an alternating sequence $\widetilde{v}_{1},(\widetilde{v}_{1},\widetilde{v}_{2}),\widetilde{v}_{2},\ldots,\widetilde{v}_{q-1},(\widetilde{v}_{q-1},\widetilde{v}_{q}),\widetilde{v}_{q}$ of nodes and arcs obtained inductively in the following way. First set $\widetilde{v}_{1}=v_{1}$ and append $\widetilde{v}_{1}$ to $P_{\rm LE}$. Suppose that the sequence $\widetilde{v}_{1},(\widetilde{v}_{1},\widetilde{v}_{2}),\widetilde{v}_{2},\ldots,\widetilde{v}_{h-1},(\widetilde{v}_{h-1},\widetilde{v}_{h}),\widetilde{v}_{h}$ has been added to $P_{\rm LE}$ for some $h\geq 1$. If $\widetilde{v}_{h}=v_{j}$, then $q=h$ and $\widetilde{v}_{h}$ is the last node in $P_{\rm LE}$. Otherwise, define $\widetilde{v}_{h+1}=v_{r+1}$, where $r=\max\{i:v_{i}=\widetilde{v}_{h}\}$.
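A small Julia sketch of the loop-erasure operation just described, assuming the walk is represented simply by its sequence of visited nodes (a representation chosen here for illustration):

```julia
# Loop-erasure of a walk given as its node sequence [v_1, ..., v_j].
function loop_erase(walk::Vector{Int})
    le = [walk[1]]                           # ṽ_1 = v_1
    while le[end] != walk[end]
        r = findlast(==(le[end]), walk)      # r = max{i : v_i = ṽ_h}
        push!(le, walk[r + 1])               # ṽ_{h+1} = v_{r+1}
    end
    return le
end

loop_erase([1, 2, 3, 2, 4])                  # the loop 2 → 3 → 2 is erased, giving [1, 2, 4]
```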

Based on the loop-erasure operation on a random walk, Wilson proposed an algorithm to generate a uniform spanning tree rooted at a given node (Wilson 1996; Wilson and Propp 1996). The following three steps describe Wilson's algorithm for obtaining a spanning tree $\tau=(V_{\tau},E_{\tau})$ of a connected digraph $\mathcal{G}=(V,E)$ rooted at node $u$. (i) Set $\tau=(\{u\},\emptyset)$ with $V_{\tau}=\{u\}$. Choose $i\in V\setminus V_{\tau}$ and start an unbiased random walk at node $i$: at each time step, the walk jumps to an out-neighbor of the current position chosen uniformly at random, and it stops as soon as the walk $P$ reaches some node in $\tau$. (ii) Perform the loop-erasure operation on the random walk $P$ to get $P_{\rm LE}=\widetilde{v}_{1},(\widetilde{v}_{1},\widetilde{v}_{2}),\widetilde{v}_{2},\ldots,(\widetilde{v}_{q-1},\widetilde{v}_{q}),\widetilde{v}_{q}$, add the nodes and arcs in $P_{\rm LE}$ to $\tau$, and update $V_{\tau}$ with the nodes in $\tau$. (iii) If $V_{\tau}\neq V$, repeat steps (i)-(ii) with a new node $i\in V\setminus V_{\tau}$; otherwise stop and return $\tau$.

For a digraph $\mathcal{G}=(V,E)$, connected or not, we can also apply Wilson's algorithm to obtain a spanning converging forest $\phi_{0}\in\mathcal{F}$, using a method similar to that in (Avena and Gaudillière 2018; Pilavcı et al. 2021), which consists of the following three steps. (i) Construct an augmented digraph $\mathcal{G}^{\prime}=(V^{\prime},E^{\prime})$ of $\mathcal{G}=(V,E)$, obtained from $\mathcal{G}$ by adding a new node $\Delta$ and some new edges. Concretely, $V^{\prime}=V\cup\{\Delta\}$ and $E^{\prime}=E\cup\{(i,\Delta):i\in V\}\cup\{(\Delta,i):i\in V\}$. (ii) Use Wilson's algorithm to generate a uniform spanning tree $\tau$ of the augmented graph $\mathcal{G}^{\prime}$ rooted at node $\Delta$. (iii) Delete all edges $(i,\Delta)\in\tau$ to obtain a spanning forest $\phi_{0}\in\mathcal{F}$ of $\mathcal{G}$. Assigning $\mathcal{R}(\phi_{0})=\{i:(i,\Delta)\in\tau\}$ as the set of roots of the trees in $\phi_{0}$ makes $\phi_{0}$ a spanning converging forest of $\mathcal{G}$.

The spanning converging forest $\phi_{0}$ obtained by the above steps is selected uniformly from $\mathcal{F}$ (Avena and Gaudillière 2018). In other words, for any spanning converging forest $\phi\in\mathcal{F}$, we have $\mathbb{P}(\phi_{0}=\phi)=1/|\mathcal{F}|$. Following these three steps for generating a uniform spanning converging forest of digraph $\mathcal{G}$, in Algorithm 2 we present an algorithm that generates a uniform spanning converging forest $\phi$ of $\mathcal{G}$ and returns a list RootIndex, whose $i$-th element RootIndex[$i$] records the root of the tree in $\phi$ that node $i$ belongs to. That is, RootIndex[$i$] $=r_{\phi}(i)$.

Input: $\mathcal{G}$: a digraph
Output: RootIndex: a vector recording the root index of every node
1  InForest[$i$] $\leftarrow$ false, $i=1,2,\ldots,n$
2  Next[$i$] $\leftarrow-1$, $i=1,2,\ldots,n$
3  RootIndex[$i$] $\leftarrow 0$, $i=1,2,\ldots,n$
4  for $i=1$ to $n$ do
5      $u\leftarrow i$
6      while not InForest[$u$] do
7          seed $\leftarrow$ Rand()
8          if seed $\leq\frac{1}{1+d_{u}}$ then
9              InForest[$u$] $\leftarrow$ true
10             Next[$u$] $\leftarrow-1$
11             RootIndex[$u$] $\leftarrow u$
12         else
13             Next[$u$] $\leftarrow$ RandomSuccessor($u,\mathcal{G}$)
14             $u\leftarrow$ Next[$u$]
15     RootNow $\leftarrow$ RootIndex[$u$]
16     $u\leftarrow i$
17     while not InForest[$u$] do
18         InForest[$u$] $\leftarrow$ true
19         RootIndex[$u$] $\leftarrow$ RootNow
20         $u\leftarrow$ Next[$u$]
21 return RootIndex
Algorithm 2: RandomForest($\mathcal{G}$)

Below we give a detailed description of Algorithm 2. InForest is a list recording whether a node has already been added to the forest during the random walk process. In line 1, we initialize InForest[$i$] to false for all $i\in V$. If node $i$ is not a root of any tree in the forest $\phi$, Next[$i$] is the node $j$ satisfying $(i,j)\in\phi$; if node $i$ belongs to the root set $\mathcal{R}(\phi)$, Next[$i$] $=-1$. We initialize Next[$i$] $=-1$ in line 2. In line 5, we start a random walk at node $u$ in the extended graph $\mathcal{G}^{\prime}$ to create a branch of the forest. The probability of visiting node $\Delta$ from $u$ in one step is $1/(1+d_{u})$. In line 7, we generate a random real number in $(0,1)$ using the function Rand(). If this random number satisfies the inequality in line 8, the walk jumps to node $\Delta$ at this step. According to the previous analysis, in the extended graph $\mathcal{G}^{\prime}$, the nodes that point directly to $\Delta$ belong to the root set $\mathcal{R}(\phi)$. Hence, in lines 9-11, we set node $u$ as a root and update InForest[$u$], Next[$u$], and RootIndex[$u$]. If the inequality in line 8 does not hold, in line 13 we use the function RandomSuccessor($u,\mathcal{G}$) to return a node selected uniformly at random from the out-neighbors of $u$ in $\mathcal{G}$. Then we update $u$ to Next[$u$] and go back to line 6. The while loop in line 6 stops when the random walk reaches a node already in the forest, at which point we have a newly created branch. In lines 15-20, we add the loop-erasure of this branch to the forest and update RootIndex accordingly.
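A Julia sketch of Algorithm 2, with the digraph represented by its out-neighbour lists (a representation and function name chosen here for illustration):

```julia
# RandomForest(G): root index r_ϕ(i) of every node i for a uniformly random
# spanning converging forest ϕ; out[i] is the list of out-neighbours of node i.
function random_forest(out::Vector{Vector{Int}})
    n = length(out)
    in_forest = falses(n)
    nxt = fill(-1, n)
    root_index = zeros(Int, n)
    for i in 1:n
        u = i
        while !in_forest[u]                        # random walk on the augmented graph G'
            if rand() <= 1 / (1 + length(out[u]))  # step to the absorbing node Δ: u becomes a root
                in_forest[u] = true
                nxt[u] = -1
                root_index[u] = u
            else
                nxt[u] = rand(out[u])              # RandomSuccessor(u, G)
                u = nxt[u]
            end
        end
        root_now = root_index[u]
        u = i
        while !in_forest[u]                        # keep only the loop-erased branch
            in_forest[u] = true
            root_index[u] = root_now
            u = nxt[u]
        end
    end
    return root_index
end
```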

We now analyze the time complexity of Algorithm 2. Before doing so, we present some properties of the diagonal elements $\omega_{ii}$ of matrix $\mathbf{\Omega}$ for all nodes $i\in V$.

Lemma 5.1

For any $i=1,2,\ldots,n$, the diagonal element $\omega_{ii}$ of matrix $\mathbf{\Omega}$ satisfies $\frac{1}{1+d_{i}}\leq\omega_{ii}\leq\frac{2}{2+d_{i}}$.
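A quick numerical check of Lemma 5.1 on the assumed toy digraph from the sketch in Section 3.2 (reusing $d$ and $\mathbf{\Omega}$ computed there; the tiny slack only absorbs floating-point round-off):

```julia
# 1/(1 + d_i) ≤ ω_ii ≤ 2/(2 + d_i) for every node i of the toy digraph.
@assert all(1 ./ (1 .+ d) .<= diag(Ω) .+ 1e-12)
@assert all(diag(Ω) .<= 2 ./ (2 .+ d) .+ 1e-12)
```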

Lemma 5.2

For any unweighted digraph $\mathcal{G}=(V,E)$, the expected time complexity of Algorithm 2 is $O(n)$.

Proof.  Wilson showed that the expected running time of generating a uniform spanning tree of a connected digraph $\mathcal{G}$ rooted at node $u$ equals a weighted average of the commute times between the root and the other nodes (Wilson 1996). Marchal rewrote this average commute time in terms of graph matrices in Proposition 1 of (Marchal 2000). According to Marchal's result, the expected running time of Algorithm 2 equals the trace $\sum_{i=1}^{n}\omega_{ii}(1+d_{i})$ of the matrix $\mathbf{\Omega}(\bm{\mathit{I}}+\bm{\mathit{D}})$. Using Lemma 5.1, we have $\sum_{i=1}^{n}\omega_{ii}(1+d_{i})\leq\sum_{i=1}^{n}\frac{2+2d_{i}}{2+d_{i}}\leq 2n\left(1-\frac{1}{n+1}\right)$. Thus, the expected time complexity of Algorithm 2 is $O(n)$.  \Box

5.3 Fast Approximation Algorithm

Here, by using (5), we present an efficient sampling-based algorithm Fast to estimate $\rho_{i}$ for all $i\in V$ and approximately solve the problem OpinionMin in linear time.

The key ingredient of the approximation algorithm Fast is the variant of Wilson's algorithm introduced in the preceding subsection. The details of algorithm Fast are described in Algorithm 3. First, by applying Algorithm 2, we generate $l$ random spanning converging forests $\phi_{1},\phi_{2},\ldots,\phi_{l}$. Then, we compute $\widehat{\rho}_{i}=\frac{1}{nl}\sum_{j=1}^{l}|M(\phi_{j},i)|$ for all $i\in V$. Note that each of these $l$ spanning converging forests is drawn uniformly from all spanning converging forests in $\mathcal{F}$ (Avena and Gaudillière 2018). Thus, we have $\mathbb{E}\left(\frac{1}{nl}\sum_{j=1}^{l}|M(\phi_{j},i)|\right)=\rho_{i}$, which implies that $\widehat{\rho}_{i}$ in Algorithm 3 is an unbiased estimate of $\rho_{i}$, and hence $\widehat{\rho}_{i}s_{i}$ is an unbiased estimate of $\rho_{i}s_{i}$. Finally, we choose the $k$ nodes in $V$ with the top-$k$ values of $\widehat{\rho}_{i}s_{i}$.

Input: $\mathcal{G}$: a digraph; $k$: size of the target set; $l$: number of generated spanning forests
Output: $\widehat{T}$: the target set
1  Initialize: $\widehat{T}\leftarrow\emptyset$, $\widehat{\rho}_{i}\leftarrow 0$, $i=1,2,\ldots,n$
2  for $t=1$ to $l$ do
3      RootIndex $\leftarrow$ RandomForest($\mathcal{G}$)
4      for $i=1$ to $n$ do
5          $u\leftarrow$ RootIndex[$i$]
6          $\widehat{\rho}_{u}\leftarrow\widehat{\rho}_{u}+1$
7  $\widehat{\rho}\leftarrow\widehat{\rho}/(nl)$     % $\widehat{\rho}=(\widehat{\rho}_{1},\widehat{\rho}_{2},\ldots,\widehat{\rho}_{n})^{\top}$
8  for $i=1$ to $k$ do
9      $u\leftarrow\arg\max_{q\in V\setminus\widehat{T}}\widehat{\rho}_{q}s_{q}$
10     $\widehat{T}\leftarrow\widehat{T}\cup\{u\}$
11 return $\widehat{T}$
Algorithm 3: Fast$(\mathcal{G},k,l)$
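A Julia sketch of Algorithm 3 that reuses the `random_forest` sketch above; the internal opinions are passed explicitly, and the names are again our own illustrative choices.

```julia
# Fast(G, k, l): approximate top-k nodes by ρ̂_i s_i from l sampled spanning converging forests.
function fast(out::Vector{Vector{Int}}, s::Vector{Float64}, k::Int, l::Int)
    n = length(out)
    ρ_hat = zeros(n)
    for _ in 1:l
        root_index = random_forest(out)
        for i in 1:n
            ρ_hat[root_index[i]] += 1          # node i counts towards the tree rooted at r_ϕ(i)
        end
    end
    ρ_hat ./= n * l                            # unbiased estimate of the structure centralities
    return partialsortperm(ρ_hat .* s, 1:k, rev=true)
end
```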
Theorem 5.3

The time complexity of Algorithm 3 is $O(ln)$.

Running Algorithm 3 requires determining the number $l$ of samplings, which governs the accuracy of $\widehat{\rho}_{i}s_{i}$ as an approximation of $\rho_{i}s_{i}$. In general, the larger the value of $l$, the more accurate the estimate $\widehat{\rho}_{i}s_{i}$ of $\rho_{i}s_{i}$. Next, we bound the number $l$ of sampled spanning converging forests required to guarantee a desired estimation precision for $\widehat{\rho}_{i}s_{i}$ by applying Hoeffding's inequality (Hoeffding 1963).

We now demonstrate that, with a proper choice of $l$, $\widehat{\rho}_{i}s_{i}$ as an estimator of $\rho_{i}s_{i}$ has an approximation guarantee for all $i\in V$. Specifically, we establish an $(\epsilon,\delta)$-approximation for $\widehat{\rho}_{i}s_{i}$: for any small parameters $\epsilon>0$ and $\delta>0$, the approximation error of $\widehat{\rho}_{i}s_{i}$ is bounded by $\epsilon$ with probability at least $1-\delta$. Theorem 5.4 shows how to choose $l$ so that $\widehat{\rho}_{i}s_{i}$ is an $(\epsilon,\delta)$-approximation of $\rho_{i}s_{i}$.

Figure 1: Average of equilibrium expressed opinions for our two algorithms Exact and Fast, and four baseline heuristics Random (Rand), In-degree (ID), Internal opinion (IO), and Expressed opinion (EO), on four directed real networks: (a) Filmtrust, (b) Dblp, (c) Humanproteins, and (d) P2p-Gnutella08.
Theorem 5.4

For any $\epsilon>0$ and $\delta\in(0,1)$, if $l$ is chosen obeying $l=\left\lceil\frac{1}{2\epsilon^{2}}\ln\frac{2}{\delta}\right\rceil$, then for any $i\in V$, $\mathbb{P}\left\{|\widehat{\rho}[i]s_{i}-\rho_{i}s_{i}|>\epsilon\right\}\leq\delta$.
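For concreteness, the sampling number prescribed by Theorem 5.4 is easy to evaluate; the values of $\epsilon$ and $\delta$ below are chosen only for illustration.

```julia
ε, δ = 0.01, 0.05
l = ceil(Int, log(2 / δ) / (2 * ε^2))   # l = ⌈ln(2/δ) / (2ε²)⌉; here l = 18445
```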

Recall that our problem aims to determine the optimal set $T$, which consists of the $k$ nodes with the largest $\rho_{i}s_{i}$. To avoid calculating $\rho_{i}$ directly, we propose a fast algorithm (Algorithm 3), which returns a set $\widehat{T}$ containing the top-$k$ nodes with the highest $\widehat{\rho}[i]s_{i}$. Based on Theorem 5.4 and a union bound, we can bound the gap between $g_{\widehat{T}}(\bm{\mathit{z}})$ and $g_{T}(\bm{\mathit{z}})$, as stated in the following theorem.

Theorem 5.5

For given parameters $k,\epsilon,\delta$, if $l$ is chosen according to Theorem 5.4, the inequality $|g_{\widehat{T}}(\bm{\mathit{z}})-g_{T}(\bm{\mathit{z}})|<2k\epsilon$ holds with high probability.

Proof.  By Theorem 5.4, suppose now that the inequalities $|\widehat{\rho}[i]s_{i}-\rho_{i}s_{i}|\leq\epsilon$ hold for all $i\in V$. Since the nodes in set $T$ have the top-$k$ values of $\rho_{i}s_{i}$, we have

g_{\widehat{T}}(\bm{\mathit{z}})-g_{T}(\bm{\mathit{z}})=\sum_{i\notin\widehat{T}}\rho_{i}s_{i}-\sum_{i\notin T}\rho_{i}s_{i}=\sum_{i\in T}\rho_{i}s_{i}-\sum_{i\in\widehat{T}}\rho_{i}s_{i}\geq 0.

By Theorem 5.4, one obtains

g_{\widehat{T}}(\bm{\mathit{z}})-g_{T}(\bm{\mathit{z}})\leq\sum_{i\in T}\widehat{\rho}[i]s_{i}-\sum_{i\in\widehat{T}}\rho_{i}s_{i}+k\epsilon\leq\sum_{i\in\widehat{T}}\widehat{\rho}[i]s_{i}-\sum_{i\in\widehat{T}}\rho_{i}s_{i}+k\epsilon\leq 2k\epsilon,

which completes the proof.  \Box

Therefore, for any fixed $k$, the number of samples does not depend on $n$.

6 Experiments

In this section, we conduct extensive experiments on various real-life directed networks in order to evaluate the performance of our two algorithms Exact and Fast in terms of effectiveness and efficiency. The data sets of the selected real networks are publicly available from KONECT (Kunegis 2013) and SNAP (Leskovec and Sosič 2016); their detailed information is presented in the first three columns of Table 1. In these networks, the number $n$ of nodes ranges from about 1 thousand to 24 million, and the number $m$ of directed edges ranges from about 2 thousand to 58 million. All our experiments are programmed in Julia using a single thread, and are run on a machine with a 4.2 GHz Intel i7-7700 CPU and 32 GB of main memory.

6.1 Effectiveness

We first compare the effectiveness of our algorithms Exact and Fast with four baseline schemes for node selection: Random, In-degree, Internal opinion, and Expressed opinion. Random selects $k$ nodes at random. In-degree chooses the $k$ nodes with the largest in-degrees, since a node with a high in-degree may have a strong influence on other nodes (Xu et al. 2020). Internal opinion and Expressed opinion have been used in (Gionis, Terzi, and Tsaparas 2013): Internal opinion returns the $k$ nodes with the largest original internal opinions, while Expressed opinion selects the $k$ nodes with the largest equilibrium expressed opinions in the FJ model corresponding to the original internal opinion vector.

Network | Nodes | Arcs | Exact (s) | Fast (s), $l=500$ | Fast (s), $l=1000$ | Fast (s), $l=2000$ | Rel. error ($\times 10^{-3}$), $l=500$ | $l=1000$ | $l=2000$
Filmtrust | 874 | 1,853 | 0.023 | 0.019 | 0.037 | 0.042 | 1.08 | 0.55 | 0.11
Humanproteins | 2,239 | 6,452 | 0.251 | 0.032 | 0.039 | 0.056 | 0.19 | 0.16 | 0.03
Adolescenthealth | 2,539 | 12,969 | 0.354 | 0.034 | 0.068 | 0.134 | 0.72 | 0.59 | 0.09
P2p-Gnutella08 | 6,301 | 20,777 | 4.825 | 0.067 | 0.123 | 0.244 | 0.83 | 0.64 | 0.04
Wiki-Vote | 7,115 | 103,689 | 7.405 | 0.078 | 0.156 | 0.312 | 1.13 | 0.87 | 0.13
Dblp | 12,590 | 49,744 | 40.870 | 0.106 | 0.210 | 0.419 | 0.38 | 0.13 | 0.08
Wikipedialinks | 17,649 | 296,918 | 110.744 | 0.248 | 0.477 | 0.932 | 1.32 | 0.97 | 0.06
Twitterlist | 23,370 | 33,101 | 259.484 | 0.127 | 0.250 | 0.498 | 0.25 | 0.12 | 0.01
P2p-Gnutella31 | 62,586 | 147,892 | - | 0.628 | 1.236 | 2.550 | - | - | -
Soc-Epinions | 75,879 | 508,837 | - | 1.260 | 2.501 | 4.973 | - | - | -
Email-EuAll | 265,009 | 418,956 | - | 3.016 | 5.929 | 11.844 | - | - | -
Stanford | 281,903 | 2,312,500 | - | 7.474 | 14.908 | 29.815 | - | - | -
NotreDame | 325,729 | 1,469,680 | - | 4.823 | 9.574 | 19.232 | - | - | -
BerkStan | 685,230 | 7,600,600 | - | 14.021 | 28.009 | 56.130 | - | - | -
Google | 875,713 | 5,105,040 | - | 26.583 | 53.655 | 106.005 | - | - | -
NorthwestUSA | 1,207,940 | 2,820,770 | - | 27.758 | 55.509 | 110.410 | - | - | -
WikiTalk | 2,394,380 | 5,021,410 | - | 20.277 | 37.622 | 75.105 | - | - | -
Greatlakes | 2,758,120 | 6,794,810 | - | 64.391 | 128.167 | 255.147 | - | - | -
FullUSA | 23,947,300 | 57,708,600 | - | 559.147 | 1116.550 | 2230.770 | - | - | -
Table 1: The running time and the relative error of Algorithms 1 and 3 on real networks for various sampling numbers $l$.

In our experiments, the number $l$ of samplings in algorithm Fast is set to 500. For each node $i$, its internal opinion $s_{i}$ is generated uniformly at random in the interval $[0,1]$. For each real network, we first calculate the equilibrium expressed opinions of all nodes and their average for the original internal opinions. Then, using our algorithms Exact and Fast and the four baseline strategies, we select $k=10,20,30,40,50$ nodes, change their internal opinions to 0, and recompute the average expressed opinion associated with the modified internal opinions. We also execute experiments for other distributions of internal opinions. For example, we consider the case where the internal opinions follow a normal distribution with mean 0 and variance 1. For this case, we perform a linear transformation mapping the internal opinions into the interval $[0,1]$, so that the smallest internal opinion is mapped to 0 and the largest to 1. As can be seen from Figure 1, for each network, algorithm Fast always returns a result close to the optimal solution of algorithm Exact for both the uniform distribution and the standardized normal distribution, outperforming the four baseline strategies.
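The linear rescaling used for the normally distributed opinions can be written in one line; the sample below is only an illustration of that mapping, with an assumed sample size.

```julia
using Random

x = randn(1000)                                       # raw internal opinions, drawn from N(0, 1)
s = (x .- minimum(x)) ./ (maximum(x) - minimum(x))    # smallest value ↦ 0, largest ↦ 1
```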

For the cases where the internal opinions obey a power-law or an exponential distribution, we do not report the results, since they are similar to those observed in Figure 1.

6.2 Efficiency and Scalability

As shown above, algorithm Fast has effectiveness similar to that of algorithm Exact. Below we show that algorithm Fast is also more efficient than algorithm Exact. To this end, in Table 1 we compare the performance of algorithms Exact and Fast. First, we compare the running time of the two algorithms on the real-life directed networks listed in Table 1. In this experiment, the internal opinion of every node in each network is drawn uniformly from $[0,1]$, $k$ is equal to 50, and $l$ is chosen to be 500, 1000, and 2000. As shown in Table 1, Fast is significantly faster than Exact for all $l$, and the gap becomes more pronounced as the number of nodes increases. For example, Exact fails to run on the last 11 networks in Table 1 due to time and memory limitations, whereas Fast still works well on these networks. In particular, algorithm Fast is scalable to massive networks with more than twenty million nodes, e.g., FullUSA with about $2.4\times 10^{7}$ nodes.

Table 1 also reports a quantitative comparison of the effectiveness of algorithms Exact and Fast. Let $g_{T}$ and $g_{\widehat{T}}$ denote the average opinions obtained by algorithms Exact and Fast, respectively, and let $\gamma=|g_{T}-g_{\widehat{T}}|/g_{T}$ be the relative error of $g_{\widehat{T}}$ with respect to $g_{T}$. The last three columns of Table 1 present the relative errors for different real networks and various numbers $l$ of samplings. From the results, we can see that for all networks and all tested $l$, the relative error $\gamma$ is negligible, with the largest value being less than 0.0014. Moreover, for each network, $\gamma$ decreases as $l$ increases. This again indicates that the results returned by Fast are very close to those of Exact. Therefore, algorithm Fast is both effective and efficient, and scales to massive graphs.

7 Conclusions

In this paper, we studied how to optimize social opinions based on the Friedkin-Johnsen (FJ) model in an unweighted directed social network with $n$ nodes and $m$ edges, where the internal opinion $s_{i}$, $i=1,2,\cdots,n$, of every node $i$ lies in the interval $[0,1]$. We concentrated on the problem of minimizing the average of the equilibrium opinions by selecting a set $U$ of $k\ll n$ nodes and modifying their internal opinions to 0. Although the problem seems combinatorial, we proved that there is an algorithm Exact solving it in $O(n^{3})$ time, which returns the $k$ optimal nodes with the top-$k$ values of $\rho_{i}s_{i}$, $i=1,\cdots,n$, where $\rho_{i}$ is the structure centrality of node $i$.

Although algorithm Exact avoids the naïve enumeration of all $\tbinom{n}{k}$ candidate sets $U$, it is not applicable to large graphs. To make up for this deficiency, we proposed a fast algorithm for the problem. To this end, we provided an interpretation of $\rho_{i}$ in terms of rooted spanning converging forests, and designed a fast sampling algorithm Fast to estimate $\rho_{i}$ for all nodes using a variant of Wilson's algorithm. The algorithm returns $k$ nodes with the largest values of $\rho_{i}s_{i}$ in $O(ln)$ time, where $l$ denotes the number of samplings. Finally, we performed experiments on many real directed networks of different sizes to demonstrate the performance of our algorithms. The results show that the effectiveness of algorithm Fast is comparable to that of algorithm Exact, and both are better than the baseline algorithms. Furthermore, Fast is much more efficient than Exact: Fast is scalable to massive graphs with over twenty million nodes, while Exact only applies to graphs with at most tens of thousands of nodes. It is worth mentioning that our algorithm can easily be extended or modified to handle weighted digraphs and to solve other optimization problems for opinion dynamics.

Acknowledgements

Zhongzhi Zhang is the corresponding author. This work was supported by the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), the National Natural Science Foundation of China (Nos. 61872093 and U20B2051), ZJ Lab, and Shanghai Center for Brain Science and Brain-Inspired Technology.

References

  • Abebe et al. (2018) Abebe, R.; Kleinberg, J.; Parkes, D.; and Tsourakakis, C. E. 2018. Opinion dynamics with varying susceptibility to persuasion. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1089–1098. ACM.
  • Agaev and Chebotarev (2001) Agaev, R. P.; and Chebotarev, P. Y. 2001. Spanning forests of a digraph and their applications. Automation and Remote Control, 62(3): 443–466.
  • Ahmadinejad et al. (2015) Ahmadinejad, A. M.; Dehghani, S.; Hajiaghayi, M. T.; Mahini, H.; and Yazdanbod, S. 2015. Forming external behaviors by leveraging internal opinions. In Proceedings of 2015 IEEE Conference on Computer Communications, 2728–2734. IEEE.
  • Anderson and Ye (2019) Anderson, B. D.; and Ye, M. 2019. Recent advances in the modelling and analysis of opinion dynamics on influence networks. International Journal of Automation and Computing, 16(2): 129–149.
  • Avena and Gaudillière (2018) Avena, L.; and Gaudillière, A. 2018. Two applications of random spanning forests. Journal of Theoretical Probability, 31(4): 1975–2004.
  • Bernardo et al. (2021) Bernardo, C.; Wang, L.; Vasca, F.; Hong, Y.; Shi, G.; and Altafini, C. 2021. Achieving consensus in multilateral international negotiations: The case study of the 2015 Paris Agreement on climate change. Science Advances, 7(51): eabg8068.
  • Bindel, Kleinberg, and Oren (2015) Bindel, D.; Kleinberg, J.; and Oren, S. 2015. How bad is forming your own opinion? Games and Economic Behavior, 92: 248–265.
  • Chaiken (1982) Chaiken, S. 1982. A combinatorial proof of the all minors matrix tree theorem. SIAM J. Alg. Disc. Meth., 3(3): 319–329.
  • Chan, Liang, and Sozio (2019) Chan, T.-H. H.; Liang, Z.; and Sozio, M. 2019. Revisiting opinion dynamics with varying susceptibility to persuasion via non-convex local search. In Proceedings of the 2019 World Wide Web Conference, 173–183. ACM.
  • Chebotarev and Agaev (2002) Chebotarev, P.; and Agaev, R. 2002. Forest matrices around the Laplacian matrix. Linear Algebra and its Applications, 356(1-3): 253–274.
  • Chebotarev and Shamis (1997) Chebotarev, P. Y.; and Shamis, E. V. 1997. The matrix-forest theorem and measuring relations in small social groups. Automation and Remote Control, 58(9): 1505–1514.
  • Chebotarev and Shamis (1998) Chebotarev, P. Y.; and Shamis, E. V. 1998. On proximity measures for graph vertices. Automation and Remote Control, 59(10): 1443–1459.
  • Chen, Lijffijt, and De Bie (2018) Chen, X.; Lijffijt, J.; and De Bie, T. 2018. Quantifying and minimizing risk of conflict in social networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1197–1205. ACM.
  • Das et al. (2013) Das, A.; Gollapudi, S.; Panigrahy, R.; and Salek, M. 2013. Debiasing social wisdom. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 500–508. ACM.
  • Degroot (1974) Degroot, M. H. 1974. Reaching a consensus. Journal of the American Statistical Association, 69(345): 118–121.
  • Dong et al. (2018) Dong, Y.; Zhan, M.; Kou, G.; Ding, Z.; and Liang, H. 2018. A survey on the fusion process in opinion dynamics. Information Fusion, 43: 57–65.
  • Friedkin (2011) Friedkin, N. E. 2011. A formal theory of reflected appraisals in the evolution of power. Administrative Science Quarterly, 56(4): 501–529.
  • Friedkin and Johnsen (1990) Friedkin, N. E.; and Johnsen, E. C. 1990. Social influence and opinions. Journal of Mathematical Sociology, 15(3-4): 193–206.
  • Friedkin et al. (2016) Friedkin, N. E.; Proskurnikov, A. V.; Tempo, R.; and Parsegov, S. E. 2016. Network science on belief system dynamics under logic constraints. Science, 354(6310): 321–326.
  • Ghaderi and Srikant (2014) Ghaderi, J.; and Srikant, R. 2014. Opinion dynamics in social networks with stubborn agents: Equilibrium and convergence rate. Automatica, 50(12): 3209–3215.
  • Gionis, Terzi, and Tsaparas (2013) Gionis, A.; Terzi, E.; and Tsaparas, P. 2013. Opinion maximization in social networks. In Proceedings of the 2013 SIAM International Conference on Data Mining, 387–395. SIAM.
  • He et al. (2020) He, G.; Zhang, W.; Liu, J.; and Ruan, H. 2020. Opinion dynamics with the increasing peer pressure and prejudice on the signed graph. Nonlinear Dynamics, 99: 1–13.
  • Hoeffding (1963) Hoeffding, W. 1963. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301): 13–30.
  • Jia et al. (2015) Jia, P.; MirTabatabaei, A.; Friedkin, N. E.; and Bullo, F. 2015. Opinion dynamics and the evolution of social power in influence networks. SIAM Review, 57(3): 367–397.
  • Kunegis (2013) Kunegis, J. 2013. Konect: the koblenz network collection. In Proceedings of the 22nd International World Wide Web Conference, 1343–1350. ACM.
  • Lawler (1980) Lawler, G. F. 1980. A self-avoiding random walk. Duke Mathematical Journal, 47(3): 655–693.
  • Ledford (2020) Ledford, H. 2020. How Facebook, Twitter and other data troves are revolutionizing social science. Nature, 582(7812): 328–330.
  • Leskovec and Sosič (2016) Leskovec, J.; and Sosič, R. 2016. SNAP: A general-purpose network analysis and graph-mining library. ACM Transactions on Intelligent Systems and Technology, 8(1): 1.
  • Luca et al. (2014) Luca, V.; Fabio, F.; Paolo, F.; and Asuman, O. 2014. Message passing optimization of harmonic influence centrality. IEEE Transactions on Control of Network Systems, 1(1): 109–120.
  • Marchal (2000) Marchal, P. 2000. Loop-erased random walks, spanning trees and Hamiltonian cycles. Electronic Communications in Probability, 5: 39–50.
  • Matakos, Terzi, and Tsaparas (2017) Matakos, A.; Terzi, E.; and Tsaparas, P. 2017. Measuring and moderating opinion polarization in social networks. Data Mining and Knowledge Discovery, 31(5): 1480–1505.
  • Musco, Musco, and Tsourakakis (2018) Musco, C.; Musco, C.; and Tsourakakis, C. E. 2018. Minimizing polarization and disagreement in social networks. In Proceedings of the 2018 World Wide Web Conference, 369–378.
  • Notarmuzi et al. (2022) Notarmuzi, D.; Castellano, C.; Flammini, A.; Mazzilli, D.; and Radicchi, F. 2022. Universality, criticality and complexity of information propagation in social media. Nature Communications, 13: 1308.
  • Pilavcı et al. (2021) Pilavcı, Y. Y.; Amblard, P.-O.; Barthelme, S.; and Tremblay, N. 2021. Graph Tikhonov regularization and interpolation via random spanning forests. IEEE transactions on Signal and Information Processing over Networks, 7: 359–374.
  • Proskurnikov and Tempo (2017) Proskurnikov, A. V.; and Tempo, R. 2017. A tutorial on modeling and analysis of dynamic social networks. Part I. Annual Reviews in Control, 43: 65–79.
  • Ravazzi et al. (2015) Ravazzi, C.; Frasca, P.; Tempo, R.; and Ishii, H. 2015. Ergodic randomized algorithms and dynamics over networks. IEEE Transactions on Control of Network Systems, 1(2): 78–87.
  • Semonsen et al. (2019) Semonsen, J.; Griffin, C.; Squicciarini, A.; and Rajtmajer, S. 2019. Opinion dynamics in the presence of increasing agreement pressure. IEEE Transactions on Cybernetics, 49(4): 1270–1278.
  • Tu and Neumann (2022) Tu, S.; and Neumann, S. 2022. A viral marketing-based model for opinion dynamics in online social networks. In Proceedings of the Web Conference, 1570–1578.
  • Wilson (1996) Wilson, D. B. 1996. Generating random spanning trees more quickly than the cover time. In Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, 296–303.
  • Wilson and Propp (1996) Wilson, D. B.; and Propp, J. G. 1996. How to get an exact sample from a generic markov chain and sample a random spanning tree from a directed graph, both within the cover time. In Proceedings of the Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, 448–457.
  • Xu et al. (2020) Xu, P.; Hu, W.; Wu, J.; and Liu, W. 2020. Opinion maximization in social trust networks. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, 1251–1257.
  • Xu, Bao, and Zhang (2021) Xu, W.; Bao, Q.; and Zhang, Z. 2021. Fast evaluation for relevant quantities of opinion dynamics. In Proceedings of The Web Conference, 2037–2045. ACM.
  • Yi, Castiglia, and Patterson (2021) Yi, Y.; Castiglia, T.; and Patterson, S. 2021. Shifting opinions in a social network through leader selection. IEEE Transactions on Control of Network Systems, 8(3): 1116–1127.
  • Yildiz et al. (2013) Yildiz, E.; Ozdaglar, A.; Acemoglu, D.; Saberi, A.; and Scaglione, A. 2013. Binary opinion dynamics with stubborn agents. ACM Transactions on Economics and Computation, 1(4): 1–30.
  • Zhang et al. (2020) Zhang, Z.; Xu, W.; Zhang, Z.; and Chen, G. 2020. Opinion dynamics incorporating higher-order interactions. In Proceedings of the 20th IEEE International Conference on Data Mining, 1430–1435. IEEE.
  • Zhou and Zhang (2021) Zhou, X.; and Zhang, Z. 2021. Maximizing influence of leaders in social networks. In Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2400–2408. ACM.
  • Zhu, Bao, and Zhang (2021) Zhu, L.; Bao, Q.; and Zhang, Z. 2021. Minimizing polarization and disagreement in social networks via link recommendation. Advances in Neural Information Processing Systems, 34: 2072–2084.
  • Zhu and Zhang (2022) Zhu, L.; and Zhang, Z. 2022. A Nearly-Linear Time Algorithm for Minimizing Risk of Conflict in Social Networks. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2648–2656.