
Seeding an Uncertain Technology

Eric Gao, Department of Economics, MIT (ericgao@mit.edu). I would like to thank Mohammad Akbarpour, Joshua Gross, Matthew O. Jackson, Daniel Luo, and Eric Tang for helpful feedback.
(September 16, 2025)
Abstract

I study how a startup with uncertainty over product quality and no knowledge of the underlying diffusion network optimally chooses initial seeds. To ensure widespread adoption when the product is good while minimizing negative perceptions when it is bad, the optimal number of initial seeds should grow logarithmically with network size. When agents have heterogeneous types that govern their connectivity, it is asymptotically optimal to seed agents of a single type: the type that minimizes the marginal cost per probability of making the product go viral. These results rationalize startup behavior in practice.

Keywords: Optimal Seeding, Diffusion, Inhomogeneous Random Networks.
JEL Codes: D85.

1 Introduction

Many product launches start within a small, particular group: Facebook started at Harvard, Apple’s first deal was for only 50 Apple I’s with the Byte Shop, Tesla’s 2008 reveal was a 350-person event, and many video games have restrictive beta testing programs. One common explanation for such behavior is that new companies do not have the funding to engage in larger launches or are only familiar with a particular niche. However, if small launches were not optimal, companies could always secure additional funding or hire experts in other niches. What, then, explains relatively contained product launches?

To answer this question, I formulate the problem a startup faces when deciding how to roll out a new product as an optimal seeding problem but with less initial information. Unlike established companies with more resources and expertise, startups face the unique challenge of having less information about how the product will spread. Such uncertainty comes from two potential sources. First, the startup may lack information about the quality of the product itself due to insufficient time and capital for extensive testing. Second, the startup may have little information on the underlying network structure that governs how adoption spreads. If maximizing adoption is the sole goal, one possible solution would be to simply seed everyone. However, this approach is clearly flawed: if the product is bad, everyone forms a bad impression of it, making it difficult to release a new and improved version in the future.

Instead, the startup can choose some set of individuals to initially seed (i.e., beta testers). If the product is good, it will spread widely; if bad, knowledge of its shortcomings remains localized. As such, exploiting network effects is one way to hedge against a potentially poor initial launch. This intuition guides the optimal seeding strategy. When the product is bad, components in the network are small and of constant expected size, leading to a constant “marginal cost” of additional seeds. When the product is good, a giant component emerges, and an additional seed is most beneficial when it is the first seed to hit the giant component. As such, the “marginal benefit” of additional seeds decreases as the probability that the giant component has already been hit grows.[1] Setting marginal cost equal to marginal benefit gives that the optimal number of seeds is logarithmic in network size. When there are many types of agents, it is furthermore optimal to seed only agents of a single type: the type with the lowest “marginal cost per probability” of hitting the giant component when the product is good. These findings rationalize the observed behavior of many now-successful startups.

[1] For the purposes of this paper, all costs and benefits stem solely from the number of users. In the case of platforms such as Instagram and Facebook, this is indeed the case: adding users is essentially costless, and profits come from ads and other revenue that scale with usage. However, the approach taken here can easily be generalized to accommodate non-zero costs of seeding, which is discussed later.

Related Literature

We relate most directly to the literature on optimal seeding. The closest paper in spirit is McAdams and Song (2025), which considers optimal marketing schemes when agents with private signals about product quality make strategic inferences and an irreversible decision on whether to adopt the product. However, their results are driven by timing (if a product were good, agents would have heard about it sooner rather than later) rather than by network structure.

Similarly, Iyer and Adamic (2019) consider when having too many initial seeds is potentially harmful. Their results stem from diffusion being driven by social forces: In their model, a customer’s experience using the product depends on how many neighbors are also using the product. Early adopters may have initially bad experiences and stop using the product, even after more people join. They show, through simulations, that seeding the “core” of a network is often preferred to initial mass adoption.

My results are driven by inherent uncertainty in the quality of the product. Most prior literature studying optimal seeding assumes full knowledge of the diffusion process, so that there is no uncertainty in the technology or information being seeded. For example, Sadler (2019) considers how to optimally influence public opinion in a network. However, there may be large uncertainties in how a particular message is perceived, varying especially with ideological predispositions. A post highlighting one party’s accomplishments may be subjectively interpreted by different individuals as:

  • A sign that that party can accomplish things;

  • A sign indicating whether a party’s priorities are right or wrong; or

  • A joke or satire, if the divergence in beliefs is large enough.

Furthermore, these differences also influence how content propagates. After observing content, individuals may comment in support of or in opposition to the original post; they may also re-post, share, or “quote” the post, adding their own thoughts, whether critical or supportive. Experimental testing to form precise beliefs about a post’s performance is infeasible given the fast pace of politics.

Akbarpour et al. (2023) also consider an inhomogeneous random network model to study the value of optimal seeding versus simply introducing additional seeds. They find that whenever significant diffusion is feasible, random seeding with a few more seeds outperforms optimal seeding. However, the sole objective in their setting is to maximize diffusion; taking this logic to its limit, the optimal policy is to just seed everyone. Yet in the context of information dissemination, Banerjee et al. (2023) find that selective seeding outperforms mass seeding, albeit for reasons different from the forces considered in this paper. In their setting, mass seeding (and in particular, common knowledge of the fact that everyone is initially exposed to the information) discourages individuals from asking clarifying questions out of fear of seeming slow to process information.

Other papers have also studied the optimal seeding problem. Banerjee et al. (2019) find that individuals within a particular network are good at identifying influential nodes (without macro-level knowledge of the entire network structure). Sadler (2025) derives tractable and less measurement-intensive methods to compute which group of individuals within a network should be seeded. Keng and Kwa (2025) study the (random) linear threshold model, in which a node adopts a technology if at least a constant fraction of its neighbors have already adopted, and characterize the probability of full contagion based on the set of initial adopters. Unfortunately, a common theme in the literature is that analytically finding the optimal (set of) seeds is generally intractable.

2 Model

We use the inhomogeneous random networks (IRN) model developed by Bollobás et al. (2007). A network is a graph G=(N,E) consisting of a set of agents N and a set of edges E\subset N\times N. We consider sequences of networks as |N| grows large, as is common in the literature. Each agent has some type n_{i} drawn from the set \mathcal{T}, with |N_{i}| denoting the number of agents of type i. For each i, let

\mu(i)=\lim_{|N|\to\infty}\frac{|N_{i}|}{|N|}

represent the proportion of agents that are of type i when the network grows large. There are two states of the world: either the product is good (G) or it is bad (B).[2]

[2] A richer model in which the designer has a distribution of beliefs over more than two states of the world, with the probabilities of users having good or bad experiences depending on the state, exhibits similar qualitative behavior but is much less tractable.

An agent of type i shares an edge with each agent of type j with probability \kappa^{X}(i,j)/|N| when the state of the world is X\in\{G,B\}. As the network is undirected, \kappa^{X}(i,j)=\kappa^{X}(j,i) for all i,j\in\mathcal{T}. For large networks, \kappa^{X}(i,j) is then the expected number of type-j neighbors an agent of type i has when the state of the world is X. Let

T^{X}=\left[\kappa^{X}(i,j)\right]_{i,j\in\mathcal{T}}\text{ for }X\in\{G,B\}

denote the kernel matrix when the state of the world is X. The largest eigenvalue of T^{X}, denoted \lambda_{1}^{X}, determines the macroscopic behavior of the IRN. If this eigenvalue is less than one, there are many small disjoint components, with the largest having O(\log|N|) agents. If the largest eigenvalue is greater than one, then there is one giant component containing a positive fraction of all agents (and many smaller disjoint components, once again bounded by O(\log|N|) in size). We assume that \lambda_{1}^{G}>1 and \lambda_{1}^{B}<1 to keep the problem interesting. Economically, this corresponds to good products blowing up and going viral while bad products fail to do so, as individuals are unlikely to adopt bad products.
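As a concrete check of this assumption, the largest eigenvalue of each kernel matrix can be computed directly. Below is a minimal numerical sketch; the two-type kernel values are hypothetical and not taken from the paper, and the check follows the criterion on T^{X} as stated above (with non-uniform type proportions, the corresponding operator in Bollobás et al. (2007) weights \kappa(i,j) by \mu(j)).

    import numpy as np

    # Hypothetical two-type kernels: T_G for the good state, T_B for the bad state.
    T_G = np.array([[1.5, 0.8],
                    [0.8, 1.2]])
    T_B = np.array([[0.4, 0.2],
                    [0.2, 0.3]])

    def largest_eigenvalue(T):
        """Largest eigenvalue of a symmetric kernel matrix."""
        return float(np.max(np.linalg.eigvalsh(T)))

    lam_G = largest_eigenvalue(T_G)  # ~2.16 > 1: supercritical, a giant component exists
    lam_B = largest_eigenvalue(T_B)  # ~0.56 < 1: subcritical, only small components
    assert lam_G > 1 and lam_B < 1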

The designer knows agent types but not the underlying network structure and chooses to seed a subset of agents S\subseteq N. This is equivalent to the designer choosing a number of initial seeds of each type and then seeding randomly accordingly, as the designer does not know the realized network. In many social media platform examples, the set of types can be thought of as which university an individual attends, with the designer able to choose how initial seeds are dispersed or concentrated across universities (e.g., Facebook, BeReal, and Fizz all initially started on college campuses). After initial seeds are chosen, nature draws the state of the world from \{G,B\} and a network is realized according to either T^{G} or T^{B}. An agent adopts the technology if there is a path from some initial seed to them (i.e., agent i adopts if they are in the same component as an initially seeded agent j). Let A^{X}(S,N) denote the expected number of agents who adopt when the state of the world is X, the set of agents is N, and the set of initial seeds is S. The designer values adoption when the state of the world is good and dislikes adoption when the state of the world is bad. Let

S^{*}(N)=\operatorname*{\arg\!\max}_{S\subseteq N}\left\{A^{G}(S,N)-\lambda A^{B}(S,N)\right\}

be the solution to the designer’s problem, where \lambda encodes the designer’s relative weights for each state of the world. For example, \lambda can incorporate the designer’s prior belief over the two states or whether the designer cares more or less about good versus bad customer experiences. We seek to characterize S^{*}(N) as |N| grows large.

3 Optimal Seeding

Optimal seeding takes on a simple structure: Seed an “optimal” type a logarithmic number of times, where “optimal” can be determined purely by model fundamentals.

Theorem 1.

For any kernels \kappa^{G},\kappa^{B}:

  1. |S^{*}(N)|=\Theta(\log|N|);

  2. It is asymptotically optimal (at rate O(1/|N|)) to seed only agents of the type with the lowest marginal cost per probability of reaching the giant component.

We will prove the first point in the simpler case of an Erdős-Rényi random graph (a special case of an IRN when there is only a single type). Next, we will take a more reduced-form approach to prove the second point. Intuitions will be developed along the way.

Proof.

Consider an Erdős-Rényi random graph with one type i and let \kappa^{X}=\kappa^{X}(i,i) be the expected number of neighbors each agent has. In this setting, the designer simply chooses the number of agents to initially seed. Our assumption of subcritical behavior when the product is bad and supercritical behavior when the product is good is then \kappa^{B}<1 and \kappa^{G}>1.

When the product is bad, components have expected size \frac{1}{1-\kappa^{B}} and, with probability approaching one as |N| grows large, the largest component is O(\log|N|). As such, every component is vanishing relative to the network. This implies that as long as

\lim_{|N|\to\infty}\frac{\log(|N|)\cdot|S^{*}(N)|}{|N|}=0

the probability of seeding any component twice goes to zero; note that this is satisfied by |S^{*}(N)|=\Theta(\log|N|). Thus,

A^{B}(S,N)=|S|\cdot\frac{1}{1-\kappa^{B}}.

When the product is good, there is a giant component that contains a positive fraction of all agents. In particular, let y be the unique positive solution to

1-y=\exp\left(-\kappa^{G}y\right).
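This fixed point is easy to compute numerically. A minimal sketch (the value \kappa^{G}=2 is purely illustrative):

    import math

    def giant_component_fraction(kappa_G, iters=10_000, tol=1e-12):
        """Solve 1 - y = exp(-kappa_G * y) for the root y in (0, 1)
        by fixed-point iteration; valid when kappa_G > 1."""
        y = 0.5
        for _ in range(iters):
            y_new = 1.0 - math.exp(-kappa_G * y)
            if abs(y_new - y) < tol:
                break
            y = y_new
        return y

    print(giant_component_fraction(2.0))  # ~0.797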

Then, with probability approaching one as |N| grows large, there is a component of size approximately y|N| and each agent has probability y of being in the giant component. Each remaining component has expected size \frac{1}{1-(1-y)\kappa^{G}}. What is A^{G}(S,N)? The chance that the giant component is not seeded is (1-y)^{|S|}, and in expectation |S|(1-y) seeds do not hit the giant component. Thus,

A^{G}(S,N)=\left(1-(1-y)^{|S|}\right)y|N|+|S|(1-y)\cdot\frac{1}{1-(1-y)\kappa^{G}}.
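For reference, a minimal sketch of these two expected-adoption formulas (any inputs passed to it are placeholders):

    def expected_adoption(S, N, kappa_G, kappa_B, y):
        """Expected adoption in each state with S randomly placed seeds,
        using A^B(S,N) and A^G(S,N) as derived above."""
        A_B = S / (1.0 - kappa_B)
        A_G = (1.0 - (1.0 - y) ** S) * y * N \
            + S * (1.0 - y) / (1.0 - (1.0 - y) * kappa_G)
        return A_G, A_B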

With A^{B} and A^{G} in mind, we now solve the designer’s problem. The “marginal cost” of another seed is

\frac{\lambda}{1-\kappa^{B}}

as each additional seed hits a component of that size in expectation, scaled by \lambda. Note that this marginal cost is constant: it does not depend on which agents the designer has already seeded. The “marginal benefit” of another seed when the designer has already seeded |S| agents is

y\cdot\left[(1-y)^{|S|}y|N|\right]+(1-y)\left[\frac{1}{1-(1-y)\kappa^{G}}\right].

Intuitively, with probability y the additional seed hits the giant component but is useful only if the giant component has not already been seeded. Then, the designer seeds as long as the marginal benefit outweighs the marginal cost:

y\cdot\left[(1-y)^{|S|}y|N|\right]+(1-y)\left[\frac{1}{1-(1-y)\kappa^{G}}\right]\geq\frac{\lambda}{1-\kappa^{B}}.

Solving for |S| gives

|S|\leq\frac{1}{\log\left(\frac{1}{1-y}\right)}\left[\log(|N|)+2\log(y)-\log\left(\frac{\lambda}{1-\kappa^{B}}-\frac{1-y}{1-(1-y)\kappa^{G}}\right)\right]

so

|S^{*}(N)|=\left\lceil\frac{1}{\log\left(\frac{1}{1-y}\right)}\left[\log(|N|)+2\log(y)-\log\left(\frac{\lambda}{1-\kappa^{B}}-\frac{1-y}{1-(1-y)\kappa^{G}}\right)\right]\right\rceil.

As y,\lambda,\kappa^{B},\kappa^{G} are constants, |S^{*}(N)|=\Theta(\log|N|). In the limit as the graph grows large, only y matters; this corresponds to all other parameters affecting only a vanishing portion of the network.
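A minimal sketch that evaluates this closed-form expression, recomputing y internally; all parameter values in the example call are hypothetical:

    import math

    def optimal_seed_count(N, kappa_G, kappa_B, lam):
        """Optimal number of seeds in the Erdos-Renyi case, assuming
        kappa_B < 1 < kappa_G and a positive term inside the last log."""
        # Giant-component fraction: iterate 1 - y = exp(-kappa_G * y).
        y = 0.5
        for _ in range(10_000):
            y = 1.0 - math.exp(-kappa_G * y)
        inner = lam / (1.0 - kappa_B) - (1.0 - y) / (1.0 - (1.0 - y) * kappa_G)
        S = (math.log(N) + 2.0 * math.log(y) - math.log(inner)) / math.log(1.0 / (1.0 - y))
        return math.ceil(S)

    print(optimal_seed_count(1_000_000, kappa_G=2.0, kappa_B=0.5, lam=1.0))  # 9 with these hypothetical inputs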

To illustrate the solution, consider the problem Instagram faced when it launched in 2010. Back then, the world population was around 7 billion, which we use for |N|. We can estimate y by taking the current fraction of people who use Instagram: approximately 2 billion out of the world’s 8.1 billion. Plugging these estimates into the non-vanishing portion of |S^{*}(N)| gives

|S^{*}(N)|=\frac{\log(|N|)}{\log\left(\frac{1}{1-y}\right)}\approx\frac{\log(7,000,000,000)}{\log\left(\frac{1}{1-2/8.1}\right)}\approx 80

initial seeds to be optimal. Instagram co-founder Kevin Systrom once stated that Instagram indeed started with around 100 beta testers who spread the app before it officially launched.[3] While this back-of-the-envelope calculation and the underlying model are not a perfect representation of the world, our results align well with the observed data.

[3] https://www.quora.com/How-many-beta-users-did-Instagram-have-right-before-launch
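The arithmetic behind this estimate can be checked in a few lines (same rough inputs as above):

    import math

    N = 7_000_000_000      # approximate world population at launch (2010)
    y = 2 / 8.1            # roughly 2 billion users out of 8.1 billion people today
    print(round(math.log(N) / math.log(1 / (1 - y))))  # ~80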

Let us now consider the IRN case. Behavior is similar, but component sizes are more difficult to pin down. In this model, each node of type i has probability y(i) of being in the giant component when the product is good, where the function y(\cdot) solves

1-y(i)=\exp\left(-\int_{j\in\mathcal{T}}\kappa^{G}(i,j)y(j)\,d\mu(j)\right)\text{ for all }i\in\mathcal{T}.

Then, the expected size of the giant component when the product is good is

y|N|\text{ for }y=\int_{j\in\mathcal{T}}y(j)\,d\mu(j).
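With finitely many types, the integral becomes a sum weighted by the type proportions \mu, and the fixed point can again be computed by iteration. A minimal sketch; the two-type kernel and proportions are hypothetical:

    import numpy as np

    def irn_giant_component(kappa_G, mu, iters=10_000, tol=1e-12):
        """Solve 1 - y(i) = exp(-sum_j kappa_G[i, j] * y(j) * mu[j]) for a
        finite type space; returns the vector y(i) and the overall fraction y."""
        y = np.full(len(mu), 0.5)
        for _ in range(iters):
            y_new = 1.0 - np.exp(-(kappa_G * mu) @ y)
            if np.max(np.abs(y_new - y)) < tol:
                break
            y = y_new
        return y, float(y @ mu)

    kappa_G = np.array([[3.0, 1.0],
                        [1.0, 1.5]])      # type 0 is better connected than type 1
    mu = np.array([0.5, 0.5])             # equal proportions of each type
    y_by_type, y_total = irn_giant_component(kappa_G, mu)
    print(y_by_type, y_total)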

The expected size of the small component a type-i agent is in when the product is bad is

C^{B}(i)=[(I-T^{B})^{-1}\mathbbm{1}]_{i}

where the subscript i denotes the ith component of a vector. Similarly, the expected size of the small component a type-i agent is in when the product is good, conditional on not being part of the giant component, is

C^{G}(i)=[(I-\hat{T}^{G})^{-1}\mathbbm{1}]_{i}

where

\hat{T}^{G}=\left[\kappa^{G}(i,j)(1-y(j))\right]_{i,j\in\mathcal{T}}

is the dual kernel governing how agents not in the giant component are connected.
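A minimal sketch of these component-size formulas, taking the kernel matrices and y(\cdot) as given (e.g., from the sketch above) and assuming the relevant matrices are subcritical so that the inverses are well defined:

    import numpy as np

    def expected_small_component_sizes(T_B, kappa_G, y):
        """C^B(i) = [(I - T^B)^{-1} 1]_i and C^G(i) = [(I - T_hat^G)^{-1} 1]_i,
        where T_hat^G[i, j] = kappa_G[i, j] * (1 - y[j]) is the dual kernel."""
        n = len(y)
        ones = np.ones(n)
        C_B = np.linalg.solve(np.eye(n) - T_B, ones)
        T_hat_G = kappa_G * (1.0 - y)     # scales column j by (1 - y(j))
        C_G = np.linalg.solve(np.eye(n) - T_hat_G, ones)
        return C_B, C_G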

In the Erdős-Rényi random graph case, there was only one decision: when to stop seeding. In the general IRN case, the problem of selecting which types to seed is a combinatorially difficult integer program. Instead, we will first consider a continuous relaxation by dropping the requirement that the number of seeds of each type be an integer. Repeating a similar process as before, the marginal utility (marginal benefit minus marginal cost) from seeding an agent of type i, when the probability that the giant component has already been seeded is q, is

y(i)\cdot\left[(1-q)y|N|\right]+(1-y(i))C^{G}(i)-\lambda C^{B}(i).

Then, agents of type i should be seeded until

q\geq 1-\frac{\lambda C^{B}(i)-(1-y(i))C^{G}(i)}{y(i)y|N|}

so the optimal probability to seed the giant component must be

q^{*}(N)=\max_{i\in\mathcal{T}}\left\{1-\frac{\lambda C^{B}(i)-(1-y(i))C^{G}(i)}{y(i)y|N|}\right\}

since any less would mean the marginal utility of seeding some type is strictly positive, and any more would mean that a marginal reduction in seeds of any type would be strictly beneficial. With this optimal probability fixed, the optimal set of seeds must then solve the following optimization problem:

\begin{split}S^{R}(N)=\operatorname*{\arg\!\min}_{\{S_{i}\}_{i\in\mathcal{T}}}&\left\{\sum_{i\in\mathcal{T}}S_{i}\left[\lambda C^{B}(i)-(1-y(i))C^{G}(i)\right]\right\}\\ \text{s.t. }&1-\prod_{i\in\mathcal{T}}(1-y(i))^{S_{i}}\geq q^{*}(N)\end{split} (1)

where S^{R}(N) specifies a (possibly non-integer) number S_{i} of seeds of each type i under the relaxed problem. Taking logarithms of the constraint shows that (1) is equivalent to

\begin{split}S^{R}(N)=\operatorname*{\arg\!\min}_{\{S_{i}\}_{i\in\mathcal{T}}}&\left\{\sum_{i\in\mathcal{T}}S_{i}\left[\lambda C^{B}(i)-(1-y(i))C^{G}(i)\right]\right\}\\ \text{s.t. }&\sum_{i\in\mathcal{T}}\left[S_{i}\log(1-y(i))\right]\leq\log(1-q^{*}(N)).\end{split} (2)

This is now a linear program with a single linear constraint, so the solution is to seed only agents of type

j^{*}=\operatorname*{\arg\!\min}_{j\in\mathcal{T}}\left\{\frac{\lambda C^{B}(j)-(1-y(j))C^{G}(j)}{-\log(1-y(j))}\right\}

until the probability of seeding the giant component hits at least q^{*}(N). In particular, this corresponds to seeding

S^{R}_{j^{*}}(N)=\frac{\log(1-q^{*}(N))}{\log(1-y(j^{*}))}.

Finally, undoing the relaxation by rounding up gives

S^{*}_{j^{*}}(N)=\left\lceil S^{R}_{j^{*}}(N)\right\rceil.
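Putting these steps together, below is a minimal sketch that, given the per-type quantities y(i), C^{B}(i), and C^{G}(i) (e.g., computed via the sketches above), returns q^{*}(N), the single type j^{*} to seed, and the rounded-up seed count. It assumes \lambda C^{B}(i)>(1-y(i))C^{G}(i) for each type and N large enough that q^{*}(N)\in(0,1):

    import math

    def optimal_type_and_seeds(N, lam, y_by_type, y_total, C_B, C_G):
        """Seed only type j* until the giant component is hit with probability q*(N)."""
        types = range(len(y_by_type))
        # Net per-seed cost of a type-i seed (the numerator in the expressions above).
        cost = [lam * C_B[i] - (1.0 - y_by_type[i]) * C_G[i] for i in types]
        q_star = max(1.0 - cost[i] / (y_by_type[i] * y_total * N) for i in types)
        # Type with the lowest cost per unit of -log(1 - y(i)).
        j_star = min(types, key=lambda j: cost[j] / (-math.log(1.0 - y_by_type[j])))
        seeds = math.ceil(math.log(1.0 - q_star) / math.log(1.0 - y_by_type[j_star]))
        return j_star, q_star, seeds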

The designer’s utility under S^{*}_{j^{*}}(N) is at most \lambda C^{B}(j^{*})-(1-y(j^{*}))C^{G}(j^{*}) less than the designer’s utility under S^{R}_{j^{*}}(N) in the relaxed problem. The designer’s utility in the relaxed problem is an upper bound on the designer’s utility under the optimal solution to the original integer problem. The designer’s value from the problem therefore grows as

\Theta(q^{*}(N)\cdot|N|)=\Theta\left(\max_{i\in\mathcal{T}}\left\{1-\frac{\lambda C^{B}(i)-(1-y(i))C^{G}(i)}{y(i)y|N|}\right\}|N|\right)
=\Theta\left(\max_{i\in\mathcal{T}}\left\{|N|-\frac{\lambda C^{B}(i)-(1-y(i))C^{G}(i)}{y(i)y}\right\}\right)=\Theta(|N|)

so the constant gap between seeding only type j^{*} and the optimal solution vanishes relative to the designer’s value at rate O(1/|N|).

Empirically, this parallels how many startups drew their initial users from a very specific group: Facebook initially targeted Harvard students, WhatsApp was first used by Russian immigrants in the Bay Area, and Spotify’s beta testers were Swedish music bloggers. In the case of WhatsApp, the initial app was poorly received and only gained traction after integrating new smartphone updates into the product, which makes minimizing negative initial impressions important. These solutions also require minimal knowledge to implement: the startup only needs to know the marginal costs/benefits and probabilities for type j^{*}.

4 Discussion

I view these results as complementary to other forces that may explain how startups behave when faced with the optimal seeding problem. Conventional wisdom suggests starting in a specific niche to better know an audience or focusing on one geographic area to better understand regulatory red tape. My model can easily be adapted to accommodate both situations. A better understanding of certain audiences can be modeled via changes in \kappa^{G},\kappa^{B}, leading certain types to have different y(i),C^{G}(i),C^{B}(i) values. Similarly, adding constant marginal costs of seeding individual agents does not change the logarithmic scaling of optimal seeding, while costs based on the number of types seeded push the designer further toward seeding only a single group. My results hold under both of these generalizations.

Another potential concern is that the IRN model is not representative of real-world networks. In particular, many social media apps are used only if there is high clustering among friends, but clustering is generally not present in random graphs. This provides another rationale for keeping initial seeds centralized and is especially pertinent in cases such as Facebook, where a critical mass of individuals needed to be seeded for anything to happen. My work provides an intuitive answer to the question of why Facebook did not seed a second or third critical mass of early adopters at universities outside of Harvard: if Facebook was a good product, then a single mass of early adopters was enough to make the product go viral with high probability; if it was bad, additional masses would only have created additional bad impressions.

My work also contributes to the ex-ante design of optimal seeding strategies. In the standard influence maximization problem, the designer has some exogenously determined number of seeds and richer network information and needs to determine which particular agents (instead of just agent types) are optimal to seed. However, upstream of this problem, the designer often needs to acquire funding (a startup pitching to VC firms before having the resources to conduct market research, or an experimental economist writing a grant before conducting a field experiment) before knowing the network structure; this initial amount of funding then becomes the constraint on the number of seeds in the downstream influence maximization problem. Combined with Akbarpour et al. (2023)’s insight that a few additional seeds are more impactful than rich network knowledge, this suggests that the designer’s initial funding request should grow logarithmically in the size of the network.

There are several avenues for further research. The most interesting is incorporating dynamics into this model of diffusion. Beta testers are useful not only for spreading the word but also for providing feedback about the product. Similarly, additional users provide the designer with revenue that can be re-invested in research and development to improve the product. As such, product quality endogenously changes as more and more agents adopt the product. With a better-quality product, the designer may want to seed new agents, which matches the waves of “brand ambassadors” observed at various companies. Designers may also have preferences over the speed of adoption, which a dynamic model would also address.

References

  • Akbarpour et al. (2023) Akbarpour, Mohammad, Suraj Malladi, and Amin Saberi. 2023. “Just a Few Seeds More: The Inflated Value of Network Data for Diffusion.” https://web.stanford.edu/~mohamwad/NetworkSeeding.pdf.
  • Banerjee et al. (2023) Banerjee, Abhijit, Emily Breza, Arun G Chandrasekhar, and Benjamin Golub. 2023. “When Less Is More: Experimental Evidence on Information Delivery During India’s Demonetisation.” The Review of Economic Studies 91 1884–1922. 10.1093/restud/rdad068.
  • Banerjee et al. (2019) Banerjee, Abhijit, Arun G Chandrasekhar, Esther Duflo, and Matthew O Jackson. 2019. “Using Gossips to Spread Information: Theory and Evidence from Two Randomized Controlled Trials.” The Review of Economic Studies 86 2453–2490. 10.1093/restud/rdz008.
  • Bollobás et al. (2007) Bollobás, Béla, Svante Janson, and Oliver Riordan. 2007. “The phase transition in inhomogeneous random graphs.” Random Structures and Algorithms 31 3–122. 10.1002/rsa.20168.
  • Iyer and Adamic (2019) Iyer, Shankar, and Lada A. Adamic. 2019. “When can overambitious seeding cost you?” Applied Network Science 4. 10.1007/s41109-019-0146-z.
  • Keng and Kwa (2025) Keng, Ying Ying, and Kiam Heong Kwa. 2025. “Contagion probability in linear threshold model.” Applied Mathematics and Computation 487 129090. 10.1016/j.amc.2024.129090.
  • McAdams and Song (2025) McAdams, David, and Yangbo Song. 2025. “Adoption epidemics and viral marketing.” Theoretical Economics 20 453–480. 10.3982/te5886.
  • Sadler (2019) Sadler, Evan. 2019. “Influence Campaigns.” SSRN Electronic Journal. 10.2139/ssrn.3371835.
  • Sadler (2025) Sadler, Evan. 2025. “Seeding a Simple Contagion.” Econometrica 93 71–93. 10.3982/ecta22448.