Sublinear Random Access Generators for Preferential Attachment Graphs
Abstract
We consider the problem of sampling from a distribution on graphs, specifically when the distribution is defined by an evolving graph model, and consider the time, space and randomness complexities of such samplers.
In the standard approach, the whole graph is chosen randomly according to the randomized evolving process, stored in full, and then queries on the sampled graph are answered by simply accessing the stored graph. This may require prohibitive amounts of time, space and random bits, especially when only a small number of queries are actually issued. Instead, we propose to generate the graph on-the-fly, in response to queries, and therefore to require amounts of time, space, and random bits which are a function of the actual number of queries.
We focus on two random graph models: the Barabási-Albert Preferential Attachment model (BA-graphs) [3] and the random recursive tree model [25]. We give on-the-fly generation algorithms for both models. With probability $1 - \frac{1}{\operatorname{poly}(n)}$, each and every query is answered in $\operatorname{polylog}(n)$ time, and the increase in space and the number of random bits consumed by any single query are both $\operatorname{polylog}(n)$, where $n$ denotes the number of vertices in the graph.
Our results show that, although the BA random graph model is defined by a sequential process, efficient random access to the graph’s nodes is possible. In addition to the conceptual contribution, efficient on-the-fly generation of random graphs can serve as a tool for the efficient simulation of sublinear algorithms over large BA-graphs, and the efficient estimation of their performance on such graphs.
1 Introduction
Consider a Markov process in which a sequence of states, $X_1, X_2, \ldots$, evolves over time. Suppose there is a set $P$ of predicates defined over the state space. Namely, for every predicate $p \in P$ and state $X_t$, the value of $p(X_t)$ is well defined. A query is a pair $(p, t)$ and the answer to the query is $p(X_t)$. In the general case, answering a query $(p, t)$ requires letting the Markov process run for $t$ steps until $X_t$ is generated. In this paper we are interested in ways to reduce the dependency on $t$ of the computation time, the memory space, and the number of random bits required to answer a query $(p, t)$.
We focus on the case of generative models for random graphs, and in particular, on the Barabási-Albert Preferential Attachment model [3] (which we call BA-graphs), on the equivalent linear evolving copying model of Kumar et al. [11], and on the random recursive tree model [25]. The question we address is whether one can design a randomized on-the-fly graph generator that answers adjacency list queries of BA-graphs (or random recursive trees) without having to generate the complete graph. Such a generator outputs answers to adjacency list queries as if it first selected the whole graph at random (according to the appropriate distribution) and then answered the queries based on the sampled graph.
We are interested in the following resources of a graph generator: (1) the number of random bits consumed per query, (2) the running time per query, and (3) the increase in memory space per query.
Our main result is a randomized on-the-fly graph generator for BA-graphs over $n$ vertices that answers adjacency list queries. The generated graph is sampled according to the distribution defined for BA-graphs over $n$ vertices, and the complexity upper bounds that we prove hold with probability $1 - \frac{1}{\operatorname{poly}(n)}$. That is, with probability $1 - \frac{1}{\operatorname{poly}(n)}$, each and every query is answered in $\operatorname{polylog}(n)$ time, and the increase in space, and the number of random bits consumed during that query, are $\operatorname{polylog}(n)$. Our result refutes (at least when the number of queries is sublinear in $n$) the recent statement of Kolda et al. [10] that: “The majority of graph models add edges one at a time in a way that each random edge influences the formation of future edges, making them inherently serial and therefore unscalable. The classic example is Preferential Attachment, but there are a variety of related models…”
We remark that the entropy of the edges in BA-graphs is $\Omega(\log n)$ per edge in the second half of the graph [24]. Hence it is not possible to consume a sublogarithmic number of random bits per query in the worst case if one wants to sample according to the BA-graph distribution. Similarly, to ensure consistency (i.e., answer the same query twice in the same way), one must use $\Omega(\log n)$ bits of space per query.
From a conceptual point of view, the main ingredient of our result is a set of techniques to “invert” the sequential process in which each new vertex randomly selects its “parent” in the graph among the previous vertices. Instead, vertices randomly select their “children” among the “future” vertices, while maintaining the same probability distribution as if each child picked, “in the future”, its parent. We apply these techniques in the related model of random recursive trees [25] (also used within the evolving copying model [11]), and use them as a building block for our main result for BA-graphs.
1.1 Related work
A linear time randomized algorithm for efficiently generating BA-graphs is given in Batagelj and Brandes [4]. See also Kumar et al. [11] and Nobari et al. [19]. A parallel algorithm is given in Alam et al. [1]. See also Yoo and Henderson [26]. An external memory algorithm was presented by Meyer and Penschuck [17].
Goldreich, Goldwasser and Nussboim initiated the study of the generation of huge random objects [8] while using a “small” amount of randomness. They provide efficient query access to an object modeled as a function, when the object has a predetermined property, for example graphs which are connected. They guarantee that these objects are indistinguishable from random objects that have the same property. This refers to the setting where the size of the object is exponential in the number of queries to the function modeling the object. We note that our generator provides access to graphs which are random BA-graphs, and not just indistinguishable from random BA-graphs.
Mansour, Rubinstein, Vardi and Xie [15] consider local generation of bipartite graphs for local simulation of Balls-into-Bins online algorithms. They assume that the balls arrive one by one, that each ball picks a number of bins independently, and that the ball is then assigned to one of them. The local simulation of the algorithm locally generates a bipartite graph. Mansour et al. show that with high probability one needs to inspect only a small portion of the bipartite graph in order to run the simulation, and hence a random seed of logarithmic size is sufficient.
1.2 Applications
One reason for generating large BA-graphs is to simulate algorithms over them. Such algorithms often access only small portions of the graphs. In such instances, it is wasteful to generate the whole graph. An interesting example is sublinear approximation algorithms [21, 27, 18, 20], which probe a constant number of neighbors. (Strictly speaking, sublinear approximation algorithms apply to constant-degree graphs, and BA-graphs are not of constant degree. However, thanks to the power-law degree distribution of BA-graphs, one can “omit” high degree vertices and maintain the approximation. See also [22].) In addition, local computation algorithms probe a small number of neighbors to provide answers to optimization problems such as maximal independent sets and approximate maximum matchings [6, 7, 22, 23, 2, 15, 16, 12, 13, 14]. Support of adjacency list queries is especially useful for simulating (partial) DFS and BFS over graphs.
1.3 Techniques
The main difficulty in providing the on-the-fly generator is in “inverting” the random choices of the BA process. That is, we need to be able to randomly choose the next “child” of a given node $v$, although that child will only “arrive in the future”, and its choice of a parent in the BA-graph will depend on what will have happened until it arrives (i.e., on the node degrees in the BA-graph when that node arrives). One possibility to do so is to maintain, for any future node which does not yet have a parent, how many potential parents it still has, and then go sequentially over the future nodes and randomly decide, for each, whether its parent will indeed be $v$. This is too costly, not only because we would need to go sequentially over the nodes, but mainly because it may be too costly, in computation time, to calculate, given the random choices already made in response to previous queries, the probability that the parent of a node that does not yet have a parent will be node $v$.
To overcome this difficulty we define, for any node, even if it already has a parent, its probability to be a candidate to be a child of $v$. We show how these probabilities can be calculated efficiently given the choices taken in response to previous queries, and show how, based on these probabilities, we can define an efficient process to choose the next candidate. The candidate node may, however, already have a parent, and thus cannot be a child of $v$. If this is the case we repeat the process and choose another candidate, until we choose an eligible candidate, which then becomes the actual next child of $v$. We show that with high probability this process terminates quickly and finds an eligible candidate, so that with high probability we have an efficient process to find “into the future” the next child of $v$. This is done while sampling exactly according to the distribution defined by the BA-graphs process.
In addition to the above technique, which is arguably the crux of our result, we use a number of data structures, based on known constructions, to run the on-the-fly generator with polylogarithmic time and space complexities. In the sequel we give, in addition to the formal definitions of the algorithms, some supplementary intuitive explanations of our techniques.
2 Preliminaries
Let $[n] = \{1, \ldots, n\}$. Let $G_n = ([n], E)$ denote a directed graph on $n$ nodes. (Preferential attachment graphs are usually presented as undirected graphs. For convenience of discussion we orient each edge from the higher-index vertex to the lower-index vertex, but the graphs we consider remain undirected graphs.) We refer to the endpoints of a directed edge $e = (u, v)$ as the tail $u$ and the head $v$. Let $\deg(v)$ denote the degree of the vertex $v$ in $G_n$ (both incoming and outgoing edges). Similarly, let $\deg_{\mathrm{in}}(v)$ and $\deg_{\mathrm{out}}(v)$ denote the in-degree and out-degree, respectively, of the vertex $v$ in $G_n$. The normalized degree distribution of $G_n$ is a vector $D$ with $n$ coordinates, one for each vertex in $G_n$. The coordinate corresponding to $v$ is defined by $D_v = \deg(v) / (2|E|)$.
Note that $\sum_{v \in [n]} D_v = 1$.
We also define the normalized in-degree distribution $D^{\mathrm{in}}$ by $D^{\mathrm{in}}_v = \deg_{\mathrm{in}}(v) / |E|$.
In the sequel, when we say that an event occurs with high probability (or w.h.p.) we mean that it occurs with probability at least $1 - \frac{1}{n^c}$, for some constant $c > 0$.
For ease of presentation, we use in the algorithms arrays of size $n$. However, in order to give the desired upper bounds on the space complexity, we implement these arrays by means of balanced search trees, where the keys are in $[n]$. To access item $i$ in the “array”, key $i$ is searched in the tree and the value in that node is returned; if the key is not found, then the default (null) value is returned. Thus, the space used by the “arrays” is proportional to the number of keys stored, and the time complexity of our algorithms is multiplied by a factor of $O(\log n)$ compared to the time complexity that it would have with a standard random-access implementation of the arrays. When we state upper bounds on time, we take these factors into account. As is common, we analyze the space complexity in terms of “words” of size $O(\log n)$ bits.
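The following minimal sketch (ours, not from the paper; Python is used only for illustration) shows the sparse-“array” abstraction: only keys that were ever written occupy space, and reads of unwritten keys return a default value. The paper obtains the stated $O(\log n)$-per-operation bounds with balanced search trees; here a dictionary stands in for brevity.

```python
# A sketch of the sparse "array" abstraction, assuming a default value for
# unwritten keys. The paper implements it with balanced search trees (to
# get O(log n) worst-case operations without hashing assumptions); a dict
# stands in here for brevity, with the same interface and space behavior.
class SparseArray:
    def __init__(self, default=None):
        self.default = default
        self.entries = {}              # key -> value; space = #keys written

    def __getitem__(self, i):
        return self.entries.get(i, self.default)

    def __setitem__(self, i, value):
        self.entries[i] = value

u = SparseArray()                      # e.g., the pointer "array" of Section 6
u[7] = 3                               # set u(7) = 3
assert u[7] == 3 and u[5] is None      # unwritten entries read as null
```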
3 Queries
Consider an undirected graph $G = (V, E)$, where $V = [n]$. Slightly abusing notation, we sometimes consider and denote node $v$ as the integer number $v$, and so we have a natural order on the nodes. The access to the graph is done by means of a user-query BA-next-neighbor$(v)$, whose returned value is in $[n] \cup \{\bot\}$, where $\bot$ denotes “no additional neighbor”. We number the queries according to the order they are issued, and call this number the time of the query. Let $v_t$ be the node on which the query at time $t$ was issued, i.e., at time $t$ the query BA-next-neighbor$(v_t)$ is issued by the user. For each node $v$ and any time $t$, let $\mathrm{last}_t(v)$ be the largest-numbered node which was previously returned as the value of a BA-next-neighbor$(v)$ query, or $0$ if no such query was issued before time $t$.
At time $t$ the query BA-next-neighbor$(v_t)$ returns the smallest neighbor of $v_t$ that is larger than $\mathrm{last}_t(v_t)$, or $\bot$ if no such neighbor exists. When the implementation of the query has access to a data structure holding the whole of $G$, the implementation of BA-next-neighbor is straightforward, simply by accessing this data structure. Figure 1 illustrates a “traditional” randomized graph generation algorithm that generates the whole graph, stores it, and then can answer queries by accessing the data structure that encodes the whole generated graph.
4 On-the-fly Graph Generators
An on-the-fly graph generator is an algorithm that gives access to a graph by means of the BA-next-neighbor query defined above, but itself does not have access to a data structure that encodes the whole graph. Instead, in response to the queries issued by the user, the generator modifies its internal data structure (a.k.a. its state), which is initially some empty (constant) state. The generator must ensure, however, that its answers are consistent with some graph $G$. An on-the-fly graph generator for a given distribution on a family of graphs (such as the family of Preferential Attachment graphs on $n$ nodes) must in addition ensure that it samples the graphs according to the required distribution. That is, its answers to a sequence of queries must be distributed identically to those returned when a graph is first sampled (according to the desired distribution), stored, and then accessed (see Definition 17 and Theorem 18). Figure 2 illustrates an on-the-fly graph generation algorithm such as the one we build in the present paper.
5 Random Graph Models
Preferential attachment [3].
We restrict our attention to the case in which each vertex is connected to the previous vertices by a single edge (i.e., $m = 1$ in the terminology of [3]). (As discussed in Section 2, while the process generates an undirected graph, for ease of discussion we consider each edge as directed from its higher-numbered adjacent node to its lower-numbered adjacent node.) We thus denote the random process that generates a graph over $[n]$ according to the preferential attachment model by $BA(n)$. The random process generates a sequence of directed edges $e_1, \ldots, e_n$, where the tail of $e_t$ is $t$, for every $t \in [n]$. (We abuse notation and let $BA(n)$ also denote the graph generated by the random process.) We refer to the head of $e_t$ as the parent of $t$.
The process draws the edges sequentially, starting with the self-loop $e_1 = (1, 1)$. Suppose we have selected $G_{t-1}$, namely, we have drawn the edges $e_s$ for $s < t$. The edge $e_t$ is drawn such that its head is node $v \in [t-1]$ with probability $\deg_{t-1}(v) / (2(t-1))$, where $\deg_{t-1}(v)$ denotes the degree of $v$ in $G_{t-1}$.
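For concreteness, here is a sketch (ours, in the spirit of the linear-time construction of Batagelj and Brandes [4]) of the sequential process just described; it is exactly the kind of whole-graph generation that our on-the-fly generator avoids.

```python
import random

# A sketch of the sequential BA(n) process with m = 1: e_1 = (1, 1) is a
# self-loop, and each node t > 1 chooses the head of e_t with probability
# proportional to the current degrees in G_{t-1}. The "targets" multiset
# lists every endpoint of every edge drawn so far, so a uniform sample
# from it realizes Pr[head = v] = deg_{t-1}(v) / (2(t-1)).
def ba_sequential(n, rng=None):
    rng = rng or random.Random(0)
    parent = {1: 1}                    # parent[t] = head of e_t
    targets = [1, 1]                   # node 1's self-loop contributes 2
    for t in range(2, n + 1):
        head = rng.choice(targets)
        parent[t] = head
        targets.extend((t, head))      # the new edge raises deg(t), deg(head)
    return parent

print(ba_sequential(10))               # linear time AND linear space
```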
Note that the out-degree of every vertex in (the directed graph representation of) $BA(n)$ is exactly one, with the only self-loop being at node $1$. Hence $BA(n)$ (without the self-loop) is an in-tree rooted at node $1$.
Evolving copying model [11].
Let $CO(n)$ denote the evolving copying model with out-degree $1$ and copy factor $1/2$. As in the case of $BA(n)$, the process selects the edges one-by-one, starting with a self-loop $e_1 = (1, 1)$. Given the graph $G_{t-1}$, the next edge $e_t$ emanates from $t$. The head of edge $e_t$ is chosen as follows. Let $b_t$ be an unbiased random bit. Let $u_t$ be a random variable uniformly distributed over $[t-1]$ (the random variables $b_t$ and $u_t$, over all $t$, are all pairwise independent). The head of $e_t$ is determined as follows: if $b_t = 0$, the head of $e_t$ is $u_t$; if $b_t = 1$, the head of $e_t$ is the head of $e_{u_t}$ (i.e., node $t$ “copies” the choice of node $u_t$).
Random recursive tree model [25].
If we eliminate from the evolving copying model the bits $b_t$ and the “copying effect”, we get a model where each new node is connected to one of the previous nodes, chosen uniformly at random. This is the extensively studied (random) recursive tree model [25].
We now relate the various models.
Claim 1 ([1]).
The random graphs $BA(n)$ and $CO(n)$ are identically distributed.
Proof.
The proof is by induction on $t$. The basis ($t = 1$) is trivial. To prove the induction step, assume that $BA(t-1)$ and $CO(t-1)$ are identically distributed. We need to prove that the next edges $e_t$ in the two processes are also identically distributed, given a graph $G_{t-1}$ as the realization of $BA(t-1)$ and $CO(t-1)$, respectively.
In $BA(t)$, the head of $e_t$ is chosen according to the degree distribution of $G_{t-1}$. Since the out-degree of every vertex is one, $\deg(v) = \deg_{\mathrm{in}}(v) + 1$, and hence
$$\frac{\deg(v)}{2(t-1)} = \frac{1}{2} \cdot \frac{1}{t-1} + \frac{1}{2} \cdot \frac{\deg_{\mathrm{in}}(v)}{t-1}.$$
Thus, an equivalent way of choosing the head of $e_t$ is as follows: (1) with probability $1/2$, choose a vertex of $[t-1]$ uniformly at random (this corresponds to the $\frac{1}{t-1}$ term), and (2) with probability $1/2$, choose a vertex according to the in-degree distribution of $G_{t-1}$ (this corresponds to the $\frac{\deg_{\mathrm{in}}(v)}{t-1}$ term).
Hence, case (1) above corresponds to the case $b_t = 0$ in the process of $CO(n)$. To complete the proof, we observe that, conditioned on the event that $b_t = 1$, the choice of the head of $e_t$ in $CO(n)$ can be defined as choosing according to the in-degree distribution of the nodes in $G_{t-1}$: indeed, choosing according to the in-degree distribution is identical to choosing a uniformly distributed random edge in $G_{t-1}$ and then taking its head. But, since the out-degrees of all the vertices in $G_{t-1}$ are all the same (and equal one), this is equivalent to copying the head of the edge of a uniformly distributed random node in $[t-1]$. ∎
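To make the equivalence concrete, the following sketch (ours, not taken from the paper) implements one step of the $CO$ process exactly as used in the proof: a direct uniform choice with probability $1/2$, and a “copy” of a uniform node's parent, which realizes the in-degree distribution, with probability $1/2$.

```python
import random

# A sketch of one step of CO(n): with b_t = 0 take a uniform node u_t of
# [t-1] (a "dir" choice); with b_t = 1 copy the head of e_{u_t} (a "rec"
# choice). Copying the parent of a uniform node is a choice proportional
# to in-degree, because every node has out-degree exactly one.
def co_step(parent, t, rng):
    u_t = rng.randrange(1, t)          # uniform over [t-1]
    if rng.random() < 0.5:             # b_t = 0
        return u_t
    return parent[u_t]                 # b_t = 1: head of e_t = head of e_{u_t}

def co_sequential(n, rng=None):
    rng = rng or random.Random(0)
    parent = {1: 1}
    for t in range(2, n + 1):
        parent[t] = co_step(parent, t, rng)
    return parent
```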
We use the following claim in the sequel.
Claim 2 (cf. [5], Thm. 6.12 and Thm. 6.32).
Let $T$ be a rooted directed tree on $n$ nodes denoted $1, \ldots, n$, where node $1$ is the root of the tree. If, for every $t > 1$, the head of the edge emanating from node $t$ is uniformly distributed among the nodes in $[t-1]$, then, with high probability, the following two properties hold:
1. The maximum in-degree of a node in the tree is $O(\log n)$.
2. The height of the tree is $O(\log n)$.
Note that the claim still holds if we add to the tree a self-loop on node $1$.
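The following small experiment (ours, purely illustrative) matches Claim 2: in random recursive trees of increasing size, both the height and the maximum in-degree stay within a constant factor of $\ln n$.

```python
import math
import random

# An illustrative check of Claim 2: generate a random recursive tree by
# letting each node t pick a uniform parent in [t-1], then measure the
# height and the maximum in-degree; both grow logarithmically in n.
def recursive_tree_stats(n, rng):
    depth, indeg = {1: 0}, {1: 0}
    for t in range(2, n + 1):
        p = rng.randrange(1, t)        # uniform u-parent in [t-1]
        depth[t], indeg[t] = depth[p] + 1, 0
        indeg[p] += 1
    return max(depth.values()), max(indeg.values())

rng = random.Random(1)
for n in (10**3, 10**4, 10**5):
    height, max_indeg = recursive_tree_stats(n, rng)
    print(n, height, max_indeg, round(math.log(n), 1))   # compare to ln n
```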
6 The Pointers Tree
We now consider a graph inspired by the random recursive tree model [25] and the evolving copying model [11]. Each vertex $v > 1$ has a variable $u(v)$ that is uniformly distributed over $[v-1]$, and can be viewed as a directed edge (or pointer) from $v$ to $u(v)$. We denote this random rooted directed in-tree by $T_u$. Let $u^{-1}(v)$ denote the set $\{w : u(w) = v\}$. We refer to the set $u^{-1}(v)$ as the u-children of $v$ and to $u(v)$ as the u-parent of $v$. In conjunction with each pointer, we keep a flag indicating whether this pointer is to be used as a dir (direct) pointer or as a rec (recursive) pointer. We thus use the directed pointer tree to represent a graph in the evolving copying model (which is equivalent, when the flag of each pointer is uniformly distributed between rec and dir, to the BA model).
In this section we consider the subtask of giving access to a random $T_u$, together with the flags of the pointers. Ignoring the flags, this section thus gives an on-the-fly random access generator for the extensively studied model of random recursive trees (cf. [25]). We define the following queries.
• parent$(v)$: returns the pair $(u(v), \mathrm{flag}(v))$, where $u(v)$ is the parent of $v$ in the tree and $\mathrm{flag}(v)$ is the associated flag.
• next-child-tp$(v, w, \mathrm{type})$, where $\mathrm{type} \in \{\mathrm{dir}, \mathrm{rec}\}$: returns the least-numbered node $x > w$ such that the parent of $x$ is $v$ and the flag of that pointer is of type “type”. If no such node exists then the returned value is $\bot$.
The “ideal” way to implement this task is to go over all nodes, and for each node (1) uniformly at random choose its parent among the preceding nodes, and (2) uniformly at random choose the associated flag in $\{\mathrm{dir}, \mathrm{rec}\}$. Then store the pointers and flags, and answer the queries by accessing this data structure.
In this section we give an on-the-fly generator that answers the above queries, and start with a naïve, non-efficient implementation that illustrates the task to be done. Then we give our efficient implementation.
Notations.
We say that $w$ is exposed if $u(w) \neq \bot$ (initially all pointers are set to $\bot$). We denote the set of all exposed vertices by $X$. We say that $w$ is directly exposed if $u(w)$ was set during a call to parent$(w)$. We say that $w$ is indirectly exposed if $u(w)$ was determined during a call to next-child. As a result of answering and processing next-child-tp and parent queries, the on-the-fly generator commits to various decisions (e.g., prefixes of adjacency lists). These commitments include edges but also non-edges (i.e., vertices that can no longer serve as the u-parent of a certain $w$). For a node $v$, $\mathrm{lnc}(v)$ denotes the largest value (node) that was returned by a next-child query on $v$, and $0$ if no such returned value exists. Observe that, for every node $w \leq \mathrm{lnc}(v)$, we already know whether $u(w) = v$ or not. We denote, roughly speaking, the set of vertices that cannot serve as the u-parent of $w$ by $F(w)$, the set of nodes that can still be the u-parent of $w$ by $P(w)$, and their number by $|P(w)|$. (To simplify the definition of the more efficient generator, defined in the sequel, we define $F(w)$ and $P(w)$ in this manner even when $w$ is exposed. Thus, it might be the case that $u(w) \notin P(w)$, although $u(w)$ is the u-parent of $w$.)
6.1 A naïve implementation of next-child
We give a naïve implementation of a next-child query, with time complexity $O(n)$, with the purpose of illustrating the main properties of this query, and in order to contrast it with the more efficient implementation later. We do so in a simpler manner, without looking into the “type”. The naïve implementation of next-child is listed in Figure 3. This implementation, and that of parent, share an array of pointers $u$, both updating it. A query next-child$(v, w)$ is processed by scanning the vertices one-by-one starting from $w + 1$. If $u(w') = v$, then $w'$ is the next child. If $u(w')$ is $\bot$, then a coin is flipped and $u(w')$ is set to $v$ when the coin comes out heads; the probability of heads is $1 / |P(w')|$. If $u(w') \neq v$, we proceed to the next vertex. The loop ends when some $u(w')$ is set to (or found to be) $v$, or all vertices have been exhausted. In the latter case the query returns $\bot$.
The correctness of naïve-next-child, i.e., the fact that the graph is generated according to the required probability distribution, is based on the observation that, conditioned on the event that $w'$ is not yet exposed, all the vertices in $P(w')$ are equally likely to serve as $u(w')$. Note that the description above does not explain how $|P(w')|$ is computed.
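The following sketch (ours) captures the naïve loop; the helpers num_potential_parents (returning $|P(x)|$) and exclude (recording a non-edge commitment) are hypothetical stand-ins for the bookkeeping whose efficient realization is precisely the subject of Section 6.2.

```python
import random

# A sketch of naive-next-child(v, w): scan the vertices after w; a vertex
# with an undetermined pointer picks v with probability 1/|P(x)|, since,
# conditioned on the history, all remaining potential parents of x are
# equally likely. Helper names are ours and hypothetical.
def naive_next_child(v, w, u, n, num_potential_parents, exclude,
                     rng=random.Random()):
    for x in range(w + 1, n + 1):
        if u[x] == v:                        # x already committed as a child
            return x
        if u[x] is None:                     # u(x) not yet determined
            if rng.random() < 1.0 / num_potential_parents(x):
                u[x] = v                     # commit the edge: next child
                return x
            exclude(x, v)                    # commit the non-edge u(x) != v
    return None                              # v has no child beyond w
```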
6.2 An efficient implementation of next-child
We first briefly discuss the challenges on the way to an efficient implementation of next-child. Consider the simple special case where the only two queries issued are, for some node $v$, a single parent$(v)$ followed by a single next-child query on $v$ (to simplify this discussion we assume that the value of $u(v)$ is globally known). Consider the situation after the query parent$(v)$. Every vertex $w > v$ may be a u-child of $v$, because at this point $v \in P(w)$ for every $w > v$. Let $q_w$ denote the probability that vertex $w$ is the first child of $v$ (with a designated outcome for the case that $v$ has no child). Since each of the cumulative probabilities $\sum_{w' \leq w} q_{w'}$ can be calculated in $O(1)$ time, this random choice can be done in $O(\log n)$ time by choosing uniformly at random a number in $[0, 1)$ and performing a binary search on the cumulative probabilities to find which index it represents (see a more detailed and accurate statement of this procedure below). However, in general, at the time of a certain next-child query, limitations may exist, due to previous queries, on the possible consistent values of certain pointers $u(w)$. There are two types of limitations: (i) $u(w)$ might have been already determined, or (ii) $u(w)$ is still $\bot$ but the option $u(w) = v$ has been excluded. These limitations change the probabilities $q_w$, rendering them more complicated and time-consuming to compute, thus rendering the above-defined process inefficient (i.e., not doable in $\operatorname{polylog}(n)$ time). In the rest of this section we define and analyze a modified procedure that uses $\operatorname{polylog}(n)$ random bits, takes $\operatorname{polylog}(n)$ time, and increases the space by $\operatorname{polylog}(n)$. This procedure will be at the heart of the efficient implementation of next-child.
The efficient implementation of next-child (and of parent) makes use of the following data structures.
• An array of length $n$, holding the pointers $u(\cdot)$.
• An array of length $n$, holding the flags of the pointers.
• An array of length $n$, holding $\mathrm{lnc}(\cdot)$. (We also maintain an analogous per-type array, with the natural definition.)
• An array of balanced search trees, called children, each tree children$(v)$ holding the set of nodes $w$ such that $u(w) = v$. For technical reasons all trees are initiated with node $v$ itself.
• A number of additional data structures that are implicit in the listing, described and analyzed in the sequel.
In the implementation of the on-the-fly generator of the pointers tree we will maintain two invariants that are described below. We will later discuss the cost (in running time and space) of maintaining these invariants.
Invariant 3.
For every node $v$, the first next-child query on $v$ is always preceded by a parent$(v)$ query.
We will use this invariant to infer that, when a next-child query on $v$ is processed, $u(v)$ is already determined. One can easily maintain this invariant by introducing a parent$(v)$ query as the first step of the implementation of the next-child-tp query (for technical reasons we do that in the lower-level procedure next-child).
Invariant 4.
For every vertex $w$, $u(w) \neq \bot$ implies that $u(u(w)) \neq \bot$.
The second invariant is maintained by issuing an “internal” parent query whenever a pointer $u(w)$ is updated. This is done recursively, the base of the recursion being node $1$. When analyzing the complexities of our algorithm we take these recursive calls into account. Let $\mathrm{succ}(w)$ denote the vertex whose pointer was set to $w$ in the course of these internal calls, if such a vertex exists (note that there can be at most one such node, except for the case of node $1$); otherwise $\mathrm{succ}(w) = \bot$. We get that if $\mathrm{succ}(w) \neq \bot$, then $u(\mathrm{succ}(w)) = w$.
Definition 5.
At a given time $t$, and for any node $w$, let $F_t(w)$ and $P_t(w)$ be defined as follows: $F_t(w)$ is the set of nodes that, by time $t$, have been committed not to be the u-parent of $w$, and $P_t(w) = [w-1] \setminus F_t(w)$.
We note that if at a given time $t$ we consider a node $w$ such that $u(w) = \bot$ (i.e., its parent in the pointers tree is not yet determined), then the set $P_t(w)$ is the set of all the nodes that can still be the parent of node $w$ in the pointers tree. The set $P_t(w)$ is, however, defined also for nodes whose parent is already determined.
Definition 6.
Let denote the following set:
The following lemma proves that the sets $F_t(w)$ form a nondecreasing chain. It also characterizes a sufficient condition for membership in the set of Definition 6, and a necessary and sufficient condition for membership in $F_t(w)$ (and hence for exclusion from $P_t(w)$).
Lemma 7.
For every node $w$ and time $t$:
1. The sets form a nondecreasing chain: $F_t(w) \subseteq F_{t+1}(w)$ (equivalently, $P_{t+1}(w) \subseteq P_t(w)$).
2. A sufficient condition for a node to belong to the set of Definition 6.
3. A necessary and sufficient condition for a node to belong to $F_t(w)$ (and hence to be excluded from $P_t(w)$).
Proof.
To prove the lemma we make use of the fact that the changes in the values of the various parameters can occur only as a result of the queries next-child and parent.
We first note that, by the definitions, only next-child and parent queries can change the membership of nodes in the sets in question, and that each such query can only add elements to $F_t(w)$; the difference between consecutive sets can thus only grow, and Item 1 follows.
Thus, by Lemma 7, we have for any node $w$ a characterization of $P_t(w)$ that can be evaluated from the maintained data structures; we refer to this characterization in the sequel as Eq. (1).
We are now ready to describe the implementation of next-child-tp and next-child. As seen in Figure 4, next-child-tp is merely a loop of next-child, and next-child is essentially a call to the procedure toss described below. The “real work” is done in the implementations of next-child and toss, which we describe now. Note that if $v$ does not have children larger than $w$, then next-child$(v, w)$ returns $\bot$.
If the next child of $v$ after $w$ is already determined when next-child is called, then it is just extracted from the data structures. Otherwise, an interval of candidate vertices is defined, and it will contain the answer of next-child. The left endpoint of the interval lies just after the “fully known area” for $v$ (the prefix of vertices for which it is already known whether or not they are children of $v$), and the right endpoint is the smallest indirectly exposed child of $v$ beyond that area, or a sentinel if no such child exists. Observe that no vertex beyond the right endpoint can be the answer. Hence, the answer is in the interval.
The next child can be sampled according to the desired distribution in a straightforward way by going sequentially over the unexposed vertices in the interval, and tossing for each vertex $w$ a coin that has probability $1 / |P(w)|$ of coming out heads, until indeed one of those coins comes out heads, or all vertices are exhausted (in which case the sentinel is taken as the next child). We denote by $p_1(w)$ the probability that $w$ is chosen when the above procedure is applied. This procedure, however, takes linear time.
In order to start building our efficient implementation for next-child, we consider a process where we toss coins sequentially for all the vertices in the interval, including the already exposed ones. The probability that the coin for $w$ comes out heads is still $1 / |P(w)|$. We stop as soon as a heads is encountered, or at the sentinel if all coins come out tails. The vertex on which we stop, denote it $c$, is a candidate next $v$-child. If $c$ is already exposed, then $c$ cannot be a child of $v$, so we proceed by repeating the same process, but with the subinterval starting just after $c$ instead of the original interval. We denote by $p_2(w)$ the probability that $w$ is chosen when this procedure is applied.
We now build our efficient procedure that selects the candidate, without sequentially going over the nodes. To this end, observe that the sequence of probabilities of the coins tossed in the last-described process behaves “nicely”. Namely, the probabilities, in increasing order of the vertices, form a harmonic sequence $\frac{1}{a}, \frac{1}{a+1}, \ldots, \frac{1}{b}$ for suitable integers $a \leq b$. Indeed, Eq. (1) implies that if the probability for the smallest vertex in the interval is $\frac{1}{a}$, then an increment of the denominator between consecutive vertices occurs exactly upon passing a vertex of the set of Definition 6. Since the coin probabilities form a harmonic sequence, we can calculate in $O(1)$ time, for any rank $r$, the probability that the sequential process stops at the node of rank $r$ in the interval (the partial products telescope). This allows us to simulate one iteration (i.e., choosing the next candidate $v$-child) by choosing uniformly at random a single number in $[0, 1)$, and then performing a binary search over the ranks to decide which rank this number “represents”. After the rank $r$ is selected, it is mapped to the vertex of rank $r$ in the interval, denote it $c$, and this is the candidate next $v$-child. As before, if $c$ is already exposed, then $c$ cannot be a child of $v$, so we ignore it and proceed in the same way, this time with the subinterval starting just after $c$. We denote by $p_3(w)$ the probability that $w$ is chosen when this third procedure is applied. See Figure 4 for a formal definition of this procedure and that of next-child.
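A sketch of this inverse-transform step follows (ours, under the assumption derived above that the coin of the rank-$r$ vertex has probability $\frac{1}{a+r}$ of coming out heads). The cumulative probability that some rank smaller than $r$ is chosen telescopes to $1 - \frac{a-1}{a+r-1}$, so a single uniform draw plus a binary search replaces the linear scan.

```python
import random

# A sketch of candidate selection over a harmonic sequence of coins
# 1/a, 1/(a+1), ..., one per rank: cdf(r) = Pr[some rank < r is chosen]
# = 1 - (a-1)/(a+r-1). Draw x uniform in [0,1) and binary-search for the
# least rank r with x < cdf(r+1); None means "all coins came out tails".
def sample_candidate_rank(a, num_ranks, rng=random):
    x = rng.random()
    cdf = lambda r: 1.0 - (a - 1.0) / (a + r - 1.0)
    if x >= cdf(num_ranks):
        return None                      # process reaches the sentinel
    lo, hi = 0, num_ranks - 1
    while lo < hi:                       # O(log n) iterations
        mid = (lo + hi) // 2
        if x < cdf(mid + 1):
            hi = mid
        else:
            lo = mid + 1
    return lo                            # rank of the candidate in the interval
```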
Observe that this selection procedure takes $\operatorname{polylog}(n)$ time (see Section 6.4 for a formal statement of the time and randomness complexities). We note that we cannot perform the selection in the same time complexity directly over the set of unexposed vertices only, because we do not have a way to calculate each and every probability $p_1(w)$ in $O(1)$ time, even if the rank of $w$ among the unexposed vertices is given.
To conclude the description of the implementation of next-child, we give the following lemma which states that the probability distribution on the next child is the same for all three processes described above.
Lemma 8.
For all $w$, $p_1(w) = p_2(w) = p_3(w)$.
Proof.
To prove the claim we prove that $p_3(w) = p_2(w)$ and that $p_2(w) = p_1(w)$.
To prove the latter, denote by $w_1 < w_2 < \cdots$ the unexposed nodes of the interval. Under the first procedure, a node $w_j$ is chosen if and only if the coins of $w_1, \ldots, w_{j-1}$ all come out tails and the coin of $w_j$ comes out heads; the analogous statement holds for the sentinel.
When we consider the sequential process where one tosses a coin for all nodes in the interval (and not only for the unexposed ones), we extend the definition of the choice probability to the exposed nodes as well: for an exposed node, it is the probability that this node is chosen as a candidate next $v$-child. Since the coin of each unexposed node has the same probability of heads in both processes, and since an exposed candidate is discarded and the process is repeated on the remaining suffix of the interval, the probability that a given unexposed node is eventually the chosen child is the same in both processes. Thus, for any $w$, $p_2(w) = p_1(w)$.
The assertion $p_3(w) = p_2(w)$, for any $w$, follows from the fact that a number is selected uniformly at random and then the subinterval in which it lies is found. That is, a rank $r$ is selected if and only if the selected number lies in a subinterval whose length is, by the definitions of the cumulative probabilities, exactly the probability that the sequential coin process of the second procedure stops at the node of rank $r$. ∎
6.3 Implementation of parent
The implementation of parent is straightforward (see Figure 4). However, note that updating the various data structures, while implicit in the listing, is accounted for in the time analysis.
6.4 Analysis of the pointer tree generator
We first give the following claim that we later use a number of times.
Lemma 9.
With high probability, for each and every call to next-child, the size of the recursion tree of that call, counting the recursive calls to next-child, is $O(\log n)$.
Proof.
Consider the recursive invocation tree that results from a call to next-child. Observe that (1) by the code of next-child this tree is in fact a path; and (2) this path corresponds to a path in the pointers tree, where each edge of this tree-path is “discovered” by the corresponding call to next-child. That is, the maximum size of a recursion tree of a call of next-child is bounded from above by the height of the pointers tree. By Claim 2, with high probability, this is $O(\log n)$. ∎
6.4.1 Data structures and space complexity
The efficient implementation of next-child makes use of the following data structures.
• A number of arrays of length $n$ (the pointer array $u$, the flag array, and the array $\mathrm{lnc}$), used to store various values for the nodes. Since we implement arrays by means of search trees, the space complexity of each array is proportional to the maximum number of distinct keys stored with a non-null value in that array at any given time. The time complexity for each operation on these arrays is $O(\log n)$ (since they are implemented as balanced binary search trees).
• For each node $v$, a balanced binary search tree called children$(v)$, storing all nodes $w$ such that $u(w) = v$. (For technical reasons we define children$(v)$ to always include node $v$ itself. So that we maintain low space complexity, for a given $v$, children$(v)$ is initialized only at the first use, at which time node $v$ is inserted.) Observe that for each child stored in one of these trees, its pointer is already determined. Thus, the increase, during a given period, in the space used by the child trees is bounded from above by the number of nodes whose pointer got determined during that period. For the time complexity of the operations on these trees we use a coarse standard upper bound of $O(\log n)$.
The listings of the implementations of the various procedures leave implicit the maintenance of two data structures, related to the set of Definition 6 and to the computation of $|P(w)|$:
• A data structure that allows one to retrieve the value of $|P(w)|$ for a given vertex $w$. This data structure is implemented by retrieving the cardinality of $F(w)$ for a given node $w$, which is equivalent to counting how many nodes have been committed, so far, as impossible u-parents of $w$. We use two balanced binary search trees (or order-statistics trees) in a specific way, and by standard implementations of balanced search trees the space complexity is proportional to the number of committed values (and all operations are done in time $O(\log n)$). More details of the implementation of this data structure appear in the appendix (see Section A.1).
• A data structure that allows one to find the vertex of a given rank $r$ in an ordered set. This data structure is implemented by a balanced binary search tree storing the nodes of a set, augmented with rank queries (as in an order-statistics tree) as well as select and select-complement queries, i.e., finding the element of rank $r$ in the complement of the stored set. To find the vertex of rank $r$ in the interval we use the select-complement query. The space complexity of this data structure is proportional to the number of stored nodes, and all operations are done in time $O(\log n)$ or $O(\log^2 n)$ (for the select-complement query). More details of the implementation of this data structure appear in the appendix (see Section A.2).
6.4.2 Time complexity
Time complexity of toss.
The time complexity of this procedure is $O(\log^2 n)$ (regardless of whether or not the if condition holds), because it performs a binary search on (at most) $n$ items, and each iteration of this search takes $O(\log n)$ time.
Time complexity of finding “the vertex of rank $r$ in the interval”.
This operation is implemented using the select-complement data structure defined above, and takes $O(\log^2 n)$ time.
Time complexity of parent.
Examining the listing (Figure 4), one observes that the number of operations is constant. However, the access to the “array” takes $O(\log n)$ time, and, though implicit in the listing, one should take into account the update of the data structure that stores the set of Definition 6, also taking $O(\log n)$ time. Thus the time complexity of parent is $O(\log n)$.
Time complexity of next-child.
First consider the time complexity consumed by a single invocation of next-child (i.e., without taking into account the time consumed by recursive calls of next-child). (We talk about an “invocation”, rather than a “call”, when we want to emphasize that we consider only the resources consumed by a single level of the recursion tree.) The call to parent takes $O(\log n)$ time. Therefore, until the start of the repeat loop, the time is $O(\log n)$ (the time complexity of succ is $O(\log n)$). Now, the time complexity of a single iteration of the loop (without taking into account recursive calls to next-child) is $O(\log^2 n)$ because:
• Each access to an “array” takes $O(\log n)$ time.
• Calculating the relevant coin probability takes $O(\log n)$ time.
• The call to toss takes $O(\log^2 n)$ time.
• Finding the vertex of the selected rank in the interval takes $O(\log^2 n)$ time.
• Each of the updates of the pointer and flag arrays may change the set of Definition 6, and therefore may take $O(\log n)$ time to update the data structure involving that set.
• An update of any given binary search tree takes $O(\log n)$ time.
We now examine the number of iterations of the loop.
Claim 10.
With high probability, the number of iterations of the loop in a single invocation of next-child is $O(\log n)$.
Proof.
We consider a process where the iterations continue until the selected node is the sentinel node closing the interval. A random variable $Y$ depicting this number of iterations dominates the random variable that depicts the actual number of iterations. In each iteration, an additional node is selected by toss. By Lemma 8, the probability that any given node is selected by toss equals its probability under the sequential coin process, and hence the expected number of iterations is $O(\log n)$. Writing $Y$ as a sum of indicator variables $Y_w$, where $Y_w$ is $1$ iff node $w$ was selected and $0$ otherwise, and using a Chernoff bound (cf. [9], Inequality (8)), we have that, for an appropriate constant $c$, the probability that more than $c \log n$ iterations occur is at most $1 / \operatorname{poly}(n)$. ∎
We thus have the following.
Lemma 11.
For any given invocation of next-child, with high probability, the time complexity is $O(\log^3 n)$.
6.4.3 Randomness complexity
Randomness is used in our generator to randomly select the parent of the nodes (in parent) and to randomly select a next child for a node (in toss). We use the common convention that, for any given $k$, one can choose uniformly at random an integer in $[k]$ using $O(\log k)$ random bits and $O(\log k)$ computation time. We give our algorithms and analyses based on this building block.
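A standard realization of this convention is rejection sampling, sketched below (ours): each attempt consumes $\lceil \log_2 k \rceil$ unbiased bits and succeeds with probability greater than $1/2$, so $O(\log k)$ bits suffice in expectation, and the number of attempts is logarithmic w.h.p.

```python
import random

# A sketch of drawing a uniform integer in {0, ..., k-1} from unbiased
# bits by rejection sampling: read ceil(log2 k) bits into x and accept
# iff x < k. Each attempt succeeds with probability k / 2^ceil(log2 k),
# which is greater than 1/2.
def uniform_int(k, randbit=lambda: random.getrandbits(1)):
    bits = max(1, (k - 1).bit_length())    # ceil(log2 k) bits per attempt
    while True:
        x = 0
        for _ in range(bits):
            x = (x << 1) | randbit()
        if x < k:                          # accept: x is uniform in [0, k)
            return x
```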
In procedure parent we use $O(\log n)$ random bits whenever, for a given $v$, this procedure is called with parameter $v$ for the first time.
In procedure toss the if condition holds with constant probability (depending on the constant used in that procedure). Therefore, given a call to toss, with high probability this procedure uses $O(\log n)$ random bits. By Claim 10, in each call to next-child the number of times that toss is called is, w.h.p., $O(\log n)$. We thus have the following.
Lemma 12.
During a given call to next-child, w.h.p., $\operatorname{polylog}(n)$ random bits are used.
The following lemma states the time, space, and randomness complexities of the queries.
Lemma 13.
The complexities of next-child-tp and parent are as follows.
• Given a call to parent, the following hold for this call:
1. The increase, during that call, of the space used by our algorithm is $O(1)$ words.
2. The number of random bits used during that call is $O(\log n)$.
3. The time complexity of that call is $O(\log n)$.
• Given a call to next-child-tp, with high probability, all of the following hold for this call:
1. The increase, during that call, of the space used by our algorithm is $\operatorname{polylog}(n)$.
2. The number of random bits used during that call is $\operatorname{polylog}(n)$.
3. The time complexity of that call is $\operatorname{polylog}(n)$.
Proof.
parent. During a call to parent the size of the used space increases when a pointer $u(v)$ becomes non-null or when additional values are stored in the auxiliary data structures. To select $u(v)$, $O(\log n)$ random bits are used, and $O(\log n)$ time is consumed to insert the new child into children$(u(v))$ and to update the data structure for the set of Definition 6 (this is implicit in the listing).
next-child-tp. We first consider next-child. Observe that by Lemma 9, w.h.p., each and every root (non-recursive) call of next-child has a recursion tree of size $O(\log n)$. In each invocation of next-child, a constant number of pointers, flags, and auxiliary values may be updated. Therefore, w.h.p., for all root (non-recursive) calls to next-child it holds that the increase in space during the call is $\operatorname{polylog}(n)$ (see Section 6.4.1). Using Lemmas 12 and 9 we have that, w.h.p., each root call of next-child uses $\operatorname{polylog}(n)$ random bits. Using Lemmas 11 and 9, we have that, w.h.p., the time complexity of each root call of next-child is $\operatorname{polylog}(n)$.
Because the types of the pointers are uniformly distributed in $\{\mathrm{dir}, \mathrm{rec}\}$, each call to next-child-tp results, w.h.p., in $O(\log n)$ calls to next-child. The above complexities are thus multiplied by an $O(\log n)$ factor to get the (w.h.p.) complexities of next-child-tp. ∎
7 On-the-fly Generator for BA-Graphs
Our on-the-fly generator for BA-graphs is called O-t-F-BA, and it serves each query on a node $v$ by calling BA-next-neighbor$(v)$. We present an implementation for the BA-next-neighbor query, and prove its correctness, as well as analyze its time, space, and randomness complexities. The on-the-fly BA generator maintains standard heaps, one for each node. The heaps store nodes, where the order is the natural order of their serial numbers. (For simplicity of presentation we assume that the initialization of a heap occurs at the first insert, and make sure in our use of the heap that no extraction is performed before the first insert.) The heap of node $v$ stores some of the nodes already known to be neighbors of $v$. In addition, the generator maintains, for purely technical reasons, an array of size $n$ indicating whether a BA-next-neighbor query has already been issued for a given node. The implementation of the BA-next-neighbor query works as follows (see Figure 5).
• For the first BA-next-neighbor$(v)$ query, for a given $v$, we proceed as follows. We find the parent of $v$ in the BA-graph, which is done by following, in the pointers tree, the pointers of the ancestors of $v$ until we find an ancestor pointed to by a dir pointer (and not a rec pointer). See Figure 5. In addition, we initialize the process of finding neighbors of $v$ to its right (i.e., with a bigger serial number) by inserting into the heap of $v$ the “final node” as well as the first child of $v$.
• For any subsequent BA-next-neighbor$(v)$ query for node $v$ we proceed as follows. Observe that any subsequent query is to return a child of $v$ in the BA-graph. The children of $v$ in the BA-graph are those nodes which have, in the pointers tree, a path of pointers starting at the child and ending at $v$, with all pointers on that path, except the last one, being rec (the last one being dir). The query has, however, to report the children in increasing order of their index. To this end the heap of node $v$ is used; it stores, at any given time, some of the children of $v$ in the BA-graph not yet returned by a query. We further have to update this heap so that BA-next-neighbor will continue to return the next child according to the index order. To this end we proceed as follows. Whenever a node $w$ is extracted from the heap, in order to be returned as the next child, we update the heap to include the following:
– If $w$ has a dir pointer to $v$, then we add to the heap (1) the next node, after $w$, with a dir pointer to $v$, and (2) the first node that has a rec pointer to $w$.
– If $w$ has a rec pointer to a node $y$, then we add to the heap (1) the first node, after $w$, with a rec pointer to $y$, and (2) the first node that has a rec pointer to $w$.
The proof of Lemma 14 below is based on the claim that the heap contains only children of $v$ in the BA-graph, and that it always contains the child of $v$ just after the one last returned; a code sketch of this heap update appears right after this list.
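The following sketch (ours) summarizes the two update cases; the helper next_tp$(x, w, t)$, returning the least node larger than $w$ whose pointer goes to $x$ with type $t$ (or None), is a hypothetical stand-in for next-child-tp, and pointer$(c)$ returns the already-exposed pointer of $c$ as a (target, type) pair.

```python
import heapq

# A sketch of the heap maintenance for BA-next-neighbor(v). Extract the
# minimum (the next neighbor by index), then refill the heap according to
# the type of the extracted node's pointer, as in the two cases above.
def extract_and_refill(heap, v, pointer, next_tp):
    c = heapq.heappop(heap)
    target, typ = pointer(c)
    if typ == "dir":                           # c has a dir pointer to v
        candidates = (next_tp(v, c, "dir"),    # next dir-child of v after c
                      next_tp(c, 0, "rec"))    # first rec pointer to c
    else:                                      # c has a rec pointer to y
        y = target
        candidates = (next_tp(y, c, "rec"),    # next rec pointer to y after c
                      next_tp(c, 0, "rec"))    # first rec pointer to c
    for cand in candidates:
        if cand is not None:
            heapq.heappush(heap, cand)
    return c                                   # the neighbor to report
```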
Lemma 14.
The procedure BA-next-neighbor returns the next neighbor of $v$.
Proof.
Given a pointers tree we define the following notions:
• For a node $v$, let $D(v)$ denote the set of nodes which have a dir pointer to $v$. That is, $D(v) = \{w : u(w) = v \text{ and } \mathrm{flag}(w) = \mathrm{dir}\}$.
• For a node $v$, let $R(v)$ denote the set of nodes which have a rec pointer to $v$. That is, $R(v) = \{w : u(w) = v \text{ and } \mathrm{flag}(w) = \mathrm{rec}\}$.
Given a BA graph, for any node $v$ and any prefix length $i$, we denote by $N_i(v)$ the set of the first $i$ (according to the index number) neighbors of $v$ in the BA graph.
In what follows we consider an arbitrary node $v$, and denote its heap by $H_v$. We consider the actions of BA-next-neighbor (see Figure 5). Let $S_i$ be the set of nodes returned by the first $i$ calls BA-next-neighbor$(v)$. We first prove that the following invariant holds. Just after call number $i$ of BA-next-neighbor$(v)$:
1. The heap $H_v$ contains only neighbors of $v$ in the BA-graph.
2. The heap contains the minimum node in $D(v) \setminus S_i$.
3. Let $p$ be the first neighbor of $v$ in the BA graph. The heap contains, for each node $w \in S_i \setminus \{p\}$, the minimum node in $R(w) \setminus S_i$.
We prove that the invariant holds by induction on $i$. The induction basis, for call number $i = 1$, holds since the first call to BA-next-neighbor$(v)$ results in inserting into $H_v$ the first node which has a dir pointer to node $v$, and since $H_v$ was previously empty (see Figure 5). Thus all points of the invariant hold after call $1$.
For $i > 1$, assume that the induction hypothesis holds for $i - 1$, and let $c$ be the node returned by the $i$'th call to BA-next-neighbor$(v)$. We claim that the invariant still holds after call $i$ by verifying each one of the two cases for the pointer of $c$ and the insertions into the heap for each such case.
If $c$ has a dir pointer, then the following nodes are inserted into $H_v$: (1) The first node after $c$ with a dir pointer to $v$. Since this is a neighbor of $v$ in the BA graph, Point 1 continues to hold. Since $c$, just extracted from the heap, was the minimum node in the heap, also Point 2 continues to hold. (2) The first node which has a rec pointer to $c$. Since this is a neighbor of $v$ in the BA graph, Point 1 continues to hold; Point 3 continues to hold since nothing has changed for any other node $w \neq c$, and for $w = c$ the minimum node in $R(c)$ is just inserted.
If $c$ has a rec pointer, let $y$ be the parent of $c$ in the pointers tree; then the following nodes are inserted into $H_v$: (1) The first node after $c$ which has a rec pointer to $y$; denote it $c'$. Since $c'$ is a neighbor of $v$ in the BA graph, Point 1 continues to hold. Since $c$, just extracted from the heap, was the minimum node in the heap, $c'$ is the minimum node in $R(y) \setminus S_i$, and Point 3 continues to hold (nothing changes for any $w \neq y$). (2) The first node which has a rec pointer to $c$. The same arguments as those for the corresponding case when $c$ has a dir pointer hold, and thus both Point 1 and Point 3 continue to hold.
This concludes the proof of the invariant.
We now use the above invariant in order to prove that, for any $i$, the $i$'th call returns the $i$'th neighbor of $v$. We do that by induction on $i$. For $i = 1$ the claim follows from the facts that the first neighbor of node $v$ is its parent in the BA graph and that the first call returns the value that BA-parent$(v)$ returns. This proves the induction basis. We now prove the claim for $i$ given the induction hypothesis for all $i' < i$. Let node $c$ be the $i$'th neighbor of $v$. We have two cases: (1) node $c$ has a dir pointer to $v$; (2) node $c$ has a rec pointer to another child of $v$ in the BA graph (i.e., to another neighbor of $v$ in the BA graph, which is not the first neighbor).
Case (1): By the induction hypothesis $S_{i-1}$ consists of the first $i - 1$ neighbors of $v$, hence by Point 2 of the invariant $c$ is in the heap when the $i$'th call occurs. Since any node returned by a previous call is no longer in $H_v$, by Point 1 of the invariant, $H_v$ does not contain any node smaller than $c$. Therefore the node returned by the $i$'th call is node $c$.
Case (2): Let node $y$ be the parent of node $c$ in the pointers tree, i.e., $c \in R(y)$. Since $y$ is a neighbor of $v$ in the BA graph, and $y < c$, it follows by the induction hypothesis that $y \in S_{i-1}$. Moreover, any node of $R(y)$ smaller than $c$ is a neighbor of $v$ smaller than $c$, hence any such node is in $S_{i-1}$. It follows from Point 3 of the invariant that $c$ is in the heap when the $i$'th call occurs. Since any node returned by a previous call is no longer in $H_v$, by Point 1 of the invariant, $H_v$ does not contain any node smaller than $c$. Therefore the node returned by the $i$'th call is node $c$. This completes the proof of the lemma. ∎
Since the flags in the pointers tree are uniformly distributed, and by Lemma 13, we have:
Lemma 15.
For any given root (non-recursive) call of BA-parent, with high probability, that call takes $\operatorname{polylog}(n)$ time.
We can now conclude with the following theorem.
Theorem 16.
For any given call of BA-next-neighbor, with high probability, all of the following hold for that call:
1. The increase, during that call, of the space used by our algorithm is $\operatorname{polylog}(n)$.
2. The number of random bits used during that call is $\operatorname{polylog}(n)$.
3. The time complexity of that call is $\operatorname{polylog}(n)$.
Proof.
Each call of BA-next-neighbor is executed by a constant number of calls to BA-parent and next-child-tp, a constant number of calls to heap-insert and heap-extract-min, and a constant number of accesses to “arrays”. The claim then follows from standard deterministic heap implementations (using $O(1)$ words of space per stored item and $O(\log n)$ time per operation) and from Lemma 13. ∎
We now state the properties of our on-the-fly graph generator for BA-graphs.
Definition 17.
For a number of queries $q$ and a sequence of BA-next-neighbor queries $Q = (Q_1, \ldots, Q_q)$, let $A(Q)$ be the sequence of answers returned by an algorithm $A$ on $Q$. If $A$ is randomized then $A(Q)$ is a probability distribution on sequences of answers.
Let $A_{\mathrm{seq}}$ be the (randomized) algorithm that first runs the Markov process to generate a graph $G$ on $n$ nodes according to the BA model, stores $G$, and then answers queries by accessing the stored $G$. Let $A_{\mathrm{otf}}$ be the algorithm O-t-F-BA run with graph-size $n$. From the definition of the algorithm we have the following.
Theorem 18.
For any sequence of queries $Q$, the distributions $A_{\mathrm{seq}}(Q)$ and $A_{\mathrm{otf}}(Q)$ are identical.
We now conclude by stating the complexities of our on-the-fly BA generator.
Theorem 19.
For any $n$ and any sequence of queries $Q$, when using O-t-F-BA it holds w.h.p. that, for every query $Q_t$ in the sequence:
1. The increase in the used space, while processing query $Q_t$, is $\operatorname{polylog}(n)$.
2. The number of random bits used while processing query $Q_t$ is $\operatorname{polylog}(n)$.
3. The time complexity for processing query $Q_t$ is $\operatorname{polylog}(n)$.
Proof.
A query at time $t$ is trivial if at some time $t' < t$ a query on the same node returned $\bot$. Observe that trivial queries take deterministic $O(\log n)$ time, do not use randomness, and do not increase the used space. Since there are only $O(n)$ non-trivial queries, the theorem follows from Theorem 16 and a union bound. ∎
We note that the various assertions in this paper of the form “with high probability … is $\operatorname{polylog}(n)$” can also be stated in the form “with probability $1 - \frac{1}{\operatorname{poly}(n)}$ … is $\operatorname{polylog}(n)$”. Therefore, we can combine these various assertions, and together with the fact that the number of non-trivial queries is $O(n)$, we get the final result stated above.
Acknowledgments.
We thank Yishay Mansour for raising the question of whether one can locally generate preferential attachment graphs, and Dimitri Achlioptas and Matya Katz for useful discussions. We further thank an anonymous ICALP reviewer for a comment that helped us simplify one of the data structure implementations.
References
- [1] Md. Maksudul Alam, Maleq Khan, and Madhav V. Marathe. Distributed-memory parallel algorithms for generating massive scale-free networks using preferential attachment model. In International Conference for High Performance Computing, Networking, Storage and Analysis, SC’13, Denver, CO, USA - November 17 - 21, 2013, pages 91:1–91:12, 2013.
- [2] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1132–1139, 2012.
- [3] Albert-László Barabási and Reka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
- [4] Vladimir Batagelj and Ulrik Brandes. Efficient generation of large random networks. Physical Review E, 71(3):036113, 2005.
- [5] Michael Drmota. Random Trees: An Interplay Between Combinatorics and Probability. Springer Publishing Company, Incorporated, 1st edition, 2009.
- [6] Guy Even, Moti Medina, and Dana Ron. Best of two local models: Local centralized and local distributed algorithms. CoRR, abs/1402.3796, 2014.
- [7] Guy Even, Moti Medina, and Dana Ron. Deterministic stateless centralized local algorithms for bounded degree graphs. In Algorithms - ESA 2014 - 22th Annual European Symposium, Wroclaw, Poland, September 8-10, 2014. Proceedings, pages 394–405, 2014.
- [8] Oded Goldreich, Shafi Goldwasser, and Asaf Nussboim. On the implementation of huge random objects. SIAM J. Comput., 39(7):2761–2822, 2010.
- [9] Torben Hagerup and Christine Rüb. A guided tour of chernoff bounds. Inf. Process. Lett., 33(6):305–308, 1990.
- [10] Tamara G. Kolda, Ali Pinar, Todd Plantenga, and C. Seshadhri. A scalable generative graph model with community structure. SIAM J. Scientific Computing, 36(5), 2014.
- [11] Ravi Kumar, Prabhakar Raghavan, Sridhar Rajagopalan, D. Sivakumar, Andrew Tomkins, and Eli Upfal. Random graph models for the web graph. In 41st Annual Symposium on Foundations of Computer Science, FOCS 2000, 12-14 November 2000, Redondo Beach, California, USA, pages 57–65, 2000.
- [12] Reut Levi, Guy Moshkovitz, Dana Ron, Ronitt Rubinfeld, and Asaf Shapira. Constructing near spanning trees with few local inspections. CoRR, abs/1502.00413, 2015.
- [13] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Local algorithms for sparse spanning graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, pages 826–842, 2014.
- [14] Reut Levi, Ronitt Rubinfeld, and Anak Yodpinyanee. Brief announcement: Local computation algorithms for graphs of non-constant degrees. In Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures, SPAA 2015, Portland, OR, USA, June 13-15, 2015, pages 59–61, 2015.
- [15] Yishay Mansour, Aviad Rubinstein, Shai Vardi, and Ning Xie. Converting online algorithms to local computation algorithms. In Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I, pages 653–664, 2012.
- [16] Yishay Mansour and Shai Vardi. A local computation approximation scheme to maximum matching. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21-23, 2013. Proceedings, pages 260–273, 2013.
- [17] Ulrich Meyer and Manuel Penschuck. Generating massive scale-free networks under resource constraints. In Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments, ALENEX 2016, Arlington, Virginia, USA, January 10, 2016, pages 39–52, 2016.
- [18] Huy N Nguyen and Krzysztof Onak. Constant-time approximation algorithms via local improvements. In Foundations of Computer Science, 2008. FOCS’08. IEEE 49th Annual IEEE Symposium on, pages 327–336. IEEE, 2008.
- [19] Sadegh Nobari, Xuesong Lu, Panagiotis Karras, and Stéphane Bressan. Fast random graph generation. In Proceedings of the 14th international conference on extending database technology, pages 331–342. ACM, 2011.
- [20] Krzysztof Onak. New sublinear methods in the struggle against classical problems. Massachusetts Institute of Technology, PhD Thesis, September 2010.
- [21] Krzysztof Onak, Dana Ron, Michal Rosen, and Ronitt Rubinfeld. A near-optimal sublinear-time algorithm for approximating the minimum vertex cover size. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1123–1131, 2012.
- [22] Omer Reingold and Shai Vardi. New techniques and tighter bounds for local computation algorithms. J. Comput. Syst. Sci., 82(7):1180–1200, 2016.
- [23] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 223–238, 2011.
- [24] Martin Sauerhoff. On the entropy of models for the web graph. Manuscript.
- [25] Robert T Smythe and Hosam M Mahmoud. A survey of recursive trees. Theory of Probability and Mathematical Statistics, (51):1–28, 1995.
- [26] Andy Yoo and Keith W. Henderson. Parallel generation of massive scale-free graphs. CoRR, abs/1003.3684, 2010.
- [27] Yuichi Yoshida, Masaki Yamamoto, and Hiro Ito. Improved constant-time approximation algorithms for maximum matchings and other optimization problems. SIAM J. Comput., 41(4):1074–1093, 2012.
Appendix A Implementations of Data Structures
A.1 Data Structure for $|P(w)|$
We use two balanced binary search trees (or order statistic trees). One, called left, stores all vertices such that . The other, called right, stores (the multi-set) . To determine we find, using tree right, how many nodes have (and ). Let this number be . Using tree left we find how many nodes have . Let this number be . Then .
By standard implementations of balanced search trees, the space complexity is $O(k)$ and all operations are done in time $O(\log n)$, where $k$ denotes the number of nodes stored in the trees.
A.2 Data Structure to find the node of rank $r$ in the complement of a stored set
We start with a number of definitions useful for specifying the data structure and its operations.
For a node $x$ and a subset of nodes $S$, define $\mathrm{rank}_S(x)$ as follows:
$$\mathrm{rank}_S(x) = |\{s \in S : s \leq x\}|.$$
Note that, for technical reasons, $\mathrm{rank}_S(x)$ is defined for every node $x$, whether or not $x \in S$.
For a subset of nodes $S$, define $\overline{S}$ as follows: $\overline{S} = [n] \setminus S$, the complement of $S$.
We note that using these definitions we have that, for any $x$, the number of items in $\overline{S}$ that are at most $x$ is $x - \mathrm{rank}_S(x)$.
The insert, delete, and rank operations are implemented as in a standard order-statistics tree based on a balanced binary search tree. The select operation is implemented using rank and then performing the calculation above. To implement select-complement$(r)$ we proceed as follows. We traverse the search tree, and in each node of the tree that contains the vertex $x$ we compare $r$ with $x - \mathrm{rank}_S(x)$. Thus, we can find the maximum $x \in S$ such that $x - \mathrm{rank}_S(x) < r$. Denote this node $x_0$. We then return the node $x_0 + (r - (x_0 - \mathrm{rank}_S(x_0)))$.
The time complexities of insert, delete, rank, and select are therefore those of standard order-statistics trees. The time complexity of select-complement is $O(\log^2 n)$: for each node along the search path, of length $O(\log n)$, we need to use the rank query.
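The following sketch (ours) shows the same $O(\log^2 n)$ mechanism; a sorted list stands in for the balanced search tree, and a binary search over $[n]$ stands in for the tree traversal, using only the identity $|\{y \in \overline{S} : y \leq x\}| = x - \mathrm{rank}_S(x)$.

```python
import bisect

# A sketch of select-complement on top of rank queries: the number of
# complement elements that are <= x is x - rank_S(x), so the element of
# rank r in the complement is the least x with x - rank_S(x) >= r. Each
# probe costs one rank query, hence O(log^2 n) overall.
def select_complement(sorted_s, n, r):
    rank_s = lambda x: bisect.bisect_right(sorted_s, x)
    lo, hi = 1, n
    while lo < hi:
        mid = (lo + hi) // 2
        if mid - rank_s(mid) >= r:
            hi = mid
        else:
            lo = mid + 1
    return lo

S = [2, 3, 5]          # complement of S in [8] is {1, 4, 6, 7, 8}
assert [select_complement(S, 8, r) for r in (1, 2, 3)] == [1, 4, 6]
```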