
Sublinear Random Access Generators for Preferential Attachment Graphs

Guy Even
Tel Aviv University
Tel Aviv 6997801
Israel
Email: guy@eng.tau.ac.il
   Reut Levi
MPI for Informatics
Saarbrücken 66123
Germany
Email: reuti.levi@gmail.com
   Moti Medina
MPI for Informatics
Saarbrücken 66123
Germany
Email: moti.medina@gmail.com
   Adi Rosén
CNRS and Université Paris Diderot
75205 Paris
France
Email: adiro@liafa.univ-paris-diderot.fr. Research supported in part by ANR project RDAM.
Abstract

We consider the problem of sampling from a distribution on graphs, specifically when the distribution is defined by an evolving graph model, and consider the time, space and randomness complexities of such samplers.

In the standard approach, the whole graph is chosen randomly according to the randomized evolving process, stored in full, and then queries on the sampled graph are answered by simply accessing the stored graph. This may require prohibitive amounts of time, space and random bits, especially when only a small number of queries are actually issued. Instead, we propose to generate the graph on-the-fly, in response to queries, and therefore to require amounts of time, space, and random bits which are a function of the actual number of queries.

We focus on two random graph models: the Barabási-Albert Preferential Attachment model (BA-graphs) [3] and the random recursive tree model [25]. We give on-the-fly generation algorithms for both models. With probability $1-1/\mbox{poly}(n)$, each and every query is answered in $\mbox{polylog}(n)$ time, and the increase in space and the number of random bits consumed by any single query are both $\mbox{polylog}(n)$, where $n$ denotes the number of vertices in the graph.

Our results show that, although the BA random graph model is defined by a sequential process, efficient random access to the graph’s nodes is possible. In addition to the conceptual contribution, efficient on-the-fly generation of random graphs can serve as a tool for the efficient simulation of sublinear algorithms over large BA-graphs, and the efficient estimation of their performance on such graphs.

1 Introduction

Consider a Markov process in which a sequence $\{S_t\}_{t}$ of states, $S_t\in\mathcal{S}$, evolves over time $t\geq 1$. Suppose there is a set $\mathcal{P}$ of predicates defined over the state space $\mathcal{S}$. Namely, for every predicate $P\in\mathcal{P}$ and state $S\in\mathcal{S}$, the value of $P(S)$ is well defined. A query is a pair $(P,t)$, and the answer to the query is $P(S_t)$. In the general case, answering a query $(P,t)$ requires letting the Markov process run for $t$ steps until $S_t$ is generated. In this paper we are interested in ways to reduce the dependency on $t$ of the computation time, the memory space, and the number of random bits required to answer a query $(P,t)$.

We focus on the case of generative models for random graphs, and in particular, on the Barabási-Albert Preferential Attachment model [3] (which we call BA-graphs), on the equivalent linear evolving copying model of Kumar et al. [11], and on the random recursive tree model [25]. The question we address is whether one can design a randomized on-the-fly graph generator that answers adjacency list queries of BA-graphs (or random recursive trees) without having to generate the complete graph. Such a generator outputs answers to adjacency list queries as if it first selected the whole graph at random (according to the appropriate distribution) and then answered the queries based on the sampled graph.

We are interested in the following resources of a graph generator: (1) the number of random bits consumed per query, (2) the running time per query, and (3) the increase in memory space per query.

Our main result is a randomized on-the-fly graph generator for BA-graphs over $n$ vertices that answers adjacency list queries. The generated graph is sampled according to the distribution defined for BA-graphs over $n$ vertices, and the complexity upper bounds that we prove hold with probability $1-1/\mbox{poly}(n)$. That is, with probability $1-1/\mbox{poly}(n)$, each and every query is answered in $\mbox{polylog}(n)$ time, and the increase in space and the number of random bits consumed during that query are $\mbox{polylog}(n)$. Our result refutes (definitely for $\mbox{polylog}(n)$ queries) the recent statement of Kolda et al. [10] that: “The majority of graph models add edges one at a time in a way that each random edge influences the formation of future edges, making them inherently serial and therefore unscalable. The classic example is Preferential Attachment, but there are a variety of related models…”

We remark that the entropy of the edges in BA-graphs is $\Theta(\log n)$ per edge in the second half of the graph [24]. Hence it is not possible to consume a sublogarithmic number of random bits per query in the worst case if one wants to sample according to the BA-graph distribution. Similarly, to ensure consistency (i.e., to answer the same query twice in the same way), one must use $\Omega(\log n)$ space per query.

From a conceptual point of view, the main ingredient of our result is a set of techniques to “invert” the sequential process in which each new vertex randomly selects its “parent” in the graph among the previous vertices. Instead, vertices randomly select their “children” among the “future” vertices, while maintaining the same probability distribution as if each child picked its parent “in the future”. We apply these techniques to the related model of random recursive trees [25] (also used within the evolving copying model [11]), and use them as a building block for our main result for BA-graphs.

1.1 Related work

A linear-time randomized algorithm for efficiently generating BA-graphs is given by Batagelj and Brandes [4]. See also Kumar et al. [11] and Nobari et al. [19]. A parallel algorithm is given by Alam et al. [1]. See also Yoo and Henderson [26]. An external-memory algorithm was presented by Meyer and Penschuck [17].

Goldreich, Goldwasser and Nussboim initiated the study of the generation of huge random objects [8] using a “small” amount of randomness. They provide efficient query access to an object modeled as a function, when the object has a predetermined property, for example graphs that are connected. They guarantee that these objects are indistinguishable from random objects that have the same property. This refers to the setting where the size of the object is exponential in the number of queries to the function modeling the object. We note that our generator provides access to graphs that are random BA-graphs, and not just indistinguishable from random BA-graphs.

Mansour, Rubinstein, Vardi and Xie [15] consider local generation of bipartite graphs for the local simulation of Balls-into-Bins online algorithms. They assume that the balls arrive one by one, and that each ball picks $d$ bins independently and is then assigned to one of them. The local simulation of the algorithm locally generates a bipartite graph. Mansour et al. show that with high probability one needs to inspect only a small portion of the bipartite graph in order to run the simulation, and hence a random seed of logarithmic size is sufficient.

1.2 Applications

One reason for generating large BA-graphs is to simulate algorithms over them. Such algorithms often access only small portions of the graphs. In such instances, it is wasteful to generate the whole graph. An interesting example is sublinear approximation algorithms [21, 27, 18, 20], which probe a constant number of neighbors. (Footnote 1: Strictly speaking, sublinear approximation algorithms apply to constant-degree graphs, and BA-graphs are not of constant degree. However, thanks to the power-law degree distribution of BA-graphs, one can “omit” high-degree vertices and maintain the approximation. See also [22].) In addition, local computation algorithms probe a small number of neighbors to provide answers to optimization problems such as maximal independent sets and approximate maximum matchings [6, 7, 22, 23, 2, 15, 16, 12, 13, 14]. Support of adjacency list queries is especially useful for simulating (partial) DFS and BFS over graphs, as the sketch below illustrates.
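To illustrate how such a generator is used, here is a minimal Python sketch (ours, not part of the paper) of a partial BFS that touches only the explored part of the graph; the generator object gen, its field n, and its method ba_next_neighbor are hypothetical stand-ins for the query interface defined in Section 3:

from collections import deque

def partial_bfs(gen, source, max_nodes):
    # Explore at most max_nodes vertices around `source`, using only
    # next-neighbor queries; successive calls on a vertex v are assumed
    # to enumerate v's neighbors, with n+1 signaling exhaustion.
    visited = {source}
    queue = deque([source])
    while queue and len(visited) < max_nodes:
        v = queue.popleft()
        while True:
            w = gen.ba_next_neighbor(v)
            if w == gen.n + 1:          # neighbor list of v exhausted
                break
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return visited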

1.3 Techniques

The main difficulty in providing the on-the-fly generator is in “inverting” the random choices of the BA process. That is, we need to be able to randomly choose the next “child” of a given node $x$, although it will only “arrive in the future”, and its choice of a parent in the BA-graph will depend on what will have happened until it arrives (i.e., on the node degrees in the BA-graph when that node arrives). One possibility for doing so is to maintain, for any future node which does not yet have a parent, how many potential parents it still has, and then go sequentially over the future nodes and randomly decide for each whether its parent will indeed be $x$. This is too costly, not only because we would need to go sequentially over the nodes, but mainly because it may be too costly, in computation time, to calculate, given the random choices already made in response to previous queries, the probability that the parent of a node $y$ that does not yet have a parent will be node $x$.

To overcome this difficulty, we define for every node, even if it already has a parent, its probability of being a candidate to be a child of $x$. We show how these probabilities can be calculated efficiently given the choices made in response to previous queries, and show how, based on these probabilities, we can define an efficient process to choose the next candidate. The candidate node may, however, already have a parent, and thus cannot be a child of $x$. If this is the case, we repeat the process and choose another candidate, until we choose an eligible candidate, which is then taken to be the actual next child of $x$. We show that with high probability this process terminates quickly and finds an eligible candidate, so that with high probability we have an efficient process to find “into the future” the next child of $x$. This is done while sampling exactly according to the distribution defined by the BA-graph process.

In addition to the above technique, which is arguably the crux of our result, we use a number of data structures, based on known constructions, in order to run the on-the-fly generator with polylogarithmic time and space complexities. In the sequel we give, in addition to the formal definitions of the algorithms, some supplementary intuitive explanations of our techniques.

2 Preliminaries

Let $V_n\triangleq\{v_1,\ldots,v_n\}$. Let $G=(V_n,E)$ denote a directed graph on $n$ nodes. (Footnote 2: Preferential attachment graphs are usually presented as undirected graphs. For convenience of discussion we orient each edge from the higher-index vertex to the lower-index vertex, but the graphs we consider remain undirected graphs.) We refer to the endpoints of a directed edge $(u,v)$ as the head $v$ and the tail $u$. Let $\deg(v_i,G)$ denote the degree of the vertex $v_i$ in $G$ (counting both incoming and outgoing edges). Similarly, let $\textit{deg}_{\textit{in}}(v_i,G)$ and $\textit{deg}_{\textit{out}}(v_i,G)$ denote the in-degree and out-degree, respectively, of the vertex $v_i$ in $G$. The normalized degree distribution of $G$ is a vector $\Delta(G)$ with $n$ coordinates, one for each vertex in $G$. The coordinate corresponding to $v_i$ is defined by

$$\Delta(G)_i \triangleq \frac{\deg(v_i,G)}{2\cdot|E|}~.$$

Note that $\sum_{i=1}^{n}\Delta(G)_i=1$.

We also define the in-degree distribution $\Delta_{\textit{in}}(G)$ by

$$\Delta_{\textit{in}}(G)_i \triangleq \frac{\textit{deg}_{\textit{in}}(v_i,G)}{|E|}~.$$

In the sequel, when we say that an event occurs with high probability (or w.h.p.) we mean that it occurs with probability at least $1-\frac{1}{n^c}$, for some constant $c$.

For ease of presentation, we use in the algorithms arrays of size $n$. However, in order to obtain the desired upper bounds on the space complexity, we implement these arrays by means of balanced search trees, where the keys are in $[1,n]$. To access item $i$ in the “array”, key $i$ is searched for in the tree and the value stored at that node is returned; if the key is not found, then $\mathsf{nil}$ is returned. Thus, the space used by the “arrays” is proportional to the number of keys stored, and the time complexity of our algorithms is multiplied by a factor of $O(\log n)$ compared to the time complexity that they would have with a standard random-access implementation of the arrays. When we state upper bounds on time, we take these $O(\log n)$ factors into account. As is common, we analyze the space complexity in terms of “words” of size $O(\log n)$.
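As a small illustration of this convention, the following Python sketch (ours) shows an “array” whose space usage grows only with the number of entries actually written; the paper's implementation uses a balanced search tree, which yields the $O(\log n)$ access time assumed in the analysis, whereas a Python dict gives expected $O(1)$:

class SparseArray:
    # An "array" over keys [1, n] that stores only the touched entries,
    # so its space is proportional to the number of writes, not to n.
    def __init__(self, n):
        self.n = n
        self.store = {}

    def get(self, i):
        return self.store.get(i)        # None plays the role of nil

    def set(self, i, value):
        assert 1 <= i <= self.n
        self.store[i] = value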

3 Queries

Consider an undirected graph $G=(V_n,E)$, where $V_n=\{v_1,\ldots,v_n\}$. Slightly abusing notation, we sometimes consider and denote node $v_i$ as the integer $i$, and so we have a natural order on the nodes. Access to the graph is done by means of a user query $\texttt{BA-next-neighbor}:[1,n]\rightarrow[1,n+1]$, where $n+1$ denotes “no additional neighbor”. We number the queries according to the order in which they are issued, and call this number the time of the query. Let $q(t)$ be the node on which the query at time $t$ was issued, i.e., at time $t$ the query $\texttt{BA-next-neighbor}(q(t))$ is issued by the user. For each node $v\in V$ and any time $t$, let $last_t(v)$ be the largest-numbered node which was previously returned as the value of $\texttt{BA-next-neighbor}(v)$, or $0$ if no such query was issued before time $t$. That is,

$$last_t(v)=\max\Bigl(\{0\}\cup\{\texttt{BA-next-neighbor}(q(t'))\mid t'<t,\ q(t')=v\}\Bigr)~.$$

At time $t$ the query $\texttt{BA-next-neighbor}(v)$ returns $\operatorname{arg\,min}_{i>last_t(v)}\{i:(i,v)\in E\}$, or $n+1$ if no such $i$ exists. When the implementation of the query has access to a data structure holding the whole of $E$, the implementation of BA-next-neighbor is straightforward, simply by accessing this data structure. Figure 1 illustrates a “traditional” randomized graph generation algorithm that generates the whole graph, stores it, and can then answer queries by accessing the data structure that encodes the whole generated graph.
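For concreteness, here is a minimal Python sketch (the class name is ours) of such a “traditional” implementation; it assumes edges are given as (tail, head) pairs oriented from the higher-numbered to the lower-numbered endpoint, and skips self-loops:

class BatchGraphAccess:
    # Answers BA-next-neighbor queries from a fully stored edge set, as
    # in Figure 1: the query on v returns, in increasing order, the
    # tails i > last(v) of edges (i, v), and n+1 once v is exhausted.
    def __init__(self, n, edges):
        self.n = n
        self.children = {v: [] for v in range(1, n + 1)}
        for tail, head in edges:
            if tail != head:             # ignore the self-loop at v_1
                self.children[head].append(tail)
        for v in self.children:
            self.children[v].sort()
        self.next_idx = {v: 0 for v in range(1, n + 1)}   # encodes last(v)

    def ba_next_neighbor(self, v):
        lst, k = self.children[v], self.next_idx[v]
        if k < len(lst):
            self.next_idx[v] = k + 1
            return lst[k]
        return self.n + 1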

4 On-the-fly Graph Generators

An on-the-fly graph generator is an algorithm that gives access to a graph by means of the BA-next-neighbor query defined above, but itself does not have access to a data structure that encodes the whole graph. Instead, in response to the queries issued by the user, the generator modifies its internal data structure (a.k.a. its state), which is initially some empty (constant) state. The generator must ensure, however, that its answers are consistent with some graph $G$. An on-the-fly graph generator for a given distribution on a family of graphs (such as the family of Preferential Attachment graphs on $n$ nodes) must, in addition, ensure that it samples the graphs according to the required distribution. That is, its answers to a sequence of queries must be distributed identically to those returned when a graph was first sampled (according to the desired distribution), stored, and then accessed (see Definition 17 and Theorem 18). Figure 2 illustrates an on-the-fly graph generation algorithm such as the one we build in the present paper.

Figure 1: A batch, “traditional”, random graph generator: the user's queries are answered from a fully stored graph.
Figure 2: An on-the-fly random graph generator: the user's queries are answered from a small evolving state, consuming random bits on demand.

5 Random Graph Models

Preferential attachment [3].

We restrict our attention to the case in which each vertex is connected to the previous vertices by a single edge (i.e., $m=1$ in the terminology of [3]). (Footnote 3: As discussed in Section 2, while the process generates an undirected graph, for ease of discussion we consider each edge as directed from its higher-numbered adjacent node to its lower-numbered adjacent node.) We denote the random process that generates a graph over $V_n$ according to the preferential attachment model by $BA_n$. The random process $BA_n$ generates a sequence of $n$ directed edges $E_n\triangleq\{e_1,\ldots,e_n\}$, where the tail of $e_i$ is $v_i$, for every $i\in[1,n]$. (We abuse notation and let $BA_n=(V_n,E_n)$ also denote the graph generated by the random process.) We refer to the head of $e_i$ as the parent of $v_i$.

The process $BA_n$ draws the edges sequentially, starting with the self-loop $e_1=(v_1,v_1)$. Suppose we have selected $BA_{j-1}$, namely, we have drawn the edges $e_1,\ldots,e_{j-1}$, for $j>1$. The edge $e_j$ is drawn such that its head is node $v_i$ with probability $\frac{\deg(v_i,BA_{j-1})}{2(j-1)}$.

Note that the out-degree of every vertex in (the directed graph representation of) $BA_n$ is exactly one, with the only self-loop at $v_1$. Hence $BA_n$ (without the self-loop) is an in-tree rooted at $v_1$.
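For reference, the sequential process just described can be implemented in linear time with the standard endpoint-multiset trick (as in [4]); the following Python sketch is ours and generates the whole graph, i.e., it is exactly the kind of batch generator that our on-the-fly generator avoids:

import random

def sample_ba_graph(n, rng=None):
    # Draw e_1..e_n of BA_n (m = 1): e_1 is the self-loop (1, 1), and
    # the head of e_j is v_i with probability deg(v_i) / (2(j-1)).
    # `endpoints` lists every endpoint of every edge drawn so far, so a
    # uniform element of it is a degree-proportional vertex.
    rng = rng or random.Random()
    edges = [(1, 1)]
    endpoints = [1, 1]          # the self-loop contributes degree 2
    for j in range(2, n + 1):
        head = rng.choice(endpoints)
        edges.append((j, head))
        endpoints.extend((j, head))
    return edges

Feeding these edges into a stored-graph query structure, such as the BatchGraphAccess sketch in Section 3, realizes the batch generator of Figure 1.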

Evolving copying model [11].

Let $Z_n$ denote the evolving copying model with out-degree $d=1$ and copy factor $\alpha=1/2$. As in the case of $BA_n$, the process $Z_n$ selects the edges $E'_n=\{e'_1,\ldots,e'_n\}$ one by one, starting with the self-loop $e'_1=(v_1,v_1)$. Given the graph $Z_{n-1}=(V_{n-1},E'_{n-1})$, the next edge $e'_n$ emanates from $v_n$. The head of edge $e'_n$ is chosen as follows. Let $b_n\in\{0,1\}$ be an unbiased random bit, and let $u(n)\in[1,n-1]$ be a uniformly distributed random variable (the random variables $b_1,\ldots,b_n$ and $u(1),\ldots,u(n)$ are all pairwise independent). The head of $e'_n$ is determined as follows:

$$\textit{head}(e'_n) \triangleq \begin{cases} u(n) & \text{if } b_n=1\\ \textit{head}(e'_{u(n)}) & \text{if } b_n=0\end{cases}~.$$
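In code, one step of this rule reads as follows (a Python sketch, ours, under the convention that heads[i] stores the head of $e'_i$ for all nodes drawn so far):

import random

def copying_model_step(heads, rng=None):
    # One step of Z_n with d = 1 and copy factor 1/2: the arriving node
    # n picks a uniform earlier node u(n); with probability 1/2 (b_n = 1)
    # it links to u(n) itself, otherwise it copies u(n)'s own head.
    rng = rng or random.Random()
    n = len(heads) + 1              # index of the arriving node
    u = rng.randint(1, n - 1)       # u(n), uniform on [1, n-1]
    head = u if rng.getrandbits(1) else heads[u]
    heads[n] = head
    return head

Starting from heads = {1: 1} and calling this for nodes 2 through n generates $Z_n$.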
Random recursive tree model [25].

If we eliminate from the evolving copying model the bits $b_i$ and the “copying effect”, we get a model where each new node $n$ is connected to one of the previous nodes, chosen uniformly at random. This is the extensively studied (random) recursive tree model [25].


We now relate the various models.

Claim 1 ([1]).

The random graphs $BA_n$ and $Z_n$ are identically distributed.

Proof.

The proof is by induction on $n$. The basis ($n=1$) is trivial. To prove the induction step, assume that $BA_{n-1}$ and $Z_{n-1}$ are identically distributed. We need to prove that the next edges $e_n$ and $e'_n$ in the two processes are also identically distributed, given a graph $G$ as the realization of $BA_{n-1}$ and $Z_{n-1}$, respectively.

The head of $e_n$ is chosen according to the degree distribution $\Delta(BA_{n-1})=\Delta(G)$. Since the out-degree of every vertex is one,

$$\frac{\deg(v_i,BA_{n-1})}{2(n-1)} = \frac{1}{2}\cdot\left(\frac{1}{n-1}+\frac{\textit{deg}_{\textit{in}}(v_i,BA_{n-1})}{n-1}\right).$$

Thus, an equivalent way of choosing the head of $e_n$ is as follows: (1) with probability $1/2$, choose a vertex uniformly at random (this corresponds to the $\frac{1}{2}\cdot\frac{1}{n-1}$ term), and (2) with probability $1/2$, toss a $\Delta_{\textit{in}}(BA_{n-1})$-die (this corresponds to the $\frac{1}{2}\cdot\frac{\textit{deg}_{\textit{in}}(v_i,BA_{n-1})}{n-1}$ term).

Hence, case (1) above corresponds to the case $b_n=1$ in the process $Z_n$. To complete the proof, we observe that, conditioned on the event $b_n=0$, the choice of the head of $e'_n$ in $Z_n$ can be defined as choosing according to the in-degree distribution of the nodes in $Z_{n-1}=G$: indeed, choosing according to the in-degree distribution $\Delta_{\textit{in}}(G)$ is identical to choosing a uniformly distributed random edge in $G$ and then taking its head. But, since the out-degrees of all the vertices in $V_{n-1}$ are all the same (and equal one), this is equivalent to choosing a uniformly distributed random node in $V_{n-1}$ (and taking the head of its out-edge, which is exactly the $b_n=0$ case of $Z_n$). ∎

We use the following claim in the sequel.

Claim 2 (cf. [5], Thm. 6.12 and Thm. 6.32).

Let $T$ be a rooted directed tree on $n$ nodes denoted $1,\ldots,n$, where node $1$ is the root of the tree. If the head of the edge emanating from each node $j>1$ is uniformly distributed among the nodes in $[1,j-1]$, then, with high probability, the following two properties hold:

  1. The maximum in-degree of a node in the tree is $O(\log n)$.

  2. The height of the tree is $O(\log n)$.

Note that the claim still holds if we add to the tree a self-loop on node $1$.

6 The Pointers Tree

We now consider a graph inspired by the random recursive tree model [25] and the evolving copying model [11]. Each vertex $i$ has a variable $u(i)$ that is uniformly distributed over $[1,i-1]$, and can be viewed as a directed edge (or pointer) from $i$ to $u(i)$. We denote this random rooted directed in-tree by $UT$. Let $u^{-1}(j)$ denote the set $\{i:u(i)=j\}$. We refer to the set $u^{-1}(i)$ as the u-children of $i$ and to $u(i)$ as the u-parent of $i$. In conjunction with each pointer, we keep a flag indicating whether this pointer is to be used as a dir (direct) pointer or as a rec (recursive) pointer. We thus use the directed pointer tree to represent a graph in the evolving copying model (which is equivalent, when the flag of each pointer is uniformly distributed between rec and dir, to the BA model).

In this section we consider the subtask of giving access to a random $UT$, together with the flags of each pointer. Ignoring the flags, this section thus gives an on-the-fly random access generator for the extensively studied model of random recursive trees (cf. [25]). We define the following queries.

  • $(i,flag)\leftarrow\texttt{parent}(j)$: $i$ is the parent of $j$ in the tree, and $flag$ is the associated flag.

  • $i\leftarrow\texttt{next-child-tp}(j,k,type)$, where $k\geq j$: $i$ is the least-numbered node $i>k$ such that the parent of $i$ is $j$ and the flag of that pointer is of type $type$. If no such node exists, then $i$ is $n+1$.

The “ideal” way to implement this task is to go over all $n$ nodes, and for each node $j$: (1) uniformly at random choose its parent in $[1,j-1]$; (2) uniformly at random choose the associated flag in $\{{\tt dir},{\tt rec}\}$. Then store the pointers and flags, and answer the queries by accessing this data structure.

In this section we give an on-the-fly generator that answers the above queries, and start with a naïve, non-efficient implementation that illustrates the task to be done. Then we give our efficient implementation.

Notations.

We say that $j$ is exposed if $u(j)\neq\mathsf{nil}$ (initially all pointers $u(j)$ are set to $\mathsf{nil}$). We denote the set of all exposed vertices by $F$. We say that $j$ is directly exposed if $u(j)$ was set during a call to $\texttt{next-child-tp}(i,\cdot,\cdot)$. We say that $j$ is indirectly exposed if $u(j)$ was determined during a call to $\texttt{parent}(j)$. As a result of answering and processing next-child-tp and parent queries, the on-the-fly generator commits to various decisions (e.g., prefixes of adjacency lists). These commitments include edges but also non-edges (i.e., vertices that can no longer serve as $u(j)$ for a certain $j$). For a node $i$, $\textit{front}(i)$ denotes the largest value (node) $k\in[1,n+1]$ that was returned by a $\texttt{next-child-tp}(i,\cdot,\cdot)$ query, and $\mathsf{nil}$ if no such returned value exists. Observe that $\textit{front}(i)=k$ implies that (1) $u(k)=i$; and (2) we already know, for each node $j\in[i+1,k-1]$, whether $u(j)=i$ or not. We denote, roughly speaking, the set of vertices that cannot serve as u-parents of $j$ by $\textit{not-u-parent-candidate}(j)$, the set of nodes that can still be u-parents of $j$ by $\Phi(j)$, and their number by $\varphi(j)=|\Phi(j)|$. The formal definitions are as follows. (Footnote 4: To simplify the definition of the more efficient generator, defined in the sequel, we define $\textit{not-u-parent-candidate}(j)$ and $\Phi(j)$ as above even when $j$ is exposed. Thus, it might be the case that $u(j)\in\textit{not-u-parent-candidate}(j)$, although $u(j)$ is the u-parent of $j$.)

$$\textit{not-u-parent-candidate}(j) \triangleq \{i\in[1,j-1]:\textit{front}(i)\geq j\}~,$$
$$\Phi(j) \triangleq [1,j-1]\setminus\textit{not-u-parent-candidate}(j)~,$$
$$\varphi(j) \triangleq |\Phi(j)|~.$$

6.1 A naïve implementation of next-child

procedure naïve-next-child(j, k)
  x ← k + 1
  while x ≤ n do
    if u(x) = j then
      return (x)
    else if u(x) = nil then
      flip a random bit c(x) such that Pr[c(x) = 1] = 1/φ(x)
      if c(x) = 1 then
        u(x) ← j   /* commit the new child, as described in the text */
        return (x)
      end if
    end if
    x ← x + 1
  end while
  return (n + 1)
end procedure

Figure 3: Pseudocode of naïve-next-child

We give a naïve implementation of a next-child query, with time complexity $O(n)$, with the purpose of illustrating the main properties of this query and in order to contrast it with the more efficient implementation later. We do so in a simpler manner, without looking into the “type”. The naïve implementation of next-child is listed in Figure 3. This implementation, and that of parent, share an array of pointers $u$, and both update it. A query $\texttt{next-child}(j,k)$ is processed by scanning the vertices one by one starting from $k+1$. If $u(x)=j$, then $x$ is the next child. If $u(x)$ is $\mathsf{nil}$, then a coin $c(x)$ is flipped, and $u(x)=j$ is set when $c(x)$ comes out $1$; the probability that $c(x)$ is $1$ is $1/\varphi(x)$. If $c(x)=0$, we proceed to the next vertex. The loop ends when some $c(x)$ is $1$ or all vertices have been exhausted. In the latter case the query returns $n+1$.

The correctness of naïve-next-child, i.e., the fact that the graph is generated according to the required probability distribution, is based on the observation that, conditioned on the event that $u(x)\not\in\textit{not-u-parent-candidate}(x)$, all the vertices in $\Phi(x)$ are equally likely to serve as $u(x)$. Note that the description above does not explain how $\varphi(x)$ is computed; one self-consistent rendition is sketched below.
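The following Python sketch (ours) makes the naïve generator concrete, including a from-scratch computation of $\varphi(x)$, which is precisely what makes this version slow; for the parent query it samples uniformly from $\Phi(j)$, the conditional distribution that the correctness observation above relies on:

import random

class NaivePointersTree:
    # Naive on-the-fly generator for the pointers tree (flags omitted).
    def __init__(self, n, rng=None):
        self.n = n
        self.rng = rng or random.Random()
        self.u = {1: 1}      # u-parent pointers; node 1 is the root
        self.front = {}      # front(i): largest answer returned for i

    def phi(self, x):
        # |Phi(x)|: nodes i < x that are not excluded, i.e., for which
        # front(i) is undefined or front(i) < x.
        excluded = sum(1 for i, f in self.front.items() if i < x and f >= x)
        return (x - 1) - excluded

    def parent(self, j):
        if j not in self.u:
            # uniform over the nodes still eligible to be u(j)
            candidates = [i for i in range(1, j) if self.front.get(i, 0) < j]
            self.u[j] = self.rng.choice(candidates)
        return self.u[j]

    def naive_next_child(self, j, k):
        # assumes k >= front(j); O(n) scan, as in Figure 3
        for x in range(k + 1, self.n + 1):
            if self.u.get(x) == j:
                self.front[j] = x
                return x
            if x not in self.u and self.rng.random() < 1.0 / self.phi(x):
                self.u[x] = j
                self.front[j] = x
                return x
        self.front[j] = self.n + 1
        return self.n + 1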

6.2 An efficient implementation of next-child

We first briefly discuss the challenges on the way to an efficient implementation of next-child. Consider the simple special case where the only two queries issued are, for some $j$, a single $\texttt{parent}(j)$ followed by a single $\texttt{next-child}(j)$ (to simplify this discussion we assume that the value of $k$ is globally known). Consider the situation after the query $\texttt{parent}(j)$. Every vertex $x\in[j+1,n]$ may be a u-child of $j$: since at this point $\textit{front}(i)=\mathsf{nil}$ for every $i$, we have $\varphi(x)=x-1$ and $\mathbf{Pr}[u(x)=j]=1/(x-1)$. Let $P_x$ denote the probability that vertex $x$ is the first child of $j$. Then $P_x=\frac{1}{x-1}\cdot\prod_{\ell=j+1}^{x-1}\left(1-\frac{1}{\ell-1}\right)=\frac{j-1}{(x-1)(x-2)}$, and for $P_{n+1}$ (i.e., the probability that $j$ has no child) $P_{n+1}=\frac{j-1}{n-1}$. Since each of the prefix sums $P'_k=\sum_{x=j+1}^{k}P_x$ can be calculated in $O(1)$ time, this random choice can be carried out in $O(\log n)$ time by choosing uniformly at random a number in $[0,1]$ and performing a binary search on $[j+1,n+1]$ to find which index it represents (see a more detailed and accurate statement of this procedure below). However, in general, at the time of a certain next-child query, limitations may exist, due to previous queries, on the possible consistent values of certain pointers $u(x)$. There are two types of limitations: (i) $u(x)$ might have been already determined, or (ii) $u(x)$ is still $\mathsf{nil}$ but the option of $u(x)=i$ has been excluded since $\textit{front}(i)>x$. These limitations change the probabilities $P_x$ and $P'_x$, rendering them more complicated and time-consuming to compute, and thus rendering the above-defined process inefficient (i.e., not doable in $O(\log n)$ time). In the rest of this section we define and analyze a modified procedure that uses $\mbox{polylog}(n)$ random bits, takes $\mbox{polylog}(n)$ time, and increases the space by $\mbox{polylog}(n)$. This procedure will be at the heart of the efficient implementation of next-child.
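For the unconstrained special case above, the prefix sums telescope: $P'_k=\sum_{x=j+1}^{k}P_x=1-\frac{j-1}{k-1}$. The following Python sketch (ours) carries out the $O(\log n)$ inverse-CDF sampling of the first child; the paper's toss procedure additionally controls the precision of the random draw exactly, a detail this floating-point sketch glosses over:

import random

def first_child_unconstrained(j, n, rng=None):
    # Sample the first u-child of j when no previous query constrains
    # any pointer: Pr[first child is x] = (j-1)/((x-1)(x-2)) for
    # x in [j+1, n], and (j-1)/(n-1) for "no child" (returned as n+1).
    rng = rng or random.Random()
    assert 2 <= j <= n
    U = rng.random()
    if U >= 1.0 - (j - 1) / (n - 1):
        return n + 1                 # j has no u-child
    lo, hi = j + 1, n                # invariant: the answer lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if U < 1.0 - (j - 1) / (mid - 1):   # U < P'_mid: answer <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo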

The efficient implementation of next-child  (and of parent) makes use of the following data structures.

  • An array of length $n$, $u(j)$.

  • An array of length $n$, $type(j)$.

  • An array of length $n$, $\textit{front}(j)$. (We also maintain an array $\textit{front}^{-1}(i)$ with the natural definition.)

  • An array of $n$ balanced search trees, called $\texttt{child}(j)$, each holding the set of nodes $i>j$ such that $u(i)=j$. For technical reasons, all trees $\texttt{child}(j)$ are initialized with $n+1\in\texttt{child}(j)$.

  • A number of additional data structures that are implicit in the listing, described and analyzed in the sequel.

In the implementation of the on-the-fly generator of the pointers tree we will maintain two invariants that are described below. We will later discuss the cost (in running time and space) of maintaining these invariants.

Invariant 3.

For every node $j$, the first $\texttt{next-child-tp}(j,\cdot,\cdot)$ query is always preceded by a $\texttt{parent}(j)$ query.

We will use this invariant to infer that $\textit{front}(j)\neq\mathsf{nil}$ implies that $u(j)\neq\mathsf{nil}$. One can easily maintain this invariant by introducing a $\texttt{parent}(j)$ query as the first step of the implementation of the $\texttt{next-child-tp}(j,\cdot,\cdot)$ query (for technical reasons we do that in a lower-level procedure, next-child).

Invariant 4.

For every vertex $j$, $\textit{front}(j)\neq\mathsf{nil}$ implies that $\textit{front}(\textit{front}(j))\neq\mathsf{nil}$.

The second invariant is maintained by issuing an “internal” $\texttt{next-child}(\textit{front}(j))$ query whenever $\textit{front}(j)$ is updated. This is done recursively, the base of the recursion being node $n+1$. When analyzing the complexities of our algorithm we take these recursive calls into account. Let $\textit{front}^{-1}(j)$ denote the vertex $i$ such that $\textit{front}(i)=j$, if such a vertex $i$ exists (note that there can be at most one such node $i$, except for the case of $j=n+1$); otherwise $\textit{front}^{-1}(j)=\mathsf{nil}$. We get that if $\textit{front}^{-1}(j)\neq\mathsf{nil}$, then $u(j)\neq\mathsf{nil}$.

Definition 5.

At a given time $t$, and for any node $j$, let $\Phi(j)$ and $\varphi(j)$ be defined as follows:

$$\Phi(j)\triangleq\{i \mid i<j \ \mbox{and}\ (\textit{front}(i)<j \ \mbox{or}\ \textit{front}(i)=\mathsf{nil})\},\qquad \varphi(j)=|\Phi(j)|~.$$

We note that if at a given time we consider a node $j$ such that $u(j)=\mathsf{nil}$ (i.e., its parent in the pointers tree is not yet determined), then the set $\Phi(j)$ is the set of all the nodes that can still be the parent of node $j$ in the pointers tree. The set $\Phi$ is, however, defined also for nodes whose parent is already determined.

Definition 6.

Let $K$ denote the following set:

$$K \triangleq \{i:\textit{front}(i)\neq\mathsf{nil}\ \text{and}\ \textit{front}^{-1}(i)=\mathsf{nil}\}~.$$

The following lemma proves that $\{\Phi(x)\}_x$ is a nondecreasing chain. It also characterizes a sufficient condition for $\varphi(x+1)-\varphi(x)\leq 1$, and a necessary and sufficient condition for $\Phi(x+1)=\Phi(x)$ (and hence $\varphi(x+1)=\varphi(x)$).

Lemma 7.

For every $x\in[1,n-1]$:

  1. $\Phi(x)\subseteq\Phi(x+1)\subseteq\Phi(x)\cup\{x,\textit{front}^{-1}(x)\}$.

  2. $\Phi(x+1)=\Phi(x)$ iff $x\in K$.

  3. $\varphi(x+1)-\varphi(x)\leq 1$.

Proof.

To prove the lemma, we use the fact that the values of the various parameters can change only as a result of the queries next-child and parent.

We first claim that $\textit{not-u-parent-candidate}(x+1)\subseteq\textit{not-u-parent-candidate}(x)\cup\{x\}$. This follows from the definition of $\textit{not-u-parent-candidate}(\cdot)$ and the fact that only next-child and parent queries can change the value of $\textit{not-u-parent-candidate}(\cdot)$. Therefore $\Phi(x)\subseteq\Phi(x+1)$. The difference $\Phi(x+1)\setminus\Phi(x)$ may contain $x$ and may contain $\textit{front}^{-1}(x)$, and thus Item 1 follows.

To prove Item 2, assume that $\Phi(x)=\Phi(x+1)$. By Item 1, this implies that $x\notin\Phi(x+1)$ and $\textit{front}^{-1}(x)\notin\Phi(x+1)$. Since $x\notin\Phi(x+1)$, we have $x\in\textit{not-u-parent-candidate}(x+1)$, namely $\textit{front}(x)\geq x+1$ (see the formal definitions), and, in particular, $\textit{front}(x)\neq\mathsf{nil}$. Since $\textit{front}^{-1}(x)\notin\Phi(x+1)$, it must be that $\textit{front}^{-1}(x)=\mathsf{nil}$, and thus $x\in K$, as required. The converse direction is proved similarly.

Finally, to prove Item 3, we need to show that it is not possible for both $x$ and $\textit{front}^{-1}(x)$ to belong to $\Phi(x+1)$. Indeed, if $\textit{front}^{-1}(x)\in\Phi(x+1)$, then there exists a vertex $i$ such that $\textit{front}(i)=x$. Invariant 4 implies that $\textit{front}(x)=\textit{front}(\textit{front}(i))\neq\mathsf{nil}$. However, $x\in\Phi(x+1)$ implies $\textit{front}(x)=\mathsf{nil}$, a contradiction. ∎

Thus, by Lemma 7, we have that for any $x\in[1,n-1]$:

$$\varphi(x+1)-\varphi(x)=\begin{cases}0&\text{if }x\in K,\\ 1&\text{if }x\notin K.\end{cases}\qquad(1)$$

We are now ready to describe the implementation of $\texttt{next-child-tp}(j,k,type)$ and $\texttt{next-child}(j)$. As seen in Figure 4, $\texttt{next-child-tp}(j,k,type)$ is merely a loop over $\texttt{next-child-from}(j,k)$, and $\texttt{next-child-from}(j,k)$ is essentially a call to $\texttt{next-child}(j)$. The “real work” is done in the implementation of $\texttt{next-child}(j)$ and $\texttt{next-child-from}(j,k)$, which we describe now. Note that if $j$ does not have children larger than $k$, then $\texttt{next-child-from}(j,k)$ returns $n+1$.

If $\textit{front}(j)>k$ when $\texttt{next-child-from}(j,k)$ is called, then the next child is already fixed and it is simply extracted from the data structures. Otherwise, an interval $I=[a,b]$ is defined, which will contain the answer of $\texttt{next-child}(j)$. Let $a=\textit{front}(j)+1$ if $\textit{front}(j)\neq\mathsf{nil}$, and $a=j+1$ if $\textit{front}(j)=\mathsf{nil}$. Let $b=\min\left(\{\ell>\textit{front}(j):u(\ell)=j\}\cup\{n+1\}\right)$ if $\textit{front}(j)\neq\mathsf{nil}$, and $b=\min\left(\{\ell>j:u(\ell)=j\}\cup\{n+1\}\right)$ if $\textit{front}(j)=\mathsf{nil}$ (i.e., $b$ is the smallest indirectly exposed child of $j$ beyond the “fully known area” for $j$, or $n+1$ if no such child exists). Observe that no vertex $x\in F\cap[a,b)$ can satisfy $u(x)=j$. Hence, the answer is in $I\setminus(F\setminus\{b\})$.

The next child can be sampled according to the desired distribution in a straightforward way by going sequentially over the vertices in $I\setminus F\setminus\{b\}$, and tossing for each vertex $x$ a coin that has probability $1/\varphi(x)$ of being $1$, until one of those coins indeed comes out $1$, or all vertices are exhausted (in which case node $b$ is taken as the next child). We denote by $D(x)$, for $x\in I\setminus F$, the probability that $x$ is chosen when the above procedure is applied. This procedure, however, takes linear time.

In order to start building our efficient implementation of next-child, we note that, by the definition of $K$, $K\subseteq F$, and we consider a process where we toss $|[a,b)\setminus K|$ coins sequentially for the vertices in $[a,b)\setminus K$. The probability that the coin for $x\in[a,b)\setminus K$ is $1$ is still $1/\varphi(x)$. We stop as soon as a $1$ is encountered, or on $b$ if all coins are $0$. The vertex on which we stop, denote it $x$, is a candidate next u-child. If $x\in F\setminus K\setminus\{b\}$, then $x$ cannot be a child of $j$, so we proceed by repeating the same process, but with the interval $[x+1,b]$ instead of the interval $[a,b]$. We denote by $D'(x)$, for $x\in I\setminus F$, the probability that $x$ is chosen when this procedure is applied.

We now build our efficient procedure that selects the candidate without going over the nodes sequentially. To this end, observe that the sequence of probabilities of the coins tossed in the last-described process behaves “nicely”. Namely, the probabilities $1/\varphi(x)$, for $x\in[a,b)\setminus K$, form the harmonic sequence starting at $1/\varphi(a)$ and ending at $1/(\varphi(a)+|[a,b)\setminus K|-1)$. Indeed, Eq. (1) implies that if vertex $i$ is the smallest vertex in $I\setminus K$, then $\varphi(i)=\varphi(a)$, and an increment between $\varphi(x)$ and $\varphi(x+1)$ occurs if and only if $x\notin K$. Let $s=|[a,b)\setminus K|$ and let $P_h$, $0\leq h\leq s$, be the probability that the node of rank $h$ in $([a,b)\setminus K)\cup\{b\}$ is chosen as the candidate in the sequential procedure defined above. Since $1/\varphi(x)$ forms the harmonic sequence for $x\in[a,b)\setminus K$, we can, given $\varphi(a)$, calculate in $O(1)$ time, for any $0\leq i\leq s+1$, the probability $P'_i=\sum_{q<i}P_q$ (i.e., the probability that a node of some rank $q$, $q<i$, is chosen). Indeed, for $i=0$, $P_i=\frac{1}{\varphi(a)}$; for $0<i<s$, $P_i=\frac{1}{\varphi(a)+i}\cdot\prod_{\ell=0}^{i-1}\left(1-\frac{1}{\varphi(a)+\ell}\right)=\frac{\varphi(a)-1}{(\varphi(a)+i-1)(\varphi(a)+i)}$; and for $i=s$, $P_s=\prod_{\ell=0}^{s-1}\left(1-\frac{1}{\varphi(a)+\ell}\right)=\frac{\varphi(a)-1}{\varphi(a)+s-1}$. Hence, for $0\leq i\leq s$, $P'_i=1-\frac{\varphi(a)-1}{\varphi(a)+(i-1)}$, and for $i=s+1$, $P'_{s+1}=1$. This allows us to simulate one iteration (i.e., choosing the next candidate next u-child) by choosing uniformly at random a single number in $[0,1]$, and then performing a binary search over $0$ to $s$ to decide which rank $h$ this number “represents”. After the rank $h\in[0,s]$ is selected, $h$ is mapped to the vertex of rank $h$ in $([a,b)\setminus K)\cup\{b\}$, denote it $x$, and this is the candidate next u-child. As before, if $x\in F\setminus K\setminus\{b\}$, then $x$ cannot be a child of $j$, so we ignore it and proceed in the same way, this time with the interval $[x+1,b]$. We denote by $\hat{D}(x)$, for $x\in I\setminus F$, the probability that $x$ is chosen when this third procedure is applied. See Figure 4 for a formal definition of this procedure and that of next-child; a sketch of the rank-sampling step appears below.
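The rank-sampling step can be sketched as follows (Python, ours): one uniform draw plus a binary search over the closed-form $P'_i$. The paper's toss (Figure 4) implements the same search with integers in $[0,\alpha]$ so that the outcome is exact, a detail this floating-point sketch omits:

import random

def toss_rank(phi_a, s, rng=None):
    # Sample h in [0, s]: ranks 0..s-1 are the nodes of [a, b) \ K and
    # rank s stands for the default answer b.  Uses
    # P'_i = 1 - (phi_a - 1)/(phi_a + i - 1), Pr[rank h] = P'_{h+1} - P'_h.
    rng = rng or random.Random()
    if s == 0:
        return 0                # [a, b) \ K is empty; b is the only option
    U = rng.random()
    if U >= 1.0 - (phi_a - 1) / (phi_a + s - 1):   # U >= P'_s
        return s                                   # all s coins came out 0
    lo, hi = 0, s - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if U < 1.0 - (phi_a - 1) / (phi_a + mid):  # U < P'_{mid+1}: h <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo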

Observe that this procedure takes $O(\log s)$ time (see Section 6.4 for a formal statement of the time and randomness complexities). We note that we cannot perform this selection procedure with the same time complexity for the set $[a,b)\setminus F$, because we do not have a way to calculate each and every probability $P'_i$, $i\in[a,b)\setminus F$, in $O(1)$ time, even if $\varphi(a)$ is given.

To conclude the description of the implementation of next-child, we give the following lemma, which states that the probability distribution of the next child is the same for all three processes described above.

Lemma 8.

For all $x\in I\setminus F$, $\hat{D}(x)=D(x)$.

Proof.

To prove the claim, we prove that $\hat{D}(x)=D'(x)$ and that $D'(x)=D(x)$.

To prove the latter, denote by $x_1<x_2<\ldots<x_k$ the nodes in the set $I\setminus F$, where $k=|I\setminus F|$, and let $p(x_j)=\frac{1}{\varphi(x_j)}$. For any $1\leq j\leq k-1$, $D(x_j)=p(x_j)\cdot\prod_{i=1}^{j-1}(1-p(x_i))$, and for $x_k$ (which is the node denoted $b$ in the discussion above), $D(x_k)=\prod_{i=1}^{k-1}(1-p(x_i))$, since $b$ is chosen exactly when all coins come out $0$.

When we consider the sequential process where one tosses a coin sequentially for all nodes in $I\setminus K$ (and not only for the nodes in $I\setminus F$), we extend the definition of $D'(\cdot)$ to be defined also for nodes in $I\setminus K$. For a node $z\in(I\setminus K)\cap F$, $D'(z)$ is the probability that $z$ is chosen as a candidate next u-child. Thus, if we denote by $y_1<y_2<\ldots<y_\ell$, $\ell=|I\setminus K|$, the nodes in $I\setminus K$, we have that $D'(y_j)=p(y_j)\cdot\prod_{1\leq i<j;\,y_i\in I\setminus F}(1-p(y_i))$; and for $y_\ell$ (which is the node denoted $b$ in the discussion above), $D'(y_\ell)=\prod_{1\leq i<\ell;\,y_i\in I\setminus F}(1-p(y_i))$. Thus, for any $x\in I\setminus F$, $D(x)=D'(x)$.

We now extend $\hat{D}(\cdot)$ to be defined for all nodes in $I\setminus K$. The assertion $\hat{D}(x)=D'(x)$, for any $x\in I\setminus K$, follows from the fact that a number $M\in[0,1]$ is selected uniformly at random and then the interval in which it lies is found. That is, rank $i$ is selected if and only if $P'_i\leq M<P'_{i+1}$, which, by the definitions of $P_i$ and $P'_i$, occurs with probability $P_i$. ∎

next-child-tp
Returns the least $i>k$ such that $i$ is a u-child of $j$ and $i$ has type $type$.
Assumes that $k\leq\textit{front}(j)$.

procedure next-child-tp(j, k, type)
  x ← k
  repeat
    x ← next-child-from(j, x)
  until type(x) = type or x = n+1
  return x
end procedure

next-child-from
Returns the least $i\geq k$ such that $i$ is a u-child of $j$.
Assumes that $k\leq\textit{front}(j)$.

procedure next-child-from(j, k)
  if k ≥ n then return (n+1)
  q ← succ(child(j), k)
  if q ≤ front(j) then
    return q
  else
    return next-child(j)
  end if
end procedure

parent
Returns the u-parent of $j$.

procedure parent(j)
  if u(j) = nil then
    u(j) ←_R [1, j−1]
    type(j) ←_R {dir, rec}
    insert(child(u(j)), j)
  end if
  return (u(j), type(j))
end procedure

next-child
Returns the least $i>\textit{front}(j)$ which is a u-child of $j$.

procedure next-child(j)
  (p, t) ← parent(j)
  if front(j) ≥ n then return (n+1)
  if front(j) ≠ nil then a ← front(j)+1 else a ← j+1
  if front(j) ≠ nil then b ← succ(child(j), front(j)) else b ← succ(child(j), j)
  repeat
    s ← |[a, b) ∖ K|
    h ← toss(φ(a), s+1)
    if h = s then
      return b
    else
      x ← the vertex of rank h in [a, b) ∖ K
      if u(x) = nil then
        u(x) ← j
        type(x) ←_R {dir, rec}
        insert(child(j), x)
        front(j) ← x
        front⁻¹(x) ← j
        if front(x) = nil then next-child(x)
        return (x)
      else  /* i.e., if u(x) ≠ nil */
        a ← x+1
      end if
    end if
  until forever
end procedure

toss
Returns a random rank 0 ≤ y ≤ t−1.

procedure toss(ξ, t)
  α ← n^c (for some constant c > 1)
  choose uniformly at random an integer M ∈ [0, α]
  H ← M · (1/α)
  using binary search on [0, t−1], find 0 ≤ y ≤ t−1 such that P′_y ≤ H < P′_{y+1}
      (where, for 0 ≤ y ≤ t−1, P′_y = 1 − (ξ−1)/(ξ+(y−1)), and P′_t = 1)
  if H + 1/α ≤ P′_{y+1} then
    return y
  else
    α ← α · ∏_{y=0}^{t−2} (P′_{y+1} − P′_y)
    choose uniformly at random an integer M ∈ [0, α]
    H ← M · (1/α)
    using binary search on [0, t−1], find 0 ≤ y ≤ t−1 such that P′_y ≤ H < P′_{y+1}
    return y
  end if
end procedure

Figure 4: Pseudocode of the pointers tree generator

6.3 Implementation of parent

The implementation of parent is straightforward (see Figure 4). However, note that updating the various data structures, while implicit in the listing, is accounted for in the time analysis.

6.4 Analysis of the pointer tree generator

We first give the following lemma, which we use a number of times later.

Lemma 9.

With high probability, for each and every call to next-child, the size of the recursion tree of that call, for calls to next-child, is $O(\log n)$.

Proof.

Consider the recursive invocation tree that results from a call to next-child. Observe that (1) by the code of next-child, this tree is in fact a path; and (2) this path corresponds to a path in the pointers tree, where each edge of this tree-path is “discovered” by the corresponding call to next-child. That is, the maximum size of a recursion tree of a call to next-child is bounded from above by the height of the pointers tree. By Claim 2, with high probability, this is $O(\log n)$. ∎

6.4.1 Data structures and space complexity

The efficient implementation of next-child makes use of the following data structures.

  • A number of arrays of length $n$: $u(j)$, $type(j)$, $\textit{front}(j)$, and $\textit{front}^{-1}(j)$, used to store various values for nodes $j$. Since we implement arrays by means of search trees, the space complexity of each array is $O(m)$, where $m$ is the maximum number of distinct keys stored with a non-null value in that array at any given time. The time complexity of each operation on these arrays is $O(\log m)=O(\log n)$ (since they are implemented as balanced binary search trees).

  • For each node $j$, a balanced binary search tree called $\texttt{child}(j)$, which stores all nodes $i$ such that $u(i)=j$ (for technical reasons we define $\texttt{child}(j)$ to always include node $n+1$). (Footnote 5: To maintain low space complexity, for a given $j$, $\texttt{child}(j)$ is initialized only at its first use, at which time node $n+1$ is inserted.) Observe that for each child $i$ stored in one of these trees, $u(i)$ is already determined. Thus, the increase, during a given period, in the space used by the child trees is bounded from above by the number of nodes $i$ for which $u(i)$ was determined during that period. For the time complexity of the operations on these trees we use a coarse standard upper bound of $O(\log n)$.

The listings of the implementations of the various procedures leave implicit the maintenance of two data structures, related to the set $K$ and to the computation of $\varphi(\cdot)$:

  • A data structure that allows one to retrieve the value of $\varphi(a)$ for a given vertex $a$. This data structure is implemented by retrieving the cardinality of $\textit{not-u-parent-candidate}(a)$ for a given node $a$, which is equivalent to counting how many nodes $i<a$ have $\textit{front}(i)\neq\mathsf{nil}$ and $\textit{front}(i)\geq a$. We use two balanced binary search trees (order-statistics trees) in a specific way; by standard implementations of balanced search trees, the space complexity is $O(k)$, and all operations are done in time $O(\log k)=O(\log n)$. Here $k$ denotes the number of nodes $i$ such that $\textit{front}(i)\neq\mathsf{nil}$. More details of the implementation of this data structure appear in the appendix (see Section A.1).

  • A data structure that allows one to find the vertex of rank $h$ in the ordered set $[a,n+1]\setminus K$. This data structure is implemented by a balanced binary search tree storing the nodes in $K$, augmented with the queries $\textit{rank}_K(i)$ (as in an order-statistics tree) as well as $\textit{rank}_{\bar K}(i)$ and $\textit{select}_{\bar K}(s)$, i.e., finding the element of rank $s$ in the complement of $K$. To find the vertex of rank $h$ in $[a,n+1]\setminus K$ we use the query $\textit{select}_{\bar K}(\textit{rank}_{\bar K}(a)+h)$. The space complexity of this data structure is $O(k)$, and all operations are done in time $O(\log k)=O(\log n)$ or $O(\log^2 k)=O(\log^2 n)$ (for the $\textit{select}_{\bar K}(\cdot)$ query). Here $k$ denotes the number of nodes in $K$, which is upper bounded by the number of nodes $i$ such that $\textit{front}(i)\neq\mathsf{nil}$. More details of the implementation of this data structure appear in the appendix (see Section A.2); a sketch of the select-in-complement idea follows this list.
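One possible realization of $\textit{select}_{\bar K}$, matching the $O(\log^2)$ bound, is a binary search over the value domain using $\textit{rank}_K$ probes. In the Python sketch below (ours, with a 0-based rank convention), a sorted list stands in for the balanced tree, so only the probe count, not the per-probe cost, reflects the real data structure:

import bisect

def select_complement(K_sorted, s):
    # Return the element of rank s (0-based) in the complement of K
    # within the positive integers; K_sorted is a sorted list of K.
    # v is the answer iff exactly s + 1 values in [1, v] lie outside K,
    # and v - rank_K(v) is nondecreasing in v, so binary search applies.
    lo, hi = 1, len(K_sorted) + s + 1   # the answer is at most |K| + s + 1
    while lo < hi:
        mid = (lo + hi) // 2
        outside = mid - bisect.bisect_right(K_sorted, mid)  # rank in complement
        if outside >= s + 1:
            hi = mid
        else:
            lo = mid + 1
    return lo

To find the vertex of rank $h$ (0-based) in $[a,n+1]\setminus K$, one would call it with $s=\bigl((a-1)-\textit{rank}_K(a-1)\bigr)+h$.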

6.4.2 Time complexity

Time complexity of $\texttt{toss}(\varphi,s)$.

The time complexity of this procedure is $O(\log n)$ (regardless of whether the if condition holds), because it performs a binary search on (at most) $n$ items, and each iteration of this search takes $O(1)$ time.

Time complexity of “$x\leftarrow$ the vertex of rank $h$ in $[a,n+1]\setminus K$”.

This operation is implemented using the data structure defined above, and takes $O(\log^2 n)$ time.

Time complexity of $\texttt{parent}(j)$.

Examining the listing (Figure 4), one observes that the number of operations is constant. However, each access to the “array” $u(\cdot)$ takes $O(\log n)$ time, and, though implicit in the listing, one should take into account the update of the data structure that stores the set $K$, which takes $O(\log n)$ time. Thus the time complexity of parent is $O(\log n)$.

Time complexity of next-child.

First consider the time complexity consumed by a single invocation of next-child, i.e., without taking into account the time consumed by recursive calls to next-child. (Footnote 6: We talk about an “invocation”, rather than a “call”, when we want to emphasize that we consider only the resources consumed by a single level of the recursion tree.) The call to parent takes $O(\log n)$ time. Therefore, until the start of the repeat loop, the time is $O(\log n)$ (the time complexity of succ is $O(\log n)$). Now, the time complexity of a single iteration of the loop (without taking into account recursive calls to next-child) is $O(\log^2 n)$ because:

  • Each access to an “array” takes $O(\log n)$ time.

  • Calculating $\varphi(a)$ takes $O(\log n)$ time.

  • The call to toss takes $O(\log n)$ time.

  • Finding the vertex of rank $h$ in $[a,n+1]\setminus K$ takes $O(\log^2 n)$ time.

  • Each of the $O(1)$ updates of $\textit{front}(\cdot)$ or $\textit{front}^{-1}(\cdot)$ may change the set $K$, and therefore may take $O(\log n)$ time to update the data structures involving $K$.

  • An update of any given $\texttt{child}(\cdot)$ binary search tree takes $O(\log n)$ time.

We now examine the number of iterations of the loop.

Claim 10.

With high probability, the number of iterations of the loop in a single invocation of next-child is $O(\log n)$.

Proof.

We consider a process where the iterations continue until the selected node is node $b$. A random variable $R$ depicting this number of iterations dominates a random variable that depicts the actual number of iterations. In each iteration, an additional node is selected by toss. By Lemma 8, the probability that a node $j<b$ is selected by toss is $1/\varphi(j)$, and we have that $1/\varphi(j)\leq\frac{1}{j-1}$. Thus, $R=1+\sum_{j=a}^{b-1}X_j$, where $X_j$ is $1$ iff node $j$ was selected, and $0$ otherwise. Since $\mu=\sum_{j=a}^{b-1}\frac{1}{\varphi(j)}\leq\log n$, using a Chernoff bound (cf. [9], Inequality (8)) we have, for any constant $c>6$, $\mathbf{Pr}[R>c\cdot\log n]\leq 2^{-c\cdot\log n}=n^{-\Omega(1)}$. ∎

We thus have the following.

Lemma 11.

For any given invocation of next-child, with high probability, the time complexity is $O(\log^3 n)$.

6.4.3 Randomness complexity

Randomness is used in our generator to randomly select the parents of nodes (in parent) and to randomly select the next child of a node (in toss). We use the common convention that, for any given $m$, one can choose uniformly at random an integer in $[0,m-1]$ using $O(\log m)$ random bits and $O(1)$ computation time. We give our algorithms and analyses based on this building block.

In procedure parent we use $O(\log n)$ random bits whenever, for a given $j$, this procedure is called with parameter $j$ for the first time.

In procedure toss, the if condition holds with probability $1-1/n^{c-1}$ (where $c$ is the constant used in that procedure). Therefore, given a call to toss, with probability $1-1/n^{c-1}$ this procedure uses $O(\log n)$ random bits. By Claim 10, in each call to next-child the number of times that toss is called is, w.h.p., $O(\log n)$. We thus have the following.

Lemma 12.

During a given call to next-child, w.h.p., $O(\log^2 n)$ random bits are used.

The following lemma states the time, space, and randomness complexities of the queries.

Lemma 13.

The complexities of next-child-tp and parent are as follows.

  • Given a call to parent, the following hold for this call:

    1. The increase, during that call, of the space used by our algorithm is $O(1)$.

    2. The number of random bits used during that call is $O(\log n)$.

    3. The time complexity of that call is $O(\log n)$.

  • Given a call to next-child-tp, with high probability, all of the following hold for this call:

    1. The increase, during that call, of the space used by our algorithm is $O(\log^{2}n)$.

    2. The number of random bits used during that call is $O(\log^{4}n)$.

    3. The time complexity of that call is $O(\log^{5}n)$.

Proof.

parent. During a call to $\texttt{parent}(j)$, the used space increases when a pointer $u(j)$ becomes non-$\mathsf{nil}$ or when additional values are stored in $\texttt{child}(u(j))$. To select $u(j)$, $O(\log n)$ random bits are used, and $O(\log n)$ time is consumed to insert $j$ into $\texttt{child}(u(j))$ and to update the data structure for the set $K$ (this is implicit in the listing).

next-child-tp. We first consider next-child. Observe that, by Lemma 9, w.h.p. each root (non-recursive) call of next-child has a recursion tree of size $O(\log n)$. In each invocation of next-child, $O(1)$ variables $\textit{front}(j)$ and $u(j)$ may be updated. Therefore, w.h.p., the increase in space during each root call to next-child is $O(\log n)$ (see Section 6.4.1). Using Lemmas 12 and 9 we have that, w.h.p., each root call of next-child uses $O(\log^{3}n)$ random bits. Using Lemmas 11 and 9, we have that, w.h.p., the time complexity of each root call of next-child is $O(\log^{4}n)$.

Because the types of the pointers are uniformly distributed in {𝚍𝚒𝚛,𝚛𝚎𝚌}\{{\tt dir},{\tt rec}\}, each call to next-child-tp results, w.h.p., in O(logn)O(\log n) calls to next-child. The above complexities are thus multiplied by an O(logn)O(\log n) factor to get the (w.h.p.) complexities of next-child-tp. ∎
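In summary, the factors established above compose as follows for the time complexity of next-child-tp (a schematic restatement of the bounds just derived):
$$\underbrace{O(\log n)}_{\text{calls to next-child}}\times\underbrace{O(\log n)}_{\text{recursion tree size (Lemma 9)}}\times\underbrace{O(\log^{3}n)}_{\text{time per invocation (Lemma 11)}}\;=\;O(\log^{5}n).$$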

7 On-the-fly Generator for BA-Graphs

BA-next-neighbor
Returns the next neighbor of $j$ in the BA-graph.
procedure BA-next-neighbor($j$)
   if $first\_query(j)={\tt true}$ then   /* first query for $j$ */
      $first\_query(j)\leftarrow{\tt false}$
      heap-insert($heap_{j}$, $n+1$)
      heap-insert($heap_{j}$, next-child-tp($j,j,{\tt dir}$))
      return BA-parent($j$)
   else   /* all subsequent queries for $j$ */
      $r\leftarrow$ heap-extract-min($heap_{j}$)
      if $r=n+1$ then
         heap-insert($heap_{j}$, $n+1$)
         return $n+1$
      else
         if $type(r)={\tt dir}$ then
            heap-insert($heap_{j}$, next-child-tp($j,r,{\tt dir}$))
            heap-insert($heap_{j}$, next-child-tp($r,r,{\tt rec}$))
         else
            $(q,type)\leftarrow$ parent($r$)
            heap-insert($heap_{j}$, next-child-tp($q,r,{\tt rec}$))
            heap-insert($heap_{j}$, next-child-tp($r,r,{\tt rec}$))
         end if
         return $r$
      end if
   end if
end procedure
BA-parent
Returns the parent of $j$ in the BA-graph.
procedure BA-parent($j$)
   $(i,flag)\leftarrow$ parent($j$)
   if $flag={\tt dir}$ then
      return $i$
   else
      return BA-parent($i$)
   end if
end procedure
Figure 5: Pseudo code of the on-the-fly BA generator

Our on-the-fly generator for BA-graphs is called O-t-F-BA, and simply calls $\texttt{BA-next-neighbor}(v)$ for each query on node $v$. We present an implementation of the BA-next-neighbor query, prove its correctness, and analyze its time, space, and randomness complexities. The on-the-fly BA generator maintains $n$ standard heaps, one per node; the heaps store nodes, ordered by their serial numbers. (For simplicity of presentation we assume that each heap is initialized at its first insert, and our use of the heaps ensures that no extraction is performed before the first insert.) The heap of node $j$ stores some of the nodes already known to be neighbors of $j$. In addition, the generator maintains, for purely technical reasons, an array of size $n$, $first\_query$, indicating whether a BA-next-neighbor query has already been issued for a given node. The implementation of the BA-next-neighbor query works as follows (see Figure 5).

  • For the first BA-next-neighbor($j$) query, for a given $j$, we proceed as follows. We find the parent of $j$ in the BA-graph by following, in the pointers tree, the chain of pointers from $j$ through its ancestors until a dir pointer (rather than a rec pointer) is traversed; the node it points to is the parent. See Figure 5. In addition, we initialize the process of finding the neighbors of $j$ to its right (i.e., with a bigger serial number) by inserting into the heap of $j$ the “final node” $n+1$ as well as the first child of $j$.

  • For any subsequent BA-next-neighbor($j$) query for node $j$ we proceed as follows (see also the sketch following this list). Observe that any subsequent query must return a child of $j$ in the BA-graph. The children of $j$ in the BA-graph are those nodes $x$ for which, in the pointers tree, the path of $u(\cdot)$ pointers starting at $x$ and ending at $j$ consists of rec pointers, except for the last one, which is dir. The query $\texttt{BA-next-neighbor}(j)$ has, however, to report the children in increasing order of their index. To this end the heap of node $j$ is used; it stores, at any given time, some of the children of $j$ in the BA-graph not yet returned by a $\texttt{BA-next-neighbor}(j)$ query. We must update this heap so that $\texttt{BA-next-neighbor}(j)$ continues to return the next child in index order. To this end we proceed as follows. Whenever a node $r$ is extracted from the heap, in order to be returned as the next child, we update the heap to include the following:

    • If $r$ has a dir pointer to $j$, then we add to the heap (1) the first node after $r$ with a dir pointer to $j$, and (2) the first node that has a rec pointer to $r$.

    • If $r$ has a rec pointer to a node $r^{\prime}$, then we add to the heap (1) the first node after $r$ with a rec pointer to $r^{\prime}$, and (2) the first node that has a rec pointer to $r$.
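To make the bookkeeping concrete, here is a minimal Python sketch of the subsequent-query case. The helpers next_child_tp, parent, and node_type stand in for the procedures next-child-tp, parent, and type of Figure 5; they are assumptions of the sketch, not given implementations.

```python
import heapq

def next_from_heap(j, heap, sentinel, next_child_tp, parent, node_type):
    """Sketch of the subsequent-query branch of BA-next-neighbor.

    `heap` holds children of j not yet reported; `sentinel` is the
    "final node" n+1.  Assumed helpers, mirroring Figure 5:
      next_child_tp(q, r, flag) -> next node after r pointing to q,
      parent(r) -> (q, flag),  node_type(r) -> 'dir' or 'rec'.
    """
    r = heapq.heappop(heap)
    if r == sentinel:                      # no further children of j
        heapq.heappush(heap, sentinel)     # keep the sentinel for later calls
        return sentinel
    if node_type(r) == 'dir':              # r has a dir pointer to j
        heapq.heappush(heap, next_child_tp(j, r, 'dir'))
    else:                                  # r has a rec pointer to some q
        q, _ = parent(r)
        heapq.heappush(heap, next_child_tp(q, r, 'rec'))
    # in both cases: the first node with a rec pointer to r itself
    heapq.heappush(heap, next_child_tp(r, r, 'rec'))
    return r
```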

The proof of Lemma 14 below is based on the claim that the heap contains only children of $j$ in the BA-graph, and that it always contains the child of $j$ just after the one last returned.

Lemma 14.

For every node $j$, the procedure $\texttt{BA-next-neighbor}(j)$ returns the next neighbor of $j$ in the BA-graph.

Proof.

Given a pointers tree we define the following notions:

  • The set of nodes that have a dir pointer to a given node $j$. That is, for $1\leq j\leq n$, $D(j)\triangleq\{i~|~u(i)=j,\,flag(i)={\tt dir}\}$.

  • The set of nodes that have a rec pointer to a given node $j$. That is, for $1\leq j\leq n$, $R(j)\triangleq\{i~|~u(i)=j,\,flag(i)={\tt rec}\}$.

Given a BA graph, for any node 1jn1\leq j\leq n and any prefix length 0n10\leq\ell\leq n-1, we denote by N(j)N^{\ell}(j) the set of the first (according to the index number) \ell neighbors of jj in the BA graph.

In what follows we consider an arbitrary node jj. We consider the actions of BA-next-neighbor (see Figure 5). Let M(j)M^{\ell}(j) be the set of nodes returned by the first \ell calls BA-next-neighbor(j)\texttt{BA-next-neighbor}(j). We first prove that the following invariant holds.
Just after call number $\ell\geq 1$ of $\texttt{BA-next-neighbor}(j)$:

  1. The heap $heap_{j}$ contains only neighbors of $j$ in the BA-graph.

  2. The heap $heap_{j}$ contains the minimum node in $D(j)\setminus M^{\ell}(j)$.

  3. Let $q$ be the first neighbor of $j$ in the BA graph. The heap $heap_{j}$ contains, for each node $i\in M^{\ell}(j)\setminus\{q\}$, the minimum node in $R(i)\setminus M^{\ell}(j)$.

We prove that the invariant holds by induction on \ell. The induction basis, for call number =1\ell=1, holds since the first call to BA-next-neighbor(j)\texttt{BA-next-neighbor}(j) results in inserting into heapjheap_{j} the first node xx which has a dir pointer to node jj and since heapjheap_{j} was previously empty (see Figure 5). Thus all points of the invariant hold after call =1\ell=1. For >1\ell>1 assume that the induction hypothesis holds for 1\ell-1 and let rr be the node returned by the \ell’th call to BA-next-neighbor(j)\texttt{BA-next-neighbor}(j). We claim that the invariant still holds after call \ell by verifying each one of the two cases for the pointer of rr and the insertions into the heap for each such case.
If $r$ has a dir pointer, then the following nodes are inserted into $heap_{j}$: (1) The first node after $r$ with a dir pointer to $j$. Since this is a neighbor of $j$ in the BA graph, Point 1 continues to hold; since $r$, just extracted from the heap, was the minimum node in the heap, Point 2 also continues to hold. (2) The first node after $r$ that has a rec pointer to $r$. Since this is a neighbor of $j$ in the BA graph, Point 1 continues to hold; Point 3 continues to hold since nothing has changed for any other $i\neq r$, $i\in M^{\ell}(j)\setminus\{q\}$, and for $r$ the minimum node in $R(r)\setminus M^{\ell}(j)$ has just been inserted.
If $r$ has a rec pointer, let $q$ be the parent of $r$ in the pointers tree. Then the following nodes are inserted into $heap_{j}$: (1) The first node after $r$ that has a rec pointer to $q$; denote it $x$. Since $x$ is a neighbor of $j$ in the BA graph, Point 1 continues to hold. Since $r$, just extracted from the heap, was the minimum node in the heap, $x$ is the minimum node in $R(q)\setminus M^{\ell}(j)$ and Point 3 continues to hold (nothing changes for any $q^{\prime}\neq q$, $q^{\prime}\in M^{\ell}(j)\setminus\{q\}$). (2) The first node after $r$ that has a rec pointer to $r$. The same arguments as those for the corresponding case when $r$ has a dir pointer apply, and thus both Point 1 and Point 3 continue to hold.
This concludes the proof of the invariant.

We now use the above invariant to prove that, for any $\ell\geq 1$, $N^{\ell}(j)=M^{\ell}(j)$. We do so by induction on $\ell$. For $\ell=1$ the claim follows from the facts that the first neighbor of node $j$ is its parent in the BA graph, and that the first call $\texttt{BA-next-neighbor}(j)$ returns the value that $\texttt{BA-parent}(j)$ returns. This proves the induction basis. We now prove the claim for $\ell>1$ given the induction hypothesis for all $\ell^{\prime}<\ell$. Let node $x$ be the $\ell$’th neighbor of $j$. We have two cases: (1) node $x$ has a dir pointer to $j$; (2) node $x$ has a rec pointer to another child of $j$ in the BA graph (i.e., to another neighbor of $j$ in the BA graph, which is not the first neighbor).

Case (1): By the induction hypothesis $N^{\ell-1}(j)=M^{\ell-1}(j)$, hence by Point 2 of the invariant $x$ is in the heap $heap_{j}$ when the $\ell$’th call occurs. Since any node returned by $\texttt{BA-next-neighbor}(j)$ is no longer in $heap_{j}$, by Point 1 of the invariant, $heap_{j}$ does not contain any node smaller than $x$. Therefore the node returned by the $\ell$’th call of $\texttt{BA-next-neighbor}(j)$ is node $x$.

Case (2): Let node $y$ be the parent of node $x$ in the pointers tree, i.e., $u(x)=y$. Since $y$ is a neighbor of $j$ in the BA graph, and $y<x$, it follows that $y\in N^{\ell-1}(j)$, and by the induction hypothesis $y\in M^{\ell-1}(j)$. Moreover, any node $x^{\prime}<x$ with $u(x^{\prime})=y$ and $flag(x^{\prime})={\tt rec}$ is a neighbor of $j$, hence any such node $x^{\prime}$ is in $N^{\ell-1}(j)$, and by the induction hypothesis also in $M^{\ell-1}(j)$. It follows from Point 3 of the invariant that $x$ is in the heap $heap_{j}$ when the $\ell$’th call occurs. Since any node returned by $\texttt{BA-next-neighbor}(j)$ is no longer in $heap_{j}$, by Point 1 of the invariant, $heap_{j}$ does not contain any node smaller than $x$. Therefore the node returned by the $\ell$’th call of $\texttt{BA-next-neighbor}(j)$ is node $x$. This completes the proof of the lemma. ∎

Since the flags in the pointers tree are uniformly distributed, and by Lemma 13, we have:

Lemma 15.

For any given root (non-recursive) call of BA-parent, with high probability, that call takes O(log2n)O(\log^{2}n) time.

We can now conclude with the following theorem.

Theorem 16.

For any given call of BA-next-neighbor, with high probability, all of the following hold for that call:

  1. The increase, during that call, of the space used by our algorithm is $O(\log^{2}n)$.

  2. The number of random bits used during that call is $O(\log^{4}n)$.

  3. The time complexity of that call is $O(\log^{5}n)$.

Proof.

Each call of BA-next-neighbor performs a constant number of calls to BA-parent and next-child-tp, a constant number of calls to heap-insert and heap-extract-min, and a constant number of accesses to “arrays”. The claim then follows from standard deterministic heap implementations (with $O(1)$ space per stored item and $O(\log n)$ time per operation) and from Lemmas 13 and 15. ∎

We now state the properties of our on-the-fly graph generator for BA-graphs.

Definition 17.

For a number of queries T>0T>0 and a sequence of BA-next-neighbor queries Q=(q(1),,q(T))Q=(q(1),\ldots,q(T)), let A(Q)A(Q) be the sequence of answers returned by an algorithm AA on QQ. If AA is randomized then A(Q)A(Q) is a probability distribution on sequences of answers.

Let Opt-BAn\texttt{Opt-BA}_{n} be the (randomized) algorithm that first runs the Markov process to generate a graph GG on nn nodes according to the BA model, stores GG, and then answers queries by accessing the stored GG. Let O-t-F-BAn\texttt{O-t-F-BA}_{n} be the algorithm O-t-F-BA run with graph-size nn. From the definition of the algorithm we have the following.

Theorem 18.

For any sequence of queries QQ, Opt-BAn(Q)=O-t-F-BAn(Q)\texttt{Opt-BA}_{n}(Q)=\texttt{O-t-F-BA}_{n}(Q).

We now conclude by stating the complexities of our on-the-fly BA generator.

Theorem 19.

For any T>0T>0 and any sequence of queries Q=(q(1),,q(T))Q=(q(1),\ldots,q(T)), when using O-t-F-BAn\texttt{O-t-F-BA}_{n} it holds w.h.p. that, for all 1tT1\leq t\leq T:

  1. The increase in the used space, while processing query $t$, is $O(\log^{2}n)$.

  2. The number of random bits used while processing query $t$ is $O(\log^{4}n)$.

  3. The time complexity for processing query $t$ is $O(\log^{5}n)$.

Proof.

A query $\texttt{BA-next-neighbor}(v)$ at time $t$ is trivial if at some $t^{\prime}<t$ a query $\texttt{BA-next-neighbor}(v)$ returned $n+1$. Observe that trivial queries take $O(\log n)$ deterministic time, do not use randomness, and do not increase the used space. Since there are fewer than $n^{2}$ non-trivial queries, the theorem follows from Theorem 16 and a union bound. ∎

We note that the various assertions in this paper of the form “with high probability … is $O(\log^{c}n)$” can also be stated in the form “with probability $1-\frac{1}{n^{d}}$ … is $f(d)\cdot\log^{c}n$”. We can therefore combine these various assertions and, together with the fact that the number of non-trivial queries is $\mbox{poly}(n)$, obtain the final result stated above.

Acknowledgments.

We thank Yishay Mansour for raising the question of whether one can locally generate preferential attachment graphs, and Dimitri Achlioptas and Matya Katz for useful discussions. We further thank an anonymous ICALP reviewer for a comment that helped us simplify one of the data structure implementations.

References

  • [1] Md. Maksudul Alam, Maleq Khan, and Madhav V. Marathe. Distributed-memory parallel algorithms for generating massive scale-free networks using preferential attachment model. In International Conference for High Performance Computing, Networking, Storage and Analysis, SC’13, Denver, CO, USA - November 17 - 21, 2013, pages 91:1–91:12, 2013.
  • [2] Noga Alon, Ronitt Rubinfeld, Shai Vardi, and Ning Xie. Space-efficient local computation algorithms. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1132–1139, 2012.
  • [3] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
  • [4] Vladimir Batagelj and Ulrik Brandes. Efficient generation of large random networks. Physical Review E, 71(3):036113, 2005.
  • [5] Michael Drmota. Random Trees: An Interplay Between Combinatorics and Probability. Springer Publishing Company, Incorporated, 1st edition, 2009.
  • [6] Guy Even, Moti Medina, and Dana Ron. Best of two local models: Local centralized and local distributed algorithms. CoRR, abs/1402.3796, 2014.
  • [7] Guy Even, Moti Medina, and Dana Ron. Deterministic stateless centralized local algorithms for bounded degree graphs. In Algorithms - ESA 2014 - 22th Annual European Symposium, Wroclaw, Poland, September 8-10, 2014. Proceedings, pages 394–405, 2014.
  • [8] Oded Goldreich, Shafi Goldwasser, and Asaf Nussboim. On the implementation of huge random objects. SIAM J. Comput., 39(7):2761–2822, 2010.
  • [9] Torben Hagerup and Christine Rüb. A guided tour of Chernoff bounds. Inf. Process. Lett., 33(6):305–308, 1990.
  • [10] Tamara G. Kolda, Ali Pinar, Todd Plantenga, and C. Seshadhri. A scalable generative graph model with community structure. SIAM J. Scientific Computing, 36(5), 2014.
  • [11] Ravi Kumar, Prabhakar Raghavan, Sridhar Rajagopalan, D. Sivakumar, Andrew Tomkins, and Eli Upfal. Random graph models for the web graph. In 41st Annual Symposium on Foundations of Computer Science, FOCS 2000, 12-14 November 2000, Redondo Beach, California, USA, pages 57–65, 2000.
  • [12] Reut Levi, Guy Moshkovitz, Dana Ron, Ronitt Rubinfeld, and Asaf Shapira. Constructing near spanning trees with few local inspections. CoRR, abs/1502.00413, 2015.
  • [13] Reut Levi, Dana Ron, and Ronitt Rubinfeld. Local algorithms for sparse spanning graphs. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, pages 826–842, 2014.
  • [14] Reut Levi, Ronitt Rubinfeld, and Anak Yodpinyanee. Brief announcement: Local computation algorithms for graphs of non-constant degrees. In Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures, SPAA 2015, Portland, OR, USA, June 13-15, 2015, pages 59–61, 2015.
  • [15] Yishay Mansour, Aviad Rubinstein, Shai Vardi, and Ning Xie. Converting online algorithms to local computation algorithms. In Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I, pages 653–664, 2012.
  • [16] Yishay Mansour and Shai Vardi. A local computation approximation scheme to maximum matching. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques - 16th International Workshop, APPROX 2013, and 17th International Workshop, RANDOM 2013, Berkeley, CA, USA, August 21-23, 2013. Proceedings, pages 260–273, 2013.
  • [17] Ulrich Meyer and Manuel Penschuck. Generating massive scale-free networks under resource constraints. In Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments, ALENEX 2016, Arlington, Virginia, USA, January 10, 2016, pages 39–52, 2016.
  • [18] Huy N. Nguyen and Krzysztof Onak. Constant-time approximation algorithms via local improvements. In Foundations of Computer Science, 2008. FOCS’08. IEEE 49th Annual IEEE Symposium on, pages 327–336. IEEE, 2008.
  • [19] Sadegh Nobari, Xuesong Lu, Panagiotis Karras, and Stéphane Bressan. Fast random graph generation. In Proceedings of the 14th international conference on extending database technology, pages 331–342. ACM, 2011.
  • [20] Krzysztof Onak. New sublinear methods in the struggle against classical problems. Massachusetts Institute of Technology, PhD Thesis, September 2010.
  • [21] Krzysztof Onak, Dana Ron, Michal Rosen, and Ronitt Rubinfeld. A near-optimal sublinear-time algorithm for approximating the minimum vertex cover size. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1123–1131, 2012.
  • [22] Omer Reingold and Shai Vardi. New techniques and tighter bounds for local computation algorithms. J. Comput. Syst. Sci., 82(7):1180–1200, 2016.
  • [23] Ronitt Rubinfeld, Gil Tamir, Shai Vardi, and Ning Xie. Fast local computation algorithms. In Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 7-9, 2011. Proceedings, pages 223–238, 2011.
  • [24] Martin Sauerhoff. On the entropy of models for the web graph. Manuscript.
  • [25] Robert T. Smythe and Hosam M. Mahmoud. A survey of recursive trees. Theory of Probability and Mathematical Statistics, (51):1–28, 1995.
  • [26] Andy Yoo and Keith W. Henderson. Parallel generation of massive scale-free graphs. CoRR, abs/1003.3684, 2010.
  • [27] Yuichi Yoshida, Masaki Yamamoto, and Hiro Ito. Improved constant-time approximation algorithms for maximum matchings and other optimization problems. SIAM J. Comput., 41(4):1074–1093, 2012.

Appendix A Implementations of Data Structures

A.1 Data Structure for φ()\varphi(\cdot)

We use two balanced binary search trees (or order statistic trees). One, called left, stores all vertices ii such that front(i)𝗇𝗂𝗅\textit{front}(i)\neq\mathsf{nil}. The other, called right, stores (the multi-set) {front(i)|front(i)𝗇𝗂𝗅}\{\textit{front}(i)~|~\textit{front}(i)\neq\mathsf{nil}\}. To determine φ(a)\varphi(a) we find, using tree right, how many nodes ii have front(i)>a1\textit{front}(i)>a-1 (and front(i)𝗇𝗂𝗅\textit{front}(i)\neq\mathsf{nil}). Let this number be RR. Using tree left we find how many nodes i<ai<a have front(i)𝗇𝗂𝗅\textit{front}(i)\neq\mathsf{nil}. Let this number be LL. Then φ(a)=RL\varphi(a)=R-L.

By standard implementations of balanced search trees, the space complexity is $O(k)$ and all operations take $O(\log k)=O(\log n)$ time, where $k$ denotes the number of nodes $i$ such that $\textit{front}(i)\neq\mathsf{nil}$.
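As a concrete illustration, here is a minimal Python sketch of this computation, with sortedcontainers.SortedList standing in for the two balanced search trees; the names left and right follow the text, and keeping both lists synchronized with $\textit{front}(\cdot)$ on every update is assumed and not shown.

```python
from sortedcontainers import SortedList  # balanced-BST stand-in

left = SortedList()   # vertices i with front(i) != nil
right = SortedList()  # multiset { front(i) : front(i) != nil }

def phi(a: int) -> int:
    """phi(a) = R - L, as in the text.

    R counts entries front(i) > a-1 (found via `right`);
    L counts vertices i < a with front(i) != nil (found via `left`).
    """
    R = len(right) - right.bisect_right(a - 1)  # entries strictly above a-1
    L = left.bisect_left(a)                     # vertices strictly below a
    return R - L
```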

A.2 Data Structure to find the node of rank hh in [a,n+1]K[a,n+1]\setminus K

We start with a number of definitions useful for specifying the data structure and its operations.

For a node j[1,n+1]j\in[1,n+1] and a subset of nodes Q[1,n+1]Q\subseteq[1,n+1], define Q(j)Q(j) as follows:

$$Q(j)\triangleq\begin{cases}j&\text{if }j\in Q\ \text{or}\ j=1,\\ \max\{j^{\prime}\in Q\mid j^{\prime}<j\}&\text{otherwise.}\end{cases}$$

Note that for technical reasons for j=1j=1 we define Q(j)=1Q(j)=1 whether or not jQj\in Q.

For a node j[1,n+1]j\in[1,n+1] and a subset of nodes Q[1,n+1]Q\subseteq[1,n+1], define rankQ(j)\textit{rank}_{Q}(j) as follows:

rankQ(j)|{i|i<Q(j);iQ}|.\textit{rank}_{Q}(j)\triangleq|\{i~|~i<Q(j);i\in Q\}|~.

We note that using these definitions we have that, for any j[1,n+1]j\in[1,n+1], the number of items i<ji<j in Q¯\bar{Q}, where Q¯=[1,n+1]Q\bar{Q}=[1,n+1]\setminus Q, is (j1)rankQ(j)(j-1)-\textit{rank}_{Q}(j).

The $\textit{insert}_{Q}$, $\textit{delete}_{Q}$ and $\textit{rank}_{Q}$ operations are implemented as in a standard order-statistics tree based on a balanced binary search tree. The operation $\textit{rank}_{\bar{Q}}$ is implemented by calling $\textit{rank}_{Q}$ and then performing the calculation above. To implement $\textit{select}_{\bar{Q}}(s)$ we proceed as follows. We traverse the search tree with the value $s$, and at each tree node containing a vertex $j$ we compare $s$ with $(j-1)-\textit{rank}_{Q}(j)$. In this way we find the maximum $j\in Q$ such that $\textit{rank}_{\bar{Q}}(j)\leq s$. Denote this node $j^{\prime}$. We then return the node $j^{\prime}+\left[s+1-((j^{\prime}-1)-\textit{rank}_{Q}(j^{\prime}))\right]$.

The time complexities of $\textit{insert}_{Q}$, $\textit{delete}_{Q}$, $\textit{rank}_{Q}$, and $\textit{rank}_{\bar{Q}}$ are therefore $O(\log n)$, based on standard order-statistics trees. The time complexity of $\textit{select}_{\bar{Q}}$ is $O(\log^{2}n)$: for each node along the search path, whose length is $O(\log n)$, we perform one $\textit{rank}_{Q}$ query.
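For illustration, the following Python sketch realizes $\textit{select}_{\bar{Q}}$ by an explicit binary search over ranks rather than by the tree traversal described above; this is our own variant, not the implementation in the text, but it likewise performs $O(\log n)$ rank queries and hence matches the $O(\log^{2}n)$ bound with a pointer-based order-statistics tree. Here K plays the role of $Q$, SortedList stands in for the tree, and the 0-based rank convention is a choice of the sketch.

```python
from sortedcontainers import SortedList

K = SortedList()  # the set Q of "excluded" nodes, kept sorted

def count_complement_le(j: int) -> int:
    """Number of elements of [1, n+1] \\ K that are <= j."""
    return j - K.bisect_right(j)

def select_complement(s: int, n: int) -> int:
    """Element of rank s (0-based) in [1, n+1] \\ K, assuming it exists.

    Binary search for the smallest j whose complement-count reaches s+1;
    each probe performs one rank query, giving O(log^2 n) overall when
    rank queries cost O(log n).
    """
    lo, hi = 1, n + 1
    while lo < hi:
        mid = (lo + hi) // 2
        if count_complement_le(mid) >= s + 1:
            hi = mid          # mid is at or past the answer
        else:
            lo = mid + 1      # answer lies strictly above mid
    return lo
```

For example, with $K=\{2\}$ and $n=3$, the complement of $K$ in $[1,4]$ is $\{1,3,4\}$, and select_complement returns 1, 3, 4 for $s=0,1,2$ respectively.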