The Space Complexity of Generating Tent Codes
Abstract
This paper is motivated by the question of whether a chaotic sequence can be computed efficiently: e.g., is it possible to get the $n$-th bit of a bit sequence generated by a chaotic map, such as the $\beta$-expansion, the tent map, or the logistic map, efficiently in time/space? This paper gives an affirmative answer to the question about the space complexity of the tent map. We prove that a tent code of $n$ bits, with an initial condition drawn uniformly at random, is exactly generated in expected space polylogarithmic in $n$.
1 Introduction
A tent map $T_\tau \colon [0,1] \to [0,1]$ (or simply $T$) is given by
$$T_\tau(x) = \begin{cases} \tau x & \text{if } 0 \le x < 1/2,\\ \tau(1-x) & \text{if } 1/2 \le x \le 1, \end{cases} \tag{1}$$
where this paper is concerned with the case of $1 < \tau \le 2$. As Figure 1 shows, it is a simple piecewise-linear map looking like a tent. Let $x_{i+1} = T(x_i)$ recursively for $i = 1, 2, \ldots$, where $x_1 = x$ for convenience. Clearly, $x_1, x_2, x_3, \ldots$ is a deterministic sequence. Nevertheless, the deterministic sequence shows a complex behavior, as if "random," when $\tau > 1$. It is said to be chaotic [16]. For instance, $T^n(x')$ becomes quite different from $T^n(x)$ as $n$ increases, even if $|x' - x|$ is very small; this is one of the most significant characteristics of a chaotic sequence, known as the sensitivity to initial conditions: a chaotic sequence is "unpredictable" despite coming from a deterministic process [15, 23, 31, 4].
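To illustrate the sensitivity to initial conditions numerically, here is a minimal Python sketch (ours, for illustration only; the parameter $\tau = 1.9$ and the gap $10^{-12}$ are arbitrary choices): two nearby initial points are driven apart quickly under floating-point iteration.

```python
def tent(x, tau=1.9):
    """One step of the tent map T_tau on [0, 1]."""
    return tau * x if x < 0.5 else tau * (1.0 - x)

# Two initial conditions that differ by 10^-12 diverge after a few dozen steps.
x, y = 0.3, 0.3 + 1e-12
for i in range(61):
    if i % 10 == 0:
        print(f"step {i:2d}: |x - y| = {abs(x - y):.3e}")
    x, y = tent(x), tent(y)
```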
This paper, from the viewpoint of theoretical computer science, is concerned with the computational complexity of a simple problem: given $\tau$, $x$ and $n$, decide whether $T^n_\tau(x) \ge 1/2$. Its time complexity might be one of the most interesting questions; e.g., is it possible to "predict" the answer in time polynomial in the input size? Unfortunately, we cannot answer that question in this paper (we think the problem might be NP-hard by the arguments on the complexity of algebra and number theory in [6], but we could not establish the fact). Instead, this paper is concerned with the space complexity of the problem.
Figure 1: the tent map and iterated tent maps.
1.1 Background and Contribution
Chaos.
Chaotic sequences show many interesting figures such as the cobweb, strange attractors, bifurcation, etc. [15, 14, 18, 16, 11]. The chaos theory has been intensively developed in several contexts such as electrical engineering, information theory, statistical physics, neuroscience and computer science, with many applications, such as weather forecasting, traffic prediction and stock pricing, since the 1960s. For instance, cellular automata, including the Game of Life, are a classical topic in computer science, and they are closely related to the "edge of chaos." For another instance, the "sensitivity to initial conditions" is often regarded as unpredictability, and chaotic sequences are used in pseudorandom number generation, cryptography, and heuristics for NP-hard problems including chaotic genetic algorithms.
From the viewpoint of theoretical computer science, the numerical issues of computing chaotic sequences have been intensively investigated in statistical physics, information theory and probability. In contrast, the computational complexity of computing a chaotic sequence seems not well developed. A simple reason may be that the task looks unlike a decision problem.
Tent map: 1-D, piecewise-linear and chaotic.
Interestingly, very simple maps show chaotic behavior. Among the simplest are piecewise-linear maps, including the tent map and the $\beta$-expansion (a.k.a. Bernoulli shift), which are 1-D maps, and the baker's map, which is a 2-D map [15, 22, 19, 20, 23, 31, 4, 9, 21].
The tent map, as well as the $\beta$-expansion, is known to be topologically conjugate to the logistic map, a quadratic map celebrated as a chaotic map. The chaotic behavior of the tent map, in terms of power spectra, band structure and critical behavior, is analyzed in, e.g., [15, 23, 31, 4]. The tent map is also used for pseudorandom generation and encryption, e.g., [1, 2, 13]. It is also used in meta-heuristics for NP-hard problems [29, 7].
Our results and related works.
This paper is concerned with a problem related to deciding whether $T^n_\tau(x) \ge 1/2$ for the $n$-th iterated tent map $T^n_\tau$. More precisely, we define the tent language in Section 2, and consider a random generation of its words according to a distribution corresponding to the "uniform initial condition" (see Section 2 for the precise definition). We give an algorithm running in small expected space (Theorem 2.5), meaning that the average space complexity is exponentially improved compared with the naive computation according to (1).
Our strategy is as follows: we give a compact state-transition model for the tent language in Section 3.4, design a random walk on the model to provide the desired distribution in Section 3.5, and then prove the expected space bound in Section 4. The idea of the compact model and the random walk is similar to [27, 28] for the $\beta$-expansion, while [27, 28] did not give any argument on the space complexity beyond a trivial upper bound. For the compact representation, we use the idea of the location of segments (which we call segment-type in this paper) developed by [17].
Our technique is related to the Markov partition, often appearing in entropy arguments [24, 26]. For some special $\tau$, precisely when $\tau$ solves a certain equation with integer coefficients, our Markov chain is regarded as a version of a Markov partition with finitely many states. However, the number of states of our Markov chain is unbounded in general as $n$ grows, and handling this is our main target.
On an earlier version of this manuscript, Masato Tsujii gave us a comment about the connection to the Markov extension. In 1979, Hofbauer [8] gave a representation of the kneading invariants for unimodal maps, which is known as the Markov extension and/or Hofbauer tower, and then discussed topological entropy. Hofbauer and Keller extensively developed the arguments in the 1980s, see e.g., [5, 3]. In fact, some arguments of this manuscript, namely Sections 3.4, 3.5 and 4.3, are very similar to or essentially the same as the arguments of the Markov extension, cf. [5]; note, however, that this manuscript is mainly concerned with the computational complexity. It is difficult to describe the idea of our main result, namely Algorithm 1 given in Section 3.6 and Theorem 2.5, without describing those arguments, so Sections 3.4, 3.5 and 4.3 are left as they are. We also use some classical techniques of computational complexity, cf. [25, 12], for the purpose.
2 Tent Code and Main Theorem
This paper focuses on the case $1 < \tau \le 2$. We assume that $\tau$ is rational in the main theorem (Theorem 2.5) to make the arguments on Turing computation clear, but the assumption is not essential (see Section 5 for real $\tau$). Let $T^n_\tau$ denote the $n$-times iterated tent map, which is formally given by $T^n_\tau = T_\tau \circ T^{n-1}_\tau$ recursively with respect to $n$, where $T^0_\tau$ is the identity map for convenience. We remark, for the later argument in Section 3.4, that
$$T^{m+n}_\tau = T^m_\tau \circ T^n_\tau \tag{2}$$
also holds by the associativity of function composition. It is easy to observe from the definition that the iterated tent map is symmetric around $1/2$, meaning that
$$T^n_\tau(x) = T^n_\tau(1-x) \tag{3}$$
holds for any $x \in [0,1]$.
We define a tent-encoding function $\phi_{\tau,n} \colon [0,1] \to \{0,1\}^n$ (or simply $\phi$) as follows. For convenience, let $x_i = T^{i-1}_\tau(x)$ for $i = 1, 2, \ldots, n$ as given $x \in [0,1]$. Then, the tent code $\phi(x) = b_1 b_2 \cdots b_n$ for $x$ is a bit-sequence, where
$$b_1 = \begin{cases} 0 & \text{if } x_1 \in [0, 1/2),\\ 1 & \text{if } x_1 \in [1/2, 1], \end{cases} \tag{4}$$
and $b_{i+1}$ ($i = 1, \ldots, n-1$) is recursively given by
$$b_{i+1} = \begin{cases} b'_{i+1} & \text{if } b_i = 0,\\ \overline{b'_{i+1}} & \text{if } b_i = 1, \end{cases} \qquad \text{where } b'_{i+1} = \begin{cases} 0 & \text{if } x_{i+1} \in [0, 1/2),\\ 1 & \text{if } x_{i+1} \in [1/2, 1], \end{cases} \tag{5}$$
where $\overline{b}$ denotes the bit inversion of $b$, i.e., $\overline{0} = 1$ and $\overline{1} = 0$. We remark that the definition (5) is rephrased by
$$b_{i+1} = b_i \oplus b'_{i+1}, \tag{6}$$
i.e., $b_{i+1}$ depends only on $b_i$ and $x_{i+1}$.
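As a concrete illustration of (4)–(6), here is a minimal Python sketch (ours; the reconstruction of (5) above, the choice $\tau = 9/5$, and the use of exact rational arithmetic are our assumptions, not the paper's):

```python
from fractions import Fraction

def tent_code(x, tau, n):
    """Tent code b_1 ... b_n of x, following (4)-(6): the raw bit
    b'_{i+1} = [x_{i+1} >= 1/2] is inverted whenever b_i = 1,
    i.e., b_{i+1} = b_i XOR b'_{i+1} (with b_0 = 0)."""
    half = Fraction(1, 2)
    bits, b = [], 0
    for _ in range(n):
        raw = 1 if x >= half else 0   # which half x_i lies in
        b ^= raw                      # (6)
        bits.append(b)
        x = tau * x if x < half else tau * (1 - x)  # x_{i+1} = T(x_i)
    return bits

print(tent_code(Fraction(3, 10), Fraction(9, 5), 8))
```

With this convention, the inversion in (5) makes $\phi$ monotone with respect to the lexicographic order (Proposition 2.2 below).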
Proposition 2.1.
Suppose for . Then, .
See Section A for a proof of Proposition 2.1, as well as proofs of Propositions 2.2, 2.3 and Lemma 2.4 below. The proofs are not difficult but lengthy. Thanks to this slightly artificial definition (5), we obtain the following two further facts.
Proposition 2.2.
For any $x, x' \in [0,1]$,
$$x \le x' \implies \phi(x) \preceq \phi(x')$$
holds, where $\preceq$ denotes the lexicographic order, that is, $b \preceq b'$ if and only if $b = b'$, or $b_i < b'_i$ at the first position $i$ at which $b$ and $b'$ differ.
Proposition 2.3.
The $n$-th iterated tent code is right continuous, i.e., $\lim_{x' \downarrow x} \phi(x') = \phi(x)$ for any $x \in [0,1)$.
These two facts make the arguments simple. The following technical lemma is useful to prove Propositions 2.2 and 2.3, as well as the arguments in Sections 3 and 4.
Lemma 2.4.
Let satisfy . Let and . If hold for all then
(7) |
holds.
Let $L_{\tau,n}$ (or simply $L_n$) denote the set of all $n$-bit tent codes, i.e.,
$$L_n = \{\phi_{\tau,n}(x) \in \{0,1\}^n : x \in [0,1]\}, \tag{8}$$
and we call $L_n$ the tent language (by $\tau$). Note that $L_n \subsetneq \{0,1\}^n$ in general for $\tau < 2$.
Let $\mu_{\tau,n}$ (or simply $\mu_n$) denote the probability distribution over $L_n$ which $\phi_n(X)$ follows for $X$ uniformly distributed over $[0,1)$, i.e., $\mu_n(b)$ represents the probability of $b \in L_n$ appearing as the tent code given an initial condition uniformly at random. Our goal is to output $b \in L_n$ according to $\mu_n$. To be precise, this paper establishes the following theorem.
Theorem 2.5.
Let $\tau$ be a rational given by an irreducible fraction $p/q$. Then, it is possible to generate $b \in L_n$ according to $\mu_n$ in expected space polylogarithmic in $n$ (as well as with high probability).
3 Algorithm for the Proof of Theorem 2.5
The goal of this section is to design a space-efficient algorithm to generate $b$ according to $\mu_n$ (recall Theorem 2.5); it will be presented as Algorithm 1 in Section 3.6. To design a space-efficient algorithm, we need a compact representation for a recognition of a tent code. For the purpose, we develop several notions and arguments, far from trivial: namely, the notion of segment-type (Section 3.2), the transition diagram of segment-types (Section 3.4), and the transition probabilities realizing $\mu_n$ (Section 3.5).
3.1 Compressing lemma—an intuition as an introduction
As Figure 1 shows, the $n$-th iterated tent map looks complicated, consisting of exponentially many line segments, and this definitely causes the "sensitivity to initial conditions" of a tent map. Roughly speaking, we will prove that the line segments of the $n$-th iterated map are classified into at most $O(n)$ classes by the ranges ($y$-values) of the line segments (see Theorem 3.3 in Section 3.3 for details). Before the precise argument, we briefly remark on a key observation for an intuition of the argument.
Lemma 3.1 (Compressing lemma).
Let
(9) |
then,
(10) |
holds for and .
Proof.
It is trivial for since . Suppose . Then,
holds, and we obtain the claim for . Once we obtain , the claim is trivial for . ∎
3.2 Sections and segment-types
3.2.1 Sections
To begin with, we introduce a natural equivalence class over $[0,1]$ with respect to $\phi_n$. Let
$$S(b) = \{x \in [0,1] : \phi_n(x) = b\} \tag{11}$$
for a bit sequence $b \in \{0,1\}^n$, and we call it the section (of $b$). For convenience, we define $S(b) = \emptyset$ if $b \notin L_n$. We also abuse $S$ for $x \in [0,1]$ as $S(x) = S(\phi_n(x))$, and call it the section of $x$. Let
$$\mathcal{S}_n = \{S(b) : b \in L_n\} \tag{12}$$
denote the whole set of sections provided by the $n$-th iterated tent map. Since $\phi_n(x)$ is uniquely defined for any $x \in [0,1]$, it is not difficult to see that the set of sections is a partition of $[0,1]$, i.e., $\bigcup_{S \in \mathcal{S}_n} S = [0,1]$ and $S \cap S' = \emptyset$ for any distinct $S, S' \in \mathcal{S}_n$.
Due to the right continuity of $\phi_n$ (Proposition 2.3), every section is a left-closed and right-open interval, i.e.,
$$S(b) = [\min S(b),\, \sup S(b)) \tag{13}$$
for any $b \in L_n$.
for any . We observe that every section is subdivided by an iteration of the tent map, meaning that
(14) |
holds for any . As a consequence, we also observe that
(15) |
hold.
3.2.2 Segment-types
Clearly, the size of $\mathcal{S}_n$ is as large as exponential in $n$. Interestingly, we will prove that the sections are classified by their images into at most $O(n)$ types. Let
$$\gamma(b) = T^n_\tau(S(b)) \tag{16}$$
for $b \in L_n$, where we call $\gamma(b)$ the segment-type of $b$. We can observe that every segment-type is a connected interval in $[0,1]$ since $T^n_\tau$ is a piecewise-linear (hence continuous) function. We abuse $\gamma$ for $x \in [0,1]$ as $\gamma(x) = \gamma(\phi_n(x))$. For convenience, we also use $\gamma_i(x)$ for $i \le n$ as the segment-type of the length-$i$ prefix of $\phi_n(x)$. Let
$$\Gamma_n = \{\gamma(b) : b \in L_n\} \tag{17}$$
denote the set of segment-types (of the $n$-th iterated tent map). For instance,
$$\Gamma_1 = \{T_\tau([0,1/2)),\, T_\tau([1/2,1])\} = \{[0,\tau/2),\, [0,\tau/2]\} \tag{18}$$
holds for $n = 1$. Then, Theorem 3.3, appearing later, claims that $|\Gamma_n|$ grows only linearly in $n$. Before stating the theorem, let us briefly illustrate the segment-types.
3.2.3 Brief remarks on segment-types, for intuition
Figure 1, illustrating the iterated tent map $T^n_\tau$, shows many "line segments" given by
$$\{(x, T^n_\tau(x)) : x \in S(b)\} \tag{19}$$
for each $b \in L_n$. The figure suggests that the line segments correspond one-to-one to the sections (and hence to $L_n$). At this moment, we see from Lemma 2.4 that $T^n_\tau$ is monotone increasing on $S(b)$ if $b_n = 0$; otherwise, i.e., if $b_n = 1$, the map is monotone decreasing. In fact, Theorem 3.3, appearing later, implies the one-to-one correspondence between line segments and sections.
In any case, a segment-type provides another representation of a section, as well as of the corresponding bit sequence, through the line segment. Interestingly, we observe the following connection between segment-types and the tent code, from Lemma 2.4 and Proposition 2.3.
Lemma 3.2.
Let and let . Let and let , for convenience. Then,
(20) |
holds.
Proof.
We recall that every section is left-closed and right-open by Proposition 2.3. We also remark that $T^n_\tau$ is continuous since $T_\tau$ is continuous by (1).
Consider the case . As we stated above, Lemma 2.4 implies that is monotone increasing on in the case. Thus, and . Since , we see and . We obtain in the case.
The case is similar. By Lemma 2.4, is monotone decreasing on in the case. Thus, and . Since , we see and . We obtain in the case. ∎
3.3 The number of segment-types
Concerning the set of segment-types $\Gamma_n$, we will establish the following theorem, which claims that $|\Gamma_n|$ is linear in $n$.
Theorem 3.3.
Let . Let , and let
(21) |
for . Then,
(22) |
for , where .
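Although the statement of Theorem 3.3 is technical, the phenomenon is easy to observe empirically. The following Python sketch (ours; $\tau = 9/5$ is an arbitrary rational choice) refines the image intervals at $1/2$ and applies $T$ with exact arithmetic; the number of distinct images, i.e., segment-types, grows roughly linearly in $n$, not like $2^n$:

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def refine(images, tau):
    """Split each image interval at 1/2 and apply T; return the next images."""
    out = set()
    for lo, hi in images:
        for a, b in [(lo, min(hi, HALF)), (max(lo, HALF), hi)]:
            if a < b:  # nonempty piece
                out.add((tau*a, tau*b) if b <= HALF else (tau*(1-b), tau*(1-a)))
    return out

tau = Fraction(9, 5)
images = {(Fraction(0), Fraction(1))}
for n in range(1, 16):
    images = refine(images, tau)
    print(n, len(images))  # count of segment-types: linear in n, not 2^n
```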
3.3.1 Weak lemma
Here we are concerned with the following weaker version, which is enough to claim a linear bound on $|\Gamma_n|$, to follow the main idea of the arguments in this section.
Lemma 3.4.
Let , and let where and are according to the lexicographic order. Then,
(23) |
for . Furthermore, .
To get Theorem 3.3 from Lemma 3.4, we further need to prove two boundary claims (Lemma 3.7) and an existence claim (Lemma 3.11), which are not difficult but require somewhat tedious arguments.
Lemma 3.4 may intuitively and essentially follow from Lemma 3.1, considering the representation of a line-segment of by a segment-type (see Section 3.2.3). For a proof of Lemma 3.4, we start by giving the following strange recursive relationship between sections (cf. Lemma 3.1, for an intuition).
Lemma 3.5.
holds for any . Furthermore, only when or .
The former part of Lemma 3.5 comes from the following fact.
Lemma 3.6.
Let and let . Suppose , and . Then, for .
Proof.
Let and , then holds for by Lemma 3.1. Recall (6) that depends only on and , as well as depends on and , for . Thus, it is enough to prove , then we inductively obtain the claim since .
Proof of Lemma 3.5.
The former part is almost trivial from Lemma 3.6: In fact, if then . Lemma 3.6 implies meaning that .
Now, we prove, for the latter part, that
(24) |
unless or . Firstly, we consider the case . Suppose where . Let , then we claim that if . For the purpose, we calculate . Clearly, since . By Lemma 3.6, since . Thus, we obtain , meaning that . Now it is easy to see since . Recall that the set of sections is a partition of for each , that the sections are allocated in [0,1) in lexicographic order by Proposition 2.2, and that for is clearly order preserving. Thus, only may violate , where .
The case of is similar. Suppose where . Let , then we claim that if . For the purpose, we calculate , where we remark that and . Clearly, since . By Lemma 3.6, since . Thus, we obtain , meaning that . Now it is easy to see since . Recall that the set of sections is a partition of for each , that the sections are allocated in [0,1) in lexicographic order by Proposition 2.2, and that for is clearly order preserving. Thus, only may violate , where . ∎
Now, we are ready to prove Lemma 3.4.
3.3.2 Lemmas for and
As a remaining part of the proof of Theorem 3.3, here we just refer to the following facts. See Section B.1 for proofs.
Lemma 3.7.
Let . Then, and hold, where and .
The former claim is trivial by Proposition 2.2 and (15). For a proof of the latter claim, we use the following fact, which intuitively seems obvious from Lemma 3.1.
Lemma 3.8.
unless .
Lemma 3.9.
If then , and vice versa.
3.4 Transitions over and Recognition of
3.4.1 Transitions between segment-types
Recall from (6) that $b_{i+1}$ is determined by $b_i$ and $x_{i+1}$. More precisely, Lemma 2.4 lets us know that if $b_i = 0$ then $T^i_\tau$ is monotone increasing on the section, otherwise, i.e., if $b_i = 1$, then it is monotone decreasing. Here, we establish the following lemma about the transitions over segment-types.
Lemma 3.10 (Transitions of segment-types).
Let . (1) Suppose ( ). We consider three cases concerning the position of relative to .
- Case 1-1: .
  - Case 1-1-1. If then , and .
  - Case 1-1-2. If then , and .
- Case 1-2: . Then , and .
- Case 1-3: . Then , and .
(2) Similarly, suppose ( ).
- Case 2-1: .
  - Case 2-1-1. If then , and .
  - Case 2-1-2. If then , and .
- Case 2-2: . Then , and .
- Case 2-3: . Then , and .
We remark that Lemma 3.10 is rephrased by transition rules (25) (for the case $b_i = 0$) and (26) (for the case $b_i = 1$).
Sketch of proof.
The proof idea is similar to Lemma 3.2. We here prove Case 1-1. Other cases are similar (see Section B.2 for the complete proof).
To begin with, we recall three facts. i) The segment-type is defined by by (16). ii) by (14). Particularly, and by Proposition 2.2. iii) depends on and by (6).
Suppose (), and (Case 1-1). Notice that is monotone increasing in by Lemma 3.2. Let satisfy . Clearly, is divided into and at , i.e., .
If then . Thus, . Accordingly, since (cf. the proof of Lemma 3.2). We obtain . is clear by Lemma 3.2.
If then . Thus, . Accordingly, since . We obtain . ∎
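To make the transitions concrete, here is a minimal Python sketch (ours; it tracks a segment-type as an interval and uses the raw itinerary bit, omitting the orientation bookkeeping of (25) and (26) for brevity):

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def transit(lo, hi, raw, tau):
    """One transition over segment-types: restrict (lo, hi) to the piece
    selected by the raw bit (0: below 1/2, 1: above), then apply T.
    Returns the next interval, or None (reject) if the piece is empty.
    NOTE: the bit-inversion bookkeeping of (25)-(26) is omitted here."""
    a, b = (lo, min(hi, HALF)) if raw == 0 else (max(lo, HALF), hi)
    if a >= b:
        return None  # inconsistent bit: the reject state
    return (tau*a, tau*b) if b <= HALF else (tau*(1-b), tau*(1-a))

tau = Fraction(9, 5)
state = (Fraction(0), Fraction(1))
for raw in [1, 0, 1, 1]:
    state = transit(*state, raw, tau)
    print(raw, state)
```

A raw bit sequence is consistent with a word of the language exactly when no step rejects, cf. the path characterization of Lemma 3.12 below.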
Eqs. (25) and (26), which just rephrase Lemma 3.10, show a transition rule over segment-types, where the next single bit of a tent code makes a transition, and vice versa. If the current segment-type falls in Case 1-2, 1-3, 2-2 or 2-3, then the next bit and the next segment-type are uniquely determined, while in Cases 1-1 and 2-1 the next bit and the next segment-type depend on each other. For convenience, we write a transition between segment-types labeled by the emitted bit, according to (25) and (26). Notice that Lemma 3.4 implies that distinct codes may share a segment-type; for a typical instance, the one used in the proof of Lemma 3.4. Then, we observe the following fact.
Lemma 3.11.
Suppose , where . Then, , too. Furthermore, for any satisfying .
Proof.
3.4.2 Recognition of a tent language
Lemma 3.10 provides a natural finite state machine to recognize/generate $L_n$. (Precisely, we also need a "counter" for the length of the string; notice, however, that our main goal is not to design an automaton for $L_n$: our main target, Theorem 2.5, assumes a probabilistic Turing machine, where obviously we can count the length of a sequence in small space.) We define the set of states by the set of segment-types together with an initial state and a unique reject state. Recall Theorem 3.3. We let $\delta$ denote the state transition function, which is defined by
(27) |
according to (25) and (26), as far as the state and the bit are consistent. For convenience, we define $\delta$ to map to the reject state if the pair contradicts (25) or (26); precisely,
(28)
are the cases.
Now it is not difficult to see from Lemma 3.10 that we can trace a path starting from the initial state according to $b$ if, and only if, $b \in L_n$. For the later argument, we let $G$ denote the state transition diagram (a directed graph with labeled arcs; see Figure 3), whose vertices are the states and whose arcs are the transitions labeled by bits. For convenience, let $N^+(\gamma)$ denote the out-neighbors of $\gamma$ on the diagram $G$. Then, we note that
(29) |
holds for any , clearly , and on by definition.
The following lemma is a straightforward consequence of Lemma 3.10.
Lemma 3.12.
$L_n$ is bijective to the set of paths of length $n$ starting from the initial state on $G$. (A path may use an arc twice or more, i.e., a path need not be simple.)
3.5 Draw from by a Markov chain on
Let $X$ be a real-valued random variable drawn from $[0,1)$ uniformly at random. Let $B = B_1 B_2 \cdots B_n = \phi_n(X)$. We here are concerned with the conditional probability
$$\Pr[B_{i+1} = b_{i+1} \mid B_1 \cdots B_i = b_1 \cdots b_i] \tag{30}$$
for $b_1 \cdots b_{i+1} \in L_{i+1}$. It is easy to see that
$$\Pr[B_{i+1} = b_{i+1} \mid B_1 \cdots B_i = b_1 \cdots b_i] = \frac{|S(b_1 \cdots b_{i+1})|}{|S(b_1 \cdots b_i)|} \tag{31}$$
holds by (14), where $|S|$ denotes the length of the interval $S$. Since a tent map is a piecewise-linear function, the following lemma seems intuitively almost trivial by Lemma 3.10. See Section B.3 for a proof.
Lemma 3.13.
Let . Then,
holds for , where let if .
We define a transition probability as follows. Let
(32) |
For , let
(33) |
and for any which is neither nor .
In fact, $\sum_{\gamma' \in N^+(\gamma)} |\gamma'| = \tau\,|\gamma|$ holds (see Lemma B.8), and the transition probability (33) is rephrased by
$$P(\gamma, \gamma') = \frac{|\gamma'|}{\tau\,|\gamma|} \tag{34}$$
for any $\gamma \in \Gamma$ and $\gamma' \in N^+(\gamma)$.
Theorem 3.14.
The random bit sequence given by (35) follows .
Proof.
Let where is drawn from uniformly at random, i.e., follows . Then,
holds for any . We obtain the claim. ∎
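Putting Sections 3.4 and 3.5 together, here is a compact Python sketch of the generator (ours, and deliberately naive: it stores the current interval explicitly, unlike the deferred, space-efficient updates of Algorithm 1 in Section 3.6). Conditioned on the section, the chain's position is uniform on the current segment-type, so the next piece is drawn with probability proportional to its length, matching (31) and (34):

```python
from fractions import Fraction
import random

HALF = Fraction(1, 2)

def draw_tent_code(n, tau, rng=random.random):
    """Draw b_1 ... b_n following mu_n. The interval (lo, hi) is the
    current segment-type T^i(S(b_1...b_i)); the next raw bit is 0 with
    probability |part of (lo, hi) below 1/2| / |(lo, hi)|."""
    lo, hi = Fraction(0), Fraction(1)
    bits, b = [], 0
    for _ in range(n):
        p0 = (min(hi, HALF) - min(lo, HALF)) / (hi - lo)
        raw = 0 if rng() < p0 else 1
        a, c = (lo, min(hi, HALF)) if raw == 0 else (max(lo, HALF), hi)
        lo, hi = (tau*a, tau*c) if c <= HALF else (tau*(1-c), tau*(1-a))
        b ^= raw            # bit inversion bookkeeping as in (5)/(6)
        bits.append(b)
    return bits

print(draw_tent_code(20, Fraction(9, 5)))
```

The denominators of the stored endpoints grow with the level of the current segment-type, which is why Algorithm 1 instead recomputes them on demand (the deferred update strategy of Section 3.6).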
3.6 Summary—The algorithm
3.6.1 Algorithm
Now, we give Algorithm 1 for Theorem 2.5, by summarizing the arguments of Section 3. Basically, the algorithm traces the Markov chain over segment-types and outputs the corresponding bits, according to the transition probability given in Section 3.5.
In the algorithm, and respectively denote and for . For descriptive purposes, and correspond to , and and correspond to the reject state in . The single bit denotes for (recall Theorem 3.3). The pair and represent if (see line 6), otherwise, i.e., , (see line 9), at the -th iteration (for ). To avoid cumbersome notation, we define the level of by
(36) |
if or , where recall and for (see Theorem 3.3). Then, represents the transition given by (27) in Section 3.4 (see also (25) and (26)) where and .
Lines 6–14 correspond to a transition from to . Algorithm 1 outputs each bit immediately at line 7, to avoid storing all of $b$, which would consume $n$ bits of space. To attain the space bound of Theorem 2.5, we use the deferred update strategy: we calculate and representing on demand, in lines 15–27, according to (25) and (26). By a standard argument on the space complexity of basic arithmetic operations, see e.g. [12], the rationals and require bits each, where the rational $\tau$ is given by an irreducible fraction $p/q$.
Then, we look at the space complexity of the algorithm. The rationals and ( ) consume at most bits in total, where denotes its value at the end of the $n$ iterations of Algorithm 1. The integers and ( ) consume at most bits in total. The bits ( ) consume at most bits in total. The integer counters use $O(\log n)$ bits, and uses a single bit. The level becomes as large as $n$ in the worst case, while we will prove in Section 4 that it is $O(\log n)$ in expectation, as well as with high probability.
4 Average Space Complexity
This section analyzes the average space complexity of Algorithm 1, and proves Theorem 2.5. As we stated in Section 3.6, our goal in this section is essentially to prove that the maximum level attained by Algorithm 1 over the $n$ iterations is $O(\log n)$ with high probability, and in expectation. More precisely, consider the Markov chain on segment-types with the transition probabilities given in Section 3.5. Let
(37) |
be a random variable, where denotes or (recall (36) as well as Theorem 3.3). Note that the level never exceeds $n$; that is a trivial upper bound. We want an $O(\log n)$ bound in expectation. The following fact is the key for the purpose.
Lemma 4.1.
Suppose for that holds for any . Then,
(38) | ||||
(39) |
hold for .
We will prove Lemma 4.1 in Section 4.3. Roughly speaking, Lemma 4.1 implies that, in one step of the Markov chain, the level either increases by one or decreases to (almost) half. Thus, the issue is the probability of each type of transition. A precise argument follows.
4.1 Proof of Theorem 2.5
In fact, we will prove a bound on the expectation in Lemma 4.3, since what we really want for Theorem 2.5 is the expected space complexity, rather than the level itself. Formally, the following lemma claims that the description of the Markov chain on segment-types of bounded level requires small space (see also Section 3.6).
Lemma 4.2.
Let $\tau$ be rational, given by an irreducible fraction $p/q$. For any , the Markov chain on is represented by bits.
Proof.
As we stated in Section 3.6, we can describe a segment-type by two numbers representing the interval ( and in Algorithm 1), and a single bit to identify or ( , there), by Lemma 3.9. We will prove Lemma 4.12, which claims that ( as well) is either or with and . For any , there exists satisfying and since . Thus, every segment-type requires at most bits. Since contains segment-types with at most arcs, the Markov chain on is represented by bits. ∎
Notice that holds. We prove for the Markov chain, which generates a random bit sequence .
Lemma 4.3.
Let $\tau$ be rational, given by an irreducible fraction $p/q$. Suppose, for , that holds for any (this assumption may be redundant by the assumption of rational $\tau$; it is needed for Lemma 4.1). Then, .
It is not essential that Lemma 4.3 assumes a rational $\tau$; the assumption just follows that of Theorem 2.5, for an argument about Turing computability. We will establish a similar (but slightly weaker) Proposition 5.1 for any real $\tau$ in Section 5. Before the proof of Lemma 4.3, we prove Theorem 2.5 by Lemmas 4.2 and 4.3.
Proof of Theorem 2.5.
We remark that the space complexity is clearly bounded if there exists a constant for such that holds for any , meaning that the number of segment-types is constant with respect to $n$ (recall Theorem 3.3). Otherwise (it seems the constant case does not occur for any rational $\tau$, though we do not prove it here; cf. [27, 28]), we can prove the following fact (cf. the hypothesis of Lemma 4.1, required by Lemma 4.3). See Section 4.3 for a proof of Proposition 4.4.
Proposition 4.4.
Let . If there exists () such that holds then holds for any .
4.2 Proof of Lemma 4.3
The proof strategy of Lemma 4.3 is as follows. Our key fact Lemma 4.1 implies that a chain must follow the path (or ) to reach level and the probability is (Lemma 4.6). We then prove that there exists such that (Lemma 4.7), which provides (Lemma 4.9). Lemma 4.3 is easy from Lemma 4.9.
We observe the following fact from Lemma 4.1.
Observation 4.5.
If visits (resp. ) for the first time then (resp. ) for .
Proof.
By Lemma 4.1, all in-edges to (resp. ) for any come from (resp. ), or a node of level or greater. Since has not visited any level greater than by the hypothesis and the above argument again, we obtain the claim. ∎
By Observation 4.5, if a Markov chain visits level for the first time at time then must be . The next lemma gives an upper bound on the probability of climbing from level to .
Lemma 4.6.
.
Proof.
By Observation 4.5, the path from to is unique and
(40) |
holds. We remark that holds for any , meaning that , and hence . ∎
The following lemma is the first step of the proof of Lemma 4.3.
Lemma 4.7.
Let $\tau$ be rational, given by an irreducible fraction $p/q$. Suppose, for , that holds for any . Then, there exists such that and
(41) |
holds.
To prove Lemma 4.7, we note the following fact.
Lemma 4.8.
Let $\tau$ be rational, given by an irreducible fraction $p/q$. Then, for any .
Proof.
Then, we prove Lemma 4.7.
Proof of Lemma 4.7.
For convenience, let for . Assume for a contradiction that (41) never holds for any , where for convenience. In other words,
(43) |
holds every . Thus, we inductively obtain that
(44) |
holds. By the definition of ,
(45) |
holds. Lemma 4.8 implies that
(46) |
holds. Then, (44), (45) and (46) imply
(47) |
holds. By taking the logarithm of both sides of (47), we see that
(48) |
holds. Since by definition, it is not difficult to see that
(49) |
holds. Since by definition, it is also not difficult to observe that
(50) |
holds. Equations (48), (49) and (50) imply that , meaning that . At the same time, notice that any segment-type satisfies , meaning that . Contradiction. Thus, we obtain (41) for at least one of .
Finally, we check the size of :
where the last equality follows since and . We obtain a desired . ∎
Lemma 4.9.
Let for convenience. Then
holds.
Proof.
For and , Lemma 4.7 implies that there exists such that and
(51) |
holds. Let () denote the event that reaches the level for the first time. It is easy to see that
(52) |
holds by the definition of (precisely, holds, but we do not use the fact here). We also remark that the event implies not only but also by Observation 4.5. It means that
(53) |
holds. Then,
holds. We remark that is trivial since . ∎
We are ready to prove Lemma 4.3.
Proof of Lemma 4.3.
Let for convenience. Then
holds. Now the claim is easy. ∎
4.3 Proofs of Lemmas 4.1 and 4.4
To begin with, we give two remarks. One is that we know if , otherwise and , by Lemma 3.10. The other is that holds since holds for any by Lemma 3.9. Thus, we only need to prove the following lemma for a proof of Lemma 4.1.
Lemma 4.11.
Suppose for that holds for any . If holds for () then
(54) |
holds.
We will prove Lemma 4.11. For the purpose, we define
(55) |
for , where we also use as when no confusion arises. Notice that for any since . We also remark that the recursion
(56) |
holds by the definition. Then, we give a refinement of Lemma 3.10 (see also Theorem 3.3).
Lemma 4.12.
For any ,
(57) |
holds where recall .
Proof of Lemma 4.12.
The proof is an induction on . For , notice that for any . Then,
Inductively assuming (57) holds for , we prove it for . We consider the cases or . Firstly, we are concerned with the case of . In the case, by the inductive assumption. We consider the following three cases.
- Case 1-1.
- Case 1-2.
- Case 1-3.
Next, suppose . Then .
- Case 2-1.
- Case 2-2.
- Case 2-3.
Then, we obtain (57). ∎
Lemma 4.13.
Suppose for that holds for any . Then, holds.
Proof.
The proof is an induction on . Notice that , meaning that and . Recall . Thus, and , and we obtain the claim for .
Lemma 4.14.
Suppose for that holds for any . If for () then .
Proof.
Lemma 4.15.
Suppose for that holds for any . Suppose () satisfies . Then, the following (i) and (ii) hold:
-
(i)
.
-
(ii)
.
Proof.
As a preliminary step, we claim that the hypothesis implies
(59) |
holds. Note that , which we have proved in Cases 1-1 and 2-1 in Lemma 4.12. The proof of (59) is similar to them. We consider the cases or . In case of , by Lemma 4.12. Recall Case 1-1-2 in Lemma 3.10, then
holds. In case of , by Lemma 4.12. Recall Case 2-1-1 in Lemma 3.10, then
holds. Thus we obtain (59).
Now we prove claim (i). Notice that by Theorem 3.3. It implies that there exists such that holds. The proof consists of three steps. Firstly, we claim that . In fact, and , accordingly cannot be one of them. Secondly, we claim . Notice that one end of is by (59). By Lemma 4.12, both ends of are and , as well. The hypothesis requires since as we proved above. Thus, must hold, where the hypothesis again implies . Thirdly, we claim . Now we know one end of is . Thus, the other end must satisfy . The hypothesis allows only . Now we have and , which implies (i).
As a consequence of Lemma 4.15, we obtain the following lemma.
Lemma 4.16.
Suppose for that holds for any . If holds for () then .
Proof.
Now, Lemma 4.11 is easy.
Proof of Lemma 4.11.
By the hypotheses,
holds. ∎
Finally, we prove Proposition 4.4, here.
Proposition 4.17 (Proposition 4.4).
Let . If there exists () such that holds then holds for any .
Proof.
Suppose . Let . Firstly, we claim . Let according to Lemma 3.7. We know by Lemma 3.5. Assume for a contradiction that ; let . Let and let ; then hold. We claim that both and hold for , and . In fact, holds. By definition (1), for any . Thus, we obtain and . This contradicts Lemma 2.4, which claims either or . We obtain .
Now, it is easy to see . We obtain the claim by Lemma 3.11. ∎
5 Analysis for Real
We assumed a rational $\tau$ in Section 4.1, to avoid some tedious arguments on the Turing computability of real numbers, but the assumption is not essential. This section shows that the level is small with high probability, and in expectation as well, even for real $\tau$. To be precise, we prove the following proposition.
Proposition 5.1.
Let $\tau$ be an arbitrary real, and let be a constant. For convenience, let . (Notice that for , where its value is asymptotic to as ; the term is negligible if , e.g., if .) Then,
holds.
Remark that it is possible to establish an average space complexity bound even for some real $\tau$ from Proposition 5.1, using some standard (but tedious) arguments on computations with real numbers, e.g., symbolic treatment of , , , etc. Here, we just prove Proposition 5.1, and omit the arguments on average space for real $\tau$.
The proof of Proposition 5.1 is similar to Lemma 4.9, but we have to prove the following lemma without the assumption that $\tau$ is rational, in contrast to Lemma 4.7.
Lemma 5.2.
Let be an arbitrary real, and let be a constant. For convenience, let . Then, there exists such that and
(60) |
hold.
Proof.
For convenience, let for . Assume for a contradiction that (60) never holds for any , where let
(61) |
hold. Notice that at most such exist. In other words,
(62) |
holds every . Thus, we inductively obtain that
(63) |
holds. Note that
(64) |
holds. Then, (63) and (64) imply
(65) |
holds. By taking the logarithm of both sides of (65), we see that
(66) |
holds. Notice that () is monotone increasing for . Since ,
(67) |
holds. It is not difficult to see that
hold for sufficiently large , and hence
(68) |
holds for sufficiently large .
Proof of Proposition 5.1.
Lemma 5.2 implies that there exists such that and
(70) |
holds. Let () denote the event that reaches the level for the first time. Then,
holds. We remark that is trivial since . ∎
Corollary 5.3.
Let be an arbitrary real. Let , for convenience. Then, .
Proof.
6 Concluding Remark
This paper showed that generating a tent code according to $\mu_n$ is realized in small space on average (Theorem 2.5). An extension to the smoothed analysis, beyond the average, is near-future work. Another future work is an extension to the baker's map, which is a chaotic map, piecewise-linear but 2-dimensional. For the purpose, we need an appropriately extended notion of the segment-type. Yet another future work is an extension to the logistic map, which is a chaotic map, 1-dimensional but quadratic. Some techniques of random number transformation may be available for it. The time complexity of the decision problem, given a rational $x$ for a fixed $\tau$, is another interesting topic: is it possible to decide whether $T^n_\tau(x) \ge 1/2$ in time polynomial in the input size? It might be NP-hard, but we could not find such a result.
Acknowledgement
The authors are grateful to Masato Tsujii for the invaluable comments, particularly about the Markov extension. The authors would like to thank Yutaka Jitsumatsu, Katsutoshi Shinohara and Yusaku Tomita for the invaluable discussions that inspired this work. The authors are deeply grateful to Jun'ichi Takeuchi for his kind support. This work is partly supported by JSPS KAKENHI Grant Number JP21H03396.
References
- [1] T. Addabbo, M. Alioto, A. Fort, S. Rocchi and V. Vignoli, The digital tent map: Performance analysis and optimized design as a low-complexity source of pseudorandom bits. IEEE Transactions on Instrumentation and Measurement, 55:5 (2006), 1451–1458.
- [2] M. Alawida, J. S. Teh, D. P. Oyinloye, W. H. Alshoura, M. Ahmad and R. S. Alkhawaldeh, A new hash function based on chaotic maps and deterministic finite state automata, IEEE Access, 8 (2020), 113163–113174.
- [3] H. Bruin, Combinatorics of the kneading map, International Journal of Bifurcation and Chaos, 05:05 (1995), 1339–1349.
- [4] M. Crampin and B. Heal, On the chaotic behaviour of the tent map, Teaching Mathematics and its Applications: An International Journal of the IMA, 13:2 (1994), 83–89.
- [5] W. de Melo and S. van Strien, One-Dimensional Dynamics, Springer-Verlag, 1991.
- [6] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
- [7] G. Gharooni-fard, A. Khademzade, F. Moein-darbari, Evaluating the performance of one-dimensional chaotic maps in the network-on-chip mapping problem, IEICE Electronics Express, 6:12 (2009), 811–817.
- [8] F. Hofbauer, On intrinsic ergodicity of piecewise monotonic transformations with positive entropy, Israel Journal of Mathematics, 34:3 (1979), 213–237.
- [9] E. Hopf, Ergodentheorie, Springer Verlag, Berlin, 1937.
- [10] A. Kanso, H. Yahyaoui and M. Almulla, Keyed hash function based on chaotic map, Information Sciences, 186 (2012), 249–264.
- [11] T. Kohda, Signal processing using chaotic dynamics, IEICE ESS Fundamentals Review, 2:4 (2008), 16–36, in Japanese.
- [12] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms, Springer-Verlag, 2018.
- [13] C. Li, G. Luo, K. Qin, C. Li, An image encryption scheme based on chaotic tent map, Nonlinear Dyn., 87 (2017), 127–133.
- [14] T.Y. Li and J.A. Yorke, Period three implies chaos, Amer. Math. Monthly, 82 (1975), 985–995.
- [15] E.N. Lorenz, Deterministic nonperiodic flow, Journal of Atmospheric Sciences, 20:2 (1963), 130–141.
- [16] E. Lorenz, The Essence of Chaos, University of Washington Press, 1993.
- [17] T. Makino, Y. Iwata, K. Shinohara, Y. Jitsumatsu, M. Hotta, H. San and K. Aihara, Rigorous estimates of quantization error for a/d converters based on beta-map, Nonlinear Theory and Its Applications, IEICE, 6:1 (2015), 99–111.
- [18] R. May, Simple mathematical models with very complicated dynamics. Nature, 261 (1976), 459–467.
- [19] W. Parry, On the β-expansions of real numbers, Acta Math. Acad. Sci. Hung., 11 (1960), 401–416.
- [20] W. Parry, Representations for real numbers, Acta Math. Acad. Sci. Hung., 15 (1964), 95–105.
- [21] G. Radons, G. C. Hartmann, H. H. Diebner, O. E. Rossler, Staircase baker’s map generates flaring-type time series, Discrete Dynamics in Nature and Society, 5 (2000), 107–120.
- [22] A. Rényi, Representations for real numbers and their ergodic properties, Acta Mathematica Hungarica, 8:3-4 (1957), 477–493.
- [23] H. Shigematsu, H. Mori, T. Yoshida and H. Okamoto, Analytic study of power spectra of the tent maps near band-splitting transitions, J. Stat. Phys., 30 (1983), 649–679.
- [24] Y.G. Sinai, Construction of Markov partitions, Funct. Anal. Its Appl., 2 (1968), 245–253.
- [25] M. Sipser, Introduction to the Theory of Computation, 3rd ed., Cengage Learning, 2012.
- [26] H. Teramoto and T. Komatsuzaki, How does a choice of Markov partition affect the resultant symbolic dynamics?, Chaos, 20 (2010), 037113.
- [27] Y. Tomita, Randomized β-expansion, Master's thesis, Kyushu University, 2018, in Japanese.
- [28] Y. Tomita and S. Kijima, Randomized β-expansion, IEICE General Conference 2018, Tokyo, DS–1–3, in Japanese.
- [29] J. Xiao, J. Xu, Z. Chen, K. Zhang and L. Pan, A hybrid quantum chaotic swarm evolutionary algorithm for DNA encoding, Computers & Mathematics with Applications, 57:11–12 (2009), 1949–1958.
- [30] H. Yang, K.-W. Wong, X. Liao, Y. Wang and D. Yang, One-way hash function construction based on chaotic map network, Chaos, Solitons & Fractals, 41:5 (2009), 2566–2574.
- [31] T. Yoshida, H. Mori and H. Shigematsu, Analytic study of chaos of the tent map: Band structures, power spectra, and critical behaviors. J. Stat. Phys., 31 (1983), 279–308.
Appendix A Proofs Remaining from Section 2
Proposition A.1 (Proposition 2.1).
Suppose for . Then, .
Proof.
Proof.
We prove it by an induction on . Consider the case of . By (4), only when , accordingly holds by (1). Similarly, if then by (4), accordingly holds by (1). We obtain (7) for .
Proposition A.3 (Proposition 2.2).
For any $x, x' \in [0,1]$,
$$x \le x' \implies \phi(x) \preceq \phi(x')$$
holds, where $\preceq$ denotes the lexicographic order, that is, $b \preceq b'$ if and only if $b = b'$, or $b_i < b'_i$ at the first position $i$ at which $b$ and $b'$ differ.
Proof.
The claim is trivial for . Suppose , and let , where let if , for convenience. By Lemma 2.4, we know
(75) |
holds. Then, we confirm . Consider two cases. Firstly, suppose . The hypothesis requires . Then, we obtain and by (6), in the case. Next, suppose . The hypothesis requires . Then, we obtain and by (6), in the case. In both cases, we obtain . ∎
Proposition A.4 (Proposition 2.3).
The $n$-th iterated tent code is right continuous, i.e., $\lim_{x' \downarrow x} \phi(x') = \phi(x)$ for any $x \in [0,1)$.
Proof.
The proof is similar to Proposition 2.2. To begin with, we remark that if then , by a contraposition of Proposition 2.2. Assume for a contradiction that holds. Let and where and , and let . Recall (7); then we consider two cases.
- Case 1.
- Case 2. Suppose . The hypothesis requires due to (6). Let , then holds. Contradiction.
We obtain the claim. ∎
Appendix B Proofs Remaining from Section 3
B.1 Proofs of Lemmas 3.7 and 3.8
Lemma B.1 (Lemma 3.7).
Let . Then, and hold, where and .
The former claim is trivial by Proposition 2.2 and (15). The latter claim comes from the following fact.
Lemma B.2 (Lemma 3.8).
unless .
Proof.
Without loss of generality, we may assume that . The claim is trivial for since and by (4) unless . Notice that , meaning that .
Inductively assuming the claim holds for , we prove it for . For convenience, let and let . Then, holds for by the inductive assumption, and it is enough to prove . Recall that (resp. ) is determined by and (resp., and ) by (5). We also remark that by (1), and then inductively. By the inductive assumption, holds, which implies with (5) that unless . We obtain in the case.
Lemma B.3.
If then for and .
B.2 Proof of Lemma 3.10
Lemma B.4 (Lemma 3.10).
Let . (1) Suppose ( ). We consider three cases concerning the position of relative to .
- Case 1-1: .
  - Case 1-1-1. If then , and .
  - Case 1-1-2. If then , and .
- Case 1-2: . Then , and .
- Case 1-3: . Then , and .
(2) Similarly, suppose ( ).
- Case 2-1: .
  - Case 2-1-1. If then , and .
  - Case 2-1-2. If then , and .
- Case 2-2: . Then , and .
- Case 2-3: . Then , and .
Proof.
To begin with, we recall three facts. i) The segment-type is defined by by (16). ii) by (14). Particularly, and by Proposition 2.2. iii) depends on and by (6). The following proof is based on the idea similar to Lemma 3.2.
(1) Suppose ( ), i.e., is monotone increasing in by Lemma 3.2.
- Case 1-1: .
- Case 1-2: . Then, . Thus, . Accordingly, since . We obtain .
- Case 1-3: . Then, . Thus, . Accordingly, since . We obtain .
(2) Suppose ( ), i.e., is monotone decreasing.
- Case 2-1: .
  - Case 2-1-1. If then . Thus, . Accordingly, since . We obtain .
  - Case 2-1-2. If then . Thus, . Accordingly, since . We obtain .
- Case 2-2: . Then, . Thus, . Accordingly, since . We obtain .
- Case 2-3: . Then, . Thus, . Accordingly, since . We obtain .
∎
B.3 Proof of Lemma 3.13
Lemma B.5 (Lemma 3.13).
Let be a real-valued random variable drawn from uniformly at random. Let . Let . Then,
holds for , where let if .
As a preliminary step, we prove the following lemma, which in particular shows that the distribution function in question is piecewise linear; it is almost trivial, but we have not proven it yet.
Lemma B.6.
is uniformly distributed over .
Proof.
The proof is an induction on . We start with . Notice that . We consider two cases, according to whether or not. If then . Thus,
holds for any , which implies the claim in the case. Similarly, if then . Then, holds for any , which implies the claim for .
Inductively assuming the claim for , we prove it for . We here prove Case 1-1 in Lemma 3.10, but other cases are similar. Suppose and . Let for convenience. If then . Then,
holds for , where holds by an argument similar to the proof of Case 1-1-1 in Lemma 3.10. Thus, is uniformly distributed over .
If then . Then,
holds for . We obtain Case 1-1. It is not difficult to see that other cases are similar. ∎
Next, we prove the following lemma.
Lemma B.7.
Let . Then,
holds for .
Proof.
It is trivial if or , corresponding to Cases 1-2, 1-3, 2-2, 2-3 in Lemma 3.10. Consider Case 1-1 (Case 2-1 is similar). Let , and let satisfy .
Firstly we remark
(77) |
holds since is uniformly distributed over .
Next, we claim that
(78) |
holds similarly from Lemma B.6. In fact,
holds by Lemma B.6. For the right hand side, we see that
(79) | ||||
(80) |
holds. Thus
holds. We obtain (78).
Finally, we observe
and we obtain the claim in the case. Case 2-1 is similar. ∎
Proof of Lemma 3.13.
Lemma B.8.
$$\sum_{\gamma' \in N^+(\gamma)} |\gamma'| = \tau\,|\gamma| \tag{81}$$
holds for any $\gamma \in \Gamma$.