On the (Im)plausibility of Public-Key Quantum Money
from Collision-Resistant Hash Functions
Abstract
Public-key quantum money is a cryptographic proposal for using highly entangled quantum states as currency that is publicly verifiable yet resistant to counterfeiting due to the laws of physics. Despite significant interest, constructing provably-secure public-key quantum money schemes based on standard cryptographic assumptions has remained an elusive goal. Even proposing plausibly-secure candidate schemes has been a challenge.
These difficulties call for a deeper and systematic study of the structure of public-key quantum money schemes and the assumptions they can be based on. Motivated by this, we present the first black-box separation of quantum money and cryptographic primitives. Specifically, we show that collision-resistant hash functions cannot be used as a black-box to construct public-key quantum money schemes where the banknote verification makes classical queries to the hash function. Our result involves a novel combination of state synthesis techniques from quantum complexity theory and simulation techniques, including Zhandry’s compressed oracle technique.
1 Introduction
Unclonable cryptography is an emerging area in quantum cryptography that leverages the no-cloning principle of quantum mechanics [WZ82, Die82] to achieve cryptographic primitives that are classically impossible. Over the years, many interesting unclonable primitives have been proposed and studied. These include quantum copy-protection [Aar09], one-time programs [BGS13], secure software leasing [AL21], unclonable encryption [BL20], encryption with certified deletion [BI20], encryption with unclonable decryption keys [GZ20, CLLZ21], and tokenized signatures [BS16].
One of the oldest and (arguably) the most popular unclonable primitives is quantum money, which was first introduced in a seminal work by Wiesner [Wie83]. A quantum money scheme enables a bank to issue digital money represented as quantum states. Informally, the security guarantee states that it is computationally infeasible to produce counterfeit digital money states. That is, a malicious user, given one money state, cannot produce two money states that are both accepted by a pre-defined verification procedure. There are two notions we can consider here. The first notion is private-key quantum money, where the verification procedure is private. That is, in order to check whether a money state is valid, we need to submit the state to the bank which decides its validity. A more useful notion is public-key quantum money, where anyone can verify the validity of money states. While private-key money schemes have been extensively studied and numerous constructions, including information-theoretic ones, have been proposed, the same cannot be said for public-key quantum money schemes.
Aaronson and Christiano [AC13] first demonstrated the feasibility of information-theoretically secure public-key quantum money in the oracle model, meaning that all algorithms in the scheme (e.g., the minting and verification algorithms) query a black-box oracle during their execution. In the standard (i.e., non-oracle) model, there are two types of constructions known for building quantum money:
• In the first category, we have constructions borrowing sophisticated tools from different areas of mathematics, such as knot theory [FGH+12], quaternion algebras [KSS21] and lattices [Zha21, KLS22]. The constructions in this category have been susceptible to cryptanalytic attacks, as demonstrated by a couple of recent works [Rob21, BDG22]. We are still in the nascent stages of understanding the security of these candidates.
• In the second category, we have constructions based on existing cryptographic primitives, most notably indistinguishability obfuscation.

We focus on the second category. Constructions from existing primitives, especially from those that can be based on well-studied assumptions, would position public-key quantum money on firmer foundations. Unfortunately, existing constructions of indistinguishability obfuscation are either post-quantum insecure [AJL+19, JLS21, JLS22] or are based on newly introduced cryptographic assumptions [GP21, BDGM20, WW21, DQV+21] that have been subjected to cryptanalytic attacks [HJL21].
The goal of our work is to understand the feasibility of constructing public-key quantum money from fundamental and well-studied cryptographic primitives. We approach this direction via the lens of black-box separations. Black-box separations have been extensively studied in classical cryptography [Rud91, Sim98, GKM+00, RTV04, BM09, DLMM11, GKLM12, BDV17]. We say that a primitive P cannot be constructed from another primitive Q in a black-box manner if there exists a computational world (defined by an oracle) where Q exists but P does not. Phrased another way, these separations rule out constructions of primitive P where primitive Q is used in a black-box manner. In this case, we say that there is a black-box separation between P and Q. Black-box separations have been essential in understanding the relationship between different cryptographic primitives. Perhaps surprisingly, they have also served as a guiding light in designing cryptographic constructions. One such example is the setting of identity-based encryption (IBE). A couple of works [BPR+08, PRV12] demonstrated the difficulty of constructing IBE from the decisional Diffie-Hellman (DDH) assumption using a black-box construction, which prompted the work of [DG17], who used non-black-box techniques to construct IBE from DDH.
1.1 Our Work
Black-Box Separations for Unclonable Cryptography.
We initiate the study of black-box separations in unclonable cryptography. In this work, we study a black-box separation between public-key quantum money and (post-quantum secure) collision-resistant hash functions. To the best of our knowledge, our work takes the first step in ruling out certain approaches to constructing public-key quantum money from well-studied assumptions.
Model.
We first discuss the model in which we prove the black-box separation. We consider two oracles, with the first being a random oracle R (i.e., a uniformly random function) and the second being a PSPACE oracle (i.e., one that can solve PSPACE-complete problems). We investigate the feasibility of quantum money schemes and collision-resistant hash functions in the presence of R and PSPACE. That is, all the algorithms of the quantum money schemes and also the adversarial entities are given access to the oracles R and PSPACE.
There are two ways we can model a quantum algorithm's access to an oracle. The first is classical access, where the algorithms in the quantum money scheme can only make classical queries to the oracle; that is, each query to the oracle is measured in the computational basis before being forwarded to the oracle. If an algorithm A has classical access to an oracle, say O, we denote this by A^O. The second is quantum access, where the algorithms can make superposition queries. That is, an algorithm can submit a state of the form Σ_x α_x |x⟩ to the oracle O and it receives back Σ_x α_x |x⟩|O(x)⟩. If an algorithm A has quantum access to an oracle O, we denote this by A^{|O⟩}.
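To make the two access models concrete, here is a minimal toy simulation (not from the paper; the oracle and the state representation are purely illustrative): a classical query measures the query register before the oracle call, collapsing the superposition, while a quantum query acts coherently on every branch.

```python
import random

# Toy illustration of classical vs quantum access to an oracle O.
# A "state" is a dict mapping basis strings to amplitudes (assumption: this
# sparse representation is only for illustration).

def O(x):  # a fixed toy oracle
    return x[::-1]

def classical_query(state, oracle):
    """Classical access: the query register is measured in the computational
    basis before the query, collapsing the superposition to a single x."""
    xs = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    x = random.choices(xs, weights)[0]          # measurement outcome
    return {x: 1.0}, oracle(x)                  # collapsed state + answer

def quantum_query(state, oracle):
    """Quantum access: apply |x>|y> -> |x>|y XOR O(x)> coherently; with the
    answer register initialized to 0 we get sum_x a_x |x>|O(x)>."""
    return {(x, oracle(x)): a for x, a in state.items()}

# A uniform superposition over two inputs:
psi = {"01": 2 ** -0.5, "10": 2 ** -0.5}
print(quantum_query(psi, O))     # both branches carry their oracle answer
collapsed, ans = classical_query(psi, O)
print(len(collapsed))            # exactly one basis state survives
```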
Our ultimate goal is to obtain black-box separations in the quantum access model, where the algorithms in the quantum money scheme can query oracles in superposition. However, there are two major obstacles to achieving this.
First, analyzing the quantum access model in quantum cryptography has been notoriously challenging. For example, it is not yet known how to generalize to the quantum access setting the black-box separation between key agreement protocols (a classical cryptographic primitive) and one-way functions [IR90]. Attempts to tackle special cases have already encountered significant barriers [ACC+22] and have connections to long-standing conjectures in quantum query complexity (like the Aaronson-Ambainis conjecture [AA09]).
Second, we have to contend with the difficulty that quantum money is an inherently quantum cryptographic primitive. A black-box separation requires designing an adversary that can effectively clone a quantum banknote given a single copy of it. Here one encounters problems of a uniquely quantum nature, such as the No-Cloning Theorem [WZ82, Die82] and the fact that measuring the banknote will in general disturb it.
We present partial progress towards the ultimate goal stated above by simplifying the problem and focusing exclusively on this second obstacle: we prove black-box separations where the banknote verification algorithm in the quantum money scheme makes classical queries to the random oracle R (but can still make quantum queries to the PSPACE oracle), and the minting algorithm may still make quantum queries to both the R and PSPACE oracles. As we will see, even this special case of quantum money schemes is already challenging and nontrivial to analyze. We believe that our techniques may ultimately be extendable to the general setting (if there indeed exists a black-box impossibility in the general setting!), where all algorithms can make quantum queries to all oracles, and may furthermore help prove black-box separations for other quantum cryptographic primitives.
Main Theorem.
We will now state our theorem more formally. A quantum money scheme consists of three quantum polynomial-time (QPT) algorithms, namely (KeyGen, Mint, Ver), where KeyGen produces a public key-secret key pair, Mint uses the secret key to produce money states along with an associated serial number and, finally, Ver determines the validity of money states using the public key. We consider oracle-aided quantum money schemes, where these algorithms have access to a random oracle R and a PSPACE oracle, defined above.
Theorem 1 (Informal).
Any oracle-aided public-key quantum money scheme (KeyGen, Mint, Ver), where Ver makes classical queries to R, is insecure.
By insecurity, we mean the following. There exists a quantum polynomial-time (QPT) adversary A such that A, given the public key pk and a money state (s, ρ_s), where s is a serial number, can with non-negligible probability produce two (possibly entangled) states that both pass the verification checks with respect to the same serial number s. The probability is taken over the randomness of A and also over the randomness of KeyGen and Mint. We note that KeyGen and Mint can have quantum access to R, while Ver only has classical access. On the other hand, we show that the adversary A only needs classical access to R.
Furthermore, we note that the random oracle R constitutes a collision-resistant hash function against QPT adversaries that can make queries to R [Zha15]. We note that R remains collision-resistant even when the adversaries can make quantum queries to R, not just classical ones.
Implications.
Our main result rules out a class of public-key quantum money constructions that (a) base their security on collision-resistant hash functions, (b) use the hash functions in a black-box way, and (c) where the verification algorithm makes classical queries to the hash function. Clearly, it would be desirable to generalize the result to the case where the verification algorithm can make quantum queries to the hash function. However, there are some conceptual challenges to going beyond classical verification queries (which we discuss in more detail in Section 2.2.3).
The class of quantum money schemes in this hybrid classical-quantum query model is quite interesting in its own right and constitutes a well-motivated setting. For example, in Zhandry's public-key quantum money scheme [Zha21], the mint procedure only needs classical access to the underlying cryptographic primitives (when the component that uses cryptographic primitives is viewed as a black box) while the verification procedure makes quantum queries. In the constructions of copy-protection due to Coladangelo et al. [CLLZ21, CMP20], the copy-protection algorithm only makes classical queries to the cryptographic primitives in the case of [CLLZ21] and the random oracle in the case of [CMP20], whereas the evaluation algorithms in both constructions make quantum queries. Finally, in the construction of unclonable encryption in [AKL+22], all the algorithms only make classical queries to the random oracle. Given these constructions, we believe it is important to understand what is feasible or impossible for unclonable cryptosystems in the hybrid classical-quantum query model.
Secondly, we believe that the hybrid classical-quantum query model is a useful testbed for developing techniques needed for black-box separations, and for gaining insight into the structure of unclonable cryptographic primitives. Even in this special case, there are a number of technical and conceptual challenges to overcome in order to get our black-box separation of Theorem 1. We believe that the techniques developed in this paper will be a useful starting point for future work in black-box separations in unclonable cryptography.
Other Separations.
As a corollary of our main result, we obtain black-box separations between public-key quantum money and many other well-studied cryptographic primitives such as one-way functions, private-key encryption and digital signatures.
Our result also gives a separation between public-key quantum money and collapsing hash functions in the same setting as above; that is, when Ver makes classical queries to R. This follows from a result due to Unruh [Unr16], who showed that random oracles are collapsing. Collapsing hash functions are the quantum analogue of collision-resistant hash functions. Informally speaking, a hash function h is collapsing if an adversary cannot distinguish a uniform superposition of inputs mapping to a random output, say Σ_{x : h(x)=y} |x⟩, from the computational basis state obtained by measuring this superposition in the computational basis. Zhandry [Zha21] showed that hash functions that are collision-resistant but not collapsing imply the existence of public-key quantum money. Thus our result rules out a class of constructions of quantum money from collapsing functions, improving our understanding of the relationship between them.
Acknowledgments.
We thank anonymous conference referees, Qipeng Liu, Yao Ching Hsieh, and Xingjian Li for their helpful comments. HY is supported by AFOSR award FA9550-21-1-0040 and NSF CAREER award CCF-2144219.
2 Our Techniques in a Nutshell
We present a high-level overview of the techniques involved in proving Theorem 1. But first, we will briefly discuss the correctness guarantee of oracle-aided public-key quantum money schemes.
Reusability.
In a quantum money scheme (KeyGen, Mint, Ver), we require that Ver accepts a state ρ_s and a serial number s produced by Mint with overwhelming probability. However, for all we know, Ver might destroy the state during the verification process. A more useful correctness definition is reusability, which states that a money state can be repeatedly verified without losing its validity. In general, the gentle measurement lemma [Win99] shows that correctness implies reusability. However, as observed in [AK22], this is not the case when the verification algorithm has classical access to an oracle. Specifically, in our setting, Ver has classical access to R. Hence, we need to explicitly define reusability in this setting. Roughly speaking, we require the following: suppose we execute Ver on a money state produced using Mint and the verification algorithm accepts with probability p. The residual state is (possibly) a different state which, when Ver is executed on it, is also accepted with probability close to p. In fact, even if we run the verification process polynomially many times, the state obtained at the end of the process should still be accepted by Ver with probability close to p.
2.1 Warmup: Insecurity when R is absent
Towards developing techniques to prove Theorem 1, let us first tackle a simpler statement. Suppose we have a secure public-key quantum money scheme (KeyGen, Mint, Ver) that does not use any oracles. This means that any QPT adversary cannot break the security of this scheme. But what about oracle-aided adversaries? In more detail, we ask the following question: Does there exist a QPT algorithm, given quantum access to a PSPACE oracle, that violates the security of (KeyGen, Mint, Ver)? That is, given (pk, s, ρ_s), where s is a serial number and ρ_s is a valid banknote produced by Mint, it should be able to produce two states, with respect to the same serial number s, that are both accepted by the verifier.
Even this seemingly simple question is challenging! Let us understand why. Classical cryptographic primitives (even post-quantum secure ones) such as encryption schemes or digital signatures can be broken by efficient adversaries who have access to even PSPACE oracles. This follows from the fact that we can efficiently reduce the problem of breaking the scheme to the problem of determining membership in a language. For instance, in order to succeed in breaking an encryption scheme, the adversary has to decide whether the instance (pk, ct, m), where pk is a public key, ct is a ciphertext and m is a message, belongs to the language consisting of instances of the form (pk, Enc(pk, m), m), where Enc(pk, m) is an encryption of m with respect to the public key pk. Implicitly, we are using the fact that (pk, ct, m) are binary strings. Emulating a similar approach in the case of quantum money would result in quantum instances, and it is not clear how to leverage a PSPACE oracle, or more generally a decider for any language, to complete the reduction.
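As an illustration of this classical reduction, here is a toy sketch (the "encryption" scheme is intentionally trivial and all names are hypothetical; brute-force search stands in for the membership oracle): because instances are binary strings, membership queries suffice to recover a plaintext bit by bit.

```python
import itertools

# Toy sketch: breaking a classical scheme reduces to deciding membership in a
# language of binary strings, which an oracle deciding the language can do.

KEYLEN, MSGLEN = 4, 4

def enc(pk, m):                      # toy deterministic "encryption" (XOR)
    return [(b + k) % 2 for b, k in zip(m, itertools.cycle(pk))]

def in_language(pk, ct, prefix):
    """Membership oracle for L = {(pk, ct, prefix) : some message with this
    prefix encrypts to ct under pk}. Decided here by brute force, standing in
    for the PSPACE oracle."""
    for m in itertools.product([0, 1], repeat=MSGLEN):
        if list(m)[:len(prefix)] == prefix and enc(pk, list(m)) == ct:
            return True
    return False

def attack(pk, ct):
    """Recover the plaintext one bit at a time via membership queries."""
    prefix = []
    for _ in range(MSGLEN):
        prefix.append(0 if in_language(pk, ct, prefix + [0]) else 1)
    return prefix

pk = [1, 0, 1, 1]
m = [0, 1, 1, 0]
print(attack(pk, enc(pk, m)))  # recovers m
```

For quantum money, the analogous "instance" would contain a quantum state, so it cannot be encoded as a binary string and handed to a language decider.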
Synthesizing Witness States.
Towards addressing the above question, we reduce the task of breaking the security of the quantum money scheme using PSPACE to the task of finding states accepted by the verifier in quantum polynomial space. This reduction is enabled by the following observation, due to Rosenthal and Yuen [RY21]: a set of pure states computable by a quantum polynomial-space algorithm (which may in general include intermediate measurements) can be synthesized by a QPT algorithm with quantum access to a PSPACE oracle. Implicit in the result of [RY21] is the following important point: in order to synthesize the state using the PSPACE oracle, it is important that we know the entire description of the quantum polynomial-space algorithm generating the pure states.
In more detail, we show the following statement: for every verification key pk and serial number s (technically, we show a weaker statement which holds for an overwhelming fraction of them), there exists a pure state |ψ⟩ (technically, we require that the reduced density matrix of |ψ⟩ is accepted) that is accepted by Ver with non-negligible probability and, moreover, |ψ⟩ can be generated by a quantum polynomial-space algorithm.
The first attempt is to follow the classical brute-force search algorithm. Namely, we repeat the following exponentially many times: guess a quantum state |ψ⟩ uniformly at random and, if |ψ⟩ is accepted by Ver with non-negligible probability, output |ψ⟩ and terminate. (Output an arbitrary state if we run out of iterations.) However, there are two problems with this attempt. Firstly, in general, it is not clear how to calculate the acceptance probability of |ψ⟩ in polynomial space (|ψ⟩ needs exponentially many bits to represent). Secondly, |ψ⟩ might be destroyed when we calculate the acceptance probability.
To fix the first problem, we note that an estimate of the acceptance probability is already good enough, and it can be obtained using a method introduced by Marriott and Watrous [MW05] (called the MW technique). The MW technique allows us to efficiently estimate the acceptance probability of a verification algorithm on a state given only one copy of that state. Furthermore, it does not disturb the state too much, in the sense that the expected acceptance probability of the residual state does not decay too significantly, which fixes the second problem.
This brings us to our second attempt. We repeat the following process exponentially many times: apply the MW technique to a maximally mixed state and, if the estimated acceptance probability happens to be non-negligible, output the residual state and terminate. (Output an arbitrary state if all the iterations fail.)
As the MW technique is efficient, this algorithm only uses polynomial space. Furthermore, intuitively, we can obtain a state that is accepted by Ver with non-negligible probability given the fact that such a state exists. Indeed, if such a state exists, by a simple convexity argument we can assume without loss of generality that it is pure. The maximally mixed state can be treated as a uniform mixture over a basis containing that pure state. Thus, roughly speaking, we start from that pure state with inverse exponential probability, so we can find it within exponentially many iterations with overwhelming probability. This attempt almost succeeds, except that it outputs a mixed state in general, whereas the approach of [RY21] can only deal with pure states. There are two reasons for this. Firstly, we start with a maximally mixed state and, secondly, the MW technique involves intermediate measurements.
Our final attempt makes the following minor changes compared to the second attempt. To fix the first issue, it starts with a maximally entangled state (instead of a maximally mixed state) and only operates on half of it. To fix the second issue, it runs the MW process coherently by deferring all the intermediate measurements. We then end up with a pure state whose reduced density matrix is the same as the output state of the second attempt.
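The search loop of the second attempt can be caricatured classically as follows (a toy sketch, not the actual MW procedure: basis states stand in for quantum states, uniform sampling stands in for the maximally mixed state, and repeated independent trials stand in for the single-copy MW estimate; all parameters are illustrative).

```python
import random
random.seed(7)

# Classical caricature of the witness-search loop: look for a state accepted
# by a verifier, given only the promise that one exists.

DIM = 64
GOOD = 37                                  # unknown to the search loop
def accept_prob(i):
    return 0.9 if i == GOOD else 0.001

def estimate(i, trials=200):
    """Stand-in for the Marriott-Watrous acceptance-probability estimate
    (the real technique needs only a single copy of the state)."""
    return sum(random.random() < accept_prob(i) for _ in range(trials)) / trials

def find_witness(iterations=10_000):
    for _ in range(iterations):
        i = random.randrange(DIM)          # "maximally mixed" sample
        if estimate(i) > 0.5:              # non-negligible acceptance
            return i
    return None                            # arbitrary fallback

print(find_witness())  # finds the accepting state with high probability
```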
2.2 Insecurity in the presence of R
So far, we considered the task of violating the security of a quantum money scheme whose algorithms did not have access to any oracle. Let us go back to oracle-aided quantum money schemes, where all the algorithms (honest and adversarial) have access to R and PSPACE. Our goal is to construct an adversary that violates the security of such quantum money schemes. But didn't we just solve this problem? Recall that when invoking [RY21], it was crucial that we knew the entire description of the polynomial-space algorithm in order to synthesize the state. However, when we are considering oracle-aided verification algorithms, denoted by Ver^R, we don't have the full description of Ver^R (the fact that we don't have the description of R is the problem here). Thus, we cannot carry out the synthesizing process.
A naive approach is to sample our own oracle R' and synthesize the state with respect to Ver^{R'}. However, this does not help. Firstly, there is no guarantee that Ver^{R'} accepts any state with high enough probability. Without this guarantee, the synthesizing process does not work. For now, let us take for granted that there does exist some witness state ρ that is accepted by Ver^{R'} with high enough probability. Even so, there is no guarantee that ρ is going to be accepted by Ver^R with better than negligible probability.
Towards addressing these hurdles, we first focus on the simple case where KeyGen and Mint make classical queries to R, and later we focus on the case of quantum queries.
2.2.1 KeyGen and Mint: Classical Queries to R
Compiling out R.
Suppose we can magically find a database D, using only polynomially many queries to R, such that all the query-answer pairs made by KeyGen and Mint to R are contained in D. In this case, there is a QPT adversary A that, given (pk, s, ρ_s), can find two states ρ_1 and ρ_2 such that Ver^R accepts both states. A does the following: it first finds the database D and constructs another circuit Ver_D that runs Ver and, whenever Ver makes a query to R, answers the query using D. Then, A synthesizes two states ρ_1 and ρ_2, using PSPACE, such that both states are accepted by Ver_D. By definition of the database D, these two states are also accepted by Ver^R.
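A toy instantiation of this compiling-out step (all names and parameters are illustrative, and the verifier is a stand-in that merely checks a few oracle answers): record the verifier's query-answer pairs into D, then answer its queries from D instead of R.

```python
import random
random.seed(1)

# Compile out the random oracle: once a database D of R's query-answer pairs
# is recovered, the verifier runs without access to R, answering queries from
# D (and freshly at random on unrecorded points).

R = {x: random.randrange(2 ** 8) for x in range(2 ** 10)}   # toy random oracle

def make_ver(queries):
    """A toy verifier that checks R's answers on a few fixed points."""
    expected = [R[q] for q in queries]
    def ver(oracle):
        return [oracle(q) for q in queries] == expected
    return ver

ver = make_ver([3, 14, 159])

# Recover D by running the verifier against R and recording its queries.
D = {}
def recording_R(x):
    D[x] = R[x]
    return R[x]
assert ver(recording_R)

# Ver_D: the same verifier, with queries answered from D instead of R.
def oracle_D(x):
    return D[x] if x in D else random.randrange(2 ** 8)
print(ver(oracle_D))   # True: D covers all of Ver's queries
```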
Of course, it is wishful for us to hope that we can find a database, by making only polynomially many queries to R, that is perfectly consistent with the queries made by KeyGen and Mint. Instead, we hope to recover a good enough database D. In more detail, we aim to recover a database D that captures all the relevant queries made by KeyGen and Mint.
Let D_KeyGen and D_Mint be the collections of query-answer pairs made by KeyGen and Mint respectively. A query made by Ver is called bad if it is in D_KeyGen ∪ D_Mint and, moreover, it was not recorded in D. If Ver makes bad queries then the answers returned will likely be inconsistent with R and thus, there is no guarantee that Ver_D will work. Our hope is that the probability of Ver making bad queries is upper bounded by an inverse polynomial.
Once we have such a database D, by a similar argument, we can conclude that the states synthesized using Ver_D are also accepted by Ver^R.

But how do we recover this database D? To see how, we will first focus on a simple case before dealing with the general case.
State-independent database simulation.
Note that the queries made by Ver could potentially depend on its input state. For now, we will assume that the distribution of queries made by Ver is independent of the input state. We will deal with state-dependent query distributions later.
The first attempt to generate D would be to rely upon techniques introduced by Canetti, Kalai and Paneth [CKP15] who, in a different context (that of proving the impossibility of obfuscation in the random oracle model) showed how to generate a database that is sufficient to simulate the queries made by the evaluation algorithm. Suppose (s, ρ_s) is the banknote generated by Mint. Then, run Ver a fixed polynomial number of times, referred to as test executions, by querying R. In each execution of Ver, record all the queries made by Ver along with their answers. The union of the queries made in all the executions of Ver is assigned to the database D. In the context of obfuscation for classical circuits, [CKP15] argue that, except with inverse polynomial probability, the queries made by the evaluation algorithm can be successfully simulated using D. This argument is shown by proving an upper bound on the probability that the evaluation algorithm makes bad queries.
A similar analysis can also be made in our context to argue that D suffices for successful simulation. That is, we can argue that the state we obtain after all the executions of Ver (which could be very different from the state we started off with) can be successfully simulated using D. However, it is crucial for our analysis to go through that the distribution of D (the query-answer pairs made during the test executions) is independent of the state input to Ver.
State-dependent database simulation.
For all we know, the queries made by Ver could indeed depend on the input state. In this case, we can no longer appeal to the argument of [CKP15]. At a high level, the reason is that after each execution of Ver, the money state could potentially change, and this would affect the distribution of queries in the subsequent executions of Ver in such a way that the execution of Ver on the final state (which could be different from the input state in the first execution of Ver) cannot be simulated using the database D.
Instead, we will rely upon a technique due to [AK22], who studied a similar problem in the context of copy-protection. They showed that by randomizing the number of test executions, one can argue that the execution of Ver on the state obtained after all the test executions can be successfully simulated using D, except with inverse polynomial probability. That is, suppose the initial state is ρ_0 and, after running Ver t times where t is chosen uniformly at random from [T], the resulting state is ρ_t. Let D be as defined before. Then, we have the guarantee that Ver accepts ρ_t, except with inverse polynomial probability, even when Ver is simulated using D. This is because the total number of bad queries we can encounter over all T verifications is bounded by q = |D_KeyGen ∪ D_Mint| = poly(λ): once a bad query is made, it is recorded in D and is no longer bad. Then there are only at most q/ε points t such that the probability of making a bad query during the (t+1)-st verification is at least ε. So when T is large enough, there is a good chance that we choose a t such that the probability of making a bad query during the next verification (i.e., the (t+1)-st verification if you count from the beginning) is small, in which case D can simulate R well.
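The counting argument behind randomizing t can be summarized as follows (with q, ε and T as above):

```latex
% Each bad query, once made, is recorded in D, so over all T test executions:
\sum_{t=0}^{T-1} \Pr[\text{bad query in execution } t+1] \;\le\; q,
\qquad q := |D_{\mathsf{KeyGen}} \cup D_{\mathsf{Mint}}| = \mathrm{poly}(\lambda).
% Hence at most q/\epsilon indices t have bad-query probability at least \epsilon:
\bigl|\{\, t : \Pr[\text{bad query in execution } t+1] \ge \epsilon \,\}\bigr|
\;\le\; \frac{q}{\epsilon},
% so a uniformly random t is "good" except with small probability:
\Pr_{t \leftarrow [T]}\Bigl[\Pr[\text{bad query in execution } t+1] \ge \epsilon\Bigr]
\;\le\; \frac{q}{\epsilon T}.
```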
This suggests the following attack on the quantum money scheme. On input a money state (s, ρ_s), do the following:
• Run Ver t times, also referred to as test executions, where the number of test executions t is randomized as per [AK22]. Let D be the set of query-answer pairs made by Ver to R during the test executions. Denote by ρ_t the state obtained after the t executions of Ver.
• Let Ver_D be the verification circuit as defined earlier.
• Using quantum access to PSPACE, synthesize two states ρ_1 and ρ_2, as per Section 2.1, such that both states are accepted by Ver_D.
• Output ρ_1 and ρ_2.
From the witness synthesis method, we have the guarantee that ρ_1 and ρ_2 are both accepted by Ver_D. However, this is not sufficient to prove that the above attack works. Remember that the adversary is supposed to output two states that are both accepted by Ver^R. Unfortunately, there is no guarantee that Ver^R accepts these two states. Indeed, both ρ_1 and ρ_2 could be quite different from ρ_t. Hence, the above attack does not work.
Every mistake we make is progress.
Let us understand why the above attack does not work. Note that as long as Ver does not make any bad query (i.e., a query in D_KeyGen ∪ D_Mint but not contained in D), it cannot distinguish whether its queries are being answered by D or by R. However, when Ver is executed on ρ_1 or ρ_2, we can no longer upper bound the probability that Ver will not make any bad queries.
We modify the above approach as follows: whenever Ver makes bad queries, we update the database D to contain the bad queries along with the correct answers (i.e., answers generated using R). Once D is updated, we can synthesize two new states using Ver_D. We repeat this process until we have synthesized two states that are accepted by Ver^R.
Is there any guarantee that this process will stop? Our key insight is that whenever we make a mistake and synthesize states that are not accepted by Ver^R, we necessarily learn a new query in D_KeyGen ∪ D_Mint that is not contained in D. Thus, with each mistake, we make progress. Since there are only polynomially many queries in D_KeyGen ∪ D_Mint, we will ultimately end up synthesizing two states that are accepted by Ver^R.
Our Attack.
With this modification, we have the following attack. On input a money state (s, ρ_s), do the following:
• Test phase: Run Ver t times, also referred to as test executions, where the number of test executions t is randomized as per [AK22]. Let D be the set of query-answer pairs made by Ver to R during the test executions. Denote by ρ_t the state obtained after the t executions of Ver.
• Update phase: Repeat the following polynomially many times. Let Ver_D be the verification circuit as defined earlier. Using quantum access to PSPACE, synthesize a state as per Section 2.1 such that the state is accepted by Ver_D. Run Ver^R on this state and include in D any new queries made by Ver to R.
• Let D_1, …, D_k be the databases obtained after each execution during the update phase.
• Using quantum access to PSPACE, synthesize two states ρ_1 and ρ_2 such that both states are accepted by Ver_{D_i} for some randomly chosen i.
• Output ρ_1 and ρ_2.
In the technical sections, we analyze the above attack and prove that it works.
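The termination argument of the update phase can be sketched with a toy model (an assumption-laden caricature: queries are abstract labels, and each failed synthesis is modeled as revealing exactly one new relevant query):

```python
import random
random.seed(2)

# Toy model of the update phase: every failed iteration reveals at least one
# new relevant query, so the loop ends after at most |RELEVANT| + 1 rounds.

RELEVANT = {11, 42, 77, 100}      # queries made by KeyGen/Mint (unknown to us)

def synthesize_and_verify(D):
    """Returns (accepted, new_queries). The state synthesized w.r.t. Ver_D is
    rejected by Ver^R only if Ver makes a bad query, which we then learn."""
    missing = RELEVANT - set(D)
    if not missing:
        return True, set()
    return False, {random.choice(sorted(missing))}   # learn one bad query

D, rounds = set(), 0
while True:
    rounds += 1
    ok, new = synthesize_and_verify(D)
    if ok:
        break
    D |= new
print(rounds <= len(RELEVANT) + 1)   # progress bound holds: True
```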
2.2.2 KeyGen and Mint: Quantum Queries to R
The important point to note here is the form of our aforementioned attacker: it only takes advantage of the fact that Ver makes classical queries to R. When KeyGen and Mint make quantum queries to R while Ver makes classical queries to R, we can still run the attacker. What is left is to show that the same attacker works even when KeyGen and Mint make quantum queries to R.
The main difficulty in carrying over the intuitions of Section 2.2.1 to the case where KeyGen and Mint make quantum queries to R is that it is difficult to define analogues of D_KeyGen and D_Mint. To give a flavour of the difficulty, let us first consider two naive attempts.
The first attempt is to define D_KeyGen and D_Mint to be the query-answer pairs asked (with non-zero amplitude) during KeyGen and Mint. However, this attempt suffers from the problem that D_KeyGen ∪ D_Mint can then have exponentially many elements. So even if we can make progress each time, in the sense that we recover some new element of D_KeyGen ∪ D_Mint, there is no guarantee that the update phase will terminate in polynomial time.
The second attempt is to only include queries that are asked "heavily" during KeyGen and Mint. To be more specific, let D_KeyGen and D_Mint be the query-answer pairs asked with inverse polynomial squared amplitude during KeyGen and Mint. However, with this plausible definition, the following claim, a crucial idea underlying our intuitions in Section 2.2.1, no longer holds: whenever the acceptance probability of Ver_D is far from that of Ver^R, we can recover a new query in D_KeyGen ∪ D_Mint. Let us understand why this claim is not true if we adopt this definition of D_KeyGen and D_Mint.
Consider the following contrived counterexample. Suppose there exists a quantum money scheme (KeyGen, Mint, Ver). We modify this scheme into (KeyGen', Mint', Ver') as follows:
• KeyGen': outputs the secret key-public key pair of KeyGen.
• Mint': takes as input the secret key, makes a quantum query to R on the uniform superposition Σ_x |x⟩ to get the state Σ_x |x⟩|R(x)⟩, and then measures to get a pair (x, y) with y = R(x). It also runs Mint to get a serial number s along with a state ρ_s. It outputs (s, x, y) as the serial number and ρ_s as the banknote.
-
•
: takes as input and an alleged banknote , makes classical query to on the input to get and checks if . It also checks if is a valid money state with respect to . Accepts if and only if both the checks pass.
In the above counterexample, it is possible that no query-answer pair is asked with inverse-polynomial squared amplitude during Mint, and thus the database is empty. It is certainly empty at the beginning, because we have not started to record query-answer pairs. In this case, the acceptance probability of the simulated verification is smaller than or equal to 2^(−m), where m is the output length of G, while the acceptance probability of the real verification is 1 if the scheme is (perfectly) correct. However, it is impossible to recover a new query from the database because it is empty.
Purified View.
Our insight is to consider an alternate world called the purified view. In this alternate world, we run everything coherently; in more detail, we consider a uniform superposition over the random oracle G, and run KeyGen, Mint, and even the attacker coherently (i.e., with no intermediate measurements). If the attacker is successful in this alternate world then he is also successful in the real world where G and the queries made by Ver to G are measured. We then employ the compressed oracle technique by Zhandry [Zha18] to coherently recover the database of query-answer pairs recorded during KeyGen and Mint and relate it to the database recorded during verification. Using an involved analysis, we then show that many of the insights from the case when KeyGen and Mint make classical queries to G can be translated to the quantum query setting.
2.2.3 Challenges To Handling Quantum Verification Queries
It is natural to wonder whether we can similarly use the compressed oracle technique to handle quantum queries made by Ver. Unfortunately, there are inherent limitations. Recall that in our attack, the adversary records the verifier’s classical query-answer pairs in a database, uses this to produce a classical description of a verification circuit (that does not make any queries to the random oracle), and submits the circuit description to a PSPACE oracle in order to synthesize a money state. If the verifier instead makes quantum queries, then a natural idea is to use Zhandry’s compressed oracle technique to record the quantum queries. However, there are two conceptual challenges to implementing this idea.
First, in the compressed oracle technique, the queries are recorded by the oracle itself in a “database register”, not by the adversary in the cryptosystem. In our setting, we are trying to construct an adversary to record the queries, but it does not have access to the oracle’s database register. In general, any attempt by the adversary to get some information about the query positions of Ver could potentially disturb the intermediate states of the algorithm; it is then unclear how to use the original guarantees of Ver. Another way of saying this is that Zhandry’s compressed oracle technique is typically used in the security analysis to show limits on the adversary’s ability to break some cryptosystem. In our case, we want to use some kind of quantum recording technique in the adversary’s attack.
Second, the natural approach to using the PSPACE oracle is to leverage it to synthesize alleged banknotes. However, since the PSPACE oracle is a classical function (which may be accessed in superposition), it requires polynomial-length classical strings as input. In our approach, the adversary submits a classical description of a verification circuit with query-answer pairs hardcoded inside. On the other hand, if Ver makes quantum queries, it may query exponentially many positions of the random oracle in superposition, and it is unclear how to “squeeze” the relevant information about the queries into a polynomial-sized classical string that could be utilized by the PSPACE oracle.
This suggests that we may need a fundamentally new approach to recording quantum queries in order to handle the case when the verification algorithm makes quantum queries.
2.3 Related Work
Quantum Money.
The notion of quantum money was first conceived in the paper by Wiesner [Wie83]. In the same work, a construction of private-key quantum money was proposed. Wiesner’s construction has been well studied and its limitations [Lut10] and security guarantees [MVW12] have been well understood. Other constructions of private-key quantum money have also been studied. Ji, Liu and Song [JLS18] construct private-key quantum money from pseudorandom quantum states. Radian and Sattath [RS22] construct private-key quantum money with classical bank from quantum hardness of learning with errors.
With regards to public-key quantum money, Aaronson and Christiano [AC13] present a construction of public-key quantum money in the oracle model. Zhandry [Zha21] instantiated this oracle and showed how to construct public-key quantum money based on the existence of post-quantum indistinguishability obfuscation (iO) [BGI+01]. Recently, Shmueli [Shm22] showed how to achieve public-key quantum money with classical bank, assuming post-quantum iO and quantum hardness of learning with errors. Constructions [FGH+12, KSS21, KLS22] of public-key quantum money from newer assumptions have also been explored although they have been susceptible to quantum attacks [Rob21, BDG22].
Black-box Separations in Quantum Cryptography.
So far, most of the existing black-box separations in quantum cryptography have focused on extending black-box separations for classical cryptographic primitives to the quantum setting. Hosoyamada and Yamakawa [HY20] extend the black-box separation between collision-resistant hash functions and one-way functions [Sim98] to the quantum setting. Austrin, Chung, Chung, Fu, Lin and Mahmoody [ACC+22] showed a black-box separation between key agreement and one-way functions in the setting when the honest parties can perform quantum computation but only have access to classical communication. Cao and Xue [CX21] extended classical black-box separations between one-way permutations and one-way functions to the quantum setting.
3 Preliminaries
For a string x, let |x| denote its length. For any positive integer n, let [n] denote the set {1, 2, ..., n}. Define the symmetric difference of two sets S and T to be the set of elements contained in exactly one of S and T, i.e. S △ T = (S ∪ T) \ (S ∩ T).
3.1 Quantum States, Algorithms, and Oracles
A register is a finite-dimensional complex Hilbert space. If A, B, C are registers, for example, then the concatenation ABC denotes the tensor product of the associated Hilbert spaces. For a linear transformation L and register A, we sometimes write L_A to indicate that L acts on A, and similarly we sometimes write ρ_A to indicate that a state ρ is in the register A. We write Tr(·) to denote trace, and Tr_A(·) to denote the partial trace over a register A.
For a pure state |φ⟩, we write φ to denote the density matrix |φ⟩⟨φ|. Let id denote the identity matrix. Let td(ρ, σ) denote the trace distance between two density matrices ρ, σ.
For a pure state |φ⟩ = Σ_x α_x |x⟩ written in the computational basis, we write |φ*⟩ = Σ_x α_x* |x⟩ to denote the conjugate of |φ⟩, where α* is the complex conjugate of the complex number α. The following observation shows what the maximally entangled state looks like in other bases.
Lemma 1.
For two registers A and B of the same dimension d, let {|i⟩}_{i∈[d]} be the computational basis and {|v_i⟩}_{i∈[d]} be an arbitrary orthonormal basis. Then
Σ_{i∈[d]} |i⟩_A |i⟩_B = Σ_{i∈[d]} |v_i*⟩_A |v_i⟩_B.
Proof.
It is easy to show that {|v_i*⟩}_i also forms a basis. Suppose |v_i⟩ = Σ_j u_{ji} |j⟩, so that the matrix U = (u_{ji}) is unitary. Then
Σ_i |v_i*⟩|v_i⟩ = Σ_i Σ_{j,k} u_{ji}* u_{ki} |j⟩|k⟩ = Σ_{j,k} (UU†)_{kj} |j⟩|k⟩ = Σ_j |j⟩|j⟩,
where we use the fact that UU† and U†U are the identity. ∎
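The identity in Lemma 1 can be checked numerically. The following is an illustrative sketch (not part of the paper); numpy and all variable names are our own, and the random basis is obtained from a QR decomposition.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

# Random unitary V via QR decomposition; its columns |v_i> form an arbitrary basis.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
V, _ = np.linalg.qr(A)

# Unnormalized maximally entangled state: sum_i |i>|i>.
phi = sum(np.kron(np.eye(d)[:, i], np.eye(d)[:, i]) for i in range(d))

# Lemma 1: sum_i |v_i*>|v_i> equals sum_i |i>|i>.
phi_v = sum(np.kron(V[:, i].conj(), V[:, i]) for i in range(d))

assert np.allclose(phi, phi_v)
print("Lemma 1 verified for d =", d)
```

The check works for any unitary V because the coefficient of |j⟩|k⟩ in the second sum is (UU†)_{kj} = δ_{jk}, exactly as in the proof above.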
Quantum Circuits
We specify the model of quantum circuits that we work with in this paper. For convenience we fix the universal gate set [NC00, Chapter 4] (although our results hold for any universal gate set consisting of gates with algebraic entries). Quantum circuits can include unitary gates from the aforementioned universal gate set, as well as non-unitary gates that (a) introduce new qubits initialized in the zero state, (b) trace them out, or (c) measure them in the standard basis. We say that a circuit uses space if the total number of qubits involved at any time step of the computation is at most . The description of a circuit is a sequence of gates (unitary or non-unitary) along with a specification of which qubits they act on.
We call a sequence of quantum circuits C = (C_n)_{n∈N} a quantum algorithm. We say that C is polynomial-time if there exists a polynomial p such that C_n has size at most p(n). We say that C is polynomial-space if there exists a polynomial p such that C_n uses at most p(n) space.
Let C = (C_n)_{n∈N} denote a quantum algorithm. Given a string x and a state ρ whose number of qubits matches the input size of the circuit C_{|x|}, we write C(x, ρ) to denote the output of circuit C_{|x|} on input ρ. The output will in general be a mixed state as the circuit can perform measurements.
We say that a quantum algorithm C is time-uniform (or simply uniform) if there exists a polynomial-time Turing machine that on input 1^n outputs the description of C_n. Similarly we say that C is space-uniform if there exists a polynomial-space Turing machine that on input 1^n outputs the description of C_n.
Oracle Algorithms
Oracle algorithms are quantum algorithms whose circuits, in addition to having the gates as described above, have the ability to query (perhaps in superposition) a function (called an oracle) which may act on many qubits. This is essentially the same as the standard quantum query model [NC00, Chapter 6], except the circuits may perform non-unitary operations such as measurement, reset, and tracing out. Each oracle call is counted as a single gate towards the size complexity of a circuit. The notion of time- and space-uniformity for oracle algorithms is the same as with non-oracle algorithms: there is a polynomial-time/polynomial-space Turing machine – which does not have access to the oracle – that outputs the description of the circuits.
Given an oracle O = (O_n)_{n∈N} where each O_n is an n-bit boolean function, we write A^O to denote an oracle algorithm where each circuit A_n can query any of the functions O_1, O_2, ... (provided that the oracle does not act on more than the number of qubits of A_n).
In this paper we distinguish between classical and quantum queries. We say that an oracle algorithm A makes quantum queries if it can query the oracle in superposition; this is akin to the standard query model. We say that A makes classical queries if, before every oracle call, the input qubits to the oracle are measured in the standard basis. In this case, the algorithm queries the oracle on a probabilistic mixture of inputs. For clarity, we write A^{|O⟩} to denote that A makes quantum queries to O, and A^O to denote that A makes classical queries.
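To illustrate why the two query models differ, here is an illustrative aside (hypothetical code, in the style of Deutsch's algorithm rather than anything from this paper): a single quantum query determines f(0) ⊕ f(1) exactly, whereas a classical query measures the input register first and reveals only one value of f.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def U_f(f):
    # Oracle unitary |x, y> -> |x, y XOR f(x)> on two qubits.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # One *quantum* query suffices to learn f(0) XOR f(1).
    state = np.kron([1, 0], [0, 1]).astype(float)  # |0>|1>
    state = np.kron(H, H) @ state                  # superpose both registers
    state = U_f(f) @ state                         # single quantum query
    state = np.kron(H, np.eye(2)) @ state          # interfere
    p_first_is_1 = state[2] ** 2 + state[3] ** 2   # Pr[first qubit = 1]
    return int(round(p_first_is_1))                # equals f(0) XOR f(1)

for f in [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]:
    assert deutsch(f) == f(0) ^ f(1)
# A classical query would measure the |+> input register first, collapsing it
# to a single x; the oracle then reveals only f(x), so f(0) XOR f(1) stays unknown.
print("one quantum query recovers f(0) XOR f(1)")
```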
A specific oracle that we consider throughout is the PSPACE oracle. What we mean by this is a sequence of functions (P_n)_{n∈N} where for every n, the function P_n decides n-bit instances of a PSPACE-complete language (such as Quantified Satisfiability [Pap94]).
We state the following observation.
Lemma 2.
Let A denote a polynomial-time oracle algorithm (not necessarily uniform) that makes quantum queries to PSPACE and has one-bit classical output. Then there exists a polynomial-space algorithm U = (U_n)_{n∈N} such that for all n, U_n is a unitary and the functionality of A_n is exactly the same as introducing a polynomial number of qubits initialized in the zero state, applying the unitary U_n, and then measuring the first qubit in the computational basis to get a classical output. Furthermore, if A is uniform, then U is space-uniform.
Proof.
This follows because, for a polynomial-time (oracle) algorithm, we can always introduce new qubits only at the beginning and defer measurements and tracing out to the end, and each oracle query to the PSPACE oracle can be computed by first introducing several ancillas initialized in the zero state and then applying a unitary that implements the classical polynomial-space algorithm for the PSPACE-complete language and uncomputes all the intermediate results. Furthermore, the descriptions of the unitaries can be generated by polynomial-space Turing machines. ∎
Finally, we will consider hybrid oracles that are composed of two separate oracles: an oracle G and the PSPACE oracle. In this model, the oracle algorithm makes classical queries to G and quantum queries to PSPACE. We abuse notation and refer to algorithms having access to hybrid oracles as oracle algorithms.
State Synthesis
We define the following “state complexity class”. Intuitively it captures the set of quantum states that can be synthesized by polynomial-space quantum algorithms.
Definition 1 (statePSPACE).
statePSPACE is the class of all sequences (ρ_x)_{x∈S} for some set S ⊆ {0,1}* (called the promise) such that there is a polynomial p where each ρ_x is a density matrix on p(|x|) qubits, and for every polynomial q there exists a space-uniform polynomial-space quantum algorithm (C_x)_x such that for all x ∈ S, the circuit C_x takes no inputs and outputs a density matrix σ_x such that td(σ_x, ρ_x) ≤ 2^(−q(|x|)).
We say that the state sequence is pure if each ρ_x is a pure state |ψ_x⟩⟨ψ_x|; in that case we usually denote the sequence by (|ψ_x⟩)_{x∈S}.
The following theorem says that, for sequences in statePSPACE that are pure, there in fact is a polynomial-time oracle algorithm that makes quantum queries to a PSPACE oracle to synthesize the state sequence.
Theorem 2 (Section 5 of [RY21]).
Let (|ψ_x⟩)_{x∈S} be a family of pure states in statePSPACE. Then there exists a polynomial-time oracle algorithm A such that for every x ∈ S, on input x, the algorithm A^PSPACE outputs a pure state that is inverse-exponentially close in trace distance to |ψ_x⟩.
3.2 Public Key Quantum Money Schemes
Definition 2 (Oracle-aided Public Key Quantum Money Schemes).
An oracle-aided public key quantum money scheme consists of three uniform polynomial-time oracle algorithms (KeyGen, Mint, Ver):
•
KeyGen(1^λ): takes as input a security parameter λ in unary notation and generates a secret key-public key pair (sk, pk).
•
Mint(sk): takes as input sk and mints a banknote ρ associated with a serial number s.
•
Ver(pk, (s, ρ)): takes as inputs pk and an alleged banknote (s, ρ) and outputs (b, ρ′), where b ∈ {0, 1}.
For simplicity, when we don’t care about the output state in Ver, we sometimes denote the event that b = 1 as “Ver accepts”.
We require the above oracle-aided public key quantum money scheme to satisfy both correctness and security properties.
3.2.1 Correctness
We first consider the traditional definition of correctness considered by prior works. Roughly speaking, correctness states that the verification algorithm accepts the money state produced by the minting algorithm. Later, we consider a stronger notion called reusability, which stipulates that the verification process on a valid money state outputs another valid money state (not necessarily the same as before).
Definition 3 (Correctness).
An oracle-aided public key quantum money scheme is γ-correct if the following holds for every λ ∈ N:
Pr[Ver(pk, (s, ρ)) accepts : (sk, pk) ← KeyGen(1^λ), (s, ρ) ← Mint(sk)] ≥ γ(λ),
where the probability is also over the randomness of KeyGen, Mint, Ver.
We omit γ when γ = 1 − negl(λ) for some negligible function negl.
Reusability.
In this work, we consider quantum money schemes satisfying the stronger notion of reusability.
Definition 4 (Reusability).
An oracle-aided public key quantum money scheme is γ-reusable if the following holds for every λ ∈ N and for every polynomial T: letting (sk, pk) ← KeyGen(1^λ), (s, ρ_0) ← Mint(sk), and (b_i, ρ_i) ← Ver(pk, (s, ρ_{i−1})) for i = 1, ..., T(λ),
Pr[b_1 = b_2 = ⋯ = b_{T(λ)} = 1] ≥ γ(λ),
where the probability is also over the randomness of KeyGen, Mint, Ver.
We omit γ when γ = 1 − negl(λ) for some negligible function negl.
In general, the gentle measurement lemma [Win99] can be invoked to prove that correctness generically implies reusability. However, this is not the case in our context: the verification algorithm performs intermediate measurements whenever it makes classical queries to an oracle, and these measurements cannot be deferred to the end.
3.2.2 Security
We consider the following security notion. Basically, it says that no efficient adversary can produce two alleged banknotes from one valid banknote with the same serial number.
Definition 5 (Security).
An oracle-aided public key quantum money scheme is ε-secure if the following holds for every λ ∈ N and for every uniform polynomial-time oracle algorithm A:
Pr[Ver(pk, (s, σ_B)) accepts ∧ Ver(pk, (s, σ_C)) accepts : (sk, pk) ← KeyGen(1^λ), (s, ρ) ← Mint(sk), σ ← A(pk, (s, ρ))] ≤ ε(λ),
where the probability is also over the randomness of KeyGen, Mint, Ver. By σ_B, we mean the reduced density matrix of σ on the B register (and similarly for σ_C).
We omit ε when ε = negl(λ) for some negligible function negl.
3.3 Jordan’s Lemma and Alternating Projections
In this section, we analyze the alternating projection algorithm, a tool introduced by Marriott and Watrous [MW05] for witness-preserving error reduction; it allows us to estimate the acceptance probability of the verification algorithm on a state given only one copy of that state. This section mostly follows Section 4.1 of [CMSZ22].
For two binary-outcome projective measurements P = (P_1, P_0) and Q = (Q_1, Q_0), the alternating projection algorithm applies the measurements P and Q alternately (P_b and Q_b correspond to outcome b) until a stopping condition is met. The following lemma helps us analyze the distribution of the outcomes by decomposing the Hilbert space into several small subspaces.
Lemma 3 (Jordan’s Lemma).
For any two projectors P_1, Q_1, there exists an orthogonal decomposition of the Hilbert space into one-dimensional and two-dimensional subspaces {S_j}_j that are invariant under both P_1 and Q_1. Moreover, if S_j is a one-dimensional subspace, then P_1 and Q_1 each act as the identity or as rank-zero projectors inside S_j. If S_j is a two-dimensional subspace, then P_1 and Q_1 are rank-one projectors inside S_j. To be more specific, there are two unit vectors |v_j⟩ and |w_j⟩ such that inside S_j, P_1 projects onto |v_j⟩ and Q_1 projects onto |w_j⟩.
Let S denote the projective measurement {S_j}_j where (abusing notation) S_j is also the projector onto the subspace S_j defined above. Then both P and Q commute with S. Therefore the distribution of outcomes of each P and Q will not change if we insert S at any point of the alternating projections. We can thus analyze the distribution of the outcome sequence by first applying S and then applying P and Q alternately. For each two-dimensional subspace S_j, denote p_j = |⟨v_j|w_j⟩|². (This can be seen as a quantity that measures the angle between |v_j⟩ and |w_j⟩ inside S_j.)
For now, let us assume that there are only two-dimensional subspaces in the decomposition. The general case where there exist one-dimensional subspaces is essentially the same and can be handled similarly. Then P_1 = Σ_j |v_j⟩⟨v_j| and Q_1 = Σ_j |w_j⟩⟨w_j|. (Footnote: generally, for each one-dimensional subspace on which P_1 acts as the identity, we can set |v_j⟩ to be the vector that spans it; then P_1 = Σ_{j∈J_P} |v_j⟩⟨v_j| where J_P is the set of indices j such that P_1 is not a rank-zero projector inside S_j. Similarly Q_1 = Σ_{j∈J_Q} |w_j⟩⟨w_j| where J_Q and the |w_j⟩ are defined in a similar way.)
Proposition 1.
If initially the state is |v_j⟩ for some j (footnote: the same holds for one-dimensional subspaces generally, where we define p_j = 1 if P_1 and Q_1 both act as the identity in subspace S_j, and p_j = 0 if P_1 acts as the identity while Q_1 acts as the zero projector in S_j) and we apply P and Q alternately for T times, obtaining an outcome sequence o_1, o_2, ..., o_{2T}, then the outcome sequence will follow the distribution below:
1.
Set o_1 = 1 (because applying P to |v_j⟩ will give outcome 1).
2.
For each subsequent outcome, set o_{i+1} = o_i with probability p_j, and o_{i+1} = 1 − o_i otherwise.
Moreover, whenever we measure P and get outcome 1, we will go back to the state |v_j⟩.
Then the fraction of bit-flips in the outcome sequence will be a good estimate of 1 − p_j if we start from |v_j⟩.
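Proposition 1 reduces the outcome statistics inside one Jordan block to a two-state Markov chain, so the estimator can be simulated classically. The following is a hypothetical sketch (all names are our own); `p` stands for p_j = |⟨v_j|w_j⟩|² within a single two-dimensional subspace.

```python
import random

def alternating_outcomes(p, T, rng):
    # Proposition 1: starting from |v_j>, the first P-measurement gives 1, and
    # each later outcome repeats the previous one with probability p_j.
    outcomes = [1]
    for _ in range(2 * T - 1):
        keep = rng.random() < p
        outcomes.append(outcomes[-1] if keep else 1 - outcomes[-1])
    return outcomes

def estimate_p(p, T=20000, seed=7):
    # One minus the fraction of bit-flips estimates p_j.
    rng = random.Random(seed)
    o = alternating_outcomes(p, T, rng)
    flips = sum(o[i] != o[i + 1] for i in range(len(o) - 1))
    return 1 - flips / (len(o) - 1)

for p in (0.9, 0.5, 0.1):
    assert abs(estimate_p(p) - p) < 0.02
print("alternating-projection estimates match")
```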
3.4 Compressed Oracle Techniques
In this section, we present some basics of the compressed oracle technique introduced by Zhandry [Zha18].
For a quantum query algorithm A interacting with a random oracle H, let us assume for simplicity that A only queries the random oracle with n-bit inputs and gets 1-bit outputs. By the deferred measurement principle, without loss of generality we can write A in the form of a sequence of unitaries U_T O U_{T−1} O ⋯ U_1 O U_0, where the U_i are A’s local unitaries and O is the unitary that takes the query of A to H and maps |x, y⟩ to |x, y ⊕ H(x)⟩, where H is the function chosen uniformly at random from all the functions with n-bit input and 1-bit output.
Then the behavior of when it is interacting with a random oracle can be analyzed in the following purified view:
•
Initialize register A to be the input for A (along with enough ancillas) and initialize register H to be a uniform superposition of the truth tables of all functions from {0,1}^n to {0,1} (to be more specific, H is initialized to 2^(−2^n/2) Σ_H |H⟩ where |H⟩ = ⊗_x |H(x)⟩ and each |H(x)⟩ consists of one qubit).
•
Apply U_T O′ U_{T−1} O′ ⋯ U_1 O′ U_0, where O′ is acting on A’s query registers and H and maps |x, y⟩ ⊗ |H⟩ to |x, y ⊕ H(x)⟩ ⊗ |H⟩.
In fact, the output (mixed) state of A (where we also take the randomness of H into account) equals the reduced density matrix on the output register of the state we obtain from the above procedure, as O′ commutes with the computational basis measurement on H. More generally, the output (mixed) state of a sequence of algorithms with access to the same random oracle can also be analyzed in the same way.
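The equivalence between averaging over a random oracle and running against a purified oracle register can be verified directly for a toy case. Below is an illustrative numpy sketch (our own construction, not from the paper) with a 1-bit input, 1-bit output oracle and a single query.

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
psi_in = np.kron(plus, [1, 0])            # query registers |+>|0>

# (1) Averaged view: pick H at random, apply O_H |x,y> -> |x, y xor H(x)>.
rho_avg = np.zeros((4, 4), dtype=complex)
for h0 in (0, 1):
    for h1 in (0, 1):
        O = np.zeros((4, 4))
        for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            hx = h0 if x == 0 else h1
            O[2 * x + (y ^ hx), 2 * x + y] = 1
        out = O @ psi_in
        rho_avg += np.outer(out, out.conj()) / 4

# (2) Purified view: the oracle register holds the truth table (h0, h1) in superposition.
state = np.kron(psi_in, np.ones(4) / 2)   # |+>|0> tensor uniform over truth tables
Op = np.zeros((16, 16))
for x in (0, 1):
    for y in (0, 1):
        for h0 in (0, 1):
            for h1 in (0, 1):
                hx = h0 if x == 0 else h1
                col = (2 * x + y) * 4 + 2 * h0 + h1
                row = (2 * x + (y ^ hx)) * 4 + 2 * h0 + h1
                Op[row, col] = 1          # O' acts on query and oracle registers
state = Op @ state
full = np.outer(state, state.conj())
# Partial trace over the 4-dimensional oracle register.
rho_pur = full.reshape(4, 4, 4, 4).trace(axis1=1, axis2=3)

assert np.allclose(rho_avg, rho_pur)
print("purified view matches averaging over the random oracle")
```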
Definition 6 (Fourier basis).
|+⟩ = (|0⟩ + |1⟩)/√2, |−⟩ = (|0⟩ − |1⟩)/√2.
One can easily check that {|+⟩, |−⟩} is a basis because it is just the result of applying the (Hermitian) Hadamard matrix to {|0⟩, |1⟩}. We call this basis the Fourier basis.
The following fact is simple and easy to check, but crucial in the compressed oracle technique. Roughly speaking, it says that if we view CNOT in the Fourier basis, its control bit and target bit swap.
Fact 1.
The operator defined by |x⟩|y⟩ ↦ |x⟩|y ⊕ x⟩ for all x, y ∈ {0, 1} is the same as the operator defined by |a_F⟩|b_F⟩ ↦ |(a ⊕ b)_F⟩|b_F⟩ for all a, b ∈ {0, 1}, where |0_F⟩ = |+⟩ and |1_F⟩ = |−⟩.
By Fact 1, when we look at the last two registers (the query’s answer register and the queried position of H) in the Fourier basis, O′ becomes an operator that XORs the answer register into the queried position of H.
Initially, every qubit of H is |+⟩, and each call of O′ only changes one position (in superposition) if we look at the last two registers in the Fourier basis. So after T calls of O′, the state can be written as a superposition of terms in which at most T positions of H are non-|+⟩.
We can record those non-|+⟩ positions into a database. To be more specific, there exists a unitary that maps those positions (perhaps along with some ancillas) to a database listing the queried positions together with their Fourier-basis values (perhaps along with some unused space), where the database contains at most T entries after T queries. That is, there exists a unitary that can compress the oracle. Furthermore, the inverse of the unitary can decompress the database back to the oracle.
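Fact 1 itself can be verified in one line: conjugating CNOT by Hadamards on both qubits swaps the roles of control and target (an illustrative numpy check, not from the paper).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)

# CNOT with first qubit as control: |x>|y> -> |x>|y xor x>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# CNOT with second qubit as control: |x>|y> -> |x xor y>|y>.
NOTC = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])

# Fact 1: in the Fourier (Hadamard) basis, control and target swap.
assert np.allclose(HH @ CNOT @ HH, NOTC)
print("Fact 1 verified")
```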
Chernoff bound
Finally, we state here a variant of the Chernoff bound that we will use.
Theorem 3 (Chernoff Bound).
Suppose X_1, ..., X_n are independent random variables taking values from {0, 1} such that each X_i = 1 with probability p_i. Let μ = E[Σ_{i∈[n]} X_i]. Then for any 0 < δ < 1,
Pr[|Σ_{i∈[n]} X_i − μ| ≥ δμ] ≤ 2e^(−δ²μ/3).
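For concreteness, the stated bound can be checked against the exact binomial tail for small parameters (a hypothetical numeric sanity check; the function names are our own).

```python
import math

def exact_tail(n, p, delta):
    # Pr[|X - mu| >= delta * mu] for X ~ Binomial(n, p), computed exactly.
    mu = n * p
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k - mu) >= delta * mu)

def chernoff_bound(n, p, delta):
    mu = n * p
    return 2 * math.exp(-delta**2 * mu / 3)

for n, p, delta in [(20, 0.5, 0.5), (50, 0.3, 0.4), (100, 0.7, 0.2)]:
    assert exact_tail(n, p, delta) <= chernoff_bound(n, p, delta)
print("Chernoff bound holds on all test cases")
```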
4 Synthesizing Witness States In Quantum Polynomial Space
In the classical setting it is easy to see that given a (classical) verifier circuit V (which may make oracle queries to PSPACE), one can find in polynomial space a witness string that is accepted by V: one can simply perform brute-force search over all strings and check whether V accepts each of them.
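The classical brute-force search just described can be sketched in a few lines (an illustrative sketch of ours; `verifier` stands in for an arbitrary predicate, possibly one that internally queries a PSPACE-complete decider).

```python
from itertools import product

def find_witness(verifier, n):
    # Enumerate all n-bit strings in polynomial space: only the current
    # candidate (n bits) is kept in memory, never the whole list of candidates.
    for bits in product("01", repeat=n):
        w = "".join(bits)
        if verifier(w):
            return w
    return None

# Toy example: accept strings encoding a solution of x + 3 == 10 over
# 4-bit integers (a stand-in for an arbitrary poly-space predicate).
verifier = lambda w: int(w, 2) + 3 == 10
assert find_witness(verifier, 4) == "0111"
print("witness:", find_witness(verifier, 4))
```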
In this section, we prove the quantum counterpart, where now the verifier circuit is quantum and can make quantum queries to the PSPACE oracle. We show that given the description of such a verifier circuit, with the help of the quantum PSPACE oracle, we can efficiently synthesize a witness state that is accepted by the verifier with probability at least the desired guarantee (provided that there exists a witness state with acceptance probability at least the threshold). Formally:
Theorem 4.
Let g (called the guarantee) and t (called the threshold) be functions such that t(x) − g(x) ≥ 1/q(|x|) for every x, where q is a polynomial. Let V denote a uniform oracle algorithm. Then there exists a uniform oracle algorithm Syn (called the synthesizer) such that for every x ∈ S,
Pr[V_x^PSPACE(σ_x) accepts] ≥ g(x), where σ_x ← Syn^PSPACE(x),
where S = {x : there exists a state ρ such that Pr[V_x^PSPACE(ρ) accepts] ≥ t(x)}.
This theorem follows directly from Theorem 2 and the following lemma.
Lemma 4.
Let g, t be functions such that t(x) − g(x) ≥ 1/q(|x|) for every x, where q is a polynomial. Let V denote a uniform oracle algorithm, and let S be the corresponding set as in Theorem 4. Then there exists a family of pure states (|ψ_x⟩)_{x∈S} in statePSPACE, where each state is bipartite on two registers (labeled A and B), such that for every x ∈ S,
Pr[V_x^PSPACE(ρ_x) accepts] ≥ g(x),
where ρ_x is the reduced density matrix of |ψ_x⟩ on register A, i.e. ρ_x = Tr_B(|ψ_x⟩⟨ψ_x|).
Proof of Theorem 4.
Let and , where are as given by the conditions in Theorem 4. Applying Lemma 4 with functions , we obtain a state sequence such that for every ,
where .
Theorem 2 implies that there exists a polynomial-time oracle algorithm that on input , outputs a pure state that is -close to . This implies that the reduced density matrix of on register , which we denote by , is also -close to (this follows from the fact that trace distance is non-increasing when you discard subsystems). Thus for every , we have
because otherwise would be able to distinguish between and with more than bias.
The synthesizer works as follows: on input it runs the oracle algorithm to obtain a pure state , and then traces out the register and returns the remaining state on the register as output. ∎
The remainder of Section 4 will be devoted to the proof of Lemma 4. We will use the techniques and results from [MW05] (also presented in Section 3.3 for completeness). In Section 4.1 we present the description of the state family along with the description of a circuit family that generates (an approximation of) the state family. In Section 4.2 we prove that the state family satisfies the requirements.
4.1 Description of the State Family and Circuit Family
In this section, we implement our ideas from Section 2.1 in a formal way. Recall that our algorithm in Section 2.1 repeatedly does the following (which we will call a trial): start from a maximally entangled state, estimate the acceptance probability coherently using the MW technique, and if the estimated acceptance probability is high, then output the remaining state. Roughly speaking, the target state we aim to generate will be the remaining state after a successful trial (a trial is successful if the estimated acceptance probability is high). Looking ahead, in order to prove Lemma 4, we only need to show two things. First, our algorithm actually outputs a good approximation of the target state, so our target states form a statePSPACE family. Second, our target state will indeed be accepted with high probability.
Now let’s start by giving a formal description of the state family.
The state family
Let V be the uniform oracle algorithm given in the condition of Lemma 4. From Lemma 2, there exists a space-uniform polynomial-space algorithm U such that each U_n is a unitary and the functionality of V_n is exactly the same as introducing k(n) new ancilla qubits in the zero state, applying the unitary U_n, and then measuring the first qubit in the computational basis, where k is a polynomial. Let m(n) be the number of qubits that V_n takes as input, which is also a polynomial.
Fix x ∈ S. We sometimes omit the subscript x when it is clear from the context. For convenience, we write V, U, m, k for V_{|x|}, U_{|x|}, m(|x|), k(|x|) respectively from now on.
Let A denote the register containing the m input qubits. Let W denote the register containing the k ancilla qubits. Let O denote the first qubit (i.e. the one that will be measured in the computational basis to decide whether V accepts or rejects, where outcome 1 means accept and outcome 0 means reject). Let B denote a register containing another m fresh qubits.
Here we define two binary-outcome projective measurements on registers A and W. Define P_1 = id_A ⊗ |0^k⟩⟨0^k|_W and P = (P_1, id − P_1). Intuitively, P_1 corresponds to the “valid input subspace” (i.e., the ancilla qubits are initialized properly). Define Q_1 = U†(|1⟩⟨1|_O ⊗ id)U and Q = (Q_1, id − Q_1). Intuitively, Q_1 corresponds to the states that will be accepted if we apply U and then measure O in the computational basis. So Q checks whether V will accept, as long as register W is initialized properly. The following simple observation is implicitly shown in [MW05].
Observation 1.
The maximum acceptance probability of V is exactly the largest eigenvalue of P_1 Q_1 P_1.
Proof.
First, we show that the maximum eigenvalue of is upper bounded by the maximum acceptance probability of .
For any pure state on register , let . Then
Second, we show that the maximum eigenvalue of is lower bounded by the maximum acceptance probability of . By a simple convexity argument, we can assume without loss of generality, the acceptance probability of achieves its maximum on pure state . Let . Then
Therefore, this observation holds true.
∎
We first define a subroutine where is polynomial in (recall that where is a polynomial).
Define (i.e., all registers except ).
Definition 7 (State family ).
Let be the set defined in Lemma 4. When , let denote the state in register after a successful implementation of (i.e., the outcome of is ). When are clear from the context, we also write it as .
Observe that in the subroutine, we initialize a pure state (lines 1-4), then apply a unitary to it (lines 5-9), as all measurements are conducted coherently, and finally perform a projective measurement (line 10). So the definition above indeed gives us a family of states such that each state is a pure state on polynomially many qubits.
The circuit family
Now let’s construct a circuit family (or algorithm) that can generate efficiently an approximation of the state family . For any polynomial and approximation factor , the circuit operates as follows where is exponential in .
4.2 Proof of Lemma 4
In this section, we prove that the pure state family satisfies the requirements in Lemma 4 by applying the known results presented in Section 3.3.
Fix . We associate with , with , with and with , and adopt the notations in Section 3.3. Then . From and 1, we can assume . 666Generally . We can assume .
To begin with, let us prove that the state family satisfies the first requirement in Lemma 4. That is, it is a statePSPACE family, which can be approximately generated by the circuit family. Notice that the circuit outputs whenever one of the exponentially many trials succeeds. So we first analyze the success probability of one trial.
Lemma 5.
Proof.
The success probability of a trial doesn’t change if we measure each qubit of the outcome register once the outcome is stored in it, because computational basis measurements on it commute with the operations in lines 9 and 10 of the subroutine. Thus we can think of it as measuring directly with P and Q alternately, getting a classical outcome sequence, and succeeding if the first outcome is 1 and the number of times that the outcome stays unchanged is at least the prescribed threshold, which is an alternating projection algorithm. Now let us analyze the probability of success.
An important observation is that the initial state can also be written in terms of the |v_j⟩. The |v_j⟩ form a basis for the image of P_1, and a suitable truncation of them forms a basis for the relevant subspace. Thus by Lemma 1, the state
Consequently, for each j, we will be in subspace S_j with probability equal to the corresponding squared weight if we apply S. (In the general case, the summation is only over the indices with nonzero components.)
Notice that we can apply S before the alternating projections without changing the distribution of the outcomes. Applying S to the above state, the post-measurement state will be |v_j⟩ with probability equal to its squared weight. And by Proposition 1, when we start from |v_j⟩, each subsequent outcome independently repeats the previous one with probability p_j.
In particular, we will start from a given |v_j⟩ with its corresponding probability. And when we start from |v_j⟩, the outcome repeats with probability p_j at each step independently. This can be seen as performing independent coin flips with bias 1 − p_j. And the trial succeeds if the number of heads is an even number greater than or equal to the prescribed threshold.
By Chernoff bound,
Thus by union bound, when the post-measurement state after is , with probability at least .
So
∎
Claim 1.
The family (|ψ_x⟩)_{x∈S} is a statePSPACE family.
Proof.
We only need to show that our construction satisfies the requirements in Definition 1.
From the construction, the circuit family can be generated by a polynomial-space Turing machine and uses at most polynomial space at any time. Thus it is a space-uniform polynomial-space quantum algorithm. It is obvious from the construction that it takes no inputs. The only remaining thing is to prove that it outputs a good approximation of |ψ_x⟩ when x ∈ S.
Fix . Whenever there is a successful implementation of , will output . Moreover, by Lemma 5, recall that ,
That is, except with probability , outputs .
As a result, the state outputted by is -close to in trace distance. ∎
The second requirement in Lemma 4 that the family needs to satisfy is that the reduced density matrix of |ψ_x⟩ on register A will be accepted by V with high probability. This is intuitively correct because the real acceptance probability should not be too far from the estimated acceptance probability. Now let us prove it formally.
Claim 2.
For every x ∈ S, Pr[V accepts ρ_x] ≥ g(x), where ρ_x is the reduced density matrix of |ψ_x⟩ on register A.
Proof.
Fix . We will omit subscripts when it is clear from the context.
Similar to what we did in Lemma 5, this probability doesn’t change if, in the generation of |ψ_x⟩ (i.e. the subroutine), we measure directly instead (as we only care about the part in register A). Let the alternative state be the one we obtain from a successful implementation of the subroutine if we measure directly instead. Then the reduced density matrices of the two states are the same on register A.
Notice that is just applying to and then measuring the first qubit in computational basis, or equivalently, it is just measuring with . By the definition, is on register (because the outcome should be 1). So the reduced density matrix of on register is exactly .
Consider the following alternating projection algorithm:
We start from , apply to the state alternatively for times to obtain a classical outcome sequence and if meets the requirement (to be more accurate, where is defined in Lemma 5), we will additionally apply to get an outcome and accept if .
In the above algorithm, if , the (mixed) state remaining is exactly , whose reduced density matrix on is . Recall that is just measuring with . Therefore,
From Section 3.3, this probability will not change if we insert S in front of the alternating projections. So we can calculate it by projecting the initial state onto one of the subspaces S_j, getting the post-measurement state, and then sampling the outcomes as if we start from this state (here we also use the fact that the initial state can be written in terms of the |v_j⟩). Let E_j denote the event that we get the post-measurement state |v_j⟩; its probability equals the squared weight of |v_j⟩ in the initial state. (Generally, E_j is defined only for indices with nonzero components, and thus all the summations below will be only over such indices.) Therefore,
As in Lemma 5, when we start from |v_j⟩, the probability of success is the same as the probability that, during independent coin flips with bias 1 − p_j, the number of heads is an even number greater than or equal to the prescribed threshold.
So if , by Chernoff bound,
From Lemma 5, . As a result,
where we use the fact that there are only s, because has rank . (For the general case, we only sum over , and equals the rank of , which is .)
The above inequality can be rearranged into
Therefore, , which completes the proof of this claim. ∎
Lemma 4 follows directly from the above two claims.
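The coin-flip reduction above turns the analysis into bounding a binomial tail with the Chernoff bound. As a sanity check, the following sketch compares the exact binomial lower tail against the Chernoff bound; the bias, number of flips, and threshold are illustrative assumptions, not the parameters from the proof.

```python
from math import comb, exp

def tail_below(n: int, p: float, k: int) -> float:
    """Exact probability that Binomial(n, p) is at most k."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k + 1))

def chernoff_lower_tail(n: int, p: float, delta: float) -> float:
    """Chernoff bound: Pr[X <= (1 - delta) * n * p] <= exp(-delta^2 * n * p / 2)."""
    return exp(-delta * delta * n * p / 2)

n, p, delta = 400, 0.5, 0.3
k = int((1 - delta) * n * p)        # threshold (1 - delta) * n * p
exact = tail_below(n, p, k)          # exact lower-tail probability
bound = chernoff_lower_tail(n, p, delta)
assert exact <= bound                # the exact tail is dominated by the bound
```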
5 Insecurity of Oracle-Aided Public-Key Quantum Money
In this section, we use the synthesizer from Section 4 as a building block to attack oracle-aided public-key quantum money schemes where is a hybrid oracle composed of a random oracle and . Formally:
Theorem 5.
A reusable and secure oracle-aided public-key quantum money scheme does not exist when is a random oracle.
Informally speaking, our synthesizer in Theorem 4 works for uniform oracle algorithms . However, in the oracle-aided public-key quantum money scheme we aim to attack, the verification algorithm has access to the random oracle in addition to . Inspired by [CKP15, AK22], we remove and simulate it with a good database. Based on the ideas in Section 2.2, we give the following attacker.
Let be the hybrid oracle composed of the random oracle and . For a -reusable, -secure oracle-aided quantum money scheme with , denote by the number of queries to made by one execution of and one execution of . By the efficiency of , there exists a uniform oracle algorithm such that running is the same as running , where is simulated with the database .
Let , , . By Theorem 4, there exists a polynomial-time uniform oracle algorithm that can generate an “almost optimal” witness state of with guarantee and threshold . Now let’s construct the adversary .
Adversary .
It takes as input a valid banknote and a public key , and behaves as follows.
1. Let . Let , . Run the following times. In iteration :
   (a) .
   (b) Add the query-answer pairs to made in item (a) into .
2. Denote .
3. For , where is polynomial in :
   (a) .
   (b) Run .
   (c) Let consist of all the query-answer pairs to made in item (b) and the pairs in .
4. .
5. Output , where .
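The construction above can be summarized as a purely classical skeleton. The following sketch is illustrative only: `ver` and `synthesize` are hypothetical stand-ins for the verification and synthesizer algorithms (the real ones act on quantum states), and the loop structure mirrors the test, update, and synthesize stages.

```python
import random

# Hypothetical stand-ins (assumptions, not the paper's actual algorithms):
# `ver` verifies a note and reports the oracle query-answer pairs it made;
# `synthesize` produces a candidate witness state from a database.
def ver(note, db):
    queried = {random.randrange(8) for _ in range(2)}
    return True, {pos: hash(pos) % 2 for pos in queried}

def synthesize(db):
    return ("synthesized-note", dict(db))

def adversary(note, T=16, k=4):
    # Test phase: run verification repeatedly, collecting query-answer pairs.
    db = {}
    for _ in range(T):
        _, pairs = ver(note, db)
        db.update(pairs)
    # Update phase: alternately synthesize from the database and re-verify,
    # folding any newly observed query-answer pairs back into the database.
    for _ in range(k):
        candidate, _ = synthesize(db)
        _, pairs = ver(candidate, db)
        db.update(pairs)
    # Synthesize phase: output two independently synthesized notes.
    return synthesize(db), synthesize(db)

note1, note2 = adversary("genuine-note")
assert note1[0] == note2[0] == "synthesized-note"
```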
Analysis of
Now let’s prove that outputs what we want. We will use the notation defined in the construction of .
Theorem 6.
Given input generated by and , outputs two alleged banknotes associated with the serial number that are accepted with high probability. Formally:
where the probability is over the randomness of , the randomness of the generation of the input for (that is, the randomness of and ) and the randomness of our adversary .
Proof of Theorem 6.
The proof is divided into two parts. Informally speaking, in the first part we show that for every , works well on the simulation, i.e., accepts with high probability. In the second part, we show that for every , if behaves far from on input , then we make progress. We then combine the two results to prove Theorem 6.
The first part
The synthesizer in Theorem 4 works well provided that a good witness state for exists. Our candidate for the good witness state is , as it is accepted by with high probability by the definition of reusability. We begin by arguing that, with high probability, our databases contain the information necessary for running verification on , so that cannot distinguish whether it is interacting with the random oracle or with the simulated one. Formally:
Claim 3.
Let be the query-answer pairs made during the generation of the input and (that is, the execution of and ). Then
where the probability is over the randomness of , the randomness of the generation of the input for and the randomness of our adversary .
Proof.
We only care about the query positions (those inside ), and we sample repeatedly, times. Thus, intuitively, should reveal all the positions we care about. Formally,
where the probabilities are only over the randomness of our adversary, and we use the fact that is picked uniformly at random from , so it matches (which may follow some distribution, but is independent of in any case) with probability at most .
Taking the randomness of and the randomness of the generation of the input for into account yields the claim.
∎
The random oracle can be implemented by on-the-fly simulation. Thus can be implemented by simulating with the database . If makes no queries in , then it cannot distinguish whether is simulated with or with . That is, is a good database for simulating the verification process on input as long as makes no queries inside . Thus the acceptance probability of should be close to that of , which is high by the definition of reusability. On average, the performance of the simulation on input can only improve if we include more queries in the database. Thus for every , should be a good witness state for . This intuition is captured by the following claim.
Claim 4.
We use the same definitions of and as in Claim 3.
where the probability is over the randomness of , the randomness of the generation of the input for and the randomness of our adversary .
Proof.
This claim follows from Definition 4 and Claim 3. The following probabilities are over the same randomness as the probability in the claim above.
where we use the fact that and the fact that we can use on-the-fly simulation to implement . As a result, can also be seen as simulating with , which differs from only on . Thus cannot distinguish whether is simulated with or with if it never queries inside . The above inequality can be rearranged as
∎
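The on-the-fly simulation invoked above can be sketched classically: answers are sampled lazily and stored in a database so that repeated queries stay consistent. This is a minimal illustrative sketch, not the paper's formal simulator.

```python
import random

class LazyRandomOracle:
    """On-the-fly simulation of a random oracle with one-bit outputs:
    an answer is sampled uniformly the first time a position is queried
    and then stored in a database so later queries answer consistently."""
    def __init__(self, rng=None):
        self.db = {}                       # database of query-answer pairs
        self.rng = rng or random.Random(0)

    def query(self, x):
        if x not in self.db:
            self.db[x] = self.rng.randrange(2)  # fresh uniform answer bit
        return self.db[x]

g = LazyRandomOracle()
a = g.query("pos-7")
assert g.query("pos-7") == a          # consistent on repeated queries
assert set(g.db) == {"pos-7"}         # database records exactly what was asked
```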
Intuitively, by Claim 4, for a large fraction of , a good witness state exists. Therefore, our synthesizer can find an “almost optimal” one. Formally:
Claim 5.
For every
where the probability is over the randomness of , the randomness of the generation of the input for and the randomness of our adversary .
Proof.
The following probabilities are over the same randomness as the probability in the above claim unless otherwise stated.
Define , where the probability is only over the randomness of . Then by Claim 4 and an averaging argument,
The second part
We already know that is accepted by with high probability. The next step is to relate the acceptance probability of to that of on . If the difference between these two terms is large, the simulation with is not good enough; that is, asks some important queries outside . So in this case will contain more important queries, and we make progress. Formally:
Claim 6.
We use the same notation as above. For every
where the probabilities and the expectations are over the randomness of , the randomness of the generation of the input for and the randomness of our adversary .
Proof.
The following probabilities and expectations are over the same randomness of those in the above claim unless otherwise stated.
Similarly to the arguments in Claim 4, and behave differently only when they make queries in . Therefore,
Note that ,
Therefore, the claim holds true. ∎
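The progress argument in this part can be illustrated with a toy classical sketch: each disagreement between the real and simulated verification reveals at least one missing important position, so the number of disagreements is bounded by the number of important positions. The set sizes and names below are assumptions for illustration.

```python
# Progress-measure sketch (illustrative, with assumed parameters): the real
# verifier depends on a hidden set of at most q important oracle positions.
# Whenever the simulation errs, some important position is missing from the
# database D; that position is then learned, so errors occur at most q times.
q = 5
important = {f"pos-{i}" for i in range(q)}  # hidden important queries of Ver
D = set()                                    # adversary's database
mistakes = 0
for _ in range(100):                         # many simulated verifications
    missing = important - D
    if missing:              # simulation can err only if something is missing
        mistakes += 1
        D.add(next(iter(missing)))  # ... and then we learn a missing position
assert mistakes <= q
assert important <= D
```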
Now let’s combine the above results to prove Theorem 6. The probabilities and expectations below are over the randomness of , the randomness of the generation of the input for and the randomness of our adversary (thus over the randomness of and ) unless otherwise stated.
Proof of Theorem 5.
The proposed adversary is a valid attack because when ,
which is non-negligible. ∎
6 Extensions to Quantum Access
In this section, we explore a special case in which some algorithms can have quantum access to the random oracle. We consider a reusable, secure oracle-aided public-key quantum money scheme . Formally:
Theorem 7.
A reusable and secure oracle-aided public-key quantum money scheme does not exist when is a random oracle.
Without loss of generality, we can suppose that the algorithms only make queries to the random oracle on input length and receive a -bit output, where is a polynomial. (If they make queries to on various input lengths, let the maximal input length be and set . We can modify the algorithms so that a query on input length is instead made on input length , where the first bits store the true query position, the middle bits are 0, and the last bits indicate .)
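The length-normalization trick in the parenthetical can be sketched as follows; the exact field widths are not pinned down by the text, so the layout below (true query bits, zero padding, then a fixed-width binary length field) is an assumption for illustration.

```python
def pad_query(x: str, n_max: int) -> str:
    """Re-encode a query x (a bit-string of length <= n_max) as a query of
    one fixed length: the true query bits first, then zero padding, then a
    fixed-width binary field indicating the original length.  The field
    widths here are an illustrative assumption."""
    assert set(x) <= {"0", "1"} and len(x) <= n_max
    len_field = max(1, n_max.bit_length())   # enough bits to encode len(x)
    padding = "0" * (n_max - len(x))
    return x + padding + format(len(x), f"0{len_field}b")

# All padded queries share one length, and distinct (x, len) pairs stay distinct.
p1, p2 = pad_query("101", 8), pad_query("1010", 8)
assert len(p1) == len(p2)
assert p1 != p2
assert pad_query("10", 8) != pad_query("100", 8)  # length field disambiguates
```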
Let make classical queries to , and let and make quantum queries to in total. Denote the reusability and the security of the scheme by and , respectively, where . When it is clear from the context, we sometimes omit for simplicity.
It is worth noting that the attacker in Section 5 does not exploit the fact that and there can only make classical queries to . In fact, the same attacker works even when and can make quantum queries to (with some modifications to the number of iterations). To be more specific, here is our construction of the attacker , where , , and the guarantee and threshold of will be determined later.
It takes as input a valid banknote and a public key , and behaves as follows.
1. Test phase: Let . Let , . Run the following times. In iteration :
   (a) .
   (b) Add the query-answer pairs to made in item (a) into .
2. Update phase: Let . Let . Run the following times. In iteration :
   (a) .
   (b) Run .
   (c) Let consist of all the query-answer pairs to made in item (b) and the pairs in .
3. Synthesize phase: Output , where .
This description of is equivalent to our adversary in Section 5; we move the line to the front because it makes the analysis easier.
What is left is to prove an analogue of Theorem 6. That is, the output states of will be accepted with high probability.
Theorem 8.
Given input generated by and , outputs two alleged banknotes associated with the serial number that will be accepted with high probability. Formally:
where the probability is over the randomness of , the randomness of the generation of the input for (that is, the randomness of and ) and the randomness of our adversary .
Similarly to Theorem 6, we will show that the verification on accepts with high probability and then prove the theorem by a union bound.
In Section 5, we crucially rely on the fact that whenever we make a mistake, we make progress, in the sense that we recover a query inside . However, can now make quantum queries, so and could “touch” exponentially many positions. Fortunately, the compressed oracle technique introduced by Zhandry [Zha18] can be seen as a quantum analog of recording queries into a database. Basically, if we run all the algorithms in the purified view and look at the register containing the oracle (labeled ) in the Fourier basis, then after polynomially many quantum queries all but polynomially many positions are , and thus the register can be compressed using a unitary. In this work, in order to better mimic and from Section 5, we take advantage of the fact that only makes classical queries. To be more specific, we maintain a register to store a database for all the classical queries and only record the non- positions outside the database into . These two registers will be our analogs of and . We elaborate on this idea in Section 6.2.
6.1 A Purified View of the Algorithms
From Section 3.4, for any sequence of algorithms that only make queries to the random oracle on input length and receive a 1-bit output, we can analyze the output using the pure state obtained by running all the algorithms in the purified view instead. By purified view, we mean that we purify the execution of the algorithms in the following way:
• We introduce a register that contains the truth table of the oracle. Before the execution of the first algorithm, it is initialized to a uniform superposition of all possible truth tables of the oracle, i.e. .
• Instead of a quantum query to , we apply a unitary , where stores the query position and is for the answer bit. (The subscript in stands for Quantum queries.)
• Instead of a computational basis measurement, we apply to copy the value to a fresh ancilla.
Without loss of generality, we can suppose that for any classical query to , the register for the query answer is always set to before the query. Notice that a classical query to is equivalent to a computational basis measurement on the query position followed by a quantum query to . An extra computational basis measurement on the answer of the query does not change the view. So a classical query in the purified view can be treated as applying the unitary
where is a register that we use to purify the computational basis measurements in the classical queries. By , we mean a sequence of pairs , where the are not necessarily distinct, but if , then we have the guarantee that . Here has enough space; that is, by , we actually mean , where is a special symbol representing an empty slot. Although this is not standard, we sometimes call a database. (The subscript in stands for Classical queries.)
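The consistency guarantee on the recorded pairs (repeated positions must carry the same answer) can be phrased as a small check; this is an illustrative classical sketch.

```python
def is_consistent(pairs):
    """Check the database guarantee: query positions may repeat, but
    whenever x_i == x_j the recorded answers must agree (y_i == y_j)."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

assert is_consistent([("x1", 0), ("x2", 1), ("x1", 0)])   # repeats allowed
assert not is_consistent([("x1", 0), ("x1", 1)])          # conflicting answers
```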
Another convenient way to think of the purified view is to treat the execution of the algorithms as an interaction between two parties, the algorithm and the oracle. The oracle maintains two private registers and (along with some ancillas initialized to ). If the algorithm is allowed to make quantum queries to , it submits to the oracle; the oracle applies and sends back to the algorithm. If the algorithm is only allowed to make classical queries, it submits to the oracle; the oracle places a fresh ancilla on , applies , and sends to the algorithm.
We use and to denote the unitaries corresponding to the purified versions of , , and on security parameter , respectively. Then and all have the form of preparing the first query and then repeatedly answering the query by applying or and preparing the next query (or the final output if there is no further query). In particular, we write as . We omit the subscript when it is clear from the context.
Let , where (the subscript stands for Recording) is a unitary that, in addition to answering a classical query , records the query-answer pair into a database maintained by . That is,
where is the register that stores the database maintained by . Again, by and , we mean sequences of query-answer pairs in which the query positions are not necessarily distinct but the pairs are consistent. has enough space.
It’s easy to see that corresponds to running while the adversary records the query-answer pairs made by .
Then in the purified view, is the following (we will denote by ):
1. Given as input the public key, the serial number and the alleged banknote, along with the register containing the truth table of the oracle, introduce a register initialized to and a register initialized to .
2. Test phase: Conditioned on the content of being , apply to the banknote times sequentially. (Equivalently, apply the unitary , where means applying times.)
3. Update phase: Conditioned on the content of being , apply the following times:
   (a) Apply to all the query-answer pairs learned so far (i.e. the contents of ).
   (b) Apply to the state synthesized in item (a).
   (Equivalently, apply the unitary , where means alternately applying and for times.)
4. Synthesize phase: Apply and to the query-answer pairs in to obtain two alleged banknotes, where and are copies of acting on different registers.
Then the acceptance probabilities of the following algorithms on the corresponding states (taking the randomness of into account) can be analyzed by running the following sequences of algorithms in the purified view.
on

The acceptance probability is the probability that

1. Run the algorithms and in the purified view sequentially, where the input of is the security parameter in unary notation, the input of is the output register corresponding to the secret key of , and the input of is the output register of and the register for the public key.
2. Further run in the purified view, where the input of is the register for the public key, the register for the serial number and the register for (it is inside the working-space register of ).
3. Measure the outcome register of the above and obtain .
on
Here, means running with the queries to answered by the database (i.e., all the query-answer pairs in ).
Define a unitary
where by , we mean a sequence of query-answer pairs in which the query positions are not necessarily distinct but the pairs are consistent. By , we mean that there exists such that is a pair in , and we denote this by . By , we mean that for all , is not a pair in . (The subscript in stands for simulating with a Database.)
Then applying is exactly answering the query using : if is in the database, answer the query using ; otherwise, give a random answer while recording the query-answer pair into the database for later use. Thus the purified version of is
So the acceptance probability of on is the probability that

1. Run the first step of the case “ on ”.
2. Further run in the purified view, where the input of is the register for the public key, the register for the serial number and the register for (the input is the same as in the case “ on ”).
3. Measure the outcome register of the above and obtain .
on

The acceptance probability is the probability that

1. Run the first step of the case “ on ”.
2. Further run in the purified view, where the input of is the register for the public key, the register for the serial number and the register for (it is the first register of the output state of ).
3. Measure the outcome register of the above and obtain .
on

The acceptance probability is the probability that

1. Run the first step of the case “ on ”.
2. Further run in the purified view, where the input of is the register for the public key, the register for the serial number and the register for (the input is the same as in the case “ on ”).
3. Measure the outcome register of the above and obtain .
6.2 Compress and Decompress
Intuitively, position of is a uniform superposition over the range and is unentangled with everything else, so it can be seen as choosing a value from the range uniformly at random and independently, which is exactly what the simulation does. It is an analog of the positions that are never asked during the sequence of algorithms in the purely classical-query case.
In this subsection, we show how to extract analogs of and from the pure state. Roughly speaking, the recorded classical queries are an analog of , and we compress the register to extract the non- positions outside , which form our analog of .
As it is easier to write down and analyze the inverse of the compression operation, we first give a formal description of the decompression unitary . Recall that stores all the classical queries to and that is a register for the random function. Define
where can be written as a sequence of pairs , can be written as a sequence of pairs and the input satisfies
• if , then ;
• ;
• ;

and the output satisfies

• if , then ;
• if , then ;
• if , then .
Roughly speaking, we fill by looking at the pairs and . And we fill all the remaining positions with .
Here our register also has enough space. As our random function has one-bit outputs, . Recall that . One can check that any two inputs of the above form are orthogonal and are mapped to orthogonal outputs. So we can define the outputs for the remaining inputs, which are not of the above form, so that is a unitary.
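A classical analog of the decompression map can be sketched as follows: fill the truth table from the classical-query database first, then from the compressed database, and mark every remaining position with a default symbol standing in for the untouched uniform-superposition position. The names and the default symbol are assumptions for illustration.

```python
def decompress(d_c, d, domain):
    """Classical analog of the decompression map: rebuild a (partial)
    truth table from the classical-query database d_c and the compressed
    database d.  Positions in d_c take their recorded answers, positions
    in d take theirs, and every remaining position gets a default symbol
    (standing in for the uniform-superposition position of the quantum
    register).  The two databases are assumed disjoint, mirroring the
    invariant that D restricted to D_C's positions is empty."""
    assert not (set(d_c) & set(d))   # invariant: the databases do not overlap
    table = {}
    for x in domain:
        if x in d_c:
            table[x] = d_c[x]
        elif x in d:
            table[x] = d[x]
        else:
            table[x] = "uniform"     # untouched position
    return table

t = decompress({"00": 1}, {"11": 0}, ["00", "01", "10", "11"])
assert t == {"00": 1, "01": "uniform", "10": "uniform", "11": 0}
```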
More Notations
For simplicity, when we write , we mean a sequence of consistent pairs by default. By , we mean , . By where , we mean where . When we write , we mean a sequence that satisfies the second item of the input requirements above. By , we mean , . By where , we mean where . By where , we mean where . By , we mean the sequence we obtain after deleting from the sequence where . Define .
The inverse of the above unitary is the compression operation, which takes our database and the truth table in register as inputs and compresses them into two databases and . Define it as . These two unitaries allow us to switch between the decompressed view (a database for classical queries and a truth table) and the compressed view (a database for classical queries and another database). Here is a picture illustrating this: is an arbitrary state without compression; is a unitary in the decompressed view (it takes a state without compression as input and outputs a state without compression); then is the compressed-view version of (it takes a compressed state as input and outputs a compressed state). From now on, when we write a unitary , we mean that it is in the compressed view.
Readers can treat as an analog of the database from Section 5 and as an analog of the databases from Section 5. Roughly speaking, stores our classical queries, while the query positions in are those asked by and but not recorded in . This analogy becomes clearer after a look at the following query unitaries in the compressed view.
Recall the unitaries , and from Section 6.1. They implement a classical query, a classical query with recording, and a simulated classical query, respectively. Their compressed versions are the unitaries , and .
From the description of , we can get for
Since does not act on and , . Thus
By the description of and : whenever we ask a classical query on an input in , we answer it with our database and record the pair once more; whenever we ask a classical query on an input in , we answer it with a random and record in our database for later use; and whenever we ask a classical query on an input in , we copy the answer from , record it, and remove from . These three cases are analogous to the classical on-the-fly simulation, where a query inside can be answered by , a query inside must be answered consistently by , and a query outside both can be given a random answer. It is worth pointing out that and maintain the property that is empty (analogous to ).
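The three cases above have a direct classical analog, sketched below; the function and variable names are assumptions for illustration, and the invariant that the two databases stay disjoint mirrors the property that their intersection is empty.

```python
import random

def classical_query(x, d_c, d, rng=random.Random(0)):
    """The three cases of a recorded classical query in the compressed view
    (classical analog): answer from d_c if x was already asked classically;
    otherwise move x's answer from the compressed database d into d_c;
    otherwise sample a fresh random answer and record it in d_c."""
    if x in d_c:
        y = d_c[x]              # case 1: re-ask a recorded classical query
    elif x in d:
        y = d.pop(x)            # case 2: copy from D, record, remove from D
        d_c[x] = y
    else:
        y = rng.randrange(2)    # case 3: fresh position, random answer
        d_c[x] = y
    assert not (set(d_c) & set(d))  # invariant: the databases stay disjoint
    return y

d_c, d = {"00": 1}, {"11": 0}
assert classical_query("00", d_c, d) == 1   # case 1
assert classical_query("11", d_c, d) == 0   # case 2
assert "11" in d_c and "11" not in d        # moved out of the compressed db
classical_query("01", d_c, d)               # case 3: now recorded in d_c
assert "01" in d_c
```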
Furthermore, recall that and do not act on or . Thus . Similarly, recall that the purified version of is . Thus (because ).
6.3 Analysis of
Here we analyze the acceptance probability of on the output of our , reusing our ideas from Section 5. Readers are encouraged to consult the notation table in Appendix C when confused about the notation.
The following proposition can be seen as an analog of Claim 6, except that we work with a general state instead of solely analyzing . It basically says that when the behavior of (corresponding to ) is far from that of the simulation (corresponding to ), the number of pairs in (analogous to ) drops significantly after the verification. The intuition is that, roughly speaking, and only behave differently when given a query position in , in which case is excluded from after applying ; this results in a decrease in the number of pairs in . Formally:
Proposition 2.
Denote to be the observable corresponding to the number of pairs in . To be more specific, where is the number of pairs in .
For a state in the following form (i.e. it’s in the compressed view and the contents in and are the same),
Let be the acceptance probability of when the public key, the serial number and the alleged money state are in and , respectively (it also equals the acceptance probability of on , because and do not act on the output bit).
Let be the acceptance probability of when the public key, the serial number, the alleged money state and the database for simulating the random oracle are in and , respectively (it also equals the acceptance probability of on ).
Proof.
The first inequality follows immediately from the fact that we can measure a qubit (not in ) of and to determine whether they accept, together with the fact that two states cannot be distinguished with probability greater than their trace distance.
As for the second inequality, recall that and . Let be the same as except that it uses the contents in for simulation instead of the contents in . To be more specific,
Define , where . To analyze the difference between and on , it suffices to analyze the difference between one true query and one simulated query. Formally,
where we use the fact that and have the same contents on and , and thus equals .
and act differently only when the query position is inside . So the difference between one true query and one simulated query can be bounded by the query weight inside , which equals the decrease in the number of pairs in after the query. Formally, we give the following lemma, whose proof is deferred to Appendix A.
Lemma 6.
We can insert Lemma 6 into the above inequality and obtain
where we again use the fact that does not act on and thus commutes with .
∎
The next claim is an analog of Claim 4. Basically, it argues that, on average, is a good witness state for the simulation with database even when and can make quantum queries to .
The intuition of the proof is the following. From Section 6.1, the difference between the acceptance probabilities of and on is the difference between running and in the purified view on the same state (i.e., the difference between applying and to the same state), which can be transferred to the compressed view and, by Proposition 2, bounded by the decrease in the number of pairs in after the verification. Roughly speaking, this decrease equals the number of pairs in asked during the verification. But since we randomize , running another verification on should not decrease the number of pairs in by much on average. Formally:
Claim 7.
where the probabilities are taken over the randomness of , the randomness of the inputs to the adversary and the randomness of the adversary (so and are both randomized).
Proof.
From Section 6.1, these two probabilities can be analyzed by running a sequence of algorithms in the purified view. As , we can insert and between the algorithms, so these two probabilities can also be analyzed by running the sequence of algorithms in the compressed view.
Let be the whole pure state we obtain by applying the unitaries , and to the state along with enough ancillas (i.e. the compressed version of the first step of “ on ” in Section 6.1). Let be the register corresponding to .
It is easy to see that every classical query is recorded by the adversary; that is, has the same contents in and . From Proposition 2,
Denote by the unitary that describes our update phase in the compressed view. Formally, , where runs on the state synthesized by .
Let be the whole pure state obtained by applying the unitaries , and to the state along with enough ancillas (i.e. running the compressed version of the first step of “ on ” until the end of the test phase of ). That is, , where synthesizes the alleged banknote. Notice that only reads the public key and the serial number, and acts on and some fresh ancillas. So commutes with and . (Recall that is the variant that does not record queries for ; that is, it does not act on .) Thus
The next step is to bound the decrease in the number of pairs in during the verification on . However, the update phase between and the randomized number of verifications in the test phase may cause trouble. So we give the following lemma, which is the counterpart of the classical fact that the number of queries in asked during a verification is at most the number of queries in asked during a verification.
Lemma 7.
We use the same notation as above. Then
The proof of the lemma is deferred to Appendix B.
Now let’s bound the decrease in the number of pairs in during the verification. From Lemma 7,
where we use the fact that is the same as except that it records the query-answer pairs for .
Note that we can also write as , where is the state after running iterations of the test phase. Then . As and do not act on , tracing out does not change the quantity. Therefore,
where we use the property of the compressed oracle technique that after quantum queries there are at most non- elements in . is just the state obtained after running and (so there are at most quantum queries) and then applying the unitary . So there are at most pairs in of . Hence .
∎
The next claim is an analog of summing Claim 6 over to bound how well behaves on . The intuition of the proof is similar to that of Claim 7.
Claim 8.
where the probabilities are taken over the randomness of , the randomness of the inputs to the adversary and the randomness of the adversary (so and are both randomized).
Proof.
Similarly to the proof of Claim 7, these two probabilities can be analyzed by running the sequence of algorithms specified in Section 6.1 in the compressed view. Let be the same state as in Claim 7, except that now is the first output register of (corresponding to the first synthesized state ). Let be the state obtained by running the compressed version of the first step of “ on ” in Section 6.1 until the end of the update phase of . That is, . Then has the same contents in and .
Recall that and both run the verification on the state in , i.e. the first synthesized state . Recall that is the same as except that it also records query-answer pairs for , and that only acts on . From Proposition 2,
This is because acts on the second output register, and thus it commutes with (it verifies the first output state). Moreover, both and do not act on , and thus commute with .
Note that we can write , where is the state obtained after running a randomized number of iterations of the test phase and iterations of the update phase. Then . Notice that , and do not act on , so tracing out does not change the value. Therefore,
where we use the property of the compressed oracle technique that after quantum queries there are at most non- elements in . is just the state obtained after running and (so there are at most quantum queries) and then running the test phase of (only classical queries, with recording) in the compressed view. By the description of , the number of pairs in cannot increase when we make classical queries and record them. So there are at most pairs in of . Hence . ∎
Now let’s combine the above results to prove Theorem 8.
Proof.
Let , , , and .
That is, the adversary we construct gives a valid attack on with and , which establishes Theorem 7.
References
- [AA09] Scott Aaronson and Andris Ambainis “The need for structure in quantum speedups” In arXiv preprint arXiv:0911.0996, 2009
- [Aar09] Scott Aaronson “Quantum copy-protection and quantum money” In 24th Annual IEEE Conference on Computational Complexity IEEE Computer Soc., Los Alamitos, CA, 2009, pp. 229–242 DOI: 10.1109/CCC.2009.42
- [AC13] Scott Aaronson and Paul Christiano “Quantum money from hidden subspaces” In Theory Comput. 9, 2013, pp. 349–401 DOI: 10.4086/toc.2013.v009a009
- [ACC+22] Per Austrin et al. “On the Impossibility of Key Agreements from Quantum Random Oracles” In Cryptology ePrint Archive, 2022
- [AJL+19] Prabhanjan Ananth et al. “Indistinguishability obfuscation without multilinear maps: new paradigms via low degree weak pseudorandomness and security amplification” In Annual International Cryptology Conference, 2019, pp. 284–332 Springer
- [AK22] Prabhanjan Ananth and Fatih Kaleoglu “A Note on Copy-Protection from Random Oracles” https://eprint.iacr.org/2022/1109, Cryptology ePrint Archive, Paper 2022/1109, 2022 URL: https://eprint.iacr.org/2022/1109
- [AKL+22] Prabhanjan Ananth et al. “On the feasibility of unclonable encryption, and more” In Annual International Cryptology Conference, 2022, pp. 212–241 Springer
- [AL21] Prabhanjan Ananth and Rolando L La Placa “Secure Software Leasing” In Eurocrypt, 2021
- [BDG22] Andriyan Bilyk, Javad Doliskani and Zhiyong Gong “Cryptanalysis of Three Quantum Money Schemes” In arXiv preprint arXiv:2205.10488, 2022
- [BDGM20] Zvika Brakerski, Nico Döttling, Sanjam Garg and Giulio Malavolta “Factoring and pairings are not necessary for iO: circular-secure LWE suffices” In Cryptology ePrint Archive, 2020
- [BDV17] Nir Bitansky, Akshay Degwekar and Vinod Vaikuntanathan “Structure vs. hardness through the obfuscation lens” In Annual International Cryptology Conference, 2017, pp. 696–723 Springer
- [BGI+01] Boaz Barak et al. “On the (im)possibility of obfuscating programs” In Annual International Cryptology Conference, 2001, pp. 1–18 Springer
- [BGS13] Anne Broadbent, Gus Gutoski and Douglas Stebila “Quantum one-time programs” In Annual Cryptology Conference, 2013, pp. 344–360 Springer
- [BI20] Anne Broadbent and Rabib Islam “Quantum encryption with certified deletion” In Theory of Cryptography Conference, 2020, pp. 92–122 Springer
- [BL20] Anne Broadbent and Sébastien Lord “Uncloneable Quantum Encryption via Oracles” In 15th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2020) 158, Leibniz International Proceedings in Informatics (LIPIcs) Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2020, pp. 4:1–4:22 DOI: 10.4230/LIPIcs.TQC.2020.4
- [BM09] Boaz Barak and Mohammad Mahmoody-Ghidary “Merkle puzzles are optimal—an O(n²)-query attack on any key exchange from a random oracle” In Annual International Cryptology Conference, 2009, pp. 374–390 Springer
- [BPR+08] Dan Boneh et al. “On the impossibility of basing identity based encryption on trapdoor permutations” In 2008 49th Annual IEEE Symposium on Foundations of Computer Science, 2008, pp. 283–292 IEEE
- [BS16] Shalev Ben-David and Or Sattath “Quantum tokens for digital signatures” In arXiv preprint arXiv:1609.09047, 2016
- [CKP15] Ran Canetti, Yael Tauman Kalai and Omer Paneth “On obfuscation with random oracles” In Theory of Cryptography Conference, 2015, pp. 456–467 Springer
- [CLLZ21] Andrea Coladangelo, Jiahui Liu, Qipeng Liu and Mark Zhandry “Hidden cosets and applications to unclonable cryptography” In Annual International Cryptology Conference, 2021, pp. 556–584 Springer
- [CMP20] Andrea Coladangelo, Christian Majenz and Alexander Poremba “Quantum copy-protection of compute-and-compare programs in the quantum random oracle model” In arXiv preprint arXiv:2009.13865, 2020
- [CMSZ22] Alessandro Chiesa, Fermi Ma, Nicholas Spooner and Mark Zhandry “Post-quantum succinct arguments: breaking the quantum rewinding barrier” In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science—FOCS 2021 IEEE Computer Soc., Los Alamitos, CA, [2022] ©2022, pp. 49–58 DOI: 10.1109/FOCS52979.2021.00014
- [CX21] Shujiao Cao and Rui Xue “Being a permutation is also orthogonal to one-wayness in quantum world: Impossibilities of quantum one-way permutations from one-wayness primitives” In Theoretical Computer Science 855 Elsevier, 2021, pp. 16–42
- [DG17] Nico Döttling and Sanjam Garg “Identity-based encryption from the Diffie-Hellman assumption” In Annual International Cryptology Conference, 2017, pp. 537–569 Springer
- [Die82] DGBJ Dieks “Communication by EPR devices” In Physics Letters A 92.6 Elsevier, 1982, pp. 271–272
- [DLMM11] Dana Dachman-Soled, Yehuda Lindell, Mohammad Mahmoody and Tal Malkin “On the black-box complexity of optimally-fair coin tossing” In Theory of Cryptography Conference, 2011, pp. 450–467 Springer
- [DQV+21] Lalita Devadas et al. “Succinct LWE sampling, random polynomials, and obfuscation” In Theory of Cryptography Conference, 2021, pp. 256–287 Springer
- [FGH+12] Edward Farhi et al. “Quantum money from knots” In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 2012, pp. 276–289
- [GKLM12] Vipul Goyal, Virendra Kumar, Satya Lokam and Mohammad Mahmoody “On black-box reductions between predicate encryption schemes” In Theory of Cryptography Conference, 2012, pp. 440–457 Springer
- [GKM+00] Yael Gertner et al. “The relationship between public key encryption and oblivious transfer” In Proceedings 41st Annual Symposium on Foundations of Computer Science, 2000, pp. 325–335 IEEE
- [GP21] Romain Gay and Rafael Pass “Indistinguishability obfuscation from circular security” In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, 2021, pp. 736–749
- [GZ20] Marios Georgiou and Mark Zhandry “Unclonable decryption keys” In Cryptology ePrint Archive, 2020
- [HJL21] Sam Hopkins, Aayush Jain and Huijia Lin “Counterexamples to new circular security assumptions underlying iO” In Annual International Cryptology Conference, 2021, pp. 673–700 Springer
- [HY20] Akinori Hosoyamada and Takashi Yamakawa “Finding collisions in a quantum world: quantum black-box separation of collision-resistance and one-wayness” In International Conference on the Theory and Application of Cryptology and Information Security, 2020, pp. 3–32 Springer
- [IR90] Russell Impagliazzo and Steven Rudich “Limits on the provable consequences of one-way permutations” In Advances in cryptology—CRYPTO ’88 (Santa Barbara, CA, 1988) 403, Lecture Notes in Comput. Sci. Springer, Berlin, 1990, pp. 8–26 DOI: 10.1007/0-387-34799-2_2
- [JLS18] Zhengfeng Ji, Yi-Kai Liu and Fang Song “Pseudorandom quantum states” In Annual International Cryptology Conference, 2018, pp. 126–152 Springer
- [JLS21] Aayush Jain, Huijia Lin and Amit Sahai “Indistinguishability obfuscation from well-founded assumptions” In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, 2021, pp. 60–73
- [JLS22] Aayush Jain, Huijia Lin and Amit Sahai “Indistinguishability Obfuscation from LPN over 𝔽ₚ, DLIN, and PRGs in NC⁰” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2022, pp. 670–699 Springer
- [KLS22] Andrey Boris Khesin, Jonathan Z Lu and Peter W Shor “Publicly verifiable quantum money from random lattices” In arXiv preprint arXiv:2207.13135, 2022
- [KSS21] Daniel M Kane, Shahed Sharif and Alice Silverberg “Quantum money from quaternion algebras” In arXiv preprint arXiv:2109.12643, 2021
- [Lut10] Andrew Lutomirski “An online attack against Wiesner’s quantum money” In arXiv preprint arXiv:1010.0256, 2010
- [MVW12] Abel Molina, Thomas Vidick and John Watrous “Optimal counterfeiting attacks and generalizations for Wiesner’s quantum money” In Conference on Quantum Computation, Communication, and Cryptography, 2012, pp. 45–64 Springer
- [MW05] Chris Marriott and John Watrous “Quantum Arthur–Merlin games” In Computational Complexity 14.2 Springer, 2005, pp. 122–152
- [NC00] Michael A. Nielsen and Isaac L. Chuang “Quantum computation and quantum information” Cambridge University Press, Cambridge, 2000, pp. xxvi+676
- [Pap94] Christos H. Papadimitriou “Computational Complexity” Addison-Wesley, 1994
- [PRV12] Periklis A Papakonstantinou, Charles W Rackoff and Yevgeniy Vahlis “How powerful are the DDH hard groups?” In Cryptology ePrint Archive, 2012
- [Rob21] Bhaskar Roberts “Security analysis of quantum lightning” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2021, pp. 562–567 Springer
- [RS22] Roy Radian and Or Sattath “Semi-quantum money” In Journal of Cryptology 35.2 Springer, 2022, pp. 1–70
- [RTV04] Omer Reingold, Luca Trevisan and Salil Vadhan “Notions of reducibility between cryptographic primitives” In Theory of Cryptography Conference, 2004, pp. 1–20 Springer
- [Rud91] Steven Rudich “The Use of Interaction in Public Cryptosystems.” In Annual International Cryptology Conference, 1991, pp. 242–251 Springer
- [RY21] Gregory Rosenthal and Henry Yuen “Interactive Proofs for Synthesizing Quantum States and Unitaries” In arXiv preprint arXiv:2108.07192, 2021
- [Shm22] Omri Shmueli “Public-key Quantum money with a classical bank” In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, 2022, pp. 790–803
- [Shm22a] Omri Shmueli “Semi-Quantum Tokenized Signatures” In Cryptology ePrint Archive, 2022
- [Sim98] Daniel R Simon “Finding collisions on a one-way street: Can secure hash functions be based on general assumptions?” In International Conference on the Theory and Applications of Cryptographic Techniques, 1998, pp. 334–345 Springer
- [Unr16] Dominique Unruh “Computationally binding quantum commitments” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2016, pp. 497–527 Springer
- [Wie83] Stephen Wiesner “Conjugate coding” In ACM SIGACT News 15.1 ACM New York, NY, USA, 1983, pp. 78–88
- [Win99] Andreas Winter “Coding theorem and strong converse for quantum channels” In IEEE Transactions on Information Theory 45.7 IEEE, 1999, pp. 2481–2485
- [WW21] Hoeteck Wee and Daniel Wichs “Candidate obfuscation via oblivious LWE sampling” In Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2021, pp. 127–156 Springer
- [WZ82] William K Wootters and Wojciech H Zurek “A single quantum cannot be cloned” In Nature 299.5886 Nature Publishing Group, 1982, pp. 802–803
- [Zha15] Mark Zhandry “A note on the quantum collision and set equality problems” In Quantum Information & Computation 15.7-8 Rinton Press, Incorporated Paramus, NJ, 2015, pp. 557–567
- [Zha18] Mark Zhandry “How to Record Quantum Queries, and Applications to Quantum Indifferentiability” https://eprint.iacr.org/2018/276, Cryptology ePrint Archive, Paper 2018/276, 2018 URL: https://eprint.iacr.org/2018/276
- [Zha21] Mark Zhandry “Quantum lightning never strikes the same state twice. Or: quantum money from cryptographic assumptions” In J. Cryptology 34.1, 2021, Paper No. 6, 56 pp. DOI: 10.1007/s00145-020-09372-x
Appendix A Missing Proof of Lemma 6
The proof consists of two parts. In the first part, we will bound by the weight of queries inside . In the second part, we will show that the weight equals the decrement of the number of pairs in after the query.
Proof.
Let’s begin the first part with some notation.
For , prepares the query, thus we can write as follows,
where  is the next query position,  holds the query answer, and  contains all other registers, including the public key, serial number, , etc. We can divide these terms into two categories: one is , and the other is . That is,  where  is the weight of queries inside ,
Recall that
So when , and act exactly the same. As a result, . Therefore,
where we use properties of the trace norm of matrices and the fact that  if , where  are two bases.
Now let’s move to the second part. We will show that  equals the difference between the number of pairs in  on  and .
First, let’s calculate the number of pairs in on state .
Notice that does not act on , so it commutes with . Then . Recall that
In the summation, the terms are orthogonal to each other. Thus the expected number of pairs in of is . So
Next, let’s calculate the number of pairs in on state .
By definition of and
is a unitary, so the terms where and , where and , and where and in the above summation are orthogonal to each other.
Thus the expected number of pairs in of is
So . That is to say, the weight of queries outside equals the decrement of the number of pairs in after the query.
Combining the above two parts, we obtain
∎
Appendix B Missing Proof of Lemma 7
We prove Lemma 7 by first showing that, for two parallel queries, we can remove one of them without decreasing the value, and then extending the result to  on  and  (each of which makes polynomially many queries).
Proof.
Let  act on the first query position register , the first query answer register  and  while  acts on the second query position register , the second query answer register  and .
We first show that for any state
we have the inequality
In fact, from the same argument in Lemma 6, is exactly the probability that we get outcome such that when we measure the registers and on state . That is
Similarly, is exactly the probability that we get outcome such that when we measure the registers and on state . Notice that we can write as
where is a unitary that only depends on . is a unitary, so the terms in the above summation are orthogonal. Thus
As a result, . That is to say, an extra query on another part of the state can only decrease the chance of making a bad query in , because that extra query can only make the set of bad queries smaller.
and are composed of and . In fact, the above argument can also be extended to and to capture our intuition that can only decrease the number of bad queries made during because can only make the set of bad queries smaller.
We will first show that a fixed number of iterations of the update phase can only decrease the number of bad queries made during  and then show that this holds for .
Let be the state as in 7. We can write it as .
For any , for unitary  that acts on the synthesized state and records the query for , and unitary  that acts on , denote . Let  prepare the next query for  and  prepare the next query for the synthesized state. For every , denote  and . Applying the above inequality, we get that
That is, we can delete a query for the synthesized state without decreasing the number of bad queries made during  on . Repeatedly removing the queries for the synthesized state, we get that
where we use the fact that commutes with and , commutes with , and both and commute with .
We have successfully removed one  on the synthesized state. We can repeat this until we have removed all of  on the synthesized state. That is,
where we use the fact that uses the database in and does not record query for . Thus and commute. does not act on and , so it commutes with .
Finally, we will show that a randomized number of iterations of the update phase cannot increase the number of bad queries made during . As neither  nor  acts on register , tracing out  after  does not change the quantity. Recall that . Thus
∎
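The monotonicity intuition above (an extra answered query can only shrink the set of bad queries) can be illustrated with a small classical Monte-Carlo sketch. This is a toy model of ours, not the lemma’s formal statement: “bad” points are where a simulated oracle disagrees with the real one, and every query patches the point it touches, so the bad set only shrinks.

```python
import random

def bad_query_prob(extra_queries, trials=20000, n=64, bad0=8):
    """Toy model (ours, for intuition only): start with bad0 'bad' points
    out of n where a simulated oracle disagrees with the real one.  Each
    extra query patches the point it touches, removing it from the bad
    set; we then estimate the probability that one final uniform query
    lands on a still-bad point."""
    hits = 0
    for _ in range(trials):
        bad = set(random.sample(range(n), bad0))
        for _ in range(extra_queries):
            q = random.randrange(n)
            bad.discard(q)  # a touched point is patched: no longer bad
        if random.randrange(n) in bad:
            hits += 1
    return hits / trials

p_no_extra = bad_query_prob(0)
p_extra = bad_query_prob(8)
# Extra queries can only shrink the bad set, so the estimate goes down
# (up to Monte-Carlo noise).
assert p_extra <= p_no_extra + 0.01
```

The same shrinking-set reasoning is what lets the proof delete the synthesized-state queries one by one without decreasing the bad-query weight.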
Appendix C Notation Tables
is the security parameter.  makes classical queries.  and  make  quantum queries in total. We sometimes omit . | |
two polynomials that decide the maximum possible number of iterations we run in the test phase and the update phase. |
the register storing the public key | |
the register storing the serial number | |
the register storing the alleged money state | |
the register storing the classical queries database maintained by | |
the register storing the classical queries we made so far along with their answers (maintained by the oracle) | |
the register storing the oracle if in the decompressed view or the register storing (the database for non- elements) if in compressed view | |
, | the register storing unimportant things for the analysis. For example, it may include the secret key, working space for and , and some unused fresh ancillas. |
the register storing the number of iterations for test phase. | |
the register storing the number of iterations for update phase. | |
the register storing the next query position. | |
the register to store the next query answer. |
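To build intuition for the database registers above (the recorded query-answer pairs in  and ), the following Python sketch mimics the classical part of the bookkeeping: a random oracle is sampled lazily, and each query-answer pair is recorded in a database as it is made. The class and names are ours, for illustration only; they are not part of the scheme or of Zhandry’s formal compressed-oracle construction.

```python
import secrets

class LazyOracle:
    """Classical analogue of the query-recording bookkeeping: instead of
    fixing a random function up front, sample each answer uniformly on
    first query and record the pair in a database (illustrative sketch)."""

    def __init__(self, out_bits=8):
        self.out_bits = out_bits
        self.db = {}  # recorded query-answer pairs (the "database" register)

    def query(self, x):
        # On a fresh query, lazily sample a uniform answer and record it;
        # repeated queries are answered consistently from the database.
        if x not in self.db:
            self.db[x] = secrets.randbits(self.out_bits)
        return self.db[x]

oracle = LazyOracle()
a = oracle.query("pk||s")
assert oracle.query("pk||s") == a  # repeated queries answered consistently
assert len(oracle.db) == 1         # only queried points are ever recorded
```

The key feature mirrored here is that the simulator’s view of the oracle is determined entirely by the (small) database of points actually queried, which is what makes the compressed view of the oracle possible in the quantum setting.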
A state of the following form (i.e., it is in the compressed view and the contents of  and  are the same):
$$|\phi\rangle = \sum_{\substack{pk, s, m, D, D_F, g \\ \text{s.t. } D \cap D_F = \emptyset}} \alpha_{pk, s, m, D, D_F, g}\, |pk\rangle_{Pk}\, |s\rangle_{S}\, |m\rangle_{M}\, |D\rangle_{D_A}\, |D_F\rangle_{F}\, |D\rangle_{D_R}\, |g\rangle_{G}.$$
In 7 and 8, we will instantiate it with the pure state we obtain by applying the unitaries ,  and  to the state  along with enough ancillas. | |
. It’s the state when we run on in the compressed view until we have answered the query. | |
We abuse the notation. is the pure state we obtain by running the first step in the compressed view in the case on until the end of the test phase (in 7) or the update phase (in 8) of . | |
an arbitrary state ready for two “parallel” classical queries on different registers. |
the pure state we obtain by running the first step in the compressed view in the case on until we finish the test phase and then truncating . | |
We abuse the notation. In 7, is the state after we run iterations in the test phase but not run the update phase in the compressed view. In 8, is the state we obtain after we run a randomized number of iterations in the test phase and iterations in the update phase in the compressed view. |
the observable corresponding to the number of pairs in (i.e. half of the nonempty length in ). Formally, where is the number of pairs in . It will only be applied to those states in compressed view. | |
the unitary defined in Section 6.2. It acts on and and decompresses two databases to one database and the oracle. | |
the inverse of the unitary . | |
. It’s the compressed view version of for a general unitary . See the figure in Section 6.2 for more details. | |
the unitary corresponding to answering a quantum query with the real oracle. | |
the unitary corresponding to answering a classical query with the real oracle. | |
the same as except that it records the query-answer for at the same time. | |
the unitary corresponding to answering a classical query with the database in register while recording the query-answer to for later use. | |
the unitary corresponding to answering a classical query with the database in register while recording the query-answer to for later use. | |
When , it’s the unitary corresponding to the preparation of the query of . When , it’s the unitary after the final query of . | |
the unitary defined in Section 6.1. | |
the unitary defined in Section 6.1, . | |
the unitary corresponding to doing the verification while recording the query-answer pair for , . | |
the unitary corresponding to running where is the content in , . | |
the unitary defined in Section 6.1 that describes our update phase. Formally, it’s the unitary |