An Time Fourier Set Query Algorithm
Abstract
The Fourier transform is an extensively studied problem in many research fields, with applications in machine learning, signal processing, compressed sensing, and beyond. In many real-world applications, an approximate Fourier transform is sufficient, and we only need the Fourier transform on a subset of coordinates. Given a vector , an approximation parameter , and a query set of size , we propose an algorithm that uses Fourier measurements, runs in time, and outputs a vector such that holds with probability at least .
Index Terms:
Sparse Recovery, Fourier Transform, Set Query.
I Introduction
The Fourier transform is ubiquitous in image and audio processing, telecommunications, and beyond. The time complexity of the classical Fast Fourier Transform (FFT) algorithm proposed by Cooley and Tukey [1] is . Optics imaging [2, 3], magnetic resonance imaging (MRI) [4], and physics [5] all benefit from this algorithm. The algorithm of Cooley and Tukey [1] takes samples to compute the Fourier transform result.
The number of samples taken is an important factor. For example, it influences the amount of ionizing radiation that a patient is exposed to during CT scans, and taking fewer samples also reduces the time a patient spends inside the scanner. We therefore consider two computational aspects of Fourier transform problems. The first is the reconstruction time, i.e., the time to decode the signal from the measurements. The second is the sample complexity, i.e., the number of noisy samples required by the algorithm. A long line of work optimizes the time and sample complexity of the Fourier transform in both the signal-processing and TCS communities [1, 5, 4, 2, 6, 7].
As a result, we can anticipate that algorithms which leverage sparsity assumptions about the input and outperform the FFT in applications will be of significant practical utility. In general, the two most important quantities to optimize are the sample complexity and the time complexity of computing the Fourier transform result.
In many real-world applications, computing the approximate Fourier transform result for a set of selected coordinates is sufficient, and we can leverage the approximation guarantee to accelerate the computation. The set query problem was originally proposed by [8]; the original definition places no restriction on the measurements being Fourier measurements. [9] then generalized the classical set query definition [8] to the Fourier setting. In this paper we consider the set estimation problem based on Fourier measurements (defined by [9]): given a vector , approximation parameters and , and a query set , we want to compute, in sublinear time and sample complexity, an approximate Fourier transform result such that, compared with the true Fourier transform result , the following approximation guarantee holds:
with probability at least . For a set and a vector , we define by setting if , and otherwise .
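The restriction operator used in this guarantee can be sketched in a few lines of numpy (the helper name `restrict` is ours for illustration; the paper's vectors and sets are abstract):

```python
import numpy as np

def restrict(v, S):
    """Return the restriction v_S: keep entries of v on coordinates in S, zero elsewhere."""
    out = np.zeros_like(v)
    idx = list(S)
    out[idx] = v[idx]
    return out

# The set-query guarantee compares an estimate against x_hat on S,
# with the residual mass of x_hat outside S on the right-hand side.
x_hat = np.array([3.0, 0.0, 5.0, 1.0])
S = {0, 2}
x_hat_S = restrict(x_hat, S)
```

The same operator appears again in the preliminaries when the notation is formally defined.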
For this Fourier set query problem, there are two major prior works, [9] and [6]. [9] studies the problem explicitly, while [6] implicitly provides a solution to Fourier set query; we provide more details in the following paragraphs.
The work of [9] first explicitly defined the Fourier set query problem and studied it, obtaining an algorithm with sample complexity and running time for Fourier set query. Here, is an upper bound on the norm of the vector; in most applications, is considered . Our approach gives an algorithm with running time. The running time of our result has no dependence on , but our result does not achieve the optimal sample complexity.
[6] did not study the Fourier set query problem; instead, they studied the Fourier sparse recovery problem. However, applying their algorithm [6] to Fourier set query yields an algorithm with time complexity and sample complexity .
Our main contributions are summarized as follows:
-
•
We present an efficient algorithm for the Fourier set query problem.
-
•
We provide comprehensive theoretical guarantees that show the advantage of our algorithm over existing algorithms.
Roadmap. We first present related work on the discrete Fourier transform, the continuous Fourier transform, and applications of the Fourier transform in Section II. We define our problem and present our main theorem in Section III. We present a high-level overview of our techniques in Section IV. We provide definitions, notation, and technical tools in Section V. Our main result, the algorithm (see Algorithm 1) together with the analysis of its correctness and complexity, is given in Section VI. Finally, we conclude the paper in Section VII.
II Related Work
Discrete Fourier Transform
The discrete Fourier transform (DFT) is among the most crucial and frequently employed algorithms in computation. There is a long line of work on sparse discrete Fourier transforms. The results can be divided into two kinds. The first kind takes a sublinear number of measurements and achieves sublinear or linear recovery time; this line of work includes [10, 6, 11, 12, 13, 14, 15, 9, 16]. The second kind chooses measurements at random and proves that a generic recovery algorithm succeeds with high probability; a common generic recovery algorithm used in these works is minimization, and these results establish the Restricted Isometry Property [17, 18, 19]. Currently, the first kind of solution has better theoretical guarantees in sample and time complexity, while the second kind enjoys higher success probability and better practical performance.
Continuous Fourier Transform
[20] studies sparse Fourier transforms on continuous signals. They apply a discrete sparse Fourier transform algorithm, followed by a hill-climbing method to refine their solution into a reasonable range. [21] presents an algorithm whose sample complexity is linear in and logarithmic in the signal-to-noise ratio; their frequency resolution is suitable for robustly computing sparse continuous Fourier transforms. [22] generalizes [21] to the high-dimensional setting. [23] provides an algorithm that supports the reconstruction of a signal without a frequency gap; they approximate the signal with a constant-factor noise growth using a number of samples polynomial in and logarithmic in the signal-to-noise ratio. Recently, [24] improved the approximation ratio of [23].
Application of Fourier Transform
The Fourier transform has wide applications in many fields, including physics, mathematics, signal processing, probability theory, statistics, acoustics, cryptography, and more.
Solving partial differential equations is one of the most important applications of the Fourier transform. Some differential equations are simpler to analyze in the frequency domain, because differentiation in the time domain corresponds to multiplication by the frequency. Additionally, multiplication in the frequency domain is equivalent to convolution in the time domain [25], [26], [27].
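The convolution property can be checked numerically. A minimal numpy sketch (the example vectors are illustrative): the DFT of a circular convolution equals the pointwise product of the DFTs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
a = rng.standard_normal(n)
b = rng.standard_normal(n)

# Circular convolution computed directly from the definition.
conv = np.array([sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)])

# Frequency-domain multiplication equals time-domain circular convolution.
lhs = np.fft.fft(conv)
rhs = np.fft.fft(a) * np.fft.fft(b)
assert np.allclose(lhs, rhs)
```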
Various applications of the Fourier transform include nuclear magnetic resonance (NMR) [28], [29], [30] and other types of spectroscopy, such as infrared (FTIR) [31]. In NMR, a free induction decay (FID) signal with an exponential shape is recorded in the time domain and Fourier-transformed into a Lorentzian line-shape in the frequency domain. Mass spectrometry and magnetic resonance imaging (MRI) both employ the Fourier transform. The Fourier transform is also used in quantum mechanics [32].
The Fourier transform is also employed for the spectral analysis of time series [33], [34]. In statistical signal processing, the Fourier transform is often not applied to the signal itself. Although a genuine signal is in fact transient, it has proven best in practice to model a signal by a function (or, alternatively, a stochastic process) that is stationary, in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the conventional sense, so it has been found more advantageous for signal analysis to instead take the Fourier transform of the function's autocorrelation.
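This is the Wiener-Khinchin relation. A minimal numerical check for a real, length-n signal, using the circular autocorrelation (the example signal is illustrative): the DFT of the autocorrelation equals the power spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
x = rng.standard_normal(n)

# Circular autocorrelation r[k] = sum_m x[m] * x[(m + k) % n].
r = np.array([sum(x[m] * x[(m + k) % n] for m in range(n)) for k in range(n)])

# Its DFT equals the power spectrum |X|^2 of the signal.
power = np.abs(np.fft.fft(x)) ** 2
assert np.allclose(np.fft.fft(r), power)
```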
III Fourier set query
III-A Fourier set query problem
In this section, we give a formal definition of the main problem we focus on.
Definition III.1 (Sample Complexity).
Given a vector , we say the sample complexity of an algorithm is (i.e., the algorithm takes samples) when is the number of coordinates of used and .
Definition III.2 (Main problem).
Given a vector , and letting be its exact Fourier transform, for every and , any , , the goal is to design an algorithm that
-
•
takes samples from (note that we treat one entry of as one sample)
-
•
takes some time to output a vector such that
We want to optimize both sample complexity (which is the number of coordinates we need to access in ), and also the running time.
III-B Our Result
We present our main theorem as follows:
Theorem III.3 (Main result).
Given a vector , and letting be its exact Fourier transform, for every and , any , , there exists an algorithm (Algorithm 1) that takes
samples from , runs in
time, and outputs a vector such that
holds with probability at least .
IV Technique Overview
In this section, we give an overview of the techniques used in the proof of our main result and in the analysis of time and sample complexity (see Definition III.1). First, we introduce the main subroutines, their time complexity, and the other properties used in our algorithm. Building on these subroutines, we then analyze the correctness of our algorithm: with probability at least , it produces a which satisfies
The analysis of total complexity comes last, with sample complexity (see Definition III.1) and time complexity . We thereby ensure that the algorithm solves the problem (see Definition III.2) with better performance than the prior works [9] and [6] (see details in Table I).
Technique I: HashToBins
We use the same HashToBins function as in [6]; it is one of the key components of the function EstimateValues. It produces a vector , where the entry for satisfies the following equation
To aid the time complexity analysis of Algorithm 1, note that by Lemma V.15 the time complexity of this function is with
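A heavily simplified, filter-free sketch of the aliasing idea underlying HashToBins can be given in numpy (assuming B divides n; the real HashToBins additionally applies the spectrum permutation and a filter so that each bin collects its hashed frequencies plus bounded leakage, and uses only a small number of samples):

```python
import numpy as np

def alias_to_bins(x, B):
    """Toy aliasing step: fold x into B time-domain buckets, then take a B-point FFT.

    Returns u with u[j] = x_hat[j * (n // B)], i.e. the spectrum of x
    subsampled at B equally spaced frequencies.
    """
    n = len(x)
    assert n % B == 0
    z = x.reshape(n // B, B).sum(axis=0)  # sum of consecutive length-B blocks
    return np.fft.fft(z)

n, B = 32, 8
rng = np.random.default_rng(2)
x = rng.standard_normal(n)
u = alias_to_bins(x, B)
# Folding in time subsamples in frequency.
assert np.allclose(u, np.fft.fft(x)[:: n // B])
```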
Technique II: EstimateValues
EstimateValues is a key function in the main loop (see Section VI-A). Using this function, we obtain the new set and the new value used to update by
and by
Technique III: Query Set
We use as the query set, and is the set obtained after updating for iterations. And we use where and .
We demonstrate that can be reduced to a sufficiently small value with . Since is a query set, this means that we can finish querying all the elements in within a sufficiently large number of iterations.
In the proof of this statement, we use the following properties of (see details in Definition V.9):
-
1.
“Collision”
-
2.
“Large offset”
-
3.
“Large noise”
Given a vector and a coordinate of it, we also define "well-isolated" based on the concepts above. We then prove that, with probability at least , is "well-isolated".
Based on the statement above, we can make small enough by choosing and large enough.
Technique IV: Correctness and Complexity
Using the upper bound on obtained in Section VI-A, we demonstrate that the error satisfies the requirement of the problem. With probability , we have
Then we can demonstrate
V Preliminary
In this section, we first present some definitions and background for the Fourier transform in Section V-A. We introduce some technical tools in Section V-B. Then we introduce spectrum permutations and filter functions in Section V-C; they are used as hashing schemes in the Fourier transform literature. In Section V-D, we introduce collision events, large offset events, and large noise events.
V-A Notations
We use to denote . Note that . For any complex number , we have , where . We define the complement of as . We define . For any complex vector , we use to denote the support of , and then . We define , which is the -th root of unity, i.e., .
The discrete convolution of functions and is given by,
For a complex vector , we use to denote its Fourier spectrum,
Then the inverse transform is
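Assuming the common unnormalized-forward / (1/n)-inverse convention (which matches numpy's default; the paper's exact normalization is elided above), the transform pair can be written out directly:

```python
import numpy as np

def dft(x):
    """Forward transform: x_hat[f] = sum_i x[i] * exp(-2*pi*1j*f*i/n)."""
    n = len(x)
    i = np.arange(n)
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * i / n)) for f in range(n)])

def idft(x_hat):
    """Inverse transform: x[i] = (1/n) * sum_f x_hat[f] * exp(2*pi*1j*f*i/n)."""
    n = len(x_hat)
    f = np.arange(n)
    return np.array([np.sum(x_hat * np.exp(2j * np.pi * f * i / n)) for i in range(n)]) / n

x = np.array([1.0, 2.0, 0.0, -1.0])
assert np.allclose(dft(x), np.fft.fft(x))   # matches numpy's convention
assert np.allclose(idft(dft(x)), x)         # round-trips
```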
We define
We define as a vector by setting if , and otherwise , for a vector and a set .
V-B Technical Tools
In this section, we present several technical tools and lemmas from prior work that we use.
Lemma V.1 (Markov’s inequality).
If $X$ is a nonnegative random variable and $a > 0$, then the probability that $X$ is at least $a$ is at most the expectation of $X$ divided by $a$:
$\Pr[X \geq a] \leq \mathbb{E}[X] / a.$
Let (where ); then we can rewrite the previous inequality as
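A small exact check of the lemma on a discrete distribution (the particular values and probabilities are illustrative):

```python
from fractions import Fraction

# A nonnegative discrete random variable X.
values = [0, 1, 2, 10]
probs = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 8), Fraction(1, 8)]

ex = sum(v * p for v, p in zip(values, probs))  # E[X] = 7/4

# Markov: Pr[X >= a] <= E[X] / a for every a > 0.
for a in [1, 2, 5, 10]:
    tail = sum(p for v, p in zip(values, probs) if v >= a)
    assert tail <= ex / Fraction(a)
```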
The following two lemmas about complex numbers are standard; we prove them for the completeness of the paper.
Lemma V.2.
Given a fixed vector and pairwise independent random variables , where with probability respectively. Then we have:
Proof.
We have:
where the first two steps come from the linearity of expectation, the third step uses the pairwise independence of the random variables, the fourth step follows from , and the final step comes from the definitions of and . ∎
Lemma V.3.
Let uniformly at random. Given a fixed vector and , then we have:
Proof.
For any fixed , we have the inequality as follows
(1)
where the first step comes from the geometric sum, and the second step comes from . We have:
where the first step follows from the fact that for a complex number , , the second and third steps follow from the linearity of expectation, the fourth step follows from Eq. (1), and the final step comes from the definition of . ∎
V-C Permutation and filter function
We use the same (pseudorandom) spectrum permutation as [6],
Definition V.4.
Suppose exists mod . We define the permutation by
We also define . Then we have
Claim V.5.
We have that
We define as the "bin" that the frequency is mapped onto, and as the "offset". We formally define them as follows:
Definition V.6.
Let the hash function be defined as
Definition V.7.
Let the offset function be defined as
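Since the formulas are elided above, the following toy sketch assumes the standard choice from [6]: hash frequency i to the bin nearest to σi·B/n, and record the leftover offset from the bin centre. This is an assumption for illustration, not the paper's verbatim definition:

```python
n, B, sigma = 64, 8, 5  # sigma is odd, hence invertible mod n

def hash_bin(i):
    """Bin that the (permuted) frequency sigma*i is mapped onto."""
    return int(round((sigma * i % n) * B / n)) % B

def offset(i):
    """Distance of the permuted frequency from its bin centre (mod n)."""
    return (sigma * i - hash_bin(i) * (n // B)) % n

# Every frequency lands in a valid bin, and rounding guarantees the
# offset is within n/(2B) of the bin centre, measured circularly.
for i in range(n):
    assert 0 <= hash_bin(i) < B
    o = offset(i)
    assert min(o, n - o) <= n // (2 * B)
```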
Definition V.8.
Given parameters , , . We say that is a filter function if it satisfies the following properties:
-
1.
.
-
2.
if , .
-
3.
if , .
-
4.
for all , .
-
5.
.
V-D Collision event, large offset event, and large noise event
We use three types of events defined in [6] as basic building blocks for analyzing Fourier set query algorithms. For any , we define three types of events associated with and and defined over the probability space induced by and :
Definition V.9 (Collision, large offset, large noise).
The three events are defined as follows:
-
•
We say “Large offset” event holds if
-
•
We say “Large noise” event holds if
-
•
We say “Collision” event holds if
Definition V.10 (Well-isolated).
For a vector , we say a coordinate is "well isolated" when none of the "Collision", "Large offset", or "Large noise" events holds.
Claim V.11 (Claim 3.1 in [6]).
For all , we have
Claim V.12 (Claim 3.2 in [6]).
For all , we have
Claim V.13 (Claim 4.1 in [6]).
For any , the event holds with probability at most
Lemma V.14 (Lemma 4.2 in [6]).
Suppose divides , is sampled uniformly at random from , and the other parameters are arbitrary in .
If none of the events , , and holds and , then for all ,
Lemma V.15 (Lemma 3.3 in [6]).
Suppose divides . The output of HashToBins satisfies
Let
The running time of HashToBins is
VI Analysis on Fourier Set Query Algorithm
In this section, we give a complete analysis of Algorithm 1. First, we provide the iterative loop analysis, which covers the main part of our main function FourierSetQuery, in Section VI-A. Using this analysis, we establish an important property of Algorithm 1 in Section VI-B. In Section VI-C, we prove the correctness of the algorithm and analyze its sample and time complexity. This yields a satisfying answer to the problem (see Definition III.2): Algorithm 1 has better sample and time complexity than prior works (see Table I).
VI-A Iterative loop analysis
The iterative loop analysis for Fourier set query is trickier than for classic set query because, in the Fourier case, hashing is not perfect: using a spectrum permutation and filter function (the counterparts of hashing techniques), one coordinate can contribute non-trivially to multiple bins. We give the iterative loop induction in Lemma VI.4.
Lemma VI.1.
Given a vector , , , for a coordinate and each , with probability at least , is "well isolated" (see Definition V.10).
Proof.
Collision. Using Claim V.11, for any , the event holds with probability at most
where the first step follows from the definition of and the assumption on , the second step is straightforward, the third step follows from the definition of , , and .
It means
Large offset. Using Claim V.12, for any , the event holds with probability at most , i.e.
Large noise. Using Claim V.13, for any ,
By a union bound over the above three events, we have is “well isolated” with probability at least . ∎
Lemma VI.2.
Given parameters , . For any , . For each , we define
For each : If for all we have
-
1.
.
-
2.
.
-
3.
.
-
4.
.
-
5.
.
Then, with probability , we have
Proof.
We consider a particular step . We can condition on .
By Lemma VI.1, we have is “well isolated” with probability at least .
Therefore, each lies in with probability at least . We have . Then, by Markov's inequality (see Lemma V.1) and the assumption in the statement, we have
(2)
with probability . Then we know that
where the first step follows from the definition of , the second step follows from Eq. (2), the third step follows from the definition of and .
∎
Lemma VI.3.
Given parameters , . For any , . For each , we define
For each : If for all we have
-
1.
.
-
2.
.
-
3.
.
-
4.
.
-
5.
.
Then, with probability , we have
Proof.
We define and as follows
(3)
For a fixed , let . By Lemma V.15, we have
(4)
For each , we define the set . Let be the set of coordinates such that . Then it is easy to observe that
where the first step comes from , and the second step follows from .
We can calculate the expectation of .
We first demonstrate that
then get the upper bound of
.
We have
where the first step follows from the summation over , the second step comes from the definition of (Line 19 in Algorithm 1), the third step follows from
and , the fourth step comes from Eq. (4).
And then we have
where the first step follows from the equation above, the second step follows from Lemma V.3, the third step follows from expanding the squared sum, the fourth step follows from the fact that if , we have
the fifth step follows because, for two pairwise independent random variables and , holds with probability at most , the sixth step comes from the summation over , and the last step follows from and .
Then, using Markov’s inequality, we have,
Note that
where the first step follows by , the second step follows by , the third step follows by , the last step follows by .
Thus, we have
∎
Lemma VI.4.
Given parameters , . For any , . For each , we define
For each : If for all we have
-
1.
.
-
2.
.
-
3.
.
-
4.
.
-
5.
.
Then, with probability , we have
-
1.
.
-
2.
.
-
3.
.
-
4.
.
-
5.
.
Proof.
We will prove the five results one by one.
Part 1.
Part 2.
By Lemma VI.2, we have that
Part 3.
Part 4.
Part 5.
By Lemma VI.3, we have that
(5)
Recall that
It is obvious that
Conditioning on all coordinates in are well isolated and Eq. (5) holds, we have
where the first step comes from , the second step is due to rearranging the terms, the third step is due to , the fourth step comes from , the fifth step is due to rearranging the terms, the sixth step comes from Eq. (5), and the final step comes from merging the terms. ∎
VI-B Induction to all the iterations
For completeness, we give the induced result over all iterations (). With the following lemma at hand, we can finally obtain the theorem in Section VI-C.
Lemma VI.5.
Given parameters , . For any , . For each , we define
For each , with probability , we have
and
Proof.
Our proof can be divided into two parts. First, we establish the correctness of the inequalities above for . Then, based on the result obtained above (see Lemma VI.4) and induction over , the proof is complete.
Part 1.
Part 2. Given that all coordinates in are well isolated, with probability at least , we have
where the first step comes from , the second step is due to rearranging the terms, the third step is due to , the fourth step comes from , the fifth step is due to rearranging the terms, the sixth step comes from expanding the terms, and the final step comes from merging the terms. ∎
VI-C Main result
In this subsection, we give the main result as the following theorem.
Theorem VI.6 (Main result).
Given a vector , and letting be its exact Fourier transform, for every and , any , , there exists an algorithm (Algorithm 1) that takes
samples, runs in
time, and outputs a vector such that
holds with probability at least .
Proof.
By the settings in Algorithm 1, the assumptions of Lemma VI.4 hold. By induction on Lemma VI.4, we obtain the following conclusion.
By Lemma VI.4 and the parameters as follows
for , with probability , we have
-
1.
.
-
2.
.
-
3.
.
-
4.
.
-
5.
.
By Lemma VI.5, we conclude that after iterations we attain the desired result. We now analyze the time complexity and sample complexity.
Proof of Sample Complexity.
From the analysis above, the number of samples needed in each iteration is ; we thus have the following complexity.
The sample complexity of Estimation is
The time in each iteration comes mainly from two parts: the EstimateValues and HashToBins functions. The running time of EstimateValues is dominated by its loop, whose number of iterations can be bounded by .
By Lemma V.15, the time complexity of HashToBins is bounded by . This function is called once per iteration.
With , we have the following equation.
Proof of Time Complexity. The time complexity of Estimation is
Proof of Success Probability.
The failure probability is
Upper bound .
By Lemma VI.4, we have that
(6)
where the first two steps come from the assumptions in Lemma VI.4, the third step applies the second step recursively, and the last step follows by a geometric sum.
Proof of Final Error. We can bound the query error by:
where the first step follows from being well isolated (see Definition V.10) and , the second step is by Eq. (5), the third step comes from the definition of in Eq. (VI-A), the fourth step follows from Eq. (VI-C), and the final step follows from the geometric sum and .
∎
VII Conclusion
The Fourier transform is an intensively researched topic in a variety of scientific disciplines, with numerous applications in machine learning, signal processing, compressed sensing, and beyond. In this paper, we study the Fourier set query problem. Given an approximation parameter , a vector , and a query set of size , our algorithm uses Fourier measurements, runs in time, and outputs a vector such that holds with probability at least .
References
- [1] J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Mathematics of computation, vol. 19, no. 90, pp. 297–301, 1965.
- [2] D. G. Voelz, Computational fourier optics: a MATLAB tutorial. SPIE press Bellingham, Washington, 2011.
- [3] J. Goodman, Introduction to Fourier Optics. W. H. Freeman, 2017. [Online]. Available: https://books.google.com/books?id=9zY8DwAAQBAJ
- [4] A. M. Aibinu, M.-J. E. Salami, A. A. Shafie, and A. R. Najeeb, “Mri reconstruction using discrete fourier transform: a tutorial,” 2008.
- [5] G. O. Reynolds, The New Physical Optics Notebook: Tutorials in Fourier Optics. ERIC, 1989.
- [6] H. Hassanieh, P. Indyk, D. Katabi, and E. Price, “Nearly optimal sparse fourier transform,” in Proceedings of the forty-fourth annual ACM symposium on Theory of computing. ACM, 2012, pp. 563–578.
- [7] B. Boashash, Time-frequency signal analysis and processing: a comprehensive reference. Academic press, 2015.
- [8] E. Price, “Efficient sketches for the set query problem,” in Proceedings of the twenty-second annual ACM-SIAM symposium on Discrete Algorithms. Society for Industrial and Applied Mathematics, 2011, pp. 41–56.
- [9] M. Kapralov, “Sample efficient estimation and recovery in sparse fft via isolation on average,” in Foundations of Computer Science, 2017. FOCS’17. IEEE 58th Annual Symposium on. https://arxiv.org/pdf/1708.04544, 2017.
- [10] A. C. Gilbert, S. Muthukrishnan, and M. Strauss, “Improved time bounds for near-optimal sparse Fourier representations,” in Optics & Photonics 2005. International Society for Optics and Photonics, 2005, pp. 59 141A–59 141A.
- [11] H. Hassanieh, P. Indyk, D. Katabi, and E. Price, “Simple and practical algorithm for sparse Fourier transform,” in Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms. SIAM, 2012, pp. 1183–1194.
- [12] M. A. Iwen, “Improved approximation guarantees for sublinear-time Fourier algorithms,” Applied And Computational Harmonic Analysis, vol. 34, no. 1, pp. 57–82, 2013.
- [13] P. Indyk, M. Kapralov, and E. Price, “(Nearly) Sample-optimal sparse Fourier transform,” in Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM, 2014, pp. 480–499.
- [14] P. Indyk and M. Kapralov, “Sample-optimal fourier sampling in any constant dimension,” in Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on. IEEE, 2014, pp. 514–523.
- [15] M. Kapralov, “Sparse Fourier transform in any constant dimension with nearly-optimal sample complexity in sublinear time,” in Symposium on Theory of Computing Conference, STOC’16, Cambridge, MA, USA, June 19-21, 2016, 2016.
- [16] V. Nakos, Z. Song, and Z. Wang, “(nearly) sample-optimal sparse fourier transform in any dimension; ripless and filterless,” in 2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2019, pp. 1568–1577.
- [17] E. J. Candes, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on pure and applied mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
- [18] M. Rudelson and R. Vershynin, “On sparse reconstruction from fourier and gaussian measurements,” Communications on Pure and Applied Mathematics, vol. 61, no. 8, pp. 1025–1045, 2008.
- [19] J. Bourgain, “An improved estimate in the restricted isometry problem,” in Geometric Aspects of Functional Analysis. Springer, 2014, pp. 65–70.
- [20] L. Shi, O. Andronesi, H. Hassanieh, B. Ghazi, D. Katabi, and E. Adalsteinsson, “Mrs sparse-fft: Reducing acquisition time and artifacts for in vivo 2d correlation spectroscopy,” in ISMRM13, Int. Society for Magnetic Resonance in Medicine Annual Meeting and Exhibition, 2013.
- [21] E. Price and Z. Song, “A robust sparse Fourier transform in the continuous setting,” in Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on. IEEE, 2015, pp. 583–600.
- [22] Y. Jin, D. Liu, and Z. Song, “A robust multi-dimensional sparse fourier transform in the continuous setting,” arXiv preprint arXiv:2005.06156, 2020.
- [23] X. Chen, D. M. Kane, E. Price, and Z. Song, “Fourier-sparse interpolation without a frequency gap,” in Foundations of Computer Science (FOCS), 2016 IEEE 57th Annual Symposium on. IEEE, 2016, pp. 741–750.
- [24] Z. Song, B. Sun, O. Weinstein, and R. Zhang, “Sparse fourier transform over lattices: A unified approach to signal reconstruction.” http://arxiv.org/abs/2205.00658, 2022.
- [25] C. D. McGillem and G. R. Cooper, Continuous and discrete signal and system analysis. Harcourt School, 1991.
- [26] J. G. Proakis, Digital signal processing: principles algorithms and applications. Pearson Education India, 2001.
- [27] F. G. Friedlander, M. S. Joshi, M. Joshi, and M. C. Joshi, Introduction to the Theory of Distributions. Cambridge University Press, 1998.
- [28] D. I. Hoult and B. Bhakar, “Nmr signal reception: Virtual photons and coherent spontaneous emission,” Concepts in Magnetic Resonance: An Educational Journal, vol. 9, no. 5, pp. 277–297, 1997.
- [29] I. I. Rabi, J. R. Zacharias, S. Millman, and P. Kusch, “A new method of measuring nuclear magnetic moment,” Physical review, vol. 53, no. 4, p. 318, 1938.
- [30] K. Schmidt-Rohr and H. W. Spiess, Multidimensional solid-state NMR and polymers. Elsevier, 2012.
- [31] P. R. Griffiths, “Fourier transform infrared spectrometry,” Science, vol. 222, no. 4621, pp. 297–302, 1983.
- [32] M. M. Wilde, Quantum information theory. Cambridge University Press, 2013.
- [33] P. J. Schreier and L. L. Scharf, Statistical signal processing of complex-valued data: the theory of improper and noncircular signals. Cambridge university press, 2010.
- [34] L. L. Scharf and C. Demeure, Statistical signal processing: detection, estimation, and time series analysis. Prentice Hall, 1991.