Denoising Convolution Algorithms and Applications
to SAR Signal Processing
Abstract
Convolutions are one of the most important operations in signal processing. They often involve large arrays and require significant computing time. Moreover, in practice, the signal data to be processed by convolution may be corrupted by noise. In this paper, we introduce a new method for computing the convolutions in the quantized tensor train (QTT) format and removing noise from data using the QTT decomposition. We demonstrate the performance of our method using a common mathematical model for synthetic aperture radar (SAR) processing that involves a sinc kernel and present the entire cost of decomposing the original data array, computing the convolutions, and then reformatting the data back into full arrays.
1 Introduction
Convolution operations are used in many practical applications. They often involve large arrays of data and require optimization with respect to memory and computational cost. While input data are usually available only in discrete form, the standard realization based on a matrix-vector representation is often inefficient, since it leads to large, sparsely populated matrices. A tensor decomposition, on the other hand, looks very attractive because it can reduce the volume of data drastically by minimizing the number of zero elements that need to be stored. In addition, arithmetic operations between tensors can be implemented efficiently.
There are different forms of tensor decomposition. The most popular approach is based on the canonical decomposition [12], where a multidimensional array is represented (possibly approximately) as a sum of outer products of vectors. For matrices, this decomposition reduces to the skeleton decomposition. However, it is known to be unstable when the number of tensor dimensions, also referred to as tensor modes, exceeds two. The Tucker decomposition [27] represents a natural, stable generalization of the canonical decomposition and can provide a high compression rate. The main drawback of the Tucker decomposition is related to the so-called curse of dimensionality; that is, the algorithm's complexity grows exponentially with the number of tensor modes. A way to overcome these difficulties is to use the Tensor Train (TT) decomposition, which was originally introduced in [23, 24]. Effectively, the TT decomposition represents a generalization of the classical SVD to the case of multiple modes. It can also be interpreted as a hierarchical Tucker decomposition [10].
Computing the TT decomposition fully can be very expensive if we use the standard TT-SVD algorithm given, e.g., by Algorithm 1 below. Therefore, many modifications to this algorithm were proposed in the literature to help speed it up. One such improvement was presented in [18], where results comparable to those obtained by the TT-SVD algorithm were produced in a fraction of the time for sparse tensor data. Another algorithm that uses the column space of the unfolding tensors was designed to compute the TT cores in parallel; see [26]. The most popular approach to efficiently compute the TT decomposition is based on using a randomized algorithm; see, e.g., [13, 1, 5].
Maximal compression with the TT decomposition can be reached for arrays whose dimensions are powers of two, as proposed in the so-called Quantized TT (QTT) algorithm [20]. As shown in [15], the convolution realized for multilevel Toeplitz matrices via QTT has a complexity that is logarithmic in the number of elements in each mode and proportional to the number of modes. It is proven that this result cannot be asymptotically improved. However, for finite and practically important mode sizes, the algorithm is improved in [25] thanks to the cross-convolution in the Fourier (image) space. The improvement is demonstrated for convolutions with three modes with Newton's potential. It is to be noted that QTT can also be applied to the fast Fourier transform (FFT) to decrease its complexity, as shown in [3]. This super-fast FFT (QTT-FFT) beats the standard FFT only for extremely large arrays, both for one-mode tensors and for tensors with three modes.
For practical applications, a critical issue is denoising. Real-life data, such as radar signals, are typically contaminated with noise. Denoising is not addressed in the papers cited above. However, the TT decomposition itself potentially has a denoising property, owing to the SVD incorporated in the algorithm [4, 8]. In the current work, we propose and implement low-rank modifications of the previously developed TT-SVD algorithm of [21]. These modifications speed up the computations. We also demonstrate the denoising capacity of numerical convolutions computed using the QTT decomposition. Specifically, we employ a common model for synthetic aperture radar (SAR) signal processing based on the convolution with a sinc imaging kernel (called the generalized ambiguity function) [6, Chapter 2] and show that when a convolution with this kernel is evaluated in the QTT format, the noise level in the resulting image is substantially reduced compared to that in the original data.
It should be observed that most papers on tensor convolution only consider the run time of the convolution after the tensor decomposition has been applied to the objective function and the kernel function, and either ignore the cost of the actual tensor decompositions or mention it only in passing. In this paper, we consider every step of computing the convolution using the QTT-FFT algorithm, including the decomposition of the arrays into the QTT format using the TT-SVD algorithm (see Algorithm 1), the application of the QTT-FFT algorithm once the data are in that format, and the extraction of the data back into full arrays after the computation is complete (Section 5). As the QTT decomposition is computationally expensive, we consider several approaches to speeding up the decomposition. Without these modifications to the TT-SVD decomposition algorithm, the convolution can take a long time to compute and is not a practical approach. We provide more detail in Section 7.1.
The methods we use to speed up the TT decompositions are based on truncating the SVD ranks in the decomposition algorithm (Algorithm 1) and lead to a significant noise reduction in the data. We thus obtain algorithms that compute convolutions in a reasonable time while significantly reducing the noise in the data at the same time. Our contribution includes developing and analyzing new approaches to speeding up the tensor train decomposition; see Section 5. In Section 6, we consider the effect of convolutions on removing noise from the data. Finally, in Section 7, we present numerical examples and compare our results with other approaches to computing convolutions.
2 Convolution
The convolution operation is widely used in different applications in signal processing, data imaging, physics, and probability, to name a few. This operation is a way to combine two signals, usually represented as functions, and produce a third signal with meaningful information. The $d$-dimensional convolution between two functions $f$ and $g$ is defined as

(2.1)    $(f * g)(x) = \displaystyle\int_{\mathbb{R}^d} f(y)\, g(x - y)\, dy, \qquad x \in \mathbb{R}^d.$
Often, to compute the convolution numerically, we assume that the supports of $f$ and $g$, denoted $\operatorname{supp} f$ and $\operatorname{supp} g$, respectively, are compact. For simplicity, in this paper we assume $\operatorname{supp} f,\ \operatorname{supp} g \subseteq [-a, a]^d$ for some $a > 0$. Next, we discretize the domain uniformly into $N^d$ points $x_{\mathbf{j}} = (x_{j_1}, \dots, x_{j_d})$, $\mathbf{j} = (j_1, \dots, j_d)$, such that

$x_{j_m} = -a + \big(j_m - \tfrac{1}{2}\big)\, h, \qquad j_m = 1, \dots, N,$

where $h = 2a/N$ is the grid step. We then let $\mathbf{F}$ and $\mathbf{G}$ be $d$-dimensional arrays such that

$\mathbf{F}_{\mathbf{j}} = f(x_{\mathbf{j}}), \qquad \mathbf{G}_{\mathbf{j}} = g(x_{\mathbf{j}})$

for all $\mathbf{j}$. This leads to the discrete convolution

(2.2)    $(\mathbf{F} * \mathbf{G})_{\mathbf{j}} = h^d \sum_{\mathbf{i}} \mathbf{F}_{\mathbf{i}}\, \mathbf{G}_{\mathbf{j} - \mathbf{i}},$

where the sums are over all indices that lead to legal subscripts. This Riemann sum approximation (2.2) to the integral (2.1) uses the midpoint rule, thus having $O(h^2)$ accuracy.
Remark 2.1.
The convolution defined in (2.2) is equivalent to Matlab's convn function with the optional shape input set to 'same' and the result then multiplied by $h^d$.
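As a concrete illustration, here is a minimal Matlab sketch of the discrete convolution (2.2) in one dimension; the variable names and the sample functions are ours and are used only for demonstration.

```matlab
% Discrete convolution (2.2) via Matlab's convn, cf. Remark 2.1.
a = 1;  N = 64;  h = 2*a/N;              % domain [-a,a], N midpoint samples
x = -a + ((1:N) - 0.5)*h;                % midpoint grid
F = exp(-x.^2);                          % samples of a test signal f
G = exp(-4*x.^2);                        % samples of a test kernel g
C = convn(F, G, 'same') * h;             % h^d factor, here d = 1
```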
Computing this convolution directly takes $O(N^{2d})$ operations, but the cost can be reduced to $O(N^d \log N)$ by using the fast Fourier transform (FFT) and the discrete convolution theorem. The FFT is an efficient algorithm for computing the $d$-dimensional discrete Fourier transform (DFT) of $\mathbf{F}$,

$\hat{\mathbf{F}}_{\mathbf{k}} = \sum_{\mathbf{j}} \mathbf{F}_{\mathbf{j}}\, e^{-\frac{2\pi i}{N}(j_1 k_1 + \cdots + j_d k_d)},$

where the sum is over all entries of the multi-indexed array $\mathbf{F}$ and $i$ is the imaginary unit. Similarly, the $d$-dimensional inverse discrete Fourier transform (IDFT) of the array $\hat{\mathbf{F}}$ is given by

$\mathbf{F}_{\mathbf{j}} = \frac{1}{N^d} \sum_{\mathbf{k}} \hat{\mathbf{F}}_{\mathbf{k}}\, e^{\frac{2\pi i}{N}(j_1 k_1 + \cdots + j_d k_d)}.$
Using the discrete Fourier transform, we can compute the circular convolution, defined as

$(\mathbf{F} \circledast \mathbf{G})_{\mathbf{j}} = \sum_{\mathbf{i}} \mathbf{F}_{\mathbf{i}}\, \mathbf{G}_{(\mathbf{j} - \mathbf{i}) \bmod N},$

by taking the DFT of $\mathbf{F}$ and $\mathbf{G}$, multiplying the results together, and then taking the IDFT of the product. Thus, we have

$\mathbf{F} \circledast \mathbf{G} = \mathrm{IDFT}\big(\mathrm{DFT}(\mathbf{F}) \odot \mathrm{DFT}(\mathbf{G})\big),$

where $\odot$ is the Hadamard product (element-wise product) of $d$-dimensional arrays. The circular convolution is the same as the convolution of two periodic functions (up to a constant scaling); thus, to obtain the convolution given in (2.2) (also known as a linear convolution), we need to pad the arrays $\mathbf{F}$ and $\mathbf{G}$ with at least $N - 1$ zeros in each dimension. For example, given zero-padded vectors $\tilde{\mathbf{F}}$ and $\tilde{\mathbf{G}}$ of length $2N$ and their circular convolution $\tilde{\mathbf{F}} \circledast \tilde{\mathbf{G}}$, the linear convolution in (2.2) is given by $h$ times the $N$ central entries of $\tilde{\mathbf{F}} \circledast \tilde{\mathbf{G}}$ (cf. Remark 2.1).
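Continuing the 1D sketch above, the FFT-based evaluation of (2.2) with zero padding can be written as follows; the index offset extracts the 'same'-style part, as in Remark 2.1.

```matlab
% Linear convolution via zero padding and FFT (a sketch; assumes F, G,
% N, and h from the previous snippet).
Ft = [F, zeros(1, N)];                          % pad signal to length 2N
Gt = [G, zeros(1, N)];                          % pad kernel to length 2N
Ct = real(ifft(fft(Ft) .* fft(Gt)));            % circular convolution of padded data
lin  = Ct(1:2*N-1);                             % full linear convolution, length 2N-1
same = lin(floor(N/2)+1 : floor(N/2)+N) * h;    % central N samples, scaled by h
% 'same' agrees with convn(F, G, 'same') * h up to round-off
```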
In this paper, we let $g$ be a predefined kernel, such as the SAR generalized ambiguity function (GAF) (see Section 3 and [6, Chapter 2] for details), and $f$ be a smooth, gradually varying function contaminated with white noise. To compute the convolution, we use the QTT decomposition [16] and the QTT-FFT algorithm [3]. The QTT decomposition is a particular case of the more general TT decomposition (see Section 4 and [21] for details).
3 Synthetic aperture radar (SAR)
SAR is a coherent remote sensing technology capable of producing two-dimensional images of the Earth’s surface from overhead platforms (airborne or spaceborne). SAR illuminates the chosen area on the surface of the Earth with microwaves (specially modulated pulses) and generates the image by digitally processing the returns (i.e., reflected signals). SAR processing involves the application of the matched filter and summation along the synthetic array, which is a collection of successive locations of the SAR antenna along the flight path. Matched filtering yields the image in the direction normal to the platform flight trajectory or orbit (called cross-track or range), while summation along the array yields the image in the direction parallel to the trajectory or orbit (along-the-track or azimuth).
Mathematically, each of the two signal processing stages can be interpreted as the convolution of the signal received by the SAR antenna with a known function. Equivalently, it can be represented as a convolution of the ground reflectivity function, which is the unknown quantity that SAR aims to reconstruct, with the imaging kernel, or generalized ambiguity function (GAF). The advantage of this equivalent representation is that it leads to a very convenient partition: the GAF depends on the imaging system's characteristics, whereas the target's properties determine the ground reflectivity function. Moreover, the image representation via the GAF allows one to see clearly how signal compression (a property that pertains to SAR interrogating waveforms) enables SAR resolution, i.e., the capacity of the sensor to distinguish between closely located targets.
In the simplest possible imaging scenario, when the propagation of radar signals between the antenna and the target is assumed unobstructed and several additional assumptions also hold, the GAF in either the range or azimuthal direction is given by the sinc (or spherical Bessel) function:
(3.1)    $W(x) = C\, \dfrac{\sin(\pi x / \Delta)}{\pi x / \Delta},$

where the constant $C$ is determined by normalization, $x$ denotes the coordinate in a given direction, and the quantity $\Delta$ is the resolution in this direction. From the formula (3.1), we see that the resolution is defined as the half-width of the sinc main lobe, i.e., the distance from its central maximum to the first zero. When $x$ is the range direction (cross-track), the resolution is inversely proportional to the SAR signal bandwidth; see [6, Section 2.4.4]. When $x$ is the azimuthal direction (along-the-track), the resolution is inversely proportional to the length of the synthetic array, i.e., the synthetic aperture; see [6, Section 2.4.3]. Note that lower values of $\Delta$ correspond to better resolution because SAR can distinguish between targets located closer to one another. It can also be shown that as $\Delta \to 0$, the GAF given by (3.1) converges to the $\delta$-function in the sense of distributions [7, Section 3.3]. In this case, the image, which is a convolution of the ground reflectivity with the GAF, coincides with the ground reflectivity. This would be ideal because the image would reconstruct the unknown ground reflectivity exactly. This situation, however, is never realized in practice because having $\Delta \to 0$ requires either the SAR bandwidth (range direction) or the synthetic aperture (azimuthal direction) to become infinitely large, which is not possible.
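For reference, a small Matlab sketch of sampling the GAF (3.1) on a uniform grid; the normalization $C = 1/\Delta$ (which gives the kernel unit integral) and the other numerical values are our own illustrative choices.

```matlab
% Sampling the sinc-shaped GAF (3.1) on a 1D grid (illustrative values).
a = 1;  N = 1024;  h = 2*a/N;
x = -a + ((1:N) - 0.5)*h;          % midpoint grid on [-a, a]
Delta = 0.05;                      % resolution (half-width of the main lobe)
u = x / Delta;
W = ones(size(u));                 % handle the removable singularity at x = 0
nz = (u ~= 0);
W(nz) = sin(pi*u(nz)) ./ (pi*u(nz));
W = W / Delta;                     % normalization C = 1/Delta (an assumption)
```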
4 Tensor Train Decomposition
Consider the $d$-mode tensor $\mathbf{A} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ with elements $\mathbf{A}(i_1, i_2, \dots, i_d)$, where $n_k$ is the size of mode $k$ and $1 \le i_k \le n_k$ for all $k = 1, \dots, d$. The tensor train format decomposes the tensor $\mathbf{A}$ into cores $\mathbf{C}_1, \dots, \mathbf{C}_d$ such that

$\mathbf{A}(i_1, i_2, \dots, i_d) \approx C_1(i_1)\, C_2(i_2) \cdots C_d(i_d),$

where the matrices $C_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}$ for all $1 \le i_k \le n_k$ (in Matlab notation, $C_k(i_k) = \texttt{squeeze}(\mathbf{C}_k(:, i_k, :))$, where squeeze() is used to convert the tensor slice into a matrix). The matrix dimensions $r_0, r_1, \dots, r_d$ are referred to as the TT-ranks of the tensor decomposition, and the $3$-mode tensors $\mathbf{C}_k \in \mathbb{R}^{r_{k-1} \times n_k \times r_k}$ are the TT-cores. Since we are interested in the case when each element of $\mathbf{A}$ is a scalar, we impose the condition $r_0 = r_d = 1$. Let $n = \max_k n_k$ and $r = \max_k r_k$; then the tensor $\mathbf{A}$, which has up to $n^d$ elements, can be represented with at most $d\, n\, r^2$ elements in the TT format.
We can also represent the TT decomposition as a product of tensor contraction operators. Define the tensor contraction between the tensors $\mathbf{X} \in \mathbb{R}^{m_1 \times \cdots \times m_p \times s}$ and $\mathbf{Y} \in \mathbb{R}^{s \times l_1 \times \cdots \times l_q}$ (note that the first dimension size of $\mathbf{Y}$ equals the last dimension size of $\mathbf{X}$) as $\mathbf{X} \bullet \mathbf{Y}$, where

$(\mathbf{X} \bullet \mathbf{Y})(i_1, \dots, i_p, j_1, \dots, j_q) = \sum_{\alpha = 1}^{s} \mathbf{X}(i_1, \dots, i_p, \alpha)\, \mathbf{Y}(\alpha, j_1, \dots, j_q).$

Then the TT format of $\mathbf{A}$ can be represented as

$\mathbf{A} \approx \mathbf{C}_1 \bullet \mathbf{C}_2 \bullet \cdots \bullet \mathbf{C}_d.$
Before we show how to find the TT-cores, we first need to define a few properties of tensors. First, let the matrix $A_k$ be the $k$-th unfolding of the tensor $\mathbf{A}$, i.e., the matrix with $n_1 n_2 \cdots n_k$ rows and $n_{k+1} \cdots n_d$ columns such that

$A_k\big(\overline{i_1 \cdots i_k},\; \overline{i_{k+1} \cdots i_d}\big) = \mathbf{A}(i_1, \dots, i_d),$

where $\overline{i_1 \cdots i_k}$ and $\overline{i_{k+1} \cdots i_d}$ denote the long indices obtained by grouping the corresponding modes. We denote the process of unfolding a tensor into a matrix as

$A_k = \operatorname{reshape}\big(\mathbf{A},\; [\,n_1 \cdots n_k,\; n_{k+1} \cdots n_d\,]\big),$

and folding a matrix into a tensor as

$\mathbf{A} = \operatorname{reshape}\big(A_k,\; [\,n_1, n_2, \dots, n_d\,]\big)$

(note that this notation is consistent with the Matlab function reshape()).
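A small Matlab illustration of the unfolding and folding operations just introduced (the array sizes are arbitrary):

```matlab
% Unfolding a 4-mode tensor into its 2nd unfolding matrix and folding back.
A  = rand(4, 3, 5, 2);            % a 4-mode tensor
A2 = reshape(A, 4*3, 5*2);        % 2nd unfolding: rows indexed by (i1,i2)
B  = reshape(A2, [4 3 5 2]);      % fold back into the original shape
assert(isequal(A, B));            % the round trip is exact
```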
From [21], it can be shown that there exists an exact TT-decomposition of $\mathbf{A}$ with TT-ranks $r_k = \operatorname{rank}(A_k)$, $k = 1, \dots, d - 1$.
Denote the Frobenius norm of a tensor as

$\|\mathbf{A}\|_F = \Big(\sum_{i_1, \dots, i_d} \mathbf{A}(i_1, \dots, i_d)^2\Big)^{1/2},$

and the $\varepsilon$-rank of a matrix $A$ as

$\operatorname{rank}_\varepsilon(A) = \min\big\{\operatorname{rank}(B) : \|A - B\|_F \le \varepsilon\big\}.$

Given a set of TT-ranks $\{r_k\}$, we can approximate the tensor $\mathbf{A}$ with a tensor $\mathbf{B}$ in the TT format such that $\mathbf{B}$ has TT-ranks $r_k$ and

$\|\mathbf{A} - \mathbf{B}\|_F^2 \le \sum_{k=1}^{d-1} \varepsilon_k^2,$

where $\varepsilon_k$ is the error of the best rank-$r_k$ approximation of the unfolding $A_k$ in the Frobenius norm.
In Algorithm 1, we present the TT-SVD algorithm [21], which computes a TT-decomposition of a tensor with a prescribed accuracy $\varepsilon$. In Section 5, we present some modifications to this algorithm that relax the prescribed tolerance and allow us to compute an approximate decomposition faster. For a tensor $\mathbf{A}$, the SVD truncation threshold used inside Algorithm 1 is $\delta = \frac{\varepsilon}{\sqrt{d-1}}\, \|\mathbf{A}\|_F$.
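Since the listing of Algorithm 1 is not reproduced here, the following is a minimal Matlab sketch of the TT-SVD procedure of [21] with the threshold $\delta$ defined above; the function and variable names are ours.

```matlab
function cores = tt_svd(A, eps)
% Minimal TT-SVD sketch (cf. [21]): decompose a d-mode tensor A into
% TT-cores cores{1},...,cores{d} with prescribed relative accuracy eps.
n     = size(A);
d     = numel(n);
delta = eps / sqrt(d - 1) * norm(A(:));        % SVD truncation threshold
cores = cell(1, d);
Z     = A;                                     % remaining part to decompose
r     = 1;                                     % current left TT-rank
for k = 1:d-1
    Z = reshape(Z, r * n(k), []);              % unfold: (r_{k-1} n_k) x rest
    [U, S, V] = svd(Z, 'econ');
    s  = diag(S);
    % smallest rank such that the discarded singular-value tail is below delta
    rk = find(sqrt(cumsum(s.^2, 'reverse')) <= delta, 1) - 1;
    if isempty(rk), rk = numel(s); end
    rk = max(rk, 1);
    cores{k} = reshape(U(:, 1:rk), r, n(k), rk);   % TT-core number k
    Z        = S(1:rk, 1:rk) * V(:, 1:rk)';        % pass the remainder on
    r        = rk;
end
cores{d} = reshape(Z, r, n(d), 1);                 % last TT-core
end
```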
The TT-decomposition can also be applied to tensors with a small number of modes by using the quantized tensor train (QTT) decomposition. For instance, let $\mathbf{v} \in \mathbb{R}^{2^d}$ be a vector ($1$-mode tensor). To apply the QTT-decomposition to $\mathbf{v}$, we reshape it into the $d$-mode tensor $\mathbf{V} \in \mathbb{R}^{2 \times 2 \times \cdots \times 2}$ such that

$\mathbf{V}(i_1, i_2, \dots, i_d) = \mathbf{v}(j),$

where

$j = 1 + \sum_{k=1}^{d} (i_k - 1)\, 2^{k-1}, \qquad i_k \in \{1, 2\},$

and then compute the TT-decomposition of the tensor $\mathbf{V}$ (one can think of $(i_1, \dots, i_d)$ as the binary representation of the index $j$). Extending the QTT-decomposition to matrices ($2$-mode tensors) of size $2^d \times 2^d$ can be done similarly by reshaping them into $2d$-mode tensors with all mode sizes equal to $2$ and then computing the TT-decomposition.
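In Matlab, the quantization step is a single reshape; a short sketch using the tt_svd function from the previous sketch:

```matlab
% QTT decomposition of a vector of length 2^d (illustrative values).
d = 10;
x = linspace(0, 1, 2^d);
v = exp(-x) .* sin(10*x);             % a smooth test signal of length 2^d
V = reshape(v, 2*ones(1, d));         % d-mode tensor with all mode sizes 2
cores = tt_svd(V, 1e-10);             % QTT cores of the signal
```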
We can approximate the discrete Fourier transform of a vector $\mathbf{v} \in \mathbb{R}^{2^d}$ (or the 2D discrete Fourier transform of a matrix of size $2^d \times 2^d$) in the QTT format using what is known as the QTT-FFT approximation algorithm [3]. Let $\hat{\mathbf{v}}$ be the discrete Fourier transform of $\mathbf{v}$, and let $\mathbf{V}$ and $\hat{\mathbf{V}}$ be the tensors in the QTT format that represent the vectors $\mathbf{v}$ and $\hat{\mathbf{v}}$, respectively. Given $\mathbf{V}$, the QTT-FFT approximation algorithm can approximate $\hat{\mathbf{V}}$ with a tensor $\hat{\mathbf{V}}_\epsilon$ such that

(4.1)    $\|\hat{\mathbf{V}} - \hat{\mathbf{V}}_\epsilon\|_F \le \epsilon\, \|\hat{\mathbf{V}}\|_F$

for some given tolerance $\epsilon$. Similarly, we could prescribe some maximum TT-rank for the QTT-FFT algorithm such that all TT-ranks of $\hat{\mathbf{V}}_\epsilon$ are bounded by it. The QTT-FFT algorithm can easily be modified to compute the inverse Fourier transform of a vector (or matrix) in the QTT format, which we denote as the QTT-iFFT algorithm.
5 Computing the convolution with QTT decomposition
In practice, we often need to compute the convolution (2.1), where $f$ is the function of interest and $g$ is a given kernel, but $f$ is not given explicitly. Instead, we are given the noisy data

(5.1)    $\tilde{f}_{\mathbf{j}} = f(x_{\mathbf{j}}) + \eta_{\mathbf{j}}$

at the discrete points $x_{\mathbf{j}}$, $\mathbf{j} = (j_1, \dots, j_d)$. In particular, representing the ground reflectivity function for SAR reconstruction in the form (5.1) helps one model the noise in the received data. We assume that $\eta$ is white noise from a normal distribution with standard deviation $\sigma$, i.e., $\eta_{\mathbf{j}} \sim \mathcal{N}(0, \sigma^2)$.
Since the kernel function $g$ is known, we can discretize it as

$\mathbf{G}_{\mathbf{j}} = g(x_{\mathbf{j}})$

for the same grid values $x_{\mathbf{j}}$ as in (5.1). We assume the $d$-dimensional spatial domain is uniformly discretized into $N^d$ points, where $N$ is a power of two; see Section 2. To compute the discrete convolution (2.2), we propose using the quantized tensor train (QTT) decomposition. To represent the arrays in the QTT format, we pad them with zeros so that each dimension of the new arrays has size $2N$, and then reshape them into tensors with all mode sizes equal to $2$. We can relax the condition on the size $N$, but to compute the convolution with an FFT algorithm, we need to zero-pad each dimension with at least $N - 1$ extra zeros (see Section 2), and for the QTT decomposition, we need each dimension to be of size $2^m$ for some integer $m$. Let $\tilde{\mathbf{F}}$ and $\tilde{\mathbf{G}}$ be the zero-padded tensors representing the noisy data and the kernel, respectively, in the QTT format. Here, we assume that the discretization of $f$ has a low, but not exactly known, TT-rank in the QTT format. This is motivated by the fact that many standard piecewise smooth functions naturally have a low TT-rank; see [22, 9, 16].
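In one spatial dimension, the zero padding and quantization described above can be sketched in Matlab as follows (f_noisy and g_kern are placeholders for the sampled noisy data and kernel; tt_svd is the sketch from Section 4).

```matlab
% Zero padding and quantization before the QTT-based FFT convolution.
l = 12;  N = 2^l;
x = linspace(-1, 1, N);
f_noisy = exp(-x.^2) + 0.05*randn(1, N);   % placeholder noisy samples, cf. (5.1)
g_kern  = exp(-25*x.^2);                   % placeholder kernel samples
ft = [f_noisy, zeros(1, N)];               % pad to length 2N = 2^(l+1)
gt = [g_kern,  zeros(1, N)];
Tf = reshape(ft, 2*ones(1, l+1));          % (l+1)-mode tensors, mode size 2
Tg = reshape(gt, 2*ones(1, l+1));
cores_f = tt_svd(Tf, 1e-2);                % QTT cores (or the max rank variants below)
cores_g = tt_svd(Tg, 1e-10);
```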
To find approximations of these tensors in the TT format, we modify the original TT-SVD algorithm. This is because with the full TT-SVD algorithm, if the tolerance $\epsilon$ is small (see equation (4.1)), the TT-decomposition of the noisy data has nearly full TT-ranks. Not only does it take a very long time to compute such decompositions, but most of the noise is still present. However, if $\epsilon$ is too large, the TT-SVD algorithm loses too much information about the true function $f$. For these reasons, we present slight modifications to the TT-SVD algorithm. They are needed to significantly reduce the computing time, as illustrated by the example in Section 7.1.
We consider three different modifications to the TT-SVD algorithm (a Matlab sketch of the corresponding truncation rules is given after this list):

(1) Set some max rank $p$ and truncate the SVD in Algorithm 1 so that at most $p$ singular values are retained at each step. We denote this method as the max rank TT-SVD algorithm.

(2) Set some max rank $p$ and replace the SVD in Algorithm 1 with a randomized SVD (RSVD), as given in [11], with the target rank set to $p$ (see Appendix A). We denote this method as the max rank TT-RSVD algorithm. Note that for this algorithm, we also need to prescribe an oversampling parameter $s$. We could choose from several randomized SVD algorithms, but due to simplicity and effectiveness, we use the approach described in Appendix A, which implements the direct RSVD.

(3) Truncate the SVD in Algorithm 1 when there is a relative drop in the singular values, i.e., if the ratio of consecutive singular values falls below a given threshold $\tau$, then discard all singular values past the drop. We denote this method as the SV drop off TT-SVD algorithm.
For the max rank TT-RSVD, if an unfolding matrix is so small that its row dimension does not exceed the prescribed rank plus oversampling, $p + s$, then we revert to the max rank TT-SVD algorithm (without the randomized SVD).
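The following Matlab sketch shows the three truncation rules (1)-(3) as they would be applied to the current unfolding matrix inside the TT-SVD loop; the function name, interface, and variable names are ours.

```matlab
function [U, S, V, rk] = truncated_svd(Z, method, p, s, tau)
% Truncated SVD of an unfolding matrix Z under one of the rules (1)-(3).
switch method
    case 'maxrank'                    % (1) keep at most p singular values
        [U, S, V] = svd(Z, 'econ');
        rk = min(p, size(S, 1));
    case 'rsvd'                       % (2) randomized SVD with oversampling s
        [~, n]  = size(Z);
        Omega   = randn(n, min(p + s, n));
        [Q, ~]  = qr(Z * Omega, 0);   % orthonormal basis for an approximate range
        [Ub, S, V] = svd(Q' * Z, 'econ');
        U  = Q * Ub;
        rk = min(p, size(S, 1));
    case 'svdrop'                     % (3) cut at the first relative drop below tau
        [U, S, V] = svd(Z, 'econ');
        sv   = diag(S);
        drop = find(sv(2:end) ./ sv(1:end-1) < tau, 1);
        if isempty(drop), rk = numel(sv); else, rk = drop; end
end
U = U(:, 1:rk);  S = S(1:rk, 1:rk);  V = V(:, 1:rk);
end
```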
We can modify the QTT-FFT and QTT-iFFT algorithms similarly to our modifications of the TT-SVD algorithm to obtain low-rank approximations of the discrete Fourier transforms of the data and the kernel. For this, we replace the SVD in the QTT-FFT (QTT-iFFT) algorithm with the truncated SVD rules (1)-(3) given above, but with a possibly different max rank for rules (1) and (2), or a different threshold for rule (3). For the examples in Section 7, we distinguish between the max rank used in the TT-SVD algorithm and the max rank used in the QTT-FFT algorithm. However, we use the same drop-off threshold $\tau$ in both the TT-SVD algorithm and the QTT-FFT algorithm, and thus we do not distinguish between the two. Note that using a rank bound as in rule (1) within the QTT-FFT algorithm is not new and is mentioned in [3].
With the above modifications to the TT-SVD and QTT-FFT (QTT-iFFT) algorithms, we propose the following algorithm (Algorithm 2) to approximate the convolution between the $d$-dimensional arrays representing the noisy data and the kernel. For this algorithm, we use the following notation:

• QTT-FFT with max rank: the QTT-FFT algorithm with a prescribed max rank (or threshold $\tau$),

• QTT-iFFT with max rank: the QTT-iFFT algorithm with a prescribed max rank (or threshold $\tau$).
In Theorem 5.2, we show the asymptotic run time behavior of computing a convolution in one spatial dimension ($d = 1$) with the max rank TT-SVD algorithm. First, we prove an auxiliary result about the size of the unfolding matrices for this algorithm; see Lemma 5.1. For Theorem 5.2, we consider the whole process of converting the vector into the QTT format, computing the convolution, and then converting the convolution in the QTT format back into a vector, as is demonstrated in Algorithm 2. For the last step, to convert a tensor in the TT format back into the standard format, we use the 'full' algorithm from the Matlab toolbox oseledets/TT-Toolbox. This is given in Algorithm 3. We then reshape the resulting tensor into a vector, which takes a negligible amount of run time.
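For orientation, a minimal Matlab sketch of the core-contraction step that the 'full' routine (Algorithm 3) performs, written to match the tt_svd sketch above; it is not the TT-Toolbox implementation.

```matlab
function A = tt_to_full(cores, sz)
% Contract TT-cores cores{1},...,cores{d} back into a full array of size sz.
d = numel(cores);
A = reshape(cores{1}, size(cores{1}, 2), size(cores{1}, 3));  % n1 x r1 (r0 = 1)
for k = 2:d
    [rk1, nk, rk] = size(cores{k});
    A = A * reshape(cores{k}, rk1, nk * rk);   % (n1...n_{k-1}) x (n_k r_k)
    A = reshape(A, [], rk);                    % (n1...n_k) x r_k
end
A = reshape(A, sz);                            % fold into the full tensor
end
```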
Lemma 5.1.
Let be a -mode tensor. Let be the unfolding matrices of in the max rank TT-SVD algorithm with a max rank of and with each . Then
Proof.
Since for all , the proof for is trivial by the first line inside the for loop in Algorithm 1. For , we do a proof by induction. First, note that and , thus
Assume for all . Then,
Thus, we get
∎
Theorem 5.2.
Let for some positive integer . Then the computational complexity, , of approximating the convolution with the max rank TT-SVD and max rank QTT-SVD algorithms described above is
where is the prescribed max rank for both the TT-SVD algorithms and the QTT-FFT algorithm.
Proof.
We show that the computational complexity is dominated asymptotically by the max rank TT-SVD algorithms and the full tensor algorithm. First, let be the computational cost of the SVD in big notation. Then, for a matrix , ). Note that, in Algorithm 1 (as well as in our max rank modifications), the computational complexity is dominated by the SVD algorithm. Denote the unfolding matrices at the th iterations as . Hence, the computational cost of the max rank TT-SVD algorithm is
From [3], we have that for the QTT-FFT and QTT-iFFT algorithms, the computational complexity is . In Algorithm 3, the computational complexity comes from the multiplication in every loop. For the th loop, and for , thus the computational complexity is proportional to the cost of multiplying by , i.e.,
Hence, the total computational complexity is
∎
For the randomized SVD, we have the computational complexity . Thus, the run time for the convolution with a max rank TT-RSVD is similar when is small. In spatial dimensions, we can obtain a similar result but by replacing with in the max rank TT-SVD algorithm and the full tensor algorithm, and the QTT-FFT algorithm is . Hence, the total run time complexity in spatial dimensions is .
6 Denoising
It is well known that the SVD can remove noise from matrix data, as seen in [4, 14], but little research has been done on denoising with tensor decompositions. In [17] and [19], the Tucker decomposition was used to help remove noise from point cloud data and electron holograms, respectively. In [8], it was shown that the TT-decomposition may have some advantages over the Tucker decomposition for denoising. This is because low Tucker ranks guarantee low TT-ranks for the data, whereas the converse statement is not always true.
Let $\mathbf{F}$ be the low TT-rank tensor representing $f$ in the QTT format. Then $\mathbf{F} = \mathbf{C}_1 \bullet \mathbf{C}_2 \bullet \cdots \bullet \mathbf{C}_d$ for some core tensors $\mathbf{C}_k$ with tensor slices $C_k(i_k)$, $k = 1, \dots, d$. Each element of $\mathbf{F}$ can be represented in the TT format as

$\mathbf{F}(i_1, \dots, i_d) = C_1(i_1)\, C_2(i_2) \cdots C_d(i_d),$

where each $C_k(i_k)$ is a matrix of small dimensions. In practice, it is unlikely that the data collected have a low-rank TT decomposition, since almost all real radar data contain noise due to hardware limitations or other signals interfering with the data. Instead, we have the noisy data (5.1), whose tensor representation is

$\tilde{\mathbf{F}} = \mathbf{F} + \mathbf{E},$

where $\mathbf{E}$ is the realization of the random noise in the TT format. The tensor $\tilde{\mathbf{F}}$ almost surely has full TT-rank when represented exactly in the QTT format. Ideally, we would like to be able to find an approximate TT decomposition with TT-cores $\tilde{\mathbf{C}}_k$, $k = 1, \dots, d$, using the noisy data such that $\tilde{\mathbf{C}}_1 \bullet \cdots \bullet \tilde{\mathbf{C}}_d \approx \mathbf{F}$. However, it is hard to guarantee any bound on this approximation. We argue, though, that by using our proposed methods when given the noisy data $\tilde{\mathbf{F}}$, we can find a low-rank TT decomposition that is close to $\mathbf{F}$ rather than to $\tilde{\mathbf{F}}$.
Consider the first iteration of the for loop of Algorithm 1, with the input being the sum of a smooth tensor ($\mathbf{F}$) and a noisy tensor ($\mathbf{E}$). Then, after it is reshaped, we obtain the matrix

$\tilde{F}_1 = F_1 + E_1,$

where $F_1$ is a low-rank matrix and $E_1$ is the added noise. Let $U_p \Sigma_p V_p^{T}$ be the truncated SVD of $\tilde{F}_1$ and $U \Sigma V^{T}$ be the SVD of $F_1$. Note that $U_p \Sigma_p V_p^{T} \approx F_1$ does not imply that $U_p \approx U$, and thus the first TT-core computed from $U_p$ is not guaranteed to be approximately equal to the first TT-core of $\mathbf{F}$. However, if on the second iteration of the loop in Algorithm 1 we use the remainder $\Sigma_p V_p^{T}$ (and similarly on the subsequent iterations), we do get that the elements of the tensor contraction of the computed cores approximate the elements of $\mathbf{F}$. Similarly, if we can approximate the noise-free component on every iteration of the for loop, we obtain an approximation for the tensor $\mathbf{F}$. While we do not have a theoretical bound on this error, our experiments in Section 7 show that this method works well at removing the noise. Since our method computes multiple SVDs, it can remove a lot more noise than a single SVD would, and can do so without excessive smearing.
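As a small self-contained illustration of why rank truncation suppresses additive noise (a generic experiment, not the paper's code), consider a matrix that is the sum of an exactly low-rank part and white noise:

```matlab
% Truncated SVD of (low-rank + noise): most of the noise energy is discarded.
rng(0);
m = 512;  n = 512;  r = 5;  sigma = 0.1;
F1 = randn(m, r) * randn(r, n);              % exactly rank-r "smooth" part
E1 = sigma * randn(m, n);                    % white noise
A1 = F1 + E1;                                % what we actually observe
[U, S, V] = svd(A1, 'econ');
Ar = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';   % rank-r truncation of the noisy matrix
fprintf('relative error, noisy    : %.3f\n', norm(A1 - F1, 'fro')/norm(F1, 'fro'));
fprintf('relative error, truncated: %.3f\n', norm(Ar - F1, 'fro')/norm(F1, 'fro'));
```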
7 Numerical simulations
This section presents some examples in one and two spatial dimensions. The original code for the TT-decompositions and the QTT-FFT algorithms comes from the Matlab toolbox oseledets/TT-Toolbox. We have modified it accordingly for the max rank TT-SVD, max rank TT-RSVD, and SV drop off TT-SVD algorithm, as discussed in Section 5. For all our examples, we compare the run time and errors of computing the convolution (2.1) using several methods. The error for every example is the relative error
(7.1)    $E_{\mathrm{rel}} = \dfrac{\|\mathbf{w} - \mathbf{w}_{\mathrm{ref}}\|}{\|\mathbf{w}_{\mathrm{ref}}\|},$

where $\mathbf{w}$ denotes the convolution computed by a given method and, in $d$ spatial dimensions, $\|\mathbf{w}\| = \big(\sum_{\mathbf{j}} \mathbf{w}_{\mathbf{j}}^2\big)^{1/2}$. The reference solution, $\mathbf{w}_{\mathrm{ref}}$, is the discrete convolution (2.2) computed without any noise. In all of the examples, we compare our methods against computing the convolution with the randomized TT-SVD algorithm from [13], as well as computing the true noisy convolution with the FFT. In two space dimensions, we also approximate the convolution using a low matrix rank approximation to the noisy data, where the truncation rank is determined by the actual matrix rank of the noise-free data.
For all of these examples, we use the normalized sinc imaging kernel that corresponds to the GAF (3.1) truncated to a sufficiently large interval:

(7.2)    $g(x) = \dfrac{1}{\Delta}\, \dfrac{\sin(\pi x / \Delta)}{\pi x / \Delta}$

for $d = 1$, and

(7.3)    $g(x, y) = \dfrac{1}{\Delta^2}\, \dfrac{\sin(\pi x / \Delta)}{\pi x / \Delta}\, \dfrac{\sin(\pi y / \Delta)}{\pi y / \Delta}$

for $d = 2$, where the resolution $\Delta$ in (7.2) and (7.3) is a given parameter. The one-dimensional kernel (7.2) is shown in Figure 7.1.
In Table 7.1, we present the relative error for each example in one and two spatial dimensions. In this table, the columns correspond to the convolution computed using the FFT algorithm, the QTT-convolution computed with the max rank TT-SVD algorithm, the QTT-convolution computed with the max rank TT-RSVD algorithm, the convolution computed using the SV drop off TT-SVD algorithm, the convolution computed using the randomized TT-decomposition, and, in two dimensions, the convolution computed using low-rank approximations of the noisy data. Where applicable, we also indicate, as a subscript of the error, the drop-off threshold or the truncation matrix rank used for each example.
[Figure 7.1: the one-dimensional sinc kernel (7.2).]
For each example, we show the TT-ranks of the original function without noise, $f$, in the QTT format. This QTT approximation is computed with Algorithm 1 with a small tolerance. We compute these TT-ranks for a fixed grid size in one and two dimensions; however, it is worth noting that these TT-ranks do not change much with the data size. Notice that the max TT-ranks we choose for our algorithms are smaller than the TT-ranks produced by Algorithm 1, yet they still provide a reasonable approximation.
example | ||||||
---|---|---|---|---|---|---|
0.0028 | - | |||||
0.0011 | - | |||||
0.0151 |
7.1 Example 1
For this example, let
and
We set the resolution $\Delta$ proportional to $1/K$, where $K$ is the size of the spatial discretization; thus, the width of the main lobe of the sinc spans a fixed number of grid cells on the x-axis.
As we can see in Figure 7.2, the QTT-FFT algorithm removed much of the noise in the data compared to the true convolution of the noisy data. For this example, we also tried computing the convolution using the original TT-SVD algorithm given in Algorithm 1 with multiple values of the tolerance $\epsilon$. The smallest error, as defined in (7.1), was close to the error of the true convolution of the noisy data and took far longer to compute. However, as we can see in Table 7.2, the run times for all of our methods on the same grid were under a second. This indicates that the original TT-SVD algorithm is practically unsuitable for removing noise from the data.
The max TT-rank of the discretization of $f$ in the QTT format is noticeably larger than the max ranks we prescribe for the max rank TT-SVD and max rank TT-RSVD algorithms and for the QTT-FFT algorithm. Thus, even if we do not know the exact TT-rank, we can still compute a good approximation.
Table 7.2 shows run times for different grid sizes for each method. We can see that computing the convolution with the FFT is faster than our methods for these grid sizes. However, the convolution with our QTT methods gets closer to the FFT run time as the grid size increases. This is shown in the last column of Table 7.2, where we see that the ratio of the run time of the max rank TT-SVD convolution method to that of the FFT convolution method is getting smaller as the grid size grows. This helps verify our theoretical result that, for a constant max rank, the max rank TT-SVD convolution method is asymptotically faster than computing the convolution with the FFT. The amount of data needed for our method to outperform the FFT method may, however, be impractical for most real-world applications in one or two spatial dimensions.
[Figure 7.2: the noisy data and the convolutions computed by the different methods for Example 1.]
K | ||||||
---|---|---|---|---|---|---|
0.005 | 65 | |||||
0.067 | 9.7463 | |||||
1.21 | 3.6446 | |||||
5.77 | 3.0763 | |||||
62.49 | 1.4339 |
K | ||
---|---|---|
Table 7.3 shows the number of elements needed to represent the data in full versus the number of elements required to store the data in the QTT format with a prescribed max rank $p$ in Example 1. As we can see, storing all the elements takes a lot of memory and grows exponentially in the number of modes $d$, while storing the elements in the QTT format takes far less memory and grows only linearly in $d$. These values for the QTT data storage can be found by looking at the sizes of the core tensors. For a tensor in the QTT format with $d$ modes of size $2$ and a max TT-rank of $p$, the TT-cores have sizes at most $1 \times 2 \times p$, $p \times 2 \times p$ (for the $d - 2$ interior cores), and $p \times 2 \times 1$. Thus, the number of elements that make up this QTT tensor is at most

$2p + 2(d - 2)p^2 + 2p = 4p + 2(d - 2)p^2.$
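For a rough sense of scale (with values we pick purely for illustration, $p = 8$ and $d = 21$, i.e., a padded vector of length $2^{21}$), the bound above gives

$4p + 2(d - 2)p^2 = 4 \cdot 8 + 2 \cdot 19 \cdot 64 = 2464$

elements in the QTT format, versus $2^{21} = 2{,}097{,}152$ elements for the full vector.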
The max rank TT-RSVD algorithm is not able to produce results as accurate as the max rank TT-SVD (see Table 7.1 for a relative error comparison and Table 7.2 for a run time comparison), but it is still able to produce a reasonably low error. While the run time of the max rank TT-SVD is faster than that of the max rank TT-RSVD in all of our examples, the max rank TT-RSVD can be faster for tensors with larger mode sizes. This is because the SVD in the max rank TT-SVD algorithm may be computed on a matrix whose number of rows is proportional to the mode size, whereas for the max rank TT-RSVD algorithm the SVD is computed on a matrix with a small, fixed number of rows. When the mode size is only $2$ (such as for the QTT decomposition), the difference in the matrix sizes does not make up for the extra amount of work the RSVD algorithm does. Although this paper focuses on the QTT decomposition, and thus on mode size $2$, we believe this is important to note, as the max rank TT-RSVD algorithm can speed up the TT decomposition for tensor data with larger mode sizes and still produce accurate approximations. We verified this by running the max rank TT-SVD and the max rank TT-RSVD algorithms on a tensor with a few modes of large size, whose elements were drawn from a uniform distribution; the max rank TT-RSVD algorithm took almost half the time of the max rank TT-SVD algorithm.
7.2 Example 2
If we were to choose a coarser resolution for the example of Section 7.1 (i.e., a wider sinc function), we could reduce the noise using the standard convolution at the cost of smoothing out the solution's peaks. Doing this gives similar results for the true convolution and for our methods (Section 5). In this section, we show an example where the ground reflectivity is very oscillatory. Here, the resolution determined by the GAF must be small (i.e., the sinc function must be "skinny"). Otherwise, if the sinc window is close to or larger than the characteristic scale of variation of the ground reflectivity, then the convolution can smooth out the actual oscillations instead of just the noise, losing most of the information in the data.
We choose the ground reflectivity as
and
with . We use and the max TT-rank of the discretization of the smooth function in the QTT-format is .
[Figure 7.3: the oscillatory ground reflectivity and the computed convolutions for Example 2: full view (top left), zoomed-in view (top right), and absolute errors (bottom row).]
Again, we use the same max ranks as in Example 1 for the max rank TT-SVD and the QTT-FFT algorithms, respectively. As the function is too oscillatory to see much helpful information in the full graph (see top left of Figure 7.3), we show a zoomed-in plot of the reference convolution, the QTT-FFT convolution, and the convolution of the noisy data (see top right of Figure 7.3). While there is some error, the QTT-FFT convolution agrees with the true convolution very well, whereas the convolution of the noisy data shows a more considerable, noticeable difference. This is verified by the graphs of the absolute error given in Figure 7.3, where the bottom panels show the errors of the two convolutions.
7.3 Example 3
Let
and
The diameter of the main lobe of the sinc on the xy-plane is twice the prescribed resolution $\Delta$. Here, we show a 2D example where the discretization of the smooth function has a TT-rank of 26 when represented in the QTT format. We still use the same max ranks as before for our max rank TT-SVD (max rank TT-RSVD) and max rank QTT-FFT algorithms; thus, our prescribed TT-ranks are much smaller than the true TT-ranks. In Figure 7.4 and Table 7.1, notice that our method can still capture the shape of the original function with an error that is an order of magnitude smaller than the error of the true convolution of the noisy data computed with the FFT. The plots on the bottom of Figure 7.4 are a side view of the error graphs, as it is easier to compare the errors in this view. The results of the 2D example are similar to the previous test cases; thus, it is reasonable to assume our method works about the same regardless of the spatial dimension.
[Figure 7.4: the 2D data and the computed convolutions for Example 3, with side views of the error surfaces in the bottom row.]
In Table 7.4, we compare the run times for the different methods of computing the convolution in two spatial dimensions. We get results similar to the one-dimensional case, where the fastest run time is from the convolution with the FFT, but with the max rank TT-SVD and max rank TT-RSVD methods approaching its run time asymptotically. When the total amount of data in 2D is the same as in the 1D case, the 2D examples do not run as fast as the 1D example. This is due to the extra work in the 2D QTT-FFT algorithm from [3].
K | |||||||
---|---|---|---|---|---|---|---|
0.0034 | 65.088 | ||||||
0.0629 | 14.651 | ||||||
0.948 | 9.121 | ||||||
58.67 | 2.519 |
Again, we compare the amount of data stored in the full format versus in the QTT format. Note that the spatial dimension of the original function does not affect how much storage is needed; only the total number of data entries does. For example, it takes just as much memory to store a vector of length $2^{2l}$ as it does to store a $2^l \times 2^l$ matrix in the QTT format with the same max rank.
K | ||
---|---|---|
8 Conclusions
In this paper, we have shown that the QTT decomposition, along with the QTT-FFT algorithm, can effectively remove noise from signals with full TT-ranks when the true signal is of low rank. As we have seen in the numerical examples, we could drastically reduce the amount of noise in the signal compared to taking the convolution in the traditional way using the FFT algorithm. This comes at the cost of run time, but our methods still run at a reasonable speed, which gets closer to the FFT run time as the amount of data increases. This is demonstrated by three different examples, two in one spatial dimension and one in two spatial dimensions. We are even able to show that our method works on very oscillatory data, where a sinc kernel with a narrow main lobe is required. Using approximate TT-ranks smaller than the TT-ranks of the actual signal data, we are able to recover most of the signal. This indicates that as long as the signal is reasonably smooth, the QTT decomposition can effectively be used for noise reduction for high-dimensional data, even if the true TT-rank is unknown.
Of our three new approaches, the max rank TT-SVD convolution algorithm works much better than the max rank TT-RSVD and the SV drop off TT-SVD convolution algorithms. As was stated before, the max rank TT-RSVD can outperform the max rank TT-SVD algorithm when the mode sizes are larger than those used in this paper. For this reason, we present this algorithm, as we have not seen it in the literature elsewhere. The SV drop off TT-SVD convolution algorithm is not as accurate as the max rank TT-SVD or the max rank TT-RSVD algorithms; however, in some cases it does run faster, and this method may give a higher degree of confidence that the truncated singular values are of little importance. Unfortunately, this method can also lead to long run times, as is seen in Table 7.4.
Appendix A Randomized SVD
Here, we give a brief overview of the randomized SVD (RSVD) decomposition from [11]. To compute the RSVD of the matrix $A \in \mathbb{R}^{m \times n}$, the first step is to find a matrix $Q$ such that

$A \approx Q Q^{T} A,$

where $Q \in \mathbb{R}^{m \times (k + s)}$ has orthonormal columns that approximate the range of $A$. Here, $k$ is the number of singular values that we want our approximation to capture accurately, and $s$ is what is known as an oversampling parameter. To find $Q$, we use Algorithm 4 (the randomized range finder of [11], which applies $A$ to a Gaussian random matrix $\Omega \in \mathbb{R}^{n \times (k+s)}$ and orthogonalizes the result). Given $Q$, the factors of the approximate SVD $A \approx U \Sigma V^{T}$ are then computed by the following direct procedure (Algorithm 5):
1. Let $B = Q^{T} A$.

2. Compute the SVD: $B = \hat{U} \Sigma V^{T}$.

3. Let $U = Q \hat{U}$.
With these algorithms, we obtain an approximation $A \approx U \Sigma V^{T}$ that satisfies, with high probability, the error bound (A.1) established in [11]. If we truncate the SVD to only the leading $k$ singular values in Algorithm 5, then the error on the left-hand side of (A.1) increases by at most $\sigma_{k+1}$, the $(k+1)$-st singular value of $A$. The computational complexity of each step of this algorithm is as follows:
• forming the sample matrix $Y = A \Omega$: $O(mn(k+s))$ operations,

• computing the QR factorization of $Y$ to obtain $Q$: $O(m(k+s)^2)$ operations,

• forming $B = Q^{T} A$, computing its SVD, and forming $U = Q \hat{U}$: $O(mn(k+s) + (m+n)(k+s)^2)$ operations.

Thus, for $k + s \ll \min(m, n)$, the overall algorithm requires $O(mn(k+s))$ operations.
Acknowledgments
The work of A. Chertock was supported in part by NSF grants DMS-1818684 and DMS-2208438. The work of C. Leonard was supported in part by NSF grant DMS-1818684. The work of S. Tsynkov was supported in part by US Air Force Office of Scientific Research (AFOSR) under grant # FA9550-21-1-0086.
References
- [1] Maolin Che and Yimin Wei. Randomized algorithms for the approximations of Tucker and the tensor train decompositions. Adv Comput Math, 45:395–428, 2019.
- [2] Margaret Cheney and Brett Borden. Fundamentals of Radar Imaging, volume 79 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 2009.
- [3] Sergey Dolgov, Boris Khoromskij, and Dmitry Savostyanov. Superfast Fourier transform using QTT approximation. J. Fourier Anal. Appl., 18(5):915–953, 2012.
- [4] Brenden P. Epps and Eric M. Krivitzky. Singular value decomposition of noisy data: noise filtering. Experiments in fluids, 60(8):1–23, 2019.
- [5] Krzysztof Fonał and Rafał Zdunek. Distributed and randomized tensor train decomposition for feature extraction. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2019.
- [6] Mikhail Gilman, Erick Smith, and Semyon Tsynkov. Transionospheric synthetic aperture imaging. Applied and Numerical Harmonic Analysis. Birkhäuser/Springer, Cham, Switzerland, 2017.
- [7] Mikhail Gilman and Semyon Tsynkov. A mathematical perspective on radar interferometry. Inverse Problems & Imaging, 16(1):119–152, 2022. doi: 10.3934/ipi.2021043.
- [8] Xiao Gong, Wei Chen, Jie Chen, and Bo Ai. Tensor denoising using low-rank tensor train decomposition. IEEE Signal Processing Letters, 27:1685–1689, 2020.
- [9] Lars Grasedyck. Polynomial approximation in hierarchical tucker format by vector–tensorization. Technical Report 308, Institut für Geometrie und Praktische Mathematik RWTH Aachen, April 2010.
- [10] W. Hackbusch and S. Kühn. A new scheme for the tensor representation. J. Fourier Anal. Appl., 15(5):706–722, 2009.
- [11] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, 2011.
- [12] R.A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an explanatory multimodal factor analysis. UCLA Working Papers in Phonetics, 16:1–84, December 1970.
- [13] Benjamin Huber, Reinhold Schneider, and Sebastian Wolf. A randomized tensor train singular value decomposition. In Compressed sensing and its applications, Appl. Numer. Harmon. Anal., pages 261–290. Birkhäuser/Springer, Cham, 2017.
- [14] Sunil K. Jha and R. D. S. Yadava. Denoising by singular value decomposition and its application to electronic nose data processing. IEEE Sensors Journal, 11(1):35–44, 2011.
- [15] Vladimir A. Kazeev, Boris N. Khoromskij, and Eugene E. Tyrtyshnikov. Multilevel Toeplitz matrices generated by tensor-structured vectors and convolution with logarithmic complexity. SIAM J. Sci. Comput., 35(3):A1511–A1536, 2013.
- [16] Boris N. Khoromskij. $O(d\log N)$-quantics approximation of $N$-$d$ tensors in high-dimensional numerical modeling. Constr. Approx., 34(2):257–280, 2011.
- [17] Jianze Li, Xiao-Ping Zhang, and Tuan Tran. Point cloud denoising based on tensor tucker decomposition. In 2019 IEEE International Conference on Image Processing (ICIP), pages 4375–4379, 2019.
- [18] Lingjie Li, Wenjian Yu, and Kim Batselier. Faster tensor train decomposition for sparse data. Journal of Computational and Applied Mathematics, 405:113972, 2022.
- [19] Yuki Nomura, Kazuo Yamamoto, Satoshi Anada, Tsukasa Hirayama, Emiko Igaki, and Koh Saitoh. Denoising of series electron holograms using tensor decomposition. Microscopy, 70(3):255–264, 09 2020.
- [20] I. V. Oseledets. Approximation of $2^d \times 2^d$ matrices using tensor decomposition. SIAM J. Matrix Anal. Appl., 31(4):2130–2145, 2009/10.
- [21] I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.
- [22] I. V. Oseledets. Constructive representation of functions in low-rank tensor formats. Constr. Approx., 37(1):1–18, 2013.
- [23] I. V. Oseledets and E. E. Tyrtyshnikov. Breaking the curse of dimensionality, or how to use SVD in many dimensions. SIAM J. Sci. Comput., 31(5):3744–3759, 2009.
- [24] Ivan Oseledets and Eugene Tyrtyshnikov. TT-cross approximation for multidimensional arrays. Linear Algebra Appl., 432(1):70–88, 2010.
- [25] M. V. Rakhuba and I. V. Oseledets. Fast multidimensional convolution in low-rank tensor formats via cross approximation. SIAM J. Sci. Comput., 37(2):A565–A582, 2015.
- [26] Tianyi Shi, Maximilian Ruth, and Alex Townsend. Parallel algorithms for computing the tensor-train decomposition, 2021.
- [27] L.R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31:279–311, 1966.