
Learning nonnegative matrix factorizations from compressed data

Abraar Chaudhry and Elizaveta Rebrova
Abstract.

We propose a flexible and theoretically supported framework for scalable nonnegative matrix factorization. The goal is to find nonnegative low-rank components directly from compressed measurements, accessing the original data only once or twice. We consider compression through randomized sketching methods that can be adapted to the data, or can be oblivious. We formulate optimization problems that only depend on the compressed data, but which can recover a nonnegative factorization which closely approximates the original matrix. The defined problems can be approached with a variety of algorithms, and in particular, we discuss variations of the popular multiplicative updates method for these compressed problems. We demonstrate the success of our approaches empirically and validate their performance in real-world applications.

Princeton University, Department of Operations Research and Financial Engineering. Emails: azc@alumni.princeton.edu, elre@princeton.edu. The authors acknowledge partial support by NSF DMS-2309685 and DMS-2108479.

1. Introduction

Low-rank approximations are arguably the main tool for simplifying and interpreting large, complex datasets. Methods based on singular value decomposition of the data matrix deliver deterministic, useful results via polynomial-time algorithms. However, for nonnegative data, spatial localization and interpretability of the features can be boosted by additionally making the factors element-wise nonnegative [23]. In the standard form, given a nonnegative matrix 𝐗m×n{\mathbf{X}}\in{\mathbb{R}}^{m\times n}, and a target rank rr, the Nonnegative Matrix Factorization (NMF) problem is the task of finding matrices 𝐔m×r{\mathbf{U}}\in{\mathbb{R}}^{m\times r} and 𝐕n×r{\mathbf{V}}\in{\mathbb{R}}^{n\times r} solving the problem

min𝐔,𝐕0𝐗𝐔𝐕TF2.\min_{{\mathbf{U}},{\mathbf{V}}\geq 0}\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}. (1.1)

From the component matrices 𝐔{\mathbf{U}} and 𝐕{\mathbf{V}}, one can obtain a soft clustering of the data with additional interpretability of the components, compared to the SVD results [23]. NMF has become a standard analysis tool across many application areas, such as topic modeling and text mining [2, 40], image processing [23], hyperspectral unmixing [18, 9], genomics [34], and others. It is amenable to numerous extensions, such as incorporating semi-supervision [24], a tree-like hierarchy of the clusters [13, 21], or additional information about the domain [40, 12].

The problem (1.1) is in general NP-hard [38] to solve and it never possesses a unique solution. Despite these challenges, several iterative algorithms have been developed to solve the NMF problem, including multiplicative updates (MU), alternating least squares (ALS), alternating nonnegative least squares (ANLS), and hierarchical alternating least squares (HALS). See [11, Section 3] for a discussion of these methods.
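For concreteness, the classical multiplicative updates of [22] for (1.1) take only a few lines; the following NumPy sketch (function and variable names are ours, for illustration only) is the uncompressed baseline that the compressed variants developed in Section 4 modify.

```python
import numpy as np

def nmf_multiplicative_updates(X, r, n_iter=200, eps=1e-12, seed=0):
    """Classical multiplicative updates for min_{U,V >= 0} ||X - U V^T||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r))          # nonnegative initialization
    V = rng.random((n, r))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)      # U <- U o (X V) / (U V^T V)
        V *= (X.T @ U) / (V @ (U.T @ U) + eps)    # V <- V o (X^T U) / (V U^T U)
    return U, V
```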

When the size of the data matrix 𝐗{\mathbf{X}} is large, storing it is challenging or impossible, and using it within the optimization algorithms searching for the NMF decomposition is even harder. Note that the resulting factors 𝐔{\mathbf{U}} and 𝐕{\mathbf{V}} collectively contain only r(n+m)r(n+m) entries, which is much less than the total of nmnm entries in 𝐗{\mathbf{X}} if rmin{n,m}r\ll\min\{n,m\}. Thus, it can be possible to store and process the resulting factors, even when processing the whole data is challenging.

This motivates us to apply a sketch-and-solve approach to the NMF problem. This means that we will first compute a linear function of our input matrix (𝐗)\mathcal{L}({\mathbf{X}}), known as a sketch, and then second, find a good factorization 𝐔𝐕T{\mathbf{U}}{\mathbf{V}}^{T} based only on the sketch and the linear function \mathcal{L}, known as the sketching map. If the size of the sketch is much smaller than 𝐗{\mathbf{X}}, the second task may be significantly more practical when there are memory limitations. The practice of using sketches is also applicable to matrix factorization problems in other settings such as when different entries of 𝐗{\mathbf{X}} are revealed over time. In certain streaming applications, it has been shown that there is little benefit to considering nonlinear sketching functions as opposed to linear sketches [25].

A standard and well-developed application of the linear sketch-and-solve approach is for the simpler problem of linear regression [33, 5, 29]. Wide classes of random matrices, defined obliviously to the underlying data, can be used as linear sketching operators, and deterministic conditions related to the geometry preservation properties of the sketching operators were formulated [26, 43]. Another prominent example of the sketch-and-solve approach is randomized SVD algorithms. To find a low-rank factorization of a given matrix from its sketched measurements, the sketch should retain spectral properties of the matrix rather than being data oblivious. In [14], a simple procedure for forming refined data-adapted sketches via just one pass over the data – a randomized rangefinder algorithm – was proposed.

In this work, we develop the theory showing why and how these data-adapted sketches are useful in finding nonnegative low-rank components; we also consider the cases when random sketches, independent of the data, can be used.

One way to sketch a structured object – in our case, a matrix – is to vectorize it and apply a random linear map to the resulting vector. This includes standard compressive sensing approaches for low-rank matrix recovery such as in [3, 7]. Another way (which is the focus of this work) is to consider sketches that take the form of left and right matrix products with 𝐗{\mathbf{X}}, e.g., 𝐀𝐗{\mathbf{A}}{\mathbf{X}} or 𝐗𝐁{\mathbf{X}}{\mathbf{B}} for appropriately sized matrices 𝐀{\mathbf{A}} and 𝐁{\mathbf{B}}. Column and row sketching have been used successfully for matrix factorization and approximation problems [37, 8, 44], and its higher-order analogue, modewise sketching, was used to speed up tensor low-rank fitting [19, 15]. An advantage of this approach is in the compact sketching matrices 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} and 𝐁n×k{\mathbf{B}}\in{\mathbb{R}}^{n\times k} that together contain k(n+m)k(n+m) elements, compared to the kmnkmn entries in a linear sketch that is applied to a vectorization of the matrix 𝐗{\mathbf{X}} in mn{\mathbb{R}}^{mn}. Another advantage is in preserving the matrix structure of the data throughout the sketching, which is crucial in this work for integrating the compressed matrices within learning algorithms, such as the multiplicative updates algorithm.
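As a small illustration of this kind of compression (the sizes and variable names below are ours, purely for illustration), the row and column sketches are formed in a single pass over the data and are the only large objects that need to be kept:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5000, 4000, 50                       # illustrative sizes
X = rng.random((m, n))                         # nonnegative data matrix

A = rng.standard_normal((k, m)) / np.sqrt(m)   # row-sketching matrix
B = rng.standard_normal((n, k)) / np.sqrt(n)   # column-sketching matrix

AX, XB = A @ X, X @ B                          # computed in one pass over X
# storage: k(n+m) numbers for the two sketches vs. mn for the full matrix
print(AX.size + XB.size, "vs", X.size)
```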

1.1. Contributions and outline

The idea of making the NMF problem scalable through randomized sketching has been considered before. In Section 2, we review relevant related work. What was missing in the prior works is the crucial link between the algorithmic outcomes of compressed problems and their fitness for the original problem. Establishing such a connection is challenging partially due to the limited theoretical guarantees for the convergence of NMF algorithms (which is essentially inevitable due to the NP-hardness of the problem). We approach this challenge in the following way: (1) we define the compressed problems such that we can provably compare their optimal solutions with the optimal solutions to the uncompressed problem, and (2) we propose efficient algorithms for these compressed problems. Due to (1), good solutions to the compressed problems also translate into good solutions for the original problem.

In Section 3, we formulate optimization problems which depend only on sketches, but whose optimal solutions approximately solve the original NMF problem. In addition, these problems are formulated to be amenable to efficient solvers. We propose three types of such problems: (I) for two-sided row and column sketching, (II) for one-sided sketching with orthogonal data-adapted measurement matrices, and (III) with approximately orthogonal (e.g., random) data-oblivious measurement matrices.

The proposed compressed problem with row and column sketching is as follows

(I)𝐔~,𝐕~=argmin𝐔,𝐕0\displaystyle\text{(I)}\quad\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0} 𝐀1(𝐗𝐔𝐕T)F2+(𝐗𝐔𝐕T)𝐀2F2\displaystyle\|{\mathbf{A}}_{1}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|_{F}^{2}+\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{A}}_{2}\|_{F}^{2}
+λ1P𝐀2T𝐗T𝐔𝐕TF2+λ2𝐔𝐕TP𝐀1𝐗F2(in Theorem 3.1).\displaystyle+\lambda_{1}\|P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}+\lambda_{2}\|{\mathbf{U}}{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\|_{F}^{2}\quad\text{(in Theorem~{}\ref{thm:exact-two-sided}).}

Theorem 3.1 guarantees that if 𝐗{\mathbf{X}} has an exact nonnegative factorization of rank rr, then the solution to the problem above is also exact, {\mathbf{X}}=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}, as long as the sketching dimension is at least rr. Crucially, the matrices 𝐀1{\mathbf{A}}_{1} and 𝐀2{\mathbf{A}}_{2} can be generic. We explain how to solve this problem in the sketched space despite the inclusion of the regularizer terms involving orthogonal projections. Empirically, as shown in Section 5, this problem can be employed in a simplified form with λ1=λ2=0\lambda_{1}=\lambda_{2}=0, and it is suitable for data with approximately low nonnegative rank: when the sketching matrices are generic (in particular, not designed to approximate the range of the data), the two-sided method should be employed for tight recovery.

The one-sided sketching can be more convenient for some types of data and is also more compact. Indeed, iteratively solving the two-sided problem requires storing and using both 𝐀1{\mathbf{A}}_{1} and 𝐀2{\mathbf{A}}_{2}, whereas using sketches on one side requires half the space for the same amount of compression. The proposed one-sided compressed problem formulations are

(II)𝐔~,𝐕~=argmin𝐔,𝐕0[𝐀(𝐗𝐔𝐕T)F2+λP𝐀𝐔𝐕TF2](in Theorem 3.4), or \text{(II)}\quad\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left[\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}\right]\quad\text{(in Theorem~{}\ref{thm:oneside_penalty}), or }
(III)𝐔~,𝐕~=argmin𝐔,𝐕0[𝐀(𝐗𝐔𝐕T)F2+λ𝐔𝐕TF2](in Theorem 3.10). \text{(III)}\quad\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left[\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}\right]\quad\text{(in Theorem~{}\ref{thm: apx orthogonal A}). }

So, what is required from the sketching matrices to work successfully within one-sided compression, (II) or (III)? Theorem 3.4 is stated for sketching matrices with orthonormal rows: in this case, the regularizer in the form P^{\perp}_{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}=({\mathbf{I}}-{\mathbf{A}}^{T}{\mathbf{A}}){\mathbf{U}}{\mathbf{V}}^{T} can be conveniently incorporated in the efficient solvers. Otherwise, we can use the simplified regularizer without the projection operator (Theorem 3.10), where the resulting loss depends on 𝐀{\mathbf{A}} being approximately orthogonal (to the extent easily satisfied by generic random matrices, as described in Remark 3.11). We note that in the one-sided cases, nonzero regularization λ\lambda is crucial both theoretically and empirically (Figure 5).

Informally, both Theorems 3.4 and 3.10 state that when we find \tilde{{\mathbf{U}}} and \tilde{{\mathbf{V}}} solving the compressed problems stated above, the error 𝐗𝐔~𝐕~TF2\|{\mathbf{X}}-\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}\|^{2}_{F} depends on (a) how well an existent (unknown) solution of rank rr fits the uncompressed problem 𝐗𝐔𝐕TF2\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}, (b) how well the sketching matrix approximates the range of the data P𝐀𝐗F2\|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}, and (c) how close to orthonormal the rows of 𝐀{\mathbf{A}} are. In particular, this explains and quantifies how orthogonal data-adapted measurements (e.g., constructed via the randomized rangefinder algorithm [14]) are useful in finding nonnegative low-rank components. By Corollary 3.6, in this case, the solution of the compressed problem above is exact for an original 𝐗{\mathbf{X}} that admits an exact rank-rr decomposition, provided the sketch size kk slightly oversamples rr. Compared to that, data-oblivious one-sided measurements incur additional loss, both theoretically and empirically; however, they can be advantageous for other reasons. For example, they can be re-generated when needed without any access to data, and they do not require an additional pass over the data to form them.

In Section 4, we move from comparing the optimal values to solving the three compressed problems above. We formulate new variants of the multiplicative updates (MU) algorithm for each of them and show that the losses are nonincreasing during their iterations in Corollaries 4.3, 4.4, and 4.5 respectively. These corollaries follow from a more general result Theorem 4.1 formulated for a generic loss function with sketching. We also briefly discuss using projected gradient descent method on our compressed objectives.

One special technical challenge for using MU on the compressed objectives is that random sketching violates nonnegativity of the compressed components, which is the property that ensures the validity of the step sizes used in the MU algorithm. To address this challenge, we further generalize the defined compressed objective functions to include small shifts of the form σ𝟏mT(𝐗𝐔𝐕T)2\sigma\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}, where 𝟏m\mathbf{1}_{m} is a vector of all ones in m{\mathbb{R}}^{m}. This results in corrections of the form 𝐀T𝐀+σ𝟏m𝟏mT{\mathbf{A}}^{T}{\mathbf{A}}+\sigma\mathbf{1}_{m}\mathbf{1}_{m}^{T} that restore required nonnegativity inside the recovery algorithms (relevant theory is developed in Subsection 3.4).

In Section 5 we give some examples on real and synthetic datasets, in particular, showing successful performance of the proposed methods using about 5%5\% of the initial data. Finally, in Section 6, we conclude with some directions for future research.

2. Related work on scalable NMF

A number of randomized methods were proposed to improve the scalability of NMF, most of them in the form of heuristics.

First, several works propose an iterative random sketching approach, which requires sampling new random sketching matrices as the algorithm progresses. Such works include the methods involving random projection streams [45, 46] that allow for Johnson-Lindenstrauss-type sketching matrices (random and oblivious to the data) but require multiple sketches and passes over the initial data. Similarly, in [28, 27] compressed and distributed versions of different descent methods for NMF use different compression matrices across iterations, as opposed to a single compression. Our focus is on the setting that requires one or two passes over the original data in the preprocessing stage, while the algorithm that searches for nonnegative factors works solely on the compressed data.

In [35], the factor matrices are additionally assumed to be sparse and are recovered with compressed sensing techniques. We do not make additional assumptions on the structure of the factor matrices.

Data-adapted sketching with randomized SVD techniques was used in [6] in the context of the hierarchical alternating least squares algorithm, although no theoretical justification of the proposed methods was given. Recently, these ideas were extended to a symmetric variant of the NMF problem in [16]. Additionally, in [49], a randomized SVD approximation is integrated into alternating multiplicative updates in a way that saves space, also without theoretical guarantees.

The two works most closely related to ours are [42] and [36]. Both of these papers derive compressed objective functions and seek to apply semi-NMF methods to iteratively improve a nonnegative factorization. A semi-NMF is a factorization in which one factor is entrywise nonnegative and the other factor is not constrained. Both papers apply the semi-NMF multiplicative updates from [4], and the latter also considers other update methods, including updates based on the alternating direction method of multipliers (ADMM). Although the updates of [4] do possess guarantees to not increase their corresponding semi-NMF objectives, neither [42] nor [36] show whether these guarantees can be extended to their NMF objectives. Thus, neither the validity of the derived objective functions nor the convergence of the proposed iterative methods on the original NMF problem was theoretically justified. A crucial motivation of this work is to create a connection between the algorithms working on the compressed problems and their performance with respect to the solution of the original problem. We achieve this with new variants of the standard NMF algorithms (different from those in [36, 42]) for the newly formulated compressed objective functions. We also provide some numerical comparison between the methods in Section 5.

2.1. Notations

Throughout the text, matrices and vectors are denoted with bold letters. We denote the Frobenius matrix norm by \|\cdot\|_{F} and the spectral (operator) norm by \|\cdot\|. Writing {\mathbf{X}}\in{\mathbb{R}}^{m\times n}_{+} means that the matrix is element-wise nonnegative; the same is denoted by {\mathbf{X}}\geq 0 when the size of the matrix is clear from the context. Element-wise positive and negative parts of vectors and matrices are denoted as (\cdot)_{+}=\max(\cdot,0) and (\cdot)_{-}=-\min(\cdot,0) respectively. Further, \circ denotes element-wise multiplication and / denotes element-wise division. P_{\mathbf{Z}} is the linear projection operator onto the column space of a tall matrix {\mathbf{Z}}; the projection onto the orthogonal complement is P_{\mathbf{Z}}^{\perp}:={\mathbf{I}}-P_{\mathbf{Z}}.

3. Compressed problems with reliable solutions

We formulate optimization problems analogous to (1.1), which do not require storing the entire data matrix 𝐗{\mathbf{X}} and instead use sketched measurements. This is achieved by the use of carefully chosen regularization terms. In this section, we prove that the formulated problems are guaranteed to approximately solve the original NMF problem. In the next section, we show that the standard NMF solvers are easily extendable to these new regularized objective functions.

3.1. Two-sided compression

First, we show that if a matrix has an exact low-rank nonnegative matrix factorization, then one can recover an exact factorization using linear measurements on both sides.

Theorem 3.1.

Suppose 𝐗{\mathbf{X}} has an exact nonnegative factorization 𝐗=𝐔0𝐕0T{\mathbf{X}}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}, where 𝐔0+m×r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, 𝐕0+n×r{\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times r} and they are both full-rank, rmin{n,m}r\leq\min\{n,m\}. Let 𝐀1{\mathbf{A}}_{1} and 𝐀2{\mathbf{A}}_{2} be matrices of sizes r×mr\times m and n×rn\times r respectively such that 𝐀1𝐗𝐀2{\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2} is invertible (note that this condition holds generically, i.e., for all but a (Lebesgue) measure-zero set of matrices). For any λ1,λ2>0\lambda_{1},\lambda_{2}>0, let

𝐔~,𝐕~=argmin𝐔,𝐕0𝐀1(𝐗𝐔𝐕T)F2\displaystyle\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\|{\mathbf{A}}_{1}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|_{F}^{2} +(𝐗𝐔𝐕T)𝐀2F2\displaystyle+\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{A}}_{2}\|_{F}^{2} (3.1)
+λ1P𝐀2T𝐗T𝐔𝐕TF2+λ2𝐔𝐕TP𝐀1𝐗F2,\displaystyle+\lambda_{1}\|P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}+\lambda_{2}\|{\mathbf{U}}{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\|_{F}^{2},

where (𝐔,𝐕0)({\mathbf{U}},{\mathbf{V}}\geq 0) means (𝐔+m×r,𝐕+n×r)({\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r}). Then 𝐗=𝐔~𝐕~T{\mathbf{X}}=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}.

Remark 3.2.

We can similarly take the sketching matrices 𝐀1{\mathbf{A}}_{1} and 𝐀2{\mathbf{A}}_{2} of any sizes k_{1}\times m and n\times k_{2} respectively, with k1,k2rk_{1},k_{2}\geq r.

Remark 3.3 (Implementation considerations).

This and further objective functions are formulated to allow for memory-efficient computations. For example, in the above objective function (3.1), one need not store 𝐗{\mathbf{X}} and can store 𝐀1𝐗{\mathbf{A}}_{1}{\mathbf{X}} and 𝐗𝐀2{\mathbf{X}}{\mathbf{A}}_{2} instead. Likewise, one does not need to store or compute P𝐀2T𝐗TP_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp} which is an m×mm\times m matrix, since one can instead compute

P𝐀2T𝐗T𝐔𝐕TF2\displaystyle\|P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2} =Tr(𝐕𝐔TP𝐀2T𝐗T𝐔𝐕T)\displaystyle=\textrm{Tr}({\mathbf{V}}{\mathbf{U}}^{T}P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T})
=Tr(𝐕𝐔T𝐔𝐕T)Tr(𝐕𝐔T𝐐2𝐐2T𝐔𝐕T),\displaystyle=\textrm{Tr}({\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}{\mathbf{V}}^{T})-\textrm{Tr}({\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T}{\mathbf{U}}{\mathbf{V}}^{T}),

where 𝐐2{\mathbf{Q}}_{2} is an m×rm\times r matrix with columns that form the orthonormal basis of the columns of 𝐗𝐀2{\mathbf{X}}{\mathbf{A}}_{2}, and so P𝐀2T𝐗T=𝐐2𝐐2TP_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}={\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T}. One can do a similar trick to compute P𝐀1𝐗P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp} in terms of an analogous matrix 𝐐1{\mathbf{Q}}_{1} (efficient ways to find orthogonal bases of the column/row span are well-known; see, e.g., the discussion in [14], Section 4.1). Thus, one can compute the objective function of (3.1) with total storage cost 3r(n+m)3r(n+m), by storing the matrices 𝐔,𝐕,𝐀1𝐗,𝐗𝐀2,𝐐1,𝐐2{\mathbf{U}},{\mathbf{V}},{\mathbf{A}}_{1}{\mathbf{X}},{\mathbf{X}}{\mathbf{A}}_{2},{\mathbf{Q}}_{1},{\mathbf{Q}}_{2}. This and similar considerations are crucial in Section 4, when we study computationally efficient iterative algorithms that solve the optimization problems on compressed data defined here.
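A minimal NumPy sketch of this computation, following Remark 3.3 (the function and variable names are ours; we assume Q2 holds an orthonormal basis of the column space of X A2 and Q1 an orthonormal basis of the row space of A1 X, consistently with the remark):

```python
import numpy as np

def two_sided_objective(A1X, XA2, A1, A2, Q1, Q2, U, V, lam1, lam2):
    """Objective (3.1) from compressed quantities only; X is never needed.

    A1X = A1 @ X, XA2 = X @ A2; Q2 spans the columns of X A2,
    Q1 spans the rows of A1 X (as in Remark 3.3)."""
    GU, GV = U.T @ U, V.T @ V                    # r x r Gram matrices
    norm_UVt = np.sum(GU * GV)                   # ||U V^T||_F^2

    fit1 = np.linalg.norm(A1X - (A1 @ U) @ V.T, 'fro')**2
    fit2 = np.linalg.norm(XA2 - U @ (V.T @ A2), 'fro')**2

    W2 = Q2.T @ U                                # small projected factor
    reg1 = norm_UVt - np.sum((W2.T @ W2) * GV)   # ||P^perp_{A2^T X^T} U V^T||_F^2
    W1 = V.T @ Q1
    reg2 = norm_UVt - np.sum(GU * (W1 @ W1.T))   # ||U V^T P^perp_{A1 X}||_F^2
    return fit1 + fit2 + lam1 * reg1 + lam2 * reg2
```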

The proof of Theorem 3.1 is loosely inspired by the row-column matrix sensing argument from [8].

Proof.

First, we show that the matrices 𝐔0,𝐕0{\mathbf{U}}_{0},{\mathbf{V}}_{0} such that 𝐗=𝐔0𝐕0T{\mathbf{X}}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T} are not only feasible for the problem (3.1) but also give zero objective value. Indeed, since 𝐀1𝐗𝐀2=𝐀1𝐔0𝐕0T𝐀2{\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2}={\mathbf{A}}_{1}{\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}{\mathbf{A}}_{2} is invertible, it follows that the square matrices 𝐀1𝐔0{\mathbf{A}}_{1}{\mathbf{U}}_{0} and 𝐕0T𝐀2{\mathbf{V}}_{0}^{T}{\mathbf{A}}_{2} are also invertible. So, from 𝐗𝐀2=𝐔0𝐕0T𝐀2{\mathbf{X}}{\mathbf{A}}_{2}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}{\mathbf{A}}_{2} we can then compute 𝐔0=𝐗𝐀2(𝐕0T𝐀2)1{\mathbf{U}}_{0}={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{V}}_{0}^{T}{\mathbf{A}}_{2})^{-1} and similarly 𝐕0T=(𝐀1𝐔0)1𝐀1𝐗{\mathbf{V}}_{0}^{T}=({\mathbf{A}}_{1}{\mathbf{U}}_{0})^{-1}{\mathbf{A}}_{1}{\mathbf{X}}. Then we have

𝐗=𝐔0𝐕0T=𝐗𝐀2(𝐕0T𝐀2)1(𝐀1𝐔0)1𝐀1𝐗=𝐗𝐀2(𝐀1𝐗𝐀2)1𝐀1𝐗=:𝐘.{\mathbf{X}}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{V}}_{0}^{T}{\mathbf{A}}_{2})^{-1}({\mathbf{A}}_{1}{\mathbf{U}}_{0})^{-1}{\mathbf{A}}_{1}{\mathbf{X}}={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}{\mathbf{A}}_{1}{\mathbf{X}}=:{\mathbf{Y}}.

This implies

𝐗P𝐀1𝐗\displaystyle{\mathbf{X}}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp} =𝐘P𝐀1𝐗\displaystyle={\mathbf{Y}}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}
=𝐗𝐀2(𝐀1𝐗𝐀2)1𝐀1𝐗(𝐈(𝐀1𝐗)T(𝐀1𝐗(𝐀1𝐗)T)1𝐀1𝐗)=𝟎,\displaystyle={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}{\mathbf{A}}_{1}{\mathbf{X}}({\mathbf{I}}-({\mathbf{A}}_{1}{\mathbf{X}})^{T}({\mathbf{A}}_{1}{\mathbf{X}}({\mathbf{A}}_{1}{\mathbf{X}})^{T})^{-1}{\mathbf{A}}_{1}{\mathbf{X}})=\mathbf{0},

and similarly P𝐀2T𝐗T𝐗=𝟎P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}{\mathbf{X}}=\mathbf{0} and matrices 𝐔0,𝐕0{\mathbf{U}}_{0},{\mathbf{V}}_{0} give zero objective value.

Then, since 𝐔~,𝐕~\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}} are optimal for (3.1), they must also result in objective value 0 and all four summands in (3.1) vanish:

𝐀1(𝐗𝐔~𝐕~T)=(𝐗𝐔~𝐕~T)𝐀2=𝟎,{\mathbf{A}}_{1}({\mathbf{X}}-\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T})=({\mathbf{X}}-\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}){\mathbf{A}}_{2}=\mathbf{0}, (3.2)

and

P𝐀2T𝐗T𝐔~𝐕~T=𝐔~𝐕~TP𝐀1𝐗=𝟎.P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}^{\perp}\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}=\tilde{\mathbf{U}}\tilde{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}=\mathbf{0}. (3.3)

By (3.3), we can write

𝐔~𝐕~T=𝐔~𝐕~TP𝐀1𝐗=P𝐀2T𝐗T𝐔~𝐕~TP𝐀1𝐗=𝐗𝐀2𝐌𝐀1𝐗,\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}=P_{{\mathbf{A}}_{2}^{T}{\mathbf{X}}^{T}}\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}={\mathbf{X}}{\mathbf{A}}_{2}{\mathbf{M}}{\mathbf{A}}_{1}{\mathbf{X}}, (3.4)

where the matrix

𝐌:=((𝐗𝐀2)T𝐗𝐀2)1(𝐗𝐀2)T𝐔~𝐕~T(𝐀1𝐗)T(𝐀1𝐗(𝐀1𝐗)T)1.{\mathbf{M}}:=(({\mathbf{X}}{\mathbf{A}}_{2})^{T}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}({\mathbf{X}}{\mathbf{A}}_{2})^{T}\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}({\mathbf{A}}_{1}{\mathbf{X}})^{T}({\mathbf{A}}_{1}{\mathbf{X}}({\mathbf{A}}_{1}{\mathbf{X}})^{T})^{-1}.

Now,

𝐀1𝐗𝐀2=(3.2)𝐀1𝐔~𝐕~T𝐀2=(3.4)𝐀1(𝐗𝐀2𝐌𝐀1𝐗)𝐀2=(𝐀1𝐗𝐀2)𝐌(𝐀1𝐗𝐀2),{\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2}\overset{\eqref{eq:sum-12-vanish}}{=}{\mathbf{A}}_{1}\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}{\mathbf{A}}_{2}\overset{\eqref{two-sided-1}}{=}{\mathbf{A}}_{1}({\mathbf{X}}{\mathbf{A}}_{2}{\mathbf{M}}{\mathbf{A}}_{1}{\mathbf{X}}){\mathbf{A}}_{2}=({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2}){\mathbf{M}}({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2}),

and since 𝐀1𝐗𝐀2{\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2} is invertible we have 𝐌=(𝐀1𝐗𝐀2)1{\mathbf{M}}=({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}. Thus, (3.4) implies

𝐔~𝐕~T=𝐗𝐀2(𝐀1𝐗𝐀2)1𝐀1𝐗=𝐘=𝐗.\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}{\mathbf{A}}_{1}{\mathbf{X}}={\mathbf{Y}}={\mathbf{X}}.
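The key identity {\mathbf{X}}={\mathbf{X}}{\mathbf{A}}_{2}({\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2})^{-1}{\mathbf{A}}_{1}{\mathbf{X}} used in the proof is easy to verify numerically for an exactly low-rank nonnegative matrix and generic sketching matrices; a small sanity check (sizes are of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 200, 150, 5
U0, V0 = rng.random((m, r)), rng.random((n, r))
X = U0 @ V0.T                                   # exact nonnegative rank-r matrix

A1 = rng.standard_normal((r, m))                # generic sketching matrices
A2 = rng.standard_normal((n, r))

Y = X @ A2 @ np.linalg.inv(A1 @ X @ A2) @ A1 @ X
print(np.linalg.norm(X - Y) / np.linalg.norm(X))   # ~0 up to round-off
```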

3.2. One-sided compression: orthogonal sketching matrices

Note that the described method requires measurements on both sides (and taking either 𝐀1{\mathbf{A}}_{1} or 𝐀2{\mathbf{A}}_{2} to be the identity matrix results in a necessity to work with the full matrix 𝐗{\mathbf{X}}). Now, we will show that it can be enough to measure the matrix on one side only.

Theorem 3.4.

(Orthogonal 𝐀{\mathbf{A}}) Let 𝐗+m×n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} be any matrix and let 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} be a matrix with orthogonal rows (i.e. 𝐀𝐀T=𝐈{\mathbf{A}}{\mathbf{A}}^{T}={\mathbf{I}}). Let 𝐔0+m×r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, {\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times r} give a solution to the original NMF problem (1.1) of rank rr and 𝐗0=𝐔0𝐕0T{\mathbf{X}}_{0}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}. If 𝐔~,𝐕~\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}} solve a compressed NMF problem with the same rank rr, that is,

𝐔~,𝐕~=argmin𝐔,𝐕0[𝐀(𝐗𝐔𝐕T)F2+λP𝐀𝐔𝐕TF2],\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left[\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}\right], (3.5)

where λ>0\lambda>0, P𝐀:=𝐈𝐀T𝐀P_{\mathbf{A}}^{\perp}:={\mathbf{I}}-{\mathbf{A}}^{T}{\mathbf{A}}, and (𝐔,𝐕0)({\mathbf{U}},{\mathbf{V}}\geq 0) means ({\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r}). Then 𝐗~:=𝐔~𝐕~T\tilde{{\mathbf{X}}}:=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T} satisfies

𝐗𝐗~F2𝐗F2cλ[𝐗𝐗0F2𝐗F2+P𝐀𝐗F2𝐗F2],\frac{\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}\leq c_{\lambda}\left[\frac{\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\frac{\|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}\right], (3.6)

where cλ=max(2/λ,6,2λ+2).c_{\lambda}=\max(2/\lambda,6,2\lambda+2).

We note that this and further results do not require the data matrix to have an exact nonnegative low rank rr as in Theorem 3.1 (although if it does, the first term in the bound for the loss vanishes). Before we start the proof, let us recall a simple corollary of the Pythagorean theorem and the triangle inequality to be used several times below: for any matrices 𝐗,𝐘{\mathbf{X}},{\mathbf{Y}} and a projection operator P𝐀P_{\mathbf{A}},

𝐗𝐘F2\displaystyle\|{\mathbf{X}}-{\mathbf{Y}}\|^{2}_{F} P𝐀(𝐗𝐘)F2+2P𝐀𝐘F2+2P𝐀𝐗F2.\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{Y}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}. (3.7)

Indeed, this follows from

𝐗𝐘F2\displaystyle\|{\mathbf{X}}-{\mathbf{Y}}\|^{2}_{F} =P𝐀(𝐗𝐘)F2+P𝐀(𝐗𝐘)F2\displaystyle=\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+\|P_{\mathbf{A}}^{\perp}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}
P𝐀(𝐗𝐘)F2+(P𝐀𝐘F+P𝐀𝐗F)2\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+(\|P_{\mathbf{A}}^{\perp}{\mathbf{Y}}\|_{F}+\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|_{F})^{2}
P𝐀(𝐗𝐘)F2+2P𝐀𝐘F2+2P𝐀𝐗F2.\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{Y}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}.
Proof of Theorem 3.4.

First, note that

\|{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}\leq\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}\|^{2}_{F}, (3.8)

since 𝐔~,𝐕~\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}} minimize the objective of (3.5) over all nonnegative matrices of the appropriate sizes. Now, since 𝐀𝐀T=𝐈{\mathbf{A}}{\mathbf{A}}^{T}={\mathbf{I}}, observe that for any 𝐌m×n{\mathbf{M}}\in{\mathbb{R}}^{m\times n} matrix

𝐀𝐌F=𝐀T𝐀𝐌F=P𝐀𝐌F.\|{\mathbf{A}}{\mathbf{M}}\|_{F}=\|{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{M}}\|_{F}=\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}.

So, using identity (3.7) for the matrices 𝐗{\mathbf{X}} and 𝐗~\tilde{{\mathbf{X}}}, we can estimate

𝐗𝐗~F2\displaystyle\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F} 𝐀(𝐗𝐗~)F2+2P𝐀𝐗~F2+2P𝐀𝐗F2\displaystyle\leq\|{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
c1(𝐀(𝐗𝐗~)F2+λP𝐀𝐗~F2)+2P𝐀𝐗F2\displaystyle\leq c_{1}\big{(}\|{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}\big{)}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
(3.8)c1(𝐀(𝐗𝐗0)F2+λP𝐀𝐗0F2)+2P𝐀𝐗F2\displaystyle\overset{\eqref{feas}}{\leq}c_{1}\big{(}\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}\|^{2}_{F}\big{)}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}

for c1=max(2/λ,1).c_{1}=\max(2/\lambda,1). Using identity (3.7) for the matrices 𝐗{\mathbf{X}} and 𝐗0{\mathbf{X}}_{0}, we can estimate the term in parentheses as

𝐀(𝐗𝐗0)F2+λP𝐀𝐗0F2\displaystyle\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}\|^{2}_{F} 𝐀(𝐗𝐗0)F2+2λP𝐀(𝐗𝐗0)F2+2λP𝐀𝐗F2\displaystyle\leq\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
c2𝐗𝐗0F2+2λP𝐀𝐗F2\displaystyle\leq c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+2\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}

for c2=max(2λ,1).c_{2}=\max(2\lambda,1). Combining the estimates and regrouping, we get

𝐗𝐗~F2c1c2𝐗𝐗0F2+(2λc1+2)P𝐀𝐗F2.\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F}\leq c_{1}c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}.

With cλ=max(c1c2,2λc1+2)=max(2/λ,6,2λ+2)c_{\lambda}=\max(c_{1}c_{2},2\lambda c_{1}+2)=\max(2/\lambda,6,2\lambda+2), this concludes the proof of Theorem 3.4. ∎
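For later reference, the objective of (3.5) can be evaluated from the sketch alone when 𝐀 has orthonormal rows, using \|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}=\|{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}-\|({\mathbf{A}}{\mathbf{U}}){\mathbf{V}}^{T}\|_{F}^{2} and the Gram-matrix identity \|{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}=\textrm{Tr}(({\mathbf{U}}^{T}{\mathbf{U}})({\mathbf{V}}^{T}{\mathbf{V}})). A minimal sketch (the function name is ours):

```python
import numpy as np

def compressed_objective_orth(AX, A, U, V, lam):
    """Objective (3.5) when A has orthonormal rows, using only AX = A @ X."""
    AU = A @ U
    fit = np.linalg.norm(AX - AU @ V.T, 'fro')**2      # ||A(X - U V^T)||_F^2
    GV = V.T @ V
    norm_UVt = np.sum((U.T @ U) * GV)                  # ||U V^T||_F^2
    norm_AUVt = np.sum((AU.T @ AU) * GV)               # ||A U V^T||_F^2
    return fit + lam * (norm_UVt - norm_AUVt)          # + lam ||P_A^perp U V^T||_F^2
```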

Theorem 3.4 shows that the solution to the compressed NMF problem (3.5) will work for the original uncompressed problem (1.1) as long as the terms \|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F} and \|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F} are small. Luckily, with just one more pass over the original data one can get such sketching matrices using the standard approaches of randomized linear algebra, such as those in [14].

Theorem 3.5 (“Randomized rangefinder algorithm loss”, [14]).

Let r,kr,k be integers such that r2r\geq 2 and r+2kmin{m,n}r+2\leq k\leq\min\{m,n\}. Let 𝐗{\mathbf{X}} be an m×nm\times n matrix and 𝐒{\mathbf{S}} be a n×kn\times k standard Gaussian matrix. Then

𝔼𝐗P𝐒T𝐗T𝐗F(1+rkr1)12(j>rσj2(𝐗))12,\mathbb{E}\|{\mathbf{X}}-P_{{\mathbf{S}}^{T}{\mathbf{X}}^{T}}{\mathbf{X}}\|_{F}\leq\left(1+\frac{r}{k-r-1}\right)^{\frac{1}{2}}\left(\sum_{j>r}\sigma_{j}^{2}({\mathbf{X}})\right)^{\frac{1}{2}},

where σj(𝐗)\sigma_{j}({\mathbf{X}}) is the jj-th largest singular value of 𝐗{\mathbf{X}}.

Corollary 3.6 (Data-adapted one-sided sketches).

Suppose the matrix 𝐗{\mathbf{X}} has an approximate nonnegative factorization, that is, there exist 𝐔0+m×r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, {\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times r} so that 𝐗0=𝐔0𝐕0T{\mathbf{X}}_{0}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T} satisfies 𝐗𝐗0Fε𝐗F\|{\mathbf{X}}-{\mathbf{X}}_{0}\|_{F}\leq\varepsilon\|{\mathbf{X}}\|_{F}.

Take kk such that 2r+1kmin{m,n}2r+1\leq k\leq\min\{m,n\}. Form a sketch 𝐗𝐒{\mathbf{X}}{\mathbf{S}}, where 𝐒{\mathbf{S}} is an n×kn\times k standard Gaussian matrix; find 𝐐{\mathbf{Q}}, an orthonormal basis of the column space of 𝐗𝐒{\mathbf{X}}{\mathbf{S}}; take the sketching matrix 𝐀:=𝐐T{\mathbf{A}}:={\mathbf{Q}}^{T}. If \tilde{{\mathbf{U}}}\in{\mathbb{R}}_{+}^{m\times r},\tilde{{\mathbf{V}}}\in{\mathbb{R}}_{+}^{n\times r} solve the compressed NMF problem (3.5) with this 𝐀{\mathbf{A}} and some λ>0\lambda>0, then

𝔼𝐗𝐗~F𝐗F2cλε\frac{{\mathbb{E}}\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|_{F}}{\|{\mathbf{X}}\|_{F}}\leq\sqrt{2}c_{\lambda}\varepsilon (3.9)

and cλc_{\lambda} is the constant from (3.6).

Proof.

By Theorem 3.5 and approximate low-rankness of 𝐗{\mathbf{X}}, we have

\mathbb{E}\|{\mathbf{X}}-P_{\mathbf{A}}{\mathbf{X}}\|_{F}\leq\sqrt{1+\frac{r}{k-r-1}}\sqrt{\sum_{j>r}\sigma_{j}^{2}({\mathbf{X}})}\leq\sqrt{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}\|_{F}\leq\sqrt{2}\varepsilon\|{\mathbf{X}}\|_{F},

using that k2r+1k\geq 2r+1 in the second inequality. Combining this with Theorem 3.4, we have

𝔼𝐗𝐗~Fc(𝐗𝐗0F+𝔼𝐗P𝐀𝐗F)c(1+2)ε𝐗F,{\mathbb{E}}\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|_{F}\leq c(\|{\mathbf{X}}-{\mathbf{X}}_{0}\|_{F}+{\mathbb{E}}\|{\mathbf{X}}-P_{\mathbf{A}}{\mathbf{X}}\|_{F})\leq c(1+\sqrt{2})\varepsilon\|{\mathbf{X}}\|_{F},

where c=max(2/λ,6,2λ+2).c=\max(2/\lambda,6,2\lambda+2).

Remark 3.7.

A high probability deviation bound for the loss 𝐗𝐐𝐐T𝐗F\|{\mathbf{X}}-{\mathbf{Q}}{\mathbf{Q}}^{T}{\mathbf{X}}\|_{F} is also known [14, Theorem 10.7]. It implies a high probability estimate for (3.9) in a straightforward way. Instead of Gaussian initial sketching, one can employ subsampled random Fourier transform [14] or other cost-efficient matrices [31, 10].

It is easy to see that the oversampling condition k>2rk>2r can be relaxed to any k>r+1k>r+1 by suitably increasing the constant factor 2\sqrt{2}. Notwithstanding this factor, we see that if 𝐗{\mathbf{X}} has an exact NMF decomposition of rank rr and k>r+1k>r+1, then the error of the optimal solution to the compressed problem must also be zero, comparable with the result of Theorem 3.1.
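A minimal sketch of the data-adapted construction used in Corollary 3.6 (one pass to form 𝐗𝐒, followed by a second pass to form the sketch 𝐀𝐗; the function name is ours):

```python
import numpy as np

def rangefinder_sketch(X, k, seed=0):
    """Data-adapted sketching matrix A = Q^T as in Corollary 3.6."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((X.shape[1], k))   # n x k Gaussian test matrix
    Q, _ = np.linalg.qr(X @ S)                 # orthonormal basis of col(X S)
    return Q.T                                 # A A^T = I_k

# after one more pass to form A @ X, only A (k x m) and A @ X (k x n) are kept
```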

3.3. One-sided compression: nonorthogonal sketching matrices

The orthogonality assumption on 𝐀{\mathbf{A}} can be relaxed to having approximately orthogonal rows, such as those of appropriately normalized random matrices. This case is more than a straightforward extension of Theorem 3.4 because of the following computational challenge: if the sketching matrix 𝐀{\mathbf{A}} does not have orthogonal rows, the orthogonal projection operator P𝐀P^{\perp}_{\mathbf{A}} no longer has the nicely decomposable form {\mathbf{I}}-{\mathbf{A}}^{T}{\mathbf{A}}. Theorem 3.10 below shows how to avoid having projection matrices in the regularizer term.

Definition 3.8 (Approximately orthogonal matrices).

For a positive constant ε<1\varepsilon<1, we call a matrix 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} ε\varepsilon-approximately orthogonal if its singular values lie in the interval [1ε,1+ε][1-\varepsilon,1+\varepsilon].

The convenience of the definition above stems from the following simple observation.

Lemma 3.9.

If the matrix 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} is ε\varepsilon-approximately orthogonal, then for any 𝐌m×n{\mathbf{M}}\in{\mathbb{R}}^{m\times n} matrix, we have

(1ε)P𝐀𝐌F𝐀𝐌F(1+ε)P𝐀𝐌F.(1-\varepsilon)\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}\leq\|{\mathbf{A}}{\mathbf{M}}\|_{F}\leq(1+\varepsilon)\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}. (3.10)
Proof.

For a positive semidefinite matrix 𝐙{\mathbf{Z}}, let 𝐙\sqrt{{\mathbf{Z}}} denote the unique positive semidefinite matrix such that (𝐙)2=𝐙(\sqrt{{\mathbf{Z}}})^{2}={\mathbf{Z}}. Then, if 𝐀=𝐔𝚺𝐕T{\mathbf{A}}={\mathbf{U}}\boldsymbol{\Sigma}{\mathbf{V}}^{T} is a compact SVD decomposition of 𝐀{\mathbf{A}}, 𝐀T𝐀=𝐕𝚺𝐕T\sqrt{{\mathbf{A}}^{T}{\mathbf{A}}}={\mathbf{V}}\boldsymbol{\Sigma}{\mathbf{V}}^{T} and P𝐀=𝐕𝐕TP_{\mathbf{A}}={\mathbf{V}}{\mathbf{V}}^{T}. This implies P𝐀𝐌F=𝐕T𝐌F,\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}=\|{\mathbf{V}}^{T}{\mathbf{M}}\|_{F}, 𝐀𝐌F=𝐀T𝐀𝐌F=𝚺𝐕T𝐌F\|{\mathbf{A}}{\mathbf{M}}\|_{F}=\|\sqrt{{\mathbf{A}}^{T}{\mathbf{A}}}{\mathbf{M}}\|_{F}=\|\boldsymbol{\Sigma}{\mathbf{V}}^{T}{\mathbf{M}}\|_{F} and

(1ε)P𝐀𝐌F\displaystyle(1-\varepsilon)\|P_{\mathbf{A}}{\mathbf{M}}\|_{F} (1𝐈𝚺)𝐕T𝐌F𝐕T𝐌F(𝐈𝚺)𝐕T𝐌F\displaystyle\leq(1-\|{\mathbf{I}}-\boldsymbol{\Sigma}\|)\|{\mathbf{V}}^{T}{\mathbf{M}}\|_{F}\leq\|{\mathbf{V}}^{T}{\mathbf{M}}\|_{F}-\|({\mathbf{I}}-\boldsymbol{\Sigma}){\mathbf{V}}^{T}{\mathbf{M}}\|_{F}
𝚺𝐕T𝐌F𝚺𝐕T𝐌F(1+ε)P𝐀𝐌F.\displaystyle\leq\|\boldsymbol{\Sigma}{\mathbf{V}}^{T}{\mathbf{M}}\|_{F}\leq\|\boldsymbol{\Sigma}\|\|{\mathbf{V}}^{T}{\mathbf{M}}\|_{F}\leq(1+\varepsilon)\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}.
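Definition 3.8 is easy to check numerically for a properly scaled Gaussian matrix; for instance (illustrative sizes, not taken from the paper), the singular values of a k×m matrix with i.i.d. N(0, 1/m) entries concentrate around 1 when k is much smaller than m:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 10000, 100
A = rng.standard_normal((k, m)) / np.sqrt(m)   # i.i.d. N(0, 1/m) entries

s = np.linalg.svd(A, compute_uv=False)
print(s.min(), s.max())   # close to 1 for k << m, roughly within 1 +/- sqrt(k/m)
```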

The next theorem justifies solving a compressed NMF problem with a simplified regularization term:

Theorem 3.10.

(Approximately orthogonal 𝐀{\mathbf{A}}) Let 𝐗+m×n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} be any matrix and let 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} be ε\varepsilon-approximately orthogonal, with ε0.5\varepsilon\leq 0.5. Let 𝐔0+m×r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, {\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times r} give a solution to the original NMF problem (1.1) of rank rr and 𝐗0=𝐔0𝐕0T{\mathbf{X}}_{0}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}. If 𝐔~,𝐕~\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}} solve the following compressed NMF problem with the same rank rr:

𝐔~,𝐕~=argmin𝐔,𝐕0[𝐀(𝐗𝐔𝐕T)F2+λ𝐔𝐕TF2].\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left[\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}\right]. (3.11)

Then 𝐗~:=𝐔~𝐕~T\tilde{{\mathbf{X}}}:=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T} satisfies

\frac{\left\|{\mathbf{X}}-\left(1+\lambda\right)\tilde{{\mathbf{X}}}\right\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}\leq c\left[\frac{\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\frac{\|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\varepsilon^{2}\right] (3.12)

where c=max(4+54λ,6,48λ)c=\max(4+\frac{5}{4\lambda},6,48\lambda).

Proof.

By optimality, for any matrix 𝐘=𝐔𝐕T{\mathbf{Y}}={\mathbf{U}}{\mathbf{V}}^{T} for some nonnegative 𝐔{\mathbf{U}} and 𝐕{\mathbf{V}} of the appropriate size (to be the scaled version of 𝐗0{\mathbf{X}}_{0} as specified below) we have

𝐀(𝐗𝐗~)F2+λ𝐗~F2𝐀(𝐗𝐘)F2+λ𝐘F2.\|{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|\tilde{{\mathbf{X}}}\|^{2}_{F}\leq\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+\lambda\|{\mathbf{Y}}\|^{2}_{F}.

Approximate orthogonality in the form of (3.10) applied to the matrices 𝐌=𝐗𝐗~{\mathbf{M}}={\mathbf{X}}-\tilde{{\mathbf{X}}} and 𝐌=𝐗𝐘{\mathbf{M}}={\mathbf{X}}-{\mathbf{Y}} allows to orthogonalize this inequality:

(1ε)2𝐐(𝐗𝐗~)F2+λ𝐗~F2(1+ε)2𝐐(𝐗𝐘)F2+λ𝐘F2,(1-\varepsilon)^{2}\|{\mathbf{Q}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|\tilde{{\mathbf{X}}}\|^{2}_{F}\leq(1+\varepsilon)^{2}\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+\lambda\|{\mathbf{Y}}\|^{2}_{F},

where 𝐐{\mathbf{Q}} denotes the k×mk\times m matrix with orthogonal rows such that P𝐀=𝐐T𝐐P_{\mathbf{A}}={\mathbf{Q}}^{T}{\mathbf{Q}}. Indeed, this implies P𝐀𝐌F=𝐐T𝐐𝐌F=𝐐𝐌F\|P_{\mathbf{A}}{\mathbf{M}}\|_{F}=\|{\mathbf{Q}}^{T}{\mathbf{Q}}{\mathbf{M}}\|_{F}=\|{\mathbf{Q}}{\mathbf{M}}\|_{F} for any 𝐌m×n.{\mathbf{M}}\in{\mathbb{R}}^{m\times n}. So,

𝐐(𝐗𝐗~)F2+δ𝐗~F2𝐐(𝐗𝐘)F2+δ𝐘F2+3ε2𝐐(𝐗𝐘)F2.\|{\mathbf{Q}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\delta\|\tilde{{\mathbf{X}}}\|^{2}_{F}\leq\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}+\delta\|{\mathbf{Y}}\|^{2}_{F}+3\varepsilon^{2}\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}. (3.13)

with δ=λ/(1ε)2\delta=\lambda/(1-\varepsilon)^{2}.

We will further rearrange the optimality condition using the following identity based on completion of the square on both sides of (3.13): for any matrices 𝐌1{\mathbf{M}}_{1}, 𝐌2{\mathbf{M}}_{2} of appropriate size,

𝐌1𝐌2F2+δ𝐌2F2\displaystyle\|{\mathbf{M}}_{1}-{\mathbf{M}}_{2}\|_{F}^{2}+\delta\|{\mathbf{M}}_{2}\|_{F}^{2} =𝐌1F2+(1+δ)𝐌2F22𝐌1,𝐌2\displaystyle=\|{\mathbf{M}}_{1}\|_{F}^{2}+(1+\delta)\|{\mathbf{M}}_{2}\|_{F}^{2}-2\langle{\mathbf{M}}_{1},{\mathbf{M}}_{2}\rangle
=δ1+δ𝐌1F2+11+δ𝐌1(1+δ)𝐌2F2.\displaystyle=\frac{\delta}{1+\delta}\|{\mathbf{M}}_{1}\|_{F}^{2}+\frac{1}{1+\delta}\left\|{\mathbf{M}}_{1}-(1+\delta){\mathbf{M}}_{2}\right\|^{2}_{F}.

Using this identity for 𝐌1=𝐐𝐗{\mathbf{M}}_{1}={\mathbf{Q}}{\mathbf{X}} and 𝐌2=𝐐𝐗~{\mathbf{M}}_{2}={\mathbf{Q}}\tilde{{\mathbf{X}}} on the left and 𝐌2=𝐐𝐘{\mathbf{M}}_{2}={\mathbf{Q}}{\mathbf{Y}} on the right of (3.13), we obtain

δ1+δ𝐐𝐗F2+11+δ𝐐(𝐗(1+δ)𝐗~)F2+δP𝐀𝐗~F2\displaystyle\frac{\delta}{1+\delta}\|{\mathbf{Q}}{\mathbf{X}}\|_{F}^{2}+\frac{1}{1+\delta}\|{\mathbf{Q}}({\mathbf{X}}-(1+\delta)\tilde{{\mathbf{X}}})\|^{2}_{F}+\delta\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}
δ1+δ𝐐𝐗F2+11+δ𝐐(𝐗(1+δ)𝐘)F2+δP𝐀𝐘F2+3ε2𝐐(𝐗𝐘)F2.\displaystyle\leq\frac{\delta}{1+\delta}\|{\mathbf{Q}}{\mathbf{X}}\|_{F}^{2}+\frac{1}{1+\delta}\|{\mathbf{Q}}({\mathbf{X}}-(1+\delta){\mathbf{Y}})\|^{2}_{F}+\delta\|P_{\mathbf{A}}^{\perp}{\mathbf{Y}}\|^{2}_{F}+3\varepsilon^{2}\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{Y}})\|^{2}_{F}.

Cancelling common terms, letting 𝐘:=𝐗0/(1+δ){\mathbf{Y}}:={\mathbf{X}}_{0}/(1+\delta), we have

11+δ𝐐(𝐗\displaystyle\frac{1}{1+\delta}\|{\mathbf{Q}}({\mathbf{X}} (1+δ)𝐗~)2F+δ(1+δ)2P𝐀(1+δ)𝐗~2F\displaystyle-(1+\delta)\tilde{{\mathbf{X}}})\|^{2}_{F}+\frac{\delta}{(1+\delta)^{2}}\|P_{\mathbf{A}}^{\perp}(1+\delta)\tilde{{\mathbf{X}}}\|^{2}_{F}
11+δ𝐐(𝐗𝐗0)F2+δP𝐀𝐗01+δF2+3ε2𝐐(𝐗𝐗01+δ)F2=:𝐖.\displaystyle\leq\frac{1}{1+\delta}\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+\delta\|P_{\mathbf{A}}^{\perp}\frac{{\mathbf{X}}_{0}}{1+\delta}\|^{2}_{F}+3\varepsilon^{2}\|{\mathbf{Q}}({\mathbf{X}}-\frac{{\mathbf{X}}_{0}}{1+\delta})\|^{2}_{F}=:{\mathbf{W}}.

To estimate the loss on the uncompressed problem, we use (3.7) with the matrices 𝐗{\mathbf{X}}, (1+δ)𝐗~(1+\delta)\tilde{{\mathbf{X}}} and 𝐐{\mathbf{Q}} to get

𝐗(1+δ)𝐗~F2𝐐(𝐗(1+δ)𝐗~)F2+2P𝐀(1+δ)𝐗~F2+2P𝐀𝐗F2\displaystyle\|{\mathbf{X}}-(1+\delta)\tilde{{\mathbf{X}}}\|^{2}_{F}\leq\|{\mathbf{Q}}({\mathbf{X}}-(1+\delta)\tilde{{\mathbf{X}}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}(1+\delta)\tilde{{\mathbf{X}}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
2(1+δ)2δ𝐖+2P𝐀𝐗F2\displaystyle\quad\quad\quad\quad\quad\quad\leq\frac{2(1+\delta)^{2}}{\delta}{\mathbf{W}}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
2(1+δ)δ𝐐(𝐗𝐗0)F2+2P𝐀𝐗0F2+6ε2δ𝐐((1+δ)𝐗𝐗0)F2+2P𝐀𝐗F2\displaystyle\leq\frac{2(1+\delta)}{\delta}\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}\|^{2}_{F}+\frac{6\varepsilon^{2}}{\delta}\|{\mathbf{Q}}((1+\delta){\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
(2+12ε2δ+2)𝐐(𝐗𝐗0)F2+4P𝐀(𝐗𝐗0)F2+6P𝐀𝐗F2+12ε2δ𝐐𝐗F2\displaystyle\leq\left(\frac{2+12\varepsilon^{2}}{\delta}+2\right)\|{\mathbf{Q}}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+4\|P_{\mathbf{A}}^{\perp}({\mathbf{X}}-{\mathbf{X}}_{0})\|^{2}_{F}+6\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}+12\varepsilon^{2}\delta\|{\mathbf{Q}}{\mathbf{X}}\|_{F}^{2}
(4+54λ)𝐗𝐗0F2+6P𝐀𝐗F2+12ε2λ(1ε)2𝐗F2,\displaystyle\leq\left(4+\frac{5}{4\lambda}\right)\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+6\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}+12\frac{\varepsilon^{2}\lambda}{(1-\varepsilon)^{2}}\|{\mathbf{X}}\|_{F}^{2},

using that δ=λ(1ε)2\delta=\frac{\lambda}{(1-\varepsilon)^{2}} and ε12\varepsilon\leq\frac{1}{2}. ∎
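Like its orthogonal counterpart, the objective (3.11) can be evaluated using only the sketch 𝐀𝐗 and the small factors, via \|{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}=\textrm{Tr}(({\mathbf{U}}^{T}{\mathbf{U}})({\mathbf{V}}^{T}{\mathbf{V}})). A minimal sketch (the function name is ours):

```python
import numpy as np

def compressed_objective_oblivious(AX, A, U, V, lam):
    """Objective (3.11): ||A(X - U V^T)||_F^2 + lam ||U V^T||_F^2 from AX = A @ X."""
    fit = np.linalg.norm(AX - (A @ U) @ V.T, 'fro')**2
    reg = np.sum((U.T @ U) * (V.T @ V))        # ||U V^T||_F^2 via Gram matrices
    return fit + lam * reg
```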

Remark 3.11.

We conclude the section with a discussion of the result of Theorem 3.10. We note that

  • Theorem 3.10 shows it is possible to regularize the compressed NMF problem without the projection operator and to find (1+λ)(1+\lambda)-rescaled factors. Note that the rescaling does not affect the learned nonnegative low-rank structure.

  • The property (3.10) P𝐀𝐌F2𝐀𝐌F2\|P_{\mathbf{A}}{\mathbf{M}}\|^{2}_{F}\sim\|{\mathbf{A}}{\mathbf{M}}\|^{2}_{F} is significantly more relaxed than the standard geometry preservation properties of the form 𝐌F2𝐀𝐌F2\|{\mathbf{M}}\|^{2}_{F}\sim\|{\mathbf{A}}{\mathbf{M}}\|^{2}_{F}, such as Johnson-Lindenstrauss, oblivious subspace embedding, or restricted isometry property. The latter won’t be satisfied for, e.g., random Gaussian matrix 𝐀{\mathbf{A}} and arbitrary nonnegative rank rr matrices 𝐌{\mathbf{M}} (as needed within Theorem 3.10), unless there is no compression and kmk\geq m.

  • The approximate orthogonality property (3.10) is not hard to satisfy with generic random matrices. For example, an i.i.d. Gaussian matrix 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} with each entry having mean 0 and variance 1m\frac{1}{m} is ε\varepsilon-approximately orthogonal with probability 12exp(ε2m/8)1-2\exp(-\varepsilon^{2}m/8) as soon as kmε2/4k\leq m\varepsilon^{2}/4 (by [41, Corollary 5.35]).

  • While it is easy to guarantee approximate orthogonality with generic matrices 𝐀{\mathbf{A}} (not learned from 𝐗{\mathbf{X}}), the term P𝐀𝐗P_{\mathbf{A}}^{\perp}{\mathbf{X}} is still part of the error bound. So, data-oblivious one-sided compression in general is not expected to result in exact recovery, even if the data matrix 𝐗{\mathbf{X}} admits an exact nonnegative factorization.

3.4. Nonnegativity in compression

In the next section, we discuss approaches to solving the compressed nonnegative matrix factorization problems. In particular, we consider variations of the multiplicative updates algorithm to iteratively minimize the objective functions that we have formulated in this section. The multiplicative updates algorithm is valid due to the fact that the matrices involved in the iterative process are nonnegative. This convenient property is destroyed by sketching unless 𝐀T𝐀{\mathbf{A}}^{T}{\mathbf{A}} is an element-wise nonnegative matrix, which is not expected to hold either for approximately orthonormal random sketches or for the data-adapted sketching matrices coming from the randomized rangefinder algorithm. To overcome this issue, it suffices to add some extra penalty terms taking the form

σ𝟏mT(𝐗𝐔𝐕T)2and/orσ(𝐗𝐔𝐕T)𝟏n2,\sigma\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}\quad\text{and/or}\quad\sigma\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2}, (3.14)

where 𝟏m\mathbf{1}_{m} is a vector of all ones in m{\mathbb{R}}^{m}.
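A minimal numerical illustration of why these shifts help (the particular choice of σ below, namely the smallest value that makes the shifted matrix entrywise nonnegative, is one simple option of ours and is not prescribed here):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 500, 30
A = rng.standard_normal((k, m)) / np.sqrt(m)

M = A.T @ A                        # this matrix enters the multiplicative updates
print(M.min())                     # typically negative, breaking the MU step sizes

sigma = max(0.0, -M.min())         # one simple choice of shift
M_shifted = M + sigma * np.ones((m, m))   # = A^T A + sigma 1_m 1_m^T
print(M_shifted.min())             # >= 0 entrywise
```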

Corollary 3.12.

Suppose 𝐗{\mathbf{X}} has an exact nonnegative factorization 𝐗=𝐔0𝐕0T{\mathbf{X}}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}, where {\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times k}, {\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times k} and they are both full-rank, kmin{n,m}k\leq\min\{n,m\}. Let 𝐀1{\mathbf{A}}_{1} and 𝐀2{\mathbf{A}}_{2} be generic random matrices of sizes k\times m and n\times k, respectively. If for some λ1,λ2>0\lambda_{1},\lambda_{2}>0 and σ1,σ20\sigma_{1},\sigma_{2}\geq 0

𝐔~,𝐕~=argmin𝐔,𝐕0L\displaystyle\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}L (𝐗𝐔𝐕T)\displaystyle({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}) (3.15)
+σ1𝟏mT(𝐗𝐔𝐕T)2+σ2(𝐗𝐔𝐕T)𝟏n2,\displaystyle+\sigma_{1}\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}+\sigma_{2}\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2},

where L(𝐗𝐔𝐕T):=𝐀1(𝐗𝐔𝐕T)F2+(𝐗𝐔𝐕T)𝐀2F2+λ1P𝐗𝐀2𝐔𝐕TF2+λ2𝐔𝐕TP𝐀1𝐗F2,L({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}):=\|{\mathbf{A}}_{1}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|_{F}^{2}+\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{A}}_{2}\|_{F}^{2}+\lambda_{1}\|P_{{\mathbf{X}}{\mathbf{A}}_{2}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}^{2}+\lambda_{2}\|{\mathbf{U}}{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\|_{F}^{2}, then 𝐗=𝐔~𝐕~T{\mathbf{X}}=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T}.

Proof.

Note that

min𝐔,𝐕0L(𝐗𝐔𝐕T)\displaystyle\min_{{\mathbf{U}},{\mathbf{V}}\geq 0}L({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}) +σ1𝟏mT(𝐗𝐔𝐕T)2\displaystyle+\sigma_{1}\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}
min𝐔,𝐕0𝟏mT𝐗=𝟏mT𝐔𝐕TL(𝐗𝐔𝐕T)+σ1𝟏mT(𝐗𝐔𝐕T)2\displaystyle\leq\min_{\begin{subarray}{c}{\mathbf{U}},{\mathbf{V}}\geq 0\\ \mathbf{1}_{m}^{T}{\mathbf{X}}=\mathbf{1}_{m}^{T}{\mathbf{U}}{\mathbf{V}}^{T}\end{subarray}}L({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})+\sigma_{1}\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}
=min𝐔,𝐕0𝟏mT𝐗=𝟏mT𝐔𝐕TL(𝐗𝐔𝐕T),\displaystyle=\min_{\begin{subarray}{c}{\mathbf{U}},{\mathbf{V}}\geq 0\\ \mathbf{1}_{m}^{T}{\mathbf{X}}=\mathbf{1}_{m}^{T}{\mathbf{U}}{\mathbf{V}}^{T}\end{subarray}}L({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}),

and similarly for adding the term σ2(𝐗𝐔𝐕T)𝟏nF2\sigma_{2}\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2}_{F}. Then the statement follows directly from Theorem 3.1. ∎

When we do not assume that 𝐗{\mathbf{X}} has nonnegative rank exactly kk, adding a regularizer of the form (3.14) is still possible under an additional condition, essentially imposing that the column sums of 𝐗{\mathbf{X}} and 𝐔𝐕T{\mathbf{U}}{\mathbf{V}}^{T} approximately match if 𝐔,𝐕{\mathbf{U}},{\mathbf{V}} are optimal to (1.1).

Corollary 3.13.

Let 𝐗+m×n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} be a nonnegative matrix and let 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} be a matrix with orthogonal rows. Let 𝐔0+m×r{\mathbf{U}}_{0}\in{\mathbb{R}}_{+}^{m\times r}, {\mathbf{V}}_{0}\in{\mathbb{R}}_{+}^{n\times r} give a solution to the original NMF problem (1.1) of rank rr and 𝐗0=𝐔0𝐕0T{\mathbf{X}}_{0}={\mathbf{U}}_{0}{\mathbf{V}}_{0}^{T}. Additionally assume that 𝟏T(𝐗0𝐗)/𝟏T𝐗ε<1\left\|\mathbf{1}^{T}({\mathbf{X}}_{0}-{\mathbf{X}})/\mathbf{1}^{T}{\mathbf{X}}\right\|\leq\varepsilon<1, where // denotes element-wise division and 𝟏m\mathbf{1}\in{\mathbb{R}}^{m} is the vector of all ones. If λ,σ>0,\lambda,\sigma>0,

𝐔~,𝐕~=argmin𝐔,𝐕0𝐀(𝐗𝐔𝐕T)F2+λP𝐀𝐔𝐕TF2+σ𝟏T(𝐗𝐔𝐕T)2,\tilde{{\mathbf{U}}},\tilde{{\mathbf{V}}}=\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\left\|{\mathbf{A}}\left({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\right)\right\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}, (3.16)

then 𝐗~:=𝐔~𝐕~T\tilde{{\mathbf{X}}}:=\tilde{{\mathbf{U}}}\tilde{{\mathbf{V}}}^{T} satisfies

𝐗𝐗~F2𝐗F2cλ[𝐗𝐗0F2𝐗F2+𝐗P𝐀𝐗F2𝐗F2+ε2].\frac{\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}\leq c_{\lambda}\left[\frac{\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\frac{\|{\mathbf{X}}-P_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}}{\|{\mathbf{X}}\|^{2}_{F}}+\varepsilon^{2}\right].

For example, if λ,ε(0,1/2)\lambda,\varepsilon\in(0,1/2), one can take c_{\lambda}=\max\{6,8/\lambda\}.

Proof.

Let 𝐃n×n{\mathbf{D}}\in{\mathbb{R}}^{n\times n} be the diagonal matrix with diagonal entries given by (𝟏T𝐗/𝟏T𝐗0)(\mathbf{1}^{T}{\mathbf{X}}/\mathbf{1}^{T}{\mathbf{X}}_{0}), so that 𝟏T𝐗0𝐃=𝟏T𝐗\mathbf{1}^{T}{\mathbf{X}}_{0}{\mathbf{D}}=\mathbf{1}^{T}{\mathbf{X}}. Note that 𝐗0𝐃{\mathbf{X}}_{0}{\mathbf{D}} is a feasible (not necessarily optimal) point for (3.16), and so we have

P𝐀(𝐗𝐗~)F2+λP𝐀𝐗~F2\displaystyle\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F} P𝐀(𝐗𝐗~)F2+λP𝐀𝐗~F2+σ𝟏T(𝐗𝐗~)2\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}
P𝐀(𝐗𝐗0𝐃)F2+λP𝐀𝐗0𝐃F2.\displaystyle\leq\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}. (3.17)

Then, for c1=max(2/λ,1),c_{1}=\max(2/\lambda,1),

𝐗𝐗~F2\displaystyle\|{\mathbf{X}}-\tilde{{\mathbf{X}}}\|^{2}_{F} (3.7)P𝐀(𝐗𝐗~)F2+2P𝐀𝐗~F2+2P𝐀𝐗F2\displaystyle\overset{\eqref{eq:triangle}}{\leq}\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
c1(P𝐀(𝐗𝐗~)F2+λP𝐀𝐗~F2)+2P𝐀𝐗F2\displaystyle\leq c_{1}\left(\|P_{\mathbf{A}}({\mathbf{X}}-\tilde{{\mathbf{X}}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}\tilde{{\mathbf{X}}}\|^{2}_{F}\right)+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
(3.4)c1(P𝐀(𝐗𝐗0𝐃)F2+λP𝐀𝐗0𝐃F2)+2P𝐀𝐗F2\displaystyle\overset{\eqref{eq:shifted-optimality}}{\leq}c_{1}\left(\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}\right)+2\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
c1P𝐀(𝐗𝐗0𝐃)F2+2λc1P𝐀(𝐗𝐗0𝐃)F2+(2λc1+2)P𝐀𝐗F2\displaystyle\leq c_{1}\|P_{\mathbf{A}}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+2\lambda c_{1}\|P_{\mathbf{A}}^{\perp}({\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}})\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
c2=max(2λ,1)c1c2𝐗𝐗0𝐃F2+(2λc1+2)P𝐀𝐗F2\displaystyle\overset{c_{2}=\max(2\lambda,1)}{\leq}c_{1}c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}{\mathbf{D}}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
2c1c2𝐗𝐗0F2+2c1c2𝐗0F2𝐈𝐃2+(2λc1+2)P𝐀𝐗F2\displaystyle\leq 2c_{1}c_{2}\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+2c_{1}c_{2}\|{\mathbf{X}}_{0}\|^{2}_{F}\|{\mathbf{I}}-{\mathbf{D}}\|^{2}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}
2c1c2(1+2ε2)𝐗𝐗0F2+4c1c2ε2𝐗F2+(2λc1+2)P𝐀𝐗F2.\displaystyle\leq 2c_{1}c_{2}(1+2\varepsilon^{2})\|{\mathbf{X}}-{\mathbf{X}}_{0}\|^{2}_{F}+4c_{1}c_{2}\varepsilon^{2}\|{\mathbf{X}}\|^{2}_{F}+(2\lambda c_{1}+2)\|P_{\mathbf{A}}^{\perp}{\mathbf{X}}\|^{2}_{F}.

4. Methods that solve compressed problems

In this section we define iterative methods that solve the formulated optimization problems directly, without referring to the original data or any matrices of the large uncompressed size.

4.1. General convergence for sketched multiplicative updates

Multiplicative updates has been one of the most popular algorithms for NMF since its introduction in [22]. In this section we show how to modify the classical multiplicative updates algorithm for the various objectives we have derived in earlier sections.

To this end, we prove a general theorem for multiplicative updates with multiple minimization terms. We will see in Section 4.2 that imposing the nonnegativity conditions on the sums in (4.2) (rather than the stronger conditions on the individual terms) matters, as it allows us to place realistic assumptions on our regularization terms that include orthogonal projection operators.

Theorem 4.1.

Consider an objective function in the generic form

argmin𝐔,𝐕012i=1s𝐀i(𝐗i(𝐀)𝐔𝐕T)F2+12j=1t(𝐗j(𝐁)𝐔𝐕T)𝐁jF2,\operatorname*{arg\,min}_{{\mathbf{U}},{\mathbf{V}}\geq 0}\frac{1}{2}\sum_{i=1}^{s}\|{\mathbf{A}}_{i}({\mathbf{X}}^{({\mathbf{A}})}_{i}-{\mathbf{U}}{\mathbf{V}}^{T})\|_{F}^{2}+\frac{1}{2}\sum_{j=1}^{t}\|({\mathbf{X}}^{({\mathbf{B}})}_{j}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{B}}_{j}\|_{F}^{2}, (4.1)

where all 𝐗i(𝐀),𝐗j(𝐁)m×n{\mathbf{X}}_{i}^{({\mathbf{A}})},{\mathbf{X}}_{j}^{({\mathbf{B}})}\in{\mathbb{R}}^{m\times n}, 𝐀ik×m{\mathbf{A}}_{i}\in{\mathbb{R}}^{k\times m}, 𝐁jn×k{\mathbf{B}}_{j}\in{\mathbb{R}}^{n\times k} and (𝐔,𝐕0)({\mathbf{U}},{\mathbf{V}}\geq 0) means (𝐔+m×r,𝐕+n×r).({\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r}). Let 𝐔{\mathbf{U}} and 𝐕{\mathbf{V}} be m×rm\times r and n×rn\times r matrices respectively. If all six matrices

{𝐔,𝐕,(i=1s𝐀iT𝐀i),(j=1t𝐁j𝐁jT),(i=1s𝐀iT𝐀i𝐗i(𝐀)),(j=1t𝐗j(𝐁)𝐁j𝐁jT)}\displaystyle\left\{{\mathbf{U}},{\mathbf{V}},(\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}),(\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}),(\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}),(\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T})\right\} (4.2)

are entry-wise nonnegative, then the objective (4.1) is nonincreasing under the updates

𝐔\displaystyle{\mathbf{U}} 𝐔i=1s𝐀iT𝐀i𝐗i(𝐀)𝐕+j=1t𝐗j(𝐁)𝐁j𝐁jT𝐕i=1s𝐀iT𝐀i𝐔𝐕T𝐕+j=1t𝐔𝐕T𝐁j𝐁jT𝐕,\displaystyle\leftarrow{\mathbf{U}}\circ\frac{\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}{\mathbf{V}}+\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}{\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sum_{j=1}^{t}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}},
𝐕\displaystyle{\mathbf{V}} 𝐕i=1s(𝐗i(𝐀))T𝐀iT𝐀i𝐔+j=1t𝐁j𝐁jT(𝐗j(𝐁))T𝐔i=1s𝐕𝐔T𝐀iT𝐀i𝐔+j=1t𝐁j𝐁jT𝐕𝐔T𝐔.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{\sum_{i=1}^{s}({\mathbf{X}}^{({\mathbf{A}})}_{i})^{T}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}+\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}({\mathbf{X}}^{({\mathbf{B}})}_{j})^{T}{\mathbf{U}}}{\sum_{i=1}^{s}{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}+\sum_{j=1}^{t}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.
Remark 4.2.

(Implementation considerations) Note that we can compute the matrix 𝐀iT𝐀i𝐔𝐕T𝐕{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}} by the multiplication order given by 𝐀iT((𝐀i𝐔)(𝐕T𝐕)){\mathbf{A}}_{i}^{T}(({\mathbf{A}}_{i}{\mathbf{U}})({\mathbf{V}}^{T}{\mathbf{V}})). This order never requires us to store a matrix of size larger than max(n,m)×max(r,k)\max(n,m)\times\max(r,k). Similar procedures can be used for other terms appearing in the updates of Theorem 4.1.
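As a minimal illustration (in JAX, which we also use for the GPU implementation in Section 5), the multiplication order of Remark 4.2 can be realized as follows; the shapes, random data, and variable names here are arbitrary and only meant to show that no intermediate larger than $\max(n,m)\times\max(r,k)$ is formed.

    import jax.numpy as jnp
    from jax import random

    # Arbitrary illustrative shapes: data is m x n, sketch size k, target rank r.
    m, n, k, r = 5000, 4000, 100, 20
    kA, kU, kV = random.split(random.PRNGKey(0), 3)
    A = random.normal(kA, (k, m))      # sketching matrix A_i
    U = random.uniform(kU, (m, r))     # nonnegative factor U
    V = random.uniform(kV, (n, r))     # nonnegative factor V

    # A naive grouping would form the m x m matrix A.T @ A or the m x n matrix U @ V.T.
    # The grouping A_i^T ((A_i U)(V^T V)) keeps every intermediate small:
    AU = A @ U                         # k x r
    VtV = V.T @ V                      # r x r
    term = A.T @ (AU @ VtV)            # m x r: equals A^T A U V^T V without materializing A^T A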

The proof of the theorem follows the standard majorization argument that justifies the multiplicative updates algorithm for the NMF problem (e.g., [22]); the nonnegativity assumptions above ensure its validity for the sketched problem.

Proof of Theorem 4.1.

Let us consider the step updating the matrix 𝐔+m×r{\mathbf{U}}\in\mathbb{R}^{m\times r}_{+}. Let 𝐮+mr{\mathbf{u}}\in{\mathbb{R}}^{mr}_{+} be the vectorization of 𝐔{\mathbf{U}} denoted as 𝐮=vec(𝐔){\mathbf{u}}=\text{vec}({\mathbf{U}}). Define

\displaystyle{\mathbf{y}}^{(1)}_{i}=\text{vec}({\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}),\quad{\mathbf{W}}^{(1)}_{i}=({\mathbf{V}}\otimes{\mathbf{A}}_{i})\quad\text{ for }i=1,\ldots,s;
\displaystyle{\mathbf{y}}^{(2)}_{j}=\text{vec}({\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}),\quad{\mathbf{W}}^{(2)}_{j}=({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}}_{m\times m})\quad\text{ for }j=1,\ldots,t,

where $\otimes$ denotes the matrix Kronecker product. Define ${\mathbf{y}}\in{\mathbb{R}}^{k(sn+tm)}$ to be all of the vectors ${\mathbf{y}}^{(1)}_{i}$ and ${\mathbf{y}}^{(2)}_{j}$ stacked together vertically. Similarly, ${\mathbf{W}}\in{\mathbb{R}}^{k(sn+tm)\times rm}$ is the vertical stack of all of the matrices ${\mathbf{W}}^{(1)}_{i}$ and ${\mathbf{W}}^{(2)}_{j}$.

Using the mixed Kronecker matrix-vector product property (𝐌1𝐌2)vec(𝐔)=vec(𝐌2𝐔𝐌1T)({\mathbf{M}}_{1}\otimes{\mathbf{M}}_{2})\text{vec}({\mathbf{U}})=\text{vec}({\mathbf{M}}_{2}{\mathbf{U}}{\mathbf{M}}_{1}^{T}) that holds for any appropriately sized matrices 𝐔,𝐌1,𝐌2{\mathbf{U}},{\mathbf{M}}_{1},{\mathbf{M}}_{2}, we can rewrite the objective function (4.1) as

F(𝐮)=12𝐲𝐖𝐮2.F({\mathbf{u}})=\frac{1}{2}\|{\mathbf{y}}-{\mathbf{W}}{\mathbf{u}}\|^{2}.
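As a quick numerical sanity check of the mixed Kronecker matrix-vector identity used in this rewriting, the following small JAX sketch (with arbitrary shapes) verifies it; note that the identity assumes column-major vectorization.

    import jax.numpy as jnp
    from jax import random

    k1, k2, k3 = random.split(random.PRNGKey(0), 3)
    M1 = random.normal(k1, (3, 4))
    M2 = random.normal(k2, (5, 6))
    U = random.normal(k3, (6, 4))
    vec = lambda Z: Z.T.reshape(-1)                  # column-major vectorization
    lhs = jnp.kron(M1, M2) @ vec(U)                  # (M1 kron M2) vec(U)
    rhs = vec(M2 @ U @ M1.T)                         # vec(M2 U M1^T)
    print(bool(jnp.allclose(lhs, rhs, atol=1e-4)))   # True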

Step 1: Define quadratic majorizing function. Consider the function

G(𝐮,𝐮)=F(𝐮)+(𝐮𝐮)TF(𝐮)+12(𝐮𝐮)T𝐊𝐮(𝐮𝐮),G({\mathbf{u}}^{\prime},{\mathbf{u}})=F({\mathbf{u}})+({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}\nabla F({\mathbf{u}})+\frac{1}{2}({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}{\mathbf{K}}_{\mathbf{u}}({\mathbf{u}}^{\prime}-{\mathbf{u}}), (4.3)

where ${\mathbf{K}}_{\mathbf{u}}$ is the diagonal matrix whose diagonal is $({\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}})/{\mathbf{u}}$; recall that $/$ denotes element-wise division. We claim that $G$ majorizes $F$, i.e., $G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}})$ and $G({\mathbf{u}}^{\prime},{\mathbf{u}})\geq F({\mathbf{u}}^{\prime})$. The equality $G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}})$ is clear. We can write

G(𝐮,𝐮)F(𝐮)=12(𝐮𝐮)T(𝐊𝐮𝐖T𝐖)(𝐮𝐮)G({\mathbf{u}}^{\prime},{\mathbf{u}})-F({\mathbf{u}}^{\prime})=\frac{1}{2}({\mathbf{u}}^{\prime}-{\mathbf{u}})^{T}({\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}})({\mathbf{u}}^{\prime}-{\mathbf{u}})

by comparing (4.3) with the Taylor expansion of the quadratic function $F({\mathbf{u}}^{\prime})$ at ${\mathbf{u}}$. So, to check that $G({\mathbf{u}}^{\prime},{\mathbf{u}})\geq F({\mathbf{u}}^{\prime})$, it is sufficient to show that the matrix ${\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}}$ is positive semidefinite. Equivalently, it is sufficient to show that

𝐌=(𝐊u𝐖T𝐖)𝐮𝐮T{\mathbf{M}}=({\mathbf{K}}_{u}-{\mathbf{W}}^{T}{\mathbf{W}})\circ{\mathbf{u}}{\mathbf{u}}^{T}

is positive semidefinite. Indeed, for any vector ${\mathbf{z}}$ of the appropriate size, ${\mathbf{z}}^{T}{\mathbf{M}}{\mathbf{z}}=({\mathbf{z}}\circ{\mathbf{u}})^{T}({\mathbf{K}}_{\mathbf{u}}-{\mathbf{W}}^{T}{\mathbf{W}})({\mathbf{z}}\circ{\mathbf{u}})$; recall that $\circ$ denotes the element-wise product.


Step 2: The matrix ${\mathbf{M}}$ is positive semidefinite. We check positive semidefiniteness of the matrix ${\mathbf{M}}$ directly. Consider any ${\mathbf{v}}\in{\mathbb{R}}^{mr}$. Then

𝐯T𝐌𝐯\displaystyle{\mathbf{v}}^{T}{\mathbf{M}}{\mathbf{v}} =ij𝐯i𝐌ij𝐯j\displaystyle=\sum_{ij}{\mathbf{v}}_{i}{\mathbf{M}}_{ij}{\mathbf{v}}_{j}
=ij𝐮i(𝐖T𝐖)ij𝐮j𝐯i2𝐯i𝐮i(𝐖T𝐖)ij𝐮j𝐯j\displaystyle=\sum_{ij}{\mathbf{u}}_{i}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{j}{\mathbf{v}}_{i}^{2}-{\mathbf{v}}_{i}{\mathbf{u}}_{i}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{j}{\mathbf{v}}_{j}
=ij(𝐖T𝐖)ij𝐮i𝐮j(0.5(𝐯i2+𝐯j2)𝐯i𝐯j)\displaystyle=\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}(0.5({\mathbf{v}}_{i}^{2}+{\mathbf{v}}_{j}^{2})-{\mathbf{v}}_{i}{\mathbf{v}}_{j})
=12ij(𝐖T𝐖)ij𝐮i𝐮j(𝐯i𝐯j)2.\displaystyle=\frac{1}{2}\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}({\mathbf{v}}_{i}-{\mathbf{v}}_{j})^{2}.

Now observe

𝐖T𝐖\displaystyle{\mathbf{W}}^{T}{\mathbf{W}} =i𝐖iT𝐖i+j𝐖jT𝐖j\displaystyle=\sum_{i}{\mathbf{W}}_{i}^{T}{\mathbf{W}}_{i}+\sum_{j}{\mathbf{W}}_{j}^{T}{\mathbf{W}}_{j}
=i(𝐕𝐀i)T(𝐕𝐀i)+j(𝐁jT𝐕𝐈)T(𝐁jT𝐕𝐈)\displaystyle=\sum_{i}({\mathbf{V}}\otimes{\mathbf{A}}_{i})^{T}({\mathbf{V}}\otimes{\mathbf{A}}_{i})+\sum_{j}({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}})^{T}({\mathbf{B}}_{j}^{T}{\mathbf{V}}\otimes{\mathbf{I}})
=i(𝐕T𝐕)(𝐀iT𝐀i)+j(𝐕T𝐁j𝐁jT𝐕)𝐈\displaystyle=\sum_{i}({\mathbf{V}}^{T}{\mathbf{V}})\otimes({\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i})+\sum_{j}({\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}})\otimes{\mathbf{I}}
=(𝐕T𝐕)(i𝐀iT𝐀i)+𝐕T(j𝐁j𝐁jT)𝐕𝐈.\displaystyle=({\mathbf{V}}^{T}{\mathbf{V}})\otimes\big{(}\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}\big{)}+{\mathbf{V}}^{T}\big{(}\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}\big{)}{\mathbf{V}}\otimes{\mathbf{I}}.

Thus, by the entry-wise nonnegativity of 𝐕{\mathbf{V}}, i𝐀iT𝐀i\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}, and j𝐁j𝐁jT\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}, the matrix 𝐖T𝐖{\mathbf{W}}^{T}{\mathbf{W}} is also entrywise nonnegative. Since 𝐔{\mathbf{U}} is also nonnegative,

𝐯T𝐌𝐯=12ij(𝐖T𝐖)ij𝐮i𝐮j(𝐯i𝐯j)20.{\mathbf{v}}^{T}{\mathbf{M}}{\mathbf{v}}=\frac{1}{2}\sum_{ij}({\mathbf{W}}^{T}{\mathbf{W}})_{ij}{\mathbf{u}}_{i}{\mathbf{u}}_{j}({\mathbf{v}}_{i}-{\mathbf{v}}_{j})^{2}\geq 0.

This shows that 𝐌{\mathbf{M}} is positive semidefinite, and therefore, GG majorizes FF.


Step 3: The updates minimize the majorizing function. To conclude, observe that

argmin𝐮G(𝐮,𝐮)=𝐮𝐊𝐮1F(𝐮)=𝐮𝐖T𝐲𝐖T𝐖𝐮.\arg\min_{{\mathbf{u}}^{\prime}}G({\mathbf{u}}^{\prime},{\mathbf{u}})={\mathbf{u}}-{\mathbf{K}}_{\mathbf{u}}^{-1}\nabla F({\mathbf{u}})={\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}}. (4.4)

In matrix form, this corresponds exactly to the update for 𝐔{\mathbf{U}} given by

𝐔i𝐀iT𝐀i𝐗i(𝐀)𝐕+j𝐗j(𝐁)𝐁j𝐁jT𝐕i𝐀iT𝐀i𝐔𝐕T𝐕+j𝐔𝐕T𝐁j𝐁jT𝐕.{\mathbf{U}}\circ\frac{\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}{\mathbf{V}}+\sum_{j}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}{\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sum_{j}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T}{\mathbf{V}}}.

From the majorization property, we have

F(𝐮𝐖T𝐲𝐖T𝐖𝐮)G(𝐮𝐖T𝐲𝐖T𝐖𝐮,𝐮)(4.4)G(𝐮,𝐮)=F(𝐮)F\big{(}{\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}}\big{)}\leq G\big{(}{\mathbf{u}}\circ\frac{{\mathbf{W}}^{T}{\mathbf{y}}}{{\mathbf{W}}^{T}{\mathbf{W}}{\mathbf{u}}},{\mathbf{u}}\big{)}\overset{\eqref{eq:min-g}}{\leq}G({\mathbf{u}},{\mathbf{u}})=F({\mathbf{u}})

and thus, the iterates do not increase the objective.

The updates for ${\mathbf{V}}$ do not increase the objective by a similar argument. The conditions on $(\sum_{i=1}^{s}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i})$ and $(\sum_{j=1}^{t}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T})$ ensure that the matrices ${\mathbf{U}}$ and ${\mathbf{V}}$ remain nonnegative throughout the updates. ∎
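For concreteness, the following small JAX sketch instantiates the generic updates of Theorem 4.1 with $s=t=1$; the entrywise-nonnegative sketches are chosen so that the conditions (4.2) hold automatically, and all sizes and names are our own illustration. (For readability, the products below are not arranged as in Remark 4.2.)

    import jax.numpy as jnp
    from jax import random

    m, n, k, r = 200, 150, 30, 5
    keys = random.split(random.PRNGKey(1), 5)
    X = random.uniform(keys[0], (m, n))          # nonnegative data
    A = random.uniform(keys[1], (k, m))          # entrywise-nonnegative left sketch
    B = random.uniform(keys[2], (n, k))          # entrywise-nonnegative right sketch
    U = random.uniform(keys[3], (m, r)) + 1e-3
    V = random.uniform(keys[4], (n, r)) + 1e-3
    eps = 1e-12                                  # guards against division by zero

    def objective(U, V):
        R = X - U @ V.T
        return 0.5 * (jnp.linalg.norm(A @ R) ** 2 + jnp.linalg.norm(R @ B) ** 2)

    AtA, BBt = A.T @ A, B @ B.T
    for it in range(200):
        U = U * (AtA @ X @ V + X @ BBt @ V) / (AtA @ U @ (V.T @ V) + U @ (V.T @ (BBt @ V)) + eps)
        V = V * (X.T @ AtA @ U + BBt @ X.T @ U) / (V @ (U.T @ AtA @ U) + BBt @ V @ (U.T @ U) + eps)
        if it % 50 == 0:
            print(it, float(objective(U, V)))    # the printed objective is nonincreasing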

4.2. Multiplicative updates for solving regularized compressed problems

Now, we demonstrate how the general framework (4.1) applies to the compressed problems from Section 3.

Corollary 4.3 (Two-sided updates).

Let ${\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n}$ be a nonnegative matrix, and let ${\mathbf{A}}_{1}\in{\mathbb{R}}^{k\times m}$, ${\mathbf{A}}_{2}\in{\mathbb{R}}^{n\times k}$ be generic compression matrices such that ${\mathbf{A}}_{1}{\mathbf{X}}{\mathbf{A}}_{2}$ is invertible. Form ${\mathbf{Q}}_{1}\in{\mathbb{R}}^{m\times r}$ and ${\mathbf{Q}}_{2}\in{\mathbb{R}}^{n\times r}$, the matrices whose columns form orthonormal bases of the column space of ${\mathbf{X}}{\mathbf{A}}_{2}$ and the row space of ${\mathbf{A}}_{1}{\mathbf{X}}$, respectively. For any $\lambda_{1},\lambda_{2}\geq 0$, if

σ1\displaystyle\sigma_{1} max{(𝐀1T𝐀1)},max{(𝐀1T𝐀1+λ1(𝐈𝐐1𝐐1T))},\displaystyle\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}\right)_{-}\},\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\right)_{-}\},
σ2\displaystyle\sigma_{2} max{(𝐀2𝐀2T)},max{(𝐀2𝐀2T+λ2(𝐈𝐐2𝐐2T))},\displaystyle\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}\right)_{-}\},\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\right)_{-}\},

where max\max is taken entry-wise, then the objective

𝐀1(𝐗𝐔𝐕T)F2\displaystyle\|{\mathbf{A}}_{1}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F} +(𝐗𝐔𝐕T)𝐀2F2\displaystyle+\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}){\mathbf{A}}_{2}\|^{2}_{F}
+λ1P𝐗𝐀2𝐔𝐕TF2+λ2𝐔𝐕TP𝐀1𝐗F2\displaystyle+\lambda_{1}\|P_{{\mathbf{X}}{\mathbf{A}}_{2}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\lambda_{2}\|{\mathbf{U}}{\mathbf{V}}^{T}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\|^{2}_{F}
+σ1𝟏mT(𝐗𝐔𝐕T)2+σ2(𝐗𝐔𝐕T)𝟏n2,\displaystyle\quad\quad\quad\quad+\sigma_{1}\|\mathbf{1}_{m}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}+\sigma_{2}\|({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\mathbf{1}_{n}\|^{2},

is nonincreasing under the updates

𝐔\displaystyle{\mathbf{U}} 𝐔𝐀1,σ𝐗𝐕+𝐗𝐀2,σ𝐕(𝐀1,σ+λ1(𝐈𝐐1𝐐1T))𝐔𝐕T𝐕+𝐔𝐕T(𝐀2,σ+λ2(𝐈𝐐2𝐐2T))𝐕\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}_{1,\sigma}{\mathbf{X}}{\mathbf{V}}+{\mathbf{X}}{\mathbf{A}}_{2,\sigma}{\mathbf{V}}}{({\mathbf{A}}_{1,\sigma}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})){\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+{\mathbf{U}}{\mathbf{V}}^{T}({\mathbf{A}}_{2,\sigma}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})){\mathbf{V}}}
𝐕\displaystyle{\mathbf{V}} 𝐕𝐗T𝐀1,σ𝐔+𝐀2,σ𝐗T𝐔𝐕𝐔T(𝐀1,σ+λ1(𝐈𝐐1𝐐1T))𝐔+(𝐀2,σ+λ2(𝐈𝐐2𝐐2T))𝐕𝐔T𝐔,\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}_{1,\sigma}{\mathbf{U}}+{\mathbf{A}}_{2,\sigma}{\mathbf{X}}^{T}{\mathbf{U}}}{{\mathbf{V}}{\mathbf{U}}^{T}({\mathbf{A}}_{1,\sigma}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})){\mathbf{U}}+({\mathbf{A}}_{2,\sigma}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}},

where

𝐀1,σ:=𝐀1T𝐀1+σ1𝟏m𝟏mT and 𝐀2,σ:=𝐀2𝐀2T+σ2𝟏n𝟏nT.{\mathbf{A}}_{1,\sigma}:={\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}\;\text{ and }\;{\mathbf{A}}_{2,\sigma}:={\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}.

Note that Theorem 3.1 and Corollary 3.12 imply that the optimal solution of the objective above is also an optimal solution of the uncompressed NMF problem if ${\mathbf{X}}$ admits an exact nonnegative factorization of rank at most $k$.

Proof.

Consider the setting of Theorem 4.1, with the matrices

\displaystyle\{{\mathbf{A}}_{i}\}_{i=1,2,3}=\{{\mathbf{A}}_{1},\sqrt{\sigma_{1}}\mathbf{1}_{m}^{T},\sqrt{\lambda_{1}}P_{{\mathbf{X}}{\mathbf{A}}_{2}}^{\perp}\}=\{{\mathbf{A}}_{1},\sqrt{\sigma_{1}}\mathbf{1}_{m}^{T},\sqrt{\lambda_{1}}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\},
\displaystyle\{{\mathbf{B}}_{j}\}_{j=1,2,3}=\{{\mathbf{A}}_{2},\sqrt{\sigma_{2}}\mathbf{1}_{n},\sqrt{\lambda_{2}}P_{{\mathbf{A}}_{1}{\mathbf{X}}}^{\perp}\}=\{{\mathbf{A}}_{2},\sqrt{\sigma_{2}}\mathbf{1}_{n},\sqrt{\lambda_{2}}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\},
{𝐗i(𝐀)}i=1,2,3={𝐗j(𝐁)}j=1,2,3={𝐗,𝐗,𝟎}.\displaystyle\{{\mathbf{X}}^{({\mathbf{A}})}_{i}\}_{i=1,2,3}=\{{\mathbf{X}}^{({\mathbf{B}})}_{j}\}_{j=1,2,3}=\{{\mathbf{X}},{\mathbf{X}},\mathbf{0}\}.

Clearly, we have that the matrices 𝐗i(𝐀){\mathbf{X}}^{({\mathbf{A}})}_{i} and 𝐗i(𝐁){\mathbf{X}}^{({\mathbf{B}})}_{i} are nonnegative. Then to apply Theorem 4.1, we must check that i𝐀iT𝐀i\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}, i𝐀iT𝐀i𝐗i(𝐀)\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}^{({\mathbf{A}})}_{i}, j𝐁j𝐁jT\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} and j𝐗j(𝐁)𝐁j𝐁jT\sum_{j}{\mathbf{X}}^{({\mathbf{B}})}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} are entry-wise nonnegative. First, we calculate

i𝐀iT𝐀i\displaystyle\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i} =𝐀1T𝐀1+σ1𝟏m𝟏mT+λ1(𝐈𝐐1𝐐1T),\displaystyle={\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T}),
j𝐁j𝐁jT\displaystyle\sum_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} =𝐀2𝐀2T+σ2𝟏n𝟏nT+λ2(𝐈𝐐2𝐐2T).\displaystyle={\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T}).

Thus to ensure entry-wise nonnegativity of both sums, we need

σ1max{(𝐀1T𝐀1+λ1(𝐈𝐐1𝐐1T))},σ2max{(𝐀2𝐀2T+λ2(𝐈𝐐2𝐐2T))}.\sigma_{1}\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\lambda_{1}({\mathbf{I}}-{\mathbf{Q}}_{1}{\mathbf{Q}}_{1}^{T})\right)_{-}\},\;\sigma_{2}\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\lambda_{2}({\mathbf{I}}-{\mathbf{Q}}_{2}{\mathbf{Q}}_{2}^{T})\right)_{-}\}.

Similarly, for

i𝐀iT𝐀i𝐗i\displaystyle\sum_{i}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}_{i} =(𝐀1T𝐀1+σ1𝟏m𝟏mT)𝐗,\displaystyle=({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}+\sigma_{1}\mathbf{1}_{m}\mathbf{1}_{m}^{T}){\mathbf{X}},
j𝐗j𝐁j𝐁jT\displaystyle\sum_{j}{\mathbf{X}}_{j}{\mathbf{B}}_{j}{\mathbf{B}}_{j}^{T} =𝐗(𝐀2𝐀2T+σ2𝟏n𝟏nT).\displaystyle={\mathbf{X}}({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}+\sigma_{2}\mathbf{1}_{n}\mathbf{1}_{n}^{T}).

we need to ensure σ1max{(𝐀1T𝐀1)}\sigma_{1}\geq\max\{\left({\mathbf{A}}_{1}^{T}{\mathbf{A}}_{1}\right)_{-}\} and σ2max{(𝐀2𝐀2T)}\sigma_{2}\geq\max\{\left({\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}\right)_{-}\}. ∎
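For illustration, a minimal JAX sketch of the resulting two-sided updates in the special case $\lambda_{1}=\lambda_{2}=0$ (the setting we use in the experiments of Section 5) is given below. It assumes the sketches ${\mathbf{Y}}_{1}={\mathbf{A}}_{1}{\mathbf{X}}$, ${\mathbf{Y}}_{2}={\mathbf{X}}{\mathbf{A}}_{2}$ and the marginals $\mathbf{1}^{T}{\mathbf{X}}$, ${\mathbf{X}}\mathbf{1}$ were precomputed in one pass over the data; the function and argument names are ours, and the nonnegativity corrections are formed densely only for readability (their negative entries can be scanned blockwise).

    import jax.numpy as jnp
    from jax import random

    def two_sided_mu(Y1, Y2, row_sums, col_sums, A1, A2, r, iters=500, seed=0, eps=1e-12):
        # Y1 = A1 @ X (k x n), Y2 = X @ A2 (m x k),
        # row_sums = 1_m^T X (1 x n), col_sums = X @ 1_n (m x 1).
        k, n = Y1.shape
        m = Y2.shape[0]
        # minimal nonnegativity corrections (sigma_1, sigma_2) of Corollary 4.3
        s1 = jnp.maximum(0.0, -(A1.T @ A1)).max()
        s2 = jnp.maximum(0.0, -(A2 @ A2.T)).max()
        ku, kv = random.split(random.PRNGKey(seed))
        U = random.uniform(ku, (m, r)) + 1e-3
        V = random.uniform(kv, (n, r)) + 1e-3
        om, on = jnp.ones((m, 1)), jnp.ones((n, 1))
        for _ in range(iters):
            VtV = V.T @ V
            num_U = A1.T @ (Y1 @ V) + s1 * om @ (row_sums @ V) \
                    + Y2 @ (A2.T @ V) + s2 * col_sums @ (on.T @ V)
            den_U = A1.T @ (A1 @ (U @ VtV)) + s1 * om @ ((om.T @ U) @ VtV) \
                    + U @ ((V.T @ A2) @ (A2.T @ V)) + s2 * U @ ((V.T @ on) @ (on.T @ V))
            U = U * num_U / (den_U + eps)
            UtU = U.T @ U
            A1U = A1 @ U
            num_V = Y1.T @ A1U + s1 * row_sums.T @ (om.T @ U) \
                    + A2 @ (Y2.T @ U) + s2 * on @ (col_sums.T @ U)
            den_V = V @ (A1U.T @ A1U) + s1 * V @ ((U.T @ om) @ (om.T @ U)) \
                    + A2 @ ((A2.T @ V) @ UtU) + s2 * on @ ((on.T @ V) @ UtU)
            V = V * num_V / (den_V + eps)
        return U, V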

Corollary 4.4 (One-sided updates for orthogonal 𝐀{\mathbf{A}}).

If ${\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n}$, the sketching matrix ${\mathbf{A}}\in{\mathbb{R}}^{k\times m}$ has orthonormal rows, $\lambda\in[0,1]$, and the nonnegativity correction term satisfies $\sigma\geq\max\{({\mathbf{A}}^{T}{\mathbf{A}})_{-}\}$, then the objective

𝐀(𝐗𝐔𝐕T)F2+λP𝐀𝐔𝐕TF2+σ𝟏T(𝐗𝐔𝐕T)2\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2} (4.5)

with respect to the variables 𝐔+m×r,𝐕+n×r{\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r} is nonincreasing under the updates

𝐔\displaystyle{\mathbf{U}} 𝐔𝐀T𝐀𝐗𝐕+σ𝟏𝟏T𝐗𝐕(1λ)𝐀T𝐀𝐔𝐕T𝐕+σ𝟏𝟏T𝐔𝐕T𝐕+λ𝐔𝐕T𝐕\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}{\mathbf{V}}}{(1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}} (4.6)
𝐕\displaystyle{\mathbf{V}} 𝐕𝐗T𝐀T𝐀𝐔+σ𝐗T𝟏𝟏T𝐔(1λ)𝐕𝐔T𝐀T𝐀𝐔+σ𝐕𝐔T𝟏𝟏T𝐔+λ𝐕𝐔T𝐔.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{X}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}}{(1-\lambda){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{V}}{\mathbf{U}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.

Here, $\mathbf{1}=(1,\ldots,1)\in{\mathbb{R}}^{m}$. Note that Corollary 3.13 claims that the optimal solution of (4.5) yields a good solution of the uncompressed NMF problem as long as both the original NMF error $\min_{{\mathbf{U}},{\mathbf{V}}\geq 0}\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}$ and the term $\|P^{\perp}_{\mathbf{A}}{\mathbf{X}}\|^{2}_{F}$ are small.

Proof.

In the language of Theorem 4.1, “𝐗i{\mathbf{X}}_{i}” matrices are {𝐗,𝟎,𝐗}\{{\mathbf{X}},\mathbf{0},{\mathbf{X}}\} and “𝐀i{\mathbf{A}}_{i}” matrices are {𝐀,λ(𝐈𝐀T𝐀),σ𝟏T}\{{\mathbf{A}},\sqrt{\lambda}({\mathbf{I}}-{\mathbf{A}}^{T}{\mathbf{A}}),\sqrt{\sigma}\mathbf{1}^{T}\} for i=1,2,3i=1,2,3 respectively. One can see that

i=13𝐀iT𝐀i=(1λ)𝐀T𝐀+λ𝐈+σ𝟏𝟏T and i=13𝐀iT𝐀i𝐗i=𝐀T𝐀𝐗+σ𝟏𝟏T𝐗.\sum_{i=1}^{3}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}=(1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}+\lambda{\mathbf{I}}+\sigma\mathbf{1}\mathbf{1}^{T}\;\text{ and }\;\sum_{i=1}^{3}{\mathbf{A}}_{i}^{T}{\mathbf{A}}_{i}{\mathbf{X}}_{i}={\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}.

These matrices are entry-wise nonnegative if σmax{(𝐀T𝐀)}\sigma\geq\max\{\left({\mathbf{A}}^{T}{\mathbf{A}}\right)_{-}\} and λ[0,1]\lambda\in[0,1]. Theorem 4.1 applies to justify that the objective (4.5) is nonincreasing under the updates (4.6). ∎
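A minimal JAX sketch of the one-sided updates (4.6) is given below; it is our own illustration, assuming ${\mathbf{A}}$ is obtained as the transposed Q-factor of a Gaussian matrix (so it has orthonormal rows) and that only the sketch ${\mathbf{A}}{\mathbf{X}}$ and the column sums $\mathbf{1}^{T}{\mathbf{X}}$ are retained after one pass over the data. The $m\times m$ product ${\mathbf{A}}^{T}{\mathbf{A}}$ is formed once here only to pick $\sigma$; in practice its negative entries can be scanned blockwise.

    import jax.numpy as jnp
    from jax import random

    def one_sided_mu(X, k, r, lam=0.1, iters=500, seed=0, eps=1e-12):
        m, n = X.shape
        keys = random.split(random.PRNGKey(seed), 3)
        A = jnp.linalg.qr(random.normal(keys[0], (m, k)))[0].T   # k x m, orthonormal rows
        sigma = jnp.maximum(0.0, -(A.T @ A)).max()               # sigma >= max{(A^T A)_-}
        Y = A @ X                       # k x n sketch: the only pass over X
        s = jnp.ones((1, m)) @ X        # 1 x n vector of column sums of X
        U = random.uniform(keys[1], (m, r)) + 1e-3
        V = random.uniform(keys[2], (n, r)) + 1e-3
        one = jnp.ones((m, 1))
        for _ in range(iters):
            VtV = V.T @ V
            num_U = A.T @ (Y @ V) + sigma * one @ (s @ V)
            den_U = (1 - lam) * A.T @ (A @ (U @ VtV)) \
                    + sigma * one @ ((one.T @ U) @ VtV) + lam * U @ VtV
            U = U * num_U / (den_U + eps)
            AU = A @ U
            num_V = Y.T @ AU + sigma * s.T @ (one.T @ U)
            den_V = (1 - lam) * V @ (AU.T @ AU) \
                    + sigma * V @ ((U.T @ one) @ (one.T @ U)) + lam * V @ (U.T @ U)
            V = V * num_V / (den_V + eps)
        return U, V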

The proof of the next corollary, for sketching matrices that are not necessarily orthogonal, is similar to the one above and is omitted for brevity.

Corollary 4.5 (One-sided updates for nonorthogonal sketching matrices).

If 𝐗+m×n{\mathbf{X}}\in{\mathbb{R}}_{+}^{m\times n} and 𝐀k×m{\mathbf{A}}\in{\mathbb{R}}^{k\times m} is an arbitrary matrix, λ[0,1]\lambda\in[0,1] and the nonnegativity correction term σmax((𝐀T𝐀))\sigma\geq\max(({\mathbf{A}}^{T}{\mathbf{A}})_{-}), then the objective

𝐀(𝐗𝐔𝐕T)F2+λ𝐔𝐕TF2+σ𝟏T(𝐗𝐔𝐕T)2\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}+\sigma\|\mathbf{1}^{T}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}

with respect to the variables 𝐔+m×r,𝐕+n×r{\mathbf{U}}\in{\mathbb{R}}_{+}^{m\times r},{\mathbf{V}}\in{\mathbb{R}}_{+}^{n\times r} is nonincreasing under the updates

𝐔\displaystyle{\mathbf{U}} 𝐔𝐀T𝐀𝐗𝐕+σ𝟏𝟏T𝐗𝐕𝐀T𝐀𝐔𝐕T𝐕+σ𝟏𝟏T𝐔𝐕T𝐕+λ𝐔𝐕T𝐕\displaystyle\leftarrow{\mathbf{U}}\circ\frac{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{X}}{\mathbf{V}}}{{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\sigma\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}}
𝐕\displaystyle{\mathbf{V}} 𝐕𝐗T𝐀T𝐀𝐔+σ𝐗T𝟏𝟏T𝐔𝐕𝐔T𝐀T𝐀𝐔+σ𝐕𝐔T𝟏𝟏T𝐔+λ𝐕𝐔T𝐔.\displaystyle\leftarrow{\mathbf{V}}\circ\frac{{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{X}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}}{{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\sigma{\mathbf{V}}{\mathbf{U}}^{T}\mathbf{1}\mathbf{1}^{T}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}}.

Here, 𝟏\mathbf{1} is a vector of all ones in m{\mathbb{R}}^{m}.

4.3. Solving compressed problems with projected gradient descent

Multiplicative updates is a popular approach to finding nonnegative factorizations due to its simplicity and good convergence properties. However, other standard methods, such as alternating nonnegative least squares, hierarchical alternating least squares, and projected gradient descent, can also be run on the compressed problems. For comparison, and as an additional example, we consider the projected gradient descent (GD) method on compressed data.

For an arbitrary loss function $L({\mathbf{U}},{\mathbf{V}})$, nonnegative projected GD can be defined as

𝐔\displaystyle{\mathbf{U}} (𝐔α𝐔L(𝐔,𝐕))+\displaystyle\leftarrow({\mathbf{U}}-\alpha\nabla_{\mathbf{U}}L({\mathbf{U}},{\mathbf{V}}))_{+}
𝐕\displaystyle{\mathbf{V}} (𝐕α𝐕L(𝐔,𝐕))+,\displaystyle\leftarrow({\mathbf{V}}-\alpha\nabla_{\mathbf{V}}L({\mathbf{U}},{\mathbf{V}}))_{+},

where $\alpha$ is the step size. For concreteness, we give the updates for one of our formulated objective functions. The projected gradient descent updates for the objective $\|{\mathbf{A}}({\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T})\|^{2}_{F}+\lambda\|P_{\mathbf{A}}^{\perp}{\mathbf{U}}{\mathbf{V}}^{T}\|^{2}_{F}$ (as in Theorem 3.4) are

\displaystyle{\mathbf{U}}\leftarrow\left({\mathbf{U}}-\alpha\left((1-\lambda){\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}+\lambda{\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{V}}-{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{X}}{\mathbf{V}}\right)\right)_{+} (4.7)
\displaystyle{\mathbf{V}}\leftarrow\left({\mathbf{V}}-\alpha\left((1-\lambda){\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}+\lambda{\mathbf{V}}{\mathbf{U}}^{T}{\mathbf{U}}-{\mathbf{X}}^{T}{\mathbf{A}}^{T}{\mathbf{A}}{\mathbf{U}}\right)\right)_{+}.

We can similarly derive updates for our other objective functions. A disadvantage of this method is that it comes with no guarantee of convergence, or even of a nonincreasing objective. Empirically, we see (Figure 3 below) that on some datasets projected GD shows competitive performance, and that its convergence is indeed not monotonic, unlike the convergence of the regularized compressed MU algorithm.
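A minimal JAX sketch of one step of (4.7) is shown below, under the same assumptions as the MU sketch above (orthonormal rows of ${\mathbf{A}}$, stored sketch ${\mathbf{Y}}={\mathbf{A}}{\mathbf{X}}$); the names and default step size are our own illustration.

    import jax.numpy as jnp

    def projected_gd_step(U, V, Y, A, lam=0.1, alpha=1e-3):
        # Y = A @ X is the stored sketch; X itself is never touched.
        AU = A @ U                                  # k x r
        grad_U = (1 - lam) * A.T @ (AU @ (V.T @ V)) + lam * U @ (V.T @ V) - A.T @ (Y @ V)
        U = jnp.maximum(U - alpha * grad_U, 0.0)    # project onto the nonnegative orthant
        AU = A @ U
        grad_V = (1 - lam) * V @ (AU.T @ AU) + lam * V @ (U.T @ U) - Y.T @ AU
        V = jnp.maximum(V - alpha * grad_V, 0.0)
        return U, V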

5. Experiments

We experiment with three datasets coming from various domains. The 20 Newsgroups dataset ("20News") [30] is a standard dataset for text classification and topic modeling tasks. It is a collection of articles divided into a total of 20 subtopics of the general topics of religion, sales, science, sports, tech, and politics. The Olivetti faces dataset ("Faces") [32] is a standard image dataset containing grayscale facial images. It is often used in the literature as an example for different factorization methods, including NMF [47, 48]. Finally, we construct a synthetic dataset with random data of nonnegative rank exactly 20. Specifically, we let ${\mathbf{U}}$ and ${\mathbf{V}}$ be $1000\times 20$ matrices whose entries are i.i.d. standard lognormal random variables and define ${\mathbf{X}}={\mathbf{U}}{\mathbf{V}}^{T}$. The dimensions of the datasets are reported in Table 1.

Dataset $n$ $m$
Synthetic 1000 1000
Faces [32] 400 4096
20News [30] 11314 101322
Table 1. Dimensions of all datasets studied.
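For reference, the synthetic matrix can be generated with a few lines of JAX (the random seed is arbitrary):

    import jax.numpy as jnp
    from jax import random

    key_u, key_v = random.split(random.PRNGKey(0))
    U0 = jnp.exp(random.normal(key_u, (1000, 20)))   # entries are standard lognormal
    V0 = jnp.exp(random.normal(key_v, (1000, 20)))
    X = U0 @ V0.T                                    # 1000 x 1000, nonnegative rank <= 20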

All experiments were run on a cluster with four 2.6 GHz Intel Skylake CPU cores and an NVIDIA V100 GPU. Compressed methods were implemented using the JAX library to take advantage of GPU acceleration. The uncompressed methods that we compare against were not implemented with GPU acceleration, since in applications the full data matrix may be too large to store on a GPU. In our case, the 20News data matrix was too large to store as a dense matrix on our GPU.

To measure the convergence of the methods, besides the interpretability of the topics or images, we use the relative error metric $\|{\mathbf{X}}-{\mathbf{U}}{\mathbf{V}}^{T}\|_{F}/\|{\mathbf{X}}\|_{F}$ and the scale-invariant cosine similarity metric (normalized dot product) between ${\mathbf{U}}{\mathbf{V}}^{T}$ and ${\mathbf{X}}$, defined as $\langle{\mathbf{X}},{\mathbf{U}}{\mathbf{V}}^{T}\rangle/(\|{\mathbf{X}}\|_{F}\|{\mathbf{U}}{\mathbf{V}}^{T}\|_{F})$.
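In code, these two metrics can be computed as below (a small sketch; for a very large ${\mathbf{X}}$ one would evaluate ${\mathbf{U}}{\mathbf{V}}^{T}$ blockwise or on a subsample rather than forming it densely):

    import jax.numpy as jnp

    def relative_error(X, U, V):
        return jnp.linalg.norm(X - U @ V.T) / jnp.linalg.norm(X)

    def cosine_similarity(X, U, V):
        W = U @ V.T
        return jnp.sum(X * W) / (jnp.linalg.norm(X) * jnp.linalg.norm(W))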

5.1. Synthetic data: exact recovery from compressed measurements is achievable

The synthetic data has nonnegative rank exactly $k=20$. The theoretical results suggest the possibility of exact recovery from the compressed data in two cases: (a) if two-sided compression is employed (Theorem 3.1), or (b) if the compression matrix is such that the projection of the data onto its orthogonal complement $P_{{\mathbf{A}}}^{\perp}{\mathbf{X}}$ is zero, for example, if the compression matrix is learned via the randomized rangefinder algorithm (one-sided data-adapted compression, Corollary 3.6). Two-sided compression requires twice as much memory for the same sketch size, but works with generic (data-oblivious) measurements, which can be advantageous for various reasons: they do not require access to the data or another pass over it.

Figure 1. NMF recovery of the synthetic data with MU from full data (Uncompressed); from compressed data with one-sided data-adapted sketches using only 4% of the original memory (One-sided); and with two-sided Gaussian sketches using 8% of the original memory (Two-sided). All three methods achieve recovery within $10^{-3}$ relative error.

In Figure 1, we compare the nonnegative factors found by MU from the full data ${\mathbf{X}}$, from the one-sided data-adapted sketches, and from the two-sided oblivious (i.i.d. Gaussian) linear measurements. We take the target rank $r=20$ and the sketch size $k=20$ (so the sketching matrices have shape $20\times 1000$). In this case, the matrix ${\mathbf{X}}$ has one million elements, whereas the total number of elements in ${\mathbf{A}}$ and ${\mathbf{X}}{\mathbf{A}}$ combined is only forty thousand. This represents a memory ratio of 4% for the one-sided method. For the two-sided method, the total number of elements in ${\mathbf{A}}_{1}$, ${\mathbf{A}}_{2}$, ${\mathbf{A}}_{1}{\mathbf{X}}$, and ${\mathbf{X}}{\mathbf{A}}_{2}$ is eighty thousand, representing a memory ratio of 8%.

We employ the compressed MU algorithm as in Corollary 4.4 with $\lambda=0.1$ and as in Corollary 4.3 with $\lambda=0$. The parameter $\sigma$ is chosen to be the minimal value satisfying $\sigma\geq\max\{({\mathbf{A}}^{T}{\mathbf{A}})_{-}\}$, and similarly for ${\mathbf{A}}_{1}$, ${\mathbf{A}}_{2}$, $\sigma_{1}$, and $\sigma_{2}$ in the two-sided case. We see that the algorithms achieve a near-exact fit, eventually reducing the relative error below $10^{-3}$, which was our stopping criterion in this case. The one-sided data-adapted method and the oblivious two-sided method seem to converge at a rate similar to the uncompressed method, albeit after a "burn-in" phase.

5.2. Synthetic data: effect of compression size

In Figure 2, we compare the compressed multiplicative updates for different amounts of compression. The target rank is $r=20$ and the sketch size $k$ varies. We employ the compressed MU algorithm as in Corollaries 4.3, 4.4, and 4.5. We choose the compression matrix ${\mathbf{A}}$ to be (a) a random Gaussian matrix, (b) a random matrix with orthogonal rows, or (c) obtained via the randomized SVD procedure (3.5). For the two-sided method (d), we choose both compression matrices to be random Gaussian matrices and $\lambda=0$. For the one-sided compression, we take $\lambda=0.1$. We report the cosine similarity metric.

We show that on an "easy" synthetic problem, oblivious one-sided measurements can be used: the compressed MU algorithm results in a good, although not perfect, recovery (Figure 2 (a,b)), with the limiting loss depending on the amount of compression. The one-sided data-adapted compression and the two-sided random compression attain exact reconstruction at different rates depending on the amount of compression (Figure 2 (c,d)). Not surprisingly, in every case, less compression leads to faster or better convergence.

Figure 2. NMF recovery of the synthetic data with MU. MU on the uncompressed data achieves the limiting similarity of 1 after a few (but slow) iterations; only the limiting level is shown as "Uncompressed". Displays (c,d) show that MU on data-adapted and random two-sided sketched data also tends to the limiting similarity of 1. Across all methods, less compression (larger $k$) improves convergence.

5.3. Real-world data: performance comparison

To showcase the proposed methods, we include recovery from two-sided and one-sided sketches. For the two-sided method, we solve a simplified version of the two-sided objective (3.15) with $\lambda=0$. We take ${\mathbf{A}}_{1}$ and ${\mathbf{A}}_{2}$ to be random Gaussian matrices in the oblivious case, or construct them according to the randomized rangefinder procedure as in Corollary 3.6 in the adaptive case. For the data-adapted one-sided method, we take $\lambda=0.1$ and solve the problem (4.5). Finally, we include the recovery via projected gradient descent (GD), as described in Section 4.3, with a step size of $\alpha=0.001$.

We also compare our proposed methods with the "NMF with random projections" approach proposed in [42] and in [36]. These works adapted the updates of [4] to the compressed NMF setting, resulting in the updates:

\displaystyle{\mathbf{U}}\leftarrow{\mathbf{U}}\circ\sqrt{\frac{({\mathbf{Y}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{+}+({\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{-}}{({\mathbf{Y}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{-}+({\mathbf{U}}{\mathbf{V}}^{T}{\mathbf{A}}_{2}{\mathbf{A}}_{2}^{T}{\mathbf{V}})_{+}}} (5.1)

and similarly for ${\mathbf{V}}$; see equations (3.28) and (3.29) of [42]. The work [42] proposes to use the updates (5.1) with ${\mathbf{A}}_{1}$ and ${\mathbf{A}}_{2}$ chosen to be Gaussian matrices; we denote these updates "WL" in the legends. The work [36] instead proposed choosing the matrices ${\mathbf{A}}_{1}$ and ${\mathbf{A}}_{2}$ using the randomized rangefinder procedure [14], as in our Corollary 3.6; we denote this method "TS" in the legends. Observe that these iterations are approximately two times faster than those of MU.

Figure 3. (a,c): recovery from random Gaussian measurements, averaged over 5 runs; our two-sided MU methods lead to better convergence than WL [42]. (b,d): data-adapted methods with sketching matrices obtained with the randomized rangefinder algorithm (as in Corollary 3.6); our two-sided multiplicative updates perform slightly better than TS [36] after enough iterations.

In Figure 3 (a,b), we study the performance of compressed nonnegative matrix factorization on the Faces dataset with target rank $r=6$ and sketch size $k=20$. In terms of memory, the data matrix ${\mathbf{X}}$ contains 1638400 elements, whereas the total number of elements in ${\mathbf{A}}$ and ${\mathbf{X}}{\mathbf{A}}$ is 89920, representing a memory ratio of approximately 5% in the one-sided case and 10% with the two-sided compression. On the Faces dataset, our methods, especially the data-adapted versions, attain almost the quality of MU on the uncompressed data while using only 5% of the memory (10% in the two-sided case).

In Figure 3 (c,d), we study the performance of compressed nonnegative matrix factorization on the 20News dataset with target rank $r=20$ and sketch size $k=100$. 20News is a "hard" dataset for NMF: even on the full uncompressed dataset, NMF achieves only about 0.2 cosine similarity (which can nevertheless be enough for meaningful topic modeling), while our compressed MU from data-adapted measurements achieves higher than 0.17 cosine similarity using only 2% of the memory required for the uncompressed MU in the one-sided case. Indeed, the number of elements in ${\mathbf{X}}$ (including zeros) is 1146357108, while the total number of elements in ${\mathbf{Y}}_{1}$, ${\mathbf{Y}}_{2}$, $\mathbf{1}^{T}{\mathbf{X}}$, ${\mathbf{X}}\mathbf{1}$, ${\mathbf{A}}_{1}$, ${\mathbf{A}}_{2}$, ${\mathbf{U}}$, and ${\mathbf{V}}$ is 25005192. On the 20News dataset, our compressed MU from data-adapted measurements thus attains 85% of the similarity using only 2% of the memory (4% in the two-sided case) compared to the uncompressed NMF problem.

Since it might be infeasible to even run the uncompressed problem from a memory perspective, we do not specifically focus on time performance here. However, we note that while the uncompressed NMF typically requires fewer iterations to learn the factorization, its iterations are considerably slower. In Figure 3 (c,d), it would take several hours to run the uncompressed experiment for 60,000 iterations, while the other methods take at most several minutes, so we show only the first $10^{3}$ iterations of uncompressed NMF. For the Faces dataset (Figure 3 (b)), it took 7 sec to run compressed MU and 6 sec to run GD for 60,000 iterations, and we can see that 10 times fewer iterations would have been enough for approximately the same quality. The uncompressed method took 8 sec for the plotted $10^{3}$ iterations, so at least 8 min would be required to run it for 60,000 iterations.

5.4. Real-world data: interpretable low rank decompositions

An important property of nonnegative low-rank factorizations is that they produce interpretable components. In Figure 4, we briefly demonstrate that the components learned from the compressed data are also interpretable. That is, we show the columns of the fitted ${\mathbf{V}}$ matrix reshaped back to $64\times 64$ images, in the same setup as in Figure 3 (b) for the one-sided data-adapted measurements.

Figure 4. Six "representative" faces from the Faces dataset, learned from the compressed dataset of size $\sim 5\%$ of the initial data. A data-adapted compression matrix ${\mathbf{A}}$ is used.

5.5. Real-world data: choosing regularization parameter λ\lambda

We have chosen the regularization parameter $\lambda=0.1$ for the one-sided experiments above. Here, we demonstrate that it is important, empirically as well as theoretically, to add a nonzero regularization term in the one-sided compression case. In Figure 5, we consider the compressed MU algorithm from one-sided data-adapted measurements on the 20News data with $k=100$ and $r=20$. We can see that regularization can have a beneficial effect on performance, and $\lambda=0$ compromises the convergence. At the same time, too large a $\lambda$ can slow down the convergence or result in a slightly worse limiting loss.

Figure 5. Effect of the regularization parameter $\lambda$ on the MU algorithm (4.6). The 20News dataset is compressed with data-adapted one-sided measurements; $\sigma$ is chosen to be the minimal value satisfying $\sigma\geq\max\{({\mathbf{A}}^{T}{\mathbf{A}})_{-}\}$. The absence of regularization compromises convergence, and too strong regularization results in a higher limiting loss.

6. Conclusion and future directions

In this paper, we propose several formulations of the NMF problem that (a) work using only compressed initial data, where the compression is done with linear maps (sketches) that access the initial data only once or twice; (b) have optimal solutions that are provably compatible with the optimal solutions of the uncompressed NMF problem; and (c) are supplemented with memory-efficient algorithms that solve the compressed problems without returning to the initial large data or forming any matrices of the original size. The convergence of these algorithms is proved in the form standard for NMF-related algorithms, that is, by showing that the objective function (which we provably connect to the original uncompressed objective) does not increase under the updates. We supplement the theoretical results with experiments showing comparable nonnegative factorization performance using only $\sim 5\%$ of the initial data, on synthetic, text, and image datasets.

There are multiple avenues of future work stemming from our approach. For the two-sided measurements, we currently do not have a theorem that holds for data matrices with approximately low nonnegative rank, as in the one-sided case. The experimental evidence clearly suggests that similar, more general results should hold in the two-sided case. Also, it would be interesting to explain theoretically why the two-sided compression is less sensitive to the regularization (in practice, we take $\lambda=0$ in the two-sided experiments, which significantly simplifies the updates from Corollary 4.3).

Then, it is important to study scalable versions of other nonnegative factorization algorithms besides multiplicative updates and projected gradient descent. We focused on multiplicative updates because of their relative popularity and simplicity, but other methods may be better adapted to sketched problems. A related question is to obtain theoretical guarantees for the methods proposed in [36], which empirically show comparable performance and typically faster convergence than our proposed algorithms.

Further, it is natural and meaningful to extend the framework to compressed versions of higher-order (tensor) problems. It has been recently shown [20, 39] that nonnegative tensor factorizations result in more interpretable decompositions for naturally higher-order data, such as temporal or multi-agent data, than their matrix NMF counterparts. At the same time, the tensor methods are even more computationally demanding and would benefit from scalable approximations. Scalable versions of other NMF-based algorithms, such as semi-supervised versions [1] or sparse NMF [20, 17], are also of interest.

References

  • [1] M. Ahn, R. Grotheer, J. Haddock, L. Kassab, A. Kryshchenko, K. Leonard, S. Li, A. Madushani, T. Merkh, D. Needell, E. Sizikova, and C. Wang. Semi-supervised nonnegative matrix factorization models for topic modeling in learning tasks. In Proc. 53rd Asilomar Conference on Signals, Systems and Computers, 2020.
  • [2] Y. Chen, H. Zhang, R. Liu, Z. Ye, and J. Lin. Experimental explorations on short text topic mining between LDA and NMF based schemes. Knowledge-Based Systems, 163:1–13, 2019.
  • [3] M. A. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE Journal of Selected Topics in Signal Processing, 10(4):608–622, 2016.
  • [4] C. H. Ding, T. Li, and M. I. Jordan. Convex and semi-nonnegative matrix factorizations. IEEE transactions on pattern analysis and machine intelligence, 32(1):45–55, 2008.
  • [5] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós. Faster least squares approximation. Numerische mathematik, 117(2):219–249, 2011.
  • [6] N. B. Erichson, A. Mendible, S. Wihlborn, and J. N. Kutz. Randomized nonnegative matrix factorization. Pattern Recognition Letters, 104:1–7, 2018.
  • [7] H. Fawzi and P. A. Parrilo. Lower bounds on nonnegative rank via nonnegative nuclear norms. Mathematical Programming, 153:41–66, 2015.
  • [8] M. Fazel, E. Candes, B. Recht, and P. Parrilo. Compressed sensing and robust recovery of low rank matrices. In 2008 42nd Asilomar Conference on Signals, Systems and Computers, pages 1043–1047. IEEE, 2008.
  • [9] X.-R. Feng, H.-C. Li, R. Wang, Q. Du, X. Jia, and A. Plaza. Hyperspectral unmixing based on nonnegative matrix factorization: A comprehensive review. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15:4414–4436, 2022.
  • [10] M. Ghashami, E. Liberty, J. M. Phillips, and D. P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762–1792, 2016.
  • [11] N. Gillis. The why and how of nonnegative matrix factorization. Regularization, Optimization, Kernels, and Support Vector Machines, 12, 2014.
  • [12] N. Gillis and F. Glineur. Nonnegative factorization and the maximum edge biclique problem. arXiv preprint arXiv:0810.4225, 2008.
  • [13] R. Grotheer, L. Huang, Y. Huang, A. Kryshchenko, O. Kryshchenko, P. Li, X. Li, E. Rebrova, K. Ha, and D. Needell. COVID-19 literature topic-based search via hierarchical NMF. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020. Association for Computational Linguistics, 2020.
  • [14] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288, 2011.
  • [15] C. A. Haselby, M. A. Iwen, D. Needell, M. Perlmutter, and E. Rebrova. Modewise operators, the tensor restricted isometry property, and low-rank tensor recovery. Applied and Computational Harmonic Analysis, 66:161–192, 2023.
  • [16] K. Hayashi, S. G. Aksoy, G. Ballard, and H. Park. Randomized algorithms for symmetric nonnegative matrix factorization. arXiv preprint arXiv:2402.08134, 2024.
  • [17] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of machine learning research, 5(9), 2004.
  • [18] R. Huang, X. Li, and L. Zhao. Spectral–spatial robust nonnegative matrix factorization for hyperspectral unmixing. IEEE Transactions on Geoscience and Remote Sensing, 57(10):8235–8254, 2019.
  • [19] M. A. Iwen, D. Needell, E. Rebrova, and A. Zare. Lower memory oblivious (tensor) subspace embeddings with fewer random bits: modewise methods for least squares. SIAM Journal on Matrix Analysis and Applications, 42(1):376–416, 2021.
  • [20] L. Kassab, A. Kryshchenko, H. Lyu, D. Molitor, D. Needell, E. Rebrova, and J. Yuan. Sparseness-constrained nonnegative tensor factorization for detecting topics at different time scales. Frontiers Applied Mathematics and Statistics, accepted.
  • [21] D. Kuang and H. Park. Fast rank-2 nonnegative matrix factorization for hierarchical document clustering. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 739–747, 2013.
  • [22] D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. Advances in neural information processing systems, 13, 2000.
  • [23] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. nature, 401(6755):788–791, 1999.
  • [24] H. Lee, J. Yoo, and S. Choi. Semi-supervised nonnegative matrix factorization. IEEE Signal Processing Letters, 17(1):4–7, 2009.
  • [25] Y. Li, H. L. Nguyen, and D. P. Woodruff. Turnstile streaming algorithms might as well be linear sketches. In Proceedings of the forty-sixth annual ACM symposium on Theory of computing, pages 174–183, 2014.
  • [26] M. W. Mahoney et al. Randomized algorithms for matrices and data. Foundations and Trends® in Machine Learning, 3(2):123–224, 2011.
  • [27] Y. Qian, C. Tan, D. Ding, H. Li, and N. Mamoulis. Fast and secure distributed nonnegative matrix factorization. IEEE Transactions on Knowledge and Data Engineering, 2020.
  • [28] Y. Qian, C. Tan, N. Mamoulis, and D. W. Cheung. DSANLS: Accelerating distributed nonnegative matrix factorization via sketching. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 450–458, 2018.
  • [29] G. Raskutti and M. W. Mahoney. A statistical perspective on randomized sketching for ordinary least-squares. Journal of Machine Learning Research, 17(213):1–31, 2016.
  • [30] J. Rennie. 20 Newsgroups, 2008.
  • [31] A. K. Saibaba and A. Miedlar. Randomized low-rank approximations beyond Gaussian random matrices. arXiv preprint arXiv:2308.05814, 2023.
  • [32] F. Samaria and A. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of 1994 IEEE workshop on applications of computer vision, pages 138–142. IEEE, 1994.
  • [33] T. Sarlos. Improved approximation algorithms for large matrices via random projections. In 2006 47th annual IEEE symposium on foundations of computer science (FOCS’06), pages 143–152. IEEE, 2006.
  • [34] C. Shao and T. Höfer. Robust classification of single-cell transcriptome data by nonnegative matrix factorization. Bioinformatics, 33(2):235–242, 2017.
  • [35] V. Sharan, K. S. Tai, P. Bailis, and G. Valiant. Compressed factorization: Fast and accurate low-rank factorization of compressively-sensed data. In International Conference on Machine Learning, pages 5690–5700. PMLR, 2019.
  • [36] M. Tepper and G. Sapiro. Compressed nonnegative matrix factorization is fast and accurate. IEEE Transactions on Signal Processing, 64(9):2269–2283, 2016.
  • [37] J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher. Practical sketching algorithms for low-rank matrix approximation. SIAM Journal on Matrix Analysis and Applications, 38(4):1454–1485, 2017.
  • [38] S. A. Vavasis. On the complexity of nonnegative matrix factorization. SIAM Journal on Optimization, 20(3):1364–1377, 2010.
  • [39] J. Vendrow, J. Haddock, and D. Needell. Neural nonnegative CP decomposition for hierarchical tensor analysis. In 2021 55th Asilomar Conference on Signals, Systems, and Computers, pages 1340–1347. IEEE, 2021.
  • [40] J. Vendrow, J. Haddock, E. Rebrova, and D. Needell. On a guided nonnegative matrix factorization. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3265–3269. IEEE, 2021.
  • [41] R. Vershynin. Compressed Sensing: Theory and Applications, chapter Introduction to the non-asymptotic analysis of random matrices. Cambridge University Press, 2011.
  • [42] F. Wang and P. Li. Efficient nonnegative matrix factorization with random projections. In Proceedings of the 2010 SIAM International Conference on Data Mining, pages 281–292. SIAM, 2010.
  • [43] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends® in Theoretical Computer Science, 10(1–2):1–157, 2014.
  • [44] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert. A fast randomized algorithm for the approximation of matrices. Applied and Computational Harmonic Analysis, 25(3):335–366, 2008.
  • [45] F. Yahaya, M. Puigt, G. Delmaire, and G. Roussel. Gaussian compression stream: Principle and preliminary results, 2020.
  • [46] F. Yahaya, M. Puigt, G. Delmaire, and G. Roussel. Random projection streams for (weighted) nonnegative matrix factorization. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3280–3284. IEEE, 2021.
  • [47] T. Zhang, B. Fang, Y. Y. Tang, G. He, and J. Wen. Topology preserving non-negative matrix factorization for face recognition. IEEE Transactions on Image Processing, 17(4):574–584, 2008.
  • [48] Y. Zhao, H. Wang, and J. Pei. Deep non-negative matrix factorization architecture based on underlying basis images learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(6):1897–1913, 2019.
  • [49] G. Zhou, A. Cichocki, and S. Xie. Fast nonnegative matrix/tensor factorization based on low-rank approximation. IEEE Transactions on Signal Processing, 60(6):2928–2940, 2012.