
Multi-Task System Identification of Similar Linear Time-Invariant Dynamical Systems

Yiting Chen, Ana M. Ospina, Fabio Pasqualetti, and Emiliano Dall’Anese Y. Chen, A. M. Ospina, and E. Dall’Anese are with the Department of Electrical, Computer and Energy Engineering, University of Colorado Boulder; emails: {yiting.chen-1, ana.ospina, emiliano.dallanese}@colorado.edu. F. Pasqualetti is with the Department of Mechanical Engineering, University of California Riverside; email: fabiopas@engr.ucr.edu. This work was supported in part by the National Science Foundation (NSF) through the awards 1941896 and 1926829, and by the Army Research Office (ARO) through the award W911NF1910360.
Abstract

This paper presents a system identification framework – inspired by multi-task learning – to jointly estimate the dynamics of a given number of linear time-invariant (LTI) systems by leveraging structural similarities across the systems. In particular, we consider LTI systems that model networked systems with similar connectivity, or LTI systems with small differences in their matrices. The system identification task involves the minimization of the least-squares (LS) fit for the individual systems, augmented with a regularization function that enforces structural similarities. The proposed method is particularly suitable when the recorded trajectories for one or more of the LTI systems are not sufficiently rich, leading to ill-conditioning of LS methods. We analyze the performance of the proposed method when the matrices of the LTI systems feature a common sparsity pattern (i.e., similar connectivity), and provide simulations based on real data for the estimation of brain dynamics. We show that the proposed method requires a significantly smaller number of fMRI scans to achieve error levels similar to those of the LS.

I Introduction

System identification is a core task where the model of dynamical systems is estimated based on observed inputs and states [1, 2]. In particular, identification of linear time-invariant (LTI) systems is a well-investigated problem that has recently received renewed attention due to lines of research in the context of data-driven control and optimization (see, for example, the representative works in [3, 4, 5, 6, 7, 8]).

When the observation of the state is noise-free, the LTI system matrices can be estimated by leveraging the Willems’ Fundamental Lemma, provided that the recorded trajectory satisfies the persistency of excitation (PE) condition as discussed in, e.g., [9, 10]. On the other hand, when process noise or disturbances enter the LTI system, several existing works focus on the asymptotic and finite time estimation errors and sample complexity of the least squares (LS) estimator; see, for example, the representative works [11, 12, 13, 14, 15, 16] and pertinent references therein. Additionally, regularized system identification methods are investigated in, e.g., [17, 18, 2]; a low-order linear system identification via regularized regression is considered in [19]. These regularized identification methods allow one to add a prior on the system matrices, and to strike a balance between LS fit and model complexity [20, 21].

The performance of the LS estimator hinges on the availability of a recorded trajectory that is sufficiently rich to render the LS well conditioned. In this paper, we are interested in cases where the LS method is ill-conditioned. In particular, we consider the task of estimating the system matrices of $N>1$ LTI systems when we do not have sufficiently long (and sufficiently rich) recorded trajectories for at least one of the systems (or for some of the systems). Accordingly, the question posed in this paper is as follows: is it possible to leverage “similarities” among the $N$ systems to obtain accurate estimates of the system matrices, even if the LS is ill-conditioned? In particular, if one has only a few measurements for the $i$-th system, can one use recorded data from the other LTI systems to improve the estimation error?

In this direction, [22] considered estimating the matrices of a linear system from samples generated by a “similar” one; in [22], an LTI system is considered “similar” if its matrices are perturbed versions of a given matrix. Recently, [23] considered a setup where the matrix norm of the difference between the matrices of LTI systems is small. In this paper, we expand the notion of “structural similarity” to account for additional properties that the $N$ systems may have in common, and propose a new system identification approach that leverages and cross-fertilizes core tools investigated in the context of multi-task learning [24, 25, 26], statistical learning [20], and regularized identification methods [2, 27].

We first consider the case where the $N$ LTI systems model networked systems with a similar connectivity (sub-)graph; this implies that the matrices of the LTI systems feature a common sparsity pattern (i.e., the system matrices have zeros in a common set of entries). With this model, we formulate the multi-task system identification task as a regularized LS problem where we minimize the LS fit for each of the LTI systems plus a regularization function that enforces a common sparsity pattern [28, 29]. By appropriately tuning (typically via cross-validation [21]) the weight assigned to the regularization function, one can find a balance between fitting the recorded data and model complexity. We analyze the estimation error of the proposed multi-task system identification approach, with respect to the true matrices of the LTI systems and with respect to an “oracle;” the latter represents the best achievable estimation when considering a (group) sparse model, under a given model compatibility condition [30].

Next, we explain how the proposed multi-task system identification can be adjusted to account for additional structural similarities. In particular, we provide approaches to deal with cases where the matrices feature a small heterogeneity (i.e., the matrix differences are small, as in [23]), and where some of the matrices of the LTI systems can be expressed as linear combinations of each other. In these cases, we resort to regularization functions that penalize large matrix deviations and functions that are inspired by nuclear norm minimization [31, 32, 33].

We demonstrate the effectiveness of the proposed multi-task system identification method using: (i) synthetic LTI systems that feature structural similarities, and (ii) real data from the Human Connectome Project (HCP), where blood-oxygen-level-dependent (BOLD) signals are obtained from resting-state functional magnetic resonance imaging (fMRI) [34, 35]. For the latter, we show that the proposed method requires a significantly smaller number of fMRI scans to achieve the same error as the LS by simply assuming that the underlying functional or structural connectivity of brain parcellations is similar across subjects. We also consider the case where only a few fMRI readings are available for one subject, showing the ability to “transfer information” from the dynamics of the other subjects.

II Preliminaries and System Identification Setup

Consider $N$ linear time-invariant (LTI) systems¹

¹Notation: We denote by $\mathbb{N}$ and $\mathbb{R}$ the set of natural numbers and the set of real numbers, respectively, and define $[n]=\{1,2,\ldots,n\}$. Given $S\subset[n]$, $|S|$ is the cardinality of $S$. We let $^{\top}$ denote transposition. For a column vector $x\in\mathbb{R}^{n}$, $\|x\|_{2}$ is the Euclidean norm and $\|x\|_{1}$ denotes the $\ell_{1}$ norm; for a matrix $X\in\mathbb{R}^{n\times m}$, $\|X\|_{F}$ denotes the Frobenius norm and $\|X\|_{*}$ the nuclear norm. Moreover, $(x)_{i}$ refers to the entry $i$ of the vector $x$, $(X)_{ij}$ to the entry $(i,j)$ of the matrix $X$, and $\operatorname{vec}(X)$ is the $mn\times 1$ vector stacking the columns of $X$. Given a differentiable function $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$, $\nabla f(x)$ denotes the gradient of $f$ at $x$ (taken to be a column vector). Given a closed convex set $C\subseteq\mathbb{R}^{n}$, $\textrm{proj}_{C}:\mathbb{R}^{n}\to\mathbb{R}^{n}$ denotes the Euclidean projection onto $C$, namely $\textrm{proj}_{C}(y):=\arg\min_{v\in C}\|y-v\|$. Given a lower-semicontinuous convex function $g:\mathbb{R}^{n}\to\mathbb{R}$, the proximal operator is defined as $\textrm{prox}_{\lambda g}(y):=\arg\min_{x\in\mathbb{R}^{n}}g(x)+\frac{1}{2\lambda}\|x-y\|_{2}^{2}$.

$$x_{i}(t+1)=A_{i}x_{i}(t)+B_{i}u_{i}(t)+w_{i}(t),\quad x_{i}(0)\in\mathbb{R}^{n}, \tag{1}$$

with $i\in[N]$ the system index and $t\in\mathbb{N}$ the time index, $A_{i}\in\mathbb{R}^{n\times n}$ and $B_{i}\in\mathbb{R}^{n\times p}$, and where $x_{i}(t)\in\mathbb{R}^{n}$, $u_{i}(t)\in\mathbb{R}^{p}$, and $w_{i}(t)\in\mathbb{R}^{n}$ are the state, input, and process noise, respectively, of the $i$th system. Assume that, for each system, the input $u_{i}(t)$ and state $x_{i}(t)$ can be measured, and $B_{i}\in\mathbb{R}^{n\times p}$ is known; on the other hand, the system matrix $A_{i}$ is unknown and the disturbance $w_{i}(t)$ cannot be measured.²

²We note that, while we focus on the estimation of the matrix $A_{i}$ for notational simplicity and to streamline exposition, the techniques presented in the paper are straightforwardly applicable to the case where both $A_{i}$ and $B_{i}$ are estimated from data.

For the $i$-th system, suppose that one has access to one trajectory $\{x_{i}(\tau),u_{i}(\tau)\}_{\tau=1}^{P_{i}+1}$, for some $P_{i}\in\mathbb{N}$, of the states and the inputs. With these measurements, the system matrices can be estimated using the following LS criterion:

$$\min_{A_{i}\in\mathbb{R}^{n\times n}}\ \mathcal{L}_{i}(A_{i}), \tag{2}$$

where $\mathcal{L}_{i}(A_{i}):=\sum_{\tau=1}^{P_{i}}\|x_{i}(\tau+1)-A_{i}x_{i}(\tau)-B_{i}u_{i}(\tau)\|_{2}^{2}$; the LS problem (2) is solved for each of the $N$ systems independently. The LS estimator (2) has been extensively studied in the literature, especially when the recorded data render the LS (2) well conditioned [13, 12, 14, 11, 19].
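For concreteness, a minimal NumPy sketch of the per-system LS estimator (2) is reported below; the array layout and the function name `ls_estimate` are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def ls_estimate(x, u, B):
    """Per-system LS estimate of A_i from one recorded trajectory, as in (2).

    x : (P+1, n) array with states x_i(1), ..., x_i(P+1)
    u : (P, p)   array with inputs u_i(1), ..., u_i(P)
    B : (n, p)   known input matrix B_i
    """
    X = x[:-1]                # regressors x_i(1), ..., x_i(P)
    Y = x[1:] - u @ B.T       # targets x_i(tau+1) - B_i u_i(tau)
    # Solve min_A ||Y - X A^T||_F^2; lstsq returns a minimum-norm solution
    # when X is rank deficient (i.e., when (2) is ill-conditioned).
    A_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A_T.T
```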

In this paper, we are interested in cases where the LS problem (2) is ill-conditioned for some of the $N$ LTI systems; this may be due to recorded trajectories that are not sufficiently rich, or simply not long enough. In this case, the question we pose pertains to whether it is possible to leverage “similarities” among the $N$ systems to obtain accurate estimates of the system matrices, even if the LS is ill-conditioned for some systems. We consider the following structural similarities across the LTI systems:

  • (s1) The matrices $A_{1},\ldots,A_{N}$ have zeros in the same entries; i.e., $(A_{1})_{ij}=(A_{2})_{ij}=\ldots=(A_{N})_{ij}=0$ for some entries $(i,j)$.

  • (s2) For any pair $A_{i},A_{j}$, $i,j\in[N]$, there exists $\epsilon>0$ such that $\|A_{i}-A_{j}\|_{F}^{2}\leq\epsilon$.

  • (s3) For a subset of systems $i\in\mathcal{C}$, $\mathcal{C}\subseteq[N]$, there exist coefficients $\{\alpha_{ij}\in\mathbb{R}\}$ such that $A_{i}=\sum_{j=1,j\neq i}^{N}\alpha_{ij}A_{j}$.

We note that (s1) naturally models LTI systems that describe the dynamics of networked systems with similar connectivity; as a concrete example, (s1) emerges from a similar functional or structural connectivity of the brain network across different individuals [34, 35]. Similarly, (s2) models the case where the norm of the matrix difference $A_{i}-A_{j}$ is small; this is in line with the models considered in [22, 23]. Finally, (s3) models the case where the matrix $A_{i}$ of the $i$-th system can be expressed as a linear combination of some of the other matrices $\{A_{j}\}_{j=1,j\neq i}^{N}$; as an example, this model may be applicable to traffic flows and mobility-on-demand services (see, e.g., [36]), where the LTI systems (1) model the evolution of the density of vehicles in given geographical areas over given periods of the day.

Figure 1: Example of networked systems with similar connectivity. The two matrices $A_{1}$ and $A_{2}$ feature zeros on a common set of entries.

Given the models (s1)-(s3), we consider estimating the matrices $\{A_{i}\}_{i\in[N]}$ jointly by solving the following system identification problem:

$$\min_{\{A_{k}\}_{k\in[N]}}\ \sum_{i=1}^{N}\mathcal{L}_{i}(A_{i})+\lambda\mathcal{R}(A_{1},\ldots,A_{N}), \tag{3}$$

where we recall that $\mathcal{L}_{i}(A_{i})$ is the LS fit for the $i$th system and, in the spirit of regularized regression methods [20, 2, 27], $(A_{1},\ldots,A_{N})\mapsto\mathcal{R}(A_{1},\ldots,A_{N})$ is a lower-semicontinuous convex function that promotes the prior specified by (s1)–(s3), and $\lambda>0$ is a tuning parameter. Problem (3) is inspired by multi-task learning methods [24, 25, 26, 37], where learning tasks are performed simultaneously (in our case, the LS fitting) while exploiting commonalities in the parameters that are learned [26, Definition 1]. In the ensuing sections, we will explain specific choices for the regularization function $\mathcal{R}(A_{1},\ldots,A_{N})$; we start with the case where the $N$ LTI systems feature a common connectivity.

III Systems with similar connectivity

III-A Multi-task System Identification

In this section, we consider the case where the system matrices $A_{1},\ldots,A_{N}$ have zeros in a common set of entries; i.e., $(A_{1})_{ij}=(A_{2})_{ij}=\ldots=(A_{N})_{ij}=0$ for a subset of indices $i\in[n]$ and $j\in[n]$. In the statistical learning literature, this structural similarity leads to a setup where the unknowns $\{A_{1},\ldots,A_{N}\}$ feature group sparsity [28, 29]. Leveraging the technical approach of [28, 29], we then formulate the multi-task system identification problem for LTI systems with similar connectivity as follows:

$$\min_{\{A_{k}\}_{k\in[N]}}\ \sum_{i=1}^{N}\sum_{\tau=1}^{P_{i}}\|x_{i}(\tau+1)-A_{i}x_{i}(\tau)-B_{i}u_{i}(\tau)\|_{2}^{2}+\lambda\sum_{i=1}^{n}\sum_{j=1}^{n}\|[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]\|_{2}, \tag{4}$$

where $\lambda\geq 0$ and the function $\mathcal{R}(A_{1},\ldots,A_{N})=\sum_{i=1}^{n}\sum_{j=1}^{n}\|[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]\|_{2}$ is utilized to enforce group sparsity; that is, at a solution of (4), for each entry $(i,j)$ the whole vector $[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]$ is either set to zero or left nonzero. The parameter $\lambda$ in (3) strikes a balance between the LS fit (in our case, the LS fit for the individual LTI systems) and the number of vectors $[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]$ that are set to zero (as shown shortly).
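As a reading aid, the group regularizer in (4) can be evaluated in a few lines of NumPy; the stacking of the system matrices along a third axis is our own illustrative choice.

```python
import numpy as np

def group_reg(A_list):
    """Group-sparsity regularizer in (4): for each entry position (i, j),
    the Euclidean norm of the vector [(A_1)_ij, ..., (A_N)_ij], summed."""
    A = np.stack(A_list, axis=-1)            # shape (n, n, N)
    return float(np.sum(np.linalg.norm(A, axis=-1)))
```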

Problem (4) is an unconstrained convex program; given the composite cost, we consider a proximal-gradient method (with line search) for solving (4) (see, e.g., [38, 39]). The proximal-gradient method is tabulated as Algorithm 1.

Algorithm 1 Proximal-gradient method for systems with similar connectivity

Given: $\hat{A}_{1}^{(0)},\ldots,\hat{A}_{N}^{(0)}$, $\eta^{(0)}$, $\beta\in(0,1)$, and $\lambda$.

Repeat for $m=0,1,2,\ldots$ until convergence:

[S1] $\alpha\leftarrow\eta^{(m)}$.

[S2] Proximal-gradient step with line search:

  [S2.1] $Z_{i}=\hat{A}_{i}^{(m)}-\alpha\nabla\mathcal{L}_{i}(\hat{A}_{i}^{(m)})$, $i\in[N]$.

  [S2.2] Let $z_{jk}:=[(Z_{1})_{jk},(Z_{2})_{jk},\ldots,(Z_{N})_{jk}]$. For all $j,k\in[n]$, compute
$$y_{jk}=\frac{z_{jk}}{\|z_{jk}\|_{2}}\max(\|z_{jk}\|_{2}-\alpha\lambda,0). \tag{5}$$

  [S2.3] Form the matrices $\{Y_{\ell}\}_{\ell\in[N]}$ as $(Y_{\ell})_{jk}=(y_{jk})_{\ell}$.

  [S2.4] Break if
$$\sum_{i=1}^{N}\mathcal{L}_{i}(Y_{i})\leq\sum_{i=1}^{N}\left(\mathcal{L}_{i}(\hat{A}_{i}^{(m)})+\langle\nabla\mathcal{L}_{i}(\hat{A}_{i}^{(m)}),\,Y_{i}-\hat{A}_{i}^{(m)}\rangle+\frac{1}{2\alpha}\|Y_{i}-\hat{A}_{i}^{(m)}\|_{F}^{2}\right).$$

  [S2.5] Update $\alpha\leftarrow\beta\alpha$ and return to [S2.1].

[S3] $\eta^{(m+1)}\leftarrow\alpha$, $\hat{A}_{i}^{(m+1)}\leftarrow Y_{i}$, $i\in[N]$.

We note that steps [S2.2]–[S2.3] are in fact a closed-form expression for the proximal map:

$$\{Y_{i}\}_{i\in[N]}=\textrm{prox}_{\alpha\lambda\mathcal{R}}(\{Z_{i}\}_{i\in[N]}), \tag{6}$$

and it involves $n^{2}$ parallel computations as in (5). The convergence behavior of Algorithm 1 can be readily analyzed by leveraging the results in [38, Chapter 2]. Moreover, Algorithm 1 can be converted into a “standard” proximal-gradient method if the line search is not performed [40].
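A minimal NumPy sketch of one proximal-gradient iteration (gradient step followed by the group soft-thresholding in (5)–(6)) is given below; the fixed step size, the data layout (matching the earlier LS sketch), and the function names are our own illustrative choices, and the line search of Algorithm 1 is omitted.

```python
import numpy as np

def grad_ls(A, x, u, B):
    """Gradient of L_i(A) = sum_t ||x(t+1) - A x(t) - B u(t)||_2^2."""
    X, Y = x[:-1], x[1:] - u @ B.T           # regressors and targets
    return -2.0 * (Y - X @ A.T).T @ X

def prox_group(Z_list, thr):
    """Group soft-thresholding (5)-(6), applied entry-wise across systems."""
    Z = np.stack(Z_list, axis=-1)            # shape (n, n, N)
    norms = np.linalg.norm(Z, axis=-1, keepdims=True)
    scale = np.maximum(norms - thr, 0.0) / np.maximum(norms, 1e-12)
    Y = Z * scale
    return [Y[..., k] for k in range(Y.shape[-1])]

def mt_prox_gradient_step(A_list, data, lam, alpha):
    """One iteration: Z_i = A_i - alpha * grad L_i(A_i), then the group prox.
    `data` is a list of (x, u, B) tuples, one per system."""
    Z_list = [A - alpha * grad_ls(A, x, u, B) for A, (x, u, B) in zip(A_list, data)]
    return prox_group(Z_list, alpha * lam)
```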

From the thresholding operation (5), it is clear that increasing λ\lambda has the effect of forcing a higher number of entries of the system matrices to be zero. Unfortunately, tuning λ\lambda is not an easy task, and cross-validation procedures are typically utilized to find the value of λ\lambda such that the estimated matrices yield the lowest error on test data; see, for example [21, 20].
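As an illustration of this tuning procedure, the sketch below performs a simple holdout validation over a grid of candidate $\lambda$ values; the grid, the split, the iteration budget, and the step size are arbitrary choices made for the example, and `mt_prox_gradient_step` refers to the illustrative routine sketched above.

```python
import numpy as np

def fit(A0_list, data, lam, alpha=1e-3, iters=500):
    """Run the (line-search-free) proximal-gradient sketch for a fixed lambda."""
    A_list = [A.copy() for A in A0_list]
    for _ in range(iters):
        A_list = mt_prox_gradient_step(A_list, data, lam, alpha)
    return A_list

def test_error(A_list, test_data):
    """Sum of LS residuals on held-out trajectories."""
    return sum(np.linalg.norm(x[1:] - x[:-1] @ A.T - u @ B.T) ** 2
               for A, (x, u, B) in zip(A_list, test_data))

def tune_lambda(A0_list, train_data, test_data, grid=(1e-2, 1e-1, 1e0, 1e1)):
    """Pick the lambda with the lowest held-out error over a coarse grid."""
    errors = {lam: test_error(fit(A0_list, train_data, lam), test_data)
              for lam in grid}
    return min(errors, key=errors.get)
```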

III-B Analysis

In this section, we analyze the performance of the multi-task system identification method. The performance of regression problems with sparsity-enforcing regularization terms is oftentimes compared against an “oracle” that represents the best achievable estimation when considering a (group) sparse model, under a given model compatibility condition [30]. We will provide error bounds with respect to both the oracle and the true system matrices.

To simplify the notation, we outline the results for the case where $P_{i}=P$ for all $i\in[N]$ (though similar results hold when the $\{P_{i}\}$ are different). Let $A^{S}\in\mathbb{R}^{n\times n}$ be the matrix formed by selecting the entries of $A\in\mathbb{R}^{n\times n}$ indexed by $S\subset[n]\times[n]$ and setting the other entries to zero. For $N$ matrices $A_{i}\in\mathbb{R}^{n\times n}$, $i\in[N]$, define for brevity $\|\{A_{i}\}_{i\in[N]}\|_{2,1}=\sum_{i=1}^{n}\sum_{j=1}^{n}\|[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]\|_{2}$. With this notation in place, we first state the main assumptions and outline the compatibility condition associated with the group sparse model [30, Chapter 8].

Assumption 1

The disturbances $\{w_{i}(t)\}$ have i.i.d. Gaussian entries $\mathscr{N}(0,\sigma^{2})$. $\Box$

Definition 1 ([30])

Let $S\subset[n]\times[n]$ and $S^{c}:=([n]\times[n])\setminus S$. We say that the compatibility condition holds for the index set $S$ if, for any $\{A_{i}\in\mathbb{R}^{n\times n}\}_{i\in[N]}$ with $\|\{A_{i}^{S^{c}}\}_{i\in[N]}\|_{2,1}\leq 3\|\{A_{i}^{S}\}_{i\in[N]}\|_{2,1}$, it holds that

$$\|\{A_{i}^{S}\}_{i\in[N]}\|_{2,1}^{2}\leq\frac{|S|\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|A_{i}x_{i}(\tau)\|_{2}^{2}}{P\phi(S)} \tag{7}$$

for some constant $\phi(S)>0$. Moreover, we define $\mathcal{S}$ as the collection of sets $S$ for which the compatibility condition holds. $\Box$

We note that, when $S$ coincides with the support of the matrices $\{A_{i}\}_{i\in[N]}$, this technical condition provides bounds on the values of the non-zero entries of the matrices.

Hereafter, we let $\{A_{i}^{MT}\}_{i\in[N]}$ be an optimal solution of the system identification problem (4), and we denote by $\{A_{i}^{\star}\}_{i\in[N]}$ the true matrices of the LTI systems in (1). The following theorem provides an error bound for the proposed multi-task system identification (4) relative to the (true) system matrices $\{A_{i}^{\star}\}_{i\in[N]}$.

Theorem 1

Let $\{A_{i}^{\star}\}_{i\in[N]}$ be the true matrices of the LTI systems and define $S^{\star}=\{(i,j)\mid[(A_{1}^{\star})_{ij},(A_{2}^{\star})_{ij},\ldots,(A_{N}^{\star})_{ij}]\neq 0\}$. Let Assumption 1 hold and suppose that $S^{\star}\in\mathcal{S}$. Define

$$\lambda_{0}:=\frac{2\sqrt{M}}{nP}\left(1+\sqrt{\frac{4\gamma+8\log n}{N}}+\frac{4\gamma+8\log n}{N}\right)^{\frac{1}{2}} \tag{8}$$

for some $\gamma>0$, where $M=\sigma^{2}\max_{1\leq i\leq N,\,1\leq j\leq n}\sum_{k=1}^{P}[(x_{i}^{(j)})_{k}]^{2}$. If $\lambda\geq 4nNP\lambda_{0}$, then the following bound holds

$$\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i}^{MT})x_{i}(\tau)\|_{2}^{2}\leq\frac{24\lambda^{2}|S^{\star}|}{PN\phi(S^{\star})} \tag{9}$$

with probability at least $1-\mathrm{e}^{-\gamma}$. $\Box$

From Theorem 1, it can be seen that the average error $E(N,P):=\frac{1}{PN}\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i}^{MT})x_{i}(\tau)\|_{2}^{2}$ is of the order

$$E(N,P)\approx O\left(\frac{|S^{\star}|}{\phi(S^{\star})}\,P^{-1}\left(1+N^{-\frac{1}{2}}+N^{-1}\right)\right),$$

when taking $\lambda=O(\sqrt{P}\sqrt{N+\sqrt{N}+1})$. Interestingly, by increasing the number of LTI systems in the multi-task system identification (i.e., as $N$ increases), the average error $E(N,P)$ decreases; however, when $N\rightarrow\infty$, the error does not tend to $0$. This can be understood as a plateau in the ability to “transfer information” between systems.

One possible shortcoming of Theorem 1 is that the set $S^{\star}$ describing the (common) support of the matrices $\{A_{i}^{\star}\}_{i\in[N]}$ is assumed to satisfy the compatibility condition. In the following, we offer an additional error bound for cases where $\{A_{i}^{\star}\}_{i\in[N]}$ is not guaranteed to satisfy the compatibility condition; the error bound leverages the notion of oracle [30].

Theorem 2

Consider the oracle $\{A_{i}^{\dagger}\}_{i\in[N]}$ defined as:

$$\{A_{i}^{\dagger}\}_{i\in[N]}\in\arg\min_{\{A_{i}\}:\,S_{\{A_{i}\}}\in\mathcal{S}}\left\{\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i})x_{i}(\tau)\|_{2}^{2}+\frac{4\lambda^{2}|S_{\{A_{i}\}}|}{PN\phi\left(S_{\{A_{i}\}}\right)}\right\}, \tag{10}$$

where $S_{\{A_{i}\}}=\{(i,j)\mid[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]\neq 0\}$, and let Assumption 1 hold. Set $\lambda_{0}$ and $\lambda$ as in Theorem 1. Then, the following bound holds with probability at least $1-\mathrm{e}^{-\gamma}$, for a given $\gamma>0$:

$$\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i}^{MT})x_{i}(\tau)\|_{2}^{2}\leq 6\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i}^{\dagger})x_{i}(\tau)\|_{2}^{2}+\frac{24\lambda^{2}|S^{\dagger}|}{PN\phi(S^{\dagger})}, \tag{11}$$

where $S^{\dagger}=\{(i,j)\mid[(A_{1}^{\dagger})_{ij},(A_{2}^{\dagger})_{ij},\ldots,(A_{N}^{\dagger})_{ij}]\neq 0\}$. $\Box$

Theorem 2 asserts that the error incurred by the multi-task system identification (4) is bounded by the estimation error associated with the oracle plus an additional term that depends on the support of the oracle and on the compatibility constant $\phi(S^{\dagger})$. Here, the oracle represents the best achievable estimation when considering matrices whose support index set satisfies the compatibility condition.

IV Handling Systems with Other Similarities

In this section, we explain how the proposed multi-task system identification method can be adapted to LTI systems that feature additional similarities, as previously explained in Section II.

IV-A Small Heterogeneity

Consider the case where, for any pair $A_{i},A_{j}$, $i,j\in[N]$, there exists $\epsilon>0$ such that $\|A_{i}-A_{j}\|_{F}^{2}\leq\epsilon$. This case is referred to as “small heterogeneity” in [23]. With this prior, the multi-task system identification problem can be formulated as

$$\min_{\{A_{k}\}_{k\in[N]}}\ \sum_{i=1}^{N}\mathcal{L}_{i}(A_{i})+\lambda\sum_{i=1}^{N}\sum_{j=i}^{N}\|A_{i}-A_{j}\|_{F}^{2}, \tag{12}$$

where we recall that $\mathcal{L}_{i}(A_{i})$ is the LS fit for the $i$th system and where the regularization term penalizes large deviations between the estimated matrices [20].

Problem (12) is convex and can be solved in closed form. However, to avoid computationally heavy matrix inversions (especially when the dimension $n$ is large and several systems are considered in the system identification process), Algorithm 1 can be utilized to solve (12) by replacing [S2.2] with the following $n^{2}$ parallel computations:

$$y_{ij}=\frac{z_{ij}+2\alpha\lambda s_{ij}[1,1,\cdots,1]}{2\alpha\lambda N+1},\quad i,j\in[n], \tag{13}$$

where $s_{ij}=\sum_{\ell=1}^{N}(Z_{\ell})_{ij}$.

From the update (13), it can be seen that $y_{ij}\rightarrow(s_{ij}/N)[1,1,\cdots,1]$ as $\lambda\rightarrow\infty$, thus setting all the system matrices to be the same. On the other hand, by setting $\lambda=0$ one recovers the LS method. Even in this case, cross-validation procedures can be utilized to find the value of $\lambda$ such that the estimated matrices yield the lowest error on test data [21].
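A minimal NumPy sketch of the replacement step (13) is given below; it operates on all $n^{2}$ entry positions at once, and the array layout and the function name are our own illustrative choices.

```python
import numpy as np

def prox_small_heterogeneity(Z_list, alpha, lam):
    """Closed-form step (13) for the pairwise Frobenius-norm penalty in (12):
    each entry position (i, j) is shrunk toward the average across systems."""
    Z = np.stack(Z_list, axis=-1)                        # shape (n, n, N)
    N = Z.shape[-1]
    s = Z.sum(axis=-1, keepdims=True)                    # s_ij = sum_l (Z_l)_ij
    Y = (Z + 2 * alpha * lam * s) / (2 * alpha * lam * N + 1)
    return [Y[..., k] for k in range(N)]
```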

The estimation performance of (12) can be analyzed by deriving bounds between an optimal solution of  (12) and the one of the LS method. Since the bound is straightforward, and because of space limitations, we omit this result from the paper.

IV-B Linear Combinations

Lastly, we comment on an additional “similarity” where the system matrices are (approximately) linearly dependent. Precisely, we consider a scenario where, for a subset of systems, there exist coefficients $\{\alpha_{ij}\in\mathbb{R}\}$ with $\alpha_{ij}\neq 0$ for some $j\in[N]$ such that $A_{i}\approx\sum_{j=1,j\neq i}^{N}\alpha_{ij}A_{j}$.

To formalize the setup, suppose that $q\ll N$ of the matrices $\{A_{i}\}_{i\in[N]}$ are such that the remaining $N-q$ can be represented as linear combinations of these $q$ matrices. Consider then building the $n^{2}\times N$ matrix $[\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\ldots,\operatorname{vec}(A_{N})]$; it follows that this matrix has rank $q\ll N$. Based on this observation, we propose to formulate a multi-task system identification problem for this case as

$$\min_{\{A_{k}\}_{k\in[N]}}\ \sum_{i=1}^{N}\mathcal{L}_{i}(A_{i})+\lambda\left\|[\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\ldots,\operatorname{vec}(A_{N})]\right\|_{*}, \tag{IV-B}$$

where the regularization function promotes sparsity in the singular values of the matrix $[\operatorname{vec}(A_{1}),\operatorname{vec}(A_{2}),\ldots,\operatorname{vec}(A_{N})]$ (see, e.g., [31, 32]).

Problem (IV-B) is convex, with a composite cost where the regularization function is not differentiable [31, 32]. Still, the proximal-gradient algorithm tabulated as Algorithm 1 can be modified to solve (IV-B). In particular, one can replace the step [S2.2] in Algorithm 1 with $\textrm{prox}_{\alpha\lambda\left\|[\operatorname{vec}(A_{1}),\ldots,\operatorname{vec}(A_{N})]\right\|_{*}}(\{Z_{i}\}_{i\in[N]})$; this proximal map affords a closed-form solution given by:

$$\bar{Y}=U\,\textrm{diag}(\{\max\{\sigma_{i}-\alpha\lambda,0\}\})\,V^{*}, \tag{14}$$

where $U\,\textrm{diag}(\{\sigma_{i}\})\,V^{*}$ is the singular value decomposition of $[\operatorname{vec}(Z_{1}),\operatorname{vec}(Z_{2}),\ldots,\operatorname{vec}(Z_{N})]$. The matrices $\{Y_{\ell}\}_{\ell\in[N]}$ are then extracted from the columns of $\bar{Y}$. From the computation of $\bar{Y}$, it can be seen that higher values of $\lambda$ lead to a higher number of singular values of $\bar{Y}$ that are set to zero.
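A minimal NumPy sketch of the singular-value thresholding step (14) is reported below; the column-major reshaping convention (mirroring $\operatorname{vec}$) and the function name are our own illustrative choices.

```python
import numpy as np

def prox_nuclear(Z_list, thr):
    """Singular-value thresholding (14) applied to [vec(Z_1), ..., vec(Z_N)]."""
    n = Z_list[0].shape[0]
    Zbar = np.column_stack([Z.flatten(order="F") for Z in Z_list])   # n^2 x N
    U, sig, Vh = np.linalg.svd(Zbar, full_matrices=False)
    Ybar = U @ np.diag(np.maximum(sig - thr, 0.0)) @ Vh
    # Each column of Ybar is the vectorized update of one system matrix.
    return [Ybar[:, k].reshape(n, n, order="F") for k in range(Ybar.shape[1])]
```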

While the effectiveness of nuclear norm minimization approaches has been verified numerically in the literature, identifiability results and analytical bounds for the estimation error are available only under given assumptions on the regressors [31, 32] that may not be applicable to (IV-B); see also [33]. Deriving analytical error bounds for (IV-B) is subject of our ongoing investigations.

Remark 1

The proposed multi-task system identification problems can be extended to cases where the system matrices $\{A_{i}\}_{i\in[N]}$ are similar according to more than one of the priors (s1)–(s3). For example, if the matrices have a common sparsity pattern and the differences in the non-zero entries are small, one can utilize the composite regularization function $\lambda_{1}\sum_{i=1}^{N}\sum_{j=i}^{N}\|A_{i}-A_{j}\|_{F}^{2}+\lambda_{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\|[(A_{1})_{ij},(A_{2})_{ij},\ldots,(A_{N})_{ij}]^{\top}\|_{2}$, where $\lambda_{1},\lambda_{2}\geq 0$ are tuning parameters. $\Box$

V Numerical Experiments

V-A Experiments on brain networks

We test the proposed method on the problem of estimating the dynamics of brain networks, using data corresponding to the resting-state fMRI from the Human Connectome Project (HCP)³ [35, 34, 41]. Here, $x_{i}(t)$ is a $116$-dimensional blood-oxygen-level-dependent (BOLD) time series for $116$ parcellations of the brain of the $i$-th subject. Our goal is to estimate $N=5$ dynamical systems of the form $x_{i}(t+1)=A_{i}x_{i}(t)+w_{i}(t)$ that model the evolution of the BOLD signal when the individual is in a resting state, with $w_{i}(t)$ capturing process noise (the model does not contain external inputs $u_{i}$ due to the resting-state condition).

³Data available at https://wiki.humanconnectome.org/

Since the matrices $\{A_{i}\}_{i\in[5]}$ are unknown, we consider the following error for each system:

$$\mathcal{E}(A):=\frac{1}{n}\sum_{k=1}^{n}\frac{\sum_{i=1}^{p}(x_{i}(k)-[Ax_{i}](k))^{2}}{\sum_{i=1}^{p}(x_{i}(k)-\bar{x}(k))^{2}},$$

where $n$ is the length of the testing vector, $p$ is the number of testing data points, and $\bar{x}(k):=\frac{1}{p}\sum_{i=1}^{p}x_{i}(k)$. Note that $1-\mathcal{E}(A)$ is precisely the average $R^{2}$ indicator of [35].
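For reference, a minimal NumPy sketch of the error metric $\mathcal{E}(A)$ is given below; the array layout, the function name, and the assumption that the model predictions $[Ax_{i}]$ have been precomputed are our own illustrative choices.

```python
import numpy as np

def error_metric(X_true, X_pred):
    """E(A): per-entry normalized squared prediction error, averaged over the
    n entries; 1 - error_metric(...) is the average R^2 indicator of [35].

    X_true : (p, n) array of testing vectors x_i
    X_pred : (p, n) array of the corresponding model predictions [A x_i]
    """
    x_bar = X_true.mean(axis=0)                   # \bar{x}(k)
    sse = ((X_true - X_pred) ** 2).sum(axis=0)    # numerator, per entry k
    sst = ((X_true - x_bar) ** 2).sum(axis=0)     # denominator, per entry k
    return float(np.mean(sse / sst))
```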

We consider three different methods: (i) the LS estimator (2), which is utilized per individual; (ii) the Least Absolute Shrinkage and Selection Operator (LASSO), which is again utilized per individual as proposed in [35]; and (iii) the proposed method (3) with the group-sparsity regularization function (4). The rationale behind the group sparsity is that the brain dynamics should exhibit the same effective connectivity between parcellations, while the nonzero entries acknowledge the diversity in the intensities of the interactions across individuals. We note that the effectiveness of the LS and the LASSO has been experimentally validated in [35], where their estimation accuracy was compared with several identification methods. Moreover, we performed a cross-validation procedure to optimize the performance of the LASSO.

Figure 2: (a) Mean error of LS, LASSO, and multi-task (MT) system identification; “Case $k$” means that $100k$ training data points are available for each subject ($k=1,2,\cdots,9$). (b) Mean error for subjects 2-5 and error for subject 1. “Case $k$” means that $25k$ fMRI scans are used for subject 1 (dashed line) while $100k$ (solid line) scans are used for subjects 2-5.
Figure 3: Comparison between LS, LASSO, and multi-task (MT) system identification. (a) Case 1: $900$ training data points for each subject. (b) Case 2: for subject 1, $75$ training data points, and $300$ for subjects 2-5. (c) Case 3: for subjects 1 and 3, $150$ training data points, and $600$ for subjects 2, 4, and 5. In the box plots, the red center line, box limits, and whiskers represent the median, upper and lower quartiles, and the smallest and largest samples, respectively. Red crosses indicate outliers.
Figure 4: Estimated matrices $\hat{A}_{i}$ for Case 3, individuals 1, 2, and 3, for $n=116$ brain parcellations.

In Figure 2, we compare the LS, the LASSO, and our approach (labeled “MT”) in two cases: (a) the same amount of training data is utilized for the five subjects; and (b) for subject 1, we utilize only 25% of the training data points used for subjects 2-5. We use $100$ test points. In Figure 2(a) we plot the mean error across subjects 1-5; in Figure 2(b) we plot the mean error across subjects 2-5 and the error for subject 1, for which fewer fMRI readings are available. The proposed method outperforms the LS and the LASSO across all numbers of fMRI scans in both cases. The merits of the proposed method are particularly evident in Figure 2(b), where it significantly outperforms the LASSO for subject 1; on the other hand, the LS is ill-conditioned and does not return meaningful estimates. This shows the ability to leverage information and data (in this case, fMRI readings) from the dynamics of subjects 2-5 to assist the estimation of the dynamics of subject 1.

To provide additional comparisons beyond the mean error, Figure 3 shows box plots for the LS, the LASSO, and the proposed approach in three different scenarios. In particular, Figure 3(a) shows that the proposed multi-task identification method achieves a smaller or comparable error (on average) than the LS or the LASSO when trajectories of 900 time steps are used for each subject (and these training trajectories are sufficiently rich). Figure 3(b) considers the case where $75$ training data points are available for subject 1 and $300$ for subjects 2-5. Here, the LS does not perform well due to ill-conditioning. The performance of the LASSO is comparable with that of the proposed method in terms of median; however, the proposed method shows smaller upper and lower quartiles. Moreover, Figure 3(c) considers the case where fewer fMRI readings are available for subjects 1 and 3; the proposed method performs better than the LASSO in terms of quartiles and has a significantly smaller error deviation across the parcellations.

Finally, a representative example of the estimated matrices $\hat{A}_{i}$ for subjects 1-3 is provided in Figure 4. The estimated matrices are obtained in the case considered in Figure 3(c), where subjects 1 and 3 have fewer training points. Notice that the three matrices have zeros in many common entries. Based on this result, we will explore additional regularization methods that combine group sparsity with (entry-wise) sparsity.

V-B Experiments on synthetic data

We provide additional results on synthetic data. We consider 10 systems as in (1), where $\{A_{i}\}_{i\in[10]}\subset\mathbb{R}^{50\times 50}$, $\{B_{i}\}_{i\in[10]}\subset\mathbb{R}^{50\times 4}$, $u_{i}(t)$ is the constant vector of all ones in $\mathbb{R}^{4}$, and $w_{i}(t)\sim\mathcal{N}(0,0.1^{2})$. We consider two different cases: common sparsity and linear combinations. We compare the LS estimator (2) and the proposed method (3) with the group-sparsity regularization (4) and the nuclear norm regularization (IV-B).
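As an illustration of this setup, the sketch below generates synthetic systems with a common sparsity pattern and simulates their trajectories; the support density, the spectral scaling used to keep the dynamics stable, the trajectory length, and the random seed are arbitrary choices made for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, N = 50, 4, 10

# (s1): a support mask shared by all ten system matrices.
mask = rng.random((n, n)) < 0.2
A_list = []
for _ in range(N):
    A = rng.normal(size=(n, n)) * mask
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # keep the dynamics stable
    A_list.append(A)
B_list = [rng.normal(size=(n, p)) for _ in range(N)]

def simulate(A, B, P, sigma=0.1):
    """Simulate (1) with a constant all-ones input and Gaussian process noise."""
    x = np.zeros((P + 1, A.shape[0]))
    u = np.ones((P, B.shape[1]))
    for t in range(P):
        x[t + 1] = A @ x[t] + B @ u[t] + sigma * rng.normal(size=A.shape[0])
    return x, u

trajectories = [simulate(A, B, P=200) for A, B in zip(A_list, B_list)]
```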

Figure 5 compares the LS and our approach in two cases: (a) all 10 systems can be represented by a linear combination of 3 systems, and only 25% of the training data points are accessible for the tenth system with respect to the other systems 1-9; (b) all 10 systems have the same sparsity pattern, and only 25% of the training data points are accessible for the tenth system with respect to the other systems 1-9. The testing is on $60$ data points. In Figure 5(a), we plot the mean error across systems 1-9 as well as the error for system 10. The proposed method outperforms the LS approach in both the mean error and the error for system 10, especially when only a small amount of data is available. In Figure 5(b), we observe similar results.

Figure 5: Mean error curves comparing the LS and multi-task (MT) system identification methods. (a) Linear combinations. (b) Common sparsity. “Case $k$” means that $10k+10$ samples of the trajectory are used for system 10 (dashed line) while $40k+40$ (solid line) are used for systems 1-9, $k=1,2,3,4$. In “Case $5$”, $75$ data points are used for system 10 (dashed line) while $300$ (solid line) are used for systems 1-9.

References

  • [1] L. Ljung, System Identification: Theory for the user.   Prentice-Hall, 1987.
  • [2] G. Pillonetto, T. Chen, A. Chiuso, G. De Nicolao, and L. Ljung, “Regularized system identification: Learning dynamic models from data,” 2022.
  • [3] C. De Persis and P. Tesi, “Formulas for data-driven control: Stabilization, optimality, and robustness,” IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 909–924, 2019.
  • [4] J. Coulson, J. Lygeros, and F. Dörfler, “Data-enabled predictive control: In the shallows of the DeePC,” in European Control Conference, 2019, pp. 307–312.
  • [5] L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger, “Learning-based model predictive control: Toward safe learning in control,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 269–296, 2020.
  • [6] J. Berberich, J. Köhler, M. A. Müller, and F. Allgöwer, “Data-driven model predictive control with stability and robustness guarantees,” IEEE Transactions on Automatic Control, vol. 66, no. 4, pp. 1702–1717, 2020.
  • [7] V. Krishnan and F. Pasqualetti, “On direct vs indirect data-driven predictive control,” in IEEE Conference on Decision and Control, 2021, pp. 736–741.
  • [8] L. Li, C. De Persis, P. Tesi, and N. Monshizadeh, “Data-based transfer stabilization in linear systems,” arXiv preprint arXiv:2211.05536, 2022.
  • [9] J. C. Willems, P. Rapisarda, I. Markovsky, and B. L. De Moor, “A note on persistency of excitation,” Systems & Control Letters, vol. 54, no. 4, pp. 325–329, 2005.
  • [10] C. De Persis and P. Tesi, “On persistency of excitation and formulas for data-driven control,” in IEEE Conference on Decision and Control, 2019, pp. 873–878.
  • [11] T. Sarkar and A. Rakhlin, “Near optimal finite time identification of arbitrary linear dynamical systems,” in International Conference on Machine Learning, 2019, pp. 5610–5618.
  • [12] M. Simchowitz, H. Mania, S. Tu, M. I. Jordan, and B. Recht, “Learning without mixing: Towards a sharp analysis of linear system identification,” in Conference On Learning Theory, 2018, pp. 439–473.
  • [13] M. K. S. Faradonbeh, A. Tewari, and G. Michailidis, “Finite time identification in unstable linear systems,” Automatica, vol. 96, pp. 342–353, 2018.
  • [14] S. Oymak and N. Ozay, “Non-asymptotic identification of LTI systems from a single trajectory,” in American control conference, 2019, pp. 5655–5661.
  • [15] Y. Zheng and N. Li, “Non-asymptotic identification of linear dynamical systems using multiple trajectories,” IEEE Control Systems Letters, vol. 5, no. 5, pp. 1693–1698, 2020.
  • [16] L. Xin, G. Chiu, and S. Sundaram, “Learning the dynamics of autonomous linear systems from multiple trajectories,” in American Control Conference, 2022.
  • [17] T. Chen, M. S. Andersen, L. Ljung, A. Chiuso, and G. Pillonetto, “System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques,” IEEE Transactions on Automatic Control, vol. 59, no. 11, pp. 2933–2945, 2014.
  • [18] A. Chiuso and G. Pillonetto, “System identification: A machine learning perspective,” Annual Review of Control, Robotics, and Autonomous Systems, vol. 2, pp. 281–304, 2019.
  • [19] Y. Sun, S. Oymak, and M. Fazel, “Finite sample system identification: Optimal rates and the role of regularization,” in Learning for Dynamics and Control, 2020, pp. 16–25.
  • [20] T. Hastie, R. Tibshirani, J. H. Friedman, and J. H. Friedman, The elements of statistical learning: data mining, inference, and prediction.   Springer, 2009, vol. 2.
  • [21] T. Hastie, R. Tibshirani, and M. Wainwright, “Statistical learning with sparsity,” Monographs on statistics and applied probability, vol. 143, p. 143, 2015.
  • [22] L. Xin, L. Ye, G. Chiu, and S. Sundaram, “Identifying the dynamics of a system by leveraging data from similar systems,” in American Control Conference, 2022.
  • [23] H. Wang, L. F. Toso, and J. Anderson, “Fedsysid: A federated approach to sample-efficient system identification,” arXiv preprint arXiv:2211.14393, 2022.
  • [24] T. Evgeniou and M. Pontil, “Regularized multi–task learning,” in Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, 2004, pp. 109–117.
  • [25] O. Sener and V. Koltun, “Multi-task learning as multi-objective optimization,” Advances in neural information processing systems, vol. 31, 2018.
  • [26] Y. Zhang and Q. Yang, “A survey on multi-task learning,” IEEE Transactions on Knowledge and Data Engineering, 2021.
  • [27] S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proceedings of the national academy of sciences, vol. 113, no. 15, pp. 3932–3937, 2016.
  • [28] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 68, no. 1, pp. 49–67, 2006.
  • [29] J. Huang and T. Zhang, “The benefit of group sparsity,” The Annals of Statistics, vol. 38, no. 4, pp. 1978–2004, 2010.
  • [30] P. Bühlmann and S. van de Geer, Statistics for High-Dimensional Data: Methods, Theory and Applications.   Springer Publishing Company, 2011.
  • [31] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, “Sparse and low-rank matrix decompositions,” IFAC Proceedings Volumes, vol. 42, no. 10, pp. 1493–1498, 2009.
  • [32] M. Mardani, G. Mateos, and G. B. Giannakis, “Subspace learning and imputation for streaming big data matrices and tensors,” IEEE Transactions on Signal Processing, vol. 63, no. 10, pp. 2663–2677, 2015.
  • [33] K. Mohan and M. Fazel, “Reweighted nuclear norm minimization with application to system identification,” in Proceedings of the 2010 American Control Conference.   IEEE, 2010, pp. 2953–2959.
  • [34] P. Srivastava, E. Nozari, J. Z. Kim, H. Ju, D. Zhou, C. Becker, F. Pasqualetti, G. J. Pappas, and D. S. Bassett, “Models of communication and control for brain networks: distinctions, convergence, and future outlook,” Network Neuroscience, vol. 4, no. 4, pp. 1122–1159, 2020.
  • [35] E. Nozari, M. A. Bertolero, J. Stiso, L. Caciagli, E. J. Cornblath, X. He, A. S. Mahadevan, G. J. Pappas, and D. S. Bassett, “Is the brain macroscopically linear? a system identification of resting state dynamics,” arXiv preprint arXiv:2012.12351, 2020.
  • [36] B. Turan and M. Alizadeh, “Competition in electric autonomous mobility on demand systems,” IEEE Transactions on Control of Network Systems, 2021.
  • [37] M. Crawshaw, “Multi-task learning with deep neural networks: A survey,” arXiv preprint arXiv:2009.09796, 2020.
  • [38] A. Beck and M. Teboulle, “Gradient-based algorithms with applications to signal recovery,” Convex optimization in signal processing and communications, pp. 42–88, 2009.
  • [39] P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-point algorithms for inverse problems in science and engineering.   Springer, 2011, pp. 185–212.
  • [40] N. Parikh, S. Boyd et al., “Proximal algorithms,” Foundations and trends® in Optimization, vol. 1, no. 3, pp. 127–239, 2014.
  • [41] S. Gu, F. Pasqualetti, M. Cieslak, Q. K. Telesford, A. B. Yu, A. E. Kahn, J. D. Medaglia, J. M. Vettel, M. B. Miller, S. T. Grafton et al., “Controllability of structural brain networks,” Nature communications, vol. 6, no. 1, pp. 1–10, 2015.

Acknowledgements

The authors would like to thank Dr. Erfan Nozari (University of California at Riverside) for the assistance with the data used in the simulations, and Dr. Stephen Becker (University of Colorado Boulder) for the discussion on the group Lasso.

Proofs of Theorems 1 and 2. We provide a sketch of the proofs of Theorems 1 and 2. To this end, we introduce some additional notation. For the $i$th system, collect the recorded states in the matrix $X_{i}:=[x_{i}(1),\ldots,x_{i}(P_{i})]$; then, the LS fit $\mathcal{L}_{i}(A_{i})$ for the $i$th system can be equivalently expressed as

$$\mathcal{L}_{i}(A_{i})=\|\tilde{{\mathbf{y}}}_{i}-\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}\|_{2}^{2},$$

where $\tilde{{\mathbf{y}}}_{i}:=\operatorname{vec}([x_{i}(2)-B_{i}u_{i}(1),\ldots,x_{i}(P_{i}+1)-B_{i}u_{i}(P_{i})])$, $\tilde{{\mathbf{a}}}_{i}=\operatorname{vec}(A_{i})$, and $\tilde{X}_{i}=X_{i}^{\top}\otimes I_{n}$. Moreover, let $\mathbf{\tilde{A}}:=[\tilde{{\mathbf{a}}}_{1},\ldots,\tilde{{\mathbf{a}}}_{N}]$ collect all the vectorized system matrices $\{A_{i}\}_{i\in[N]}$, let ${\mathbf{x}}_{i}^{(j)}$ denote the $j$th column of $\tilde{X}_{i}$, and denote by $\|\mathbf{\tilde{A}}\|_{2,1}$ the sum of the $\ell_{2}$-norms of the rows of $\mathbf{\tilde{A}}$. Finally, recall that for any matrix $\mathbf{\tilde{A}}\in\mathbb{R}^{n^{2}\times N}$, $\mathbf{\tilde{A}}_{S}$ is the matrix formed by selecting the rows of $\mathbf{\tilde{A}}$ indexed by $S$ and setting the other rows to zero.

With this notation in place, and recalling that $\{A_{i}^{MT}\}_{i\in[N]}$ denotes an optimal solution of (4), we have the following result.

Lemma 1

Let Assumption 1 hold. Let $\lambda_{0}$ be defined as in Theorem 1, and take $\lambda\geq 4nNP\lambda_{0}$. Then, for any $\mathbf{\tilde{A}}\in\mathbb{R}^{n^{2}\times N}$ and any $S\in\mathcal{S}$ satisfying the compatibility condition, the following bound holds with probability at least $1-\mathrm{e}^{-\gamma}$:

$$\sum_{i=1}^{N}\|\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{\star}-\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{MT}\|^{2}_{2}+\frac{\lambda}{\sqrt{N}}\|\mathbf{\tilde{A}}^{MT}-\mathbf{\tilde{A}}_{S}\|_{2,1}\leq 6\sum_{i=1}^{N}\left\|\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{\star}-\tilde{X}_{i}\tilde{{\mathbf{a}}}_{S,i}\right\|^{2}_{2}+\frac{24\lambda^{2}|S|}{PN\phi^{2}(S)} \tag{15}$$

for any given $\gamma>0$. $\Box$

The proof of Lemma 1 is omitted due to space limitations. Concisely, it leverages arguments from Chapters 6 and 8 in [30]. First, we establish basic inequalities for our problem similarly to [30, Chapter 6], and bound the empirical process using [30, Lemma 8.5]. Then, we derive an inequality similar to [30, Lemma 6.3] for our group sparse model. Finally, combining these inequalities, we prove the inequalities in Lemma 1 (similarly to [30, Theorem 6.2]). The full proof will be made available in an extended version online.

Based on the result of Lemma 1, the bound in Theorem 1 can be shown by setting $\mathbf{\tilde{A}}_{S}=\mathbf{\tilde{A}}^{\star}$ in (15), thus obtaining

$$\sum_{i=1}^{N}\|\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{\star}-\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{MT}\|^{2}_{2}+\frac{\lambda}{\sqrt{N}}\|\mathbf{\tilde{A}}^{MT}-\mathbf{\tilde{A}}^{\star}\|_{2,1}\leq\frac{24\lambda^{2}|S^{\star}|}{PN\phi^{2}(S^{\star})} \tag{16}$$

and noticing that $\sum_{i=1}^{N}\|\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{\star}-\tilde{X}_{i}\tilde{{\mathbf{a}}}_{i}^{MT}\|^{2}_{2}=\sum_{i=1}^{N}\sum_{\tau=1}^{P}\|(A_{i}^{\star}-A_{i}^{MT})x_{i}(\tau)\|_{2}^{2}$. On the other hand, the bound in Theorem 2 can be shown by setting $\mathbf{\tilde{A}}_{S}=\mathbf{\tilde{A}}^{\dagger}$ in (15), where we recall that $\mathbf{\tilde{A}}^{\dagger}$ is the oracle solution.