Sample Complexity of Linear Quadratic Regulator
Without Initial Stability
Abstract.
Inspired by REINFORCE, we introduce a novel receding-horizon algorithm for the Linear Quadratic Regulator (LQR) problem with unknown parameters. Unlike prior methods, our algorithm avoids reliance on two-point gradient estimates while maintaining the same order of sample complexity. Furthermore, it eliminates the restrictive requirement of starting with a stable initial policy, broadening its applicability. Beyond these improvements, we introduce a refined analysis of error propagation through the contraction of the Riemannian distance over the Riccati operator. This refinement leads to a better sample complexity and ensures improved convergence guarantees. Numerical simulations validate the theoretical results, demonstrating the method’s practical feasibility and performance in realistic scenarios.
1. Introduction
The Linear Quadratic Regulator (LQR) problem, a cornerstone of optimal control theory, offers an analytically tractable framework for optimal control of linear systems with quadratic costs. Traditional methods rely on complete knowledge of system dynamics, solving the Algebraic Riccati Equation [2] to determine optimal control policies. However, many real-world scenarios involve incomplete or inaccurate models. Classical methods in control theory, such as identification theory [6] and adaptive control [1], were specifically designed to provide guarantees for decision-making in scenarios with unknown parameters. However, the problem of effectively approximating the optimal policy using these methods remains underexplored in the traditional literature. Recent efforts have sought to bridge this gap by analyzing the sample complexity of learning-based approaches to LQR [4], providing bounds on control performance relative to the amount of data available.
In contrast, the model-free approach, rooted in reinforcement learning (RL), bypasses the need for explicit dynamics identification, instead focusing on direct policy optimization through cost evaluations. Recent advances leverage stochastic zero-order optimization techniques, including policy gradient methods, to achieve provable convergence to near-optimal solutions despite the inherent non-convexity of the LQR cost landscape. Foundational works, such as [5], established the feasibility of such methods, demonstrating convergence under random initialization. Subsequent efforts, including [11] and [13], have refined these techniques, achieving improved sample complexity bounds. Notably, all of these works assume that the initial policy is stabilizing.
A key limitation of these methods, including [11, 13], is the reliance on two-point gradient estimation, which requires evaluating costs for two different policies while maintaining identical initial states. In practice, this assumption is often infeasible, as the initial state is typically chosen randomly and cannot be controlled externally. Our earlier work [12] addressed this challenge, establishing the best-known result among methods that assume initial stability without having to rely on two-point estimates. Instead, we proposed a one-point gradient estimation method, inspired by REINFORCE [19, 17], that achieves the same convergence rate as the two-point method [11] using only a single cost evaluation at each step. This approach enhances both the practical applicability and theoretical robustness of model-free methods, setting a new benchmark under the initial stability assumption.
The requirement for an initial stabilizing policy significantly limits the utility of these methods in practice. Finding such a policy can be challenging or infeasible and often relies on identification techniques, which model-free methods are designed to avoid. Without getting technical at this point, it is worth pointing out that this initial stability assumption plays a major role in the construction of the mentioned model-free methods, and cannot be removed easily. For instance, this assumption ensures favorable optimization properties, like coercivity and gradient domination, that simplify convergence analysis. In this sense, removing this assumption while maintaining stability and convergence guarantees is essential to generalize policy gradient methods, a challenge that has remained an active research topic [20, 14, 9, 22].
As also pointed out in [20], the $\gamma$-discounted LQR problems studied in [14, 9, 22] are equivalent to the standard non-discounted LQR with system matrices scaled by $\sqrt{\gamma}$. In [14, 9, 22], this scaling results in an enlarged set of stabilizing policies when $\gamma$ is sufficiently small, enabling policy gradient algorithms to start from an arbitrary policy. However, as noted in [20], this comes at the cost of solving multiple LQR instances rather than a single one, increasing computational overhead. Furthermore, the optimization landscape in the discounted setting remains fundamentally the same as in the undiscounted case, as described in [5, 11]. Consequently, the same difficulties mentioned in [8, 18] persist when extending these methods to output-feedback settings, where additional estimation errors complicate policy search. In contrast, receding-horizon approaches [20] provide a more direct and extensible framework for tackling such challenges [21].
This paper builds on the receding-horizon policy gradient framework introduced in [20], a significant step towards eliminating the need for a stabilizing initial policy by recursively updating finite-horizon costs. While that approach marks an important step forward in model-free LQR, it relies on two-point gradient estimation, a known limitation discussed earlier, which we address here. Building on the gradient estimation approach from our earlier work [12], we adapt the core idea to accommodate the new setup that eliminates the initial stability assumption. Specifically, our modified method retains the same convergence rate while overcoming the restrictive assumptions of two-point estimation. Beyond these modifications, we introduce a refined convergence analysis, via an argument based on a Riemannian distance function [3], which significantly improves the propagation of errors. This ensures that the accumulated error remains linear in the horizon length, in contrast to the exponential growth in [20]. As a result, we achieve a uniform sample complexity bound that is independent of problem-specific constants, thereby offering a more scalable and robust policy search framework.
1.1. Algorithm and Paper Structure Overview
The paper is structured as follows. Section 2 presents the necessary preliminaries and establishes the notation used throughout the paper. Section 3 introduces our proposed algorithm, which operates through a hierarchical double-loop structure: an outer loop that provides a surrogate cost function in a receding-horizon manner, and an inner loop that applies a policy gradient method to obtain an estimate of its optimal policy. Section 4 delves deeper into the policy gradient method employed in the inner loop, providing rigorous convergence results and theoretical guarantees for this critical component of the algorithm. Section 5 includes the sample complexity bounds and comparisons with the results in the literature. Finally, we provide simulation studies verifying our findings in Section 6.
To be more specific, the core idea of the algorithm leverages the observation that, for any error tolerance $\epsilon$, there exists a sufficiently large finite horizon $N$ such that the sequence of policies minimizing recursively updated finite-horizon costs can approximate the optimal policy for the infinite-horizon cost within an $\epsilon$-neighborhood. This insight motivates the algorithm's design: a recursive outer loop that iteratively refines the surrogate cost function over a sequence of finite horizons, and an inner loop that employs policy gradient methods to approximate the optimal policy for each of these costs. Specifically, in the outer loop, the algorithm updates the surrogate cost and the associated policy at each horizon step $h$, starting from the terminal horizon and moving backward to the initial time. At each step $h$, the inner loop applies a policy gradient method to compute an approximately optimal policy for the finite-horizon cost over the interval from $h$ to $N$. This step generates a surrogate policy, which is then incorporated into the cost function of the subsequent step in the outer loop.
The main difficulty in analyzing the proposed algorithm stems from the fact that the approximation errors from the policy gradient method in the inner loop propagate across all steps of the outer loop. To ensure overall convergence, the algorithm imposes a requirement on the accuracy of the policy gradient method in the inner loop. Each policy obtained must be sufficiently close to the optimal policy for the corresponding finite-horizon cost. This guarantees that the final policy at the last step of the outer loop converges to the true optimal policy for the infinite-horizon cost.
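To make this structure concrete, the following runnable sketch mirrors the double-loop organization described above. It is only a schematic: the function and parameter names (`cost_oracle`, `step_size`, `sigma`) are illustrative placeholders rather than the paper's notation, and the one-point gradient estimate shown is a generic stand-in for the estimator developed in Section 4.

```python
import numpy as np

def receding_horizon_pg(cost_oracle, n, m, N, inner_iters, step_size, sigma):
    """Schematic double-loop structure (a sketch; names and the exact form of
    the gradient estimate are illustrative, not the paper's algorithm)."""
    gains = {}                          # gains produced by earlier outer steps
    for h in range(N - 1, -1, -1):      # outer loop: backward over the horizon
        K = np.zeros((m, n))            # arbitrary initialization; no stability needed
        for k in range(inner_iters):    # inner loop: one-point policy gradient
            U = np.random.randn(m, n)   # random Gaussian perturbation direction
            noisy_cost = cost_oracle(h, K + sigma * U, gains)
            grad_est = (noisy_cost / sigma) * U   # one-point (REINFORCE-style) estimate
            K = K - step_size(k) * grad_est
        gains[h] = K                    # surrogate gain fed into later outer steps
    return gains[0]                     # approximates the infinite-horizon optimal gain
```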
2. Preliminaries
In this section, we gather the required notation, closely following that of [20], on which our work builds. Consider the discrete-time linear system
(1)  $x_{t+1} = A x_t + B u_t,$
where $x_t \in \mathbb{R}^n$ is the system state at time $t$, $u_t \in \mathbb{R}^m$ is the control input at time $t$, and $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ are the system matrices, which are unknown to the control designer. Crucially here, the initial state $x_0$ is sampled randomly from a distribution $\mathcal{D}$ and satisfies
(2) |
The objective in the LQR problem is to find the optimal controller that minimizes the following cost:
\[
\mathbb{E}_{x_0 \sim \mathcal{D}} \left[ \sum_{t=0}^{\infty} \left( x_t^\top Q x_t + u_t^\top R u_t \right) \right],
\]
where $Q \succ 0$ and $R \succ 0$ are the symmetric positive definite matrices that parameterize the cost. We require the pair $(A, B)$ to be stabilizable, and since $Q \succ 0$, the pair $(A, Q^{1/2})$ is observable. As a result, the unique optimal controller is the linear state feedback $u_t = -K^* x_t$, where $K^*$ is derived as follows:
(3)  $K^* = \left( R + B^\top P^* B \right)^{-1} B^\top P^* A,$
and $P^*$ denotes the unique positive definite solution to the discrete-time algebraic Riccati equation (ARE) [2]:
(4)  $P^* = Q + A^\top P^* A - A^\top P^* B \left( R + B^\top P^* B \right)^{-1} B^\top P^* A.$
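For reference, when the model is known, the fixed point of (4) and the gain in (3) can be computed numerically. Below is a minimal sketch using SciPy; the system matrices are an arbitrary illustrative example, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# A small illustrative system (values chosen arbitrarily for demonstration).
A = np.array([[1.2, 0.5], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P_star = solve_discrete_are(A, B, Q, R)                            # solves the ARE (4)
K_star = np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)   # gain in (3)

# Sanity check: the closed loop A - B K_star should be Schur stable.
assert max(abs(np.linalg.eigvals(A - B @ K_star))) < 1.0
```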
2.1. Notations
We use $\|\cdot\|$, $\|\cdot\|_F$, $\sigma_{\min}(\cdot)$, and $\kappa(\cdot)$ to denote the 2-norm, Frobenius norm, minimum singular value, and the condition number of a matrix, respectively. We also use $\rho(\cdot)$ to denote the spectral radius of a square matrix. Moreover, for a positive definite matrix $S$ of appropriate dimensions, we define the $S$-induced norm of a matrix $M$ as
\[
\|M\|_S := \max_{x \neq 0} \frac{\sqrt{x^\top M^\top S M x}}{\sqrt{x^\top S x}}.
\]
Following the notation in [20], we denote the $P^*$-induced norm by $\|\cdot\|_{P^*}$. Furthermore, we denote the Riemannian distance [3] between two positive definite matrices $P_1$ and $P_2$ by
\[
\delta(P_1, P_2) := \left\| \log\!\left( P_1^{-1/2} P_2 P_1^{-1/2} \right) \right\|_F.
\]
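These two quantities can be evaluated numerically as follows; this is a minimal sketch assuming the standard definitions (the operator norm induced by the weighted inner product, and the affine-invariant metric of [3]).

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def induced_norm(M, S):
    """S-induced norm of M: max over x of ||Mx||_S / ||x||_S, with ||x||_S^2 = x^T S x."""
    S_half = np.real(sqrtm(S))
    return np.linalg.norm(S_half @ M @ np.linalg.inv(S_half), 2)

def riemannian_distance(P1, P2):
    """Affine-invariant distance between positive definite matrices [3]:
    the Frobenius norm of log(P1^{-1/2} P2 P1^{-1/2})."""
    P1_inv_half = np.linalg.inv(np.real(sqrtm(P1)))
    return np.linalg.norm(np.real(logm(P1_inv_half @ P2 @ P1_inv_half)), 'fro')
```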
We now introduce some important notation which will be used in describing the algorithm and the proof of the main result. Let $N$ be the horizon length and $h \in \{0, 1, \dots, N-1\}$ the initial time step. The true finite-horizon cost of a policy $K$ applied at step $h$ is defined as
(5)  $J_h(K) := \mathbb{E}_{x_h \sim \mathcal{D}} \left[ x_N^\top Q_N x_N + \sum_{t=h}^{N-1} \left( x_t^\top Q x_t + u_t^\top R u_t \right) \right],$
where:
• $x_h \sim \mathcal{D}$ denotes that the initial state is drawn from the distribution $\mathcal{D}$,
• $Q_N$ is the terminal cost matrix, which can be chosen arbitrarily,
• $K$ is the feedback gain applied at step $h$, i.e., $u_h = -K x_h$,
• $K_t^*$ is the feedback gain at step $t \in \{h+1, \dots, N-1\}$, i.e., $u_t = -K_t^* x_t$, to be formally defined via the Riccati difference equation in (21).
Finally, for all $t \in \{h+1, \dots, N-1\}$, the state evolves according to
\[
x_{t+1} = \left( A - B K_t^* \right) x_t,
\]
with $x_{h+1} = (A - B K) x_h$.
We also define the surrogate cost
(6)  $\widehat{J}_h(K) := \mathbb{E}_{x_h \sim \mathcal{D}} \left[ x_N^\top Q_N x_N + \sum_{t=h}^{N-1} \left( x_t^\top Q x_t + u_t^\top R u_t \right) \right],$
where $u_h = -K x_h$, $u_t = -\widehat{K}_t x_t$, and $\widehat{K}_t$ is the feedback gain derived at step $t$ of the outer loop of the algorithm; for all $t \in \{h+1, \dots, N-1\}$, the state evolves as
\[
x_{t+1} = \left( A - B \widehat{K}_t \right) x_t,
\]
with $x_{h+1} = (A - B K) x_h$.
The key difference between $J_h$ and $\widehat{J}_h$ lies in the use of $K_t^*$ versus $\widehat{K}_t$ for $t > h$. This distinction implies that $\widehat{J}_h$ incorporates all errors from earlier steps, precisely the ones at $t = h+1, \dots, N-1$, as the procedure is recursive.
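In the model-free setting, both costs are only accessible through rollouts. The following sketch shows how such a finite-horizon cost can be estimated by Monte-Carlo simulation; the routine and its arguments (`gains`, `sample_x0`, `num_rollouts`) are illustrative placeholders, and in practice the simulator itself plays the role of the unknown pair $(A, B)$.

```python
import numpy as np

def rollout_cost(K, h, N, gains, A, B, Q, R, QN, sample_x0, num_rollouts=1000):
    """Monte-Carlo estimate of the finite-horizon cost from step h to N when K is
    applied at step h and gains[t] at steps t = h+1, ..., N-1 (a sketch; model-free
    evaluation would replace the explicit A, B update with a simulator call)."""
    total = 0.0
    for _ in range(num_rollouts):
        x = sample_x0()
        for t in range(h, N):
            Kt = K if t == h else gains[t]
            u = -Kt @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
        total += x @ QN @ x       # terminal cost
    return total / num_rollouts
```

The true cost of (5) corresponds to passing the Riccati gains for the later steps, while the surrogate cost of (6) corresponds to passing the gains produced by the earlier outer-loop iterations.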
We now define several functions that facilitate the characterization of our gradient estimate, which uses ideas from our earlier work [12]. To start, we let
(7) | ||||
(8) |
so that
Using (8), we can compute the gradient of with respect to as follows:
(9) | ||||
(10) | ||||
(11) |
and thus,
(12) | ||||
(13) | ||||
(14) |
Moreover, we define
(15) |
so that
and
Having established the cost functions, we now introduce the notation used to describe the policies:
(16) | ||||
(17) |
where denotes the optimal policy for the true cost , and denotes the optimal policy for the surrogate cost . Additionally, represents an estimate of . It is obtained using a policy gradient method in the inner loop of the algorithm, which is applied at each step of the outer loop to minimize the surrogate cost .
We now move on to the recursive equations. First, we have
(18) |
where . In addition,
(19) |
where from (17) can also be computed from
Finally, we have the Riccati difference equation (RDE):
(20)  $P_t = Q + A^\top P_{t+1} A - A^\top P_{t+1} B \left( R + B^\top P_{t+1} B \right)^{-1} B^\top P_{t+1} A,$
where $P_N = Q_N$, and the optimal gains in (17) can also be computed from
(21)  $K_t^* = \left( R + B^\top P_{t+1} B \right)^{-1} B^\top P_{t+1} A.$
As a result, it readily follows that
(22) | ||||
(23) | ||||
(24) |
We also define the Riccati operator
(25)  $\mathcal{R}(P) := Q + A^\top P A - A^\top P B \left( R + B^\top P B \right)^{-1} B^\top P A,$
so that and can also be shown as
(26) | ||||
(27) |
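For illustration, the backward recursion (20)–(21) can be implemented directly when the model is known; the following model-based sketch (a reference implementation only) returns the matrices and gains used in the definitions above.

```python
import numpy as np

def riccati_difference(A, B, Q, R, QN, N):
    """Backward Riccati difference equation (20) with the gains of (21).
    Returns P[t] for t = 0, ..., N and K[t] for t = 0, ..., N-1."""
    P = [None] * (N + 1)
    K = [None] * N
    P[N] = QN
    for t in range(N - 1, -1, -1):
        G = R + B.T @ P[t + 1] @ B
        K[t] = np.linalg.solve(G, B.T @ P[t + 1] @ A)                  # gain (21)
        P[t] = Q + A.T @ P[t + 1] @ A - A.T @ P[t + 1] @ B @ K[t]      # update (20)
    return P, K
```

As the horizon grows, the matrices produced by this recursion approach the solution of the ARE (4), which is precisely the property exploited by the outer loop and formalized in the next section.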
We now introduce the following mild assumption, which will be useful in establishing a key result.
Assumption 2.1.
$A$ in (1) is non-singular.
Under this assumption, the following result from [16] holds:
Lemma 2.1.
Having introduced all the necessary definitions, we now turn our attention to the outer loop.
3. The Outer Loop (Receding-Horizon Policy Gradient)
It has been demonstrated that the solution to the RDE (20) converges monotonically, and exponentially fast, to the stabilizing solution $P^*$ of the ARE (4) [7]. As a result, $K_t^*$ in (21) also converges to $K^*$ as the horizon $N - t$ increases. In particular, we recall the following result from [20, Theorem 1].
Theorem 3.1.
Let , and define
(28) |
where . Then it holds that and for all , the control policy computed by (21) is stabilizing and satisfies for any .
The proof of Theorem 3.1 is provided in Appendix A for completeness (and to account for some minor change in notation). We also note that this theorem relies on a minor inherent assumption that satisfies . A full discussion of this assumption is provided in Remark A.1 in Appendix A.
With this result in place, we provide our proposed algorithm (see Algorithm 1). Note that in this section, we focus on the outer loop of Algorithm 1, analyzing the requirements it imposes on the convergence of the policy gradient method employed in the inner loop. The details of the policy gradient method will be discussed in the next section.
Before we move on to the next result, we define the following constants:
Additionally, given a scalar , we define:
(29) |
We now present a key result, Theorem 3.2, on the accumulation of errors, which constitutes an improvement over [20, Theorem 2] (a corrected version of which is stated as Theorem 3.3 below); as the proof of Theorem 3.2 demonstrates, this improvement relies on a fundamentally different analysis.
Theorem 3.2.
(Main result: outer loop): Select
(30) |
where , and suppose that Assumption 2.1 holds. Now assume that, for some , there exists a sequence of policies such that for all ,
where is the optimal policy for the Linear Quadratic Regulator (LQR) problem from step to , incorporating errors from previous iterations of Algorithm 1. Then, the proposed algorithm outputs a control policy that satisfies . Furthermore, if is sufficiently small such that
then is stabilizing.
The proof of Theorem 3.2 is presented in Appendix B. A key component of our analysis is the contraction of the Riemannian distance on the Riccati operator, as established in Lemma 2.1. This allows us to demonstrate that the accumulated error remains linear in , in contrast to the exponential growth in [20, Theorem 2].
Given this discrepancy, we revisit [20, Theorem 2] and present a revised version which accounts for some necessary, and non-trivial, modifications to make the statement accurate. For the latter reason, and the fact that this result does not rely on Assumption 2.1, we provide a complete proof in Appendix C.
Theorem 3.3.
(Prior result: outer loop): Choose
(31) |
where . Now assume that, for some , there exists a sequence of policies such that
(32) |
where is the optimal policy for the Linear Quadratic Regulator (LQR) problem from step to , incorporating errors from previous iterations of Algorithm 1. Then, the RHPG algorithm outputs a control policy that satisfies . Furthermore, if is sufficiently small such that
then is stabilizing.
As previously mentioned, Theorem 3.2 significantly improves the error-accumulation behavior, resulting in much less restrictive requirements than Theorem 3.3. The limitations of Theorem 3.3 stem from the exponent of the constant in (32), which is discussed in detail in Appendix C. It is worth reiterating that this improvement comes only at the cost of Assumption 2.1, a rather mild structural requirement.
4. The Inner Loop and Policy Gradient
In this section, we focus on the inner loop of Algorithm 1, in which we implement our proposed policy gradient method.
We seek a way to estimate the gradient of the surrogate cost with respect to the policy. To this end, we propose:
(33) |
where is sampled from , and then is chosen randomly from the Gaussian distribution for some . Moreover, we rewrite as
(34) |
where . Substituting (34) in (33) yields
(35) |
This expression corresponds to the gradient estimate utilized in Algorithm 1, as described in its formulation.
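To convey the mechanism behind this estimate, the sketch below implements a one-point, REINFORCE-style estimator with a Gaussian perturbation of the input at step $h$. It is a plausible rendering of the idea behind (33)–(35) rather than a verbatim transcription, and the callables `sample_x0` and `cost_from_state` are illustrative placeholders.

```python
import numpy as np

def one_point_gradient(K, h, sigma, sample_x0, cost_from_state):
    """One-point, REINFORCE-style gradient estimate (a schematic version of the
    mechanism behind (33)-(35); the exact weighting in the paper may differ).

    cost_from_state(x0, u0, h) -> observed surrogate cost when the trajectory
    starts at x0 and the (perturbed) input u0 is applied at step h, with the
    previously computed gains applied afterwards.
    """
    x0 = sample_x0()                         # x_h drawn from the initial distribution
    v = sigma * np.random.randn(K.shape[0])  # Gaussian exploration noise
    u0 = -K @ x0 + v                         # randomized input at step h
    J = cost_from_state(x0, u0, h)           # single (one-point) cost evaluation
    score = -np.outer(v, x0) / sigma**2      # grad_K of the Gaussian policy log-likelihood
    return J * score                         # REINFORCE estimate of the policy gradient
```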
Proposition 4.1.
Suppose is sampled from and chosen from as before. Then for any given choice of , we have that
(36) |
Proof.
Similar to [20], we define the following sets regarding the inner loop of the algorithm for each :
(38) |
for some arbitrary . We also define the following constant:
We now provide some bounds in the following lemma.
Lemma 4.1.
Suppose , and
Then for any , we have that
(39) |
with probability at least , where are given by
(40) | ||||
(41) | ||||
(42) |
Moreover,
(43) |
where
(44) |
Proof.
Using the formulation of the gradient estimate derived in (35), we have
(45) | ||||
(46) |
Before we continue, we provide the following bound:
Sublemma 4.1.
Suppose . Then it holds that
(47) |
Proof of Sublemma 4.1. Using (6), we have
(48) | ||||
(49) | ||||
(50) | ||||
(51) | ||||
(52) | ||||
(53) |
Rearranging (53) yields
where (i) follows from the definition of the set in (38). This concludes the proof of Sublemma 4.1.
We now continue with the proof of Lemma 4.1. Note that
As a result,
(54) | ||||
(55) | ||||
(56) |
where the inequality follows from Sublemma 4.1 along with the fact that by the assumption,
Combining (46) with (56) and (2), we obtain
(57) | ||||
(58) | ||||
(59) | ||||
(60) |
Furthermore, since for any , is distributed according to the chi-squared distribution with degrees of freedom ( for any ). Therefore, the standard Laurent-Massart bounds [10] suggest that for arbitrary , we have that
(61) |
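For the reader's convenience, the upper-tail form of the Laurent–Massart bound [10] used here states that if $X$ follows a chi-squared distribution with $d$ degrees of freedom, then for any $x > 0$,
\[
\mathbb{P}\big( X \ge d + 2\sqrt{dx} + 2x \big) \le e^{-x}.
\]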
Now if we take , since by our assumption, it holds that . Thus
which after substituting with its value gives
As a result, we have and consequently
with probability at least , which after applying on (60) yields
proving the first claim. As for the second claim, note that using (60), we have
(62) |
Now, since the moments of this distribution are known, taking an expectation of (62) results in
concluding the proof. ∎
We next provide some useful properties of the cost function in the following lemma.
Lemma 4.2.
For all , the function is -strongly convex, where
and in particular, for all ,
(63) |
where is the global minimizer of . Moreover, assuming that , we have that for all ,
(64) |
where
Proof.
We first prove the strong convexity as follows:
Note that the next inequality is an immediate consequence of the PL inequality. Now we move on to the smoothness property:
concluding the proof. ∎
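For completeness, we recall the standard fact used above: a differentiable $\mu$-strongly convex function $f$ with global minimizer $x^\star$ satisfies the Polyak–Łojasiewicz (PL) inequality
\[
f(x) - f(x^\star) \le \frac{1}{2\mu} \left\| \nabla f(x) \right\|^2 \quad \text{for all } x,
\]
which is the type of bound invoked in (63).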
Before introducing the next result, let us denote the optimality gap of iterate by
(65) |
Moreover, let denote the -algebra containing the randomness up to iteration of the inner loop of the algorithm for each (including but not ). We then define
(66) |
which is a stopping time with respect to this filtration. Note that this is a slight abuse of notation, as these quantities may differ for each step of the outer loop; however, since these steps do not impact one another, we use a single notation for simplicity.
Lemma 4.3.
Suppose , and the update rule follows
(67) |
where is the step-size. Then for any , we have
(68) |
where is defined in (65).
Proof.
First, note that by -smoothness, we have
which after multiplying by (which is determined by ) and taking an expectation conditioned on gives
(69) | ||||
(70) | ||||
(71) |
where (i) follows from Proposition 4.1, Lemma 4.1 along with the fact that the event implies , and (ii) is due to Lemma 4.2.
Now after some rearranging on (71) and noting that is also determined by , we conclude that
(72) |
finishing the proof. ∎
We are now in a position to state a precise version of our main result for the inner loop.
Theorem 4.1.
(Main result: inner loop): Suppose . For any , if the step-size is chosen as
(73) |
then for a given error tolerance , the iterate of the update rule (67) after
steps satisfies
with a probability of at least .
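In code, the inner loop of Theorem 4.1 amounts to running the update (67) with a decaying step-size for the prescribed number of iterations. The sketch below leaves the step-size schedule and the iteration count as inputs, since the exact constants in (73) depend on problem parameters that are not reproduced here.

```python
import numpy as np

def inner_loop(K_init, num_iters, step_size, grad_estimate):
    """Inner-loop policy gradient iteration, i.e. the update rule (67) with a
    time-varying step-size (e.g. step_size(k) proportional to 1/(k+1), in the
    spirit of (73); the constants prescribed by Theorem 4.1 are left as inputs)."""
    K = np.array(K_init, dtype=float)
    for k in range(num_iters):
        K = K - step_size(k) * grad_estimate(K)   # gradient step with the one-point estimate
    return K

# Illustrative usage with placeholder names (m, n, T, mu, h, sigma, sample_x0,
# cost_from_state are assumptions, not quantities defined in the paper):
#   K_hat = inner_loop(np.zeros((m, n)), T,
#                      lambda k: 1.0 / (mu * (k + 1)),
#                      lambda K: one_point_gradient(K, h, sigma, sample_x0, cost_from_state))
```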
The proof of this result relies heavily on Proposition 4.2, which we establish next.
Proposition 4.2.
Under the parameter settings of Theorem 4.1, we have that
Moreover, the event happens with probability of at least .
Proof.
We dedicate the following sublemma to prove the first claim.
Sublemma 4.2.
Proof of Sublemma 4.2. We prove this result by induction on as follows:
Base case ():
which after taking expectation proves the claim for .
Inductive step: Let be fixed and assume that
(74) |
holds (the inductive hypothesis). Observe that
(75) | ||||
(76) | ||||
(77) |
where (i) comes from and (ii) from the fact that is determined by . By Lemma 4.3, we have that
(78) | ||||
(79) |
where (i) comes from replacing with its value in Theorem 4.1 along with the fact that . Now taking an expectation on (79) and combining it with (77) yields
(80) | ||||
(81) | ||||
(82) | ||||
(83) | ||||
(84) |
where (i) comes from the induction hypothesis (74), and (ii) from
which is due to the choice of in Theorem 4.1. This proves the claim for , completing the inductive step.
Now utilizing Sublemma 4.2 along with the choice of in Theorem 4.1, we have
concluding the proof of the first claim of Proposition 4.2. Moving on to the second claim, we start by introducing the stopped process
(85) |
We now show this process is a supermartingale. First, observe that
(86) | ||||
(87) | ||||
(88) |
Now note that for the first term of the right-hand side of (88), it holds that
(89) |
where (i) follows from the fact that under the event , we have . Moreover, for the second term of the right-hand side of (88), we have that
(90) | ||||
(91) |
where (i) follows from Lemma 4.3. Combining (89) and (91) with (88), we get
(92) | ||||
(93) | ||||
(94) | ||||
(95) |
where (i) follows from under parameter choice of Theorem 4.1. This finishes the proof of being a supermartingale. Now note that
where (i) follows from the fact that . Using Doob/Ville’s inequality for supermartingales, we have that
Using the choice of in Theorem 4.1, we have that
(96) | ||||
(97) |
This verifies the second claim of Proposition 4.2, concluding the proof. ∎
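For completeness, the maximal inequality for nonnegative supermartingales used above states that if $(M_k)_{k \ge 0}$ is a nonnegative supermartingale, then for any $a > 0$,
\[
\mathbb{P}\Big( \sup_{k \ge 0} M_k \ge a \Big) \le \frac{\mathbb{E}[M_0]}{a}.
\]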
With this in mind, the proof of Theorem 4.1 is straightforward:
Proof of Theorem 4.1: We now employ Proposition 4.2 to validate the claims of Theorem 4.1. Note that
where (i) follows from applying Markov’s inequality on the first claim of Proposition 4.2, and (ii) comes directly from the second claim of Proposition 4.2. Finally, we utilize the -strong convexity of , along with to write
and hence,
with a probability of at least , finishing the proof.
5. Sample complexity
We now utilize our results on the inner and outer loops to provide sample complexity bounds. To wit, combining Theorems 4.1 and 3.2, along with applying the union bound on the probabilities of failure at each step, we obtain the following result.
Corollary 5.1.
Suppose Assumption 2.1 holds, and choose
where . Moreover, for each , let be as defined in (29). Then Algorithm 1 with the parameters as suggested in Theorem 4.1, i.e.,
and
outputs a control policy that satisfies with a probability of at least . Furthermore, if is sufficiently small such that
then is stabilizing.
The results in Corollary 5.1 provide a rigorous theoretical foundation for Algorithm 1, ensuring it computes a control policy satisfying with high probability. The following corollary formalizes the sample complexity bound of our approach.
Corollary 5.2.
It is worth comparing this result with the one in [20], taking into account the necessary adjustments à la Theorem 3.3, where error accumulation results in a worse sample complexity bound.
Corollary 5.3.
This comparison highlights the advantage of our method, which achieves a uniform sample complexity bound, independent of problem-specific constants. In contrast, the bound in [20] deteriorates, since their second term scales as
This can be arbitrarily worse than , leading to much higher sample complexity in some cases.
Finally, to validate these theoretical guarantees and assess the algorithm’s empirical performance, we conduct simulation studies on a standard example from [20]. The setup and results are presented in the following section.
6. Simulation Studies
[Figure 1. Average sample complexity and policy optimality gap over one hundred independent runs, for twelve values of the error tolerance; see the discussion below.]
For comparison, we demonstrate our results on the example provided in [20], where , , , and the optimal policy is with . In this example, we select , in alignment with a minor inherent assumption discussed later in Remark A.1 (Appendix A). Additionally, we initialize our policy at each step of the outer loop of Algorithm 1 as . This choice contrasts with [5, 11], which require stable policies for initialization, as the stable policies for this example lie in the set
We set , consistent with (31), and in each inner loop, apply the policy gradient (PG) update outlined in Algorithm 1 using a time-varying step-size as suggested in (73). The algorithm is run for twelve different values of : , with the results shown in Figure 1. To account for the inherent randomness in the algorithm, we perform one hundred independent runs for each value of and compute the average sample complexity and policy optimality gap . As seen in Figure 1, the sample complexity exhibits a slope consistent with , visibly outperforming the method in [20], which demonstrates a much steeper slope of approximately .
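For readers who wish to experiment, the script below runs the same kind of receding-horizon, one-point policy gradient scheme on a stand-in scalar system. Since the exact parameters of the example from [20] are not reproduced here, every numerical value (system, noise level, step-size, horizon) is an illustrative placeholder, and no tuning has been attempted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in scalar system: the exact values of the example from [20] are not
# reproduced here, so every number below is an illustrative placeholder.
A, B, Q, R = 2.0, 1.0, 1.0, 1.0       # open-loop unstable since |A| > 1
N = 6                                  # outer-loop horizon (illustrative)
sigma, batch, iters = 0.3, 64, 400     # untuned exploration/averaging/iteration choices

def rollout_cost(K0, h, gains):
    """Single-rollout finite-horizon cost from step h: gain K0 at step h,
    previously computed gains afterwards, random initial state."""
    x = rng.normal()
    total = 0.0
    for t in range(h, N):
        K = K0 if t == h else gains[t]
        u = -K * x
        total += Q * x**2 + R * u**2
        x = A * x + B * u
    return total + Q * x**2            # terminal cost (weight Q, a placeholder choice)

gains = {}
for h in range(N - 1, -1, -1):         # outer loop, backward in time
    K = 0.0                            # arbitrary, non-stabilizing initialization
    for k in range(iters):             # inner loop: one-point policy gradient
        g = 0.0
        for _ in range(batch):         # average a few rollouts to tame the variance
            v = sigma * rng.normal()
            g += rollout_cost(K + v, h, gains) * v / sigma**2
        K -= 0.02 / (1 + k / 100) * (g / batch)   # decaying step-size (untuned)
    gains[h] = K

# Exact infinite-horizon solution of the scalar ARE, for comparison.
P = Q
for _ in range(200):
    P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
print("RHPG estimate of K*:", gains[0], "   exact K*:", A * P * B / (R + B * P * B))
```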
7. Conclusion
In this paper, we introduced a novel approach to solving the model-free LQR problem, inspired by policy gradient methods, particularly REINFORCE. Our algorithm eliminates the restrictive requirement of starting with a stable initial policy, making it applicable in scenarios where obtaining such a policy is challenging. Furthermore, it removes the reliance on two-point gradient estimation, enhancing practical applicability while maintaining similar rates.
Beyond these improvements, we introduced a refined outer-loop analysis that significantly improves the error-accumulation behavior, leveraging the contraction of the Riemannian distance over the Riccati operator. This ensures that the accumulated error remains linear in the horizon length, leading to a sample complexity bound independent of problem-specific constants and making the method more broadly applicable.
We provided a rigorous theoretical analysis, establishing that the algorithm achieves convergence to the optimal policy with competitive sample complexity bounds. Importantly, our numerical simulations reveal performance that surpasses these theoretical guarantees, with the algorithm consistently outperforming prior methods that rely on two-point gradient estimates. This superior performance, combined with a more practical framework, highlights the potential of the proposed method for solving control problems in a model-free setting. Future directions include extensions to nonlinear and partially observed systems, as well as robustness enhancements.
References
- [1] A. M. Annaswamy and A. L. Fradkov. A historical perspective of adaptive control and learning. Annual Reviews in Control, 52:18–41, 2021.
- [2] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
- [3] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
- [4] S. Dean, H. Mania, N. Matni, B. Recht, and S. Tu. On the sample complexity of the linear quadratic regulator. Foundations of Computational Mathematics, 20(4):633–679, 2020.
- [5] M. Fazel, R. Ge, S. Kakade, and M. Mesbahi. Global convergence of policy gradient methods for the linear quadratic regulator. In International Conference on Machine Learning, pages 1467–1476. PMLR, 2018.
- [6] M. Gevers. A personal view of the development of system identification: A 30-year journey through an exciting field. IEEE Control Systems Magazine, 26(6):93–105, 2006.
- [7] B. Hassibi, A. H. Sayed, and T. Kailath. Indefinite-Quadratic Estimation and Control. Society for Industrial and Applied Mathematics, 1999.
- [8] B. Hu, K. Zhang, N. Li, M. Mesbahi, M. Fazel, and T. Başar. Toward a theoretical foundation of policy optimization for learning control policies. Annual Review of Control, Robotics, and Autonomous Systems, 6:123–158, 2023.
- [9] A. Lamperski. Computing stabilizing linear controllers via policy iteration. In Proceedings of the 59th IEEE Conference on Decision and Control (CDC), pages 1902–1907. IEEE, 2020.
- [10] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
- [11] D. Malik, A. Pananjady, K. Bhatia, K. Khamaru, P. L. Bartlett, and M. J. Wainwright. Derivative-free methods for policy optimization: Guarantees for linear quadratic systems. Journal of Machine Learning Research, 21(21):1–51, 2020.
- [12] A. Neshaei Moghaddam, A. Olshevsky, and B. Gharesifard. Sample complexity of the linear quadratic regulator: A reinforcement learning lens, 2024.
- [13] H. Mohammadi, A. Zare, M. Soltanolkotabi, and M. R. Jovanović. Convergence and sample complexity of gradient methods for the model-free linear–quadratic regulator problem. IEEE Transactions on Automatic Control, 67(5):2435–2450, 2022.
- [14] J. C. Perdomo, J. Umenberger, and M. Simchowitz. Stabilizing dynamical systems via policy gradient methods. In Advances in Neural Information Processing Systems, 2021.
- [15] C. M. Stein. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics, 9(6):1135–1151, 1981.
- [16] J. Sun and M. Cantoni. On Riccati contraction in time-varying linear-quadratic control, 2023.
- [17] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, pages 1057–1063, 1999.
- [18] Y. Tang, Y. Zheng, and N. Li. Analysis of the optimization landscape of linear quadratic gaussian (LQG) control. In Proceedings of the 3rd Conference on Learning for Dynamics and Control, volume 144 of Proceedings of Machine Learning Research, pages 599–610, 2021.
- [19] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
- [20] X. Zhang and T. Başar. Revisiting LQR control from the perspective of receding-horizon policy gradient. IEEE Control Systems Letters, 7:1664–1669, 2023.
- [21] X. Zhang, S. Mowlavi, M. Benosman, and T. Başar. Global convergence of receding-horizon policy search in learning estimator designs, 2023.
- [22] F. Zhao, X. Fu, and K. You. Learning stabilizing controllers of linear systems via discount policy gradient. arXiv preprint arXiv:2112.09294, 2021.
Appendix A Proof of Theorem 3.1
We let
and we have
(98) |
where denotes the unique positive semi-definite (psd) square root of the psd matrix , for all , and satisfies . We now use to represent the -induced matrix norm and invoke Theorem 14.4.1 of [7], where our , and correspond to , and in [7], respectively. By Theorem 14.4.1 of [7] and (98), we obtain and given that ,
Therefore, the convergence is exponential such that . As a result, the convergence of to in spectral norm can be characterized as
where we have used to denote the condition number of . That is, to ensure , it suffices to require
(99) |
Lastly, we show that the (monotonic) convergence of to follows from the convergence of to . This can be verified through:
(100) |
Hence, we have and
Substituting in (99) with completes the proof.
Appendix B Proof of Theorem 3.2
We start the proof by providing some preliminary results.
Lemma B.1.
Let $P_1$ and $P_2$ be two positive definite matrices. It holds that
(101) |
Furthermore, if
(102) |
then we have
(103) |
Proof.
First, since and are positive definite, we have that is positive definite, and therefore has a logarithm; so we let
and hence, we can write
The eigenvalues of are precisely the logarithms of the eigenvalues of due to and being similar. As a result,
We now write
and thus,
(104) |
Since whenever , we also have for any matrix , by considering the expansion of :
Since the spectral norm is always bounded by the Frobenius norm, we have:
Finally, recalling that , this becomes:
which after substituting into (104) yields:
concluding the proof of the first claim. We now move on to the second claim. As before, we write
We now define
so that
Moreover, following (102),
and hence, one can use the series expansion of the logarithm
to show
(105) | ||||
(106) | ||||
(107) | ||||
(108) | ||||
(109) |
As a result, we have
finishing the proof. ∎
Building on Lemma B.1, we proceed to state the following result regarding the LQR setting.
Lemma B.2.
Let , select , and suppose Assumption 2.1 holds. Additionally, assume that for all , we have
(110) | |||
(111) |
where satisfies
(112) |
If
(113) |
then the following bounds hold:
(114) | |||
(115) |
Proof.
Before we move on to the proof, we establish some preliminary results. First, note that since
due to the monotonic convergence of (20) to (see [7]), we have that for all . Therefore, it holds that
(116) |
Moreover, due to (110), we have
(117) |
for all . Now since (116), (117), and Assumption 2.1 all hold, we can apply Lemma 2.1 to show that for all ,
(118) | ||||
(119) |
where (i) follows from (26) and (27). Following (119), we can now write
(120) | ||||
(121) | ||||
(122) | ||||
(123) | ||||
(124) | ||||
(125) | ||||
(126) | ||||
(127) |
where (i) is due to the triangle inequality of the Riemannian distance [3], (ii) follows from , and (iii) from (111). We now start the proof of (114) by writing
(128) |
and trying to provide a bound for both terms of the right-hand side of (128). For the first term, we have
(129) | ||||
(130) | ||||
(131) |
where (i) follows from (101), (ii) from (127), and (iii) from the condition on in (112). As for the second term on the right-hand side of (128), we can write
(132) |
where (i) follows from completion of squares. Combining (132) and (110), we have
(133) | ||||
(134) | ||||
(135) | ||||
(136) |
Finally, substituting (131) and (136) in (128), we have
finishing the proof of (114). Having established this, we proceed to prove (115). Note that similar to (133), we can write
(137) | ||||
(138) |
Moreover, due to (114), we have that , and hence,
(139) |
where (i) follows from (116). Combining (139) and (136), we have
(140) | ||||
(141) | ||||
(142) |
Thus, the condition (102) of Lemma B.1 is met, and we can utilize (103) to write
where (i) follows from (139) and (142), (ii) from (138), and (iii) from condition (113). This verifies (115), concluding the proof. ∎
Proof of Theorem 3.2: First, according to Theorem 3.1, our choice of $N$ in (30) ensures that is stabilizing and . Then, it remains to show that the output satisfies .
Now observe that
where substituting and in (100), respectively, with and leads to
Hence, the error size could be bounded by
(143) |
Next, since we have , it suffices to show to fulfill . Then, by (143), in order to satisfy , it remains to show
(144) |
In order to show this, we first let
(145) |
which clearly satisfies (112). Now we want to show, by strong induction, that
for all . For the base case, we have
and hence, it immediately follows that
Now since it holds in the statement of Theorem 3.2 that
which satisfies (113), the inductive step follows directly from Lemma B.2. We have now successfully established that
(146) | |||
(147) |
for all . As a result, we have
(148) | ||||
(149) | ||||
(150) | ||||
(151) | ||||
(152) | ||||
(153) | ||||
(154) | ||||
(155) |
We now show (144) by writing
(156) |
and providing a bound for both terms of the right-hand side of (156). For the first term, we have
(157) | ||||
(158) | ||||
(159) |
where (i) follows from Lemma B.1, (ii) from (155), and (iii) from (145). As for the second term on the right-hand side of (156), we utilize (132) to write
(160) | ||||
(161) | ||||
(162) | ||||
(163) |
where (i) follows from (146), and (ii) is due to the definition of in (29). Finally, substituting (159) and (163) in (156), we have
thereby establishing (144) and concluding the proof of Theorem 3.2.
Appendix C Proof of Theorem 3.3
First, according to Theorem 3.1, we select
(164) |
where . This ensures that is stabilizing and . Then, it remains to show that the output satisfies .
Recall that the RDE (20) is a backward iteration starting with , and can also be represented as:
(165) | ||||
(166) | ||||
(167) |
where (i) comes from . Moreover, for clarity of proof, we denote the policy optimization error at time by:
We argue that can be achieved by carefully controlling for all . At , it holds that
where substituting and in (100), respectively, with and leads to
Hence, the error size could be bounded by
(168) |
Next, we require and to fulfill . We additionally require to upper-bound the positive definite solutions of (18). Then, by (168), in order to fulfill , it suffices to require
(169) |
Subsequently, we have
(170) |
The first difference term on the RHS of (170) is
(171) | ||||
(172) | ||||
(173) |
Moreover, the second term on the RHS of (170) is
(174) | ||||
(175) |
where (174) follows from completion of squares. Thus, combining (170), (171), and (175) yields
(176) |
Note that the difference between (176) and its counterpart in [20] arises because the argument for
does not hold, since is not necessarily positive semi-definite (a product of two symmetric psd matrices is not necessarily psd unless the product is a normal matrix). Now, we require
(177) | ||||
(178) |
Then, conditions (177) and (178) are sufficient for (169) (and thus for ) to hold. Subsequently, we can propagate the required accuracies in (177) and (178) forward in time. Specifically, we iteratively apply the arguments in (176) (i.e., by plugging quantities with subscript into the LHS of (176) and plugging quantities with subscript into the RHS of (176)) to obtain the result that if at all , we require
(179) | |||
We now compute the required accuracy for . Note that since no prior computational errors happened at . By (176), the distance between and can be bounded as
To fulfill the requirement (179) for , which is
it suffices to let
(180) |
Finally, we analyze the worst-case complexity of RHPG by computing, at the most stringent case, the required size of . When , the most stringent dependence of on happens at , which is of the order , and the dependences on system parameters are . We then analyze the case where , where the requirement on is still . Note that in this case, the requirement on is stricter than that on any other for any and by (180):
(181) |
Since we require to satisfy (164), the dependence of on in (181) becomes
with additional polynomial dependences on system parameters since
As a result, it suffices to require error bound for all to be
The difference between our requirement for this case and its counterpart in [20] is due to a calculation error in [20], which incorrectly neglects the impact of the exponent in . Lastly, for to be stabilizing, it suffices to select a sufficiently small such that the -ball centered at the infinite-horizon LQR policy lies entirely in the set of stabilizing policies. A crude bound that satisfies this requirement is
This completes the proof.