Perception-Based Sampled-Data Optimization of Dynamical Systems
Abstract
Motivated by perception-based and sensing-based control problems in autonomous systems, this paper addresses the problem of developing feedback controllers to regulate the inputs and the states of a dynamical system to optimal solutions of an optimization problem when one has no access to exact measurements of the system states. In particular, we consider the case where the states need to be estimated from high-dimensional sensory data received only at some time instants. We develop a sampled-data feedback controller that is based on adaptations of a projected gradient descent method and includes neural networks as integral components to estimate the state of the system from perceptual information. We derive sufficient conditions to guarantee (local) input-to-state stability of the control loop. Moreover, we show that the interconnected system tracks the solution trajectory of the underlying optimization problem up to an error that depends on the approximation errors of the neural network and on the time-variability of the optimization problem; the latter originates from time-varying safety and performance objectives, input constraints, and unknown disturbances.
keywords:
Learning for control, data-driven control, feedback optimization, output regulation.

1 Introduction
A major challenge in controlling complex autonomous systems consists of incorporating rich perceptual and sensing data into the control loop. The performance of feedback control systems critically relies on the information extracted from perceptual sensing, which may require the processing of high-dimensional data available only at given spatio-temporal granularities; see, e.g., Dean and Recht (2021); Al Makdah et al. (2020); Xu et al. (2021); Dawson et al. (2022). This paper investigates how to integrate perceptual information into controllers inspired by optimization algorithms, where the goal is to steer a dynamical system toward solutions of an optimization problem with costs associated with the system's inputs and states. For example, in autonomous driving, the optimization problem may formalize way-point tracking and obstacle avoidance, whose information is provided by images from a dashboard camera. Other examples include robotics and power systems (the latter leveraging pseudo-measurements).
The line of research on feedback optimization goes back to earlier concepts of KKT-type controllers in Jokic et al. (2009); Brunner et al. (2012); Hirata et al. (2014), and it was recently expanded to include new classes of controllers inspired by first-order optimization methods in Colombino et al. (2020); Zheng et al. (2020); Hauswirth et al. (2021); Lawrence et al. (2020); Bianchin et al. (2021, 2022); Belgioioso et al. (2021); Agarwal et al. (2022); Simpson-Porco (2021); also see references therein. In this paper, we provide contributions relative to existing works by considering a setup where the cost of the optimization problem evolves over time (for instance, for way-point tracking and to avoid moving obstacles) and the state of the system cannot be directly measured. The latter is a distinctive feature of this work: we address the case in which optimization-based controllers must leverage perceptual information available at given temporal granularities and learning mechanisms to estimate the system state.
We develop a sampled-data feedback controller that is based on an adaptation of a projected gradient descent method. Based on the specified time-varying costs, the gradient-based controller generates inputs for the system which are then passed through a zero-order hold. Importantly, the controller leverages a trained neural network that maps perceptual information into estimates of the state of the system. We derive sufficient conditions to guarantee (local) input-to-state stability (ISS) of the control loop. In particular, we show that the interconnected system tracks the optimal solution trajectory of the optimization problem up to an error that depends on the approximation errors of the neural network and the time-variability of the cost and unknown disturbance. The ISS bounds are derived by leveraging the fundamental results of Jiang and Wang (2001) and Nešić et al. (1999).
We note that a similar perception-based regulation problem was considered in our prior work Cothren et al. (2022a); however, the controllers in Cothren et al. (2022a) operate in continuous time and thus do not account for the sampled-data nature of the feedback information. A sampled-data controller was developed in Belgioioso et al. (2021); with respect to Belgioioso et al. (2021), we consider cases where the optimization problem is time-varying and the state is estimated via perception maps.
We test our controller on an autonomous driving application where vehicles are modeled via unicycle dynamics; the controller acquires the position of the vehicle from a neural network that estimates positions from images.
2 Problem Formulation
In the following, we formalize our research problem and discuss the necessary assumptions.

Notation. We denote by ℕ, ℤ>0, ℝ, ℝ>0, and ℝ≥0 the sets of natural numbers, positive integers, real numbers, positive real numbers, and non-negative real numbers, respectively. For vectors x ∈ ℝⁿ and u ∈ ℝᵐ, ‖x‖ denotes the Euclidean norm of x, ‖x‖∞ the supremum norm, and (x, u) ∈ ℝⁿ⁺ᵐ their vector concatenation. xᵀ denotes transposition, and xᵢ the i-th element of x. For a matrix A, ‖A‖ is the induced 2-norm and ‖A‖∞ the supremum norm. Bᵣ(x) ⊂ ℝⁿ is the open ball of radius r > 0 centered at x; its closure is the corresponding closed ball. Given two sets X and Y, X × Y denotes their Cartesian product. proj_X(y) denotes the Euclidean projection of y onto a closed convex set X. A function α : ℝ≥0 → ℝ≥0 is of class K if it is continuous, α(0) = 0, and strictly increasing; it is of class K∞ if it is additionally unbounded. A function β : ℝ≥0 × ℝ≥0 → ℝ≥0 is of class KL if, for each fixed s, β(·, s) is of class K and, for each fixed r, β(r, ·) is decreasing and β(r, s) → 0 as s → ∞.

We consider systems that can be modeled using dynamics of the form:

ẋ(t) = f(x(t), u(t), w(t)),    (1)

where x(t) ∈ ℝⁿ is the state, u(t) ∈ ℝᵐ is the control input, w(t) ∈ ℝ^q is a time-varying unknown exogenous disturbance, and where the vector field f is continuously differentiable on an open and connected domain and is Lipschitz continuous in its arguments, with constants ℓₓ, ℓᵤ, and ℓ_w, respectively. In this paper, motivated by practical hardware and operational requirements, we restrict our attention to cases where u(t) ∈ U at all times, where U ⊂ ℝᵐ is compact. We impose the following additional assumptions on the system (1).
Assumption 1
There exists a unique continuously differentiable map h : U × ℝ^q → ℝⁿ such that, for any (constant) u ∈ U and w ∈ ℝ^q, f(h(u, w), u, w) = 0. Moreover, h admits the decomposition h(u, w) = h_u(u) + h_w(w), where h_u and h_w are Lipschitz continuous with constants ℓ_{h,u} and ℓ_{h,w}, respectively.
Assumption 2
For all t ≥ 0, w(t) ∈ W, where W ⊂ ℝ^q is compact. Moreover, t ↦ w(t) is continuous.
Assumption 1 guarantees that, for constant inputs u ∈ U and w ∈ W, system (1) admits the unique equilibrium point x = h(u, w). Note that, when the Jacobian of f with respect to x is invertible for any u and w, the existence of h is always guaranteed; furthermore, by the implicit function theorem, h is differentiable since f is differentiable. We also note that the equilibrium set h(U × W) is compact: this is because U × W is compact, h is continuously differentiable, and continuous images of compact sets are compact (Rudin, 1976, Thm. 4.14). In particular, ‖h(u, w)‖ is bounded for any u ∈ U and w ∈ W.
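For intuition, the steady-state map h of Assumption 1 is available in closed form for a stable LTI system. The sketch below is a hypothetical example (the matrices are made up, not the paper's plant): it verifies the equilibrium condition f(h(u, w), u, w) = 0 and the additive decomposition of h numerically.

```python
import numpy as np

# Hypothetical stable LTI example: x_dot = A x + B u + E w.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
E = np.array([[0.5], [0.0]])

def h(u, w):
    """Steady-state map: the unique x with A x + B u + E w = 0."""
    return -np.linalg.solve(A, B @ u + E @ w)

u, w = np.array([1.0]), np.array([2.0])
x_eq = h(u, w)
# Equilibrium condition f(x_eq, u, w) = 0 holds up to round-off.
assert np.allclose(A @ x_eq + B @ u + E @ w, 0.0)
```

By linearity, h splits as h(u, w) = h_u(u) + h_w(w) with h_u(u) = -A⁻¹Bu and h_w(w) = -A⁻¹Ew, matching the decomposition in Assumption 1.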
Next, as is customary in the context of feedback optimization (see, e.g., Colombino et al. (2020); Hauswirth et al. (2021)), we require a stability condition on the system to be controlled. To this end, let X ⊆ ℝⁿ be a set for which the following assumption holds.
Assumption 3
There exist a continuously differentiable function V : ℝⁿ × ℝᵐ × ℝ^q → ℝ≥0 and constants d₁, d₂, d₃ > 0 such that:

1. For all x ∈ X, u ∈ U, and w ∈ W,
d₁‖x − h(u, w)‖² ≤ V(x, u, w) ≤ d₂‖x − h(u, w)‖²;

2. For any constant u ∈ U and w ∈ W,
⟨∇ₓV(x, u, w), f(x, u, w)⟩ ≤ −d₃‖x − h(u, w)‖² for all x ∈ X.

From Khalil (2002), Assumption 3 implies that there exist constants c ≥ 1 and λ > 0 such that, for constant inputs u ∈ U and w ∈ W, ‖x(t) − h(u, w)‖ ≤ c‖x(0) − h(u, w)‖e^{−λt} for all t ≥ 0 and x(0) ∈ X.
We point out that when the (physical) system does not satisfy Assumption 3, then (1) models the pre-stabilized physical system.
2.1 Generative and Perception Maps
In this paper, we assume that the state of (1) is not directly measurable. Instead, one has access to nonlinear and possibly high-dimensional observations o = q(x) of the state x, where q is an unknown generative map. This setup emerges when information about the state is acquired through perceptual information from sensing and estimation mechanisms. For example, in autonomous driving applications, vehicle states are often reconstructed from images generated by cameras. See, for example, the models in Dean and Recht (2021); Al Makdah et al. (2020); Murillo-González and Poveda (2022); also see the closely related observer design problems in Marchi et al. (2022); Chou et al. (2022).
Regarding the unknown map q, we make the following assumption (see also Dean and Recht (2021)).
Assumption 4
The map q is such that the image q(C) is compact for any compact set C ⊂ ℝⁿ. Further, there exists a map p such that p(q(x)) = x + e(x), where the error e(x) is bounded as ‖e(x)‖ ≤ e_max for all x ∈ X, for a given finite e_max > 0.
The function p in Assumption 4 is referred to as the perception map; for a given observation o = q(x), it yields a possibly noisy estimate p(o) of the state, up to a bounded error e(x). In this paper, we will leverage supervised learning methods to estimate the perception map from data.
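As a toy illustration of Assumption 4 and of learning a perception map from data, the sketch below fits a linear least-squares regressor (a simple stand-in for the neural network used later) that maps synthetic observations back to states; the generative map q, the dimensions, and the sample sizes are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical generative map q: state x in R^2 -> observation o in R^10.
W = rng.normal(size=(10, 2))
def q(x):
    return np.tanh(W @ x)

# Training set drawn from a compact set (in the spirit of Assumption 6).
X = rng.uniform(-1, 1, size=(500, 2))
O = np.array([q(x) for x in X])

# Least-squares perception map p: o -> x_hat (solves O P ~ X).
P, *_ = np.linalg.lstsq(O, X, rcond=None)
def p(o):
    return P.T @ o

# The composition p(q(x)) recovers x up to a bounded residual error.
x_test = np.array([0.3, -0.2])
err = np.linalg.norm(p(q(x_test)) - x_test)
```

The residual `err` plays the role of the bounded perception error e(x) in Assumption 4; a trained neural network would shrink it further over the training set.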
2.2 Regulation to Solutions of an Optimization Problem
We focus on regulating the system (1) to the solutions of the following time-varying optimization problem:

min_{x, u ∈ U}  φ(u; t) + ψ(x; t)    (2a)
s.t.  x = h(u, w(t)),    (2b)

where φ and ψ are functions that describe costs associated with the system's inputs and states, respectively. We remark that (2) is a time-varying optimization problem for two reasons: (i) the cost functions are time-varying, which allows us to account for performance and safety objectives that evolve over time, and (ii) the constraint is time-varying since the system's equilibrium point is parametrized by the time-varying signal w(t). Accordingly, (2) defines optimal trajectories for the system (1). Note that, since h(u, w) is unique for any fixed u and w (see Assumption 1), the optimization problem (2) can be rewritten as:

min_{u ∈ U}  φ(u; t) + ψ(h(u, w(t)); t).    (3)
Given the problem (3), we next formalize our control problem.
Problem 1 (Online optimization with state perception)
Remark 1
(Implicit solution) Since the problem (3) is parametrized by the unknown exogenous input w(t), the solutions of (3) cannot be computed explicitly via standard numerical optimization methods. We seek feedback controllers that drive the inputs and states of (1) to solutions of (3) by relying only on estimates of the state, and without requiring sensing of the disturbance w(t).
Remark 2
(Interpretation of the control problem) Recall that (3) is parametrized by a time-varying disturbance and has time-varying costs. Thus, (3) formalizes an equilibrium-seeking problem, where the objective is to select optimal input-state pairs that minimize the specified time-varying cost at each time (see, e.g., Colombino et al. (2020); Bianchin et al. (2021); Belgioioso et al. (2021)). This is a high-level regulation problem that can be nested with a stabilizing controller.
We conclude this section with some relevant assumptions.
Assumption 5
The following hold:

5(i) The function u ↦ φ(u; t) is ℓ_φ-Lipschitz continuous over U, for all t ≥ 0.

5(ii) The function x ↦ ψ(x; t) is ℓ_ψ-Lipschitz continuous over X, for all t ≥ 0.

5(iii) The composite function u ↦ φ(u; t) + ψ(h(u, w); t) is μ-strongly convex with μ > 0, for all w ∈ W and for all t ≥ 0; here and in the following, H(u) denotes the Jacobian of h with respect to u.

5(iv) The set U is convex.

Note that, from Assumption 5, it follows that the composite cost in (3) is Lipschitz continuous with constant ℓ_φ + ℓ_ψ ℓ_h, where ℓ_h is the Lipschitz constant of h in u.
3 Perception-Based System Regulation
To address Problem 1, we consider the design of feedback controllers that are inspired by projected-gradient-type methods as in Bianchin et al. (2021); Cothren et al. (2022a). However, to acknowledge the fact that the state estimates are available only at given time intervals (for example, images from a camera are captured at a given frequency), our controller design is based on a sampled-data mechanism. The controller is equipped with a supervised learning method to estimate the perception map.
Towards this, let T > 0 represent the period between two consecutive arrivals of perceptual data (for example, images) and let k ∈ ℕ be the sampling index, so that {kT, k ∈ ℕ} is the set of times at which perceptual information arrives and control inputs are updated. Accordingly, denote by x_k := x(kT) and w_k := w(kT) the sampled state and disturbance at time kT, and let φ_k(·) := φ(·; kT) and ψ_k(·) := ψ(·; kT) for notational brevity. We propose the following projected-gradient-type controller to generate inputs at each time kT, k ∈ ℕ:

u_{k+1} = proj_U { u_k − η Φ_k(x_k, u_k) },    (4)

where η > 0 is a tunable parameter (also known as the step size in the gradient descent literature), and

Φ_k(x, u) := ∇φ_k(u) + H(u)ᵀ ∇ψ_k(x),

where H(u) is the Jacobian of h evaluated at u.

The controller (4) has the form of a projected gradient-type algorithm; here, we have modified it by evaluating the gradient of ψ_k at the instantaneous system state to circumvent the need to measure the exogenous input w_k. We also note that the map Φ_k is applied to the current state x_k and the previous control input u_k; the new input u_{k+1} is applied to the system over the interval [kT, (k+1)T). Critically, the controller relies on knowledge of the system state at time kT, which cannot be observed directly. To address this, we consider training a neural network to obtain an estimate p̂ of the perception map p (as in Assumption 4), which yields estimates of the state x_k from the perceptual data o_k.
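A minimal sketch of one update of a projected-gradient controller of this form, assuming box input constraints and quadratic test costs (the function names, the costs, and the matrix H are illustrative, not the paper's):

```python
import numpy as np

def proj_box(u, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box U = [lo, hi]^m."""
    return np.clip(u, lo, hi)

def controller_step(u, x_hat, grad_phi, grad_psi, H, eta):
    """One projected-gradient update in the spirit of (4): the state
    gradient is evaluated at the estimate x_hat, and H plays the role
    of the Jacobian of the steady-state map with respect to u."""
    g = grad_phi(u) + H.T @ grad_psi(x_hat)
    return proj_box(u - eta * g)

# Illustrative check: phi(u) = ||u||^2/2, psi(x) = ||x||^2/2, and
# steady-state map x = H u with H = I, so the optimizer is u* = 0.
H = np.eye(2)
u = np.array([1.0, 1.0])
for _ in range(50):
    x_hat = H @ u                     # exact state; no perception error
    u = controller_step(u, x_hat, lambda v: v, lambda x: x, H, eta=0.2)
```

With these test costs each update contracts the input toward the optimizer, so the iterate is driven to zero.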
Towards this, we consider a set of training points {(x⁽ⁱ⁾, o⁽ⁱ⁾)}, with o⁽ⁱ⁾ = q(x⁽ⁱ⁾), and impose the following assumption to guarantee that the network training is well-posed.
Assumption 6
The training points x⁽ⁱ⁾ are drawn from the compact set X.
Hereafter, let O denote the (compact) perception set associated with the training set. Assumption 6 allows us to leverage existing results on the bounds on the approximation error of feedforward neural networks and residual neural networks over the compact set O; see Hornik et al. (1989) and Marchi et al. (2022); Tabuada and Gharesifard (2020).
With an estimate p̂ of the perception map obtained via a neural network, the proposed perception-based controller is shown in Figure 1 and is tabulated in Algorithm 1.
# Training
Given: training set {(x⁽ⁱ⁾, o⁽ⁱ⁾)}
Obtain: p̂ = train({(x⁽ⁱ⁾, o⁽ⁱ⁾)})

# Gradient-based Sampled-Data Feedback Control
Given: set U, functions φ_k, ψ_k, gain η
Initial conditions: x(0), u_0
For k ∈ ℕ:

x̂_k = p̂(o_k)    (5a)
u_{k+1} = proj_U { u_k − η Φ_k(x̂_k, u_k) }    (5b)
u(t) = u_{k+1}, for t ∈ [kT, (k+1)T)    (5c)
x(t) given by (1) with input u(t), for t ∈ [kT, (k+1)T)    (5d)

In the training phase, the operation train(·) refers to a generic training procedure for the neural network via empirical risk minimization, which returns the approximate perception map p̂. In the proposed controller, p̂ is utilized to obtain estimates x̂_k of the state of the dynamical system, which are subsequently utilized to compute the gradient map Φ_k. Note that, as in sampled-data systems, the input applied to the plant is the piece-wise constant signal u(t) = u_{k+1}, t ∈ [kT, (k+1)T), computed from the control iterates.
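The interplay between the gradient iterate and the zero-order hold can be sketched as follows for a scalar plant; the plant, the costs, and all numerical values are illustrative, and the perception estimate is taken to be exact for simplicity:

```python
# Minimal closed-loop sketch of the sampled-data scheme for the
# illustrative scalar plant x_dot = -x + u + w, whose steady-state
# map is h(u, w) = u + w.
def flow(x, u, w, T, dt=1e-3):
    """Integrate the plant over one sampling period T with the
    zero-order-held input u (forward Euler for simplicity)."""
    for _ in range(round(T / dt)):
        x = x + dt * (-x + u + w)
    return x

eta, T, w = 0.2, 0.5, 0.3
u, x = 0.0, 0.0
for k in range(200):
    x_hat = x                     # stand-in for the perception estimate
    # Gradient of (u - 1)^2 + x^2 through h, with dh/du = 1, evaluated
    # at the current state estimate as in (4).
    u = u - eta * (2 * (u - 1) + 2 * x_hat)
    x = flow(x, u, w, T)
```

At the fixed point the controller satisfies u = 1 − x while the plant settles at x = u + w, giving u = (1 − w)/2 = 0.35 and x = 0.65 for w = 0.3, which the loop reaches without ever measuring w.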
4 Stability and Tracking Analysis
To analyze the performance of the closed-loop system (5), recall that {u*_k} denotes the sequence of optimizers of the time-varying problem (2) at the times {kT, k ∈ ℕ}. As proposed in Belgioioso et al. (2021), we consider a discrete-time counterpart of (5); sampling (5) at times {kT, k ∈ ℕ} yields:

x̂_k = p̂(o_k)    (6a)
u_{k+1} = proj_U { u_k − η Φ_k(x̂_k, u_k) }    (6b)
x_{k+1} = X(x_k, u_{k+1}; T)    (6c)

where X(x_k, u_{k+1}; T) denotes the solution of the initial value problem (1) with input u_{k+1} and initial condition x_k, evaluated after one sampling period T. The tracking results for (6) will then translate into transient bounds for the sampled-data system (5) by using (Nešić et al., 1999, Theorem 5).
Let and define the matrix as,
(7) |
where , is given in Assumption 3, and ; we further define:
(8)
(9) |
With these definitions, our main result provides transient bounds for the system (6).
Theorem 1 (Transient bound for (6))
Theorem 1 guarantees exponential convergence of the sampled trajectory of the tracking error to a neighborhood of zero. The size of the neighborhood depends on: the error associated with the state estimation; the time-variability of the optimizer; and the time-variability of the unknown disturbance. The proof is provided in the extended version of the paper Cothren et al. (2022b).
The transient bound (11) shows that the sampled system (6) is input-to-state stable (ISS), in the sense of Jiang and Wang (2001), with respect to the time-variability of the disturbance, the drift of the optimal solution, and the error introduced by the estimated perception map. To translate the results of Theorem 1 into transient bounds for the continuous-time system (5), we leverage Theorem 5 of Nešić et al. (1999). In particular, let ū*(t) denote the piece-wise constant signal such that ū*(t) = u*_k for t ∈ [kT, (k+1)T), and define x̄*(t) similarly. Then, we obtain the following.
Theorem 2 (Transient bound for (5))
Mirroring (11), the bound (12) shows that the system (5) is ISS with respect to the time-variability of the disturbance, the drift of the optimal solution, and the error introduced by the neural network.
To provide a connection with the existing literature, we point out that (12) generalizes the following sub-cases: (i) when the state can be observed without errors, (12) reduces to a bound similar to that of Bianchin et al. (2021) (where, however, the controller is a continuous-time gradient flow); (ii) when the state can be observed and the functions φ and ψ are time-invariant, (12) boils down to the ISS results of Belgioioso et al. (2021) (for sampled-data controllers) and of Colombino et al. (2020) and Zheng et al. (2020) (for continuous-time gradient flows).
Remark 3
When a residual network is utilized to estimate the map p, the perception error over the compact set in (12) can be bounded as shown in Marchi et al. (2022); Tabuada and Gharesifard (2020), under given conditions on the selection of the training points. The bounds in Marchi et al. (2022); Tabuada and Gharesifard (2020) are particularly interesting because they tie the approximation error to the training error and the geometry of the residual network. We do not include a customization of the bound (12) using the results of Tabuada and Gharesifard (2020) due to space limitations.
Theorem 2 follows from the results of Theorem 1 and Theorem 5 of Nešić et al. (1999) by noticing that (5) is uniformly bounded over an interval in the sense of (Nešić et al., 1999, Definition 2). This is because the input is piece-wise constant and takes values in the compact set U, the set W is compact (and thus the disturbance is bounded), and the perception error is bounded; boundedness of the state with respect to the sampled sequence follows from (Belgioioso et al., 2021, Lemma 1). The proof is omitted due to space limitations.
5 Application to Autonomous Driving
We utilize our controller in Algorithm 1 to control a vehicle to track a set of reference points while avoiding obstacles; the position of the vehicle is accessible only through camera images. The vehicle's motion is modeled by the unicycle dynamics with state (p, θ), where p = (p₁, p₂) is the position in the 2D plane and θ is the orientation with respect to the horizontal axis: ṗ₁ = v cos θ, ṗ₂ = v sin θ, θ̇ = ω, where v and ω are controllable inputs. Importantly, we assume that we do not have direct knowledge of the state; instead, camera images, represented by the generative map q, return the position p. We employ a lower-level stabilizer for the unicycle dynamics so as to satisfy Assumption 3. Following (Cothren et al., 2022a, Lemma 2), let u = (u₁, u₂) denote the control inputs for each direction p₁ and p₂, and consider the change of coordinates to the error variables expressed in the vehicle frame. By setting
(13) |
for appropriately chosen gains, the unicycle dynamics admit a globally exponentially stable equilibrium point. With the inputs set as in (13), the resulting plant satisfies Assumption 3.
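A sketch of a lower-level stabilizer in this spirit: controlling a point displaced slightly ahead of the wheel axis renders the unicycle feedback-linearizable (a standard construction; the gains, the offset, and the dynamics discretization below are illustrative and differ in detail from (13)):

```python
import numpy as np

# Hypothetical inner-loop stabilizer: steer the point p_eps located
# eps ahead of the wheel axis toward a commanded position u_ref.
eps, k_gain, dt = 0.1, 2.0, 1e-3

def step(state, u_ref):
    x, y, th = state
    p_eps = np.array([x + eps * np.cos(th), y + eps * np.sin(th)])
    # p_eps_dot = M(th) @ [v, om], with M invertible (det M = eps).
    M = np.array([[np.cos(th), -eps * np.sin(th)],
                  [np.sin(th),  eps * np.cos(th)]])
    # Feedback linearization: p_eps_dot = k_gain * (u_ref - p_eps).
    v, om = np.linalg.solve(M, k_gain * (u_ref - p_eps))
    return state + dt * np.array([v * np.cos(th), v * np.sin(th), om])

state = np.array([0.0, 0.0, 0.3])
target = np.array([1.0, 0.5])
for _ in range(5000):
    state = step(state, target)
```

The displaced point converges exponentially to the commanded position with rate `k_gain`, so the vehicle position settles within `eps` of the target, giving the exponentially stable equilibrium required by Assumption 3.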
We consider a sequence of locations that the vehicle would like to follow, denoted as {r_k}. To avoid obstacles around the vehicle, we consider building, at each time kT, k ∈ ℕ, the free workspace of the vehicle, defined as the intersection of half-planes that separate the vehicle from the obstacles present at time kT; the half-plane associated with each obstacle is computed from the center position of that obstacle and the position of the vehicle at time kT. The free workspace describes a local neighborhood of the vehicle that is guaranteed to be free of obstacles.
To track the desired trajectory while avoiding obstacles, we utilize the following waypoint-tracking formulation with a barrier function
where ν > 0 is a tuning parameter; its value, together with the remaining parameters, is set in the simulations.
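A possible instantiation of such a tracking-plus-barrier cost is sketched below, with a log-barrier keeping the position away from a ball around each obstacle (all values are illustrative, and the paper's free-workspace construction uses half-planes rather than balls):

```python
import numpy as np

def grad_cost(x, r, obstacles, nu=0.05, margin=0.2):
    """Gradient of ||x - r||^2 - nu * sum_i log(||x - c_i||^2 - margin^2):
    a waypoint-tracking term plus log-barrier terms that repel x from a
    ball of radius `margin` around each obstacle center c_i."""
    g = 2.0 * (x - r)                        # tracking term
    for c in obstacles:
        d2 = np.dot(x - c, x - c) - margin**2
        g += -nu * 2.0 * (x - c) / d2        # barrier term gradient
    return g

# Gradient descent on the composite cost steers the position toward
# the waypoint r while skirting the obstacle.
x = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])
obstacles = [np.array([0.5, 0.3])]
for _ in range(2000):
    x = x - 0.01 * grad_cost(x, r, obstacles)
```

The iterate settles near the waypoint, slightly offset away from the obstacle by the barrier force; a larger ν trades tracking accuracy for a wider safety margin.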


We used a residual neural network to estimate the position of the vehicle from aerial images; specifically, the network returns estimated position coordinates in the plane. For training, we generated images depicting a red robot in the roundabout setting in Figure 2. The images were built using the MATLAB Image Processing Toolbox and basic plotting functions therein, by setting the background as an aerial view of a roundabout and plotting a red square for the vehicle. We used the resnet50 structure given in the MATLAB Deep Learning Toolbox and tailored the input (RGB images) and output (total number of labels) sizes to our specific case. For labels, we selected the pixels that correspond to parts of the image containing the road, so that the network only trains on data corresponding to allowable surfaces for the robot. Finally, we selected five checkpoints along the road for the robot to follow (denoted by the red stars in Figure 2) during the execution of the algorithm.
Simulation results for Algorithm 1 are given in Fig. 2. The dashed magenta line corresponds to the trajectory of the unicycle for the neural-network-assisted controller, and the yellow line to the controller with perfect state information. Fig. 3 shows the error between the optimal and actual position of the vehicle. Importantly, the trajectory tracks the time-varying reference points within a small error and avoids the obstacles.
6 Conclusion
We proposed an algorithm to regulate dynamical systems towards the solution of a convex optimization problem when we do not have full knowledge of the system states. Specifically, we developed a sampled-data feedback controller that is augmented with a neural network to estimate the state of the system from high-dimensional sensory data. Our results guaranteed exponential convergence of the interconnected system up to an error term dependent on the temporal variability of the problem and the error due to estimating the state from the neural network.
References
- Agarwal et al. (2022) Agarwal, A., Simpson-Porco, J.W., and Pavel, L. (2022). Game-theoretic feedback-based optimization. IFAC-PapersOnLine, 55(13), 174–179.
- Al Makdah et al. (2020) Al Makdah, A.A., Katewa, V., and Pasqualetti, F. (2020). Accuracy prevents robustness in perception-based control. In American Control Conference, 3940–3946.
- Belgioioso et al. (2021) Belgioioso, G., Liao-McPherson, D., de Badyn, M.H., Bolognani, S., Lygeros, J., and Dörfler, F. (2021). Sampled-data online feedback equilibrium seeking: Stability and tracking. In IEEE Conference on Decision and Control.
- Bianchin et al. (2021) Bianchin, G., Cortés, J., Poveda, J.I., and Dall’Anese, E. (2021). Time-varying optimization of LTI systems via projected primal-dual gradient flows. IEEE Trans. on Control of Networked Systems.
- Bianchin et al. (2022) Bianchin, G., Poveda, J.I., and Dall'Anese, E. (2022). Online optimization of switched LTI systems using continuous-time and hybrid accelerated gradient flows. Automatica, 146, 110579.
- Brunner et al. (2012) Brunner, F.D., Dürr, H.B., and Ebenbauer, C. (2012). Feedback design for multi-agent systems: A saddle point approach. In IEEE Conference on Decision and Control, 3783–3789.
- Chou et al. (2022) Chou, G., Ozay, N., and Berenson, D. (2022). Safe output feedback motion planning from images via learned perception modules and contraction theory. arXiv preprint arXiv:2206.06553.
- Colombino et al. (2020) Colombino, M., Dall'Anese, E., and Bernstein, A. (2020). Online optimization as a feedback controller: Stability and tracking. IEEE Trans. on Control of Networked Systems, 7(1), 422–432.
- Cothren et al. (2022a) Cothren, L., Bianchin, G., and Dall’Anese, E. (2022a). Online optimization of dynamical systems with deep learning perception. IEEE Open Journal of Control Systems, 1, 306–321.
- Cothren et al. (2022b) Cothren, L., Bianchin, G., Dean, S., and Dall’Anese, E. (2022b). Perception-based sampled-data optimization of dynamical systems (longer version). arXiv:2211.10020.
- Dawson et al. (2022) Dawson, C., Lowenkamp, B., Goff, D., and Fan, C. (2022). Learning safe, generalizable perception-based hybrid control with certificates. IEEE Robotics and Automation Letters, 7(2), 1904–1911.
- Dean and Recht (2021) Dean, S. and Recht, B. (2021). Certainty equivalent perception-based control. In Learning for Dynamics and Control, 399–411. PMLR.
- Hauswirth et al. (2021) Hauswirth, A., Bolognani, S., Hug, G., and Dörfler, F. (2021). Timescale separation in autonomous optimization. IEEE Trans. on Automatic Control, 66(2), 611–624.
- Hirata et al. (2014) Hirata, K., Hespanha, J.P., and Uchida, K. (2014). Real-time pricing leading to optimal operation under distributed decision makings. In American Control Conf.
- Hornik et al. (1989) Hornik, K., Stinchcombe, M., and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural networks, 2(5), 359–366.
- Jiang and Wang (2001) Jiang, Z.P. and Wang, Y. (2001). Input-to-state stability for discrete-time nonlinear systems. Automatica, 37(6), 857–869.
- Jokic et al. (2009) Jokic, A., Lazar, M., and Van Den Bosch, P.P.J. (2009). On constrained steady-state regulation: Dynamic KKT controllers. IEEE Trans. on Automatic Control, 54(9), 2250–2254.
- Khalil (2002) Khalil, H.K. (2002). Nonlinear Systems; 3rd ed. Prentice-Hall, Upper Saddle River, NJ.
- Lawrence et al. (2020) Lawrence, L.S., Simpson-Porco, J.W., and Mallada, E. (2020). Linear-convex optimal steady-state control. IEEE Transactions on Automatic Control, 66(11), 5377–5384.
- Marchi et al. (2022) Marchi, M., Bunton, J., Gharesifard, B., and Tabuada, P. (2022). Safety and stability guarantees for control loops with deep learning perception. IEEE Control Systems Letters, 6, 1286–1291.
- Murillo-González and Poveda (2022) Murillo-González, A. and Poveda, J.I. (2022). Data-assisted vision-based hybrid control for robust stabilization with obstacle avoidance via learning of perception maps. In 2022 American Control Conference (ACC), 886–892. IEEE.
- Nešić et al. (1999) Nešić, D., Teel, A.R., and Sontag, E.D. (1999). Formulas relating KL stability estimates of discrete-time and sampled-data nonlinear systems. Systems & Control Letters, 38(1), 49–60.
- Rudin (1976) Rudin, W. (1976). Principles of Mathematical Analysis, 3rd edition. McGraw-Hill, New York.
- Simpson-Porco (2021) Simpson-Porco, J.W. (2021). Low-gain stability of projected integral control for input-constrained discrete-time nonlinear systems. IEEE Control Systems Letters, 6, 788–793.
- Tabuada and Gharesifard (2020) Tabuada, P. and Gharesifard, B. (2020). Universal approximation power of deep residual neural networks via nonlinear control theory. arXiv preprint arXiv:2007.06007.
- Xu et al. (2021) Xu, J., Lee, B., Matni, N., and Jayaraman, D. (2021). How are learned perception-based controllers impacted by the limits of robust control? In Learning for Dynamics and Control, 954–966.
- Zheng et al. (2020) Zheng, T., Simpson-Porco, J., and Mallada, E. (2020). Implicit trajectory planning for feedback linearizable systems: A time-varying optimization approach. In American Control Conference, 4677–4682.
Appendix A Proofs
Proof of Theorem 1.
The proof of Theorem 1 is divided into two main parts. In the first part, we construct intermediate bounds to be used in the second part of the proof.
(Part 1.a: Lyapunov Function) The first step is to leverage the results in Lemma 1 of Belgioioso et al. (2021). By Assumption 3, we have that, for a fixed input,
(14) |
Given the initial condition , (14) implies that:
(15)
(16) |
where . Define and . Then, from Lemma 1 of Belgioioso et al. (2021), it follows that
where , is the Lipschitz constant of the steady state map w.r.t. , and .
(Part 1.b: Bound for ) To simplify notation, rewrite the controller as , where . Moreover, let denote the error in the gradient introduced by the perception map. Using this notation, calculate:
Consider the first term, . We further bound this by applying Assumption 3 and by using the fact that :
To bound the second line of the final inequality, recall that
holds for any due to Assumption 3. Thus,
In total,
(17) |
(Part 1.c) We obtain the following bound by using steps similar to those in Part 1.b. Calculate:
(Part 1.d: Overall bounds) Putting together the bounds above, we have:
(18a)
(18b) |
Based on these intermediate results, conditions for ISS are derived next.
(Part 2: Sufficient conditions for stability) Define:
Then, rewrite (18) as,
(19) |
By applying (19) to itself times, we obtain:
(20)
(21) |
Next, recall from (Jiang and Wang, 2001, Ex. 3.4) that, for a Schur matrix, there exist constants c > 0 and ρ ∈ (0, 1) bounding its powers geometrically. By direct computation of the eigenvalues, we may enforce the matrix in (7) to be Schur when η satisfies:
(22) |
This condition critically relies on the fact that η is chosen so that the matrix in (7) is Schur. Then, taking the norm on both sides (since the quantities are non-negative) and using the triangle inequality, one gets:
(23) |
Define the following:
(24a)
(24b) |
Then, (23) becomes,
(25) |
Equivalently,
(26) |
Then, we may apply the geometric series on the second and third terms to obtain:
(27) |
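The geometric-series step can be sanity-checked numerically on a scalar counterpart of the recursion (the numbers below are illustrative):

```python
# Scalar sanity check of the ISS-type recursion used in Part 2:
# if e_{k+1} <= rho * e_k + d with 0 < rho < 1, then iterating gives
# e_k <= rho^k * e_0 + d * sum_{j<k} rho^j <= rho^k * e_0 + d / (1 - rho).
rho, d, e0 = 0.8, 0.5, 10.0
e, bound_inf = e0, d / (1 - rho)
for k in range(1, 101):
    e = rho * e + d                     # worst case: equality at each step
    assert e <= rho**k * e0 + bound_inf + 1e-12
```

The transient term ρᵏe₀ vanishes geometrically while the input term accumulates to at most d/(1 − ρ), mirroring the exponential-plus-offset structure of the bound in (27).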
Then, we finally apply the Lyapunov quadratic bound to obtain the final form. By Assumption 3, we may write:
(28) |
where
(29) |
Substituting (28) into (27), we obtain:
(30) |
The bound then follows by noticing that:
by using the fact that , and by bounding the first term with .