Asymptotic and Finite Sample Analysis of
Nonexpansive Stochastic Approximations with Markovian Noise
Abstract
Stochastic approximation is an important class of algorithms, and a large body of previous analysis focuses on stochastic approximations driven by contractive operators, which does not cover some important reinforcement learning settings. This work instead investigates stochastic approximations with merely nonexpansive operators. In particular, we study nonexpansive stochastic approximations with Markovian noise, providing both asymptotic and finite sample analysis. Key to our analysis are a few novel bounds on the noise terms arising from the Poisson equation. As an application, we prove, for the first time, that classical tabular average reward temporal difference learning converges to a sample-path dependent fixed point.
1 Introduction
Stochastic approximation (SA) algorithms (Robbins & Monro, 1951; Kushner & Yin, 2003; Borkar, 2009) form the foundation of many iterative optimization and learning methods by updating a vector incrementally and stochastically. Prominent examples include stochastic gradient descent (SGD) (Kiefer & Wolfowitz, 1952) and temporal difference (TD) learning (Sutton, 1988). These algorithms generate a sequence of iterates starting from an initial point through the recursive update:
(SA) |
where is a sequence of deterministic learning rates, is a sequence of random noise in a space , and a function maps the current iterate and noise to the actual incremental update. We use to denote the expected update, i.e., , where the expectation will be formally defined shortly.
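To make the recursion concrete, the following minimal Python sketch runs a generic SA update on a toy problem. The increment function, noise distribution, and learning-rate schedule below are illustrative assumptions, not part of our formal setup.

```python
import numpy as np

# A minimal sketch of a generic stochastic approximation recursion:
# the iterate moves by a decaying step size times a noisy increment.
rng = np.random.default_rng(0)

def H(x, y):
    # Noisy increment whose expectation (over y) drives x toward a target.
    target = np.array([1.0, -2.0])
    return (target - x) + y          # y plays the role of zero-mean noise

x = np.zeros(2)
for t in range(1, 10_001):
    alpha = 1.0 / t                  # deterministic, decaying learning rate
    y = rng.normal(scale=0.5, size=2)
    x = x + alpha * H(x, y)

print(x)                             # close to the target [1, -2]
```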
Despite the foundational role of SA in analyzing reinforcement learning (RL) (Sutton & Barto, 2018) algorithms, most of the existing literature assumes that the expected mapping is a contraction, ensuring the stability and convergence of the iterates under mild conditions. Table 1 highlights the relative scarcity of results concerning nonexpansive mappings. However, in many problems in RL, particularly those involving average reward formulations (Tsitsiklis & Roy, 1999; Puterman, 2014; Wan et al., 2021b, a; He et al., 2022), is only guaranteed to be non-expansive, not contractive.
Nonexpansive | Markovian | Asymptotic | Non-Asymptotic | |
Krasnosel’skii (1955) | ✓ | ✓ | ||
Ishikawa (1976) | ✓ | ✓ | ||
Reich (1979) | ✓ | ✓ | ||
Benveniste et al. (1990) | ✓ | |||
Liu (1995) | ✓ | |||
Szepesvári (1997) | ✓ | |||
Abounadi et al. (2002) | ✓ | ✓ | ||
Tadić (2002) | ✓ | ✓ | ||
Kushner & Yin (2003) | ✓ | |||
Koval & Schwabe (2003) | ✓ | ✓ | ||
Tadic (2004) | ✓ | ✓ | ||
Kim & Xu (2007) | ✓ | ✓ | ||
Borkar (2009) | ✓ | |||
Cominetti et al. (2014) | ✓ | ✓ | ✓ | |
Bravo et al. (2019) | ✓ | ✓ | ✓ | |
Chen et al. (2021) | ✓ | ✓ | ||
Borkar et al. (2021) | ✓ | ✓ | ✓ | |
Karandikar & Vidyasagar (2024) | ✓ | ✓ | ✓ | |
Bravo & Cominetti (2024) | ✓ | ✓ | ✓ | |
Qian et al. (2024) | ✓ | ✓ | ✓ | |
Liu et al. (2025) | ✓ | ✓ | ||
Ours | ✓ | ✓ | ✓ | ✓ |
One tool for analyzing (SA) with a nonexpansive expected update, which has recently gained renewed attention, is the family of Krasnoselskii-Mann (KM) iterations. In their simplest deterministic form, these iterations are given by:
(KM) |
where is some nonexpansive mapping. Under some rather restrictive conditions, Krasnosel’skii (1955) first proves the convergence of (KM) to a fixed point of , and this result is further generalized by Edelstein (1966); Ishikawa (1976); Reich (1979); Liu (1995). More recently, Cominetti et al. (2014) use a novel fox-and-hare model to connect KM iterations with sums of Bernoulli random variables, providing a sharper convergence rate for .
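As a concrete illustration (not part of the formal development), the following minimal sketch runs the deterministic KM iteration for a planar rotation, a nonexpansive map whose only fixed point is the origin; the map and the step sizes are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the deterministic KM iteration:
# x_{n+1} = (1 - alpha_n) x_n + alpha_n T(x_n) for a nonexpansive T.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    return R @ x          # an isometry (hence nonexpansive); unique fixed point is 0

x = np.array([1.0, 1.0])
for n in range(1, 20_001):
    alpha = 1.0 / n ** 0.6
    x = (1 - alpha) * x + alpha * T(x)

print(np.linalg.norm(x))  # the iterate is driven toward the fixed point 0
```

Note that the plain fixed-point iteration for this map would simply rotate forever; it is the KM averaging that produces convergence.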
In practice, algorithms often deviate from (KM) due to noise, leading to the study of inexact KM iterations (IKM) with deterministic noise (Kim & Xu, 2007; Bravo et al., 2019):
(IKM) |
where is a sequence of deterministic noise. Bravo et al. (2019) extend Cominetti et al. (2014) and establish the convergence of (IKM), under some mild conditions on .
However, deterministic noise is still too restrictive for many problems. To this end, a stochastic version of (IKM) is studied, which considers the iterates
(SKM) |
where is a Martingale difference sequence. Under mild conditions, Bravo & Cominetti (2024) proves the almost sure convergence of (SKM) to a fixed point of . If we write (SA) as
(1) |
we observe that the convergence result from Bravo & Cominetti (2024) implies the almost sure convergence of (SA) when is i.i.d., since this makes a Martingale difference sequence.
Bravo & Cominetti (2024) is the first to introduce this SKM-based method in RL, using it to prove the almost sure convergence and a non-asymptotic convergence rate of a synchronous version of RVI Q-learning (Abounadi et al., 2001). However, the assumption that is i.i.d. only holds for some synchronous RL algorithms. In most practical settings where the RL algorithm is asynchronous, the noise is Markovian, meaning is not a Martingale difference sequence and the results of Bravo & Cominetti (2024) do not apply.
Contribution
Our primary contribution is to close the aforementioned gap by extending the results of Bravo & Cominetti (2024) to the Markovian noise setting. Namely, this work allows to be a Markov chain and to be a Lipschitz continuous noisy estimate of a non-expansive operator , providing both the first proof of almost sure convergence and the first non-asymptotic convergence rate in this setting (Table 1).
- Theorem 2.6 provides the first proof of almost sure convergence of (SKM with Markovian and Additive Noise) to a possibly sample-path dependent fixed point.
- Theorem 3.1 provides the convergence rate of the expected residuals.
- Theorem 4.2 utilizes our SKM results to provide the first proof of almost sure convergence of tabular average reward temporal difference learning (TD) to a (possibly sample path dependent) fixed point.
By extending Bravo & Cominetti (2024) to Markovian noise, we are the first to use the SKM method to analyze asynchronous RL algorithms.
The key idea of our approach is to use Poisson’s equation to decompose the error into boundable error terms (Benveniste et al., 1990). While the use of Poisson’s equation for handling Markovian noise is well-established, our method departs from prior techniques for bounding these error terms in almost sure convergence analyses. Specifically, Benveniste et al. (1990) and Konda & Tsitsiklis (1999) use stopping times, while Borkar et al. (2021) employ a Lyapunov function and use the scaled iterates technique. In contrast, we leverage a 1-Lipschitz continuity assumption on to directly control the growth of error terms.
Notations
In this paper, all vectors are column vectors. We use to denote a generic operator norm and to denote the all-one vector. We use and to denote the norm and the infinity norm, respectively. We use to hide deterministic constants to simplify the presentation, while the letter is reserved for sample-path dependent constants.
2 Almost Sure Convergence of Stochastic Krasnoselskii-Mann Iterations with Markovian and Additive Noise
To extend the analysis of (SKM) in Bravo et al. (2019); Bravo & Cominetti (2024) to SKM with Markovian and additive noise, we consider the following iterates
(SKM with Markovian and Additive Noise) |
Here, are stochastic vectors evolving in , is a Markov chain evolving in a finite state space , defines the update, is a sequence of stochastic noise evolving in , and is a sequence of deterministic learning rates. Although the primary contribution of this work is to allow to be Markovian, we also include the deterministic noise term in (SKM with Markovian and Additive Noise), as it will later be instrumental in proving the almost sure convergence of average reward TD in Section 4.
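As an illustration of the iteration just described, the following minimal sketch runs it with a two-state Markov chain driving the noise and no additive noise term. The chain, the update function, and the step sizes are illustrative assumptions that anticipate the assumptions below.

```python
import numpy as np

# A minimal sketch of an SKM-style iteration driven by a finite, irreducible,
# aperiodic Markov chain (with the additive noise term set to zero).
rng = np.random.default_rng(1)

P = np.array([[0.9, 0.1],      # transition matrix of the 2-state chain
              [0.2, 0.8]])
b = {0: np.array([2.0, 0.0]),  # state-dependent offsets
     1: np.array([0.0, 2.0])}
theta = {0: +np.pi / 4, 1: -np.pi / 4}

def h(x, y):
    # 1-Lipschitz in x for every state y (a rotation is an isometry).
    c, s = np.cos(theta[y]), np.sin(theta[y])
    R = np.array([[c, -s], [s, c]])
    return R @ x + b[y]

x, y = np.zeros(2), 0
for n in range(1, 100_001):
    y = rng.choice(2, p=P[y])            # Markovian noise: next state of the chain
    alpha = 1.0 / n ** 0.6
    x = (1 - alpha) * x + alpha * h(x, y)

print(x)   # settles near a fixed point of the stationary average of h(., y)
```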
We make the following assumptions.
Assumption 2.1 (Ergodicity).
The Markov chain is irreducible and aperiodic.
The Markov chain thus admits a unique invariant distribution, denoted . We use to denote the transition matrix of .
Assumption 2.2 (1-Lipschitz).
The function is 1-Lipschitz continuous in its first argument w.r.t. some operator norm and uniformly in its second argument, i.e., for any , it holds that
(2) |
This assumption has two important implications. First, it implies that can grow at most linearly. Indeed, let , we get . Define , we get
(3) |
Second, define the function as the expectation of over the stationary distribution :
(4) |
We then have that is non-expansive. Namely,
(5) | ||||
(6) |
This is exactly the non-expansive operator in the SKM literature. We, of course, need to assume that the problem is solvable.
Assumption 2.3 (Fixed Points).
The non-expansive operator admits at least one fixed point.
We use to denote the set of fixed points of .
Assumption 2.4 (Learning Rate).
The learning rate has the form
(7) |
where .
The primary motivation for requiring is that our learning rates need to decrease quickly enough for certain key terms in the proof to be finite. The specific need for can be seen in the proof of (79) in Lemma B.1.
Next, using this definition of the learning rates, we will define two useful shorthands,
(8) | ||||
(9) |
We now impose assumptions on the additive noise.
Assumption 2.5 (Additive Noise).
(10) | ||||
(11) |
The first part of Assumption 2.5 can be interpreted as a requirement that the total amount of additive noise remains finite, akin to the assumption on in (IKM) in Bravo et al. (2019). Additionally, we impose a condition on the second moment of this noise, requiring it to converge at the rate . While these assumptions on may seem restrictive, even if were absent, our work would still extend the results of Bravo & Cominetti (2024) to cases involving Markovian noise, as the Markovian noise component is already incorporated in , which is already a significant result. For most RL applications involving algorithms that maintain only one set of weights, the additive noise term is simply 0. We are now ready to present the main convergence result.
Theorem 2.6.
Let Assumptions 2.1 - 2.5 hold.
Then the iterates generated by
(SKM with Markovian and Additive Noise)
satisfy
(12) |
where is a possibly sample-path dependent fixed point. More precisely, let denote a sample path and write to emphasize the dependence of on . Then there exists a set of sample paths with such that for any , the limit exists, denoted as , and satisfies .
Proof.
We start with a decomposition of the error using Poisson’s equation akin to Métivier & Priouret (1987); Benveniste et al. (1990). Namely, thanks to the finiteness of , it is well known (see, e.g., Theorem 17.4.2 of Meyn & Tweedie (2012) or Theorem 8.2.6 of Puterman (2014)) that there exists a function such that
(13) |
Here, we use to denote the function . The error can then be decomposed as
(14) |
where
(15) | ||||
(16) | ||||
(17) |
Here is a Martingale difference sequence. We then use
(18) |
to denote all the non-Martingale noise, yielding
(19) |
We now define an auxiliary sequence to capture how the noise evolves
(20) | ||||
(21) |
If we are able to prove that the total noise is well controlled in the following sense
(22) | ||||
(23) |
then a result from Bravo & Cominetti (2024) concerning the convergence of (IKM) can be applied to each sample path to complete the almost sure convergence proof. The rest of the proof is dedicated to verifying these two conditions.
Telescoping (21) yields
(24) | ||||
(25) |
Then, we can upper-bound (22) as
(26) | ||||
(27) |
Lemmas B.8, B.9, and B.10 respectively prove that and in (27) are bounded almost surely. We bound the remaining term needed to verify (22) here as an example of the novelty in bounding these terms. Starting with the definition of from (25), we have,
(28) | ||||
(29) | ||||
(30) | ||||
(31) | ||||
(32) | ||||
(33) |
where the last inequality holds because . Additionally, since , taking the norm gives
(34) | |||
(35) | |||
(36) | |||
(37) |
where the second inequality holds by Lemma B.5, and the last inequality holds because , and that and are monotonically increasing (Lemma A.2).
Then, from the definition of in (22), we have
(38) |
where the inequality holds because and is decreasing. Then, by Lemma B.1, we have , which when combined with the monotone convergence theorem, proves that , verifying (22).
We now verify (23). This time, rewrite as
(39) |
Lemma B.11, Assumption 2.5, and Lemmas B.12, B.13 prove that and for respectively.
Together with (25), this means that . In other words, we have established the stability of (21). Then, it can be shown (Lemma B.14), using an extension of Theorem 2.1 of Borkar (2009) (Lemma D.7), that converges to the globally asymptotically stable equilibrium of the ODE , which is 0. This verifies (23). Lemma B.15 then invokes a result from Bravo & Cominetti (2024) and completes the proof. ∎
Remark 2.7.
We want to highlight that the technical novelty of our work comes from two sources. The first is that while the use of Poisson’s equation for handling Markovian noise is well-established, including the noise representation in (14), previous works with such error decomposition (e.g., Benveniste et al. (1990); Konda & Tsitsiklis (1999); Borkar et al. (2021)) usually only need to bound terms like . In contrast, our setup requires bounding additional terms such as and , which appear novel and more challenging. Second, our work extends Theorem 2.1 of Borkar (2009) by relaxing an assumption on the convergence of the deterministic noise term. Instead of requiring the noise to converge to 0, we only require a milder condition on the asymptotic rate of change of this noise term. We believe this extension, detailed in Appendix D, has independent utility beyond this work.
3 Convergence Rate
The previous analysis not only guarantees the almost sure convergence of the iterates, but can also be used to obtain estimates of the expected fixed-point residuals.
Theorem 3.1.
Consider the iteration (SKM with Markovian and Additive Noise) and let Assumptions 2.1 - 2.5 hold. Then there exists a constant such that
(40) |
Proof.
Considering the sequence we have,
(41) | ||||
(42) |
where the inequality holds due to the non-expansivity of as proven in (6). Then, our proof of Theorem 2.6 guarantees the conditions under which the ’s are bounded. Specifically, we proved in Lemma B.15 that if (22) and (23) almost surely, then with , Lemma A.1 can be invoked to bound . This yields,
(43) | |||
(44) |
for . However, is a sample-path dependent constant whose order is unknown, and the random sequence may occasionally become very large. Therefore, we compute the non-asymptotic error bound of the expected residuals , which gives,
(45) | |||
(46) |
Recalling that , we can see that if there exists a deterministic constant such that , we obtain that . Therefore, in order to prove the Theorem, it is sufficient to find such a constant such that , and prove that , and are also .
We proceed by first upper-bounding . Taking the expectation of (25), we have,
(47) | ||||
(48) | ||||
(49) | ||||
(50) | ||||
(51) |
It can be shown (Lemma C.4) that . Then, to prove , since
(52) |
which converges almost surely by Lemma B.1, there exists a such that almost surely.
Additionally, our is of the same order as the analogous in Theorem 2.10 of Bravo & Cominetti (2024). Therefore, we can invoke Lemma C.5, a combination of Theorems 2.11 and 3.1 from Bravo & Cominetti (2024), which proves that . Finally, by (51), we directly have that which is dominated by and . ∎
4 Application in Average Reward Temporal Difference Learning
In this section, we provide the first proof of almost sure convergence to a fixed point for average reward TD in its simplest tabular form. Remarkably, this convergence result has remained unproven for over 25 years despite the algorithm’s fundamental importance and simplicity.
4.1 Reinforcement Learning Background
In reinforcement learning (RL), we consider a Markov Decision Process (MDP; Bellman (1957); Puterman (2014)) with a finite state space , a finite action space , a reward function , a transition function , and an initial distribution . At time step , an initial state is sampled from . At time , given the state , the agent samples an action , where is the policy being followed by the agent. A reward is then emitted and the agent proceeds to a successor state . In the rest of the paper, we will assume the Markov chain induced by the policy is irreducible and thus admits a unique stationary distribution . The average reward (a.k.a. gain, Puterman (2014)) is defined as
(53) |
Correspondingly, the differential value function (a.k.a. bias, Puterman (2014)) is defined as
(54) |
The corresponding Bellman equation (a.k.a. Poisson’s equation) is then
(55) |
where is the free variable, is the reward vector induced by the policy , i.e., , and is the transition matrix induced by the policy , i.e., . It is known (Puterman, 2014) that all solutions to (55) form a set
(56) |
The policy evaluation problem in average reward MDPs is to estimate , perhaps up to a constant offset .
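To make these quantities concrete, the following minimal sketch computes the gain and one solution of the Bellman (Poisson) equation for an illustrative transition matrix and reward vector; pinning the solution down by forcing it to have mean zero under the stationary distribution is an assumption made only for this example.

```python
import numpy as np

# A minimal sketch: gain and bias of a small Markov reward process.
# The Bellman (Poisson) equation reads v + r_bar * 1 = r + P v.
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, 2.0])

# Stationary distribution d: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
d = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
d = d / d.sum()

r_bar = d @ r                                    # average reward (gain)

# Solve (I - P) v = r - r_bar * 1 subject to d @ v = 0 (one particular bias).
n = len(r)
A = np.vstack([np.eye(n) - P, d])
b = np.append(r - r_bar, 0.0)
v, *_ = np.linalg.lstsq(A, b, rcond=None)

print(r_bar, v)
print(np.allclose(v + r_bar, r + P @ v))         # verifies the Bellman equation
```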
4.2 Average Reward Temporal Difference Learning
Temporal Difference learning (TD; Sutton (1988)) is a foundational algorithm in RL (Sutton & Barto, 2018). Inspired by its success in the discounted setting, Tsitsiklis & Roy (1999) proposed using the update rule (Average Reward TD) to estimate (up to a constant offset) for average reward MDPs. The updates are given by:
(Average Reward TD) | ||||
(57) |
where is a trajectory of states and rewards from an MDP under a fixed policy in a finite state space , is the scalar estimate of the average reward , is the tabular value estimate, and are learning rates.
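The following minimal sketch implements tabular average reward TD on a small, illustrative Markov reward process. The environment, the step sizes, and the exact placement of the average-reward estimate inside the TD error are assumptions made for illustration only.

```python
import numpy as np

# A minimal sketch of tabular average reward TD on a 3-state Markov reward process.
rng = np.random.default_rng(2)

P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5]])
r = np.array([1.0, 0.0, 2.0])

V = np.zeros(3)        # tabular value (bias) estimate
r_bar = 0.0            # scalar estimate of the average reward
s = 0
for t in range(1, 200_001):
    s_next = rng.choice(3, p=P[s])
    reward = r[s]
    alpha = 1.0 / t ** 0.6                     # value-function learning rate
    beta = 1.0 / t ** 0.6                      # average-reward learning rate
    delta = reward - r_bar + V[s_next] - V[s]  # average reward TD error
    V[s] += alpha * delta
    r_bar += beta * (reward - r_bar)
    s = s_next

print(r_bar)           # approaches the true average reward
print(V - V.mean())    # approaches the bias, up to a constant offset
```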
To utilize Theorem 2.6 to prove the almost sure convergence of (Average Reward TD), we first rewrite it in a compact form to match that of (SKM with Markovian and Additive Noise). Define the augmented Markov chain . It is easy to see that evolves in the finite space . We then define a function by defining the -th element of as
(58) | |||
(59) |
The update to in (Average Reward TD) can then be expressed as
(60) |
Here, is the random noise vector defined as . It is the current estimation error of the average reward estimator . Intuitively, the indicator reflects the asynchronous nature of (Average Reward TD): for each , only the -indexed element of is updated.
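The following minimal sketch illustrates this compact rewrite: applying the averaged update with a function of the form above changes only the coordinate indexed by the current state, matching the componentwise TD update. The concrete form of the function and its arguments are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the compact (vector-valued) rewrite of the asynchronous update:
# only the coordinate indexed by the current state actually changes.
def h(v, y, r_bar_estimate):
    s, reward, s_next = y
    out = v.copy()                                   # h(v, y)[i] = v[i] for i != s
    out[s] = reward - r_bar_estimate + v[s_next]     # only the s-th entry is "updated"
    return out

def compact_step(v, y, r_bar_estimate, alpha):
    return (1 - alpha) * v + alpha * h(v, y, r_bar_estimate)

def componentwise_step(v, y, r_bar_estimate, alpha):
    s, reward, s_next = y
    out = v.copy()
    out[s] += alpha * (reward - r_bar_estimate + v[s_next] - v[s])
    return out

v = np.array([0.3, -1.2, 0.7])
y = (1, 0.5, 2)
print(np.allclose(compact_step(v, y, 0.1, 0.05),
                  componentwise_step(v, y, 0.1, 0.05)))   # True: the two forms agree
```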
We are now ready to prove the convergence of (Average Reward TD). Throughout the rest of the section, we utilize the following assumption.
Assumption 4.1 (Ergodicity).
Both and are finite. The Markov chain induced by the policy is aperiodic and irreducible.
Theorem 4.2.
Let Assumption 4.1 hold. Consider the learning rates in the form of with . Then the iterates generated by (Average Reward TD) satisfy
(61) |
where is a possibly sample-path dependent fixed point.
Proof.
We proceed via verifying assumptions of Theorem 2.6. In particular, we consider the compact form (60).
Under Assumption 4.1, it is obvious that is irreducible and aperiodic and admits a unique stationary distribution.
To verify Assumption 2.2, we demonstrate that is Lipschitz in w.r.t. . For notational simplicity, let . We have,
(62) | |||
(63) |
Separating cases based on , if , we have
(64) |
For the case when , we have
(65) |
Therefore
(66) | ||||
(67) |
It is well known that the set of solutions to Poisson’s equation defined in (56) is non-empty (Puterman, 2014), verifying Assumption 2.3. Assumption 2.4 is directly met by the definition of .
To verify Assumption 2.5, we first notice that for (Average Reward TD), we have . It is well known from the ergodic theorem that converges to almost surely. Verifying Assumption 2.5, however, requires both an almost sure convergence rate and an convergence rate. To this end, we rewrite the update of as
(68) |
where we define and . It is now clear that the update of is a special case of linear TD in the discounted setting (Sutton, 1988). Given our choice of , the general result about the almost sure convergence rate of linear TD (Theorem 1 of Tadić (2002)) ensures that
(69) |
where is a sample-path dependent constant. This immediately verifies (10). We do note that this almost sure convergence rate can also be obtained via a law of the iterated logarithm for Markov chains (Theorem 17.0.1 of Meyn & Tweedie (2012)). The general result about the convergence rate of linear TD (Theorem 11 of Srikant & Ying (2019)) ensures that
(70) |
This immediately verifies (11) and completes the proof. ∎
5 Related Work
ODE and Lyapunov Methods for Asymptotic Convergence
A large body of research has employed ODE-based methods to establish almost sure convergence of SA algorithms (Benveniste et al., 1990; Kushner & Yin, 2003; Borkar, 2009). These methods typically begin by proving stability of the iterates (i.e., ). Abounadi et al. (2002) use this ODE method to study the convergence of (SKM), but they require the noise sequence to be uniformly bounded and the set of fixed points of the nonexpansive map to be a singleton in order to prove the stability of the iterates.
The ODE@∞ technique (Borkar & Meyn, 2000; Borkar et al., 2021; Meyn, 2024; Liu et al., 2025) is a powerful stability technique in RL. If the so-called ODE@∞ is globally asymptotically stable, existing results such as Meyn (2022); Borkar et al. (2021); Liu et al. (2025) can be used to establish the desired stability of . However, if we consider a generic non-expansive operator which may admit multiple fixed points or induce oscillatory behavior, we cannot guarantee the global asymptotic stability of the ODE@∞ without additional assumptions. This limits the ODE method’s utility in analyzing (SKM with Markovian and Additive Noise).
In addition to the ODE method, other works use Lyapunov methods (Bertsekas & Tsitsiklis, 1996; Konda & Tsitsiklis, 1999; Srikant & Ying, 2019; Borkar et al., 2021; Chen et al., 2021; Zhang et al., 2022, 2023) to provide asymptotic and non-asymptotic results for various RL algorithms. Both the ODE and Lyapunov based methods are distinct from the fox-and-hare approach for (IKM) introduced by Cominetti et al. (2014), upon which our work builds.
Average Reward TD
The (Average Reward TD) algorithm introduced by Tsitsiklis & Roy (1999) is the most fundamental policy evaluation algorithm in average reward settings.
In addition to the tabular setting we study here, (Average Reward TD) has also been extended to linear function approximation (Tsitsiklis & Roy, 1999; Konda & Tsitsiklis, 1999; Wu et al., 2020; Zhang et al., 2021). Instead of using a look-up table to store the value estimate, linear function approximation approximates with . Let be the feature matrix, whose -th row is , and let denote the learnable weights. Linear function approximation reduces to the tabular method when . While Tsitsiklis & Roy (1999) proves almost sure convergence under assumptions such as linear independence of the columns of and for any , these conditions fail to hold in the most straightforward tabular case (where ). However, under a non-trivial construction of , it can be shown that the results from Tsitsiklis & Roy (1999) can be used to prove the almost sure convergence of (Average Reward TD) to a set in the tabular case.
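The following minimal sketch illustrates this reduction: with one-hot features (an identity feature matrix), a single semi-gradient linear TD update coincides with the tabular update. All names and values below are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of average reward TD with linear features, and of the fact
# that one-hot features (Phi = I) recover the tabular update exactly.
def linear_td_step(w, phi_s, phi_next, reward, r_bar, alpha):
    delta = reward - r_bar + phi_next @ w - phi_s @ w   # TD error under features
    return w + alpha * delta * phi_s                    # semi-gradient update

n_states = 3
Phi = np.eye(n_states)                 # tabular case: phi(s) is the s-th unit vector
w = np.array([0.3, -1.2, 0.7])         # then w coincides with the tabular estimate V

s, s_next, reward, r_bar, alpha = 1, 2, 0.5, 0.1, 0.05
w_new = linear_td_step(w, Phi[s], Phi[s_next], reward, r_bar, alpha)

# The same update written in tabular form touches only V[s].
V = w.copy()
V[s] += alpha * (reward - r_bar + V[s_next] - V[s])
print(np.allclose(w_new, V))           # True
```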
Zhang et al. (2021) establishes the convergence of (Average Reward TD), and also provides a convergence rate. However, it is well known that convergence and almost sure convergence do not imply each other. Our work improves upon both of these works by proving that the iterates converge to a fixed point almost surely.
Finally, the (Average Reward TD) algorithm has inspired the design of many other TD algorithms for average reward MDPs, for both policy evaluation and control, including Konda & Tsitsiklis (1999); Yang et al. (2016); Wan et al. (2021a); Zhang & Ross (2021); Wan et al. (2021b); He et al. (2022); Saxena et al. (2023). We envision that our work will shed light on the almost sure convergence of those follow-up algorithms.
6 Conclusion
In this work, we provide the first proof of almost sure convergence as well as non-asymptotic finite sample analysis of stochastic approximations under nonexpansive maps with Markovian noise. As an application, we provide the first proof of almost sure convergence of (Average Reward TD) to a potentially sample-path dependent fixed point. This result highlights the underappreciated strength of SKM iterations, a tool whose potential is often overlooked in the RL community. Addressing several follow-up questions could open the door to proving the convergence of many other RL algorithms. Do SKM iterations converge in ? Do they follow a central limit theorem or a law of the iterated logarithm? Can they be extended to two-timescale settings? And can we develop a finite sample analysis for them? Resolving these questions could pave the way for significant advancements across RL theory. We leave them for future investigation.
Acknowledgements
This work is supported in part by the US National Science Foundation (NSF) under grants III-2128019 and SLES-2331904. EB acknowledges support from the NSF Graduate Research Fellowship (NSF-GRFP) under award 1842490. This work was also supported in part by the Coastal Virginia Center for Cyber Innovation (COVA CCI) and the Commonwealth Cyber Initiative (CCI), an investment in the advancement of cyber research and development, innovation, and workforce development. For more information about CCI, visit www.covacci.org and www.cyberinitiative.org.
Impact Statement
This paper presents work whose goal is to advance the field of reinforcement learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.
References
- Abounadi et al. (2001) Abounadi, J., Bertsekas, D., and Borkar, V. S. Learning algorithms for markov decision processes with average cost. SIAM Journal on Control and Optimization, 2001.
- Abounadi et al. (2002) Abounadi, J., Bertsekas, D. P., and Borkar, V. Stochastic approximation for nonexpansive maps: Application to q-learning algorithms. SIAM Journal on Control and Optimization, 41(1):1–22, 2002.
- Bellman (1957) Bellman, R. A markovian decision process. Journal of mathematics and mechanics, pp. 679–684, 1957.
- Benveniste et al. (1990) Benveniste, A., Métivier, M., and Priouret, P. Adaptive Algorithms and Stochastic Approximations. Springer, 1990.
- Bertsekas & Tsitsiklis (1996) Bertsekas, D. P. and Tsitsiklis, J. N. Neuro-Dynamic Programming. Athena Scientific Belmont, MA, 1996.
- Borkar et al. (2021) Borkar, V., Chen, S., Devraj, A., Kontoyiannis, I., and Meyn, S. The ode method for asymptotic statistics in stochastic approximation and reinforcement learning. arXiv preprint arXiv:2110.14427, 2021.
- Borkar (2009) Borkar, V. S. Stochastic approximation: a dynamical systems viewpoint. Springer, 2009.
- Borkar & Meyn (2000) Borkar, V. S. and Meyn, S. P. The ode method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 2000.
- Bravo & Cominetti (2024) Bravo, M. and Cominetti, R. Stochastic fixed-point iterations for nonexpansive maps: Convergence and error bounds. SIAM Journal on Control and Optimization, 62(1):191–219, 2024.
- Bravo et al. (2019) Bravo, M., Cominetti, R., and Pavez-Signé, M. Rates of convergence for inexact krasnosel’skii–mann iterations in banach spaces. Mathematical Programming, 175:241–262, 2019.
- Chen et al. (2021) Chen, Z., Maguluri, S. T., Shakkottai, S., and Shanmugam, K. A lyapunov theory for finite-sample guarantees of asynchronous q-learning and td-learning variants. arXiv preprint arXiv:2102.01567, 2021.
- Cominetti et al. (2014) Cominetti, R., Soto, J. A., and Vaisman, J. On the rate of convergence of krasnosel’skii-mann iterations and their connection with sums of bernoullis. Israel Journal of Mathematics, 199:757–772, 2014.
- Edelstein (1966) Edelstein, M. A remark on a theorem of m. a. krasnoselski. American Mathematical Monthly, 1966.
- Folland (1999) Folland, G. B. Real analysis: modern techniques and their applications, volume 40. John Wiley & Sons, 1999.
- He et al. (2022) He, J., Wan, Y., and Mahmood, A. R. The emphatic approach to average-reward policy evaluation. In Deep Reinforcement Learning Workshop NeurIPS 2022, 2022.
- Ishikawa (1976) Ishikawa, S. Fixed points and iteration of a nonexpansive mapping in a banach space. Proceedings of the American Mathematical Society, 59(1):65–71, 1976.
- Karandikar & Vidyasagar (2024) Karandikar, R. L. and Vidyasagar, M. Convergence rates for stochastic approximation: Biased noise with unbounded variance, and applications. Journal of Optimization Theory and Applications, pp. 1–39, 2024.
- Kiefer & Wolfowitz (1952) Kiefer, J. and Wolfowitz, J. Stochastic estimation of the maximum of a regression function. Annals of Mathematical Statistics, 1952.
- Kim & Xu (2007) Kim, T.-H. and Xu, H.-K. Robustness of mann’s algorithm for nonexpansive mappings. Journal of Mathematical Analysis and Applications, 327(2):1105–1115, 2007.
- Konda & Tsitsiklis (1999) Konda, V. R. and Tsitsiklis, J. N. Actor-critic algorithms. In Advances in Neural Information Processing Systems, 1999.
- Koval & Schwabe (2003) Koval, V. and Schwabe, R. A law of the iterated logarithm for stochastic approximation procedures in d-dimensional euclidean space. Stochastic processes and their applications, 105(2):299–313, 2003.
- Krasnosel’skii (1955) Krasnosel’skii, M. A. Two remarks on the method of successive approximations. Uspekhi matematicheskikh nauk, 10(1):123–127, 1955.
- Kushner & Yin (2003) Kushner, H. and Yin, G. G. Stochastic approximation and recursive algorithms and applications. Springer Science & Business Media, 2003.
- Liu (1995) Liu, L.-S. Ishikawa and mann iterative process with errors for nonlinear strongly accretive mappings in banach spaces. Journal of Mathematical Analysis and Applications, 194(1):114–125, 1995.
- Liu et al. (2025) Liu, S., Chen, S., and Zhang, S. The ODE method for stochastic approximation and reinforcement learning with markovian noise. Journal of Machine Learning Research, 2025.
- Métivier & Priouret (1987) Métivier, M. and Priouret, P. Théorèmes de convergence presque sure pour une classe d’algorithmes stochastiques à pas décroissant. Probability Theory and related fields, 74:403–428, 1987.
- Meyn (2022) Meyn, S. Control systems and reinforcement learning. Cambridge University Press, 2022.
- Meyn (2024) Meyn, S. The projected bellman equation in reinforcement learning. IEEE Transactions on Automatic Control, 2024.
- Meyn & Tweedie (2012) Meyn, S. P. and Tweedie, R. L. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
- Puterman (2014) Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
- Qian et al. (2024) Qian, X., Xie, Z., Liu, X., and Zhang, S. Almost sure convergence rates and concentration of stochastic approximation and reinforcement learning with markovian noise. arXiv preprint arXiv:2411.13711, 2024.
- Reich (1979) Reich, S. Weak convergence theorems for nonexpansive mappings in banach spaces. J. Math. Anal. Appl, 67(2):274–276, 1979.
- Robbins & Monro (1951) Robbins, H. and Monro, S. A stochastic approximation method. The Annals of Mathematical Statistics, 1951.
- Saxena et al. (2023) Saxena, N., Khastagir, S., Kolathaya, S., and Bhatnagar, S. Off-policy average reward actor-critic with deterministic policy search. In International Conference on Machine Learning, pp. 30130–30203. PMLR, 2023.
- Srikant & Ying (2019) Srikant, R. and Ying, L. Finite-time error bounds for linear stochastic approximation and TD learning. In Proceedings of the Conference on Learning Theory, 2019.
- Sutton (1988) Sutton, R. S. Learning to predict by the methods of temporal differences. Machine Learning, 1988.
- Sutton & Barto (2018) Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction (2nd Edition). MIT press, 2018.
- Szepesvári (1997) Szepesvári, C. The asymptotic convergence-rate of q-learning. Advances in neural information processing systems, 10, 1997.
- Tadić (2002) Tadić, V. B. On the almost sure rate of convergence of temporal-difference learning algorithms. IFAC Proceedings Volumes, 35(1):455–460, 2002.
- Tadic (2004) Tadic, V. B. On the almost sure rate of convergence of linear stochastic approximation algorithms. IEEE Transactions on Information Theory, 50(2):401–409, 2004.
- Tsitsiklis & Roy (1999) Tsitsiklis, J. N. and Roy, B. V. Average cost temporal-difference learning. Automatica, 1999.
- Wan et al. (2021a) Wan, Y., Naik, A., and Sutton, R. Average-reward learning and planning with options. Advances in Neural Information Processing Systems, 34:22758–22769, 2021a.
- Wan et al. (2021b) Wan, Y., Naik, A., and Sutton, R. S. Learning and planning in average-reward markov decision processes. In Proceedings of the International Conference on Machine Learning, 2021b.
- Wu et al. (2020) Wu, Y., Zhang, W., Xu, P., and Gu, Q. A finite-time analysis of two time-scale actor-critic methods. In Advances in Neural Information Processing Systems, 2020.
- Yang et al. (2016) Yang, S., Gao, Y., An, B., Wang, H., and Chen, X. Efficient average reward reinforcement learning using constant shifting values. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016.
- Zhang et al. (2021) Zhang, S., Zhang, Z., and Maguluri, S. T. Finite sample analysis of average-reward TD learning and Q-learning. Advances in Neural Information Processing Systems, 2021.
- Zhang et al. (2022) Zhang, S., des Combes, R. T., and Laroche, R. Global optimality and finite sample analysis of softmax off-policy actor critic under state distribution mismatch. Journal of Machine Learning Research, 2022.
- Zhang et al. (2023) Zhang, S., Des Combes, R. T., and Laroche, R. On the convergence of sarsa with linear function approximation. In International Conference on Machine Learning, 2023.
- Zhang & Ross (2021) Zhang, Y. and Ross, K. W. On-policy deep reinforcement learning for the average-reward criterion. In International Conference on Machine Learning, pp. 12535–12545. PMLR, 2021.
Appendix A Mathematical Background
Lemma A.1 (Theorem 2.1 from Bravo & Cominetti (2024)).
Let be a sequence generated by (IKM). Let denote the set of fixed points of (assumed to be nonempty). Additionally, let be defined according to (9) and the real function as
(71) |
If is such that for all , then
(72) |
Moreover, if and with , then (72) holds with , and we have as well as for some fixed point
Lemma A.2 (Monotonicity of from Lemma B.1 in Bravo & Cominetti (2024)).
For with and in (8), we have for so that .
Lemma A.4 (Monotone Convergence Theorem from Folland (1999)).
Given a measure space , define as the space of all measurable functions from to . Then, if is a sequence in such that for all j, and , then .
Appendix B Additional Lemmas from Section 2
In this section, we present and prove the lemmas referenced in Section 2 as part of the proof of Theorem 2.6. Additionally, we establish several auxiliary lemmas necessary for these proofs.
We begin by proving several convergence results related to the learning rates.
Lemma B.1 (Learning Rates).
Since this lemma comprises several short proofs regarding the deterministic learning rates defined in Assumption 2.4, we prove each result in its own subsection. Recall that where .
(73):
Proof.
From the definition of in (9), we have
(81) |
Case 1: . It is easy to see .
Case 2: When , we can approximate the sum with an integral, with
(82) |
Therefore we have when . ∎
In analyzing the subsequent equations, we will use the fact that when and when . Additionally, we have .
(74):
Proof.
We have an order-wise approximation of the sum
(83) |
In both cases of and , the series clearly converge as . ∎
(76):
Proof.
We have an order-wise approximation of the sum
(84) |
In both cases of and , the series clearly converge as . ∎
(75):
Proof.
We can give an order-wise approximation of the sum
(85) |
In both cases of and , the series clearly converge as . ∎
(77):
Proof.
Since is strictly decreasing, we have .
Case 1: For the case where , it is trivial to see that,
(86) |
This series clearly converges.
Case 2: For the case where , we have
(87) | ||||
(88) |
To analyze the behavior of this term for large we first consider the binomial expansion of ,
(89) |
Subtracting from :
(90) |
The leading order of the denominator of (88) is clearly , which gives
(91) |
Therefore with ,
(92) |
which clearly converges as for . ∎
(78):
(79):
Proof.
Case 1: For , because we have and from Lemma A.2, we have the order-wise approximation,
( is increasing) | (97) | ||||
(98) | |||||
(99) | |||||
(100) |
which clearly converges.
Case 2: For the case when , we have,
( is increasing) | (101) | ||||
(Lemma A.3) | (102) | ||||
(103) | |||||
(104) |
which converges for . ∎
Then, under Assumption 2.5, we prove additional results about the convergence of the first and second moments of the additive noise .
Proof.
Recall that by Assumption 2.5 we have . Also recall that with . Then, we can prove the following equations:
(105):
By Jensen’s inequality, we have
(110) |
(106):
(111) |
which clearly converges as for .
(107):
(112) |
which clearly converges as for .
(108):
(113) |
which clearly converges as for .
(109):
(Lemma A.2) | (114) | ||||
(Lemma B.2) | (115) |
It can be easily verified with an integral approximation that . This further implies
(116) |
which converges as for . ∎
Next, in Lemma B.3, we upper-bound the iterates .
Lemma B.3.
For each , we have
(117) |
where is a deterministic constant.
Proof.
Applying to both sides of (SKM with Markovian and Additive Noise) gives,
(118) | |||||
(119) | |||||
(By (3)) | (120) | ||||
(121) |
A simple induction shows that almost surely,
(122) |
Since is monotonically decreasing, we have
(123) | ||||
(124) | ||||
(125) |
Therefore, since is monotonically increasing, there exists some constant we denote as such that
(126) |
∎
Lemma B.4.
With as defined in (13), we have
(127) |
which further implies
(128) |
where are deterministic constants.
Proof.
Since we work with a finite , we will use functions and matrices interchangeably. For example, given a function , we also use to denote a matrix in whose -th row is . Similarly, a matrix in also corresponds to a function .
Let denote the function and let denote the function . Theorem 8.2.6 of Puterman (2014) then ensures that
(129) |
where is the fundamental matrix of the Markov chain depending only on the chain’s transition matrix . The exact expression of is inconsequential and we refer the reader to Puterman (2014) for details. Then we have for any ,
(130) |
This implies that
(131) | |||||
(132) | |||||
(Assumption 2.2) | (133) | ||||
(134) |
yielding
(135) |
The equivalence between norms in finite dimensional space ensures that there exists some such that (127) holds. Letting then yields
(136) |
Define , we get
(137) |
∎
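As a concrete illustration of the fundamental-matrix construction invoked above, the following minimal sketch solves a Poisson equation for a two-state chain and verifies the solution numerically; the chain, the function, and the centering convention are illustrative assumptions, and the exact conventions of the cited construction may differ.

```python
import numpy as np

# A minimal sketch of solving Poisson's equation (I - P) nu = f - f_bar * 1
# for a finite, irreducible, aperiodic chain, via the fundamental matrix
# Z = (I - P + 1 d^T)^{-1}.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = np.array([1.0, -2.0])

# Stationary distribution d of P.
eigvals, eigvecs = np.linalg.eig(P.T)
d = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
d = d / d.sum()

f_bar = d @ f
Z = np.linalg.inv(np.eye(2) - P + np.outer(np.ones(2), d))   # fundamental matrix
nu = Z @ (f - f_bar)                                         # one solution nu

print(np.allclose((np.eye(2) - P) @ nu, f - f_bar))          # verifies Poisson's equation
```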
Lemma B.5.
We have for any ,
(138) |
where is a possibly sample-path dependent constant. Additionally, we have
(139) |
where is a deterministic constant.
Proof.
Having proven that is Lipschitz continuous in in Lemma B.4, we have
(Lemma B.4) | (140) | ||||
(Lemma B.3) | (141) | ||||
(142) |
Since (10) in Assumption 2.5 assures us that is finite almost surely while is monotonically increasing, there exists some possibly sample-path dependent constant such that
(143) |
We can also prove a deterministic bound on the expectation of ,
(144) | ||||
(145) |
By Lemma B.2, it is easy to see that . Therefore, there exists some deterministic constant such that
(146) |
∎
Although the two statements in Lemma B.5 appear similar, their difference is crucial. Assumption 2.5 and (10) only ensure the existence of a sample-path dependent constant but its form is unknown, preventing its use for expectations or explicit bounds. In contrast, using (11) from Assumption 2.5, we derive a universal constant .
Lemma B.6.
Proof.
Lemma B.7.
For each , defined in (15), we have
(156) |
and
(157) |
where and are deterministic constants and
(158) |
is the -algebra until time .
Proof.
First, to prove (156), we have
(159) |
where the first inequality results from (153) in Lemma B.6 and the second inequality results from Lemma B.4.
Then, to prove (157), from Lemma B.3 we then have,
(160) |
Recall that by Assumption 2.5, . Examining the right-most term we then have,
(Cauchy-Schwarz) | (161) | ||||
(By (107) in Lemma B.2) | (162) | ||||
(163) | |||||
(164) | |||||
(165) |
We then have
(166) |
Because our bound on is independent of , we have
(By (166)) | (167) |
Due to the equivalence of norms in finite-dimensional spaces, there exists a deterministic constant such that (157) holds. ∎
Now, we are ready to present four additional lemmas which we will use to bound the four noise terms in (27).
Lemma B.8.
With defined in (27),
(168) |
Proof.
We first observe that the sequence defined in (27) is positive and monotonically increasing. Therefore by the monotone convergence theorem, it converges almost surely to a (possibly infinite) limit which we denote as,
(169) |
Then, we will utilize a generalization of Lebesgue’s monotone convergence theorem (Lemma A.4) to prove that the limit is finite almost surely. From Lemma A.4, we see that
(170) |
Therefore, to prove that is almost surely finite, it is sufficient to prove that . To this end, we bound the expectation of , starting with from (25). We have,
(171) | |||||
(Jensen’s Ineq.) | (172) | ||||
( is a Martingale Difference Series) | (173) | ||||
(Lemma B.7) | (174) |
Then using the definition of from (27), we have
(175) |
Then, by (79) in Lemma B.1, we have
(176) |
and since is also monotonically increasing, we have
(177) |
which implies that almost surely. ∎
Lemma B.9.
With defined in (27),
(178) |
Proof.
We first observe that the sequence defined in (27) is positive and monotonically increasing. Therefore by the monotone convergence theorem, it converges almost surely to a (possibly infinite) limit which we denote as,
(179) |
Then, we utilize a generalization of Lebesgue’s monotone convergence theorem (Lemma A.4) to prove that the limit is finite almost surely. By Lemma A.4, we have
(180) |
Therefore, to prove that is almost surely finite, it is sufficient to prove that . To this end, we proceed by bounding the expectation of ,
(181) |
Then, by (109) in Lemma B.2, we have,
(182) |
and since is also monotonically increasing, we have
(183) |
which implies that almost surely.
∎
Lemma B.10.
With defined in (27), we have
(184) |
Proof.
Beginning with the definition of in (25), we have
(185) | |||||
(186) | |||||
(Lemma B.4) | (187) | ||||
(By (SKM with Markovian and Additive Noise)) | (188) | ||||
(By (3)) | (189) | ||||
(Lemma B.3) | (190) |
Because Assumption 2.5 assures us that is almost surely finite, there exists some sample-path dependent constant, which we denote as , such that
(Assumption 2.5) | (191) | ||||
( is increasing) | (192) | ||||
(193) |
Again, from Assumption 2.5, we can conclude that there exists some other sample-path dependent constant, which we denote as , such that
(194) |
Therefore, from the definition of in (22)
(195) |
(196) |
Then, the monotone convergence theorem proves the lemma. ∎
To prove that (23) holds almost surely, we introduce four lemmas which we subsequently use to prove an extension of Theorem 2.1 of Borkar (2009) in Appendix D.
Lemma B.11.
We have
(197) |
Proof.
Recall that is a Martingale difference sequence. Then, the Martingale sequence
(198) |
is bounded in with,
(Jensen’s Ineq.) | (199) | ||||
( is a Martingale Difference Series) | (200) | ||||
(Lemma B.7) | (201) |
Lemma B.1 then gives
(202) |
Doob’s martingale convergence theorem implies that converges to an almost surely finite random variable, which proves the lemma. ∎
Lemma B.12.
We have,
(203) |
Proof.
Utilizing the definition of in (16), we have
(204) | |||||
(205) | |||||
( = 0) | (206) |
The triangle inequality gives
(207) | |||||
(Lemma B.5) | (208) | ||||
(209) |
It is easy to see that , and is simply a deterministic and finite constant. Therefore, by Lemma B.1 we have
(210) |
which proves the lemma.
∎
Lemma B.13.
We have,
(211) |
Proof.
Utilizing the definition of in (17), we have
(212) | |||||
(213) | |||||
(Lemma B.4) | (214) | ||||
(215) | |||||
(By (SKM with Markovian and Additive Noise)) | (216) | ||||
(By (3)) | (217) | ||||
(Lemma B.3) | (218) |
Because Assumption 2.5 assures us that is finite, there exists some sample-path dependent constant, which we denote as , such that
(Assumption 2.5) | (219) | ||||
( is increasing) | (220) |
Lemma B.14.
Let be the iterates defined in (21). Then if , we have almost surely.
Proof.
We use a stochastic approximation argument to show that . The almost sure convergence of is given by a generalization of Theorem 2.1 of (Borkar, 2009), which we present as Theorem D.6 in Appendix D for completeness.
We now verify the assumptions of Theorem D.6. Beginning with the definition of in (18), we have
(221) | ||||
(222) |
We now bound the three terms in the RHS.
For , we have
(223) |
where we have used the fact that the series converges almost surely by Assumption 2.5.
For , from (206) in Lemma B.12, we have
(224) | ||||
(225) |
Taking the norm and applying the triangle inequality, we have
(226) | |||||
(227) | |||||
(Lemma B.5) | (228) |
where the last inequality holds because is monotonically increasing. Note that
(229) |
Since we have , then
(230) |
where we used the fact that (77) in Lemma B.1 and the monotone convergence theorem prove that the series converges almost surely.
For , following the steps in Lemma B.13 (which we omit to avoid repetition), we have,
(231) |
which further implies that
(232) |
where we use the fact that, by (74) in Lemma B.1, Assumption 2.5, and the monotone convergence theorem, both series on the RHS converge almost surely. Therefore, we have proven that
(233) |
thereby verifying Assumption D.1.
Lemma B.15.
Proof.
Following the approach of Bravo & Cominetti (2024), we utilize the estimate for inexact Krasnoselskii-Mann iterations of the form (IKM) presented in Lemma A.1 to prove the convergence of (SKM with Markovian and Additive Noise). Using the definition of in (21), we then let and define , which gives
(235) | ||||
(236) | ||||
(237) | ||||
(238) |
which matches the form of (IKM) with . Due to the non-expansivity of from (6), we have
(239) |
The convergence of then follows directly from Lemma A.1, which gives for some , and therefore . We note that here is stochastic, while the (IKM) result in Lemma A.1 considers deterministic noise. We therefore apply Lemma A.1 to each sample path. ∎
Appendix C Additional Lemmas from Section 3
Corollary C.1.
We have
(240) |
where is a deterministic constant.
Proof.
Corollary C.2.
We have
(242) |
where is a deterministic constant.
Proof.
Starting from (35) to avoid repetition, we have,
(243) |
Now we can take the expectation and apply the sample-path independent bound from Lemma B.5 with,
(Lemma B.5) | (244) | ||||
(245) |
Lemma B.1 and being monotonically increasing for yields,
(246) | |||||
() | (247) | ||||
(Lemma A.2) | (248) |
Therefore, there exists a deterministic constant we denote as such that
(249) |
∎
Corollary C.3.
We have
(250) |
Proof.
Lemma C.4.
For defined in (51), we have
(255) |
Proof.
From (51), we have
(256) |
To prove the Lemma, we will examine each of the four terms and prove they are . For , this is trivial. For , we first recall from Lemma B.1 that and
(257) |
Then we have,
(258) |
Then by Lemma B.2 we have
(Lemma A.2) | (259) | ||||
(260) | |||||
(261) | |||||
(262) | |||||
(263) |
Because we have for , we can see from (258), that is dominated by .
For , for the case when , we have
( increasing) | (265) | ||||
(266) | |||||
(267) |
For the case when , we have
(268) |
which we can approximate by an integral,
(269) |
Therefore,
(270) |
Combining our results from the two cases, we have for
(271) |
Comparing with in (258), since we have for , we can see that is dominated by , thereby proving the lemma.
∎
Lemma C.5.
We have,
(272) |
Proof.
The proof of this Lemma is a straightforward combination of the existing results of Theorems 2.11 and 3.1 from (Bravo & Cominetti, 2024). First, from (51), we have
(273) |
In the proof of Theorem 2.11 of (Bravo & Cominetti, 2024), they prove that if there exists a decreasing convex function of class , and a constant , such that for ,
(274) |
then,
(275) |
Using the fact that , which aligns with the analogous from Bravo & Cominetti (2024), and adopting their definition of , we avoid redundant derivations here.
Theorem 3.1 in Bravo & Cominetti (2024) establishes that for the step size schedule specified in Assumption 2.4, there exist constants and a function satisfying (274). Specifically, they show with
(276) |
for some constant and , (274) is satisfied. Moreover, they demonstrate that the resulting convolution integral in (275) evaluates to .
Appendix D Extension of Theorem 2.1 of Borkar (2009)
In this section, we present a simple extension of Theorem 2.1 of Borkar (2009) for completeness. Readers familiar with stochastic approximation theory should find this extension fairly straightforward. Chapter 2 of Borkar (2009) considers stochastic approximations of the form
(277) |
where it is assumed that almost surely. However, our work requires removing the assumption that and replacing it with a milder condition on the asymptotic rate of change of , akin to Kushner & Yin (2003).
Assumption D.1.
For any ,
(278) |
where .
The next four assumptions are the same as the remaining assumptions in Chapter 2 of (Borkar, 2009).
Assumption D.2.
The map is Lipschitz: for some .
Assumption D.3.
The step sizes are positive scalars satisfying
(279) |
Assumption D.4.
is a martingale difference sequence w.r.t. the increasing family of -algebras
(280) |
That is,
(281) |
Furthermore, are square-integrable with
(282) |
for some constant
Assumption D.5.
The iterates of (277) remain bounded almost surely, i.e.,
(283) |
Theorem D.6 (Extension of Theorem 2.1 from (Borkar, 2009)).
Proof.
We now demonstrate that even with the relaxed assumption on , we still obtain the same almost sure convergence of the iterates as in Borkar (2009). Following Chapter 2 of Borkar (2009), we construct a continuous interpolated trajectory and show that it asymptotically approaches the solution set of (284) almost surely. Define time instants . By Assumption D.3, . Let . Define a continuous, piecewise linear by , with linear interpolation on each interval :
(285) |
It is worth noting that almost surely by Assumption D.5. Let denote the unique solution to ‘starting at s’:
(286) |
with . Similarly, let denote the unique solution to ‘ending at s’:
(287) |
with . Define also
(288) |
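As a concrete illustration of the interpolated trajectory, the following minimal sketch places the iterates of a toy SA recursion at the time instants given by the cumulative step sizes and interpolates linearly in between; the recursion and the step sizes are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of the continuous, piecewise-linear interpolation used in the
# ODE method: iterate x_n sits at time t_n = sum of the first n step sizes.
rng = np.random.default_rng(3)

n_steps = 1000
alphas = 1.0 / np.arange(1, n_steps + 1)
t = np.concatenate(([0.0], np.cumsum(alphas)))      # time instants t_0, t_1, ...

x = np.zeros(n_steps + 1)
for n in range(n_steps):                            # a toy SA recursion for x_dot = -x
    x[n + 1] = x[n] + alphas[n] * (-x[n] + rng.normal(scale=0.1))

def x_bar(s):
    # The interpolated trajectory evaluated at an arbitrary time s in [0, t[-1]].
    return np.interp(s, t, x)

print(x_bar(2.5), x_bar(t[-1]))                     # tracks the ODE solution for large t
```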
Lemma D.7 (Extension of Theorem 1 from (Borkar, 2009)).
Proof.
Let be in . Let . Then,
(2.1.6 in (Borkar, 2009)) | (291) |
where . Borkar (2009) then compares this with
(292) | ||||
(2.1.7 in (Borkar, 2009)) | (293) |
Next, Borkar (2009) bounds the integral on the right-hand side by proving
(2.1.8 in (Borkar, 2009)) | (294) |
where almost surely and a.s. by Assumption D.5.
Then, we can subtract (2.1.7) from (2.1.6) and take norms, yielding
(295) | ||||
(296) |
The key difference between (296) and the analogous equation in Chapter 2 of Borkar (2009) is that we replace the with a . The reason we can make this change is that we defined to lie in the range . Recall that we also defined in Assumption D.1, so we know that in (291). Borkar (2009) relaxes this unnecessarily for notational simplicity, but a similar argument can be found in Kushner & Yin (2003).
Also, we have,
(297) | |||||
(by (288)) | (298) | ||||
(299) |
Borkar (2009) proves that is a zero-mean, square-integrable martingale. By Assumptions D.3, D.4, and D.5,
(300) |
Therefore, the martingale convergence theorem gives the almost sure convergence of as . Combining this with Assumption D.1 yields,
(301) |
Using the definition of given by (Borkar, 2009), we have proven that our slightly relaxed assumption still yields almost surely as . The rest of the argument for the proof of the theorem in Borkar (2009) holds without any additional modification. ∎