A Gradually Reinforced Sample-Average-Approximation Differentiable Homotopy Method for a System of Stochastic Equations
Abstract
This paper applies the sample-average-approximation (SAA) scheme to solve a system of stochastic equations (SSE), which has many applications in a variety of fields. The SAA is an effective paradigm for addressing risks and uncertainty in stochastic models from the perspective of the Monte Carlo principle. Nonetheless, a numerical conflict arises from the sample size of the SAA when one has to make a tradeoff between the accuracy of solutions and the computational cost. To alleviate this issue, we incorporate a gradually reinforced SAA scheme into a differentiable homotopy method and develop a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method in this paper. By introducing a series of continuously differentiable functions of the homotopy parameter ranging between zero and one, we establish a differentiable homotopy system, which is able to gradually increase the sample size of the SAA as the homotopy parameter descends from one to zero. The set of solutions to the homotopy system contains an everywhere smooth path, which starts from an arbitrary point and ends at a solution to the SAA with any desired accuracy. The GRSAA differentiable homotopy method serves as a bridge linking the gradually reinforced SAA scheme and a differentiable homotopy method, and it retains the nice property of global convergence that the homotopy method possesses while greatly reducing the computational cost for attaining a desired solution to the original SSE. Several numerical experiments further confirm the effectiveness and efficiency of the proposed method.
Keywords: System of Stochastic Equations, Gradually Reinforced Sample Average Approximation, Differentiable Homotopy Method
1 Introduction
This paper is concerned with the problem of finding an satisfying
(2) |
where is a stochastic parameter in a probability space , is a continuously differentiable mapping and denotes the expected value over . As an effective paradigm for addressing risks and uncertainty, the SSE (2) can be regarded as a natural extension of a system of equations (SE) to a stochastic environment and has been extensively studied and applied in a number of areas including economics, game theory, management science and engineering. An excellent review on these applications can be found in [43]. Furthermore, the SSE is closely associated with several other stochastic problems such as stochastic variational inequalities (SVI) [6, 7, 17, 29, 42], stochastic nonlinear complementarity problems (SNCP) [5, 18, 31] and stochastic optimization (SO) [12, 27, 30, 39].
To solve the deterministic SE, several well-known methods have been proposed in the literature, including Newton methods, homotopy or path-following methods and their variants. However, the convergence of Newton methods depends heavily on the starting points. The homotopy methods are a class of powerful methods for solving SEs and possess the desired property of global convergence; they play an indispensable and lasting role in various fields; see, for instance, [11, 16, 28, 53]. The homotopy methods can be classified into simplicial homotopy methods and differentiable homotopy methods. The simplicial homotopy methods, dating back to the seminal papers [14, 44], are powerful mechanisms for solving equations with high nonlinearity or non-smooth structures, but they cannot benefit much from the differentiability of problems and can be extremely time-consuming, especially when the problem sizes are large. The differentiable homotopy methods, introduced in [33], overcome this deficiency of the simplicial homotopy methods and are capable of following a smooth path.
Over the past few decades, the differentiable homotopy methods have been extensively investigated in the literature and applied to various areas such as general equilibrium theory and game theory for solving problems that can be reformulated as systems of differentiable equations. Two direct proofs were given in [20] to show the feasibility of the linear tracing procedure, which was made differentiable for computing Nash equilibria in [22]. A stochastic version of the linear tracing procedure was established in [24] for the computation and selection of stationary equilibria or stationary Markov perfect equilibria in stochastic games. Later, a more efficient homotopy method based on the idea of the interior-point method was developed in [11]. An all-solution differentiable homotopy method was proposed in [32] to find all equilibria for static and dynamic games with continuous strategies. A convex quadratic penalty differentiable homotopy method was developed in [8] to compute perfect equilibria for large-scale noncooperative games. A differentiable homotopy method was described in [45] to solve general equilibrium models with incomplete asset markets, and its reliability was evidenced by numerous numerical experiments. A generically convergent differentiable homotopy method was constructed in [21] to compute an equilibrium for a finance version of the general equilibrium problem considered in [45]. A proximal block coordinate homotopy framework was presented in [50] to solve large-scale sparse least squares problems through numerically following a piecewise smooth path. A polyhedral homotopy-based method was designed in [36] to find solutions to generalized Nash equilibrium problems. More differentiable homotopy methods and their applications can be found in the literature such as [23, 35, 37, 51, 54] and the references therein.
Unfortunately, all the existing methods fail to directly solve a stochastic system, since the underlying mapping of an SSE is in the form of an expected value with respect to a certain stochastic parameter vector and needs to be evaluated first. As demonstrated in [30], the evaluation of is generally a tough job, because the distribution of the stochastic parameter is usually unknown and can only be simulated with historical data. Even if the distribution is given, computing multidimensional integrals is very costly. To deal with these difficulties, two classes of methods have been proposed in the literature: stochastic approximation (SA) methods and simulation-based approaches. It was proved in [41] that the SA method is almost surely convergent to a solution of an SSE, but only when satisfies certain conditions and the samples and stepsize are suitably chosen. Nevertheless, this method is very sensitive to the choice of the stepsize at each iteration and sometimes performs poorly in practice. A modified SA method with a better performance was proposed in [40]. This better performance, however, occurs only for a special class of convex stochastic optimization and saddle point problems.
The simulation-based approaches are fashionable tools for addressing uncertainty. Among them, the sample-average-approximation (SAA) is one of the most popular representatives. The basic idea of the SAA is rather simple: randomly generate samples for the stochastic parameter and approximate the “true” underlying mapping by the average of several deterministic mappings corresponding to these samples. As demonstrated in [26, 34, 46], an appropriate incorporation of the SAA into a numerical algorithm can lead to a reasonable performance in solving general stochastic problems, and an exponential convergence rate of the SAA for the SVI and SNCP has been established under some mild conditions. Inspired by this success, this paper intends to incorporate the SAA into a differentiable homotopy method for a solution to the SSE (2). To apply the SAA in the existing methods, one needs to make a tradeoff between a cheap coarse estimate and an expensive finer estimate. A common practice is to use a gradually reinforced SAA scheme, that is, to initially select a small sample size and then gradually increase the sample size to approximate the target expected value. This allows rapid progress at early stages and reduces the overall computational cost for finding a desired solution. However, this practice may fail to converge due to the changes in sample size [4].
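To make the SAA idea concrete, the following minimal sketch approximates an expected-value mapping by averaging over randomly generated samples. The toy integrand f(x, ξ) = x − ξ, the normal distribution and the sample size are illustrative assumptions, not taken from the paper; they are chosen so that the exact expectation is known in closed form.

```python
import numpy as np

# A toy integrand, f(x, xi) = x - xi, chosen only for illustration; its
# expectation E[f(x, xi)] = x - E[xi] is known in closed form.
def f(x, xi):
    return x - xi

def saa_mapping(x, samples):
    # Sample-average approximation of E[f(x, xi)] over a fixed sample set.
    return np.mean([f(x, xi) for xi in samples])

rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=2.0, size=10_000)   # xi ~ N(1, 4), an assumed distribution

x = 3.0
print("SAA estimate :", saa_mapping(x, samples))   # close to x - E[xi] = 2
print("exact value  :", x - 1.0)
```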
It is well known that a differentiable homotopy method attains global convergence through a continuous deformation process. This naturally raises the question: can we devise a differentiable homotopy method that is able to progressively increase the sample size of the SAA during the deformation process? The question seems quite challenging and was not considered in existing work, due to the intuitive collision between the discreteness of samples and the continuous deformation of the homotopy methods. To overcome this hurdle, we introduce a sequence of continuously differentiable functions of the homotopy parameter ranging between zero and one and use these functions to incorporate a gradually reinforced sample-average-approximation (GRSAA) scheme into a differentiable homotopy method. As a result of this incorporation, we establish a differentiable homotopy system and obtain a GRSAA differentiable homotopy method. As the homotopy parameter descends from one to zero, the homotopy system gradually increases the sample size and eventually reaches the desired SAA (4) at . The solution set of the homotopy system contains an everywhere smooth path, which starts from an arbitrary point and ends at a solution to the desired SAA or a satisfactory approximate solution to the SSE (2). The GRSAA differentiable homotopy method serves as a bridge linking a gradually reinforced SAA scheme and a differentiable homotopy method, and it retains the inherent property of global convergence of a standard differentiable homotopy method. To evince the benefit of differentiability, we also present a GRSAA simplicial homotopy method. To make numerical comparisons with a standard differentiable homotopy method and the GRSAA simplicial homotopy method, we have implemented the GRSAA differentiable homotopy method to solve several important applications of the SSE such as the SVI and market equilibrium problems. Numerical results further verify that the two main features of the GRSAA differentiable homotopy method, the gradual reinforcement in sample size and differentiability, can significantly enhance effectiveness and efficiency.
The rest of the paper is organized as follows. In Section 2, we first give a brief review about the gradually reinforced SAA scheme and differentiable homotopy methods and then develop a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method to solve the SAA system for the SSE (2). The convergence properties of the proposed method are discussed in Section 3. For numerical comparisons, Section 4 describes a GRSAA simplicial homotopy method, which belongs to a type of non-differentiable homotopy method. In Section 5, we employ the GRSAA differentiable homotopy method to solve some numerical examples and compare the performance of the GRSAA differentiable homotopy method with that of the GRSAA simplicial homotopy method and a standard differentiable homotopy method. We also apply the GRSAA differentiable homotopy method to solve several large-scale SSEs. All the numerical results are reported in Section 5. The paper is concluded in Section 6.
2 A Gradually Reinforced SAA Differentiable Homotopy Method
2.1 Background of the SAA and Homotopy Methods
The SAA is closely associated with the sample-path optimization (SPO) method, which is an effective paradigm for solving problems that arise in the study of complex stochastic systems. The basic concept of the SPO method is to design some deterministic SEs whose underlying mappings form a sequence of computable functions having the underlying mapping of the original SSE as its limit. Some convergence conditions of the SPO method were provided in [19]. The sample-average-approximation (SAA) is obtained by formulating the underlying mappings in the SPO as sample average mappings. More specifically, suppose that are independently and identically distributed samples of , where is a positive integer. According to the Monte Carlo principle, the expected value can be approximated by the deterministic mapping, , and the well-known strong Law of Large Numbers assures that for any , there exists a sufficiently large number such that when ,
(3) |
Let be a solution to the following SAA with very large value of ,
(4) |
Substituting into (3), we have , which implies that provides a satisfactory approximate solution to the SSE (2) when is sufficiently large. Furthermore, when goes to infinity, provides an accurate solution to the SSE (2) with probability one. However, the rate at which converges to is , that is, should be increased by 100 times in order to improve the accuracy of an estimate by one digit [34, 47]. Therefore, to have a highly accurate final solution, one has to choose a very large .
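The O(N^{-1/2}) behavior mentioned above can be illustrated with a small Monte Carlo experiment; the toy distribution and the number of repetitions below are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 0.0   # xi ~ N(0, 1), so E[xi] = 0 (an assumed toy distribution)

for N in (10**2, 10**4, 10**6):
    reps = 100
    # Average absolute error of the sample mean over repeated experiments.
    errors = [abs(rng.normal(true_mean, 1.0, N).mean() - true_mean) for _ in range(reps)]
    print(f"N = {N:>7}:  mean |error| ~ {np.mean(errors):.5f}")
# The observed error shrinks by roughly a factor of 10 each time N grows by a
# factor of 100, i.e., one extra digit of accuracy per hundredfold increase in N.
```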
As stated in Section 1, a differentiable homotopy method provides a globally convergent solution approach to a deterministic SE. Such a method starts from the unique solution of a trivial problem, follows a smooth path in the solution set of a homotopy system and ends at a solution to the targeted problem. One straightforward approach to solving the SAA (4) is to apply a standard differentiable homotopy method and obtain a homotopy system,
where is any given point and is the homotopy parameter ranging between zero and one. Clearly, the sample size remains to be in the above system for any value of , and the approach offers no improvement over the classic SAA paradigm. Therefore, one would like to combine a differentiable homotopy method with a gradually reinforced SAA scheme so that the sample size can be increased progressively as descends from one to zero. For example, we can divide the interval into subintervals, , , . Let be an integer smaller than . At , only the first samples are used, and we are interested in the problem . As decreases from to , enters the picture, and the homotopy system varies from to . In this way, we do not need to use a large number of samples for every value of .
However, there is a natural conflict between the SAA and the homotopy process. The homotopy process is a continuous deformation process, while the sample set for the SAA is discrete. As the homotopy parameter varies, the samples enter the homotopy system one by one, or more generally group by group. In the above example, it is natural to construct a series of homotopies for in different subintervals , , and switch homotopies at each connection point , which leads to a piecewise smooth path. The idea of switching homotopies was used in the literature for computing market equilibria in incomplete markets [3]. Since is very large, switching homotopies will frequently occur, which makes the computation very costly. In the next subsection, we show that this difficulty can be overcome with a sequence of continuously differentiable functions of .
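The "group by group" entry of samples can be sketched as a simple schedule that maps the homotopy parameter to the number of active samples. The equal-width subintervals and equal-size groups below are illustrative assumptions rather than the paper's exact choice; the point is only that the count jumps at each connection point, which is what the smooth construction of the next subsection avoids.

```python
def active_sample_count(t, N, m):
    """Illustrative sample-size schedule: the unit interval is split into m
    equal subintervals, and all N samples are split into m equal groups.
    As t descends from 1 toward 0, one more group of samples becomes active
    each time t crosses a connection point t_k = k / m."""
    if t >= 1.0:
        return N // m                 # only the first group at t = 1
    k = int(t * m)                    # index of the subinterval containing t
    groups_active = m - k             # groups that have already entered
    return min(N, groups_active * (N // m))

N, m = 1000, 10
for t in (1.0, 0.95, 0.75, 0.45, 0.05, 0.0):
    print(f"t = {t:4.2f} -> {active_sample_count(t, N, m)} samples active")
```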
2.2 A Gradually Reinforced SAA Differentiable Homotopy System
In this subsection, we incorporate a gradually reinforced SAA scheme into a differentiable homotopy method and develop a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method. In this paper, the homotopy parameter will descend from one to zero. With the proposed method, the sample size in the homotopy system grows as the homotopy parameter decreases and becomes identical to that of the desired SAA as . The formulation of the GRSAA differentiable homotopy system in our method consists of four steps.
• Construction of a sequence of sample-average mappings.
Let be a positive integer with and . We partition the set of samples into subsets. For , let be the number of samples in the first subsets. Clearly, . For each , we define
Let for all . Then , , form a sequence of sample-average mappings.
• Formulation of a continuous mapping deforming from to for each .
Let , , be a sequence with . Let be a mapping on that deforms from to as decreases from to . It is convenient to define
where is a continuous function with and .
• Composition of a continuously differentiable mapping deforming from to .
It is straightforward to define a mapping as when , . Clearly, is a continuous mapping on , which deforms from to as descends from to . To ensure continuous differentiability of on , the function should be continuously differentiable on with and . One potential candidate of is given by
(5)
In our development, we will adopt the sequence of functions for all .
• Establishment of a GRSAA differentiable homotopy system.
In order to establish a GRSAA differentiable homotopy system with a unique known solution on the level of , we incorporate with the homotopy parameter an affinely linear function into the homotopy mapping and arrive at a homotopy system,
(6)
where is an arbitrarily given point. Clearly, at , the homotopy system (6) has a unique solution . As decreases, more and more samples enter . As goes to , the system (6) approaches the desired SAA (4). A small numerical sketch of this construction is given below.
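The following sketch puts the four steps above together for a scalar toy problem. Since the candidate function in (5) is not reproduced here, a cubic smoothstep satisfying the stated boundary conditions is used as an illustrative stand-in, and the equal partition of the samples and of the interval [0, 1] is likewise an assumption; the full homotopy (6) would further combine this mapping with the affinely linear term so that the given starting point is the unique solution at the initial level.

```python
import numpy as np

def smoothstep(s):
    # A C^1 bridging function with phi(0)=0, phi(1)=1 and phi'(0)=phi'(1)=0
    # (an illustrative stand-in for the candidate in (5)).
    s = np.clip(s, 0.0, 1.0)
    return 3.0 * s**2 - 2.0 * s**3

def grsaa_mapping(x, t, samples, m, f):
    """Gradually reinforced sample-average mapping G(x, t) (a sketch).
    The samples are split into m equal groups and F_k denotes the SAA over
    the first k groups, so F_m uses all samples.  The breakpoints t_1 = 1 >
    ... > t_m = 0 partition [0, 1]; on [t_{k+1}, t_k] the mapping deforms
    smoothly from F_k to F_{k+1}, so the sample size grows as t descends."""
    N = len(samples)
    group = N // m
    t_pts = np.linspace(1.0, 0.0, m)                  # t_1, ..., t_m

    def F(k, x):                                      # SAA over the first k groups
        return np.mean(f(x, samples[: k * group]))

    if t >= 1.0:
        return F(1, x)
    k = min(int((1.0 - t) * (m - 1)) + 1, m - 1)      # subinterval [t_{k+1}, t_k]
    hi, lo = t_pts[k - 1], t_pts[k]
    w = smoothstep((hi - t) / (hi - lo))
    return (1.0 - w) * F(k, x) + w * F(k + 1, x)

# Toy illustration: f(x, xi) = x - xi with xi ~ N(1, 1); the SSE solution is x = 1.
rng = np.random.default_rng(2)
samples = rng.normal(1.0, 1.0, 1000)
f = lambda x, xi: x - xi                              # vectorized over xi
for t in (1.0, 0.6, 0.3, 0.0):
    print(f"t = {t:3.1f}:  G(0, t) = {grsaa_mapping(0.0, t, samples, 10, f):+.4f}")
```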
Lemma 1.
is continuously differentiable on .
Proof.
Clearly, is continuously differentiable on . Next, we show that is continuously differentiable at the connection points , . For each , we have , and
The derivatives at the connection points are given by
Then, for each ,
and
Thus, is continuously differentiable at each connection point. This completes the proof of the lemma. ∎
From Lemma 1, we attain the following corollary immediately.
Corollary 1.
is a continuously differentiable mapping from to .
Corollary 1 shows that the homotopy system (6) is a continuously differentiable system, which successfully settles the conflict between the discreteness of the sample set and the continuity requirement of homotopy deformation. With , we will be able to establish the existence of a smooth path, and switching homotopies is no longer needed. Figure 1 illustrates how the GRSAA differentiable homotopy method with works. In Figure 1, for , the points on the path are solutions to the homotopy system (6). As a result of the continuity of , every limiting point of the path, , satisfies . We remark that a replacement of in the above development yields a GRSAA differentiable homotopy method converging to an accurate solution to (2) with probability one.
[Figure 1: An illustration of the GRSAA differentiable homotopy path.]
3 Development of a Smooth Path
3.1 Convergence Properties
We prove in this subsection that the GRSAA differentiable homotopy method with is globally convergent under some mild conditions. Let us denote by the interior of the set . To ensure the existence of a smooth path that starts from and ends at a solution on the level of , the path is required to be trapped in a non-empty compact set. To meet this requirement, we make the following assumption about .
Assumption 1.
There exist compact convex sets with nonempty interior such that, for any realization of , for all and .
Assumption 1 gives a global convergence condition for fixed point problems and holds for various problem classes, including VIs, MCPs, nonlinear programming problems (NLPs) and minimax problems (MMPs), under some mild conditions; see [2, 38, 48]. Under this assumption, the solutions to the original problem, for any realization of , are restricted to a compact convex set. Let be an arbitrary interior point of . Then, for any , and
(7) |
We denote the set of solutions to the system (6) by
Under Assumption 1, we prove in the following that contains a connected component intersecting both and . For any given , we construct an unconstrained strictly convex optimization problem,
(8) |
An application of the sufficient and necessary optimality condition to the problem (8) yields the system,
(9) |
Let denote the unique solution to the problem (8). Clearly, is a non-empty compact convex set. It follows from the system (9) that . Thus, is a continuous mapping on . Furthermore, we derive from (7) that for any given ,
(10) |
These results together lead us to the following theorem.
Theorem 1.
Suppose that Assumption 1 holds. Then, for any given , has a fixed point in .
Proof.
Let be any given value in . For any given , we denote by the unique solution to the strictly convex quadratic optimization problem,
Clearly, is a continuous mapping from to . Since is a convex compact set, the well-known Brouwer’s fixed point theorem asserts that has a fixed point in . Let be a fixed point of . Then is the fixed point of . Next we prove this assertion by contradiction.
Suppose that . Then we must have and . Let for . since is convex. It follows that
Recall that Thus, when is sufficiently small, one must have
Therefore, . Hence,
A contradiction occurs. This completes the proof. ∎
Theorem 1 says that for any given , there exists an such that . This conclusion shows that the homotopy system (6) is precisely the same as the system (9) and the set of solutions to the homotopy system can be rewritten as
For our further development, we need the following fixed point theorem.
Theorem 2 (Browder’s Fixed Point Theorem).
Let be a non-empty, compact and convex subset of and let be a continuous correspondence. Then, the set contains a connected subset such that and .
As a corollary of Theorem 2, we come to the following result.
Corollary 2.
contains a connected subset such that and .
Corollary 2 assures the global convergence of the GRSAA differentiable homotopy method.
3.2 A GRSAA Differentiable Homotopy Path
To develop an efficient GRSAA differentiable homotopy method for computing a solution to the SAA (4), we need to construct an everywhere smooth path that starts from and ends at a solution to the SAA (4) on the level of . A common sufficient condition for the existence of such a smooth path is that zero is a regular value of a homotopy system. The next lemma shows that zero is a regular value of at the starting point .
Lemma 2.
At , zero is a regular value of on .
Proof.
Taking derivatives of with respect to , we obtain that the Jacobian matrix of is an identity matrix. Thus zero is a regular value of . This completes the proof. ∎
We prove in the following theorem that, under the condition that zero is a regular value of , the set contains an everywhere smooth path leading to a solution to (4) as goes to zero.
Theorem 3.
Suppose that zero is a regular value of on . Then there exists an everywhere smooth path in , which starts from the unique solution on the level of and ends at a solution to (4) on the level of .
Proof.
Corollary 2 tells us that has a connected component that intersects both and . Since the system (6) has a unique solution at , the connected component in is unique. The condition of Theorem 3 together with the well-known implicit function theorem ensures that the connected component forms a smooth path, which starts from and ends at a solution to (4) on the level of . ∎
This theorem relies on the condition that zero is a regular value of on . To get rid of this condition, a general approach is to subtract from a perturbation term of , where and is sufficiently small. Subtracting from the perturbation term, we get . For any fixed , let and . As and , the perturbation term disappears and . Clearly, as goes to zero, the distance between and approaches zero. Therefore, when is sufficiently small, also contains a connected component intersecting both and . These results together lead to the following theorem.
Theorem 4.
For generic with sufficiently small , contains a unique smooth path starting from .
Proof.
As a mapping of on , the Jacobian matrix of is an matrix given by , where is an identity matrix. Therefore, the Jacobian matrix of has full row rank on . This together with Lemma 2 asserts that zero is a regular value of on . We know from the well-known transversality theorem [15] that, for almost all , zero is also a regular value of . Therefore, when is sufficiently small, we derive from the well-known implicit function theorem that the connected component in intersecting forms a unique smooth path starting from . This completes the proof. ∎
4 A Gradually Reinforced SAA Simplicial Homotopy Method
To further evince the advantage of the GRSAA differentiable homotopy method, we describe in this section a gradually reinforced SAA (GRSAA) simplicial homotopy method. As an underlying triangulation of the method, we make use of the -triangulation of with continuous refinement of grid size, whose definition and pivot rules are given in [10]. Given , let , . Then, , , form a descending sequence with and . Given , we define , . Therefore, as , . Given the initial grid size and , let , , which is the grid size of a simplex of the -triangulation in . Let be a given point. We still utilize the continuously differentiable function defined in Section 2 to form a simplicial homotopy system,
(11) |
where for . Clearly, when , the system (11) has a unique solution . In what follows, we exploit the homotopy mapping to obtain a GRSAA simplicial homotopy method.
Let with . Let be the collection of all simplices of the -triangulation of after the translation of , where the translation ensures that is in the interior of a unique simplex in . We denote a -dimensional simplex in by , where is a vertex of for .
Definition 1.
is a complete simplex if the system,
(12) |
has a solution.
Assumption 2 (Nondegeneracy Assumption).
Following a similar argument to that in [8], we know that there is a unique complete simplex contained in . Given these notations, a simplicial homotopy method for computing an approximate solution to the SSE can be stated as follows.
- Step 0: Let be a given sufficiently small positive number. Let be the unique complete simplex in with and be the unique -dimensional simplex in with as its facet. Let be the vertex of opposite to . Let and go to Step 1.
- Step 1: Perform a linear programming step to bring into the system. Suppose that leaves the system. Let be the facet of opposite to and . Go to Step 2.
- Step 2: If and , the method terminates. Otherwise, let be the unique simplex that shares together with a common facet . Let be the vertex of opposite to and . Go to Step 1.
In what follows, we show that the simplicial path generated by the GRSAA simplicial homotopy method leads to a solution to the SSE (2). Let . Then it follows from Assumption 1 that is a compact set. As a result of Definition 1 and the continuity of , it is easy to verify that all the complete simplices are contained in a bounded set of . Under the nondegeneracy assumption, following the same argument as that in Todd [48], one can derive that all the simplices generated by the GRSAA simplicial homotopy method are different, that is, no cycle will occur. Let , , be the complete simplices generated by the method. Let be the smallest value in such that . Since all , , are contained in a bounded set, we must have as .
Let be an -dimensional simplex in . For with and , a piecewise linear approximation of is given by
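As a concrete illustration, the sketch below evaluates a piecewise linear approximation on a single simplex using the standard barycentric form, in which the value at a point with barycentric coordinates λ is the λ-weighted combination of the vertex values; the toy mapping and simplex are assumptions made only for this example.

```python
import numpy as np

def pl_approximation(H, vertices, x):
    """Piecewise linear approximation of a mapping H on one simplex (a sketch).
    vertices is an (n+1) x n array whose rows y^1, ..., y^{n+1} span an
    n-simplex; x is written in barycentric coordinates x = sum_i lam_i y^i
    with lam_i >= 0 and sum_i lam_i = 1, and the PL value is sum_i lam_i H(y^i)."""
    vertices = np.asarray(vertices, dtype=float)
    n = vertices.shape[1]
    # Solve for barycentric coordinates: [y^1 ... y^{n+1}; 1 ... 1] lam = [x; 1].
    A = np.vstack([vertices.T, np.ones(n + 1)])
    lam = np.linalg.solve(A, np.append(np.asarray(x, float), 1.0))
    return sum(l * H(v) for l, v in zip(lam, vertices))

# Toy check on the unit simplex in R^2 with H(y) = y - [0.2, 0.3].
verts = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
H = lambda y: np.asarray(y) - np.array([0.2, 0.3])
print(pl_approximation(H, verts, [0.2, 0.3]))   # ~ [0, 0]: this x is a zero of the PL map
```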
Lemma 3.
For , let be a complete simplex in with . Let be the corresponding unique solution of the system (12). Then, is a zero point of .
Proof.
It follows from the system (12) that . Thus, . Therefore, is a zero point of . ∎
Since is uniformly continuous on , there is a constant such that for any and in .
Lemma 4.
For , let be a zero point of . Then, , where is the grid size of the triangulation restricted on .
Proof.
Assume that . Then,
The proof is completed. ∎
This lemma implies that every limit point of the simplicial path yields a solution to the SSE (2) with probability one as goes to zero.
5 Numerical Performance
We apply in this section the GRSAA differentiable homotopy method to solve several applications of the SSE (2), which include a stochastic market equilibrium problem and a stochastic variational inequality problem. To numerically trace the smooth path, we employ a standard predictor-corrector method (interested readers can refer to [1, 15] for more details about the predictor-corrector method). Moreover, we have made numerical comparisons of the GRSAA differentiable homotopy method with the GRSAA simplicial homotopy method and a standard differentiable homotopy method. All these methods are coded in MATLAB (R2019a). The computation has been carried out on a 2.00 GHz Windows PC with a Core i7 processor. The numerical results further confirm the effectiveness and efficiency of the GRSAA differentiable homotopy method.
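For readers unfamiliar with predictor-corrector path following, the sketch below traces a toy homotopy H(x, t) = (1 − t)F(x) + t(x − x0) from t = 1 to t = 0 with an Euler predictor and a Newton corrector. It steps directly in t on a fixed grid, whereas the implementations described in [1, 15] parameterize the path by arc length and adapt the step size; the toy mapping F and all parameter values are assumptions for illustration only.

```python
import numpy as np

def trace_path(H, dH, y0, t_steps):
    """Euler predictor / Newton corrector along the path H(x, t) = 0 (a sketch).
    H maps (x, t) to R^n, dH returns (dH/dx, dH/dt); the path is followed on a
    prescribed decreasing grid of t values, correcting x at each step."""
    x, path = np.array(y0, float), []
    for t_prev, t in zip(t_steps[:-1], t_steps[1:]):
        Jx, Jt = dH(x, t_prev)
        x = x + np.linalg.solve(Jx, -Jt) * (t - t_prev)      # Euler predictor
        for _ in range(10):                                   # Newton corrector at fixed t
            Jx, _ = dH(x, t)
            step = np.linalg.solve(Jx, -H(x, t))
            x = x + step
            if np.linalg.norm(step) < 1e-12:
                break
        path.append((t, x.copy()))
    return path

# Toy homotopy: F(x) = x^3 - 2 (componentwise), x0 = 0, H = (1-t) F(x) + t (x - x0).
x0 = np.zeros(2)
F  = lambda x: x**3 - 2.0
dF = lambda x: np.diag(3.0 * x**2)
H  = lambda x, t: (1 - t) * F(x) + t * (x - x0)
dH = lambda x, t: ((1 - t) * dF(x) + t * np.eye(2), -F(x) + (x - x0))
path = trace_path(H, dH, x0, np.linspace(1.0, 0.0, 50))
print("endpoint:", path[-1][1])   # ~ [2**(1/3), 2**(1/3)] = [1.2599, 1.2599]
```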
5.1 A Stochastic Market Equilibrium Problem
This subsection is concerned with a stochastic market equilibrium problem. Suppose that there are three goods and two firms in an economy. The consumers in the economy can be represented by one agent, who has a constant elasticity of substitution (CES) utility function with a stochastic substitution parameter, , where , and are the amounts of the three goods being consumed. The initial endowment is given by . In the economy, the agent wants to maximize her utility by determining an optimal consumption plan. A direct application of the sufficient and necessary optimality conditions to the convex optimization problem for the agent yields the excess demand of the agent at the market price ,
where , , and with , , and . The production technology of firms are described by the following matrix,
Since no firm makes a positive profit in an equilibrium, we have the constraint . Assume that (this constraint ensures that the feasible set is a compact polytope). Then the feasible set reads as , where . We say an equilibrium is reached if and only if there exist vectors and together with satisfying , , and , where and [49, 52].
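To make the setting concrete, the sketch below evaluates a sample-average approximation of the expected excess demand of a CES consumer whose substitution parameter is stochastic. The utility weights, endowment, prices and the distribution of the substitution parameter below are purely illustrative assumptions and are not the paper's data; the demand formula is the standard Marshallian demand of a CES maximizer.

```python
import numpy as np

def ces_excess_demand(p, rho, a, e):
    """Excess demand of a CES consumer at prices p (a sketch).
    Utility sum_i a_i x_i^rho, budget p.x = p.e; the Marshallian demand is
    x_i = (a_i/p_i)^(1/(1-rho)) * (p.e) / sum_j p_j (a_j/p_j)^(1/(1-rho))."""
    sigma = 1.0 / (1.0 - rho)
    weights = (a / p) ** sigma
    demand = weights * np.dot(p, e) / np.dot(p, weights)
    return demand - e

def saa_excess_demand(p, rho_samples, a, e):
    # SAA of the expected excess demand when the substitution parameter is stochastic.
    return np.mean([ces_excess_demand(p, rho, a, e) for rho in rho_samples], axis=0)

# Illustrative data (not the paper's): three goods, a = (0.4, 0.3, 0.3),
# endowment e = (1, 1, 1), rho sampled uniformly from [0.2, 0.6].
rng = np.random.default_rng(4)
a, e = np.array([0.4, 0.3, 0.3]), np.ones(3)
rho_samples = rng.uniform(0.2, 0.6, 2000)
p = np.array([0.5, 0.25, 0.25])
z = saa_excess_demand(p, rho_samples, a, e)
print("SAA excess demand:", z, "  Walras' law check p.z ~ 0:", np.dot(p, z))
```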
The corresponding GRSAA differentiable homotopy method for this problem is as follows. We set the sample size . After randomly generating a batch of samples of the stochastic variable from the uniform distribution on , one can approximate the expected value by an SAA scheme, . Let , and , , be randomly generated from with . We formulate the following unconstrained convex optimization problem,
(13) |
where , is any given interior point in and for with as defined in Section 2. An application of the optimality conditions to the problem (13) together with a fixed point argument leads to the following GRSAA differentiable homotopy system,
(14) |
where and with being a new variable (a well-chosen transformation of variables can significantly reduce the number of variables and constraints so that numerical efficiency can be greatly improved; this technique has been frequently used in the literature such as [8, 25]). Therefore, there exists a smooth path contained in the set of solutions to the system (14), which starts from a unique solution on the level of and ends at an approximate equilibrium for the original market equilibrium problem on the level of . By applying a predictor-corrector method to trace the smooth path, we eventually reach a solution in 12 iterations and 0.6096 seconds. Figure 2 shows the distances from the current point to the desired solution of the original problem at each iteration for the GRSAA differentiable homotopy method. The changes of the different variables over the iterations are illustrated in Figure 3.
[Figure 2: Distances from the current point to the desired solution at each iteration.]
[Figure 3: Changes of the variables over the iterations.]
5.2 A Stochastic System of Equations
A simplicial homotopy method was proposed in [9] to compute a solution to a deterministic system of equations, , . This subsection focuses on a stochastic version of this problem, , where the mapping is given by
(15) |
We employ the GRSAA differentiable homotopy method to solve the SSE (15) under different . For numerical comparisons, we also apply the GRSAA simplicial homotopy method to solve the same problems. In the implementation of the GRSAA simplicial homotopy method, we set the initial grid size of the -triangulation , the factor to refine the grid size for all , , , for . Additionally, we choose the sample size for . Clearly, and as . Moreover, to make the comparisons more convincing, , , for the GRSAA differentiable homotopy method are consistent with those for the GRSAA simplicial homotopy method, and both methods start from the same randomly generated starting point. Each case with different is tested 20 times and the average computational costs are reported in Table 1, where “ITER” is the average number of iterations, “TIME” is the average computational time (in seconds), “GRSAA-DH” and “GRSAA-SH” represent respectively the GRSAA differentiable homotopy method and the GRSAA simplicial homotopy method, and “RATIO” stands for the ratio of the numerical results of GRSAA-DH to GRSAA-SH (if the computational time exceeds 1000 and the number of iterations is larger than , we record the value as “INF”).
 | GRSAA-DH ITER | GRSAA-DH TIME | GRSAA-SH ITER | GRSAA-SH TIME | RATIO ITER (%) | RATIO TIME (%) |
---|---|---|---|---|---|---|
3 | 78 | 2.72 | 1181 | 2.58 | 6.60 | 105.42 |
4 | 96 | 3.76 | 3707 | 11.30 | 2.59 | 33.27 |
5 | 98 | 4.71 | 5738 | 12.88 | 1.71 | 36.56 |
6 | 105 | 5.49 | 14797 | 32.65 | 0.71 | 16.81 |
7 | 108 | 6.54 | 34011 | 59.83 | 0.32 | 10.93 |
8 | 123 | 8.53 | 78253 | 99.11 | 0.16 | 8.61 |
9 | 129 | 9.45 | 202596 | 198.30 | 0.06 | 4.76 |
10 | 272 | 16.33 | INF | INF | - | - |
From the columns of Table 1, we find that both the average number of iterations (ITER) and the average computational time (TIME) grow as increases for the two methods. This result coincides with our intuition: the larger the problem, the harder it is to solve. From the rows of Table 1, we observe that the GRSAA differentiable homotopy method significantly outperforms the GRSAA simplicial homotopy method in both the number of iterations and the computational time when . The advantage of the GRSAA-DH method over the GRSAA-SH method becomes more remarkable as the number of variables grows, which implies that the GRSAA-DH method is relatively less sensitive to .
Figure 4 presents the changes of the maximal, average and minimal computational time in among the 20 tests for the two methods. Figure 5 illustrates the changes of the maximal, average and minimal number of iterations in for the two methods.
[Figure 4: Maximal, average and minimal computational time among the 20 tests for the two methods.]
[Figure 5: Maximal, average and minimal numbers of iterations among the 20 tests for the two methods.]
Regarding the GRSAA-DH method, one can observe from Figure 4 and Figure 5 that as , the difference between the maximal computational time and the minimal computational time is fairly small. As , there are some extreme values that enlarge the gap between the maximal value and the minimal value, but the number of such extreme cases is quite small, since the average computational time is just about 12 seconds, which is near the minimal computational time of 10 seconds. This indicates that the computational time for most tests is between 10 and 12 seconds. However, for the GRSAA-SH method, when , one can easily see that the gaps among the maximal, average and minimal computational time are much larger. Similar phenomena can be found for the number of iterations. These numerical results show that the GRSAA-DH method performs much more stably than the GRSAA-SH method.
To further demonstrate the advantage of the GRSAA-DH method, we have used the method to solve several large-scale SSEs, for which the GRSAA-SH method fails to find a reasonable approximate solution within 5000 seconds of computational time. We have run the GRSAA-DH method on each test 10 times. The numerical results are reported in Table 2 and Figure 6, which once again certify that the GRSAA differentiable homotopy method is able to effectively and efficiently solve larger-scale problems.
 | MAX | MIN | AVER |
---|---|---|---|
15 | 55.99 | 19.65 | 36.389 |
16 | 476.85 | 52.27 | 219.771 |
17 | 774.83 | 102.06 | 416.047 |
18 | 1367.73 | 110.96 | 559.361 |
19 | 1786.92 | 244.85 | 729.847 |
20 | 1247.82 | 464.09 | 821.583 |
21 | 1571.14 | 325.18 | 985.259 |
22 | 1811.67 | 380.60 | 1230.041 |
23 | 2133.52 | 597.04 | 1317.084 |
24 | 2214.6 | 911.96 | 1396.252 |
25 | 2826.06 | 1214.31 | 1479.845 |
[Figure 6: Numerical results of the GRSAA-DH method for the large-scale SSEs.]
5.3 A Stochastic Variational Inequality
This subsection intends to solve a stochastic variational inequality (SVI) problem,
(16) |
where is a continuously differentiable mapping with for , and . To get a GRSAA differentiable homotopy system for the SVI problem, we constitute the optimization problem,
where and with being an identity matrix and . Applying the optimality conditions to this optimization problem, we derive with a fixed point argument an equivalent problem to the SVI problem (16),
(17) |
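A common way to make such an equivalent fixed-point form concrete is the natural-map (projection) residual of the variational inequality; the sketch below combines it with an SAA estimate of the expected mapping for a box-constrained toy problem. The box constraint, the toy integrand and this specific residual are illustrative assumptions and need not coincide with the paper's system (17).

```python
import numpy as np

def proj_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi]^n.
    return np.clip(x, lo, hi)

def saa_vi_residual(x, samples, f, lo=0.0, hi=1.0):
    """Natural-map residual of a box-constrained VI with an SAA mapping.
    A point x solves VI(X, F) with X = [lo, hi]^n if and only if
    x = Proj_X(x - F(x)); here F is replaced by its sample average."""
    Fx = np.mean([f(x, xi) for xi in samples], axis=0)
    return x - proj_box(x - Fx, lo, hi)

# Toy SVI on [0, 1]^2 with f(x, xi) = x - xi and xi uniform on [0, 1]^2,
# so F(x) = x - 0.5 and the solution is x = (0.5, 0.5).
rng = np.random.default_rng(3)
samples = rng.uniform(0.0, 1.0, size=(5000, 2))
f = lambda x, xi: x - xi
print(saa_vi_residual(np.array([0.5, 0.5]), samples, f))   # ~ [0, 0] up to sampling error
```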
We have generated numerous samples for the stochastic variable and applied the SAA, , to estimate the expected value . The corresponding GRSAA differentiable homotopy system to the SSE (17) is as follows:
(18) |
where is an interior point in the compact convex set and is defined as in Section 2. After a transformation of variables, that is, and for , the third group of equations of the system (18) vanishes and the system (18) becomes an equivalent form,
(19) |
Clearly, the efficiency of the proposed method depends on the number of variables , the sample size , and the number of divisions . Next, we study the impact of these factors on the performance of the GRSAA differentiable homotopy method.
• First, the GRSAA-DH method has been used to solve the system (19) under different pairs of , where and . The number of divisions for the method is fixed to be . To make a numerical comparison, we have also employed a standard differentiable homotopy method, which can be considered as a special case of the GRSAA-DH method without a gradual reinforcement process, that is, , to solve the same problems. Each case has been run 10 times and the average computational time is reported in Table 3.

Table 3: Numerical Results for the Two Methods

| | GRSAA-DH | Standard Homotopy |
|---|---|---|
| 1 | 1.03 | 1.18 |
| 2 | 3.94 | 5.64 |
| 3 | 6.57 | 10.98 |
| 1 | 8.61 | 10.95 |
| 2 | 35.61 | 49.32 |
| 3 | 61.91 | 100.35 |
| 1 | 80.20 | 95.55 |
| 2 | 358.47 | 474.51 |
| 3 | 619.35 | 1052.27 |

One can see from Table 3 that the computational costs of the two methods grow as the sample size and the number of variables increase. For each fixed , the computational time of the GRSAA-DH method increases approximately linearly with . For example, when and changes from to , the computational time of the GRSAA-DH method increases from 3.94 to 35.61. Comparing the last two columns of Table 3, one can observe that the GRSAA-DH method is more competitive than the standard differentiable homotopy method, which verifies that the gradual reinforcement process indeed makes a significant difference.
Table 3: Numerical Results for the Two Methods GRSAA-DH Stardard Homotopy 1 1.03 1.18 2 3.94 5.64 3 6.57 10.98 1 8.61 10.95 2 35.61 49.32 3 61.91 100.35 1 80.20 95.55 2 358.47 474.51 3 619.35 1052.27 One can see from Table 3 that the computational costs of the two methods become larger and larger with the growing of sample size and the number of variables. For each fixed , the computational time of the GRSAA-DH method increases approximately linearly with . For example, when and changes from to , the computational time of the GRSAA-DH method increases from 3.94 to 35.61. Comparing the last two columns of Table 3, one can observe that the GRSAA-DH method is more competitive than the standard differentiable homotopy method, which verifies that the gradual reinforcement process indeed makes a significant difference.
• Second, we want to explore how the computational time is influenced by the number of divisions and determine an appropriate value of for the GRSAA differentiable homotopy method. In theory, can be chosen as any value that is not larger than the sample size . Nonetheless, an overly large or overly small value of may lead to low numerical efficiency, which has been partially evidenced in Table 3, since the standard differentiable homotopy method is the same as the GRSAA-DH method with . In order to achieve better efficiency, it is necessary to find a suitable value of when implementing the GRSAA-DH method. However, finding an optimal value of is an NP-hard problem in theory and can only be approached through numerical experiments. We have implemented the GRSAA-DH method with different choices of to solve the system (19) and plotted the changes of the computational time for various values of in Figures 7, 8 and 9. In these experiments, and .
[Figures 7, 8 and 9: Computational time of the GRSAA-DH method under different choices of the number of divisions.]
From Figures 7, 8 and 9, we find that the computational time exhibits a highly similar trend across the different cases, especially when the number of variables is relatively high and the batch of samples is large. More specifically, the computational time always decreases as and reaches a local minimum at about . Notice that this local minimum also delivers a sufficiently competitive performance over the entire range of possible choices of . Hence, when applying the GRSAA-DH method, one can roughly determine the “best” number of divisions according to , which surely performs much better than the standard differentiable homotopy method.
6 Conclusions
This paper has exploited the sample-average-approximation (SAA) scheme to approximate a stochastic system of equations (SSE) and developed a gradually reinforced sample-average-approximation (GRSAA) differentiable homotopy method to solve the SSE with very large sample size. By introducing a sequence of continuously differentiable functions of the homotopy parameter ranging between zero and one, we have established a continuously differentiable homotopy system, which is able to gradually increase the sample size as the homotopy parameter decreases. The solution set of the homotopy system contains an everywhere smooth path, which starts from an arbitrarily given point and ends at a satisfactory approximate solution to the original SSE.
The GRSAA differentiable homotopy method provides an effective link between a standard differentiable homotopy method and a gradually reinforced SAA scheme. The proposed method is able to greatly reduce the computational cost for finding a solution to the SSE with a large sample size and retains the inherent property of global convergence of a standard homotopy method. To make numerical comparisons, we have carried out extensive numerical tests. The numerical results further confirm that the two main features of the GRSAA differentiable homotopy method, gradual reinforcement in sample size and differentiability, can significantly enhance numerical effectiveness and efficiency.
References
- [1] Allgower, E.L., Georg, K.: Numerical continuation methods: an introduction, vol. 13. Springer Science & Business Media (2012)
- [2] Awoniyi, S.A., Todd, M.J.: An efficient simplicial algorithm for computing a zero of a convex union of smooth functions. Mathematical Programming 25(1), 83–108 (1983)
- [3] Brown, D.J., Demarzo, P.M., Eaves, B.C.: Computing equilibria when asset markets are incomplete. Econometrica 64(1), 1–27 (1996)
- [4] Byrd, R.H., Chin, G.M., Nocedal, J., Wu, Y.: Sample size selection in optimization methods for machine learning. Mathematical Programming 134(1), 127–155 (2012)
- [5] Chen, L., Liu, Y., Yang, X., Zhang, J.: Stochastic approximation methods for the two-stage stochastic linear complementarity problem. SIAM Journal on Optimization 32(3), 2129–2155 (2022)
- [6] Chen, X., Shen, J.: Dynamic stochastic variational inequalities and convergence of discrete approximation. SIAM Journal on Optimization 32(4), 2909–2937 (2022)
- [7] Chen, X., Wets, R.J.B., Zhang, Y.: Stochastic variational inequalities: residual minimization smoothing sample average approximations. SIAM Journal on Optimization 22(2), 649–673 (2012)
- [8] Chen, Y., Dang, C.: A differentiable homotopy method to compute perfect equilibria. Mathematical Programming pp. 1–33 (2019)
- [9] Dang, C.: The -triangulation of R^n for simplicial algorithms for computing solutions of nonlinear equations. Mathematics of Operations Research 16(1), 148–161 (1991)
- [10] Dang, C.: The -triangulation for simplicial homotopy algorithms for computing solutions of nonlinear equations. Mathematical Programming 59(1-3), 307–324 (1993)
- [11] Dang, C., Herings, P.J.J., Li, P.: An interior-point differentiable path-following method to compute stationary equilibria in stochastic games. INFORMS Journal on Computing 34(3), 1403–1418 (2022)
- [12] Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(7) (2011)
- [13] Eaves, B.C.: The linear complementarity problem. Management Science 17(9), 612–634 (1971)
- [14] Eaves, B.C.: Homotopies for computation of fixed points. Mathematical Programming 3(1), 1–22 (1972)
- [15] Eaves, B.C., Schmedders, K.: General equilibrium models and homotopy methods. Journal of Economic Dynamics and Control 23(9), 1249–1279 (1999)
- [16] Eibelshäuser, S., Klockmann, V., Poensgen, D., von Schenk, A.: The logarithmic stochastic tracing procedure: A homotopy method to compute stationary equilibria of stochastic games. INFORMS Journal on Computing 35(6), 1511–1526 (2023)
- [17] Facchinei, F., Pang, J.S.: Finite-dimensional variational inequalities and complementarity problems. Springer Science & Business Media (2007)
- [18] Ferris, M.C., Mangasarian, O.L., Pang, J.S.: Complementarity: Applications, algorithms and extensions, vol. 50. Springer Science & Business Media (2013)
- [19] Gürkan, G., Ozge, A.Y., Robinson, S.M.: Sample-path solution of stochastic variational inequalities. Mathematical Programming 84(2), 313–333 (1999)
- [20] Herings, P.J.J.: Two simple proofs of the feasibility of the linear tracing procedure. Economic Theory 15(2), 485–490 (2000)
- [21] Herings, P.J.J., Kubler, F.: Computing equilibria in finance economies. Mathematics of Operations Research 27(4), 637–646 (2002)
- [22] Herings, P.J.J., Peeters, R.: A differentiable homotopy to compute Nash equilibria of -person games. Economic Theory 18(1), 159–185 (2001)
- [23] Herings, P.J.J., Peeters, R.: Homotopy methods to compute equilibria in game theory. Economic Theory 42(1), 119–156 (2010)
- [24] Herings, P.J.J., Peeters, R.J.: Stationary equilibria in stochastic games: Structure, selection, and computation. Journal of Economic Theory 118, 32–60 (2004)
- [25] Herings, P.J.J., Schmedders, K.: Computing equilibria in finance economies with incomplete markets and transaction costs. Economic Theory 27(3), 493–512 (2006)
- [26] Hu, J., Homem-de Mello, T., Mehrotra, S.: Sample average approximation of stochastic dominance constrained programs. Mathematical Programming 133(1-2), 171–201 (2012)
- [27] Hu, Y., Chen, X., He, N.: Sample complexity of sample average approximation for conditional stochastic optimization. SIAM Journal on Optimization 30(3), 2103–2133 (2020)
- [28] Huang, Z.H., Li, Y.F., Miao, X.: Finding the least element of a nonnegative solution set of a class of polynomial inequalities. SIAM Journal on Matrix Analysis and Applications 44(2), 530–558 (2023)
- [29] Iusem, A.N., Jofré, A., Thompson, P.: Incremental constraint projection methods for monotone stochastic variational inequalities. Mathematics of Operations Research 44(1), 236–263 (2019)
- [30] Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Transactions on Automatic Control 53(6), 1462–1475 (2008)
- [31] Jiang, J., Sun, H., Zhou, B.: Convergence analysis of sample average approximation for a class of stochastic nonlinear complementarity problems: from two-stage to multistage. Numerical Algorithms 89(1), 167–194 (2022)
- [32] Judd, K.L., Renner, P., Schmedders, K.: Finding all pure-strategy equilibria in games with continuous strategies. Quantitative Economics 3(2), 289–331 (2012)
- [33] Kellogg, R.B., Li, T.Y., Yorke, J.: A constructive proof of the Brouwer fixed-point theorem and computational results. SIAM Journal on Numerical Analysis 13(4), 473–483 (1976)
- [34] Kleywegt, A.J., Shapiro, A., Homem-de Mello, T.: The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization 12(2), 479–502 (2002)
- [35] Kubler, F., Schmedders, K.: Computing equilibria in stochastic finance economies. Computational Economics 15(1-2), 145–172 (2000)
- [36] Lee, K., Tang, X.: On the polyhedral homotopy method for solving generalized Nash equilibrium problems of polynomials. Journal of Scientific Computing 95(1), 13 (2023)
- [37] Li, P., Dang, C.: An arbitrary starting tracing procedure for computing subgame perfect equilibria. Journal of Optimization Theory and Applications 186(2), 667–687 (2020)
- [38] Merrill, O.H.: Applications and extensions of an algorithm that computes fixed points of certain upper semi-continuous point to set mappings. Ph.D. thesis, University of Michigan (1972)
- [39] Na, S., Anitescu, M., Kolar, M.: Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming. Mathematical Programming pp. 1–75 (2023)
- [40] Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization 19(4), 1574–1609 (2009)
- [41] Pflug, G.C.: Optimization of Stochastic Models. Springer US (1996)
- [42] Rockafellar, R.T., Sun, J.: Solving monotone stochastic variational inequalities and complementarity problems by progressive hedging. Mathematical Programming 174(1-2), 453–471 (2019)
- [43] Rockafellar, R.T., Wets, R.J.B.: Variational analysis, vol. 317. Springer Science & Business Media (2009)
- [44] Scarf, H.: The approximation of fixed points of a continuous mapping. SIAM Journal on Applied Mathematics 15(5), 1328–1343 (1967)
- [45] Schmedders, K.: Computing equilibria in the general equilibrium model with incomplete asset markets. Journal of Economic Dynamics and Control 22(8-9), 1375–1401 (1998)
- [46] Shapiro, A., Homem-de Mello, T.: A simulation-based approach to two-stage stochastic programming with recourse. Mathematical Programming 81(3), 301–325 (1998)
- [47] Shapiro, A., Homem-de Mello, T.: On the rate of convergence of optimal solutions of Monte Carlo approximations of stochastic programs. SIAM Journal on Optimization 11(1), 70–86 (2000)
- [48] Todd, M.J.: The computation of fixed points and applications. Lecture Notes in Economics and Mathematical Systems, Springer, Berlin 124 (1976)
- [49] Van Den Elzen, A., Van Der Laan, G., Talman, D.: An adjustment process for an economy with linear production technologies. Mathematics of Operations Research 19(2), 341–351 (1994)
- [50] Wang, G., Wei, X., Yu, B., Xu, L.: An efficient proximal block coordinate homotopy method for large-scale sparse least squares problems. SIAM Journal on Scientific Computing 42(1), A395–A423 (2020)
- [51] Zhan, Y., Dang, C.: A smooth homotopy method for incomplete markets. Mathematical Programming 190(1-2), 585–613 (2021)
- [52] Zhan, Y., Li, P., Dang, C.: A differentiable path-following algorithm for computing perfect stationary points. Computational Optimization and Applications 76, 571–588 (2020)
- [53] Zhang, J., Xiao, Q., Li, L.: Solution space exploration of low-thrust minimum-time trajectory optimization by combining two homotopies. Automatica 148, 110798 (2023)
- [54] Zhou, Z., Yu, B.: A smoothing homotopy method for variational inequality problems on polyhedral convex sets. Journal of Global Optimization 58(1), 151–168 (2014)