Optimal control of a viscous two-field damage model with fatigue
Abstract
Motivated by fatigue damage models, this paper addresses optimal control problems governed by a non-smooth system featuring two non-differentiable mappings. The system consists of a coupling between a doubly non-smooth, history-dependent evolution and an elliptic PDE. After proving the directional differentiability of the associated solution mapping, an optimality system which is stronger than the one obtained by classical smoothening procedures is derived. If one of the non-differentiable mappings becomes smooth, the optimality conditions are of strong stationary type, i.e., equivalent to the primal necessary optimality condition.
keywords:
damage models with fatigue, non-smooth optimization, evolutionary VIs, optimal control of PDEs, history-dependence, strong stationarity
AMS:
34G25, 34K35, 49J20, 49J27, 74R99
1 Introduction
Fatigue is considered to be the main cause of mechanical failure [29, 35]. It describes the weakening of a material due to repeatedly applied loads (fluctuating stresses, strains, forces, environmental factors, temperature, etc.), which individually would be too small to cause its malfunction [1, 35]. Whether in association with environmental damage (corrosion fatigue) or elevated temperatures (creep fatigue), fatigue failure is often an unexpected phenomenon. Unfortunately, in real situations it is very difficult to identify the fatigue degradation state of a material, which sometimes results in devastating events. Therefore, it is extremely important to find methods which allow us to describe and control the behaviour of materials exposed to fatigue. While there are very few papers concerned with a rigorous mathematical examination of models describing fatigue damage, namely [1] (damage in elastic materials) and [11] (cohesive fracture), the literature regarding the optimal control of fatigue models is practically nonexistent. All existing results which include the terminology “optimal control” in the context of fatigue damage neither address theoretical aspects nor involve mathematical tools such as optimal control theory in Banach spaces as in the present work, but focus on the design of controllers and on simulations instead, see e.g. [28, 18] and the references therein.
In this paper we investigate the optimal control of the following viscous two-field gradient damage problem with fatigue:
(1.1)
a.e. in . To be more precise, we prove an optimality system that is far stronger than the one obtained by classical smoothening techniques.
The main novelty concerning (1.1) arises from its highly non-smooth structure, which is due to the non-differentiability of the dissipation in the evolution inclusion, in combination with an additional non-smooth fatigue degradation mapping which shall be introduced below. This excludes the application of standard adjoint techniques for the derivation of first-order necessary conditions in the form of optimality systems. Not only does the evolution in (1.1) have a highly non-smooth character, but, as we will see next, it is also history-dependent. The fact that the differential inclusion is coupled with a minimization problem (which can be reduced to an elliptic PDE) gives rise to additional challenges [5].
The problem describes the evolution of damage under the influence of a time-dependent load (control) acting on a body occupying the bounded Lipschitz domain , . The induced ’local’ and ’nonlocal’ damage are expressed in terms of the functions and , respectively (states).
In (1.1), the stored energy is given by
(1.2)
where is the gradient regularization and denotes the penalization parameter. Thus, the two damage variables are connected through the penalty term in the stored energy, so that our model becomes a penalized version of the viscous fatigue damage model addressed in [1] (two-dimensional case); note that, for simplicity, we do not take a displacement variable into account. The type of penalization used in (1.2) has already proven successful in the context of classical damage models (without fatigue). Firstly, it approximates the classical single-field damage model, in the sense that, when , the penalized damage model coincides with the model addressed in [16, 22], cf. [25]. Secondly, the penalization we use is frequently employed in computational mechanics due to the numerical benefits offered by the additional damage variable (see e.g. [13] and the references therein). For more details, we also refer to [24, Sec. 2.1-2.2].
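Since the expression in (1.2) is not reproduced above, the following schematic form of a penalized two-field stored energy may serve as orientation; all symbols (the damage variables d and φ, the gradient-regularization parameter α and the penalization parameter β) are generic placeholders and not necessarily the paper's notation.

```latex
% Schematic penalized two-field stored energy (placeholder notation, not necessarily (1.2)):
E(d,\varphi) \;=\; \frac{\alpha}{2}\int_\Omega |\nabla\varphi|^2\,dx
              \;+\; \frac{\beta}{2}\int_\Omega (d-\varphi)^2\,dx ,
\qquad \alpha,\beta>0 .
```

In such a sketch, letting the penalization parameter tend to infinity forces the two damage fields to coincide, consistent with the limit behaviour described above.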
The differential inclusion appearing in (1.1) describes the evolution of the damage variable under fatigue effects. Therein, is a so-called history operator that models how the damage experienced by the material affects its fatigue level. Thus, as opposed to other well-known damage models, cf. e.g. [16, 15, 22], the dissipation in (1.1) is affected by the history of the evolution, . The parameter stands for the viscosity parameter, while the symbol denotes the convex subdifferential of the functional in its second argument. Thus, the non-smooth differential inclusion is to be understood as follows:
The viscous dissipation is defined as
(1.3)
and features a second non-smooth component, namely the fatigue degradation mapping . It describes the extent to which fatigue affects the fracture toughness of the material. This mapping is non-increasing in applications, since the higher the cumulated damage , the lower the fracture toughness . Whereas the toughness of the material is usually described by a fixed (nonnegative) constant [16, 15], in the present model it changes at each point in time and space, depending on . To be more precise, the value of the fracture toughness of the body at is given by , cf. (1.3). Hence, the model (1.1) takes into account the following crucial aspect: the occurrence of damage is favoured in regions where fatigue accumulates.
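For illustration only, a minimal Python sketch of such a fatigue degradation function follows; the function name, parameters and values (initial toughness, kink point, slope, residual toughness) are hypothetical and not taken from the paper.

```python
# Illustrative (not from the paper): a typical fatigue degradation function f,
# constant at the initial fracture toughness up to a kink point and monotonically
# decreasing afterwards, bounded below by a residual toughness. All parameter
# names and values are placeholders.
def fatigue_degradation(gamma, kappa0=1.0, gamma_kink=0.5, slope=0.8, kappa_res=0.1):
    """Return the fracture toughness f(gamma) for a cumulated fatigue value gamma."""
    if gamma <= gamma_kink:          # no weakening before the kink point
        return kappa0
    # linear decrease after the kink, never below the residual toughness
    return max(kappa_res, kappa0 - slope * (gamma - gamma_kink))
```

The kink at gamma_kink is exactly the type of non-differentiability responsible for the highly non-smooth character discussed below.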
We underline that the dissipation accounts for the non-smooth nature of the evolution in the first place: even if is replaced by a (nonnegative) constant, the evolution in (1.1) still describes a non-smooth process. The optimal control thereof is far from standard and has recently been addressed in [5, Sec. 4], where strong stationarity for the damage model (1.1) without fatigue is proven. By contrast, in applications which take fatigue into consideration, is constant until its kink point is reached, after which it decreases monotonically [2, Sec. 2.6.2]. Thus, it is the fatigue degradation mapping which accounts for the highly non-smooth character of our problem.
Deriving necessary optimality conditions is a challenging issue even in finite dimensions, where special attention is given to MPCCs (mathematical programs with complementarity constraints). In [31] a detailed overview of various optimality conditions of different strength was given, see also [20] for the infinite-dimensional case. The most rigorous stationarity concept is strong stationarity. Roughly speaking, the strong stationarity conditions involve an optimality system which is equivalent to the purely primal condition saying that the directional derivative of the reduced objective in feasible directions is nonnegative (which is referred to as B-stationarity).
While there are plenty of contributions in the field of optimal control of smooth problems, see e.g. [38] and the references therein, fewer papers deal with non-smooth problems. Most of these papers resort to regularization or relaxation techniques to smoothen the problem, see e.g. [3, 17, 19] and the references therein. The optimality systems derived in this way are of intermediate strength and are not expected to be of strong stationary type, since one always loses information when passing to the limit in the regularization scheme. Thus, proving strong stationarity for the optimal control of non-smooth problems requires direct approaches, which employ the limited differentiability properties of the control-to-state map. In this context, there are even fewer contributions. We refer to the pioneering work [26] (strong stationarity for optimal control of elliptic VIs of obstacle type), which was followed by other papers addressing strong stationarity for various types of VIs [27, 40, 12, 7, 39, 8]. Regarding strong stationarity for optimal control of non-smooth PDEs, the literature is rather scarce and the only papers known to the author addressing this issue so far are [23, 9, 6, 5, 10].
Let us point out the main contributions of the present work. This paper aims at deriving optimality conditions which, regarding their strength, lie between the conditions derived by classical regularization techniques and the strong stationary ones. Starting from an optimality system obtained via smoothening, we resort to direct methods from previous works [5, 23] in order to improve our initial optimality conditions as far as possible. Note that this is a novel way of obtaining optimality conditions. We emphasize that, in contrast to [5, 23], our state system features two non-differentiable mappings instead of one, so that the methods from the aforementioned works are of limited applicability: strong stationarity conditions are not expected in our doubly non-smooth setting. If the fatigue degradation mapping is smooth, however, strong stationarity conditions are indeed available. We underline that, to the best of our knowledge, optimal control problems featuring two non-differentiable functions have not been tackled so far in the literature, not even in the context of classical smoothening methods.
The paper is structured as follows. After an introduction of the notation, section 2 focuses on the analysis of our fatigue damage model (1.1). Here we address the existence and uniqueness of solutions by proving that (1.1) is in fact equivalent to a PDE system. This consists of an elliptic PDE and a highly non-smooth ODE. The latter is of particular interest. It features two non-differentiable functions, namely and the fatigue degradation function ; the latter appears in the argument of the initial non-smoothness, cf. (2.2a). The properties of the control-to-state operator associated with (1.1) are investigated. In particular, we are concerned with the directional differentiability of the solution mapping of the non-smooth state system. To the best of our knowledge, the sensitivity analysis of non-smooth differential equations containing two non-differentiable functions has never been examined in the literature.
In section 3 we present the optimal control problem and investigate the existence of optimal solutions. Then, in subsection 3.1 we derive our first optimality conditions by resorting to a classical smoothening method. These conditions are of intermediate strength. If the non-smoothness is inactive, they coincide with the classical KKT system. However, our first optimality system does not contain any information in those points where the non-differentiable mappings and attain their kink points. This is precisely the focus of section 3.2, where the main result is proven in Theorem 21. Here, the initial optimality system is improved by employing the "surjectivity" trick from [5, 23]. The new and final optimality conditions (3.18) are comparatively strong (but not strong stationary). They contain information in terms of sign conditions on sets where the non-smoothness is active; such information is not expected to be obtained by merely smoothening the problem, cf. e.g. [6, Remark 3.9]. Moreover, if the fatigue degradation function is smooth, then (3.18) is of strong stationary type (Corollary 22). For completeness, the expected (not proven) strong stationarity system associated with the doubly non-smooth state system is presented in Section 3.3. Here we include a thorough explanation as to why the methods from [23, 5] fail (Remark 28). Finally, for the convenience of the reader, we include the proof of Lemma 13 in Appendix A.
Notation
Throughout the paper, is a fixed final time. If and are linear normed spaces, then the space of linear and bounded operators from to is denoted by , and means that is densely embedded in . The dual space of will be denoted by . For the dual pairing between and we write . The closed ball in around with radius is denoted by . If is a Hilbert space, we write for the associated scalar product. The following abbreviations will be used throughout the paper:
where is a Banach space. The adjoint operator of a linear and continuous mapping is denoted by . By we denote the characteristic function associated with the set . Derivatives w.r.t. time (weak derivatives of vector-valued functions) are frequently denoted by a dot. The symbol stands for the convex subdifferential, see e.g. [30]. With a slight abuse of notation, the Nemytskii operators associated with the mappings considered in this paper are denoted by the same symbol, even when considered with different domains and ranges. The mapping is abbreviated by . With a slight abuse of notation, we use in the following the Laplace symbol for the operator defined by
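The definition of this operator is not reproduced above; presumably the standard weak formulation is meant, sketched here under that assumption (the boundary conditions encoded in the choice of spaces are an assumption of this sketch):

```latex
% Sketch of the weak Laplacian (assumed standard form): for u, v \in H^1(\Omega),
\langle -\Delta u,\, v \rangle_{(H^1(\Omega))^*,\,H^1(\Omega)}
   \;=\; \int_\Omega \nabla u \cdot \nabla v \, dx .
```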
2 Properties of the control-to-state map
This section is concerned with the investigation of the solvability and differentiability properties of the state system (1.1).
Assumption 1.
For the mappings associated with fatigue in (1.1) we require the following:
1. The history operator satisfies
for all , where is a positive constant. Moreover, is supposed to be Gâteaux-differentiable with continuous derivative on .
2. The non-linear function is assumed to be Lipschitz-continuous with Lipschitz-constant and directionally differentiable.
Remark 2.
Assumption 1.1 is satisfied by the Volterra operator , defined as
where and . This type of operator is often employed in the study of history-dependent evolutionary variational inequalities, see e.g. [34, Ch. 4.4].
Concerning Assumption 1.2, we remark that non-differentiable fatigue degradation functions are very common in applications, since such mappings often display at least one kink point, see [2, Sec. 2.6.2]. This basically means that once the cumulated fatigue achieves a certain value, say , the body suddenly starts to become weaker in terms of its fracture toughness (so that is a kink point of ). This abrupt weakening of the material is described by the monotonically decreasing mapping on the interval , see [2, Sec. 2.6.2].
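To make the role of the history operator concrete, the following sketch discretizes a generic Volterra-type operator of convolution form H(d)(t) = ∫₀ᵗ k(t−s) d(s) ds on a uniform time grid; the kernel and all names are illustrative assumptions and do not reproduce the operator used in the paper.

```python
import numpy as np

# Illustrative discretization of a Volterra-type history operator
#   H(d)(t) = \int_0^t k(t - s) d(s) ds
# on a uniform time grid (trapezoidal rule). The kernel is an assumption of this sketch.
def history_operator(d, dt, kernel=lambda r: np.exp(-r)):
    """d: samples d(t_0), ..., d(t_n) on a grid with step dt; returns H(d) on the same grid."""
    n = len(d)
    H = np.zeros(n)
    t = dt * np.arange(n)
    for i in range(1, n):
        integrand = kernel(t[i] - t[: i + 1]) * d[: i + 1]
        # trapezoidal rule on the nodes t_0, ..., t_i
        H[i] = dt * (0.5 * integrand[0] + integrand[1:i].sum() + 0.5 * integrand[i])
    return H
```

Under mild assumptions on the kernel, such an operator satisfies a Lipschitz estimate of the type required in Assumption 1.1.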
Assumption 1 is supposed to hold throughout the paper, without mentioning it every time.
It is not difficult to check that the Nemytskii operator is Lipschitz continuous with constant . In view of Assumption 1.1, we thus have
(2.1)
a.e. in , for all .
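In generic notation (the paper's symbols are not reproduced here, and it is assumed for the purposes of this sketch that the bound in Assumption 1.1 has the usual integral form), the reasoning behind (2.1) combines the Lipschitz continuity of the fatigue degradation function with the history-operator bound:

```latex
% Sketch of the estimate behind (2.1), in placeholder notation:
|f(\mathcal H(d_1))(t,x) - f(\mathcal H(d_2))(t,x)|
   \;\le\; L_f\,|\mathcal H(d_1)(t,x) - \mathcal H(d_2)(t,x)|
   \;\le\; L_f\,L_{\mathcal H} \int_0^t \| d_1(s) - d_2(s)\|\,ds .
```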
Proposition 3 (Control-to-state map).
For every , the fatigue damage problem (1.1) admits a unique solution , which is characterized by the following PDE system
(2.2a)
(2.2b)
a.e. in .
Proof.
Let and be arbitrary, but fixed. Since is strictly convex, continuous and radially unbounded (see (1.2)), the minimization problem admits a unique solution characterized by . In view of (1.2), this means that
(2.3)
where is the solution operator of
(2.4)
With the map at hand, the evolution in (1.1) reads
(2.5)
In the light of (1.2), (1.3), and the sum rule for convex subdifferentials, (2.5) is equivalent to
(2.6)
for all a.e. in , where
(2.7)
Now we use the result in [5, Lemma 3.3] for each time point and we see that (2.6) is in fact equivalent to
(2.8)
where we abbreviate for convenience
(2.9)
In (2.8), stands for the (metric) projection onto the set , i.e., is the unique solution of
for any . In order to compute , we use the definition of the convex subdifferential and the fact that , from which we deduce
Now, in view of (2.7) combined with the fundamental lemma of the calculus of variations we have
This means that and since we can finally write (2.8) as
(2.10)
To summarize, we have shown that the evolution in (2.5) is equivalent to (2.10).
To solve (2.10), we apply a fixed-point argument. For this, we take a look at the mapping , given by
where is to be determined so that is a contraction. For all the following estimate is true
(2.11)
where is a positive constant. Here we used the fact that is Lipschitzian with constant , the definition of (see (2.9)) combined with the boundedness of , and the estimate (2.1). From (2.11) we deduce
(2.12)
which allows us to conclude that is a contraction for a small enough . Thus, the PDE (2.10) restricted to admits a unique solution in (see e.g. [14, Thm. 7.2.3]). Now, the unique solvability of (2.10) on the whole interval and the desired regularity of follow by a concatenation argument.
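The contraction argument used here can be mirrored numerically. The sketch below performs a Picard (fixed-point) iteration for an abstract evolution d' = Φ(d), d(0) = d₀, on a short interval where the time horizon times the Lipschitz constant of Φ is smaller than one; Φ is a placeholder and not the concrete right-hand side obtained in (2.10).

```python
import numpy as np

# Schematic Picard (fixed-point) iteration for d' = Phi(d), d(0) = d0, on [0, T_small],
# mirroring the contraction argument: if T_small times the Lipschitz constant of Phi
# is < 1, the map d |-> d0 + \int_0^t Phi(d)(s) ds is a contraction in C([0, T_small]).
# Phi is a placeholder for a Lipschitz right-hand side, NOT the concrete map in (2.10).
def picard_solve(Phi, d0, T_small, n_steps=200, n_iter=50, tol=1e-10):
    dt = T_small / n_steps
    d = np.full(n_steps + 1, d0, dtype=float)      # initial guess: constant trajectory
    for _ in range(n_iter):
        rhs = Phi(d)                               # right-hand side on the time grid
        # cumulative trapezoidal rule: integral[i] ~ \int_0^{t_i} rhs(s) ds
        integral = np.concatenate(([0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * dt)))
        d_new = d0 + integral                      # apply the fixed-point map
        if np.max(np.abs(d_new - d)) < tol:        # contraction => convergence
            return d_new
        d = d_new
    return d

# example usage (exponential decay): picard_solve(lambda d: -d, d0=1.0, T_small=0.5)
```

Restarting the iteration from the endpoint of the previous short interval imitates the concatenation argument mentioned above.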
Lemma 4.
Proof.
Let be arbitrary, but fixed. In the following, we abbreviate and , where is the solution operator of (2.4) . In view of Proposition 3 combined with (2.1), we obtain
where is a constant dependent only on the given data. Then, applying Gronwall’s inequality leads to
where is a constant dependent only on the given data. By employing again (2.2a) and by estimating as above without integrating over time, we obtain
(2.13)
where is another constant dependent only on the given data. Now, the desired result follows from , and (2.13). ∎
Lemma 5.
The mapping is Hadamard directionally differentiable with
(2.14)
Moreover, for all , it holds
(2.15)
a.e. in .
Proof.
In view of the differentiability properties of and , the mapping is Hadamard directionally differentiable [32, Def. 3.1.1, Lem. 3.1.2(b)]. To see this, we first note that is Hadamard directionally differentiable, since it is directionally differentiable (by Assumption 1.2 and Lebesgue’s dominated convergence theorem, see e.g. [36, Lemma A.1]) and Lipschitz continuous. In view of Assumption 1.1, the chain rule [33, Prop. 3.6(i)] implies that is Hadamard directionally differentiable as well, with directional derivative given by (2.14). To prove (2.15), we observe that, as a consequence of (2.1), we have
a.e. in , for all and all . Passing to the limit , where one uses the directional differentiability of and the fact that convergence in implies a.e. convergence in for a subsequence, then yields the desired estimate. ∎
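The convergence of difference quotients used in this proof can be illustrated with a scalar toy example: a Lipschitz function with a single kink, whose directional derivative is given by one-sided slopes. The function and its parameters are hypothetical; the sketch is not the paper's mapping.

```python
# Toy illustration of the difference-quotient argument in Lemma 5: a Lipschitz function
# with a single kink and its one-sided directional derivative. All names and values
# are hypothetical.
KAPPA0, X_KINK, SLOPE = 1.0, 0.5, 0.8

def f(x):
    # constant before the kink, linearly decreasing afterwards
    return min(KAPPA0, KAPPA0 - SLOPE * (x - X_KINK))

def f_prime(x, h):
    """Directional derivative f'(x; h)."""
    if x < X_KINK:
        return 0.0
    if x > X_KINK:
        return -SLOPE * h
    return min(0.0, -SLOPE * h)     # at the kink: 0 for h <= 0, -SLOPE*h for h > 0

x, h = X_KINK, 1.0                  # differentiate exactly at the kink point
for tau in (1e-1, 1e-2, 1e-3, 1e-4):
    quotient = (f(x + tau * h) - f(x)) / tau
    print(f"tau={tau:.0e}  quotient={quotient:+.6f}  exact={f_prime(x, h):+.6f}")
```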
Proposition 6 (Directional differentiability).
The operator is directionally differentiable. Its directional derivative at in direction is the unique solution of
(2.16a)
(2.16b)
a.e. in , where we abbreviate .
Proof.
We start by examining the solvability of (2.16). To this end, we just check that the mapping , given by
for all , is Lipschitzian from to with constant smaller than , for small enough. Then, by using the arguments employed at the end of the proof of Proposition 3, we can deduce that, for any , (2.16) admits a unique solution . For all the following estimate is true
where is a positive constant; note that here we abbreviated again . Here we used the fact that is Lipschitzian with constant , the boundedness of (see (2.4)), and (2.15) in combination with (2.14). Then, we obtain an estimate similar to (2.12), which allows us to conclude that is a contraction.
Next we focus on the convergence of the difference quotients associated with the mapping . We begin by observing that the operator is Hadamard directionally differentiable [32, Def. 3.1.1, Lem. 3.1.2(b)], since it is directionally differentiable (by Lebesgue’s dominated convergence theorem, see e.g. [36, Lem. A.1]) and Lipschitz-continuous. Moreover,
is directionally differentiable from to , since is linear and bounded between these spaces (cf. (2.4)) and as a result of Lemma 5. Now the chain rule [33, Prop. 3.6(i)] implies that
is (Hadamard) directionally differentiable from to with
for all . For simplicity, in the following we abbreviate , where is arbitrary, but fixed. denotes the first component of the map , i.e., is the solution map associated with (2.10). By combining the equations for , and (2.16), we obtain
(2.17)
This implies
(2.18)
where is the positive constant appearing in (2.11). In (2.18) we used again the Lipschitz continuity of , the boundedness of (cf. (2.3) and (2.4)), and the estimate (2.1). Applying Gronwall’s inequality in (2.18) yields
(2.19)
where is a constant dependent only on the given data. Now, (2.17) and estimating as in (2.18), in combination with (2.19), leads to
(2.20)
where is a constant dependent only on the given data. On the other hand, we recall the definition of in (2.18) and the fact that is directionally differentiable from to , which implies
In view of (2.20), we have shown that is directionally differentiable with . Further, from (2.3) we have for all , where is the second component of the operator , i.e., . Thus, is directionally differentiable as well, since and is directionally differentiable. Its directional derivative is given by , i.e., , see (2.16). The proof is now complete. ∎
3 The optimal control problem
Now, we turn our attention to the optimal control of the fatigue damage model (1.1). In the remainder of the paper, we are concerned with the examination of the following optimal control problem
s.t.
In view of Proposition 3, this can also be formulated as
(P)
Assumption 7.
The functional satisfies
where is continuously Fréchet-differentiable.
Proposition 8 (Existence of optimal solutions for (P)).
The optimal control problem (P) admits at least one solution in .
Proof.
3.1 Regularization and passage to the limit
In this section, we are concerned with the derivation of a first optimality system for local optima of (P). Based thereon, we shall improve our optimality conditions in the next section.
To obtain a first strong optimality system, see (3.6) below, we need the following rather non-restrictive assumption:
Assumption 9.
Remark 10.
Similarly to Remark 2, we observe that Assumption 9.1 is satisfied by classical Volterra operators which are employed in the study of history-dependent evolutionary variational inequalities, i.e.,
where and
We underline that Assumption 9.2 is very reasonable from the point of view of applications, since fatigue degradation functions have at most two kink points in practice [2, Sec. 2.6.2]. However, our mathematical analysis can be carried out in an analogous way if has a countable number of non-smooth points; since this is rather uncommon in applications, and for the sake of a better overview, we stick to the setting where has a single non-differentiable point.
Assumption 11 (Regularization of ).
For every , there exists a continuously differentiable function such that
1. There exists a constant , independent of , such that
2. is Lipschitz continuous with Lipschitz constant independent of .
3. For every , the sequence converges uniformly towards on as .
Remark 12.
If the fatigue degradation function is piecewise continuously differentiable, which is always the case in applications [2, Sec. 2.6.2], then Assumption 11 is fulfilled. To see this, one defines , where is a standard mollifier. Then, Assumption 11.1 can be easily checked, see e.g. the proof of [21, Thm. 2.4]. Note that it is natural that the Lipschitz continuity of the non-linearity carries over to its regularized counterparts with constant independent of [37, Chp. I.3.3]. We also observe that, since is continuous on , converges uniformly towards on this interval, so that Assumption 11.3 is satisfied as well.
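A minimal numerical sketch of the construction f_ε = f ∗ ρ_ε described in this remark is given below; the grid, the kink location and the particular piecewise-linear function are illustrative assumptions, and boundary effects of the discrete convolution are ignored.

```python
import numpy as np

# Sketch of the mollification f_eps = f * rho_eps from Remark 12 (illustrative data).
def mollifier(r, eps):
    """Standard mollifier supported on (-eps, eps), normalized to integrate to 1."""
    rho = np.zeros_like(r)
    inside = np.abs(r) < eps
    rho[inside] = np.exp(-1.0 / (1.0 - (r[inside] / eps) ** 2))
    return rho / (rho.sum() * (r[1] - r[0]))

def smoothen(f_values, x, eps):
    """Discrete convolution approximating (f * rho_eps) on the uniform grid x."""
    dx = x[1] - x[0]
    r = np.arange(-eps, eps + dx, dx)
    weights = mollifier(r, eps) * dx           # quadrature weights summing to ~1
    return np.convolve(f_values, weights, mode="same")

# example: piecewise-linear degradation function with a kink at 0.5 (hypothetical data)
x = np.linspace(0.0, 1.0, 1001)
f = np.where(x <= 0.5, 1.0, 1.0 - 0.8 * (x - 0.5))
f_eps = smoothen(f, x, eps=0.05)               # smooth, and close to f away from the kink
```

Away from the kink, f_eps coincides with f up to the mollification radius, which is the uniform convergence required in Assumption 11.3.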
In the rest of the paper, we will tacitly assume that, in addition to Assumptions 1 and 7, Assumptions 9 and 11 are always fulfilled, without mentioning them every time.
For an arbitrary local minimizer of (P), consider the following regularization, also known as "adapted penalization", see e.g. [4]:
(Pε)
where
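Since the functional in (Pε) is not reproduced above, the following schematic form of the classical adapted penalization [4] is given for orientation; here S_ε denotes the regularized solution map from (A.1), the fixed local minimizer is written as \bar\ell and B(\bar\ell, r) is its ball of local optimality, all in placeholder notation.

```latex
% Schematic adapted penalization (placeholder notation; not necessarily the exact (P_eps)):
\min_{\ell \in B(\bar\ell,\,r)} \;
   J\bigl(S_\varepsilon(\ell),\,\ell\bigr) \;+\; \tfrac{1}{2}\,\|\ell - \bar\ell\|^2 .
```

The additional quadratic term is what typically forces the regularized local minimizers to converge to the fixed local minimizer, cf. Lemma 13.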
Lemma 13.
Proof.
see Appendix A. ∎
The next result is essential for the solvability of the first adjoint equation in (3.6).
Lemma 14.
For all it holds
(3.4)
where stands for the adjoint operator of .
Proof.
Let be arbitrary, but fixed. By virtue of (2.15) (applied for instead of ), we have
Note that in the first identity we made use of Fubini’s theorem. Now, testing with , where and , , are arbitrary but fixed, yields
Applying the fundamental lemma of the calculus of variations then gives in turn
a.e. in . Since was arbitrary, the proof is now complete. ∎
To show that the relations in (3.13) below are valid, we need to prove that the convergence in (3.3) is true in as well. This is confirmed by the following
Lemma 15.
Proof.
Let us first show that belongs to . The assertion for follows in a completely analogous way. By taking a look at (2.2), we see that, since , the mapping belongs to ; this follows by the so-called Stampacchia method, cf. e.g. [38, Chp. 7.2.2]. Then, by arguing as in the proof of Proposition 3, where one employs Assumption 9.1, one obtains that . Now, to show the convergence (3.5), we subtract the equation associated to (see (2.2a)) from the one associated to (see (A.1a)). By using the fact that , and by relying on the Lipschitz continuity of and , as well as Assumption 11.1, we arrive at
where is a constant dependent only on the given data; note that in the last inequality we used Assumption 9.1. Then, applying Gronwall’s inequality leads to
where is a constant dependent only on the given data. By employing (3.2), we can finally deduce that In view of (2.2b), the proof is now complete. ∎
We are now in the position to state the main result of this subsection.
Proposition 16.
Proof.
Let be the sequence of local minimizers from Lemma 13. Since is locally optimal for (Pε) and on account of the differentiability properties of , cf. Appendix A, and , see Assumption 7, we can write down the necessary optimality condition
(3.7)
for all . Now, let us consider the system
(3.8a)
(3.8b)
a.e. in , where we abbreviate and . In (3.8a), stands for the adjoint operator of .
By arguments inspired by, e.g., the proof of [36, Lem. 5.7] in combination with the estimate (3.4), one obtains that (3.8) admits a unique solution . Let us go a little more into detail concerning the solvability of (3.8a). In this context one checks that the mapping , given by
for all , is Lipschitzian from to with constant smaller than , for small enough; here denotes the solution of
for and We observe that, for all , the following estimate is true
where in the first inequality we used the global Lipschitz continuity of with constant and (3.4). The reader is referred to the first part of the proof of Proposition 6, where exactly this type of estimate was established, in order to obtain that admits a solution in . Finally, a transformation of the variables yields that is the solution of the adjoint system (3.8).
Testing (3.8) with and (A.2) with yields
which inserted in (3.7) gives
(3.9)
for all Further, we observe that
(3.10a)
(3.10b)
in the light of (3.3) combined with the continuous Fréchet-differentiability of (Assumption 7). Next we focus on proving uniform bounds for the regularized adjoint states. By employing again a transformation of the variables where this time we abbreviate and by relying again on the global Lipschitz-continuity of and (3.4), we obtain from (3.8a)
Now, Gronwall’s inequality gives in turn
for all . Thus, by relying on (3.10a)-(3.10b) and by estimating again as above in (3.8a), this time without integrating, one obtains that there exists a constant, independent of , such that
As a consequence,
and
are uniformly bounded in (recall that and are globally Lipschitz continuous with constants independent of ). From (3.8b) we can further deduce that there exists a constant , independent of , such that , where we use again (3.10b). Therefore, we can extract weakly convergent subsequences (denoted by the same symbol) so that
(3.11)
Owing to (3.11), (3.10a), (3.10b) and (3.2), we can pass to the limit in (3.8)-(3.9). This results in
(3.12a)
(3.12b)
(3.12c)
where for the passage to the limit in (3.8a) we also relied on the continuity of the derivative of (see Assumption 1.1) combined with (3.3).
Now, it remains to prove that (3.6c)-(3.6d) is true. To this end, we show that, for each , we have
(3.13a)
(3.13b)
where we abbreviate , , and .
We begin by observing that
in light of Assumption 9.1. Thus, as a consequence of (3.5), we have
(3.14)
which then implies
by the Lipschitz continuity of and Assumption 11.1. This means that f.a.a. for small enough, independent of . In view of the definition of we have
for The definition of and (3.11) now yield (3.13a). To show (3.13b), we proceed in a similar way. Thanks to (3.14), there exists an small enough, independent of , so that f.a.a. . Assumption 11.3 applied for then gives in turn the convergence
As another consequence of Assumption 11.3, we obtain that is continuous on since by assumption. Now, (3.14), Assumption 1.2 and Lebesgue dominated convergence imply that
Finally, the convergence of from (3.11) along with the definition of yield that
i.e., (3.13b). Since was arbitrary and since and (up to a set of measure zero), the proof is now complete. ∎
Remark 17.
• If and if a.e. in , then the optimality system in Proposition 16 coincides with the optimality conditions which one obtains when directly applying the KKT theory to (P), cf. [38]. Moreover, we observe that (3.6) does not contain any information as to what happens in those for which and are non-smooth points of the mappings and , respectively. This is the focus of the next section, where the optimality conditions from Proposition 16 shall be improved.
• Indeed, (3.6) is not the best optimality system one could obtain via regularization. Such a system should also contain the relations
(3.15a)
(3.15b)
We acknowledge the results [37, Thm. 2.4], [6, Prop. 2.17], [9, Thm. 4.4], where the respective limit optimality systems, though not strong stationary, include such relations between multipliers and adjoint states on the sets where the non-smoothness is active. We cannot expect this to happen in the present paper; in contrast to the aforementioned contributions, our adjoint state converges weakly in a space which is not compactly embedded in a Lebesgue space. Although we are able to show
this does not help us conclude (3.15), in view of the lack of spatial regularity of the adjoint state.
3.2 Towards strong stationarity
In this section, we aim to derive a stronger optimality system than (3.6). To this end, we will employ arguments from previous works [23, 5], which are entirely based on the limited differentiability properties of the non-smooth mappings involved. We begin by stating the first order necessary optimality conditions in primal form.
Lemma 18 (B-stationarity).
If is locally optimal for (P), then there holds
(3.16)
Proof.
As a result of Proposition 6 and Assumption 7 we have that the composite mapping is (Hadamard) directionally differentiable [32, Def. 3.1.1] at in any direction with directional derivative ; see [32, Lem. 3.1.2(b)] and [33, Prop. 3.6(i)]. The result then follows immediately from the local optimality of and Assumption 7. ∎
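In reduced and generic notation (the concrete spaces and symbols of (3.16) are not reproduced above), the primal condition established here has the familiar form:

```latex
% Schematic B-stationarity condition (generic notation): with the reduced objective
% j(\ell) := J(S(\ell), \ell), local optimality of \bar\ell yields
j'(\bar\ell; h) \;\ge\; 0 \qquad \text{for all admissible directions } h,
% where j'(\bar\ell; h) is obtained via the chain rule from the directional
% derivative S'(\bar\ell; h) of Proposition 6.
```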
In order to improve the optimality conditions from the previous section 3.1, we make use of the following very natural requirement:
Assumption 19.
The history operator satisfies the monotonicity condition
Remark 20.
It is self-evident that the cumulated damage (fatigue level of the material) increases as the damage increases. Hence, the condition in Assumption 19 is always satisfied in applications.
The main result of this section reads as follows.
Theorem 21.
Suppose that Assumptions 7, 9, 11 and 19 are fulfilled. Let be locally optimal for (P) with associated states
Then, there exist adjoint states
and multipliers and such that the following system is satisfied
(3.18a)
(3.18b)
(3.18c)
(3.18d)
(3.18e)
where we abbreviate . In (3.18d), the mappings are defined as follows
(3.19)
where, for any , the right- and left-sided derivatives of are given by and , respectively.
Proof.
The existence of a tuple satisfying the system (3.18a)-(3.18b)-(3.18c)-(3.18e) is due to Proposition 16. Thus, the rest of the proof is focused on showing (3.18d). In this context, we first follow the ideas from [5, Proof of Lem. 2.8] and prove that the set of arguments of from (2.16a) is dense in (step (I)). With this information at hand, we are then able to show the desired result by employing a technique from [5, Proof of Thm. 2.11], see also [23, Proof of Thm. 5.3] (step (II)).
(I) Let be arbitrary, but fixed. As indicated above, we next show that there exists such that
(3.20)
where we abbreviate and for all . To this end, we follow the lines of the proof of [5, Lem. 2.8]. We start by noticing that the mapping
satisfies and . Then, we observe that fulfills
(3.21)
In view of the embedding , there exists a sequence such that
(3.22)
For any , consider the equation
(3.23)
By arguing as in the proof of Proposition 6 we see that (3.23) admits a unique solution . Now, we define
(3.24)
such that the pair solves the system (2.16) associated to with right-hand side ; note that the regularity of in (3.24) is due to the -regularity of . In view of the unique solvability of (2.16), cf. Proposition 6, . Owing to the Lipschitz-continuity of the directional derivative of (w.r.t. direction) and (2.15) , we further obtain from (3.21) and (3.23)
Gronwall’s inequality and (3.22) then give in turn
(3.25)
where is a constant dependent only on the given data. By relying on the continuity of , cf. (2.15), we have
(3.26)
as a result of (3.25). Combining (3.22) and (3.26) finally yields
Since we established above that , the proof of this step is now complete.
(II) In the following, remains arbitrary, but fixed. To prove the desired relations in (3.18d), we first make use of the B-stationarity from (3.16). Here we test with the function which was defined in (3.24).
We test (3.18a), (3.18b), and (3.18e) with and , respectively. This leads to
(3.27)
where the second identity follows from integration by parts, , and ; here we also recall the abbreviation , see (3.20). In view of (3.20) and since , letting in (3.27) leads to
(3.28)
for all , where we abbreviate
(3.29)
Here we used the fact that is continuous, by the Lipschitz-continuity of , as well as (3.25) in combination with (2.15) and the fact that .
Next, we take a closer look at the second line in the estimate (3.28). In this context, we first notice that, for all , it holds
(3.30)
Moreover, we recall that
(3.31)
Now, let with be arbitrary, but fixed. In view of (3.29) and (3.31), we have and (3.17) implies
Then, by recalling (2.14) and by employing Fubini’s theorem, we obtain
(3.32)
where the last equality is due to the definition of in (3.19) combined with the second identity in (3.18c). Going back to (3.28), we have
(3.33)
By means of the fundamental lemma of the calculus of variations in combination with the positive homogeneity of the directional derivative w.r.t. direction, we deduce from (3.33) the inequality
(3.34)
By arguing exactly in the same way as above, where one takes into account the fact that , for , we show
(3.35)
This gives in turn
(3.36)
where we relied again on the fundamental lemma of the calculus of variations and the positive homogeneity of the directional derivative w.r.t. direction. From (3.34)–(3.36) and the fact that (see (3.31)) we can now conclude the first relation in (3.18d). Finally, the second relation in (3.18d) is a consequence of (3.18c), (3.34)–(3.36) and (3.31). This completes the proof. ∎
Corollary 22 (Strong stationarity in the case that is smooth).
Suppose that Assumptions 7 and 9 are fulfilled. Let be locally optimal for (P) with associated states
If the set has measure zero, then there exist unique adjoint states
and a unique multiplier such that the following system is satisfied
(3.37a)
(3.37b)
(3.37c)
(3.37d)
where we abbreviate . Moreover, (3.37) is of strong stationary type, i.e., if together with its states , some adjoint states , and a multiplier satisfy the optimality system (3.37a)–(3.37d), then it also satisfies the variational inequality (3.16).
Proof.
The first statement is a consequence of Theorem 21. Note that here we do not ask that Assumption 11 holds true; does not need to be smoothened, as its non-smoothness is never active. Assumption 19 is also not required here; this was necessary in the proof of Theorem 21 only to show (3.32) and (3.35). Since has measure zero, (3.32) and (3.35) follow immediately from the second relation in (3.18c).
To prove the second assertion, we let be arbitrary, but fixed and abbreviate and . By distinguishing between the sets , and , we obtain from (3.37c) and (3.31)
(3.38)
for all . Now, let be arbitrary but fixed and test (3.37a), (3.37b), and (3.37d) with and , respectively. This leads to
where the second identity follows from integration by parts, , and . Since was arbitrary, the proof is now complete. ∎
Remark 23.
Remark 24.
As opposed to (3.37), the optimality system in Theorem 21 is not strong stationary, as we will see in the next section. However, we emphasize that (3.18) is a comparatively strong optimality system. While countless non-smooth problems have been addressed by resorting to a smoothening procedure such as the one in the proof of Proposition 16 (see e.g. [3, 17, 19] and the references therein), we went a step further and improved the optimality conditions from Proposition 16 by proving the additional information contained in (3.18d). Let us point out that sign conditions on the sets where the non-smoothness is active, in our case
are not expected to be obtained by classical regularization techniques, see e.g. [6, Remark 3.9].
3.3 Discussion of the optimality system (3.18). Comparison to strong stationarity
We begin this section by writing down how the strong stationary optimality conditions for the control of (P) should look.
Proposition 25 (An optimality system that implies B-stationarity).
Suppose that Assumption 7 is fulfilled. Assume that together with its states , some adjoint states , and some multipliers satisfy the optimality system
(3.39a)
(3.39b)
(3.39c)
(3.39d)
(3.39e)
where we abbreviate and where, for any , the right- and left-sided derivative of are given by and , respectively. Then, also satisfies the variational inequality (3.16).
Proof.
Let be arbitrary, but fixed. In the proof of Corollary 22 we saw that the first identity in (3.39c) and the first relation in (3.39d) combined with (3.31) imply
(3.40)
for all Next we abbreviate and , where
From the second identity in (3.39c) and the second relation in (3.39d) we deduce that
(3.41)
where in the second identity we relied on (3.30). Adding (3.40) and (3.41) yields (3.28). Now, let be arbitrary but fixed and abbreviate . By testing (3.28) with and by arguing backwards step by step as in the derivation of (3.27), we finally arrive at the desired result. ∎
Remark 26.
Some words concerning Proposition 25 are in order:
•
• We point out that (3.39) is not of strong stationary type, as we were not able to show ; the optimality conditions in (3.39) just point out the information that is missing in (3.18), namely
(3.42)
Note that the sign condition
is already contained in (3.18d). The proof of Proposition 25 shows that (3.42) is indeed needed for the implication .
• In order to prove that a certain optimality system implies B-stationarity, it is essential that it includes sign conditions for the involved multipliers and/or adjoint states on the sets where the non-smoothness is active. This fact has been observed in many contributions dealing with strong stationarity [23, Rem. 6.9], [6, Rem. 3.9], [5, Rem. 4.8], [9, Rem. 4.15]. In our case, see (3.18d), the information on is incomplete, while the sign conditions on the set are non-existent and seem to be hidden in the integral formulations (3.19).
Proof.
Remark 28.
The gap between (3.18) and the strong stationary optimality conditions (3.39) is due to the additional non-smooth mapping appearing in the argument of the initial non-smoothness , cf. (2.2a). To see this, let us take a closer look at the proof of Theorem 21. Therein, (3.18d) is proven by relying on direct methods from previous works [5, 23] which deal with strong stationarity in the context of one non-differentiable map. In these works it has been observed that the set of directions into which the non-smoothness is differentiated, in the "linearized" state equation, must be dense in a suitable (Bochner) space [5, Remark 2.12], [23, Lem. 5.2]. The density of the set of directions into which is differentiated, see (2.16a), is indeed available, as the first step of the proof of Theorem 21 shows. This allowed us to improve the optimality system (3.6) from the previous section. However, the non-differentiable function requires a similar density property too, which reads as follows
(3.44)
where denotes the first component of the control-to-state operator . By taking a look at the "linearized" state equation (2.16a), we see that (3.44) is not to be expected, due to the lack of surjectivity of the mapping . Thus, the methods from [5, 23] are restricted to one non-smoothness and permit us to improve the limit optimality system (3.6) only up to a certain point. Hence, strong stationarity for the control of (P) remains an open question.
Appendix A
Proof of Lemma 13.
The arguments are well-known [4] and can be found in [5, App. B] for the case that is constant and the control space is instead of .
(I) Let be arbitrary, but fixed. We begin by recalling the smooth state equation appearing in (Pε):
(A.1a)
(A.1b)
By employing the exact same arguments as in the proof of Proposition 3, one infers that (A.1) admits a unique solution for every , which allows us to define the regularized solution mapping
The operator is Gâteaux-differentiable and its derivative at in direction , i.e., , is the unique solution of
(A.2)
where we abbreviate . By arguing as in the proof of Lemma 4 we deduce that is Lipschitz continuous (with constant independent of ). Moreover, we have the convergence
(A.3)
for . To see this, one first shows that , which follows by estimating as in the proof of Lemma 4 and by using (2.1) applied for along with (3.1). Then, (A.3) is a consequence of the Lipschitz continuity of (with constant independent of ).
(II) Next, we focus on proving that can be approximated via local minimizers of optimal control problems governed by (A.1). To this end, let be the ball of local optimality of and consider the smooth (reduced) optimal control problem
()
By arguing as in the proof of Proposition 8, we see that () admits a global solution . Since we can select a subsequence with
(A.4)
where For simplicity, we abbreviate in the following
(A.5a)
(A.5b)
for all . Due to (A.3) and Assumption 7, it holds
(A.6)
where for the last inequality we relied on the fact that is a global minimizer of () and that is admissible for (). In view of (A.5b), (A.6) can be continued as
(A.7)
where we used again (A.3) in combination with the compact embedding , and the continuity of , see Assumption 7; note that for the last inequality in (A.7) we employed the fact that . From (A.7) we obtain that and
Since , one has the convergence
(A.8)
where we also relied on (A.4). As a consequence, (A.3) yields
(A.9)
A classical argument finally shows that is a local minimizer of for sufficiently small.
Acknowledgment
This work was supported by the DFG grant BE 7178/3-1 for the project "Optimal Control of Viscous Fatigue Damage Models for Brittle Materials: Optimality Systems".
References
- [1] R. Alessi, V. Crismale, and G. Orlando. Fatigue effects in elastic materials with variational damage models: A vanishing viscosity approach. Journal of Nonlinear Science, 29:1041–1094, 2019.
- [2] R. Alessi, S. Vidoli, and L. De Lorenzis. Variational approach to fatigue phenomena with a phase-field model: the one-dimensional case. Engineering Fracture Mechanics, 190:53–73, 2018.
- [3] V. Barbu. Necessary conditions for distributed control problems governed by parabolic variational inequalities. SIAM Journal on Control and Optimization, 19(1):64–86, 1981.
- [4] V. Barbu. Optimal control of variational inequalities. Research notes in mathematics 100, Pitman, Boston-London-Melbourne, 1984.
- [5] L. Betz. Strong stationarity for optimal control of a non-smooth coupled system: Application to a viscous evolutionary VI coupled with an elliptic PDE. SIAM J. on Optimization, 29(4):3069–3099, 2019.
- [6] L. Betz. Strong stationarity for a highly nonsmooth optimization problem with control constraints. Mathematical Control and Related Fields, doi: 10.3934/mcrf.2022047, 2022.
- [7] C. Christof. Sensitivity analysis and optimal control of obstacle-type evolution variational inequalities. SIAM J. Control Optim., 57(1):192–218, 2019.
- [8] C. Christof and M. Brokate. Strong stationarity conditions for optimal control problems governed by a rate-independent evolution variational inequality. arXiv:2205.01196, 2022.
- [9] C. Christof, C. Clason, C. Meyer, and S. Walther. Optimal control of a non-smooth, semilinear elliptic equation. Mathematical Control and Related Fields, 8(1):247–276, 2018.
- [10] C. Clason, V.H. Nhu, and A. Rösch. Optimal control of a non-smooth quasilinear elliptic equation. Mathematical Control and Related Fields, 11(3):521–554, 2021.
- [11] V. Crismale, G. Lazzaroni, and G. Orlando. Cohesive fracture with irreversibility: quasistatic evolution for a model subject to fatigue. Math. Models Methods Appl. Sci., 28:1371–1412, 2018.
- [12] J. C. De los Reyes and C. Meyer. Strong stationarity conditions for a class of optimization problems governed by variational inequalities of the second kind. Journal of Optimization Theory and Applications, 168(2):375–409, 2016.
- [13] B.J. Dimitrijevic and K. Hackl. A method for gradient enhancement of continuum damage models. Technische Mechanik, 28(1):43–52, 2008.
- [14] E. Emmrich. Gewöhnliche und Operator Differentialgleichungen. Vieweg, Wiesbaden, 2004.
- [15] M. Frémond and N. Kenmochi. Damage problems for viscous locking materials. Adv. Math. Sci. Appl., 16(2):697–716, 2006.
- [16] M. Frémond and B. Nedjar. Damage, gradient of damage and principle of virtual power. Int. J. Solids Struct., 33(8):1083–1103, 1996.
- [17] A. Friedman. Optimal control for parabolic variational inequalities. SIAM Journal on Control and Optimization, 25(2):482–497, 1987.
- [18] K. Hammerum, P. Brath, and N. K. Poulsen. A fatigue approach to wind turbine control. J. of Physics: Conf. Series, 75:012–081, 2007.
- [19] Z.X. He. State constrained control problems governed by variational inequalities. SIAM Journal on Control and Optimization, 25:1119–1144, 1987.
- [20] M. Hintermüller and I. Kopacka. Mathematical programs with complementarity constraints in function space: C- and strong stationarity and a path-following algorithm. SIAM Journal on Optimization, 20(2):868–902, 2009.
- [21] W. Kaballo. Aufbaukurs Funktionalanalysis und Operatortheorie. Springer Verlag, Heidelberg, 2014.
- [22] D. Knees, R. Rossi, and C. Zanini. A vanishing viscosity approach to a rate-independent damage model. Mathematical Models and Methods in Applied Sciences, 23(4):565–616, 2013.
- [23] C. Meyer and L.M. Susu. Optimal control of nonsmooth, semilinear parabolic equations. SIAM Journal on Control and Optimization, 55(4):2206–2234, 2017.
- [24] C. Meyer and L.M. Susu. Analysis of a viscous two-field gradient damage model. Part I: Existence and uniqueness. Z. Anal. Anwend., 38(3):249–286, 2019.
- [25] C. Meyer and L.M. Susu. Analysis of a viscous two-field gradient damage model. Part II: Penalization limit. Z. Anal. Anwend., 38(4), 2019.
- [26] F. Mignot. Contrôle dans les inéquations variationelles elliptiques. Journal of Functional Analysis, 22(2):130–185, 1976.
- [27] F. Mignot and J.-P. Puel. Optimal control in some variational inequalities. SIAM Journal on Control and Optimization, 22(3):466–476, 1984.
- [28] I. Munteanu, A. I. Bratcu, N.-A. Cutululis, and E. Ceanga. Optimal control of wind energy systems: towards a global approach. Springer Science Business Media, 2008.
- [29] R.O. Ritchie and M.E. Launey. Crack growth in brittle and ductile solids. In Q.J. Wang and Y.W. Chung (eds.), Encyclopedia of Tribology. Springer, Boston, 2013.
- [30] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
- [31] H. Scheel and S. Scholtes. Mathematical programs with complementarity constraints: Stationarity, optimality, and sensitivity. Mathematics of Operations Research, 25(1):1–22, 2000.
- [32] W. Schirotzek. Nonsmooth Analysis. Springer, Berlin, 2007.
- [33] A. Shapiro. On concepts of directional differentiability. Journal of Optimization Theory and Applications, 66(3):477–487, 1990.
- [34] M. Sofonea and A. Matei. Variational Inequalities with Applications. Springer, New York, 2009.
- [35] R. Stephens, A. Fatemi, R. Stephens, and H. Fuchs. Metal Fatigue in Engineering. A Wiley-Interscience publication, Wiley, New York, 2000.
- [36] L.M. Susu. Optimal control of a viscous two-field gradient damage model. GAMM-Mitt., 40(4):287 – 311, 2018.
- [37] D. Tiba. Optimal control of nonsmooth distributed parameter systems. Springer, 1990.
- [38] F. Tröltzsch. Optimal Control of Partial Differential Equations, volume 112 of Graduate Studies in Mathematics. American Mathematical Society, Providence, 2010. Theory, methods and applications, Translated from the 2005 German original by Jürgen Sprekels.
- [39] G. Wachsmuth. Elliptic quasi-variational inequalities under a smallness assumption: uniqueness, differential stability and optimal control. Calc. Var., 82(59), 2020.
- [40] G. Wachsmuth. Strong stationarity for optimal control of the obstacle problem with control constraints. SIAM Journal on Optimization, 24(4):1914–1932, 2014.