Optimal one-dimensional structures for the principal eigenvalue of two-dimensional domains
Abstract
We consider a shape optimization problem arising from the optimal reinforcement of a membrane by means of one-dimensional stiffeners, or from the fastest cooling of a two-dimensional object by means of "conducting wires". The criterion is the maximization of the first eigenvalue, and the admissible classes are those of one-dimensional sets with prescribed total length, possibly with the additional constraint of being connected (or of having an a priori bounded number of connected components). The corresponding relaxed problems and the related existence results are described.
Keywords: Optimal reinforcement, eigenvalues of the Laplacian, stiffeners, fastest cooling.
2010 Mathematics Subject Classification: 49J45, 35R35, 35J25, 49Q10.
1 Introduction
The problem of finding the vibration modes of an elastic membrane $\Omega \subset \mathbb{R}^2$, fixed at its boundary $\partial\Omega$, is known to reduce to the PDE
$$-\Delta u = \lambda u \ \text{ in } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega.$$
The eigenvalues $\lambda$ for which the PDE above has nonzero solutions are all strictly positive with no finite accumulation point, hence they can be ordered as
$$0 < \lambda_1 \le \lambda_2 \le \dots \to +\infty.$$
We are interested in the behavior of the first eigenvalue, which can also be characterized via the variational problem
$$\lambda_1(\Omega) = \min\Big\{ \int_\Omega |\nabla u|^2\,dx \;:\; u \in H^1_0(\Omega),\ \int_\Omega u^2\,dx = 1 \Big\}.$$
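The variational characterization above lends itself to a quick numerical check. The following hedged sketch (not from the paper; domain, grid size, and solver choice are illustrative assumptions) approximates the first Dirichlet eigenvalue of the unit square with the 5-point finite-difference Laplacian; the exact value there is $2\pi^2 \approx 19.7392$.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

# Hedged numerical sketch: first Dirichlet eigenvalue of the unit square.
n = 100                                   # interior grid points per direction
h = 1.0 / (n + 1)
D = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
L = kron(identity(n), D) + kron(D, identity(n))   # discrete 2-D Laplacian

# the smallest eigenvalue of -L approximates lambda_1 (shift-invert at 0)
lam1 = eigsh((-L).tocsc(), k=1, sigma=0, return_eigenvectors=False)[0]
print(lam1)   # close to 2*pi^2
```

The same discretization, with the grid restricted to a disk, could be used to experiment with the radial case of Section 4.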
Our goal is to see how the value above changes when we attach to the membrane a one-dimensional stiffener, modeled by a one-dimensional rectifiable set $\Sigma \subset \overline\Omega$. In this case the first eigenvalue depends on $\Sigma$ and is given by
$$\lambda(\Sigma) = \min\Big\{ \int_\Omega |\nabla u|^2\,dx + m \int_\Sigma |\nabla_\tau u|^2\,d\mathcal{H}^1 \;:\; u \in H^1_0(\Omega),\ \int_\Omega u^2\,dx = 1 \Big\}, \tag{1.1}$$
where $\nabla_\tau u$ is the tangential derivative and the parameter $m > 0$ indicates the stiffness coefficient of the material of which $\Sigma$ is made.
A similar problem arises in heat diffusion, when a two-dimensional heat conductor $\Omega$, with zero temperature at the boundary and initial temperature $u_0$, has to be cooled as fast as possible by adding one-dimensional strongly conducting wires $\Sigma$. The corresponding second order operator in the presence of the structure $\Sigma$ is given in weak form by
$$\int_\Omega \nabla u \cdot \nabla \phi\,dx + m \int_\Sigma \nabla_\tau u \cdot \nabla_\tau \phi\,d\mathcal{H}^1,$$
where $u$ and $\phi$ vary in the Sobolev space $H^1_0(\Omega)$. By Fourier analysis we may write the solution of the heat equation as
$$u(t,x) = \sum_{k \ge 1} c_k e^{-\lambda_k t} u_k(x), \qquad c_k = \int_\Omega u_0 u_k\,dx,$$
where $\lambda_k$ are the eigenvalues of the operator above and $u_k$ the corresponding eigenfunctions (normalized with unitary $L^2$ norm). The fastest cooling then reduces to searching for the structure $\Sigma$ providing the maximal first eigenvalue $\lambda_1$ among the class of admissible choices for $\Sigma$.
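In terms of the Fourier expansion above, the decay of the temperature in $L^2$ is governed by the first eigenvalue; a standard computation, sketched here for the reader, makes the connection with "fastest cooling" precise:

```latex
\|u(t,\cdot)\|_{L^2(\Omega)}^2
  = \sum_{k \ge 1} e^{-2\lambda_k t}\, c_k^2
  \le e^{-2\lambda_1 t}\, \|u_0\|_{L^2(\Omega)}^2 ,
```

where $c_k$ are the Fourier coefficients of $u_0$. Thus maximizing $\lambda_1$ yields the fastest possible exponential cooling rate.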
In the present paper we consider the shape optimization problems related to the functional $\lambda(\Sigma)$ defined in (1.1) on the following two classes of admissible choices for the stiffener $\Sigma$, where $\mathcal{H}^1(\Sigma)$ denotes the length of $\Sigma$:
Similarly, we could consider the admissible class of stiffeners having at most $k$ connected components.
We do not consider this last situation, since there are no essential differences between the cases $k = 1$ and $k > 1$. For a general presentation of shape optimization problems we refer to the books [6] and [19].
In Section 2 we give the precise formulation of the two optimization problems involving the two admissible classes, together with their corresponding relaxed formulations. We will show that the relaxed problems admit a solution, which is in both cases a measure: supported in $\overline\Omega$ in the first case, and on a rectifiable set in the second one. Our main results are that these measures do not have singular parts; more precisely, in the first case the optimal measure is a function, while in the second case it is a measure of the form $\theta\,\mathcal{H}^1\llcorner\Sigma$, where $\Sigma$ is a suitable connected set and $\theta$ a suitable density.
Section 3 contains the proofs of the results. In Section 4 we consider the case when $\Omega$ is a disk, in which some explicit calculations can be made for the relaxed optimization problem related to the first choice of admissible sets. Section 5 deals with the case in which the admissible sets are connected. Finally, in Section 6 we collect some open questions that in our opinion merit further investigation.
2 Formulation of the problem and main results
Let be a bounded Lipschitz domain. The two optimization problems we consider are
(2.1) |
(2.2) |
where is defined in (1.1). We now deduce in the two cases the corresponding relaxed problems, obtained by means of the possible limits of admissible . In the following we use the notation:
-
for the (two-dimensional) Lebesgue measure of ;
-
for the length of a one-dimensional set ;
-
for the mass of a measure .
Let be a sequence of admissible stiffeners for problem (2.1); considering the measures we have that , hence a subsequence (that we still indicate by ) weakly* converges to a suitable measure . It is then convenient to define for every measure on by setting
Note that, in general, the infimum above is not attained, and minimizing sequences converge, strongly in and weakly in a suitably defined Sobolev space, to solutions of the relaxed problem
Here represents a kind of tangential gradient that was defined in [4] for every measure . In this way, when the tangential gradient coincides with the usual tangential gradient to , so that the definition above of reduces to . The relaxed version of the optimization problem (2.1) then reads
(2.3) |
where is the class of nonnegative measures on with .
Proposition 2.1.
The relaxed optimization problem (2.3) admits a solution.
Proof.
For every fixed the map
is weakly* continuous. Hence is upper semicontinuous for the weak* convergence, being the infimum of continuous functions. The existence result then follows from the fact that, thanks to the bound , the class is weakly* compact. ∎
Remark 2.2.
Following the theory developed in [4] concerning variational integrals with respect to a general measure, the expression of can be equivalently given as
where the Sobolev space and the “tangential gradient” are suitably defined. We refer the interested reader to [4], where the precise definitions and all the details are explained. We will see that for our purposes we do not need these fine tools, since we will obtain that optimal measures for problem (2.3) are actually functions, for which the tangential gradient reduces to the usual gradient and the Sobolev space reduces to the usual Sobolev space .
Before stating our main result, we introduce a slightly technical assumption which ensures a bound on the -norm of the gradient on the boundary of .
Definition 2.3 (External Ball Condition).
A subset satisfies the uniform external ball condition with radius if
We will always require $\Omega$ to be connected, so as to work with the "unique" eigenfunction, which is positive on all of $\Omega$ (see [18, Theorem 1.2.5]) and has fixed $L^2$-norm. Our main result concerning optimization problem (2.3) is below.
Theorem 2.4.
Let be a connected subset of with Lipschitz boundary satisfying the uniform external ball condition. Then the optimization problem (2.3) admits a solution of the form where is a function belonging to for every , equal to zero almost everywhere on the set
and satisfying the identity
(2.4) |
Furthermore,
-
(i)
if is convex we have ;
-
(ii)
if , then there exists such that ;
The proof of the theorem above is given in Section 3. We now consider the relaxation of the optimization problem (2.2), where the connectedness constraint is imposed. In this case, if $(\Sigma_n)$ is a sequence of admissible sets, the limit of (a subsequence of) the associated measures is still a measure supported on a suitable set $\Sigma$. Since the sequence is compact in the Hausdorff convergence, the set $\Sigma$ is closed and connected. In addition, thanks to the Gołąb theorem (see [17], and the books [3], [15]), we have $\mathcal{H}^1(\Sigma) \le \liminf_n \mathcal{H}^1(\Sigma_n)$, hence the set $\Sigma$ verifies the prescribed length bound. Then, introducing the class
the relaxed version of the optimization problem (2.2) reads
(2.5) |
Proposition 2.5.
The relaxed optimization problem (2.5) admits a solution.
Proof.
The proof is similar to that of Proposition 2.1. The map is upper semicontinuous for the weak* convergence, and the existence result then follows from the compactness, with respect to the weak* convergence, of the class. This is a consequence of the compactness, for the Hausdorff convergence, of the class of closed and connected sets, and of the Gołąb theorem, which gives the desired inequality for a weak* limit of a sequence of measures supported on connected sets converging in the Hausdorff sense. ∎
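For the reader's convenience, the lower semicontinuity of the length provided by the Gołąb theorem can be stated as follows:

```latex
\Sigma_n \to \Sigma \ \text{in the Hausdorff distance, with each } \Sigma_n
\ \text{closed and connected}
\quad \Longrightarrow \quad
\mathcal{H}^1(\Sigma) \le \liminf_{n \to \infty} \mathcal{H}^1(\Sigma_n).
```

Note that the connectedness assumption is essential: without it, length can be lost in the limit (think of finitely many segments shrinking to points while drifting apart).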
Again, the question arises whether the optimization problem (2.5) admits a solution that is actually a function supported on a set; this would avoid the use of the delicate theory of variational integrals with respect to a general measure and of the related Sobolev spaces. This is indeed the case, and our main result concerning optimization problem (2.5) is below.
Theorem 2.6.
The optimization problem (2.5) admits a solution of the form , where is a closed connected subset of with and with on .
Remark 2.7.
If we introduce the class of measures
where $k$-connected means that it has at most $k$ connected components, then the proof of Theorem 2.6 easily generalizes to the maximization problem
In particular, there exists a solution of the form where the sets are closed connected subsets of of total length . Moreover, and on its support.
3 Proof of the results
We start by considering the optimization problem (2.3), which is a max-min problem:
Proposition 2.1 gives the existence of an optimal relaxed solution, which is a measure on with . Of course, since the cost above is monotone increasing with respect to , optimal measures will saturate the constraint, so we will have .
The main result of this section asserts that, under mild assumptions on the boundary of , the optimal measures are actually of the form , where is a function that solves the optimization problem
(3.1) |
and is defined by
Furthermore, we will see that the optimal densities satisfy some higher-integrability properties and, if is convex, belong to .
In order to obtain better properties of the optimal measure provided by the existence result seen in Proposition 2.1 it is convenient to consider the optimization problem (2.3) under the stronger constraint that with and . In other words, we consider the class
and the optimization problem
(3.2) |
We still have a max-min problem:
Proposition 3.1.
For every there exists a unique solution of the optimization problem (3.2), given by
where is the unique positive solution with of the auxiliary problem
where is the dual exponent of . Furthermore, the function belongs to and if in addition , then there exists such that . In particular, up to the boundary.
Proof.
Let us denote by and by the functionals
Then problem (3.2) is written as
Interchanging the max and the min above gives the inequality
(3.3) |
The maximum with respect to at the right-hand side above is easily computed and for every fixed this maximum is reached at
Then the right-hand side in (3.3) becomes the auxiliary minimization problem
A straightforward application of the direct methods of the calculus of variations gives the existence of an optimal solution of the auxiliary problem above. Setting
by (3.3) we obtain
The minimum problem on the left-hand side above has as a solution, as can be easily verified by writing the corresponding Euler-Lagrange equation. In addition, we have
so that finally we obtain the equality
which proves the first assertion. In addition, the function verifies the PDE
(3.4) |
The fact that is standard (see Remark 3.6). To prove the Hölder-regularity, notice that, if is of class , then by [21] there exists such that
Next, we can apply Theorem 1.2.12 of [18] to conclude that . The reason is that the coefficient of the PDE in (3.4) is Hölder-continuous with a parameter as a consequence of the definition of itself. ∎
The next result shows that we can estimate uniformly with respect to .
Lemma 3.2.
Let . Then there exists a positive constant depending only on , its volume and such that
Proof.
Since is bounded, for we have
Therefore, we can bound the eigenvalue as follows
The latter does not depend on , and a straightforward application of the direct methods of the calculus of variations shows that the minimum is achieved and it is finite. It follows that
for all , and this concludes the proof. ∎
Before proving higher summability and regularity properties of the solutions of the optimization problem (3.1), we need some uniform (with respect to $p$) estimates of the solutions. An important step is the $\Gamma$-convergence of the related functionals.
3.1 -convergence as of the functions
Let and as before. Consider the family of functionals defined on
and for
For denote by the unique positive minimizer with unitary norm of the corresponding functional .
Definition 3.3 (-convergence).
Let be a metric space. We say that a sequence of functionals
-converges to a functional if the following hold:
-
(i)
for every sequence in converging to some we have (often called the $\Gamma$-liminf inequality)
-
(ii)
for every there is a sequence in converging to such that (often called the $\Gamma$-limsup inequality)
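As a simple one-dimensional illustration of the definition (not from the paper), consider $F_n(x) = x^2 + \sin(nx)$ on $\mathbb{R}$. Its $\Gamma$-limit is

```latex
F(x) = x^2 - 1 ,
```

since the liminf inequality follows from $F_n(x_n) \ge x_n^2 - 1 \to x^2 - 1$ for every $x_n \to x$, while for the limsup inequality one picks points $x_n \to x$ with $\sin(n x_n) = -1$. In particular the $\Gamma$-limit differs from the pointwise limit, which does not exist here.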
Proposition 3.4.
Let be a bounded set with . Then
-
(a)
the sequence of functionals -converges to in ;
-
(b)
the sequence of minima converges strongly in to and
3.2 Uniform estimate of
To find a uniform estimate of the solutions we need to assume that a certain geometric condition is satisfied by $\partial\Omega$, namely that there is a uniform $r > 0$ such that the external ball condition (see Definition 2.3) holds with radius $r$ at all boundary points. Following closely the method developed in [8], we obtain an almost uniform estimate on the norm of the gradient on the boundary, since the norm of the eigenfunction must be taken into account. We first recall a few regularity properties.
Lemma 3.5.
Let be a bounded Lipschitz domain and let . Then
In particular, if is of class then .
Proof.
It follows from [16, Theorem 9.15] and a standard bootstrap argument. ∎
Remark 3.6.
The Sobolev embedding theorem [14] gives us another proof of the fact that and, more precisely, that when is of class ,
The main ingredients behind the uniform estimate of are two weak comparison principles for eigenfunctions and the fact that on a radially symmetric domain the optimal pair is radial (see Lemma 4.1).
Lemma 3.7 (Weak comparison principle).
Let be a bounded connected open set and let be a convex function such that .
-
(i)
Denote by the unique positive solution of
with unitary norm. Then, for any bounded open subset, it turns out that .
-
(ii)
Let be as above. If is the unique positive solution of
with unitary norm, then .
Remark 3.8.
The connectedness assumption ensures that we can choose and to be the unique positive solutions with fixed -norm.
Lemma 3.9.
Let be a bounded open set with boundary in for some . Suppose that satisfies the external ball condition at some with radius . Then there is a constant such that
(3.5) |
where
Proof.
Introduce the auxiliary function
and notice that it satisfies the assumptions of Lemma 3.7. We can assume without loss of generality that the center of the external ball at is the origin (i.e., ) so that, setting , we obtain the inclusion
Let be the solution of
with and -norm equal to one, and let be the solution of
with and the same -norm as above. Using (i) and (ii) of Lemma 3.7 we find that
since is contained in the annulus . The function is radially symmetric, and therefore we can rewrite the equation in polar coordinates:
Let be the point where attains its maximum value and integrate the previous equation to deduce that
and next we estimate the left-hand side using that the radial derivative is positive. Finally, the inclusion property of eigenvalues,
together with Lemma 3.2, allows us to infer that (3.5) holds. ∎
If now $\partial\Omega$ satisfies the uniform external ball condition, we can extend the estimate (3.5) to all boundary points, provided that we replace the radius with the largest possible one.
Lemma 3.10.
Let be a bounded open set with boundary in for some . The following assertions hold:
- (i)
-
(ii)
If satisfies the uniform external ball condition with radius , then
The estimate obtained when is convex is more precise, but it is not enough to infer that the optimal density belongs to through the method presented in this section. In any case, the -norm of appears in the denominator: we will see later that this does not lead to any additional problem.
3.3 Uniform estimate of
The next step is to find a uniform estimate for the -norm of ; more precisely, we prove that
where is a positive constant that depends on but not on . It is worth remarking that eigenfunctions are bounded in , so it is convenient to work with their norm only.
Remark 3.11.
Let . Then
which means that we can always find a uniform estimate of the -norm using the -norm. This will be particularly important in the uniform estimate of .
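A sketch of the inclusion estimate presumably meant in the remark above: on a set of finite measure, Hölder's inequality bounds every $L^q$-norm by the $L^\infty$-norm,

```latex
\|v\|_{L^q(\Omega)} \le |\Omega|^{1/q}\, \|v\|_{L^\infty(\Omega)},
\qquad 1 \le q < \infty ,
```

so any uniform $L^\infty$ bound on the eigenfunctions immediately yields a uniform bound in every $L^q(\Omega)$.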
We are now ready to prove the a priori estimate for . The key lemma below is due to De Pascale, Evans and Pratelli [11], and the proof is adapted to the eigenvalue case in the spirit of [8].
Lemma 3.12 (De Pascale-Evans-Pratelli).
Let be a bounded open set with smooth boundary and let be arbitrary. Let be a convex function with and let
be the unique positive solution of
(3.6) |
where depends on and only, satisfying on and with unitary -norm. Then for every we have the following estimate:
where is the mean curvature of with respect to the outer normal and is the volume of the set .
Proof.
The smoothness assumption implies that the solution to (3.6) is smooth up to the boundary, that is, . Set
and use as a test function for the equation (3.6). Using the integration by parts formula on the left-hand side, we find the identity
(3.7) |
The right-hand side of the equality can be easily estimated via the Hölder inequality and Remark 3.11 as:
To estimate the integral we can use as test function for the equation (3.6); it turns out that
(3.8) |
Integrating by parts twice the left-hand side leads to the following chain of equalities,
where and are, respectively, the first-order and second-order derivatives in the direction of the exterior normal to and we introduce the short notation . It follows that
where the -norm associated to the Hessian matrix is given by
Since is smooth up to the boundary of , it is easy to verify that the Laplace operator can be decomposed as
where denotes the mean curvature of . This immediately implies the estimate
(3.9) |
In fact, it is easy to verify that
since the -norm is positive by definition and follows from the convexity assumption on . We now plug (3.9) into (3.8) and use the Hölder inequality with and to obtain the following estimate:
The conclusion follows by combining the inequalities discovered so far with the identity (3.7) and applying repeatedly the Young inequality
$$ab \le \varepsilon a^2 + \frac{b^2}{4\varepsilon},$$
which is valid for every $\varepsilon > 0$. In particular, it turns out that
and this concludes the proof of the lemma. ∎
The De Pascale-Evans-Pratelli lemma holds for domains with smooth boundary, so before passing to the uniform estimate of we need to present an approximation argument that allows us to use domains with Lipschitz boundary that only satisfy the uniform external ball condition.
Lemma 3.13.
Let be a bounded open set satisfying the uniform external ball condition and let be a sequence of open sets, with and such that
Fix and let be the minimizer of on and the minimizers of on , all positive with fixed -norm equal to one and extended by zero outside and respectively. Then
strongly in both and .
This is proved in [8, Lemma 3.9] for the energy problem, but the same arguments apply to the eigenvalue case. We finally have all the ingredients we need to obtain a uniform estimate of :
Proposition 3.14.
Let be a set of finite perimeter satisfying the uniform external ball condition with radius . For every , there are constants
satisfying the inequality
(3.10) |
Proof.
We first suppose that is smooth. Set
and notice that is the minimizer of the functional
with -norm equal to one. It is easy to verify that is the derivative, that is,
and, since by smoothness of , it follows from De Pascale-Evans-Pratelli (Lemma 3.12) that
It follows from the definition of that is equal to , which is uniformly bounded (Lemma 3.2) by a positive constant . We denote by and the sets
and notice that
We now estimate separately the three terms on the right-hand side of the inequality above. The first one gives
For the third term we can use the estimate given in Lemma 3.10 to obtain
Now take in the initial inequality and rearrange the terms in such a way that the following holds:
where
Now notice that the same inequality holds for any as in the statement since we can approximate it by a sequence of smooth sets satisfying the assumptions of Lemma 3.13 and also
Furthermore, since is the limit of , we can always find a small positive number such that the following holds:
Then the estimate above shows that
Finally, from the inequality and applying once again the estimate
the conclusion follows. ∎
Remark 3.15.
The estimate (3.10) is not uniform with respect to since the constant depends on and, while it is true that we can write
it is also easy to show that
The reason is that there is a term multiplied by a positive constant that is linear with respect to , namely
Thus, even if we assume that is convex, we cannot get rid of this term, because it does not come from the boundary part. In Section 3.5 we show an alternative approach to the problem that allows us to achieve when is convex, but it requires regularity results from optimal transport theory.
3.4 Proof of Theorem 2.4: -regularity for
We are now ready to use all the results we collected so far to give a proof of the main theorem except for (i) that will be proved in the next section.
Proof of Theorem 2.4.
For , let be the positive minimizer of with fixed -norm (equal to one) and let be given by
Since is admissible, using Proposition 3.14 we can find a constant such that for small enough we have
for some . It follows that is uniformly bounded in and hence, up to subsequences, it converges weakly to a nonnegative function . Since
we easily infer that is admissible. On the other hand, Proposition 3.4 asserts that converges strongly in to the minimum of , which means that
for every . Moreover, it is easy to check that
which implies that is a solution of the equation
for , with Dirichlet boundary condition on . An application of the integration by parts formula shows that
The strong convergence of to in and Proposition 3.4 imply that
which means that
Finally the general min-max inequality shows that
and this is enough to conclude that is a solution of the maximization problem (2.3) since it is an admissible competitor.
The regularity of the optimal density follows from the fact that for arbitrarily large (since is a bounded-volume set). It is now trivial to show that is equal to zero almost everywhere on the set
while (2.4) follows from the min-max inequality which gives an equality evaluated at the optimal couple. Finally, the assertion (ii) follows from the regularity of the minimizer of as in Proposition 3.1. ∎
3.5 Proof of Theorem 2.4: -regularity for convex
Let be a bounded open set which is either convex or with a boundary of class . Instead of relaxing the maximization problem to (3.1), we can study
(3.11) |
where is the class defined in (2.3). Set
and consider the functional
(3.12) |
Notice that there are no measures in such that . This is a significant advantage over the energy problem and can be checked easily since
Proposition 3.16.
Proof.
First, we notice that for every fixed the map is continuous with respect to the weak* convergence. Hence, being the infimum of continuous maps, it is weakly* upper semi-continuous. Since we observed already that is weakly* compact and nonempty we infer that (3.12) admits a solution . To prove the other claims, observe that
is always true, but the equality does not hold a priori since we lack concavity of with respect to the first variable . Nevertheless, we have
and it is easy to check that this is achieved using any measure with total mass equal to and support satisfying (3.13). Denote any one of them by and notice that implies
This shows that we can interchange infimum and supremum and therefore the claim (3.14) holds, concluding the proof. ∎
Now that we know the existence of the optimal measure , we can equivalently investigate the minimization problem associated with the functional
satisfying the additional constraint . The proof of the next result follows immediately from [13]. Notice that what we find out here is compatible with the investigation we carried out in Section 3.
Theorem 3.17.
The optimization problem
admits a unique solution for all . If, in addition, is convex, then .
We can now show that the optimal measure belongs to spaces for arbitrary, as in Section 3.4, and show that we can obtain a uniform estimate if convex concluding the proof of Theorem 2.4.
Proof of Theorem 2.4 (i).
Let be the optimal measure given by Proposition 3.16. A standard result in elliptic regularity theory (e.g., [5]) implies
for some . Note that here we use the fact that is either regular ( and thus depends on ) or convex. By Theorem 3.17, we infer that
Thus and solve the problem
The right-hand side of the equation belongs to so we deduce that also for all using the regularity results for the Monge-Kantorovich problem given in [11, 12, 22]. Similarly, we have
and we can apply once more [11, 12, 22] to conclude that . ∎
Remark 3.18.
The approach via $\Gamma$-convergence and the approach presented in this section are both needed to prove Theorem 2.4. Indeed, the estimate on is impossible to obtain via $\Gamma$-convergence, even if we assume that is convex. On the other hand, the Monge-Kantorovich approach requires the boundary to be either convex or regular even for the higher integrability result, which via $\Gamma$-convergence only requires the uniform external ball condition.
4 The radial case
Let $\Omega$ be the unit disc of $\mathbb{R}^2$. In this section we exploit the symmetries of the domain to show that the solution of (2.3) is a radially symmetric function with an explicit formula. First, we prove a technical lemma which allows us to use polar coordinates to deal with the min-max problem.
Lemma 4.1.
The optimal density solution of (2.3) and the corresponding optimal profile are both radially symmetric functions.
Proof.
Let be the function given in Proposition 3.1. Then is the unique solution (with fixed -norm) of the minimization problem
Now recall that the Steiner symmetrization [20, Chapter 7] of a function , denoted by , satisfies the Pólya-Szegö’s inequality
for all (see [18, Theorem 2.2.4]). The unit ball is symmetric, so it coincides with its symmetrization, and the norm of coincides with that of , so from the inequality
we infer that each is radially symmetric. On the other hand, we proved in Proposition 3.4 that converges strongly in to as and therefore, up to subsequences, we can assume that converges almost everywhere to . Thus
Finally, if we choose to be the solution with -norm equal to one, then is the unique maximizer of the functional
The function is radial so we can apply (iv) of [18, Theorem 2.2.4] and obtain
which allows us to conclude that is also radially symmetric. ∎
Now fix for simplicity and notice that the optimal profile and the optimal density satisfy the elliptic equation
(4.1) |
so, exploiting the fact that they are both radially symmetric, we can write
(4.2) |
Since by Theorem 2.4 the support of is contained in the set where achieves its maximum value, it is easy to see that there exists such that
Notice that we can avoid placing a constant in front of because we are using that is unique when the -norm is fixed (although we do not care about the actual value here). It follows from (4.2) that
and this leads to an ordinary differential equation in which admits an explicit solution that depends on and , that is,
Suppose that the value of is known. We can find the optimal value by exploiting the integral condition on . Namely, we know that so
This leads to
(4.3) |
which admits a unique solution in the interval provided that is bigger than or equal to the minimum of , which is true for the one given in (4.1).
Remark 4.2.
In the energy problem with , one can prove (see Example 5.1 of [8]) that is the unique solution of the polynomial equation
Observe that for the unique solution of the equation is , and this is compatible with the fact that there is no reinforcement at all. The same is true in the eigenvalue case, but it cannot be inferred from (4.3) since the relation holds only when a density appears.
We can now recover , the optimal profile, completely using the boundary condition naturally arising from the decomposition and the Neumann condition at the origin. Namely, it is easy to verify that
where $J_0$ and $Y_0$ are the Bessel functions of order zero of the first and second kind, respectively. To find the constants we simply notice that, by continuity,
and, similarly, the Neumann condition gives
Since and , the second condition is satisfied if and only if . This immediately shows that
and therefore the optimal profile is given by
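The role of the two Bessel functions in the formula above can be checked numerically. The hedged sketch below (assuming, as stated above, that the radial profile is built from the order-zero functions $J_0$ and $Y_0$) verifies that $J_0'(0) = -J_1(0) = 0$, so $J_0$ alone satisfies the Neumann condition at the origin, while $Y_0$ blows up there and must be discarded.

```python
import numpy as np
from scipy.special import j0, j1, y0

# J0'(r) = -J1(r): the Neumann condition u'(0) = 0 holds for J0 ...
print(j1(0.0))          # J0'(0) = -J1(0) = 0
# ... while Y0 is singular at the origin: Y0(r) ~ (2/pi) log(r/2) -> -inf
print(y0(1e-12))        # large negative value
# sanity check of the derivative identity at a generic point
r, eps = 0.7, 1e-6
numeric = (j0(r + eps) - j0(r - eps)) / (2 * eps)
print(np.isclose(numeric, -j1(r), atol=1e-6))
```

This is consistent with the conclusion that the coefficient of $Y_0$ must vanish in the region containing the origin.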
We now show the shape of the optimal density (see Figure 1) via a numerical analysis. The general idea is to fix an admissible (bigger than ) and recover the unique from the minimum problem
Finally the length is determined starting from the identity (4.3). The numerical simulation confirms the regularity result in Theorem 2.4 because turns out to be the unique one for which
holds. We would like to point out the main difference with the energy problem. In Example 5.1 of [8] it was proved that the optimal density is linear,
while in our case the optimal density is not linear (it depends on and ) and, coherently with this dependence, it is not strictly increasing but rather
[Figure 1: numerical shape of the optimal density.]
5 The connected case
We now consider the maximization problem (2.2), in which the stiffener ranges in the class of closed, connected, one-dimensional subsets of $\overline\Omega$. We follow closely the method introduced in [2] for the same optimal reinforcement problem when an external force acts on the membrane, and we show that a small modification of the main proof is enough to reach the same conclusion in the eigenvalue problem.
5.1 Proof of Theorem 2.6
In Proposition 2.5 we proved that there exists a solution , in the class , to the maximization problem (2.5). It only remains to show that there is some such that . The following technical result was proved in [2, Lemma 3.3].
Lemma 5.1.
Let be a compact set in with . For all there exists a function of class satisfying the following properties:
-
(1)
is locally constant on ;
-
(2)
at all points ;
-
(3)
for all and everywhere except in an open set of measure less than containing .
We shall now prove that the optimal measure is absolutely continuous with respect to . In [2, Lemma 3.4] the same result is obtained in the energy problem, so it is sufficient to estimate the denominator and conclude in the same way.
Proof of Theorem 2.6.
The main difference with the mentioned paper is that we will show that for all and there exists such that
where is the absolutely continuous part of . Indeed, when is small enough we can use Taylor approximation theorem to infer that
where is a positive constant. Now let
where and is a cut-off function identically one on and zero on the complement of . In [2, Lemma 3.4] it was proved that
so we only have to deal with the denominator. A straightforward computation, assuming that , shows that
which immediately leads to the following estimate. If we denote , then we find that
Therefore, to conclude the proof, it suffices to set and continue as in [2, Lemma 3.4]. ∎
5.2 Indirect method and boundary points
Let and let be a solution of the minimization problem
(5.1) |
Then for each we have
which is easily seen to be equivalent to
Since minimizes the functional in (5.1) we can substitute it with to obtain the following identity which is valid for all ,
The integration by parts formula shows that
where does not depend on the choice of an orientation and are, respectively, the positive and negative derivatives of on . Finally, since
we can integrate by parts the second term and obtain
where is the Laplace-Beltrami operator on , and is the set of terminal-type and branching-type points of .
Proposition 5.2.
If is a minimum point of (5.1), then it solves the following boundary-value problem:
For points in the following three situations are all possible and, in the general case, we expect all of them to appear:
-
(i)
Dirichlet. If , then .
-
(ii)
Neumann. If is a terminal point of , then .
-
(iii)
Kirchhoff. If is a branching point of , then
where is the trace of over the -th branch of ending at and the corresponding tangent vector.
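In hedged notation of our own (the symbols below are not fixed by the text: $u$ the optimal profile, $S$ the one-dimensional set, $u_i$ and $\tau_i$ the trace and tangent vector on the $i$-th branch), the three alternatives can be summarized schematically as

```latex
\begin{cases}
u = 0 & \text{at Dirichlet points of } S \cap \partial\Omega,\\[2pt]
\partial_\tau u = 0 & \text{at terminal (Neumann) points of } S,\\[2pt]
\sum_i \partial_{\tau_i} u_i = 0 & \text{at branching (Kirchhoff) points of } S.
\end{cases}
```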
As a consequence, combining Proposition 5.2 and Theorem 2.6, the analogue of [2, Proposition 4.1] can be proved in the same way.
Proposition 5.3.
Let be a solution of (2.5) and let be the unique positive solution of the associated minimization problem with -norm fixed. Then there exists a positive constant such that
6 Conclusion and open problems
In this section we present and discuss some open questions related to the optimization problems we considered.
Problem 1.
A first problem is related to the regularity of solutions. We have shown (Theorem 2.4) that when is regular enough the optimization problem (2.3) admits a solution that is indeed a function for a suitable . It would be interesting to know whether additional regularity properties of hold in general. Similarly, the optimization problem (2.5) admits a solution of the form for a suitable closed connected set and a function , with and on . Even though one may expect and to be regular enough, at present these regularity results are not available and seem rather difficult to obtain. In particular, it would be interesting to prove (or disprove) the regularity of the optimal set up to a finite number of branching points, where the Kirchhoff rule holds.
Problem 2.
For the optimal set of problem (2.5) several necessary conditions of optimality deserve to be investigated; for instance, we list the following questions, which resemble those studied for other problems in the fields of optimal transport and structural mechanics (see [7], [9], [10]).
-
(a)
Does contain closed loops (i.e. subsets homeomorphic to the circle )? We expect this not to be the case, even though a complete proof is missing.
-
(b)
Do the branching points of (if any) have only three branches, or is a higher number of branches possible?
-
(c)
Does the optimal set always intersect the boundary ?
-
(d)
Is it possible that , or do we always have , and hence somewhere on ?
Problem 3.
As stated in the Introduction, passing from a single connected set to sets with at most connected components (with fixed a priori) does not introduce essential differences in the statements or in the proofs. However, it would be interesting to establish whether, in the case when connected components are allowed, the optimal set actually has exactly components.
Problem 4.
Finally, the numerical treatment of the optimization problems we considered presents several difficulties, essentially due to the fact that a very large number of local maxima may occur, while global optimization algorithms are usually too slow for this kind of problem. In the case of energy optimization, considered in [2] and [8], some efficient optimization methods have been implemented, but the eigenvalue optimization considered in the present paper seems to present a higher level of complexity.
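As a purely illustrative sketch (not the methods implemented in [2] or [8]), the basic building block of any such numerical scheme is the computation of the first Dirichlet eigenvalue on a discretized domain. All names below are our own, and the unit square stands in for a general domain, without stiffeners:

```python
# Hypothetical illustration: first Dirichlet eigenvalue of -Laplacian on the
# unit square, via the standard 5-point finite-difference discretization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def first_dirichlet_eigenvalue(n=30):
    """Smallest eigenvalue of the discrete -Laplacian on (0,1)^2,
    zero boundary data, n x n interior grid points."""
    h = 1.0 / (n + 1)                       # grid spacing (interior nodes only)
    main = 2.0 * np.ones(n)
    off = -np.ones(n - 1)
    T = sp.diags([off, main, off], [-1, 0, 1]) / h**2   # 1D operator -d^2/dx^2
    I = sp.identity(n)
    A = sp.kron(T, I) + sp.kron(I, T)       # 2D Laplacian as a Kronecker sum
    # shift-invert around 0 targets the smallest eigenvalue
    lam = eigsh(A.tocsc(), k=1, sigma=0, return_eigenvectors=False)[0]
    return lam

lam1 = first_dirichlet_eigenvalue()
# the exact first eigenvalue on the unit square is 2*pi^2 ~ 19.739,
# and the discrete eigenvalue converges to it from below
```

In a shape-optimization loop this computation would be repeated for each candidate one-dimensional set, which is precisely where the many local maxima mentioned above make the global search expensive.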
Acknowledgements
The work of the first author is part of the project 2017TEXA3H “Gradient flows, Optimal Transport and Metric Measure Structures” funded by the Italian Ministry of University and Research. The first author is a member of the “Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni” (GNAMPA) of the “Istituto Nazionale di Alta Matematica” (INdAM).
References
- [1]
- [2] G. Alberti, G. Buttazzo, S. Guarino Lo Bianco, E. Oudet: Optimal reinforcing networks for elastic membranes. Netw. Heterog. Media, 14 (3) (2019), 589–615.
- [3] L. Ambrosio, P. Tilli: Selected topics on analysis in metric spaces. Oxford Lecture Series in Mathematics and its Applications 25, Oxford University Press, Oxford, 2004.
- [4] G. Bouchitté, G. Buttazzo, P. Seppecher: Energies with respect to a measure and applications to low dimensional structures. Calc. Var. Partial Differential Equations, 5 (1996), 37–54.
- [5] H. Brezis, G. Stampacchia: Sur la régularité de la solution d’inéquations elliptiques. Bull. Soc. Math. France, 96 (1968), 153–180.
- [6] D. Bucur, G. Buttazzo: Variational methods in shape optimization problems. Progress in Nonlinear Differential Equations and Their Applications 65, Birkhäuser, Basel, 2005.
- [7] G. Buttazzo, E. Oudet, E. Stepanov: Optimal transportation problems with free Dirichlet regions. In “Variational Methods for Discontinuous Structures”, Progr. Nonlinear Differential Equations Appl. 51, Birkhäuser, Basel, 2002, 41–65.
- [8] G. Buttazzo, E. Oudet, B. Velichkov: A free boundary problem arising in PDE optimization. Calc. Var. Partial Differential Equations, 54 (2015), 3829–3856.
- [9] G. Buttazzo, E. Stepanov: Optimal transportation networks as free Dirichlet regions for the Monge-Kantorovich problem. Ann. Sc. Norm. Super. Pisa Cl. Sci., 2 (4) (2003), 631–678.
- [10] A. Chambolle, J. Lamboley, A. Lemenant, E. Stepanov: Regularity for the optimal compliance problem with length penalization. SIAM J. Math. Anal., 49 (2) (2017), 1166–1224.
- [11] L. De Pascale, L.C. Evans, A. Pratelli: Integral estimates for transport densities. Bull. London Math. Soc., 36 (2004), 383–395.
- [12] L. De Pascale, A. Pratelli: Regularity properties for the Monge transport density and for solutions of some shape optimization problem. Calc. Var. Partial Differential Equations, 14 (2002), 249–274.
- [13] L.C. Evans: A second order elliptic equation with gradient constraint. Comm. Partial Differential Equations, 4 (1979), 555–572.
- [14] L.C. Evans: Partial Differential Equations. Graduate Studies in Mathematics 19, American Mathematical Society, Providence, 2010.
- [15] K.J. Falconer: The geometry of fractal sets. Cambridge Tracts in Mathematics 85, Cambridge University Press, Cambridge, 1986.
- [16] D. Gilbarg, N.S. Trudinger: Elliptic partial differential equations of second order. Classics in Mathematics, Springer-Verlag, Berlin, 2001.
- [17] S. Gołąb: Sur quelques points de la théorie de la longueur (On some points of the theory of length). Ann. Soc. Polon. Math., 7 (1929), 227–241.
- [18] A. Henrot: Extremum Problems for Eigenvalues of Elliptic Operators. Frontiers in Mathematics, Birkhäuser, Basel, 2006.
- [19] A. Henrot, M. Pierre: Shape Variation and Optimization: A Geometric Analysis. EMS Tracts in Mathematics 28, European Mathematical Society, Zürich, 2018.
- [20] S.G. Krantz, H.R. Parks: The Geometry of Domains in Space. Birkhäuser Advanced Texts, Birkhäuser, Basel, 1999.
- [21] G.M. Lieberman: Boundary regularity for solutions of degenerate elliptic equations. Nonlinear Anal. Theory Methods Appl., 12 (1988), 1203–1219.
- [22] F. Santambrogio: Absolute continuity and summability of transport densities: simpler proofs and new estimates. Calc. Var. Partial Differential Equations, 36 (2009), 343–354.
Giuseppe Buttazzo:
Dipartimento di Matematica, Università di Pisa
Largo B. Pontecorvo 5, 56127 Pisa - ITALY
giuseppe.buttazzo@unipi.it
http://www.dm.unipi.it/pages/buttazzo/
Francesco Paolo Maiale:
Scuola Normale Superiore
Piazza dei Cavalieri 7, 56126 Pisa - ITALY
francesco.maiale@sns.it
https://poisson.phc.dm.unipi.it/~fpmaiale/