A Crowding Distance That Provably Solves the Difficulties of the NSGA-II in Many-Objective Optimization
Abstract
Recent theoretical works have shown that the NSGA-II can encounter enormous difficulties on problems with more than two objectives. In contrast, algorithms like the NSGA-III or SMS-EMOA, which differ from the NSGA-II only in the secondary selection criterion, provably perform well in these situations.
To remedy this shortcoming of the NSGA-II, while keeping the advantages of the widely accepted crowding distance, we use the insights of these previous works to define a variant of the crowding distance, called the truthful crowding distance. Different from the classic crowding distance, it has, for any number of objectives, the desirable property that a small crowding distance value indicates that some other solution has a similar objective vector.
Building on this property, we conduct mathematical runtime analyses for the NSGA-II with truthful crowding distance. We show that this algorithm can solve the many-objective versions of the OneMinMax, COCZ, LOTZ, and OJZJ problems in the same (polynomial) asymptotic runtimes as the NSGA-III and the SMS-EMOA. This contrasts with the exponential lower bounds previously shown for the classic NSGA-II. For the bi-objective versions of these problems, our NSGA-II has a performance similar to that of the classic NSGA-II, gaining however from smaller admissible population sizes. For the bi-objective OneMinMax problem, we also observe a (minimally) better performance in approximating the Pareto front.
These results suggest that our truthful version of the NSGA-II has the same good performance as the classic NSGA-II in two objectives, but can resolve the drastic problems in more than two objectives.
1 Introduction
In many practical applications, the problems to be solved have several, often conflicting objectives. Since such problems often do not have a single optimal solution, one resorts to computing a set of diverse good solutions (ideally so-called Pareto optima) and lets a human decision maker take the final decision among these.
One of the most successful algorithms for computing such a set of solutions for a multi-objective optimization problem is the Non-dominated Sorting Genetic Algorithm II (NSGA-II) by Deb et al. [DPAM02], currently cited more than 50,000 times according to Google Scholar.
While it was always known that the performance of this algorithm becomes weaker with increasing numbers of objectives – this was the main motivation for Deb and Jain [DJ14] to propose the NSGA-III –, very recent mathematical analyses of multi-objective evolutionary algorithms (MOEAs) could quantify and obtain a deeper understanding of this shortcoming. In [ZD23b], it was proven that the NSGA-II with any population size linear in the Pareto front size cannot optimize the simplistic OneMinMax benchmark in subexponential time when the number of objectives is at least three (for two objectives, a small polynomial runtime guarantee was proven by Zheng and Doerr [ZD23a]). In contrast, for the NSGA-III and the SMS-EMOA, two algorithms differing from the NSGA-II only in that the crowding distance is replaced by a different secondary selection criterion, polynomial runtime guarantees could be proven for OneMinMax and several other benchmarks in any (constant) number of objectives [WD23, ODNS24, ZD24b, WD24]. This different optimization behavior suggests that it is the crowding distance which is the root of the problems of the NSGA-II in higher numbers of objectives.
The NSGA-II is by far the dominant MOEA in practice, clearly beating the NSGA-III (cited less than 6,000 times according to Google Scholar) and the SMS-EMOA (cited less than 2,200 times). Speculating that practitioners prefer working with a variant of the NSGA-II rather than switching to a different algorithm (and also noting that the NSGA-III and SMS-EMOA have some known shortcomings the NSGA-II does not have), in this work we propose to use the NSGA-II unchanged apart from a mild modification of the crowding distance. This change again builds on insights from Zheng and Doerr [ZD23b]. We defer the technical details to a separate section below and state here only that we call our crowding distance the truthful crowding distance since we feel that it better reflects how close a solution is to others.
Given that we build mostly on previous works of mathematical nature, and strongly profited from the precision of such results, we analyze the NSGA-II with truthful crowding distance also via mathematical means. Our main results are the following. (i) For the standard many-objective benchmarks mOneMinMax, mCOCZ, mLOTZ, and mOJZJ, the NSGA-II with truthful crowding distance computes the whole Pareto front efficiently, in asymptotically the same time as the NSGA-III or the SMS-EMOA. This demonstrates clearly that it was indeed a weakness of the original crowding distance that led to the drastic problems observed for the NSGA-II in many-objective optimization. (ii) For the bi-objective versions of these benchmarks, for which the NSGA-II was efficient, we show that the NSGA-II with truthful crowding distance is equally efficient, and this already for population sizes equal to the Pareto front size of the problem, whereas the previous results needed a population size larger by a constant factor. (iii) We also regard the problem of approximating the Pareto front when the population size is too small to cover the full Pareto front. Here we show that our NSGA-II with sequential selection, in an analogous way to the sequential version of the classic NSGA-II, computes good approximations to the Pareto front of the bi-objective OneMinMax problem (the approximation quality is minimally better for our algorithm).
In summary, these results show that the NSGA-II with truthful crowding distance overcomes the difficulties of the classic NSGA-II in many-objective optimization, but preserves its good performance in bi-objective optimization.
2 Preliminaries
In this work, we discuss variants of the NSGA-II, the most prominent multi-objective evolutionary algorithm and one of the most successful approaches to solve multi-objective optimization problems.
A multi-objective optimization problem is a tuple $f = (f_1, \dots, f_m)$ of functions defined on a common search space. As common in discrete evolutionary optimization, we always consider the search space $\{0,1\}^n$ of bit strings of length $n$. We call $n$ the problem size. When using asymptotic notation, this will be with respect to $n$ tending to infinity.
Also always, our goal will be to maximize $f$. Since the individual objectives $f_i$ might be conflicting, we usually do not have a single solution maximizing all objectives. In this case, the best we can hope for are solutions that are not strictly dominated by others. We say that a solution $x$ dominates a solution $y$, written as $x \succeq y$, if $f_i(x) \ge f_i(y)$ for all $i \in \{1, \dots, m\}$. If in addition one of these inequalities is strict, we speak of strict domination, denoted by $x \succ y$. The Pareto set of a problem is the set of solutions that are not strictly dominated by another solution; the set of their objective values is called the Pareto front.
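The dominance relations just defined are easy to state in code. The following is an illustrative sketch (ours, not from the paper) for maximization, together with a brute-force computation of the Pareto front of a small set of objective vectors.

```python
# Weak and strict Pareto dominance for maximization, plus a brute-force
# Pareto front of a small set of objective vectors (tuples).

def dominates(fx, fy):
    """fx weakly dominates fy: at least as good in every objective."""
    return all(a >= b for a, b in zip(fx, fy))

def strictly_dominates(fx, fy):
    """Weak dominance plus strictly better in at least one objective."""
    return dominates(fx, fy) and any(a > b for a, b in zip(fx, fy))

def pareto_front(points):
    """All objective vectors not strictly dominated by another vector."""
    return [p for p in points
            if not any(strictly_dominates(q, p) for q in points)]
```

For instance, among the vectors (3, 1), (1, 3), (2, 2), and (1, 1), only the last one is strictly dominated, so the first three form the Pareto front of this small set.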
A common solution concept for multi-objective problems is to compute a small set $P$ of solutions such that $f(P)$ is the Pareto front or approximates it in some sense. The idea is that a human decision maker, based on preferences not included in the problem formulation, can then make the final selection from this set.
For this problem, evolutionary algorithms have been employed with great success [CLvV07, ZQL+11]. The by far dominant algorithm among these multi-objective evolutionary algorithms (MOEAs) is the non-dominated sorting genetic algorithm II (NSGA-II) proposed by Deb et al. [DPAM02]. This algorithm works with a population, initialized randomly, of fixed size $N$. Each iteration of the main optimization loop consists of creating $N$ new solutions from these parents (“offspring”) and selecting from the combined parent and offspring population the next population of $N$ individuals.
Various ways of creating the offspring have been used. We shall regard random parent selection (each offspring is generated from randomly chosen parents) and fair parent selection (only with mutation; here from each parent one offspring is generated via mutation), 1-bit and bit-wise mutation with mutation rate $1/n$, and uniform crossover. When using crossover, we assume that there is a positive constant $p_c < 1$ (crossover rate) such that in each iteration, with probability $p_c$ an offspring is created via crossover, else via mutation. Binary tournament parent selection has also been studied, but the existing mathematical results, see, e.g., [ZD23a], suggest that it does not lead to substantially different results, but only to more complicated analyses.
More important and characteristic for the NSGA-II is the selection of the next parent population. The most important selection criterion is non-dominated sorting, that is, the combined parent and offspring population $R_t$ is partitioned into fronts $F_1, F_2, \dots$ such that $F_i$ consists of all non-dominated elements (that is, elements not strictly dominated by another one) of $R_t \setminus (F_1 \cup \dots \cup F_{i-1})$. Individuals in an earlier front are preferred in the selection of the next population, that is, for the maximum $i^*$ such that $F_1 \cup \dots \cup F_{i^*}$ contains fewer than $N$ elements, these fronts all fully go into the next population. The remaining elements are selected from the critical front $F_{i^*+1}$ using a secondary criterion, which is the crowding distance for the NSGA-II. We defer the precise definition of the crowding distance to the subsequent section. We note that the non-dominated-sorting partition is uniquely defined and can be computed in time quadratic in the population size. While the crowding distance depends on how ties in the sortings are broken, the crowding distance of all individuals in a front can be computed very efficiently, in time $O(m N \log N)$.
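The survival selection just described can be sketched compactly. The following is our illustrative Python version (not the paper's Algorithm 1), with a naive quadratic non-dominated sorting and a pluggable secondary criterion.

```python
# Survival selection of the NSGA-II family: non-dominated sorting, then
# filling the next population front by front; the critical front is
# truncated by a secondary criterion (larger values are preferred).

def strictly_dominates(fx, fy):
    return all(a >= b for a, b in zip(fx, fy)) and fx != fy

def non_dominated_sorting(F):
    """F: list of objective vectors. Returns fronts as lists of indices."""
    fronts, rest = [], list(range(len(F)))
    while rest:
        front = [i for i in rest
                 if not any(strictly_dominates(F[j], F[i]) for j in rest)]
        fronts.append(front)
        rest = [i for i in rest if i not in front]
    return fronts

def survival_selection(F, N, crowding):
    """Select N indices; `crowding(front, F)` maps each index of the
    critical front to its secondary-criterion value."""
    chosen = []
    for front in non_dominated_sorting(F):
        if len(chosen) + len(front) <= N:
            chosen.extend(front)
        else:
            cd = crowding(front, F)
            ranked = sorted(front, key=lambda i: cd[i], reverse=True)
            chosen.extend(ranked[:N - len(chosen)])
            break
    return chosen
```

Any crowding measure (the classic crowding distance, the truthful one, or the hypervolume contribution) can be plugged in as the `crowding` argument.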
The pseudocode of the NSGA-II can be found in Algorithm 1, where we note that the presentation here is optimized for a uniform treatment of this NSGA-II and the variant with sequential selection to be discussed now.
Noting that the removal of an individual changes the crowding distance of the remaining individuals, Kukkonen and Deb [KD06] proposed to take into account this change, that is, to sequentially remove individuals with smallest crowding distance and update the crowding distance of the remaining individuals. This was shown to give superior results in the empirical study [KD06] and the mathematical analysis [ZD24a].
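The sequential variant differs only in that the critical front is reduced one individual at a time, with the crowding values recomputed after each removal. A sketch (our illustration, parameterized over an arbitrary crowding measure `cd_fn`):

```python
# Sequential truncation in the spirit of [KD06]: repeatedly remove one
# individual with the smallest crowding value and recompute the values
# on the reduced front.

def sequential_truncate(front, F, keep, cd_fn):
    """front: list of indices into the objective vectors F; reduce it to
    `keep` individuals using the crowding measure cd_fn(front, F)."""
    front = list(front)
    while len(front) > keep:
        cd = cd_fn(front, F)           # recompute after every removal
        worst = min(front, key=lambda i: cd[i])
        front.remove(worst)
    return front
```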
3 Classic and Truthful Crowding Distance
In this section, we first describe the original crowding distance used in the NSGA-II of Deb et al. [DPAM02] and compare it with other ways to select a subset of individuals from the critical front of a non-dominated sorting (secondary selection criterion). This comparison motivates the development of a modification of the crowding distance, called truthful crowding distance, done in the second half of this section.
3.1 Original Crowding Distance
When selecting the next population, the NSGA-II, NSGA-III, and SMS-EMOA first perform non-dominated sorting, resulting in a partition of the combined parent and offspring population into fronts $F_1, F_2, \dots$ of pair-wise non-dominated individuals. For a suitable number $i^*$, the first $i^*$ fronts are all taken into the next population; from the critical front $F_{i^*+1}$, a subset is selected according to a secondary criterion.
For the NSGA-II, this secondary criterion is the crowding distance. The crowding distance of an individual in a set $S$ is the sum, over all objectives, of the normalized distances of the two neighboring objective values. Formally, let $m$ be the number of objectives and let $S$ be the set of individuals. For each $i \in \{1, \dots, m\}$, let $S_i$ be the sorted list of $S$ in descending order of $f_i$. How ties in these sortings are broken has to be specified by the algorithm designer; we do not make any particular assumptions on that issue. For $x \in S$, we denote by $\mathrm{pos}_i(x)$ its position in the sorted list w.r.t. $f_i$, that is, $x = S_i[\mathrm{pos}_i(x)]$. The crowding distance of $x$ then is
\[
\mathrm{cDis}(x) = \sum_{i=1}^{m} \mathrm{cDis}_i(x), \qquad
\mathrm{cDis}_i(x) =
\begin{cases}
\infty, & \text{if } \mathrm{pos}_i(x) \in \{1, |S|\},\\[4pt]
\dfrac{f_i(S_i[\mathrm{pos}_i(x)-1]) - f_i(S_i[\mathrm{pos}_i(x)+1])}{\max_{y \in S} f_i(y) - \min_{y \in S} f_i(y)}, & \text{otherwise.}
\end{cases}
\tag{1}
\]
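A direct implementation of (1) may read as follows. This is our illustrative sketch, assuming maximization, descending sortings, and arbitrary stable tie-breaking.

```python
# Classic crowding distance of equation (1): per objective, boundary
# individuals get an infinite value; interior individuals accumulate the
# normalized difference of their two neighbors' objective values.

import math

def crowding_distance(front, F):
    """front: list of indices; F: list of objective vectors. Returns a
    dict mapping each index to its crowding distance."""
    m = len(F[front[0]])
    cd = {i: 0.0 for i in front}
    for k in range(m):
        order = sorted(front, key=lambda i: F[i][k], reverse=True)
        hi, lo = F[order[0]][k], F[order[-1]][k]
        cd[order[0]] = cd[order[-1]] = math.inf   # boundary individuals
        if hi == lo:
            continue
        for pos in range(1, len(order) - 1):
            cd[order[pos]] += (F[order[pos - 1]][k] - F[order[pos + 1]][k]) / (hi - lo)
    return cd
```

On four points evenly spaced on a line segment of the objective space, the two extreme points receive an infinite value and the two interior points a crowding distance of 4/3 each.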
The simple and intuitive definition of the crowding distance puts it ahead of other secondary criteria in several respects. Compared to the hypervolume contribution used by the SMS-EMOA, the crowding distance can be computed very efficiently, namely in time $O(m N \log N)$, which for three or more objectives is significantly faster than the best known algorithms for computing hypervolume contributions, see the breakthrough paper [Cha13] and also the surveys [SIHP20, GFP21].
Compared to the reference point mechanism employed by the NSGA-III, the crowding distance needs no parameters to be set. In contrast, the NSGA-III requires a normalization procedure (for which several proposals exist) and a set of reference points (for which several constructions exist, all having at least the number of reference points as a parameter).
Besides many successful applications in practice, also a decent number of mathematical results show that the NSGA-II with its crowding distance secondary selection criterion is able to compute or approximate the Pareto front of various classic problems [ZLD22, BQ22, DQ23a, DQ23b, DOSS23b, DOSS23a, CDH+23, ZLDD24, ZD24a].
However, the positive results for the NSGA-II are limited to bi-objective problems, and this limitation is intimately connected to the crowding distance. As demonstrated by Zheng and Doerr [ZD23b], the NSGA-II fails to compute the Pareto front of the simple OneMinMax problem once the number of objectives is at least three. The reason deduced in that work is the independent treatment of the objectives in calculating the crowding distance. Subsequent positive results for the NSGA-III [WD23, ODNS24] and the SMS-EMOA [ZD24b, WD24] for three or more objectives support the view that the crowding distance has intrinsic shortcomings.
3.2 Truthful Crowding Distance
Given the undeniable algorithmic advantages of the crowding distance and its high acceptance by practitioners, we now design a simple and efficient variant of crowding distance that also works well for many objectives.
As pointed out in the example in [ZD23b], the original crowding distance allows that points far away from a solution still cause it to have a small crowding distance. This counter-intuitive and undesired behavior stems from the fully independent consideration of the objectives in the calculation of the crowding distance: in (1), the $i$-th summand only relies on distances w.r.t. $f_i$ and ignores possibly large distances stemming from the other objectives $f_j$, $j \ne i$.
To avoid the undesired influence of points far away on the crowding distance components, but at the same time allow for a highly efficient computation of the crowding distance, we proceed as follows. (i) We replace the (normalized) distance in the $i$-th objective by the (normalized) $L_\infty$ distance in the objective space. This avoids that points far away in the objective space lead to low crowding distance values. (ii) We keep the property that the crowding distance is the sum of the crowding distance contributions of the different objectives. This was the key reason why the original crowding distance can be computed very efficiently. (iii) In the computation of the $i$-th crowding distance contribution, we also keep working with the individuals sorted in order of descending $f_i$-value. (iv) Noting that the use of the $L_\infty$ distance might imply that the point closest to some $x = S_i[j]$ is not necessarily $S_i[j-1]$ or $S_i[j+1]$, we consider the minimum distance among all individuals at earlier positions in the sorting. We note that this renders our crowding distance less symmetric than the original crowding distance, but we could not see a reason to let, in the language of the original crowding distance, a point contribute both to the crowding distance of its predecessor and of its successor. In fact, we shall observe that this slightly less symmetric formulation will reduce the number of solutions with identical objective vector and positive crowding distance. (v) Finally, we shall assume that the different sortings used sort individuals with identical objective vectors in the same order (correlated tie-breaking). The original crowding distance does not specify how to break such ties, but any stable sorting algorithm will have this property, so this assumption is not very restrictive. As observed in [BQ22], this assumption of correlated tie-breaking can reduce the minimum required population size for certain guarantees to hold.
We now give the formal definition of our crowding distance, which we call truthful crowding distance to reflect the fact that it better describes how isolated a solution is. Let $S$ be a set of pair-wise non-dominated individuals. For all $i \in \{1, \dots, m\}$, let $S_i$ be a sorted list of $S$ in descending order of $f_i$. Assume correlated tie-breaking, that is, if two individuals have identical objective values, then they appear in all sortings in the same order.
If an individual $x$ appears as the first element of some sorting, that is, $x = S_i[1]$ for some $i$, then its truthful crowding distance is $\mathrm{tcDis}(x) = \infty$. Otherwise, its crowding distance shall be the sum $\mathrm{tcDis}(x) = \sum_{i=1}^{m} \mathrm{tcDis}_i(x)$ of the crowding distance contributions $\mathrm{tcDis}_i(x)$, which we define now.
To this aim, let $i \in \{1, \dots, m\}$ and $j \ge 2$ be such that $x = S_i[j]$. For $y, z \in S$, we define the normalized $L_\infty$ distance by
\[
d(y, z) = \max_{k \in \{1, \dots, m\}} \frac{|f_k(y) - f_k(z)|}{\max_{w \in S} f_k(w) - \min_{w \in S} f_k(w)},
\]
where we count terms “$0/0$” as zero (this happens in the exotic case that in some objective, only a single objective value is present in $S$). With this distance, the $i$-th crowding distance contribution is defined as the smallest distance between $x$ and a solution in an earlier position in the $i$-th list:
\[
\mathrm{tcDis}_i(x) = \min \{ d(x, S_i[k]) \mid k < j \}.
\]
This defines our variant of the crowding distance, called the truthful crowding distance. The pseudocode of an algorithm computing it is given in Algorithm 2. As is easy to see, this algorithm has a time complexity quadratic in the size of the set $S$, more precisely, $O(m|S|^2)$. This is more costly than the computation of the original crowding distance in time $O(m|S| \log |S|)$. Since the best known time complexity of non-dominated sorting in the general case is also quadratic in the population size and no better runtime can be expected in the general case [YRL+20], this moderate increase in the complexity of computing the crowding distance appears tolerable.
Input: $S$, a set of individuals
Output: $(\mathrm{tcDis}(x))_{x \in S}$, where $\mathrm{tcDis}(x)$ is the truthful crowding distance of $x \in S$
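In code, the computation can be implemented as follows. This is our sketch, assumed faithful to the definition above but not a verbatim transcription of Algorithm 2; Python's stable sort yields the required correlated tie-breaking.

```python
# Truthful crowding distance: infinite for the first element of any
# descending sorting; otherwise, per objective, the smallest normalized
# L-infinity distance to an individual at an earlier position.

import math

def truthful_crowding_distance(F):
    """F: objective vectors of pairwise non-dominated individuals.
    Returns the list of truthful crowding distances, aligned with F."""
    n, m = len(F), len(F[0])
    rng = [max(f[k] for f in F) - min(f[k] for f in F) for k in range(m)]

    def dist(y, z):
        # normalized L-infinity distance; objectives of zero range count as 0
        return max((abs(F[y][k] - F[z][k]) / rng[k]) if rng[k] else 0.0
                   for k in range(m))

    tcd = [0.0] * n
    for k in range(m):
        # stable sort => identical objective vectors keep a common order
        order = sorted(range(n), key=lambda i: F[i][k], reverse=True)
        tcd[order[0]] = math.inf
        for pos in range(1, n):
            i = order[pos]
            if tcd[i] != math.inf:
                tcd[i] += min(dist(i, order[j]) for j in range(pos))
    return tcd
```

Note how, for two individuals with identical objective vectors, only the one appearing first in the sortings can obtain a positive value.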
As said, we propose in this work to use the classic NSGA-II or the sequential NSGA-II, but with the original crowding distance replaced by the truthful crowding distance. We call the resulting algorithms truthful (sequential) NSGA-II, abbreviated (sequential) NSGA-II-T.
4 Runtime Analysis: Computing the Pareto Front
Having introduced the truthful crowding distance and the truthful (sequential) NSGA-II, denoted by NSGA-II-T, in this and the subsequent section we will conduct several runtime analyses of this algorithm. The results in this section will in particular show that the NSGA-II-T can efficiently optimize the many-objective problems, in contrast to the exponential runtime of the original NSGA-II on mOneMinMax [ZD23b].
4.1 Not Losing Pareto Front Points
The key ingredient to all proofs in this section is what we show in this subsection (in Theorem 2), namely that the NSGA-II-T with sufficiently large population size cannot lose Pareto optimal solution values (and more generally, can lose solution values only by replacing them with better ones). This is a critical difference to the classic NSGA-II, as shown in [ZD24b].
A step towards proving this important property is the following lemma, which asserts that for each objective vector of the population exactly one individual with this function value has a positive crowding distance.
Lemma 1.
Let $m$ be the number of objectives of the discussed function $f$. Let $P$ be a population of individuals in $\{0,1\}^n$. Assume that we compute the truthful crowding distance via Algorithm 2. Then for any function value $v \in f(P)$, exactly one individual $x \in P$ with $f(x) = v$ has a positive truthful crowding distance (and the others have a truthful crowding distance of zero).
Proof.
Let $v \in f(P)$ and let $X_v$ be the set of the individuals in $P$ with function value $v$. By correlated tie-breaking, the elements of $X_v$ appear in all sortings $S_i$ in the same relative order; write $X_v = \{x^{(1)}, \dots, x^{(\ell)}\}$ following this common order.
We first show that $x^{(1)}$ has a positive crowding distance. If $x^{(1)} = S_i[1]$ for some $i$, then $\mathrm{tcDis}(x^{(1)}) = \infty$. Otherwise, all individuals appearing before $x^{(1)}$ in some sorting $S_i$ have function values different from $v$. That is, for each such individual $y$ there exists a $k$ such that $f_k(y) \ne f_k(x^{(1)})$. Recalling that we regard a normalized version of the $L_\infty$ distance, this implies $d(x^{(1)}, y) > 0$ for all these $y$, thus $\mathrm{tcDis}_i(x^{(1)}) > 0$ for all $i$ and $\mathrm{tcDis}(x^{(1)}) > 0$ as desired.
We end the proof by showing that for $j \ge 2$, we have $\mathrm{tcDis}(x^{(j)}) = 0$. To this aim, we observe that for all $i$, the individual $x^{(j-1)}$ appears in $S_i$ before $x^{(j)}$; this individual and $x^{(j)}$ have the same objective vector, giving $d(x^{(j)}, x^{(j-1)}) = 0$. Consequently, $\mathrm{tcDis}_i(x^{(j)})$ is zero. Since this holds for all $i$, we have $\mathrm{tcDis}(x^{(j)}) = 0$. ∎
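The lemma can be checked mechanically on small instances. Below, we recompute truthful crowding distances (re-implemented compactly, under the same assumptions as our sketch above) on random bi-objective antichains containing duplicated objective vectors, and verify that every objective vector has exactly one representative with positive (possibly infinite) value.

```python
# Brute-force check of the Lemma 1 property on sets of pairwise
# non-dominated bi-objective vectors containing duplicates.

import math
import random

def tcd_values(F):
    n, m = len(F), len(F[0])
    rng = [max(f[k] for f in F) - min(f[k] for f in F) for k in range(m)]
    def dist(y, z):
        return max((abs(F[y][k] - F[z][k]) / rng[k]) if rng[k] else 0.0
                   for k in range(m))
    tcd = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: F[i][k], reverse=True)
        tcd[order[0]] = math.inf
        for pos in range(1, n):
            i = order[pos]
            if tcd[i] != math.inf:
                tcd[i] += min(dist(i, order[j]) for j in range(pos))
    return tcd

def one_positive_per_value(F):
    """Exactly one individual per distinct objective vector has tcd > 0."""
    tcd = tcd_values(F)
    return all(sum(1 for i, f in enumerate(F) if f == v and tcd[i] > 0) == 1
               for v in set(F))
```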
From Lemma 1, we derive our main technical tool asserting that a sufficiently high population size ensures that the NSGA-II-T does not lose desirable solutions.
Theorem 2.
Let $m$ be the number of objectives of a given problem $f$, and let $M$ be such that any set $S$ of pairwise incomparable solutions satisfies $|f(S)| \le M$. Consider using the (sequential) NSGA-II-T with population size $N \ge M$ to optimize $f$. For any solution $x$ in the combined parent and offspring population $R_t$, in the next and all future generations, there is at least one individual $y$ in the population such that $y \succeq x$.
We note that for many problems, the maximum number of different objective values of a set of incomparable solutions is already attained by the Pareto front (that is, $M$ equals the size of the Pareto front). Hence the requirement $N \ge M$ of the theorem is needed anyway to ensure that the algorithm can store a population covering the full Pareto front.
Proof of Theorem 2.
We conduct the proof for the more complicated case of the sequential NSGA-II-T, the proof for the NSGA-II-T follows from a subset of the arguments.
Let $x$ be in the $i$-th front $F_i$ of $R_t$. If $x \in F_1$ and $F_1$ is not the critical front, from the selection in the NSGA-II-T, we know that $x$ will enter the next generation, and $y = x$ suffices. If $i \ge 2$, then there exists a solution $x' \in F_1$ such that $x' \succ x$, and it suffices to prove the claim for $x'$. Hence, we only need to discuss the case that $x \in F_1$ and $F_1$ is the critical front in the following.
From Lemma 1, we know that for each function value in $f(F_1)$ there is an individual in $F_1$ with positive truthful crowding distance. Let $y$ be the individual with $f(y) = f(x)$ and with positive truthful crowding distance. Then $y \in F_1$ as well and $y \succeq x$.
From the definition of $M$ and Lemma 1 again (now referring to the assertion that there is at most one individual per objective value with positive truthful crowding distance), we know that before each removal, there are at most $M \le N$ individuals in the front with positive truthful crowding distance. Since $N$ individuals survive, this means that in the whole removal procedure, only individuals with zero truthful crowding distance will be removed. By the definition of the truthful crowding distance, a crowding distance of zero means that some other individual with the same objective vector appears before this individual in all sortings. Hence the removal of such a zero-crowding-distance individual does not change the truthful crowding distance of any other individual (by definition of the truthful crowding distance). Hence, all individuals having initially a positive truthful crowding distance, including $y$, will survive into the next population. ∎
Corollary 3.
Under the assumptions of Theorem 2, once a solution with a given Pareto optimal solution value is found, such a solution will be contained in the population for all future generations.
4.2 Runtime Results for Many Objectives
We now build on the structural insights on the NSGA-II-T gained in the previous subsection and show that this algorithm can easily optimize the standard benchmarks, roughly with the same efficiency as the global SEMO algorithm, a minimalistic MOEA mostly used in theoretical analyses, and the two classic MOEAs NSGA-III and SMS-EMOA. This in particular shows that the truthful NSGA-II does not face the problems the classic NSGA-II faces when the number of objectives is three or more.
For our analysis, we are lucky that we can heavily build on the work [WD24], in which near-tight runtime guarantees for many-objective optimization were proven. As discussed in [WD24, Section 5], their proofs only rely on two crucial properties: (i) that solution values are never lost except when replaced by better ones, and (ii) that there is a positive constant $p$ such that any individual in the population, with probability at least $p$, is selected as parent and an offspring is generated from it via bit-wise mutation with mutation rate $1/n$.
It is easy to see that these properties are fulfilled for our (sequential) NSGA-II-T when using bit-wise mutation. Property (i) is just the assertion of Theorem 2. Property (ii) follows immediately from the definition of our algorithm: the probability for this event is $1$ for fair selection, and for random selection with crossover rate $p_c < 1$ it is at least a positive constant depending only on $p_c$. With these considerations, we immediately extend the results of [WD24] to the truthful (sequential) NSGA-II.
Theorem 4.
Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize the mOneMinMax or mCOCZ benchmarks. Then in an expected number of iterations matching the guarantees shown for the NSGA-III and the SMS-EMOA in [WD24], the full Pareto front of the mOneMinMax or mCOCZ benchmarks is covered by the population.
Theorem 5.
Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize the mLOTZ benchmark. Then in an expected number of iterations matching the guarantees shown for the NSGA-III and the SMS-EMOA in [WD24], the full Pareto front of the mLOTZ benchmark is covered.
Theorem 6.
Let $m$ be the number of objectives and let $k$ be the gap parameter of the mOJZJ benchmark. Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize mOJZJ. Then in an expected number of iterations matching the guarantees shown in [WD24], the full Pareto front of the mOJZJ benchmark is covered.
We have not defined the benchmark problems regarded in the above results, both because they are the most common benchmarks in the theory of MOEAs and because our proofs do not directly refer to them (all problem-specific arguments are taken from [WD24]). The reader interested in the definitions can find them all in [WD24].
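For the reader's convenience, here are the bi-objective versions in their usual textbook form (our transcription of the standard definitions from the runtime-analysis literature; the many-objective variants in [WD24] generalize these in a block-wise fashion). All functions are to be maximized over bit strings of length $n$.

```python
# Standard bi-objective benchmark functions on bit strings (lists of 0/1).

def one_min_max(x):
    """(number of zeros, number of ones)."""
    return (len(x) - sum(x), sum(x))

def cocz(x):
    """Count Ones Count Zeros (assumes even length): all ones, plus ones
    in the first half and zeros in the second half."""
    h = len(x) // 2
    return (sum(x), sum(x[:h]) + sum(1 - b for b in x[h:]))

def lotz(x):
    """(LeadingOnes, TrailingZeros)."""
    lo = next((i for i, b in enumerate(x) if b == 0), len(x))
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), len(x))
    return (lo, tz)

def ojzj(x, k):
    """OneJumpZeroJump with gap parameter k: a jump-like landscape in
    both objectives."""
    n, o = len(x), sum(x)
    f1 = k + o if (o <= n - k or o == n) else n - o
    f2 = k + (n - o) if (o >= k or o == 0) else o
    return (f1, f2)
```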
4.3 Runtime Results for Two Objectives
We now turn to the bi-objective versions of the benchmarks studied above and the DLTB benchmark. Here the classic NSGA-II was shown to be efficient in previous work [ZLD22, ZD23a, BQ22, DQ23a, ZLDD24]. Using the same arguments as in the previous section, we show the following results for the (sequential) NSGA-II-T. In the runtimes, they agree with the known asymptotic results for the classic NSGA-II. However, the minimum required population size (which has a direct influence on the cost of one iteration) is by a factor of two or four smaller than in the previous works. Since it is equal to the size of the Pareto front, it is clear that even smaller population sizes cannot be employed.
Theorem 7.
Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize OneMinMax or COCZ. Then in an expected number of $O(n \log n)$ iterations, the full Pareto front of OneMinMax or COCZ is covered.
Theorem 8.
Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize LOTZ. Then in an expected number of $O(n^2)$ iterations, the full Pareto front of LOTZ is covered.
Theorem 9.
Let $k$ be the gap parameter of the OJZJ benchmark. Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize OJZJ. Then in an expected number of $O(n^k)$ iterations, the full Pareto front of OJZJ is covered.
Theorem 10.
Consider using the (sequential) NSGA-II-T with problem size $n$, fair or random selection, standard bit-wise mutation with mutation rate $1/n$, and possibly crossover with rate less than one in the case of random selection, to optimize DLTB. Then in an expected number of iterations matching the known guarantee for the classic NSGA-II, the full Pareto front of DLTB is covered.
5 Approximation Ability and Runtime
In Section 4, we proved that the standard and sequential NSGA-II-T can efficiently optimize the many-objective mOJZJ, mOneMinMax, mCOCZ, and mLOTZ benchmarks as well as the popular bi-objective OneJumpZeroJump, OneMinMax, COCZ, LOTZ, and DLTB benchmarks. In this section, we will consider the approximation ability when the population size is too small to cover the full Pareto front. We will prove that the sequential NSGA-II-T has a slightly better approximation performance than the sequential NSGA-II and the steady-state NSGA-II for OneMinMax [ZD24a].
We note that there are no proven approximation guarantees for non-sequential variants of the NSGA-II (except for the steady-state version) so far, and the mathematical results in [ZD24a] suggest that such results might be difficult to obtain. For that reason, we do not aim at such results for the truthful NSGA-II. We also note that so far there is no theoretical study on the approximation ability of the NSGA-II other than for the (bi-objective) OneMinMax benchmark [ZD24a]. We shall therefore also only consider this problem. We expect that results for larger numbers of objectives or other benchmarks need considerably new methods, as already the approximation measure MEI may not be suitable then.
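For concreteness, the following sketch shows one natural reading of the MEI ("maximum empty interval") measure for OneMinMax; the function name `mei` and the exact boundary conventions are our assumptions, the precise definition is the one in [ZD24a].

```python
# Maximum empty interval (our reading): the largest gap between
# neighboring covered values of the first objective, i.e. of |x|_1.

def mei(ones_values):
    """ones_values: the |x|_1 values covered by the population."""
    vs = sorted(set(ones_values))
    if len(vs) < 2:
        return 0
    return max(b - a for a, b in zip(vs, vs[1:]))
```

Under this reading, a population covering the extreme values 0 and $n$ with evenly spread intermediate values minimizes the measure.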
The following lemma gives a useful criterion for individuals surviving into the next generation.
Lemma 11.
Consider using the sequential NSGA-II-T with population size $N$ to optimize OneMinMax with problem size $n$. Assume that the two extreme points $0^n$ and $1^n$ are in the population $P_t$. Then for any generation $t$, in Steps 7 to 10 of Algorithm 1, any individual with truthful crowding distance above a certain threshold (including the two extreme points) will survive to $P_{t+1}$.
Proof.
Consider some iteration $t$. Let $R_t$ denote the combined parent and offspring population. We recall that $P_{t+1}$ is constructed from $R_t$ by sequentially removing individuals with the smallest current truthful crowding distance. By definition, the removal of an individual will not decrease the truthful crowding distance of the remaining individuals. In particular, individuals that initially have an infinite truthful crowding distance, or a crowding distance at least as large as the threshold of the lemma, will keep this property throughout this iteration.
It is not difficult to see that there is exactly one copy each of $0^n$ and $1^n$ with infinite truthful crowding distance. Since $N \ge 2$, both individuals will be kept to the next and all future generations.
Now consider at some stage of the sequential selection process towards , that is, with some individuals already removed. Let and let and be the two lists representing w.r.t. decreasing values of and , respectively. Let be the position of in the sorted list w.r.t. , that is, . Likewise, let be the position of in the sorted list w.r.t. , that is, . For any , we have unique such that . Since and are in , then and , and for with and , we have
Noting that since , we compute
where the last inequality uses and due to the sorted lists, and further and since in a bi-objective incomparable set, any sorting with respect to the first objective is a sorting in inverse order for the second objective. Since , we know that at least one of individuals in will have its , and thus any individual with its will not be removed. ∎
The following lemma shows that once the two extreme points are in the population, a linear runtime suffices to obtain a good approximation of the Pareto front of OneMinMax.
Lemma 12.
Consider using the sequential NSGA-II-T with population size $N$, fair or random parent selection, one-bit mutation or standard bit-wise mutation, crossover with constant rate below one or no crossover, to optimize OneMinMax with problem size $n$. Assume that the two extreme points $0^n$ and $1^n$ are in the population for the first time at some generation $t_0$. Then after $O(n)$ more iterations (both in expectation and with high probability), the population will have an MEI value of at most twice the optimum. It remains in this state for all future generations.
Proof sketch.
We use similar argument as in the proof of [ZD24a, Lemma 14] and only show the difference here. Let . Let and be the lengths of the empty intervals containing in and in , respectively. We first show that if for some , then for all . If not, since , w.l.o.g., we assume that . Let be the individual whose removal lets the length of the empty interval containing increase from a value of at most to a value larger than . Then before the removal, must be one of the end points of the empty interval containing , w.l.o.g, the left end point (the smaller value). We also know that the empty interval containing after the removal of has lengths equal to the truthful crowding distance of multiplied by . Hence, we know that
which contradicts our insight from Lemma 11 that an with cannot be removed.
The remaining argument about the first time the population has is exactly the same as in the proof of [ZD24a, Lemma 14], except for the case with crossover. Since crossover is used with a constant probability less than one, there is a constant rate of iterations using mutation only. The arguments analyzing the selection are independent of the variation operator (so in particular, the empty interval lengths are non-increasing when at least ). Consequently, by simply ignoring a possible profit from crossover iterations, we obtain the same runtime guarantee as in the mutation-only case. ∎
Noting that the maximal function value of and cannot decrease (there always is one individual witnessing this value and having infinite truthful crowding distance), we easily obtain that in expected iterations both extreme points and are reached for the first time and remain in the population in all future iterations. This can be shown with a proof analogous to the one of [ZD24a, Lemma 15]. Therefore, we have the following main result on the approximation ability and runtime of the sequential NSGA-II-T.
Theorem 13.
Consider using the sequential NSGA-II-T with population size , fair or random parent selection, one-bit mutation or standard bit-wise mutation, crossover with constant rate below or no crossover, to optimize OneMinMax with problem size . Then after an expected number of iterations, the population contains and and satisfies . After that, these conditions will be kept for all future iterations.
Note that the best possible value for the MEI is [ZD24a]. Theorem 13 shows that the sequential NSGA-II-T reaches an MEI value that exceeds this optimal value by at most a factor of two. We also note that for the NSGA-II with sequential survival selection using the classic crowding distance, an approximation guarantee of (also within iterations) was shown in [ZD24a]. The slightly better approximation guarantee shown above (within the same runtime) stems, as the proof of Lemma 12 shows, from the fact that our definition of the crowding distance admits at most one individual with infinite crowding distance contribution per objective, whereas the classic crowding distance admits two.
6 Experiments
To complement our theoretical findings, we now show a few experimental results. These are meant as an illustration of our main (mathematical) results, not as substantial stand-alone results.
Computing the full Pareto front, many objectives: Our main theoretical result was a proof that the NSGA-II-T can efficiently solve many-objective problems, different from the classic NSGA-II, where an exponential lower bound was shown for the OneMinMax problem. To illustrate how the NSGA-II-T solves many-objective problems, we regard the 4-objective OneMinMax problem. That the NSGA-II cannot solve this problem efficiently was shown, also experimentally, in [ZD23b]. We hence study only the (sequential) NSGA-II-T as algorithm, using random selection, bit-wise mutation, no crossover, and the minimal possible population size (the Pareto front size) and . We also use the GSEMO toy algorithm. In Figure 1, we display the median (over 20 runs) number of function evaluations these algorithms took to cover the full Pareto front of -OneMinMax for different problem sizes .
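For illustration, a many-objective OneMinMax fitness function can be sketched as follows. We assume the block-wise definition used in the literature, where the bit string is split into m/2 equal blocks and each block contributes its number of zeros and its number of ones as two objectives; this sketch should be checked against the formal definition in [ZD23b].

```python
def one_min_max(x, m):
    """m-objective OneMinMax for even m (assumed definition): split the
    bit string x into m/2 equal blocks; each block contributes two
    objectives, its number of zeros and its number of ones."""
    blocks = m // 2
    size = len(x) // blocks
    f = []
    for b in range(blocks):
        ones = sum(x[b * size:(b + 1) * size])
        f.append(size - ones)  # zeros in block b (to be maximized)
        f.append(ones)         # ones in block b (to be maximized)
    return tuple(f)
```

Under this definition every bit string is Pareto-optimal (in each block, zeros and ones always sum to the block length), so covering the full Pareto front amounts to generating all combinations of block-wise ones-counts.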

We observe that all algorithms efficiently find the full Pareto front, in drastic contrast to the results for the classic NSGA-II in [ZD23b, Figures 1 and 2]. Not surprisingly, a larger population size is not helpful, which confirms that it is an advantage that the NSGA-II-T admits smaller population sizes than the classic NSGA-II. Also not surprisingly, the sequential versions give slightly better results. The minimal inferiority of the (sequential) NSGA-II-T (with ) to the GSEMO means little given that this toy algorithm is rarely used in practice.
Computing the full Pareto front, two objectives: We conducted analogous experiments for two objectives, where a comparison with the NSGA-II is interesting. The proven guarantees for the NSGA-II require a population size of , where this algorithm is clearly slower than all others regarded by us. We therefore conducted some preliminary experiments, which showed that already for the NSGA-II is consistently able to solve our problem instances. The results for this NSGA-II, the NSGA-II-T with optimal population size and with , and the GSEMO are shown in Figure 2. With this optimized population size for the NSGA-II, all algorithms show a roughly similar performance on the bi-objective OneMinMax problem, with the NSGA-II slightly ahead.
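The GSEMO baseline used in these comparisons is the standard simple evolutionary multi-objective optimizer: it keeps an archive of mutually non-dominated solutions, mutates a uniformly chosen archive member with standard bit-wise mutation, and inserts the offspring unless it is dominated. A minimal sketch for the bi-objective OneMinMax problem follows (function and parameter names are ours, not from the cited works):

```python
import random

def gsemo_oneminmax(n, iterations, seed=0):
    """GSEMO on bi-objective OneMinMax with f(x) = (n - |x|_1, |x|_1),
    both objectives to be maximized. The archive keeps one solution
    per objective vector; dominated offspring are discarded."""
    rng = random.Random(seed)

    def f(x):
        ones = sum(x)
        return (n - ones, ones)

    def dominates(u, v):
        # weak domination in every objective, strict in at least one
        return all(a >= b for a, b in zip(u, v)) and u != v

    archive = [[rng.randrange(2) for _ in range(n)]]
    for _ in range(iterations):
        parent = rng.choice(archive)
        # standard bit-wise mutation: flip each bit with probability 1/n
        child = [bit ^ (rng.random() < 1 / n) for bit in parent]
        fc = f(child)
        if any(dominates(f(x), fc) for x in archive):
            continue
        archive = [x for x in archive
                   if f(x) != fc and not dominates(fc, f(x))]
        archive.append(child)
    return archive
```

On OneMinMax no solution dominates another with a different ones-count, so the archive simply collects one witness per covered f1-value, up to the full front of size n + 1.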

Approximation results: To analyze how well the different NSGA-II variants with small population size approximate the Pareto front, we conduct the following experiments. Note that the GSEMO cannot be used to compute approximations and is therefore not included. Following the experimental settings of the only previous theoretical work [ZD24a] on this approximation topic, we regard the bi-objective OneMinMax problem with problem size . We use the same algorithms as above (except for the GSEMO), with population sizes . As before, we measure the approximation quality via the MEI. We note that the best possible MEI values are for , respectively.
As in [ZD24a], we regard the approximation quality in two time intervals, namely in iterations and after the two extreme points of the Pareto front have entered the population.

Figure 3 shows the MEI values for the different algorithms in a single run, for reasons of space only for (but the other population sizes gave a similar picture). We clearly see a much better performance of the sequential algorithms, with no significant differences between the classic and the truthful sequential NSGA-II.
7 Conclusion and Future Work
To overcome the difficulties the NSGA-II was found to have in many-objective optimization, we used the insights from several previous theoretical works, most notably [ZD23b], to design a truthful crowding distance for the NSGA-II. Different from the original crowding distance, this new measure has the natural and desirable property that solutions with an objective vector far from all others receive a large crowding distance value. The truthful crowding distance is slightly more costly to compute, but asymptotically not more costly than the non-dominated sorting step of the NSGA-II.
Via mathematical runtime analyses on several classical benchmark problems, we prove that the NSGA-II with the truthful crowding distance is indeed effective in more than two objectives, admitting the same performance guarantees as previously shown for the harder-to-use NSGA-III and for the SMS-EMOA, which is computationally demanding due to its use of the hypervolume contribution. For the bi-objective benchmarks, for which the classic NSGA-II has been analyzed, we prove the same runtime guarantees (however, only requiring the population size to be at least the size of the Pareto front). Similarly, the truthful NSGA-II admits essentially the same (that is, minimally stronger) approximation guarantees as previously shown for the classic NSGA-II.
Consequently, the NSGA-II with truthful crowding distance overcomes the difficulties of the classic NSGA-II in many-objective optimization without observable disadvantages in two objectives, where the classic NSGA-II has shown very good performance.
References
- [BQ22] Chao Bian and Chao Qian. Better running time of the non-dominated sorting genetic algorithm II (NSGA-II) by using stochastic tournament selection. In Parallel Problem Solving From Nature, PPSN 2022, pages 428–441. Springer, 2022.
- [CDH+23] Sacha Cerf, Benjamin Doerr, Benjamin Hebras, Jakob Kahane, and Simon Wietheger. The first proven performance guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) on a combinatorial optimization problem. In International Joint Conference on Artificial Intelligence, IJCAI 2023, pages 5522–5530. ijcai.org, 2023.
- [Cha13] Timothy M. Chan. Klee’s measure problem made easy. In IEEE Symposium on Foundations of Computer Science, FOCS 2013, pages 410–419. IEEE Computer Society, 2013.
- [CLvV07] Carlos Artemio Coello Coello, Gary B. Lamont, and David A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer, 2nd edition, 2007.
- [DJ14] Kalyanmoy Deb and Himanshu Jain. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Transactions on Evolutionary Computation, 18:577–601, 2014.
- [DOSS23a] Duc-Cuong Dang, Andre Opris, Bahare Salehi, and Dirk Sudholt. Analysing the robustness of NSGA-II under noise. In Genetic and Evolutionary Computation Conference, GECCO 2023, pages 642–651. ACM, 2023.
- [DOSS23b] Duc-Cuong Dang, Andre Opris, Bahare Salehi, and Dirk Sudholt. A proof that using crossover can guarantee exponential speed-ups in evolutionary multi-objective optimisation. In Conference on Artificial Intelligence, AAAI 2023, pages 12390–12398. AAAI Press, 2023.
- [DPAM02] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6:182–197, 2002.
- [DQ23a] Benjamin Doerr and Zhongdi Qu. A first runtime analysis of the NSGA-II on a multimodal problem. IEEE Transactions on Evolutionary Computation, 27:1288–1297, 2023.
- [DQ23b] Benjamin Doerr and Zhongdi Qu. Runtime analysis for the NSGA-II: provable speed-ups from crossover. In Conference on Artificial Intelligence, AAAI 2023, pages 12399–12407. AAAI Press, 2023.
- [GFP21] Andreia P. Guerreiro, Carlos M. Fonseca, and Luís Paquete. The hypervolume indicator: Computational problems and algorithms. ACM Computing Surveys (CSUR), 54:1–42, 2021.
- [KD06] Saku Kukkonen and Kalyanmoy Deb. Improved pruning of non-dominated solutions based on crowding distance for bi-objective optimization problems. In Conference on Evolutionary Computation, CEC 2006, pages 1179–1186. IEEE, 2006.
- [ODNS24] Andre Opris, Duc Cuong Dang, Frank Neumann, and Dirk Sudholt. Runtime analyses of NSGA-III on many-objective problems. In Genetic and Evolutionary Computation Conference, GECCO 2024, pages 1596–1604. ACM, 2024.
- [SIHP20] Ke Shang, Hisao Ishibuchi, Linjun He, and Lie Meng Pang. A survey on the hypervolume indicator in evolutionary multiobjective optimization. IEEE Transactions on Evolutionary Computation, 25:1–20, 2020.
- [WD23] Simon Wietheger and Benjamin Doerr. A mathematical runtime analysis of the Non-dominated Sorting Genetic Algorithm III (NSGA-III). In International Joint Conference on Artificial Intelligence, IJCAI 2023, pages 5657–5665. ijcai.org, 2023.
- [WD24] Simon Wietheger and Benjamin Doerr. Near-tight runtime guarantees for many-objective evolutionary algorithms. In Parallel Problem Solving From Nature, PPSN 2024. Springer, 2024. To appear. Preprint at https://arxiv.org/abs/2404.12746.
- [YRL+20] Sorrachai Yingchareonthawornchai, Proteek Chandan Roy, Bundit Laekhanukit, Eric Torng, and Kalyanmoy Deb. Worst-case conditional hardness and fast algorithms with random inputs for non-dominated sorting. In Genetic and Evolutionary Computation Conference, GECCO 2020, Companion Volume, pages 185–186. ACM, 2020.
- [ZD23a] Weijie Zheng and Benjamin Doerr. Mathematical runtime analysis for the non-dominated sorting genetic algorithm II (NSGA-II). Artificial Intelligence, 325:104016, 2023.
- [ZD23b] Weijie Zheng and Benjamin Doerr. Runtime analysis for the NSGA-II: proving, quantifying, and explaining the inefficiency for many objectives. IEEE Transactions on Evolutionary Computation, 2023. In press, https://doi.org/10.1109/TEVC.2023.3320278.
- [ZD24a] Weijie Zheng and Benjamin Doerr. Approximation guarantees for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). IEEE Transactions on Evolutionary Computation, 2024. In press, https://doi.org/10.1109/TEVC.2024.3402996.
- [ZD24b] Weijie Zheng and Benjamin Doerr. Runtime analysis of the SMS-EMOA for many-objective optimization. In Conference on Artificial Intelligence, AAAI 2024, pages 20874–20882. AAAI Press, 2024.
- [ZLD22] Weijie Zheng, Yufei Liu, and Benjamin Doerr. A first mathematical runtime analysis of the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). In Conference on Artificial Intelligence, AAAI 2022, pages 10408–10416. AAAI Press, 2022.
- [ZLDD24] Weijie Zheng, Mingfeng Li, Renzhong Deng, and Benjamin Doerr. How to use the Metropolis algorithm for multi-objective optimization? In Conference on Artificial Intelligence, AAAI 2024, pages 20883–20891. AAAI Press, 2024.
- [ZQL+11] Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan, and Qingfu Zhang. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation, 1:32–49, 2011.