
Talk:Multi-objective optimization


First revision on the article done


The first revision based on the announcement below is now done. I hope that you will improve the page and correct any mistakes. We will keep working on the sections that are still too brief, but the current version was ready enough to put up here. We have tried to reuse the material of the previous article as much as possible. However, if you feel that something you had added previously is now missing, we apologize for the inconvenience and politely ask you to add it again.

The current version tries to cover the whole field of multiobjective optimization better. However, some essential parts may still be missing, and we hope that you will work on the article to improve it, as we will also be doing. Hopefully, together we can make a good article that covers this important field well.


Revision of the Multi-objective optimization article


As with the Wikipedia article on MCDA, we have been discussing the idea of contributing to the Wikipedia article on multi-objective optimization on the mailing lists of the International Society on MCDM and the INFORMS Section on MCDM. In our opinion, the current version is somewhat hard to follow and misses some essential parts of multi-objective optimization. Thus, we propose changing the contents of the article to the following:

  1. Multi-objective optimization: Problem and Definitions
    • Definition of the problem and other definitions
  2. Multiobjective optimization applications
    • Applications of multiobjective optimization
    • This section would extend the current Applications section
  3. Solving a multiobjective optimization problem
    • Different definitions of what it means to solve a multiobjective optimization problem
    • According to different researchers, this could be to
      1. find a preferred Pareto optimal solution,
      2. find a Pareto front approximation, or
      3. generate the whole Pareto front.
    • In this section, it is clarified how this choice affects how the problem is approached
    • This section would also include the classification of multiobjective optimization methods into no-preference, a priori, a posteriori and interactive methods
  4. Scalarization
    • Different methods for scalarizing multiobjective optimization problems
    • Having this section avoids having to repeat the scalarizations in later sections (a small weighted-sum sketch is given after this outline)
  5. No-preference methods
    • Some no-preference methods
  6. A priori methods
    • Some a priori methods
  7. A posteriori methods
    • Some a posteriori methods
    • This section partly extends the current Solution methods section
  8. Interactive methods
    • Some interactive methods
  9. Hybrid methods
    • Methods that combine approaches from different classes
  10. Approximation methods in multiobjective optimization
    • This section would include many of the methods surveyed in S. Ruzika and M. Wiecek, Approximation Methods in Multiobjective Programming, Journal of Optimization Theory and Applications 126, pp. 473–501, and some other methods
  11. Visualizations
    • Methods for visualizing solutions to a multiobjective optimization problem
  12. Software
    • Multiobjective optimization software
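
As a concrete illustration of item 4 (Scalarization), here is a minimal weighted-sum sketch. The bi-objective problem, the weights and the bounds below are hypothetical examples chosen for illustration only, not part of the proposal; for convex problems, sweeping the weights traces Pareto optimal solutions.

```python
# Minimal weighted-sum scalarization sketch for an assumed bi-objective problem
#   min (f1(x), f2(x))  subject to box bounds on x.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # assumed first objective
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # assumed second objective
    return x[0] ** 2 + (x[1] - 1.0) ** 2

def weighted_sum(w1, w2, x0=(0.5, 0.5)):
    """Scalarize into min w1*f1 + w2*f2 and solve the single-objective problem."""
    res = minimize(lambda x: w1 * f1(x) + w2 * f2(x), x0, method="SLSQP",
                   bounds=[(-2.0, 2.0), (-2.0, 2.0)])
    return res.x, f1(res.x), f2(res.x)

# Each weight vector yields one Pareto optimal solution of the assumed problem.
for w1, w2 in [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]:
    print(weighted_sum(w1, w2))
```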

We could write the first draft of the proposed article as a collaborative effort, and then everybody could improve it. At the same time, we would like to start Wikipedia articles on Linear and discrete multiobjective optimization and on evolutionary multiobjective optimization. We would then briefly describe in this article how these fields relate to and differ from each other.

Please comment on and improve our proposal.

--Markus Hartikainen (talk) 12:11, 14 December 2011 (UTC)

Comment by Mullur in 2007


I will be adding quite a few examples and methods from my experience with multiobjective optimization. Please contribute by adding more material and fixing anything that is incorrect. --Mullur1729 03:21, 30 March 2007 (UTC)

Why I'm deleting the graphs


Sorry, but although I'm very experienced with multi-objective optimization, and in particular with the graphical representation of such problems, I find the two graphs to be worse than useless -- I spent a long time staring at them and still don't really understand them. They have two additive components of the objective function on the two axes, and one of those functions is itself highly non-linear. The captions say that the objective function parameters a and b affect the convexity or non-convexity of the "outcome set" (presumably intended to mean the efficient frontier) -- this makes no sense, since the efficient set is a statement of what is possible independent of any preference parameters. And as far as I can see the graph purports to present the problem without any assumptions about how much x1 must be sacrificed in order to obtain a given increase in x2, which is the information the efficient set is supposed to contain.

The standard and straightforward approach is to plot the objectives themselves, x1 and x2, on the axes, show a (possibly convex) efficient frontier, plot (possibly concave) indifference curves (iso-value curves for the objective function), and look at the feasible point that is on the best possible indifference curve. Duoduoduo (talk) 20:35, 24 June 2011 (UTC)
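
To make that concrete, here is a small plotting sketch of the approach just described. The frontier and the value function U(x1, x2) = x1*x2 are assumptions made up for illustration only.

```python
# Illustrative sketch (assumed example, both objectives maximized): plot the
# efficient frontier in objective space, overlay indifference curves of an
# assumed value function U, and mark the frontier point lying on the best
# attainable indifference curve.
import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(0.01, 1.0, 200)
frontier = 1.0 - x1 ** 2              # assumed efficient frontier x2 = 1 - x1^2

def U(a, b):                          # assumed overall value function
    return a * b

values = U(x1, frontier)
i_best = np.argmax(values)            # feasible point on the best indifference curve

g1, g2 = np.meshgrid(np.linspace(0.01, 1.1, 300), np.linspace(0.01, 1.1, 300))
plt.contour(g1, g2, U(g1, g2), levels=[0.1, values[i_best], 0.6],
            linestyles="dashed")      # indifference (iso-value) curves
plt.plot(x1, frontier, label="efficient frontier")
plt.plot(x1[i_best], frontier[i_best], "o", label="preferred point")
plt.xlabel("objective x1")
plt.ylabel("objective x2")
plt.legend()
plt.show()
```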

Perhaps the preference-dependent Pareto set was intended? (Cf. Yves Balasko's recent book.) Kiefer.Wolfowitz 21:05, 24 June 2011 (UTC)
Conceivably so, but the section should start out with the basic and easiest-to-understand approach. If I can't understand the graphs, very few readers will be able to, but they might waste their time trying.
In any event, as far as I can see from the caption, no information about technological trade-offs is assumed, which makes it impossible for it to be right. Duoduoduo (talk) 21:16, 24 June 2011 (UTC)

Missing selection methods

I have seen many selection methods that appear to be missing from this page. I am going to list some methods from some class notes of mine. I know some of these are already listed, but I feel like making a complete list here.

1) Sequential optimization: Optimize the most important objective first. Then optimize the next most important objective, subject to the additional constraint that the previously optimized objective retains its optimal value. Repeat for all the objectives. This is essentially already in the article.
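
A minimal sketch of 1) for two objectives, assuming SciPy is available; the objective functions and bounds are made up for illustration. Each later stage constrains the previously optimized objective to stay at its optimal value (up to a small tolerance).

```python
# Sketch of sequential (lexicographic) optimization for two assumed objectives,
# most important first.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # most important objective (assumed)
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # second objective (assumed)
    return x[0] ** 2 + (x[1] - 1.0) ** 2

bounds = [(-2.0, 2.0), (-2.0, 2.0)]
x0 = np.array([0.0, 0.0])

# Stage 1: optimize f1 alone.
res1 = minimize(f1, x0, method="SLSQP", bounds=bounds)
f1_star = res1.fun

# Stage 2: optimize f2 subject to f1(x) <= f1_star + tol.
tol = 1e-6
cons = [{"type": "ineq", "fun": lambda x: f1_star + tol - f1(x)}]
res2 = minimize(f2, res1.x, method="SLSQP", bounds=bounds, constraints=cons)
print(res2.x, f1(res2.x), f2(res2.x))
```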

2) Method of weights - same as in the article.

3) Distance-based methods (using norms). (A) Minimize the distance to some desired point. (B) Maximize the distance from the nadir point (the worst-case scenario).
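
A sketch of 3A), minimizing the distance of the objective vector to a desired point. The objectives, the Euclidean norm and the choice of the ideal point as the desired point are illustrative assumptions; variant B would instead maximize the distance from the nadir point.

```python
# Sketch of a distance-based method: minimize the Euclidean distance of the
# objective vector (f1(x), f2(x)) to a desired reference point, here the ideal
# point obtained by optimizing each assumed objective separately.
import numpy as np
from scipy.optimize import minimize

def f1(x):  # assumed first objective
    return (x[0] - 1.0) ** 2 + x[1] ** 2

def f2(x):  # assumed second objective
    return x[0] ** 2 + (x[1] - 1.0) ** 2

bounds = [(-2.0, 2.0), (-2.0, 2.0)]
x0 = np.array([0.0, 0.0])

# Ideal point: each objective optimized on its own.
z1 = minimize(f1, x0, method="SLSQP", bounds=bounds).fun
z2 = minimize(f2, x0, method="SLSQP", bounds=bounds).fun

def distance_to_ideal(x):
    return np.hypot(f1(x) - z1, f2(x) - z2)

res = minimize(distance_to_ideal, x0, method="SLSQP", bounds=bounds)
print(res.x, f1(res.x), f2(res.x))
```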

4) Direction-based methods. (A) Take the status quo or worst point and move in the direction of improvement (which need not be the direction of quickest improvement, but sometimes is). (B) Pick some ideal but infeasible point and a relaxation vector, and move along that vector until the constraints are satisfied.
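
A sketch of variant B: start from an (unattainable) ideal point in objective space and move along a relaxation vector until an attainable point is reached. The attainable region below is a made-up example.

```python
# Sketch of method 4B: relax an assumed ideal point along a relaxation vector
# until it enters the (assumed) attainable region {y : y1 + y2 >= 1},
# objectives minimized.
import numpy as np
from scipy.optimize import brentq

ideal = np.array([0.0, 0.0])        # assumed ideal point (not attainable)
relaxation = np.array([1.0, 1.0])   # assumed relaxation vector

def slack(t):
    # Negative while ideal + t * relaxation lies outside the attainable
    # region; zero exactly on its boundary.
    y = ideal + t * relaxation
    return (y[0] + y[1]) - 1.0

t_star = brentq(slack, 0.0, 10.0)   # first attainable point along the ray
print(ideal + t_star * relaxation)  # -> [0.5, 0.5] for this example
```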

5) Modified Nash bargaining (the Nash product, which has its roots in game theory). I am going to slightly adapt this to make more sense in the context of multiobjective optimization. Maximize (f1 - f1*)(f2 - f2*), where f1* is the best possible value of the first objective when it is optimized last in sequential optimization, and f2* is defined similarly (in game theory, f1* and f2* are the payoffs without cooperation). Axiomatic bargaining solutions can be adapted similarly.
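
A sketch of 5) on an assumed, explicitly parametrized Pareto front; the front and the fallback values f1*, f2* are illustrative assumptions.

```python
# Sketch of the (modified) Nash bargaining solution: maximize
# (f1 - f1*) * (f2 - f2*) along an assumed Pareto front (both objectives
# maximized), where f1*, f2* are assumed fallback ("no cooperation") values.
import numpy as np
from scipy.optimize import minimize_scalar

f1_star, f2_star = 0.2, 0.2          # assumed fallback values

def front(t):
    # Assumed Pareto front: f1 = t, f2 = sqrt(1 - t^2), 0 <= t <= 1.
    return t, np.sqrt(1.0 - t ** 2)

def neg_nash_product(t):
    a, b = front(t)
    return -(a - f1_star) * (b - f2_star)

res = minimize_scalar(neg_nash_product, bounds=(f1_star, 1.0), method="bounded")
print(front(res.x), -res.fun)
```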

6) Area monotonic solution. This approach is easiest to explain in two dimensions, but it also works in higher dimensions. Let f1* and f2* be the status quo outcomes for the first and second objective functions. Imagine a graph of the Pareto front with dashed lines marking f1* and f2* along the two axes. Bisect the region bounded by the Pareto front and the status quo point so that the area to the left of the bisection line equals the area to the right. More specifically, let g(f1) describe the optimal second objective in terms of the first objective. The basic idea is then to choose the bisection point f so that the two areas A1 and A2 are equal, where A1 = integral(g(t)dt, f1*, f) - (f - f1*)f2* - (f - f1*)(g(f) - f2*)/2 and A2 = integral(g(t)dt, f, g_inverse(f2*)) + (f - f1*)(g(f) + f2*)/2 - (g_inverse(f2*) - f1*)f2*.
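
A numerical sketch of 6) that transcribes the formulas above directly; the Pareto front g and the status quo values below are assumed example data.

```python
# Numerical sketch of the area monotonic solution for two (maximized)
# objectives: find the bisection point f where the two areas A1 and A2,
# defined as in the comment above, are equal.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

f1_star, f2_star = 0.0, 0.0          # assumed status quo outcomes

def g(t):                             # assumed Pareto front: f2 = sqrt(1 - f1^2)
    return np.sqrt(max(1.0 - t ** 2, 0.0))

def g_inverse(v):                     # inverse of g on [0, 1]
    return np.sqrt(max(1.0 - v ** 2, 0.0))

def area_difference(f):
    """A1(f) - A2(f), with A1 and A2 exactly as written above."""
    right_end = g_inverse(f2_star)
    a1 = (quad(g, f1_star, f)[0]
          - (f - f1_star) * f2_star
          - (f - f1_star) * (g(f) - f2_star) / 2.0)
    a2 = (quad(g, f, right_end)[0]
          + (f - f1_star) * (g(f) + f2_star) / 2.0
          - (right_end - f1_star) * f2_star)
    return a1 - a2

# The area monotonic solution bisects the region: A1 = A2.
f_am = brentq(area_difference, f1_star + 1e-9, g_inverse(f2_star) - 1e-9)
print(f_am, g(f_am))                  # -> roughly (0.707, 0.707) here, by symmetry
```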

7) Equal sacrifice: Use the point along the Pareto curve which provides an equal sacrifice for all objectives. If one objective, when optimized alone, could achieve 15 and another could achieve 20, then a point that achieves 10 and 15 entails an equal sacrifice (5 for each).
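
A sketch of 7): on an assumed (linear) trade-off curve whose individual optima are the 15 and 20 from the example, find the point where both objectives give up the same amount.

```python
# Sketch of the equal sacrifice rule on an assumed Pareto front: find the
# point where each objective falls short of its individually optimal value
# by the same amount. The linear front itself is a made-up assumption.
from scipy.optimize import brentq

ideal1, ideal2 = 15.0, 20.0           # individual optima, as in the example above

def front(t):
    # Assumed trade-off curve from (15, 0) at t = 0 to (0, 20) at t = 1.
    return ideal1 * (1.0 - t), ideal2 * t

def sacrifice_gap(t):
    a, b = front(t)
    return (ideal1 - a) - (ideal2 - b)   # zero when both sacrifices are equal

t_eq = brentq(sacrifice_gap, 0.0, 1.0)
a, b = front(t_eq)
print(a, b, ideal1 - a)                  # equal sacrifice for both objectives
```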

8) Kalai–Smorodinsky solution. This is like the direction-based methods, except that the ideal point is taken to be (f1*, f2*, etc.) and the starting point is the status quo. — Preceding unsigned comment added by 150.135.222.234 (talk) 23:32, 18 March 2013 (UTC)
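
A sketch of 8), mirroring the direction-based sketch under 4): move from the status quo towards the ideal point and keep the farthest attainable point on that segment. The attainable set, status quo and ideal point are assumed example data.

```python
# Sketch of the Kalai-Smorodinsky solution: the farthest attainable point on
# the segment from the status quo to the ideal point. The attainable set
# (unit disc, objectives maximized) is a made-up example.
import numpy as np
from scipy.optimize import brentq

status_quo = np.array([0.2, 0.2])          # assumed starting point
ideal = np.array([1.0, 1.0])               # assumed ideal point (f1*, f2*)

def slack(t):
    # Positive while status_quo + t * (ideal - status_quo) stays inside the
    # assumed attainable set {y : ||y|| <= 1}; zero on its boundary.
    y = status_quo + t * (ideal - status_quo)
    return 1.0 - float(np.dot(y, y))

t_ks = brentq(slack, 0.0, 2.0)             # last attainable point on the segment
print(status_quo + t_ks * (ideal - status_quo))
```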

Pareto Order is missing


See https://link.springer.com/article/10.1007/s11047-018-9685-y#citeas Biggerj1 (talk) 10:54, 5 January 2024 (UTC)