Variable Neighborhood Search (VNS), proposed by Mladenović and Hansen (1997),[1] is a metaheuristic method for solving a set of combinatorial optimization and global optimization problems. It explores distant neighborhoods of the current incumbent solution, and moves from there to a new one if and only if an improvement was made. A local search method is applied repeatedly to get from solutions in the neighborhood to local optima. VNS was designed for approximating solutions of discrete and continuous optimization problems and, accordingly, it is aimed at solving mixed integer programming problems, nonlinear programming problems, mixed integer nonlinear programming problems, etc.


Introduction


VNS systematically changes neighbourhoods in two phases: firstly, a descent phase to find a local optimum, and finally, a perturbation phase to get out of the corresponding valley.

Applications are rapidly increasing in number and pertain to many fields: location theory, cluster analysis, scheduling, vehicle routing, network design, lot-sizing, artificial intelligence, engineering, pooling problems, biology, phylogeny, reliability, geometry, telecommunication design, etc.

There are several books important for understanding VNS, such as Handbook of Metaheuristics, 2010,[2] Handbook of Metaheuristics, 2003,[3] and Search Methodologies, 2005.[4] Earlier work that motivated this approach can be found in Davidon,[5] Fletcher and Powell,[6] Mladenović,[7] and Brimberg and Mladenović.[8] Recent surveys on VNS methodology as well as numerous applications can be found in 4OR, 2008[9] and Annals of OR, 2010.[10]

Basic Description


Define one deterministic optimization problem with

min {f(x) | x ∈ X, X ⊆ S},      (1)

where S, X, x, and f are the solution space, the feasible set, a feasible solution, and a real-valued objective function, respectively. If S is a finite but large set, a combinatorial optimization problem is defined. If S = Rⁿ, there is a continuous optimization model.

A solution x* ∈ X is optimal if

f(x*) ≤ f(x), ∀x ∈ X.

An exact algorithm for problem (1), if one exists, finds an optimal solution x*, together with the proof of its optimality, or shows that there is no feasible solution, i.e., X = ∅, or that the solution is unbounded. CPU time has to be finite and short. For continuous optimization, it is reasonable to allow for some degree of tolerance, i.e., to stop when a feasible solution x* has been found such that

f(x*) ≤ min {f(x) | x ∈ X} + ε    or    (f(x*) − min {f(x) | x ∈ X}) / f(x*) ≤ ε.

Some heuristics speedily accept an approximate solution, or an optimal solution, but one with no validation of its optimality. Some of them have an incorrect certificate, i.e., the solution x_h obtained satisfies

(f(x_h) − min {f(x) | x ∈ X}) / f(x_h) ≤ ε for some ε, though this is rarely small.

Heuristics are faced with the problem of local optima as a result of avoiding unbounded computing time. A local optimum x_L of the problem is such that

f(x_L) ≤ f(x), ∀x ∈ N(x_L) ∩ X,

where N(x_L) denotes a neighbourhood of x_L.

VNS Description


According to Mladenović (1995), VNS is a metaheuristic which systematically performs the procedure of neighbourhood change, both in descent to local minima and in escape from the valleys which contain them.

VNS is built upon the following observations (a small numerical illustration of Facts 1 and 2 is given after the list):

Fact 1 A local minimum with respect to one neighbourhood structure is not necessarily a local minimum for another neighbourhood structure.

Fact 2 A global minimum is a local minimum with respect to all possible neighbourhood structures.

Fact 3 For many problems local minima with respect to one or several neighbourhoods are relatively close to each other.
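
To make Facts 1 and 2 concrete, here is a small made-up numerical illustration (the objective values below are invented for this sketch and are not taken from the VNS literature): point 1 is a local minimum for the ±1 neighbourhood but not for the ±2 neighbourhood, while the global minimum at point 3 is a local minimum for both.

    # Made-up illustration of Facts 1 and 2 on five points with objective values vals[i].
    vals = [4, 2, 3, 0, 5]

    def neighbours(i, k):
        """Indices within distance k of i (the k-th neighbourhood), excluding i itself."""
        return [j for j in range(len(vals)) if 0 < abs(j - i) <= k]

    def is_local_min(i, k):
        return all(vals[i] <= vals[j] for j in neighbours(i, k))

    print(is_local_min(1, 1))                      # True:  f(1)=2 beats f(0)=4 and f(2)=3
    print(is_local_min(1, 2))                      # False: f(3)=0 lies in the 2nd neighbourhood of 1
    print(is_local_min(3, 1), is_local_min(3, 2))  # True True: the global minimum (Fact 2)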

Unlike many other metaheuristics, the basic schemes of VNS and its extensions are simple and require few, and sometimes no parameters. Therefore, in addition to providing very good solutions, often in simpler ways than other methods, VNS gives insight into the reasons for such a performance, which, in turn, can lead to more efficient and sophisticated implementations.

There are several papers where it can be studied in more detail, such as Hansen and Mladenović (1999, 2001a, 2003, 2005) and Moreno-Pérez et al.[11]

Local search

A local search heuristic is performed by choosing an initial solution x, discovering a direction of descent from x within a neighbourhood N(x), and proceeding to the minimum of f(x) within N(x) along that direction. If there is no direction of descent, the heuristic stops; otherwise, it is iterated. Usually the steepest direction of descent, also referred to as best improvement, is used. This set of rules is summarized in Algorithm 1, where we assume that an initial solution x is given. The output consists of a local minimum, also denoted by x, and its value. Observe that a neighbourhood structure N(x) is defined for all x ∈ X. At each step, the neighbourhood N(x) of x is explored completely. As this may be time-consuming, an alternative is to use the first descent heuristic. Vectors x^i ∈ N(x) are then enumerated systematically and a move is made as soon as a direction for the descent is found. This is summarized in Algorithm 2.

Algorithm 1 Best improvement (highest descent) heuristic

Function BestImprovement(x)

 1: repeat
 2:     x' ← x
 3:     x←argmin_{y∈N(x)} f (y) 
 4: until ( f (x) ≥ f (x'))
 5: return x

Algorithm 2 First improvement (first descent) heuristic

Function FirstImprovement(x)

 1: repeat
 2:    x' ← x; i←0
 3:    repeat
 4:       i←i+1
 5:       x←argmin{ f (x), f (x^i)}, x^i  ∈ N(x)
 6:    until ( f (x) < f (x') or i = |N(x)|)
 7: until ( f (x) ≥ f (x'))
 8: return x
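
The two descent rules above can be transcribed into a short Python sketch. It assumes the user supplies an objective function f(x) and a neighbourhood generator N(x) returning a finite collection of neighbours; both names are placeholders, not part of any particular library.

    def best_improvement(x, f, N):
        """Algorithm 1: repeatedly move to the best neighbour until none improves f."""
        while True:
            best = min(N(x), key=f, default=x)
            if f(best) >= f(x):          # no improving neighbour left: x is a local minimum
                return x
            x = best                     # steepest-descent step

    def first_improvement(x, f, N):
        """Algorithm 2: scan the neighbours and move as soon as one improves f."""
        improved = True
        while improved:
            improved = False
            for y in N(x):
                if f(y) < f(x):
                    x, improved = y, True    # accept the first improving neighbour
                    break
        return x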


Let us denote by N_k (k = 1, ..., k_max) a finite set of pre-selected neighborhood structures, and by N_k(x) the set of solutions in the kth neighborhood of x.

We will also use the notation N'_k(x), k = 1, ..., k'_max, when describing local descent. Neighbourhoods N_k(x) or N'_k(x) may be induced from one or more metric (or quasi-metric) functions introduced into a solution space S. An optimal solution (or global minimum) is a feasible solution where a minimum of problem (1) is reached. We call x' ∈ X a local minimum of problem (1) with respect to N_k, if there is no solution x ∈ N_k(x') ⊆ X such that f(x) < f(x').

In order to solve problem (1) by using several neighbourhoods, facts 1–3 can be used in three different ways: (i) deterministic; (ii) stochastic; (iii) both deterministic and stochastic. We first give in Algorithm 3 the steps of the neighbourhood change function which will be used later. Function NeighbourhoodChange() compares the new value f(x') with the incumbent value f(x) obtained in the neighbourhood k (line 1). If an improvement is obtained, k is returned to its initial value and the new incumbent updated (line 2). Otherwise, the next neighbourhood is considered (line 3).

Algorithm 3 Neighborhood change

Function NeighborhoodChange (x, x', k)

1: if f (x') < f(x) then
2:    x ← x' // Make a move
3:    k ← 1 // Initial neighborhood
4: else
5:    k ← k+1 // Next neighborhood
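
A direct Python transcription of Algorithm 3 might look as follows (a sketch: instead of modifying its arguments in place, it returns the updated incumbent and neighbourhood index).

    def neighbourhood_change(x, x_new, k, f):
        """Algorithm 3: accept x_new and restart from the first neighbourhood if it
        improves on the incumbent x; otherwise move on to the next neighbourhood."""
        if f(x_new) < f(x):
            return x_new, 1              # make a move, reset to the initial neighbourhood
        return x, k + 1                  # keep the incumbent, try the next neighbourhood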


When VNS does not render a good solution, there are several steps that may help the process, such as comparing first and best improvement strategies in local search, reducing the neighbourhood, intensifying shaking, adopting VND, adopting FSS, and experimenting with parameter settings.


The Basic VNS (BVNS) method (Mladenović and Hansen 1997) combines deterministic and stochastic changes of neighbourhood. Its steps are given in Algorithm 4. Often successive neighbourhoods will be nested. Observe that point x' is generated at random in Step 4 in order to avoid cycling, which might occur if a deterministic rule were applied. In Step 5 the first improvement local search (Algorithm 2) is usually adopted. However, it can be replaced with best improvement (Algorithm 1).

Algorithm 4: Basic VNS

Function VNS (x, k_max, t_max);

1: repeat
2:    k ← 1;
3:    repeat
4:       x' ←Shake(x, k) /* Shaking */;
5:       x ← FirstImprovement(x' ) /* Local search */;
6:       NeighbourhoodChange(x, x', k) /* Change neighbourhood */;
7:    until k = k_max ;
8:    t ←CpuTime()
9: until t > t_max ;

The basic VNS is a first improvement descent method with randomization. Without much additional effort it can be transformed into a descent-ascent method: in the NeighbourhoodChange() function, also accept the new solution with some probability, even if it is worse than the incumbent. It can also be changed into a best improvement method: make a move to the best neighbourhood k* among all k_max of them. Another variant of the basic VNS is to find the solution x' in the “Shaking” step as the best among b (a parameter) randomly generated solutions from the kth neighbourhood. There are two possible variants of this extension: (i) to perform only one local search from the best among b points; (ii) to perform all b local searches and then choose the best. Such an algorithm can be found in Fleszar and Hindi.[12]
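
Putting the pieces together, a minimal Python sketch of the basic VNS of Algorithm 4 could look like this. The objective f, the shaking routine shake(x, k) drawing a random point from the kth neighbourhood, and the local_search routine (for instance, first_improvement above) are assumed to be supplied by the user; all of these names are placeholders.

    import time

    def basic_vns(x, f, shake, local_search, k_max, t_max):
        """Sketch of Algorithm 4: shaking, local search and neighbourhood change,
        repeated until the CPU-time budget t_max (in seconds) is exhausted."""
        start = time.process_time()
        while time.process_time() - start <= t_max:
            k = 1
            while k <= k_max:
                x_shaken = shake(x, k)               # Step 4: shaking
                x_local = local_search(x_shaken)     # Step 5: local search
                if f(x_local) < f(x):                # Step 6: neighbourhood change
                    x, k = x_local, 1
                else:
                    k += 1
        return x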

Extensions

  • VND

The variable neighborhood descent (VND) method is obtained if a change of neighborhoods is performed in a deterministic way. In the descriptions of its algorithms, we assume that an initial solution x is given. Most local search heuristics in their descent phase use very few neighbourhoods. The final solution should be a local minimum with respect to all neighbourhoods; hence the chances to reach a global one are larger when using VND than with a single neighbourhood structure.
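
A hedged Python sketch of VND, under the assumption that a separate descent routine is available for each neighbourhood structure (the list local_searches below is a placeholder for those routines):

    def vnd(x, f, local_searches):
        """Variable neighbourhood descent: neighbourhood structures are visited
        deterministically, returning to the first one after every improvement.
        local_searches[k] is assumed to return a local minimum of f with respect
        to the (k+1)-th neighbourhood structure."""
        k = 0
        while k < len(local_searches):
            x_new = local_searches[k](x)
            if f(x_new) < f(x):
                x, k = x_new, 0          # improvement: restart from the first neighbourhood
            else:
                k += 1                   # no improvement: try the next neighbourhood
        return x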


  • RVNS

The reduced VNS (RVNS) method is obtained if random points are selected from N_k(x) and no descent is made. Rather, the values of these new points are compared with that of the incumbent and an update takes place in case of improvement. It is assumed that a stopping condition has been chosen, such as the maximum CPU time allowed t_max or the maximum number of iterations between two improvements. To simplify the description of the algorithms, t_max is used below. Therefore, RVNS uses two parameters: t_max and k_max. RVNS is useful in very large instances, for which local search is costly. It has been observed that the best value for the parameter k_max is often 2. In addition, the maximum number of iterations between two improvements is usually used as a stopping condition. RVNS is akin to a Monte-Carlo method, but is more systematic.
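
A minimal Python sketch of RVNS, assuming a user-supplied sampler random_point(x, k) that draws one solution at random from the kth neighbourhood of x (the name is a placeholder):

    import time

    def reduced_vns(x, f, random_point, k_max, t_max):
        """RVNS: a random point is drawn from the kth neighbourhood (no descent is
        made) and accepted only if it improves the incumbent."""
        start = time.process_time()
        while time.process_time() - start <= t_max:
            k = 1
            while k <= k_max:
                y = random_point(x, k)   # sample N_k(x); no local search follows
                if f(y) < f(x):
                    x, k = y, 1          # improvement: reset to the first neighbourhood
                else:
                    k += 1
        return x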

  • Skewed VNS

The skewed VNS (SVNS) method (Hansen et al. [13] ) addresses the problem of exploring valleys far from the incumbent solution. Indeed, once the best solution in a large region has been found, it is necessary to go some way to obtain an improved one. Solutions drawn at random in distant neighbourhoods may differ substantially from the incumbent and VNS can then degenerate, to some extent, into the Multistart heuristic (in which descents are made iteratively from solutions generated at random, a heuristic which is known not to be very efficient). Consequently, some compensation for distance from the incumbent must be made.
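
A common way this compensation is made, following the SVNS literature rather than a formula stated in this article, is a relaxed acceptance test: given a distance function ρ on the solution space and a parameter α ≥ 0, a candidate solution y obtained after shaking and local search replaces the incumbent x when

    f(y) − α ρ(x, y) < f(x),

so that somewhat worse solutions are accepted when they lie far from the incumbent; α = 0 recovers the acceptance rule of the basic VNS.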


  • Variable Neighbourhood Decomposition Search

The variable neighbourhood decomposition search (VNDS) method (Hansen et al. [14]) extends the basic VNS into a two-level VNS scheme based upon decomposition of the problem. For ease of presentation, but without loss of generality, it is assumed that the solution x represents the set of some elements.

  • Parallel VNS

Several ways of parallelizing VNS have recently been proposed for solving the p-Median problem. In García-López et al. [15] three of them are tested: (i) parallelize local search; (ii) augment the number of solutions drawn from the current neighbourhood and make a local search in parallel from each of them and (iii) do the same as (ii) but update the information about the best solution found. Three Parallel VNS strategies are also suggested for solving the Travelling purchaser problem in Ochi et al.[16].
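
A hedged Python sketch of strategy (ii), using the standard multiprocessing module; shake and local_search are assumed user-supplied, top-level (and therefore picklable) functions, as in the sketches above.

    from multiprocessing import Pool

    def parallel_shake_and_descend(x, f, shake, local_search, k, n_workers=4):
        """Draw n_workers points from the kth neighbourhood of x and run independent
        local searches in parallel, keeping the best of the resulting local optima.
        On platforms that spawn processes, call this from under an
        `if __name__ == "__main__":` guard."""
        starts = [shake(x, k) for _ in range(n_workers)]
        with Pool(n_workers) as pool:
            candidates = pool.map(local_search, starts)
        return min(candidates, key=f)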

  • Primal-dual VNS

For most modern heuristics, the difference in value between the optimal solution and the obtained one is completely unknown. Guaranteed performance of the primal heuristic may be determined if a lower bound on the objective function value is known. To this end, the standard approach is to relax the integrality condition on the primal variables, based on a mathematical programming formulation of the problem. However, when the dimension of the problem is large, even the relaxed problem may be impossible to solve exactly by standard commercial solvers. Therefore, it seems a good idea to solve dual relaxed problems heuristically as well. In this way, guaranteed bounds on the primal heuristic's performance are obtained. In primal-dual VNS (PD-VNS) (Hansen et al.[17]) one possible general way to attain both the guaranteed bounds and the exact solution is proposed.

  • Variable Neighborhood Branching

The mixed integer linear programming (MILP) problem consists of maximizing or minimizing a linear function, subject to equality or inequality constraints, and integrality restrictions on some of the variables.
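
In a generic minimization form (a textbook statement, not tied to any particular VNS paper), a MILP can be written as

    min cᵀx   subject to   Ax ≤ b,   x_j ∈ Z for j ∈ J,   x_j ∈ R otherwise,

where J indexes the integer-constrained variables. Variable neighborhood branching applies the VNS scheme to such problems, typically defining the neighbourhoods of an incumbent solution by bounding the number of integer variables that are allowed to change value.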

  • Variable Neighborhood Formulation Space Search


Formulation space search (FSS) is a method which is very useful because one problem can be defined in additional formulations, and moving through the formulations is legitimate. Local search works within each formulation, implying a final solution when started from some initial solution in the first formulation. Local search systematically alternates between different formulations, which was investigated for the circle packing problem (CPP), where a stationary point for a nonlinear programming formulation of CPP in Cartesian coordinates is not strictly a stationary point in polar coordinates.

Developing VNS


In order to make a simple version of VNS, here is the list of steps which should be taken. Most of them are very similar to the steps in other metaheuristics (a toy instantiation of steps 4–8, in Python, follows the list).

1. Become involved with the problem: give some examples and try to solve them

2. Study books, surveys and scientific papers

3. Try to test some benchmarks

4. Choose an appropriate data structure for representing solutions in memory

5. Find initial solution

6. Calculate objective function

7. Design a procedure for Shaking

8. Choose a local search heuristic with moves such as drop, add, swap, interchange, etc.

9. Compare VNS with other methods from the literature
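
As a toy instantiation of steps 4–8, the following Python sketch uses entirely made-up data: a 0/1 selection vector, a target-sum objective, bit-flip shaking and a first-improvement bit-flip local search. It only shows how the pieces fit together and is not a recommended design.

    import random

    # Step 4: a solution is a tuple of 0/1 bits, one per item (invented data).
    weights = [4, 7, 1, 9, 3, 6, 2]
    target = 15

    def f(x):                                    # step 6: objective function
        return abs(sum(w for w, bit in zip(weights, x) if bit) - target)

    def initial_solution():                      # step 5: random initial solution
        return tuple(random.randint(0, 1) for _ in weights)

    def shake(x, k):                             # step 7: flip k randomly chosen bits
        x = list(x)
        for i in random.sample(range(len(x)), k):
            x[i] ^= 1
        return tuple(x)

    def local_search(x):                         # step 8: first-improvement bit flips
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                y = x[:i] + (1 - x[i],) + x[i + 1:]
                if f(y) < f(x):
                    x, improved = y, True
                    break
        return x

    # A crude basic-VNS loop over the toy problem (k_max = 3, 100 outer iterations).
    x = initial_solution()
    for _ in range(100):
        k = 1
        while k <= 3:
            y = local_search(shake(x, k))
            if f(y) < f(x):
                x, k = y, 1
            else:
                k += 1
    print(x, f(x))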

Applications


Applications of VNS, or of its variants, are abundant and numerous. We mention some fields where collections of scientific papers can be found:

1. Industrial applications

2. Design problems in communication

3. Location problems

4. Data mining

5. Graph problems

6. Knapsack and packing problems

7. Mixed integer problems

8. Time tabling

9. Scheduling

10. Vehicle routing problems

11. Arc routing and waste collection

12. Fleet sheet problems

13. Extended vehicle routing problems

14. Problems in biosciences and chemistry

15. Continuous optimization

16. Other optimization problems

17. Discovery science

Conclusion


VNS has several desirable features, which are presented in Hansen and Mladenović[18]; some of them are listed here:

(i) Simplicity: VNS is simple, clear and universally applicable;

(ii) Precision: VNS is formulated in precise mathematical definitions;

(iii) Coherence: all actions of the heuristics for solving problems follow from the VNS principles;

(iv) Effectiveness: VNS supplies optimal or near-optimal solutions for all or at least most realistic instances;

(v) Efficiency: VNS takes a moderate computing time to generate optimal or near-optimal solutions;

(vi) Robustness: the functioning of the VNS is coherent over a variety of instances;

(vii) User friendliness: VNS has no parameters, so it is easy to understand, express and use;

(viii) Innovation: VNS is generating new types of applications;

(ix) Generality: VNS leads to good results for a wide variety of problems;

(x) Interactivity: VNS allows the user to incorporate his knowledge to improve the resolution process;

(xi) Multiplicity: VNS is able to produce several near-optimal solutions from which the user can choose;

Interest in VNS is growing quickly. This is evidenced by the increasing number of papers published each year on this topic (10 years ago, only a few; 5 years ago, about a dozen; and about 50 in 2007). Moreover, the 18th EURO mini-conference held in Tenerife in November 2005 was entirely devoted to VNS. It led to special issues of IMA Journal of Management Mathematics in 2007, European Journal of Operational Research (http://www.journals.elsevier.com/european-journal-of-operational-research/), and Journal of Heuristics (http://www.springer.com/mathematics/applications/journal/10732/) in 2008.

External links

EURO Mini Conference XXVIII on Variable Neighbourhood Search (http://toledo.mi.sanu.ac.rs/~grujicic/vnsconference/)


References

  1. ^ Mladenović, N.; Hansen, P. (1997). "Variable neighborhood search". Computers and Operations Research. 24 (11): 1097–1100.
  2. ^ Gendreau, M.; Potvin, J-Y. (2010). Handbook of Metaheuristics. Springer.
  3. ^ Glover, F.; Kochenberger, G.A. (2003). Handbook of Metaheuristics. Kluwer Academic Publishers.
  4. ^ Burke, E.K.; Kendall, G. (2005). Search Methodologies. Introductory Tutorials in Optimization and Decision Support Techniques. Springer.
  5. ^ Davidon, W.C. (1959). "Variable metric algorithm for minimization". Argonne National Laboratory Report ANL-5990.
  6. ^ Fletcher, R.; Powell, M.J.D. (1963). "Rapidly convergent descent method for minimization". Comput. J. 6: 163–168.
  7. ^ Mladenović, N. (1995). "A variable neighborhood algorithm—a new metaheuristic for combinatorial optimization". Abstracts of Papers Presented at Optimization Days, Montréal: 112.
  8. ^ Brimberg, J.; Mladenović, N. (1996). "A variable neighborhood algorithm for solving the continuous location-allocation problem". Stud. Locat. Anal. 10: 1–12.
  9. ^ Hansen, P.; Mladenović, N.; Pérez, J.A.M. (2008). "Variable neighbourhood search: methods and applications". 4OR. 6: 319–360.
  10. ^ Hansen, P.; Mladenović, N.; Pérez, J.A.M. (2010). "Variable neighbourhood search: methods and applications". Annals of Operations Research. 175: 367–407.
  11. ^ Moreno-Pérez, J.A.; Hansen, P.; Mladenović, N. (2005). "Parallel variable neighborhood search". In Alba, E. (ed.). Parallel Metaheuristics: A New Class of Algorithms. Wiley, New York.
  12. ^ Fleszar, K.; Hindi, K.S. (2004). "Solving the resource-constrained project scheduling problem by a variable neighbourhood search". Eur. J. Oper. Res. 155 (2): 402–413.
  13. ^ Hansen, P.; Jaumard, B.; Mladenović, N.; Parreira, A. (2000). "Variable neighbourhood search for weighted maximum satisfiability problem". Les Cahiers du GERAD G–2000–62, HEC Montréal, Canada.
  14. ^ Hansen, P.; Mladenović, N.; Pérez-Brito, D. (2001). "Variable neighborhood decomposition search". J. Heuristics. 7 (4): 335–350.
  15. ^ García-López, F.; Melián-Batista, B.; Moreno-Pérez, J.A. (2002). "The parallel variable neighborhood search for the p-median problem". J. Heuristics. 8 (3): 375–388.
  16. ^ Ochi, L.S.; Silva, M.B.; Drummond, L. (2001). "Metaheuristics based on GRASP and VNS for solving traveling purchaser problem". MIC'2001, Porto: 489–494.
  17. ^ Hansen, P.; Brimberg, J.; Urošević, D.; Mladenović, N. (2007). "Primal-dual variable neighborhood search for the simple plant location problem". INFORMS J. Comput. 19 (4): 552–564.
  18. ^ Hansen, P.; Mladenović, N. (2003). "Variable neighborhood search". In Glover, F.; Kochenberger, G. (eds.). Handbook of Metaheuristics. Kluwer, Dordrecht: 145–184.