In general, there are a few main types of optimization algorithm:
- Local (quasi-Newton, conjugate gradient, trust region…)
- Global heuristics (multi-start/cluster search, evolutionary algorithms, annealing…)
- Specialized for certain classes of problems (LP, QP, ILP…)
Of increasing interest in recent years has been the field of hybrid algorithms, which combine the robustness of global heuristics with the speed of local and specialized methods. For instance, the combination of evolutionary and local algorithms has been researched under the name of Memetic Algorithms.
Such algorithms are of interest in many business problems – for instance, consider the problem of maximizing profit by adjusting price. Any price below cost results in a loss, whilst above that point profits are positive, increasing for a while, then decreasing and asymptoting to zero:
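One simple way to produce a curve with this shape (an illustrative model of my own, not one given in the article) is to assume demand decays exponentially with price, so profit is margin times volume:

```python
import numpy as np

# Illustrative profit model (an assumption, not from the article):
# demand decays exponentially with price, so profit = margin * volume.
COST = 10.0   # unit cost
V0 = 1000.0   # demand at a price of zero
K = 0.1       # price sensitivity of demand

def profit(price):
    volume = V0 * np.exp(-K * price)
    return (price - COST) * volume

# Below cost: a loss. At cost: zero. Above cost: rises, peaks,
# then asymptotes back to zero as volume collapses.
for p in [5.0, 10.0, 20.0, 60.0, 200.0]:
    print(p, round(profit(p), 1))
```

Under this model the profit-maximizing price works out to COST + 1/K; the point is simply that the curve is flat and near zero far from that peak, which is exactly what gives local searches trouble.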
Making optimal decisions in this environment can be tricky, because a local search started in the region where profits are negative will simply suggest reducing volume to zero. We therefore need an algorithm that first finds the "profitable region", and then optimizes within that area. A similar problem arises when trying to minimize a function which looks like this:
In my testing, hybrid algorithms work very well in this situation. On the function above, Differential Evolution (DE) alone finds the optimum (at x == 20, y == 10) using around 600 function evaluations. However, using DE only to find the "profitable region", and then running a local search from there, takes around 150 evaluations. Whilst there's no "magic bullet" to solve all optimization problems, simple hybrid algorithms may be a good choice for a range of real-world business problems.
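A minimal sketch of this two-stage hybrid, using SciPy's off-the-shelf DE and Nelder-Mead implementations. The test function here is my own stand-in (an assumption, not the article's actual function): a narrow basin with its minimum at (20, 10), surrounded by a nearly flat plateau where a pure local search would stall.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Hypothetical test function (an assumption, not the article's exact one):
# nearly flat almost everywhere, with a narrow basin whose minimum is
# at (20, 10). A local search started out on the plateau gets no signal.
def f(v):
    x, y = v
    return -np.exp(-((x - 20.0) ** 2 + (y - 10.0) ** 2) / 50.0)

bounds = [(0.0, 40.0), (0.0, 40.0)]

# Stage 1: a few cheap DE generations, just to land inside the basin.
# polish=False disables SciPy's built-in final local search so the
# two stages stay explicit.
coarse = differential_evolution(f, bounds, maxiter=20, popsize=10,
                                polish=False, seed=1)

# Stage 2: hand the best DE point to a local search to refine it.
fine = minimize(f, coarse.x, method="Nelder-Mead")
print(fine.x)  # close to (20, 10)
```

The division of labour mirrors the article's numbers: the global stage only has to be accurate enough to land in the basin, and the local stage does the precise (and much cheaper) convergence.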