Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms
Presented at the Genetic and Evolutionary Computation Conference, London
Recent research into single-objective continuous Estimation-of-Distribution Algorithms (EDAs) has shown that when maximum-likelihood estimates are used for parametric distributions such as the normal distribution, the EDA can easily suffer from premature convergence. In this paper we argue that the same holds for multi-objective optimization. Our aim is to transfer a solution called Adaptive Variance Scaling (AVS) from the single-objective case to the multi-objective case. To this end, we zoom in on an existing EDA for continuous multi-objective optimization, MIDEA, which employs mixture distributions. We propose a means to combine AVS with the normal mixture distribution, as opposed to the single normal distribution for which AVS was introduced. In addition, we improve the AVS scheme using the Standard-Deviation Ratio (SDR) trigger. Intuitively put, the SDR trigger activates variance scaling only if improvements are found far away from the mean. For the multi-objective case, this addition is important to keep the variance from being scaled to excessively large values. Experiments on five well-known benchmark problems show that the addition of SDR and AVS enlarges the class of problems that continuous multi-objective EDAs can solve reliably.
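The SDR-triggered AVS mechanism summarized above can be sketched as follows. This is a minimal, hedged illustration, not the paper's exact algorithm: the function name, the threshold `theta_sdr`, and the multipliers `eta_inc`/`eta_dec` are assumed for illustration, and the per-dimension (diagonal) treatment of the standard deviations is a simplification of the paper's normal mixture model.

```python
import numpy as np

def sdr_avs_update(c_avs, improvements, mean, sigma,
                   eta_inc=2.0, eta_dec=0.9, theta_sdr=1.0):
    """One illustrative update of the AVS multiplier with an SDR trigger.

    c_avs        : current variance-scaling multiplier (covariance is
                   scaled by c_avs before sampling)
    improvements : solutions that improved on the previous best, shape (n, d)
    mean, sigma  : maximum-likelihood mean and per-dimension standard
                   deviation of the selected solutions (diagonal sketch)

    All constants and names here are assumptions for illustration.
    """
    if len(improvements) == 0:
        # No improvements found: decay the multiplier back toward 1.
        return max(1.0, c_avs * eta_dec)
    # Standard-Deviation Ratio: distance of the average improvement
    # from the distribution mean, measured in standard deviations.
    avg_imp = np.mean(improvements, axis=0)
    sdr = np.max(np.abs(avg_imp - mean) / sigma)
    if sdr > theta_sdr:
        # Improvements lie far from the mean: enlarge the variance so
        # sampling can reach the region where improvements are found.
        return c_avs * eta_inc
    # Improvements lie close to the mean: the model already covers the
    # promising region, so do not scale the variance up further.
    return max(1.0, c_avs * eta_dec)
```

In a mixture-based EDA such as MIDEA, a multiplier like this would be maintained per mixture component and applied to that component's covariance before sampling; the SDR condition is what keeps the multiplier from growing without bound when improvements are already well covered by the model.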
Bosman, P.A.N., & Thierens, D. (2007). Adaptive Variance Scaling in Continuous Multi-Objective Estimation-of-Distribution Algorithms. In D. Thierens (Ed.), Proceedings of the Genetic and Evolutionary Computation Conference (pp. 500–507). ACM Press.