Adapted Maximum-Likelihood Gaussian Models for Numerical Optimization with Continuous EDAs
This article focuses on numerical optimization with continuous Estimation-of-Distribution Algorithms (EDAs). Specifically, the focus is on the use of one of the most common and best understood probability distributions: the normal distribution. We first give an overview of the existing research on this topic. We then point out a source of inefficiency in EDAs that use the normal distribution with maximum-likelihood (ML) estimates. Scaling the covariance matrix beyond its ML estimate does not remove this inefficiency; to remove it, the orientation of the normal distribution must be changed. To date, only Evolution Strategies (ES), and in particular the Covariance Matrix Adaptation ES (CMA-ES), have been capable of such re-orientation. In this article we provide a simple but effective technique for achieving re-orientation while still performing only the well-known ML estimates. We call the new technique Anticipated Mean Shift (AMS). The resulting EDA, called Adapted Maximum-Likelihood Gaussian Model -- Iterated Density-Estimation Evolutionary Algorithm (AMaLGaM-IDEA), adapts not only the ML estimate of the covariance matrix, but also the ML estimate of the mean. AMaLGaM-IDEA improves upon the performance of previous EDAs that use ML estimates, as well as previous EDAs that scale the variance adaptively. We also indicate the circumstances under which AMaLGaM-IDEA is found to be robust to rotations of the search space. A comparison with CMA-ES identifies the conditions under which AMaLGaM-IDEA is able to outperform CMA-ES and vice versa. We conclude that AMaLGaM-IDEA is currently among the most efficient real-valued continuous EDAs while at the same time being relatively simple to understand (especially in the naive, univariate case). Pseudo-code is provided in this article; source code can be downloaded from the web.
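The core idea of Anticipated Mean Shift can be illustrated with a minimal sketch: a naive (univariate-Gaussian) EDA loop in which a fraction of newly sampled solutions is moved along the previous generation's mean shift before evaluation, so that selection can re-orient the search along the direction of descent while all model parameters remain plain ML estimates. The parameter names and values below (`tau`, `delta`, `alpha`, the sphere benchmark) are illustrative assumptions for this sketch, not the paper's exact settings; in particular, the full AMaLGaM-IDEA also adapts the full covariance matrix, which is omitted here.

```python
import numpy as np

def sphere(x):
    """Classic benchmark: sum of squared coordinates (minimum at the origin)."""
    return np.sum(x ** 2, axis=-1)

def univariate_eda_with_ams(f, dim=5, pop_size=60, tau=0.35,
                            delta=2.0, alpha=0.5, iters=200, seed=0):
    """Sketch of a univariate-Gaussian EDA with Anticipated Mean Shift (AMS).

    Illustrative assumptions: truncation fraction tau, shift multiplier delta,
    and shifted fraction alpha are placeholder values, not tuned settings.
    """
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-5.0, 5.0, dim)
    std = np.full(dim, 2.0)
    prev_mean = mean.copy()
    for _ in range(iters):
        pop = rng.normal(mean, std, size=(pop_size, dim))
        # AMS: move a fraction alpha of the freshly sampled solutions along
        # the anticipated descent direction delta * (mean(t) - mean(t-1)).
        pop[: int(alpha * pop_size)] += delta * (mean - prev_mean)
        # Truncation selection: keep the best tau fraction by fitness.
        n_sel = max(2, int(tau * pop_size))
        selected = pop[np.argsort(f(pop))[:n_sel]]
        prev_mean = mean
        # Plain maximum-likelihood estimates of mean and per-coordinate std;
        # no explicit variance scaling or covariance rotation is performed.
        mean = selected.mean(axis=0)
        std = selected.std(axis=0) + 1e-12
    return mean

best = univariate_eda_with_ams(sphere)
```

If the shifted samples overshoot, selection simply discards them and the next ML estimates fall back on the unshifted samples, so the mechanism is self-correcting near the optimum.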