2023-07-15
Mini-Batching, Gradient-Clipping, First- versus Second-Order: What Works in Gradient-Based Coefficient Optimisation for Symbolic Regression?
Publication
The aim of Symbolic Regression (SR) is to discover interpretable expressions that accurately describe data. The accuracy of an expression depends on both its structure and its coefficients. To keep the structure simple enough to be interpretable, effective coefficient optimisation becomes key. Gradient-based optimisation is clearly effective at training neural networks in Deep Learning (DL), which can essentially be viewed as large, over-parameterised expressions. In this paper, we study how gradient-based optimisation techniques commonly used in DL transfer to SR. In particular, we first assess which techniques work well across random SR expressions, independently of any specific SR algorithm. We find that mini-batching and gradient clipping can be helpful (as in DL), while second-order optimisers outperform first-order ones (unlike in DL). Next, we consider whether including gradient-based optimisation in Genetic Programming (GP), a classic SR algorithm, is beneficial. On five real-world datasets, in a generation-based comparison, we find that second-order optimisation outperforms coefficient mutation (and no optimisation). However, in time-based comparisons, the performance gaps shrink substantially because the computational expense of second-order optimisation causes GP to perform fewer generations. The interplay of computational costs between the optimisation of structure and coefficients is thus a critical aspect to consider.
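To make the techniques named in the abstract concrete, below is a minimal sketch (not code from the paper) of coefficient optimisation for one fixed SR expression. It contrasts a first-order loop with mini-batching and gradient-norm clipping against SciPy's trust-region `least_squares` as a stand-in for a second-order method; the expression f(x) = c0·sin(c1·x) + c2, the synthetic data, and all hyperparameters are illustrative assumptions, not choices taken from the paper.

```python
# Sketch: first-order coefficient optimisation (mini-batching + gradient
# clipping) versus a second-order-style alternative, for a fixed expression.
# Expression, data, and hyperparameters are hypothetical.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=512)
y = 1.5 * np.sin(2.0 * x) - 0.5 + rng.normal(0, 0.05, size=512)  # synthetic data

def f(c, x):
    # Fixed expression structure; only the coefficients c are optimised.
    return c[0] * np.sin(c[1] * x) + c[2]

def grad_mse(c, xb, yb):
    # Analytic gradient of the mean-squared error w.r.t. the coefficients.
    r = f(c, xb) - yb
    g0 = np.mean(2 * r * np.sin(c[1] * xb))
    g1 = np.mean(2 * r * c[0] * xb * np.cos(c[1] * xb))
    g2 = np.mean(2 * r)
    return np.array([g0, g1, g2])

# --- First-order: SGD with mini-batching and gradient-norm clipping ---
c = np.array([1.0, 1.0, 0.0])
lr, clip, batch = 0.05, 1.0, 64
for epoch in range(200):
    perm = rng.permutation(len(x))
    for i in range(0, len(x), batch):
        idx = perm[i:i + batch]
        g = grad_mse(c, x[idx], y[idx])
        norm = np.linalg.norm(g)
        if norm > clip:           # clip the gradient norm
            g *= clip / norm
        c -= lr * g

# --- Second-order-style: trust-region least squares on the full batch ---
res = least_squares(lambda c: f(c, x) - y, x0=np.array([1.0, 1.0, 0.0]))
print("SGD coefficients:        ", c)
print("least_squares coefficients:", res.x)
```

Note that `least_squares` uses the residual Jacobian to build a Gauss-Newton approximation of the Hessian, which is the sense in which it stands in for a "second-order" optimiser here; the optimisers actually benchmarked in the paper may differ.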
Additional Metadata | |
---|---|
Author(s) | Harrison, J., Virgolin, M., Alderliesten, T., & Bosman, P. |
DOI | doi.org/10.1145/3583131.3590368 |
Project | Evolutionary eXplainable Artificial Medical INtelligence Engine |
Conference | Genetic and Evolutionary Computation Conference, GECCO 2023 |
Organisation | Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands |
Citation | Harrison, J., Virgolin, M., Alderliesten, T., & Bosman, P. (2023). Mini-Batching, Gradient-Clipping, First- versus Second-Order: What Works in Gradient-Based Coefficient Optimisation for Symbolic Regression? In Proceedings of the Genetic and Evolutionary Computation Conference Companion (pp. 1127–1136). doi:10.1145/3583131.3590368 |