On the deep active-subspace method
SIAM/ASA Journal on Uncertainty Quantification, Volume 11, Issue 1, pp. 62–90
The deep active-subspace method is a neural-network-based tool for propagating uncertainty through computational models with high-dimensional input spaces. Unlike the original active-subspace method, it does not require access to the gradient of the model. It relies on an orthogonal projection matrix, constructed with Gram–Schmidt orthogonalization, to reduce the input dimensionality. This matrix is incorporated into a neural network as the weight matrix of the first hidden layer (acting as an orthogonal encoder) and optimized via backpropagation to identify the active subspace of the input. We propose several theoretical extensions, starting with a new analytic relation for the derivatives of the Gram–Schmidt vectors, which are required for backpropagation. We also study the use of vector-valued model outputs, which is difficult with the original active-subspace method. Additionally, we investigate an alternative neural network whose encoder lacks embedded orthonormality, and which performs as well as the deep active-subspace method. Two epidemiological models are considered as applications, one of which requires supercomputer access to generate the training data.
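The abstract's central construction — an orthonormal projection matrix built via Gram–Schmidt and used as the first-layer weight matrix — can be illustrated with a minimal sketch. The function and variable names below are illustrative, not taken from the paper's code; the sketch assumes a tall weight matrix of shape D × d (with d ≪ D) whose orthonormalized columns project a D-dimensional input onto a d-dimensional active subspace.

```python
import numpy as np

def gram_schmidt(W):
    """Orthonormalize the columns of W via classical Gram-Schmidt.

    Illustrative sketch: a matrix of this shape (D x d, d << D) plays the
    role of the first hidden layer's weight matrix, projecting the
    D-dimensional input onto a d-dimensional subspace.
    """
    D, d = W.shape
    Q = np.zeros((D, d))
    for j in range(d):
        v = W[:, j].copy()
        for i in range(j):
            # Subtract the component of column j along each previous
            # orthonormal direction q_i.
            v -= (Q[:, i] @ W[:, j]) * Q[:, i]
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 2))  # D = 20 inputs, d = 2 active dimensions
Q = gram_schmidt(W)
# Columns of Q are orthonormal: Q^T Q = I
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```

In the method described above, the entries of W are trainable parameters, so backpropagation must differentiate through this orthonormalization — which is what the paper's analytic relation for the derivatives of the Gram–Schmidt vectors provides.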
Project: Verified Exascale Computing for Multiscale Applications
Edeling, W.N. (2023). On the deep active-subspace method. SIAM/ASA Journal on Uncertainty Quantification, 11(1), 62–90. doi:10.1137/21M1463240