In X-ray Computed Tomography (CT), projections from many angles are acquired and used for 3D reconstruction. To make CT suitable for in-line quality control, the number of angles must be reduced while maintaining reconstruction quality. Sparse-angle tomography is a popular approach for obtaining 3D reconstructions from limited data. To optimize its performance, one can adapt the scan angles sequentially, selecting the most informative angles for each scanned object. Mathematically, this corresponds to solving an optimal experimental design (OED) problem. OED problems are high-dimensional, non-convex, bi-level optimization problems that cannot be solved online, i.e., during the scan. To address these challenges, we pose the OED problem as a partially observable Markov decision process in a Bayesian framework and solve it through deep reinforcement learning. The approach learns efficient non-greedy policies for a given class of OED problems through extensive offline training, rather than solving a given OED problem directly via numerical optimization. As such, the trained policy can find the most informative scan angles online. We use a policy training method based on the Actor-Critic approach and evaluate its performance on 2D tomography with synthetic data.
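
As a rough sketch of the sequential decision loop described above (not the authors' implementation), angle selection can be framed as an episodic decision process: the state records which candidate angles have been acquired so far, the actor is a softmax policy over the remaining angles, and the critic provides a value baseline for a one-step actor-critic update. Everything below is an illustrative assumption: the number of candidate angles, the per-scan budget, the toy reward standing in for the true gain in reconstruction quality, and the linear models used in place of deep networks.

```python
# Illustrative one-step actor-critic for sequential angle selection (sketch only).
# Assumed placeholders: N_ANGLES candidate angles, BUDGET angles per scan, a toy
# reward instead of a reconstruction-quality gain, and linear actor/critic models.
import numpy as np

rng = np.random.default_rng(0)

N_ANGLES = 60        # assumed discretization of candidate projection angles
BUDGET = 8           # assumed number of angles acquired per scan
ALPHA_ACTOR = 1e-2   # actor step size
ALPHA_CRITIC = 1e-2  # critic step size

def features(mask):
    """State features: binary mask of already-acquired angles plus a bias term."""
    return np.append(mask, 1.0)

DIM = N_ANGLES + 1
W_actor = np.zeros((N_ANGLES, DIM))   # linear softmax policy (actor)
w_critic = np.zeros(DIM)              # linear state-value function (critic)

def toy_reward(mask, action):
    """Placeholder reward: favors angles circularly far from those already chosen,
    mimicking 'informativeness'; the real reward would measure reconstruction gain."""
    chosen = np.flatnonzero(mask)
    if chosen.size == 0:
        return 1.0
    gaps = np.abs(chosen - action)
    gaps = np.minimum(gaps, N_ANGLES - gaps)
    return gaps.min() / (N_ANGLES / 2)

def masked_softmax(logits, mask):
    logits = logits.copy()
    logits[mask > 0] = -np.inf        # never re-select an already-acquired angle
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

for episode in range(2000):
    mask = np.zeros(N_ANGLES)                     # start of a new "scan"
    for t in range(BUDGET):
        phi = features(mask)
        probs = masked_softmax(W_actor @ phi, mask)
        action = rng.choice(N_ANGLES, p=probs)

        reward = toy_reward(mask, action)
        next_mask = mask.copy()
        next_mask[action] = 1.0

        # One-step actor-critic (TD(0)) update with the critic as baseline.
        v = w_critic @ phi
        v_next = 0.0 if t == BUDGET - 1 else w_critic @ features(next_mask)
        td_error = reward + v_next - v

        w_critic += ALPHA_CRITIC * td_error * phi
        grad_log_pi = -np.outer(probs, phi)       # d log pi / d W_actor
        grad_log_pi[action] += phi
        W_actor += ALPHA_ACTOR * td_error * grad_log_pi

        mask = next_mask

# Greedy roll-out of the learned policy for one object.
mask = np.zeros(N_ANGLES)
for t in range(BUDGET):
    logits = W_actor @ features(mask)
    logits[mask > 0] = -np.inf
    mask[int(np.argmax(logits))] = 1.0
print("selected angle indices:", np.flatnonzero(mask).tolist())
```

Replacing the linear models with neural networks, the binary mask with a belief state derived from the measurements, and the toy reward with a measure of reconstruction improvement would bring this sketch closer to the Bayesian, deep RL setting described in the abstract.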

doi.org/10.1109/TCI.2024.3414273
IEEE Transactions on Computational Imaging
Enabling X-ray CT based Industry 4.0 process chains by training Next Generation research experts

Wang, T., Lucka, F., & van Leeuwen, T. (2024). Sequential experimental design for X-Ray CT using deep reinforcement learning. IEEE Transactions on Computational Imaging, 10, 953–968. doi:10.1109/TCI.2024.3414273