Although X-ray imaging is routinely used in industry for high-throughput product quality control, its capability to detect internal defects is strongly limited. The main challenge stems from the superposition of multiple object features within a single X-ray view. Deep convolutional neural networks can be trained on annotated datasets of X-ray images to detect foreign objects in real time. However, this approach depends heavily on the availability of large amounts of data, which strongly hampers its industrial viability when variability between product batches is high. We present a computationally efficient, CT-based approach for creating artificial single-view X-ray data from just a few physically CT-scanned objects. By algorithmically modifying the CT volume, a large variety of training examples is obtained. Our results show that applying the generative model to a single CT-scanned object yields an image analysis accuracy that would otherwise require scans of tens of real-world samples. Our methodology leads to a strong reduction in the training data needed, improved coverage of combinations of base and foreign objects, and extensive generalizability to additional features. Once trained on just a single CT-scanned object, the resulting deep neural network can detect foreign objects in real time with high accuracy.
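The sketch below illustrates the general idea behind such CT-based data generation, not the authors' actual pipeline: a CT volume of a base object is algorithmically modified (here, by inserting a hypothetical high-attenuation sphere) and then forward-projected into a single artificial X-ray image with a Beer-Lambert model. The parallel-beam projection, the toy volume, the attenuation values, and the helper names `insert_foreign_object` and `synthetic_projection` are all illustrative assumptions; the published method uses real CT scans and a realistic acquisition geometry.

```python
# Minimal sketch, assuming a parallel-beam geometry and a toy CT volume.
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def insert_foreign_object(volume, radius=4, attenuation=0.8):
    """Insert a small high-attenuation sphere at a random position (hypothetical defect)."""
    vol = volume.copy()
    zc, yc, xc = (rng.integers(radius, s - radius) for s in vol.shape)
    z, y, x = np.ogrid[:vol.shape[0], :vol.shape[1], :vol.shape[2]]
    mask = (z - zc) ** 2 + (y - yc) ** 2 + (x - xc) ** 2 <= radius ** 2
    vol[mask] += attenuation
    return vol, mask

def synthetic_projection(volume, angle_deg, voxel_size=0.1, i0=1.0):
    """Parallel-beam projection: rotate the volume, integrate attenuation along one axis."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    line_integral = rotated.sum(axis=1) * voxel_size   # sum of mu * dx along each ray
    return i0 * np.exp(-line_integral)                 # Beer-Lambert intensity image

# Toy base object: a uniform cube of moderate attenuation inside an empty volume.
base = np.zeros((64, 64, 64), dtype=np.float32)
base[16:48, 16:48, 16:48] = 0.2

# One training pair: a projection of the modified volume plus a 2D defect label.
modified, defect_mask = insert_foreign_object(base)
angle = rng.uniform(0, 360)
projection = synthetic_projection(modified, angle)
rotated_mask = rotate(defect_mask.astype(np.float32), angle, axes=(1, 2),
                      reshape=False, order=1)
label_projection = rotated_mask.sum(axis=1) > 0.5      # ground-truth mask in projection space
print(projection.shape, label_projection.shape)
```

Repeating this with different object modifications, foreign-object placements, and projection angles yields a large set of labeled single-view images from only a handful of CT-scanned samples, which is the property the abstract highlights.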


Andriiashen, V., van Liere, R., van Leeuwen, T., & Batenburg, K. J. (2023). CT-based data generation for foreign object detection on a single X-ray projection. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-29079-w