Speaker
Description
Compton imaging has long been constrained by intrinsic limitations in sensitivity, resolution, and computational efficiency. Traditional reconstruction methods, largely based on analytic backprojection or iterative schemes, often fail to fully exploit the complex statistical and structural information contained in the measured data. These deficiencies translate into blurred images, loss of fine spatial detail, and excessive computational costs that hinder real-time applications.
To overcome these barriers, we propose a new reconstruction paradigm that combines virtual orthogonal decompositions with transformer-based architectures. This approach enables a multi-scale, data-driven decomposition of the input signal, which can then be reprojected with improved accuracy and robustness. By coupling numerical decomposition methods with the representational power of transformers, we open a path toward more precise, adaptive, and efficient Compton image reconstruction. This work suggests that the next generation of Compton cameras may benefit from hybrid numerical–AI frameworks capable of addressing the long-standing bottlenecks of the field.
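The pipeline described above — an orthogonal decomposition of the measured signal, refinement of the resulting coefficients by an attention-based model, and reprojection back to image space — can be sketched in a few lines. This is a minimal illustration, not the proposed method: the "virtual orthogonal decomposition" is stood in for by an SVD, and the transformer is reduced to a single self-attention layer; all function names and parameters here are hypothetical.

```python
import numpy as np

def orthogonal_decompose(signal, n_components):
    """Decompose a 2D measurement into an orthogonal basis via SVD.

    Stand-in for the abstract's 'virtual orthogonal decomposition':
    returns coefficients and basis such that coeffs @ basis
    approximates the signal at the given rank.
    """
    U, s, Vt = np.linalg.svd(signal, full_matrices=False)
    coeffs = U[:, :n_components] * s[:n_components]  # (rows, k)
    basis = Vt[:n_components]                        # (k, cols)
    return coeffs, basis

def self_attention(tokens):
    """Minimal single-head self-attention over coefficient tokens,
    standing in for the transformer-based refinement stage."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ tokens

def reconstruct(signal, n_components=8):
    """Decompose, refine the coefficients with attention, reproject."""
    coeffs, basis = orthogonal_decompose(signal, n_components)
    refined = self_attention(coeffs)
    return refined @ basis

rng = np.random.default_rng(0)
measurement = rng.normal(size=(32, 32))  # placeholder Compton data
image = reconstruct(measurement)
print(image.shape)  # (32, 32)
```

In a trained system the attention stage would be a learned transformer operating on multi-scale coefficient tokens rather than this fixed, parameter-free layer; the sketch only shows how the decomposition and reprojection bracket the learned component.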