The aim of these CPAN Days is to bring together the Spanish scientific community organized in the CPAN consortium (Centro Nacional de Física de Partículas, Astropartículas y Nuclear; the National Centre for Particle, Astroparticle and Nuclear Physics) for a joint discussion of the current status of the field and its future prospects. The meeting will feature invited talks and short scientific presentations on the various research lines covered by CPAN. It will also host meetings of the different networks and parallel discussion sessions for the CPAN areas, with the aim of strengthening cooperation among Spanish research groups and jointly defining the priority lines of action.
The sessions will take place at ADEIT (Plaza Virgen de la Paz, 3, Valencia).
SCIENTIFIC COMMITTEE:
Beatriz Fernández Domínguez (IGFAE)
Carlos Alberto Salgado López (IGFAE)
David García Cerdeño (IFT)
Diego Blas Temiño (IFAE)
Isidro González Caballero (UO)
José Santiago Pérez (U. Granada)
Mª Carmen Palomares Espiga (CIEMAT)
María José Costa Mezquita (IFIC, CSIC-UV)
Teresa Kurtukian Nieto (IEM)
LOCAL ORGANIZING COMMITTEE:
José Enrique García Navarro (IFIC, CSIC-UV)
Sergio Pastor Carpi (IFIC, CSIC-UV)
Sonja Orrigo (IFIC, CSIC-UV)
This conference has received funding from the GVA Research Project CIPROM/2022/70


Future experiments such as the HL-LHC plan to reach unprecedented energies and data volumes in the search for physics beyond the Standard Model, with about 10^{10} tracks per second or more, pushing the data challenge to new frontiers when processing these events at the various stages of the experimental pipeline.
Within this context, the use of Quantum Computing (QC) for this type of fundamental research seems a natural choice, since particle physics has quantum mechanics at its core. In this talk, I will give an overview of recent developments in QC applications in experimental HEP, with a focus on experimental particle physics and track reconstruction, as well as other ongoing and future lines of research, scalability, and energy measurements.
The identification of anomalous events – not explained by the Standard Model of particle physics – and the possible discovery of exotic physical phenomena pose significant theoretical, experimental and computational challenges. It is anticipated that these challenges will increase significantly with the operation of next-generation colliders, such as the High-Luminosity Large Hadron Collider (HL-LHC). At least 140 collisions will be produced each time two particle bunches meet at the heart of the ATLAS and CMS detectors, compared to around 40 collisions at present. Consequently, significant challenges are to be expected in terms of data processing, reconstruction, and analysis. This project sets out to explore the development of unsupervised anomaly detection methods that do not rely on prior knowledge of the underlying physics models.
With this in mind, the project exploits the theoretical and practical advantages of utilising qutrits in Quantum Machine Learning (QML) models for the purpose of anomaly detection in high-energy physics data, with a particular focus on the context of experiments at CERN’s Large Hadron Collider. The development of a quantum model based on qutrits is proposed, with a comparison with its qubit counterpart undertaken to evaluate its effectiveness in terms of accuracy, scalability and computational efficiency. The objective is threefold: first, to enhance comprehension of multilevel quantum systems and their capacity for the development of more compact quantum algorithms; second, to examine fresh possibilities for the analysis of complex data; and third, to collaborate in the advancement of this field.
To achieve these objectives, a high-fidelity autoencoder structure has been used as the quantum autoencoder (QAE) reference, with real CMS jet data employed to train the model. This model has then been extrapolated to the qutrit state space, introducing novel logic gates adapted to the parameters of this space.
References
[1] A. Bal, M. Klute, B. Maier, M. Oughton, E. Pezone, M. Spannowsky, "1 Particle - 1 Qubit: Particle Physics Data Encoding for Quantum Machine Learning", (2025).
[2] S. Dogra, K. Dorai, Arvind, "Majorana representation, qutrit Hilbert space and NMR implementation of qutrit gates", Journal of Physics B: Atomic, Molecular and Optical Physics 51, 045505 (2018).
[3] S. K. Goyal, B. N. Simon, R. Singh, S. Simon, "Geometry of the generalized Bloch sphere for qutrits", Journal of Physics A: Mathematical and Theoretical 49, 165203 (2016).
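As a hedged sketch of the qutrit building blocks referred to above (an illustration of generic qutrit rotations, not the collaboration's actual gate set):

```python
import numpy as np

# Illustrative sketch (not the authors' model): single-qutrit gates can be
# built by exponentiating Gell-Mann matrices, the SU(3) analogue of the
# Pauli matrices (cf. the generalized Bloch sphere of Ref. [3]).
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3.0)

def qutrit_rotation(lam, theta):
    """exp(-i*theta*lam/2) via the spectral decomposition of the Hermitian
    generator lam (so no scipy.linalg.expm is needed)."""
    evals, evecs = np.linalg.eigh(lam)
    return evecs @ np.diag(np.exp(-0.5j * theta * evals)) @ evecs.conj().T

# One parameterized circuit layer: the qutrit analogue of an Rx-Rz qubit layer.
layer = (qutrit_rotation(l1, 0.4)
         @ qutrit_rotation(l3, 0.7)
         @ qutrit_rotation(l8, 0.1))
psi = layer @ np.array([1.0, 0.0, 0.0], dtype=complex)   # act on |0>
assert np.allclose(layer @ layer.conj().T, np.eye(3))    # unitarity check
```

Stacking such layers with trainable angles gives a variational qutrit circuit, the compact alternative to qubit ansätze that the project sets out to evaluate.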
Muography is an emerging non-destructive testing technique that uses cosmic muons to probe the interior of objects and structures. It can be employed for preventive maintenance of critical industrial equipment, testing the structural integrity of a facility. Several muography imaging algorithms based on machine learning methods have been developed in recent years. These algorithms make extensive use of simulated data to produce training samples, usually generated with packages such as GEANT4 that simulate the detector in full detail. This work presents a faster alternative for generating simulated samples based on generative adversarial networks. A speed-up factor of 80 is observed with this system, without any significant degradation of the simulation quality.
The generation of hard-scattering events in high-energy physics, such as the process $gg \to t\bar{t}g$, is one of the computational bottlenecks in collider phenomenology. MadGraph provides a flexible framework to evaluate these matrix elements, but the sheer scale of Monte Carlo event production required at the LHC drives both execution time and power consumption to critical levels. In this work, we explore the use of Adaptive Compute Acceleration Platforms (ACAPs) and, in particular, their AI Engine (AIE) cores to accelerate the evaluation of matrix elements for the $gg \to t\bar{t}g$ process. We design and map the helicity-amplitude and color-summation structure of the computation onto clusters of AIE cores, exploiting both vectorized arithmetic and dataflow pipelining across tiles. Preliminary results indicate that the AIE-based implementation can significantly reduce latency while offering superior power efficiency compared to CPU and GPU architectures. While the complexity of multi-leg processes presents challenges for full FPGA acceleration, our study demonstrates the viability of AIE-based event generation as a scalable approach for next-generation Monte Carlo simulations at the LHC.
For the HL-LHC era, the Phase-2 CMS upgrade includes a full replacement of the trigger and data acquisition system. The upgraded readout electronics will support a maximum Level-1 (L1) accept rate of 750 kHz with a latency of 12.5 µs. The muon trigger is implemented as a multi-layered system that reconstructs and measures muon momenta by correlating signals from different muon chambers within dedicated muon track finders. This reconstruction relies on advanced pattern recognition algorithms executed on FPGA processors.
In the barrel muon system, stub building proceeds in two stages: the first constructs stubs using local information from individual muon stations, while the second combines, refines, and correlates information across multiple chambers before passing it to the track finders.
This work presents a muon shower tagging algorithm designed to efficiently detect and reconstruct muon showers, with potential application in the barrel muon system of the CMS experiment. The algorithm clusters hits to identify showers and then matches those clusters to muon stubs in neighboring stations. Such a method is particularly valuable for recovering efficiency lost when high-momentum muons radiate while traversing the detector.
We present the reconstruction and identification of the main decay modes of the $\tau$ lepton at the Future Circular Collider in its electron-positron stage (FCC-ee), using the CLD detector, in one of the first FCC-ee studies based on a realistic full detector simulation. Using simulated data from the $e^+e^- \rightarrow Z \rightarrow \tau^+\tau^-$ process, different reconstruction methodologies have been evaluated, comparing classical strategies with machine learning techniques. Specifically, the reconstruction of the $\tau$ lepton, a complex process, has been used to examine in detail the performance of different Particle Flow (PF) strategies, comparing established versions, such as the well-known PandoraPFA, with state-of-the-art developments like MLPF. In addition, $\tau$ decay mode identification has been studied for its main channels ($\pi^\pm$, $\rho$, $a_1$), comparing classical strategies (based on the identification of PF-reconstructed candidates such as tracks and photons) with the output of a dedicated neural network, MLID, trained directly on detector signals to infer the decay mode without going through PF. The results show competitive performance in both particle reconstruction and decay mode identification across the different methodologies, reinforcing the potential of these techniques to improve electroweak precision measurements at the FCC-ee and motivating further steps in their development and adaptation. This work provides a foundation for future precision studies of key electroweak observables in the FCC-ee physics program, such as the asymmetries $\mathcal{A}_e$ and $\mathcal{A}_\tau$, for which excellent measurement of the $\tau$ lepton's properties is essential, including both its energy and position and the characterization of its decay mode.
The LHCb experiment relies on a two-level trigger system to efficiently select events of interest among the vast number of proton-proton collisions that occur at the LHC. In this work, we present a proof-of-concept study exploring the integration of an autoencoder into the High Level Trigger 2 (HLT2) as a novel strategy for event selection. Autoencoders, as unsupervised machine learning algorithms, are capable of learning compact representations of signal events while rejecting background in a model-independent way. This approach offers a key advantage over traditional supervised classifiers such as Boosted Decision Trees, as it does not require explicit background samples, thereby reducing dependence on potentially incomplete or biased background modeling. Using simulated signal data, we train an autoencoder to capture the characteristic features of signal decays, and we demonstrate its ability to identify and reject unseen background-like events. Preliminary results highlight the potential of this method as a tool for signal selection and background suppression, and open the door to further studies on deploying unsupervised machine learning models in real-time selection at LHCb.
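A minimal sketch of the selection idea, with toy data in place of HLT2 features (all dimensions and distributions below are illustrative, not LHCb quantities): an autoencoder trained only on signal assigns each event a reconstruction-error score, and background-like events, never seen in training, score high.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for event features: 8-D "signal" events living near a
# 2-D subspace, plus isotropic "background" events (illustrative only).
basis = rng.normal(size=(2, 8))
signal = rng.normal(size=(2000, 2)) @ basis + 0.05 * rng.normal(size=(2000, 8))
background = 2.0 * rng.normal(size=(500, 8))

# A linear autoencoder with tied weights learns the principal subspace,
# so we fit it in closed form with SVD (encoder = top-k right singular
# vectors); a deployed model would use a nonlinear network instead.
k = 3
mu = signal.mean(axis=0)
_, _, Vt = np.linalg.svd(signal - mu, full_matrices=False)
enc = Vt[:k].T                      # 8 -> 3 encoder

def anomaly_score(x):
    """Per-event reconstruction MSE: large for events unlike the training signal."""
    z = (x - mu) @ enc              # encode
    xhat = z @ enc.T + mu           # decode
    return ((x - xhat) ** 2).mean(axis=1)

sig_scores = anomaly_score(signal)
bkg_scores = anomaly_score(background)
# A threshold on the score keeps signal while rejecting background that
# was never modelled in training, the key model-independence advantage.
```

No background sample enters the fit at any point, which is what removes the dependence on background modelling that supervised classifiers carry.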
Compton imaging has long been constrained by intrinsic limitations in sensitivity, resolution, and computational efficiency. Traditional reconstruction methods, largely based on analytic backprojection or iterative schemes, often fail to fully exploit the complex statistical and structural information contained in the measured data. These deficiencies translate into blurred images, loss of fine spatial detail, and excessive computational costs that hinder real-time applications.
To overcome these barriers, we propose a new reconstruction paradigm that combines virtual orthogonal decompositions with transformer-based architectures. This approach enables a multi-scale, data-driven decomposition of the input signal, which can then be reprojected with improved accuracy and robustness. By coupling numerical decomposition methods with the representational power of transformers, we open a path toward more precise, adaptive, and efficient Compton image reconstruction. This work suggests that the next generation of Compton cameras may benefit from hybrid numerical–AI frameworks capable of addressing the long-standing bottlenecks of the field.
We present a novel deep learning approach, inclusive flavour tagging (IFT), to determine the production flavour of $B$ mesons at LHCb. This technique is designed to overcome the degradation of classical taggers' performance in the current and future environment of increasing luminosity and event track multiplicity. The IFT uses state-of-the-art deep learning models to process information from all particles in the proton-proton collision event, excluding the signal. Our implementation using a DeepSet architecture shows a significant performance gain, increasing the effective tagging power by 35% for $B^0$ mesons and 20% for $B_s^0$ mesons over classical taggers, which is crucial for measurements of flavour oscillation frequencies and time-dependent charge-parity (CP) asymmetries of neutral $B$ mesons. We will also present promising preliminary results from ongoing studies applying Transformer architectures to this task, highlighting their potential for further improvement.
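For context, the effective tagging power quoted above is conventionally defined in terms of the tagging efficiency $\varepsilon_{\mathrm{tag}}$ and the mistag fraction $\omega$:

```latex
\varepsilon_{\mathrm{eff}} \;=\; \varepsilon_{\mathrm{tag}}\,(1 - 2\omega)^2
```

so a gain in $\varepsilon_{\mathrm{eff}}$ can come from tagging more events, from tagging them with a lower mistag rate, or both.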
We present the first comprehensive study of fragmentation into fully heavy tetraquarks, based on the newly released TQ4Q1.1 set of collinear, variable-flavor-number-scheme fragmentation functions. Covering scalar ($0^{++}$), axial-vector ($1^{+-}$), and tensor ($2^{++}$) configurations, our analysis provides a pioneering framework to explore exotic-matter production with precision QCD tools. For the first time, we quantify and propagate key sources of uncertainty, from color-composite long-distance matrix elements to missing higher-order contributions in both the hard and fragmentation sectors. This work offers a robust reference for ongoing analyses at the LHC and for future explorations at its High-Luminosity upgrade and next-generation accelerators. As an outlook, we discuss the potential of a multimodal fragmentation strategy, aimed at combining different production mechanisms, including diquark clusters and molecular components, to further refine predictions and improve the connection between QCD dynamics and exotic matter phenomenology.
Quantum Chromodynamics predicts that protons are shaped not only by parton densities but also by quantum interference between quarks and gluons. In this talk I will present our most recent work on the extraction of these interference effects, establishing their presence at the 2-3 sigma level.
Neutrinos are a valuable probe for measuring parton distribution functions (PDFs) thanks to the flavour-dependent nature of their interactions with quarks. Recent comparisons between neutrino-nucleon and charged-lepton-nucleon deep inelastic scattering (DIS) data show emerging tensions, which may point to nuclear physics specific to neutrinos. Previous neutrino studies, however, have been hindered in part by low statistics. Recent advances in neutrino experiments open the door to a new era of high-statistics neutrino-nucleus interaction measurements with good final-state reconstruction; the Deep Underground Neutrino Experiment (DUNE) is a prime example. This work is a first step towards investigating the capability of DUNE to determine PDFs in the high-Bjorken-$x$, low-$Q^2$ region. We find that DUNE may be able to constrain and reduce the uncertainty of PDFs in this region, where different datasets are currently in mild tension. Moreover, an analysis of charm-tagged events yields similar results, showing potential to improve the understanding of the strange-quark content of nuclei.
We present a discussion of effective CP-violating interactions of electrons with nucleons emerging from a linearly realized heavy scalar sector. In particular, we investigate the aligned 2HDM in the decoupling limit. This model contains sources of CP violation that, after integrating out the heavy scalars, generate four-fermion operators.
There are operators involving both a lepton current and a light-quark current ($u$, $d$ and $s$ quarks), which have a non-zero matrix element with the nucleon. There are also operators involving heavy-quark currents ($c$, $b$ and $t$ quarks), which match at the one-loop level onto dimension-7 lepton-gluon operators.
Some of these effective interactions are also relevant for the determination of the 'effective' electric dipole moment (EDM) of the electron, an experimental observable sensitive both to the intrinsic EDM of the electron and to electron-nucleon interactions.
In this talk, a dispersive approach is presented to extract the $s$-wave $\eta\pi$ scattering phase shift in the elastic regime from BESIII $\eta^{\prime} \rightarrow \eta \pi \pi$ experimental data. This approach relies on unitarity and the structure of two-body partial-wave amplitudes, with an analytical closed formula for the right-hand cut and a conformal mapping for the left-hand cut. Finally, three-body final-state interactions are modelled with the Khuri-Treiman equations.
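As context (the standard object in dispersive analyses of this kind, not a formula quoted from the abstract), the right-hand cut of an elastic partial wave with phase shift $\delta(s)$ is commonly resummed in closed form by the Omnès function:

```latex
\Omega(s) \;=\; \exp\!\left[\frac{s}{\pi}\int_{s_{\mathrm{th}}}^{\infty}
\frac{\delta(s')}{s'\,(s'-s-i\epsilon)}\,\mathrm{d}s'\right]
```

with $s_{\mathrm{th}}$ the elastic threshold; the left-hand cut has no analogous closed form, which is what the conformal mapping is used to parameterize.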
Quantum Chromodynamics (QCD) dictates that in extreme conditions, such as those reproduced in heavy-ion collisions, hadronic matter turns into a new form of elementary matter: the quark-gluon plasma (QGP). The theoretical interpretation of QCD jet observables in heavy-ion collisions nonetheless remains a complex task to this day, and there are still competing explanations for the physical origin of the measured medium-induced modifications.
I will present a new approach to computing groomed jet substructure observables. The core idea is to treat medium effects a posteriori through an effective energy shift of the hard, vacuum-like jet substructure. These medium-induced effects include a gradual onset of colour coherence originating from the in-medium propagation of a pair of subjets.
This simplified approach was first applied to an NLO-exact dijet vacuum configuration and was able to qualitatively capture the narrowing trend of groomed observables. It was then extended to full events obtained by matching the NLO matrix element to a leading-logarithm-accurate parton shower, resulting in very good theory-to-data agreement (within 10%) for a broad range of observables.
Given that the current standard for in-medium jet analyses is an LO baseline, this study also contributes to the theoretical development needed ahead of the upcoming heavy-ion programme at the LHC, which will deliver a broad range of precision measurements to faithfully characterise the QGP.
The transverse momentum dependent (TMD) factorization theorem incorporates several types of power corrections to the leading term. For large values of transverse momentum, $q_T/Q$ corrections become significant. They arise from higher-twist TMD distributions, which are singular at small transverse distances. We propose a method that reveals this singularity and makes the $q_T/Q$ correction manifest. As an application, we consider twist-three TMD distributions and compute the $q_T/Q$ corrections to the next-to-leading-power angular coefficients for unpolarized Drell-Yan. The result is in complete agreement with the data.
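For reference, the angular coefficients in question are those of the standard decomposition of the unpolarized Drell-Yan cross section (Collins-Soper frame):

```latex
\frac{\mathrm{d}\sigma}{\mathrm{d}q_T\,\mathrm{d}y\,\mathrm{d}\Omega}
\;\propto\; (1+\cos^2\theta)
+ A_0\,\tfrac{1}{2}\bigl(1-3\cos^2\theta\bigr)
+ A_1 \sin 2\theta\,\cos\phi
+ A_2\,\tfrac{1}{2}\sin^2\theta\,\cos 2\phi
+ A_3 \sin\theta\,\cos\phi
+ A_4 \cos\theta + \dots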
RENATA (Red Nacional Temática de Astropartículas; National Thematic Network on Astroparticles)
The Deep Underground Neutrino Experiment (DUNE) is a next-generation international experiment that will redefine our understanding of neutrino physics. The combination of a powerful wide-band neutrino beam, a high-performance movable near detector complex at Fermilab, and a far detector with massive Liquid Argon Time Projection Chambers located 1,300 km away, deep underground at the Sanford Underground Research Facility (SURF), will allow DUNE not only to determine the neutrino mass ordering and measure potential CP violation in the lepton sector, but also to test the completeness of the three-flavour paradigm itself. DUNE's broad energy coverage and long baseline will give access to all oscillation parameters, providing unprecedented sensitivity to matter effects and to potential deviations from standard oscillation physics, such as non-standard interactions, sterile neutrinos, or CPT violation. DUNE will also explore a rich landscape of astrophysical and beyond-Standard-Model phenomena, from supernova neutrinos to dark-sector signatures. While the demonstrators at CERN and Fermilab continue delivering physics results, most of the detector components are already in production, and the first cryostat is planned to be installed next year in the recently completed SURF caverns.
The ND280 near detector of the T2K experiment at J-PARC plays a crucial role in minimizing the systematic uncertainties related to the neutrino flux and neutrino-nucleus cross-sections, as it measures the neutrino beam before it oscillates. ND280 has recently been upgraded with a new suite of sub-detectors: a high-granularity SuperFGD with two million optically isolated scintillating cubes read out by wavelength-shifting fibres and 55,000 Multi-Pixel Photon Counters; two horizontal Time Projection Chambers instrumented with resistive Micromegas; and six panels of scintillating bars for precise time-of-flight measurements. The installation of the new sub-detectors was completed in May 2024, and since then the T2K collaboration has been successfully taking neutrino beam data with the upgraded ND280. The talk will cover the performance of the upgraded ND280 and will also address its importance for Hyper-Kamiokande.
The next-generation long-baseline neutrino experiment Hyper-Kamiokande (HK) will rely on precise measurements of neutrino and secondary-particle interactions to reduce systematic uncertainties. The Water Cherenkov Test Experiment (WCTE), a 50-ton prototype operated at the end of the T9 beam line at CERN during 2024-2025, will deliver valuable measurements of processes relevant for neutrino interaction modelling in water Cherenkov detectors, including pion absorption and scattering, lepton scattering, and secondary neutron production. Additionally, the WCTE detector has provided a unique opportunity to validate photon detection technologies and calibration strategies for future HK detectors. In this contribution, we present the current status of the WCTE measurements, including both the beam data and the dedicated calibration sources, and discuss the status of the calibration systems under development for the HK intermediate and far detectors.
The T2K experiment in Japan is a long-baseline neutrino oscillation experiment searching for CP violation in the leptonic sector. To enhance the precision of its measurements, the near detector ND280 has recently been upgraded with two new High-Angle Time Projection Chambers (HA-TPCs). The HA-TPCs improve the tracking of particles from neutrino interactions at high angles.
The HA-TPCs combine two key innovations: a lightweight composite field cage that maximises the active volume while reducing the material budget and Encapsulated Resistive Anode Micromegas (ERAMs), a novel readout technology providing stability and robustness without sacrificing spatial resolution. All detectors in the upgrade project were installed at J-PARC between autumn 2023 and spring 2024. The detectors were successfully commissioned with cosmic rays and the neutrino beam and, since June 2024, have been taking data as the fully upgraded ND280.
First performance studies show that the HA-TPCs are meeting the design goals, with promising results in spatial, momentum, and energy resolution. Dedicated analyses of spatial resolution in the drift direction, along with studies of electric-field behaviour, further confirm their performance for long-term operation. These achievements mark a key milestone for T2K and confirm the robustness of HA-TPCs for long-term operation for next-generation neutrino experiments such as Hyper-Kamiokande.
The Short-Baseline Near Detector (SBND) is one of the three experiments in the Short-Baseline Neutrino (SBN) Program at Fermilab. Located only 110 m downstream of the Booster Neutrino Beam (BNB) target, SBND is the detector closest to the neutrino source. The detector is a Liquid Argon Time Projection Chamber (LArTPC) with a 112-ton active volume, which enables unprecedented precision measurements of neutrino-nucleus interactions in liquid argon. The detector began taking data in July 2024 and has already completed its first year of running. Its Photon Detection System (PDS) is a hybrid system consisting of 120 photomultiplier tubes (PMTs) and 192 novel X-ARAPUCA devices, accompanied by highly reflective panels coated with a wavelength-shifting compound covering the cathode and reflecting light towards the optical devices. An X-ARAPUCA functions as a light trap that captures photons emitted by the argon, shifts their wavelength to a detectable range using a coating, and then guides them to an array of silicon photomultipliers (SiPMs) for detection. The X-ARAPUCA system represents an R&D opportunity to demonstrate the performance of this novel technology in a LArTPC exposed to a neutrino beam over several years. Two types of X-ARAPUCA are installed: one sensitive to the vacuum ultraviolet (VUV) argon scintillation light and another sensitive to visible light. This talk presents an overview of the SBND detector, focusing on its X-ARAPUCA system and the path towards detecting the first X-ARAPUCA signals.
The Short-Baseline Near Detector (SBND) is a 112-ton liquid argon time projection chamber located 110 m from the Booster Neutrino Beam (BNB) target at Fermilab (Illinois, USA). In addition to its role as the SBN program's near detector, enabling precision searches for short-baseline neutrino oscillations, the proximity of SBND to the BNB target makes the experiment ideal for many beyond-the-Standard-Model (BSM) searches for new particles produced in the beam. The nanosecond timing resolution of the scintillation light detectors further boosts the experiment's capabilities. In this talk, we present the status and the expected sensitivity to new BSM particles produced in meson decays and in proton-target interactions.
The NEXT experiment aims to detect neutrinoless double beta decay in $^{136}$Xe using a high-pressure gas Time Projection Chamber with electroluminescent amplification. This technology features excellent energy resolution (<1% FWHM at $Q_{\beta\beta}$) and the ability to extract topological information for background rejection. The physics program of NEXT-White, a ~5 kg detector, was successfully completed in 2022 with the first double beta decay measurements. The collaboration has since begun operating NEXT-100, a detector twice the size in each dimension, capable of holding ~100 kg of xenon at 15 bar. Located at the Laboratorio Subterráneo de Canfranc (LSC), NEXT-100 aims to demonstrate quasi-background-free conditions for this technology at the 100 kg scale. The detector recently concluded its first run at a reduced pressure of 4 bar and will begin its physics run at high pressure in early 2026. In this presentation, we will report on the status of the detector, summarize the main results from the commissioning, calibration, and background assessment campaigns, and outline the plans for the next stage of operation.
The COLINA project aims to develop an innovative single-phase noble-liquid time projection chamber (TPC) to detect coherent elastic neutrino-nucleus scattering (CEνNS). Two distinct ideas are combined to maximize the potential of the technique: (1) the signal will be amplified through electroluminescence (EL); (2) the TPC will be shaped as a conical frustum.
Single-phase EL is unaffected by charge trapping, a major deterrent for dual-phase noble-liquid TPCs in CEνNS searches at shallow depths. However, it requires extremely high electric fields. Such fields can be reached by using very thin wires, of μm-scale diameter, which is an impediment to producing large amplification regions. Common TPC shapes are thus limited in size and target mass. The conical shape maximizes the mass by drifting all charges towards a small amplification region at the narrow end of the cone. This scheme is also cost-efficient, as it allows good coverage with few sensors.
The final goal is to deploy COLINA, a conical TPC capable of holding ∼50 kg of LXe, at the largest spallation neutrino source, the European Spallation Source. Simulations point to a conservative energy threshold as low as ∼0.5 keVnr. The detector will allow operation with different noble gases. The higher density of the liquid phase, compared to the gaseous phase, results in a large CEνNS interaction rate with rather small detectors. In fact, COLINA will produce the largest CEνNS statistics for all the considered isotopes (Xe, Kr and Ar), and will do so in energy regions unexplored for this process, where the physics relevance is maximal.
The project was recently funded and is now starting its active development and prototyping phase. In this talk I will give a brief overview of the project, highlighting the expected performance and the various challenges to be tackled in the coming years.
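As a quick illustration of how the frustum geometry sets the target mass (the dimensions below are hypothetical, chosen only to give roughly 50 kg; they are not COLINA design values):

```python
import math

# Back-of-the-envelope check of a ~50 kg LXe target in a conical frustum
# TPC. R, r, h are illustrative example dimensions, not design values.
R, r, h = 25.0, 5.0, 21.0          # large radius, small radius, height [cm]
rho_lxe = 2.94                     # liquid-xenon density [g/cm^3]

# Frustum volume: V = (pi*h/3) * (R^2 + R*r + r^2)
volume = math.pi * h / 3.0 * (R**2 + R * r + r**2)   # [cm^3]
mass_kg = volume * rho_lxe / 1000.0
print(f"{mass_kg:.1f} kg")         # ~50 kg for these example dimensions
```

The narrow end (radius r) is where the small EL amplification region sits, so the wide end can be scaled up to grow the target mass without enlarging the wire plane.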
We will show the complementarity between COHERENT and LHC searches in testing neutrino nonstandard interactions (NSIs) through the completion of the effective field theory approach within a Z′ simplified model. Our results reveal that LHC bounds are strongly dependent on the Z′ mass, with relatively large masses excluding regions in the parameter space that are allowed by COHERENT data and its future expectations. We demonstrate that the combination of low- and high-energy experiments results in a viable approach to break NSI degeneracies within the context of simplified models.
Improving the accuracy of the neutron capture and fission cross sections of $^{239}$Pu is listed as a High Priority Request by the Nuclear Energy Agency (NEA/OECD), due to their central importance for nuclear applications and reactor technology. To address this, a dedicated experimental campaign was carried out at n_TOF, the CERN time-of-flight facility, where $^{239}$Pu was measured for the first time. The experiment employed ten high-purity $^{239}$Pu samples (total mass less than 10 mg) produced at JRC-Geel and SCK CEN and placed in a custom ionization chamber capable of operating under the high α-decay background of $^{239}$Pu. The fission tagging technique, based on the use of fission fragment detectors in coincidence with the n_TOF Total Absorption Calorimeter, enabled a precise determination of the capture cross section by suppressing the dominant fission background. Additionally, a 100 mg $^{239}$Pu sample was used to extend the capture measurement up to 10 keV.
This contribution to the XVII CPAN Days will present the final results of the campaign. The complete data analysis, including the extraction of the resonance parameters with the SAMMY code, provides high-precision cross sections for both capture and fission reactions. Particular emphasis will be placed on the $^{239}$Pu(n,γ) results, presented here for the first time, including a detailed resonance analysis and a direct comparison with evaluated nuclear data libraries and previous experimental datasets. The outcomes confirm the reliability of the n_TOF measurements and provide improved constraints for nuclear data evaluations, contributing directly to the NEA High Priority Request List.
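The kinematics behind the time-of-flight technique map flight time to neutron energy; a minimal sketch, assuming the nominal ~185 m n_TOF flight path (our assumption, not a number from the abstract):

```python
import math

# Time-of-flight -> neutron kinetic energy. The 185 m flight path below
# is the approximate n_TOF first-area distance, assumed for illustration.
C = 2.99792458e8        # speed of light [m/s]
MN = 939.56542e6        # neutron rest energy [eV]

def neutron_energy_ev(flight_path_m, tof_s):
    """Relativistic kinetic energy of a neutron from its time of flight."""
    beta = flight_path_m / (C * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * MN

# A flight time of ~134 us over 185 m corresponds to a ~10 keV neutron,
# i.e. the upper end of the capture measurement quoted above.
e = neutron_energy_ev(185.0, 1.337e-4)
```

The long flight path is what turns nanosecond-scale timing into the fine energy resolution needed to resolve individual resonances.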
Carbon isotopes provide a rich testing ground for the evolution of shell structure and halo phenomena in light neutron-rich nuclei. In particular, $^{15}\mathrm{C}$ [1] is a well-known one-neutron halo candidate, with the valence neutron weakly bound ($S_n \approx 1.2$ MeV) in a $2s_{1/2}$ orbital.
Its first excited state at 0.74 MeV has a dominant single-particle configuration with a neutron in the $1d_{5/2}$ orbital and a lifetime of 2.61 ns [2]. The transition between these states is expected to involve weak core polarization due to the inert $^{14}\mathrm{C}$ core, which may be further reduced by the spatial decoupling of the halo neutron. Understanding how the halo in $^{15}\mathrm{C}$ impacts core polarization is directly relevant for constraining the quadrupole moments of $^{16}\mathrm{C}$ [3].
To address these questions, we studied the one-neutron transfer $^{16}\mathrm{C}(p,d)^{15}\mathrm{C}$, the two-neutron transfer $^{16}\mathrm{C}(p,t)^{14}\mathrm{C}$, and the deuteron-induced transfer $^{16}\mathrm{C}(d,t)^{15}\mathrm{C}$. These complementary reactions provide sensitivity to single-particle and pairing correlations in neutron-rich carbon isotopes and serve as benchmarks for theoretical models of transfer reactions with exotic beams.
The experiment was performed in 2023 at the Argonne Tandem Linac Accelerator System [4] (ATLAS) using the Active Target Time Projection Chamber (AT-TPC) [5] and HELIOS solenoidal spectrometer [6,7]. A primary $^{18}\mathrm{O}$ beam with an energy of $222.72 \pm 0.43$ MeV was degraded to produce a $^{16}\mathrm{C}$ secondary beam, which was subsequently used to study these transfer channels.
This work has received financial support from the Xunta de Galicia (CIGUS Network of Research Centres) and the European Union. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Contracts No. DE-AC02-06CH11357. This research used resources of ANL’s ATLAS facility, which is a DOE Office of Science User Facility.
References:
[1] U. Datta Pramanik et al., Phys. Lett. B 551, 63 (2003).
[2] D. E. Alburger and D. J. Millener, Phys. Rev. C 20, 1891 (1979).
[3] J. Chen et al., Phys. Rev. C 106, 064312 (2022).
[4] C. Hoffman et al., Nucl. Instr. Meth. Phys. Res. Sect. A 1032, 166612 (2022).
[5] J. Bradt et al., Nucl. Instr. Meth. Phys. Res. Sect. A 875, 65 (2017).
[6] A. Wuosmaa et al., Nucl. Instr. Meth. Phys. Res. Sect. A 580, 1290 (2007).
[7] J. Lighthall et al., Nucl. Instr. Meth. Phys. Res. Sect. A 622, 97 (2010).
Wide multigap glass RPCs deployed in the miniTRASGO muon monitor developed at LIP [1] provide a field reference for readout optimization. Continuous operation has been used to validate the mechanics, HV distribution, grounding and shielding, environmental corrections, and a lightweight trajectory-reconstruction chain. These lessons inform a timing-oriented architecture in which a single thin multigap chamber [2], equipped with narrow strips and signal merging/multiplexing, dispenses with a separate thick-strip plane to reduce channel count and material [3]. Reference measurements with a double-stack system indicate operating regimes where single-plane efficiency can degrade even while timing remains competitive; planned high-voltage and power-delivery scans, together with complementary fast/slow shaping on the thin strips, are designed to isolate and mitigate these effects. The target performance is hundreds-of-picoseconds time resolution with sub-millimetre position from a simplified readout, scalable from compact telescopes to large-area time-of-flight detectors. Emphasis is placed on detector engineering, readout topology and the reconstruction workflow.
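The double-stack reference measurements mentioned above typically extract the per-plane timing resolution from the event-by-event time difference between two nominally identical planes, since the individual detector times are not known absolutely. A minimal sketch, assuming equal and uncorrelated plane resolutions (all names and the 300 ps figure are purely illustrative, not miniTRASGO values):

```python
import math
import random

def single_plane_resolution(time_diffs):
    """Per-plane time resolution from tA - tB of two identical planes:
    Var(tA - tB) = 2 * sigma_single**2, so sigma_single = std(diff) / sqrt(2)."""
    n = len(time_diffs)
    mean = sum(time_diffs) / n
    var = sum((d - mean) ** 2 for d in time_diffs) / (n - 1)  # sample variance
    return math.sqrt(var / 2.0)

# Toy self-check: two planes with 300 ps Gaussian jitter each.
rng = random.Random(42)
diffs = [rng.gauss(0.0, 300e-12) - rng.gauss(0.0, 300e-12) for _ in range(20_000)]
```

With correlated noise (common HV or temperature drifts) the equal-resolution assumption breaks down, which is one reason the scans described above vary the operating point of each plane independently.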
References
Light nuclei close to the neutron drip line exhibit exotic properties, such as clustering and halo formation. Determining their structure has been a major challenge over the last decades.
The $^{11}$Li nucleus has a two-neutron halo, and although its ground state is well established as a combination of $s$ (35(4)%), $p$ (59(1)%) and $d$ (6(4)%) waves, the nature of its excited states remains under debate. In turn, the unbound nucleus $^{13}$Be is essential for studying the structure of $^{14}$Be, a deformed system with a two-neutron halo.
The IS690 experiment, performed at HIE-ISOLDE (CERN), aims to shed new light on these exotic nuclei through transfer reactions in inverse kinematics, using exotic $^{9}$Li and $^{11}$Be beams at energies above the barrier, 7 MeV/u and 5.4 MeV/u respectively, on a tritium/titanium foil. These reactions allow the study of two-neutron transfer and give access to the structure of the nuclei of interest.
In this contribution, I will present the experimental setups used and the partial results obtained to date.
The total neutron yield and neutron spectra from (α,xn) reactions are relevant for basic nuclear physics, nuclear technology, and applications$^1$. These fields rely on accurate experimental data for the nuclear reactions involved. Yet most of the available data were measured decades ago, are incomplete, or carry large uncertainties. Updating the (α,xn) libraries requires, among other efforts, new experimental measurements$^2$.
The Measurement of Alpha Neutron Yields and spectra (MANY) collaboration is carrying out a broad program of (α,xn) measurements aimed at improving and expanding the existing databases. In recent years, a set of measurements of production yields, cross sections, and neutron and γ-ray energy spectra for the 27Al(α,xn)30P reaction$^3$ has been carried out at both the Centro de Micro-Análisis de Materiales (CMAM)$^4$ and the Centro Nacional de Aceleradores (CNA)$^{5, 6}$ facilities.
We will present here the neutron energy spectra obtained from the analysis of the 27Al(α,xn)30P measurement at the CNA HiSPANoS facility, and their comparison with previous data$^7$. During the experiment, six cells of the MOdular Neutron time-of-flight SpectromeTER (MONSTER)$^{8, 9, 10}$ were used to measure the time of flight of the emitted neutrons. Several unfolding techniques$^{11, 12}$ and Monte Carlo simulations$^{13}$ were used to obtain the experimental energy spectra and will be presented here. In addition, plans for future measurements of other reactions at CMAM and/or CNA will be discussed.
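Time-of-flight spectrometry of the kind performed with MONSTER converts each neutron's flight time over a known path into kinetic energy via relativistic kinematics. A minimal sketch (the 2 m / 100 ns figures in the comment are purely illustrative, not the actual HiSPANoS flight path):

```python
import math

C = 299_792_458.0   # speed of light [m/s]
MN_MEV = 939.565    # neutron rest-mass energy [MeV]

def neutron_energy_from_tof(flight_path_m: float, tof_s: float) -> float:
    """Relativistic neutron kinetic energy (MeV) from flight path and time of flight."""
    beta = flight_path_m / (tof_s * C)
    if not 0.0 < beta < 1.0:
        raise ValueError("unphysical time of flight")
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return MN_MEV * (gamma - 1.0)

# e.g. a neutron covering 2 m in 100 ns corresponds to roughly 2.1 MeV
```

The unfolding mentioned above is then needed to correct the spectrum built from these energies for the detector response (light output, efficiency, and resolution).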
[1] D. Cano-Ott et al., J. Phys. G: Nucl. Part. Phys. (2025).
[2] A. Junghans et al., INDC International Nuclear Data Committee (IAEA) (2023), doi: 10.61092/iaea.d2d0-encd.
[3] N. Mont-Geli et al., EPJ Web Conf. 284, 06004 (2023).
[4] A. Redondo-Cubero et al., Eur. Phys. J. Plus 136, 175 (2021).
[5] J. Gómez-Camacho et al., Eur. Phys. J. Plus 136, 273 (2021).
[6] M.A. Millán-Callado et al., Radiat. Phys. Chem. 217, 111464 (2024).
[7] G.J.H. Jacobs et al., Ann. Nucl. Energy 10, 541-552 (1983).
[8] A.R. Garcia et al., J. Instrum. 7, C05012 (2012).
[9] T. Martínez et al., Nucl. Data Sheets 120, 78-80 (2014).
[10] A. Pérez de Rada Fiol et al., β-delayed neutron spectroscopy of 85,86As with MONSTER.
[11] J.L. Tain et al., Nucl. Instrum. Methods Phys. Res. A 571, 728-738 (2007).
[12] A. Pérez de Rada Fiol et al., Radiat. Phys. Chem. 226, 112243 (2025).
[13] S. Agostinelli et al., Nucl. Instrum. Methods Phys. Res. A 506, 250-303 (2003).
Pablo González Rusell for the R^{3}B collaboration
Atomic nuclear structure is still one of the most complex problems in modern physics. This is because many-body correlations beyond the symmetries of the nucleon-nucleon potential lead to a large number of nuclear systems whose properties differ significantly from what would be expected from the simple addition of nucleons. An example of these phenomena is the drastic extension of the neutron drip line for fluorine compared with oxygen isotopes [1, 2]. The neutron drip line is the limit of nuclear binding beyond which adding more neutrons is no longer possible, resulting in the spontaneous emission of neutrons (dripping). In order to understand the drip-line phenomena, studying and characterizing the structure of oxygen and fluorine isotopes through one-nucleon removal reactions is fundamental.
In particular, our study aims to deploy the ^{25}F(p,2p)^{24}O reaction, in inverse kinematics, in order to characterize the final states of the residual ^{24}O core, similar to what was done in [3], but with the significantly higher resolution, statistics, and acceptance provided by the R3B (Reactions with Relativistic Radioactive Beams) experimental setup at GSI/FAIR. More specifically, an incoming ^{25}F beam will impinge onto a 15 cm long liquid-hydrogen target. The outgoing ^{24}O heavy fragments will be measured in coincidence with the (p,2p) reaction, providing an indication of the populated ground or excited states of ^{24}O. Furthermore, since ^{24}O has no bound excited states, the de-excitation process will proceed through one- or two-neutron emission, which will be measured with high resolution in the neutron detector NeuLAND, allowing us to resolve and study the excited states of ^{24}O. The cross sections to populate individual final states, together with the reconstructed momentum distribution of the decaying ^{24}O system, will help to accurately determine the configuration of the excited ^{24}O core in ^{25}F.
[1] D. S. Ahn et al., "Location of the Neutron Dripline at Fluorine and Neon", Phys. Rev. Lett. 123, 212501 (2019), doi: 10.1103/PhysRevLett.123.212501.
[2] C. Caesar et al., "Beyond the neutron drip line: The unbound oxygen isotopes 25O and 26O", Phys. Rev. C 88, 034313 (2013), doi: 10.1103/PhysRevC.88.034313, arXiv: 1209.0156 [nucl-ex].
[3] T. L. Tang et al., "How Different is the Core of ^{25}F from ^{24}O_{g.s.}?", Phys. Rev. Lett. 124, 212502 (2020), doi: 10.1103/PhysRevLett.124.212502.
Accurate neutron-capture cross sections are essential for modelling the slow neutron capture process, which governs the synthesis of roughly half of the elements heavier than 56Fe and determines their isotopic ratios. In particular, the case of 146Nd is of astrophysical interest due to the lack of capture data in the resolved resonance region below 5 keV (RRR) [1], the persistent disagreement between experimental measurements in the unresolved resonance region above 5 keV (URR) [2,3], and discrepancies between reference data and isotopic ratios inferred from presolar stardust grains [4,5,6].
To address these challenges, a multi-facility campaign has been undertaken combining neutron time-of-flight (TOF) and activation techniques. At CERN's n_TOF facility, high-resolution TOF measurements have been performed in the EAR2 station [7] to study the RRR up to 5 keV [8], while activation experiments are pursued both at CERN's new NEAR station [9, 10] and at CNA's HiSPANoS neutron source in Seville [11]. HiSPANoS is a well-characterised facility uniquely suited to providing a quasi-stellar neutron spectrum at kT=25 keV via the 7Li(p,n) reaction [12], enabling a direct and complementary determination of the Maxwellian-Averaged Cross Section (MACS) at the reference stellar temperature for main s-process nucleosynthesis.
This contribution will present the current status of the data analysis from the CNA HiSPANoS campaign, with first results on the 146Nd(n,γ) activation leading to 147Nd (T1/2 ≈ 11 d). These data provide an essential benchmark for the URR, where previous measurements are inconsistent, and are key to resolving the discrepancies between experimental nuclear data and astrophysical observations.
By combining international large-scale infrastructures such as CERN n_TOF with national facilities like CNA, this work highlights the strategic role of HiSPANoS in complementing global efforts to produce high-precision nuclear data for astrophysics.
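The MACS quoted above follows the standard definition, MACS = (2/√π) (kT)⁻² ∫ σ(E) E e^(−E/kT) dE, which can be sketched numerically. The snippet below is illustrative only (not the analysis code); the built-in 1/v check exploits the textbook fact that for a pure 1/v cross section the MACS equals the cross section evaluated at E = kT:

```python
import math

def macs(sigma, kT, e_max=None, n=200_000):
    """Maxwellian-averaged cross section,
    MACS = (2/sqrt(pi)) * kT**-2 * integral of sigma(E)*E*exp(-E/kT) dE,
    evaluated with a simple trapezoidal rule (consistent energy units)."""
    if e_max is None:
        e_max = 30.0 * kT          # the Maxwellian tail is negligible beyond ~30 kT
    h = e_max / n
    total = 0.0
    for i in range(n + 1):
        e = max(i * h, 1e-12)      # avoid E = 0 for 1/v-like cross sections
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * sigma(e) * e * math.exp(-e / kT)
    total *= h
    return (2.0 / math.sqrt(math.pi)) * total / kT**2

# Sanity check: a pure 1/v cross section, evaluated at kT = 25 keV (0.025 MeV)
kT = 0.025
sigma_1v = lambda e: 1.0 / math.sqrt(e)
```

A real MACS extraction would of course integrate the measured σ(E), fold in the deviations of the 7Li(p,n) spectrum from a perfect Maxwellian, and propagate uncertainties.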
(1) H. Tellier, CEA-N-1459 (1971)
(2) Z.Y. Bao et al., Atomic Data Nucl. Data Tables 76, 70 (2000)
(3) K. Wisshak et al., Phys. Rev. C 57, 391 (1998)
(4) S. Richter et al., Abstracts Lunar and Planetary Science Conf., 23, 1147, (1992)
(5) T.R. Ireland et al., Geochimica et Cosmochimica Acta 221, 200-218 (2018)
(6) Q.Z. Yin et al., The Astrophysical Journal, 647, 676–684 (2006)
(7) C. Weiss et al., Nucl. Inst. Methods A, 799, 90-98 (2015)
(8) J. Lerendegui-Marco et al., INTC-P-671 (2023)
(9) N. Patronis et al., Eur. Phys. J. A 61, 215 (2025)
(10) B. Gameiro et al., INTC-P-671-ADD-1 (2025)
(11) M.A. Millán-Callado et al., Radiation Physics and Chemistry 217 (2024)
(12) P. Pérez-Maroto, C. Guerrero, B. Fernández, A. Casanovas-Hoste, M.E. Stamati, Physics Letters B 862, 139360 (2025)
(On behalf of the S505-DESPEC experiment collaboration)
Our understanding of the production of the heavy elements in the Universe is still incomplete. In particular, the contribution of the rapid neutron capture (r-) process to the observed stellar abundances around mass number A~195 (the third r-process peak) remains uncertain; it is linked to the effect of the N=126 shell closure on the production path. Given the lack of nuclear data, astrophysical abundance calculations must rely on theoretical predictions for the important parameters T1/2 (half-life) and Pn (neutron emission probability). Both parameters are extracted from theoretical beta-strength distributions, which depend on nuclear structure. However, large discrepancies exist among different theoretical models [Mor14, Cab16]. Our aim is to discriminate between models by comparing with measured beta-strengths. For this we will use Total Absorption Gamma-ray Spectroscopy (TAGS), the most effective method for obtaining beta-strength distributions across the entire decay energy window [Rub05].
For this purpose, an experiment was performed at the GSI/FAIR facility in June 2022, in which the decay of Au and Pt isotopes with N=125-127 was measured. These isotopes were produced in high-energy nuclear reactions using a Pb beam on a Be target, and were selected and identified with the FRagment Separator (FRS) [Win08]. Ion implants and decay electrons were measured with the Advanced Implantation and Decay Array (AIDA) [Hal23], while isomeric and β-delayed γ-ray cascades were measured with the Decay Total Absorption Spectrometer (DTAS) [Gua18], both developed within the NUSTAR/DESPEC collaboration.
We succeeded in performing a clean implanted-ion identification, minimizing the contamination from ionic charge states and from reactions in the FRS. By combining the information from the three systems, we obtained beta-gated TAGS decay spectra for each isotope and performed a preliminary analysis of the half-lives and beta-intensity distributions. We will discuss the status of the analysis and the work remaining.
References
[Cab16] R. Caballero-Folch et al., Phys. Rev. Lett. 117, 012501 (2016).
[Mor14] A.I. Morales et al., Phys. Rev. Lett. 113, 022702 (2014).
[Rub05] B. Rubio et al., J. Phys. G 31, S1477 (2005).
[Gua18] V. Guadilla et al., Nucl. Instrum. Meth. Phys. Res. Sect. A 910, 79 (2018).
[Hal23] O. Hall et al., Nucl. Instrum. Meth. Phys. Res. Sect. A 1050, 168166 (2023).
[Win08] M. Winkler et al., Nucl. Instrum. Meth. Phys. Res. Sect. B 266, 4183 (2008).
Nicolás Sánchez Vázquez, for the n_TOF collaboration.
Nuclear fission is a complex process characterized by the splitting of a nucleus into two fragments of comparable mass. The mass and charge distributions of the resulting fission fragments are governed by the interplay between macroscopic nuclear properties and microscopic shell effects under extreme deformation. This competition manifests in the asymmetric mass distributions observed in actinide fission and, more recently, in the sub-lead region [1]. Previous studies of high-energy fission in systems around Z=60 have suggested possible asymmetric distributions [2], though limited statistics have prevented definitive conclusions. Current understanding indicates that favored deformed shells (specifically Z=52, 56 for heavy fragments in actinides [3] and Z=34 for light fragments in sub-lead systems [4,5]) serve as primary drivers of these asymmetries. These findings motivate further investigation into the role of nuclear structure in lighter fissioning systems.
In 2024, an experiment was conducted at the n_TOF/EAR1 facility to determine the symmetric or asymmetric character of fission yields in ^{nat}Ce(n,f) reactions. The experimental setup employed ten tilted Parallel Plate Avalanche Counters (PPACs) to maximize angular coverage [6]. The configuration included seven ^{nat}Ce targets, along with ^{197}Au and ^{238}U targets as references, each positioned between two PPACs. This arrangement enables comprehensive measurements of the reaction cross-section, along with angular and mass distributions of the fission fragments.
This presentation will provide an overview of the current analysis status and present preliminary results from the experiment.
[1] A. N. Andreyev et al., Phys. Rev. Lett. 105, 252502 (2010).
[2] H. A. Gustafsson et al., Phys. Rev. C 24, 769 (1981).
[3] G. Scamps and C. Simenel, Nature 564, 382 (2018).
[4] G. Scamps and C. Simenel, Phys. Rev. C 100, 041602(R) (2019).
[5] P. Morfouace et al., Nature 641, 339-344 (2025), doi: 10.1038/s41586-025-08882-7.
[6] D. Tarrío et al., Nucl. Instr. Methods A 743, 79 (2014).
Meeting of the Future Colliders network.
Group II-VI semiconductors are being explored for room-temperature X-ray imaging due to their excellent properties, especially high resistivity and wide bandgap. However, their performance is often limited by crystalline defects and surface imperfections that trap charges and increase leakage currents.
Wafer inspection to ensure the quality of base materials through optical means prior to hybridization could positively impact detector production yield with these materials. These techniques include non-destructive examination of the wafer's surface topography and internal structure to guarantee chip functionality and reliability.
This work focuses on the development of a quality-control protocol to characterize semiconductor base materials, including complementary techniques such as IR transmission microscopy for bulk defects and Scanning Electron Microscopy (SEM) for surface morphology.
The development of very-low-background γ-ray spectrometers has led to multidetector systems such as Mazinger, an array of two HPGe detectors and two NaI(Tl) anti-Compton rings in anticoincidence configuration. The detector shielding combines passive shielding, composed of three layers of iron, lead and copper, with active shielding, consisting of two anti-muon veto detectors in addition to the aforementioned anti-Compton rings. High efficiency and background reduction are achieved for low-level activity measurements, approaching the limit of the technique. However, true-coincidence-summing (TCS) effects become a drawback in Mazinger due to its specific anticoincidence configuration: any simultaneous triggering of more than one of the four detectors within the coincidence window results in the event being rejected. Following the implementation of Mazinger in Monte Carlo simulations with Geant4, TCS correction factors were calculated, reaching values as high as 1200% in some cases. This work presents the successful results obtained for both natural and artificial multi-γ-emitting radionuclides, including 228Ac, 133Ba, 214Bi, 139Ce, 134Cs, 60Co, 152Eu, and 209Tl.
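In γ-ray spectrometry, a TCS correction of this kind usually enters as a multiplicative factor on the standard peak-activity equation. A hedged sketch (function and variable names are illustrative, not from the Mazinger analysis chain):

```python
def corrected_activity(net_counts, live_time_s, efficiency, gamma_intensity, tcs_factor):
    """Peak activity (Bq) with a multiplicative true-coincidence-summing correction.

    tcs_factor > 1 restores counts lost when a coincident gamma (or, as in an
    anticoincidence setup like Mazinger, the veto itself) removes the event
    from the full-energy peak."""
    if min(net_counts, live_time_s, efficiency, gamma_intensity, tcs_factor) <= 0:
        raise ValueError("all inputs must be positive")
    return tcs_factor * net_counts / (live_time_s * efficiency * gamma_intensity)

# e.g. 1000 net counts in 3600 s, eps = 5%, I_gamma = 0.9, TCS factor 12 (1200%)
```

A factor of 12 means that without the correction the reported activity would be underestimated by more than an order of magnitude, which is why Geant4-based factors are essential for multi-γ emitters in this geometry.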
Boron Neutron Capture Therapy (BNCT) is an experimental form of radiotherapy that uses boron, injected into the patient within a target molecule that accumulates selectively in cancerous cells. This therapy exploits the large boron neutron-capture cross section to deliver a targeted dose from neutron irradiation. BNCT has shown great promise with the advent of accelerator-based technologies, which provide high-quality neutron beams in clinical environments [1].
One of the primary challenges in current BNCT is the accurate determination of the dose delivered to the patient. Since neutron captures in boron produce 478 keV gamma rays, these could potentially be used for real-time dose monitoring. To date, the main challenges remain coping with very intense radiation fields that generate count rates beyond detector capabilities, and achieving enough boron sensitivity to image the boron in the tumor (65 ppm) above the overall boron in nearby tissues (18 ppm), on top of the strong background induced by the harsh neutron and gamma-ray fields generated during the treatments.
The i-TED Compton camera array, originally designed for nuclear physics measurements of astrophysical interest, has expanded into medical physics through ion-range monitoring in hadron therapy [2], and is now aiming at BNCT [3]. Its high-efficiency design and low neutron sensitivity make i-TED especially well suited for this task.
The state-of-the-art i-TED modules consist of large monolithic crystals, 15 mm thick for the scatterer and 25 mm thick for the four absorbers. In the context of BNCT treatments, new pixelated-detector solutions are required to cope with the very large count rates present in these treatments. For this task, pixelated scintillators offer a way to reduce the SiPM pixel firing rates without an overall efficiency loss.
This contribution will present the adaptations of the original i-TED imager to optimize its performance for BNCT dosimetry. We will present the main results and observations from last year's campaign at the Institut Laue-Langevin (Grenoble, France), the changes implemented since then, and the first estimations from the most recent experiment at the LENA reactor (Pavia, Italy). In the latter, we measured with a state-of-the-art i-TED module and an optimized one, consisting of thinner absorbers and a thin, pixelated CLLBC crystal as a scatterer. Comparisons between the performance of both versions will be discussed.
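For context, Compton-camera imaging of the 478 keV boron-capture line rests on reconstructing, event by event, a cone whose opening angle follows from the Compton scattering kinematics. A minimal sketch (illustrative only, not the i-TED reconstruction code):

```python
import math

ME_KEV = 511.0  # electron rest-mass energy [keV]

def compton_cone_angle(e0_kev: float, e_scatter_kev: float) -> float:
    """Opening angle (rad) of the Compton cone, from the energy deposited
    in the scatterer, for a photon of known initial energy e0_kev."""
    if not 0.0 < e_scatter_kev < e0_kev:
        raise ValueError("deposit must be positive and below the photon energy")
    e1 = e0_kev - e_scatter_kev  # energy of the scattered photon
    cos_theta = 1.0 - ME_KEV * (1.0 / e1 - 1.0 / e0_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy deposit")
    return math.acos(cos_theta)

# e.g. a 100 keV deposit from the 478 keV boron line gives a cone of ~44 deg
```

The intersection of many such cones localizes the boron uptake; the high count rates discussed above stress exactly this per-event pipeline, motivating the pixelated scatterer.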
References
[1] K. Hirose et al., “Boron neutron capture therapy using cyclotron-based epithermal neutron source and borofalan (10B) for recurrent or locally advanced head and neck cancer (JHN002): An open-label phase II trial”, Rad. & Onc. Vol 155, pp. 182-187, (2021)
[2] J. Balibrea-Correa et al., “Hybrid compton-PET imaging for ion-range verification: a preclinical study for proton, helium, and carbon therapy at HIT”, The Eur. Phys. Jour. Plus, Volume 140, 870 (2025)
[3] P. Torres-Sánchez et al., “The potential of the i-TED Compton camera array for real-time boron imaging and determination during treatments in Boron Neutron Capture Therapy”, App. Radiat. Isot. 217, 111649 (2025)
Scintillator-based diagnostics, such as Fast Ion Loss Detectors (FILD) [1] and Ion-Neutral Particle Analyzers (INPA) [2], play a crucial role in characterizing energetic-particle behavior in magnetic confinement fusion devices. These diagnostics rely on visible light emission induced by energetic-ion irradiation (ionoluminescence). A common assumption in these diagnostics is that the light emission is isotropic [3]; however, this has not been experimentally validated for materials of interest in fusion research. In this work, we present a study of the angular dependence of light emission in two scintillator materials used in fusion diagnostics: TG-Green (SrGa₂S₄:Eu²⁺) and β-SiAlON. Experiments were conducted at the 3 MV Tandem accelerator of the Centro Nacional de Aceleradores (CNA, Seville), where samples were irradiated with 3.5 MeV He beams, an energy relevant to fusion applications. Light emission was collected through an optical fiber mounted on a rotating stage, with the other end coupled to an optical spectrometer. This configuration allowed measurements at different observation angles with respect to the ion-beam axis. Prior to the angular measurements, we evaluated two potential sources of systematic error: the bending-induced transmission loss in the optical fiber and the ion-beam-induced degradation of the scintillator. This ensured that any observed variation in light intensity with angle could be confidently attributed to emission anisotropy rather than to fiber bending or progressive damage to the scintillator material. Preliminary results indicate measurable variations in emission intensity with observation angle, suggesting that the isotropic-emission assumption may require revision. Furthermore, both scintillators exhibited gradual degradation in light output under sustained ion exposure, with material-dependent resilience.
These findings have direct implications for the calibration and interpretation of scintillator-based diagnostics in current devices and for the design of future systems for ITER and other next-generation fusion reactors.
(1) M. García-Muñoz et al., Rev. Sci. Instrum. 87, 11D829 (2016).
(2) J. Rueda-Rueda et al. and the ASDEX Upgrade Team, Rev. Sci. Instrum. 92, 043554 (2021).
(3) M. Rodríguez-Ramos et al. and the ASDEX Upgrade Team, Plasma Phys. Control. Fusion 59, 105009 (2017).
Compton cameras are emerging as an interesting tool in medical imaging. The IRIS group of IFIC has been working on such systems for 20 years and has developed several prototypes with different types of detectors. The group currently coordinates the European project AIDER for developing a clinical system and testing it with patients. The main results achieved in the projects MIDAS and ICOR and the work currently ongoing will be presented.
This talk summarises the software and computing activities of the LHCb UB group.
The Center for Astroparticles and High Energy Physics (CAPA), recently recognized as Research Institute of the University of Zaragoza, is an interdisciplinary research group encompassing high-energy, nuclear and particle physics, as well as astrophysics, cosmology, astroparticles, theoretical physics, and the related technological developments. Progress in these research areas poses new challenges requiring the implementation of cutting-edge computational techniques and the use of specialized software for the analysis, reconstruction, and selection of complex physical events.
This talk will provide an overview of the activities carried out at CAPA in this context, with particular emphasis on the use of machine learning techniques, digital signal processing, and the implementation of advanced algorithms in dedicated hardware. These activities include, among other aspects, trajectory and topology analysis, event classification, and background suppression in rare-event detectors (scintillators, gaseous and liquid TPCs, among others), as well as the application of machine learning to the exploration of correlations in astroparticle experiments aimed at the search for new physics beyond the Standard Model.
We will summarize the software work performed at the HEP group of the University of A Coruña. This includes LHCb Real-Time Analysis, offline analysis on GPUs, flavour tagging, green algorithms, quantum computing, reconstruction software for Hyper-Kamiokande, and data compression for KOTO.
This talk summarises the software and computing activities of the High-Low team at IFIC.
The ROOT project is an open-source, modular scientific software toolkit for data analysis, developed at CERN primarily for high-energy physics. It can help address the future computing challenges faced by the HL-LHC and other scientific experiments.
IRIS-HEP is a software institute funded by the National Science Foundation. It is developing the state-of-the-art software cyberinfrastructure required for the challenges of data-intensive scientific research at the High-Luminosity Large Hadron Collider (HL-LHC) at CERN and other planned HEP experiments of the 2020s.
The HEP Software Foundation (HSF) is an international community that facilitates cooperation and common efforts in high energy physics (HEP) software and computing. Its goal is to help developers and users create, discover, and use common software, while also supporting the career development of software and computing specialists.
We perform dimensional reduction of the electroweak sector of the dimension-six SMEFT to order $\mathcal{O}(g^4)$ in the coupling constants $g$. This analysis includes one-loop contributions to kinetic terms and quartic couplings, as well as two-loop contributions to squared mass terms, where operators such as four-fermion interactions first appear. Using lattice data, we also provide evidence that, in contrast with previous statements in the literature, the SMEFT may undergo a first-order electroweak phase transition even without significant direct modifications of the Higgs potential at zero temperature.
In this work, we study the renormalization-group evolution of parameters in the dimensionally reduced three-dimensional effective field theory (3D EFT) that describes thermally driven electroweak phase transitions of the Standard Model Higgs field, triggered by Beyond the Standard Model physics.
We compute the two-loop running of the 3D EFT including the effect of the leading non-renormalizable terms.
We then analyze how the running affects the thermodynamic observables characterizing the phase transition, such as the critical temperature and the transition strength.
By incorporating higher-order corrections in the mass parameter evolution, as well as the running of other effective operators, we set the stage for testing their impact on phase transition dynamics in lattice simulations.
Skyrmions were first proposed in QCD, where they provide a topological description of baryons as solitonic excitations of the pion field. Their emergence as a new degree of freedom without the need for additional field content, and their potential as a dark matter candidate, have since motivated the search for analogous configurations in other theories.
In this work, we shall address the search for Skyrmions in electroweak-like sectors. We will briefly discuss three essential aspects of Skyrmions: 1) the requirements for their stability, 2) their existence in weakly-coupled UV completions and 3) whether high-temperature effects might aid in stabilizing them.
We determine the expression for the entanglement entropy at finite temperature for disk regions in conformal field theories that are dual to black holes in Einstein gravity and Gauss-Bonnet gravity, in the context of the AdS/CFT correspondence. We use the Ryu-Takayanagi formula and its generalization for higher-curvature gravities, respectively. We compute the low-temperature expansion of the holographic entanglement entropy relative to the vacuum state, up to second order. The results are expressed in terms of the thermal entropy charge and the central charge appearing in the correlator of two stress-energy tensors, applying the AdS/CFT dictionary. In Einstein gravity, the expansion coefficients are fixed. In Gauss-Bonnet gravity, the first-order coefficient adopts a simple form in terms of the thermal entropy charge, suggesting a possible universal character for this coefficient.
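For reference, the Ryu-Takayanagi prescription used in the Einstein-gravity case reads

```latex
S_A \;=\; \frac{\mathrm{Area}(\gamma_A)}{4\,G_N}\,,
```

where $\gamma_A$ is the minimal-area bulk surface homologous to the boundary disk region $A$ and $G_N$ is the bulk Newton constant; in the Gauss-Bonnet case the area functional is replaced by its higher-curvature generalization, as indicated above.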
Dimensional regularization is nowadays the most used technique to perform loop calculations in Quantum Field Theories. However, it faces some complications when applied to chiral gauge theories such as the Standard Model of particle physics because it is not possible to define a mathematically consistent D-dimensional scheme that preserves gauge symmetries. Nonetheless, for a theory free of physical anomalies it is always possible to restore the symmetry by the addition of the appropriate set of both infinite and finite counterterms. In our work, we present the complete calculation of such finite counterterms needed for the consistent renormalization of the dimension six four-fermion operators of the Standard Model Effective Field Theory at one loop using the Breitenlohner-Maison-’t Hooft-Veltman scheme.
We present a novel approach to compute two-loop beta functions in effective field theories obtained via dimensional reduction from five to four dimensions. We isolate UV divergences in the 4D theory from the IR divergences arising in the matching procedure (remarkably, the one- and two-loop UV divergences in 5D vanish). This method provides a straightforward way to disentangle 4D from 5D contributions, allowing the computation of 4D two-loop beta functions without introducing infrared regulators or employing more intricate techniques such as R*-methods. Our approach thus offers a clean, efficient, and conceptually transparent alternative for higher-order renormalization in dimensionally reduced effective field theories.
We discuss various aspects of multi-Higgs boson production from longitudinal electroweak (EW) gauge boson scattering in the TeV region as the necessary information to characterise the Flare function, F(h), which determines whether the Standard Model EFT (SMEFT) or the Higgs EFT (HEFT, also sometimes referred to as the EW Chiral Lagrangian) is the appropriate description. We analyze various correlations among Higgs couplings that help decide, from experimental data, whether we have a viable SMEFT low-energy scenario. We present an effective field theory study of scattering into states with one, two, three and four Higgs bosons in the final state, in addition to possible extra EW gauge bosons. We show several important cancellations and simplifications which allow us to display these amplitudes in a very compact form. We show that, as the number of Higgs bosons in the final state grows, SMEFT leads to an important suppression of the corresponding cross sections, while this does not happen for general HEFT low-energy scenarios (which do not admit a SMEFT description). We provide some numerical estimates of these multi-Higgs cross sections based on current experimental bounds. Finally, we show how field redefinitions, or an appropriate choice of the scalar manifold coordinates, can provide a more transparent picture of these processes.
In this work we analyze how NA64μ can contribute to the global SMEFT program by probing two effective four-lepton operators that remain completely unconstrained so far, breaking one of the current flat directions. Furthermore, we also study the potential of NA64 to test an extension of the SMEFT that includes fermion singlets of the SM gauge group in the low-energy field content. This effective field theory, usually dubbed νSMEFT, is well motivated by the observation of light neutrino masses and leptonic mixing. We find that NA64μ can constrain three unconstrained four-fermion operators of the νSMEFT. We derive the current leading bounds on these operators and compute the future sensitivity. Our results fill the gap between the current experimental program and a possible future muon collider able to probe this type of New Physics.
RENATA (Red Nacional Temática de Astropartículas)
NA64 is a fixed-target experiment searching for Dark Sectors with the missing energy/momentum technique by employing high-energy electron, positron, muon and hadron beams at the CERN Super Proton Synchrotron accelerator. In this talk, we focus on the status of the program using the high-intensity M2 muon beamline. The first results obtained with $1.98\times10^{10}$ muons on target (MOT) collected in 2022 demonstrated the feasibility of the technique and were published in Phys. Rev. Lett. 132, 211803 (2024) and Phys. Rev. D 110, 112015 (2024). In 2023 and 2024, the experimental setup was significantly improved, allowing us to double the beam intensity, further suppress backgrounds, and accumulate 10 times more data, with ~$3.5\times10^{11}$ MOT in total. In this talk, I will report the status of the ongoing 2023 data analysis and the experiment's prospects in probing the parameter space of the well-motivated benchmark Light Dark Matter models and other scenarios of New Physics below the electroweak scale.
DarkSide-20k is under construction at LNGS and is designed to lead the search for heavy WIMPs in the coming years. In addition, it has prospects to lead other DM searches and to perform relevant detections of neutrinos from the Sun, the atmosphere, and Supernovae. Argon has the advantage of pulse-shape discrimination compared to other noble elements, but has the drawback of its cosmogenically induced $^{39}$Ar content, with an activity of 0.96 Bq/kg. Getting rid of this background is pivotal for the success of our scientific program. For this reason, the experiment will use underground argon, in which the concentration of $^{39}$Ar is depleted by at least a factor of 1400. The extraction of the needed 120 tonnes will take place at the Urania plant in Colorado, the purification at the ARIA plant in Sardinia, and the characterization at the DArT experiment in Canfranc. In this talk, I will present the sensitivity of DarkSide-20k to different rare events and the status of the overall program, with a focus on the Spanish contributions.
DEAP-3600 is an experiment performing direct dark matter searches since 2016. The detector has just undergone a third fill in order to achieve its goal sensitivity of $10^{-46}$ cm$^2$ for the WIMP-nucleon interaction cross section. This science case is achievable thanks to its location 2 km underground at SNOLAB, thorough R&D to minimize its background, and the background-discrimination capabilities only achievable with liquid Ar.
Because of its uniqueness, DEAP is leading the search for some exotic candidates and is producing physically relevant results in the field of rare event searches. It is moreover playing a pivotal role in the framework of the Global Argon Dark Matter Collaboration. The expertise accumulated and the analyses performed are central for the success of DarkSide-20k.
In this talk I plan to address the status and prospects of the experiment, including dark matter searches and potential for neutrino physics.
The evidence for the existence of dark matter from astrophysical observations is irrefutable. However, there has not been a conclusive direct detection of dark matter that does not rely on gravitational interaction with visible matter. One experiment, DAMA/LIBRA, claimed to have observed an annual modulation signal in a sodium-iodide-based detector consistent with that expected from dark matter which persisted for over two decades.
The ANAIS (Annual modulation with NaI(Tl) Scintillators) experiment, housed at the LSC, is intended to directly test this claim by searching for the dark matter annual modulation with ultrapure NaI(Tl) scintillators. These efforts provide a model-independent confirmation or refutation of the DAMA/LIBRA signal by using the same experimental target. Data taking began in August 2017 and has continued smoothly; six years of exposure show consistency with the no-modulation hypothesis at the ~4σ level. Additionally, the impact of different scintillation quenching factors, the main systematic in the comparison with the DAMA/LIBRA results, has also been investigated. Finally, the DAMA results have been further excluded by a combined annual-modulation dark matter search with the COSINE-100 experiment, located in South Korea.
This talk will present the most recent results of the ANAIS-112 dark matter search, our incorporation of the quenching factor in our comparison with DAMA/LIBRA, and our joint dark matter search with the COSINE-100 collaboration.
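As an illustration of the kind of fit behind an annual-modulation search, here is a hedged sketch (not ANAIS analysis code, and with made-up numbers): once the period is fixed to one year and the phase near June 2, the model R(t) = B + S_m·cos(2π(t − t0)/T) is linear in the background level B and the modulation amplitude S_m, so ordinary least squares gives both in closed form.

```python
import math

# Illustrative annual-modulation fit (toy, not an experiment's code).
# With period T and phase T0 fixed, R(t) = B + Sm*cos(2*pi*(t - T0)/T)
# is linear in (B, Sm), so least squares has a closed-form solution.

T = 365.25   # period in days (fixed to one year)
T0 = 152.5   # phase in days (fixed near June 2, DAMA-style convention)

def fit_modulation(times, rates):
    """Closed-form least-squares estimate of (B, Sm) for fixed T, T0."""
    c = [math.cos(2 * math.pi * (t - T0) / T) for t in times]
    n = len(times)
    Sc, Scc = sum(c), sum(ci * ci for ci in c)
    Sr, Src = sum(rates), sum(r * ci for r, ci in zip(rates, c))
    det = n * Scc - Sc * Sc
    B = (Sr * Scc - Src * Sc) / det
    Sm = (n * Src - Sc * Sr) / det
    return B, Sm

# Toy data: flat rate of 3.0 cpd plus a 0.01 modulation (noiseless).
times = [10.0 * i for i in range(200)]
rates = [3.0 + 0.01 * math.cos(2 * math.pi * (t - T0) / T) for t in times]
B, Sm = fit_modulation(times, rates)
```

A real analysis fits per-energy-bin rates with time-dependent backgrounds and efficiencies; the sketch only shows why fixing T and t0 makes the fit linear.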
TREX-DM (TPC for Rare Event eXperiments – Dark Matter) is designed for the direct search of WIMPs in the low-mass region. For the detection of these rare interactions, ultra-low background levels and a low energy threshold are required. TREX-DM meets these conditions by operating a high-pressure TPC filled with argon- (or neon-) based gas mixtures with a large microbulk Micromegas, chosen for its intrinsic radiopurity and low energy threshold capability. The experiment is located at the Canfranc Underground Laboratory (LSC), which significantly reduces cosmic-induced events. Additionally, the detector is shielded with multiple layers to suppress ambient backgrounds—copper and lead for gamma rays, and polyethylene and water tanks for neutrons.
To further improve the low-energy threshold, a novel detection approach consisting of a GEM preamplification stage above the Micromegas is being tested. This has shown great potential for its application in the TREX-DM detector, demonstrating a threshold of O(10) eV.
This talk will present an overview of the TREX-DM experiment and the current status of the detector, including preliminary results of the GEM+Micromegas detection system, as well as the latest updates on the detector's cathode to further reduce the background.
HENSA is a high-efficiency neutron spectrometer based on the same principle as Bonner sphere systems. The detector has been used for years in the Canfranc Underground Laboratory (LSC) to assess the underground neutron flux. In particular, for more than 3 years HENSA has been operating in hall B of the LSC with the objective of characterizing the neutron flux that could affect the ANAIS-112 dark matter experiment.
In this work, the latest results from the HENSA campaign at the LSC will be discussed, including the temporal evolution and energy spectra. In addition, the status of the recently started HENSA campaign at the Gran Sasso National Laboratory (LNGS) will be shown.
Axions and axion-like particles (ALPs) are well-motivated candidates to solve both the strong CP problem and the dark matter puzzle. One of the most promising experimental strategies to detect them is the axion helioscope, which searches for solar axions through their conversion into X-ray photons in a strong magnetic field. The International Axion Observatory (IAXO) is conceived as the next-generation helioscope, aiming for a sensitivity improvement of more than an order of magnitude compared to CAST, the most sensitive helioscope to date. BabyIAXO, currently under construction at DESY, will act as a demonstrator for the key technologies required by IAXO, while already providing competitive physics reach.
A key requirement for the BabyIAXO helioscope is achieving ultra-low background levels —below 10⁻⁷ counts/keV/cm²/s— while maintaining high detection efficiency in the Region of Interest. Meeting these requirements involves the development of highly sensitive and radiopure X-ray detectors. The baseline detection technology, a gaseous TPC with Micromegas readout plane, is being optimized through a set of complementary prototypes. At surface level, detectors operating in Zaragoza and at CEA-Saclay (recently moved to DESY) are focused on understanding and mitigating the impact of cosmic backgrounds, including dedicated studies with shielding and active muon veto systems. In parallel, a third prototype is operated at the Canfranc Underground Laboratory. This underground setup provides a unique environment to characterize the intrinsic performance and background contributions of the detectors.
This talk will present the IAXO project and the current status of the IAXO-D1 prototypes, including preliminary results of background characterization.
In this talk, I explore how neutrinophilic dark sectors can impact core-collapse supernovae by extracting energy out of the proto-neutron star and show preliminary cooling bounds on two different models: a Dirac fermion with s-wave annihilation and a Majorana fermion with p-wave annihilation to neutrinos of all flavours. For each model I present the cooling bounds for two cases: a light mediator and a heavy mediator. We find that luminosity bounds lie in the overabundant region if the dark sector fermions are to be considered dark matter candidates. Finally, I briefly discuss limitations to the luminosity calculations like diffusive transport and important changes that can happen before core-collapse.
We consider the possible production of a new MeV-scale fermion at the COHERENT, LZ and XENONnT experiments, and at the future DUNE detector. The new fermion, belonging to a dark sector, can be produced through the up-scattering of neutrinos off the nuclei and electrons of the detector material, via the exchange of a light mediator. We explore the possibility of generalized interactions, that is, a scalar, pseudoscalar, vector, axial or tensor mediator. We perform a detailed statistical analysis of the COHERENT, LZ and XENONnT datasets and obtain up-to-date constraints on the couplings and masses of the dark fermion and mediators. Likewise, we include sensitivities for the DUNE detector. Finally, we briefly comment on the stability of the dark fermion.
The exploration of physics beyond the Standard Model in nuclear physics is closely tied to investigating rare electroweak transitions. The most promising process is neutrinoless double-beta decay ($0\nu\beta\beta$), a nuclear transition where two neutrons simultaneously convert into two protons with the emission of only two electrons. If observed, this second-order decay would prove that neutrinos are Majorana particles, shed light on the existence of massive neutrinos, and help explain the matter–antimatter imbalance in the universe. The half-lives depend on the square of the nuclear matrix elements (NMEs), which must be computed since $0\nu\beta\beta$ has not yet been observed.
In this talk, I will discuss the novel corrections in chiral effective field theory for $0\nu\beta\beta$ and related second-order weak processes such as $2\nu\beta\beta$. These calculations aim to reduce the uncertainty in the NMEs. First, I will present the contribution of ultrasoft (low-momentum) neutrinos, which can dominate in scenarios involving light sterile neutrinos; then I will show the full N$^2$LO NME results and provide a further detailed analysis. Finally, I will briefly address the recent addition of novel next-to-leading order (NLO) terms in two-neutrino double-beta decay NMEs within the nuclear shell model (NSM), focusing on transitions from the ground state to the first $0^+$ excited states.
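For context, the NME dependence of the half-life is usually quoted through the standard factorized formula for light-neutrino exchange (a textbook relation from the $0\nu\beta\beta$ literature, not specific to this contribution): $[T^{0\nu}_{1/2}]^{-1} = G_{0\nu}\,|M^{0\nu}|^{2}\,(m_{\beta\beta}/m_e)^{2}$, where $G_{0\nu}$ is the phase-space factor and $m_{\beta\beta} = |\sum_i U_{ei}^{2}\, m_i|$ is the effective Majorana mass. Reducing the NME uncertainty therefore translates directly into a sharper extraction of $m_{\beta\beta}$ from any measured half-life.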
The study of reactions involving weakly bound exotic nuclei is an active field due to advances in radioactive beam facilities. Many of these nuclei can be approximately described by a model consisting of an inert core and one or more valence nucleons. However, to properly describe some of these nuclei within few-body models, additional effects must be considered, such as deformations and possible excitations of the core. This is the case of $^{17}$C and $^{19}$C, which can be approximately described as a deformed core and a weakly-bound neutron.
The carbon isotopes $^{17}$C and $^{19}$C are studied using the novel NAMD model, resulting from the combination of the Nilsson and PAMD models from [Phys. Rev. C 108 (2023) 024613]. The proposed formalism follows the Nilsson model scheme while including microscopic information on the core based on Antisymmetrized Molecular Dynamics (AMD) calculations. The bound-state wavefunctions obtained for $^{17}$C have already been applied to the $^{16}\text{C}(d,p)^{17}\text{C}$ transfer reaction, providing good agreement with the experimental data from [Phys. Lett. B 811 (2020) 135939].
The same transfer reaction is also studied by populating unbound states in the continuum of $^{17}$C.
In our calculations, the continuum spectrum of unbound states of the nucleus is discretized using the transformed harmonic oscillator basis (THO) [Phys. Rev. C 80 (2009) 054605], which has been successfully applied to the analysis of breakup and transfer reactions [Phys. Rev. Lett. 109 (2012) 232502]. The unbound states of $^{17}$C and $^{19}$C are also studied in breakup reactions $^{17}\text{C}(p,p')^{16}\text{C}+n$ and $^{19}\text{C}(p,p')^{18}\text{C}+n$. Promising results have been found in the comparison of the XCDCC calculations [Phys. Rev. C 95 (2017) 044611] using the NAMD model with the experimental data from [Phys. Lett. B 660 (2008) 320].
In this talk, I will cover one of the newest methods for nuclear-structure calculations, Neural Quantum States (NQS). While the method is not specific to nuclear physics [1,2], since its first application to the deuteron bound state [3], its use for nuclear ground states has been consistently gaining momentum [4,5]. The claim of NQS is that, by introducing a highly expressive neural-network ansatz in a Variational Monte Carlo (VMC) setting, we can obtain a system's wave function with only a polynomial cost in the number of particles. In the talk, I will briefly cover the optimization algorithms that power NQS nowadays, and then present our newest optimizer, Decisional Gradient Descent (DGD) [6]. While Stochastic Reconfiguration (SR) has been the preferred optimizer in VMC calculations, we have shown that it is not well suited as a second-order optimization algorithm: SR performs poorly when used within Newton's method, whereas DGD manages to reach the ground state of a variety of physical systems in a reduced number of iterations. Having been tested in both continuous-coordinate and discrete-coordinate systems, this work paves the way for subsequent applications to more complex nuclear systems.
[1] G. Carleo and M. Troyer, Science 355, 602-606 (2017)
[2] D. Pfau, J. Spencer et al., Phys. Rev. Research 2, 033429 (2020)
[3] J. Keeble and A. Rios, Phys. Lett. B 135743 (2020)
[4] A. Gnech, B. Fore et al., Phys. Rev. Lett. 133, 142501 (2024)
[5] M. Rigo, B. Hall et al., Phys. Rev. E 107, 025310 (2023)
[6] M. Drissi, J. Keeble et al., Phil. Trans. R. Soc. A 382, 20240057 (2024)
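The VMC loop that NQS methods build on can be sketched in a few lines. This is an illustrative toy, not an NQS, SR, or DGD implementation: a one-parameter trial state for the 1D harmonic oscillator, Metropolis sampling of $|\psi|^2$, and plain gradient descent on the energy using the standard log-derivative estimator. An NQS calculation replaces the trial state by a neural network and the optimizer by SR- or DGD-like updates, but the loop has the same shape.

```python
import math, random

# Toy VMC for the 1D harmonic oscillator (hbar = m = omega = 1) with the
# trial state psi_a(x) = exp(-a x^2). The local energy is
#   E_L(x) = a + x^2 * (1/2 - 2 a^2),
# and the energy gradient uses the standard estimator
#   dE/da = 2 * ( <E_L * O> - <E_L><O> ),  with O = dln(psi)/da = -x^2.
# The exact minimum is a = 1/2 with E = 1/2.

random.seed(0)

def metropolis_samples(a, n, step=1.0):
    """Sample x from |psi_a|^2 ~ exp(-2 a x^2) with a random-walk Metropolis."""
    x, xs = 0.0, []
    for _ in range(n):
        xp = x + random.uniform(-step, step)
        if random.random() < math.exp(-2 * a * (xp * xp - x * x)):
            x = xp
        xs.append(x)
    return xs

def vmc_step(a, n=4000):
    xs = metropolis_samples(a, n)
    eloc = [a + x * x * (0.5 - 2 * a * a) for x in xs]
    dln = [-x * x for x in xs]
    e = sum(eloc) / n
    grad = 2 * (sum(el * d for el, d in zip(eloc, dln)) / n - e * sum(dln) / n)
    return e, grad

a = 0.2                 # start away from the exact minimum a = 1/2
for _ in range(60):
    e, g = vmc_step(a)
    a -= 0.1 * g        # plain gradient descent on the variational energy
```

The parameter drifts toward a ≈ 0.5 and the energy toward the exact ground-state value 1/2, up to Monte Carlo noise.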
The unique structure of the halo nucleus $^{11}$Be continues to challenge the traditional understanding of nuclear stability and weak interaction dynamics. In this nucleus, the characteristics of a weakly-bound single-particle orbital wave function, defined by its closeness to the confinement threshold, are central to many nuclear phenomena. The weak binding of the halo neutron in $^{11}$Be, positioned near the proton-emission threshold, creates a quantum environment where the halo neutron's wave function extends into the continuum. This open-quantum-system behavior allows for an enhanced decay channel via a narrow resonant state close to the proton separation energy of the daughter nucleus $^{11}$B, which significantly increases the proton-emission branching ratio. The result is a considerable proton-emission branching ratio in a neutron-rich nucleus, currently in experimental discrepancy with theoretical models.
This coupling to continuum states manifests as a Fano resonance phenomenon, where the interference between the discrete resonant state and the background continuum produces asymmetric line shapes in nuclear spectroscopy. Moreover, weak-binding behavior significantly affects our insight into the evolution of single-particle orbitals, the positioning and significance of the light-particle drip lines, and the emergence of nuclear halo states. This opens the possibility of more exotic decay modes, such as the hypothesized dark decay; such a process would evade direct detection, representing an intriguing interface between nuclear structure physics and particle physics beyond the Standard Model.
Regions near closed shells in areas of the nuclear chart far from stability are very interesting from the point of view of nuclear structure, since a shell-model description based on single-particle states can be challenged by collective effects. One of the most interesting regions is the one around the doubly-magic $^{78}$Ni nucleus, with $Z=28$ and $N=50$ [1].
The systematics of transitions from the first-excited to ground states of the odd-$A$ $N=50$ isotones [2,3] is very enlightening, since M1 transitions are expected to be $l$ forbidden, resulting in long half-lives with small transition probabilities [4,5,6,7]. A more complete understanding of these $l$-forbidden M1 transitions could be achieved by extending the systematics. To this end, two complementary experiments were performed at the ISOLDE facility (CERN) and at the ILL reactor in Grenoble, France.
The nuclei of interest were populated in $\beta$ decay and investigated by fast-timing techniques. The first experiment was aimed at studying the half-life of the first excited state of $^{83}$As via the $\beta$ decay of $^{83}$Ga at the ISOLDE Decay Station.
In the second experiment, the half-lives of the first excited states in $^{85}$Br and $^{87}$Rb [8] were investigated at the ILL, where the parent nuclei, $^{85}$Se and $^{87}$Kr, were transported and mass-separated by the LOHENGRIN recoil mass spectrometer.
The presentation will address the analysis of both experiments, discussing the methodologies used and the preliminary results obtained. Additionally, conclusions regarding the systematics of the $l$-forbidden M1 transitions in $N=50$ isotones will be drawn, highlighting the implications for nuclear structure.
References:
[1] R. Taniuchi et al., "$^{78}$Ni revealed as a doubly magic stronghold against nuclear deformation", Nature 569, 53-58 (2019). doi: 10.48550/arXiv.1912.05978.
[2] V. Paziy, "Ultra fast timing study of exotic nuclei around Ni: the β decay chain of $^{81}$Zn", PhD thesis, Universidad Complutense de Madrid (2016).
[3] P.D. Bond and G.J. Kumbartzki, "Coulomb excitation of $^{85}$Rb and $^{87}$Rb", Nuclear Physics A 205, 239-248 (1973). doi: 10.1016/0375-9474(73)90207-8.
[4] R. G. Sachs and M. Ross, "Evidence for Non-Additivity of Nucleon Moments", Phys. Rev. 84, 379-380 (1951). doi: 10.1103/PhysRev.84.379.2.
[5] I.M. Govil and C.S. Khurana, "Systematics of l-forbidden M1 transitions", Nuclear Physics 60, 666-671 (1964). doi: 10.1016/0029-5582(64)90102-6.
[6] A. B. Volkov, "A Modified Shell Model of Odd-Even Nuclei", Phys. Rev. 94, 1664-1670 (1954). doi: 10.1103/PhysRev.94.1664.
[7] R. G. Sachs and M. Ross, "Evidence for Non-Additivity of Nucleon Moments", Phys. Rev. 84, 379-380 (1951). doi: 10.1103/PhysRev.84.379.2.
[8] T.D. Johnson and W.D. Kulp, "Nuclear Data Sheets for A = 87", Nuclear Data Sheets 129, 1-190 (2015). doi: 10.1016/j.nds.2015.09.001.
Abstract
The region near ${^{78}\mathrm{Ni}}$ is crucial for nuclear structure studies, as it lies around a doubly-magic shell closure ($Z = 28$, $N = 50$), making it an ideal testing ground for shell evolution and the interplay between single-particle and collective effects. Currently, many experimental and theoretical efforts are dedicated to investigating this region of the nuclear chart [1-3], aiming to understand the robustness of nuclear shells far from stability and the emergence of collective effects as nucleons are added. The interaction among valence nucleons may be capable of attenuating the magic nature of a nucleus very close to shell closures [4]. From this perspective, isotopes of Ge ($Z = 32$) could be of significant interest to understand the evolution of the $N = 50$ gap.
In the recent IS771 experimental campaign, neutron-rich Ge isotopes were investigated via decay spectroscopy at the ISOLDE Decay Station (ISOLDE, CERN) using very neutron-rich Ga beams, produced using the PSB protons impinging on a proton-to-neutron converter to fission a thick $\mathrm{UC}_x$ target. High production yields were achieved for isotopes such as ${^{83-85}\mathrm{Ga}}$ [5], populating ${^{83-85}\mathrm{Ge}}$ through $\beta$-decay and $\beta$-delayed neutron emission. The calculated yields for the different decays of this experiment were consistent with previous measurements.
The high yields, together with the spectroscopic capabilities of the ISOLDE Decay Station, equipped with 10 HPGe detector clovers in a compact geometry, enabled a significant expansion of previous knowledge, including the identification of new transitions and levels, as well as the ability to carry out angular correlation measurements for spin-parity assignments. In addition, two $\mathrm{LaBr}_3$ detectors and three beta detectors were used to perform lifetime measurements of excited states in the subnanosecond range via fast-timing techniques.
In this contribution, the current status of the analysis of the experiment will be presented, focusing on the obtained yields, the extended level schemes extracted through high-resolution $\gamma$-ray spectroscopy and the preliminary results for lifetime measurements.
References
[1] R. Yokoyama et al., $\beta$-delayed neutron emissions from $N > 50$ gallium isotopes, Physical Review C 108 (2023) 064307.
[2] K. Sieja et al., Laboratory versus intrinsic description of nonaxial nuclei above doubly magic ${^{78}\mathrm{Ni}}$, Physical Review C 88 (2013) 034327.
[3] C. Delafosse et al., Pseudospin Symmetry and Microscopic Origin of Shape Coexistence in the ${^{78}\mathrm{Ni}}$ Region: A Hint from Lifetime Measurements, Physical Review Letters 121 (2018) 192502.
[4] A. Huck et al., Beta decay of the new isotopes ${^{52}\mathrm{K}}$, ${^{52}\mathrm{Ca}}$, and ${^{52}\mathrm{Sc}}$; a test of the shell model far from stability, Physical Review C 31 (1985) 2226.
[5] ISOLDE yield database (development version), https://isoyields2.web.cern.ch/YieldDetail.aspx?Z=31.
Neutrino oscillation experiments have revealed that neutrinos have mass, providing the first clear evidence of physics beyond the Standard Model. These experiments are essential to achieve key goals in neutrino physics, such as measuring the CP-violating phase, determining neutrino mixing angles and mass ordering, and probing possible new physics. Future facilities such as DUNE and Hyper-Kamiokande aim to obtain these measurements with unprecedented precision.
In current and future accelerator-based neutrino experiments, detectors are composed of complex nuclei such as oxygen, carbon, or argon. Because neutrino beams are not monochromatic, a detailed understanding of neutrino–nucleus scattering across a broad energy range is essential to reduce systematic errors in neutrino energy reconstruction and oscillation analyses.
Reliable predictions of these interactions require advanced nuclear models that account for various nuclear effects. The relativistic mean-field (RMF) model provides an independent-particle description of the nucleus within a microscopic, quantum-mechanical framework, allowing consistent assessment of nuclear effects across different interaction channels.
In this talk, I will discuss several nuclear effects relevant to neutrino–nucleus cross sections at low and intermediate energies within the relativistic mean-field framework, focusing on quasielastic and single-pion production processes that are particularly important for neutrino oscillation experiments.
L. Alvarez-Ruso et al., Progress in Particle and Nuclear Physics 100 (2018).
R. González-Jiménez et al., Phys. Rev. C 100, 045501 (2019).
T. Franco-Munoz et al., Phys. Rev. C 108, 064608 (2023).
J. García-Marcos et al., Phys. Rev. C 109, 024608 (2024).
T. Franco-Munoz et al., J. Phys. G 52, 025103 (2025).
Abstract
This study investigates the coexistence of regular and intruder configurations in odd gold isotopes, the latter being proposed as a one-particle–one-hole excitation above the Z=82 energy gap [1].
Experimental data on the systematics of the energy spectra are analysed for the A=179–195 Au chain. The work employs the Interacting Boson–Fermion Model (IBFM) [2] to describe this behaviour, examining how the unpaired particle affects the collective core. The energy splitting arising from the boson–fermion interactions is presented and used to reproduce the systematics of the different bands [3].
Finally, the main aspects required to describe all the bands within a unified framework (the IBFM with Configuration Mixing) are discussed.
References:
[1] K. Heyde et al., "Coexistence in Odd-Mass Nuclei", Physics Reports (Physics Letters) (1983).
[2] F. Iachello and P. Van Isacker, The Interacting Boson-Fermion Model, Cambridge Monographs on Mathematical Physics (1987).
[3] R. Bijker and A.E.L. Dieperink, "Description of odd-A Nuclei in the Pt Region in the Interacting Boson-Fermion Model", Nuclear Physics (1982).
An optimization framework is presented for a Parallel-Plate Avalanche Counter (PPAC) with Optical Readout for heavy-ion tracking and imaging. In a previous work, a differentiable optimization framework was developed in which a surrogate model predicted the reconstructed positions of impinging charged particles as a function of detector parameters. This approach is extended by introducing a generative surrogate that simulates full detector events as produced by Geant4, while the subsequent position reconstruction is formulated as a differentiable step within the optimization pipeline. The performance of several generative models is compared, and their potential for automated detector design is discussed.
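The surrogate-based optimization loop described above can be sketched as follows. This is a toy with made-up numbers, not the actual PPAC surrogate: the surrogate here is an analytic stand-in mapping a hypothetical detector parameter (a gap width "d") to a position resolution, and the gradient comes from finite differences rather than automatic differentiation through a trained network.

```python
# Toy detector-optimization loop (illustrative; in the real pipeline the
# surrogate is a generative/regression network trained on Geant4 events and
# gradients flow through it by automatic differentiation).

ALPHA, BETA = 2.0, 0.5    # made-up trade-off constants

def surrogate_resolution(d):
    # Hypothetical trade-off: small gap -> worse signal statistics,
    # large gap -> more charge diffusion.
    return ALPHA / d + BETA * d

def optimize(d0, lr=0.2, steps=200, eps=1e-5):
    """Gradient descent on the surrogate via central finite differences."""
    d = d0
    for _ in range(steps):
        grad = (surrogate_resolution(d + eps)
                - surrogate_resolution(d - eps)) / (2 * eps)
        d -= lr * grad
    return d

d_opt = optimize(1.0)   # analytic optimum is sqrt(ALPHA/BETA) = 2.0
```

The point of the differentiable formulation is exactly this: once every stage (surrogate simulation plus reconstruction) exposes gradients, detector parameters can be tuned by descent instead of grid scans.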
The HL-LHC project is driving significant upgrades to the ATLAS experiment to enhance data processing and maintain its discovery potential under high-luminosity conditions. A key aspect of this upgrade is the replacement of the readout electronics for the ATLAS Tile Hadronic Calorimeter. The new Tile PreProcessor (TilePPr) system, equipped with Kintex Ultrascale FPGAs, serves as the interface between the front-end electronics and the first level of the future ATLAS Trigger system. The TilePPr will perform real-time signal reconstruction, delivering calibrated data for each bunch crossing at 40 MHz with a fixed and low-latency path.
This contribution will focus on the design, implementation, and performance evaluation of Machine Learning-based reconstruction algorithms within the TilePPr, designed to meet the HL-LHC requirements. The algorithms are first trained to distinguish between signal and background noise, after which the selected samples are used to train different neural networks to achieve accurate and efficient energy reconstruction while keeping computational and storage demands low. Given the constraints of real-time processing, special emphasis is placed on model optimization strategies, ensuring fast inference on FPGAs without loss of precision.
The High-Luminosity upgrade of the LHC will increase the collision rate by a factor of five, resulting in dense environments with dozens of overlapping interactions. Within this context, the LHCb Upgrade II and its next-generation electromagnetic calorimeter, the PicoCal, will face major challenges in the accurate energy reconstruction of photons, electrons, and neutral pions. To address these conditions, we present a novel Graph Neural Network (GNN) approach in which clusters of calorimeter cells are represented as graphs. The model learns to mitigate the pile-up contribution, outperforming standard reconstruction techniques in energy resolution.
A lightweight, attention-enhanced variant, known as GarNet, is also explored, achieving similar accuracy with up to eight times faster inference, opening the door to real-time applications in future LHC runs.
The Tile Calorimeter (TileCal), a central component of the ATLAS detector at the Large Hadron Collider (LHC), plays a crucial role in measuring the energy of hadronic particles produced in high-energy collisions. As the LHC enters its High-Luminosity phase (HL-LHC), TileCal will face substantial challenges arising from elevated radiation levels, increased data throughput, and unprecedented pile-up conditions. These conditions demand more efficient and robust signal-processing techniques to ensure accurate energy reconstruction.
This study explores strategies to optimize the trade-off between FPGA resource usage and latency in the implementation of signal reconstruction algorithms. Accurate reconstruction of the calorimeter pulse is essential, as it directly reflects the energy deposited by particles. Currently, the Optimal Filtering (OF) algorithm is employed to extract pulse amplitudes from digitized samples. While OF has proven effective under nominal conditions, its performance deteriorates in high pile-up environments.
To address this limitation, the study investigates the potential of Neural Network-based approaches, specifically Single Layer Perceptrons (SLP) and Multilayer Perceptrons (MLP), trained on simulated TileCal pulse data, as alternatives to traditional methods.
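For intuition, Optimal Filtering is a linear estimator: the amplitude is a weighted sum of the digitized samples, with weights chosen to give unit gain on the nominal pulse shape and zero sensitivity to small phase shifts. A minimal NumPy sketch, assuming white noise and a toy Gaussian pulse shape (real TileCal shapes and noise covariances come from calibration data):

```python
import numpy as np

# Illustrative normalized pulse shape g sampled at 7 points (25 ns spacing);
# a Gaussian stands in for the real calibrated TileCal shape.
t = np.arange(-3, 4) * 25.0
g = np.exp(-0.5 * (t / 40.0) ** 2)
g /= g.max()
dg = np.gradient(g, t)                       # time derivative of the shape

# OF weights (white-noise case): minimize the noise variance w @ w subject
# to w @ g = 1 (unit gain) and w @ dg = 0 (phase insensitivity).
G = np.stack([g, dg], axis=1)                # constraint matrix, shape (7, 2)
w = G @ np.linalg.solve(G.T @ G, np.array([1.0, 0.0]))

# A pulse of true amplitude 12.3 plus a small noise term on each sample
samples = 12.3 * g + np.array([0.05, -0.02, 0.01, 0.03, -0.04, 0.02, 0.0])
amplitude = w @ samples                      # close to 12.3
```

The degradation in high pile-up arises because out-of-time pulses violate the assumptions behind these fixed weights, which is exactly the regime where learned estimators such as SLPs and MLPs can help.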
Graph Neural Networks (GNNs) have become promising candidates for particle reconstruction and identification in high-energy physics, but their computational complexity makes them challenging to deploy in real-time data processing pipelines. In the next-generation LHCb calorimeter, detector hits, characterized by energy, position, and timing, can be naturally encoded as node features, with spatial and energy-based relationships captured through edge features. This study investigates strategies to reduce both the structural complexity and numerical precision of GNNs to meet stringent real-time processing and resource constraints. We demonstrate that omitting explicit edge features and replacing conventional full message passing with learnable, permutation-invariant aggregation functions results in up to an 8× reduction in CPU inference time, while maintaining or even surpassing the energy resolution and classification performance of baseline methods. Furthermore, we explore post-training quantization, reducing model weights from 32-bit floating point (FP32) to 16-bit or 8-bit integers. While quantization could potentially offer additional efficiency gains, lightweight GNNs with approximately 100k parameters exhibit minor inference-time performance degradation under aggressive precision reduction. We also present our knowledge distillation experiment, where we train a compact student model to mimic the performance of a larger, more complex teacher network. Our findings provide practical design guidelines for developing fast, efficient, and high-performing GNNs for real-time particle reconstruction in LHCb’s upgraded calorimeter, while also highlighting the limitations of quantization in small neural network architectures.
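The post-training quantization step mentioned above can be sketched in a few lines. This is a generic symmetric per-tensor scheme applied to random weights, not the LHCb model itself:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(128, 64)).astype(np.float32)

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization, FP32 -> INT8."""
    scale = np.abs(w).max() / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

q, scale = quantize_int8(weights)
dequant = q.astype(np.float32) * scale       # values the network effectively uses
err = np.abs(weights - dequant).max()        # bounded by scale / 2
```

Per-channel scales and calibration on representative activations usually recover most of the accuracy lost by this naive per-tensor choice; for models of roughly 100k parameters the memory saving is real but, as noted above, a speed-up is not guaranteed.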
The aim of this contribution is to present a comprehensive ML framework compiled after a period of applying ML/DL methods to physics analysis in the ATLAS experiment. From a technical and organizational point of view, we addressed the use of different ML/DL libraries, the management of the relevant computing infrastructures, the processing of different kinds of datasets, etc. Another important aspect discussed in this contribution is the integration of this ML framework into the Analysis Facility concept of ATLAS Computing. We have considered a possible workflow including the worldwide GRID infrastructure and local resources (Tier-3 and the IFIC Artificial Intelligence Infrastructure, ARTEMISA).
The experience gained in recent years through the development of undergraduate and master's theses has led to the systematization of optimization processes for ML/DL methods, both at the hyperparameter level and in the use of loss functions with controlled metrics. We have addressed classification and regression problems, which has allowed us to develop structured approaches for ML analysis. One of these approaches involves extracting the ttbar resonance signal from background events (Standard Model events). Another approach we have explored is regression, specifically for the study of missing transverse energy (MET) in dileptonic ttbar event channels. For both problems, diverse methods have been applied, achieving accuracies above 95.5% in SM vs BSM ttbar classification. A key addition is the inclusion of interpretability analysis using SHAP.
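The SHAP attributions mentioned above estimate Shapley values. For a model with only a few inputs they can be computed exactly by averaging marginal contributions over all feature orderings, which the SHAP library then approximates for realistic networks. A self-contained toy example with a hypothetical three-feature score and a zero baseline (not the analysis code):

```python
import math
import numpy as np
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature orderings."""
    n = len(x)
    phi = np.zeros(n)
    for order in permutations(range(n)):
        z = baseline.astype(float).copy()
        prev = f(z)
        for i in order:
            z[i] = x[i]                      # add feature i to the coalition
            cur = f(z)
            phi[i] += cur - prev             # its marginal contribution
            prev = cur
    return phi / math.factorial(n)

# Toy "classifier score": one linear term plus an interaction term
f = lambda z: 2.0 * z[0] + z[1] * z[2]
x, base = np.ones(3), np.zeros(3)
phi = shapley_values(f, x, base)             # [2.0, 0.5, 0.5]
```

The efficiency property (attributions summing to f(x) minus the baseline score) is what makes these values a consistent way to rank input-feature importance in the classification studies.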
Imaging Atmospheric Cherenkov Telescopes (IACTs) rely on the electromagnetic calorimetry technique to record gamma rays of cosmic origin. Their trigger systems therefore use combined analog and digital electronics implementing simple but fast algorithms, a design forced by the extremely high data rates and strict timing requirements. In recent years, a design of an Advanced Camera has been proposed as an upgrade for the Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array Observatory (CTAO). This camera will be based on Silicon PhotoMultipliers (SiPMs) and a new fully digital trigger system incorporating Machine Learning algorithms. The critical improvement relies on implementing those algorithms in Field Programmable Gate Arrays (FPGAs) to increase the sensitivity and efficiency of real-time decision-making while fulfilling timing constraints. In addition, building on our prior experience in IACT event reconstruction using Deep Learning (DL), we are currently applying analogous algorithms to the challenge of reducing the CTA data volume offline.
We are currently developing all the elements of an ML-based IACT trigger system, including a PCB prototype to test multi-gigabit optical transceivers and using development boards as an ML-algorithm testbench. Additionally, we also aim to integrate DL capabilities into the CTA offline analysis pipeline, seeking a more efficient processing chain in both computational and storage aspects.
We propose an extension of the electroweak sector of the Standard Model in which the gauge group $SU(2)_L$ is promoted to $SU(2)_1 \times SU(2)_2$. This framework naturally includes a viable dark matter candidate and generates neutrino masses radiatively à la Scotogenic. Our scenario can be viewed as an ultraviolet extension of the Scotogenic mechanism, addressing some of its shortcomings. The resulting phenomenology may be probed through a range of experimental signatures, from precision electroweak measurements to searches for lepton flavor violation.
Baryon and lepton number are excellent low-energy symmetries of the Standard Model (SM) that tightly constrain the form of its extensions. In this paper we investigate the possibility that these accidental symmetries are violated in the deep UV, in such a way that one multiplet necessary for their violation lives at an intermediate energy scale M above the electroweak scale. We write down the simplest effective operators containing each multiplet that may couple linearly to the SM at the renormalisable level and estimate the dominant contribution of the underlying UV model to the pertinent operators in the SMEFT: the dimension-5 Weinberg operator and the baryon-number-violating operators up to dimension 7. Our results are upper bounds on the scale M for each multiplet–operator pair, derived from neutrino-oscillation data as well as prospective nucleon-decay searches. We also analyse the possibility that both processes are simultaneously explained within a natural UV model. In addition, we advocate that our framework provides a convenient and digestible way of organising the space of UV models that violate these symmetries.
The lack of observation of experimental signals pointing to the existence of physics beyond the Standard Model (SM) suggests that the coupling between SM particles and hidden sectors is likely small. This suppression leads to relatively long lifetimes for BSM particles when their masses lie within the MeV–GeV range. In this talk, the regime of long-lived particles (LLPs) is considered, motivated by their potential to serve as portals to hidden sectors that address different open problems of the SM.
Recent studies indicate that Liquid Argon Time Projection Chambers (LArTPCs), such as the prototypes of the DUNE far detectors (ProtoDUNE) installed at CERN, have the potential to detect long-lived BSM particles produced in one of the targets in CERN’s North Area exposed to the 400 GeV SPS beam. A key demonstration lies in observing SM neutrinos, well-known weakly interacting particles. Feasibility studies are ongoing, with a test carried out using one of the ProtoDUNE detectors aimed at demonstrating the potential of these detectors for BSM searches. In this talk, the status of our analysis and our plans for future BSM prospects will be highlighted.
Neutrino non-standard interactions (NSI) have been studied in a variety of contexts and suggested as a possible mechanism for resolving certain unexpected results in oscillation experiments. NSI may be generated in various ways and take a variety of forms. We focus on flavor-changing neutral current NSIs involving tau neutrinos. In simple scenarios in which these are generated by dimension-six effective operators, it is often the case that flavor-violating interactions of charged leptons are also generated; the strengths of these interactions are then related by SU(2)_L symmetry. We investigate for this subset of NSI the restrictions which can be obtained by utilizing the stringent experimental limits on charged lepton flavor-violating tau decays, finding that the quark contributions to these operators are often constrained to be on the order of 10^-3.
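For reference, the neutral-current NSI considered here are conventionally parameterized by the effective Lagrangian (standard notation, not specific to this contribution)

$$\mathcal{L}_{\rm NSI} = -2\sqrt{2}\,G_F\,\varepsilon_{\alpha\beta}^{f}\,\left(\bar\nu_\alpha \gamma^\mu P_L \nu_\beta\right)\left(\bar f \gamma_\mu f\right),$$

so that a dimension-six operator generated at a scale $\Lambda$ gives $\varepsilon \sim v^2/\Lambda^2$, and bounds at the $10^{-3}$ level correspond roughly to multi-TeV new-physics scales.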
Modular symmetries have emerged as a promising and elegant approach to the flavor problem, with the discrete group $A_4$ as a benchmark example. In this work, we study the impact of modular UV interactions generating dimension-6 SMEFT operators with leptons. Restricting to extensions with a single mediator and at most one modular form insertion, we classify the possible scenarios and compute their one-loop matching. We show how charged lepton flavor violating observables provide stringent constraints, offering a systematic path to test modular flavor symmetries in the future.
We show that, contrary to common expectations, the observed charged leptons can have a substantial mixing with new, heavier fermions. This can happen, in the language of effective theories, when the effect of mixing with heavier fermions vanishes at tree level in operators of mass-dimension 6 (or it is suppressed by the small charged lepton masses), a cancellation that can be naturally ensured by symmetries. Other observable effects from fermion mixing appear then, either at tree-level via operators of mass dimension 8, or at one-loop order in operators of mass-dimension 6.
Using a model that realizes this scenario, we consider all current direct and indirect constraints and show that the experimental constraints on the mixing are so mild that, given the current direct limit on the mass of the heavy fermions, theoretical considerations (mainly instability of the Higgs potential, the presence of Landau poles, and strong coupling) become the leading constraints on the mixing.
Currently the right-handed electron could have a $21\%$ component of EW non-singlet and still be compatible with all current experimental and theoretical constraints. The equivalent limits for muons and taus are, respectively, $18\%$ and $16\%$. Future experiments, including the high-luminosity LHC and, most notably, the FCC-ee, will be precise enough to make the experimental limits on the mixing surpass the theoretical ones.
The temperature and polarization of the cosmic microwave background (CMB), as measured today, may offer key insights into the topology of the early universe prior to inflation, for example, by discriminating between flat and warped geometries. In this paper, we focus on a Kaluza-Klein model with an extra spatial dimension that compactifies at the Grand Unified Theory (GUT) epoch, subject to mixed Neumann/Dirichlet boundary conditions at fixed points. As a consequence, a set of infrared cutoffs naturally emerges in both the scalar and tensor spectra, leading to observable consequences in the CMB. We examine in detail the possible signatures of such a topology, particularly in relation to the even-odd parity imbalance already reported by the COBE, WMAP and Planck missions in the temperature angular correlations at large scales. Furthermore, we extend our analysis to the existing Planck E-mode polarization data, and to the high-precision B-mode polarization measurements expected from the forthcoming LiteBIRD mission.
Apart from its gravitational interactions, dark matter (DM) has remained so far elusive in laboratory searches. One possible explanation is that the relevant interactions to explain its relic abundance are mainly gravitational. In this work we consider an extra-dimensional Randall-Sundrum scenario with a TeV-PeV IR brane, where the Standard Model is located, and a GeV-TeV deep IR (DIR) one, where the DM lies. When the curvatures of the bulk to the left and right of the IR brane are very similar, the tension of the IR brane is significantly smaller than that of the other two branes, and therefore we term it “evanescent”. In this setup, the relic abundance of DM arises from the freeze-out mechanism, thanks to DM annihilations into radions and gravitons. Focusing on a scalar singlet DM candidate, we compute and apply current and future constraints from direct, indirect and collider-based searches. Our findings demonstrate the viability of this scenario and highlight its potential testability in upcoming experiments. We also discuss the possibility of inferring the number of branes if the radion and several Kaluza-Klein graviton resonances are detected at a future collider.
RENATA (Red Nacional Temática de Astropartículas)
We discuss the phenomenology of neutrino decoupling in the early universe, by summarising the details of the calculation in standard and non-standard scenarios. We show how non-standard physics can affect the amount of neutrinos that exist in the universe and how we can adopt cosmological observations in order to constrain neutrino properties such as their mass, effective number, interactions, and non-standard cosmological evolution. Implications for Big Bang Nucleosynthesis are also briefly discussed.
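For reference, the effective number of neutrino species constrained by these cosmological observations enters the radiation energy density after electron-positron annihilation as

$$\rho_{\rm rad} = \rho_\gamma \left[ 1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\rm eff} \right], \qquad N_{\rm eff}^{\rm SM} \simeq 3.044,$$

so measured departures from the standard value directly probe the non-standard decoupling scenarios discussed in the talk.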
Multi-messenger astronomy is an emerging field that aims to combine the information carried by different cosmic messengers (cosmic rays, photons, neutrinos, and gravitational waves) originating at a common source. Neutrinos, being stable and neutral particles, are especially valuable as they can escape dense environments. Furthermore, they are not absorbed during propagation to Earth and constitute an unambiguous signature for hadronic processes at the source.
KM3NeT is a deep-sea infrastructure currently under construction at the bottom of the Mediterranean Sea, hosting a 3-dimensional array of light sensors designed to detect the Cherenkov light induced by neutrino interactions. Two separate arrays are already operational using partial configurations: ORCA, optimised for the GeV-TeV energy range, and ARCA, optimised for the TeV-PeV energy range. In this talk, the latest results of the real-time follow-up searches for neutrino counterparts in coincidence with external triggers are presented, with special emphasis on the follow-up to gravitational wave events. In addition to these real-time studies, a stacking search for cosmic neutrinos coming from gamma-ray bursts is also presented, conducted using data from the period when ARCA was operational with 21 detection lines.
KM3NeT, a deep-sea Cherenkov neutrino telescope with MeV–PeV sensitivity, comprises two detectors: ARCA (high-energy astrophysical neutrinos) and ORCA (low-energy oscillation/atmospheric studies). With roughly 25% of the detectors deployed, KM3NeT is partially operational, and full completion is expected by the end of the decade.
Its design provides a large field of view including the galactic center, a high duty cycle, an angular resolution as low as 0.1° above 100 TeV and a sensitivity to energies from MeV to PeV. These features enable it to address diverse physics goals, including the search for astrophysical neutrino sources or the detection of very high-energy neutrinos.
In the era of time-domain and multi-messenger astronomy, rapid sharing of transient observations is critical. KM3NeT’s capabilities, coupled with an online analysis pipeline, make it a key asset for identifying neutrino candidates and triggering follow-up observations.
This contribution describes KM3NeT’s Alert System, covering the architecture enabling low-latency data processing and event reconstruction; the selection criteria used for background suppression and candidate prioritization; and the alert protocol, describing the dissemination thresholds and strategy. We highlight the system’s role in enabling prompt follow-up of neutrino transients within the global multi-messenger network.
Nowadays, deep-sea neutrino telescopes such as KM3NeT are based on the detection, by a large 3D array of optical sensors, of the Cherenkov light produced after a neutrino interaction. These detectors also have an associated acoustic system for monitoring the position of the optical sensors. In this paper, we discuss the possibility of using the acoustic sensors of the positioning system to detect the thermo-acoustic pulse produced after the interaction of an ultra-high-energy neutrino, exploring the possibility of a hybrid optical-acoustic detector. This has become even more relevant after the observation of a very high-energy neutrino in the KM3NeT/ARCA detector. We consider that the main limitation for the hybrid detector lies in the difficulty of triggering on interesting acoustic events, due to the characteristics of the signature, a very weak and short bipolar pulse, and to the large separation between acoustic sensors. To overcome these difficulties, we are working on two research topics. Firstly, we are developing an acoustic antenna formed by an array of four closely spaced hydrophones that complements the hydrophones of the telescope. Secondly, we are working on deep-sea acoustic monitoring and data analysis, proposing a new triggering method based on spectrogram analysis. As will be presented, this method is more appropriate for finding the weak, short signal than the cross-correlation method used for acoustic positioning.
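The spectrogram-triggering idea can be illustrated on a synthetic trace. This SciPy sketch uses an invented sampling rate, frequency band, pulse width and a deliberately optimistic signal-to-noise ratio (the real neutrino-induced pulse is far weaker); it flags time bins whose band-limited power stands out from the noise floor:

```python
import numpy as np
from scipy import signal

fs = 200_000                                   # 200 kHz sampling (illustrative)
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, t.size)           # ambient acoustic noise

# Toy bipolar pulse at t0: first derivative of a Gaussian, with spectral
# content peaking near 1/(2*pi*sigma) ~ 32 kHz; amplitude chosen for a clear demo.
t0, sigma = 0.05, 5e-6
trace += 40.0 * (-(t - t0) / sigma) * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

# Spectrogram trigger: flag time bins whose summed power in the
# 10-50 kHz band exceeds a multiple of the median band power.
f, tt, Sxx = signal.spectrogram(trace, fs=fs, nperseg=256, noverlap=128)
band = (f > 10e3) & (f < 50e3)
band_power = Sxx[band].sum(axis=0)
triggers = tt[band_power > 5 * np.median(band_power)]   # candidate pulse times
```

In practice the threshold, band and window length must be tuned to the measured ambient noise spectrum, and coincidences between the closely spaced hydrophones of the antenna would suppress the remaining false triggers.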
KM3NeT is a next-generation neutrino telescope currently under construction in the Mediterranean Sea. It consists of two detectors, ARCA and ORCA, both equipped with multi-PMT optical modules designed to detect the Cherenkov light produced by charged particles originating from neutrino interactions in the surrounding medium. ARCA, optimized for energies from TeV to PeV, is dedicated to the study of cosmic neutrinos, while ORCA focuses on atmospheric neutrino oscillations in the GeV energy range. Despite not yet being fully completed, KM3NeT is already taking data with partial configurations, such as ORCA18, which comprises 18 detection units.
In this work, we explore the detection prospects for a Beyond Standard Model (BSM) particle known as the Heavy Neutral Lepton (HNL). The HNL signature left in ORCA is particularly distinctive: it is expected to produce two spatially separated cascades of light, an event topology not anticipated from any Standard Model process in the same energy regime. Using a dedicated simulation based on the SIREN lepton injector to model HNL signals in KM3NeT/ORCA18, we assess the potential of modern Deep Learning techniques, such as ParticleNet, together with Boosted Decision Trees (BDTs) implemented with the XGBoost library, to reconstruct and discriminate this unique signal.
The Cherenkov Telescope Array Observatory (CTAO) is an international project aimed at advancing our understanding of the gamma-ray sky with the most sensitive gamma-ray observatory ever built. CTAO will consist of two arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs), comprising more than 60 telescopes in total. The northern array is under construction at the Roque de los Muchachos Observatory (ORM) on the Canary Island of La Palma, while the southern array will be located at Paranal, Chile. Three different telescope sizes will be used to cover a broad energy range from 20 GeV to 300 TeV. CTAO will feature fast science alert processing and rapid telescope repointing, making it a premier facility for studying high-impact transient phenomena such as gamma-ray bursts and gravitational-wave counterparts.
Spain plays a key role in the development of CTAO, particularly at the northern site, with significant contributions to the Large-Sized Telescopes (LSTs), site infrastructure, software, computing systems, and atmospheric characterization. Following the establishment of the CTAO European Research Infrastructure Consortium (ERIC) in January 2025, construction activities have accelerated. At the CTAO-North site, one LST is already operational, three more are under construction, and work on the operations and technical building is about to begin. Infrastructure development has also started at the southern site, where the first telescope is expected to be deployed in 2026.
In this contribution, we present an update on the status of CTAO, focusing on construction progress, planned science operations, and Spain’s central role in the project.
Extreme high-synchrotron-peaked blazars (EHSPs), defined by synchrotron peaks above 10^17 Hz, represent an uncommon subclass of blazars that challenge conventional blazar emission models and probe the limits of particle acceleration in relativistic jets. Yet, the number of identified EHSPs remains small, limiting comprehensive studies of their population and physical characteristics. In this contribution, we present a systematic study aimed at identifying and characterizing new EHSP candidates using a sample of 124 gamma-ray blazars selected from a wider catalogue based on their high synchrotron peak frequencies, low variability, and good broadband data coverage. The spectral energy distributions (SEDs) of the sample blazars are built using archival data complemented by Swift and Fermi-LAT observations, and modelled within a one-zone synchrotron/synchrotron-self-Compton (SSC) framework. We identify 66 new EHSP candidates, significantly expanding the known population. A clear correlation emerges between synchrotron peak frequency and the magnetic-to-kinetic energy density ratio, with the most extreme sources approaching equipartition. This indicates that as the synchrotron peak shifts to higher frequencies, the energy stored in the magnetic field becomes comparable to that of the relativistic electrons, suggesting a more balanced and energetically efficient jet environment in the most extreme blazars. Our results suggest that 9 high-synchrotron-peaked/EHSPs could be detected by the Cherenkov Telescope Array Observatory (CTAO) above 5σ significance (and 20 above 3σ) in 20-hour observations, implying that while the overall detection rate remains modest, a subset of these sources is within reach of next-generation very-high-energy gamma-ray instruments.
Light primordial black holes (PBHs) may have originated in the early Universe and could contribute to the dark matter in the Universe. Their Hawking evaporation into particles could eventually lead to the production of antinuclei, which propagate and arrive at Earth as cosmic rays with a flux peaked at GeV energies. In 2505.04692 we revisit the antiproton and antideuteron signatures from PBH evaporation, relying on a lognormal PBH mass distribution, state-of-the-art propagation models, and an improved coalescence model for fusion into antideuterons. Our predictions are then compared with AMS-02 data on the antiproton flux. We find that the AMS-02 antiproton data severely constrain the Galactic PBH density, setting bounds that depend significantly on the parameters of the lognormal mass distribution and that are comparable to or slightly stronger than bounds set from diverse messengers. We also discuss prospects for future detection of antideuterons. Given the bounds from AMS-02 antiproton data, and since the secondary contribution is subdominant, we predict that antideuterons measured by AMS-02 or GAPS would clearly be a signal of new physics, although only part of it could be explained by PBH evaporation.
Neutrons from ($\alpha$,n) reactions are essential for astrophysics, dark matter experiments, and nuclear material interrogation, yet available cross-section and yield data are limited and often uncertain [1]. To improve this situation, the Spanish nuclear physics community has established the MANY Collaboration (Measurement of Alpha Neutron Yields).
At CNA [2], the 3 MV tandem accelerator provides alpha beams up to 9 MeV, either in continuous mode (maximum 500 nA with the ALPHATROSS source [3]) for activation and neutron counting, or in pulsed mode (2% duty cycle) for time-of-flight [4].
Recent and planned upgrades at the neutron beam line CNA-HiSPANoS aim to expand its capabilities for ($\alpha$,n) measurements: a new buncher system has been installed, improving the structure of the pulsed beam and allowing for more precise time-of-flight measurements; a more intense He$^{++}$ ion source (NEC-TORVIS [5]) has been acquired and is expected to increase the beam intensity by an order of magnitude; and the purchase of a new array of fast neutron detectors (EJ-309) is underway in order to improve detection efficiency and angular coverage.
In parallel, an innovative technique has been developed for beam current determination in non-conductive materials and/or some gaseous elements, using aluminum alloys. By employing an AlN alloy target, the ($\alpha$,n) reaction on $^{14}$N, in principle a gaseous target that would pose a serious difficulty, is measured relative to the well-known $^{27}$Al($\alpha$,n)$^{30}$P reaction, enabling a reliable Thick Target Yield (TTY) measurement by means of activation.
This contribution will include the definitive results for the measurement of the $^{27}$Al($\alpha$,n)$^{30}$P reaction by activation, as well as preliminary results for the $^{14}$N($\alpha$,n)$^{17}$F measurements and the commissioning of the new bunching system for $\alpha$ beams. Finally, the planned upgrades of the ion source and the neutron detectors will be presented.
[1] D. Cano-Ott et al., Review of Neutron Yield from ($\alpha$,n) Reactions: Data, Methods, and Prospects, https://arxiv.org/abs/2405.07952
[2] J. Gómez-Camacho, J. García López, C. Guerrero et al., Research facilities and highlights at the Centro Nacional de Aceleradores (CNA), Eur. Phys. J. Plus 136, 273 (2021). https://doi.org/10.1140/epjp/s13360-021-01253-x
[3] NEC Alphatross Source RF-Charge Exchange Ion Source https://www.pelletron.com/
[4] M.A. Millán-Callado et al., Continuous and pulsed fast neutron beams at the CNA HiSPANoS facility. Rad. Phys. & Chem. 217 (2024) 111464
[5] NEC Toroidal Volume Ion Source https://www.pelletron.com/
Neutrons produced in α-induced reactions play important roles in fields such as nuclear astrophysics, neutron background in underground laboratories, fission and fusion reactors, and non-destructive assays for non-proliferation and spent fuel management applications. However, most of the available data on (α,n) reactions was measured decades ago, is incomplete and/or presents large discrepancies not compatible with the declared uncertainties. Thus, new measurements addressing current needs are required [1, 2]. To that end, the Measurement of Alpha Neutron Yields and spectra (MANY) collaboration was formed.
This contribution reports on the commissioning of the modular neutron counter miniBELEN at the Centro Nacional de Aceleradores (CNA). This detector has already been successfully used to measure the 27Al(α,n)30P reaction yields and cross-sections at the Centro de Micro-Análisis de Materiales (CMAM). The performance of the system at this facility and its readiness for (α,n) measurement campaigns will be described.
In addition, we present first results on auxiliary detectors developed to complement miniBELEN. The recently characterized Ymon detector provides neutron flux measurements with angular sensitivity and exhibits a flat response in the ~1 keV to 8 MeV energy range. We also introduce a highly segmented LaCl3 array under development, designed to be embedded in future neutron counters to enable hybrid neutron–gamma detection. Beyond α–γ studies, this capability might allow the extraction of partial cross sections feeding different excited states in (α,n) reactions, thereby enhancing the experimental reach of MANY.
[1] S. S. Westerdale et al., IAEA technical meeting INDC(NDS)-0836 (2021)
[2] A. Junghans et al., IAEA technical meeting INDC(NDS)-0894 (2023)
[3] N. Mont-Geli et al., EPJ Web of Conferences 284 (2023) 06004
AGATA (Advanced GAmma Tracking Array) is a European collaboration devoted to developing a next-generation γ-ray spectrometer for nuclear structure research at facilities employing both radioactive and stable ion beams. Once completed, AGATA will consist of 180 high-purity germanium (HPGe) detectors, arranged in triple cluster structures (ATCs), providing an overall solid-angle coverage of approximately 82% of 4π. A distinctive feature of AGATA detectors is their electrical segmentation design: each detector comprises 36 isolated segments plus a central contact (core). This segmentation enables γ-ray tracking, i.e., the reconstruction of the γ-ray interaction sequence within the crystal. Tracking significantly reduces background without the need for anti-Compton shielding and improves Doppler correction, thereby enhancing both system efficiency and energy resolution.

The interaction positions and the energies deposited by the photons within the crystal must be determined through pulse shape analysis (PSA) before tracking can be performed. Currently, PSA involves comparing the detector's experimental pulse shapes with simulated responses. Within the collaboration, four research groups are dedicated to the experimental characterization of AGATA detectors, including the LRI-D laboratory at the University of Salamanca (USAL). At USAL, the characterization system under development is based on the SALSA method (SAlamanca Lyso-Based Scanning Array), which enables 3D scanning of AGATA detectors. This R&D technique employs a position-sensitive γ-camera with 256 pixels and an actively collimated γ-ray beam. After performing scans in two different measurement configurations and comparing the resulting electrical pulse shapes (PSC), an experimental database is generated which correlates interaction positions within the crystal to the corresponding electrical responses of the AGATA detector. The University of Salamanca is currently characterizing the AGATA detector B003.
The experimental setup and the measurements in the two different configurations have been completed, and ongoing efforts focus on developing data-processing algorithms and software. Completed stages include event matching between AGATA and the γ-camera, core pulse-shape comparisons, and filtering and signal treatment of the triggered and neighbouring segments, according to various criteria. Upcoming stages involve analysing transient signals induced in neighbouring segments and reconstructing the γ-ray trajectories to accurately determine the interaction positions within the detector.
The complex nature of the nucleon-nucleon interaction allows for spherical, oblate and prolate deformations to appear at similar energies within the same nucleus. This phenomenon, known as shape coexistence, is widespread across the nuclear chart and plays a crucial role in understanding nuclear structure [1].
In our study we complement shell-model calculations [2] with beyond-mean-field Hartree-Fock-Bogoliubov techniques [3] to shed light on the rich coexistence of differently deformed structures. We infer shape coexistence from multiple observables: quadrupole moments, $E2$ transitions, collective wavefunctions, and shape invariants. The combination of all these hints allows us to understand the complexities of shape coexistence and the notion of nuclear shape itself.
Particularly, the shape invariants provide a model-independent framework to quantify the deformation parameters and their fluctuations [4], which are significant in most nuclei. We analyze how nuclear shapes evolve across the band using an extended sum-rule method to compute the shape invariants for $J\neq0$ states. This method sheds light on long-standing questions, such as whether doubly-magic nuclei are truly spherical, whether rigid triaxial nuclei exist, and how axially symmetric prolate and oblate nuclei really are.
For instance, $^{28}$Si presents a competition between the oblate ground state and the excited prolate rotational band ($6.5$ MeV), with a possible superdeformed structure at higher energies ($\sim10$-$20$ MeV). We find that $sdpf$ excitations are needed to correctly describe $^{28}$Si and that superdeformed shapes appear at 18-20 MeV [5].
The doubly-magic nucleus $^{40}$Ca also presents shape coexistence between the spherical ground state, the normal-deformed rotational band ($3.4$ MeV) and the superdeformed rotational band ($5.2$ MeV) [6]. We analyze the fluctuations of the deformation parameters associated with these states.
Additionally, we study the impact of differences in shapes of the initial and final nuclei for double-beta decay [7], including triaxiality. We find that larger deformation differences between the initial and final states lead to smaller nuclear matrix elements.
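For reference, the shape invariants follow the sum-rule approach of [4] (conventions hedged here; signs and normalizations vary in the literature): the lowest rotational invariants of the quadrupole operator $\hat{Q}_2$,

$$q_2 = \langle \Psi | \hat{Q}_2 \cdot \hat{Q}_2 | \Psi \rangle \propto \beta^2, \qquad q_3 = \langle \Psi | [\hat{Q}_2 \times \hat{Q}_2 \times \hat{Q}_2]^{(0)} | \Psi \rangle, \qquad \cos 3\gamma \propto \frac{q_3}{q_2^{3/2}},$$

determine effective deformation parameters $(\beta,\gamma)$ for any state, including $J\neq0$ band members, while higher moments such as $\sigma(q_2) = \sqrt{\langle (\hat{Q}_2 \cdot \hat{Q}_2)^2 \rangle - q_2^2}$ quantify the fluctuations around them.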
[1] P. E. Garrett, M. Zielińska, and E. Clément, Prog. Part. Nucl. Phys. 124, 103931 (2022).
[2] E. Caurier and F. Nowacki, Acta Phys. Pol. B 30, 705 (1999).
[3] B. Bally, A. Sánchez-Fernández, and T. R. Rodríguez, Eur. Phys. J. A 57, 69 (2021).
[4] A. Poves, F. Nowacki, and Y. Alhassid, Phys. Rev. C 101, 054307 (2020).
[5] D. Frycz, J. Menéndez, A. Rios, B. Bally, T. R. Rodríguez, and A. M. Romero, Phys. Rev. C 110, 054326 (2024).
[6] E. Caurier, J. Menéndez, F. Nowacki, and A. Poves, Phys. Rev. C 75, 054317 (2007).
[7] T. R. Rodríguez and G. Martínez-Pinedo, Phys. Rev. Lett. 105, 252503 (2010).
The WASA-FRS HypHI experiment focuses on the study of light hypernuclei by means of heavy-ion-induced reactions in 6Li collisions with 12C at 1.96 GeV/u. It is part of the WASA-FRS experimental campaign, together with the eta-prime experiment [1]. The campaign exploits the distinctive combination of the high-resolution FRagment Separator (FRS) [2] and the high-acceptance detector system WASA [3]. The experiment was successfully conducted at GSI-FAIR in Germany in March 2022 as part of the FAIR Phase-0 physics program, within the Super-FRS Experiment Collaboration. Its primary objectives are twofold: to shed light on the hypertriton puzzle [4] and to investigate the existence of the previously proposed nnΛ bound state [5]. The data from the experiment are currently under analysis.
Part of the data analysis is to provide precise ion optics for the measurement of the fragments originating from the mesonic weak decay of the hypernuclei of interest. The reconstruction of the fragment ion optics is based on the FRS optics calibration runs. We have proposed to implement machine learning models and neural networks to represent the ion optics of the FRS: whereas the current approach involves solving the equations of motion of particles in non-ideal magnetic fields, which requires approximations in the calculations, a data-driven model allows us to obtain accurate results with potentially better momentum and angular resolution.
Another important contribution to the analysis is the correct identification of signal versus background in the experimental data. For this purpose, we present an analysis using ML techniques as an alternative to conventional selection cuts. The interest of this new approach lies in the fact that the models capture the physics behind the data, yielding cuts that are more accurate and more consistent with the experiment.
In this presentation, we will show the current status of this R&D on two fronts: the machine learning model of the ion optics, with the prospect of inferring the track parameters of the fragments from the calibration data recorded during the 2022 WASA-FRS experimental campaign, and the enhancement of the signal-to-background ratio with ML. For the ion-optics part, our model-selection optimization follows this approach: we use AutoML environments [6] to determine the best pipeline for our data; once identified, this optimized pipeline is implemented as a PyTorch model. For the signal-to-background enhancement, we make use of AutoML libraries such as AutoGluon [7] to identify the H3Λ hypernuclei present in the experimental data.
The results of this study demonstrate a robust reconstruction of the track angles at the FRS mid-focal plane, with an improvement of up to ~40%: resolutions of 0.65 mrad and 0.46 mrad were achieved for the horizontal and vertical track angles, respectively. In addition, the reconstruction of the magnetic rigidity at the final focal plane attained a resolution Δp/p of 5×10⁻⁴. These results demonstrate that a data-driven model of non-linear ion optics is feasible. We also observed that the full model can be trained very quickly, paving the way for online training during data collection at the FRS. This capability will enable more accurate real-time analysis of fragment identification and improve the quality of the exotic beams obtained from the fragment separator.
In addition, the identification of signal events in the experimental data has been carried out, which allows a precise analysis of the properties of the H3Λ, such as the hypernuclear lifetime, directly from the experimental data.
[1] Y.K. Tanaka et al., J. Phys. Conf. Ser. 1643 (2020) 012181.
[2] H. Geissel et al., Nucl. Instr. and Meth. B 70 (1992) 286-297.
[3] C. Bargholtz et al., Nucl. Instr. and Meth. A 594 (2008) 339-350.
[4] T.R. Saito et al., Nature Reviews Physics 3 (2021) 803-813.
[5] C. Rappold et al., Phys. Rev. C 88 (2013) 041001.
[6] M. Feurer et al., JMLR 23 261 (2022) 1-61.
[7] N. Erickson et al., 7th ICML Workshop on AutoML (2020).
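The data-driven ion-optics idea can be illustrated with a minimal toy model (the transfer-map coefficients below are invented; the actual analysis uses AutoML-selected pipelines and PyTorch models trained on FRS calibration data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: position x0 (cm) and angle a0 (mrad) at an
# initial plane; a toy non-linear "optics" maps them to the final plane.
n = 2000
x0 = rng.uniform(-3, 3, n)
a0 = rng.uniform(-15, 15, n)
x1 = 1.2 * x0 + 0.05 * a0 + 0.02 * x0**2 - 0.001 * x0 * a0  # final position

# Data-driven model: fit a 2nd-order transfer map (polynomial features plus
# least squares), i.e. learn the map coefficients from calibration data
# instead of solving the equations of motion in a non-ideal magnetic field.
def features(x, a):
    return np.column_stack([np.ones_like(x), x, a, x**2, x * a, a**2])

coef_x, *_ = np.linalg.lstsq(features(x0, a0), x1, rcond=None)

# Inference on unseen "events": residuals are tiny for this noiseless toy.
xt = rng.uniform(-3, 3, 200)
at = rng.uniform(-15, 15, 200)
pred_x = features(xt, at) @ coef_x
true_x = 1.2 * xt + 0.05 * at + 0.02 * xt**2 - 0.001 * xt * at
resid = float(np.max(np.abs(pred_x - true_x)))
print(f"max |residual| = {resid:.2e}")
```

Real FRS optics are of course not exactly polynomial, which is where neural networks and AutoML model selection enter; the fitting principle, however, is the same.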
The High Efficiency Neutron Spectrometry Array (HENSA) project focuses on the development and scientific application of high-efficiency neutron spectrometers [1], with applications in underground laboratories, rare-event experiments, cosmic-ray neutron studies, space weather research, and environmental dosimetry. The detection principle of HENSA is based on the Bonner Spheres System (BSS) [2], but incorporates a topological modification in detector geometry, achieving up to a tenfold increase in overall detection efficiency compared to standard BSS [3]. The extended-energy-range version of HENSA is sensitive to neutrons from thermal energies up to 10 GeV, enabling full-spectrum measurements of cosmic-ray neutrons. Its high efficiency and wide energy sensitivity allow the determination of the neutron spectrum and the ambient neutron dose equivalent within 30–60-minute intervals, complementing ground data from the Neutron Monitor Network [4]. This capability enables near real-time analysis of spectral fluctuations throughout the solar cycle and during high-intensity solar events, such as Ground Level Enhancements (GLEs) and Forbush Decreases (FDs).
In 2020, a HENSA detector was deployed in a measurement campaign to study the cosmic-ray neutron spectrum under quiet solar conditions at the beginning of Solar Cycle 25. This campaign enabled the mapping of cosmic-ray neutrons across magnetic rigidities from 5.5 to 8.5 GV and altitudes from sea level to 3000 m, complementing previous studies [5]. Building on these results, a new spectrometer, HENSA++, has been developed with optimized energy resolution for cosmic-ray neutron studies. Since 2024, HENSA++ has been undergoing commissioning, first in the city of Valencia (sea level, Rc = 7.5 GV) and later at the Observatorio Astrofísico de Javalambre (OAJ) in Teruel, Spain (1957 m above sea level, Rc = 7.07 GV) [6].
In this talk, we present an overview of the HENSA project for cosmic-ray neutron studies, including results from the 2020 measurement campaign and preliminary findings from the commissioning phase. Finally, we discuss the status and future perspectives for continuous cosmic-ray neutron monitoring with HENSA++ during the second half of Solar Cycle 25.
References
[1] https://www.hensaproject.org/
[2] D.J. Thomas and A.V. Alevra, Nucl. Instrum. Meth. A 476 (2002) 12–20.
[3] B. Wiegel and A.V. Alevra, Nucl. Instrum. Meth. A 476 (2002) 36–41.
[4] https://www.nmdb.eu/
[5] M. S. Gordon et al., IEEE Trans. Nucl. Sci. 51(6) (2004).
[6] https://www.cefca.es/observatorio/descripcion
Conventional X-ray radiography is the current standard for industrial non-destructive testing of metallic parts. However, limitations arise from the low penetration depth of X-rays for certain materials or thicknesses. This is the case for metal additive manufacturing, where quality control and internal inspection are critical in most applications. In this contribution we report the use of a $^{137}$Cs gamma source ($\sim$180 MBq) and a $^{60}$Co gamma irradiator ($\sim$47 TBq) for gamma radiography and tomography of steel samples produced by additive manufacturing. A simple experimental device composed of a scintillating screen and a frame-based camera was used to digitally capture the sample images. Internal details were observed with a spatial resolution of 1–3 mm. The same imaging device was adapted and used for neutron radiography with thermal neutrons from a particle accelerator, with similar results. This demonstrates the flexibility of the device across source types as a portable and cost-effective system for non-destructive testing.
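A rough sketch of why 137Cs gamma rays can penetrate steel thicknesses opaque to conventional X-rays is given by the Beer-Lambert law (the mu/rho value below is an indicative literature figure for iron at 662 keV, not a measured one, and the geometry is idealized narrow-beam):

```python
import math

# Approximate linear attenuation coefficient of steel for 662 keV photons:
# mu/rho ~ 0.074 cm^2/g (indicative value for iron), rho = 7.87 g/cm^3.
mu = 0.074 * 7.87  # 1/cm

def transmission(thickness_cm):
    """Beer-Lambert narrow-beam transmission through steel."""
    return math.exp(-mu * thickness_cm)

# Contrast on the scintillating screen produced by an internal void:
t_solid = transmission(5.0)        # 5 cm of solid steel
t_void = transmission(5.0 - 0.3)   # same path containing a 3 mm pore
contrast = t_void / t_solid        # = exp(mu * 0.3)
print(f"T(5 cm) = {t_solid:.3f}, contrast from a 3 mm void = {contrast:.3f}")
```

A few percent of the 662 keV beam still crosses 5 cm of steel, and a millimetre-scale void changes the transmitted intensity by tens of percent, which is consistent with the 1–3 mm resolution reported above.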
Introduction:
Neutron dosimetry still faces major challenges, especially in the high-energy radiation fields found in facilities such as particle accelerators and hadron therapy centers, where neutrons are generated as secondary particles. The difficulty intensifies in pulsed radiation structures, typical of advanced treatments such as FLASH therapy or of the pulsed neutron fields generated by synchrocyclotron machines. Another critical issue is the limited portability of many of the active neutron dosimetry devices currently used in radioactive environments. To overcome these shortcomings, the Spanish LINrem Project aims to develop innovative solutions for neutron area dosimetry and spectrometry, addressing the key limitations that currently affect this field.
Materials and methods:
The LINrem Project comprises two patented active dosimeters: the LINrem, with sensitivity up to 10 MeV, and the LINremext, designed for an extended energy range [1,2,3]. Their innovative design enables their use in very high dose environments, in both continuous and pulsed radiation fields.
The LINpass is the passive version of the LINrem, based on thermoluminescent detectors (TLDs) enriched in 6Li and 7Li. The LINpass can provide solutions for areas where the ionizing radiation level is too high and active instruments saturate, such as in pulsed-beam/FLASH therapy. Furthermore, it can be used for radiation protection purposes.
The LINrem family also includes a neutron spectrometer called NESTA, a Nested Neutron Spectrometry Array. It is composed of 16 matrices, with a response from thermal neutrons up to 10 GeV. The innovative concept of blocks that nest one inside the other makes the device easy to transport. All these instruments were tested in high-energy neutron reference fields (CERF at CERN). In addition, the LINrem active ambient neutron dosimeters were employed to measure the secondary neutrons generated in hadron therapy facilities under near-clinical conditions (IBA Proteus Plus and IBA Proteus One).
Results and Conclusions:
The LINrem dosimeters showed reliable performance in validation tests at reference facilities (within 10% deviation) and excellent agreement with commercial devices, such as WENDI-II, in hadron therapy environments. We present a summary of the project milestones, showcasing the key findings from validation and intercomparison exercises conducted under challenging conditions (fast, high-energy and pulsed neutrons).
References:
[1] A. Tarifeño-Saldivia et al., "Ambient dosimetry in pulsed neutron fields with LINrem detectors", Radiation Physics and Chemistry 224 (2024) 112101.
[2] A. Tarifeño-Saldivia and F. Calviño Tavares, "Neutron Dosimeter", European Patent EP4097510A1; US Patent US-12123991-B2; Japanese Patent JP7601420B2 (2024).
[3] A. Tarifeño-Saldivia et al., "Calibration methodology for proportional counters applied to yield measurements of a neutron burst", Rev. Sci. Instrum. 85 (2014) 013502.
Over the past few years, Low-Gain Avalanche Detectors (LGADs), silicon detectors with intrinsic amplification, have demonstrated excellent timing performance, showing great potential for the 4D tracking of high-energy charged particles. Carbon co-doping is a key factor for enhancing LGAD performance in harsh radiation environments. This work presents a broad pre-irradiation characterization of the latest carbon-co-implanted (or carbonated) LGADs fabricated at IMB-CNM. The results indicate that the addition of carbon reduces the nominal gain of the devices compared with non-carbonated detectors. Furthermore, a comprehensive study is presented on how carbon co-implantation can either enhance or suppress the diffusion of the multiplication layer during LGAD fabrication, depending on the device structure and fabrication parameters.
External trigger systems are essential in ion beam facilities because they enable precise synchronization and detection with other experimental or diagnostic equipment. This synchronization is crucial for achieving reproducible measurements and improving the temporal resolution of some experiments [1]. In this contribution, we report on the development and commissioning of an external trigger device based on an ultra-thin EJ-214 plastic scintillator [2] at the ion beam microprobe of the National Accelerator Center [3] (CNA, Seville). Unlike conventional self-trigger modes, this setup provides enhanced temporal stability and enables both single-ion recognition and time-of-flight applications. The thickness and uniformity of the scintillator were assessed using Rutherford Backscattering Spectrometry, which revealed deviations from the nominal design. Although the reduced thickness lowered the output signal amplitude, it also decreased energy straggling, helping to preserve beam quality and enabling more precise timing analyses. Experimental tests confirmed a strong dependence of the detector response on the ion impact position, and transmission studies showed that less than 2% of protons in the 2–3 MeV energy range passed through the collimator slits, highlighting the device's suitability for high-current conditions thanks to the radiation tolerance of plastic scintillators. These results establish the system as a reliable trigger for techniques requiring high temporal resolution, such as Time-Resolved Ion Beam Induced Charge experiments, and as a diagnostic tool for microbeam applications.
[1] Magalhaes-Martins, P.; Dal-Bello, R.; Seimetz, M.; Hermann, G.; Kihm, T.; Seco, J. Front. Phys. 2020, 8:169.
[2] Seimetz, M.; Bellido, P.; Soriano, A.; López, J.G.; Jiménez Ramos, M.C.; Fernández, B.; Conde, P.; Crespo, E.; González, A.J.; Hernández, L.; et al. IEEE Trans. Nucl. Sci. 2015, 62, 3216-3224.
[3] Lopez, J.G.; Ager, F.J.; Rank, M.B.; Madrigal, M.; Ontalba, M.A.; Respaldiza, M.A.; Ynsa, M.D. Nucl. Instrum. Methods B 2000, 161-163, 1137-1142.
Compton imaging is a promising technique for Prompt Gamma (PG) imaging in range verification during hadron therapy (HT). In neutron monitoring, however, most existing systems register only integral off-field neutron fluence values, without providing information on the spatial origin. Dual neutron-gamma imaging is also of significant interest for applications in nuclear safety and security. To address these challenges, we have designed and patented an innovative dual neutron and γ-ray imaging system, called GN-Vision, which aims to overcome the current limitations in these fields. The device is compact, portable, and capable of simultaneously measuring and imaging γ-rays and slow neutrons, from thermal energies up to 100 eV.
GN-Vision builds on the design of the previously developed i-TED detector [1], an array of Compton cameras based on large monolithic position-sensitive LaCl₃(Ce) crystals originally conceived for neutron-capture experiments at CERN [2]. The applicability of i-TED has already been demonstrated for range verification in ion-beam therapy [3,4,5] and for imaging-based dosimetry in Boron Neutron Capture Therapy (BNCT) [6,7]. In addition to these features, GN-Vision incorporates a neutron–gamma discriminating detector and a passive collimator to enable neutron imaging while preserving Compton γ-ray imaging capabilities.
The dual imaging functionality of GN-Vision was first conceptually demonstrated through Monte Carlo simulations [8]. More recently, we have concentrated our research on developing and validating the neutron imaging capability with a CLYC-based neutron-gamma discrimination detector [9], and on evaluating and optimizing the performance of the full prototype through detailed simulations [10]. This contribution will summarize the latest experimental advances in this project, with particular emphasis on the development and characterization of the first demonstrator integrating both neutron and γ-ray imaging in a single device with compact electronics. Moreover, this contribution will present the results of the first field tests performed in the context of BNCT, carried out at ILL [11] and at the research reactor in Pavia. Finally, we will outline future plans for pilot experiments to validate the system in clinically and technologically relevant scenarios.
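The γ-ray imaging modality of i-TED and GN-Vision rests on the standard two-plane Compton kinematics, which can be sketched as follows (a generic textbook relation, not the actual GN-Vision reconstruction code; the example energies are illustrative):

```python
import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e1_keV, e2_keV):
    """Half-opening angle (deg) of the Compton cone for a photon depositing
    e1_keV in the scatter plane and being fully absorbed (e2_keV) in the
    absorber plane, from cos(theta) = 1 - me*c^2 * (1/E2 - 1/(E1+E2))."""
    cos_theta = 1.0 - ME_C2 * (1.0 / e2_keV - 1.0 / (e1_keV + e2_keV))
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# Illustrative event: a 478 keV gamma ray (boron neutron capture) depositing
# 150 keV in the scatterer and 328 keV in the absorber.
angle = compton_cone_angle(150.0, 328.0)
print(f"cone half-angle = {angle:.1f} deg")
```

Each coincidence event constrains the source to lie on such a cone; intersecting many cones in 3D yields the γ-ray image, while the collimated neutron-sensitive layer provides the complementary slow-neutron image.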
References
[1] C. Domingo-Pardo et al., Nucl. Phys. A 851, 78-86 (2016)
[2] V. Babiano-Suárez et al., Eur. Phys. J. A 57, 197 (2021)
[3] J. Lerendegui-Marco et al., Sci Rep 12, 2735 (2022)
[4] J. Balibrea-Correa et al., Eur. Phys. J. Plus 137, 1258 (2022)
[5] J. Balibrea-Correa et al., Eur. Phys. J. Plus 140, 870 (2025)
[6] J. Lerendegui-Marco et al., App. Rad. Isot. 225, 112009 (2025)
[7] P. Torres-Sánchez et al., App. Rad. Isot. 217, 111649 (2025)
[8] J. Lerendegui-Marco et al., EPJ Techn Instrum 11, 2 (2024)
[9] J. Lerendegui-Marco et al., Nucl. Inst. Methods A 1079, 170594 (2025)
[10] J. Lerendegui-Marco et al., App. Rad. Isot. 224, 111826 (2025)
[11] A. Sanchis-Moltó et al., EPJ Web of Conferences, Proceedings ANIMMA (submitted) (2025)
Boron Neutron Capture Therapy (BNCT) is an experimental radiotherapy technique in which a boron-carrying drug, 10B-BPA, which selectively accumulates in cancer cells, is administered to the patient. This therapy relies on the large boron neutron-capture cross-section to deliver a targeted dose from neutron irradiation. With the development of accelerator-based technologies, which enable the production of high-quality neutron beams in clinical settings, BNCT has demonstrated significant potential [1].
An unresolved problem in BNCT is real-time dosimetry, which aims to determine the dose delivered to the patient's tissues during the treatment. The current method uses simple extrapolations from previous PET scans together with online monitoring of the boron concentration in blood [2]. Since neutron captures in boron produce 478 keV gamma rays, this radiation could be used for real-time dose monitoring. To date, the main challenges are coping with very intense radiation fields that generate count rates beyond the reach of most detectors; achieving enough sensitivity to image the boron in the patient on top of the strong background induced by the harsh neutron and gamma-ray fields generated during the treatments; and attaining the required spatial resolution while moving towards true online capability during treatment.
Therefore, a detector with low neutron sensitivity and high count-rate capability could be ideal for dosimetry in these treatments. The i-TED Compton camera array, developed by the Gamma-Ray and Neutron Spectroscopy group at IFIC (CSIC-UV) within the HYMNS-ERC project for nuclear physics research, has expanded into medical physics through ion-range monitoring in HT [3] and is now aiming at BNCT [4,5]. Its low neutron sensitivity, high efficiency, and other technical features make i-TED especially well-suited for this task.
This contribution will present the developments implemented in i-TED for dose monitoring via Compton imaging, including the use of segmented CLLBC crystals that could allow operation under the very high count rates produced during treatment. We will discuss the characterization of a CLLBC crystal and the integration of the first segmented crystal into an i-TED module. Finally, we will outline the future plans for i-TED as a dosimetry system, using 3D image reconstruction with GPU acceleration to move towards real-time operation.
References
[1] K. Hirose et al., “Boron neutron capture therapy using cyclotron-based epithermal neutron source and borofalan (10B) for recurrent or locally advanced head and neck cancer (JHN002): An open-label phase II trial”, Rad. & Onc. Vol 155, pp. 182-187, (2021)
[2] International Atomic Energy Agency. Advances in Boron Neutron Capture Therapy. Non-serial Publications. IAEA, Vienna, 2023.
[3] J. Balibrea-Correa et al., “Hybrid compton-PET imaging for ion-range verification: a preclinical study for proton, helium, and carbon therapy at HIT”, The Eur. Phys. Jour. Plus, Volume 140, 870 (2025)
[4] P. Torres-Sánchez et al., “The potential of the i-TED Compton camera array for real-time boron imaging and determination during treatments in Boron Neutron Capture Therapy”, App. Radiat. Isot. 217, 111649 (2025)
[5] Lerendegui-Marco, J., et al. “First pilot tests of Compton imaging and boron concentration measurements in BNCT using i-TED”, App. Radiat. Isot. 225, 112009 (2025)
Precise radionuclide measurements are crucial for absolute dating in paleoclimatic studies. Mazinger, a very low background, high-efficiency γ-ray spectrometer, has been upgraded with two anti-muon veto detectors, and its acquisition electronics have been reconfigured, both to improve the detection limits and to handle the higher event rates coming from the veto system. The upgrade doubled Mazinger's figure of merit and significantly reduced the background level. These improvements enhance Mazinger's performance for geochronological applications. Its application to the 210Pb dating of a low-activity sediment core from the Sobrado dos Monxes lagoon (Galicia) produced an age model with uncertainties under 5%, in good agreement with the 137Cs profile.
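As an illustration of the 210Pb chronology that such spectrometry enables, the simple constant-initial-concentration age relation can be sketched as follows (the abstract does not state which age model was applied, and the activity values below are invented for the example):

```python
import math

# 210Pb decay constant (1/yr) from its half-life of 22.3 yr.
LAMBDA_PB210 = math.log(2) / 22.3

def cic_age(c_surface, c_depth):
    """Layer age under the constant-initial-concentration (CIC) model:
    t = (1/lambda) * ln(C0 / Cz), with excess 210Pb activities in Bq/kg."""
    return math.log(c_surface / c_depth) / LAMBDA_PB210

# Invented example: surface activity 120 Bq/kg, deeper layer 30 Bq/kg.
t = cic_age(120.0, 30.0)
print(f"age = {t:.0f} yr")
```

The small uncertainties quoted above feed directly into this logarithm, which is why low-background, high-efficiency activity measurements translate into tight age models.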
The $^{14}$C dating facility at the University of Salamanca, based on the compact MICADAS (MIni CArbon DAting System) [1], has been in operation for over three years. This work presents a detailed status report on the facility's performance, sample preparation methodologies, and statistical validation of results.
A wide variety of materials have been processed, including collagen, corals, wood, charcoal and sediments. For sediments, different pretreatment methods are commonly used for ¹⁴C dating, such as acid-alkali-acid (AAA), acid dissolution, and carbonate removal through fumigation [2]. In our facility, we have employed the fumigation method for sediment pretreatment, which has been proven effective for sample decontamination while preserving the integrity of the organic fraction. We present results obtained at different stages of method optimization, ensuring the highest accuracy and reproducibility. We describe the rigorous procedures to perform this pretreatment, including sediment homogenization to ensure consistent measurements. To ensure reliability, we perform three replicates of each sample, allowing us to detect potential issues such as poor homogenization, which is particularly challenging when dating organic carbon in sediments. Statistical analysis of replicates confirms high reproducibility, with deviations well within expected uncertainties, demonstrating not only the robustness of our methodology but also the precision and effectiveness of its implementation in our laboratory.
Overall, the facility has achieved an average background value of $42560 \pm 4060$ years B.P., reaching up to $50000$ years after ion-source cleaning. For standard samples, the facility has achieved average F14C values of $1.3407 \pm 0.0026$ for OxII, $0.2302 \pm 0.0016$ for IAEA C5, and $0.0031 \pm 0.0013$ for IAEA C9. Furthermore, we present results from the GIRI intercomparison samples [3], and collagen samples of known ages have been successfully dated, reinforcing the system's reliability. In conclusion, this contribution provides updated technical specifications of the AMS system, details of our quality-control measures, and results of the optimised fumigation method at our facility. These achievements underscore the utility of this AMS facility for research in archaeology, geology, and climate science.
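For context, F14C values translate into conventional radiocarbon ages through the standard Libby relation, so the IAEA C9 mean value quoted above corresponds to roughly 46 kyr BP:

```python
import math

def conventional_age(f14c):
    """Conventional radiocarbon age (years BP) from the fraction modern F14C,
    using the Libby mean life of 8033 yr: t = -8033 * ln(F14C)."""
    return -8033.0 * math.log(f14c)

# The average IAEA C9 value reported for the facility.
age = conventional_age(0.0031)
print(f"apparent age of F14C = 0.0031: {age:.0f} yr BP")
```

Note that this apparent age is not identical to the quoted average background of 42560 B.P., since the latter averages over many blank measurements rather than a single F14C value.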
REFERENCES
[1] Synal, H.-A., Stocker, M., & Wacker, L. (2007). MICADAS: A new compact radiocarbon AMS system. Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms, 259(1), 7-13.
[2] Komada, T., Anderson, M. R., & Dorfmeier, C. L. (2008). Carbonate removal from coastal sediments for the determination of organic carbon and its isotopic signatures, δ13C and Δ14C: comparison of fumigation and direct acidification methods. Limnology and Oceanography: Methods, 6(6), 254-262.
[3] Scott, E. M., Naysmith, P., & Dunbar, E. (2024). Preliminary results from the Glasgow International Radiocarbon Intercomparison. Radiocarbon, 66(5), 1302-1309. doi:10.1017/RDC.2023.64.