
BraggNN: fast X-ray Bragg peak analysis using deep learning


Bibliographic Details
Main Authors: Liu, Zhengchun; Sharma, Hemant; Park, Jun-Sang; Kenesei, Peter; Miceli, Antonino; Almer, Jonathan; Kettimuthu, Rajkumar; Foster, Ian
Format: Online Article Text
Language: English
Published: International Union of Crystallography, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8733885/
https://www.ncbi.nlm.nih.gov/pubmed/35059215
http://dx.doi.org/10.1107/S2052252521011258
Description
Summary: X-ray diffraction-based microscopy techniques such as high-energy diffraction microscopy (HEDM) rely on knowledge of the positions of diffraction peaks with high precision. These positions are typically computed by fitting the observed intensities in detector data to a theoretical peak shape such as pseudo-Voigt. As experiments become more complex and detector technologies evolve, the computational cost of such peak-shape fitting becomes the biggest hurdle to the rapid analysis required for real-time feedback during experiments. To this end, we propose BraggNN, a deep learning-based method that can determine peak positions much more rapidly than conventional pseudo-Voigt peak fitting. When applied to a test dataset, peak center-of-mass positions obtained from BraggNN deviate by less than 0.29 and 0.57 pixels (Euclidean distance) for 75% and 95% of the peaks, respectively, from positions obtained using conventional pseudo-Voigt fitting. When applied to a real experimental dataset, with grain positions from near-field HEDM reconstruction as ground truth, grain positions computed from BraggNN peak positions have 15% smaller errors than those calculated using pseudo-Voigt. Recent advances in deep learning implementations and special-purpose model-inference accelerators allow BraggNN to deliver enormous performance improvements relative to the conventional method, running, for example, more than 200 times faster on a consumer-class GPU card with out-of-the-box software.
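
To make the conventional baseline concrete, the sketch below fits an isotropic 2D pseudo-Voigt profile to a small synthetic detector patch and reports the Euclidean distance between the fitted and true peak centers, the same error metric quoted in the abstract. This is an illustrative sketch only, not the authors' code: the profile parameterization, the SciPy-based least-squares fit, and all numerical values are assumptions made for demonstration.

# Minimal sketch (assumed, not the paper's implementation): localize a Bragg
# peak center by least-squares fitting of an isotropic 2D pseudo-Voigt profile.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt_2d(coords, amp, x0, y0, gamma, eta, offset):
    """Isotropic 2D pseudo-Voigt: eta-weighted sum of Lorentzian and Gaussian,
    both with the same center (x0, y0) and FWHM gamma."""
    x, y = coords
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    gauss = np.exp(-4.0 * np.log(2.0) * r2 / gamma ** 2)
    lorentz = 1.0 / (1.0 + 4.0 * r2 / gamma ** 2)
    return offset + amp * (eta * lorentz + (1.0 - eta) * gauss)

# Synthetic 15x15 pixel patch with a peak near (7.3, 6.8) plus Gaussian noise
# (illustrative values only).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:15, 0:15]
truth = (1000.0, 7.3, 6.8, 3.0, 0.5, 10.0)
patch = pseudo_voigt_2d((xx, yy), *truth) + rng.normal(0.0, 5.0, xx.shape)

# Initial guess: brightest pixel as the center, moderate width.
iy, ix = np.unravel_index(np.argmax(patch), patch.shape)
p0 = (patch.max(), ix, iy, 2.0, 0.5, patch.min())
popt, _ = curve_fit(pseudo_voigt_2d,
                    (xx.ravel(), yy.ravel()), patch.ravel(), p0=p0)

# Euclidean distance between fitted and true centers (the deviation metric
# used above to compare BraggNN against pseudo-Voigt positions).
err = np.hypot(popt[1] - truth[1], popt[2] - truth[2])
print(f"fitted center: ({popt[1]:.3f}, {popt[2]:.3f}), error: {err:.3f} px")

In this framing, BraggNN replaces the iterative fit with a single forward pass of a trained network over the same kind of pixel patch, which is where the reported speedup comes from.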