Cuprate superconducting materials above liquid nitrogen temperature from machine learning

Bibliographic Details
Main Authors: Wang, Yuxue; Su, Tianhao; Cui, Yaning; Ma, Xianzhe; Zhou, Xue; Wang, Yin; Hu, Shunbo; Ren, Wei
Format: Online Article Text
Language: English
Published: The Royal Society of Chemistry, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10315706/
https://www.ncbi.nlm.nih.gov/pubmed/37404317
http://dx.doi.org/10.1039/d3ra02848h
Description
Summary: The superconductivity of cuprates remains a challenging topic in condensed matter physics, and the search for materials that superconduct above liquid nitrogen temperature, and even at room temperature, is of great significance for future applications. With the advent of artificial intelligence, data-driven research approaches have achieved excellent results in materials exploration. We investigated machine learning (ML) models trained separately on an element-symbol descriptor, atomic feature set 1 (AFS-1), and a physics-informed descriptor, atomic feature set 2 (AFS-2). An analysis of the manifold in the hidden layer of the deep neural network (DNN) showed that cuprates still offer the greatest potential as superconducting candidates. SHapley Additive exPlanations (SHAP) values show that the covalent bond length and the hole doping concentration are the crucial factors influencing the superconducting critical temperature (Tc). These findings align with current physical understanding and underscore the significance of these quantities. To improve the robustness and practicality of the model, both types of descriptor were used to train the DNN. We also introduced cost-sensitive learning, applied the model to predict samples in an external dataset, and designed a virtual high-throughput screening workflow.
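For readers unfamiliar with the SHAP analysis mentioned in the summary, the following is a minimal Python sketch of that style of feature attribution. Everything in it is an assumption for illustration: the data are synthetic, the feature names (bond_length, hole_doping, etc.) are hypothetical placeholders, an sklearn MLPRegressor stands in for the paper's DNN, and the model-agnostic KernelExplainer is used rather than whatever explainer the authors chose. It is not the paper's pipeline or descriptors.

import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a physics-informed descriptor table
# (rows: cuprate samples, columns: hypothetical features).
rng = np.random.default_rng(0)
feature_names = ["bond_length", "hole_doping", "ionic_radius", "electronegativity"]
X = rng.normal(size=(300, len(feature_names)))
# Synthetic Tc dominated by the first two features, mimicking the
# paper's reported finding for illustration only.
y = 40 * X[:, 0] + 25 * X[:, 1] + rng.normal(scale=2.0, size=300)

# Small fully connected regressor standing in for the paper's DNN.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# SHAP attributes each Tc prediction to the input features; averaging
# the absolute attributions over many samples ranks global importance.
background = X[:50]  # reference set for the explainer
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:100], nsamples=100)

importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.3f}")

On this toy data the printed ranking recovers the two dominant synthetic features, which is analogous to how the paper's SHAP analysis singles out covalent bond length and hole doping concentration as the strongest drivers of Tc.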