Device‐Algorithm Co‐Optimization for an On‐Chip Trainable Capacitor‐Based Synaptic Device with IGZO TFT and Retention‐Centric Tiki‐Taka Algorithm

Bibliographic Details
Main Authors: Won, Jongun, Kang, Jaehyeon, Hong, Sangjun, Han, Narae, Kang, Minseung, Park, Yeaji, Roh, Youngchae, Seo, Hyeong Jun, Joe, Changhoon, Cho, Ung, Kang, Minil, Um, Minseong, Lee, Kwang‐Hee, Yang, Jee‐Eun, Jung, Moonil, Lee, Hyung‐Min, Oh, Saeroonter, Kim, Sangwook, Kim, Sangbum
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10582414/
https://www.ncbi.nlm.nih.gov/pubmed/37559176
http://dx.doi.org/10.1002/advs.202303018
Description
Summary: Analog in‐memory computing synaptic devices are widely studied for efficient implementation of deep learning. However, synaptic devices based on resistive memory have difficulty implementing on‐chip training due to the lack of means to control the amount of resistance change and due to large device variations. To overcome these shortcomings, silicon complementary metal‐oxide semiconductor (Si‐CMOS) and capacitor‐based charge‐storage synapses have been proposed, but it is difficult to obtain sufficient retention time due to Si‐CMOS leakage currents, resulting in degraded training accuracy. Here, a novel 6T1C synaptic device is proposed using only n‐type indium gallium zinc oxide thin‐film transistors (IGZO TFTs) with low leakage current and a capacitor, enabling not only linear and symmetric weight updates but also sufficient retention time and parallel on‐chip training operations. In addition, an efficient and realistic training algorithm is proposed to compensate for the remaining device non‐idealities, such as drifting references and long‐term retention loss, demonstrating the importance of device‐algorithm co‐optimization.