Implementation of Long Short-Term Memory Neural Networks in High-Level Synthesis Targeting FPGAs

Field programmable gate arrays (FPGAs) offer flexibility in programmable systems, making them ideal for hardware implementations of machine learning algorithms. The effectiveness of machine learning (ML) methods has been demonstrated in particle physics computations, particularly in Large Hadron Collider (LHC) physics. Their use in FPGA hardware, however, has been restricted by complex implementation and significant resource demands. The need for FPGA resource estimation, as well as a means to simplify ML implementation on FPGAs, is addressed by HLS4ML [1] (High-Level Synthesis for Machine Learning), a framework that translates models from traditional open-source machine learning packages into HLS and thus maps neural networks directly onto an FPGA using HLS tools. By drastically reducing firmware development time compared with hand-written VHDL/Verilog implementations, HLS4ML increases accessibility across the user community. Building on an understanding of this framework, we implement a Long Short-Term Memory (LSTM) network targeting an FPGA: we take a Top Tagging LSTM model, translate it to HLS code, and use an HLS tool to obtain reports and analyze the overall latency and resources required by the model. This thesis explains the motivation for using LSTMs, their current state of development, and my work on incorporating this neural network into the HLS4ML framework.

Bibliographic Details
Main Author: Rao, Richa
Language: eng
Published: 2020
Subjects: Detectors and Experimental Techniques; Engineering
Online Access: http://cds.cern.ch/record/2729154
_version_ 1780966394046709760
author Rao, Richa
collection CERN
description Field programmable gate arrays (FPGAs) offer flexibility in programmable systems, making them ideal for hardware implementations of machine learning algorithms. The effectiveness of machine learning (ML) methods has been demonstrated in particle physics computations, particularly in Large Hadron Collider (LHC) physics. Their use in FPGA hardware, however, has been restricted by complex implementation and significant resource demands. The need for FPGA resource estimation, as well as a means to simplify ML implementation on FPGAs, is addressed by HLS4ML [1] (High-Level Synthesis for Machine Learning), a framework that translates models from traditional open-source machine learning packages into HLS and thus maps neural networks directly onto an FPGA using HLS tools. By drastically reducing firmware development time compared with hand-written VHDL/Verilog implementations, HLS4ML increases accessibility across the user community. Building on an understanding of this framework, we implement a Long Short-Term Memory (LSTM) network targeting an FPGA: we take a Top Tagging LSTM model, translate it to HLS code, and use an HLS tool to obtain reports and analyze the overall latency and resources required by the model. This thesis explains the motivation for using LSTMs, their current state of development, and my work on incorporating this neural network into the HLS4ML framework.
id cern-2729154
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2020
record_format invenio
spelling cern-2729154 (updated 2020-09-28T12:49:06Z) http://cds.cern.ch/record/2729154 eng Rao, Richa. Implementation of Long Short-Term Memory Neural Networks in High-Level Synthesis Targeting FPGAs. Detectors and Experimental Techniques; Engineering. [abstract repeated; see the description field above] CERN-THESIS-2020-103 oai:cds.cern.ch:2729154 2020-08-28T22:46:46Z
title Implementation of Long Short-Term Memory Neural Networks in High-Level Synthesis Targeting FPGAs
topic Detectors and Experimental Techniques
Engineering
url http://cds.cern.ch/record/2729154