
Deep Generation of Coq Lemma Names Using Elaborated Terms

Coding conventions for naming, spacing, and other essentially stylistic properties are necessary for developers to effectively understand, review, and modify source code in large software projects. Consistent conventions in verification projects based on proof assistants, such as Coq, increase in importance as projects grow in size and scope. While conventions can be documented and enforced manually at high cost, emerging approaches automatically learn and suggest idiomatic names in Java-like languages by applying statistical language models on large code corpora. However, due to its powerful language extension facilities and fusion of type checking and computation, Coq is a challenging target for automated learning techniques. We present novel generation models for learning and suggesting lemma names for Coq projects. Our models, based on multi-input neural networks, are the first to leverage syntactic and semantic information from Coq's lexer (tokens in lemma statements), parser (syntax trees), and kernel (elaborated terms) for naming; the key insight is that learning from elaborated terms can substantially boost model performance. We implemented our models in a toolchain, dubbed Roosterize, and applied it on a large corpus of code derived from the Mathematical Components family of projects, known for its stringent coding conventions. Our results show that Roosterize substantially outperforms baselines for suggesting lemma names, highlighting the importance of using multi-input models and elaborated terms.

Bibliographic Details
Main Authors: Nie, Pengyu; Palmskog, Karl; Li, Junyi Jessy; Gligoric, Milos
Format: Online Article Text
Language: English
Published: 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7324046/
http://dx.doi.org/10.1007/978-3-030-51054-1_6
author Nie, Pengyu
Palmskog, Karl
Li, Junyi Jessy
Gligoric, Milos
collection PubMed
description Coding conventions for naming, spacing, and other essentially stylistic properties are necessary for developers to effectively understand, review, and modify source code in large software projects. Consistent conventions in verification projects based on proof assistants, such as Coq, increase in importance as projects grow in size and scope. While conventions can be documented and enforced manually at high cost, emerging approaches automatically learn and suggest idiomatic names in Java-like languages by applying statistical language models on large code corpora. However, due to its powerful language extension facilities and fusion of type checking and computation, Coq is a challenging target for automated learning techniques. We present novel generation models for learning and suggesting lemma names for Coq projects. Our models, based on multi-input neural networks, are the first to leverage syntactic and semantic information from Coq's lexer (tokens in lemma statements), parser (syntax trees), and kernel (elaborated terms) for naming; the key insight is that learning from elaborated terms can substantially boost model performance. We implemented our models in a toolchain, dubbed Roosterize, and applied it on a large corpus of code derived from the Mathematical Components family of projects, known for its stringent coding conventions. Our results show that Roosterize substantially outperforms baselines for suggesting lemma names, highlighting the importance of using multi-input models and elaborated terms.
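To make the "stringent coding conventions" concrete: the Mathematical Components library encodes the shape of a lemma's statement in compact name suffixes, which is exactly the kind of regularity a naming model can learn. A small illustration, with statements as they appear in MathComp's arithmetic library (shown without their proofs, so this is a sketch rather than a compilable file):

```
(* MathComp naming conventions pack semantic information into suffixes:
   "addn" is addition on nat, "muln" is multiplication on nat,
   "C" marks commutativity, "A" marks associativity. *)
Lemma addnC : commutative addn.   (* m + n = n + m *)
Lemma addnA : associative addn.   (* associativity of nat addition *)
Lemma mulnC : commutative muln.   (* m * n = n * m *)
```

A model that sees only the lexer's tokens must infer these suffixes from surface syntax, whereas the kernel's elaborated term makes the underlying operations and their types explicit, which is the intuition behind the paper's multi-input design.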
format Online
Article
Text
id pubmed-7324046
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
Published online: 2020-06-06. Available at /pmc/articles/PMC7324046/ and http://dx.doi.org/10.1007/978-3-030-51054-1_6. © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
topic Article