
PromoterLCNN: A Light CNN-Based Promoter Prediction and Classification Model

Bibliographic Details
Main Authors: Hernández, Daryl, Jara, Nicolás, Araya, Mauricio, Durán, Roberto E., Buil-Aranda, Carlos
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9325283/
https://www.ncbi.nlm.nih.gov/pubmed/35885909
http://dx.doi.org/10.3390/genes13071126
Description
Summary: Promoter identification is a fundamental step in understanding bacterial gene regulation mechanisms. However, accurate and fast classification of bacterial promoters continues to be challenging. New methods based on deep convolutional networks have been applied to identify and classify bacterial promoters recognized by sigma (σ) factors and RNA polymerase subunits, which increase affinity to specific DNA sequences to modulate transcription and respond to nutritional or environmental changes. This work presents a new multiclass promoter prediction model based on convolutional neural networks (CNNs), denoted PromoterLCNN, which classifies Escherichia coli promoters into the subclasses σ24, σ28, σ32, σ38, σ54, and σ70. We present a light, fast, and simple two-stage multiclass CNN architecture for promoter identification and classification. Training and testing were performed on a benchmark dataset, part of RegulonDB. Comparative evaluation of PromoterLCNN against other CNN-based classifiers on four metrics (Acc, Sn, Sp, MCC) showed similar or better performance than classifiers that commonly use a cascade architecture, while reducing time for training, prediction, and hyperparameter optimization by approximately 30–90% without compromising classification quality.
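
The abstract describes a light multiclass CNN over promoter sequences and an evaluation based on four standard measures (Acc, Sn, Sp, MCC). The Python sketch below illustrates, under stated assumptions, how such a classifier and per-class metrics could be wired up: the 81 bp window, layer sizes, and the seven-class label set (six sigma subclasses plus non-promoter) are illustrative choices, not the published PromoterLCNN architecture.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 81  # assumed promoter window length, not taken from the paper
CLASSES = ["non-promoter", "sigma24", "sigma28",
           "sigma32", "sigma38", "sigma54", "sigma70"]

def one_hot(seq: str) -> np.ndarray:
    """One-hot encode a DNA sequence (A, C, G, T) into a (len, 4) matrix."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in table:
            mat[i, table[base]] = 1.0
    return mat

def build_light_cnn(seq_len: int = SEQ_LEN,
                    n_classes: int = len(CLASSES)) -> tf.keras.Model:
    """A small convolutional classifier over one-hot DNA input (illustrative)."""
    model = models.Sequential([
        layers.Input(shape=(seq_len, 4)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def per_class_metrics(y_true, y_pred, positive_class):
    """Acc, Sn, Sp, MCC for one class treated as the positive label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == positive_class) & (y_pred == positive_class))
    tn = np.sum((y_true != positive_class) & (y_pred != positive_class))
    fp = np.sum((y_true != positive_class) & (y_pred == positive_class))
    fn = np.sum((y_true == positive_class) & (y_pred != positive_class))
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn) if (tp + fn) else 0.0          # sensitivity (recall)
    sp = tn / (tn + fp) if (tn + fp) else 0.0          # specificity
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = ((tp * tn) - (fp * fn)) / denom if denom else 0.0
    return acc, sn, sp, mcc

In this sketch, a sequence would be encoded with one_hot, batched, and passed to the model returned by build_light_cnn; per_class_metrics then reports Acc, Sn, Sp, and MCC for each sigma subclass in turn, which is the one-vs-rest convention commonly used when comparing multiclass promoter classifiers.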