61. by Sallou, Olivier, Duek, Paula D., Darde, Thomas A., Collin, Olivier, Lane, Lydie, Chalmel, Frédéric
“…In this study, it was used to prioritize 21 dubious protein-coding genes among the 616 annotated in neXtProt for reannotation. PepPSy is freely available at http://peppsy.genouest.org. …”
Published 2016
Resource link
Online Article Text
62. “…Here, we propose a graphical representation of the functional mitochondrial proteome by retrieving mitochondrial proteins from the NeXtProt database and adding to the network their interactors as annotated in the IntAct database. …”
Resource link
Online Article Text
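Entry 62 builds a network from curated protein data. As a generic illustration only, here is a minimal sketch of assembling such a graph with the networkx package; the accession identifiers and interaction pairs are hypothetical placeholders, not records retrieved from NeXtProt or IntAct.

    # Minimal sketch: seed a graph with a set of mitochondrial proteins and add
    # binary interactions; interactors outside the seed set become new nodes.
    import networkx as nx

    mito_proteins = {"NX_A00001", "NX_A00002", "NX_A00003"}      # hypothetical accessions
    interactions = [("NX_A00001", "NX_A00002"),                  # hypothetical interaction pairs
                    ("NX_A00002", "NX_B00009")]

    g = nx.Graph()
    g.add_nodes_from(mito_proteins)
    g.add_edges_from(interactions)

    print(g.number_of_nodes(), g.number_of_edges())              # 4 nodes, 2 edges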
63. by Yerram, Varun, Takeshita, Hiroyuki, Iwahori, Yuji, Hayashi, Yoshitsugu, Bhuyan, M. K., Fukui, Shinji, Kijsirikul, Boonserm, Wang, Aili
“…The proposed approach uses novel U-net and Resnet architectures called U-net++ and ResNeXt. The state-of-the-art model is combined with the proposed efficient post-processing approach to improve the overlap with ground truth labels. …”
Published 2022
Resource link
Online Article Text
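Entry 63 pairs a U-Net++ decoder with a ResNeXt backbone. A minimal sketch of that pairing, assuming the third-party segmentation_models_pytorch package is available; the encoder choice, input size, and single-class output are illustrative and not the configuration of the cited paper.

    # Minimal sketch: U-Net++ decoder over an ImageNet-pretrained ResNeXt encoder.
    import torch
    import segmentation_models_pytorch as smp

    model = smp.UnetPlusPlus(
        encoder_name="resnext50_32x4d",   # ResNeXt backbone (assumed available in smp)
        encoder_weights="imagenet",       # pretrained encoder weights
        in_channels=3,
        classes=1,                        # single foreground class
    )

    logits = model(torch.randn(1, 3, 256, 256))   # -> (1, 1, 256, 256) mask logits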
64. by Maracani, Andrea, Pastore, Vito Paolo, Natale, Lorenzo, Rosasco, Lorenzo, Odone, Francesca
“…Finally, we design and test an ensemble of our Vision Transformers and the ConvNeXt, outperforming the state-of-the-art existing works on plankton image classification on the three target datasets. …”
Published 2023
Resource link
Online Article Text
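Entry 64 ensembles Vision Transformers with a ConvNeXt. A minimal sketch of prediction-level averaging for one ViT and one ConvNeXt, assuming torchvision (0.13+) pretrained weights; the equal-weight softmax average is an assumption, not the fusion rule of the cited work.

    # Minimal sketch: average the softmax outputs of a ViT and a ConvNeXt.
    import torch
    from torchvision import models

    vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1).eval()
    convnext = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1).eval()

    x = torch.randn(4, 3, 224, 224)                    # dummy image batch
    with torch.no_grad():
        probs = (vit(x).softmax(dim=1) + convnext(x).softmax(dim=1)) / 2
    pred = probs.argmax(dim=1)                         # ensemble class predictions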
65. by Kim, Hee E., Maros, Mate E., Miethke, Thomas, Kittel, Maximilian, Siegel, Fabian, Ganslandt, Thomas
“…Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNN), ResNet and ConvNeXT. The overall overview of performances including accuracy, inference time and model size was also visualized. …”
Published 2023
Resource link
Online Article Text
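Entry 65 reports accuracy alongside inference time and model size. A minimal sketch of measuring the latter two for a ViT and a ConvNeXt from torchvision; the models, batch size, and CPU timing loop are illustrative assumptions rather than the protocol of the cited study.

    # Minimal sketch: parameter count and average per-image CPU inference time.
    import time
    import torch
    from torchvision import models

    def profile(model, reps=10):
        model.eval()
        x = torch.randn(1, 3, 224, 224)
        with torch.no_grad():
            model(x)                                   # warm-up pass
            t0 = time.perf_counter()
            for _ in range(reps):
                model(x)
        n_params = sum(p.numel() for p in model.parameters())
        return n_params / 1e6, (time.perf_counter() - t0) / reps

    for name, net in [("ViT-B/16", models.vit_b_16()), ("ConvNeXt-T", models.convnext_tiny())]:
        params_m, secs = profile(net)
        print(f"{name}: {params_m:.1f}M parameters, {secs * 1000:.1f} ms per image")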
66. by Qureshi, Amad, Lim, Seongjin, Suh, Soh Youn, Mutawak, Bassam, Chitnis, Parag V., Demer, Joseph L., Wei, Qi
“…In this study, we investigated the performance of four deep learning frameworks of U-Net, U-NeXt, DeepLabV3+, and ConResNet in multi-class pixel-based segmentation of the extraocular muscles (EOMs) from coronal MRI. …”
Published 2023
Resource link
Online Article Text
67. “…First, features from the infrared thermal images of sinter cross section at the tail of the sinterer are extracted based on ResNeXt. Then, to eliminate the irrelevant, redundant and noisy features, an efficient feature selection method based on binary state transition algorithm (BSTA) is proposed to find the truly useful features. …”
Resource link
Online Article Text
68. “…Our end-to-end convolutional neural network consists of two stages, where at first a view-invariant trajectory descriptor for each body joint is generated from RGB images, and then the collection of trajectories for all joints are processed by an adapted, pre-trained 2D convolutional neural network (CNN) (e.g., VGG-19 or ResNeXt-50) to learn the relationship amongst the different body parts and deliver a score for the movement quality. …”
Resource link
Online Article Text
69. by Chen, Zhonghao, He, Pengguang, He, Yefan, Wu, Fan, Rao, Xiuqin, Pan, Jinming, Lin, Hongjian
“…The image dataset of individual eggshell was collected from the blunt-end region of 770 chicken eggs using an image acquisition platform. The ResNeXt network was then trained as a texture feature extraction module to obtain sufficient eggshell texture features. …”
Published 2023
Resource link
Online Article Text
70. “…In particular, our PLG-ViT models outperformed similarly sized networks like ConvNeXt and Swin Transformer, achieving Top-1 accuracy values of 83.4%, 84.0%, and 84.5% on ImageNet-1K with 27M, 52M, and 91M parameters, respectively. …”
Resource link
Online Article Text
71. by Wei, Heng-Le, Wei, Cunsheng, Feng, Yibo, Yan, Wanying, Yu, Yu-Sheng, Chen, Yu-Chen, Yin, Xindao, Li, Junrong, Zhang, Hong
“…The prediction accuracies of the ResNet34, ResNet50, ResNeXt50, DenseNet121, and 3D ResNet18 models were 0.65, 0.74, 0.65, 0.70, and 0.78, respectively. …”
Published 2023
Resource link
Online Article Text
72. by Gobeill, Julien, Gaudet, Pascale, Dopp, Daniel, Morrone, Adam, Kahanda, Indika, Hsu, Yi-Yu, Wei, Chih-Hsuan, Lu, Zhiyong, Ruch, Patrick
“…The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. …”
Published 2018
Resource link
Online Article Text
73. by Gessert, Nils, Nielsen, Maximilian, Shaikh, Mohsin, Werner, René, Schlaefer, Alexander
“…• We address skin lesion classification with an ensemble of deep learning models including EfficientNets, SENet, and ResNeXt WSL, selected by a search strategy. • We rely on multiple model input resolutions and employ two cropping strategies for training. …”
Published 2020
Resource link
Online Article Text
74. “…Firstly, Resnet-152 and ResNeXt-101 are used to extract features from videos. …”
Resource link
Online Article Text
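Entry 74 extracts video features with ResNet-152 and ResNeXt-101. A minimal sketch of that extraction step on a batch of frames, assuming torchvision backbones with the classification head replaced by an identity layer; frame sampling and any temporal aggregation from the cited work are not reproduced.

    # Minimal sketch: frame-level features from two pretrained backbones.
    import torch
    from torchvision import models

    resnet152 = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1).eval()
    resnext101 = models.resnext101_32x8d(weights=models.ResNeXt101_32X8D_Weights.IMAGENET1K_V1).eval()

    # Drop the classifiers so each backbone returns its 2048-d pooled feature.
    resnet152.fc = torch.nn.Identity()
    resnext101.fc = torch.nn.Identity()

    frames = torch.randn(16, 3, 224, 224)              # 16 dummy video frames
    with torch.no_grad():
        feats = torch.cat([resnet152(frames), resnext101(frames)], dim=1)   # (16, 4096)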
75. “…As a novelty, the paper proposes an intelligent decision system for segmenting liver and hepatic tumors by integrating four efficient neural networks (ResNet152, ResNeXt101, DenseNet201, and InceptionV3). Images from computed tomography for training, validation, and testing were taken from the public LiTS17 database and preprocessed to better highlight liver tissue and tumors. …”
Resource link
Online Article Text
76. “…Compared with the deep learning models ResNet and ResNeXt, experimental results show that the YOLOv5 achieves lower accuracy for ingredient recognition, but it can locate and classify multiple ingredients in one shot and make the scanning process easier for users. …”
Resource link
Online Article Text
77. “…In this article, based on the feasibility of forecasting the compressor geometric variable system, an enhanced ConvNeXt model utilizing the Sliding Window Algorithm mechanism is proposed. …”
Resource link
Online Article Text
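Entry 77 combines a ConvNeXt-style model with a sliding-window mechanism over sensor readings. A minimal sketch of only the sliding-window step, turning a univariate series into (window, next value) training pairs; the window length and synthetic signal are illustrative assumptions.

    # Minimal sketch: build supervised pairs from a 1-D series with a sliding window.
    import numpy as np

    def sliding_windows(series, window=32, horizon=1):
        """Return inputs of shape (N, window) and targets of shape (N, horizon)."""
        X, y = [], []
        for start in range(len(series) - window - horizon + 1):
            X.append(series[start:start + window])
            y.append(series[start + window:start + window + horizon])
        return np.asarray(X), np.asarray(y)

    series = np.sin(np.linspace(0.0, 20.0, 500))       # stand-in for a compressor signal
    X, y = sliding_windows(series, window=32)
    print(X.shape, y.shape)                            # (468, 32) (468, 1)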
78. by Ponomarenko, Elena A., Poverennaya, Ekaterina V., Ilgisonis, Ekaterina V., Pyatnitskiy, Mikhail A., Kopylov, Arthur T., Zgoda, Victor G., Lisitsa, Andrey V., Archakov, Alexander I.
“…Here, meta-analysis of neXtProt knowledge base is proposed for theoretical prediction of the number of different proteoforms that arise from alternative splicing (AS), single amino acid polymorphisms (SAPs), and posttranslational modifications (PTMs). …”
Published 2016
Resource link
Online Article Text
79. by Paladini, Emanuela, Vantaggiato, Edoardo, Bougourzi, Fares, Distante, Cosimo, Hadid, Abdenour, Taleb-Ahmed, Abdelmalik
“…For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two Ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. …”
Published 2021
Resource link
Online Article Text
80. “…In order to reduce the complexity of the model, we present a lightweight yet powerful backbone network (named SA-MobileNeXt) that incorporates channel and spatial attention. …”
Resource link
Online Article Text
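Entry 80 adds channel and spatial attention to a lightweight backbone. The sketch below is a generic CBAM-style block that illustrates the two attention steps; it is not the authors' SA-MobileNeXt design, and the reduction ratio and kernel size are arbitrary choices.

    # Minimal sketch: channel attention followed by spatial attention.
    import torch
    import torch.nn as nn

    class ChannelSpatialAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels))
            self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):
            b, c, _, _ = x.shape
            # Channel attention: global-average pool, MLP, sigmoid gate per channel.
            gate = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
            x = x * gate
            # Spatial attention: mean/max over channels, 7x7 conv, sigmoid gate per pixel.
            pooled = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial_conv(pooled))

    out = ChannelSpatialAttention(64)(torch.randn(2, 64, 32, 32))   # shape preserved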