Showing 37,641 - 37,660 of 37,890 results for search '"forestal"', query time: 0.35s
  1. 37641
    “…Feature selection methods were used to select for subsets of transcripts to be used in the selected classification approaches: support vector machine, logistic regression, decision trees, random forest, and extremely randomized decision trees (extra-trees). …”
    Resource link
    Online Article Text
  2. 37642
    “…Second, to address the problems of many types of ambient air quality parameters in sheep barns and possible redundancy or overlapping information, we used a random forests algorithm (RF) to screen and rank the features affecting CO(2) mass concentration and selected the top four features (light intensity, air relative humidity, air temperature, and PM2.5 mass concentration) as the input of the model to eliminate redundant information among the variables. …”
    Resource link
    Online Article Text
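The feature-screening step quoted in this record (rank candidate barn parameters, keep the top four, discard redundant ones) can be sketched as follows. This is a minimal stand-in that ranks features by absolute Pearson correlation with the target rather than by random-forest importances, and all readings are made-up illustrative numbers, not the study's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def top_k_features(features, target, k=4):
    """Rank features by |correlation| with the target and keep the top k."""
    scores = {name: abs(pearson(vals, target)) for name, vals in features.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Illustrative barn readings (hypothetical values).
features = {
    "light_intensity":   [120, 150, 90, 200, 170, 110],
    "relative_humidity": [60, 62, 58, 70, 68, 59],
    "air_temperature":   [18, 19, 17, 22, 21, 18],
    "pm25":              [35, 40, 30, 55, 50, 33],
    "wind_speed":        [2, 1, 3, 1, 2, 3],
}
co2 = [410, 430, 395, 480, 460, 405]
selected = top_k_features(features, co2, k=4)
```

A random-forest ranking (the approach the abstract actually names) additionally captures non-linear and interaction effects that a correlation score misses; the selection-of-top-k step is the same either way.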
  3. 37643
  4. 37644
    “…In the discovery set, the ICS was constructed using a random forest algorithm and confirmed in the validation set to predict overall survival (OS) and event-free survival (EFS). …”
    Resource link
    Online Article Text
  5. 37645
    “…We measured antigen-specific immunoglobulin responses against antigens using a customised Luminex assay and used conditional random forest models to examine which baseline biomarkers were most important for classifying individuals who went on to develop infection versus those who remained uninfected or asymptomatic. …”
    Resource link
    Online Article Text
  6. 37646
    “…It was also used to conduct machine learning exercises such as random forest and regression to identify the best candidate for immune-related central genes. …”
    Resource link
    Online Article Text
  7. 37647
    “…To set up benchmarking ML models to predict LBW, we applied 7 classic ML models (ie, logistic regression, naive Bayes, random forest, extreme gradient boosting, adaptive boosting, multilayer perceptron, and sequential artificial neural network) while using 4 different data rebalancing methods: random undersampling, random oversampling, synthetic minority oversampling technique, and weight rebalancing. …”
    Resource link
    Online Article Text
  8. 37648
    “…In the training set, gradient boosting decision tree (GBDT), extremely random trees (ET), random forest, logistic regression and extreme gradient boosting (XGBoost) obtained AUROC values > 0.90 and AUPRC > 0.87. …”
    Resource link
    Online Article Text
  9. 37649
    “…Data included year of birth, sex, height, weight, motivation to join the program, use statistics (eg, weight entries, entries into the food diary, views of the menu, and program content), program type, and weight loss. Random forest, extreme gradient boosting, and logistic regression with L1 regularization models were developed and validated using a 10-fold cross-validation approach. …”
    Resource link
    Online Article Text
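The 10-fold cross-validation protocol quoted in this record can be sketched generically. The splitter and evaluation loop below are a minimal illustration; the model is abstracted behind a fit/score pair, and the trivial majority-class "model" in the usage example is a stand-in, not one of the study's classifiers.

```python
import random

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and deal them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(fit, score, X, y, k=10):
    """Per-fold scores: train on k-1 folds, evaluate on the held-out fold."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for t, f in enumerate(folds) if t != i for j in f]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        scores.append(score(model, [X[j] for j in test_idx], [y[j] for j in test_idx]))
    return scores

# Stand-in model: always predicts the training-set majority label.
fit = lambda X, y: max(set(y), key=y.count)
score = lambda m, X, y: sum(1 for t in y if t == m) / len(y)
X = list(range(50))
y = [0] * 35 + [1] * 15
scores = cross_validate(fit, score, X, y, k=10)
```

With equal-size folds, every sample is held out exactly once, so the mean fold score here equals the overall majority-class rate (0.7).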
  10. 37650
    “…The results were synthesized through information extraction and presented in tables and forest plots. RESULTS: In total, 5 RCTs were included in this systematic review, with 3 (60%) providing information for the meta-analysis. …”
    Resource link
    Online Article Text
  11. 37651
    “…Along with conventional ML models such as logistic regression (LR), random forest (RF), and gradient boosting (GB), the DNN model to discern recurrences was trained using a dataset of 778 consecutive patients with primary head and neck cancers who received CCRT. …”
    Resource link
    Online Article Text
  12. 37652
  13. 37653
    “…METHODS: Our study included 5,420,640 participants with fatty liver from Meinian Health Care Center. We used random forest, elastic net (EN), and extreme gradient boosting ML algorithms to select important features from potential predictors. …”
    Resource link
    Online Article Text
  14. 37654
    “…Classes were balanced using a synthetic minority oversampling technique and Boruta, a feature selection algorithm, was used to refine gene lists. We performed random forest and calculated “out of box” (OOB) area under the curve (AUC) and OOB error rate (ER), measures of predictive accuracy and robustness, respectively. …”
    Resource link
    Online Article Text
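The class-balancing step quoted in this record (synthetic minority oversampling) can be sketched as follows. This is a minimal SMOTE-like interpolation between pairs of minority samples, not the exact SMOTE variant the authors used (standard SMOTE interpolates toward a k-nearest neighbour rather than a random minority point), and the data are made up.

```python
import random

def smote_like(minority, n_new, seed=0):
    """Generate synthetic minority samples by linear interpolation
    between two randomly chosen minority points."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # two distinct minority samples
        lam = rng.random()               # interpolation factor in [0, 1)
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Hypothetical 2-D minority-class samples.
minority = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [1.1, 2.1]]
new_points = smote_like(minority, n_new=6)
```

Because each synthetic point lies on a segment between two real minority samples, all generated points stay inside the minority class's convex hull, which is what keeps oversampling from inventing out-of-distribution examples.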
  15. 37655
    by Hofman, P., Calabrese, F., Kern, I., Adam, J., Alarcão, A., Alborelli, I., Anton, N.T., Arndt, A., Avdalyan, A., Barberis, M., Bégueret, H., Bisig, B., Blons, H., Boström, P., Brcic, L., Bubanovic, G., Buisson, A., Caliò, A., Cannone, M., Carvalho, L., Caumont, C., Cayre, A., Chalabreysse, L., Chenard, M.P., Conde, E., Copin, M.C., Côté, J.F., D’Haene, N., Dai, H.Y., de Leval, L., Delongova, P., Denčić-Fekete, M., Fabre, A., Ferenc, F., Forest, F., de Fraipont, F., Garcia-Martos, M., Gauchotte, G., Geraghty, R., Guerin, E., Guerrero, D., Hernandez, S., Hurník, P., Jean-Jacques, B., Kashofer, K., Kazdal, D., Lantuejoul, S., Leonce, C., Lupo, A., Malapelle, U., Matej, R., Merlin, J.L., Mertz, K.D., Morel, A., Mutka, A., Normanno, N., Ovidiu, P., Panizo, A., Papotti, M.G., Parobkova, E., Pasello, G., Pauwels, P., Pelosi, G., Penault-Llorca, F., Picot, T., Piton, N., Pittaro, A., Planchard, G., Poté, N., Radonic, T., Rapa, I., Rappa, A., Roma, C., Rot, M., Sabourin, J.C., Salmon, I., Prince, S. Savic, Scarpa, A., Schuuring, E., Serre, I., Siozopoulou, V., Sizaret, D., Smojver-Ježek, S., Solassol, J., Steinestel, K., Stojšić, J., Syrykh, C., Timofeev, S., Troncone, G., Uguen, A., Valmary-Degano, S., Vigier, A., Volante, M., Wahl, S.G.F., Stenzinger, A., Ilié, M.
    Published 2023
    Resource link
    Online Article Text
  16. 37656
    “…Three ML algorithms—a regression model with elastic net regularization (glmnet), a random survival forest (RSF), and a gradient tree-boosting technique (xgboost)—were evaluated for 5 combinations of clinical data, tumor radiomics, and whole-liver features. …”
    Resource link
    Online Article Text
  17. 37657
    “…RESULTS: We developed a random forest classifier over features derived from Gene Ontology annotations and genetic context scores provided by STRING database for predicting Mtb and CD interactions independently. …”
    Resource link
    Online Article Text
  18. 37658
    “…Reported estimates of effect on the probability of surgery from analyses adjusting for confounders were summarised in narrative form and synthesised in odds ratio (OR) forest plots for individual determinants. RESULTS: The review included 26 quantitative studies (23 on individuals’ decisions or views on having the operation and three about health professionals’ opinions) and 10 qualitative studies. …”
    Resource link
    Online Article Text
  19. 37659
    “…BACKGROUND: Pine moths (Lepidoptera; Bombycoidea; Lasiocampidae: Dendrolimus spp.) are among the most serious insect pests of forests, especially in southern China. Although COI barcodes (a standardized portion of the mitochondrial cytochrome c oxidase subunit I gene) can distinguish some members of this genus, the evolutionary relationships of the three morphospecies Dendrolimus punctatus, D. tabulaeformis and D. spectabilis have remained largely unresolved. …”
    Resource link
    Online Article Text
  20. 37660
    “…Binary classification performance for biweekly PHQ-9 samples (n=143), with a cutoff of PHQ-9≥11, based on Random Forest and Support Vector Machine leave-one-out cross validation resulted in 60.1% and 59.1% accuracy, respectively. …”
    Resource link
    Online Article Text
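The leave-one-out cross-validation protocol quoted in this record can be sketched generically. The 1-nearest-neighbour classifier below is a simple stand-in for the Random Forest and Support Vector Machine models in the study, and the two toy clusters are illustrative data, not the PHQ-9 samples.

```python
def loo_accuracy(X, y):
    """Leave-one-out cross-validation: hold out each sample in turn,
    classify it with 1-nearest-neighbour on the rest, report accuracy."""
    correct = 0
    for i in range(len(X)):
        rest = [(X[j], y[j]) for j in range(len(X)) if j != i]
        # Nearest training point by squared Euclidean distance.
        nearest = min(rest, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], X[i])))
        correct += nearest[1] == y[i]
    return correct / len(X)

# Two well-separated toy clusters (hypothetical features, binary labels).
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [4.9, 5.2]]
y = [0, 0, 0, 1, 1, 1]
acc = loo_accuracy(X, y)  # 1.0 for these cleanly separated clusters
```

Leave-one-out is attractive at the study's small sample size (n=143) because it trains on all but one sample per round, at the cost of n model fits and a higher-variance accuracy estimate than k-fold splitting.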
Search tools: RSS