Showing 601 - 620 results of 2,741 for search '"CPU"', query time: 0.40s
  1. 601
    “…AliSim-HPC parallelizes the simulation process at both multi-core and multi-CPU levels using the OpenMP and message passing interface (MPI) libraries, respectively. …”
    Resource link
    Online Article Text
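The hybrid scheme this snippet describes (MPI across CPUs/nodes, OpenMP across the cores of each CPU) is a common pattern; a minimal sketch follows. It is not AliSim-HPC code: the toy workload, variable names, and the even split of work across ranks are assumptions.

```cpp
// Hedged sketch of hybrid MPI + OpenMP parallelism: MPI distributes work across
// processes (multi-CPU level), OpenMP threads share each process's portion
// (multi-core level). Workload and names are illustrative assumptions.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nprocs = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const long total = 1'000'000;            // hypothetical number of simulation units
    const long per_rank = total / nprocs;    // coarse split across MPI ranks
    double local_sum = 0.0;

    // Fine-grained split across the cores available to this rank.
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = 0; i < per_rank; ++i)
        local_sum += 1.0;                    // stand-in for simulating one unit

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("simulated %.0f units on %d ranks\n", global_sum, nprocs);

    MPI_Finalize();
    return 0;
}
```

Built with something like `mpicxx -fopenmp`, this runs the OpenMP loop inside every MPI rank and combines the per-rank results with a reduction.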
  2. 602
    “…Transformation matrices gathered from the overlays between these diverse structures and the 3D conformer dataset allowed us to drastically (100-fold) reduce the CPU time required for shape overlay. The alignment-recycling heuristic produces results consistent with de novo alignment calculation, with better than 80% hit list overlap on average. …”
    Resource link
    Text
  3. 603
    by Su, Xiaoquan, Xu, Jian, Ning, Kang
    Published 2012
    “…RESULT: In this paper, we proposed Parallel-META, a GPU- and multi-core-CPU-based open-source pipeline for metagenomic data analysis, which enabled the efficient and parallel analysis of multiple metagenomic datasets and the visualization of the results for multiple samples. …”
    Resource link
    Online Article Text
  4. 604
    by Pelletier, Mathew G.
    Published 2008
    “…This research examines the use of programmable graphics processing units (GPUs) as an alternative to the PC's traditional use of the central processing unit (CPU). The use of the GPU as an alternative computation platform allowed the machine vision system to gain a significant improvement in processing time. …”
    Resource link
    Online Article Text
  5. 605
    “…Ventroposterior thalamic stimulation elicited c-Fos-positivity in few cells in the iS1FL and caudate putamen (iCPu). Medial thalamic stimulation, however, produced numerous c-Fos-positive cells in the iCC and iCPu. …”
    Resource link
    Online Article Text
  6. 606
    “…However, compared to calculating correlations on one core of a contemporary central processing unit (CPU), running gEMpicker on a modern GPU gives a speed-up of about 27×. …”
    Resource link
    Online Article Text
  7. 607
    “…Dorsal striatal (caudoputamen, CPu) dopamine depletion by 6-hydroxydopamine resulted in reduced activity of the CPu, globus pallidus externa (GPe), and STN but increased activity of the GPi, SNr, and putative layer V neurons in the motor cortex. …”
    Resource link
    Online Article Text
  8. 608
    “…We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). …”
    Resource link
    Online Article Text
  9. 609
    by Tang, Ke, Zhang, Jinfeng, Liang, Jie
    Published 2014
    “…The computational cost of generating 1,000 conformations for each of the x loops in a benchmark dataset is only about [Image: see text] cpu minutes for 12-residue loops, compared to ca [Image: see text] cpu minutes using the FALCm method. …”
    Resource link
    Online Article Text
  10. 610
    by Calus, Mario PL
    Published 2014
    “…The RHS-updating algorithm reduced CPU time by 74.5 to 93.0% and memory requirements by 13.1 to 66.4% compared to the original algorithm. …”
    Resource link
    Online Article Text
  11. 611
    “…A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. …”
    Resource link
    Online Article Text
  12. 612
    by Zhang, Xueying, Song, Qinbao
    Published 2015
    “…Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions that perform equally well on the same classification problem. …”
    Resource link
    Online Article Text
  13. 613
    by Guinness, Robert E.
    Published 2015
    “…Lastly, we measured the computational complexity of the classifiers, in terms of central processing unit (CPU) time needed for classification, to provide a rough comparison between the algorithms in terms of battery usage requirements. …”
    Resource link
    Online Article Text
  14. 614
    “…We propose a generalized partition scheme for the problem domain so as to keep a balanced utilization of both CPU and accelerator resources. With optimizations of both computing and memory-access patterns, we achieve a speedup of around 8 to 20 times when comparing one hybrid GPU or MIC node with one 12-core CPU node. …”
    Resource link
    Online Article Text
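A balanced CPU/accelerator partition of the kind this snippet mentions can be sketched as splitting the domain in proportion to each device's measured throughput, so both finish at roughly the same time. The rates and the one-dimensional cell domain below are assumptions for illustration, not the paper's actual scheme.

```cpp
// Illustrative sketch: throughput-proportional split of a domain between a CPU
// and an accelerator. All numbers are assumed, not taken from the article.
#include <cstdio>

int main() {
    const long domain_cells = 1'000'000;  // total problem size (assumed)
    const double cpu_rate   = 1.0;        // relative throughput of the 12-core CPU (assumed)
    const double acc_rate   = 9.0;        // relative throughput of the GPU/MIC card (assumed)

    // Balanced split: n_cpu / cpu_rate == n_acc / acc_rate.
    const long cpu_cells = static_cast<long>(domain_cells * (cpu_rate / (cpu_rate + acc_rate)));
    const long acc_cells = domain_cells - cpu_cells;

    std::printf("CPU: %ld cells, accelerator: %ld cells (both ~%.0f time units)\n",
                cpu_cells, acc_cells, cpu_cells / cpu_rate);
    return 0;
}
```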
  15. 615
    “…Results indicate that ppMCMC achieves 1.96x higher sampling efficiency than pMCMC when using sequential CPU implementations. The FPGA architecture of pMCMC is 12.1x and 10.1x faster than state-of-the-art, parallel CPU and GPU implementations of pMCMC and up to 53x more energy efficient; the FPGA architecture of ppMCMC increases these speedups to 34.9x and 41.8x respectively and is 173x more power efficient, bringing previously intractable SSM-based data analyses within reach.…”
    Resource link
    Online Article Text
  16. 616
    by Kim, Bongsong, Beavis, William D
    Published 2017
    “…Multithreading allows computational routines to run concurrently on multiple central processing units (CPUs). Forward chopping addresses memory limitations by dividing a dataset into appropriately sized subsets. …”
    Resource link
    Online Article Text
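The two ideas in this entry, multithreaded execution and "forward chopping" of a large dataset into memory-sized subsets, can be combined roughly as in the sketch below. The chunk size, the summation standing in for the real computation, and all names are assumptions, not the authors' code.

```cpp
// Hedged sketch: process a dataset subset-by-subset ("forward chopping") and
// split each subset across CPU threads. Sizes and the workload are assumed.
#include <algorithm>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 1'000'000;         // total dataset size (assumed)
    const std::size_t chunk_size = 100'000;  // subset size chosen to fit in memory (assumed)
    double total = 0.0;

    for (std::size_t start = 0; start < n; start += chunk_size) {
        // Forward chopping: only the current subset is materialized.
        std::vector<double> chunk(std::min(chunk_size, n - start), 1.0);

        // Multithreading: split the subset across the available CPU threads.
        const unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t per = (chunk.size() + nthreads - 1) / nthreads;
        std::vector<double> partial(nthreads, 0.0);
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                const std::size_t lo = t * per;
                const std::size_t hi = std::min(chunk.size(), lo + per);
                if (lo < hi)
                    partial[t] = std::accumulate(chunk.begin() + lo, chunk.begin() + hi, 0.0);
            });
        }
        for (auto& w : workers) w.join();
        total += std::accumulate(partial.begin(), partial.end(), 0.0);
    }
    std::printf("processed %zu records, total = %.1f\n", n, total);
}
```

Compile with `-pthread`; each subset is processed in parallel while peak memory stays bounded by the chunk size.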
  17. 617
    by Lawrie, David S.
    Published 2017
    “…The presented GPU Optimized Wright–Fisher simulation, or “GO Fish” for short, can be used to simulate arbitrary selection and demographic scenarios while running over 250-fold faster than its serial counterpart on the CPU. Even modest GPU hardware can achieve an impressive speedup of over two orders of magnitude. …”
    Resource link
    Online Article Text
  18. 618
    “…To take advantage of the large number of CPU cores in the NewSQL server to optimize deduplication performance, DOMe parallelizes the deduplication method based on the fork-join framework. …”
    Resource link
    Online Article Text
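DOMe's deduplication is parallelized with the fork-join framework (presumably Java's `java.util.concurrent` fork-join pool); the following sketch mimics the same divide-and-conquer fork/join pattern with C++ `std::async`, to keep the examples here in one language. The record type, task-size threshold, and merge step are illustrative assumptions.

```cpp
// Rough fork-join analogue of parallel deduplication: split the input, dedupe
// the halves concurrently, then merge the duplicate-free sets. Not DOMe code.
#include <cstdio>
#include <future>
#include <set>
#include <string>
#include <vector>

using Records = std::vector<std::string>;

static std::set<std::string> dedup(const Records& r, std::size_t lo, std::size_t hi) {
    if (hi - lo <= 1024)                      // small task: dedupe sequentially
        return std::set<std::string>(r.begin() + lo, r.begin() + hi);
    const std::size_t mid = lo + (hi - lo) / 2;
    // Fork: left half on another thread, right half on this one.
    auto left  = std::async(std::launch::async, dedup, std::cref(r), lo, mid);
    auto right = dedup(r, mid, hi);
    // Join: merge the two duplicate-free sets.
    auto merged = left.get();
    merged.insert(right.begin(), right.end());
    return merged;
}

int main() {
    Records data(10000, "dup");               // toy input with heavy duplication
    data.push_back("unique");
    const auto uniq = dedup(data, 0, data.size());
    std::printf("%zu unique records\n", uniq.size());
}
```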
  19. 619
    “…Three speed optimisation strategies for the CPU are discussed: single-core optimisation, parallelisation for multiple cores and vectorisation. …”
    Resource link
    Online Article Text
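Two of the three strategies this snippet names, parallelisation across cores and vectorisation within a core, can be illustrated with a single OpenMP-annotated loop. The SAXPY kernel and sizes below are assumptions, not taken from the article.

```cpp
// Minimal sketch: the outer loop is split across CPU cores (parallel for) and
// each core's iterations are vectorised with SIMD lanes (simd). Assumed kernel.
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 3.0f;

    #pragma omp parallel for simd
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    std::printf("y[0] = %.1f\n", y[0]);       // expect 5.0
    return 0;
}
```

Built with `-fopenmp -O2`, the same loop exercises both levels of CPU parallelism; single-core optimisation (the third strategy) is mostly a matter of algorithmic and memory-layout choices rather than annotations.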
  20. 620
    “…CONCLUSIONS: Experimental results show that our alignment kernel with traceback is up to 80x and 14.14x faster than its CPU counterpart with synthetic datasets and real datasets, respectively. …”
    Resource link
    Online Article Text