Showing 161 - 180 of 272 results for search '"Gigabyte"', query time: 0.13s
  1. 161
    “…For the human genome, Burrows-Wheeler indexing allows Bowtie to align more than 25 million reads per CPU hour with a memory footprint of approximately 1.3 gigabytes. Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches. …”
    Resource link
    Text
  2. 162
    “…The main subjects are magnetic sensors with high resolution and magnetic read heads with high sensitivity, required for hard-disk drives with recording densities of several gigabytes. Another important subject is novel magnetic random-access memories (MRAM) with non-volatile non-destructive and radiation-resistant characteristics. …”
    Resource link
  3. 163
    “…In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. AVAILABILITY: Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. …”
    Resource link
    Online Article Text
  4. 164
    “…Typical databases used by KrakenUniq are tens to hundreds of gigabytes in size. The original KrakenUniq code required loading the entire database in RAM, which demanded expensive high-memory servers to run it efficiently. …”
    Resource link
    Online Article Text
  5. 165
    “…The format allows fast random access to hundreds of gigabytes of data, while retaining a small disk space footprint. …”
    Resource link
    Text
  6. 166
  7. 167
    “…Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. …”
    Resource link
    Online Article Text
  8. 168
    “…Modern microscopes create a data deluge with gigabytes of data generated each second, and terabytes per day. …”
    Resource link
    Online Article Text
  9. 169
    by Sharma, Sudip, Kumar, Sudhir
    Published 2022
    “…ModelTamer selects models hundreds to thousands of times faster than the full data analysis while needing megabytes rather than gigabytes of computer memory.…”
    Resource link
    Online Article Text
  10. 170
    by Larsen, Peter E.
    Published 2016
    “…The results of DNA sequencing experiments can generate gigabytes to terabytes of information, however, making this data difficult for the citizen scientist to grasp and for the educator to convey. …”
    Resource link
    Online Article Text
  11. 171
    by Manekar, Swati C, Sathe, Shailesh R
    Published 2018
    “…The rapid development of high-throughput sequencing technologies means that hundreds of gigabytes of sequencing data can be produced in a single study. …”
    Resource link
    Online Article Text
  12. 172
    “…This could be a fly avoiding predators, or the retina processing gigabytes of data to guide human actions. In this work we draw parallels between these and the efficient sampling of biomolecules with hundreds of thousands of atoms. …”
    Resource link
    Online Article Text
  13. 173
    by Almomani, Osama
    Published 2022
    “…The datasets are very large (gigabytes to terabytes), and only metadata information is generated as JSON records that go directly to the portal via the data curation scripts. The data curation scripts contain a collection of data ingestion and curation tools used to prepare the datasets’ metadata, software, and any accompanying material for public open data releases on the CERN Open Data portal.[2]…”
    Resource link
  14. 174
    “…To push the boundaries of MSA utilization, we conducted a petabase-scale search of the Sequence Read Archive (SRA), resulting in gigabytes of aligned homologs for CASP15 targets. These were merged with default MSAs produced by ColabFold-search and provided to ColabFold-predict. …”
    Resource link
    Online Article Text
  15. 175
    “…In the process, a vast quantity of unrefined data, which can amount to several hundred gigabytes per tissue section, is produced. Managing, analysing and interpreting this data is a significant challenge and represents a major barrier to the translational application of MSI. …”
    Resource link
    Online Article Text
  16. 176
    by Cromaz, Mario
    Published 2021
    “…The data stream will reach 480 thousand events per second at an aggregate data rate of 4 gigabytes per second at full design capacity. We have been able to simplify the architecture of the streaming system greatly by interfacing the FPGA-based detector electronics with the computing cluster using standard network technology. …”
    Resource link
  17. 177
    “…However, as the research software used becomes increasingly complex, the software images easily grow to sizes of multiple gigabytes. Downloading the full image onto every single compute node on which the containers are executed becomes impractical. …”
    Resource link
  18. 178
    “…We were able to achieve a high DNA data density of 7.0 × 10⁹ gigabytes per gram using a hydrogel-based system.…”
    Resource link
    Online Article Text
  19. 179
    “…BACKGROUND: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. …”
    Resource link
    Texto
  20. 180
    by Li, Heng, Durbin, Richard
    Published 2010
    “…Results: We designed and implemented a new algorithm, Burrows-Wheeler Aligner's Smith-Waterman Alignment (BWA-SW), to align long sequences up to 1 Mb against a large sequence database (e.g. the human genome) with a few gigabytes of memory. The algorithm is as accurate as SSAHA2, more accurate than BLAT, and is several to tens of times faster than both. …”
    Resource link
    Text
Search tools: RSS