161 “…For the human genome, Burrows-Wheeler indexing allows Bowtie to align more than 25 million reads per CPU hour with a memory footprint of approximately 1.3 gigabytes. Bowtie extends previous Burrows-Wheeler techniques with a novel quality-aware backtracking algorithm that permits mismatches. …”
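The Burrows-Wheeler indexing behind Bowtie (record 161) can be illustrated with a toy transform. Bowtie actually builds a compressed FM-index and searches it with quality-aware backtracking; this sketch, which sorts every rotation explicitly, only illustrates the transform itself, and the function name `bwt` is ours, not Bowtie's.

```python
def bwt(text):
    """Burrows-Wheeler transform by sorting all rotations (toy version;
    real aligners derive it from a suffix array and index it as an FM-index)."""
    text += "$"  # unique sentinel, lexicographically smallest character
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    # The transform is the last column of the sorted rotation matrix.
    return "".join(rot[-1] for rot in rotations)

print(bwt("ACGTACG"))  # → GT$AACCG
```

The transformed string clusters identical characters, which is what makes the index both highly compressible and searchable in place.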
162 “…The main subjects are magnetic sensors with high resolution and magnetic read heads with high sensitivity, required for hard-disk drives with recording densities of several gigabytes. Another important subject is novel magnetic random-access memories (MRAM) with non-volatile non-destructive and radiation-resistant characteristics. …”
163 “…In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. AVAILABILITY: Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. …”
164 “…Typical databases used by KrakenUniq are tens to hundreds of gigabytes in size. The original KrakenUniq code required loading the entire database in RAM, which demanded expensive high-memory servers to run it efficiently. …”
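The in-RAM bottleneck described in record 164, loading a whole database before use, is commonly relieved by memory-mapping the file so the OS pages in only the regions actually touched. This is a generic Python sketch of that technique, not KrakenUniq's actual code; the file contents and the function name are ours.

```python
import mmap
import os
import tempfile

def read_slice(path, start, stop):
    """Read bytes [start:stop) from a file through mmap: only the pages
    covering the slice are faulted in, so a file far larger than RAM works."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm[start:stop]

# Demo with a small stand-in "database" file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"A" * 4096 + b"KMER" + b"B" * 4096)
    demo_path = f.name

print(read_slice(demo_path, 4096, 4100))  # → b'KMER'
os.remove(demo_path)
```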
165 “…The format allows fast random access to hundreds of gigabytes of data, while retaining a small disk space footprint. …”
166 by Wang, S M, Acosta, D, Madorsky, A, Scurlock, B, Atamanchuk, A G, Golovtsov, V L, Razmyslovich, B V: “…It receives approximately 3 gigabytes of data every second from a custom backplane operating at 280 MHz. …”
Published 2001
167 “…Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. …”
168 by Cheeseman, Bevan L., Günther, Ulrik, Gonciarz, Krzysztof, Susik, Mateusz, Sbalzarini, Ivo F.: “…Modern microscopes create a data deluge with gigabytes of data generated each second, and terabytes per day. …”
Published 2018
169 “…ModelTamer selects models hundreds to thousands of times faster than the full data analysis while needing megabytes rather than gigabytes of computer memory.…”
170 by Larsen, Peter E.: “…The results of the DNA sequencing experiments can generate gigabytes to terabytes of information, however, making it difficult for the citizen scientist to grasp and the educator to convey this data. …”
Published 2016
171 “…The rapid development of high-throughput sequencing technologies means that hundreds of gigabytes of sequencing data can be produced in a single study. …”
172 “…This could be a fly avoiding predators, or the retina processing gigabytes of data to guide human actions. In this work we draw parallels between these and the efficient sampling of biomolecules with hundreds of thousands of atoms. …”
173 by Almomani, Osama: “…The datasets are very large (gigabytes to terabytes), and only metadata information is generated as JSON records that go directly to the portal by the data curation scripts. Data curation scripts contain a collection of data ingestion and curation tools used to prepare the datasets’ metadata, software, and any accompanying material for public open data releases on the CERN Open Data portal.[2]…”
Published 2022
174 by Lee, Sewon, Kim, Gyuri, Karin, Eli Levy, Mirdita, Milot, Park, Sukhwan, Chikhi, Rayan, Babaian, Artem, Kryshtafovych, Andriy, Steinegger, Martin: “…To push the boundaries of MSA utilization, we conducted a petabase-scale search of the Sequence Read Archive (SRA), resulting in gigabytes of aligned homologs for CASP15 targets. These were merged with default MSAs produced by ColabFold-search and provided to ColabFold-predict. …”
Published 2023
175 by Veselkov, Kirill, Sleeman, Jonathan, Claude, Emmanuelle, Vissers, Johannes P. C., Galea, Dieter, Mroz, Anna, Laponogov, Ivan, Towers, Mark, Tonge, Robert, Mirnezami, Reza, Takats, Zoltan, Nicholson, Jeremy K., Langridge, James I.: “…In the process, a vast quantity of unrefined data, that can amount to several hundred gigabytes per tissue section, is produced. Managing, analysing and interpreting this data is a significant challenge and represents a major barrier to the translational application of MSI. …”
Published 2018
176 by Cromaz, Mario: “…The data stream will reach 480 thousand events per second at an aggregate data rate of 4 gigabytes per second at full design capacity. We have been able to simplify the architecture of the streaming system greatly by interfacing the FPGA-based detector electronics with the computing cluster using standard network technology. …”
Published 2021
177 “…However, as the research software used becomes increasingly complex, the software images grow easily to sizes of multiple gigabytes. Downloading the full image onto every single compute node on which the containers are executed becomes unpractical. …”
178 “…We were able to achieve a high DNA data density of 7.0 × 10^9 gigabytes per gram using a hydrogel-based system.…”
179 “…BACKGROUND: The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. …”
180 “…Results: We designed and implemented a new algorithm, Burrows-Wheeler Aligner's Smith-Waterman Alignment (BWA-SW), to align long sequences up to 1 Mb against a large sequence database (e.g. the human genome) with a few gigabytes of memory. The algorithm is as accurate as SSAHA2, more accurate than BLAT, and is several to tens of times faster than both. …”
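BWA-SW (record 180) couples Smith-Waterman local alignment with a Burrows-Wheeler index over the reference; the core scoring recurrence on its own fits in a few lines. A minimal score-only sketch with illustrative scoring parameters, not BWA's tuned defaults or its index-driven search:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best Smith-Waterman local-alignment score between a and b
    (score only, no traceback; rolling rows keep memory at O(len(b)))."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # The 0 floor lets the alignment restart anywhere (local, not global).
            cur[j] = max(0, diag, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

print(smith_waterman("TTTACGTTT", "GGGACGGGG"))  # → 6: the shared "ACG" aligns locally
```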