Subjects within your search.
educación superior (30)
higher education (27)
Educación superior (15)
educación básica (11)
teacher training (10)
México (9)
escritura (9)
aprendizaje (8)
educación (8)
estudiantes (8)
teachers (8)
youth (8)
basic education (7)
competencias (7)
educación primaria (7)
interculturalidad (7)
lectura (7)
política educativa (7)
Argentina (6)
educational research (6)
environmental education (6)
estudiantes indígenas (6)
intercultural education (6)
investigación educativa (6)
learning (6)
profesores (6)
students (6)
Educación ambiental (5)
Educación intercultural (5)
Mexico (5)
-
1281 by Bandieramonte, Marilena; Chapman, John Derek; Gray, Heather; Muskinja, Miha; Chiu, Yu Him Justin. “…Estimations of the CPU resources that will be needed to produce simulated data for the future runs of the ATLAS experiment at the LHC indicate a compelling need to speed up the process to reduce the computational time required. …”
Published 2020
Resource link
-
1282 “…While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centers on an opportunistic basis has already been available to the LHC experiments for some time, and it is also possible that part of the pledged computing resources will be offered as CPU time allocations at HPC centers in the future. …”
Resource link
-
1283 by Chapman, John Derek; Cranmer, Kyle; Gadatsch, Stefan; Golling, Tobias; Ghosh, Aishik; Gray, Heather; Lari, Tommaso; Pascuzzi, Vincent; Raine, John Andrew; Rousseau, David; Salamani, Dalila; Schaarschmidt, Jana. “…However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future. …”
Published 2020
Resource link
-
1284 by Sottocornola, Simone. “…Even though the consumption expected for each VME crate of the FTK system is high compared to a common VME setup, the 8 FTK core crates will use ~50 kW, which is just a fraction of the power and the space needed for a CPU farm performing the same task. We report on the integration of 32 PUs and 8 SSBs inside the FTK system, on the infrastructure needed to run and cool them, and on the tests performed to verify the system processing rate and the temperature stability at a safe value. …”
Published 2020
Resource link
-
1285 by Hafych, Vasyl. “…Solving inference problems in the natural sciences, in particular High Energy Physics, often requires flexibility in using multiple programming languages, differentiable programming, and parallel execution on both CPU and GPU architectures. BAT.jl enables this by drawing on the unique capabilities of the Julia Programming Language. …”
Published 2021
Resource link
-
1286 by BREGEON, Johan. “…The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, during the preparatory phase (2010-2017) and the current construction phase, the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure. …”
Published 2021
Resource link
-
1287 by Nobe, Takuya. “…We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities, and outline the system used to monitor deviations from the individual trigger target rates and to react quickly to changing LHC conditions and data-taking scenarios. …”
Published 2021
Resource link
-
1288 by Okumura, Yasuyuki. “…We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities, and outline the system used to monitor deviations from the individual trigger target rates and to react quickly to changing LHC conditions and data-taking scenarios. …”
Published 2021
Resource link
-
1289 by Okumura, Yasuyuki. “…We present the tools that allow us to predict and optimise the trigger rates and CPU consumption for the anticipated LHC luminosity. …”
Published 2021
Resource link
-
1290 by Arrabito, Luisa; Bregeon, Johan; Maeght, Patrick; Sanguillon, Michèle; Tsaregorodtsev, Andrei. “…The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, during the preparatory phase (2010-2017) and the current construction phase, the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure. …”
Published 2021
Resource link
-
1291 by Tarafdar, Naif; Di Guglielmo, Giuseppe; Harris, Philip C; Krupa, Jeffrey D; Loncar, Vladimir; Rankin, Dylan S; Tran, Nhan; Wu, Zhenbin; Shen, Qianfeng Clark; Chow, Paul. “…AIgean provides a full end-to-end multi-FPGA/CPU implementation of a neural network. The user supplies a high-level neural network description, and our tool flow is responsible for synthesizing the individual layers, partitioning layers across different nodes, and providing the bridging and routing required for these layers to communicate. …”
Published 2022
Resource link
-
1292 “…Using data obtained from several experiments with the forecasted data, we present the potential reductions in the carbon footprint of these computing services from the perspective of CPU usage. The results show significant improvements in the computing power usage of the service (60% to 80%) as opposed to simply keeping machines running or using simple heuristics that do not look far into the past. …”
Resource link
-
1293 by Perez-Calero Yzquierdo, Antonio Maria; Kizinevic, Edita; Khan, Farrukh Aftab; Kim, Hyunwoo; Mascheroni, Marco; Acosta Flechas, Maria; Tsipinakis, Nikos; Haleem, Saqib. “…It currently aggregates nearly 400k CPU cores distributed worldwide from Grid, HPC and cloud providers. …”
Published 2023
Resource link
-
1294 “…Improving memory layout and data access is vital for using modern, massively parallel GPU hardware efficiently, contributing to the challenge of migrating traditional CPU-based data structures to GPUs in AdePT. The low-level abstraction of memory access (LLAMA) is a C++ library that provides a zero-runtime-overhead data structure abstraction layer, focusing on multidimensional arrays of nested, structured data. …”
Resource link
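The record above turns on the difference between CPU-friendly and GPU-friendly memory layouts. As a minimal, hypothetical C++ sketch of that idea (array-of-structures versus structure-of-arrays; the names HitAoS, HitsSoA and totalEnergy are illustrative only and are not part of the LLAMA API):

// Illustrative sketch only (assumed names, not the LLAMA API): the same hit
// record stored as an array of structures (AoS) versus a structure of arrays
// (SoA). SoA keeps each field contiguous in memory, which massively parallel
// GPU hardware needs for coalesced access when many threads read the same
// field of neighbouring elements.
#include <cstddef>
#include <iostream>
#include <vector>

// AoS: natural on the CPU, but interleaves fields in memory.
struct HitAoS {
    float x, y, z;
    float energy;
};

// SoA: one contiguous array per field, friendlier to GPUs.
struct HitsSoA {
    std::vector<float> x, y, z, energy;
    explicit HitsSoA(std::size_t n) : x(n), y(n), z(n), energy(n) {}
};

// The same reduction written against both layouts; a zero-overhead abstraction
// layer such as the one described above lets the algorithm be written once
// while the layout is chosen at compile time.
float totalEnergy(const std::vector<HitAoS>& hits) {
    float sum = 0.f;
    for (const auto& h : hits) sum += h.energy;
    return sum;
}

float totalEnergy(const HitsSoA& hits) {
    float sum = 0.f;
    for (float e : hits.energy) sum += e;
    return sum;
}

int main() {
    std::vector<HitAoS> aos{{0.f, 0.f, 0.f, 1.5f}, {1.f, 2.f, 3.f, 2.5f}};
    HitsSoA soa(2);
    soa.energy = {1.5f, 2.5f};
    std::cout << totalEnergy(aos) << " " << totalEnergy(soa) << "\n";  // prints: 4 4
    return 0;
}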
-
1295 “…They are well-suited for track reconstruction tasks by learning on an expressive structured graph representation of hit data, and considerable speedup over CPU-based execution is possible on FPGAs. The focus of this publication is a study of track reconstruction for the Phase-II EF system using GNNs on FPGAs. …”
Resource link
-
1296 “…They are well-suited for track reconstruction tasks by learning on an expressive structured graph representation of hit data, and considerable speedup over CPU-based execution is possible on FPGAs. The focus of this talk is a study of track reconstruction for the Phase-II EF system using GNNs on FPGAs. …”
Resource link
-
1297 “…The source model reduces accelerator simulation CPU time by a factor of 7500 relative to full Monte Carlo approaches. …”
Resource link
-
1298 by Müller, H. “…SCI [Ref 1] allows for a memory bus-like interconnection between the data sources and the CPU farm, which implies that sources can directly write data to event buffers in the farm. …”
Published 1998
Resource link
-
1299 “…Design optimizations have been made during the development cycle of the firmware to maximize the data throughput and reduce the PCI bus overhead as well as the CPU load. In a PC with multiple PCI segments, an aggregate data throughput of over 1.5 Gbyte/s has been measured and transfer rates of more than 100 kHz have been achieved. …”
Resource link
-
1300 by Engel, H; Alt, T; Breitner, T; Ramirez, A Gomez; Kollegger, T; Krzewicki, M; Lehrbach, J; Rohr, D; Kebschull, U. “…It uses a combination of CPU, GPU and FPGA processing. For Run 2, the HLT has replaced all of its previous interface boards with the Common Read-Out Receiver Card (C-RORC) to enable read-out of detectors at high link rates and to extend the pre-processing capabilities of the cluster. …”
Published 2016
Resource link