Showing 1,281 - 1,300 results of 2,741 for search '"CPU"', query time: 0.19s
  1. 1281
    “…Estimations of the CPU resources that will be needed to produce simulated data for the future runs of the ATLAS experiment at the LHC indicate a compelling need to speed up the process to reduce the computational time required. …”
    Resource link
  2. 1282
    “…While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centers on an opportunistic basis has already been available to the LHC experiments for some time, and it is also possible that part of the pledged computing resources will be offered as CPU time allocations at HPC centers in the future. …”
    Resource link
  3. 1283
    “…However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future. …”
    Resource link
  4. 1284
    by Sottocornola, Simone
    Published 2020
    “…Even though the power consumption expected for each VME crate of the FTK system is high compared to a common VME setup, the 8 FTK core crates will use ~50 kW, which is just a fraction of the power and the space needed for a CPU farm performing the same task. We report on the integration of 32 PUs and 8 SSBs inside the FTK system, on the infrastructure needed to run and cool them, and on the tests performed to verify the system processing rate and the temperature stability at a safe value.…”
    Resource link
  5. 1285
    by Hafych, Vasyl
    Published 2021
    “…Solving inference problems in the natural sciences, in particular High Energy Physics, often requires flexibility in using multiple programming languages, differentiable programming, and parallel execution on both CPU and GPU architectures. BAT.jl enables this by drawing on the unique capabilities of the Julia Programming Language. …”
    Resource link
  6. 1286
    by BREGEON, Johan
    Published 2021
    “…The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, during the preparatory phase (2010-2017) and the current construction phase, the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure. …”
    Resource link
  7. 1287
    by Nobe, Takuya
    Published 2021
    “…We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities and outline the system to monitor deviations from the individual trigger target rates, and to quickly react to the changing LHC conditions and data taking scenarios. …”
    Resource link
  8. 1288
    by Okumura, Yasuyuki
    Published 2021
    “…We present the tools that allow us to predict and optimize the trigger rates and CPU consumption for the anticipated LHC luminosities and outline the system to monitor deviations from the individual trigger target rates, and to quickly react to the changing LHC conditions and data taking scenarios. …”
    Resource link
  9. 1289
    by Okumura, Yasuyuki
    Published 2021
    “…We present the tools that allow us to predict and optimise the trigger rates and CPU consumption for the anticipated LHC luminosity. …”
    Resource link
  10. 1290
    “…The associated processing needs are also very high, of the order of hundreds of millions of CPU HS06 hours per year. In order to optimize the instrument design and study its performance, during the preparatory phase (2010-2017) and the current construction phase, the CTA consortium has run massive Monte Carlo productions on the EGI grid infrastructure. …”
    Resource link
  11. 1291
    “…AIgean provides a full end-to-end multi-FPGA/CPU implementation of a neural network. The user supplies a high-level neural network description, and our tool flow is responsible for synthesizing the individual layers, partitioning layers across different nodes, as well as the bridging and routing required for these layers to communicate. …”
    Resource link
  12. 1292
    “…Using the data obtained from conducting several experiments with the forecasted data, we present the potential reductions in the carbon footprint of these computing services, from the perspective of CPU usage. The results show significant improvements to the computing power usage of the service (60% to 80%) as opposed to just keeping machines running or using simple heuristics that do not look too far into the past.…”
    Resource link
  13. 1293
  14. 1294
    “…Improving memory layout and data access is vital to use modern, massively parallel GPU hardware efficiently, contributing to the challenge of migrating traditional CPU-based data structures to GPUs in AdePT. The low-level abstraction of memory access (LLAMA) is a C++ library that provides a zero-runtime-overhead data structure abstraction layer, focusing on multidimensional arrays of nested, structured data. …”
    Resource link
  15. 1295
    “…They are well-suited for track reconstruction tasks by learning on an expressive structured graph representation of hit data, and considerable speedup over CPU-based execution is possible on FPGAs. The focus of this publication is a study of track reconstruction for the Phase-II EF system using GNNs on FPGAs. …”
    Resource link
  16. 1296
    “…They are well-suited for track reconstruction tasks by learning on an expressive structured graph representation of hit data, and considerable speedup over CPU-based execution is possible on FPGAs. The focus of this talk is a study of track reconstruction for the Phase-II EF system using GNNs on FPGAs. …”
    Resource link
  17. 1297
    “…The source model reduces accelerator simulation CPU time by a factor of 7500 relative to full Monte Carlo approaches. …”
    Resource link
  18. 1298
    by Müller, H
    Published 1998
    “…SCI [Ref 1] allows for a memory bus-like interconnection between the data sources and the CPU farm; this implies that sources can directly write data to event-buffers in the farm. …”
    Resource link
  19. 1299
    by Haas, S, Joos, M, Iwanski, W
    Published 2004
    “…Design optimizations have been made during the development cycle of the firmware to maximize the data throughput and reduce the PCI bus overhead as well as the CPU load. In a PC with multiple PCI segments, an aggregate data throughput of over 1.5 Gbyte/s has been measured and transfer rates of more than 100 kHz have been achieved.…”
    Resource link
  20. 1300
    “…It uses a combination of CPU, GPU and FPGA processing. For Run 2, the HLT has replaced all of its previous interface boards with the Common Read-Out Receiver Card (C-RORC) to enable read-out of detectors at high link rates and to extend the pre-processing capabilities of the cluster. …”
    Resource link