Showing 1,141 - 1,160 of 2,741 results for search '"CPU"', query time: 0.13s
  1. 1141
    by Schaefer, D, Lipeles, E, Ospanov, R
    Published 2012
    “…We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used to prepare the ATLAS trigger for data taking during increases of more than six orders of magnitude in the LHC luminosity and has been influential in guiding ATLAS Trigger computing upgrades.…”
    Resource link
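    The per-algorithm cost accounting this abstract describes (data request rates and CPU consumption per trigger algorithm) can be illustrated with a minimal sketch. This is hypothetical code, not the ATLAS framework; all class and algorithm names are invented:

    ```python
    from collections import defaultdict

    class TriggerCostMonitor:
        """Accumulate per-algorithm CPU time and data-request counts,
        then report mean cost per event (a toy sketch, not ATLAS code)."""
        def __init__(self):
            self.cpu_time = defaultdict(float)   # seconds, per algorithm
            self.requests = defaultdict(int)     # data requests, per algorithm
            self.events = 0

        def record(self, algorithm, cpu_seconds, n_requests):
            self.cpu_time[algorithm] += cpu_seconds
            self.requests[algorithm] += n_requests

        def end_event(self):
            self.events += 1

        def report(self):
            # mean cost per event: the quantity used to extrapolate
            # resource needs as luminosity rises
            return {alg: (self.cpu_time[alg] / self.events,
                          self.requests[alg] / self.events)
                    for alg in self.cpu_time}

    mon = TriggerCostMonitor()
    for _ in range(100):                 # simulate 100 events
        mon.record("muon_tracking", 0.02, 3)
        mon.record("calo_clustering", 0.01, 1)
        mon.end_event()
    ```

    Averaging per event rather than summing is what lets such numbers be rescaled to a projected luminosity.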
  2. 1142
    “…These changes led to shortages in the offline distributed data processing resources, an increased need of CPU capacity by a factor of 2 for reconstruction, higher storage needs at T1 sites by 70%, and subsequent problems with data throughput for file access from the storage elements. …”
    Resource link
  3. 1143
    by Lipeles, E, Ospanov, R, Schaefer, D
    Published 2012
    “…We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used to prepare the ATLAS trigger for data taking during increases of more than six orders of magnitude in the LHC luminosity and has been influential in guiding ATLAS Trigger computing upgrades.…”
    Resource link
  4. 1144
    by Kama, S
    Published 2013
    “…Complementing this, GOODA, an in-house tool built in collaboration with Google and based on hardware performance-monitoring-unit events, is used to locate hot-spots in the code caused by different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOODA has been used to improve the performance of the new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.…”
    Resource link
  5. 1145
    “…In these periods it is possible to profit from the unused processing capacity to reprocess earlier datasets with the newest applications (code and calibration constants), thus reducing the CPU capacity needed on the Grid. The offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control) to process physics data on the Grid. …”
    Resource link
  6. 1146
    by Debenedetti, C
    Published 2013
    “…We present the main concepts of the ISF, which allows a fine-tuned detector simulation targeted at specific physics cases with a decrease in CPU time per event by orders of magnitude. Additionally, we will discuss the implications of a customized simulation in terms of validity and accuracy and will present new concepts in digitization and reconstruction to achieve a fast Monte Carlo chain with a per event execution time of a few seconds.…”
    Resource link
  7. 1147
    “…The estimated data flow rate exported by the ATLAS TDAQ system for future long term analysis is about 2.5 PB/year. The number of CPU cores installed in the system will exceed 10000 during 2010.…”
    Resource link
  8. 1148
    “…Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. …”
    Resource link
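    The idea of managing dispatch and bookkeeping at the event level, rather than per file or per job, can be sketched as a small state table. This is a hypothetical illustration of the concept, not the actual ATLAS event-service implementation:

    ```python
    # Toy event-level bookkeeping: each event, not each file, is the unit
    # of dispatch, monitoring, and completion accounting.
    from enum import Enum

    class State(Enum):
        QUEUED = "queued"
        RUNNING = "running"
        DONE = "done"

    class EventTable:
        def __init__(self, event_ids):
            self.state = {eid: State.QUEUED for eid in event_ids}

        def dispatch(self, n):
            """Hand out up to n queued events to a worker."""
            picked = [e for e, s in self.state.items() if s is State.QUEUED][:n]
            for e in picked:
                self.state[e] = State.RUNNING
            return picked

        def finish(self, event_id):
            self.state[event_id] = State.DONE

        def progress(self):
            done = sum(1 for s in self.state.values() if s is State.DONE)
            return done, len(self.state)

    table = EventTable(range(10))
    batch = table.dispatch(4)    # a worker claims four events
    for e in batch:
        table.finish(e)
    ```

    Because completion is tracked per event, a lost worker forfeits only the events it was running, which is what makes opportunistic, preemptible resources usable.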
  9. 1149
    by Salzburger, Andreas
    Published 2015
    “…The ATLAS experiment has performed a two year long software campaign which aimed to reduce the reconstruction time by a factor of three to meet the resource limitations for Run-2: the majority of the changes to achieve this were improvements to the track reconstruction software. The CPU processing time of ATLAS track reconstruction was reduced by more than a factor of three during this campaign without any loss of output information of the track reconstruction. …”
    Resource link
  10. 1150
    by Kama, Sami
    Published 2015
    “…Here we report on the major considerations of the group, which was charged with considering the best strategies to exploit current and anticipated CPU technologies. The group has re-examined the basic architecture of event processing and considered how the building blocks of a framework (algorithms, services, tools and incidents) should evolve. …”
    Resource link
  11. 1151
    by Binet, Sebastien, COUTURIER, Ben
    Published 2015
    “…`Docker` containers provide an interesting avenue for packaging applications and development environments, relying on the Linux kernel capabilities for process isolation, adding "git"-like capabilities to the filesystem layer and providing (close to) native CPU, memory and I/O performance. This paper will introduce in more detail the modus operandi of `Docker` containers and then focus on the `hepsw/docks` containers, which provide containerized software stacks for -among others- `LHCb`. …”
    Resource link
  12. 1152
    by Hubacek, Zdenek
    Published 2016
    “…Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. …”
    Resource link
  13. 1153
    by Schaarschmidt, Jana
    Published 2016
    “…Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. …”
    Resource link
  14. 1154
    “…The ATLAS Trigger system has two levels, hardware-based Level 1 and the High Level Trigger implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. … are being evaluated as a potential solution for trigger algorithm acceleration. …”
    Resource link
  15. 1155
    by Keyes, Robert
    Published 2016
    “…Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. …”
    Resource link
  16. 1156
    by Estrada Pastor, Oscar
    Published 2017
    “…Offline track alignment of the ATLAS tracking system has to deal with about 700,000 degrees of freedom (DoF) defining its geometrical parameters, representing a considerable numerical challenge in terms of both CPU time and precision. An outline of the track based alignment approach and its implementation within the ATLAS software will be presented. …”
    Resource link
  17. 1157
    by Estrada Pastor, Oscar
    Published 2017
    “…The offline track alignment of the ATLAS tracking system has to deal with about 700,000 degrees of freedom (DoF) defining its geometrical parameters, representing a considerable numerical challenge in terms of both CPU time and precision. An outline of the track based alignment approach and its implementation within the ATLAS software is presented. …”
    Resource link
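    Track-based alignment, as in the two abstracts above, amounts to minimizing a chi-square of track residuals over the geometrical parameters. A noise-free toy with a single module and two parameters (a translation and a small rotation) shows the normal-equation form; this is an invented illustration, not the ATLAS implementation, which couples roughly 700,000 such parameters through shared tracks:

    ```python
    # Toy track-based alignment: minimise
    #   chi^2 = sum_i (r_i - delta - phi * z_i)^2
    # over a module's translation delta and small rotation angle phi,
    # where r_i is the track residual measured at position z_i.
    def align(residuals, z):
        n = len(z)
        Sz = sum(z)
        Szz = sum(v * v for v in z)
        Sr = sum(residuals)
        Szr = sum(v * r for v, r in zip(z, residuals))
        # Solve the 2x2 normal equations:
        #   [ n   Sz  ] [delta]   [ Sr  ]
        #   [ Sz  Szz ] [ phi ] = [ Szr ]
        det = n * Szz - Sz * Sz
        delta = (Szz * Sr - Sz * Szr) / det
        phi = (n * Szr - Sz * Sr) / det
        return delta, phi

    # Noise-free check: residuals generated with delta = 0.1, phi = 0.01
    z = [1.0, 2.0, 3.0, 4.0]
    res = [0.1 + 0.01 * v for v in z]
    delta, phi = align(res, z)
    ```

    With 700,000 degrees of freedom the normal-equation matrix is enormous, which is why the abstracts emphasize both CPU time and numerical precision as challenges.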
  18. 1158
    “…CMS can use this information to discover useful patterns and enhance the overall efficiency of distributed data handling, improving CPU and site utilization as well as task completion time. …”
    Resource link
  19. 1159
    by Barton, Adam Edward
    Published 2018
    “…With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and High-Level Trigger (HLT) software will have to transition from a multi-process to a multi-threaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. …”
    Resource link
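    The memory argument in the abstract above is that threads share one address space, so large read-only structures (geometry, conditions data) exist once per node instead of once per core, as they do in a multi-process model. A minimal, hypothetical sketch of the threaded side (not ATLAS code):

    ```python
    # Worker threads all read the same large structure: no per-core copy,
    # unlike forked worker processes that each duplicate it (or rely on
    # copy-on-write that degrades over time).
    from concurrent.futures import ThreadPoolExecutor

    GEOMETRY = list(range(1_000_000))   # stand-in for a large shared structure

    def reconstruct(event_seed):
        # Every thread dereferences the one shared GEOMETRY object.
        return (event_seed * 7) % len(GEOMETRY)

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(reconstruct, range(8)))
    ```

    The cost of the threaded model, and the reason the migration is a major software effort, is that all event-processing code must become thread-safe.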
  20. 1160
    by Vlimant, Jean-Roch
    Published 2018
    “…The challenge will run in two phases: the first on accuracy (no stringent limit on CPU time), starting in April 2018, and the second (starting in summer 2018) on throughput, for a similar accuracy.…”
    Resource link
Search tools: RSS