Subjects within your search:

educación superior (30)
higher education (27)
Educación superior (15)
educación básica (11)
teacher training (10)
México (9)
escritura (9)
aprendizaje (8)
educación (8)
estudiantes (8)
teachers (8)
youth (8)
basic education (7)
competencias (7)
educación primaria (7)
interculturalidad (7)
lectura (7)
política educativa (7)
Argentina (6)
educational research (6)
environmental education (6)
estudiantes indígenas (6)
intercultural education (6)
investigación educativa (6)
learning (6)
profesores (6)
students (6)
Educación ambiental (5)
Educación intercultural (5)
Mexico (5)
-
1141 “…We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used to prepare the ATLAS trigger for data taking during increases of more than six orders of magnitude in the LHC luminosity and has been influential in guiding ATLAS Trigger computing upgrades.…”
Resource link
-
1142 by Arrabito, L, Bernardoff, V, Bouvet, D, Cattaneo, M, Charpentier, P, Clarke, P, Closier, J, Franchini, P, Graciani, R, Lanciotti, E, Mendez, V, Perazzini, S, Nandkumar, R, Remenska, D, Roiser, S, Romanovskiy, V, Santinelli, R, Stagni, F, Tsaregorodtsev, A, Ubeda Garcia, M, Vedaee, A, Zhelezov, A “…These changes led to shortages in the offline distributed data processing resources, an increased need of CPU capacity by a factor of 2 for reconstruction, higher storage needs at T1 sites by 70% and, subsequently, problems with data throughput for file access from the storage elements. …”
Published 2012
Resource link
-
1143 “…We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used to prepare the ATLAS trigger for data taking during increases of more than six orders of magnitude in the LHC luminosity and has been influential in guiding ATLAS Trigger computing upgrades.…”
Resource link
-
1144 by Kama, S “…Complementing this, GOODA, an in-house tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot-spots in the code caused by different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOODA has been used to improve the performance of the new magnetic-field code and to identify potential vectorization targets at several points, such as the Runge-Kutta propagation code.…”
Published 2013
Resource link
-
1145 by Cardoso, LG, Gaspar, C, Callot, O, Closier, J, Neufeld, N, Frank, M, Jost, B, Charpentier, P, Liu, G “…In these periods it is possible to profit from the unused processing capacity to reprocess earlier datasets with the newest applications (code and calibration constants), thus reducing the CPU capacity needed on the Grid. The offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control) to process physics data on the Grid. …”
Published 2012
Resource link
-
1146 by Debenedetti, C “…We present the main concepts of the ISF, which allows a fine-tuned detector simulation targeted at specific physics cases, with a decrease in CPU time per event by orders of magnitude. Additionally, we will discuss the implications of a customized simulation in terms of validity and accuracy and will present new concepts in digitization and reconstruction to achieve a fast Monte Carlo chain with a per-event execution time of a few seconds.…”
Published 2013
Resource link
-
1147 by Dobson, M, Unel, G, Caramarcu, C, Dumitru, I, Valsan, L, Darlea, G L, Bujor, F, Bogdanchikov, A G, Korol, A A, Zaytsev, A S, Ballestrero, S “…The estimated data flow rate exported by the ATLAS TDAQ system for future long-term analysis is about 2.5 PB/year. The number of CPU cores installed in the system will exceed 10000 during 2010.…”
Published 2013
Resource link
-
1148 by Calafiura, Paolo, De, Kaushik, Guan, Wen, Maeno, Tadashi, Nilsson, Paul, Oleynik, Danila, Panitkin, Sergey, Tsulaia, Vakhtang, van Gemmeren, Peter, Wenaus, Torre “…Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or prestaging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. …”
Published 2015
Resource link
-
1149 by Salzburger, Andreas “…The ATLAS experiment has carried out a two-year software campaign aimed at reducing the reconstruction time by a factor of three to meet the resource limitations for Run-2; the majority of the changes made to achieve this were improvements to the track reconstruction software. The CPU processing time of ATLAS track reconstruction was reduced by more than a factor of three during this campaign without any loss of output information from the track reconstruction. …”
Published 2015
Resource link
-
1150 by Kama, Sami “…Here we report on the major considerations of the group, which was charged with considering the best strategies to exploit current and anticipated CPU technologies. The group has re-examined the basic architecture of event processing and considered how the building blocks of a framework (algorithms, services, tools and incidents) should evolve. …”
Published 2015
Resource link
-
1151 “…`Docker` containers provide an interesting avenue for packaging applications and development environments, relying on the Linux kernel's capabilities for process isolation, adding "git"-like capabilities to the filesystem layer, and providing (close to) native CPU, memory and I/O performance. This paper will introduce in more detail the modus operandi of `Docker` containers and then focus on the `hepsw/docks` containers, which provide containerized software stacks for, among others, `LHCb`. …”
Resource link
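The record above describes Docker's layered, "git"-like filesystem and kernel-level process isolation. A minimal, hypothetical Dockerfile sketches the idea — the base image, package names and paths are illustrative assumptions, not the actual contents of the `hepsw/docks` images:

```dockerfile
# Hypothetical layered build: each instruction below produces a cached,
# content-addressed filesystem layer that derived images can share.
FROM debian:stable-slim                  # base OS layer
RUN apt-get update && \
    apt-get install -y --no-install-recommends gcc make   # toolchain layer
COPY ./stack /opt/stack                  # (assumed) experiment software layer
ENV PATH="/opt/stack/bin:${PATH}"
CMD ["/bin/bash"]                        # processes run isolated by the kernel,
                                         # at close to native CPU/memory/I/O speed
```

Because layers are cached independently, rebuilding only the software layer leaves the OS and toolchain layers untouched, which is what makes containerized software stacks cheap to distribute.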
-
1152 by Hubacek, Zdenek “…Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. …”
Published 2016
Resource link
-
1153 by Schaarschmidt, Jana “…Many physics and performance studies with the ATLAS detector at the Large Hadron Collider require very large samples of simulated events, and producing these using the full GEANT4 detector simulation is highly CPU intensive. Often, a very detailed detector simulation is not needed, and in these cases fast simulation tools can be used to reduce the calorimeter simulation time by a few orders of magnitude. …”
Published 2016
Resource link
-
1154 “…The ATLAS Trigger system has two levels: the hardware-based Level 1 and the High Level Trigger, implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will grow with future LHC upgrades. GPUs are being evaluated as a potential solution for trigger algorithm acceleration. …”
Resource link
-
1155 by Keyes, Robert “…Performance metrics, ranging from low-level memory and CPU requirements to distributions and efficiencies of high-level physics quantities, are visualized and validated by a range of experts. …”
Published 2016
Resource link
-
1156 by Estrada Pastor, Oscar “…Offline track alignment of the ATLAS tracking system has to deal with about 700,000 degrees of freedom (DoF) defining its geometrical parameters, representing a considerable numerical challenge in terms of both CPU time and precision. An outline of the track-based alignment approach and its implementation within the ATLAS software will be presented. …”
Published 2017
Resource link
-
1157 by Estrada Pastor, Oscar “…The offline track alignment of the ATLAS tracking system has to deal with about 700,000 degrees of freedom (DoF) defining its geometrical parameters, representing a considerable numerical challenge in terms of both CPU time and precision. An outline of the track-based alignment approach and its implementation within the ATLAS software is presented. …”
Published 2017
Resource link
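The two records above cast track-based alignment as a very large minimization problem. As a sketch (the notation is assumed, not taken from the records), the standard track-based alignment objective is a χ² over track-hit residuals:

```latex
% Residuals r depend on the alignment parameters a (~700,000 DoF)
% and on the per-track parameters \tau; V is the residual covariance.
\chi^2(a, \tau) \;=\; \sum_{\text{tracks}} r^{T}(a, \tau)\, V^{-1}\, r(a, \tau)
% Requiring \partial \chi^2 / \partial a = 0 after linearising r in a
% yields a large linear system in the alignment degrees of freedom,
% which is the numerical challenge (CPU time and precision) cited above.
```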
-
1158 by Meoni, Marco, Kuznetsov, Valentin, Menichetti, Luca, Rumševičius, Justinas, Boccali, Tommaso, Bonacorsi, Daniele “…CMS can use this information to discover useful patterns and enhance the overall efficiency of the distributed data, improving CPU and site utilization as well as task completion time. …”
Published 2017
Resource link
-
1159 by Barton, Adam Edward “…With the evolution of the CPU market to many-core systems, both the ATLAS offline reconstruction and the High-Level Trigger (HLT) software will have to transition from a multi-process to a multi-threaded processing paradigm in order not to exhaust the available physical memory of a typical compute node. …”
Published 2018
Resource link
-
1160 by Vlimant, Jean-Roch “…The challenge will run in two phases: the first on accuracy (with no stringent limit on CPU time), starting in April 2018, and the second (starting in summer 2018) on throughput, for a similar accuracy.…”
Published 2018
Resource link