Example of shared ATLAS Tier2 and Tier3 facilities
The ATLAS computing and data models have moved, or are moving, away from the strict hierarchical MONARC model to a mesh model. This evolution of the computing models also requires the network infrastructure to evolve, so that any Tier2 or Tier3 can easily connect to any Tier1 or Tier2. This in turn requires some changes to...
Main authors: | Gonzalez de la Hoz, S; Villaplana, M; Kemp, Y; Wolters, H; Severini, H; Bhimji, W |
---|---|
Language: | eng |
Published: | 2012 |
Subjects: | Detectors and Experimental Techniques |
Online access: | https://dx.doi.org/10.1088/1742-6596/396/3/032051 http://cds.cern.ch/record/1446558 |
_version_ | 1780924793713852416 |
---|---|
author | Gonzalez de la Hoz, S Villaplana, M Kemp, Y Wolters, H Severini, H Bhimji, W |
author_facet | Gonzalez de la Hoz, S Villaplana, M Kemp, Y Wolters, H Severini, H Bhimji, W |
author_sort | Gonzalez de la Hoz, S |
collection | CERN |
description | The ATLAS computing and data models have moved, or are moving, away from the strict hierarchical MONARC model to a mesh model. This evolution of the computing models also requires the network infrastructure to evolve, so that any Tier2 or Tier3 can easily connect to any Tier1 or Tier2. This in turn requires some changes to the data model: a) any site can replicate data from any other site; b) dynamic data caching: analysis sites receive datasets from any other site “on demand”, based on usage patterns, possibly complemented by centrally managed replication of whole datasets, and unused data is removed; c) remote data access: local jobs can access data stored at remote sites, using local caching at the file or sub-file level. In this contribution, the model of shared ATLAS Tier2 and Tier3 facilities in the EGI/gLite flavour is explained. Tier3s in the US and Tier3s in Europe are rather different, because in Europe the facilities are typically Tier2s with a Tier3 component (a Tier3 co-located with a Tier2). Data taking in ATLAS has been going on for more than one year. We present the Tier2 and Tier3 facility setup, how the data are obtained, how grid and local data access are enabled at the same time, how Tier2 and Tier3 activities affect the cluster differently, and the processing of hundreds of millions of events. Finally, an example of a real physics analysis running at these sites is shown. This is a good occasion to check whether all the Grid tools needed by the ATLAS Distributed Computing community have been developed and, where they have not, to fix them, in order to be ready for the foreseen increase in ATLAS activity in the coming years. |
id | cern-1446558 |
institution | European Organization for Nuclear Research |
language | eng |
publishDate | 2012 |
record_format | invenio |
spelling | cern-1446558 2019-09-30T06:29:59Z doi:10.1088/1742-6596/396/3/032051 http://cds.cern.ch/record/1446558 eng Gonzalez de la Hoz, S; Villaplana, M; Kemp, Y; Wolters, H; Severini, H; Bhimji, W. Example of shared ATLAS Tier2 and Tier3 facilities. Detectors and Experimental Techniques. The ATLAS computing and data models have moved, or are moving, away from the strict hierarchical MONARC model to a mesh model. This evolution of the computing models also requires the network infrastructure to evolve, so that any Tier2 or Tier3 can easily connect to any Tier1 or Tier2. This in turn requires some changes to the data model: a) any site can replicate data from any other site; b) dynamic data caching: analysis sites receive datasets from any other site “on demand”, based on usage patterns, possibly complemented by centrally managed replication of whole datasets, and unused data is removed; c) remote data access: local jobs can access data stored at remote sites, using local caching at the file or sub-file level. In this contribution, the model of shared ATLAS Tier2 and Tier3 facilities in the EGI/gLite flavour is explained. Tier3s in the US and Tier3s in Europe are rather different, because in Europe the facilities are typically Tier2s with a Tier3 component (a Tier3 co-located with a Tier2). Data taking in ATLAS has been going on for more than one year. We present the Tier2 and Tier3 facility setup, how the data are obtained, how grid and local data access are enabled at the same time, how Tier2 and Tier3 activities affect the cluster differently, and the processing of hundreds of millions of events. Finally, an example of a real physics analysis running at these sites is shown. This is a good occasion to check whether all the Grid tools needed by the ATLAS Distributed Computing community have been developed and, where they have not, to fix them, in order to be ready for the foreseen increase in ATLAS activity in the coming years. ATL-SOFT-PROC-2012-006 oai:cds.cern.ch:1446558 2012-05-08 |
spellingShingle | Detectors and Experimental Techniques Gonzalez de la Hoz, S Villaplana, M Kemp, Y Wolters, H Severini, H Bhimji, W Example of shared ATLAS Tier2 and Tier3 facilities |
title | Example of shared ATLAS Tier2 and Tier3 facilities |
title_full | Example of shared ATLAS Tier2 and Tier3 facilities |
title_fullStr | Example of shared ATLAS Tier2 and Tier3 facilities |
title_full_unstemmed | Example of shared ATLAS Tier2 and Tier3 facilities |
title_short | Example of shared ATLAS Tier2 and Tier3 facilities |
title_sort | example of shared atlas tier2 and tier3 facilities |
topic | Detectors and Experimental Techniques |
url | https://dx.doi.org/10.1088/1742-6596/396/3/032051 http://cds.cern.ch/record/1446558 |
work_keys_str_mv | AT gonzalezdelahozs exampleofsharedatlastier2andtier3facilities AT villaplanam exampleofsharedatlastier2andtier3facilities AT kempy exampleofsharedatlastier2andtier3facilities AT woltersh exampleofsharedatlastier2andtier3facilities AT severinih exampleofsharedatlastier2andtier3facilities AT bhimjiw exampleofsharedatlastier2andtier3facilities |
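The "dynamic data caching" point in the abstract above (an analysis site pulls datasets from other sites on demand, tracks usage, and removes unused data) can be illustrated with a minimal sketch. This is a toy illustration only, not the ATLAS distributed data management system or its APIs: the class name `AnalysisSiteCache`, the `_replicate_from_remote` placeholder, the quota value, the dataset names, and the least-recently-used eviction rule are all assumptions made for the example.

```python
# Toy sketch of on-demand dataset caching at an analysis site.
# NOT the real ATLAS DDM/PanDA machinery; names and policy are illustrative.
import time
from collections import OrderedDict


class AnalysisSiteCache:
    """Keeps locally replicated datasets, fetching them on demand and
    evicting the least recently used ones when the site quota is exceeded."""

    def __init__(self, quota_gb: float):
        self.quota_gb = quota_gb
        self.used_gb = 0.0
        # dataset name -> (size in GB, last access time), ordered by recency
        self._datasets: "OrderedDict[str, tuple[float, float]]" = OrderedDict()

    def access(self, dataset: str, size_gb: float) -> None:
        """Called by an analysis job; triggers replication on a cache miss."""
        if dataset in self._datasets:
            self._datasets.move_to_end(dataset)           # refresh recency
            stored_gb, _ = self._datasets[dataset]
            self._datasets[dataset] = (stored_gb, time.time())
            return
        self._evict_until_fits(size_gb)
        self._replicate_from_remote(dataset)              # "on demand" transfer
        self._datasets[dataset] = (size_gb, time.time())
        self.used_gb += size_gb

    def _evict_until_fits(self, incoming_gb: float) -> None:
        """Remove unused (least recently accessed) datasets to free space."""
        while self._datasets and self.used_gb + incoming_gb > self.quota_gb:
            name, (size_gb, _) = self._datasets.popitem(last=False)
            self.used_gb -= size_gb
            print(f"evicting unused dataset {name} ({size_gb} GB)")

    def _replicate_from_remote(self, dataset: str) -> None:
        # Placeholder for a real transfer from any other Tier1/Tier2 site.
        print(f"replicating {dataset} from a remote site")


if __name__ == "__main__":
    cache = AnalysisSiteCache(quota_gb=100.0)
    cache.access("data12_8TeV.periodA.AOD", 60.0)    # cache miss: replicated
    cache.access("mc12_8TeV.ttbar.NTUP", 50.0)       # miss: evicts periodA first
```

In the model described in the abstract, placement decisions would be driven by usage patterns and possibly by centrally managed replication of whole datasets; the simple LRU policy above is only a stand-in for that logic.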