III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism
Main author: | VYSKOČIL, Jiří |
---|---|
Language: | eng |
Published: | 2016 |
Subjects: | inverted CSC |
Online access: | http://cds.cern.ch/record/2135999 |
_version_ | 1780949953104838656 |
---|---|
author | VYSKOČIL, Jiří |
author_facet | VYSKOČIL, Jiří |
author_sort | VYSKOČIL, Jiří |
collection | CERN |
description | Large scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty.
These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods in a way which keeps the "front end" code very readable. (A minimal ET sketch follows the lecture description below.)
---
LECTURE 3
In this lecture, we will look into a specific technique to parallelize a large data-centric workload iterating over a multi-dimensional array. We will show how to separate iteration and computation, and how the "front-end" algorithm can then be made independent of the dimensionality, coordinate system, or order of numerical approximation. We will show how this separation further helps to implement thread-level parallelism in the "back-end" and explore some common cases of data dependency. We will finally take a look at an example code combining the ideas of all three lectures. (A sketch of the iteration/threading split follows below.) |
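To make the ET idiom mentioned in the abstract concrete, here is a minimal sketch; it is not taken from the lecture material, and the names `Expr`, `Vec` and `Add` are purely illustrative. The point is that `a + b + c` builds a lightweight expression tree, and the whole sum is then evaluated in a single fused loop inside `operator=`, with no intermediate vectors; the lecture's SIMD extension would further vectorize that loop, which this sketch omits.

```cpp
#include <cstddef>
#include <vector>

// CRTP base class: any expression node derives from Expr<Itself>.
template <typename E>
struct Expr {
    const E& self() const { return static_cast<const E&>(*this); }
};

// A plain vector that can also be assigned from any expression.
struct Vec : Expr<Vec> {
    std::vector<double> data;
    explicit Vec(std::size_t n, double value = 0.0) : data(n, value) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // The single loop in which the whole expression tree is evaluated,
    // element by element, with no temporary vectors.
    template <typename E>
    Vec& operator=(const Expr<E>& e) {
        for (std::size_t i = 0; i < data.size(); ++i)
            data[i] = e.self()[i];
        return *this;
    }
};

// Node representing "l + r"; it stores references, not results.
template <typename L, typename R>
struct Add : Expr<Add<L, R>> {
    const L& l;
    const R& r;
    Add(const L& lhs, const R& rhs) : l(lhs), r(rhs) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

template <typename L, typename R>
Add<L, R> operator+(const Expr<L>& l, const Expr<R>& r) {
    return Add<L, R>(l.self(), r.self());
}

int main() {
    Vec a(1000, 1.0), b(1000, 2.0), c(1000, 3.0), d(1000);
    d = a + b + c;   // builds Add<Add<Vec,Vec>,Vec>, then one fused loop
}
```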
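For Lecture 3's separation of iteration and computation, the following sketch shows one possible shape of such a design; it is again an assumption made for illustration, and `Grid`, `for_each_interior` and the kernel signature are invented here, not the lecture's actual API. The back end owns the loops and splits the rows of a 2-D array across `std::thread` workers, while the front end only supplies the per-cell computation as a lambda, free of any loop or threading detail.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// A simple row-major 2-D array standing in for the multi-dimensional
// container discussed in the lecture.
struct Grid {
    std::size_t nx, ny;
    std::vector<double> v;
    Grid(std::size_t nx_, std::size_t ny_) : nx(nx_), ny(ny_), v(nx_ * ny_, 0.0) {}
    double& operator()(std::size_t i, std::size_t j)       { return v[i * ny + j]; }
    double  operator()(std::size_t i, std::size_t j) const { return v[i * ny + j]; }
};

// Back end: owns the loop structure and the thread-level parallelism.
// Interior rows are split into contiguous blocks, one block per thread;
// reading only from src and writing only to dst keeps the blocks free
// of data races for this stencil-style update.
template <typename Kernel>
void for_each_interior(const Grid& src, Grid& dst, Kernel kernel) {
    const std::size_t nthreads =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t rows  = src.nx - 2;                   // interior rows
    const std::size_t chunk = (rows + nthreads - 1) / nthreads;
    std::vector<std::thread> pool;
    for (std::size_t t = 0; t < nthreads; ++t) {
        const std::size_t begin = 1 + t * chunk;
        const std::size_t end   = std::min(src.nx - 1, begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                for (std::size_t j = 1; j < src.ny - 1; ++j)
                    dst(i, j) = kernel(src, i, j);
        });
    }
    for (auto& th : pool) th.join();
}

int main() {
    Grid a(512, 512), b(512, 512);
    // Front end: only the per-cell numerics -- no loops, no threads,
    // no knowledge of how the iteration space is traversed.
    for_each_interior(a, b, [](const Grid& g, std::size_t i, std::size_t j) {
        return 0.25 * (g(i - 1, j) + g(i + 1, j) + g(i, j - 1) + g(i, j + 1));
    });
}
```

Because each thread writes a disjoint block of rows of the destination grid while only reading the source grid, this particular data-dependency pattern (a Jacobi-style stencil) needs no locking; other dependency patterns covered in the lecture would call for a different partitioning of the work.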
id | cern-2135999 |
institution | European Organization for Nuclear Research (CERN) |
language | eng |
publishDate | 2016 |
record_format | invenio |
spelling | cern-2135999 2022-11-02T22:32:25Z http://cds.cern.ch/record/2135999 eng VYSKOČIL, Jiří III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism inverted CERN School of Computing 2016 inverted CSC Large scale scientific computing raises questions on different levels, ranging from the formulation of the problems to the choice of the best algorithms and their implementation for a specific platform. There are similarities in these different topics that can be exploited by modern-style C++ template metaprogramming techniques to produce readable, maintainable and generic code. Traditional low-level code tends to be fast but platform-dependent, and it obfuscates the meaning of the algorithm. On the other hand, the object-oriented approach is nice to read, but may come with an inherent performance penalty. These lectures aim to present the basics of the Expression Template (ET) idiom, which allows us to keep the object-oriented approach without sacrificing performance. We will in particular show how to enhance ET to include SIMD vectorization. We will then introduce techniques for abstracting iteration, and introduce thread-level parallelism for use in heavy data-centric loads. We will show how to apply these methods in a way which keeps the "front end" code very readable. LECTURE 3: In this lecture, we will look into a specific technique to parallelize a large data-centric workload iterating over a multi-dimensional array. We will show how to separate iteration and computation, and how the "front-end" algorithm can then be made independent of the dimensionality, coordinate system, or order of numerical approximation. We will show how this separation further helps to implement thread-level parallelism in the "back-end" and explore some common cases of data dependency. We will finally take a look at an example code combining the ideas of all three lectures. oai:cds.cern.ch:2135999 2016 |
spellingShingle | inverted CSC VYSKOČIL, Jiří III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title | III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title_full | III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title_fullStr | III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title_full_unstemmed | III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title_short | III - Template Metaprogramming for massively parallel scientific computing - Templates for Iteration; Thread-level Parallelism |
title_sort | iii - template metaprogramming for massively parallel scientific computing - templates for iteration; thread-level parallelism |
topic | inverted CSC |
url | http://cds.cern.ch/record/2135999 |
work_keys_str_mv | AT vyskociljiri iiitemplatemetaprogrammingformassivelyparallelscientificcomputingtemplatesforiterationthreadlevelparallelism AT vyskociljiri invertedcernschoolofcomputing2016 |