EXA-DUNE - Flexible PDE Solvers, Numerical Methods, and Applications
Principal Investigators | Peter Bastian (University of Heidelberg) | Olaf Ippisch (TU Clausthal) | Mario Ohlberger (University of Münster) | Christian Engwer (University of Münster) | Stefan Turek (TU Dortmund) | Dominik Göddeke (University of Stuttgart) | Oleg Iliev (Fraunhofer ITWM and TU Kaiserslautern)
Contact | Peter Bastian
The aim of this interdisciplinary project, bringing together experts from the open source projects DUNE and FEAST, is to develop, analyse and realise new numerical, algorithmic and computational techniques to enable exascale computing for partial differential equations (PDEs) on heterogeneous massively parallel architectures. As the lifetime of PDE software is typically much longer than that of hardware, flexible yet hardware-specific software components are developed based on the DUNE platform, which uses state-of-the-art programming techniques to combine great flexibility with high efficiency, to the benefit of a steadily growing user community. Hardware-oriented numerical techniques from the FEAST project are integrated to optimally exploit the performance of the local (heterogeneous) nodes (multi-core general-purpose CPUs, special-purpose acceleration units such as GPUs, etc.) with respect to the specific structure of the given PDEs. The introduction of a hardware abstraction layer will make it possible to perform the necessary hardware-specific changes of essential components at compile time with at most minimal changes to the application code. Beyond combining the strengths of DUNE and FEAST, modern numerical discretisations and solver approaches such as adaptive multigrid, localised spectral methods (e.g. higher-order Discontinuous Galerkin schemes) and a hybrid parallel grid will increase scalability. The EXA-DUNE toolbox is extended from petascale towards exascale computing by introducing multi-level Monte Carlo methods for uncertainty quantification and multi-scale techniques, which both add an additional layer of coarse-grained parallelism, as they require the solution of many weakly coupled problems. The new methodologies and software concepts are applied to flow and transport processes in porous media (fuel cells, CO2 sequestration, large-scale water transport), which are grand challenge problems of high relevance to society.
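To make the hardware abstraction layer idea concrete, here is a minimal C++ sketch of compile-time backend selection via template specialization. The backend tags and the axpy kernel are hypothetical illustrations, not EXA-DUNE's actual interfaces:

    // Minimal sketch of compile-time hardware selection via C++ templates.
    // Backend tags and the axpy kernel are hypothetical, not EXA-DUNE API.
    #include <cstddef>
    #include <vector>

    struct CpuBackend {};   // multi-core CPU target
    struct GpuBackend {};   // accelerator target (specialization omitted here)

    // Generic declaration; each hardware backend provides a specialization.
    template <typename Backend>
    struct Kernels;

    template <>
    struct Kernels<CpuBackend> {
        // y <- a*x + y on the host; a real backend would add threading/SIMD.
        static void axpy(double a, const std::vector<double>& x, std::vector<double>& y) {
            for (std::size_t i = 0; i < x.size(); ++i) y[i] += a * x[i];
        }
    };

    // Application code is written once against the abstraction layer.
    using Hardware = CpuBackend;

    int main() {
        std::vector<double> x(1000, 1.0), y(1000, 2.0);
        Kernels<Hardware>::axpy(0.5, x, y);  // dispatches to the CPU specialization
        return 0;
    }

Because the backend is fixed at compile time, switching hardware amounts to changing a single type alias, leaving the application code untouched.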
DASH - Hierarchical Arrays for Efficient and Productive Data-Intensive Exascale Computing
Principal Investigators | Karl Fürlinger (LMU München) | José Gracia (HLRS Stuttgart) | Andreas Knüpfer (TU Dresden) | Jie Tao (KIT Karlsruhe) | Lizhe Wang (CEODE, China)
Contact | Karl Fürlinger
Link | http://www.dash-project.org/
Exascale computing systems will be characterized by extreme scale and a multilevel hierarchical organization. Efficient and productive programming of these systems will be a challenge, especially in the context of data-intensive applications. We propose DASH, a data-structure-oriented C++ template library which provides hierarchical PGAS-like abstractions for essential data types (multidimensional arrays, lists, hash tables, etc.) and allows a programmer to control (and explicitly take advantage of) the hierarchical data layout of global data structures. In contrast to other PGAS approaches such as UPC, DASH does not propose a new language or require compiler support to realize global address space semantics. Instead, operator overloading and other advanced C++ features will be used to provide the semantics of data residing in a global and hierarchically partitioned address space based on a runtime system with one-sided messaging primitives provided by MPI or GASNet. As such, DASH will co-exist with parallel programming models already in widespread use (like MPI) and developers can take advantage of DASH by incrementally replacing existing data structures with the implementation provided by DASH. Efficient I/O directly to and from the hierarchical structures and DASH-optimized algorithms such as map-reduce will also be part of the project.
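A minimal sketch of the operator-overloading mechanism described above: a proxy object turns element reads and writes into one-sided runtime calls. The class and function names are hypothetical illustrations, not the DASH API, and the runtime here is a single-process stand-in for MPI/GASNet one-sided communication:

    // Sketch of PGAS-style semantics via operator overloading (not the DASH API).
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Stand-in for a one-sided runtime (MPI RMA or GASNet in practice).
    struct Runtime {
        std::vector<double> local;                           // this process's partition
        double get(std::size_t i) { return local[i]; }       // one-sided read
        void put(std::size_t i, double v) { local[i] = v; }  // one-sided write
    };

    // Proxy returned by operator[]; converts reads/writes into one-sided access.
    class GlobalRef {
        Runtime& rt; std::size_t idx;
    public:
        GlobalRef(Runtime& r, std::size_t i) : rt(r), idx(i) {}
        operator double() { return rt.get(idx); }                         // read -> get
        GlobalRef& operator=(double v) { rt.put(idx, v); return *this; }  // write -> put
    };

    class GlobalArray {
        Runtime rt;
    public:
        explicit GlobalArray(std::size_t n) { rt.local.assign(n, 0.0); }
        GlobalRef operator[](std::size_t i) { return GlobalRef(rt, i); }
    };

    int main() {
        GlobalArray a(100);
        a[42] = 3.14;                // looks like local access, maps to a put
        std::cout << a[42] << "\n";  // maps to a get
    }

This is exactly why no compiler support is needed: the global address space semantics live entirely in library code.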
TERRA-NEO - Integrated Co-Design of an Exa-Scale Earth Mantle Modeling Framework
Principal Investigators | Hans-Peter Bunge (LMU München) | Ulrich Rüde (Friedrich-Alexander-University Erlangen-Nürnberg) | Gerhard Wellein (Friedrich-Alexander-University Erlangen-Nürnberg) | Barbara Wohlmuth (TU München)
Contact | Ulrich Rüde
Link | http://terraneo.fau.de/
Modeling and simulating Earth mantle dynamics requires a resolution in space and time that makes it one of the grand challenge applications in the computational sciences. With the exa-scale systems of the future it will be possible to advance beyond the deterministic forward problem to a stochastic uncertainty analysis for the inverse problem. Future geophysics research depends crucially on a new kind of Earth mantle simulation framework. With this proposal, we plan to create TERRA-NEO as an exa-scale enabled community code for geophysicists worldwide, opening a new era in quantifying geophysical phenomena and their impact on society. TERRA-NEO will be based on a carefully designed multiscale space-time approximation, built on modern finite element technology and communication-avoiding, ultra-scalable multigrid for multi-physics Earth mantle models. The TERRA-NEO software framework will be developed specifically for the upcoming heterogeneous exa-scale computers by using an advanced architecture-aware co-design approach that is driven by a systematic performance engineering methodology. A successful and sustainable co-design beyond the state of the art can only be achieved by integrating leading edge research in the geophysical application, in numerical mathematics, and in high performance computing.
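For orientation, the mathematical core of such mantle convection models is commonly the Stokes system coupled to temperature transport under the Boussinesq approximation; the following standard form is given for illustration and is not quoted from the project:

    -\nabla \cdot \bigl( 2\eta\, \dot{\varepsilon}(\mathbf{u}) \bigr) + \nabla p = \rho(T)\, \mathbf{g}, \qquad
    \nabla \cdot \mathbf{u} = 0, \qquad
    \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T = \kappa\, \Delta T,

with velocity u, pressure p, temperature T, viscosity \eta, strain rate \dot{\varepsilon}(\mathbf{u}), and thermal diffusivity \kappa. The Stokes solve dominates the cost at high resolution, which is why ultra-scalable multigrid is central to the framework.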
EXASTEEL - Bridging Scales for Multiphase Steels
Principal Investigators | Daniel Balzani (TU Dresden) | Axel Klawonn (University of Cologne) | Oliver Rheinbach (TU Bergakademie Freiberg) | Jörg Schröder (University of Duisburg-Essen) | Gerhard Wellein (Friedrich-Alexander-University Erlangen-Nürnberg)
Contact | Axel Klawonn
Link | http://www.numerik.uni-koeln.de/14079.html
The computational simulation of advanced high strength steels, incorporating phase transformation phenomena at the microscale, on the future supercomputers developed for exascale computing is considered in this project. To accomplish this goal, new ultra-scalable, robust algorithms and solvers have to be developed and incorporated into a new application software for the simulation of this three-dimensional multiscale material science problem. Such algorithms must specifically be designed to allow the efficient use of the hardware. Here, a direct multiscale approach (FE²) will be combined with new, highly efficient, parallel solver algorithms. For the latter, a hybrid algorithmic approach will be taken, combining nonoverlapping parallel domain decomposition (FETI) methods with efficient, parallel multigrid preconditioners. A comprehensive performance engineering approach will be implemented to ensure a systematic optimization and parallelization process across all software layers. The envisioned scale-bridging will require a computational power that will only become obtainable when exascale computing arrives.
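To illustrate the FE² idea: at every macroscopic integration point a microscopic boundary value problem is solved on a representative volume element (RVE), and the macroscopic stress is recovered as a volume average. In standard homogenization notation (shown for illustration):

    \bar{\boldsymbol{\sigma}} = \frac{1}{|V|} \int_{V} \boldsymbol{\sigma}(\mathbf{x})\, \mathrm{d}V,

where V is the RVE and \boldsymbol{\sigma} the microscopic stress field. Since every macroscopic Gauss point spawns its own independent microscale finite element problem, the method carries a natural outer layer of parallelism on top of the parallel solvers used within each problem.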
GROMEX - Unified Long-range Electrostatics and Dynamic Protonation for Realistic Biomolecular Simulations on the Exascale
Principal Investigators | Helmut Grubmüller (Max Planck Institute for Biophysical Chemistry) | Holger Dachsel (Jülich Supercomputing Centre) | Berk Hess (Stockholm University)
Contact | Carsten Kutzner
Link | http://www.mpibpc.mpg.de/grubmueller/sppexa
In this project, we target a flexible, portable and ultra-scalable solver for potentials and forces, which is a prerequisite for exascale applications in particle-based simulations with long-range interactions in general. As a particularly challenging example that will prove and demonstrate the capability of our concepts, we use the popular molecular dynamics (MD) simulation software GROMACS. MD simulation has become a crucial tool for the scientific community, especially as it probes time and length scales difficult or impossible to probe experimentally. Moreover, it is a prototypic example of a general class of complex multiparticle systems with long-range interactions. MD simulations elucidate the detailed, time-resolved behaviour of biology's nanomachines. From a computational point of view, they are extremely challenging for two main reasons. First, to properly describe the functional motions of biomolecules, the long-range effects of the electrostatic interactions must be explicitly accounted for. Therefore, techniques like the particle-mesh Ewald (PME) method were adopted, which, however, severely limit scaling to large numbers of cores due to their global communication requirements. The second challenge is to realistically describe the time-dependent location of (partial) charges, as e.g. the protonation states of the molecules depend on their time-dependent electrostatic environment. Here we address both tightly interlinked challenges by the development, implementation, and optimization of a unified algorithm for long-range interactions that will account for realistic, dynamic protonation states and at the same time overcome current scaling limitations.
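For context, the Ewald idea behind PME splits the slowly converging Coulomb sum into a rapidly decaying short-range part, evaluated directly in real space, and a smooth long-range part, evaluated in reciprocal space; in standard notation (illustrative, not quoted from the project):

    \frac{1}{r} = \frac{\operatorname{erfc}(\beta r)}{r} + \frac{\operatorname{erf}(\beta r)}{r},

where \beta controls the split. In PME the reciprocal-space part is computed with 3D FFTs, and it is precisely these global FFTs that impose the all-to-all communication limiting scalability; methods that evaluate the long-range part hierarchically instead are one route around this bottleneck.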
ExaStencils - Advanced Stencil-Code Engineering
Principal Investigators | Christian Lengauer (University of Passau) | Armin Größlinger (University of Passau) | Ulrich Rüde (Friedrich-Alexander-University Erlangen-Nürnberg) | Harald Köstler (Friedrich-Alexander-University Erlangen-Nürnberg) | Sven Apel (University of Passau) | Jürgen Teich (Friedrich-Alexander-University Erlangen-Nürnberg) | Frank Hannig (Friedrich-Alexander-University Erlangen-Nürnberg) | Matthias Bolten (University of Wuppertal)
Contact | Christian Lengauer
Link | http://www.exastencils.org/
The goal of ExaStencils is to develop a software technology that enables the largely automatic derivation of highly optimized, exascale-ready stencil codes. The main distinguishing quality of the project is that domain knowledge of the specific application and of the execution platform used is leveraged at several levels of abstraction in tuning the implementation. The major steps are (1) tuning the mathematical formulation of the problem, (2) converting it to a domain-specific programming language, (3) employing software product-line technology for an effective management of the commonalities and variabilities of stencil codes and for domain-specific optimization and generation, (4) applying polyhedral techniques of loop optimization, and (5) adapting to the specific features of the execution platform used. The first two case studies will be in particle simulation and quantum chemistry.
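For illustration, the kind of kernel such a toolchain generates and optimizes: a Jacobi sweep with a 5-point Laplacian stencil on a 2D grid. This is a hand-written sketch of the pattern, not ExaStencils output:

    // Hand-written 5-point Jacobi sweep; illustrates the "stencil code" pattern
    // that ExaStencils derives and optimizes automatically (not project output).
    #include <vector>

    void jacobi_sweep(const std::vector<double>& u, std::vector<double>& u_new,
                      const std::vector<double>& f, int n, double h) {
        // Interior points only; boundary values stay fixed.
        for (int i = 1; i < n - 1; ++i)
            for (int j = 1; j < n - 1; ++j)
                // Each update reads a fixed neighborhood: the 5-point stencil.
                u_new[i * n + j] = 0.25 * (u[(i - 1) * n + j] + u[(i + 1) * n + j]
                                         + u[i * n + (j - 1)] + u[i * n + (j + 1)]
                                         + h * h * f[i * n + j]);
    }

The regular loop nest and fixed data-access pattern are exactly what the polyhedral loop optimization of step (4) exploits.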
Article about ExaStencils on InSide |
ExaFSA - Exascale Simulation of Fluid-Structure-Acoustics Interactions
Principal Investigators | Miriam Mehl (University of Stuttgart) | Hester Bijl (TU Delft) | Thomas Ertl (University of Stuttgart) | Sabine Roller (University of Siegen) | Dörte Sternel (TU Darmstadt)
Contact | Miriam Mehl
Link | http://ipvs.informatik.uni-stuttgart.de/SGS/EXAFSA/
In scientific computing, an increasing need for ever more detailed insights and optimization leads to improved models, often including several physical effects described by different types of equations. The complexity of the corresponding solver algorithms and implementations typically leads to coupled simulations reusing existing software codes for different physical phenomena (multiphysics simulations) or for different parts of the simulation pipeline such as grid handling, matrix assembly, system solvers, and visualization. Accuracy requirements can only be met with a high spatial and temporal resolution, making exascale computing a necessary technology to address runtime constraints for realistic scenarios. However, running a multicomponent simulation efficiently on massively parallel architectures is far more challenging than the parallelization of a single simulation code. Open questions range from suitable load balancing strategies and bottleneck-avoiding communication to the synchronization of several components, interactive visualization for online analysis of results, and parallel numerical coupling schemes. We intend to tackle these challenges for fluid-structure-acoustics interactions, which are extremely costly due to the large range of scales. Specifically, this requires innovative surface and volume coupling numerics between the different solvers as well as sophisticated dynamical load balancing and in-situ coupling and visualization methods.
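A minimal sketch of the partitioned coupling pattern that underlies such multiphysics simulations; all types and functions here are hypothetical stand-ins (in practice each solver is a separate parallel code and interface data is exchanged via MPI or a coupling library):

    // Sketch of one implicitly coupled fluid-structure time step.
    // Fluid and Structure are stubs standing in for full solver codes.
    #include <cstddef>
    #include <vector>

    struct Fluid {
        // Stub: a real call runs a CFD step and returns interface forces.
        std::vector<double> solve(const std::vector<double>& interface_displ) {
            return interface_displ;
        }
    };
    struct Structure {
        // Stub: a real call runs a structural step and returns displacements.
        std::vector<double> solve(const std::vector<double>& interface_forces) {
            return interface_forces;
        }
    };

    // Sub-iterate until the interface state of both solvers is consistent.
    void coupled_time_step(Fluid& fluid, Structure& structure,
                           std::vector<double>& displ, int max_iters, double omega) {
        for (int k = 0; k < max_iters; ++k) {
            std::vector<double> forces    = fluid.solve(displ);      // fluid sees current geometry
            std::vector<double> new_displ = structure.solve(forces); // structure sees fluid loads
            // Relaxed fixed-point update; omega damps the sub-iteration for stability.
            for (std::size_t i = 0; i < displ.size(); ++i)
                displ[i] = (1.0 - omega) * displ[i] + omega * new_displ[i];
        }
    }

The sequential dependency between the sub-solvers inside each iteration is one reason why load balancing and synchronization across components are listed above as open questions.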
Article about ExaFSA on InSide
EXAHD - An Exa-Scalable Two-Level Sparse Grid Approach for Higher-Dimensional Problems in Plasma Physics and Beyond
Principal Investigators | Dirk Pflüger (University of Stuttgart) | Hans-Joachim Bungartz (TU München) | Michael Griebel (University of Bonn) | Markus Hegland (ANU) | Frank Jenko (IPP Garching/UCLA) | Hermann Lederer (MPG Garching)
Contact | Dirk Pflüger
Link | http://ipvs.informatik.uni-stuttgart.de/SGS/EXAHD/
Higher-dimensional problems (i.e., beyond four dimensions) appear in medicine, finance, and plasma physics, posing a challenge for tomorrow's HPC. As an example application, we consider turbulence simulations for plasma fusion with one of the leading codes, GENE, which promises to advance science on the way to carbon-free energy production. While higher-dimensional applications involve a huge number of degrees of freedom, such that exascale computing becomes necessary, mere domain decomposition approaches for their parallelization are infeasible, since the communication explodes with increasing dimensionality. Thus, to ensure high scalability beyond domain decomposition, a second major level of parallelism has to be provided. To this end, we propose to employ the sparse grid combination scheme, a model reduction approach for higher-dimensional problems. It computes the desired solution via a combination of smaller, anisotropic and independent simulations, and thus provides this extra level of parallelization. In its randomized asynchronous and iterative version, it will break the communication bottleneck in exascale computing, achieving full scalability. Our two-level methodology enables novel approaches to scalability (ultra-scalable due to numerically decoupled subtasks), resilience (fault and outlier detection and even compensation without the need of recomputing), and load balancing (high-level compensation for insufficiencies on the application level).
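For illustration, the classical two-dimensional form of the combination scheme: the sparse grid solution of level n is assembled from coarse, anisotropic, mutually independent full-grid solutions via

    f_n^{c} = \sum_{l_1 + l_2 = n+1} f_{l_1, l_2} \; - \sum_{l_1 + l_2 = n} f_{l_1, l_2},

where f_{l_1, l_2} denotes the solution computed on a grid with mesh widths 2^{-l_1} and 2^{-l_2}. Each partial solution is an ordinary, much smaller simulation that can run on its own partition of the machine, which is exactly the second, coarse-grained level of parallelism described above.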
EXAMAG - Exascale Simulations of the Evolution of the Universe Including Magnetic Fields
Principal Investigators | Volker Springel (University of Heidelberg) | Christian Klingenberg (University of Würzburg)
Contact | Volker Springel
Link | http://www.mathematik.uni-wuerzburg.de/~klingen/EXAMAG.html
We aim to bring the Millennium Simulation, one of the largest and most successful numerical simulations of the Universe ever carried out, to a much higher level of physical fidelity on future exaflop computing platforms. In this project we shall take crucial steps towards much more self-consistent simulations beginning soon after the Big Bang and ending with the formation of realistic stellar systems like the Milky Way. This is a multi-scale problem of vast proportions. It requires the development of new numerical methods that excel in accuracy, parallel scalability, and physical fidelity to the processes relevant in galaxy formation. To this end, a moving-mesh technique for hydrodynamics recently developed by us provides a significant opportunity to improve the accuracy and flexibility of methods commonly employed in astrophysical fluid dynamics. Building on the first successes with the new moving-mesh code AREPO, we propose a dedicated effort to further extend this numerical framework with the goal of producing an internationally leading application code for upcoming large computing platforms. In an interdisciplinary effort of astrophysicists and applied mathematicians, we aim for drastic improvements in the raw performance and scalability of our existing AREPO code. State-of-the-art PDE solvers will be developed to introduce additional physics into the model. This will allow transformative simulations of individual galaxies and galaxy clusters with several tens of billions of hydrodynamical resolution elements and full adaptivity.
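Adding magnetic fields to the finite-volume scheme means solving the equations of ideal magnetohydrodynamics; in standard conservation form (given here for orientation, not quoted from the project):

    \partial_t \rho + \nabla \cdot (\rho \mathbf{v}) = 0, \qquad
    \partial_t (\rho \mathbf{v}) + \nabla \cdot \bigl( \rho \mathbf{v}\mathbf{v}^{\top} + p_{\mathrm{tot}} \mathbf{I} - \mathbf{B}\mathbf{B}^{\top} \bigr) = 0,
    \partial_t e + \nabla \cdot \bigl( (e + p_{\mathrm{tot}}) \mathbf{v} - \mathbf{B} (\mathbf{v} \cdot \mathbf{B}) \bigr) = 0, \qquad
    \partial_t \mathbf{B} + \nabla \cdot \bigl( \mathbf{v}\mathbf{B}^{\top} - \mathbf{B}\mathbf{v}^{\top} \bigr) = 0,

together with the constraint \nabla \cdot \mathbf{B} = 0 and total pressure p_{\mathrm{tot}} = p + \mathbf{B}^2/2 (in units absorbing the factor 4\pi). Maintaining the divergence constraint on a moving, unstructured mesh is one of the numerical challenges such an extension faces.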
FFMK - A Fast and Fault-Tolerant Microkernel-Based System for Exascale Computing
Principal Investigators | Hermann Härtig (TU Dresden) | Alexander Reinefeld (Zuse-Institute Berlin) | Amnon Barak (Hebrew University of Jerusalem, Israel) | Wolfgang E. Nagel (TU Dresden)
Contact | Hermann Härtig
Link | www.zib.de/projects/ffmk-fast-and-fault-tolerant-microkernel-based-system-exascale-computing
This project addresses three key scalability obstacles of future exa-scale systems: the vulnerability to system failures due to transient or permanent errors, the performance losses due to imbalances, and the noise due to unpredictable interactions between HPC applications and the operating system. We address these obstacles by designing, implementing and evaluating a prototypical system which integrates three well-proven technologies:
- Microkernel-based operating systems, to eliminate the operating system noise of feature-heavy all-in-one operating systems and to make kernel influences more deterministic and predictable,
- Erasure-code protected in-memory checkpointing, to provide a fast checkpoint and restart mechanism capable of keeping up with the decreasing mean time between failures (MTBF); we expect the time needed to write traditional checkpoints onto external file systems to soon exceed the MTBF (see the sketch after this list),
- Mathematically sound MosiX management system and load balancing algorithms, to adapt the system to the highly dynamic and widely varying requirements of today's and future HPC applications.
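A minimal sketch of the principle behind erasure-code protected in-memory checkpointing, reduced to single-failure XOR parity (illustrative only; a production system would use stronger codes such as Reed-Solomon and distribute the parity across nodes):

    // Illustrative XOR-parity checkpoint: any single lost node buffer can be
    // rebuilt from the surviving buffers plus the parity block.
    // Assumes all buffers are non-empty and of equal size.
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<std::uint8_t> make_parity(const std::vector<std::vector<std::uint8_t>>& chunks) {
        std::vector<std::uint8_t> parity(chunks.front().size(), 0);
        for (const auto& c : chunks)
            for (std::size_t i = 0; i < parity.size(); ++i)
                parity[i] ^= c[i];                  // accumulate XOR over all node buffers
        return parity;
    }

    // Reconstruct the buffer of the failed node: XOR of parity and all survivors.
    std::vector<std::uint8_t> recover(const std::vector<std::vector<std::uint8_t>>& survivors,
                                      const std::vector<std::uint8_t>& parity) {
        std::vector<std::uint8_t> lost = parity;
        for (const auto& c : survivors)
            for (std::size_t i = 0; i < lost.size(); ++i)
                lost[i] ^= c[i];
        return lost;
    }

Because checkpoints stay in the memory of neighboring nodes instead of going to an external file system, restart after a failure avoids the slow I/O path entirely.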
The resulting system will be a fluid, self-organizing platform for applications that require scaling up to exa-scale performance. An important component of the project will be the adaptation of suitable HPC workloads to showcase our new platform. A demonstration of such applications on a prototype implementation is the primary objective of our project in the first SPP funding period.
Article about FFMK on InSide |
ESSEX - Equipping Sparse Solvers for Exascale
Principal Investigators | Gerhard Wellein (Friedrich-Alexander-University Erlangen-Nürnberg) | Achim Basermann (German Aerospace Center) | Holger Fehske (University of Greifswald) | Georg Hager (Friedrich-Alexander-University Erlangen-Nürnberg) | Bruno Lang (University of Wuppertal)
Contact | Gerhard Wellein
Link | http://blogs.fau.de/essex/
This project develops and investigates programming concepts and numerical algorithms for scalable, efficient and robust iterative sparse matrix applications on exascale systems. Scalability and performance issues of widely used subspace methods are addressed by employing evolutionary techniques such as scalable preconditioners, and by investigating heterogeneous node architectures together with functional parallelism instead of relying on simple data-parallel approaches. We further plan to perform high-risk research tailored to exascale-inherent opportunities and challenges. First, we explore numerical alternatives to the Jacobi-Davidson method that exhibit additional multi-level parallelism and thus offer better scalability. Second, we address the exascale reliability problem by developing automatic fault tolerance concepts beyond classic checkpoint/restart. In this context, a comprehensive performance engineering approach will guide systematic, energy-efficient optimization and parallelization efforts on all levels. Eventually, the components will be integrated in a package that permits the selective calculation of bulks of eigenpairs in sparse eigenvalue problems. Building blocks, computational algorithms, and corresponding software will finally be validated on extreme-scale systems and be applied to quantum informatics, quantum physics and quantum chemistry problems, whose solution is known to require exascale resources.
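The central building block of all such iterative subspace methods is the sparse matrix-vector multiply. A plain CRS (compressed row storage) reference version is shown below; this is a scalar sketch, whereas the project targets blocked, vectorized and kernel-fused variants on heterogeneous nodes:

    // Plain sparse matrix-vector product y = A*x in CRS format -- the kernel
    // whose node-level performance and scalability sparse eigensolvers hinge on.
    #include <cstddef>
    #include <vector>

    struct CrsMatrix {
        std::vector<std::size_t> row_ptr; // size nrows+1: start of each row in col/val
        std::vector<std::size_t> col;     // column index of each nonzero
        std::vector<double>      val;     // value of each nonzero
    };

    // y must be sized to the number of rows by the caller.
    void spmv(const CrsMatrix& A, const std::vector<double>& x, std::vector<double>& y) {
        for (std::size_t r = 0; r + 1 < A.row_ptr.size(); ++r) {
            double sum = 0.0;
            for (std::size_t k = A.row_ptr[r]; k < A.row_ptr[r + 1]; ++k)
                sum += A.val[k] * x[A.col[k]];    // gather along the sparse row
            y[r] = sum;
        }
    }

The indirect access x[A.col[k]] makes the kernel memory-bound, which is why performance engineering at the node level plays such a prominent role in the project.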
Article about ESSEX on InSide |
EXASOLVERS - Extreme Scale Solvers for Coupled Problems
Principal Investigators | Lars Grasedyck (RWTH Aachen) | Wolfgang Hackbusch (MPI MIS Leipzig) | Rolf Krause (University of Lugano) | Michael Resch (HLRS / University of Stuttgart) | Volker Schulz (University of Trier) | Gabriel Wittum (Goethe University Frankfurt)
Contact | Gabriel Wittum
Link | gepris.dfg.de/gepris/projekt/230946257
Exascale computers will be characterized by billion-way parallelism. Computing at such extreme scales needs methods that scale perfectly and have optimal complexity. This project brings together several crucial aspects of extreme scale solving. First, the solver itself must be of optimal numerical complexity - a requirement becoming more and more severe with increasing problem size - and at the same time scale efficiently to extreme levels of parallelism. Second, simulations on exascale systems will consume substantial electric power, requiring algorithms and implementations with low power consumption. To that end, the present project combines domain decomposition, parallel multigrid and H-matrices. This combination has the potential to reach top efficiency at extreme scales while maintaining optimal complexity. To further improve parallelism, this approach is combined with special methods for parallelization in time and with solvers for optimization problems; both offer additional parallelization potential. Algorithms and implementations will be evaluated for energy efficiency in problem solving, and criteria and models for the energy efficiency of numerical solvers will be developed in the project. The team has long-standing experience in cooperatively developing algorithms and software for large-scale HPC.
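As an illustration of the parallel-in-time component, consider the parareal scheme, a standard representative of such methods (shown generically; the project's concrete time-parallel solver may differ). It combines a cheap coarse propagator G with an accurate fine propagator F in the iteration

    U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^{k}) - G(U_n^{k}),

where U_n^k approximates the solution at time t_n in iteration k. All fine solves F(U_n^k) over the time subintervals are independent of each other and can run concurrently, adding a layer of parallelism orthogonal to the spatial domain decomposition.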
CATWALK - A Quick Development Path for Performance Models
Principal Investigators | Felix Wolf (TU Darmstadt) | Christian Bischof (TU Darmstadt) | Torsten Hoefler (ETH Zurich) | Bernd Mohr (Jülich Supercomputing Centre) | Gabriel Wittum (Goethe University Frankfurt)
Contact | Felix Wolf
Link | http://www.vi-hps.org/projects/catwalk/
The cost of running applications at exascale will be tremendous. Reducing runtime and energy consumption of a code to a minimum is therefore crucial. Moreover, many existing applications suffer from inherent scalability limitations that will prevent them from running at exascale in the first place. Current tuning practices, which rely on diagnostic experiments, have drawbacks because (i) they detect scalability problems relatively late in the development process when major effort has already been invested into an inadequate solution and (ii) they incur the extra cost of potentially numerous full-scale experiments. Analytical performance models, in contrast, allow application developers to address performance issues already during the design or prototyping phase. Unfortunately, the difficulties of creating such models combined with the lack of appropriate tool support still render performance modeling an esoteric discipline mastered only by a relatively small community of experts. The objective of this project is therefore to provide a flexible set of tools to support key activities of the performance modeling process, making this powerful methodology accessible to a wider audience of HPC application developers. The tool suite will be used to study and help improve the scalability of applications from life sciences, fluid dynamics, and particle physics.
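For a concrete picture of what such tools fit: scaling behavior is typically captured by simple, human-readable functions of the process count p, for instance models of the form (a common choice in empirical performance modeling, given here as an illustration)

    t(p) = \sum_{k=1}^{n} c_k \cdot p^{i_k} \cdot \log_2^{j_k}(p),

where the exponents i_k and j_k are chosen from small candidate sets and the coefficients c_k are fitted to measurements from a handful of small-scale runs. Extrapolating t(p) to large p then flags code regions whose communication or computation terms grow too quickly, well before full-scale experiments are attempted.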