Project G: Central soft matter simulation platform

The goals of project G in the second funding phase of the TRR 146 have been twofold: to implement new methods of general interest in the molecular dynamics simulation environment ESPResSo++ (Guzman et al. 2019), which serves as a foundation for research projects inside the TRR 146, and to optimize ESPResSo++ to use modern HPC resources efficiently and thereby become performance-competitive with state-of-the-art MD environments such as LAMMPS.

Project G has successfully integrated new simulation methods by

  • coupling ESPResSo++ with the ScaFaCoS library (Hofmann et al. 2018; Arnold et al. 2013) to provide fast, parallelized long-range interaction algorithms (e.g. P3M / multipolar P3M),
  • developing and implementing a new approach for Lees-Edwards boundary conditions to provide a fast parallel implementation of shear boundary conditions (a minimal sketch of the underlying geometry follows this list).
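
Under Lees-Edwards boundary conditions, a particle pair separated across the sheared (y) boundary sees its periodic image displaced in x by the accumulated shear offset, which modifies the minimum-image convention. The sketch below illustrates only this geometric part; it is a standalone illustration under stated assumptions, not ESPResSo++ code, and the function and variable names are chosen here for the example.

```python
import numpy as np

def lees_edwards_minimum_image(dr, box, shear_offset):
    """Minimum-image separation vector under Lees-Edwards boundary conditions.

    dr           : raw separation vector r_i - r_j, shape (3,)
    box          : box lengths (Lx, Ly, Lz)
    shear_offset : accumulated x-displacement of the image layers above/below
                   the box in y (shear_rate * Ly * t, wrapped into [0, Lx))

    Illustrative only; ESPResSo++'s internal implementation differs in detail
    (domain decomposition, ghost layers, velocity corrections, ...).
    """
    dr = np.asarray(dr, dtype=float).copy()
    Lx, Ly, Lz = box

    # Crossing the y-boundary shifts the periodic image in x by the offset.
    ny = np.rint(dr[1] / Ly)
    dr[0] -= ny * shear_offset
    dr[1] -= ny * Ly

    # Ordinary periodic wrapping in x and z.
    dr[0] -= np.rint(dr[0] / Lx) * Lx
    dr[2] -= np.rint(dr[2] / Lz) * Lz
    return dr

# Example: two particles separated across the sheared y-boundary.
box = (10.0, 10.0, 10.0)
offset = 2.5  # accumulated shear displacement at the current time
print(lees_edwards_minimum_image([0.5, 9.5, 0.0], box, offset))  # -> [-2.0, -0.5, 0.0]
```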

The performance optimization of the ESPResSo++ environment included

  • changing the memory layout to benefit from better cache usage (see the sketch after this list),
  • vectorizing the code to support modern CPU architectures, and
  • investigating the loosely coupled parallel programming paradigm HPX and integrating its basic concepts into ESPResSo++ to improve scalability.
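
The memory-layout change follows the common array-of-structures to structure-of-arrays (AoS → SoA) pattern: storing each coordinate component contiguously lets compilers and numerical libraries vectorize the inner distance and force loops. The snippet below is a generic Python/NumPy illustration of that idea, not ESPResSo++ source code; the class and array names are invented for the example.

```python
import numpy as np

# Array-of-structures: one object per particle; coordinates are scattered in
# memory, so the inner loop cannot be vectorized effectively.
class ParticleAoS:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

def distances_aos(particles, ref):
    return [((p.x - ref.x) ** 2 + (p.y - ref.y) ** 2 + (p.z - ref.z) ** 2) ** 0.5
            for p in particles]

# Structure-of-arrays: each component lives in one contiguous array, so the
# same computation maps onto cache-friendly, SIMD-vectorized array operations.
def distances_soa(x, y, z, ref):
    return np.sqrt((x - ref[0]) ** 2 + (y - ref[1]) ** 2 + (z - ref[2]) ** 2)

rng = np.random.default_rng(0)
n = 100_000
x, y, z = (rng.random(n) for _ in range(3))

aos = [ParticleAoS(xi, yi, zi) for xi, yi, zi in zip(x, y, z)]
d_aos = distances_aos(aos, ParticleAoS(0.5, 0.5, 0.5))
d_soa = distances_soa(x, y, z, (0.5, 0.5, 0.5))
assert np.allclose(d_aos, d_soa)
```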

The change of the memory layout and the vectorization of the code improved the overall performance of ESPResSo++ by a factor of 3.12, while the HPX paradigm delivered an additional 30% performance improvement for imbalanced simulations.
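If the two gains compose multiplicatively (an assumption, since the two numbers were reported separately rather than measured jointly), an imbalanced simulation would see roughly a fourfold overall speedup:

```python
memory_and_vectorization = 3.12   # reported speedup factor
hpx_gain = 1.30                   # additional 30% for imbalanced simulations

# Assuming the improvements compose multiplicatively (not measured jointly).
combined = memory_and_vectorization * hpx_gain
print(f"combined speedup ≈ {combined:.2f}x")   # ≈ 4.06x
```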

Project G has also provided the infrastructure for scientific data management as well as courses on data management, performance optimization, and ESPResSo++.

ESPResSo++ 2.0: Advanced methods for multiscale molecular simulation
Horacio V. Guzman, Nikita Tretyakov, Hideki Kobayashi, Aoife C. Fogarty, Karsten Kreis, Jakub Krajniak, Christoph Junghans, Kurt Kremer, Torsten Stuehn
Computer Physics Communications 238, 66-76 (2019)


Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
Horacio V. Guzman, Christoph Junghans, Kurt Kremer, Torsten Stuehn
Physical Review E 96 (5) (2017)


MERCURY: a Transparent Guided I/O Framework for High Performance I/O Stacks
Giuseppe Congiu, Matthias Grawinkel, Federico Padua, James Morse, Tim Süß and André Brinkmann
25th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP 2017), IEEE Press (2017)


Deduplication Potential of HPC Applications' Checkpoints
Jürgen Kaiser, Ramy Gad, Tim Süß, Federico Padua, Lars Nagel and André Brinkmann
IEEE International Conference on Cluster Computing (Cluster'16), Pages 413-422, IEEE Press (2016)


Analysis of the ECMWF Storage Landscape
Matthias Grawinkel, Lars Nagel, Markus Mäsker, Federico Padua, André Brinkmann, Lennart Sorth
Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST 2015), Santa Clara, CA, USA, Pages 15-27, USENIX (2015)


Optimizing scientific file I/O patterns using advice based knowledge
Giuseppe Congiu, Matthias Grawinkel, Federico Padua, James Morse, Tim Süß, André Brinkmann
Proceedings of the International Conference on Cluster Computing (CLUSTER), Madrid, Spain, IEEE (2014)