Adaptive Selection of Communication Methods to Optimize Collective MPI Operations
Olaf Hartmann, Matthias Kühnemann, Thomas Rauber and Gudula Rünger
Many parallel applications from scientific computing use collective MPI communication operations to distribute or collect data. The execution time of collective MPI communication operations can be reduced significantly by restructuring them based on orthogonal processor structures. The performance improvement depends strongly on several factors, such as the collective MPI communication operation itself, the specific group layout, the message size, the MPI library, and the architectural parameters of the parallel target platform. In this paper, we describe an adaptive approach that determines and selects a specific processor group layout with the objective of minimizing the communication overhead. Furthermore, the adaptive approach evaluates the execution time of the required communication patterns, built from point-to-point MPI communication operations, that realize different algorithms for distributing or collecting data. If such a communication method is faster than the collective MPI communication operation, the corresponding communication pattern or orthogonal processor layout is applied to perform the communication operation.
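The idea of selecting a processor group layout to minimize communication overhead can be illustrated with a small cost-model sketch. The model below is a hypothetical simplification introduced here for illustration, not the paper's actual cost model: it counts communication steps for a naive linear broadcast on p processes versus a two-phase broadcast on an r × c orthogonal grid (broadcast along the root's row, then down all columns in parallel), and searches the factorizations of p for the cheapest layout.

```python
from math import isqrt

def flat_cost(p):
    """Steps for a naive linear broadcast: the root sends to each of the
    other p - 1 processes in turn (hypothetical baseline model)."""
    return p - 1

def grid_cost(r, c):
    """Two-phase orthogonal broadcast on an r x c grid: a linear broadcast
    along the root's row (c - 1 steps), then linear broadcasts down all
    columns in parallel (r - 1 steps)."""
    return (c - 1) + (r - 1)

def best_layout(p):
    """Pick the r x c factorization of p that minimizes the two-phase cost."""
    best = (1, p)
    for r in range(1, isqrt(p) + 1):
        if p % r == 0:
            c = p // r
            if grid_cost(r, c) < grid_cost(*best):
                best = (r, c)
    return best

if __name__ == "__main__":
    p = 64
    r, c = best_layout(p)
    print(f"p={p}: flat broadcast {flat_cost(p)} steps, "
          f"best grid {r}x{c} with {grid_cost(r, c)} steps")
```

Under this model, 64 processes arranged as an 8 × 8 grid need 14 steps instead of 63, which mirrors why the adaptive approach compares candidate group layouts before performing the collective operation; a real implementation would additionally account for message size, MPI library behavior, and platform parameters, as described above.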