Introduction to Parallel Programming Using the Message Passing Interface (MPI)

In collaboration with the HPC competence centre in Poland, we invite you to learn the fundamentals of parallel programming. The training will be held in English, so the course details below are provided in English.

This session introduces the fundamentals of parallel programming using the Message Passing Interface (MPI), a standard for writing programs that run on distributed memory systems. Attendees will explore the Single Program Multiple Data (SPMD) model, core MPI concepts such as communicators and ranks, and essential communication techniques including point-to-point, collective, and non-blocking operations. The training also covers launching MPI applications on HPC systems and presents hybrid approaches that integrate MPI with other parallel paradigms. Practical exercises, including a distributed inner product and a halo exchange, provide hands-on experience with key concepts.
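
For readers who have not yet seen MPI, the minimal C++ sketch below illustrates the SPMD model and the first two constructs covered in the session, the communicator and the rank. It is a generic introductory example written for this announcement, not material from the course itself:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        // SPMD: every process runs this same program.
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's ID within the communicator
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of processes

        std::printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

With a typical MPI installation, such a program is compiled through a wrapper like mpicxx and launched with mpirun (for example, mpirun -np 4 ./hello), so the same executable runs as four cooperating processes.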

This course will take place online. The link to the streaming platform will be provided to the registered participants only.

Timetable: Wednesday, April 23, 2025, 09:30-13:30 CEST

Prerequisites: basic C++ programming; access to a Linux system with MPI installed.

Instructor: Jakub Gałecki, Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw (ICM UW).

Target audience: students and researchers interested in HPC who have no prior experience with distributed-memory programming.

Fee: free of charge

The exercises in the hands-on part can be followed in one of two ways:

  • By remotely accessing the ICM UW computational facility. Interested participants should apply for an ICM account at https://granty.icm.edu.pl/account_applications/new by April 20 at the latest, so that there is time to set up access credentials and a training allocation; in the ID verification field, please indicate the training title.
  • Using your own computer with a GCC (g++) compiler and an MPI library installed.

Agenda

  • Introduction: the need for message passing, the SPMD model, working with distributed memory;
  • Basic concepts: communicator, rank, message;
  • Point-to-point communication: the basic building block of MPI programs;
  • Collective communication: expressing distributed parallel algorithms and common communication patterns;
  • Non-blocking communication: how to overlap computation and communication;
  • Hybrid parallelism: leveraging MPI to scale other parallel paradigms;
  • Launching MPI programs on HPC machines;
  • Hands-on exercise: distributed inner product (a sketch follows this list);
  • Hands-on exercise: halo exchange.
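
As a taste of the first hands-on exercise, here is one common way to compute a distributed inner product: each rank forms a partial dot product over its local slice of the vectors, and MPI_Allreduce sums the partials so that every rank receives the global result. The block distribution and the dummy vector values are illustrative assumptions; the exercise materials may differ:

    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        // Each rank owns a local slice of the global vectors x and y
        // (filled with dummy values here for illustration).
        const int local_n = 1000;
        std::vector<double> x(local_n, 1.0), y(local_n, 2.0);

        // Partial dot product over this rank's slice.
        double local_dot = 0.0;
        for (int i = 0; i < local_n; ++i)
            local_dot += x[i] * y[i];

        // Sum the partial results across all ranks; every rank gets the total.
        double global_dot = 0.0;
        MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("Global inner product: %f\n", global_dot);

        MPI_Finalize();
        return 0;
    }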

Learning Outcomes:

By the end of this session, participants will be able to:

  • Explain the need for message passing in parallel computing and the basics of the SPMD model.
  • Understand and use key MPI constructs: communicators, ranks, and messages.
  • Implement point-to-point and collective communication patterns in MPI programs.
  • Apply non-blocking communication to overlap computation with communication (see the sketch after this list).
  • Integrate MPI with other parallel programming models in hybrid architectures.
  • Compile and execute MPI applications on high-performance computing (HPC) systems.
  • Develop simple distributed-memory parallel algorithms through hands-on MPI exercises.
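
As a preview of the non-blocking style mentioned above, the sketch below performs a one-dimensional halo exchange with MPI_Isend and MPI_Irecv, leaving room to overlap work on interior cells with communication before MPI_Waitall completes the transfers. The layout (one ghost cell per side, MPI_PROC_NULL at the domain boundaries) is our own illustrative choice, not necessarily how the course exercise is structured:

    #include <mpi.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each rank owns a 1D block of the domain plus one ghost cell per side:
        // [ghost | interior ... interior | ghost]
        const int n = 8;
        std::vector<double> u(n + 2, static_cast<double>(rank));

        // Neighbours; MPI_PROC_NULL turns boundary transfers into no-ops.
        const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
        const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

        MPI_Request reqs[4];
        // Receive the neighbours' boundary values into our ghost cells...
        MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&u[n + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
        // ...while sending our own boundary values to them.
        MPI_Isend(&u[1], 1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
        MPI_Isend(&u[n], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

        // Computation on interior cells could overlap with communication here.

        MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

        std::printf("Rank %d ghosts: left=%g right=%g\n", rank, u[0], u[n + 1]);

        MPI_Finalize();
        return 0;
    }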

Organizers: EuroCC-Poland and EuroCC-Latvia, in collaboration with Riga Technical University’s HPC Center and the University of Warsaw.