Parallel Programming with MPI
Description: MPI (Message Passing Interface) is the de facto standard for distributed-memory parallel programming, defining how concurrent processes can communicate and hence work together to complete a given task in a shorter time. This course will introduce the concepts and terminology of High Performance Computing (HPC), before providing a comprehensive and detailed introduction to programming HPC machines using MPI. After an in-depth look at point-to-point and collective communication, we will study some more advanced but potentially very useful topics: Cartesian topologies, MPI derived data types, user-defined binary operators, and groups and communicators. Each section of the course is supported by practical exercises.
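As a taste of the point-to-point communication covered early in the course, here is a minimal sketch in C (one of the two course languages). The filename, message value, and tag are illustrative, not course material; it would typically be compiled with `mpicc` and launched with `mpirun -np 2`.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal point-to-point example: rank 0 sends one integer to rank 1.
   Illustrative sketch only -- the value 42 and tag 0 are arbitrary. */
int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Send one MPI_INT to rank 1 with message tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive one MPI_INT from rank 0 with matching tag 0. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Note that this requires at least two processes; running it on a single process would leave the send unmatched.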
For more information see the syllabus.
Aimed at: Anyone interested in writing parallel code.
Prerequisites: Attendees should be able to program in either Fortran or C and be familiar with working in a UNIX environment (i.e., you should be able to connect to a machine remotely, use basic UNIX commands, edit a source file, and understand the elementary steps in compiling object files and creating executables).
Duration: 3 days.
After Course Attendees Will: Be able to parallelise an existing serial code, or write a parallel code from scratch, using MPI.
Registration: To register for HECToR courses go to the booking form.