This short course on the Message Passing Interface (MPI) provides a practical introduction to parallel computing. MPI is the most widely used standard in scientific parallel computing, and MPI codes are portable to most high-performance computers.
The goal of the course is to cover the essential steps involved in parallel computing, from designing and programming an efficient MPI code to implementing and running large-scale parallel simulations on shared computational resources. Participants will gain hands-on experience developing parallel codes and evaluating their performance on the U of S training cluster "Socrates".
The course is open to students, faculty, and staff who are interested in learning the basics of parallel computing and developing competency with the MPI library.
The course will be taught by Philip LePoudre (College of Engineering). Philip has been working with high performance computing since 2002 and has developed parallel MPI programs for large scale simulations in Computational Fluid Dynamics (CFD) and Computational Aeroacoustics (CAA). His current research work involves CAA simulation of sound generation and propagation by the fan and stationary guide vanes in a high-bypass turbofan engine.
Once you have registered for an offering, we will contact you and ask that you provide the following information:
In this course, you will:
The course outline may be downloaded here.
Class resource files:
There are no additional recommendations for this course.
There is no required software for this course.
There are no additional resources for this course.
NOTE: Classes may have limited registration. If an offering is designated as 'Full', please email Training Services so that we can accommodate your training needs. Due to last-minute withdrawals, this class is subject to cancellation on short notice. A customized training solution may be offered in its place.