Parallel Programming
Full course description
Parallel Programming introduces students to the paradigm of parallel computing. Nowadays almost all computer systems contain so-called multi-core chips, so exploiting the full performance of such systems requires parallel programming.
This course covers shared-memory parallelization with OpenMP and Java threads, as well as message-passing parallelization on distributed-memory architectures with MPI. The course starts with a
recap of the programming language C, followed by a brief theoretical introduction to parallel computing. It then treats topics such as MPI communication, race conditions, deadlocks, efficiency,
and the problem of serialization. This course is accompanied by practical labs in which students have the opportunity to apply the newly acquired concepts. After completing this course, students will be
able to write parallel programs with MPI and OpenMP at a basic level and to deal with the difficulties they may encounter.
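To give a flavour of the programs written in the labs, the two sketches below show a minimal shared-memory example with OpenMP and a minimal message-passing example with MPI, both in C. They are illustrative examples only, not official course material, and assume a compiler with OpenMP support (e.g. gcc -fopenmp) and an MPI implementation (compiled with mpicc and launched with mpirun).

    /* OpenMP: parallel loop with a reduction to avoid a race condition on sum */
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++) {
            sum += 1.0 / (i + 1);   /* each thread adds its share of the terms */
        }

        printf("harmonic sum = %f\n", sum);
        return 0;
    }

The MPI counterpart starts several processes, each with its own memory, which identify themselves by their rank:

    /* MPI: every process prints its rank within the communicator */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process         */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime  */
        return 0;
    }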
This is an optional course: third-year students choose three electives per period from the optional courses offered in periods 1 and 2.
A maximum of 35 students can follow this course.
Prerequisites
Procedural Programming (formerly known as Introduction to Computer Science 1);
Objects in Programming (formerly known as Introduction to Computer Science 2);
Data Structures and Algorithms.
Recommended reading
Peter Pacheco, Parallel Programming with MPI, Morgan Kaufmann, 1996 (an early version is available online).