Parallel and Distributed Programming
Keywords
Classification: OFICIAL | Keyword: Computer Science
Instance: 2010/2011 - 1S
Cycles of Study/Courses
Teaching language
Portuguese
Objectives
Provide students with advanced concepts in the theory and practice of computational models for parallel and distributed memory architectures. Hands-on experience in programming distributed memory architectures with MPI, programming shared memory architectures using processes, threads and OpenMP, and programming many-core architectures with CUDA.
Program
Introduction and foundations:
Parallel programming, concurrency and parallelism, Flynn's taxonomy, Foster's programming methodology, major parallel programming models and paradigms.
Programming for distributed memory architectures using MPI:
MPI specification, explicit message passing, communication protocols, derived types and data packing, collective communication, communicators, topologies.
Programming for shared memory architectures with processes:
Processes, shared memory segments, shared memory through file mapping, spinlocks, semaphores.
Programming for shared memory architectures with threads:
Multithreaded processes with Pthreads, mutexes, condition variables, thread-specific data (keys), implementations of Pthreads.
Programming for shared memory architectures with OpenMP:
OpenMP specification, compiler directives, work-sharing constructs, basic constructs, synchronization constructs, basic functions, locking functions, environment variables, removing data dependencies, performance, combining OpenMP with MPI.
Programming for many-core architectures with CUDA:
GPU architectures. CUDA specification, programming interface and main constructs. Examples of CUDA programs.
Performance metrics:
Speedup, efficiency, redundancy, utilization and quality of a parallel application. Amdahl's law. Gustafson-Barsis's law. The Karp-Flatt metric.
Parallel algorithms:
Scheduling and load-balancing strategies. Parallel algorithms for sorting, searching, Monte Carlo simulation, and matrix multiplication.
Mandatory literature
Michael J. Quinn; Parallel Programming in C with MPI and OpenMP, McGraw-Hill.
B. Nichols, D. Buttlar and J.P. Farrell; Pthreads Programming, O'Reilly
R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald and R. Menon; Parallel Programming in OpenMP, Morgan Kaufmann
P. Pacheco.; Parallel Programming with MPI, Morgan Kaufmann
M. Mitchell, J. Oldham and A. Samuel; Advanced Linux Programming, New Riders
B. Wilkinson and M. Allen; Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall.
Evaluation Type
Distributed evaluation without final exam
Assessment Components
Description | Type | Time (hours) | Weight (%) | End date
Attendance (estimated) | In-person participation | 63.00 | |
Total: | - | 0.00 | |
Calculation formula of final grade
The assessment is based on the students' performance on two practical assignments and two written tests.
- Each practical assignment weighs 4 out of 20 in the final mark for this course. The minimum combined mark for the two practical assignments is 4 out of 8.
- Each written test weighs 6 out of 20 in the final mark for this course. The minimum combined mark for the two written tests is 4 out of 12.
Examinations or Special Assignments
The assessment is based on the students' performance on:
- two practical assignments, one on parallel programming for distributed memory architectures and the other on parallel programming for shared memory architectures;
- two written tests, one in the middle of the semester and the other at the end.