
Parallel and Distributed Programming

Code: CC430     Acronym: CC430

Keywords
Classification: OFICIAL
Keyword: Computer Science

Instance: 2013/2014 - 1S

Active? Yes
Web Page: http://www.dcc.fc.up.pt/~fds/aulas/PPD/1314/
Responsible unit: Department of Computer Science
Course/CS Responsible: Master's Degree in Network and Information Systems Engineering

Cycles of Study/Courses

Acronym   No. of Students   Study Plan                                       Curricular Years   Credits UCN   Credits ECTS   Contact Hours   Total Time
M:CC      10                Study plan of the Master's in Computer Science   1, 2               -             7.5            67              202.5
MI:ERS    11                Study plan from 2007 onwards                     4                  -             7.5            67              202.5

Teaching language

English

Objectives

Introduce students to advanced concepts in the theory and practice of computational models for parallel and distributed memory architectures. Provide hands-on experience in programming distributed memory architectures with MPI, shared memory architectures using processes, threads, and OpenMP, and many-core architectures with GPUs.


Learning outcomes and competences

On completing this course, the students must be able to:

- understand and assess the concepts related to the performance of parallel programs;
- be aware of the main recent models and languages for parallel programming, for example GPU programming, OpenCL, MapReduce, and Chapel;
- formulate solutions in the main parallel programming paradigms, namely MPI, Pthreads, and OpenMP.

Working method

In-person (Presencial)

Program

Introduction and foundations:
Parallel programming, concurrency and parallelism, Flynn's taxonomy. Foster's programming methodology. Major parallel programming models and paradigms.

Programming for distributed memory architectures using MPI:
The MPI specification, explicit message passing, communication protocols, derived types and data packing, collective communication, communicators, topologies.
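
The labs cover MPI in C; purely as a conceptual sketch of the explicit send/receive pattern, here is a two-process ping-pong written in Python with `multiprocessing.Pipe` (the names `rank1` and `ping_pong` are illustrative analogues, not MPI API):

```python
from multiprocessing import Process, Pipe

def rank1(conn):
    # The receiving "rank": block on recv (MPI_Recv analogue),
    # transform the message, and send a reply (MPI_Send analogue).
    msg = conn.recv()
    conn.send(msg.upper())

def ping_pong(message):
    # The "rank 0" side of a two-process ping-pong exchange.
    parent, child = Pipe()
    p = Process(target=rank1, args=(child,))
    p.start()
    parent.send(message)   # MPI_Send analogue
    reply = parent.recv()  # MPI_Recv analogue
    p.join()
    return reply

if __name__ == "__main__":
    print(ping_pong("hello"))  # HELLO
```

In real MPI the two ranks are launched together by `mpirun` and addressed by rank number; the pipe here stands in for the communicator.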

Programming for shared memory architectures with processes:
Processes, shared memory segments, shared memory through file mapping, spinlocks, semaphores.
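
The syllabus pairs shared memory segments with semaphores for mutual exclusion; a hedged Python analogue of that pattern uses `multiprocessing.Value` (a small shared segment) whose built-in lock plays the semaphore's role (the function names are illustrative):

```python
from multiprocessing import Process, Value

def incrementer(counter, n):
    # Each increment is guarded by the Value's built-in lock,
    # the analogue of protecting a shared segment with a semaphore.
    for _ in range(n):
        with counter.get_lock():
            counter.value += 1

def shared_count(nprocs=4, per_proc=1000):
    counter = Value('i', 0)  # analogue of a shared memory segment
    procs = [Process(target=incrementer, args=(counter, per_proc))
             for _ in range(nprocs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value

if __name__ == "__main__":
    print(shared_count())  # 4000
```

Without the lock, concurrent `counter.value += 1` updates can interleave and lose increments, which is exactly the race the semaphore prevents.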

Programming for shared memory architectures with threads:
Multithreading with Pthreads, mutexes, condition variables, thread-specific data (keys), implementation of Pthreads.
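
The mutex-plus-condition-variable idiom taught with Pthreads (lock, wait in a loop on the predicate, signal on change) can be sketched in Python, whose `threading.Condition` bundles the two; the `Buffer` class below is an illustrative one-producer/one-consumer example, not course code:

```python
import threading
from collections import deque

class Buffer:
    """Bounded producer/consumer queue: the analogue of a Pthreads
    mutex plus condition variable (pthread_cond_wait/signal)."""
    def __init__(self, capacity=1):
        self.items = deque()
        self.capacity = capacity
        self.cond = threading.Condition()  # mutex + condition variable

    def put(self, item):
        with self.cond:                     # pthread_mutex_lock
            while len(self.items) >= self.capacity:
                self.cond.wait()            # pthread_cond_wait
            self.items.append(item)
            self.cond.notify_all()          # pthread_cond_broadcast

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()
            item = self.items.popleft()
            self.cond.notify_all()
            return item

def run_demo(n=5):
    buf, out = Buffer(), []
    consumer = threading.Thread(
        target=lambda: [out.append(buf.get()) for _ in range(n)])
    consumer.start()
    for i in range(n):
        buf.put(i)
    consumer.join()
    return out

if __name__ == "__main__":
    print(run_demo())  # [0, 1, 2, 3, 4]
```

Note the `while` (not `if`) around each wait: condition variables permit spurious wakeups, so the predicate must be rechecked, exactly as with `pthread_cond_wait`.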

Programming for shared memory architectures with OpenMP:
The OpenMP specification, compiler directives, work-sharing constructs, basic constructs, synchronisation constructs, basic runtime functions, locking functions, environment variables, removing data dependencies, performance, combining OpenMP with MPI.
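
The central work-sharing idiom here is `#pragma omp parallel for reduction(+: total)`: each thread accumulates a private partial result that is then combined. As a hedged sketch of that same split/combine structure (in Python with `multiprocessing.Pool`, with illustrative names), under the assumption of a static stride-based work partition:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own chunk, like the private
    # accumulator of an OpenMP reduction(+: total) clause.
    return sum(x * x for x in chunk)

def parallel_sum_squares(values, nworkers=4):
    # Analogue of '#pragma omp parallel for reduction(+: total)':
    # static work sharing followed by a final combine step.
    chunks = [values[i::nworkers] for i in range(nworkers)]
    with Pool(nworkers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_squares(list(range(100))))  # 328350
```

In OpenMP the partition and the combine are generated by the compiler from the directive; here both steps are spelled out explicitly.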

Programming for many-core architectures with GPU:
GPU architectures. OpenCL and CUDA programming interfaces and main constructs. Examples of programs in OpenCL and CUDA.

Performance metrics:
Speedup, efficiency, redundancy, utilization, and quality of a parallel application. Amdahl's law. The Gustafson-Barsis law. The Karp-Flatt metric.
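
These metrics are simple formulas: speedup S = T_serial / T_parallel, efficiency E = S / p, Amdahl's law S ≤ 1 / (f + (1 - f)/p) for serial fraction f, and the Karp-Flatt experimentally determined serial fraction e = (1/S - 1/p) / (1 - 1/p). A small worked sketch:

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p

def amdahl(f, p):
    # Amdahl's law: maximum speedup on p processors when a
    # fraction f of the work is inherently serial.
    return 1.0 / (f + (1.0 - f) / p)

def karp_flatt(s, p):
    # Karp-Flatt metric: the experimentally determined serial
    # fraction, from measured speedup s on p processors.
    return (1.0 / s - 1.0 / p) / (1.0 - 1.0 / p)

if __name__ == "__main__":
    print(round(amdahl(0.1, 8), 3))      # 4.706
    print(round(karp_flatt(4.0, 8), 3))  # 0.143
```

For example, a program that is 10% serial can never exceed a speedup of 4.71 on 8 processors, and a measured speedup of 4 on 8 processors implies a serial fraction of about 0.143.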

Parallel algorithms:
Scheduling and load-balancing strategies. Parallel algorithms for sorting, searching, Monte Carlo simulation, and matrix multiplication.
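
Monte Carlo simulation is the classic embarrassingly parallel example: workers sample independently and the root reduces their counts. A hedged Python sketch (estimating pi; worker seeds are fixed for reproducibility, and the names are illustrative):

```python
import random
from multiprocessing import Pool

def count_hits(args):
    seed, n = args
    # Each worker draws its own deterministic stream of random points
    # and counts those falling inside the unit quarter-circle.
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))

def monte_carlo_pi(n_per_worker=100_000, nworkers=4):
    # Embarrassingly parallel: no communication until the final
    # reduction of per-worker hit counts into one estimate.
    tasks = [(seed, n_per_worker) for seed in range(nworkers)]
    with Pool(nworkers) as pool:
        hits = sum(pool.map(count_hits, tasks))
    return 4.0 * hits / (n_per_worker * nworkers)

if __name__ == "__main__":
    print(monte_carlo_pi())
```

Because the workers never interact, speedup is limited mainly by process startup and the final reduction, which makes this a good first load-balancing exercise.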

Mandatory literature

Michael J. Quinn; Parallel Programming in C with MPI and OpenMP; McGraw-Hill.
P. Pacheco; Parallel Programming with MPI; Morgan Kaufmann.
B. Nichols, D. Buttlar and J.P. Farrell; Pthreads Programming; O'Reilly.
R. Chandra, L. Dagum, D. Kohr, D. Maydan, J. McDonald and R. Menon; Parallel Programming in OpenMP; Morgan Kaufmann.
M. Mitchell, J. Oldham and A. Samuel; Advanced Linux Programming; New Riders.
B. Wilkinson and M. Allen; Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers; Prentice Hall.

Teaching methods and learning activities

Lecture classes introduce the concepts; practical classes motivate students to experiment with parallel programming in more than one paradigm.

Keywords

Physical sciences > Computer science > Computer architecture > Distributed computing
Physical sciences > Computer science > Computer architecture > Parallel computing

Evaluation Type

Distributed evaluation without final exam

Assessment Components

Designation             Weight (%)
Exam                    30.00
Test                    30.00
Written work            10.00
Laboratory work         30.00
Total:                  100.00

Amount of time allocated to each course unit

Designation                                 Time (hours)
Writing of report/dissertation/thesis       20.00
Autonomous study                            65.50
Class attendance                            67.00
Laboratory work                             50.00
Total:                                      202.50

Eligibility for exams

Students must achieve a minimum average mark of 40% in the assignments.

Calculation formula of final grade

The final grade is obtained by adding the partial grades of assignments and exams using the formula:

Final = (2*A1+3*A2+3*A3+6*E1+6*E2)/20

where:

A1, A2, and A3 are the assignments;
E1 and E2 are the mid-term and final exams.

Marks for A1, A2, A3, E1, and E2 are given on the 0..20 scale.
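
The weights 2 + 3 + 3 + 6 + 6 = 20 make the formula a weighted average on the 0..20 scale (assignments 40%, exams 60%). A worked example with hypothetical marks:

```python
def final_grade(a1, a2, a3, e1, e2):
    # Weighted average on the 0..20 scale:
    # assignments weigh 2+3+3 = 8/20 (40%), exams 6+6 = 12/20 (60%).
    return (2*a1 + 3*a2 + 3*a3 + 6*e1 + 6*e2) / 20

if __name__ == "__main__":
    # Hypothetical marks, for illustration only.
    print(final_grade(14, 16, 15, 12, 13))  # 13.55
```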

Examinations or Special Assignments

Assessment will include the following components:

A1 - practical assignment on MPI programming (10%)
A2 - written essay and presentation on a course topic (15%)
A3 - practical assignment on OpenMP or Pthreads (15%)
E1 - mid-term exam (30%)
E2 - final exam (30%)

The weight of each component is given in parentheses.

Special assessment (TE, DA, ...)

Working students are also required to fulfill the assignment requirements.

Classification improvement

Grade improvement is available only for the written exams.
