International Computing Institute


UBE 520
PARALLEL PROGRAMMING
2008-2009 SPRING


Parallel and Distributed Systems Programming, 3 lecture hours.
Parallel programming techniques, architectures and algorithms.

You may learn your final exam grades from the TA (İlker Kocabaş).

 


Instructor: Prof. Dr. M. E. Dalkılıç

Textbook: Barry Wilkinson & Michael Allen, Parallel Programming: Techniques and Applications Using

                Networked Workstations and Parallel Computers, 2nd edition, Prentice-Hall, 2004.

                http://www.cs.uncc.edu/par_prog

References:

  • Ananth Grama, Anshul Gupta, George Karypis & Vipin Kumar, Introduction to Parallel Computing, 2nd edition (companion site: http://www-users.cs.umn.edu/~karypis/parbook)
  • Calvin Lin & Larry Snyder, Principles of Parallel Programming, Prentice-Hall, 2009

 

Goals: To introduce the theory and implementation of parallel programming techniques.

Programming platform:

  • MPI (Message Passing Interface) on a 15-node Linux cluster
  • Threads / OpenMP

 


Prerequisites: Data Structures and Algorithms, Computer Architecture, and C/Java Programming

Topics:

 
PART I: BASIC TECHNIQUES
Chapter 1. Parallel Computers
Chapter 2. Message Passing Computing
Chapter 3. Embarrassingly Parallel Computations
Chapter 4. Partitioning and Divide-and-Conquer Strategies
Chapter 5. Pipelined Computations
Chapter 6. Synchronous Computations
Chapter 7. Load Balancing and Termination Detection

Chapter 8. Programming with Shared Memory
Chapter 9. Distributed Shared Memory Systems and Programming

PART II: ALGORITHMS and APPLICATIONS
Chapter 10. Sorting Algorithms
Chapter 11. Numerical Algorithms
Chapter 12. Image Processing

Grading

  • Assignments, 40% (+ 5/10)
  • Mid-term Exam, 25% (+/- 5)
  • Final Exam, 35% (+/- 5)

 

Homework #1 (due March 3rd, 2009)

1. Describe Gustafson's law as it relates to maximum speedup and Amdahl's law.

Specifically, explain how Gustafson's argument invalidates Amdahl's limit.
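For reference, the two laws in their standard textbook form (with f the serial fraction of the computation and p the number of processors); this is background for the question, not the answer itself:

```latex
% Amdahl: fixed problem size, so speedup is capped by the serial fraction
S_A(p) = \frac{1}{f + (1 - f)/p} \;\xrightarrow{\;p \to \infty\;}\; \frac{1}{f}

% Gustafson: problem size scales with p, so scaled speedup grows linearly
S_G(p) = f + (1 - f)\,p
```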

2. Search the TOP500 list of supercomputers. Which machine is the current champion? Look up the properties of

the IBM Sequoia supercomputer. How much faster will Sequoia be than the current fastest supercomputer?

3. Try your Linux lab accounts and get accustomed to the MPI "hello world" program.

Use the startup document for MPI, UBE_LAM_MPI.doc

Note: in the DNS, the machine names are registered in the form linux02 = "ubepc-40-102.ege.edu.tr".

For access you must use either this name or the IP address, e.g. 155.223.40.102 (the range 101-108).
For access between the machines themselves (e.g., from linux01 to the other machines),

only the short form linuxXX (e.g., linux06) works.

 

Homework #2 (due March 10th, 2009)

1. Write an MPI program to send a message around a ring of processors: processor 0 sends it

to processor 1, which sends it to processor 2, and so on; the last processor returns the

message to processor 0. Provide sample output that allows the user to follow the message

along its route. Use MPI_Wtime to measure the execution time of your code for p = 4, 8, 16, 32.
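The assignment itself targets MPI on the cluster; as a language-neutral warm-up, the same ring pattern can be simulated with Java threads and blocking queues, where take/put play the roles of MPI_Recv/MPI_Send. All class and method names below are illustrative, not part of the assignment:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RingDemo {
    // Simulates the ring: "process" i takes the message from its inbox,
    // appends its rank so the route is visible, and forwards it to
    // inbox (i+1) mod p; after one full loop it is back at process 0.
    static String passAroundRing(int p, String msg) throws InterruptedException {
        @SuppressWarnings("unchecked")
        BlockingQueue<String>[] inbox = new ArrayBlockingQueue[p];
        for (int i = 0; i < p; i++) inbox[i] = new ArrayBlockingQueue<>(1);
        Thread[] workers = new Thread[p];
        for (int i = 0; i < p; i++) {
            final int rank = i;
            workers[i] = new Thread(() -> {
                try {
                    String m = inbox[rank].take();               // like MPI_Recv
                    inbox[(rank + 1) % p].put(m + "->" + rank);  // like MPI_Send
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        inbox[0].put(msg);              // rank 0 injects the message
        for (Thread t : workers) t.join();
        return inbox[0].take();         // message after one full trip
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(passAroundRing(4, "hello")); // route is appended rank by rank
    }
}
```

In the real MPI version, each rank calls MPI_Recv from rank-1 and MPI_Send to (rank+1) mod p instead of using shared queues.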

 

2. Note that the operation defined in the first problem is in fact a form of broadcast.

Write a second MPI program that broadcasts the same message as in the first problem.

Measure the execution time, compare it to that of problem 1, and interpret the results.
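As a rough cost model for the comparison (a sketch, not measured data, assuming each message hop costs a fixed time t_m): the ring forwards the message through p sequential hops, while a tree-structured broadcast such as MPI_Bcast typically completes in about log2 p rounds:

```latex
T_{\text{ring}} \approx p \, t_m
\qquad
T_{\text{bcast}} \approx \lceil \log_2 p \rceil \, t_m
```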

 

3. Search the Internet for free MPI analyzers/profilers. Give a brief (at most one page)

report on your findings.

Homework #3 (due March 17th, 2009)

1. Write an MPI program to distribute MPI source files to all machines in your

lamhosts file and to compile them automatically. To be more specific, your program

should take at least two arguments: a source file name and a distribution list.

Correction: When the MPI nodes are activated (e.g., with lamboot), MPI assigns a node id

to each machine. Thus you can use these node ids instead of taking a distribution list

as an input parameter; in short, you don't actually need the distribution-list parameter.

(Thanks to Murat Kurt for pointing this out.)

Homework #4 (due March 24th, 2009; sample homework by Esra Ruzgar)

•         Write and execute an embarrassingly parallel program

–        First write and test a non-MPI sequential program

–        Write an MPI version

–        Execute on one computer and on more than one computer (you will need a host file)

–        Time the execution using the MPI_Wtime() function
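A classic embarrassingly parallel computation that fits this outline is Monte Carlo estimation of pi: the threads share nothing until the final reduction. The sketch below is a thread-based analogue of the MPI structure; the choice of pi estimation, the thread count, and the sample count are illustrative assumptions, not the required assignment:

```java
import java.util.concurrent.ThreadLocalRandom;

public class MonteCarloPi {
    // Embarrassingly parallel: each thread estimates pi independently on
    // its own slice of samples; results are combined only after join().
    static double estimate(int threads, long samplesPerThread) throws InterruptedException {
        long[] hits = new long[threads];
        Thread[] pool = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool[t] = new Thread(() -> {
                long inCircle = 0;
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                for (long i = 0; i < samplesPerThread; i++) {
                    double x = rnd.nextDouble(), y = rnd.nextDouble();
                    if (x * x + y * y <= 1.0) inCircle++;
                }
                hits[id] = inCircle;    // each thread writes only its own slot
            });
            pool[t].start();
        }
        long total = 0;
        for (int t = 0; t < threads; t++) { pool[t].join(); total += hits[t]; }
        return 4.0 * total / (threads * samplesPerThread);
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime();
        double pi = estimate(4, 1_000_000);
        System.out.printf("pi ~ %.4f (%.1f ms)%n", pi, (System.nanoTime() - t0) / 1e6);
    }
}
```

In the MPI version the per-thread loop becomes the per-process work and the final sum becomes an MPI_Reduce; timing uses MPI_Wtime() instead of System.nanoTime().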

Homework #5 (due March 31st, 2009)

Gravitational N-body simulation details
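The detailed specification is in the handout referenced above; as a starting point, the core of one time step of a direct O(n^2) simulation might look like the sketch below. The 2-D coordinates, G = 1 units, Euler integration, and the softening constant are all assumptions for illustration:

```java
public class NBodyStep {
    // One Euler time step of a direct O(n^2) gravitational simulation.
    // m[i] is the mass, pos[i] = {x, y}, vel[i] = {vx, vy}; G = 1 here.
    static void step(double[] m, double[][] pos, double[][] vel, double dt) {
        int n = m.length;
        double[][] acc = new double[n][2];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                if (i == j) continue;
                double dx = pos[j][0] - pos[i][0], dy = pos[j][1] - pos[i][1];
                double r2 = dx * dx + dy * dy + 1e-9; // softening avoids divide-by-zero
                double invR3 = 1.0 / (Math.sqrt(r2) * r2);
                acc[i][0] += m[j] * dx * invR3;       // a_i = sum_j m_j r_ij / |r_ij|^3
                acc[i][1] += m[j] * dy * invR3;
            }
        for (int i = 0; i < n; i++) {
            vel[i][0] += dt * acc[i][0]; vel[i][1] += dt * acc[i][1];
            pos[i][0] += dt * vel[i][0]; pos[i][1] += dt * vel[i][1];
        }
    }

    public static void main(String[] args) {
        double[] m = {1.0, 1.0};
        double[][] pos = {{-1, 0}, {1, 0}}, vel = {{0, 0}, {0, 0}};
        step(m, pos, vel, 0.01);  // the two bodies drift toward each other
        System.out.printf("x0=%.6f x1=%.6f%n", pos[0][0], pos[1][0]);
    }
}
```

The outer i-loop is the natural unit to parallelize (each process computes forces for a block of bodies, then positions are exchanged each step).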

 

Homework #6 (due April 7th, 2009)

Compare experimentally a fully synchronous parallel MPI implementation

and a partially synchronous parallel MPI implementation of the

heat-distribution problem described in Section 6.3.2 of the textbook.

Try different values of s in the convergence condition specified in

Section 6.4. Write a report on your findings that includes the specific

speed improvements you obtained.
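Before writing the MPI versions, it may help to have a sequential reference to check results against. The sketch below implements the Jacobi-style update of the heat-distribution problem (each interior point becomes the average of its four neighbours); the grid size, boundary values, and termination test on the largest change are illustrative assumptions, so match them to Sections 6.3.2 and 6.4 of the textbook:

```java
public class HeatJacobi {
    // Sequential reference for the heat-distribution iteration: update
    // every interior point from the OLD grid, repeat until the largest
    // change per sweep falls below eps. Returns the sweep count used.
    static int relax(double[][] h, double eps, int maxIter) {
        int n = h.length;
        double[][] g = new double[n][n];
        for (int iter = 1; iter <= maxIter; iter++) {
            double maxDiff = 0.0;
            for (int i = 1; i < n - 1; i++)
                for (int j = 1; j < n - 1; j++) {
                    g[i][j] = 0.25 * (h[i-1][j] + h[i+1][j] + h[i][j-1] + h[i][j+1]);
                    maxDiff = Math.max(maxDiff, Math.abs(g[i][j] - h[i][j]));
                }
            for (int i = 1; i < n - 1; i++)        // commit the new sweep
                System.arraycopy(g[i], 1, h[i], 1, n - 2);
            if (maxDiff < eps) return iter;        // converged
        }
        return maxIter;
    }

    public static void main(String[] args) {
        int n = 12;
        double[][] h = new double[n][n];
        for (int j = 0; j < n; j++) h[0][j] = 100.0;   // hot top edge, cold elsewhere
        int iters = relax(h, 1e-3, 10_000);
        System.out.println("converged after " + iters + " sweeps");
    }
}
```

The fully synchronous MPI version exchanges boundary rows every sweep; the partially synchronous one lets processes run ahead with slightly stale boundary values, which is what the experiment is meant to compare.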

 

Homework #7 (due May 5th, 2009)

Simulate the "Aircraft and Birds" cellular automata problem using Java threads.

For problem details see.
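The specific rules come from the problem handout, but the skeleton of a synchronous cellular automaton is the same for any rule set: the new state of every cell depends only on the old grid, so cells can be updated in parallel (for example, one stripe of rows per thread). As a sketch, here is one update step using Conway's Life rules purely as a placeholder rule set:

```java
public class LifeStep {
    // One synchronous cellular-automaton step: read only from the OLD
    // grid, write only to the NEW grid, then swap. This is what makes
    // a stripe-per-thread parallelization safe.
    static boolean[][] step(boolean[][] grid) {
        int rows = grid.length, cols = grid[0].length;
        boolean[][] next = new boolean[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                int live = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int rr = r + dr, cc = c + dc;
                        if (rr >= 0 && rr < rows && cc >= 0 && cc < cols && grid[rr][cc])
                            live++;
                    }
                // Conway's rules, used here only as an illustrative stand-in
                next[r][c] = grid[r][c] ? (live == 2 || live == 3) : (live == 3);
            }
        return next;
    }

    public static void main(String[] args) {
        boolean[][] g = new boolean[5][5];
        g[2][1] = g[2][2] = g[2][3] = true;            // horizontal "blinker"
        boolean[][] g2 = step(g);
        System.out.println(g2[1][2] && g2[2][2] && g2[3][2]); // vertical after one step
    }
}
```

Replace the rule line with the assignment's aircraft/bird rules; the double-buffered step and the per-stripe threading stay the same.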

 

Homework #8 (due May 12th, 2009)

Write a Java multithreaded program consisting of two threads, in which

a file is read into a buffer by one thread and written out to another file

by the other thread. Provide sample outputs and a clear explanation of the code.
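One common shape for this assignment is a producer-consumer pair sharing a bounded buffer; the sketch below uses a BlockingQueue of chunks with an empty array as the end-of-file marker (the chunk size, queue depth, and class names are illustrative choices, not requirements):

```java
import java.io.*;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoThreadCopy {
    private static final byte[] POISON = new byte[0]; // end-of-file marker

    // Reader thread fills the bounded queue with chunks; writer drains it.
    static void copy(File src, File dst) throws Exception {
        BlockingQueue<byte[]> buffer = new ArrayBlockingQueue<>(8);
        Thread reader = new Thread(() -> {
            try (InputStream in = new FileInputStream(src)) {
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1)
                    buffer.put(java.util.Arrays.copyOf(chunk, n)); // blocks if full
                buffer.put(POISON);                                // signal EOF
            } catch (Exception e) { throw new RuntimeException(e); }
        });
        Thread writer = new Thread(() -> {
            try (OutputStream out = new FileOutputStream(dst)) {
                byte[] chunk;
                while ((chunk = buffer.take()) != POISON)          // blocks if empty
                    out.write(chunk);
            } catch (Exception e) { throw new RuntimeException(e); }
        });
        reader.start(); writer.start();
        reader.join(); writer.join();
    }

    public static void main(String[] args) throws Exception {
        File src = File.createTempFile("demo", ".in");
        File dst = File.createTempFile("demo", ".out");
        try (PrintWriter pw = new PrintWriter(src)) { pw.print("hello, threads"); }
        copy(src, dst);
        System.out.println(new String(java.nio.file.Files.readAllBytes(dst.toPath())));
    }
}
```

The bounded queue gives the synchronization for free: the reader blocks when the buffer is full and the writer blocks when it is empty.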

 

Final Homework #9 (due May 26th, 2009)

1. Problem 10.4 (on MPI platform)

2. Problem 10.22 (using Java threads)

Provide sample outputs and comments on the code.

MPI tutorials: one, two, three, four, GroupComm

 

Startup document for MPI UBE_LAM_MPI.doc

 

 

 


Send any comments or suggestions to dalkilic
Last revised on Feb 12, 1996

 

 

 

 


 


 

Copyright © International Computing Institute, 2001