ENEE459P/ENEE699*: Parallel Algorithms, Fall 2010
* 1. New in Fall 2010: This course is offered as a joint inter-university course on parallel computing with the University of Illinois.
2. To take the course as ENEE699 (Section 3201),
please contact the instructor at firstname.lastname@example.org.
Why is it a good idea to take this course?
Click for a brief flyer addressing this question.
University of Maryland
Time and Location MW 9:30-10:45. ITV 1100.
Instructor Dr. U. Vishkin.
Instructor's E-mail and Office Hours
email@example.com, M 2:00-3:00 (by appointment) at AVW 2365.
Grader Umer Ikram. E-mail firstname.lastname@example.org
University of Illinois
Time and Location MW 8:30-9:45. 1109 Siebel Center.
Instructor Dr. D. Padua.
The Illinois course website
Slides for classes starting on September 20
Homework 1: Exercises 1-5. Due: October 6.
Homework 2: Exercises 6-9. Due: October 13.
Homework 3: Exercises 10-12. Due: October 25.
Homework 4: Exercises 14,15,21-23. Due: November 3.
Homework 5: Exercises 24,29-31. Due: November 17.
Homework 6: Exercises 32,35-37. Due: November 29.
XMT Programming Assignments
Programming Assignments, Tutorial, Manual, Software Tools, and Other Resources
OpenMP Programming Assignments
OpenMP Homework 1:
Do the 3-credit version of MP3 on the University of Illinois web site. Due: October 18, 2010.
OpenMP Homework 2:
Do the OpenMP part of MP5 on the University of Illinois web site. Note that much of this homework statement is based on references to XMT HW2 (which is due Nov 2).
Due: November 8, 2010.
Parallel algorithmic thinking and programming are in increasing demand as multi-cores and other
chip-multiprocessing approaches are deployed. These approaches are in line with the roadmap charted by the major industry vendors.
In fall 2010, Professors Padua and Vishkin are co-teaching their respective
undergraduate courses on parallel algorithms and programming,
each teaching one part of the course to both classes.
The increasing need for parallel programming skills merits this broader coverage.
Much of the core of the computing field needs to be reinvented for many-core parallelism.
The productivity of parallel programming has long been recognized as a limiting factor for parallel computing.
However, the computing research community has yet to develop agreed-upon ways ('benchmarks') for assessing such productivity.
While major vendors urge rising mainstream programmers (e.g., every Computer Science and Computer Engineering major) to embrace general-purpose parallelism,
the history of parallel computing has taught us that programming many parallel machines tends to be quite challenging.
We believe that developing ways for evaluating (and benchmarking) productivity of parallel programming and building some consensus around them can be a first step towards
changing that. This is not without challenges.
Core computer science and engineering research tends to emphasize things we can "truly measure", either analytically
(e.g., complexity analysis or power analysis) or empirically using wall-clock timing, as in the quantitative approach to computer systems. But the type of social-science
evaluation that involves a human factor, as needed here, has yet to gain traction in core CS areas such as algorithms and computer systems.
Idea: A necessary condition for improved productivity is ease-of-programming.
A necessary condition for ease-of-programming is ease-of-teaching, and, of course, ease-of-learning.
Our idea is to make progress toward a better understanding of the relative productivity of parallel programming
approaches by experimentally comparing their ease-of-teaching/learning.
Course Teaching Goals
- Introduction to the theory of parallel algorithms.
The course will highlight parallel algorithmic thinking for obtaining
good speed-ups with respect to serial algorithms.
- Introduction to parallel programming. The course will cover several parallel programming paradigms, as well as strategies for performance tuning for shared-memory, SIMD, and distributed-memory target machines.
Course Research Goals
This joint class is part of a research project whose objective is to experimentally evaluate and compare different parallel programming paradigms
from the point of view of ease-of-teaching/learning ('teachability') and ease-of-use ('usability'). One specific goal is identifying the
characteristics of the most important parallel algorithms and programming models, when these models are seen as tools to learn
parallel programming concepts. A secondary goal is studying the feasibility, advantages and disadvantages of co-teaching a course
involving faculty from different institutions.
Plan: A group of experts in software engineering and in learning (as understood in education research) is being assembled to provide guidance
on the procedure for carrying out the experimental evaluation.
All students will be asked to (make a small effort and) contribute to the research side of the course, but will be given the option not to without affecting their grade.
Prerequisites
- Basic knowledge and understanding of algorithms, data structures,
and computing systems.
- Class notes will be available for all the material covered during the semester.
Some of the material will be drawn from these
class notes (104 pages)
- V. Kumar, A. Grama, A. Gupta, and G. Karypis, Introduction to Parallel Computing: Design and Analysis of Algorithms, Benjamin-Cummings, 1994
- J. JaJa, An Introduction to Parallel Algorithms, Addison-Wesley, 1992
- D.E. Culler and J.P. Singh, Parallel Computer Architecture, Morgan Kaufmann
- J.L. Hennessy and D.A. Patterson, Computer Architecture: A Quantitative Approach, 4th Edition, Morgan Kaufmann, San Francisco, 2007
- W. Gropp, E. Lusk, and A. Skjellum, Using MPI (2nd Ed.): Portable Parallel Programming with the Message-Passing Interface, MIT Press, 1999
Some Core Topics:
- Introduction to Parallel Algorithms:
Performance of Parallel Algorithms, Optimality Notions, Key Abstractions
- Basic Techniques: Balanced Trees, Pointer Jumping, Divide-and-Conquer,
Partitioning, Pipelining, Accelerated Cascading
- Numerical algorithms: Matrix-Matrix multiplication, Reduction, Scan, Recurrence Solvers, Linear System Solvers, Iterative Synchronous and Asynchronous Algorithms
- Trees and Lists: List Ranking, The Euler Tour Technique,
Tree Contraction, Evaluation of Arithmetic Expressions
- Searching, Merging and Sorting
- Graphs: Connected Components, Transitive Closure, Paths Problems
Grading
- 10% - Written homework
- 40% - Programming projects
- 20% - Mid-term exam, October 25, in class
- 30% - Final exam (comprehensive), 8:00-10:00 Saturday, December 18 (based on Testudo).