This course studies efficient algorithms that exploit the full potential of
parallel computer technology. Emphasis is on techniques for extracting
maximum parallelism from numerical algorithms, especially those arising in the
solution of matrix problems and partial differential equations, and on mapping
these algorithms onto parallel architectures. Example applications include image
processing, computational fluid dynamics, and structural analysis. Assignments
will involve programming on parallel machines as available.
Text: Introduction to Parallel Computing, by V. Kumar, A. Grama, A. Gupta, and G. Karypis.
Three hours of lecture per week.
Parallel architectures: vector processors, distributed memory, SMP, NUMA, Beowulf clusters; interconnection topologies. Performance measures: bandwidth, latency, speedup, Amdahl's law.
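For reference, Amdahl's law bounds the achievable speedup when a fraction f of the running time is inherently serial (the notation f and p is chosen here for illustration):

\[
S(p) \;=\; \frac{T(1)}{T(p)} \;=\; \frac{1}{f + (1-f)/p} \;\le\; \frac{1}{f}.
\]

For example, with f = 0.05 and p = 64 processors, S is about 15.4, far below the ideal speedup of 64.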
Parallel programming models, with high-level descriptions, examples, and implementations: MPI, PVM, OpenMP, automatically parallelizing compilers, threads.
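To give the flavor of the message-passing model, here is a minimal MPI program in C; the per-process partial sums it reduces are an arbitrary illustration, not an example drawn from the course materials.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */

        /* Each process contributes one term; MPI_Reduce sums them on rank 0. */
        double local = 1.0 / (double)(rank + 1), total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of 1/i for i = 1..%d: %g\n", size, total);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc and launched with, e.g., mpirun -np 4, every rank runs the same program, and ranks cooperate only through explicit messages.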
Parallel matrix computations: dense and sparse matrix multiplication, Gaussian elimination, tridiagonal solvers; data mapping onto parallel computers. Iterative methods: Jacobi, Gauss-Seidel, SOR, Krylov subspace methods.
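As one concrete instance of the iterative methods listed above, the sketch below applies Jacobi iteration to the standard second-order discretization of -u'' = f on the unit interval, parallelized with OpenMP; the grid size, right-hand side, and tolerance are arbitrary illustrative choices.

    #include <stdio.h>
    #include <math.h>

    #define N 100                       /* interior grid points */

    int main(void) {
        static double u[N + 2], unew[N + 2], f[N + 2];
        double h = 1.0 / (N + 1);
        for (int i = 0; i <= N + 1; i++) f[i] = 1.0;

        /* Jacobi sweep: each new value depends only on old values,
           so the loop carries no data races and parallelizes directly. */
        for (int iter = 0; iter < 100000; iter++) {
            double diff = 0.0;
            #pragma omp parallel for reduction(max:diff)
            for (int i = 1; i <= N; i++) {
                unew[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i]);
                diff = fmax(diff, fabs(unew[i] - u[i]));
            }
            #pragma omp parallel for
            for (int i = 1; i <= N; i++)
                u[i] = unew[i];
            if (diff < 1e-8) {
                printf("converged after %d sweeps\n", iter + 1);
                break;
            }
        }
        return 0;
    }

Gauss-Seidel and SOR reuse updated values within a sweep, which converges faster but imposes an ordering (e.g., red-black) before the sweep parallelizes.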
Fast transforms and N-body methods: the FFT butterfly algorithm, fast multipole methods.
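The butterfly at the heart of the radix-2 FFT combines the length-N/2 transforms E_k and O_k of the even- and odd-indexed samples (standard notation, not specific to the course text):

\[
X_k = E_k + \omega_N^k O_k, \qquad X_{k+N/2} = E_k - \omega_N^k O_k,
\qquad \omega_N = e^{-2\pi i/N}, \quad 0 \le k < N/2,
\]

giving the O(N log N) operation count; the resulting butterfly communication pattern maps naturally onto hypercube-style interconnects.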
Graph partitioning: recursive bisection, spectral methods, Metis and ParMetis.
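Spectral partitioning, for example, splits a graph using the Fiedler vector of its Laplacian (standard formulation):

\[
L = D - A, \qquad L v_2 = \lambda_2 v_2, \qquad
V_+ = \{\, i : (v_2)_i \ge 0 \,\}, \quad V_- = \{\, i : (v_2)_i < 0 \,\},
\]

where D is the diagonal degree matrix, A the adjacency matrix, and v_2 the eigenvector of the second-smallest eigenvalue of L; splitting at the median entry of v_2 instead yields exactly balanced halves.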
Domain decomposition: additive and multiplicative Schwarz, overlapping and nonoverlapping methods.
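In notation standard in the domain-decomposition literature (not quoted from course notes), with R_i the restriction onto subdomain i and A_i = R_i A R_i^T the local problem, one additive Schwarz iteration for A u = b reads

\[
u^{k+1} = u^k + \sum_{i=1}^{p} R_i^{T} A_i^{-1} R_i \left( b - A u^k \right),
\]

where every subdomain correction uses the same residual and can therefore be computed concurrently; the multiplicative variant updates the residual after each correction, converging faster per sweep but serializing across subdomains.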