Parallel and Distributed Computing

(COSC 6422; formerly COSC 5494)

Note: although the title of this course is Parallel and Distributed Computing, the real focus this year will be on parallel computing.


Please reload this page occasionally, as it will be modified frequently.


Contents


Instructor


Class Times and Location

Tuesday   10:30 - 12:00 COSC 6422    321 Petrie  *** note the change
Thursday  10:30 - 12:00 COSC 6422    321 Petrie  *** note the change

Please notify me if you have a time conflict with another
graduate course you would like to take.

Office Hours

List of current office hours

Some of the Overheads Used in Class

  • Architectures
  • Problem Partitioning / Matrix Multiply
  • Parallel Program Performance I
  • Parallel Program Performance II

    Goals / Purpose


    Evaluation

    One Assignment           20%       Due : Tuesday,  February 10
    Project Proposal          5%       Due : Tuesday,  February 10
    Term Exam                20%       On  : Tuesday,  February 24
    Project Presentation     10%       Due : Last week of classes
    Final Project            35%       Due : At your presentation
    Class Participation      10%
    

    You should begin thinking about a project from day one and start working on a project proposal shortly thereafter. You should also start the assignment as soon as possible.


    Assignment


    Mid Term Exam

    The midterm will cover material presented in class up to the day of the exam.

    LCSR Information

    Your assignments and likely your projects will be done in the LCSR (Laboratory for Computer Systems Research).

    General LCSR and Departmental Computing Information


    Calendar Description

    This course investigates fundamental problems in writing efficient and scalable parallel applications, with emphasis on operating systems support and performance evaluation techniques. Part of the course involves designing, writing, and comparing parallel programs in both the message-passing and shared-memory models, while considering the support needed for effective design, implementation, debugging, testing, and performance evaluation of parallel applications and operating systems.
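
    As a rough illustration of the shared-memory side of this (a minimal sketch only; the array size, thread count, and block partitioning below are illustrative assumptions, not course material), a parallel sum using POSIX threads might look like the following. A message-passing version of the same computation would instead exchange partial sums explicitly between processes.

    /*
     * Minimal sketch: shared-memory parallel sum with POSIX threads.
     * Array size, thread count, and partitioning are illustrative only.
     * Compile with something like: cc -o psum psum.c -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 4

    static double data[N];
    static double partial[NTHREADS];      /* one slot per thread: no lock needed */

    static void *sum_block(void *arg)
    {
        long id = (long) arg;
        long lo = id * (N / NTHREADS);
        long hi = (id == NTHREADS - 1) ? N : lo + (N / NTHREADS);
        long i;
        double s = 0.0;

        for (i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;                  /* each thread writes only its own slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        double total = 0.0;
        long i, t;

        for (i = 0; i < N; i++)
            data[i] = 1.0;                /* dummy data so the answer is easy to check */

        for (t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, sum_block, (void *) t);
        for (t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];          /* combine per-thread partial sums */
        }

        printf("total = %f\n", total);    /* should print 1000000.000000 */
        return 0;
    }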

    Expanded Course Description

    The purpose of this course is to present students with an introduction to state-of-the-art techniques for implementing software for high performance computers. This course will first motivate the need for higher performance computers (parallel processing) by providing a high level introduction to a few computationally intensive but significant problem areas. We discuss general issues in parallel computing including: speedup, efficiency, limits to speedup, Amdahl's Law, iso-efficiency, problem decomposition, granularity of computation, load balancing, data locality, and the relationship between software and architecture. Different approaches to writing parallel software for shared-memory and message-passing paradigms are discussed including: parallelizing compilers, parallel languages, and parallel language extensions. We examine current operating systems and issues related to their support of parallel computation (or lack thereof). Other possible topics are: the design and implementation of efficient and effective thread packages, communication mechanisms, process management, virtual memory, and file systems for scalable parallel processing.
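
    As a concrete illustration of the limits to speedup (the symbols and numbers below are illustrative, not course data): Amdahl's Law says that if a fraction s of a program's running time is inherently serial, then the speedup on p processors is bounded by

        speedup(p) <= 1 / (s + (1 - s)/p)

    With s = 0.1, for example, sixteen processors give a speedup of at most 1 / (0.1 + 0.9/16) = 6.4, and no number of processors can push the speedup past 1/s = 10.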

    This course will not only provide students with the background required to conduct research in the area of parallel applications and operating system design, but will also train them to critically and effectively evaluate application and system software performance. Students are expected to have experience in C and UNIX programming, as well as knowledge of operating systems fundamentals at least at the level of COSC 3321. A basic knowledge of uniprocessor and multiprocessor architectures is also helpful.


    List of Some Possible Topics

    Introduction

    Parallel Program Metrics

    Performance Evaluation Methods

    Approaches to Parallelization

    Programming / Performance Issues

    Scheduling

    The World-Wide Supercomputing Project

    Consistency Models

    Distributed Shared Memory (DSM)

    Other Topics


    Some Systems for Parallel Computing


    Potential Reading Topics


    Possible Sources of Information


    Possible Project Topics




    Last modified: January 4, 1998