December 11, 2013
CS 251: Computer Organization and Design
General description
This course enables students to develop an accurate mental model of the architecture and organization of physical computers, and it introduces them to theoretical and practical concepts relevant to the structure and design of modern digital computers. The course also helps students understand and predict the behaviour and performance of computer programs executing on real machines.
Logistics
Audience
- 2B Computer Science students
- Students in the Digital Hardware option should take ECE 222 or 223 instead of CS 251
Normally available
- Fall, Winter, and Spring
Related courses
- Predecessors: CS 136, 138, 145 (before Fall 2011), or 146 (imperative programming and the basic memory model)
- Successors: CS 350, CS 370, CS 450 (and then many upper-year CS courses)
- Co-requisites: Taking CS 241 along with CS 251 works well
- Conflicts: Courses that explore the basics of computer hardware
For official details, see the UW calendar.
Typical reference(s)
- D. Patterson, J. Hennessy, Computer Organization and Design, 5th ed., Morgan Kaufmann, 2014
- D. M. Harris, S. L. Harris, Digital Design and Computer Architecture, 2nd ed., Morgan Kaufmann, 2013
Required preparation
At the start of the course, students should be able to
- Understand the imperative programming model: use array indexing for both read and write (mutable memory model); follow programs written using imperative control flow
- Use basic algebra, calculus, and probability
- Algebra is used for combinational design, where Boolean formulas need to be manipulated according to a set of axioms and theorems
- Calculus is used for some aspects of performance analysis, e.g., limits are taken to predict the theoretically obtainable speedup on parallel computers (Amdahl's law)
- Basic probability theory is assumed when average-case performance is computed as a probability-weighted average, e.g., computing average clock cycles per instruction (CPI), cache-miss penalties, and average disk rotational latencies; a worked example of both kinds of calculation follows this list
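As a small illustration of the kind of arithmetic involved (the numbers below are invented for this outline, not taken from course materials):

    Amdahl's law: if a fraction f of a program's execution time benefits from
    parallelization over p processors, the overall speedup is

        speedup(p) = 1 / ((1 - f) + f/p),  so  lim_{p -> infinity} speedup(p) = 1 / (1 - f)

    For f = 0.9 the speedup is therefore bounded by 1 / 0.1 = 10, no matter how
    many processors are used.

    Average CPI as a probability-weighted sum over instruction classes:

        CPI_avg = sum_i (fraction_i * CPI_i)

    For example, 50% ALU operations at 1 cycle, 30% loads/stores at 2 cycles, and
    20% branches at 3 cycles give CPI_avg = 0.5*1 + 0.3*2 + 0.2*3 = 1.7.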
Learning objectives
At the end of the course, students should be able to
- Design simple combinational and sequential hardware at the logic gate level in order to implement basic algorithms and computations, such as addition, multiplication, and data-path control for processors (a gate-level adder sketch follows this list)
- Describe the number systems used by computer hardware (IEEE floating point and two's complement) at the bit level, including being able to
- Operate on these number representations and compute their values
- Explain the limits of these representations, including rounding error and overflow conditions (a short sketch of these limits follows this list)
- Explain how machine language is executed by hardware, and describe a simple processor architecture for executing a RISC machine language
- Explain simple processor-implementation optimizations, in particular pipelining, and how differences in code ordering can affect performance on processors that use these optimizations, including being able to
- Analyze performance of a code sequence executing on a pipelined architecture
- Reorganize (given) code in order to improve performance on a pipelined architecture
- Explain basic cache and virtual-memory architectures, and how they can impact performance, including being able to
- Analyze a memory-access trace relative to these architectures in order to predict performance
- Compute average-case performance given miss probabilities and miss penalties
- Identify specific cases (such as loop index ordering) that can impact performance, and reorganize such code to improve performance (see the loop-ordering sketch after this list)
- Describe a basic multicore and multiprocessor architecture and explain the key factors that will affect performance and scalability of programs written for these architectures
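To make a few of these objectives concrete, the sketches below are written in C purely for illustration; the identifiers, sizes, and constants are invented for this outline, and the course itself works at the gate and bit level rather than in C. First, a one-bit full adder expressed from its Boolean equations, and a ripple-carry adder built from it:

    #include <stdio.h>

    /* One-bit full adder from the Boolean equations:
         sum  = a XOR b XOR cin
         cout = (a AND b) OR (cin AND (a XOR b)) */
    void full_adder(int a, int b, int cin, int *sum, int *cout) {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (cin & (a ^ b));
    }

    /* 8-bit ripple-carry addition: chain eight full adders, bit by bit */
    unsigned ripple_add8(unsigned x, unsigned y) {
        unsigned result = 0, carry = 0;
        for (int i = 0; i < 8; i++) {
            int s, c;
            full_adder((x >> i) & 1, (y >> i) & 1, (int)carry, &s, &c);
            result |= (unsigned)s << i;
            carry = (unsigned)c;
        }
        return result & 0xFFu;   /* the carry out of bit 7 is discarded */
    }

    int main(void) {
        printf("%u\n", ripple_add8(200, 100));   /* 300 mod 256 = 44 */
        return 0;
    }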
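Next, the limits of fixed-width two's complement and of IEEE 754 single precision (overflow and rounding):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Two's complement: an 8-bit signed value ranges from -128 to 127 */
        int8_t x = 127;               /* bit pattern 0111 1111 */
        int8_t y = (int8_t)(x + 1);   /* 128 does not fit; on two's-complement
                                         hardware the value wraps to -128 */

        /* IEEE 754 single precision: above 2^24 consecutive floats are 2 apart,
           so adding 1.0f is lost to rounding */
        float big = 16777216.0f;      /* 2^24 */
        float sum = big + 1.0f;       /* rounds back to 16777216.0f */

        printf("%d %d\n", x, y);            /* prints: 127 -128 */
        printf("%.1f %.1f\n", big, sum);    /* prints: 16777216.0 16777216.0 */
        return 0;
    }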
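Finally, the loop-index-ordering effect on cache performance: both functions below compute the same sum, but the row-major traversal walks memory sequentially, while the column-major traversal strides N*sizeof(double) bytes between accesses and so misses in the cache far more often once the array no longer fits:

    #include <stdio.h>

    #define N 1024
    static double a[N][N];   /* about 8 MB, larger than typical caches */

    /* Row-major order: consecutive accesses touch adjacent memory, so each
       cache line brought in (e.g., 64 bytes = 8 doubles) is fully used */
    double sum_row_major(void) {
        double s = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    /* Column-major order over the same array: successive accesses are far
       apart in memory, so most of them can miss in the cache */
    double sum_col_major(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_row_major(), sum_col_major());
        return 0;
    }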
Typical syllabus
Introduction (2 hours)
- Overview of computer organization
- Measuring performance
Digital logic design (6 hours)
- Gates, truth tables, and logic equations
- Combinational logic and basic components
- PLAs and ROMs
- Memory elements
- Finite-state machines (informally)
Data representation and manipulation (6 hours)
- Signed and unsigned integer representations
- Addition and subtraction
- Using (informal) finite-state machines to control datapaths
- Multiplication
- IEEE floating-point representation
Basic processor design (5 hours)
- Basic processor datapaths
- Processor design using single-cycle control
- Processor design using multi-cycle control
Pipelining (5 hours)
- Pipelined datapaths
- Data hazards, branch hazards, load-use hazards
Input/Output (3 hours)
- Buses and bus arbitration
- Storage (disks)
- Interrupts
- Point-to-point (routed) networks
- Networks and clusters
Memory Hierarchies (3 hours)
- Caches: direct-mapped, fully-associative, set-associative
- Virtual memory
- Page tables and TLBs
Multi-processing (3 hours)
- Multi-processor systems and multi-core processors
- Synchronization and locking
- Cache coherence