# CS472/CM472/CS672 - Goals

## Objectives

In many domains, large linear systems must routinely be solved.
For instance, in many branches of engineering and physics, partial
differential equations are often approximated by large linear
systems; in machine learning, least-squares linear regression also
leads to large linear systems; in computational vision, image denoising
can also be formulated as a linear system; and in decision-theoretic
planning, policy iteration algorithms for Markov decision processes
require the solution of a series of linear systems.

This course surveys a wide range of numerical techniques to solve large
linear systems. More precisely, the course will focus on data
structures and algorithms that exploit sparsity and redundancy in order
to solve linear systems as large as possible. Special attention
will also be paid to stability and convergence issues. Given the
wide range of applications where linear systems arise and the generic
nature of the techniques surveyed, this course should interest all
practitioners from computer science, applied mathematics, computational
mathematics, optimization and
engineering.
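As a concrete illustration of one of the applications above, least-squares regression reduces to a linear system via the normal equations: minimizing ||Ax - b|| leads to (AᵀA)x = Aᵀb. The data below is made up for the sketch.

```python
import numpy as np

# Illustrative only: least-squares regression min_x ||A x - b||_2
# reduces to the linear system (A^T A) x = A^T b (the normal equations).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))   # 100 samples, 3 features (toy data)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # noiseless right-hand side for clarity

# Solving the normal equations is itself a (here small, dense) linear solve.
x = np.linalg.solve(A.T @ A, A.T @ b)
```

In practice one would use a QR or sparse factorization rather than forming AᵀA explicitly, which squares the condition number.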

## Outline

The topics we will cover include:

- Data structures
  - Sparse representations
  - Algebraic decision diagrams
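To give a flavour of sparse representations, here is a minimal compressed sparse row (CSR) sketch; the helper names are illustrative, not from the course.

```python
# Minimal CSR (compressed sparse row) sketch: store only the nonzeros,
# their column indices, and per-row offsets into those arrays.
def to_csr(dense):
    """Convert a dense row-major matrix (list of lists) to CSR arrays."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x touching only the stored nonzeros."""
    y = []
    for i in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

A = [[4, 0, 0],
     [0, 0, 2],
     [1, 0, 3]]
y = csr_matvec(*to_csr(A), [1.0, 1.0, 1.0])  # → [4.0, 2.0, 4.0]
```

The matrix-vector product costs O(nnz) rather than O(n²), which is the basic reason iterative methods pair so well with sparse storage.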

- Direct methods
  - Sparse Gaussian elimination
  - Ordering to minimize work: reverse Cuthill-McKee and minimum degree
  - Stability analysis
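A bare-bones sketch of the reverse Cuthill-McKee idea: breadth-first search visiting neighbours by increasing degree, then reverse the ordering. Real implementations also choose a good (pseudo-peripheral) start node; here the start is a parameter.

```python
from collections import deque

# Illustrative reverse Cuthill-McKee on an adjacency-list graph
# (connected, undirected). Reordering rows/columns this way tends to
# reduce bandwidth, and hence fill-in during sparse Gaussian elimination.
def reverse_cuthill_mckee(adj, start=0):
    visited = {start}
    order = [start]
    queue = deque([start])
    while queue:
        u = queue.popleft()
        # Visit unvisited neighbours in order of increasing degree.
        for v in sorted(adj[u], key=lambda w: len(adj[w])):
            if v not in visited:
                visited.add(v)
                order.append(v)
                queue.append(v)
    return order[::-1]  # Cuthill-McKee order, reversed

# Path graph 0-1-2-3: the ordering is a permutation of the nodes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
perm = reverse_cuthill_mckee(path)  # → [3, 2, 1, 0]
```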

- Basic iterative methods
  - Richardson, Jacobi, Gauss-Seidel, SOR
  - Preconditioning, ILU
  - Convergence analysis
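As a taste of the basic iterative methods, here is a short Jacobi sketch: split A = D + R into its diagonal and off-diagonal parts and iterate x ← D⁻¹(b − Rx). It converges when A is, for example, strictly diagonally dominant; the test matrix below is made up.

```python
import numpy as np

# Illustrative Jacobi iteration (no preconditioning, no safeguards).
def jacobi(A, b, tol=1e-10, max_iter=500):
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # x <- D^{-1} (b - R x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # strictly diagonally dominant
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Each sweep costs one sparse matrix-vector product, which is why such methods scale to systems far too large for dense factorization.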

- Gradient methods
  - Steepest descent, conjugate gradient
  - Convergence analysis
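The conjugate gradient method can be sketched in a few lines for a symmetric positive-definite A; production codes add preconditioning and restarts, and the matrix here is a toy example.

```python
import numpy as np

# Textbook conjugate gradient for symmetric positive-definite A.
# In exact arithmetic it terminates in at most n iterations.
def conjugate_gradient(A, b, tol=1e-10):
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # SPD toy matrix
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Like Jacobi, the method only ever touches A through matrix-vector products, so it works directly on sparse representations.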

- Krylov subspace methods
  - Arnoldi, Lanczos
  - GMRES, CGS
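At the heart of these methods is the Arnoldi process, which builds an orthonormal basis Q of the Krylov subspace together with a small Hessenberg matrix H satisfying AQₖ = Qₖ₊₁H; a compact sketch (modified Gram-Schmidt variant, illustrative only):

```python
import numpy as np

# Illustrative Arnoldi iteration: k steps starting from vector b.
def arnoldi(A, b, k):
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # orthogonalize against basis
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # "happy breakdown"
            return Q[:, : j + 1], H[: j + 2, : j + 1]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q, H = arnoldi(A, np.ones(5), 3)
```

GMRES then solves a small least-squares problem in H to minimize the residual over the Krylov subspace; Lanczos is the symmetric special case, where H becomes tridiagonal.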

- Potential applications
  - Heat diffusion
  - Image denoising
  - Least-squares linear regression
  - Policy iteration for Markov decision processes
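For the last application, the evaluation step of policy iteration is itself a linear solve: for a fixed policy with transition matrix P, reward vector r, and discount γ, the value function satisfies (I − γP)v = r. A tiny two-state example (the numbers are made up):

```python
import numpy as np

# Policy evaluation as a linear system: (I - gamma * P) v = r.
gamma = 0.9
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])        # row-stochastic transition matrix
r = np.array([1.0, 0.0])          # expected one-step rewards

v = np.linalg.solve(np.eye(2) - gamma * P, r)
```

For MDPs with millions of states, P is large but sparse, which is exactly the regime the methods in this course target.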