Fundamental Theorem of Markov Chains, PageRank

In this lecture, we will prove the Fundamental Theorem of Markov Chains and discuss the PageRank algorithm. In order to prove the Fundamental Theorem of Markov Chains, we need to review some concepts from linear algebra.

Linear Algebra Review

Eigenvalues, Eigenvectors, and Spectral Radius

Given a square matrix $A \in \mathbb{R}^{n \times n}$, a scalar $\lambda \in \mathbb{C}$ is called an eigenvalue of $A$ if there exists a unit vector $v \in \mathbb{C}^n$ (that is, $\|v\|_2 = 1$) such that $Av = \lambda v$. The vector $v$ is called an eigenvector of $A$ corresponding to the eigenvalue $\lambda$.

The eigenvalues of a matrix $A$ are the roots of the characteristic polynomial $\det(A - tI)$, where $I$ is the identity matrix of size $n \times n$. The characteristic polynomial is a univariate polynomial of degree $n$ in the variable $t$.

The eigenspace corresponding to an eigenvalue $\lambda$ is the set of all eigenvectors corresponding to $\lambda$, together with the zero vector; it is a subspace of $\mathbb{C}^n$.

There are two ways of defining the multiplicity of an eigenvalue $\lambda$:

  1. The algebraic multiplicity of an eigenvalue $\lambda$ is the multiplicity of $\lambda$ as a root of the characteristic polynomial.
  2. The geometric multiplicity of an eigenvalue $\lambda$ is the dimension of the eigenspace corresponding to $\lambda$.

These two notions of multiplicity are equal for symmetric matrices (by the spectral theorem), but can differ for non-symmetric matrices. For instance, the matrix $$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$$ has a single eigenvalue $\lambda = 1$ with algebraic multiplicity 2 and geometric multiplicity 1.
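To make the distinction concrete, one can check this example numerically. The following is a minimal sketch (assuming NumPy; it is not part of the lecture material): the eigenvalue $1$ is a double root of the characteristic polynomial, but its eigenspace is only one-dimensional.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Algebraic multiplicity: 1 appears twice among the roots of det(A - tI).
print(np.linalg.eigvals(A))                        # [1. 1.]

# Geometric multiplicity: the dimension of the null space of (A - I),
# which is n - rank(A - I) = 2 - 1 = 1.
print(2 - np.linalg.matrix_rank(A - np.eye(2)))    # 1
```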

The spectral radius of a matrix $A$ is defined as $$\rho(A) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.$$

The Frobenius norm of a matrix $A$ is defined as $$\|A\|_F = \sqrt{\sum_{i=1}^n \sum_{j=1}^n A_{ij}^2} = \sqrt{\text{trace}(A^T A)}.$$

Note that the Frobenius norm of a matrix upper bounds its spectral radius, i.e., $\rho(A) \leq \|A\|_F$. One can see this as follows: let $\lambda$ be an eigenvalue of $A$ with (unit) eigenvector $v$. Then, we have $$|\lambda|^2 = \|A v\|_2^2 = \langle A v, A v \rangle = \langle v, A^T A v \rangle = \text{trace}(v^* A^T A v) = \text{trace}(A^T A \, v v^*) \leq \text{trace}(A^T A \cdot I) = \|A\|_F^2,$$ where $v^*$ denotes the conjugate transpose of $v$, and the inequality holds because $A^T A$ is positive semidefinite and $v v^* \preceq I$ for a unit vector $v$.

Note that the above argument also shows that the following inequality holds for any unit vector $v$: $$\|A v\|_2 \leq \|A\|_F.$$


Proposition 1 (Gelfand’s formula): For any matrix $A \in \mathbb{R}^{n \times n}$, we have $$\rho(A) = \lim_{k \to \infty} |A^k|_F^{1/k}.$$
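Both the Frobenius bound above and Gelfand's formula are easy to check numerically. Here is a minimal sketch (assuming NumPy; the running renormalization of $B$ is only a device to avoid floating-point overflow when computing $\|A^k\|_F$ for large $k$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))        # an arbitrary test matrix

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius rho(A)
fro = np.linalg.norm(A, 'fro')         # Frobenius norm ||A||_F
print(rho, fro)                        # observe rho(A) <= ||A||_F

# Gelfand's formula: ||A^k||_F^(1/k) -> rho(A) as k -> infinity.
# Invariant: after iteration k, log_norm = log ||A^k||_F.
B, log_norm = A.copy(), 0.0
for k in range(1, 2001):
    s = np.linalg.norm(B, 'fro')
    log_norm += np.log(s)
    if k in (1, 10, 100, 1000, 2000):
        print(k, np.exp(log_norm / k))  # approaches rho(A), always from above
    B = (B / s) @ A                     # B is A^(k+1), rescaled
```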


For two vectors $u, v \in \mathbb{R}^n$, we say that $u \geq v$ if $u_i \geq v_i$ for all $i \in [n]$, and we say that $u > v$ if $u_i > v_i$ for all $i \in [n]$. With these definitions at hand, we have the following easy lemma.


Lemma 2 (Positivity Lemma): Let $A \in \mathbb{R}^{n \times n}$ be a positive matrix, i.e., $A_{ij} > 0$ for all $i, j \in [n]$. Let $u,v \in \mathbb{R}^n$ be distinct vectors such that $u \geq v$. Then, we have $Au > Av$. Moreover, there is $\varepsilon > 0$ such that $Au > (1+\varepsilon) Av$.


Proof: Since $u \geq v$ and $u \neq v$, we have $u - v \neq 0$ and $u - v \geq 0$. Let $\alpha := \min_{i,j \in [n]} A_{ij} > 0$. Then, we have $$(A(u-v))_i = \sum_{j=1}^n A_{ij} (u_j - v_j) \geq \alpha \sum_{j=1}^n (u_j - v_j) > 0.$$ Therefore, $Au > Av$. For the moreover part, take $\varepsilon > 0$ small enough that $(Au)_i > (1+\varepsilon) (Av)_i$ for every $i \in [n]$; this is possible since there are finitely many coordinates and the inequality $(Au)_i > (Av)_i$ is strict in each. $\quad \blacksquare$

We are now ready to state and prove the main tool that we will use to prove the Fundamental Theorem of Markov Chains.

Perron-Frobenius Theorem

We begin with Perron’s theorem for positive matrices.


Theorem 3 (Perron’s Theorem): Let $A \in \mathbb{R}^{n \times n}$ be a positive matrix. Then, the following hold:

  1. The spectral radius $\rho(A)$ is an eigenvalue of $A$, and it has a positive eigenvector $v \in \mathbb{R}^n_{> 0}$.
  2. $\rho(A)$ is the only eigenvalue of $A$ on the circle $\{ z \in \mathbb{C} : |z| = \rho(A) \}$.
  3. $\rho(A)$ has geometric multiplicity 1.
  4. $\rho(A)$ is simple, i.e., its algebraic multiplicity is 1.

Proof: By definition of $\rho(A)$, there exists an eigenvalue $\lambda \in \mathbb{C}$ of $A$ such that $|\lambda| = \rho(A)$. Let $v \in \mathbb{C}^n$ be an eigenvector corresponding to $\lambda$. Let $u \in \mathbb{R}^n$ be defined as $u_i := |v_i|$ for all $i \in [n]$. Then, we have $$(Au)_i = \sum_{j=1}^n A_{ij} u_j \geq \left| \sum_{j=1}^n A_{ij} v_j \right| = |\lambda v_i| = \rho(A) |v_i| = \rho(A) u_i.$$ Therefore, $Au \geq \rho(A) u$. If this inequality were strict in some coordinate, i.e., $Au \neq \rho(A) u$, then by Lemma 2 (applied to the pair $Au \geq \rho(A) u$) we would have $A^2 u > \rho(A) A u$, and there would be some $\varepsilon > 0$ such that $A^2 u > (1+\varepsilon) \rho(A) A u$. By induction, we would have $A^{k+1} u > (1+\varepsilon)^k \rho(A)^k A u$ for all $k \in \mathbb{N}$. Hence, by Gelfand’s formula, setting $w := A u/\|A u\|_2$, we would have $$\rho(A) = \lim_{k \to \infty} \|A^k\|_{F}^{1/k} \geq \lim_{k \to \infty} \|A^k w\|_2^{1/k} \geq \lim_{k \to \infty} \left( (1+\varepsilon)^k \rho(A)^k \right)^{1/k} = (1+\varepsilon) \rho(A),$$ which is a contradiction. Therefore, the inequality $Au \geq \rho(A) u$ must be an equality, and $u$ is a non-negative eigenvector of $A$ corresponding to $\rho(A)$. Moreover, since $A$ is positive, the eigenvector $u$ must be positive, as $$\rho(A) u_i = (Au)_i = \sum_{j=1}^n A_{ij} u_j > 0 \text{ for all } i \in [n].$$ This proves the first part of the theorem.

To prove the second part, let $\lambda \in \mathbb{C}$ be an eigenvalue of $A$ such that $|\lambda| = \rho(A)$ but $\lambda \neq \rho(A)$. Let $z \in \mathbb{C}^n$ be an eigenvector corresponding to $\lambda$, and let $w \in \mathbb{R}^n$ be defined as $w_i := |z_i|$ for all $i \in [n]$. By the argument from the first part (applied to $w$), we must have $Aw = \rho(A) w$; written out coordinatewise, this says $$\sum_{j=1}^n A_{ij} w_j = \rho(A) w_i = \rho(A) |z_i| = |\lambda z_i| = \left| \sum_{j=1}^n A_{ij} z_j \right| \text{ for all } i \in [n].$$

From these conditions, the triangle inequality $\left| \sum_{j} A_{ij} z_j \right| \leq \sum_{j} A_{ij} |z_j|$ holds with equality; since all $A_{ij} > 0$, this forces the entries $z_j$ to share the same complex argument, so there is $\alpha \in \mathbb{C}$ with $|\alpha| = 1$ such that $\alpha z = w$. But in this case, we have $\lambda \alpha z = \alpha A z = A w = \rho(A) w = \rho(A) \alpha z \Rightarrow \lambda = \rho(A)$, which is a contradiction. This proves the second part of the theorem.

Now we are ready to prove item 3: the geometric multiplicity of $\rho(A)$ is 1.

Suppose, for the sake of contradiction, that the geometric multiplicity of $\rho(A)$ is greater than 1. Since $\rho(A)$ is a real eigenvalue of the real matrix $A$, its eigenspace has a real basis, so we may pick linearly independent eigenvectors $u, v \in \mathbb{R}^n$ corresponding to $\rho(A)$; by the first part, we may moreover take $v$ to be positive. Let $\beta := \min_{i \in [n]} u_i / v_i$, so that $u - \beta v \geq 0$ and at least one of the components of $u - \beta v$ is zero. Note that $u - \beta v \neq 0$, as $u$ and $v$ are linearly independent. Then, by Lemma 2 (applied to the pair $u - \beta v \geq 0$), we have $$\rho(A) (u - \beta v) = A(u - \beta v) > 0,$$ which contradicts the fact that $u - \beta v$ has a zero component. This proves the third part of the theorem.

Finally, we prove the fourth part of the theorem: the algebraic multiplicity of $\rho(A)$ is 1.

Let $v \in \mathbb{R}^n$ be a positive eigenvector corresponding to $\rho(A)$, and let $u \in \mathbb{R}^n$ be a positive eigenvector of $A^T$ corresponding to $\rho(A^T)$, which equals $\rho(A)$ by Gelfand’s formula, since $\|(A^T)^k\|_F = \|A^k\|_F$ for all $k$. We know $u$ exists by the first part of the theorem, applied to the positive matrix $A^T$.

Claim: the space $u^\perp := \{ x \in \mathbb{R}^n : u^T x = 0 \}$ is invariant under $A$.

Proof of Claim: Let $x \in u^\perp$. Then, we have $u^T A x = (A^T u)^T x = \rho(A^T) u^T x = 0$, so $Ax \in u^\perp$.

Note that $u^\perp$ is a subspace of $\mathbb{R}^n$ of dimension $n-1$, and $v \notin u^\perp$, as $u^T v > 0$, since both vectors are positive. Hence, we have that $\mathbb{R}^n$ is the direct sum of $u^\perp$ and $\mathrm{span}(v)$. Let $w_2, \ldots, w_n$ be a basis of $u^\perp$, and $B \in \mathbb{R}^{n \times n}$ be the matrix whose columns are $v, w_2, \ldots, w_n$.

By the above, $B$ is invertible, and $B^{-1} A B$ leaves the subspaces $B^{-1} \mathrm{span}(v) = \mathrm{span}(e_1)$ and $B^{-1} u^\perp = \mathrm{span}(e_2, \ldots, e_n)$ invariant. Thus, $B^{-1} A B$ is a block matrix of the form $$B^{-1} A B = \begin{bmatrix} \rho(A) & 0 \\ 0 & C \end{bmatrix}.$$

Since $A$ and $B^{-1} A B$ are similar, they have the same eigenvalues, and $\det(A - t I) = \det(B^{-1} A B - t I) = (\rho(A) - t) \cdot \det(C - t I)$. Thus, if $\rho(A)$ had algebraic multiplicity greater than 1, then $C$ would have $\rho(A)$ as an eigenvalue; the corresponding eigenvector of $C$, viewed inside $\mathrm{span}(e_2, \ldots, e_n)$, would yield an eigenvector of $A$ for $\rho(A)$ that is linearly independent of $v$, so $\rho(A)$ would have geometric multiplicity greater than 1, which is a contradiction. This proves the fourth part of the theorem. $\quad \blacksquare$
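Before moving on, here is a quick numerical sanity check of the theorem on a random positive matrix, as a minimal sketch (assuming NumPy; the matrix is an arbitrary example, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(6, 6))   # a positive matrix

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(-abs(eigvals))        # sort eigenvalues by modulus
top, second = eigvals[order[0]], eigvals[order[1]]

print(abs(top.imag) < 1e-12)             # True: rho(A) is itself an eigenvalue
print(abs(second) < abs(top))            # True: it is alone on its circle

v = eigvecs[:, order[0]].real
v = v * np.sign(v[0])                    # normalize the sign
print((v > 0).all())                     # True: the Perron eigenvector is positive
```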


The Perron-Frobenius theorem is a generalization of Perron’s theorem to non-negative matrices.


Theorem 4 (Perron-Frobenius Theorem): Let $A \in \mathbb{R}^{n \times n}$ be a non-negative matrix, which is irreducible and aperiodic. Then, the following hold:

  1. The spectral radius $\rho(A)$ is an eigenvalue of $A$, and it has a positive eigenvector $v \in \mathbb{R}^n_{> 0}$.
  2. $\rho(A)$ is the only eigenvalue of $A$ on the circle $\{ z \in \mathbb{C} : |z| = \rho(A) \}$.
  3. $\rho(A)$ has geometric multiplicity 1.
  4. $\rho(A)$ is simple, i.e., its algebraic multiplicity is 1.

Proof: By Lemma 1 of Lecture 9, we know that there is a positive integer $m$ such that $A^m$ is positive. Apply Perron’s theorem to $A^m$, and note that the eigenvalues of $A^m$ are the $m$-th powers of the eigenvalues of $A$, with the same eigenvectors. $\quad \blacksquare$
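The key input to this proof is that some power $A^m$ of a non-negative, irreducible, aperiodic matrix is positive. A small illustration of this fact, as a sketch (assuming NumPy; the example, a directed 4-cycle with a self-loop at one vertex, is our own):

```python
import numpy as np

# Adjacency matrix of a directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0 with a
# self-loop at vertex 0: irreducible (the cycle reaches every vertex)
# and aperiodic (closed walks of lengths 1 and 4 exist, and gcd(1, 4) = 1).
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

M = np.eye(4)
for m in range(1, 20):
    M = M @ A                                 # M = A^m
    if (M > 0).all():
        print("A^m is positive for m =", m)   # prints m = 6 here
        break
```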

Fundamental Theorem of Markov Chains

We are now ready to prove (most of) the Fundamental Theorem of Markov Chains.


Theorem 5 (Fundamental Theorem of Markov Chains): Let $P$ be the transition matrix of a finite, irreducible and aperiodic Markov chain. Then, the following statements hold:

  1. There exists a unique stationary distribution $\pi$ of the Markov chain, with $\pi_i > 0$ for all $i \in [n]$, where $n$ is the number of states of the Markov chain.
  2. For any initial distribution $p_0$, we have $$ \lim_{t \to \infty} \Delta_{TV}(P^t \cdot p_0, \pi) = 0.$$
  3. The stationary distribution $\pi$ is given by $$\pi_i = \lim_{t \to \infty} (P^t)_{ii} = \frac{1}{\tau_{ii}},$$ where $\tau_{ii}$ is the expected return time to state $i$.

Proof: We will prove items 1 and 2 of the theorem. As $P$ is the transition matrix of a finite, irreducible, and aperiodic Markov chain, $P$ is a non-negative, irreducible, and aperiodic matrix. By the Perron-Frobenius theorem, there exists a positive eigenvector $v \in \mathbb{R}^n_{> 0}$ of $P$ corresponding to the spectral radius $\rho(P)$, and this eigenvector is unique up to scaling. Moreover, $\rho(P) = 1$: for any non-negative vector $u \in \mathbb{R}^n$ with $\|u\|_1 = 1$, we have $\|Pu\|_1 = 1$, as $Pu$ is the probability distribution of the next state of the Markov chain; applying this to $v/\|v\|_1$ gives $\rho(P) = \|P v\|_1 / \|v\|_1 = 1$. Hence $\pi := v/\|v\|_1$ is the unique stationary distribution of the Markov chain, and it is positive. This proves item 1.

To prove item 2, let $B$ be the change of basis matrix used in the proof of Perron’s theorem, built here from $v$ and a basis of $\mathbf{1}^\perp$, where the all-ones vector $\mathbf{1}$ is a positive eigenvector of $P^T$ for the eigenvalue 1 (the columns of $P$ sum to 1). Then, $B^{-1} P B$ is a block matrix of the form $$B^{-1} P B = \begin{bmatrix} 1 & 0 \\ 0 & C \end{bmatrix},$$ where $C$ is a matrix of size $(n-1) \times (n-1)$ whose eigenvalues lie strictly inside the unit circle. Hence $\rho(C) < 1$, and by Gelfand’s formula $\lim_{t \to \infty} C^t = 0$. Thus, we have $P^t = B \begin{bmatrix} 1 & 0 \\ 0 & C^t \end{bmatrix} B^{-1}$ and therefore $\lim_{t \to \infty} P^t = B \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} B^{-1}$, which is the projection onto $\mathrm{span}(\pi)$ along $\mathbf{1}^\perp$. Writing an arbitrary initial distribution as $p_0 = c \pi + y$ with $y \in \mathbf{1}^\perp$, we get $c = \mathbf{1}^T p_0 = 1$, so $\lim_{t \to \infty} P^t p_0 = \pi$, and in particular $\lim_{t \to \infty} \Delta_{TV}(P^t p_0, \pi) = 0$. This proves item 2. $\quad \blacksquare$
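Items 1 and 2 can also be observed numerically: iterating $p_{t+1} = P p_t$ from any starting distribution converges to $\pi$. A minimal sketch (assuming NumPy and an arbitrary small column-stochastic chain of our choosing, matching the $P^t p_0$ convention used above):

```python
import numpy as np

# A column-stochastic transition matrix (each column sums to 1) of an
# irreducible, aperiodic 3-state chain; P @ p is one step of the chain.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

# Stationary distribution: the (suitably normalized) eigenvector for
# the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P)
v = eigvecs[:, np.argmax(eigvals.real)].real
pi = v / v.sum()

p = np.array([1.0, 0.0, 0.0])      # start deterministically in state 0
for _ in range(100):
    p = P @ p                      # p = P^t p_0

print(pi)
print(np.abs(p - pi).sum() / 2)    # total variation distance, ~ 0
```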

PageRank Algorithm
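PageRank can be phrased directly in the language of this lecture: it is the stationary distribution of a random-surfer Markov chain on the web graph, where damping mixes each step with a uniform random jump, making the transition matrix positive so that the theorems above apply. Here is a minimal sketch of the standard iteration (assuming NumPy, a damping factor of $0.85$, and a small hypothetical link graph; this is an illustration, not the lecture's exact formulation):

```python
import numpy as np

# A small hypothetical link graph: adj[i, j] = 1 if page j links to page i.
adj = np.array([[0, 0, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

n = adj.shape[0]
P = adj / adj.sum(axis=0)      # column-stochastic: follow a uniform outlink

# Damping: with probability d follow a link, otherwise jump uniformly.
# The resulting matrix G is positive, so Perron's theorem applies directly.
d = 0.85
G = d * P + (1 - d) * np.ones((n, n)) / n

p = np.full(n, 1.0 / n)        # start from the uniform distribution
for _ in range(100):
    p = G @ p                  # power iteration: converges to the PageRank vector

print(p, p.sum())              # the stationary distribution; sums to 1
```

Note that this sketch assumes every page has at least one outlink; dangling pages are usually handled by replacing their column with the uniform distribution.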
