Linear Programming & Duality Theorems

Mathematical Programming deals with problems of the form

$$ \min f(x) $$ $$ \text{subject to } \quad g_1(x) \leq 0, \quad g_2(x) \leq 0, \quad \dots, \quad g_m(x) \leq 0, \quad x \in \mathbb{R}^n $$

If we do not impose any restrictions on the functions $f$ and $g_i$, the above is a very general family of problems, which includes, for instance, many NP-hard problems, such as quadratic programming and integer programming.

In this lecture we will focus on a particular case of mathematical programming, called Linear Programming, where the functions $f$ and $g_i$ are affine linear functions. In particular, we will learn about linear programs and their strong duality theorem.

Traces of the idea of linear programming can be found in the works of Fourier, and linear programming was first formally studied in the works of Kantorovich, Koopmans, Dantzig, and von Neumann in the 1940s and 1950s.

Linear Programming

An affine linear function $f : \mathbb{R}^n \to \mathbb{R}$ is a function of the form $$ f(x) = c^T x + b = c_1 x_1 + c_2 x_2 + \dots + c_n x_n + b $$ where $c \in \mathbb{R}^n$ and $b \in \mathbb{R}$.

A linear program is a mathematical programming problem of the form $$ \min c^T x $$ $$ \text{subject to } \quad Ax \leq b, \quad x \in \mathbb{R}^n $$ where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
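To make this concrete, here is a minimal numerical sketch (assuming SciPy is available; the data $A$, $b$, $c$ below are made up for illustration) that solves a small LP of exactly this form:

```python
# Minimal sketch: solve  min c^T x  subject to  A x <= b,  x in R^2,
# using SciPy's linprog (assumes SciPy is installed; data is illustrative).
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -1.0])               # objective vector
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])               # constraint matrix
b = np.array([4.0, 6.0])                 # right-hand side

# bounds=(None, None) makes each variable free, matching x in R^n.
res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print(res.x, res.fun)                    # optimal point (1.6, 1.2), value -2.8
```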

Given any linear program, we define the feasible region as the set of points that satisfy the constraints of the linear program, i.e., for the above linear program we define the feasible region as $$ \text{Feasible region} := \{ x \in \mathbb{R}^n : Ax \leq b \}. $$

One important property of linear programs is that the feasible region is always a convex set. A convex set $K$ is a set such that for any two points $x, y \in K$, the line segment connecting $x$ and $y$ is also in $K$. In other words, for any $x, y \in K$ and any $\lambda \in [0, 1]$, we have that the convex combination $\lambda x + (1-\lambda) y \in K$.
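Indeed, if $Ax \leq b$ and $Ay \leq b$, then for any $\lambda \in [0, 1]$, $$ A(\lambda x + (1-\lambda) y) = \lambda Ax + (1-\lambda) Ay \leq \lambda b + (1-\lambda) b = b, $$ where the inequality uses $\lambda, 1-\lambda \geq 0$.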

Since the feasible region is defined by a finite set of linear inequalities, we also know that it is a convex polyhedron (called a polytope when it is bounded).

As convex combinations will be important for us, let us define them more formally. Given a set of points $x_1, x_2, \dots, x_k \in \mathbb{R}^n$, a convex combination of these points is a point of the form $$ \lambda_1 x_1 + \lambda_2 x_2 + \dots + \lambda_k x_k $$ where $\lambda_1, \lambda_2, \dots, \lambda_k \geq 0$ and $\lambda_1 + \lambda_2 + \dots + \lambda_k = 1$.

Standard Form

We can always represent a linear program in the following standard form: $$ \min c^T x $$ $$ \text{subject to } \quad Ax = b, \quad x \geq 0, \quad x \in \mathbb{R}^n $$ where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $c \in \mathbb{R}^n$, and we say that $x \geq 0$ if $x_i \geq 0$ for all $i = 1, 2, \dots, n$.
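The conversion behind this claim is mechanical: replace each free variable $x$ by $x^+ - x^-$ with $x^+, x^- \geq 0$, and turn each inequality into an equality by adding a slack variable $s_i \geq 0$. A small sketch of this recipe (the function name and layout are my own choices):

```python
# Sketch: convert  min c^T x  s.t.  A x <= b  (x free)  into standard form
#   min c_std^T z  s.t.  A_std z = b,  z >= 0,  with  z = (x_plus, x_minus, s).
import numpy as np

def to_standard_form(A, b, c):
    m, n = A.shape
    # A x <= b  becomes  A x_plus - A x_minus + s = b  with  z >= 0.
    A_std = np.hstack([A, -A, np.eye(m)])
    c_std = np.concatenate([c, -c, np.zeros(m)])  # slacks do not affect the cost
    return A_std, b, c_std
```

Any feasible $x$ of the original program corresponds to a feasible $z$ with the same objective value (take $x^+ = \max(x, 0)$, $x^- = \max(-x, 0)$, $s = b - Ax$), and vice versa.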

Important Questions

Given a linear program, which we assume is in standard form, we are interested in answering the following questions:

  1. When is a linear program feasible (i.e., is there a solution to the constraints)?

  2. When is a linear program bounded (i.e., is the objective we are trying to minimize bounded from below over the feasible region)?

  3. Can we characterize the optimal solutions to a linear program?

    3.1 How do we know if a solution is optimal?

    3.2. Do the optimal solutions have a nice description?

    3.3. Do the optimal solutions have small bit complexity?

  4. Can we efficiently solve a linear program?

Structure of Linear Programs

To address the questions above, we will first study the structure of linear programs.

A first observation is that the feasible region of a linear program is a convex polyhedron, as it is the intersection of a finite number of half-spaces.

We are now ready to state the fundamental theorem of linear inequalities, proved by Farkas (1894, 1898) and Minkowski (1896).


Theorem 1 (Fundamental Theorem of Linear Inequalities): Let $a_1, a_2, \dots, a_m, b \in \mathbb{R}^n$, and let $r := \text{rank}\{a_1, a_2, \dots, a_m, b\}$. Then, exactly one of the following holds:

  1. $b$ is a non-negative linear combination of $a_1, a_2, \dots, a_m$.

  2. There exists a hyperplane $H := \{x \in \mathbb{R}^n : c^T x = 0\}$ such that

    2.1. $b$ is in the open half-space $H^+ := \{x \in \mathbb{R}^n : c^T x > 0\}$.

    2.2. $a_1, a_2, \dots, a_m$ are in the half-space $H^- := \{x \in \mathbb{R}^n : c^T x \leq 0\}$.

    2.3. $H$ contains $r-1$ linearly independent vectors from $\{a_1, a_2, \dots, a_m\}$.


Translating to the affine setting, if one takes vectors $\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}$ and a vector $\vec{b}$, given by $\vec{a_i} = \begin{pmatrix}1 \\ a_i\end{pmatrix}$ and $\vec{b} = \begin{pmatrix}1 \\ b\end{pmatrix}$, the theorem states that exactly one of the following holds:

  1. $\vec{b}$ is a convex combination of $\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}$.

  2. There exists a hyperplane $H := \{x \in \mathbb{R}^{n+1} : c^T x = 0\}$ such that

    2.1. $\vec{b}$ is in the open half-space $H^+ := \{x \in \mathbb{R}^{n+1} : c^T x > 0\}$.

    2.2. $\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}$ are in the half-space $H^- := \{x \in \mathbb{R}^{n+1} : c^T x \leq 0\}$.

    2.3. $H$ contains $r-1$ linearly independent vectors from $\{\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}\}$.

One can see that the above follows from the fundamental theorem: since the first coordinate of each $\vec{a_i}$ and of $\vec{b}$ equals $1$, the coefficients of any non-negative linear combination of $\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}$ giving $\vec{b}$ must sum to $1$, i.e., the combination is a convex combination.

Remark 1: Any hyperplane $H$ which separates $\vec{b}$ from $\vec{a_1}, \vec{a_2}, \dots, \vec{a_m}$ is called a separating hyperplane.

Farkas’ Lemma


Lemma 1 (Farkas’ Lemma): Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The following two statements are equivalent:

  1. There exists $x \in \mathbb{R}^n$ such that $Ax = b$ and $x \geq 0$.
  2. For all $y \in \mathbb{R}^m$, if $y^T A \geq 0$, then $y^T b \geq 0$.

There are two equivalent formulations of Farkas’ Lemma, which will be useful for us.


Lemma 2 (Farkas’ Lemma - variant 1): Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following holds:

  1. There exists $x \in \mathbb{R}^n$ such that $Ax = b$ and $x \geq 0$.
  2. There exists $y \in \mathbb{R}^m$ such that $y^T b > 0$ and $y^T A \leq 0$.


Lemma 3 (Farkas’ Lemma - variant 2): Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. The following two statements are equivalent:

  1. There exists $x \in \mathbb{R}^n$ such that $Ax \leq b$
  2. For all $y \in \mathbb{R}^m$ such that $y^T A = 0$ and $y \geq 0$, we have $y^T b \geq 0$.

Proof of Lemma 3: Let $M = [I \quad A \quad -A]$. Then $Ax \leq b$ has a solution if and only if $Mz = b$ has a solution with $z \geq 0$: writing $z = (s, x^+, x^-) \geq 0$, the equation $Mz = s + Ax^+ - Ax^- = b$ shows that $x := x^+ - x^-$ satisfies $Ax \leq b$; conversely, given $x$ with $Ax \leq b$, take $x^+ := \max(x, 0)$, $x^- := \max(-x, 0)$, and $s := b - Ax \geq 0$. By Farkas’ Lemma (Lemma 1), feasibility of $Mz = b, z \geq 0$ is equivalent to the statement that for all $y \in \mathbb{R}^m$ such that $y^T M \geq 0$, we have $y^T b \geq 0$. Since $y^T M = [y^T \quad y^T A \quad -y^T A]$, the condition $y^T M \geq 0$ holds precisely when $y \geq 0$ and $y^T A = 0$, which gives the claim.
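Farkas’ Lemma is also algorithmically meaningful: an LP solver either finds a feasible point or a certificate $y$ as in Lemma 2. A hedged sketch (assuming SciPy is available; `farkas_alternative` is my own illustrative helper, not a library function):

```python
# Sketch: given A, b, return either x >= 0 with A x = b, or a Farkas
# certificate y with y^T A <= 0 and y^T b > 0 (the alternative of Lemma 2).
import numpy as np
from scipy.optimize import linprog

def farkas_alternative(A, b):
    m, n = A.shape
    # Try to find a feasible x >= 0 with A x = b; the objective is irrelevant.
    feas = linprog(np.zeros(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
    if feas.status == 0:
        return "feasible", feas.x
    # Otherwise maximize y^T b subject to y^T A <= 0. Bounding y in [-1, 1]
    # keeps this auxiliary LP bounded; rescaling y preserves both sign conditions.
    cert = linprog(-b, A_ub=A.T, b_ub=np.zeros(n), bounds=[(-1, 1)] * m)
    return "infeasible, certificate:", cert.x

# Example: b = (-1, 1) is not in the cone generated by the columns of I.
print(farkas_alternative(np.eye(2), np.array([-1.0, 1.0])))
```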

Duality Theory

Given a linear program in standard form $$ \min c^T x $$ $$ \text{subject to } \quad Ax = b, \quad x \geq 0, \quad x \in \mathbb{R}^n $$ from Farkas’ Lemma we know that the feasible region is non-empty if and only if, for every $y \in \mathbb{R}^m$ such that $y^T A \geq 0$, we have $y^T b \geq 0$.

If we look at what happens when we multiply the constraints by $y^T$, we note the following: for any feasible $x$ (so $Ax = b$ and $x \geq 0$), $$ y^T A \leq c^T \text{ and } x \geq 0 \Rightarrow y^T A x \leq c^T x \Rightarrow y^T b \leq c^T x, $$ where the last implication uses $Ax = b$.

Thus, if we can find a $y$ such that $y^T A \leq c^T$, then we have that $y^T b \leq c^T x$ for all feasible $x$. Thus, $y^T b$ is a lower bound on the optimal value of the linear program.

This motivates the following definition.


Definition 1 (Dual Linear Program): The dual linear program of a linear program in standard form $$ \min c^T x $$ $$ \text{subject to } \quad Ax = b, \quad x \geq 0, \quad x \in \mathbb{R}^n $$ is the linear program $$ \max y^T b $$ $$ \text{subject to } \quad y^T A \leq c^T, \quad y \in \mathbb{R}^m. $$
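Here is a hedged numerical illustration of this primal–dual pair (assuming SciPy is available; the data are made up): we solve a tiny standard-form primal and its dual side by side and compare the two optimal values.

```python
# Sketch: a standard-form primal  min c^T x  s.t.  A x = b, x >= 0,
# and its dual  max b^T y  s.t.  A^T y <= c  (y free), solved side by side.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0]])     # single constraint: x1 + x2 + x3 = 1
b = np.array([1.0])
c = np.array([2.0, 1.0, 3.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# linprog minimizes, so we minimize -b^T y in order to maximize b^T y.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

print(primal.fun, -dual.fun)        # both print 1.0, anticipating strong duality
```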


Practice Problem: prove that the dual of the dual linear program is the primal linear program.

By the above discussion we have proved that the optimal value of the dual linear program is a lower bound on the optimal value of the primal linear program. This is the content of the following theorem, known as the Weak Duality Theorem.


Theorem 2 (Weak Duality Theorem): Let $x$ be a feasible solution to the primal linear program and let $y$ be a feasible solution to the dual linear program. Then $c^T x \geq y^T b$.


Let $\alpha$ be the optimal value of the primal linear program and let $\beta$ be the optimal value of the dual linear program. The Weak Duality Theorem states that $\alpha \geq \beta$. Moreover, if the primal problem is unbounded, i.e. $\alpha = -\infty$, then the dual problem must be infeasible, and we set $\beta = -\infty$ (the maximum over an empty set). Similarly, if the dual problem is unbounded, i.e. $\beta = +\infty$, then the primal problem must be infeasible, and we set $\alpha = +\infty$ (the minimum over an empty set).
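The converses fail: in fact, both problems can be infeasible simultaneously. For instance, take $$ A = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}, \quad b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ -2 \end{pmatrix}: $$ the primal constraints $Ax = b$ add up to $0 = 2$, while the dual constraints $y^T A \leq c^T$ demand both $y_1 - y_2 \leq 1$ and $y_1 - y_2 \geq 2$.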

Now it is natural to ask whether the inequality $\alpha \geq \beta$ can be strict, or if it is actually an equality. The answer to this question is given by the Strong Duality Theorem, which states that under feasibility conditions, the optimal values of the primal and dual linear programs are always equal!


Theorem 3 (Strong Duality Theorem): If the primal and dual linear programs have feasible solutions, then the optimal values of the primal and dual linear programs are equal, i.e., $\alpha = \beta$.


Proof of Strong Duality Theorem: Since we have proved weak duality, to prove that $\alpha = \beta$ it suffices to show that the following LP has a feasible solution: $$ \max 0 $$ $$ \text{subject to } \quad c^T x - y^T b \leq 0, \quad Ax = b, \quad x \geq 0, \quad y^T A \leq c^T, \quad y \in \mathbb{R}^m. $$ Indeed, any feasible pair $(x, y)$ of this LP satisfies $c^T x \leq y^T b$, while weak duality gives $c^T x \geq y^T b$; hence $c^T x = y^T b$, so $\alpha \leq c^T x = y^T b \leq \beta$, which combined with $\alpha \geq \beta$ yields $\alpha = \beta$. To show that this program has a feasible solution, given that the primal and dual linear programs have feasible solutions, we can use variant 2 of Farkas’ Lemma (Lemma 3).

The above LP can be encoded in matrix form as $$ \max 0 $$ $$ \text{subject to } \quad B \begin{pmatrix}x \\ y\end{pmatrix} \leq d, \quad \text{where } B := \begin{pmatrix} c^T & -b^T \\ A & 0 \\ -A & 0 \\ 0 & A^T \\ -I & 0 \end{pmatrix}, \quad d := \begin{pmatrix} 0 \\ b \\ -b \\ c \\ 0 \end{pmatrix}.$$

Variant 2 of Farkas’ Lemma (Lemma 3) states that this LP has a feasible solution if and only if for all $z \in \mathbb{R}^{1+2(n+m)}$ such that $z^T B = 0$ and $z \geq 0$, we have $z^T d \geq 0$.

Let $z^T = \begin{pmatrix}\lambda & u^T & v^T & w^T & e^T\end{pmatrix}$, where $\lambda \in \mathbb{R}$, $u, v \in \mathbb{R}^m$, $w \in \mathbb{R}^n$, and $e \in \mathbb{R}^n$. Then the above condition is equivalent to the following: $$ z^T B = 0 \text{ and } z \geq 0 \Rightarrow u^T b - v^T b + w^T c \geq 0.$$ We have two cases to consider:

  1. $\lambda > 0$: In this case, from $z^T B = 0$ we get the following equations: $$ \lambda c^T + u^T A - v^T A - e^T = 0, \quad -\lambda b + A w = 0. $$ From the first equation we get that $u^T A - v^T A = -\lambda c^T + e^T \geq -\lambda c^T$, as $e^T \geq 0$. Since $w \geq 0$, we have that $$ (u^T - v^T) A w \geq -\lambda c^T w, $$ which when combined with the second equation gives $$ \lambda (u^T - v^T) b \geq -\lambda c^T w \Rightarrow u^T b - v^T b + w^T c \geq 0, $$ where the last implication follows by dividing by $\lambda > 0$.

  2. $\lambda = 0$: In this case, from $z^TB = 0$ we get the following equations: $$ u^T A - v^T A -e^T = 0, \quad A w = 0. $$

Let $x, y$ be feasible solutions to the primal and dual linear programs, respectively (which exist by assumption). Then $x \geq 0$, $Ax = b$, and $y^T A \leq c^T$.

Thus, from $w \geq 0$, $y^T A \leq c^T$, and the second equality above, we have that $c^T w \geq y^T A w = 0$. Moreover, from the first equality above and from $Ax = b$, $x \geq 0$, we have that $$ (u^T - v^T) A = e^T \geq 0, \ x \geq 0 \Rightarrow (u^T - v^T) A x \geq 0 \Rightarrow u^T b - v^T b \geq 0.$$ Thus, we have that $u^T b - v^T b + w^T c \geq 0$.

In both cases we have that $u^T b - v^T b + w^T c \geq 0$, which completes the proof of the Strong Duality Theorem.

Farkas’ Lemma - Affine Form

A consequence of Strong Duality is the following affine form of Farkas’ Lemma.


Lemma 4 (Farkas’ Lemma - Affine Form): Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Let the system $Ax \leq b$ be feasible, and suppose that the inequality $c^T x \leq \delta$ holds whenever $x$ satisfies $Ax \leq b$. Then there exists $\delta' \leq \delta$ such that the linear inequality $c^T x \leq \delta'$ is a non-negative linear combination of the inequalities in the system $Ax \leq b$.


Practice Problem: use LP duality and Farkas’ Lemma to prove the above lemma.

Complementary Slackness

If both primal and dual linear programs have feasible solutions, and if $x$ is a feasible solution to the primal linear program and $y$ is a feasible solution to the dual linear program, then the following conditions are equivalent:

  1. $x$ is optimal for the primal linear program and $y$ is optimal for the dual linear program.
  2. $c^T x = y^T b$.
  3. For all $i \in [n]$, if $x_i > 0$, then the $i$-th inequality in $y^T A \leq c^T$ is tight at $y$; that is, $y^T A_i = c_i$, where $A_i$ denotes the $i$-th column of $A$.

Note that 1 and 2 are equivalent by the Strong Duality Theorem, and 2 and 3 are equivalent by the following equation: $$ c^T x - y^T b = c^T x - y^T Ax = (c^T - y^T A)x = \sum_{i=1}^n (c_i - y^T A_i)x_i. $$ Since $x \geq 0$ and $c^T - y^T A \geq 0$, every summand is non-negative, so the sum is zero if and only if each term $(c_i - y^T A_i)x_i$ is zero.
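Condition 3 is easy to verify numerically. A sketch (the helper name is my own; it can be applied to the primal/dual pair solved in the earlier duality snippet):

```python
# Sketch: check complementary slackness for a primal/dual feasible pair (x, y):
# for every i, either x_i = 0 or the dual constraint y^T A_i = c_i is tight.
import numpy as np

def complementary_slackness_holds(A, c, x, y, tol=1e-8):
    dual_slack = c - A.T @ y                    # entries c_i - y^T A_i >= 0
    return bool(np.all((x <= tol) | (np.abs(dual_slack) <= tol)))

# With the solutions from the earlier duality snippet:
#   complementary_slackness_holds(A, c, primal.x, dual.x)  ->  True
```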

Conclusion

In this lecture, we have learned about mathematical programming, its generality, and we have studied the structure of a particular case of mathematical programming, called Linear Programming.

As we will see in the next lecture, Linear Programming is a very powerful tool not only for optimization: the duality theory of Linear Programming has many applications in computer science, economics, and other areas.

References

This lecture was prepared based on the following references:

  1. Schrijver, A. (1986). Theory of Linear and Integer Programming.