Linear Programming Relaxations and Rounding

Many important optimization problems are NP-hard. A prominent example is the Integer Linear Programming (ILP) problem, which is defined as follows: $\min c^\top x$ subject to $Ax \ge b$, $x \in \mathbb{N}^n$, where $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$ are given. The goal is to find an integer vector $x$ that minimizes the objective function while satisfying the constraints.

One advantage of ILPs is that they can model a wide range of combinatorial optimization problems. However, this very same property makes them computationally intractable in general. But we do know how to solve linear programming (LP) problems efficiently. So, a natural question is: can we get good solutions when we relax the integrality constraints from an ILP and solve the corresponding LP instead? Can we round the solution we obtained from the LP to get a good solution for the ILP?

If we manage to do the above, we would be getting “pretty good” solutions for a wide range of combinatorial optimization problems. This is the idea behind the method of linear programming relaxations and rounding: we are content with approximately optimal solutions that we can find efficiently.

Here is a high-level overview of the method:

  1. Formulate the combinatorial optimization problem as an ILP. (say by minimizing some objective function)

  2. Relax the integrality constraints to get an LP. (This is called the linear programming relaxation of the ILP.)

    2.1. We are still minimizing the same objective function, but over a (potentially) larger feasible region. Hence $\mathrm{opt}(LP) \le \mathrm{opt}(ILP)$.

  3. Solve the LP to get a fractional solution. This can be done efficiently.

    3.1. If the fractional solution is already integral, we are done, as it is then an optimal solution to the ILP.

    3.2. Otherwise, we need to round the fractional solution to get an integral solution. This is the tricky part. In this case, we need to design a rounding algorithm that transforms the fractional solution into an integral solution while preserving the objective function value as much as possible. Thus, we would like to find a rounding algorithm and a value $\alpha \ge 1$ such that $\mathrm{opt}(LP) \le \text{value of the rounded solution} \le \alpha \cdot \mathrm{opt}(LP) \le \alpha \cdot \mathrm{opt}(ILP)$.

Example: Vertex Cover

Given a weighted graph $G = (V, E, w)$, where $w : V \to \mathbb{R}_+$, we would like to find a minimum weight vertex cover. That is, we want to find a set $S \subseteq V$ such that for every edge $\{u, v\} \in E$, at least one of $u$ or $v$ is in $S$, and the total weight of the vertices in $S$ is minimized.

Following our strategy, we can formulate the problem as an ILP:
$$\begin{aligned} \min \quad & \sum_{v \in V} w(v)\, x_v \\ \text{subject to} \quad & x_u + x_v \ge 1 && \text{for every edge } \{u, v\} \in E \\ & x_v \in \{0, 1\} && \text{for every } v \in V \end{aligned}$$

We can interpret the above ILP as follows: $x_v = 1$ if and only if vertex $v$ is in the vertex cover $S$. The inequality constraints ensure that for every edge $\{u, v\}$, at least one of $u$ or $v$ is in the vertex cover. The objective function is the total weight of the vertices in the vertex cover, which we want to minimize.

The linear programming relaxation of the above ILP is obtained by relaxing the integrality constraints on $x_v$. This gives us the following LP:
$$\begin{aligned} \min \quad & \sum_{v \in V} w(v)\, x_v \\ \text{subject to} \quad & x_u + x_v \ge 1 && \text{for every edge } \{u, v\} \in E \\ & 0 \le x_v \le 1 && \text{for every } v \in V \end{aligned}$$

Now we use an efficient LP solver to solve the above LP. Let $z$ be the optimal solution we obtain. We now need to devise a rounding algorithm that transforms the fractional solution $z$ into an integral solution $S$.

In this case, we can use the following simple rounding algorithm: round each $z_v$ to the nearest integer. Let $y_v$ be the rounded value of $z_v$. Then $y_v = 1$ if $z_v \ge 1/2$, and $y_v = 0$ otherwise. Note that $y_v \le 2 z_v$ for every vertex $v \in V$. Moreover, we can see that $y$ encodes a vertex cover: for every edge $\{u, v\}$, since $z_u + z_v \ge 1$, at least one of $z_u$ and $z_v$ is at least $1/2$, so $y_u + y_v \ge 1$, which means that at least one of $u$ or $v$ is in the vertex cover encoded by $y$.

Finally, we can analyze the approximation guarantee of the above rounding algorithm. The cost of the solution given by $y$ is:
$$\sum_{v \in V} w(v)\, y_v \le 2 \sum_{v \in V} w(v)\, z_v = 2\, \mathrm{opt}(LP) \le 2\, \mathrm{opt}(ILP).$$

Thus, we have obtained a 2-approximation algorithm for the vertex cover problem using linear programming relaxations and rounding.
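To make the whole pipeline concrete, here is a minimal sketch in Python using scipy.optimize.linprog. The particular graph and weights are made-up assumptions for illustration; any instance in the same format would do.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (an assumption for this sketch): a 4-cycle with vertex weights.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
w = np.array([2.0, 1.0, 2.0, 1.0])

# LP relaxation: min w^T x  s.t.  x_u + x_v >= 1 for each edge, 0 <= x_v <= 1.
# linprog solves min c^T x s.t. A_ub @ x <= b_ub, so the covering constraints are negated.
A_ub = np.zeros((len(edges), len(vertices)))
for i, (u, v) in enumerate(edges):
    A_ub[i, u] = A_ub[i, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c=w, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * len(vertices))
z = res.x  # fractional optimum

# Deterministic rounding: y_v = 1 exactly when z_v >= 1/2.
cover = [v for v in vertices if z[v] >= 0.5]
print("fractional solution:", z, "cover:", cover, "weight:", w[cover].sum())
```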

Main Example: Set Cover

The set cover problem is a classic combinatorial optimization problem, generalizing the vertex cover problem.

  • Input: A finite set $U$ (the universe) and a collection $\mathcal{S} = \{S_1, S_2, \ldots, S_n\}$ of subsets of $U$.
  • Output: A minimum-size subcollection $\mathcal{C} \subseteq \mathcal{S}$ such that $\bigcup_{S \in \mathcal{C}} S = U$.

We could also have a weighted version of the set cover problem, where each set $S_i$ has a non-negative weight $w_i$ associated with it, and the goal is to minimize the total weight of the sets in the subcollection $\mathcal{C}$.

Let us now formulate the weighted set cover problem as an ILP:

$$\begin{aligned} \min \quad & \sum_{i=1}^{n} w_i x_i \\ \text{subject to} \quad & \sum_{i : v \in S_i} x_i \ge 1 && \text{for every } v \in U \\ & x_i \in \{0, 1\} && \text{for every } i = 1, 2, \ldots, n \end{aligned}$$

The above ILP can be interpreted as follows: $x_i = 1$ if and only if set $S_i$ is in the subcollection $\mathcal{C}$. The inequality constraints ensure that every element $v \in U$ is covered by at least one set in $\mathcal{C}$. The objective function is the total weight of the sets in the subcollection $\mathcal{C}$, which we want to minimize.

Now we can proceed with our method of linear programming relaxations and rounding. The linear programming relaxation of the above ILP is obtained by relaxing the integrality constraints on $x_i$. With this, we get the following LP:

$$\begin{aligned} \min \quad & \sum_{i=1}^{n} w_i x_i \\ \text{subject to} \quad & \sum_{i : v \in S_i} x_i \ge 1 && \text{for every } v \in U \\ & 0 \le x_i \le 1 && \text{for every } i = 1, 2, \ldots, n \end{aligned}$$

Suppose we solve the above LP and obtain an optimal solution $z$. We now need to come up with a rounding algorithm that transforms the fractional solution $z$ into an integral solution $\mathcal{C}$.
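As a sketch of this step, the relaxation can be set up and solved just as in the vertex cover example; the universe, sets, and weights below are made-up example data.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up instance: universe {0,...,4} covered by four weighted sets.
universe = [0, 1, 2, 3, 4]
sets = [{0, 1, 2}, {1, 3}, {2, 4}, {0, 3, 4}]
weights = np.array([3.0, 1.0, 1.0, 2.0])

# Covering constraint for each element v: sum_{i : v in S_i} x_i >= 1,
# negated to match linprog's A_ub @ x <= b_ub convention.
A_ub = np.array([[-1.0 if v in S else 0.0 for S in sets] for v in universe])
b_ub = -np.ones(len(universe))

res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * len(sets))
z = res.x  # one fractional value per set
```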

Can we just round each $z_i$ to the nearest integer as we did in the vertex cover example? Not really. Say a given element $v$ belongs to 20 of the sets in $\mathcal{S}$, and the optimal solution $z$ assigns a value of $1/20$ to each of these sets. Then the above rounding algorithm would round each of these values to $0$, which is not good, as it would leave $v$ uncovered.

Instead, we can think of $z_i$ as the “probability” that we would pick set $S_i$ in an optimal solution. This way, $z$ describes a set of “optimal probability distributions” over the sets in $\mathcal{S}$. Given this interpretation, we can use the following randomized rounding algorithm:


Algorithm 1: Random Pick

  • Input: A fractional solution $z$ to the LP relaxation of the weighted set cover problem.
  • Output: A subcollection $\mathcal{C}$ of sets in $\mathcal{S}$. (which hopefully covers $U$)
  1. Set $\mathcal{C} = \emptyset$.
  2. For $i = 1, 2, \ldots, n$:
    • Pick set $S_i$ with probability $z_i$.
    • If $S_i$ is picked, add it to $\mathcal{C}$.
  3. Return $\mathcal{C}$.
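A direct transcription of Algorithm 1 as a Python sketch (representing $\mathcal{C}$ by the indices of the picked sets is just a convenient choice):

```python
import random

def random_pick(z):
    """One round of Random Pick: include set S_i independently with probability z_i."""
    return {i for i, z_i in enumerate(z) if random.random() < z_i}
```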

Note that the expected weight of the subcollection $\mathcal{C}$ output by the above algorithm is exactly the value of the fractional solution $z$. But will $\mathcal{C}$ be a set cover?

Let us consider the Random Pick algorithm from the perspective of an element $v \in U$. Say $v \in S_1, S_2, \ldots, S_k$ and $v \notin S_{k+1}, S_{k+2}, \ldots, S_n$ (reordering the sets for simplicity). As long as we select at least one of $S_1, S_2, \ldots, S_k$, we are good (with respect to $v$). Note that we select set $S_i$ with probability $z_i$, and we know that $\sum_{i=1}^{k} z_i \ge 1$, as $z$ is a feasible solution to the LP. What is the probability that $v$ is covered by Random Pick? It is definitely not $1$: in the case where $k = 2$ and $z_1 = z_2 = 1/2$, for instance, $v$ is covered with probability only $1 - (1/2)^2 = 3/4$.

The following lemma gives us a bound on the probability that an element $v \in U$ is covered by the Random Pick algorithm:


Lemma 1 (Probability of covering an element): In a sequence of $k$ independent experiments, in which the $i$th experiment succeeds with probability $p_i$, and $\sum_{i=1}^{k} p_i \ge 1$, the probability that at least one of the experiments succeeds is at least $1 - 1/e$.


Proof: Using the inequality $1 - p \le e^{-p}$, the probability that no experiment succeeds is
$$\prod_{i=1}^{k} (1 - p_i) \le \prod_{i=1}^{k} e^{-p_i} = e^{-\sum_{i=1}^{k} p_i} \le e^{-1}.$$

Thus, the probability that at least one experiment succeeds is at least $1 - 1/e$.
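As a quick numerical sanity check of Lemma 1, here is the $k = 2$, $z_1 = z_2 = 1/2$ case mentioned above:

```python
import math

# Exact probability that at least one of two independent p = 1/2 experiments succeeds.
p = [0.5, 0.5]
success = 1 - math.prod(1 - p_i for p_i in p)  # 1 - (1/2)^2 = 0.75
assert success >= 1 - 1 / math.e               # 0.75 >= 0.632...
```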


By the above lemma, we see that the probability that an element $v \in U$ is covered by the Random Pick algorithm is at least $1 - 1/e$. But we could still have many elements that are not covered by $\mathcal{C}$. How do we deal with this? By perseverance!


Algorithm 2: Randomized Rounding

  • Input: A fractional solution $z$ to the LP relaxation of the weighted set cover problem.
  • Output: A subcollection $\mathcal{C}$ of sets in $\mathcal{S}$ that covers $U$.
  1. Set $\mathcal{C} = \emptyset$.
  2. While there is an uncovered element $v \in U$:
    • $\mathcal{C} = \mathcal{C} \cup \text{Random Pick}(z)$.
  3. Return $\mathcal{C}$.
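Continuing the sketch, Algorithm 2 reuses random_pick from above, together with the sets, universe, weights, and z from the earlier LP snippet:

```python
def randomized_rounding(z, sets, universe):
    """Algorithm 2: repeat Random Pick until every element of the universe is covered."""
    chosen, covered = set(), set()
    while covered != set(universe):
        for i in random_pick(z):
            chosen.add(i)
            covered |= sets[i]
    return chosen

cover = randomized_rounding(z, sets, universe)
print("set cover:", cover, "weight:", sum(weights[i] for i in cover))
```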

The Randomized Rounding algorithm repeatedly applies the Random Pick algorithm until all elements in U are covered. To show that the above is a good algorithm, we need to show that we will not execute the Random Pick algorithm too many times (with high probability). This is captured by the following lemma:


Lemma 2 (Probability decay): Let $t \in \mathbb{N}$. The probability that the Randomized Rounding algorithm executes the Random Pick algorithm more than $t + \ln(|U|)$ times is at most $e^{-t}$.


Proof: The probability that the Randomized Rounding algorithm executes the Random Pick algorithm more than $t + \ln(|U|)$ times is the probability that there is an uncovered element after $t + \ln(|U|)$ iterations.

Let $v \in U$. For each iteration, by Lemma 1, the probability that $v$ is covered is at least $1 - 1/e$. Hence, the probability that $v$ is not covered after $t + \ln(|U|)$ iterations is at most
$$\left(\frac{1}{e}\right)^{\ln(|U|) + t} = \frac{1}{|U|} \cdot e^{-t}.$$

By the union bound, the probability that there is an uncovered element after $t + \ln(|U|)$ iterations is at most $|U| \cdot \frac{1}{|U|} \cdot e^{-t} = e^{-t}$.


Now that we know that we will cover $U$ with high probability, we need to bound the cost of the solution we came up with. Suppose that the Randomized Rounding algorithm executes the Random Pick algorithm $T$ times. Let $X$ be the total weight of the subcollection $\mathcal{C}$ output by the Randomized Rounding algorithm.

At each invocation of the Random Pick algorithm, the expected weight of the sets picked is exactly the value of the fractional solution $z$, that is, $\sum_{i=1}^{n} w_i z_i$.

After $T$ calls to Random Pick, the expected total weight of the subcollection $\mathcal{C}$ is
$$E[X] \le T \sum_{i=1}^{n} w_i z_i,$$
since each call contributes expected weight $\sum_{i=1}^{n} w_i z_i$ and each set's weight is counted at most once in $\mathcal{C}$. By Markov's inequality, $P[X \ge 4\, E[X]] \le 1/4$.

We are now ready to get a bound on the cost of the solution output by the Randomized Rounding algorithm:


Lemma 3 (Cost of Rounding): Given a fractional solution $z$ to the LP relaxation of the weighted set cover problem, the Randomized Rounding algorithm outputs, with probability at least $0.7$, a set cover $\mathcal{C}$ of weight at most $4(\ln(|U|) + 3) \cdot \mathrm{opt}(ILP)$.


Proof: Let $T := \ln(|U|) + 3$. By Lemma 2, there is a probability of at most $e^{-3} < 0.05$ that the Randomized Rounding algorithm executes the Random Pick algorithm more than $T$ times. After $T$ calls to Random Pick, the expected total weight of the subcollection $\mathcal{C}$ is at most $T \sum_{i=1}^{n} w_i z_i \le T \cdot \mathrm{opt}(ILP)$, since $\sum_{i=1}^{n} w_i z_i = \mathrm{opt}(LP) \le \mathrm{opt}(ILP)$. By Markov's inequality, the probability that the total weight of the subcollection $\mathcal{C}$ is at least $4T \cdot \mathrm{opt}(ILP)$ is at most $1/4$. By the union bound, with probability at most $1/4 + 0.05 = 0.3$, the algorithm either makes more than $T$ calls to Random Pick or outputs a solution of weight $> 4T \cdot \mathrm{opt}(ILP)$.

Thus, with probability at least $0.7$, we stop within $T$ iterations and construct a solution of weight at most $4T \cdot \mathrm{opt}(ILP)$.

