Solutions to Homework 3

Problem 1

  1. Let $$ S = \{a > 0 \quad : \quad 0 \leq y_v - a \leq y_v + a \leq 1 \quad \forall v \text{ with } 0 < y_v < 1 \} .$$ Note that $ S $ is non-empty: for every $ y_v \in (0, 1) $, every $ a \leq \min\{y_v, 1 - y_v\} $ satisfies the constraint for that $ v $, and since there are only finitely many $ y_v $, the minimum of these bounds over all $ v $ is a positive element of $ S $.

Let $ c_1 \in S $. Then $ y^+ $ is feasible:

  • Consider the constraints of the form $ 0 \leq y_u^+ \leq 1 $. For coordinates with $ 0 < y_u < 1 $ we have $ 0 \leq y_u - c_1 $ and $ y_u + c_1 \leq 1 $ by the choice of $ c_1 $, and integral coordinates are unchanged, so these constraints are still satisfied.
  • Consider the constraints of the form $ y_u + y_v \geq 1 $. We must have at least one of $ y_u $ or $ y_v $ greater than or equal to $ \frac{1}{2} $, so at most one of them is strictly less than $ \frac{1}{2} $.

Without loss of generality, suppose $ y_u < \frac{1}{2} $; then $ y_v > \frac{1}{2} $. If $ y_u = 0 $, then $ y_v = 1 $ and neither coordinate changes. If $ y_v = 1 $, then $ y_u^+ + y_v^+ = (y_u - c_1) + 1 \geq 1 $ since $ c_1 \leq y_u $. Otherwise both coordinates are fractional and $ y_u^+ + y_v^+ = (y_u - c_1) + (y_v + c_1) = y_u + y_v \geq 1 $.

Note that if both $ y_u, y_v \geq \frac{1}{2} $, then $ y_u^+ \geq y_u $ and $ y_v^+ \geq y_v $, so the constraint is still satisfied.

Thus, $ y^+ $ is feasible.

Let $ T = \{a > 0 : a \leq \frac{1}{2} |y_v - \frac{1}{2}| \ \ \forall v \text{ with } y_v \neq \frac{1}{2} \} $. By the same argument as for $ S $, $ T $ is non-empty.

Let $ c_2 \in T $. Then $ y^- $ is feasible:

  • Consider the constraints of the form $ 0 \leq y_u^- \leq 1 $. Since the non-half-integral coordinates move closer to $ \frac{1}{2} $ in $ y^- $ by the choice of $ c_2 $, these constraints are still satisfied.
  • Consider the constraints of the form $ y_u + y_v \geq 1 $. Without loss of generality, $ y_u \geq \frac{1}{2} $. If $ y_v < \frac{1}{2} $, then $ y_v^- = y_v + c_2 $ while $ y_u^- \geq y_u - c_2 $, so $ y_u^- + y_v^- \geq y_u + y_v \geq 1 $. If $ y_v \geq \frac{1}{2} $, then both $ y_u^- $ and $ y_v^- $ remain at least $ \frac{1}{2} $: for any coordinate $ y_w > \frac{1}{2} $ we have $ y_w - c_2 \geq \frac{1}{2} $ since $ c_2 \leq \frac{1}{2}(y_w - \frac{1}{2}) $, and coordinates equal to $ \frac{1}{2} $ are unchanged. In either case the constraint is satisfied.

Thus, $ y^- $ is feasible.

Note that both $ S $ and $ T $ are downward closed: if $ a \in S $ (respectively $ T $), then $ a' \in S $ (respectively $ T $) for all $ 0 < a' < a $.

Now pick any $ c_1 \in S $ and $ c_2 \in T $, and let $ c = \min\{c_1, c_2\} $. By downward closure, $ c \in S \cap T $, so $ c > 0 $ and both $ y^+ $ and $ y^- $ are feasible for the LP, as required.

  2. Let $ z $ be a non-half-integral point of the LP. Then by part 1, we can find $ c > 0 $ such that all of $ z, z^+ $ and $ z^- $ are distinct feasible solutions, and $ z = \frac{1}{2}(z^+ + z^-) $. Thus, $ z $ is not an extreme point. By the contrapositive, all extreme points of the LP are half-integral.

  3. Since the extreme points of the LP are half-integral, given an optimal extreme-point solution $ z^* $, we can simply round each $ z^{*}_v $ to the nearest integer (sending $ z^{*}_v $ to $ 1 $ if it is at least $ \frac{1}{2} $ and to $ 0 $ otherwise) to get a feasible solution for the ILP. Observe (just as we saw in class) that this yields a 2-approximation for the ILP.
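The rounding step can be sketched as follows (a toy illustration, not the graded algorithm: the triangle instance and the dictionary representation of $z^*$ are hypothetical, and the half-integral LP optimum is taken as given rather than computed):

```python
# Sketch of the part-3 rounding: round every coordinate z*_v >= 1/2 up
# to 1 and every other coordinate down to 0.
def round_half_integral(z_star):
    return {v: 1 if z >= 0.5 else 0 for v, z in z_star.items()}

# Toy instance: a triangle, whose LP optimum assigns 1/2 to every vertex.
edges = [(0, 1), (1, 2), (0, 2)]
z_star = {0: 0.5, 1: 0.5, 2: 0.5}

x = round_half_integral(z_star)

# The rounded solution is a feasible vertex cover...
assert all(x[u] + x[v] >= 1 for u, v in edges)
# ...and its cost is at most twice the LP value (so at most 2 * OPT).
assert sum(x.values()) <= 2 * sum(z_star.values())
print(sum(x.values()))
```

Here the rounded cover has size 3 while the LP value is 1.5, so the factor-2 bound is tight on this instance.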

Problem 2

Consider the ILP

$$ \begin{align*} \max \quad & \sum_{e \in E} x_e \\ \text{s.t.} \quad & \sum_{v \in V} y_v \leq k \\ & y_u + y_v - x_e \geq 0 \quad \forall e \in E, (u, v) = e \\ & x_e \in \{0, 1\} \quad \forall e \in E \\ & y_v \in \{0, 1\} \quad \forall v \in V \end{align*} $$

and its LP relaxation

$$ \begin{align*} \max \quad & \sum_{e \in E} x_e \\ \text{s.t.} \quad & \sum_{v \in V} y_v \leq k \\ & y_u + y_v - x_e \geq 0 \quad \forall e \in E, (u, v) = e \\ & 0 \leq x_e \leq 1 \quad \forall e \in E \\ & 0 \leq y_v \leq 1 \quad \forall v \in V \end{align*} $$

Let $(\bar{x}, \bar{y})$ be an optimal solution for the LP relaxation. Construct a cover as follows: include each $v \in V$ independently with probability $\bar{y}_v$. If the resulting set has size greater than $2k$, repeat the process until it has size at most $2k$. Let $Y_v$ be the indicator variable for the event that vertex $v$ is picked in a single round, so $\Pr[Y_v = 1] = \bar{y}_v$, and let $Y = \sum_{v \in V} Y_v$. We have $\mathbb{E}[Y] = \sum_{v \in V} \bar{y}_v \leq k$, so by Markov’s inequality

$$ \Pr[Y \geq 2k] \leq \Pr[Y \geq 2\mathbb{E}[Y]] \leq \frac{1}{2}. $$

So each round succeeds with probability at least $\frac{1}{2}$, hence the expected number of repetitions is at most $2$, and $O(\log n)$ repetitions suffice with high probability. Call the cover we end up with $C$.

For each edge $e = (u, v) \in E$, let $X_e$ be the indicator variable for the event that $e$ is covered by $C$. The probability that $e$ is covered is at least $\max\{\bar{y}_u, \bar{y}_v\} \geq \frac{1}{2}(\bar{y}_u + \bar{y}_v) \geq \frac{1}{2} \bar{x}_e$, where the last step uses the LP constraint; in other words, as long as the endpoint with the larger $\bar{y}$ value is chosen, $e$ is covered.

Let $X = \sum_{e \in E} X_e$. Then $\mathbb{E}[X] \geq \frac{1}{2} \sum_{e \in E} \bar{x}_e = \frac{1}{2} \text{OPT}_{LP}(G, k) \geq \frac{1}{2} \text{OPT}(G, k)$.

Let $Z = 2 \cdot \text{OPT}(G, k) - X$. We have $\mathbb{E}[Z] \leq \frac{3}{2} \cdot \text{OPT}(G, k)$. Note that $Z$ is non-negative: a set of at most $2k$ vertices covers at most $2 \cdot \text{OPT}(G, k)$ edges, since if we split it into two sets of at most $k$ vertices each, every covered edge is covered by at least one of the halves, and each half covers at most $\text{OPT}(G, k)$ edges. So we can use Markov’s inequality:

$$ \Pr \left[Z \geq \frac{7}{4} \cdot \text{OPT}(G, k) \right] \leq \frac{\mathbb{E}[Z]}{\frac{7}{4} \cdot \text{OPT}(G, k)} \leq \frac{\frac{3}{2} \cdot \text{OPT}(G, k)}{\frac{7}{4} \cdot \text{OPT}(G, k)} = \frac{6}{7} $$

so we have $\Pr \left[ X \leq \frac{1}{4} \cdot \text{OPT}(G, k) \right] = \Pr \left[Z \geq \frac{7}{4} \cdot \text{OPT}(G, k) \right] \leq \frac{6}{7}$.

Thus, with probability at least $\frac{1}{7}$, $\frac{1}{4} \cdot \text{OPT}(G, k) \leq X \leq 2 \cdot \text{OPT}(G, k)$.

So we have an algorithm that identifies a set of at most $2k$ vertices that touches at least $\frac{1}{4} \text{OPT}(G, k)$ edges with constant ($\geq 1/7$) probability. We can amplify the probability as required to get the desired performance.
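The procedure above can be sketched in a few lines (a sketch under assumptions: `y_bar` stands for the LP optimum $\bar{y}$, which is taken as given rather than computed, and the 4-cycle instance with $k = 2$ is purely illustrative):

```python
import random

def sample_cover(y_bar, k, rng, max_tries=100):
    """One run of the rounding: resample until the set has size <= 2k.
    Each attempt succeeds with probability >= 1/2 by Markov's inequality."""
    for _ in range(max_tries):
        C = {v for v, p in y_bar.items() if rng.random() < p}
        if len(C) <= 2 * k:
            return C
    raise RuntimeError("unexpectedly exhausted retries")

def num_covered(C, edges):
    return sum(1 for u, v in edges if u in C or v in C)

# Toy LP solution on a 4-cycle with k = 2 (values are illustrative only;
# note sum(y_bar) = 2 <= k, and every edge has x_bar_e = 1).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
y_bar = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}
k = 2

rng = random.Random(0)
# Amplification: keep the best of several independent samples.
best = max((sample_cover(y_bar, k, rng) for _ in range(20)),
           key=lambda C: num_covered(C, edges))
assert len(best) <= 2 * k
```

Taking the best of many independent samples is exactly the amplification step: each sample hits the guarantee with probability at least $1/7$, so the failure probability decays exponentially in the number of samples.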

Problem 3

Part 1

Primal in standard form: $$ \begin{align*} &\min \quad \alpha \langle E_{11}, X \rangle \\ &\text{s.t.} \quad \langle E_{22}, X \rangle = 0 \\ &\langle E_{11} + E_{23} + E_{32}, X \rangle = 1 \\ &X \succeq 0 \end{align*} $$ where $E_{ij}$ is the $3 \times 3$ matrix with a 1 in the $i,j$ entry and 0 elsewhere.

Hence, the dual is $$ \begin{align*} &\max \quad y_2 \\ &\text{s.t.} \quad y_1 E_{22} + y_2 (E_{11} + E_{23} + E_{32}) \preceq \alpha E_{11} \\ \end{align*} $$ which is exactly the system in part 1.

Part 2

The constraint $X_{22} = 0$ together with $X \succeq 0$ implies that the $\{2,3\}$ principal minor, $X_{22}X_{33} - X_{23}^2 = -X_{23}^2$, is non-negative, so $X_{23} = 0$. The second constraint then forces $X_{11} = 1$. Hence every feasible $X$ has objective value $\alpha$.

Note that $X = E_{11}$ satisfies the constraints of the primal, and has value $\alpha$.

Part 3

$$ \begin{pmatrix} \alpha - y_2 & 0 & 0 \\ 0 & -y_1 & -y_2 \\ 0 & -y_2 & 0 \end{pmatrix} \succcurlyeq 0 \Rightarrow \det \begin{pmatrix} -y_1 & -y_2 \\ -y_2 & 0 \end{pmatrix} \geq 0 \Rightarrow -y_2^2 \geq 0 \Rightarrow y_2 = 0 $$ which implies that the value of the dual is $0$. Note that $y_1 = y_2 = 0$ satisfies the constraints of the dual, and has value $0$.
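The principal-minor argument for the dual can be double-checked numerically (a quick brute-force sketch; the grid of candidate values is arbitrary):

```python
def is_psd_2x2(a, b, c):
    """[[a, b], [b, c]] is PSD iff a >= 0, c >= 0, and det = a*c - b^2 >= 0."""
    return a >= 0 and c >= 0 and a * c - b * b >= 0

# The {2,3} principal block of the dual slack matrix is [[-y1, -y2], [-y2, 0]].
# Scanning a grid of candidate (y1, y2): PSD-ness of this block forces y2 = 0.
feasible = [(y1, y2)
            for y1 in [x / 4 for x in range(-12, 13)]
            for y2 in [x / 4 for x in range(-12, 13)]
            if is_psd_2x2(-y1, -y2, 0.0)]
assert feasible and all(y2 == 0 for _, y2 in feasible)
```

Every surviving grid point has $y_2 = 0$ (and $y_1 \leq 0$), matching the conclusion that the dual value is $0$.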

Part 4

In standard form, the primal is $$ \begin{align*} &\min \quad \alpha \langle E_{11}, X \rangle \\ &\text{s.t.} \quad \langle E_{12} + E_{21}, X \rangle = 2 \\ &X \succcurlyeq 0 \end{align*} $$

Thus, the dual is $$ \begin{align*} &\max \quad 2z \\ &\text{s.t.} \quad z (E_{12} + E_{21}) \preceq \alpha E_{11} \end{align*} $$

Part 5

The primal is strictly feasible: for instance, $x = y = 2$ satisfies the constraint and yields a positive definite matrix.

The dual is not strictly feasible, since the only feasible point is $z = 0$, which yields a singular matrix.

Part 6

Strong duality does not hold for part 1 when $\alpha > 0$, since the primal value is $\alpha$ while the dual value is $0$. Strong duality also fails for part 4, but in a different sense: the primal and dual values agree, yet the primal never attains its value, since every feasible solution has $x > 0$ while the infimum is $0$.

These observations are consistent with Slater’s conditions: in part 1 neither the primal nor the dual is strictly feasible, and in part 4 the primal is strictly feasible but the dual is not, so the dual optimum is attained and there is no duality gap, yet the primal optimum need not be (and is not) attained.

Problem 4

Part 1

Let $Z$ be the $4 \times 4$ symmetric matrix with the block structure $$ Z = \begin{pmatrix} X & U \\ U^T & Y \end{pmatrix} $$ Then the given SDP in normal form can be written as $$ \begin{align*} &\min \quad \langle C, Z \rangle \\ &\text{s.t.} \quad \langle A_i, Z \rangle = b_i \quad \forall i \in [7] \\ &Z \succeq 0 \end{align*} $$ where $b_1 = 1, b_2 = 2, b_3 = 0, b_4 = 0, b_5 = 0, b_6 =0, b_7=0$ and $$C = \begin{pmatrix} I_2 & 0 \\ 0 & 0 \end{pmatrix}, \quad A_1 = \begin{pmatrix} 2 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1
\end{pmatrix}, \quad A_3 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad A_4 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix} $$ $$ A_5 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad A_6 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}, \quad A_7 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix} $$

Thus we get the following dual:

$$ \begin{align*} &\max \quad y_1 + 2y_2 \\ &\text{s.t.} \quad \sum_{i=1}^7 y_i A_i \preceq C \\ &y_i \in \mathbb{R} \quad \forall i \in [7] \end{align*} $$

Part 2

This SDP is already in normal form, so we can write the dual as follows: $$ \begin{align*} &\max \quad y_1 \\ &\text{s.t.} \quad y_1 A_1 + y_2 A_2 \preceq C \\ &y_1, y_2 \in \mathbb{R} \end{align*} $$

Problem 5

Part 1

Formulation as a quadratic program:

  • To make sure $x_i \in \{-1, 1\}$, we can add the constraints $x_i^2 = 1$ for all $i \in V$.
  • Given a hyperedge $e = \{i, j, k\}$, we want a quadratic function that is 1 if $e$ is not monochromatic and 0 if it is. Note that if $e$ is not monochromatic, then $x_i + x_j + x_k = \pm 1$, and if $e$ is monochromatic, then $x_i + x_j + x_k = \pm 3$. Hence, we can use the function $f(x_i, x_j, x_k) = \dfrac{9 - (x_i + x_j + x_k)^2}{8}$, which is 1 if $x_i + x_j + x_k = \pm 1$ and 0 if $x_i + x_j + x_k = \pm 3$.

Our quadratic program is then

$$ \begin{align*} &\max \quad \sum_{\substack{e \in E \\ e = \{i,j,k\}}} \dfrac{9 - (x_i + x_j + x_k)^2}{8} \\ &\text{s.t.} \quad x_i^2 = 1 \quad \forall i \in V \end{align*} $$
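The claimed behaviour of $f$ can be verified by enumerating all eight sign patterns (a quick sanity check, not part of the solution itself):

```python
from itertools import product

def f(xi, xj, xk):
    # f = (9 - (xi + xj + xk)^2) / 8 from the quadratic program above.
    s = xi + xj + xk
    return (9 - s * s) / 8

# f is 1 on exactly the non-monochromatic sign patterns and 0 otherwise.
for xi, xj, xk in product((-1, 1), repeat=3):
    monochromatic = (xi == xj == xk)
    assert f(xi, xj, xk) == (0 if monochromatic else 1)
```

So the objective of the quadratic program counts exactly the non-monochromatic hyperedges.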

Part 2

We are now ready to obtain our SDP relaxation, by lifting to higher dimensions and then rewriting in terms of a PSD matrix. Replacing $x_i \in \mathbb{R}$ by $u_i \in \mathbb{R}^d$ for some $d$, we can rewrite the program as

$$ \begin{align*} &\max \quad \sum_{\substack{e \in E \\ e = \{i,j,k\}}} \dfrac{9 - (u_i + u_j + u_k)^T(u_i+u_j+u_k)}{8} \\ &\text{s.t.} \quad u_i^T u_i = 1 \quad \forall i \in V \end{align*} $$

Writing $X$ for the Gram matrix whose $i,j$ entry is $u_i^T u_j$, we can rewrite the program as

$$ \begin{align*} &\max \quad \sum_{\substack{e \in E \\ e = \{i,j,k\}}} \dfrac{9 - (X_{ii} + X_{jj} + X_{kk} + 2X_{ij} + 2X_{ik} + 2X_{jk})}{8} \\ &\text{s.t.} \quad X \succeq 0 \\ & \qquad\ X_{ii} = 1 \quad \forall i \in V \end{align*} $$
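Since the constraints force $X_{ii} = X_{jj} = X_{kk} = 1$, each objective term simplifies (this is only a cosmetic rewriting of the same program):

$$ \frac{9 - (X_{ii} + X_{jj} + X_{kk} + 2X_{ij} + 2X_{ik} + 2X_{jk})}{8} = \frac{6 - 2(X_{ij} + X_{ik} + X_{jk})}{8} = \frac{3 - X_{ij} - X_{ik} - X_{jk}}{4}. $$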

To see that the above SDP is a relaxation of the original problem, note that when $d = 1$ the constraint $u_i^T u_i = 1$ forces $u_i = \pm 1$, so we recover exactly the original quadratic program.

Problem 6

see piazza

Problem 7

see piazza
