Solutions to Homework 1

Problem 1

Our data structure will be a doubly-linked list $L$ of the elements of $S$.

  • To insert an element $x$ into $S$, we will simply append it to the end of $L$.
  • To delete the larger half of the elements of $S$, we will use the linear-time median-finding algorithm to find the median $\mu$ of $S$. Then, we will traverse $L$ and delete $\lceil|L|/2\rceil$ elements that are at least $\mu$, breaking ties arbitrarily. (We can assume we always know the size of the list, since a simple traversal gives us the size in linear time.)
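
For concreteness, here is a minimal Python sketch of both operations. The class name is ours, and `statistics.median` (which sorts, so it runs in $O(n \log n)$ time) stands in for the linear-time selection algorithm from class; the substitution does not affect the accounting argument below.

```python
import statistics

class HalfDeleteList:
    """Sketch of the Problem 1 structure (hypothetical name).
    A Python list stands in for the doubly-linked list."""

    def __init__(self):
        self.items = []

    def insert(self, x):
        # Append to the end of the list: true cost 1.
        self.items.append(x)

    def delete_larger_half(self):
        # Find the median mu, then traverse once and delete
        # ceil(|L|/2) elements that are >= mu (ties broken arbitrarily).
        if not self.items:
            return
        mu = statistics.median(self.items)
        to_delete = (len(self.items) + 1) // 2   # ceil(|L|/2)
        survivors = []
        for x in self.items:
            if x >= mu and to_delete > 0:
                to_delete -= 1                   # delete this element
            else:
                survivors.append(x)
        self.items = survivors
```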

To analyze the amortized cost of the above operations, we will use the accounting method.

Suppose that the cost of inserting an element is $1$ and the cost of delete-larger-half, via the algorithm above, is $c \cdot |L|$. Then, we will charge $2c + 1$ to each element at the time of its insertion, and we will charge $0$ to each delete-larger-half operation.

Now, we need to show that the total charge is an upper bound on the total cost of the operations. To see this, note that the cost of inserting an element $x$ is $1$, which is paid out of its charge of $2c+1$, so the charge left on $x$ is $2c$. The cost of delete-larger-half is $c \cdot |L|$, and if $x_1, \ldots, x_t$ are the deleted elements, then $t = \lceil |L|/2 \rceil$, which means that the total charge left on the deleted elements is $2c \cdot \lceil |L|/2 \rceil \geq c \cdot |L|$. This is enough to cover the cost of the delete-larger-half operation.

Hence, the total charge is an upper bound on the total cost of the operations, and we have that the amortized cost of each operation is $O(1)$.

Problem 2

We will again use the accounting method. Suppose the cost of each elementary push and pop on the ordered stack is $1$, and suppose also that the cost of a comparison between two items is $1$. Then, we will charge $4$ to each OrderedPush and to each Pop operation. Note that whenever we perform $\mathrm{OrderedPush}(x)$, we assign this charge of $4$ to $x$.

Now, we need to show that the total charge is an upper bound on the total cost of the operations. We will do that by proving that at any point in time, each element in our stack has a remaining charge of at least $2$ assigned to it (after we discount the true cost of previous operations). Note that the above is true in the beginning, since we have an empty stack.

We begin with the OrderedPush operation.

Suppose that the OrderedPush operation removes items $y_1, \ldots, y_t$ and pushes $x$ onto the stack. Then, the true cost of the OrderedPush operation is $2(t+1)$, as we perform $t+1$ comparisons and $t+1$ elementary stack operations ($t$ pops and one push). By the invariant, the charge left on each $y_i$ is $2$, and we charged $4$ to the OrderedPush of $x$.

Now, to account for the true cost of the OrderedPush operation, we adopt the rule that whenever we compare $x$ and $y$ with $x > y$, we remove $1$ unit of charge from $y$ (so $y$ pays for the cost of this comparison). In the OrderedPush operation, we thus remove $2$ units of charge from each $y_i$ (one for its comparison and one for its pop) and $2$ units from $x$ (one for the last comparison, against the element after $y_t$, and one for the push of $x$). Thus, $x$ is left with a charge of $2$, which restores the invariant and completes our analysis of the OrderedPush operation.

We now analyze the Pop operation.

Suppose that the Pop operation pops $y$ from the stack. Then, the cost of the Pop operation is $1$, and the charge left on $y$ is $2$, which is more than enough to pay for this operation ($y$ simply forfeits its remaining charge).

Since we have shown that the charging scheme is enough to account for the true cost of the operations, we have that the total charge is an upper bound on the total cost of the operations, and we have that the amortized cost of each operation is $4 = O(1)$.
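
For concreteness, a minimal Python sketch of the two operations follows, assuming the convention that $\mathrm{OrderedPush}(x)$ pops every element smaller than $x$ before pushing, so the stack stays sorted with its minimum on top:

```python
class OrderedStack:
    """Sketch of the ordered stack from Problem 2: every element is
    >= all elements above it (minimum on top)."""

    def __init__(self):
        self.stack = []

    def ordered_push(self, x):
        # Each comparison that x "wins" pops one y_i; y_i's remaining
        # charge of 2 pays for that comparison and that pop. The final
        # (losing) comparison and the push of x are paid from x's
        # charge of 4, leaving a charge of 2 on x.
        while self.stack and self.stack[-1] < x:
            self.stack.pop()
        self.stack.append(x)

    def pop(self):
        # True cost 1, covered by the popped element's remaining charge.
        return self.stack.pop() if self.stack else None
```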

Problem 3

Let us consider the following random variables: $$X_{ijk} := \begin{cases} 1, & \text{if } \{i,j,k\} \text{ is a triangle in } G, \\ 0, & \text{otherwise}. \end{cases}$$

Thus, we have that $X := \sum_{i,j,k} X_{ijk}$ is the number of triangles in $G$, where the sum ranges over unordered triples. Since the edges are added independently and uniformly at random, with probability $1/2$ each, we have that $\mathbb{E}[X_{ijk}] = \Pr[\{i,j\}, \{i,k\}, \{j,k\} \text{ are all added}] = 1/8$.

Part 1

We have $$\mathbb{E}[X] = \sum_{i,j,k} \mathbb{E}[X_{ijk}] = \dfrac{1}{8} \cdot \binom{n}{3}$$

Part 2

To compute the variance, we will use the formula from class, which states that $$\text{Var}[X] = \sum_{i,j,k} \text{Var}[X_{ijk}] + \sum_{\{i,j,k\} \neq \{i',j',k'\}} \text{Cov}[X_{ijk}, X_{i'j'k'}]$$

Now, we have that $\text{Var}[X_{ijk}] = \mathbb{E}[X_{ijk}^2] - \mathbb{E}[X_{ijk}]^2 = 1/8 - 1/64 = 7/64$.

To compute the covariances, we divide into two cases, depending on the size of the intersection of the sets $\{i,j,k\}$ and $\{i',j',k'\}$.

  • Case 1: $|\{i,j,k\} \cap \{i',j',k'\}| \leq 1$.

Then, we have that $\text{Cov}[X_{ijk}, X_{i'j'k'}] = 0$: the two triples share no potential edge, so $X_{ijk}$ and $X_{i'j'k'}$ are independent random variables.

  • Case 2: $|\{i,j,k\} \cap \{i',j',k'\}| = 2$.

Then, we can assume $i = i', j = j'$, and we have $$\text{Cov}[X_{ijk}, X_{ijk'}] = \mathbb{E}[X_{ijk} X_{ijk'}] - \mathbb{E}[X_{ijk}] \cdot \mathbb{E}[X_{ijk'}] = \mathbb{E}[X_{ijk} X_{ijk'}] - 1/64$$

Now, we have that $\mathbb{E}[X_{ijk} X_{ijk'}] = \Pr[\{i,j,k\} \text{ and } \{i,j,k'\} \text{ are triangles}] = (1/2)^5 = 1/32$, since the two triangles span $5$ distinct potential edges (they share the edge $\{i,j\}$). Hence, we have that $\text{Cov}[X_{ijk}, X_{ijk'}] = 1/32 - 1/64 = 1/64$.

Given any triple of indices $\{i,j,k\}$, there are $3 \cdot (n-3)$ other triples $\{i',j',k'\}$ such that $|\{i,j,k\} \cap \{i',j',k'\}| = 2$: there are $3$ ways to choose which two indices are shared and $n-3$ ways to choose the new third index.

Putting everything together, and using $7 + 3(n-3) = 3n-2$, we have that $$\text{Var}[X] = \sum_{i,j,k} \text{Var}[X_{ijk}] + \sum_{\{i,j,k\} \neq \{i',j',k'\}} \text{Cov}[X_{ijk}, X_{i'j'k'}] = $$ $$= \dfrac{7}{64} \cdot \binom{n}{3} + \binom{n}{3} \cdot \left( 3 \cdot (n-3) \cdot \dfrac{1}{64} \right) = \dfrac{1}{64} \cdot \binom{n}{3} \cdot (3n-2)$$
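
As a sanity check, the following Python snippet (with arbitrary small parameters) estimates both the mean and the variance of the triangle count by direct simulation and compares them against the two formulas above:

```python
import itertools, random, statistics
from math import comb

def count_triangles(n):
    # Sample G ~ G(n, 1/2) and count its triangles by brute force.
    edge = {e: random.random() < 0.5
            for e in itertools.combinations(range(n), 2)}
    return sum(all(edge[pair] for pair in itertools.combinations(tri, 2))
               for tri in itertools.combinations(range(n), 3))

n, trials = 10, 20000
samples = [count_triangles(n) for _ in range(trials)]
print("mean:", statistics.mean(samples), "vs", comb(n, 3) / 8)
print("var: ", statistics.variance(samples), "vs", comb(n, 3) * (3*n - 2) / 64)
```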

Part 3

We need to use Chebyshev’s inequality:

$$\Pr[|X - \mathbb{E}[X]| \geq \delta \mathbb{E}[X]] \leq \dfrac{\text{Var}[X]}{\delta^2 \mathbb{E}[X]^2} = \dfrac{1}{\delta^2} \cdot \dfrac{\binom{n}{3}(3n-2)/64}{\binom{n}{3}^2/64} = \dfrac{1}{\delta^2} \cdot \dfrac{3n-2}{\binom{n}{3}}$$

The above bounds the decay of the probability as $\delta$ increases. Now, setting for instance $\delta = 24/n$, the above inequality becomes $$\Pr[|X - \mathbb{E}[X]| \geq 24 \mathbb{E}[X]/n] \leq \dfrac{1}{(24/n)^2} \cdot \dfrac{3n-2}{\binom{n}{3}} = \dfrac{n^2 \cdot (3n-2)}{576 \cdot \binom{n}{3}} \leq \dfrac{1}{4},$$ where the last inequality holds for all $n \geq 3$.

Thus, the number of triangles is concentrated around its expectation.

Problem 4

Part 1

We can assume, for the sake of simplicity, that $n$ is an odd number.

Let $T := \{\, 1/(nk),\ 1-1/(nk) \ \mid \ 1 \leq k < n/2 \,\}$. If $S_1 := \{0\} \cup T$ and $S_2 := \{1\} \cup T$, then note that $S_1$ and $S_2$ are both sets of size $n$ and $|S_1 \cap S_2| = |T| = n-1$. Now, the median of $S_1$ is $1/n$ and the median of $S_2$ is $1-1/n$. (Indeed, $S_1$ contains $(n+1)/2$ elements in $[0, 1/n]$, the largest of which is $1/n$; the claim for $S_2$ is symmetric.)

Additionally, note that with $k = o(n)$ samples, the probability that we sample an element outside of $T$ is $\leq k/n = o(1)$. Thus, with high probability (when $k = o(n)$), we will only sample elements of $T$; since $T$ is common to $S_1$ and $S_2$, the algorithm cannot distinguish the two sets, and by symmetry it outputs a number in $[0, 1/n]$ with probability $1/2$ and a number in $[1-1/n, 1]$ with probability $1/2$. Thus, with probability $1/2$, we will not output the median of $S_1$ or of $S_2$ correctly.

Now, for the approximation factor, whenever we output a wrong median, notice that the multiplicative error will be (for $S_1$) at least $$ \dfrac{1-1/n}{1/n} = n-1 $$ and (for $S_2$) at least $$ \dfrac{1/n}{1-1/n} = \dfrac{1}{n-1}.$$

Since a ratio of $\frac{1}{n-1}$ is off by the same multiplicative factor in the other direction, we have that the approximation factor is at least $n-1$ in both cases.
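
The following short simulation (parameters ours) illustrates the construction: a hypothetical sampler that returns the median of $k = o(n)$ uniform samples from $S_1$ lands in the wrong half of $[0,1]$, a factor of roughly $n$ away from the true median, about half the time:

```python
import random, statistics

n = 1001                                   # odd, as in the construction
T = [1/(n*j) for j in range(1, (n+1)//2)] + \
    [1 - 1/(n*j) for j in range(1, (n+1)//2)]
S1 = [0.0] + T                             # true median: 1/n

k, trials = 21, 10000                      # k = o(n) samples, k odd
wrong_side = sum(
    statistics.median(random.choices(S1, k=k)) > 0.5
    for _ in range(trials))
print("fraction of runs off by a factor ~n:", wrong_side / trials)  # ~ 1/2
```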

Part 2

For this question, we can assume without loss of generality that $S = [n]$. We will begin by bounding the probability of our first bad event: that the median of the sample lies in the set $[n/2 - t]$, i.e., among the $n/2 - t$ smallest elements.

Let $X_i$ be the indicator random variable for the event that the $i^{th}$ sample is in the set $[n/2 - t]$. Then, we have that $X = \sum_{i=1}^k X_i$ is the number of samples in the set $[n/2 - t]$. In particular, we have that the median is in the set $[n/2 - t]$ if and only if $X \geq k/2$.

Note that $X_i$ is a Bernoulli random variable with probability $p = (n/2 - t)/n$. Then, we have that $X$ is a binomial random variable with parameters $k$ and $p$. Hence, the expected value of $X$ is given by $$\mathbb{E}[X] = \sum_{i=1}^k \mathbb{E}[X_i] = kp = k(1/2 - t/n) = (1-\delta) k/2$$ for $\delta = 2t/n$.

Now, we can use the Chernoff bound to bound the probability that $X$ is at least $k/2$. Since $(1+\delta)\mathbb{E}[X] = (1-\delta^2)k/2 \leq k/2$, we have that $$\Pr[X \geq k/2] \leq \Pr[X \geq (1 + \delta)\mathbb{E}[X]] \leq e^{- \delta^2 \mathbb{E}[X]/3} = e^{-\delta^2(1-\delta)k/6}.$$

By symmetry, we have that the probability that the median is in the set $[n/2 + t, n]$ is also bounded by $e^{-\delta^2(1-\delta)k/6}$.

Thus, to reach a high success probability (we agreed $3/4$ would be ok), we need the failure probability to be at most $1/4$. To make the numbers pretty, we will aim for failure probability $2/e^3 < 1/4$.

Putting the above together, we need $$2e^{-\delta^2(1-\delta)k/6} \leq 2/e^3 \Leftrightarrow \delta^2(1-\delta)k \geq 18 \Leftrightarrow k \geq \dfrac{18}{\delta^2 (1-\delta)} = \dfrac{9n^2}{2t^2 (1 - 2t/n)}, $$ where the last equality substitutes $\delta = 2t/n$.
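
As a numeric sanity check (the values of $n$ and $t$ are arbitrary), with $k$ chosen by the bound above, the sample median lands within $t$ of $n/2$ far more often than the required $3/4$ of the time:

```python
import math, random, statistics

n, t = 10000, 1000
delta = 2 * t / n
k = math.ceil(9 * n**2 / (2 * t**2 * (1 - delta)))   # the bound above
trials = 2000
fail = sum(
    abs(statistics.median(random.choices(range(1, n + 1), k=k)) - n/2) > t
    for _ in range(trials))
print("k =", k, " empirical failure rate:", fail / trials)  # << 1/4
```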

Problem 5

Part 1

Taking $X_i$ to be the indicator random variable for the $i^{th}$ bin being empty after the $m$ tosses, we have $X = \sum_{i=1}^n X_i$. Now, we can compute the variance of $X$ as follows: from class, we know that $p = (1 - 1/n)^m$ is the probability that a given bin is empty after $m$ tosses. Then, we have that $\mathbb{E}[X_i] = p$ and $\text{Var}[X_i] = p(1-p)$.

Additionally, we have that $\text{Cov}[X_i, X_j] = \mathbb{E}[X_i X_j] - \mathbb{E}[X_i] \mathbb{E}[X_j]$. Now, note that $\mathbb{E}[X_i X_j] = \mathbb{P}[\text{bins } i \text{ and } j \text{ are empty}] = (1 - 2/n)^m =: q$.

Hence, by the formula for variance from class, we have that $$\text{Var}[X] = \sum_{i=1}^n \text{Var}[X_i] + \sum_{i \neq j} \text{Cov}[X_i, X_j] = n \cdot p \cdot (1-p) + n(n-1) \cdot (q - p^2) =$$ $$ = n \cdot p + n(n-1) \cdot q - n^2 \cdot p^2$$

which is exactly the formula we wanted.
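
A quick simulation (arbitrary parameters) agrees with this formula:

```python
import random, statistics

def empty_bins(n, m):
    # Toss m balls into n uniform bins; return the number of empty bins.
    counts = [0] * n
    for _ in range(m):
        counts[random.randrange(n)] += 1
    return counts.count(0)

n = m = 50
p, q = (1 - 1/n) ** m, (1 - 2/n) ** m
predicted = n * p + n * (n - 1) * q - n**2 * p**2
samples = [empty_bins(n, m) for _ in range(20000)]
print("empirical:", statistics.variance(samples), " predicted:", predicted)
```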

Part 2

Using the approximation $(1-x/t)^t \sim e^{-x} \cdot (1-x^2/(2t))$, valid whenever $x = o(t^{2/3})$ (which holds in all three cases below), we have:

  • $t= m$, $x = m/n \Rightarrow$ $p \sim e^{-m/n} \cdot (1 - m/2n^2)$,
  • $t= 2m$, $x = 2m/n \Rightarrow$ $p^2 \sim e^{-2m/n} \cdot (1 - m/n^2)$ and
  • $t= m$, $x = 2m/n \Rightarrow$ $q \sim e^{-2m/n} \cdot (1 - 2m/n^2)$.

Plugging these values into the formula for variance from part 1, we have that $$\text{Var}[X] \sim n \cdot e^{-m/n} \cdot (1 - m/2n^2) + n(n-1) \cdot e^{-2m/n} \cdot (1 - 2m/n^2) - n^2 \cdot e^{-2m/n} \cdot (1 - m/n^2) =$$ $$ = n \cdot e^{-m/n} \cdot (1 - m/2n^2) - e^{-2m/n} \cdot \left( m + n \cdot (1-2m/n^2) \right) $$

When $m = n$, the above simplifies to $$\text{Var}[X] \sim n \cdot e^{-1} \cdot (1 - 1/2n) - e^{-2} \cdot (2n - 2) \sim n \cdot (e^{-1} -2e^{-2})$$

When $m = n \log n$, the above simplifies to $$\text{Var}[X] \sim n \cdot e^{-\log n} \cdot (1 - \log n/2n) - e^{-2 \log n} \cdot \left( n \log n + n \cdot (1-2 \log n/n) \right) \sim 1 $$

Part 3

Let $X_i$ be the indicator variable which is $1$ if the $i^{th}$ bin has exactly $1$ ball in it after $n$ tosses. Then, we have that $X = \sum_{i=1}^n X_i$ is the number of bins with exactly $1$ ball in them after $n$ tosses.

Note that $$\Pr[X_i = 1] = \sum_{j=1}^n \Pr[j^{th} \text{ ball falls in } i^{th} \text{ bin and all others fall outside bin } i] = $$ $$= \sum_{j=1}^n (1/n) \cdot (1-1/n)^{n-1} = n \cdot (1/n) \cdot (1-1/n)^{n-1} = (1-1/n)^{n-1}$$ where the first equality holds because the events are disjoint.

Then, we have that $$\mathbb{E}[X] = \sum_{i=1}^n \mathbb{E}[X_i] = n \cdot (1 - 1/n)^{n-1} \sim n/e$$
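
A short simulation (parameters ours) confirms the $n/e$ asymptotics:

```python
import random
from math import e

def singleton_bins(n):
    # Toss n balls into n bins; count bins holding exactly one ball.
    counts = [0] * n
    for _ in range(n):
        counts[random.randrange(n)] += 1
    return counts.count(1)

n, trials = 1000, 2000
avg = sum(singleton_bins(n) for _ in range(trials)) / trials
print("average:", avg, " n/e:", n / e)
```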

Problem 6

Part 1

Let $X_i$ be the random variable that is $1$ if the $i^{th}$ bin is empty after $m$ tosses and $0$ otherwise. As we saw in class, we have $\Pr[X_i = 1] = (1-1/n)^m$. And as we saw in Problem 5, we have $\Pr[X_i = 1 \text{ and } X_j = 1] = (1-2/n)^m$.

By inclusion-exclusion (Bonferroni), we have that $$\Pr[\text{some bin is empty}] \geq \sum_{i=1}^n \Pr[X_i = 1] - \sum_{i < j} \Pr[X_i = 1 \text{ and } X_j = 1] = $$ $$ = n \cdot (1-1/n)^m - \binom{n}{2} \cdot (1-2/n)^m \geq n \cdot (1-1/n)^m - (n^2/2) \cdot (1-2/n)^m.$$ For $m = n \log n + cn$, using the approximation $(1-\alpha/n)^n \sim e^{-\alpha}$, this simplifies to: $$n \cdot (1-1/n)^{n \log n + cn} - (n^2/2) \cdot (1-2/n)^{n \log n + cn} \sim \dfrac{1}{e^c} - (n^2/2) \cdot e^{-2c} \cdot n^{-2} = \dfrac{1}{e^c} - \dfrac{1}{2e^{2c}} $$
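
The following simulation (with hypothetical values of $n$ and $c$) compares the empirical probability that some bin is empty against this lower bound:

```python
import random
from math import exp, log

def some_bin_empty(n, m):
    # Toss m balls into n bins; report whether any bin stayed empty.
    seen = [False] * n
    for _ in range(m):
        seen[random.randrange(n)] = True
    return not all(seen)

n, c = 500, 1.0
m = int(n * log(n) + c * n)
trials = 4000
emp = sum(some_bin_empty(n, m) for _ in range(trials)) / trials
print("empirical:", emp, " bound e^-c - e^-2c/2:", exp(-c) - exp(-2*c)/2)
```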

Part 2

This is exactly the same setting as the coupon collector problem, if you think of bin $i$ as the pair of coupons $\{2i-1, 2i\}$. Note that in this case, even though the number of coupons is $2n$, the number of pairs of coupons is only $n$, and buying a coupon at random from the $2n$ coupons lands in each pair with uniform probability $1/n$.

Thus, the expected number of purchases is the same one from class, that is: $\sim n \log n$.

Problem 7

Part 1

Not $2$-universal. Take $p = n+2$ (for instance $n=3, p=5$). Consider the numbers $x = 1, y = n+1$. If we pick $a \in \{1, n+1\}$, then $h_a(x) = h_a(y)$: for $a = 1$ both hash to $1 \bmod n = 1$, and for $a = n+1$, since $(n+1)^2 = n(n+2) + 1 \equiv 1 \pmod p$, both again hash to $1$. So we have $$ \Pr_{h \in \mathcal{H}}[h(x) = h(y)] \geq \dfrac{2}{p-1} = \dfrac{2}{n+1} > \dfrac{1}{n}.$$

Part 2

Not $2$-universal. Take $m = 2n$. Consider the numbers $x = 0, y = n$. Since $an \equiv 0 \pmod m$ for every even $a$, if $a$ is even then any choice of $b$ will cause $x, y$ to collide. This gives us at least $m(m-2)/2$ pairs $(a,b)$ which cause $x,y$ to collide. Hence, $$ \Pr_{h \in \mathcal{H}}[h(x) = h(y)] \geq \dfrac{m(m-2)/2}{m(m-1)} = \dfrac{m-2}{2(m-1)} = \dfrac{n-1}{2n-1} > \dfrac{1}{n}$$ for $n \geq 3$.

Part 3

This function is $2$-universal (as noted in class). To see this, let $x \neq y \in \mathbb{F}_p$. Now note that because $a \neq 0$, the pair $(ax + b \bmod p, ay + b \bmod p) =: (r, s)$ satisfies $r \neq s$, for any choice of $a \neq 0, b \in \mathbb{F}_p$. Hence, the two sets $\{ (ax + b \bmod p, ay + b \bmod p) \}_{a \neq 0, b \in \mathbb{F}_p}$ and $\{(r, s)\}_{r \neq s \in \mathbb{F}_p}$ are the same set (both with $p(p-1)$ elements).

Thus, the probability that $h_{a,b}(x) = h_{a,b}(y)$ is equal to the fraction of pairs $(r,s)$ above such that $r \neq s$ and $r \equiv s \bmod n$.
To compute this fraction, note that for each $r \in \mathbb{F}_p$, letting $\alpha_r := r \bmod n$, there are at most $\lfloor (p - \alpha_r)/n \rfloor \leq \lfloor p/n \rfloor$ choices for $s$ such that $r \equiv s \bmod n$ and $s \neq r$. Since $n \nmid p$, we have $\lfloor p/n \rfloor \leq (p-1)/n$, so the fraction of pairs is upper bounded by: $$\Pr_{h \in \mathcal{H}}[h(x) = h(y)] \leq \frac{p \cdot \lfloor p/n \rfloor}{p(p-1)} \leq \frac{p(p - 1)/n}{p(p-1)} = 1/n.$$
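
One can also brute-force this bound for a small prime (the values of $p$ and $n$ below are ours): over all pairs $x \neq y$ and all $p(p-1)$ choices of $(a, b)$, the collision fraction never exceeds $1/n$.

```python
from itertools import combinations

p, n = 31, 5                          # small prime p > n

def h(a, b, z):
    # The hash family from Part 3: ((a*z + b) mod p) mod n.
    return ((a * z + b) % p) % n

worst = max(
    sum(h(a, b, x) == h(a, b, y) for a in range(1, p) for b in range(p))
    / (p * (p - 1))
    for x, y in combinations(range(p), 2))
print("worst collision probability:", worst, " 1/n =", 1 / n)
```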

Bonus remark: To see that it is not $2$-strongly universal, note that in the above calculation we did not tightly bound the number of pairs $(r,s)$ such that $r \equiv s \bmod n$. For many values of $r$, we will have that $\lfloor (p - \alpha_r)/n \rfloor < (p - \alpha_r)/n < (p-1)/n$. Thus, the inequality above is actually a strict inequality.

Hence, we have: $$ 1 = \sum_{0 \leq \alpha, \beta < n} \Pr_{h \in \mathcal{H}}[h(x) = \alpha \wedge h(y) = \beta] = $$ $$ = \sum_{\alpha} \Pr_{h \in \mathcal{H}}[h(x) = \alpha \wedge h(y) = \alpha] + \sum_{\alpha \neq \beta} \Pr_{h \in \mathcal{H}}[h(x) = \alpha \wedge h(y) = \beta] <$$ $$ < 1/n + \sum_{\alpha \neq \beta} \Pr_{h \in \mathcal{H}}[h(x) = \alpha \wedge h(y) = \beta] \leq 1/n + n(n-1) \cdot \Pr_{h \in \mathcal{H}}[h(x) = \alpha^* \wedge h(y) = \beta^*]$$ where $\alpha^*, \beta^*$ are the values such that $\Pr_{h \in \mathcal{H}}[h(x) = \alpha^* \wedge h(y) = \beta^*]$ is maximized. Hence, we conclude that $$ \Pr_{h \in \mathcal{H}}[h(x) = \alpha^* \wedge h(y) = \beta^*] > \dfrac{1-1/n}{n(n-1)} = 1/n^2.$$

Problem 8

Part 1

Each run of Karger’s algorithm performs $n-2$ contractions, as each contraction reduces the number of vertices by $1$, and we start with $n$ vertices and stop when we have only $2$ vertices left. Thus, the total number of edge contractions when running Karger’s algorithm twice is $2(n-2)$.

Since we are running the algorithm twice (independently), and outputting the smaller cut, letting $p \geq \frac{2}{n(n-1)}$ be the success probability of a single run, the probability of outputting a minimum cut is: $$\Pr[\text{outputting a minimum cut}] = \Pr[\text{at least one of the runs outputs a minimum cut}] = 1 - (1-p)^2 \geq \dfrac{4}{n(n-1)} - \left(\dfrac{2}{n(n-1)}\right)^2 \sim \dfrac{4}{n(n-1)},$$ roughly twice the single-run guarantee.
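
For reference, here is a compact Python sketch of a single run of Karger's algorithm on a connected multigraph (contract uniformly random non-loop edges until two super-vertices remain), together with the two-run variant from this part; the multigraph is given as an edge list, and a union-find structure tracks contracted vertices:

```python
import random

def karger_cut_size(n, edges):
    # One run: contract uniformly random non-loop edges until only two
    # super-vertices remain (exactly n - 2 successful contractions).
    parent = list(range(n))

    def find(v):                      # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    components = n
    while components > 2:
        u, v = random.choice(edges)   # uniform over remaining multi-edges
        ru, rv = find(u), find(v)
        if ru != rv:                  # self-loops are skipped, not contracted
            parent[ru] = rv
            components -= 1
    # Edges whose endpoints lie in different super-vertices form the cut.
    return sum(find(u) != find(v) for u, v in edges)

def karger_twice(n, edges):
    # Part 1: two independent runs, output the smaller cut found.
    return min(karger_cut_size(n, edges) for _ in range(2))
```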

Part 2

# of edge contractions: $(n-k) + \ell \cdot (k-2) = n + (\ell-1)(k-2) - 2$.

Probability of outputting a minimum cut: given a min-cut $(S, \overline{S})$ we have $$\Pr[\text{output } (S, \overline{S})] = \ \Pr[ (S, \overline{S}) \text{ survived first } n-k \text{ contractions}] \cdot \ \Pr[ (S, \overline{S}) \text{ is the cut in one of the } \ell \text{ runs } \mid (S, \overline{S}) \text{ survived first } n-k \text{ contractions}]$$

Now, we have $$\Pr[ (S, \overline{S}) \text{ survived first } n-k \text{ contractions}] \geq \ (1 - 2/n) \cdot (1 - 2/(n-1)) \cdots (1 - 2/(n- (n-k)+1)) = \ \dfrac{k(k-1)}{n(n-1)} = \dfrac{\binom{k}{2}}{\binom{n}{2}}.$$

Letting $\Gamma$ be the event that $(S, \overline{S}) \text{ survived first } n-k \text{ contractions}$, we have

$$\Pr[ (S, \overline{S}) \text{ is the cut in one of the } \ell \text{ runs } \mid \Gamma ] = \ 1 - \Pr[ (S, \overline{S}) \text{ is not the cut in a single run } \mid \Gamma ]^\ell \geq \ 1 - \left(1 - \dfrac{1}{\binom{k}{2}} \right)^\ell \geq 1 - \left[ 1 - \dfrac{\ell}{\binom{k}{2}} + \binom{\ell}{2} \left( \dfrac{1}{\binom{k}{2}} \right)^2 \right] \geq \dfrac{\ell}{\binom{k}{2}} \cdot \left(1 - \dfrac{\ell}{2} \cdot \dfrac{1}{\binom{k}{2}} \right) $$ where the middle inequality follows from $(1-x)^\ell \leq 1 - \ell x + \binom{\ell}{2} x^2$ for $0 \leq x \leq 1$, and the last from $\binom{\ell}{2} \leq \ell^2/2$.

Since $$\Pr[\text{outputting a minimum cut}] \geq \Pr[ (S, \overline{S}) \text{ is output} ] \geq \dfrac{\binom{k}{2}}{\binom{n}{2}} \cdot \dfrac{\ell}{\binom{k}{2}} \cdot \left(1 - \dfrac{\ell}{2} \cdot \dfrac{1}{\binom{k}{2}} \right) \sim \dfrac{1}{\binom{n}{2}} \cdot (\ell - \ell^2/k^2),$$ we have obtained the desired lower bound.

Part 3

Same running time $\Leftrightarrow 2n = n + \ell k \Leftrightarrow \ell k = n$ (up to lower-order terms). We want to maximize our lower bound from part 2, with $k = n/\ell$, thus we want $$\max_{\ell \in \mathbb{R}} \dfrac{1}{\binom{n}{2}} \cdot (\ell - \ell^4/n^2)$$ By taking derivatives (to find critical points), we have $$\partial_{\ell} \left( \dfrac{1}{\binom{n}{2}} \cdot (\ell - \ell^4/n^2) \right) = 0 \Leftrightarrow 1 - 4 \ell^3/n^2 = 0 \Rightarrow \ell = \left( \dfrac{n^2}{4} \right)^{1/3}$$ as that is the only real solution to the above equation. Checking the second derivative, to see that it is a maximum, we have $$\partial_{\ell}^2 \left( \dfrac{1}{\binom{n}{2}} \cdot (\ell - \ell^4/n^2) \right) = -\dfrac{1}{\binom{n}{2}} \cdot 12 \ell^2/n^2 < 0$$ when $\ell \neq 0$, thus we have a local maximum at $\ell = \left( \dfrac{n^2}{4} \right)^{1/3}$.

Hence, we have $k = n/\ell = (4n)^{1/3} \quad \quad \blacksquare$
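
As a quick numeric sanity check (the value of $n$ is arbitrary), maximizing $\ell - \ell^4/n^2$ over integers recovers $\ell \approx (n^2/4)^{1/3}$:

```python
n = 10**6
best = max(range(1, 20000), key=lambda ell: ell - ell**4 / n**2)
print("argmax:", best, " (n^2/4)^(1/3):", (n**2 / 4) ** (1 / 3))
# Both are ~ 6300, and k = n / ell is then ~ (4n)^(1/3).
```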