We consider two functions $f(n), g(n)$ with values in $\mathbb R_{> 0}$.
We say $f(n) \in O(g(n))$ if there exist $C > 0$ and $n_0$, such that for $n \geq n_0$, $f(n) \leq Cg(n)$
We say that $f(n) \in \Omega (g(n))$ if there exist $C > 0$ and $n_0$ such that for $n \geq n_0$, $f(n) \geq Cg(n)$.
This is equivalent to $g(n) \in O(f(n))$
We say that $f(n) \in \Theta (g(n))$ if there exist $C, C' > 0$ and $n_0$ such that for $n \geq n_0$, $C'g(n) \leq f(n) \leq Cg(n)$
This is equivalent to $f(n) \in O(g(n))$ and $f(n) \in \Omega(g(n))$
In particular, $f(n) \in \Theta(g(n))$ holds if $\lim_{n \to \infty} f(n)/g(n) = C$ for some $0 < C < \infty$
We say that $f(n) \in o(g(n))$ if for all $C > 0$, there exists $n_0$ such that for $n \geq n_0$, $f(n) \leq Cg(n)$
This is equivalent to $\lim_{n \to \infty} f(n)/g(n) = 0$
We say that $f(n) \in \omega(g(n))$ if for all $C > 0$, there exists $n_0$ such that for $n \geq n_0$, $f(n) \geq Cg(n)$
This is equivalent to $\lim_{n \to \infty} f(n)/g(n) = \infty$
This is equivalent to $g(n) \in o(f(n))$
$n^k + c_{k-1} n^{k-1} + \dots + c_0 \in \Theta (n^k)$
$n^{O(1)}$ means (at most) polynomial in $n$
$n \log (n)$ is in $O(n^2)$ and $\Omega(n)$
Suppose that $a \geq 1$ and $b > 1$. Consider the recurrence $$T(n) = a T\left( \frac{n}{b} \right) + \Theta(n^y)$$ in sloppy or exact form. Denote $x = \log_b a$. Then: $$T(n) \in \begin{cases} \Theta(n^x) & \text{if } y < x \\ \Theta(n^y \log n) & \text{if } y = x \\ \Theta(n^y) & \text{if } y > x \end{cases} $$
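For example, mergesort's recurrence $T(n) = 2T(n/2) + \Theta(n)$ has $a = b = 2$, so $x = \log_2 2 = 1 = y$: the balanced case gives $T(n) \in \Theta(n \log n)$. Karatsuba's recurrence $T(n) = 3T(n/2) + \Theta(n)$ has $x = \log_2 3 \approx 1.585 > 1 = y$, so $T(n) \in \Theta(n^{\log_2 3})$.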
In the recursion tree, level $i$ costs $\Theta(n^y r^i)$ with $r = a/b^y$, so $r > 1 \iff y < x$:

| Case | $r$ | $y$ vs. $x$ | Complexity of $T(n)$ |
|---|---|---|---|
| heavy leaves | $r > 1$ | $y < x$ | $T(n) \in \Theta(n^x)$ |
| balanced | $r = 1$ | $y = x$ | $T(n) \in \Theta(n^y \log n)$ |
| heavy top | $r < 1$ | $y > x$ | $T(n) \in \Theta(n^y)$ |
"Heavy leaves" means that cost of the recursion tree is dominated by the cost of the leaf nodes.
"Heavy top" means that cost of the recursion tree is dominated by the cost of the root node.
Easy algorithm (two nested loops) in $\Theta (n^2)$.
Take each $i \leq n/2$ and binary-search its position in the right-hand side.
Enhanced Mergesort
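Assuming (as the two previous lines suggest) that the problem is counting inversions, here is a minimal Python sketch of mergesort enhanced to count them; the function name is hypothetical:

```python
def count_inversions(A):
    """Return (sorted A, number of pairs i < j with A[i] > A[j]); O(n log n)."""
    if len(A) <= 1:
        return A, 0
    mid = len(A) // 2
    left, inv_left = count_inversions(A[:mid])
    right, inv_right = count_inversions(A[mid:])
    merged, i, j, cross = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            cross += len(left) - i    # left[i:] all exceed right[j]
    merged += left[i:] + right[j:]
    return merged, inv_left + inv_right + cross
```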
Convention: $n$ is the number of vertices, $m$ is the number of edges.
Adjacency list: an array $A[1..n]$ s.t. $A[v]$ is the linked list of all edges connected to $v$. $2m$ list cells, total size $\Theta (n + m)$, but testing if an edge exists is not $O(1)$.
Adjacency matrix: a $(0, 1)$ matrix $M$ of size $n \times n$, with $M [v, w] = 1$ iff $\{v, w\}$ is an edge. Size $\Theta(n^2)$, but testing if an edge exists is $O(1)$
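A quick sketch of both representations in Python (a dict stands in for the array $A[1..n]$; the example graph is made up):

```python
n = 4
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]

# Adjacency list: Theta(n + m) space, edge test not O(1)
A = {v: [] for v in range(1, n + 1)}
for v, w in edges:
    A[v].append(w)
    A[w].append(v)          # undirected: 2m list cells in total

# Adjacency matrix: Theta(n^2) space, edge test O(1)
M = [[0] * (n + 1) for _ in range(n + 1)]
for v, w in edges:
    M[v][w] = M[w][v] = 1
```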
path: a sequence $v_1, ..., v_k$ of vertices, with $\{v_i, v_{i+1}\}$ in $E$ for all $i$. $k = 1$ is OK.
subgraph of $G = (V, E)$: a graph $G' = (V', E')$, where $V' \subseteq V$ and $E' \subseteq E$, with both endpoints of every edge of $E'$ in $V'$.
Let $G_i = (V_i, E_i)$, $i = 1, ..., s$ be the connected components of $G = (V, E)$.
Analysis: each vertex enters the queue at most once, and each edge is examined at most twice (once from each endpoint).
Total: $O(n+m)$
For all vertices $v$, if visited$[v]$ is true at the end, there is a path $s \leadsto v$ in $G$
For all vertices $v$, there is a path $s \leadsto v$ in $G$ if and only if visited$[v]$ is true at the end
the BFS tree $T$ is the subgraph made of: all visited vertices, together with, for each visited vertex $v \neq s$, the edge through which $v$ was first discovered.
The BFS tree $T$ is a tree
A graph $G = (V, E)$ is bipartite if there is a partition $V = V_1 \cup V_2$ into two disjoint sets such that all edges have one end in $V_1$ and one end in $V_2$.
Suppose $G$ connected, run BFS from any $s$, and set $V_1 = \{v : \text{level}[v] \text{ even}\}$, $V_2 = \{v : \text{level}[v] \text{ odd}\}$. Then $G$ is bipartite if and only if no edge joins two vertices of the same level.
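A minimal sketch of BFS with levels and the resulting bipartiteness test, assuming a connected graph in the dict-of-lists format above (function names are mine):

```python
from collections import deque

def bfs_levels(A, s):
    """Level (BFS distance) of every vertex reachable from s; O(n + m)."""
    level = {s: 0}
    Q = deque([s])
    while Q:
        v = Q.popleft()
        for w in A[v]:
            if w not in level:
                level[w] = level[v] + 1
                Q.append(w)
    return level

def is_bipartite(A, s):
    """Connected G is bipartite iff no edge joins two same-level vertices."""
    level = bfs_levels(A, s)
    return all(level[v] != level[w] for v in A for w in A[v])
```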
Idea: add an outer loop that runs BFS from each not-yet-visited vertex
The idea:
When we start exploring a vertex $v$, any $w$ that can be connected to $v$ by an unvisited path will be visited during explore($v$).
If $w$ is visited during explore$(v)$, there is a path $v \leadsto w$.
DFS($G$) gives a partition of $G$ into vertex-disjoint rooted trees $T_1, ..., T_k$ (DFS forest)
Suppose the DFS forest is $T_1, ... , T_k$ and let $u$, $v$ be two vertices
All edges in $G$ connect a vertex to one of its descendants or ancestors.
a back edge is an edge in $G$ connecting an ancestor to a descendant, which is not a tree edge.
All edges are either tree edges or back edges (key property).
if start$[u]$ $<$ start$[v]$, then either finish$[u]$ < start$[v]$ or finish$[v]$ < finish$[u]$.
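A recursive sketch of DFS recording start/finish times (names are assumptions; an explicit stack would avoid recursion limits on large graphs):

```python
def dfs_forest(A):
    """DFS over all vertices; returns start/finish times and tree parents."""
    start, finish, parent = {}, {}, {}
    clock = [0]
    def explore(v):
        clock[0] += 1; start[v] = clock[0]
        for w in A[v]:
            if w not in start:
                parent[w] = v          # tree edge (v, w)
                explore(w)
        clock[0] += 1; finish[v] = clock[0]
    for v in A:                        # outer loop: one tree per unvisited root
        if v not in start:
            parent[v] = None
            explore(v)
    return start, finish, parent
```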
for $G$ connected, a vertex $v$ in $G$ is a cut vertex if removing $v$ (and all edges that contain it) makes $G$ disconnected. Also called articulation points.
The root $s$ is a cut vertex if and only if it has more than one child in the DFS tree.
for a vertex $v$, let $a(v) = \min\big(\{\text{level}[v]\} \cup \{\text{level}[w] : \{v, w\} \text{ is a back edge}\}\big)$ and let $m(v) = \min\{a(u) : u \text{ in the subtree rooted at } v\}$, the smallest level reachable from the subtree of $v$ using at most one back edge.
For any $v$ (except the root), $v$ is a cut vertex if and only if it has a child $w$ with $m(w) \geq$ level$[v]$.
if $v$ has children $w_1, ... ,w_k$, then $m(v)$ = min{$a(v)$, $m(w_1)$, ... , $m(w_k)$}
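A sketch of the resulting $O(n + m)$ cut-vertex algorithm for a connected graph (names are mine). Minimizing over every visited neighbour's level, including the tree edge back to the parent, is safe here: that edge contributes exactly the parent's level, which never makes the test miss a genuine escape above the parent.

```python
def cut_vertices(A, s):
    """All cut vertices of a connected graph, via one DFS from s."""
    level, cuts, root_children = {s: 0}, set(), [0]
    def explore(v):
        m = level[v]                       # smallest level reachable so far
        for w in A[v]:
            if w not in level:             # tree edge (v, w)
                level[w] = level[v] + 1
                if v == s:
                    root_children[0] += 1
                mw = explore(w)
                if v != s and mw >= level[v]:
                    cuts.add(v)            # w's subtree cannot escape above v
                m = min(m, mw)
            else:
                m = min(m, level[w])       # back edge (or edge to the parent)
        return m
    explore(s)
    if root_children[0] > 1:               # root rule from above
        cuts.add(s)
    return cuts
```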
The algorithms work without any change.
Suppose we have a DFS forest. Edges $(v, w)$ of $G$ are one of the following:
If $w$ was not visited when $(v, w)$ was examined: a tree edge.
If $w$ was visited: a back edge ($w$ is an ancestor of $v$), a forward edge ($w$ is a descendant of $v$ but not its child in the tree), or a cross edge ($w$ is neither).
$G$ has a cycle if and only if there is a back edge in the DFS forest
Suppose $G = (V, E)$ is a DAG. A topological order is an ordering $<$ of $V$ such that for any edge $(v, w)$, we have $v < w$. No such order if there are cycles.
Suppose that $V$ is ordered using the reverse of the finishing order: $v < w \Leftrightarrow$ finish$[w]$ $<$ finish$[v]$. This is a topological order.
Consequence: topological order in $O(n + m)$.
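A sketch in Python, appending each vertex when it finishes and reversing at the end (assumes the input is a DAG in dict-of-lists form):

```python
def topological_order(A):
    """Reverse DFS finishing order of a DAG; O(n + m)."""
    visited, order = set(), []
    def explore(v):
        visited.add(v)
        for w in A[v]:
            if w not in visited:
                explore(w)
        order.append(v)                # v finishes last among its subtree
    for v in A:
        if v not in visited:
            explore(v)
    order.reverse()                    # reverse finishing order
    return order
```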
A directed graph $G$ is strongly connected if for all $v, w$ in G, there is a path $v \leadsto w$ (and thus a path $w \leadsto v$).
$G$ is strongly connected iff there exists $s$ such that for all $w$, there are paths $s \leadsto w$ and $w \leadsto s$.
test in $O(n + m)$: check that some $s$ reaches all vertices in $G$ and in the reverse graph $G^{\text{rev}}$
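A sketch of that test (iterative reachability to keep it short; names are mine):

```python
def is_strongly_connected(A):
    """True iff some s reaches all vertices in G and in the reverse graph."""
    def reachable(g, s):
        seen, stack = {s}, [s]
        while stack:
            v = stack.pop()
            for w in g[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    rev = {v: [] for v in A}           # build the reverse graph
    for v in A:
        for w in A[v]:
            rev[w].append(v)
    s = next(iter(A))
    return len(reachable(A, s)) == len(A) and len(reachable(rev, s)) == len(A)
```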
a strongly connected component of $G$ is a maximal set of vertices $C \subseteq V$ such that for all $v, w \in C$, there are paths $v \leadsto w$ and $w \leadsto v$.
The vertices of strongly connected components form a partition of $V$.
$v$ and $w$ are in the same strongly connected component if and only if there are paths $v \leadsto w$ and $w \leadsto v$.
A directed graph $G$ can be seen as a DAG of disjoint strongly connected components.
Context: we are trying to solve a combinatorial optimization problem:
Input: \(n\) intervals \([s_1, f_1], [s_2, f_2], ... , [s_n, f_n]\).
Output: a maximum-size subset of pairwise disjoint intervals.
Let \(S = [i_1, ..., i_k]\) be the intervals chosen by the greedy algorithm, in order, and assume \(O = [j_1, ..., j_m]\) is an optimal solution, sorted by finish time. Our goal is to show \(|S| = |O|\).
Lemma: For all indices \(r ≤ k\) we have \(f (i_r) ≤ f (j_r)\).
Proof: We use induction. For \(r = 1\) the statement is true. Suppose \(r > 1\) and the statement is true for \(r − 1\). We will show that the statement is true for \(r\). By induction hypothesis we have \(f (i_{r−1}) ≤ f (j_{r−1})\). By the order on \(O\) we have \(f (j_{r−1}) < s(j_r)\). Hence we have \(f (i_{r−1}) < s(j_r)\). Thus at the time the greedy algorithm chose \(i_r\), the interval \(j_r\) was a possible choice. The greedy algorithm chooses an interval with the smallest finish time. So, \(f (i_r) ≤ f (j_r)\).
Theorem: The greedy algorithm returns an optimal solution. Proof: by contradiction. If the output \(S\) is not optimal, then \(|S| < |O|\). \(i_k\) is the last interval in \(S\), and \(O\) must have an interval \(j_{k+1}\). Apply the previous lemma with \(r = k\) to get \(f(i_k) ≤ f(j_k)\). We have \(f(i_k) ≤ f(j_k) < s(j_{k+1})\). So \(j_{k+1}\) was a possible choice for the greedy algorithm to add to \(S\), yet the algorithm stopped at \(i_k\). This is a contradiction.
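A minimal sketch of the earliest-finish-time greedy (closed intervals, so "disjoint" means the next start is strictly after the last finish):

```python
def interval_scheduling(intervals):
    """Maximum-size set of pairwise disjoint intervals; O(n log n)."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):   # by finish time
        if s > last_finish:            # disjoint from all chosen intervals
            chosen.append((s, f))
            last_finish = f
    return chosen
```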
Input: \(n\) intervals \([s_1, f_1], [s_2, f_2], ..., [s_n, f_n]\)
Output: use the minimum number of colors to color the intervals, so that each interval gets one color and two overlapping intervals get two different colors.
Assume the greedy algorithm uses \(k\) colors. To prove correctness, we show that there is no way to solve the problem using at most \(k − 1\) colors.
Proof of correctness: Suppose interval \(ℓ\) is the first interval to use color \(k\). Since the greedy algorithm always picks the smallest available color, interval \(ℓ\) overlaps intervals already colored \(1, ..., k − 1\); call them \([s_{i_1}, f_{i_1}], [s_{i_2}, f_{i_2}], ..., [s_{i_{k−1}}, f_{i_{k−1}}]\). Since intervals are considered in order of start time, we have \(s_{i_j} ≤ s_ℓ\) for \(1 ≤ j ≤ k − 1\). Since all these intervals overlap \([s_ℓ, f_ℓ]\), we also have \(s_ℓ ≤ f_{i_j}\) for \(1 ≤ j ≤ k − 1\). Hence the time \(s_ℓ\) is contained in \(k\) intervals, so there is no \(k − 1\) coloring.
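One standard way to implement this greedy coloring uses a min-heap of (finish time, color) pairs; a sketch, assuming intervals are given as (s, f) pairs and processed by start time as in the proof:

```python
import heapq

def interval_coloring(intervals):
    """Color intervals with the minimum number of colors; O(n log n)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    color = [None] * len(intervals)
    in_use, next_color = [], 0                 # heap of (finish, color)
    for i in order:
        s, f = intervals[i]
        if in_use and in_use[0][0] < s:        # earliest finisher is disjoint
            _, c = heapq.heappop(in_use)       # reuse its color
        else:
            c, next_color = next_color, next_color + 1
        color[i] = c
        heapq.heappush(in_use, (f, c))
    return color
```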
Input: n jobs, each requiring processing time \(p_{i}\)
Output: An ordering of the jobs such that the total completion time is minimized.
Note: The completion time of a job is defined as the time when it is finished.
Example: n = 5, processing times [2, 8, 1, 10, 5]
order the jobs in non-decreasing processing times
Let \(L = [e_{1}, ..., e_{n}]\) be an optimal solution (as a permutation of \([1, ..., n]\)). Suppose that \(L\) is not in non-decreasing order of processing times. So there exists \(i\) such that \(t(e_{i}) > t(e_{i+1})\). Sum of the completion times of \(L\) is \(nt(e_{1}) + (n − 1)t(e_{2}) + · · · + t(e_{n})\). The contribution of \(e_{i}\) and \(e_{i+1}\) is \((n − i + 1)t(e_{i}) + (n − i)t(e_{i+1})\). Now, switch \(e_{i}\) and \(e_{i+1}\) to get a permutation \(L'\). Their contribution becomes \((n − i + 1)t(e_{i+1}) + (n − i)t(e_{i})\). Nothing else changes so \(T (L') − T (L) = t(e_{i+1}) − t(e_{i}) < 0\). A contradiction.
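A tiny check of the rule on the running example (helper name is mine):

```python
def total_completion_time(p):
    """Total completion time when the jobs run in the given order."""
    t, total = 0, 0
    for pi in p:
        t += pi          # completion time of this job
        total += t
    return total

p = [2, 8, 1, 10, 5]                       # example from above
assert total_completion_time(sorted(p)) == 54
assert total_completion_time(p) == 70      # the unsorted order is worse
```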
Shortest paths exist in any directed weighted graph under the assumption that \(G\) has no negative-weight cycles
The shortest path weight from \(u\) to \(v\): \[\delta(u, v) = \begin{cases} \min\{w(P) : P \text{ is a path } u \leadsto v\} & \text{if there exists a path from } u \text{ to } v \\ \infty & \text{otherwise} \end{cases} \]
Dijkstra's algorithm is a greedy algorithm
Input: A weighted directed graph with non-negative edge weights
For all vertices, maintain quantities \(d[v]\) (the length of the best known \(s \leadsto v\) path) and parent\([v]\) (the previous vertex on that path)
Initialize \(C = ∅\) and \(d[s] = 0\), and repeat the following until \(C = V\): add to \(C\) the vertex \(v ∉ C\) with smallest \(d[v]\), then set \(d[w] = \min(d[w], d[v] + w(v, w))\) for all edges \((v, w)\)
Which Abstract Data Type (ADT) should we use for the vertices? A priority queue, implemented as a binary min-heap keyed by \(d[v]\)
Check the course notes.
Check the notes for a complete proof.
Claim: For each vertex \(v ∈ V\), we have \(d[v] = δ(s, v)\) at the time when \(v\) is added to the set \(C\). The claim is proved by induction on the order in which vertices are added to \(C\); see the notes for the steps.
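A compact sketch with Python's heapq; lazy deletion (skipping stale heap entries) stands in for decrease-key:

```python
import heapq

def dijkstra(A, s):
    """A[v] = list of (w, weight) pairs, non-negative weights.
    O((n + m) log n) with a binary heap."""
    d = {v: float("inf") for v in A}
    d[s] = 0
    C = set()                          # finished vertices
    heap = [(0, s)]
    while heap:
        dv, v = heapq.heappop(heap)
        if v in C:
            continue                   # stale entry
        C.add(v)
        for w, wt in A[v]:
            if dv + wt < d[w]:
                d[w] = dv + wt         # relax edge (v, w)
                heapq.heappush(heap, (d[w], w))
    return d
```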
Now, suppose the edges have weights \(w(e_i)\)
Goal: a spanning tree with minimal weight (a minimum spanning tree)
For pseudocode, check the notes.
Let \(G\) be a connected graph, and let \(A\) be a subset of the edges of \(G\). If \((V, A)\) has no cycle and \(|A| < n − 1\), then one can find an edge \(e\) not in \(A\) such that \(A ∪ \{e\}\) still has no cycle.
In any graph, #vertices − #connected components ≤ #edges. For \((V, A)\), this gives \(n − c ≤ |A| < n − 1\), so \(c > 1\). Take any edge on a path that connects two components.
If the output is \(A = [e_{1}, ..., e_{r}]\), then \((V, A)\) is a spanning tree (and \(r = n − 1\))
Of course, \((V, A)\) has no cycle. Suppose \((V, A)\) is not a spanning tree. Then, there exists an edge \(e\) not in \(A\) such that \((V, A ∪ \{e\})\) still has no cycle. But the algorithm considered \(e\) and rejected it, which happens only when \(e\) creates a cycle with the edges already in \(A\) at that time (a subset of the final \(A\)). Contradiction.
Let \((V, A)\) and \((V, T)\) be two spanning trees, and let \(e\) be an edge in \(T\) but not in \(A\).
Then there exists an edge \(e'\) in \(A\) but not in \(T\) such that \((V, T + e' − e)\) is still a spanning tree.
Bonus: \(e'\) is on the cycle that \(e\) creates in \(A\).
Write \(e = \{v, w\}\). \((V, A + e)\) contains a cycle \(c = v, w, ..., v\). Removing \(e\) from \(T\) splits \((V, T − e)\) into two connected components \(T_1\), \(T_2\). \(c\) starts in \(T_1\), crosses over to \(T_2\), so it contains another edge \(e'\) between \(T_2\) and \(T_1\). \(e'\) is in \(A\), but not in \(T\). \((V, T + e' − e)\) is a spanning tree.
Let \(A\) be the output of the algorithm. Let \((V, T)\) be any spanning tree. If \(T \neq A\), let \(e\) be an edge in \(T\) but not in \(A\). So there is an edge \(e'\) in \(A\) but not in \(T\) such that \((V, T + e' − e)\) is a spanning tree, and \(e'\) is on the cycle that \(e\) creates in \(A\). During the algorithm, we considered \(e\) but rejected it, because it created a cycle in \(A\). All other edges in this cycle were added before \(e\), and edges are considered by non-decreasing weight, so \(w(e') ≤ w(e)\). So \(T' = T + e' − e\) has weight \(≤ w(T)\), and one more common element with \(A\).
Keep going: each exchange gives a spanning tree of weight \(≤ w(T)\) with one more common edge with \(A\), so after finitely many steps we reach \(A\) itself and \(w(A) ≤ w(T)\).
Operations on disjoint sets of vertices:
Using an array of linked lists. To do find, add an array of indices, \(X[i]\) = the set that contains \(i\)
With the smaller-into-larger heuristic (relabel the smaller of the two sets), the worst case for one union is still \(\Theta(n)\), but the total time improves.
Total cost of union per vertex is \(O(\log n)\): each time a vertex is relabeled, the size of its set at least doubles.
\(O(n \log(n))\) for all unions and \(O(m \log(m))\) total
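A sketch of Kruskal's algorithm with a union-find; this uses the forest version with union by size rather than the linked-list version above, but the smaller-into-larger idea is the same:

```python
def kruskal(n, edges):
    """edges = list of (weight, v, w), vertices 0..n-1; returns the MST."""
    parent, size = list(range(n)), [1] * n
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v
    def union(v, w):
        rv, rw = find(v), find(w)
        if rv == rw:
            return False                       # edge would create a cycle
        if size[rv] < size[rw]:
            rv, rw = rw, rv
        parent[rw] = rv                        # smaller tree under larger
        size[rv] += size[rw]
        return True
    return [e for e in sorted(edges) if union(e[1], e[2])]
```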
The naive Fibonacci algorithm runs in \(T(n) \in \Theta(\varphi^n)\), where \(\varphi = \frac{1 + \sqrt{5}}{2}\) is the golden ratio.
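Memoization collapses that exponential recursion tree to \(O(n)\) distinct calls; a minimal illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recurrence + memo table: O(n) calls instead of Theta(phi^n)."""
    return n if n <= 1 else fib(n - 1) + fib(n - 2)
```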
either we choose \(I_{n}\) or not.
Assume \(I_{1}, ..., I_{n}\) sorted by increasing end time: \(f_{i} ≤ f_{i+1}\)
For all \(j\), the set of intervals \(I_k\) with \(k < j\) that do not overlap \(I_j\) is of the form \(I_1, ..., I_{p_j}\) for some \(0 ≤ p_j < j\) (\(p_j = 0\) if no such interval)
The algorithm will need the \(p_{i}\)'s.
The algorithm FindPj, which computes the \(p_i\)'s, is given in the notes.
\(O(n \log(n))\) (sorting by \(f_i\)) and \(O(n)\) (loops)
\(M[i]\) is the maximal weight we can get with intervals \(I_{1}, ..., I_{i}\)
\(M [0] = 0\) and for \(i ≥ 1\) \[M [i] = \max(M [i − 1], M [p_{i}] + w_{i})\]
\(O(n \log(n))\) (sorting twice) and \(O(n)\) (finding the \(M [i]\)'s)
recover the optimum set for an extra \(O(n)\)
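A sketch of the whole computation; bisect does the FindPj work, under the convention that intervals are closed, so \(I_j\) and \(I_i\) are disjoint when \(f_j < s_i\):

```python
import bisect

def weighted_interval_scheduling(intervals):
    """intervals = list of (s, f, w); returns the maximum total weight."""
    intervals = sorted(intervals, key=lambda iv: iv[1])   # by finish time
    finishes = [iv[1] for iv in intervals]
    n = len(intervals)
    M = [0] * (n + 1)
    for i in range(1, n + 1):
        s, _, w = intervals[i - 1]
        p = bisect.bisect_left(finishes, s)   # p_i = #intervals with f_j < s
        M[i] = max(M[i - 1], M[p] + w)
    return M[n]
```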
Can be solved with a greedy algorithm.
Either we choose item \(n\) or not. Then the optimum \(O[W, n]\) is the max of two values: \(v_n + O[W − w_n, n − 1]\) if we choose \(n\) (possible only when \(w_n ≤ W\)), and \(O[W, n − 1]\) if we don't choose \(n\)
\(O[w, i]\) :=maximum value achievable using a knapsack of capacity \(w\) and items \(1, ..., i\)
Pseudocode is available in the notes.
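A minimal table-filling sketch, under assumed variable names:

```python
def knapsack_01(values, weights, W):
    """O[w][i] = best value with capacity w and items 1..i; Theta(nW)."""
    n = len(values)
    O = [[0] * (n + 1) for _ in range(W + 1)]
    for i in range(1, n + 1):
        v, wt = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            O[w][i] = O[w][i - 1]                             # skip item i
            if wt <= w:
                O[w][i] = max(O[w][i], v + O[w - wt][i - 1])  # take item i
    return O[W][n]
```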
\(\Theta(nW)\)
This is called a pseudo-polynomial algorithm.
In our word RAM model, we have been assuming all \(v_i\)'s and \(w_i\)'s fit in a word. So the input size is \(Θ(n)\) words.
But the runtime also depends on the values of the inputs: \(W\) fits in a single word, yet \(\Theta(nW)\) is exponential in the number of bits of \(W\).
0/1-knapsack is NP-complete, so we don't really expect to do much better
An array \(A[1..n]\) of integers
A longest increasing subsequence of \(A\) (or just its length) (does not need to be contiguous)
Let \(L[i]\) be the length of the longest increasing subsequence of \(A[1..i]\) that ends with \(A[i]\)
Then \(L[1] = 1\).
A longest increasing subsequence \(S\) ending at \(A[i]\) looks like \[S = [..., A[j], A[i]] = S' \textbf{ cat } [A[i]],\] where \(S'\) is a longest increasing subsequence ending at \(A[j]\) (or is empty). We don't know \(j\), but we can try all \(j < i\) for which \(A[j] < A[i]\)
Check the notes for pseudocode.
\(\Theta(n^2)\)
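The two nested loops, as a sketch:

```python
def lis_length(A):
    """L[i] = length of the longest increasing subsequence ending at A[i]."""
    n = len(A)
    L = [1] * n
    for i in range(n):
        for j in range(i):
            if A[j] < A[i]:                  # S' can be extended by A[i]
                L[i] = max(L[i], L[j] + 1)
    return max(L, default=0)
```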
Arrays \(A[1..n]\) and \(B[1..m]\) of characters
The maximum length \(k\) of a common subsequence to \(A\) and \(B\) (subsequences do not need to be contiguous)
let \(M[i, j]\) be the length of a longest common subsequence of \(A[1..i]\) and \(B[1..j]\). Then \(M[i, 0] = M[0, j] = 0\), and for \(i, j ≥ 1\): \[M[i, j] = \begin{cases} M[i − 1, j − 1] + 1 & \text{if } A[i] = B[j] \\ \max(M[i − 1, j], M[i, j − 1]) & \text{otherwise} \end{cases}\]
The algorithm computes all \(M [i, j]\), using two nested loops, so runtime \(\Theta(mn)\)
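A sketch of the table computation:

```python
def lcs_length(A, B):
    """M[i][j] = LCS length of A[:i] and B[:j]; Theta(mn)."""
    n, m = len(A), len(B)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:
                M[i][j] = M[i - 1][j - 1] + 1          # match the last chars
            else:
                M[i][j] = max(M[i - 1][j], M[i][j - 1])
    return M[n][m]
```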
arrays \(A[1..n]\) and \(B[1..m]\) of characters
The minimum number of operations {add, delete, change} to transform \(A\) into \(B\)
Let \(D[i, j]\) be the edit distance between \(A[1..i]\) and \(B[1..j]\). Then \(D[i, 0] = i\), \(D[0, j] = j\), and for \(i, j ≥ 1\): \[D[i, j] = \min(D[i − 1, j] + 1,\ D[i, j − 1] + 1,\ D[i − 1, j − 1] + [A[i] \neq B[j]])\]
The algorithm computes all \(D[i, j]\), using two nested loops, so runtime \(\Theta(mn)\)
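The same table-filling pattern, as a sketch:

```python
def edit_distance(A, B):
    """D[i][j] = edit distance between A[:i] and B[:j]; Theta(mn)."""
    n, m = len(A), len(B)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i                                    # delete all of A[:i]
    for j in range(m + 1):
        D[0][j] = j                                    # add all of B[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1,             # delete A[i]
                          D[i][j - 1] + 1,             # add B[j]
                          D[i - 1][j - 1] + (A[i - 1] != B[j - 1]))  # change
    return D[n][m]
```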
define \(M [i, j]\) by
Prefix sums
Check the notes for pseudocode
\(O(n^3)\)
An independent set of a graph \(G = (V, E)\) is a subset \(S ⊆ V\) such that there are no edges between elements of \(S\).
The maximum independent set problem (for a general graph):
Input: \(G(V, E)\)
Output: An independent set of maximum cardinality.
\(I(v)\) := size of largest independent set of subtree rooted at \(v\) \[I(v) = \max \left\{1 + \sum_{u \in S(v)} I(u), \sum_{u \in C(v)} I(u)\right\}\] where \(C(v)\) is the set of children of \(v\) and \(S(v)\) is the set of grandchildren of \(v\)
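A sketch of the recurrence on a rooted tree given as a children map; memoization matters because \(I(u)\) is needed both by \(u\)'s parent and grandparent:

```python
from functools import lru_cache

def max_independent_set_tree(children, root):
    """children[v] = list of children of v (empty for leaves); returns I(root)."""
    @lru_cache(maxsize=None)
    def I(v):
        take = 1 + sum(I(g) for c in children[v] for g in children[c])
        skip = sum(I(c) for c in children[v])
        return max(take, skip)
    return I(root)
```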
For \(i = 0, ..., n - 1\), set \(\delta_i(s, v)\) to be the length of a shortest \(s \leadsto v\) path with at most \(i\) edges. If no such path exists, set \(\delta_i(s, v) = \infty\).
Check the course notes for the pseudo-code
\(O(nm)\)
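A sketch of Bellman-Ford on an edge list, including the extra detection round mentioned later:

```python
def bellman_ford(n, edges, s):
    """edges = list of (u, v, w); n - 1 rounds of relaxing every edge: O(nm)."""
    d = [float("inf")] * n
    d[s] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w        # relaxation
    # one extra O(m) pass: any further improvement reveals a negative cycle
    has_negative_cycle = any(d[u] + w < d[v] for u, v, w in edges)
    return d, has_negative_cycle
```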
For all \(i\), after iteration \(i\), we have \(d[v] ≤ δ_i(s, v)\) for all \(v\).
By induction on \(i\)
The operation \(d[v] ← \min(d[v], d[u] + w(u, v))\) is called a relaxation
\(d[v]\) can only decrease through a relaxation, and if \(δ(s, u) ≤ d[u]\) and \(δ(s, v) ≤ d[v]\) before relaxation, then \(δ(s, v) ≤ d[v]\) post-relaxation.
First part is obvious. For the second part, we have \begin{align*} δ(s, v) & ≤ δ(s, u) + w(u, v) \\ & ≤ d[u] + w(u, v). \end{align*} Since \(d[v]\) gets the value \(\min\{d[v], d[u] + w(u, v)\}\) and \(δ(s, v)\) is at most both terms, we still have \(δ(s, v) ≤ d[v]\) after the relaxation.
For \(i = 0, . . . , n − 1\), after iteration \(i\), \(δ(s, v) ≤ d[v] ≤ δ _{i} (s, v)\) for all \(v\).
This follows from the previous two parts
A cycle is a sequence of vertices \(v_1, v_2, ..., v_k\), with \(v_k = v_1\); it is a negative cycle if its total weight \(\sum_{i=1}^{k-1} w(v_i, v_{i+1})\) is negative.
for extra \(O(m)\), can check the presence of a negative cycle
doing Bellman-Ford from all u takes \(O(mn^2)\)
Let \(D_i(v_j, v_k)\) be the weight of a shortest \(v_j \leadsto v_k\) path using only \(v_1, ..., v_i\) as intermediate vertices. Then \[D_i(v_j, v_k) = \min(D_{i−1}(v_j, v_k),\ D_{i−1}(v_j, v_i) + D_{i−1}(v_i, v_k))\]
Check the course notes for the pseudo-code
\(\Theta(n^3)\)
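A sketch updating a single matrix in place (safe because row and column \(i\) do not change during round \(i\)):

```python
def floyd_warshall(n, weight):
    """weight[j][k] = w(v_j, v_k), inf if absent, 0 on the diagonal."""
    D = [row[:] for row in weight]
    for i in range(n):                 # allow v_i as an intermediate vertex
        for j in range(n):
            for k in range(n):
                if D[j][i] + D[i][k] < D[j][k]:
                    D[j][k] = D[j][i] + D[i][k]
    return D
```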
Given a problem instance \(I\), answer a certain question “yes” or “no”.
Input for the specified problem.
Correct answer (“yes” or “no”) for the specified problem instance. \(I\) is a yes-instance if the correct answer for the instance \(I\) is “yes”. \(I\) is a no-instance if the correct answer for the instance \(I\) is “no”.
Size(\(I\)) is the number of bits required to specify (or encode) the instance \(I\).
A (decision) problem \(A\) is polynomial time reducible to a decision problem \(B\), if there is a polynomial time algorithm \(F\) that transforms any instance \(I _{A}\) of \(A\) to an instance \(I _{B}\) of \(B\) such that \[I_{A} \text{ is a yes-instance of } A \iff I_{B} \text{ is a yes-instance of } B\]
We write \(A ≤_{ P} B\), if such a polynomial time reduction exists. We also write \(A = _{P} B\) if \(A ≤ _{P} B\) and \(B ≤ _{P} A\).
If \(A ≤ _{P} B\) and \(B ≤ _{P} C\), then \(A ≤ _{P} C\).
\(S ⊆ V\) is a clique if \(\{u, v\} ∈ E\) for all distinct \(u, v ∈ S\).
Input: \(G = (V, E)\) and an integer \(k\)
Output: (the answer to the question) is there a clique in \(G\) with at least \(k\) vertices?
\(S ⊆ V\) is an independent set if \(\{u, v\} ∉ E\) for all distinct \(u, v ∈ S\).
Input: \(G = (V, E)\) and an integer \(k\)
Output: (the answer to the question) is there an independent set in \(G\) with at least \(k\) vertices?
\(S ⊆ V\) is a vertex cover if \(\{u, v\} \cap S \neq \emptyset\) for all \(\{u, v\} ∈ E\).
Input: \(G = (V, E)\) and an integer \(k\)
Output: (the answer to the question) is there a vertex cover in \(G\) with at most \(k\) vertices?
Clique \(=_P\) IS
Assume \(G = (V, E)\) is a graph. \(S ⊆ V\) is a vertex cover \(⇐⇒\) \(V −S\) is an independent set in \(G\).
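A tiny sanity check of this complement relation on a made-up graph:

```python
def is_vertex_cover(E, S):
    return all(u in S or v in S for u, v in E)

def is_independent_set(E, S):
    return all(u not in S or v not in S for u, v in E)

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 4)]
S = {2, 3}                                   # a vertex cover
assert is_vertex_cover(E, S)
assert is_independent_set(E, V - S)          # complement is independent
```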
VC \(=_P\) IS
A cycle is a Hamiltonian Cycle if it touches every vertex exactly once.
Input: Undirected graph \(G = (V, E)\)
Output: (the answer to the question) Does \(G\) have a Hamiltonian Cycle?
A path is a Hamiltonian Path if it touches every vertex exactly once.
Input: Undirected graph \(G = (V, E)\)
Output: (the answer to the question) Does \(G\) have a Hamiltonian Path?
HC \(=_P\) HP
Input: A CNF-formula in which each clause has at most three literals.
Output: (the answer to the question) is there a truth assignment to the variables that satisfies all the clauses?
3-SAT \(≤_P\) IS
For a problem \(X\), we represent an instance of \(X\) as a binary string \(S\).
A problem \(X\) is in NP, if there is a polynomial time verification algorithm \(ALG_{X}\) such that the input \(S\) is a yes-instance iff there is a proof (certificate) \(t\) which is a binary string of length \(\operatorname{poly}(|S|)\) so that \(ALG_{X} (S, t)\) returns yes.
Example (VC): \(Alg_{v}(S, t)\): go through all edges in \(E\) and check that \(t\) covers every edge and \(|t| ≤ k\).
Example (3-SAT): \(Alg_{v}(S, t)\): check whether the truth assignment \(t\) satisfies all the clauses.
Note: Clique, IS, HC, HP, Subset-Sum are all in NP.
Is a given graph non-Hamiltonian (i.e. no Hamiltonian Cycle)?
co-NP is the set of decision problems whose no-instances can be certified in polynomial time.
P is the set of decision problems that can be solved in polynomial time.
\(P \subseteq NP\)
Informally, we say a problem is NP-complete if it is among the hardest problems in NP.
A problem \(X ∈ NP\) is NP-complete if \(Y ≤_P X\) for all \(Y ∈ NP\).
P = NP \(⇐⇒\) an NP-complete problem can be solved in polynomial time.
3-SAT is NP-complete.
If \(X ∈ NP\) and we can prove 3-SAT \(≤_P X\), then \(X\) is NP-complete. For example IS \(∈\) NPC, since 3-SAT \(≤_P\) IS.
To prove that a problem \(X ∈\) NP is NP-complete, we just need to find an NP-complete problem \(Y\), and prove that \(Y ≤_P X\).
Actual proof omitted. Check the course notes for details.