1. Introduction
In this course, we study a single broad question:
When does randomness help make computers more efficient?
This is a question that comes up in many different settings, and the answer varies greatly depending on the computational model.
We illustrate this variety with three examples.
Example 1: Communication Complexity of Equality
Consider the problem where Alice has a string $x \in \{0,1\}^n$, Bob has a string $y \in \{0,1\}^n$, and they want to determine whether $x = y$ while communicating as few bits as possible. We call this the Equality problem, $\mathrm{EQ}_n$.
There is a simple protocol that Alice and Bob can follow to solve this problem: Alice uses $n$ bits to send her entire string $x$ to Bob, who compares it with $y$ and replies with a single bit indicating whether the two strings are equal, for a total of $n + 1$ bits of communication.
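To make this concrete, here is a minimal sketch of the trivial protocol in Python (the function name and the way the two players are modelled are our own illustration, not part of the formal model):

```python
def trivial_equality_protocol(x: str, y: str) -> bool:
    """Deterministic protocol for Equality on n-bit strings.

    Alice sends her entire string x to Bob (n bits); Bob compares it with
    his string y and sends back a single bit with the answer, for a total
    of n + 1 bits of communication.
    """
    assert len(x) == len(y)
    message_from_alice = x                       # n bits: Alice -> Bob
    answer_from_bob = (message_from_alice == y)  # 1 bit: Bob -> Alice
    return answer_from_bob
```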
Theorem. The deterministic communication complexity of Equality is $D(\mathrm{EQ}_n) = n + 1$.
Can we do better with randomness?
Consider first the public randomness model, where there is a random string $r \in \{0,1\}^n$ that both Alice and Bob can read free of charge.
The answer is an emphatic YES, if we let the protocol fail with some probability:
Theorem. There is a public-coin randomized protocol in which Alice and Bob each send exactly 1 bit to the other player, and:
- When $x = y$, they both always correctly output “equal”; and
- When $x \neq y$, they both correctly output “not equal” with probability at least $1/2$.

As a consequence, $R^{\mathrm{pub}}(\mathrm{EQ}_n) = O(1)$.
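One standard protocol achieving these guarantees has each player announce the inner product modulo 2 of their own string with the shared random string; when $x \neq y$, the two announced bits disagree with probability exactly $1/2$. A minimal Python sketch (the function names are our own):

```python
import random

def parity_fingerprint(s: str, r: list[int]) -> int:
    """Inner product of the bit string s with the random string r, modulo 2."""
    return sum(int(bit) for bit, coin in zip(s, r) if coin == 1) % 2

def public_coin_equality(x: str, y: str) -> bool:
    """One-bit-each public-coin protocol for Equality.

    Alice and Bob both see the same random string r.  Alice announces
    <x, r> mod 2, Bob announces <y, r> mod 2, and both players output
    "equal" iff the two announced bits agree.  When x == y this is always
    correct; when x != y, the bits differ with probability exactly 1/2,
    since <x, r> + <y, r> = <x XOR y, r> mod 2 and x XOR y is nonzero.
    """
    n = len(x)
    r = [random.randint(0, 1) for _ in range(n)]   # the shared public coins
    return parity_fingerprint(x, r) == parity_fingerprint(y, r)
```

Repeating the protocol with fresh public coins $k$ times drives the error probability down to $2^{-k}$, while the communication stays at $O(1)$ bits for any constant $k$.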
We can also consider a more restrictive private randomness model where both Alice and Bob can flip coins to generate random strings, but they can’t see each other’s random strings. Does randomness still help in this setting? Yes!
Theorem. The private-coin randomized communication complexity of Equality is $R^{\mathrm{priv}}(\mathrm{EQ}_n) = O(\log n)$.
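One way to prove this theorem is with a fingerprinting protocol: Alice interprets $x$ as an $n$-bit integer, picks a random prime $p$ of size at most $n^2$ using her private coins, and sends Bob the pair $(p, x \bmod p)$, which takes only $O(\log n)$ bits. Here is a rough Python sketch; the helper names and the exact prime range are our own choices for illustration:

```python
import random

def is_prime(m: int) -> bool:
    """Naive trial-division primality test; adequate for a sketch."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def random_prime_up_to(bound: int) -> int:
    """Sample a uniformly random prime in [2, bound] by rejection sampling."""
    while True:
        candidate = random.randint(2, bound)
        if is_prime(candidate):
            return candidate

def private_coin_equality(x: str, y: str) -> bool:
    """Fingerprinting protocol for Equality with private randomness.

    Alice sends the pair (p, int(x) mod p) for a random prime p <= n^2,
    about 4 log2(n) bits in total.  Bob accepts iff int(y) mod p matches.
    If x == y the protocol is always correct.  If x != y, the difference
    |int(x) - int(y)| < 2^n has fewer than n prime divisors, while there
    are roughly n^2 / (2 ln n) primes below n^2, so for large n the
    probability of a false "equal" is o(1).
    """
    n = len(x)
    p = random_prime_up_to(max(n * n, 7))     # Alice's private coins
    return int(y, 2) % p == int(x, 2) % p
```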
Example 2: Time Complexity of Minimum Cut
Here’s another problem: given a weighted graph $G = (V, E)$ with non-negative edge weights, find a minimum cut of $G$, that is, a partition of $V$ into two non-empty parts that minimizes the total weight of the edges crossing between the parts. We call this the MinCut problem.
One of the classic results in algorithms research is that by using the Max-flow Min-cut Theorem, we can obtain a polynomial-time algorithm for this problem: fix a vertex $s$, compute a minimum $s$-$t$ cut for each of the other $n - 1$ choices of $t$ via a maximum-flow computation, and return the lightest cut found.
Theorem. The MinCut problem can be solved by a deterministic algorithm in polynomial time.
If we consider polynomial-time to represent efficiency, this result states that randomness does not help for the MinCut problem. But what if we measure efficiency on a finer scale and ask for algorithms that run in time nearly linear in the size of the graph?
In 1993, David Karger introduced a radically different solution to the minimum cut problem for unweighted graphs: while the graph has more than 2 vertices, pick an edge uniformly at random and contract it, merging its two endpoints into a single vertex and discarding any self-loops this creates.
Surprisingly, the cut of the original graph that corresponds to the final 2-vertex graph is a minimum cut with probability at least $1/\binom{n}{2} = \frac{2}{n(n-1)}$, so repeating the procedure $O(n^2 \log n)$ times and keeping the smallest cut found yields a minimum cut with high probability.
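A minimal sketch of the contraction procedure, in Python with a simple edge-list representation (the representation and the assumption that the input graph is connected are our own simplifications):

```python
import random

def karger_contract(edges: list[tuple[int, int]], n: int) -> int:
    """One run of Karger's contraction algorithm on a connected unweighted graph.

    `edges` lists the edges (u, v) over vertices 0..n-1.  While more than two
    super-vertices remain, pick a surviving edge uniformly at random, merge
    its endpoints, and discard the self-loops this creates (parallel edges
    are kept).  The edges that survive at the end cross a cut of the original
    graph, and that cut is a minimum cut with probability at least 2/(n(n-1)).
    """
    label = list(range(n))              # label[v] = super-vertex containing v
    remaining = n
    while remaining > 2:
        u, v = random.choice(edges)     # uniformly random surviving edge
        keep, absorb = label[u], label[v]
        label = [keep if l == absorb else l for l in label]   # contract
        remaining -= 1
        # drop edges that now lie inside a single super-vertex (self-loops)
        edges = [(a, b) for (a, b) in edges if label[a] != label[b]]
    return len(edges)                   # size of the cut that survives
```

Each run is cheap, but the near-linear-time bound in the theorem below requires substantially more machinery than naive independent repetition.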
Theorem. There is a randomized algorithm that correctly solves the MinCut problem with high probability in time $O(m \log^3 n)$ on graphs with $n$ vertices and $m$ edges.
This time complexity is optimal up to the polylogarithmic factor, since in general an algorithm must examine all $m$ edges of the graph before it can identify a minimum cut.
Theorem. There is a deterministic algorithm for MinCut on simple graphs with time cost $O(m \cdot \mathrm{polylog}(n))$.
Example 3: Query Complexity of Graph Properties
Let’s consider yet another model of computation.
Our input is again a graph $G$ on $n$ vertices, but now we can only access it through queries: for any pair of vertices $u, v$, we may ask whether $\{u, v\}$ is an edge of $G$, and the cost of an algorithm is the number of pairs it queries. Suppose we want to determine whether $G$ contains at least one edge. Can we always do so without querying every one of the $\binom{n}{2}$ vertex pairs?
Deterministically, no: with an adversary argument (the adversary answers “not an edge” to every query, so until every pair has been queried both the empty graph and a graph with a single edge on an unqueried pair remain consistent with the answers), we can see that any correct algorithm must sometimes query all possible vertex pairs before it can output the answer. We write this result in query complexity terms as follows.
Theorem. $D(\mathrm{NonEmpty}_n) = \binom{n}{2}$.
(A note of warning: we reuse the letters $D$ and $R$ from Example 1, but in this example they count queries rather than communicated bits.)
With randomness, we can do a little bit better: instead of querying all $\binom{n}{2}$ vertex pairs, it suffices to query a uniformly random two-thirds of them, at the price of a small probability of error when the graph does contain an edge.
Theorem. $R(\mathrm{NonEmpty}_n) \le \frac{2}{3}\binom{n}{2} + 1$.
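The upper bound comes from a simple sampling algorithm: query a uniformly random set of about two-thirds of the vertex pairs and report “non-empty” iff one of the queried pairs is an edge. A small Python sketch, assuming query access to the hidden graph through a caller-supplied function `has_edge` (our own interface choice):

```python
import math
import random
from itertools import combinations
from typing import Callable

def randomized_nonempty(n: int, has_edge: Callable[[int, int], bool]) -> bool:
    """Decide whether a hidden n-vertex graph contains at least one edge.

    Queries a uniformly random set of ceil((2/3) * C(n, 2)) vertex pairs.
    On an empty graph the answer is always correct; on a graph with an edge,
    that edge falls among the queried pairs with probability at least 2/3,
    so the error probability is at most 1/3.
    """
    all_pairs = list(combinations(range(n), 2))
    budget = math.ceil(2 * len(all_pairs) / 3)
    for u, v in random.sample(all_pairs, budget):
        if has_edge(u, v):
            return True        # an edge was found: certainly non-empty
    return False               # no edge seen among the sampled pairs
```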
What about other graph properties: are there any of them that we can determine without querying all (or a constant fraction) of the vertex pairs?
For monotone graph properties, the properties of graphs like connectivity or containing a triangle that are preserved when edges are added to the graph, the answer is no, at least for deterministic algorithms.
Theorem. For every non-trivial monotone property $P$ of graphs on $n$ vertices, $D(P) = \Omega(n^2)$.
Does the same bound hold for randomized algorithms? No one knows! The randomized evasiveness conjecture posits that it does, but the conjecture remains open to this day.