8. Query Composition Theorems
The composition of two functions $f \colon \{0,1\}^n \to \{0,1\}$ and $g \colon \{0,1\}^m \to \{0,1\}$ is the function $f \circ g \colon (\{0,1\}^m)^n \to \{0,1\}$ defined by
$$(f \circ g)(x^{(1)},\ldots,x^{(n)}) = f\big(g(x^{(1)}),\ldots,g(x^{(n)})\big)$$
for all $x^{(1)},\ldots,x^{(n)} \in \{0,1\}^m$.
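As a quick illustration, here is a minimal sketch in Python of how a composed function evaluates its input blocks; the choices $f = \mathrm{OR}_2$ and $g = \mathrm{AND}_2$ are illustrative, not from the text:

```python
def compose(f, g, blocks):
    # (f ∘ g): apply the inner function g to each block of the input,
    # then apply the outer function f to the resulting bits.
    return f([g(block) for block in blocks])

# Illustrative choices: f = OR on 2 bits, g = AND on 2 bits.
f_or = lambda bits: int(any(bits))
g_and = lambda bits: int(all(bits))

# (OR ∘ AND) on blocks (1,1) and (0,1): g maps them to (1, 0), f gives 1.
result = compose(f_or, g_and, [(1, 1), (0, 1)])
```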
The composition of Boolean functions comes up in the definition of some functions we have already seen (such as the recursive NAND function) and in the study of hardness amplification. It’s also one of the most basic operations we can apply to Boolean functions. So it’s natural to ask how randomized query complexity behaves with respect to composition.
Towards a Composition Conjecture
In the deterministic setting, query complexity behaves multiplicatively under the composition of Boolean functions:
Theorem. For all total or partial Boolean functions $f$ and $g$, $D(f \circ g) = D(f) \cdot D(g)$.
The upper bound is obtained with the natural approach: if we have a decision tree of depth $D(f)$ computing $f$ and a decision tree of depth $D(g)$ computing $g$, we can compute $f \circ g$ by running the tree for $f$ and answering each of its queries by running a copy of the tree for $g$ on the corresponding block of the input, for a total depth of at most $D(f) \cdot D(g)$.
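The multiplicative cost of this construction can be seen concretely in a small sketch; the trivial decision trees used here (reading bits of $\mathrm{AND}_n \circ \mathrm{OR}_m$ one at a time) are an illustrative assumption, not an optimal choice:

```python
def eval_with_count(blocks):
    """Evaluate (AND_n ∘ OR_m) by the composed-tree strategy, counting queries.

    The tree for AND_n queries each of its n input bits; each such query is
    answered by running a tree for OR_m on one block, which queries up to m
    bits (and may stop early). Total queries: at most n * m.
    """
    queries = 0
    outer = 1
    for block in blocks:           # outer tree: one pass over its n inputs
        inner = 0
        for bit in block:          # inner tree: read the block's m bits
            queries += 1
            inner |= bit
            if inner == 1:         # inner tree may stop early
                break
        outer &= inner
    return outer, queries

value, cost = eval_with_count([(0, 1, 0), (1, 0, 0)])
n, m = 2, 3
```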
If we try to use the same type of construction in the randomized setting, we cannot use randomized decision trees that err with probability $\frac13$ to answer the queries of the outer algorithm directly: the errors of the inner runs accumulate. Amplifying the inner algorithm to error $O(1/R(f))$ by repetition and majority vote costs an extra $O(\log R(f))$ factor, which yields the following bound.
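The amplification step can be sketched as follows; the bounded-error subroutine is simulated here by a biased coin, an illustrative stand-in for an actual randomized decision tree:

```python
import random

def noisy_subroutine(rng, truth, error=1/3):
    # Stand-in for a randomized algorithm that errs with probability 1/3.
    return truth if rng.random() > error else 1 - truth

def amplified(rng, truth, repetitions):
    # Repeat the subroutine and take a majority vote; by a Chernoff
    # bound the error drops exponentially with the number of repetitions.
    votes = sum(noisy_subroutine(rng, truth) for _ in range(repetitions))
    return int(votes > repetitions / 2)

rng = random.Random(0)
trials = 2000
single = sum(noisy_subroutine(rng, 1) for _ in range(trials)) / trials
boosted = sum(amplified(rng, 1, 15) for _ in range(trials)) / trials
# Empirically, the majority of 15 runs is correct far more often
# than a single run of the subroutine.
```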
Proposition. For all total or partial Boolean functions $f$ and $g$, $R(f \circ g) = O\big(R(f) \cdot R(g) \cdot \log R(f)\big)$.
But the additional logarithmic factor is not necessary for all choices of $f$ and $g$: for some pairs of functions, $R(f \circ g) = O(R(f) \cdot R(g))$.
Is it possible to obtain even smaller bounds on the randomized query complexity of composed functions? For total functions, this is an open problem.
Conjecture. For every pair of total functions $f$ and $g$, $R(f \circ g) = \Omega\big(R(f) \cdot R(g)\big)$.
For partial functions, the analogous conjecture is false: there are functions for which randomized query complexity behaves sub-multiplicatively under function composition.
A Composition Counter-Example
We can show the following counter-example to the extension of the conjecture above to partial functions:
Theorem. There exist partial functions $f$ and $g$ for which $R(f \circ g) = o\big(R(f) \cdot R(g)\big)$.
Proof idea. We want two ingredients to prove the theorem:

- A function $f$ that can be computed even if it only has very slightly biased predictions on the values of each of its bits, and
- A function $g$ that can be computed to small bias with few queries.

We have already seen a good choice for $g$ in the last lecture: the gap majority function, which can be computed to bias $\Omega(1/\sqrt{n})$ with only 1 query. Now what about a function $f$ that can use small-bias queries to its bits to obtain its value with bounded error? In this case, a variant of the address function can be designed when we have partial functions: define the function to be like the original address function, along with the extra promise that all the bits of the input whose indices are close to the true address have the same value.
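The one-query bias estimate for gap majority can be checked empirically; in this sketch the gap $\sqrt{n}$ and the particular promise input are illustrative choices:

```python
import random

rng = random.Random(1)
n = 10_000
# Promise input for gap majority: at least n/2 + sqrt(n) ones.
ones = n // 2 + 100          # sqrt(10_000) = 100
x = [1] * ones + [0] * (n - ones)

# One query to a uniformly random position agrees with the majority
# value (here, 1) with probability 1/2 + 1/sqrt(n) = 0.51.
trials = 200_000
hits = sum(x[rng.randrange(n)] for _ in range(trials))
estimate = hits / trials
```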
Note that the function $f$ in the counter-example of the proof sketch has very low query complexity. As a result, it’s possible that the sub-multiplicative behaviour of randomized query complexity with respect to composition is only logarithmic in the domain size of $f$.
Question. Does every partial function $f$ on $n$ bits and every partial function $g$ satisfy $R(f \circ g) = \Omega\big(R(f) \cdot R(g) / \log n\big)$?
A Composition Theorem
The counter-example for composition shows that there are functions like $f$ that can be computed even when the queries to their input bits return only slightly biased predictions of the true values. This suggests measuring the cost of the outer function in a model that accounts for such weak queries. We can formalize the idea using the notion of noisy queries.
In a noisy decision tree, each internal node of the tree specifies the index $i$ of the bit to query along with a bias parameter $\gamma \in (0,1]$. The answer to the query equals the true value of $x_i$ with probability $\frac{1+\gamma}{2}$ and its negation otherwise. The noisy decision tree follows the edge labelled with the (possibly incorrect) answer it receives.
Finally, the cost of noisy queries needs to be defined in a non-trivial way. One natural choice is to set the cost of a noisy query with bias $\gamma$ to be $\gamma^2$. Let $\mathrm{noisyR}(f)$ denote the minimum cost of a randomized noisy decision tree that computes $f$ with error at most $\frac13$. Every function $f$ satisfies $\mathrm{noisyR}(f) \le R(f)$, since a standard query is a noisy query with bias $\gamma = 1$ and cost $1$.
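A minimal simulation of a single noisy query and its cost; the interface below is an assumption made for illustration:

```python
import random

def noisy_query(rng, x, i, gamma):
    """Return x[i] with probability (1+gamma)/2, its negation otherwise.

    The cost charged for this query is gamma**2.
    """
    correct = rng.random() < (1 + gamma) / 2
    answer = x[i] if correct else 1 - x[i]
    return answer, gamma ** 2

rng = random.Random(2)
x = [1, 0, 1]
gamma = 0.5
trials = 100_000
agree = sum(noisy_query(rng, x, 0, gamma)[0] == x[0] for _ in range(trials))
rate = agree / trials   # concentrates around (1 + 0.5) / 2 = 0.75
```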
Theorem. For every pair of partial functions $f$ and $g$, $R(f \circ g) = \Omega\big(\mathrm{noisyR}(f) \cdot R(g)\big)$.
The proof of the theorem is obtained by re-arranging the target inequality as an upper bound: we want to show that $\mathrm{noisyR}(f) = O\big(R(f \circ g)/R(g)\big)$. We can establish this upper bound by showing how an algorithm $A$ that computes $f \circ g$ with $T$ queries can be simulated to obtain a noisy decision tree that computes $f$ with cost $O(T/R(g))$.
The natural approach is to fix a pair of hard distributions $\mu_0$ and $\mu_1$ over the $0$- and $1$-inputs of $g$ and, on input $x$, simulate $A$ by answering its queries to the $i$th block with bits drawn from $\mu_{x_i}$. The problem, of course, is that we don’t know $x_i$. But we can also hope to answer some of $A$’s queries without querying $x_i$: when $\mu_0$ and $\mu_1$ assign the same marginal probability to the queried coordinate, we can sample the answer without learning $x_i$ at all. This approach also works when the marginal distributions of $\mu_0$ and $\mu_1$ on the queried coordinate differ. Say the coordinate equals $1$ with probability $p_0$ under $\mu_0$ and with probability $p_1 \ge p_0$ under $\mu_1$, and draw $r$ uniformly from $[0,1]$:

- If $r < p_0$, output 1.
- If $r > p_1$, output 0.
- Else, query the value of $x_i$ and output 1 if and only if $x_i = 1$.

This procedure generates a random bit with the correct marginal distribution, and it queries $x_i$ only with probability $\gamma = p_1 - p_0$, so its expected query cost is $\gamma$.
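The biased-bit procedure can be sketched and checked empirically; the marginal probabilities $p_0 = 0.3$ and $p_1 = 0.7$ are illustrative assumptions:

```python
import random

def sample_marginal(rng, query_bit, p0, p1):
    """Output a bit that is 1 with probability p1 if the hidden bit is 1,
    and with probability p0 if it is 0, querying the hidden bit only
    when necessary (with probability p1 - p0)."""
    r = rng.random()
    if r < p0:
        return 1, 0          # no query needed
    if r > p1:
        return 0, 0          # no query needed
    return query_bit(), 1    # fall back to one real query

rng = random.Random(3)
p0, p1 = 0.3, 0.7
trials = 100_000

ones_when_1 = queries = 0
for _ in range(trials):
    out, q = sample_marginal(rng, lambda: 1, p0, p1)
    ones_when_1 += out
    queries += q

ones_when_0 = sum(sample_marginal(rng, lambda: 0, p0, p1)[0]
                  for _ in range(trials))
```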
The heart of the proof of the theorem above is the observation that if we have access to noisy queries, we can do better: instead of paying an expected cost of $\gamma$ standard queries for each bit generated this way, we can generate it with a single noisy query of bias $\gamma$, whose cost is only $\gamma^2$.
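To make the saving concrete, the cost comparison per simulated bit can be written as follows (a sketch; the identification of a bias-$\gamma$ estimate of an inner copy of $g$ with roughly $\gamma^2 R(g)$ queries follows the amplification intuition from the gap majority example and is stated here informally):

```latex
\underbrace{\gamma}_{\text{expected standard-query cost}}
\quad\longrightarrow\quad
\underbrace{\gamma^2}_{\text{noisy-query cost}},
\qquad \text{a saving of a factor of } 1/\gamma \text{ per simulated bit.}
```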
For the remainder of the proof of the theorem, see this paper with Shalev Ben-David (2020). And for another incomparable composition theorem, see also this paper with Shalev Ben-David, Mika Göös, and Gilbert Maystre (2022).