Making Proofs More Constructive, and Algorithms Less Random
A central topic in the theory of computation is derandomization: say we have an algorithm that flips coins to achieve some goal, and succeeds with high probability. Can we transform this algorithm into a deterministic procedure, while maintaining most of its desirable behavior (i.e. its efficiency and correctness)?
A similar (though more informal) question arises frequently in combinatorics: say we have a non-constructive proof of the existence of some object. Can we replace this proof with a more “constructive” argument, without weakening the result too much?
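As a small illustration of both questions (not taken from this work), consider Max-Cut: a uniformly random partition of the vertices cuts each edge with probability 1/2, so some cut of size at least |E|/2 must exist, a non-constructive existence proof that also yields a randomized algorithm. The method of conditional expectations makes both constructive and deterministic. The sketch below assumes a simple graph given as an edge list; the names `greedy_cut` and `cut_size` are illustrative, not from the text.

```python
def cut_size(edges, side):
    """Number of edges crossing the partition."""
    return sum(1 for a, b in edges if side[a] != side[b])

def greedy_cut(edges, n):
    """Deterministic Max-Cut via the method of conditional expectations:
    place vertices one at a time on whichever side cuts at least half of
    the edges to already-placed neighbors.  A simple induction shows the
    final cut has size >= |E|/2, matching the randomized guarantee."""
    side = {}
    for v in range(n):
        # Count already-placed neighbors of v on each side.
        on0 = on1 = 0
        for a, b in edges:
            u = b if a == v else a if b == v else None
            if u is not None and u in side:
                if side[u] == 0:
                    on0 += 1
                else:
                    on1 += 1
        # Placing v on side 0 cuts its edges to side-1 neighbors, and
        # vice versa; pick whichever choice cuts at least as many.
        side[v] = 0 if on1 >= on0 else 1
    return side
```

The greedy choice at each step keeps the conditional expectation of the cut size from decreasing, which is exactly how the probabilistic existence argument is turned into a deterministic procedure.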
In this work we will show some formal connections between these two sorts of questions, using the framework of “total search problems.” This framework will allow us to establish new relations among outstanding complexity-theoretic problems pertaining to Boolean circuit-size lower bounds, time-space tradeoffs for Turing machines, and deterministic constructions of various combinatorial objects.