Reverse Mathematics of Complexity Lower Bounds, Part I

Why is it so hard to prove P != NP, or even to prove super-linear circuit lower bounds? While we often blame a lack of combinatorial ingenuity, the bottleneck might be more fundamental: the logical strength of our mathematical tools.

This series of talks explores the "Reverse Mathematics" of complexity theory. Instead of asking what lower bounds we can prove, we ask: What axioms are *strictly necessary* to prove certain lower bounds? By re-examining established lower bounds through the lens of feasible mathematics, we find that many complexity lower bounds are actually basic combinatorial principles in disguise.

We will focus on two intriguing phenomena:

Lower bounds as basic principles in disguise: We revisit Maass's classical lower bound (STOC'84) showing that one-tape Turing machines require quadratic time to recognize palindromes. We show that this lower bound is logically equivalent to the weak pigeonhole principle over a base theory of feasible reasoning ("PV").
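For readers who have not seen it, the weak pigeonhole principle can be stated as follows. This is one standard (injective) formulation; the talk's equivalence concerns the specific variant formalized over PV, for which the precise definition is in the paper.

```latex
% Weak pigeonhole principle (injective form): a function from a domain
% of size 2n to a range of size n must identify two distinct inputs.
\forall f \colon [2n] \to [n] \;\; \exists x, y \in [2n] \;
  \bigl( x \neq y \,\wedge\, f(x) = f(y) \bigr)
% In bounded arithmetic, wPHP(PV) is the schema asserting this for
% every PV-definable (i.e., polynomial-time) function f.
```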

The minimum theory for hardness: Proving lower bounds for the Resolution proof system requires a specific threshold of logical strength ("T^1_2 + dwPHP(PV)", or "rwPHP(PLS)"). Remarkably, this requirement is intrinsic to the Resolution system itself, independent of which specific hard tautology we are analyzing.
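Roughly speaking, the principles named here are "dual" and "retraction" variants of the weak pigeonhole principle; the sketch below gives the flavor, with exact definitions in the papers.

```latex
% Dual weak pigeonhole principle dwPHP(PV), roughly: no PV-definable
% (polynomial-time) function maps a set of size n onto one of size 2n.
\forall f \colon [n] \to [2n] \;\; \exists y \in [2n] \;
  \forall x \in [n] \;\; f(x) \neq y
% rwPHP(PLS), roughly: a "retraction" version relativized to PLS,
% where given purported maps f: [2n] -> [n] and g: [n] -> [2n]
% one must exhibit an x with g(f(x)) != x.
```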

In this series of two talks, I will introduce the framework of reverse mathematics for complexity lower bounds and present results illustrating the two phenomena above. No prior knowledge of bounded arithmetic is assumed.

These talks are based on the following papers:

Oliver Korten. Derandomization from Time-Space Tradeoffs. CCC'22

Lijie Chen, Jiatu Li, and Igor Oliveira. Reverse Mathematics of Complexity Lower Bounds. FOCS'24

Jiawei Li, Yuhao Li, and Hanlin Ren. Finding Bugs in Short Proofs: The Metamathematics of Resolution Lower Bounds. STOC'26


Institute for Advanced Study