Algorithmic Dense Model Theorems, Decompositions, and Regularity Theorems
Green and Tao used the existence of a dense set indistinguishable from the primes under tests from a certain class to prove the existence of arbitrarily long prime arithmetic progressions. Reingold, Trevisan, Tulsiani and Vadhan, and independently Gowers, gave a quantitatively improved characterization of when such dense models exist. An equivalent formulation was obtained earlier by Barak, Shaltiel and Wigderson.
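As a rough illustration (the precise parameters and the enlarged test class F' below are ours, not taken from this abstract), the dense model theorem of Reingold-Trevisan-Tulsiani-Vadhan and Gowers can be stated informally as follows:

```latex
% Informal statement; the constants and the class F' are illustrative.
\textbf{Dense Model Theorem (informal).}
Let $\mathcal{F}$ be a class of tests $f\colon \{0,1\}^n \to [0,1]$.
Suppose $R$ is $\epsilon$-pseudorandom against a somewhat larger class
$\mathcal{F}'$, i.e.\ $|\mathbb{E}[f(R)] - \mathbb{E}[f(U_n)]| \le \epsilon$
for every $f \in \mathcal{F}'$, and $D$ is $\delta$-dense in $R$, i.e.\
$R = \delta D + (1-\delta) D'$ for some distribution $D'$.
Then there exists a ``model'' $M$ that is $\delta$-dense in $U_n$ and
satisfies $|\mathbb{E}[f(D)] - \mathbb{E}[f(M)]| \le O(\epsilon/\delta)$
for every $f \in \mathcal{F}$.
```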
We show that the existence of dense models can be reduced directly to the improved hard-core distribution results of Holenstein [Hol]. Using Holenstein's uniform proof of an optimal-density hard-core set theorem, we show that the dense models one derives have a canonical form, with models definable in terms of tests from the original class. From this we obtain an abstract decomposition theorem that implies that of Trevisan, Tulsiani and Vadhan.
We give several applications, including generalizations of weak regularity lemmas (Frieze and Kannan; Coja-Oghlan, Cooper and Frieze). For example, we show that any graph G with Δn² edges has a γ-cut-approximator of rank 2^{poly(1/γ, 1/log(1/Δ))}.
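For context, a cut-approximator in the Frieze-Kannan sense can be sketched as follows (this informal definition is ours, supplied for the reader; the abstract does not define the term):

```latex
% Informal definition in the spirit of Frieze--Kannan weak regularity.
\textbf{$\gamma$-cut-approximator (informal).}
For a graph $G$ on vertex set $[n]$ with adjacency matrix $A_G$, a matrix
$M$ is a \emph{$\gamma$-cut-approximator} for $G$ if for all subsets
$S, T \subseteq [n]$,
\[
  \Bigl|\, e_G(S,T) \;-\; \sum_{i \in S,\, j \in T} M_{ij} \,\Bigr|
  \;\le\; \gamma n^2 ,
\]
where $e_G(S,T)$ counts the edges of $G$ between $S$ and $T$.
The weak regularity lemma of Frieze and Kannan provides such an $M$ of
rank $O(1/\gamma^2)$, expressed as a sum of cut matrices; the rank bound
quoted above improves the dependence for sparse graphs.
```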