Hard Optimization Problems have Soft Edges
Finding a Maximum Clique is a classic problem from graph theory: find any one of the largest complete subgraphs in an Erdős–Rényi G(N, p) random graph. It is the simplest of many such problems in which algorithms requiring only a small power of N steps cannot reach solutions that probabilistic arguments show must exist, exposing an inherently "hard" phase within the solution space of the problem. Such "hard" phases are seen in many NP-complete problems in the limit N → ∞. But optimization problems arise and must be solved at finite N. We use this simplest case, MaxClique, to explore the structure of the problem as a function of N and K, the clique size. It displays a complex phase boundary: a staircase of steps, at each of which 2 log_2 N and K_max, the maximum clique size that can be found, increase by 1. Each of these boundaries has finite width, and these widths allow local algorithms to find cliques beyond the limits defined by the study of infinite systems. We explore the performance of a number of extensions of traditional fast local algorithms and find that much of the "hard" space remains accessible at finite N. The "hidden clique" problem embeds a clique somewhat larger than those that occur naturally in a G(N, p) random graph. Since such a clique is unique, we find that local searches which stop early, once evidence of the hidden clique is found, may outperform the best message-passing or spectral algorithms.
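To make the setting concrete, the sketch below (our own illustration, not the authors' code) generates an Erdős–Rényi G(N, p) graph with networkx and grows a clique with a simple randomized greedy rule, the kind of fast local heuristic the abstract refers to, then compares the result with the 2 log_2 N scale that governs the largest naturally occurring cliques at p = 1/2. The function names and parameters are illustrative assumptions.

```python
# Minimal sketch, assuming p = 1/2 and the networkx library; not the paper's code.
import math
import random
import networkx as nx


def greedy_clique(G, rng):
    """Grow a clique by repeatedly adding a random vertex adjacent to all
    current members -- a simple fast local heuristic."""
    clique = []
    candidates = set(G.nodes())
    while candidates:
        v = rng.choice(sorted(candidates))
        clique.append(v)
        # Keep only vertices adjacent to every member of the growing clique.
        candidates = {u for u in candidates if u != v and G.has_edge(u, v)}
    return clique


if __name__ == "__main__":
    N, p = 1000, 0.5
    rng = random.Random(0)
    G = nx.gnp_random_graph(N, p, seed=0)
    found = greedy_clique(G, rng)
    print(f"greedy clique size K = {len(found)}")
    print(f"2*log2(N)            = {2 * math.log2(N):.1f}")
    # A plain greedy search typically stops near log2(N); cliques of size close
    # to 2*log2(N) exist but resist purely local moves -- the "hard" phase.
```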
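The early-stopping idea for the hidden clique can be illustrated in the same spirit. The following hedged sketch (again our own toy version, not the paper's algorithms) plants a clique of size K_hidden in G(N, 1/2), restarts a seeded greedy search from random vertices, and stops as soon as any clique larger than the natural ceiling of roughly 2 log_2 N turns up, since that excess is already evidence of the unique planted clique. The parameters are chosen so this toy search usually succeeds; smaller hidden cliques require the stronger searches discussed in the paper.

```python
# Hedged sketch of a planted ("hidden") clique with an early-stopping local search.
import math
import random
import networkx as nx


def plant_clique(N, p, K_hidden, seed=0):
    """Return a G(N, p) graph with an extra clique forced onto K_hidden vertices."""
    G = nx.gnp_random_graph(N, p, seed=seed)
    hidden = random.Random(seed).sample(sorted(G.nodes()), K_hidden)
    G.add_edges_from((u, v) for i, u in enumerate(hidden) for v in hidden[i + 1:])
    return G, set(hidden)


def seeded_greedy(adj, seed_vertex):
    """Grow a clique from seed_vertex, always adding the candidate with the
    most neighbours inside the current candidate set."""
    clique = [seed_vertex]
    candidates = set(adj[seed_vertex])
    while candidates:
        v = max(candidates, key=lambda u: len(candidates & adj[u]))
        clique.append(v)
        candidates &= adj[v]
    return clique


def early_stop_search(G, ceiling, restarts=50, seed=1):
    """Restart the greedy from random seeds; stop as soon as a clique larger
    than the natural ceiling appears -- evidence of the unique hidden clique."""
    adj = {v: set(G[v]) for v in G}
    rng = random.Random(seed)
    for start in rng.sample(sorted(G.nodes()), restarts):
        clique = seeded_greedy(adj, start)
        if len(clique) > ceiling:  # too large to occur naturally in G(N, 1/2)
            return clique
    return None


if __name__ == "__main__":
    N, K_hidden = 1000, 100
    G, hidden = plant_clique(N, 0.5, K_hidden)
    ceiling = math.ceil(2 * math.log2(N))  # about 20 for N = 1000
    found = early_stop_search(G, ceiling)
    if found:
        print(f"found clique of size {len(found)}, "
              f"{len(set(found) & hidden)} vertices from the planted clique")
    else:
        print("no clique above the natural ceiling found in these restarts")
```

The detection criterion is the point of the example: the search does not try to certify a maximum clique, it only stops once a clique too large to arise naturally has been seen, which is what makes early stopping competitive.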