Learning from satisfying assignments under continuous distributions

07/02/2019
by Clément L. Canonne, et al.

What kinds of functions are learnable from their satisfying assignments? Motivated by this simple question, we extend the framework of De, Diakonikolas, and Servedio [DDS15], which studied the learnability of probability distributions over {0,1}^n defined by the set of satisfying assignments to "low-complexity" Boolean functions, to Boolean-valued functions defined over continuous domains. In our learning scenario there is a known "background distribution" D over R^n (such as a known normal distribution or a known log-concave distribution), and the learner is given i.i.d. samples drawn from a target distribution D_f, where D_f is D restricted to the satisfying assignments of an unknown low-complexity Boolean-valued function f. The goal is to output a hypothesis distribution D' that is close to the target D_f in total variation distance. We give a range of efficient algorithms and hardness results for this problem, focusing on the case when f is a low-degree polynomial threshold function (PTF). When the background distribution D is log-concave, we show that this learning problem is efficiently solvable for degree-1 PTFs (i.e., linear threshold functions) but not for degree-2 PTFs. In contrast, when D is a normal distribution, we show that this learning problem is efficiently solvable for degree-2 PTFs but not for degree-4 PTFs. Our hardness results rely on standard assumptions about secure signature schemes.
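As a concrete illustration of the sampling model (this sketch is not taken from the paper), the Python snippet below simulates the learner's input when the background distribution D is the standard normal over R^n and f is a degree-1 PTF, i.e., a linear threshold function f(x) = 1 iff w.x >= theta. The function name sample_Df and the specific weight vector w are hypothetical choices made for this sketch.

import numpy as np

def sample_Df(w, theta, m, rng):
    # Draw m i.i.d. samples from D_f, where D = N(0, I_n) is the
    # background distribution and f(x) = 1 iff w.x >= theta is a
    # degree-1 PTF (linear threshold function). Rejection sampling:
    # draw x ~ D and keep x only if it is a satisfying assignment of f.
    n = w.shape[0]
    samples = []
    while len(samples) < m:
        x = rng.standard_normal(n)
        if w @ x >= theta:
            samples.append(x)
    return np.asarray(samples)

# Hypothetical target LTF over R^5.
rng = np.random.default_rng(0)
w = np.array([1.0, -0.5, 2.0, 0.0, 0.3])
theta = 0.5
X = sample_Df(w, theta, m=1000, rng=rng)
print(X.shape)  # (1000, 5): i.i.d. draws from D conditioned on f(x) = 1

The learner sees only the samples X and must output a hypothesis distribution D' with small total variation distance to D_f; the rejection sampling above merely simulates the data source and is not the paper's learning algorithm.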
