Learning from satisfying assignments under continuous distributions.
C. Canonne, A. De, and R. Servedio.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2020, pp. 82-101.


Abstract:

What kinds of functions are learnable from their satisfying assignments? Motivated by this simple question, we extend the framework of De, Diakonikolas and Servedio (SODA 2015), which studied the learnability of probability distributions over $\{0,1\}^n$ defined by the set of satisfying assignments to ``low-complexity'' Boolean functions, to Boolean-valued functions defined over \emph{continuous} domains. In our learning scenario there is a known ``background distribution'' $\mathcal{D}$ over $\mathbb{R}^n$ (such as a known normal distribution or a known log-concave distribution) and the learner is given i.i.d. samples drawn from a target distribution $\mathcal{D}_f$, where $\mathcal{D}_f$ is $\mathcal{D}$ restricted to the satisfying assignments of an \emph{unknown} low-complexity Boolean-valued function $f$. The problem is to learn an approximation $\mathcal{D}'$ of the target distribution $\mathcal{D}_f$ which has small error as measured in total variation distance. We give a range of efficient algorithms and hardness results for this problem, focusing on the case when $f$ is a low-degree \emph{polynomial threshold function} (PTF). When the background distribution $\mathcal{D}$ is log-concave, we show that this learning problem is efficiently solvable for degree-1 PTFs (i.e., linear threshold functions) but not for degree-2 PTFs. In contrast, when $\mathcal{D}$ is a normal distribution, we show that this learning problem is efficiently solvable for degree-2 PTFs but not for degree-4 PTFs. Our hardness results rely on standard assumptions about secure signature schemes.
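To make the sampling model concrete, here is a minimal Python sketch of how samples from $\mathcal{D}_f$ arise: draw from the background distribution $\mathcal{D}$ and keep only the satisfying assignments of $f$ (rejection sampling). The helper name sample_from_Df and the specific degree-2 PTF are illustrative assumptions, not the paper's construction; in the learning scenario itself the algorithm sees only the resulting samples, never $f$.

    import numpy as np

    def sample_from_Df(sample_background, f, n_samples, max_tries=1_000_000):
        """Draw i.i.d. samples from D_f: the background distribution D
        conditioned on the satisfying assignments of f (rejection sampling).
        Illustrative helper, not the paper's algorithm."""
        samples = []
        for _ in range(max_tries):
            x = sample_background()   # one draw from D over R^n
            if f(x):                  # keep only satisfying assignments of f
                samples.append(x)
                if len(samples) == n_samples:
                    return np.array(samples)
        raise RuntimeError("f appears to have very small mass under D")

    # Hypothetical instance: D = standard normal on R^3, and f a degree-2 PTF
    # with f(x) = 1 iff x_1^2 + x_2*x_3 - 1 > 0.
    rng = np.random.default_rng(0)
    background = lambda: rng.standard_normal(3)
    ptf = lambda x: x[0] ** 2 + x[1] * x[2] - 1.0 > 0

    data = sample_from_Df(background, ptf, n_samples=100)   # shape (100, 3)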

PDF of full version

