Weak learning convex sets under normal distributions.
A. De and R. Servedio.
34th Annual Conference on Learning Theory (COLT), pp. 1399-1428, 2021.


Abstract:

This paper addresses the following natural question: can efficient algorithms weakly learn convex sets under normal distributions? Strong learnability of convex sets under normal distributions is well understood, with near-matching upper and lower bounds given by Klivans et al. (2008), but prior to the current work essentially nothing was known about weak learning. We essentially answer this question, giving near-matching algorithms and lower bounds.

For our positive result, we give a poly(n)-time algorithm that can weakly learn the class of convex sets to advantage $\Omega(1/\sqrt{n})$ using only random examples drawn from the background Gaussian distribution. Our algorithm and analysis are based on a new "density increment" result for convex sets, which we prove using tools from isoperimetry.
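To make the weak-learning setup concrete, here is a hedged toy sketch (not the paper's density-increment algorithm) in which a learner sees Gaussian random examples labeled by an unknown convex set, estimates the mean label and the degree-1 Chow parameters $E[y \cdot x_i]$, and outputs whichever of the constant hypothesis or the corresponding linear threshold looks stronger; the target set used below (a halfspace) is a hypothetical choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_weak_learner(examples, labels):
    """Illustrative weak learner from Gaussian random examples.

    Estimates the mean label E[y] and the degree-1 Chow parameters
    E[y * x_i], then returns either a constant hypothesis or a single
    linear threshold.  This is only a sketch of the learning model;
    it is NOT the density-increment algorithm of De and Servedio.
    """
    mean_label = labels.mean()
    chow = (labels[:, None] * examples).mean(axis=0)  # estimates E[y * x]
    if abs(mean_label) >= np.linalg.norm(chow):
        # The constant (majority-vote) hypothesis already has advantage.
        const = 1.0 if mean_label >= 0 else -1.0
        return lambda x: const
    w = chow / np.linalg.norm(chow)
    return lambda x: 1.0 if w @ x >= 0 else -1.0

# Unknown convex set (hypothetical example): the halfspace {x : x_1 <= 0.5}.
n, m = 10, 5000
X = rng.standard_normal((m, n))
y = np.where(X[:, 0] <= 0.5, 1.0, -1.0)

h = toy_weak_learner(X, y)

# Empirical advantage over random guessing on fresh Gaussian examples.
X_test = rng.standard_normal((m, n))
y_test = np.where(X_test[:, 0] <= 0.5, 1.0, -1.0)
preds = np.array([h(x) for x in X_test])
advantage = (preds == y_test).mean() - 0.5
```

On this particular target the linear threshold recovers a noticeable constant advantage; for general convex sets the paper shows that advantage $\Omega(1/\sqrt{n})$ is achievable and, per the lower bound below, near-optimal.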

We also give an information-theoretic lower bound showing that $O(\log(n)/\sqrt{n})$ advantage is best possible even for algorithms that are allowed to make poly(n) many membership queries.

