Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions.
P. Long and R. Servedio.
Advances in Neural Information Processing Systems (NIPS), 2006.


Abstract:

We consider the well-studied problem of learning decision lists using few examples when many irrelevant features are present. We show that smooth boosting algorithms such as MadaBoost can efficiently learn decision lists of length $k$ over $n$ boolean variables using $\mathrm{poly}(k, \log n)$ examples, provided that the marginal distribution over the relevant variables is "not too concentrated" in an $L_2$-norm sense. Using a recent result of Håstad, we extend the analysis to obtain a similar (though quantitatively weaker) result for learning arbitrary linear threshold functions with $k$ nonzero coefficients. Experimental results indicate that the use of a smooth boosting algorithm, which plays a crucial role in our analysis, has a real impact on the practical performance of the algorithm.
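To make the objects in the abstract concrete, here is a minimal, hypothetical Python sketch (not code from the paper): it evaluates a length-$k$ decision list over $n$ boolean variables, and shows the capped "smooth" reweighting commonly attributed to MadaBoost, under which no example's weight can exceed its initial value, so the boosted distribution never becomes too concentrated. All function and variable names are illustrative assumptions.

import math
import random

def eval_decision_list(rules, default, x):
    """rules: list of (var_index, literal_value, output) triples.
    Returns the output of the first rule whose literal fires, else default."""
    for i, v, out in rules:
        if x[i] == v:
            return out
    return default

def smooth_weights(margins):
    """Capped 'smooth' weighting, w_i = min(1, exp(-margin_i)).
    Unlike AdaBoost's uncapped exp(-margin_i), no weight exceeds its
    initial value of 1, keeping the reweighted distribution smooth."""
    return [min(1.0, math.exp(-m)) for m in margins]

if __name__ == "__main__":
    n = 20
    # A length-3 decision list: "if x4 = 1 output 1; elif x9 = 0 output 0;
    # elif x0 = 1 output 1; else output 0".
    rules = [(4, 1, 1), (9, 0, 0), (0, 1, 1)]
    x = [random.randint(0, 1) for _ in range(n)]
    print(eval_decision_list(rules, default=0, x=x))
    print(smooth_weights([-0.5, 0.0, 1.2]))  # each weight capped at 1.0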

Postscript or PDF.
