Computational Sample Complexity and Attribute-Efficient Learning. R. Servedio.
Journal of Computer and System Sciences 60(1), 2000, pp. 161-178.
Preliminary version in Proceedings of the 31st Annual ACM Symposium on Theory of Computing (STOC), 1999, pp. 701-710.


Abstract:

Two fundamental measures of the efficiency of a learning algorithm are its running time and the number of examples it requires (its sample complexity). The importance of polynomial time has long been acknowledged in learning theory, while recent work on attribute-efficiency has focused attention on algorithms that can learn from few examples. In this paper we demonstrate that even for simple concept classes, an inherent tradeoff can exist between running time and sample complexity. In our first construction, we present a concept class of $1$-decision lists and prove that while a computationally unbounded learner can learn the class from $O(1)$ examples, under a standard cryptographic assumption any polynomial-time learner requires almost $\Theta(n)$ examples. Using a different construction, we present a concept class of $k$-decision lists which exhibits a similar but stronger gap in sample complexity. These results strengthen the results of Decatur, Goldreich, and Ron on distribution-free computational sample complexity and come within a logarithmic factor of the largest possible gap for concept classes of $k$-decision lists. Finally, we construct a concept class of decision lists which can be learned attribute-efficiently and can be learned in polynomial time, but cannot be learned attribute-efficiently in polynomial time. This is the first result showing that attribute-efficient learning can be computationally hard. The main tools we use are one-way permutations, error-correcting codes, and pseudorandom generators.
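For readers unfamiliar with the concept class at the center of these constructions, below is a minimal Python sketch of how a $k$-decision list computes its output. It is illustrative only and not taken from the paper; the rule representation (lists of (index, value) literal pairs and an explicit default bit) is an assumption made for the example.

```python
# Illustrative sketch (not from the paper): a k-decision list over n Boolean
# variables is an ordered list of rules (C_i, b_i), where each C_i is a
# conjunction of at most k literals and b_i is an output bit. The first rule
# whose conjunction is satisfied by the input determines the output; if no
# rule fires, a default bit is output. A 1-decision list is the special case
# in which every conjunction consists of a single literal.

def evaluate_decision_list(rules, default, x):
    """rules: list of (literals, output) pairs, where literals is a list of
    (index, value) pairs requiring x[index] == value; default: output bit
    when no rule fires; x: the Boolean input as a list of 0/1 values."""
    for literals, output in rules:
        if all(x[i] == v for i, v in literals):
            return output
    return default

# Example 1-decision list: "if x2 = 1 output 1; else if x0 = 0 output 1; else 0"
rules = [([(2, 1)], 1), ([(0, 0)], 1)]
print(evaluate_decision_list(rules, 0, [1, 0, 1]))  # -> 1 (first rule fires)
print(evaluate_decision_list(rules, 0, [1, 0, 0]))  # -> 0 (default)
```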


PDF of conference version

PostScript or PDF of journal version