Learning a Classifier when the Labeling Is Known

Abstract

We introduce a new model of learning, Known-Labeling-Classifier-Learning (KLCL). The goal of such learning is to find a low-error classifier from a given target class of predictors when the correct labeling is known to the learner. This learning problem can be viewed as measuring the information conveyed by the identity of the input examples, rather than by their labels.

Given some class of predictors H, a labeling function, and an i.i.d. unlabeled sample generated by some unknown data distribution, the goal of our learner is to find a classifier in H whose error with respect to the sample-generating distribution and the given labeling function is as low as possible. When the labeling function does not belong to the target class, the error of the members of the class (and thus their relative quality as label predictors) varies with the marginal of the underlying data distribution.
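One way to summarize this objective (the notation here is ours, not fixed by the abstract itself): writing D for the unknown data distribution and f for the given labeling function, the error of a predictor h is

Err_{D,f}(h) = Pr_{x ∼ D}[ h(x) ≠ f(x) ],

and the learner, given f and an i.i.d. unlabeled sample drawn from D, should output some h ∈ H satisfying Err_{D,f}(h) ≤ min_{h' ∈ H} Err_{D,f}(h') + ε.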

We prove a trichotomy with respect to the KLCL sample complexity. Namely, we show that for any learnable concept class H, its KLCL sample complexity is either 0, Θ(1/ε), or Ω(1/ε^2). Furthermore, we give a simple combinatorial property of concept classes that characterizes this trichotomy.
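To make the task whose sample complexity is being characterized concrete, here is a minimal sketch in Python of an empirical-risk-minimizing KLCL learner; the example class, the labeling, and all names are our own illustrative choices, not the paper's construction. The learner receives only unlabeled points, but can label them itself because the labeling function is known; which hypothesis it should prefer depends on the marginal distribution of the sample, as noted above.

    import numpy as np

    def klcl_erm(hypotheses, labeling, unlabeled_sample):
        # Illustrative ERM learner for the KLCL setting (not the paper's algorithm):
        # since the labeling is known, the learner labels the sample itself and
        # returns the hypothesis in the class with the lowest sample disagreement.
        def empirical_error(h):
            return np.mean([h(x) != labeling(x) for x in unlabeled_sample])
        return min(hypotheses, key=empirical_error)

    # Hypothetical example: threshold predictors on [0, 1]; the known labeling
    # (an interval) lies outside the class, so which threshold is best depends
    # on the marginal distribution generating the unlabeled sample.
    rng = np.random.default_rng(0)
    sample = rng.uniform(0.0, 1.0, size=1000)              # i.i.d. unlabeled sample
    labeling = lambda x: int(0.3 <= x <= 0.7)               # known labeling function
    hypotheses = [lambda x, t=t: int(x >= t) for t in np.linspace(0.0, 1.0, 101)]
    best = klcl_erm(hypotheses, labeling, sample)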

Our results imply a new sample-size lower bound of Ω(1/ε^2) for learning deterministic classifiers in the common agnostic PAC model, as well as novel results about the utility of unlabeled examples in a semi-supervised learning setup.

Publication
Proceedings of the 22nd International Conference on Algorithmic Learning Theory (ALT)