PhD Defence • Machine Learning — A PAC-Theory of Clustering with Advice

Thursday, May 10, 2018 — 11:30 AM EDT

Mohammad Zokaei Ashtiani, PhD candidate
David R. Cheriton School of Computer Science

In the absence of domain knowledge, clustering is usually an under-specified task. For any clustering application, one can choose among a variety of clustering algorithms, along with different preprocessing techniques, that are likely to produce dramatically different answers. Any of these solutions may be acceptable depending on the application, so it is critical to incorporate prior knowledge about the data and the intended semantics of clustering into the process of clustering model selection. One scenario that we study is when the user (i.e., the domain expert) provides a clustering of a (relatively small) random subset of the data set. The clustering algorithm then uses this kind of "advice" to come up with a data representation under which an application of a fixed clustering algorithm (e.g., k-means) results in a partition of the full data set that is aligned with the user's knowledge. We analyze the "advice complexity" of learning a representation in this paradigm. Another form of "advice" can be obtained by allowing the clustering algorithm to interact with a domain expert by asking same-cluster queries: "Do these two instances belong to the same cluster?" The goal of the clustering algorithm is then to find a partition of the data set that is consistent with the domain expert's knowledge, while using only a small number of queries. Aside from studying the "advice complexity" (i.e., query complexity) of learning in this model, we investigate the trade-offs between the computational and advice complexities of learning, showing that a little bit of advice can turn an otherwise computationally hard clustering problem into a tractable one.
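The same-cluster query model described above can be illustrated with a toy sketch. This is not the algorithm analyzed in the thesis (which achieves far better query bounds); it is a naive baseline, with hypothetical names (`cluster_with_queries`, `same_cluster`), that simply shows how an oracle answering "do i and j share a cluster?" lets an algorithm recover a partition without any geometry:

```python
def cluster_with_queries(points, same_cluster):
    """Recover a partition using a same-cluster oracle.

    `same_cluster(i, j)` answers whether points i and j belong to the
    same cluster. This naive scheme asks at most one query per existing
    cluster for each point, i.e., O(nk) queries overall -- only a
    baseline for the query model, not the thesis's algorithm.
    """
    representatives = []  # one index per discovered cluster
    labels = {}
    for i in range(len(points)):
        for c, rep in enumerate(representatives):
            if same_cluster(i, rep):
                labels[i] = c
                break
        else:
            # No existing cluster matched: open a new one.
            labels[i] = len(representatives)
            representatives.append(i)
    return labels

# Toy oracle backed by ground-truth labels the "domain expert" knows.
truth = [0, 0, 1, 2, 1, 0]
oracle = lambda i, j: truth[i] == truth[j]
print(cluster_with_queries(range(6), oracle))
# → {0: 0, 1: 0, 2: 1, 3: 2, 4: 1, 5: 0}
```

The interesting question, studied in the thesis, is how far below O(nk) the query count can be driven while keeping the computation efficient.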

In the second part of this dissertation, we study the problem of learning mixture models: given an i.i.d. sample generated from an unknown target in a family of mixture distributions, we want to output a distribution that is close to the target in total variation distance. In particular, given a sample-efficient learner for a base class of distributions (e.g., Gaussians), we show how to construct a sample-efficient method for learning mixtures of the base class (e.g., mixtures of k Gaussians). As a byproduct of this analysis, we prove tighter sample complexity bounds for learning various mixture models. We also investigate how access to same-cluster queries (i.e., whether two instances were generated from the same mixture component) can reduce the computational burden of learning within this model. Finally, we take a further step and introduce a novel method for distribution learning via a form of compression. In particular, we ask whether one can compress a large-enough sample set generated from a target distribution (by picking only a few instances from it) in a way that allows recovery of (an approximation to) the target distribution. We prove that if this is possible for all members of a class of distributions, then there is a sample-efficient way of learning distributions from this class. As an application of this novel notion, we settle the sample complexity of learning mixtures of k axis-aligned Gaussian distributions (within logarithmic factors).
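The compression idea can be made concrete with a minimal sketch for a simple class, not one treated this way in the thesis: uniform distributions on an interval. Here a large sample "compresses" to just two of its points (the min and max), and a decoder reconstructs an approximation to the target from them. The names `compress` and `decompress` are hypothetical illustrations of the encoder/decoder roles:

```python
import random

def compress(sample):
    """Keep only two instances from the sample: its min and max."""
    return (min(sample), max(sample))

def decompress(msg):
    """Decode the two retained points into a hypothesis distribution."""
    a, b = msg
    return ("uniform", a, b)

# Target: Uniform(3, 7). A large i.i.d. sample pins down the endpoints.
sample = [random.uniform(3.0, 7.0) for _ in range(10_000)]
kind, a, b = decompress(compress(sample))
# With high probability a ≈ 3 and b ≈ 7, so the decoded uniform
# distribution is close to the target in total variation distance.
```

The thesis's result is the general version of this picture: whenever every member of a class admits such a small-compression scheme, the class as a whole is learnable with few samples.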

Location
DC (William G. Davis Computer Research Centre), Room 2314
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
