Master’s Thesis Presentation • Artificial Intelligence — Exploring New Forms of Random Projections for Prediction and Dimensionality Reduction in Big-Data Regimes

Thursday, April 26, 2018 10:00 AM EDT

Amir-Hossein Karimi, Master’s candidate
David R. Cheriton School of Computer Science

The story of this work is dimensionality reduction. Dimensionality reduction is a method that takes as input a point-set P of n points in R^d, where d is typically large, and attempts to find a lower-dimensional representation of that dataset in order to ease the burden of processing for downstream algorithms. In today’s landscape of machine learning, researchers and practitioners work with datasets that have a very large number of samples, high-dimensional samples, or both. Therefore, dimensionality reduction is applied as a pre-processing technique primarily to overcome the curse of dimensionality.

Generally, dimensionality reduction improves the time and storage space required for processing the point-set, removes multi-collinearity and redundancies in the dataset in which different features may depend on one another, and may enable simple visualizations of the dataset in 2-D and 3-D, making the relationships in the data easy for humans to comprehend. Dimensionality reduction methods come in many shapes and sizes. Methods such as Principal Component Analysis (PCA), Multi-Dimensional Scaling, Isomap, and Locally Linear Embedding are amongst the most commonly used methods of this family of algorithms. However, the choice of dimensionality reduction method proves critical in many applications, as there is no one-size-fits-all solution, and special care must be taken for different datasets and tasks.

Furthermore, the aforementioned popular methods are data-dependent and commonly rely on computing either the kernel (Gram) matrix or the covariance matrix of the dataset. These matrices scale with the number of samples and the number of data dimensions, respectively, and are consequently poor choices in today’s landscape of big-data applications. It is therefore pertinent to develop new dimensionality reduction methods that can be applied efficiently to large and high-dimensional datasets, by either reducing the dependency on the data or side-stepping it altogether. Moreover, such new dimensionality reduction methods should perform on par with, or better than, traditional methods such as PCA. To achieve this goal, we turn to a simple and powerful method called random projections.
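
To make the bottleneck concrete, here is a minimal PCA sketch (an illustration only; the function name and shapes are assumptions, not code from the thesis) showing where the d-by-d covariance matrix appears, which is the data-dependent object that grows quadratically with the data dimension.

import numpy as np

def pca_project(X, k):
    """Project an n x d data matrix X onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)
    cov = (X_centered.T @ X_centered) / (X.shape[0] - 1)  # d x d covariance: the costly, data-dependent step
    _, eigvecs = np.linalg.eigh(cov)                       # eigenvectors, eigenvalues in ascending order
    top_k = eigvecs[:, -k:][:, ::-1]                       # top-k principal directions
    return X_centered @ top_k                              # n x k embedding

X = np.random.randn(1000, 500)  # n = 1000 samples, d = 500 dimensions
Z = pca_project(X, k=20)
print(Z.shape)                  # (1000, 20)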

Random projections are a simple, efficient, and data-independent method for stably embedding a point-set P of n points in R^d into R^k, where d is typically large and k is on the order of log n. Random projections have a long history of successful use in the dimensionality reduction literature. In this work, we build on ideas from random projection theory and extend the framework into a powerful new setup of random projections for large, high-dimensional datasets, with performance comparable to state-of-the-art data-dependent and nonlinear methods. Furthermore, we study the use of random projections in domains other than dimensionality reduction, including prediction, and show the competitive performance of such methods in small-data regimes.
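
As a rough illustration of the data-independent embedding described above (a sketch under the standard Johnson-Lindenstrauss setup, not the construction proposed in the thesis), a dense Gaussian matrix maps n points from R^d to R^k with k on the order of log n while approximately preserving pairwise distances:

import numpy as np

rng = np.random.default_rng(0)

n, d = 1000, 10_000
eps = 0.3                                    # allowed pairwise distortion
k = int(np.ceil(4 * np.log(n) / eps ** 2))   # JL-style target dimension, on the order of log n

X = rng.standard_normal((n, d))              # point-set P: n points in R^d
R = rng.standard_normal((d, k)) / np.sqrt(k) # random projection matrix, independent of the data
Y = X @ R                                    # embedded points in R^k

orig = np.linalg.norm(X[0] - X[1])           # distance between two points before projection
proj = np.linalg.norm(Y[0] - Y[1])           # and after projection
print(k, proj / orig)                        # ratio is close to 1 with high probability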

Location
DC - William G. Davis Computer Research Centre
Room 3126
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
