Master’s Thesis Presentation • Machine Learning • Learn Privacy-friendly Global Gaussian Processes in Federated Learning

Tuesday, August 2, 2022 — 10:00 AM to 11:00 AM EDT

Please note: This master’s thesis presentation will take place online.

Haolin Yu, Master’s candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Pascal Poupart

In the era of big data, Federated Learning (FL) has drawn great attention because it operates naturally on distributed computational resources without the need for data warehousing. Like Distributed Learning (DL), FL delegates most computation to end devices, but it places greater emphasis on preserving client privacy: an FL algorithm should transmit neither raw client data nor any information about that data that could leak privacy. As a result, in the typical scenarios where FL applies, individual clients often hold or can obtain too little training data to produce an accurate model on their own. To decide whether a prediction is trustworthy, models that provide not only point estimates but also some notion of confidence are beneficial. The Gaussian Process (GP) is a powerful Bayesian model whose variance estimates are naturally well calibrated. However, learning a stand-alone global GP is challenging, since merging local kernels leads to privacy leakage. To preserve privacy, previous work on federated GPs avoids learning a global model, focusing instead on the personalized setting or on learning an ensemble of local models.
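To make the "well-calibrated variance" point concrete, here is a minimal sketch of exact GP regression with an RBF kernel, written in plain NumPy. It is not code from the thesis; the function names (`rbf_kernel`, `gp_predict`) and the hyperparameter values are illustrative assumptions. The key behaviour is that the predictive variance stays small near the training data and grows back toward the prior far from it, which is exactly the notion of confidence the abstract refers to.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential (RBF) kernel between the rows of A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, lengthscale=1.0):
    """Exact GP regression: predictive mean and variance at X_test."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, lengthscale)
    Kss_diag = np.ones(len(X_test))  # RBF prior variance k(x, x) = 1
    # Cholesky-based solve for numerical stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = Kss_diag - np.sum(v**2, axis=0)
    return mean, var

# Two training points; query once near the data and once far away.
X = np.array([[0.0], [1.0]])
y = np.array([0.0, 1.0])
mean, var = gp_predict(X, y, np.array([[0.5], [10.0]]))
```

Near the data (x = 0.5) the variance is small; far away (x = 10) it reverts to the prior variance of 1, flagging that prediction as untrustworthy.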

In this work, we present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable, stand-alone global federated GP while respecting clients' privacy. We incorporate deep kernel learning and random features for scalability by defining a unifying random kernel, and we show that this random kernel can recover any stationary kernel as well as many non-stationary kernels. We then derive a principled approach to learning a global predictive model as if all client data were centralized. We also learn global kernels with knowledge distillation methods for non-identically and independently distributed (non-i.i.d.) clients. We design synthetic experiments to illustrate scenarios where our model has a clear advantage and provide insights into the underlying rationale. Experiments on real-world regression datasets show statistically significant improvements over other federated GP models.
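The random-features idea underlying the unifying random kernel can be illustrated with standard random Fourier features, which approximate a stationary kernel by an explicit finite-dimensional feature map. This is a generic sketch of that building block, not the thesis's actual kernel construction; the function name and feature count are assumptions. Because each client can compute features of its own inputs locally, inner products of features stand in for kernel evaluations without ever merging raw local kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=2000, lengthscale=1.0, rng=rng):
    """Feature map phi such that phi(x) . phi(y) ~= RBF kernel k(x, y)."""
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density (Gaussian for RBF)
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = np.array([[0.3], [0.8]])
Phi = random_fourier_features(x)
approx = Phi @ Phi.T                       # finite-feature approximation
exact = np.exp(-0.5 * (x - x.T) ** 2)      # exact RBF Gram matrix
```

With enough features the Monte Carlo error shrinks as O(1/sqrt(n_features)), so `approx` matches the exact Gram matrix closely while exposing only the explicit features.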


To join this master’s thesis presentation on Zoom, please go to https://vectorinstitute.zoom.us/j/81490011072?pwd=bUlvNzdLS3lNVGFUMEsrSlA3czhhZz09.

Location
Online master’s thesis presentation
200 University Avenue West
Waterloo, ON N2L 3G1
Canada