Master’s Thesis Presentation • Machine Learning • Learn Privacy-friendly Global Gaussian Processes in Federated Learning

Tuesday, August 2, 2022 — 10:00 AM to 11:00 AM EDT

Please note: This master’s thesis presentation will take place online.

Haolin Yu, Master’s candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Pascal Poupart

In the era of big data, Federated Learning (FL) has drawn great attention because it operates naturally on distributed computational resources without the need for data warehousing. Like Distributed Learning (DL), FL distributes most computational tasks to end devices, but it places greater emphasis on preserving client privacy: an FL algorithm should transmit neither raw client data nor information derived from it that could leak privacy. As a result, in the typical scenarios where the FL framework applies, individual clients often have too little training data to produce an accurate model on their own. To decide whether a prediction is trustworthy, it helps to have models that provide not only point estimates but also some notion of confidence. The Gaussian Process (GP) is a powerful Bayesian model that comes with naturally well-calibrated variance estimates. However, learning a stand-alone global GP is challenging because merging local kernels leads to privacy leakage. To preserve privacy, previous work on federated GPs avoids learning a global model, focusing instead on the personalized setting or on ensembles of local models.
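
As a rough illustration of the variance estimates mentioned above, the following minimal sketch (plain NumPy, not the thesis code; the RBF kernel and its hyperparameters are assumed purely for illustration) computes the exact GP posterior mean and variance, with the variance widening away from the training inputs:

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # k(a, b) = s^2 * exp(-||a - b||^2 / (2 * l^2))
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    # Exact GP posterior mean and variance at the test inputs.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)                        # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha                             # posterior mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_test, X_test)) - np.sum(v**2, axis=0)
    return mean, var                                 # variance grows away from the data

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 20)[:, None]
y = np.sin(X).ravel() + 0.1 * rng.normal(size=20)
X_new = np.linspace(-5, 5, 100)[:, None]
mu, sigma2 = gp_posterior(X, y, X_new)

It is exactly this predictive variance, returning to the prior far from the observed data, that makes GP confidence useful for judging whether a prediction is trustworthy.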

In this work, we present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable, stand-alone global federated GP while respecting clients' privacy. We incorporate deep kernel learning and random features for scalability by defining a unifying random kernel, and we show that this random kernel can recover any stationary kernel as well as many non-stationary kernels. We then derive a principled approach to learning a global predictive model as if all client data were centralized. We also learn global kernels with knowledge distillation methods for non-identically and independently distributed (non-i.i.d.) clients. We design synthetic experiments to illustrate scenarios where our model has a clear advantage and to provide insight into why. Experiments on real-world regression datasets show statistically significant improvements over other federated GP models.
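
The "random features" mentioned in the abstract are commonly instantiated as random Fourier features (Rahimi and Recht, 2007), which recover stationary kernels via Bochner's theorem; the thesis defines a more general unifying random kernel. The sketch below only illustrates the basic construction for the RBF kernel, with all names and parameters chosen for illustration rather than taken from FedBNR:

import numpy as np

def random_fourier_features(X, n_features=500, lengthscale=1.0, seed=0):
    # Maps X to Phi so that Phi(x) . Phi(y) ~= exp(-||x - y||^2 / (2 * l^2)).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)            # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# The inner product of the features converges to the exact RBF Gram
# matrix as n_features grows (a Monte Carlo estimate of the kernel).
X = np.random.default_rng(1).normal(size=(5, 3))
Phi = random_fourier_features(X, n_features=20000)
approx_gram = Phi @ Phi.T

Because the kernel is replaced by a finite feature map, training reduces to Bayesian linear regression on the features, which is what makes a scalable global model practical in the federated setting.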

To join this master’s thesis presentation on Zoom, please go to https://vectorinstitute.zoom.us/j/81490011072?pwd=bUlvNzdLS3lNVGFUMEsrSlA3czhhZz09.

Location
Online master’s thesis presentation
200 University Avenue West
Waterloo, ON N2L 3G1
Canada