PhD Seminar • Artificial Intelligence | Computer Vision — Addressing Labels Shortage: Segmentation with 3% Supervision

Friday, December 6, 2019 11:00 AM EST

Dmitrii Marin, PhD candidate
David R. Cheriton School of Computer Science

Deep learning models generalize poorly to new datasets and notoriously require large amounts of labeled data for training. The latter problem is exacerbated by the need to ensure that trained models are accurate across a wide variety of image scenes. This diversity comes from the combinatorial nature of real-world scenes, occlusions, variations in lighting, acquisition methods, etc. Many rare images have little chance of being included in a dataset, yet they are still very important, as they often represent situations where a recognition mistake has a high cost.

This motivates the need to acquire ever larger labeled datasets. While in some classic computer vision problems obtaining labels is relatively cheap, e.g. in classification, other problems may require many hours of meticulous human labor. One such demanding problem is semantic segmentation: it requires a class assignment for each image pixel, and a single image may contain millions of pixels. In the setting known as weak supervision, instead of labeling every pixel we label just a few of them. Interestingly, this machine learning setting resembles low-level interactive segmentation, where extensively developed methods aim to turn such weak supervision into a full labeling. While it is possible to use the output of these methods as ground truth for training deep models, a better approach is to incorporate the corresponding low-level objectives into semantic segmentation losses. The resulting losses are often referred to as regularized losses.
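To make the idea concrete, below is a minimal sketch in PyTorch of one possible regularized loss: a partial cross-entropy term evaluated only on the sparsely labeled pixels, plus a relaxed Potts smoothness term on the predicted probabilities. The function names, the `IGNORE` convention, and the uniform 4-connected grid regularizer are illustrative assumptions, not the exact formulation presented in the talk.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # label value marking unlabeled pixels (assumed convention)

def partial_cross_entropy(logits, scribbles):
    """Cross-entropy evaluated only on the few labeled pixels."""
    return F.cross_entropy(logits, scribbles, ignore_index=IGNORE)

def potts_regularizer(logits):
    """Relaxed Potts term: penalize differences in predicted class
    probabilities between 4-connected neighboring pixels."""
    p = F.softmax(logits, dim=1)                     # (B, K, H, W)
    dh = (p[:, :, 1:, :] - p[:, :, :-1, :]).abs().sum()
    dw = (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().sum()
    return (dh + dw) / p.numel()

def regularized_loss(logits, scribbles, lam=0.5):
    """Weakly supervised segmentation loss: data term + regularizer."""
    return partial_cross_entropy(logits, scribbles) + lam * potts_regularizer(logits)

# Toy usage: batch of 2 images, 3 classes, 64x64, ~3% of pixels labeled.
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
scribbles = torch.full((2, 64, 64), IGNORE, dtype=torch.long)
mask = torch.rand(2, 64, 64) < 0.03
scribbles[mask] = torch.randint(0, 3, (int(mask.sum()),))
loss = regularized_loss(logits, scribbles)
loss.backward()
```

In practice the pairwise weights are typically contrast-sensitive (e.g., dense Gaussian kernels as in dense CRF relaxations) rather than uniform as in this toy grid version, so that the regularizer encourages segment boundaries to align with image edges.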

We continue the line of work on regularized losses for segmentation and explore different regularizers, their properties, and the corresponding optimization challenges. For example, many efficient shallow optimization solvers are not directly applicable to deep learning training. We explore methods that allow the simultaneous use of efficient shallow solvers and standard gradient-based optimization in deep learning.
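One way to read "simultaneous use" is a splitting scheme in the spirit of ADM: alternate between a shallow solver that produces a full discrete labeling consistent with the regularizer, and standard gradient steps that pull the network toward both the scribbles and that proposal. The sketch below is schematic, assuming PyTorch; `shallow_solve` is a placeholder for an efficient combinatorial solver (e.g., graph cut), here replaced by plain argmax purely for runnability.

```python
import torch
import torch.nn.functional as F

IGNORE = 255  # label value marking unlabeled pixels (assumed convention)

def shallow_solve(logits):
    """Placeholder for an efficient shallow solver (e.g., graph cut or
    alpha-expansion on the regularized objective). Plain argmax here,
    i.e., no actual regularization; purely illustrative."""
    return logits.argmax(dim=1)

def adm_step(model, images, scribbles, optimizer, mu=0.1):
    """One splitting step: (1) the shallow solver proposes a full
    labeling; (2) a gradient step fits the network to the scribbles
    and to the proposal."""
    logits = model(images)
    with torch.no_grad():
        proposal = shallow_solve(logits)             # discrete latent targets
        labeled = scribbles != IGNORE
        proposal[labeled] = scribbles[labeled]       # keep given labels fixed
    loss = F.cross_entropy(logits, scribbles, ignore_index=IGNORE) \
         + mu * F.cross_entropy(logits, proposal)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of such a splitting is that the non-smooth, combinatorial part of the objective is handled by the solver designed for it, while backpropagation only ever sees smooth cross-entropy terms.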

Location
DC - William G. Davis Computer Research Centre
Room 3102
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
