Master’s Thesis Presentation • Machine Learning — Improving Object Detection with MatrixNets

Tuesday, September 8, 2020 1:00 PM EDT

Please note: This master’s thesis presentation will be given online.

Rishav Agarwal, Master’s candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Lukasz Golab

Object detection is a popular task in computer vision with applications ranging from pedestrian detection to face detection. Following the success of Convolutional Neural Networks (CNNs), many CNN-based object detectors have been proposed. Early CNN-based detectors relied on deeper networks to detect objects in images. However, deeper networks alone cannot capture objects of varied sizes and aspect ratios with high accuracy. Thus, CNN-based detectors face two main challenges: scale invariance (detecting objects at multiple scales) and aspect-ratio invariance (detecting objects at various aspect ratios).

Modern CNN-based object detectors have two main components: a backbone network that learns features from an image and an output network that leverages these features to make predictions. Scale and aspect-ratio invariance are typically added by modifying either the backbone or the output network. Adding scale awareness to the output network is often computationally expensive, so a popular backbone-side method for scale invariance is Feature Pyramid Networks (FPNs). FPNs create a hierarchy of features at different scales and thereby implicitly capture objects at various resolutions.
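The FPN idea can be illustrated with the standard level-assignment heuristic from the original FPN paper: each object is routed to a pyramid level according to its scale, so larger objects are handled by coarser feature maps. The sketch below is purely illustrative (the function name and default values are hypothetical, following the FPN paper's k = k0 + log2(sqrt(w*h)/224) rule), not code from the thesis.

```python
import math

def fpn_level(box_w, box_h, base_size=224, base_level=4,
              min_level=2, max_level=5):
    """Assign a box to a pyramid level by its scale (FPN-paper heuristic):
    level = base_level + log2(sqrt(w * h) / base_size), clamped to range.
    All parameter defaults here are illustrative assumptions."""
    scale = math.sqrt(box_w * box_h)
    level = base_level + math.log2(scale / base_size)
    return max(min_level, min(max_level, round(level)))

# Small objects map to fine levels, large ones to coarse levels.
print(fpn_level(32, 32))    # prints 2 (fine level)
print(fpn_level(448, 448))  # prints 5 (coarse level)
```

Note that the assignment depends only on sqrt(w*h), the geometric mean of the box sides: a 32x32 box and a 256x4 box land on the same level, which is precisely the square bias discussed next.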

However, FPNs have a square bias and favour square objects over asymmetric ones. One way to alleviate this bias is to add template anchor boxes of various sizes and aspect ratios, biasing the network towards non-square objects. However, anchor boxes are set as hyperparameters and add computational overhead to the network. Newer architectures have thus moved towards anchor-free techniques, yet they still rely on square-biased FPNs. Recently, MatrixNets was proposed: a general-purpose, aspect-ratio-aware extension of FPNs that models aspect ratios explicitly, and better than anchor boxes, while keeping the model anchor-free. While MatrixNets has been shown to improve keypoint-based object detectors significantly, the original implementation makes substantial changes to the architecture, making it difficult to isolate the impact of MatrixNets alone.
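To illustrate what aspect-ratio awareness means here, the sketch below mimics the MatrixNets idea of a two-dimensional grid of feature maps, where layer (i, j) downsamples width by 2^i and height by 2^j: a wide box is routed to a layer with more width downsampling, a tall box to one with more height downsampling, and the diagonal i = j corresponds to an ordinary FPN. The helper name, base size, and rounding rule are hypothetical simplifications, not the thesis implementation.

```python
import math

def matrixnet_layer(box_w, box_h, base=24, levels=5):
    """Route a box to layer (i, j) of a MatrixNets-style grid:
    i grows with box width and j with box height, so a wide box and
    a tall box of the same area land on different layers.
    `base` and the rounding rule are illustrative assumptions."""
    i = min(levels - 1, max(0, round(math.log2(box_w / base))))
    j = min(levels - 1, max(0, round(math.log2(box_h / base))))
    return i, j

# Same area, different aspect ratios -> different layers.
print(matrixnet_layer(96, 96))   # square box: (2, 2), on the FPN diagonal
print(matrixnet_layer(192, 48))  # wide box:   (3, 1)
print(matrixnet_layer(48, 192))  # tall box:   (1, 3)
```

Contrast this with the FPN heuristic above: there, all three of these boxes would share one level. Separating them by aspect ratio is what lets each layer specialize without hand-tuned anchor boxes.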

In this thesis, we explore MatrixNets as a viable method to add aspect-ratio awareness. Specifically, we study MatrixNets along three axes: 1) Can MatrixNets make anchor-based detectors anchor-free? 2) Does MatrixNets add aspect-ratio awareness to object detectors? 3) Can MatrixNets be used for other, more complicated computer vision tasks such as instance segmentation? We explore these questions via three case studies. First, we demonstrate the effectiveness of MatrixNets by replacing the anchor boxes in RetinaNet with our MatrixNets module, showing better performance on skewed boxes while making the detector anchor-free. Second, we extend the anchor-free CornerNet to x-CornerNet, which supports multiple output heads and smaller backbones; applying MatrixNets to x-CornerNet yields a similar improvement on skewed boxes and an overall 5.6% mAP improvement on MS COCO, achieving competitive results. Finally, we add MatrixNets to Mask R-CNN to tackle the instance segmentation task. We propose a new loss function, Mask Edge Loss (MEL), which leverages mask contours to reduce coarseness in predicted masks, thereby achieving higher accuracy. Together, these three case studies demonstrate the effectiveness of MatrixNets for adding aspect-ratio awareness to object detectors. The codebase for our implementation will be made public.


To join this master’s thesis presentation on Zoom, please go to https://zoom.us/j/91941954965?pwd=bmROOWkrZGlzNjltSU02S0FEem9wZz09.

Location 
Online presentation
200 University Avenue West

Waterloo, ON N2L 3G1
Canada
