PhD Defence • Computer Vision • Higher-order Losses and Optimization for Low-level and Deep Segmentation

Monday, December 6, 2021, 9:00 am EST (GMT -05:00)

Please note: This PhD defence will be given online.

Dmitrii Marin, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Yuri Boykov

Regularized objectives are common in low-level and deep segmentation. Regularization incorporates prior knowledge into objectives or losses, representing constraints needed to address ill-posedness, data noise, outliers, lack of supervision, etc. However, such constraints come at significant cost. First, regularization priors may introduce unintended biases, known or unknown. Since these can adversely affect specific applications, it is important to understand their causes and effects and to develop solutions to them. Second, common regularized objectives are highly non-convex and challenging to optimize. As is well known in low-level vision, first-order approaches such as gradient descent are significantly weaker than more advanced algorithms. Yet, variants of gradient descent dominate the optimization of loss functions for deep neural networks due to their size and complexity. Hence, standard segmentation networks still require an overwhelming amount of precise pixel-level supervision for training.
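For concreteness, a canonical regularized objective in low-level segmentation combines a per-pixel data term with a pairwise smoothness prior; the Potts-style energy below is a standard illustration of this structure (the notation here is generic, not taken from the thesis itself):

    E(S) = \sum_{p \in \Omega} D_p(S_p) + \lambda \sum_{(p,q) \in \mathcal{N}} w_{pq} \, [S_p \neq S_q]

where S assigns a label S_p to each pixel p, D_p measures how well label S_p fits the observed data, \mathcal{N} is a set of neighboring pixel pairs, w_{pq} are (often contrast-sensitive) weights, and \lambda controls the strength of the prior. Discrete, combinatorial objectives of this form are exactly where first-order methods fall short and where specialized solvers such as graph cuts excel.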

This thesis addresses three related problems concerning higher-order objectives and higher-order optimizers. First, we focus on a challenging application: unsupervised tree extraction in large 3D volumes containing complex “entanglements” of near-capillary vessels. In the context of vasculature with unrestricted topology, we propose a new general curvature-regularizing model for arbitrarily complex one-dimensional curvilinear structures. In contrast, standard surface regularization methods are impractical for thin vessels due to a strong shrinking bias or the complexity of modeling Gaussian/minimum curvature on two-dimensional manifolds. The shrinking bias is one well-known example of bias in standard regularization methods. The second contribution of this thesis is a characterization of new forms of bias in classical segmentation models that were not previously understood. We develop new theory establishing data-density biases in common pairwise or graph-based clustering objectives, such as kernel K-means and normalized cut. This theoretical understanding motivates our new segmentation algorithms that avoid such biases. The third contribution is a new optimization algorithm addressing the limitations of gradient descent in the context of regularized losses for deep learning. Our general trust-region algorithm can be seen as a higher-order chain rule for network training, and it can incorporate many standard low-level regularizers together with their powerful solvers. Using a well-motivated low-level regularization model and its graph-cut solver, we improve the state of the art in weakly-supervised semantic segmentation.
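To make the regularized-loss idea concrete in the deep setting, the sketch below combines partial cross-entropy over the few labeled pixels (e.g., scribbles) with a contrast-sensitive pairwise relaxation on the network's soft predictions. This is a minimal illustrative sketch, not the thesis' actual formulation: all names, shapes, and the specific regularizer are assumptions, and the thesis goes further by replacing gradient descent on such regularizers with a trust-region step that invokes a dedicated low-level solver such as graph cuts.

import torch
import torch.nn.functional as F

def regularized_loss(logits, scribbles, image, lam=1.0, sigma2=0.1):
    """Hypothetical regularized loss for weakly-supervised segmentation.

    logits:    (B, K, H, W) network outputs
    scribbles: (B, H, W) integer labels, -1 where unlabeled
    image:     (B, C, H, W) input, used for contrast-sensitive weights
    """
    # Partial cross-entropy: supervise only the scribbled pixels.
    partial_ce = F.cross_entropy(logits, scribbles, ignore_index=-1)

    # Soft pairwise (Potts-like) regularizer: penalize disagreement between
    # neighboring predictions, discounted across strong image edges.
    # Only horizontal neighbor pairs are shown for brevity.
    probs = logits.softmax(dim=1)
    diff = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().sum(dim=1)
    edge = (image[:, :, :, 1:] - image[:, :, :, :-1]).pow(2).sum(dim=1)
    w = torch.exp(-edge / sigma2)  # contrast-sensitive weights
    reg = (w * diff).mean()

    return partial_ce + lam * reg

# Example usage with dummy tensors (batch of 2, 4 classes, 32x32 images):
logits = torch.randn(2, 4, 32, 32, requires_grad=True)
image = torch.rand(2, 3, 32, 32)
scribbles = torch.full((2, 32, 32), -1, dtype=torch.long)
scribbles[:, 16, 16] = 1  # one scribbled pixel per image
loss = regularized_loss(logits, scribbles, image)
loss.backward()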


To join this PhD defence on Zoom, please go to https://uwaterloo.zoom.us/j/94195323466?pwd=Y3R0eU92Nk15bEo0SVltRDA4cStvQT09.