Please note: This PhD defence will take place online.
Stephanie Maaz, PhD candidate
David R. Cheriton School of Computer Science
Supervisors: Professors Naomi Nishimura and Amer E. Mouawad
Classically, the study of computational problems has primarily focused on what we call a \emph{static model} of the world. Specifically, given a \emph{fixed}, unchanging input instance, e.g., a graph, the usual goal is to compute a \emph{fixed} subset of vertices or edges that minimizes or maximizes some objective function. Beyond assuming unchanging instances, the static model also ignores the potential existence of prior (partially feasible) solutions as well as the costs associated with materializing a new (more optimal) one. These limitations become more apparent when one considers the \emph{dynamic model} of the world, in which we sometimes seek efficient transformations of a given system from one state to another. Upgrading public transport lines is a typical example, since a reasonable strategy is expected to minimize criteria such as cost, environmental impact, and disruption time. It is therefore crucial for the corresponding computational problem to have knowledge of the current state of the system and to seek a new, more desirable state while minimizing the number of required “changes” (and the number of undesired ones).
\emph{Solution discovery} and \emph{reconfiguration} problems constitute two possible ways of addressing such computational problems arising in our dynamic model of the world, and they are the main topics of this thesis. Both model system states as configurations, such as sets of tokens placed on the vertices or edges of a graph. In our solution discovery problems, we start from an initial configuration and seek to transform it into any final configuration that satisfies a desired property (such as forming a shortest path) and in which each vertex holds at most one token, using at most a given budget of token slides along graph edges. In the reconfiguration problems we study, both the initial and target configurations are specified, and we must determine whether one can be transformed into the other within a given budget of token multi-slides, where a single move slides a token along a path of vertices containing no other tokens. These token movements capture real-world constraints under which changes must be local, prohibiting arbitrary relocation, as seen in applications ranging from robot motion planning to quantum circuit compilation.
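To make the solution discovery setting concrete, the following is a minimal brute-force sketch (not an algorithm from the thesis) that searches over token configurations by breadth-first search. The names (discover, adj, satisfies) are illustrative only, and the sketch assumes the single-slide model in which every vertex holds at most one token at all times.

from collections import deque

def discover(adj, start, satisfies, budget):
    """Can the token set `start` be transformed, using at most `budget`
    single-edge slides onto unoccupied neighbours, into a configuration
    for which `satisfies` returns True?  (Exhaustive BFS; exponential.)"""
    start = frozenset(start)
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        config, used = queue.popleft()
        if satisfies(config):
            return True
        if used == budget:
            continue
        for v in config:              # choose a token to slide
            for u in adj[v]:          # slide it along an incident edge
                if u not in config:   # keep at most one token per vertex
                    nxt = (config - {v}) | {u}
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, used + 1))
    return False

# Toy instance: on the path 0-1-2-3 with tokens on {0, 3} and budget 1,
# a vertex cover can be discovered by sliding the token on 0 to 1.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
is_cover = lambda c: all(u in c or v in c for u in adj for v in adj[u])
print(discover(adj, {0, 3}, is_cover, budget=1))  # True

The thesis is concerned precisely with when such a search can be avoided, i.e., when the problem admits efficient (exact) algorithms rather than exhaustive exploration of the configuration space.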
This thesis contributes to the computational complexity landscape of these transformation problems. For solution discovery problems where the target property is to form a matching, a vertex/edge cut, or a shortest path, we show that even though these properties are efficiently computable in the static setting, their discovery variants are NP-hard. Similarly, the reconfiguration problems we study are NP-hard regardless of whether tokens are distinguishable or indistinguishable. This motivates a parameterized complexity approach, which provides a more refined analysis through \emph{fixed-parameter tractable} algorithms, i.e., algorithms whose running times are exponential only in carefully chosen parameters, typically small in practice, while remaining polynomial in the input size. Under this framework, we investigate these transformation problems, including solution discovery problems for polynomial-time solvable properties (matching, vertex/edge cut, shortest path) and NP-hard properties (independent set, dominating set, vertex cover), under the fundamental parameters number of tokens $k$ and transformation budget $b$. Additionally, we examine how certain structural parameters affect the parameterized complexity; in particular, we analyze the independent set, dominating set, and vertex cover solution discovery variants with respect to the parameter pathwidth.
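For readers unfamiliar with the terminology, a fixed-parameter tractable algorithm for instances of size $n$ with a parameter such as $k$ or $b$ is one whose running time has the standard form
\[
  f(k) \cdot n^{O(1)},
\]
for some computable function $f$, e.g., $2^{O(k)} \cdot n^{O(1)}$. This contrasts with running times of the form $n^{O(k)}$, in which the parameter appears in the exponent of the input size.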
Beyond fixed-parameter tractability, we investigate the \emph{kernelization} complexity of the studied solution discovery problems to understand the limits of preprocessing. Kernelization provides a mathematical framework for analyzing data reduction: it asks whether large instances of a problem can be efficiently compressed into equivalent instances whose size depends only on the parameter. A \emph{polynomial kernel} exists when instances can be reduced to a size that is at most polynomial in the parameter, indicating that effective preprocessing is possible. For the problems we prove to be fixed-parameter tractable, we establish both upper and lower bounds on kernel sizes. Our analysis extends to combined parameters; for example, we examine how the structural parameter pathwidth interacts with the transformation budget to affect preprocessing possibilities.
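In standard terms, a kernelization is a polynomial-time algorithm that maps an instance $(I, k)$ to an equivalent instance $(I', k')$ with
\[
  |I'| + k' \le g(k)
\]
for some computable function $g$; the kernel is polynomial when $g$ is a polynomial. Kernel lower bounds typically show that, under standard complexity-theoretic assumptions, no polynomial bound $g$ can exist.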
Finally, we move beyond individual problem analysis through meta-theorems that characterize entire families of solution discovery problems using descriptive complexity. Rather than proving similar results repeatedly for each graph property, we investigate general theorems for all solution discovery problems whose target properties are definable in logic, specifically first-order (FO) logic or the more expressive monadic second-order (MSO) logic. We prove that MSO solution discovery is fixed-parameter tractable when parameterized by the structural parameter neighborhood diversity, exploiting equivalences between vertices of the same type to reduce the search space. Conversely, we demonstrate that even FO solution discovery is hard, both classically and under parameterized complexity assumptions, for several natural structural parameters, including the twin cover number and the sizes of modulators to stars and to paths. Through these results, we delineate precise boundaries within a well-studied hierarchy of structural parameters, establishing where in the hierarchy meta-tractability results are possible.
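As a simple illustration of the kind of properties involved (not a formula taken from the thesis), the statement “the token set $S$ is an independent set” can be written with a free set variable $S$ as
\[
  \forall u\, \forall v\, \big( (u \in S \land v \in S) \rightarrow \neg E(u,v) \big),
\]
which MSO solution discovery can take as its target property. Properties that additionally quantify over vertex sets, such as 3-colourability, are expressible in MSO but not in FO, which is what separates the two logical fragments considered here.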