Qi Ge and Daniel Stefankovic. The Complexity of Counting Eulerian Tours in 4-Regular Graphs
|
Abstract: We investigate the complexity of counting Eulerian tours ({\sc \#ET}) and its variations from two perspectives---the complexity of exact counting and the complexity w.r.t. approximation-preserving reductions (AP-reductions~\cite{MR2044886}). We prove that {\sc \#ET} is \#P-complete even for planar $4$-regular graphs.
A closely related problem is that of counting A-trails ({\sc \#A-trails}) in graphs with rotational embedding schemes (so called maps). Kotzig~\cite{MR0248043} showed that {\sc \#A-trails} can be computed in polynomial time for $4$-regular plane graphs (embedding in the plane is equivalent to giving a rotational embedding scheme). We show that for $4$-regular maps the problem is \#P-hard. Moreover, we show that from the approximation viewpoint {\sc \#A-trails} in $4$-regular maps captures the essence of {\sc \#ET}, that is, we give an AP-reduction from {\sc \#ET} in general graphs to {\sc \#A-trails} in $4$-regular maps. The reduction uses a fast mixing result for a card shuffling problem~\cite{MR2023023}.
|
Jérémy Barbay, Francisco Claude and Gonzalo Navarro. Compact Rich-Functional Binary Relation Representations
|
Abstract: Binary relations are an important abstraction arising in a number of data representation problems. The data structures that exist for them have focused on a few basic operations that have traditionally been needed. However, as research on (especially compressed) data structures evolves, more advanced binary relation functionalities are required. This paper aims to initiate a fundamental study of the relation between advanced operations on binary relations and their compressibility. We define an extended set of operations, show how some reduce to others, extend an existing representation and propose a novel one, and in both cases show how to support as much of the new functionality as possible. While the novel representation is weaker in functionality, its space requirement is close to the entropy of the binary relation, and it turns out to be further compressible when the binary relation exhibits regularities that arise in real life.
|
Johannes Fischer. Optimal Succinctness for Range Minimum Queries |
Abstract: For a static array A of n totally ordered objects, a range minimum query asks for the position of the minimum between two specified array indices. We show how to preprocess A into a scheme of size 2n+o(n) bits that allows us to answer range minimum queries on A in constant time. This space is asymptotically optimal in the important setting where access to A is not permitted after the preprocessing step. Our scheme can be computed in linear time, using only n + o(n) additional bits for construction. We also improve on LCA-computation in BPS- or DFUDS-encoded trees. |
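Range minimum queries are easy to state in code. As an illustration of the query interface only, the following sketch uses a classical sparse table, which answers queries in constant time but costs O(n log n) words rather than the 2n+o(n) bits of the scheme above; all function and variable names are my own, not the paper's.

```python
def build_sparse_table(a):
    # st[j][i] holds the index of the minimum of a[i .. i + 2**j - 1]
    n = len(a)
    st = [list(range(n))]
    j = 1
    while (1 << j) <= n:
        prev, half = st[j - 1], 1 << (j - 1)
        row = []
        for i in range(n - (1 << j) + 1):
            l, r = prev[i], prev[i + half]
            row.append(l if a[l] <= a[r] else r)
        st.append(row)
        j += 1
    return st

def rmq(a, st, i, j):
    # position of the minimum of a[i..j] (inclusive), answered in O(1) time
    k = (j - i + 1).bit_length() - 1
    l, r = st[k][i], st[k][j - (1 << k) + 1]
    return l if a[l] <= a[r] else r
```

The two overlapping power-of-two windows cover [i, j] exactly, which is what makes the constant-time query work.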
Beate Bollig. A larger lower bound on the OBDD complexity of the most significant bit of multiplication |
Abstract: Ordered binary decision diagrams (OBDDs) are one of the most common dynamic data structures for Boolean functions. The reachability problem, i.e., computing the set of nodes reachable from some source $s\in V$ in a digraph $G=(V,E)$, is an important problem in computer-aided design, hardware verification, and model checking. Sawitzki (2006) has presented exponential lower bounds on the space complexity of a restricted class of symbolic OBDD-based algorithms for the reachability problem. Here, these lower bounds are improved by presenting a larger lower bound on the OBDD complexity of the most significant bit of integer multiplication. |
Atlas F. Cook IV, Carola Wenk, Ovidiu Daescu, Steven Bitner, Yam K. Cheung and Anastasia Kurdia. Visiting a Sequence of Points with a Bevel-Tip Needle |
Abstract: Many surgical procedures could benefit from guiding a bevel-tip needle along circular arcs to multiple treatment points in a patient. At each treatment point, the needle typically extracts a tissue sample or injects a radioactive pellet into a cancerous region. Our main result is an algorithm to steer a bevel-tip needle through a sequence of treatment points in the plane while minimizing the number of times that the needle must be reoriented. This algorithm takes quadratic time when consecutive points in the sequence are sufficiently separated. When consecutive points are not sufficiently separated, optimal substructure can fail, and our fixed-parameter tractable algorithm steers a needle by accounting for locally suboptimal paths. |
MohammadHossein Bateni and MohammadTaghi Hajiaghayi. Euclidean Prize-collecting Steiner Forest |
Abstract: In this paper, we consider Steiner forest and its generalizations, prize-collecting Steiner forest and k-Steiner forest, when the vertices of the input graph are points in the Euclidean plane and the lengths are Euclidean distances. First, we present a simpler analysis of the polynomial-time approximation scheme (PTAS) of Borradaile et al. [10] for the Euclidean Steiner forest problem. This is done by proving a new structural property and modifying the dynamic programming by adding a new piece of information to each dynamic programming state. Based on our new structural property and algorithm, we develop a PTAS for a well-motivated case, namely the multiplicative case, of prize-collecting and budgeted Steiner forest. The ideas used in the algorithm may have applications in the design of a broad class of bicriteria PTASs. Finally, we demonstrate why PTASs for these problems can be hard in the general Euclidean case (and thus why, for PTASs, we cannot go beyond the multiplicative case). |
Dominik Gall, Riko Jacob, Andrea Richa, Christian Scheideler, Stefan Schmid and Hanjo Täubig. Time Complexity of Distributed Topological Self-Stabilization: The Case of Graph Linearization |
Abstract: Topological self-stabilization is an important concept to build robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms that converge quickly to such a desirable topology, independently of the initial network state. This paper proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization---i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. We propose two variants of a simple algorithm, and provide an extensive formal analysis of their worst-case and best-case parallel time complexities, as well as their performance under a greedy selection of the actions to be executed. |
Jonathan Backer and Mark Keil. The Mono- and Bichromatic Empty Rectangle and Square Problems in All Dimensions |
Abstract: The maximum empty rectangle problem is as follows: given a set of red points in $\mathbb{R}^{d}$ and an axis-aligned hyperrectangle $B$, find an axis-aligned hyperrectangle $R$ of greatest volume that is contained in $B$ and contains no red points. In addition to this problem, we also consider three natural variants: finding a hypercube instead of a hyperrectangle, containing as many blue points as possible instead of maximising volume, and doing both. Combining the results of this paper with previous results, we now know that all four of these problems (a) are NP-complete if $d$ is part of the input, (b) have polynomial-time sweep-plane solutions for any fixed $d \ge 3$, and (c) have near-linear time solutions in two dimensions. |
Greg Aloupis, Jean Cardinal, Sebastien Collette, Shinji Imahori, Matias Korman, Stefan Langerman, Oded Schwartz, Shakhar Smorodinsky and Perouz Taslakian. Colorful Strips |
Abstract: We study the following geometric hypergraph coloring problem: given a planar point set and an integer $k$, we wish to color the points with $k$ colors so that any axis-aligned strip containing sufficiently many points contains all colors. We show that if the strip contains at least $2k{-}1$ points, such a coloring can always be found. In dimension $d$, we show that the same holds provided the strip contains at least $k(4\ln k +\ln d)$ points.
We also consider the dual problem of coloring a given set of axis-aligned strips so that any sufficiently covered point in the plane is covered by $k$ colors. We show that in dimension $d$ the required coverage is at most $d(k{-}1)+1$.
Lower bounds are also given for the two problems. This complements recent impossibility results on decomposition of strip coverings with arbitrary orientations.
From the computational point of view, we show that deciding whether a three-dimensional point set can be 2-colored so that any strip containing at least three points contains both colors is NP-complete. This is in contrast with the planar case, for which it is easy to decide. |
Nadja Betzler, Jiong Guo, Christian Komusiewicz and Rolf Niedermeier. Average Parameterization and Partial Kernelization for Computing Medians |
Abstract: We propose an effective polynomial-time preprocessing strategy for intractable median problems. Developing a new methodological framework, we show that if the input instances of generally intractable problems exhibit a sufficiently high degree of similarity between each other on average, then there are efficient exact solving algorithms. In other words, we show that the central median problems Swap Median Permutation, Consensus Clustering, Kemeny Score, and Kemeny Tie Score all are fixed-parameter tractable with respect to the parameter ``average distance between input objects''. To this end, we develop the new concept of ``partial kernelization'' and identify interesting polynomial-time solvable special cases for the considered problems. |
Britta Peis, Martin Skutella and Andreas Wiese. Packet Routing on the Grid |
Abstract: The packet routing problem, i.e., the problem of sending a given set of unit-size packets through a network on time, is one of the most fundamental routing problems, with important practical applications, e.g., in traffic routing, parallel computing, and the design of communication protocols. The problem involves critical routing and scheduling decisions: one has to determine a suitable (short) origin-destination path for each packet and resolve conflicts between packets whose paths have an edge in common. The overall aim is to find a path for each packet and a routing schedule with minimum makespan.
Grid graphs are a topology of significant practical importance. In this paper, we therefore investigate the packet routing problem under the restriction that the underlying graph is a grid. We establish approximation algorithms and complexity results for the general problem on grids, and under various constraints on the start and destination vertices or on the paths of the packets. |
Andreas Brandstädt, Christian Hundt and Ragnar Nevries. Efficient Edge Domination on Hole-free Graphs in Polynomial Time |
Abstract: This paper deals with the Efficient Edge Domination problem (EED, for short), also known as the Dominating Induced Matching problem. For an undirected graph G=(V,E), EED asks for an induced matching M, a subset of E, that simultaneously dominates all edges of G. Thus, the distance between edges of M is at least two, and every edge in E is adjacent to an edge of M. EED is related to parallel resource allocation problems, encoding theory, and network routing. The problem is NP-complete even for restricted classes such as planar bipartite graphs and bipartite graphs with maximum degree three. However, its complexity has been open for chordal bipartite graphs.
This paper shows that EED can be solved in polynomial time on hole-free graphs. Moreover, it gives a linear-time algorithm for chordal bipartite graphs.
Finally, we strengthen the NP-completeness result to planar bipartite graphs of maximum degree three. |
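To make the definition concrete, here is a small sketch (my own code and representation, not from the paper) that checks whether a set M of edges is a dominating induced matching of a graph given as an edge list:

```python
def is_dominating_induced_matching(edges, M):
    """Check that M is an induced matching of the graph `edges`
    and that every edge of the graph touches an endpoint of M."""
    E = {frozenset(e) for e in edges}
    Mset = {frozenset(e) for e in M}
    if not Mset <= E:
        return False
    mv = [v for e in Mset for v in e]
    if len(mv) != len(set(mv)):      # M-edges must be vertex-disjoint
        return False
    covered = set(mv)
    for e in E:
        u, v = tuple(e)
        # induced: an edge with both endpoints on M must itself be in M
        if u in covered and v in covered and e not in Mset:
            return False
        # dominating: every edge must touch some M-vertex
        if u not in covered and v not in covered:
            return False
    return True
```

On the path 1-2-3-4, the single edge (2,3) is a dominating induced matching, while (1,2) alone leaves (3,4) undominated.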
Viliam Geffert and Giovanni Pighizzini. Pairs of Complementary Unary Languages with "Balanced" Nondeterministic Automata |
Abstract: For each sufficiently large N, there exists a unary regular language L such that both L and its complement L^c are accepted by unambiguous nondeterministic automata with at most N states, while the smallest deterministic automata for these two languages require an exponential number of states, at least $e^{\Omega(\sqrt[3]{N\cdot\ln^{2}\!N})}$. Actually, L and L^c are accepted by nondeterministic machines sharing the same transition graph, differing only in the distribution of their final states. As a consequence, the gap between the sizes of unary unambiguous self-verifying automata and deterministic automata is also exponential. |
Amalia Duch, Conrado Martinez and Rosa M. Jimenez. Rank Selection in Multidimensional Data |
Abstract: Suppose we have a set of $K$-dimensional records stored in a general purpose spatial index like a $K$-d tree. The index efficiently supports insertions, ordinary exact searches, orthogonal range searches, nearest neighbor searches, etc. Here we consider whether we can also efficiently support search by rank, that is, locating the $i$-th smallest element along the $j$-th coordinate. We answer this question in the affirmative by developing a simple algorithm with expected cost $O(n^{\alpha(1/K)}\log n)$, where $n$ is the size of the $K$-d tree and $\alpha(1/K)<1$ for any $K\ge 2$. The only requirement to support search by rank is that each node in the $K$-d tree stores the size of the subtree rooted at that node (or some equivalent information). This is not too space demanding. Furthermore, it can be used to randomize the update algorithms to provide guarantees on the expected performance of the various operations on $K$-d trees. Although selection in multidimensional data can be solved more efficiently than with our algorithm, those solutions rely on ad hoc data structures or superlinear space. Our solution adds to an existing data structure ($K$-d trees, for instance, but it can easily be adapted to other spatial indexes) the capability of search by rank with very little overhead. The simplicity of the algorithm makes it easy to implement, practical and very flexible; however, its correctness and efficiency are far from self-evident. |
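The subtree-size augmentation mentioned above is the same one that supports rank selection in an ordinary one-dimensional binary search tree. As background intuition only, here is a minimal sketch of that simpler 1-D case (my own code, not the paper's $K$-d tree algorithm):

```python
class Node:
    """BST node augmented with the size of its subtree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    root.size += 1          # maintain subtree sizes on the way up
    return root

def select(node, i):
    """Return the i-th smallest key (1-indexed), guided by subtree sizes."""
    left_size = node.left.size if node.left else 0
    if i == left_size + 1:
        return node.key
    if i <= left_size:
        return select(node.left, i)
    return select(node.right, i - left_size - 1)
```

In a $K$-d tree the discrimination alternates between coordinates, which is why selection along one coordinate costs more than the O(height) of this 1-D case.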
Tsunehiko Kameda, Ichiro Suzuki and John Z. Zhang. Finding the Minimum-Distance Schedule for a Boundary Searcher with a Flashlight |
Abstract: Consider a dark polygonal region in which intruders move freely, trying to avoid detection. A robot, which is equipped with a flashlight, moves along the polygon boundary to illuminate all intruders. We want to minimize the total distance traveled by the robot until all intruders are detected. We present an $O(n\log n)$ time and $O(n)$ space algorithm for optimizing this metric, where $n$ is the number of vertices of the given polygon. This improves upon the best known time and space complexities of $O(n^2)$ and $O(n^2)$, respectively. |
Prosenjit Bose, Karim Douïeb, Vida Dujmović and John Howat. Layered Working-Set Trees |
Abstract: The working-set bound [Sleator and Tarjan, J. ACM, 1985] roughly states that searching for an element is fast if the element was accessed recently. Binary search trees, such as splay trees, can achieve this property in the amortized sense, while data structures that are not binary search trees are known to have this property in the worst case. We close this gap and present a binary search tree called a layered working-set tree that guarantees the working-set property in the worst case. The unified bound [Badoiu et al., TCS, 2007] roughly states that searching for an element is fast if it is near (in terms of rank distance) to a recently accessed element. We show how layered working-set trees can be used to achieve the unified bound to within a small additive term in the amortized sense, while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound. |
Andrew Novocin and Mark van Hoeij. Gradual sub-lattice reduction and a new complexity for factoring polynomials |
Abstract: We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, an improvement over the quadratic-complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm, we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well. |
Sylvain Lombardy and Jacques Sakarovitch. Radix cross-sections for length morphisms |
Abstract: We prove that the radix cross-section of a rational set for a length morphism, and more generally for a rational function from a free monoid into N, is rational, a property that no longer holds if the image of the function is a subset of a free monoid with two or more generators.
The proof is based on several results on finite automata, such as the lexicographic selection of synchronous relations and the iterative decomposition of unary rational series with coefficients in the tropical semiring. It also makes use of a structural construction on weighted transducers that we call the length difference unfolding. |
MohammadTaghi Hajiaghayi and Arefeh Nasri. Prize-Collecting Steiner Networks via Iterative Rounding |
Abstract: In this paper we design an iterative rounding approach for the classic {\em prize-collecting Steiner forest problem} and, more generally, the {\em prize-collecting survivable Steiner network design} problem. As a structural result, we show that in each iteration of our algorithm there is an LP variable in a basic feasible solution which is at least one-third-integral, resulting in a 3-approximation algorithm for this problem. In addition, we show that this factor $3$ in our structural result is indeed tight for prize-collecting Steiner forest and thus for prize-collecting survivable Steiner network design. In particular, this refutes the previous belief that one might be able to obtain an approximation factor better than 3 for these problems using a natural iterative rounding approach. Our structural result extends the celebrated iterative rounding approach of Jain~\cite{Jain01} with several new ideas, some drawn from more involved linear algebra. The approach of this paper can also be applied to obtain a constant-factor (bicriteria-)approximation algorithm for degree-constrained prize-collecting network design problems.
We emphasize that although in theory we can only prove the existence of an LP variable that is at least one-third-integral, in practice, in each iteration there very often exists a variable that is integral or almost integral, which results in a much better approximation factor than the provable factor of 3 in this paper (see patent application~\cite{patent}). This is indeed the advantage of our algorithm over previous approximation algorithms for prize-collecting Steiner forest with the same or slightly better provable approximation factors. |
Brigitte Vallee and Manfred Madritsch. Modelling the LLL algorithm via sandpiles |
Abstract: Lattice reduction algorithms---notably the LLL algorithm, created by Lenstra, Lenstra and Lovasz in 1982---aim at finding a ``reduced'' basis of a Euclidean lattice of dimension n. The LLL algorithm plays a primary role in many areas of mathematics and computer science, like cryptology, computer algebra, integer linear programming, and number theory. However, its general behaviour is far from well understood. There are already many experimental observations about the number of iterations or the geometry of the output that pose challenging questions which remain unanswered and lead to natural conjectures yet to be proved. However, until now, there exist few experimental observations about the precise execution of the algorithm. Here, we provide experimental results which precisely describe an essential parameter of the execution, namely the ``logarithm of the decreasing ratio''. These experiments give arguments towards a ``regularity'' hypothesis. We then propose a simplified model for the LLL algorithm based on this ``regularity'' hypothesis, which leads us to well-known algorithms based on discrete dynamical systems, such as sandpiles or chip-firing games. Under the validity of the regularity hypothesis, it is then possible to obtain a precise quantification of the main parameters of the LLL algorithm, notably the number of iterations and the geometry of the output. These results fit well with experiments performed on general input bases, which indirectly supports the validity of the regularity hypothesis and shows the usefulness of such a simplified model. |
Paolo Ferragina, Travis Gagie and Giovanni Manzini. Lightweight Data Indexing and Compression in External Memory |
Abstract: In this paper we describe algorithms for computing the BWT and for building (compressed) indexes in external memory. The innovative feature of our algorithms is that they are lightweight in the sense that, for an input of size n, they use only n bits of disk working space while all previous approaches use Theta(n log n) bits of disk working space. Moreover, our algorithms access disk data only via sequential scans, thus they take full advantage of modern disk features that make sequential disk accesses much faster than random accesses.
We also present a scan-based algorithm for inverting the BWT that uses Theta(n) bits of working space, and a lightweight internal-memory algorithm for computing the BWT which is the fastest in the literature when the available working space is o(n) bits.
Finally, we prove lower bounds on the complexity of computing and inverting the BWT via sequential scans in terms of the classic product: (internal-memory space) x (number of passes) over the disk data. |
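For readers unfamiliar with the Burrows-Wheeler transform that the abstract above computes and inverts, here is a naive in-memory reference sketch (my own code; it uses quadratic space, unlike the lightweight external-memory algorithms of the paper, and assumes a '$' sentinel smaller than all text characters):

```python
def bwt(s):
    # Burrows-Wheeler transform via sorted rotations of s + sentinel.
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(b):
    # Invert by repeatedly prepending the BWT column and re-sorting;
    # after len(b) rounds the table holds all rotations of the text.
    table = [""] * len(b)
    for _ in range(len(b)):
        table = sorted(b[i] + table[i] for i in range(len(b)))
    return next(r for r in table if r.endswith("$"))[:-1]
```

For example, bwt("banana") yields "annb$aa", and inverse_bwt recovers "banana".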
Bettina Speckmann and Kevin Verbeek. Homotopic Rectilinear Routing with Few Links and Thick Edges |
Abstract: We study the NP-hard problem of finding non-crossing thick minimum-link rectilinear paths which are homotopic to a set of input paths in an environment with rectangular obstacles. We present a 2-approximation that runs in $O(n^3 + k_{in} \log n + k_{out})$ time, where $n$ is the total number of input paths and obstacles and $k_{in}$ and $k_{out}$ are the total complexities of the input and output paths. Our algorithm not only approximates the minimum number of links, but also simultaneously minimizes the total length of the paths. We also show that an approximation factor of 2 is optimal when using smallest paths as a lower bound. |
Yuichi Asahiro, Eiji Miyano and Kazuaki Samizo. Approximating Maximum Diameter-Bounded Subgraphs |
Abstract: This paper studies the maximum diameter-bounded subgraph problem ({\sc MaxDBS} for short), which is defined as follows: given an $n$-vertex graph $G$ and a fixed integer $d\geq 1$, the goal is to find a largest subgraph of $G$ of diameter $d$. If $d=1$, the problem is identical to the maximum clique problem, and thus it is ${\cal NP}$-hard to approximate {\sc MaxDBS} to within a factor of $n^{1-\varepsilon}$ for any $\varepsilon > 0$. It is also known to be ${\cal NP}$-hard to approximate {\sc MaxDBS} to within a factor of $n^{1/3-\varepsilon}$ for any $\varepsilon >0$ and any fixed $d\geq 2$. In this paper, we first strengthen the hardness result: we prove that, for any $\varepsilon > 0$ and any fixed $d\geq 2$, it is ${\cal NP}$-hard to approximate {\sc MaxDBS} to within a factor of $n^{1/2-\varepsilon}$. Then, we show that a simple polynomial-time algorithm achieves an approximation ratio of $n^{1/2}$ for any even $d\geq 2$, and an approximation ratio of $n^{2/3}$ for any odd $d\geq 3$. Furthermore, we investigate the (in)tractability and the (in)approximability of {\sc MaxDBS} on subclasses of graphs, including chordal graphs, split graphs, interval graphs, and $k$-partite graphs. |
Alexis Ballier, Emmanuel Jeandel and Bruno Durand. Tilings robust to errors |
Abstract: We study the error robustness of tilings of the plane. The fundamental question is the following: given a tileset, what happens if we allow a small probability of errors? Are the objects we obtain close to an error-free tiling of the plane? We prove that tilesets that produce only periodic tilings are robust to errors; for this proof, we use a hierarchical construction of islands of errors. We also show that another class of tilesets, those that admit countably many tilings, is not robust, and that there is no computable way to distinguish between these two classes. |
Eric Angel, Evripidis Bampis and Nicolas Thibault. Randomized truthful algorithms for scheduling selfish tasks on parallel machines |
Abstract: We study the problem of designing truthful algorithms for scheduling a set of tasks, each one owned by a selfish agent, on a set of parallel (identical or unrelated) machines in order to minimize the makespan. We consider the following process: first the agents declare the lengths of their tasks, then, given these bids, the protocol schedules the tasks on the machines. The aim of the protocol is to minimize the makespan, i.e., the maximal completion time of the tasks, while the objective of each agent is to minimize the completion time of its task; thus an agent may lie if by doing so its task may finish earlier. In this paper, we show the existence of randomized truthful (non-polynomial-time) algorithms with expected approximation ratio equal to 3/2 for different scheduling settings (identical machines with and without release dates, and unrelated machines) and models of execution (strong or weak). Our result improves the best previously known result [1] for the problem with identical machines (P∣∣Cmax) in the strong model of execution and reaches, asymptotically, the lower bound of [5]. In addition, this result can be transformed into a polynomial-time truthful randomized algorithm with expected approximation ratio 3/2 + epsilon (resp. 11/6 − 1/3m) for Pm∣∣Cmax (resp. P∣∣Cmax). |
C.R. Subramanian and Kunal Dutta. Largest Induced Acyclic Tournament in Random Digraphs: A 2-point concentration |
Abstract: Given a simple directed graph $D = (V,A)$, let the size of the largest induced acyclic tournament be denoted by $mat(D)$.
Let $D \in \mathcal{D}(n,p)$ be a \emph{random} instance, obtained by choosing each of the ${{n}\choose{2}}$ possible undirected edges independently with probability $2p$ and then orienting each chosen edge in one of the two possible directions with probability $1/2$. We show that for such a random instance, $mat(D)$ is asymptotically almost surely one of only two possible values, namely $b^*$ or $b^*+1$, where $b^* = \lfloor 2(\log_{p^{-1}} n)+0.5 \rfloor$. We then show that any maximal induced acyclic tournament has size at least nearly half that of an optimal solution. We also analyze a polynomial-time heuristic and show that it produces a solution of size at least $\log_{p^{-1}} n + \Theta(\sqrt{\log_{p^{-1}} n})$. Our results also carry over to a related model in which each possible directed arc is chosen independently with probability $p$. An immediate corollary is that minimum feedback vertex set can be approximated within a ratio of $1+o(1)$ for random tournaments.
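For intuition, $mat(D)$ can be computed by brute force on tiny digraphs. The sketch below uses a standard characterization: a vertex set induces an acyclic (i.e., transitive) tournament exactly when every pair is joined by exactly one arc and the out-degrees within the set are pairwise distinct. This is my own illustrative code (exponential time), not the heuristic analyzed in the paper.

```python
from itertools import combinations

def induces_acyclic_tournament(S, arcs):
    A = set(arcs)
    for u, v in combinations(S, 2):
        # exactly one of the two opposite arcs must be present
        if ((u, v) in A) == ((v, u) in A):
            return False
    # a tournament is acyclic iff its out-degrees are 0, 1, ..., |S|-1
    out = sorted(sum((u, v) in A for v in S if v != u) for u in S)
    return out == list(range(len(S)))

def mat(vertices, arcs):
    """Size of the largest induced acyclic tournament (brute force)."""
    for k in range(len(vertices), 0, -1):
        if any(induces_acyclic_tournament(S, arcs)
               for S in combinations(vertices, k)):
            return k
    return 0
```

On the transitive triangle 1→2, 2→3, 1→3 this returns 3, while the cyclic triangle 1→2, 2→3, 3→1 only admits induced acyclic tournaments of size 2.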
Jaroslaw Byrka, Andreas Karrenbauer and Laura Sanita. The interval constrained 3-coloring problem |
Abstract: In this paper, we settle the open complexity status of interval constrained coloring with a fixed number of colors. We prove that the problem is already NP-complete if the number of different colors is 3. Previously, it was only known that the problem is NP-complete if the number of colors is part of the input, and that it is solvable in polynomial time if the number of colors is at most 2. We also show that it is hard to satisfy almost all of the constraints for a feasible instance. |
Vahab Mirrokni, S. Muthukrishnan and Uri Nadav. Quasi-Proportional Mechanisms: Prior-free Revenue Maximization |
Abstract: Inspired by Internet ad auction applications, we study the problem of allocating a single item via an auction when bidders place very different values on the item. We formulate this as a prior-free auction problem and focus on designing a simple mechanism that always allocates the item. Rather than designing sophisticated pricing methods as in prior literature, we design better allocation methods. In particular, we propose quasi-proportional allocation methods in which the probability that the item is allocated to a bidder depends (quasi-proportionally) on the bids.
We prove that the corresponding games for both all-pay and winners-pay quasi-proportional mechanisms admit pure Nash equilibria, and that this equilibrium is unique. We also give an algorithm to compute this equilibrium in polynomial time. Further, we show that the revenue of the auctioneer is promisingly high compared to the ultimate, i.e., the highest value of any of the bidders, and we show bounds on the revenue of equilibria both analytically and using experiments for specific quasi-proportional functions. This is the first known revenue analysis for these natural mechanisms (including the special case of the proportional mechanism, which is common in network resource allocation problems). |
Vonjy Rasendrahasina and Vlady Ravelomanana. Limit theorems for random MAX-$2$-XORSAT
Abstract: We consider random instances of the MAX-2-XORSAT optimization problem. A $2$-XOR formula is a conjunction of Boolean equations (or clauses) of the form $x \xor y = 0$ or $x \xor y = 1$. The MAX-2-XORSAT problem asks for the maximum number of clauses which can be satisfied by any assignment of the variables in a $2$-XOR formula. In this work, formulas of size $m$ on $n$ Boolean variables are chosen uniformly at random from among all ${n(n-1) \choose m}$ possible choices. Denote by $X_{n,m}$ the \textit{minimum} number of clauses that cannot be satisfied in a formula with $n$ variables and $m$ clauses. We give precise characterizations of the random variable $X_{n,m}$ around the critical density $\frac{m}{n} \sim \frac{1}{2}$ of random $2$-XOR formulas. When $\mu$ and $n$ are both large but $\mu=o(n^{1/3})$, we prove that for random formulas with $m= \frac{n}{2}-\frac{\mu}{2}n^{2/3}$ clauses, $X_{n,m}$ converges in distribution to a Poisson r.v. with mean $\sim \frac{\log{n}}{12} -\frac{\log{\mu}}{4}$. If $m = \frac{n}{2} + O(1)n^{2/3}$, then $\frac{12 X_{n,m}}{\log{n}} \distrib 1$. If $m = \frac{n}{2} + \frac{\mu}{2}n^{2/3}$ with $1 \ll \mu = o(n^{1/3})$, then $\frac{ 6X_{n,m}}{\mu^3+O(\log{n})} \distrib 1$. For any absolute constant $\varepsilon>0$, if $\mu = \varepsilon n^{1/3}$ then $\frac{8(1+\varepsilon)}{n( \varepsilon^2 - \sigma^2)} X_{n,m} \distrib 1$, where $\sigma$ is the solution of $(1+\varepsilon)e^{-\varepsilon} = (1-\sigma)e^{\sigma}$.
Thus, our findings describe phase transitions in the optimization context similar to those encountered in decision problems. |
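The quantity $X_{n,m}$ is easy to compute by brute force for tiny formulas, which helps fix intuition for the definition. A sketch with my own clause encoding, (i, j, b) standing for $x_i \xor x_j = b$:

```python
from itertools import product

def min_unsat(n, clauses):
    """Minimum number of unsatisfied clauses over all 2**n assignments
    of n Boolean variables; each clause (i, j, b) means x_i XOR x_j = b."""
    return min(
        sum((x[i] ^ x[j]) != b for i, j, b in clauses)
        for x in product((0, 1), repeat=n)
    )
```

For instance, the two contradictory clauses x_0 XOR x_1 = 0 and x_0 XOR x_1 = 1 give min_unsat = 1, as does any odd cycle of parity-1 constraints.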
Frédérique Bassino, Laura Giambruno and Cyril Nicaud. Complexity of Operations on Cofinite Languages |
Abstract: We study the worst-case complexity of regular operations on cofinite languages (i.e., languages whose complement is finite) and provide algorithms to efficiently compute the resulting minimal automata.
Greg Aloupis, Jean Cardinal, Sebastien Collette, Erik D. Demaine, Martin L. Demaine, Muriel Dulieu, Ruy Fabila-Monroy, Vi Hart, Ferran Hurtado, Stefan Langerman, Maria Saumell, Carlos Seara and Perouz Taslakian. Matching Points with Things |
Abstract: Given an ordered set of points and an ordered set of geometric objects in the plane, we are interested in finding a non-crossing matching between point-object pairs. We show that when the objects we match the points to are finite point sets, the problem is NP-complete in general, and polynomial when the objects are on a line or when their size is at most 2. When the objects are line segments, we show that the problem is NP-complete in general, and polynomial when the segments form a convex polygon or are all on a line. Finally, for objects that are straight lines, we show that the problem of finding a min-max non-crossing matching is NP-complete. |
Edyta Szymanska, Marek Karpinski and Andrzej Rucinski. Computational complexity of the Hamiltonian cycle problem in dense hypergraphs |
Abstract: We study the computational complexity of deciding the existence of a Hamiltonian Cycle in some dense classes of k-uniform hypergraphs. These problems turned out to be, along with the hypergraph Perfect Matching problems, exceedingly hard, and there is a renewed algorithmic interest in them. In this paper we design the first polynomial time algorithm for the Hamiltonian Cycle problem for k-uniform hypergraphs with density at least $\tfrac12 + \epsilon$, $\epsilon>0$. In doing so, we rely on a new method of constructing Hamiltonian cycles from (purely) existential statements, which could be of independent interest. We also establish a lower bound for the intractability of the problem at density at least $\tfrac1k - \epsilon$. Our results seem to be the first complexity-theoretic results for the Dirac-type dense hypergraph classes. |
Joachim von zur Gathen, Alfredo Viola and Konstantin Ziegler. Counting reducible, squareful, and relatively irreducible multivariate polynomials over finite fields |
Abstract: Among the multivariate polynomials over a finite field, most are irreducible. We present exact counting methods for some classes of special polynomials, namely the reducible ones, those with a square factor, and the “relatively irreducible” ones which are irreducible but factor over an extension field. We also present approximations with relative errors that essentially decrease exponentially in the input size. |
Petra Berenbrink, Robert Elsaesser and Thomas Sauerwald. Randomised Broadcasting: Memory vs. Randomness |
Abstract: In this paper we analyse broadcasting in $d$-regular networks with good expansion properties. For the underlying communication, we consider modifications of the so-called random phone call model. In the standard version of this model, each node is allowed in every step to open a channel to a randomly chosen neighbour, and the channels can be used for bi-directional communication. Then, broadcasting on the graphs mentioned above can be performed in time $O(\log n)$, where $n$ is the size of the network. However, every broadcast algorithm with runtime $O(\log n)$ needs on average $\Omega(\log n/\log d)$ message transmissions per node. This lower bound even holds for random $d$-regular graphs, and implies a large number of message transmissions, especially if $d$ is relatively small.
In this paper we show that it is possible to save significantly on communication if the standard model is modified such that nodes can avoid opening channels to the same neighbours in consecutive steps. We consider the so-called \rr model, where we assume that every node has a cyclic list of all of its neighbours, ordered in a random way. Then, in step $i$ the node communicates with the $i$-th neighbour from that list. We provide an $O(\log n)$ time algorithm which produces on average $O(\sqrt{\log n})$ transmissions per node in networks with good expansion properties. Furthermore, we present a related lower bound of $\Omega(\sqrt{\log n/\log\log n})$ for the average number of message transmissions. These results show that by using memory it is possible to reduce the number of transmissions per node by almost a quadratic factor. |
Gero Greiner and Riko Jacob. The I/O Complexity of Sparse Matrix Dense Matrix Multiplication |
Abstract: We consider the multiplication of a sparse N x N matrix A with a dense N x N matrix B in the I/O model. We determine the worst-case non-uniform complexity of this task up to a constant factor for all meaningful choices of the parameters N (dimension of the matrices), k (average number of non-zero entries per column or row in A, i.e., there are in total kN non-zero entries), M (main memory size), and B (block size), as long as M > B^2 (tall cache assumption).
For large and small k, the structure of the algorithm does not need to depend on the structure of the sparse matrix A, whereas for intermediate densities it is possible and necessary to find submatrices that fit in memory and are slightly denser than average.
The focus of this work is asymptotic worst-case complexity, i.e., the existence of matrices that require a certain number of I/Os and the existence of algorithms (sometimes depending on the shape of the sparse matrix) that use only a constant factor more I/Os. |
Rene van Bevern, Hannes Moser and Rolf Niedermeier. Kernelization Through Tidying---A Case Study Based on s-Plex Cluster Vertex Deletion |
Abstract: We introduce the NP-hard graph-based data clustering problem s-Plex Cluster Vertex Deletion, where the task is to delete at most k vertices from a graph so that the connected components of the resulting graph are s-plexes. In an s-plex, every vertex has an edge to all but at most s-1 vertices; cliques are 1-plexes. We propose a new method for kernelizing a large class of vertex deletion problems and illustrate it by developing an O(k^2 s^3)-vertex problem kernel for s-Plex Cluster Vertex Deletion that can be computed in O(k s n^2) time, where n is the number of graph vertices. The corresponding ``kernelization through tidying'' exploits polynomial-time approximation results. |
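The s-plex condition from the abstract above can be checked directly. The following is a naive verification sketch (the function name and graph encoding are our own, and this is unrelated to the paper's kernelization method):

```python
def is_s_plex_partition(n, edges, s):
    """Check that every connected component of the graph is an s-plex,
    i.e. each vertex in a component C is adjacent to at least |C| - s
    vertices of C.  Vertices are labelled 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen = set()
    for v in range(n):
        if v in seen:
            continue
        # depth-first search to collect the component containing v
        comp, stack = {v}, [v]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        if any(len(adj[u] & comp) < len(comp) - s for u in comp):
            return False
    return True

# A 4-cycle is a 2-plex (each vertex misses only one other vertex) but not a clique.
print(is_s_plex_partition(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 2))  # -> True
print(is_s_plex_partition(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 1))  # -> False
```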
Janusz Brzozowski, Galina Jiraskova and Baiyu Li. Quotient Complexity of Ideal Languages |
Abstract: We study the state complexity of regular operations in the class of ideal languages. A language L which is a subset of A* (the set of all words over alphabet A) is a right (left) ideal if it satisfies L = LA* (L = A*L). It is a two-sided ideal if L = A*LA*, and an all-sided ideal if L is the shuffle of A* with L. We prefer “quotient complexity” to “state complexity”, and we use quotient formulas to calculate upper bounds on quotient complexity whenever it is convenient. We find tight upper bounds on the quotient complexity of each type of ideal language in terms of the complexity of an arbitrary generator and of its minimal generator, and also on the operations union, intersection, set difference, symmetric difference, concatenation, star and reversal of ideal languages. |
Navid Imani and Qianping Gu. Connectivity is Not a Limit for Kernelization: Planar Connected Dominating Set |
Abstract: We prove a linear-size kernel for the connected dominating set problem in planar graphs through data reduction. Our set of rules efficiently reduces a planar graph $G$ with $n$ vertices and connected domination number $\gamma_c(G)$ to a kernel of size at most $413\gamma_c(G)$ in $O(n^3)$ time, answering negatively (in the case of planar connected dominating set) the question of whether the connectivity criterion hinders the construction of small kernels. Our result yields a fixed-parameter algorithm running in time $(2^{O(\sqrt{\gamma_c(G)})}\cdot \gamma_c(G) + n^3)$ using the standard branch-decomposition based approach. |
Emden Gansner, Yifan Hu, Michael Kaufmann and Stephen Kobourov. Optimal Polygonal Representation of Planar Graphs |
Abstract: In this paper, we consider the problem of representing graphs by polygons whose sides touch. We show that at least six sides per polygon are necessary by constructing a class of planar graphs that cannot be represented by pentagons. We also show that the lower bound of six sides is matched by an upper bound of six sides, with a linear-time algorithm for representing any planar graph by touching hexagons. Moreover, our algorithm produces convex polygons with edges of slopes 0, 1, and -1. |
Salvatore La Torre, Madhusudan Parthasarathy and Gennaro Parlato. The Language Theory of Bounded Context-Switching |
Abstract: Concurrent compositions of recursive programs with finite data are a natural abstraction model for concurrent programs. Since reachability is undecidable for this class, a restricted form of reachability has become popular in the formal verification literature, where the set of states reached within $k$ context-switches, for a fixed small constant $k$, is explored.
In this paper, we consider the language theory of these models. More precisely, we consider concurrent recursive programs with finite variable data domains that communicate using shared memory and work within $k$ round-robin rounds of context-switches, and where further the stack operations are made visible (as in visibly pushdown automata). We show that the corresponding class of languages, for any fixed $k$, forms a robust subclass of context-sensitive languages, closed under all the Boolean operations. Also, surprisingly, the deterministic and nondeterministic versions of the automata-theoretic model define the same class of languages.
This is the first class we are aware of that includes non-context-free languages, and yet has a decidable emptiness problem, is closed under all Boolean operations, and is determinizable. |
Adrian Dumitrescu and Minghui Jiang. Minimum-perimeter intersecting polygons |
Abstract: Given a set $\S$ of segments in the plane, a polygon $P$ is an intersecting polygon of $\S$ if every segment in $\S$ intersects the interior or the boundary of $P$. The problem MPIP of computing a minimum-perimeter intersecting polygon of a given set of $n$ segments in the plane was first considered by Rappaport in 1995. This problem is not known to be solvable in polynomial time, nor is it known to be NP-hard. Rappaport (1995) gave an exponential-time exact algorithm for MPIP\@. Hassanzadeh and Rappaport (2009) gave a polynomial-time approximation algorithm with ratio $\frac{\pi}{2} \approx 1.58$. In this paper, we present two improved approximation algorithms for MPIP: a $1.28$-approximation algorithm by linear programming, and a polynomial-time approximation scheme by discretization and enumeration. Our algorithms can be generalized for computing an approximate minimum-perimeter intersecting polygon of a set of convex polygons in the plane. In the other direction, we show that computing a minimum-perimeter intersecting polygon of a set of (not necessarily convex) simple polygons is NP-hard. |
Pankaj Agarwal, Jeff Phillips and Bardia Sadri. Lipschitz Unimodal and Isotonic Regression on Paths and Trees |
Abstract: We describe algorithms for finding the regression of an input sequence of values t to the closest sequence s under mean squared error, such that s is always increasing (isotonicity) and the values of two consecutive points do not increase by too much (Lipschitz). The isotonicity constraint can be replaced with a unimodality constraint, where s has exactly one local maximum. These algorithms are generalized from sequences of values to trees of values. For each scenario we describe near-linear-time algorithms. |
Hagai Cohen and Ely Porat. Fast Set Intersection and Two Patterns Matching |
Abstract: In this paper we present a new problem, the fast set intersection problem, which is to preprocess a collection of sets in order to efficiently report the intersection of any two sets in the collection. In addition we suggest new solutions for the two-dimensional substring indexing problem and the document listing problem for two patterns by reduction to the fast set intersection problem. |
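For concreteness, here is a naive baseline for the fast set intersection problem stated above. It only illustrates the preprocess/query interface; the paper's data structures are more sophisticated, and all names in this sketch are our own:

```python
from bisect import bisect_left

class SetIntersector:
    """Naive baseline for the fast set intersection problem: store each
    set sorted, answer a query by binary-searching each element of the
    smaller set in the larger one."""

    def __init__(self, sets):
        # Preprocessing: sort every set once.
        self.sets = [sorted(s) for s in sets]

    def intersect(self, i, j):
        """Report the intersection of sets i and j, in sorted order."""
        a, b = self.sets[i], self.sets[j]
        if len(a) > len(b):
            a, b = b, a
        out = []
        for x in a:
            p = bisect_left(b, x)
            if p < len(b) and b[p] == x:
                out.append(x)
        return out

si = SetIntersector([{1, 3, 5, 7}, {3, 4, 7, 9}, {2, 4}])
print(si.intersect(0, 1))  # -> [3, 7]
```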
Christine Chung, Katrina Ligett, Kirk Pruhs and Aaron Roth. The Power of Fair Pricing Mechanisms |
Abstract: We explore the revenue capabilities of truthful, monotone (“fair”) allocation and pricing functions for resource-constrained auction mechanisms within a general framework that encompasses unlimited-supply auctions, knapsack auctions, and auctions with general nondecreasing convex production cost functions. We study and compare the revenue obtainable in each fair pricing scheme to the profit obtained by the ideal omniscient multi-price auction. We show that (1) for capacitated knapsack auctions, no constant pricing scheme can achieve any approximation to the optimal profit, but proportional pricing is as powerful as general monotone pricing, and (2) for auction settings with arbitrary bounded nondecreasing convex production cost functions, there is a proportional pricing mechanism which achieves a poly-logarithmic approximation. Unlike existing approaches, all of our mechanisms have fair (monotone) prices, and all of our competitive analysis is with respect to the optimal profit extraction. |
Michael Coury, Pavol Hell, Jan Kratochvil and Tomas Vyskocil. Faithful Representations of Graphs by Islands in the Extended Grid |
Abstract: We investigate embeddings of graphs into the infinite extended grid graph. The problem was motivated by computation on adiabatic quantum computers, but it is related to a number of other well studied grid embedding problems. Such problems typically deal with representing vertices by grid points, and edges by grid paths, while minimizing some objective function such as the area or the maximum length of the grid paths representing the edges. Our particular model, while expressible in this language, is more naturally viewed as one where the vertices are represented by subtrees of the grid (called islands), and the edges are represented by the usual grid edges joining the islands. Somewhat unexpectedly, these graphs turn out to unify such seemingly unrelated graph classes as the string graphs and the induced subgraphs of the extended grid. The connection is established by limiting the size (number of vertices) $k$ of the representing islands. We prove that the problem of representability of $G$ by islands of size at most $k$ is NP-complete when $k < 3$ or $k > 5$. |
Qi Cheng and Yu-Hsin Li. Finding the smallest gap between sums of square roots |
Abstract: Let $k$ and $n$ be positive integers, $n>k$. Define $r(n,k)$ to be the minimum positive value of $$ |\sqrt{a_1} + \cdots + \sqrt{a_k} - \sqrt{b_1} - \cdots -\sqrt{b_k} | $$ where $ a_1, a_2, \cdots, a_k, b_1, b_2, \cdots, b_k $ are positive integers no larger than $n$. It is important to find a tight bound for $r(n,k)$, in connection to the sum-of-square-roots problem, a famous open problem in computational geometry. The current best lower bound and upper bound are far apart. In this paper, we present an algorithm to find $r(n,k)$ {\em exactly} in $n^{k+o(k)}$ time and in $ n^{\lceil k/2\rceil+o(k)} $ space. As an example, we are able to compute $r(100,7)$ exactly in a few hours on one PC. |
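The quantity $r(n,k)$ defined above can be computed naively for very small parameters. The following floating-point sketch (our own; far cruder than the paper's exact algorithm) illustrates the definition:

```python
from itertools import combinations_with_replacement
from math import sqrt

def r(n, k):
    """Smallest positive |sum sqrt(a_i) - sum sqrt(b_i)| over multisets
    a, b of k positive integers <= n.  Naive floating-point enumeration,
    only feasible for very small n and k."""
    sums = sorted(sum(sqrt(a) for a in c)
                  for c in combinations_with_replacement(range(1, n + 1), k))
    # The minimum positive difference between two sums is the smallest
    # nonzero gap between adjacent values in the sorted list of all sums.
    best = float('inf')
    for x, y in zip(sums, sums[1:]):
        d = y - x
        if d > 1e-12:  # treat numerically equal sums as equal
            best = min(best, d)
    return best

print(r(10, 2))
```

For $k=1$ and $n=5$, the minimum gap is attained by $\sqrt{5}-\sqrt{4}$.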
Anna Gal and Jing-Tang Jang. The Size and Depth of Layered Boolean Circuits |
Abstract: We consider the relationship between size and depth for layered Boolean circuits, synchronous circuits and planar circuits, as well as classes of circuits with small separators. In particular, we show that every layered Boolean circuit of size $s$ can be simulated by a layered Boolean circuit of depth $O(\sqrt{s\log s})$. For planar circuits and synchronous circuits of size $s$, we obtain simulations of depth $O(\sqrt{s})$. The best known result so far was by Paterson and Valiant \cite{TCS::PV1976}, and Dymond and Tompa \cite{JCSS::DT1985}, which holds for general Boolean circuits and states that $D(f) = O(C(f)/\log C(f))$, where $C(f)$ and $D(f)$ are the minimum size and depth, respectively, of Boolean circuits computing $f$. The proof of our main result uses an adaptive strategy based on the two-person pebble game introduced by Dymond and Tompa \cite{JCSS::DT1985}. Improving any of our results by polylog factors would immediately improve the bounds for general circuits. |
Prosenjit Bose, Paz Carmi, Michiel Smid and Daming Xu. Communication-Efficient Construction of the Plane Localized Delaunay Graph |
Abstract: Let $V$ be a finite set of points in the plane. We present a $2$-local algorithm that constructs a plane $\frac{4 \pi \sqrt{3}}{9}$-spanner of the unit-disk graph $\UDG(V)$. Each node can only communicate with nodes that are within unit-distance from it. This algorithm only makes one round of communication and each point of $V$ broadcasts at most $5$ messages. This improves on all previously known message-bounds for this problem. |
Paul Bonsma and Felix Breuer. Counting hexagonal patches and independent sets in circle graphs |
Abstract: A hexagonal patch is a plane graph in which inner faces have length 6, inner vertices have degree 3, and boundary vertices have degree 2 or 3. We consider the following counting problem: given a sequence of twos and threes, how many hexagonal patches exist with this degree sequence along the outer face? This problem is motivated by the enumeration of benzenoid hydrocarbons and fullerenes in computational chemistry. We give the first polynomial time algorithm for this problem. We show that it can be reduced to counting maximum independent sets in circle graphs, and give a simple and fast algorithm for this problem. |
Fedor Fomin, Fabrizio Grandoni, Daniel Lokshtanov and Saket Saurabh. Sharp Separation and Applications to Exact and Parameterized Algorithms |
Abstract: Many divide-and-conquer algorithms employ the fact that the vertex set of a graph of bounded treewidth can be separated into two roughly balanced subsets by removing a small subset of vertices, referred to as a separator. In this paper we prove a trade-off between the size of the separator and the sharpness with which we can fix the size of the two sides of the partition. Our result appears to be a handy and powerful tool for the design of exact and parameterized algorithms for NP-hard problems. We illustrate this by presenting two applications.
Our first application is a parameterized algorithm with running time O(16^{k+o(k)} + n^{O(1)}) for the Maximum Internal Subtree problem in directed graphs. This is a significant improvement over the best previously known parameterized algorithm for the problem by [Cohen et al.'09], running in time O(49.4^k + n^{O(1)}).
The second application is an O(2^{n+o(n)})-time algorithm for the Degree Constrained Spanning Tree problem: find a spanning tree of a graph with the maximum number of nodes satisfying given degree constraints. This problem generalizes some well-studied problems, among them Hamiltonian Path, Full Degree Spanning Tree, Bounded Degree Spanning Tree, Maximum Internal Spanning Tree and their edge-weighted variants. |
Per Austrin, Siavosh Benabbas and Avner Magen. On Quadratic Threshold CSPs |
Abstract: A predicate P : {-1, 1}^k -> {0, 1} can be associated with a constraint satisfaction problem Max-CSP(P). P is called "approximation resistant" if Max-CSP(P) cannot be approximated better than the approximation obtained by choosing a random assignment, and "approximable" otherwise. This classification of predicates has proved to be an important and challenging open problem. Motivated by a recent result of Austrin and Mossel (Computational Complexity, 2009), we consider a natural subclass of predicates defined by signs of quadratic forms, including the special case of predicates defined by signs of linear forms, and supply algorithms to approximate them as follows. In the quadratic case we prove that every symmetric predicate is approximable. We introduce a new rounding algorithm for the standard semidefinite programming relaxation of Max-CSP(P) for any predicate P : {-1, 1}^k -> {0, 1} and analyze its approximation ratio. Our rounding scheme operates by first manipulating the optimal SDP solution so that all the vectors are nearly perpendicular, and then applying a form of hyperplane rounding to obtain an integral solution. The advantage of this method is that we are able to analyze the behaviour of a set of k rounded variables together, as opposed to just a pair of rounded variables in most previous methods. In the linear case we prove that a predicate called "Monarchy" is approximable. This predicate is not amenable to our algorithm for the quadratic case, nor to other LP/SDP-based approaches we are aware of. |
Leonor Vázquez and Carles Padró. Finding Lower Bounds on the Complexity of Secret Sharing Schemes by Linear Programming |
Abstract: Determining the optimal complexity of secret sharing schemes for every given access structure is a difficult and long-standing open problem in cryptology. Lower bounds have been found by a combinatorial method that uses polymatroids. In this paper, we point out that the best lower bound that can be obtained by this method can be determined by using linear programming, and this can be effectively done for access structures on a small number of participants. By applying this linear programming approach, we present better lower bounds on the optimal complexity and the optimal average complexity of several access structures. Finally, by adding the Ingleton inequality to the previous linear programming approach, we find a few examples of access structures for which there is a gap between the optimal complexity of linear secret sharing schemes and the combinatorial lower bound. |
Diego Recalde, Cyriel Rutten, Petra Schuurman and Tjark Vredeveld. Local search performance guarantees for restricted related parallel machine scheduling |
Abstract: We consider the problem of minimizing the makespan on restricted related parallel machines. In restricted machine scheduling, each job is only allowed to be scheduled on a subset of the machines. We study the worst-case behavior of local search algorithms. In particular, we analyze the quality of local optima with respect to the jump, swap, push and lexicographical jump neighborhoods. |
Martin Fürer. Almost Linear Time Computation of the Chromatic Polynomial of a Graph of Bounded Tree-Width |
Abstract: An $O(n \log^2 n)$ algorithm is presented to compute all coefficients of the chromatic polynomial of an $n$-vertex graph of bounded tree-width. Previously, it was known how to evaluate the chromatic polynomial of such graphs in linear time, which implies a computation of all coefficients of the chromatic polynomial in quadratic time. |
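For illustration, the coefficients of the chromatic polynomial can be computed by classical deletion-contraction in exponential time. This sketch is our own and does not exploit bounded tree-width; the paper's contribution is the near-linear-time algorithm:

```python
def chromatic_poly(n, edges):
    """Coefficients (low to high degree) of the chromatic polynomial of a
    simple graph on vertices 0..n-1, via deletion-contraction:
    P(G) = P(G - e) - P(G / e).  Exponential time; only for small graphs."""
    edges = {frozenset(e) for e in edges if len(frozenset(e)) == 2}
    if not edges:
        return [0] * n + [1]  # P(empty graph on n vertices) = k^n
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = chromatic_poly(n, edges - {e})
    # Contract e: merge v into u, then relabel vertices > v down by one.
    def m(w):
        w = u if w == v else w
        return w - 1 if w > v else w
    contracted_edges = {frozenset((m(a), m(b)))
                        for a, b in (tuple(f) for f in edges - {e})}
    contracted = chromatic_poly(n - 1, contracted_edges)
    out = [0] * (n + 1)
    for i, c in enumerate(deleted):
        out[i] += c
    for i, c in enumerate(contracted):
        out[i] -= c
    return out

# Triangle K3: P(k) = k(k-1)(k-2) = k^3 - 3k^2 + 2k
print(chromatic_poly(3, [(0, 1), (1, 2), (0, 2)]))  # -> [0, 2, -3, 1]
```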
Daniel Panario, Brett Stevens and Qiang Wang. Ambiguity and Deficiency in Costas Arrays and APN Permutations |
Abstract: We introduce the concepts of weighted ambiguity and deficiency for a mapping between two finite Abelian groups of the same size. Then we study the optimum lower bounds of these measures for a permutation of $\mathbb{Z}_n$ and give a construction of these permutations by modifying some permutation polynomials over finite fields. These permutations are also APN permutations. |
Elizabeth Maltais and Lucia Moura. Finding the best CAFE is NP-hard |
Abstract: In this paper, we look at covering arrays with forbidden edges (CAFEs), which are used in testing applications (software, networks, circuits, drug interactions, material mixtures, etc.) where certain combinations of parameter values are forbidden. Covering arrays are classical objects used in these applications, but the situation of dealing with forbidden configurations is much less studied. Danziger et al. have recently studied this problem and shown some computational complexity results, but left some important open questions. Around the same time, Martinez et al. defined and studied error-locating arrays (ELAs), which are closely related to CAFEs, leaving similar computational complexity questions. In particular, these papers showed polynomial-time solvability of the existence of CAFEs and ELAs for binary alphabets ($g=2$) and their NP-hardness for $g\geq 5$. This not only left open the complexity of determining optimum CAFEs and ELAs for $g=2,3,4$, but also raised the suspicion that the binary case might be solved by a polynomial-time algorithm. In this paper, we prove that optimizing CAFEs and ELAs is indeed NP-hard even when restricted to the case of binary alphabets. We also provide a hardness-of-approximation result. The proof strategy uses a reduction from edge clique covers of graphs (ECCs) and covers all cases of $g$. We also explore important relationships between ECCs and CAFEs and give some new bounds for uniform ECCs and CAFEs. |