

Title: Even 1xn Edge-Matching and Jigsaw Puzzles are Really Hard
We prove the computational intractability of rotating and placing n square tiles into a 1 × n array such that adjacent tiles are compatible—either equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes, as in jigsaw puzzles. Beyond basic NP-hardness, we prove that it is NP-hard even to approximately maximize the number of placed tiles (allowing blanks), while satisfying the compatibility constraint between nonblank tiles, within a factor of 0.9999999702. (On the other hand, there is an easy (1/2)-approximation.) This is the first (correct) proof of inapproximability for edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of distinguishing, for a directed graph on n nodes, between having a Hamiltonian path (length n − 1) and having at most 0.999999284 (n − 1) edges that form a vertex-disjoint union of paths. We use this gap hardness and gap-preserving reductions to establish similar gap hardness for 1 × n jigsaw and edge-matching puzzles.
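As a concrete illustration (not taken from the paper), here is a minimal Python sketch of the 1 × n compatibility constraint and of one simple way a (1/2)-approximation of the kind mentioned above can be obtained, namely placing tiles in every other cell so that no two placed tiles are adjacent. The tile encoding (a tuple of four edge colors) and all function names are assumptions of this sketch.

```python
from typing import List, Optional, Sequence, Tuple

Tile = Tuple[int, int, int, int]  # edge colors, listed (north, east, south, west)


def rotate(tile: Tile, quarter_turns: int) -> Tile:
    """Rotate a tile 90 degrees clockwise the given number of times."""
    n, e, s, w = tile
    for _ in range(quarter_turns % 4):
        n, e, s, w = w, n, e, s  # the old west edge becomes the new north edge, etc.
    return (n, e, s, w)


def row_is_compatible(cells: Sequence[Optional[Tile]]) -> bool:
    """1 x n constraint: adjacent non-blank tiles must agree on their shared edge
    (east color of the left tile equals west color of the right tile); blank
    cells (None) impose no constraint."""
    for left, right in zip(cells, cells[1:]):
        if left is not None and right is not None and left[1] != right[3]:
            return False
    return True


def every_other_cell(tiles: Sequence[Tile]) -> List[Optional[Tile]]:
    """Place tiles in cells 0, 2, 4, ...; no two placed tiles are adjacent, so
    the constraint holds vacuously and ceil(n/2) tiles are placed."""
    cells: List[Optional[Tile]] = [None] * len(tiles)
    for index, tile in zip(range(0, len(tiles), 2), tiles):
        cells[index] = tile
    return cells


if __name__ == "__main__":
    tiles = [(0, 1, 0, 9), (0, 2, 0, 1), (0, 2, 0, 3)]
    full = [tiles[0], tiles[1], rotate(tiles[2], 2)]  # third tile fits after a half turn
    assert row_is_compatible(full)
    assert row_is_compatible(every_other_cell(tiles))
```

Since at most n tiles can ever be placed and the every-other-cell placement uses ⌈n/2⌉ of them without violating any constraint, it is within a factor 1/2 of optimal.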
Award ID(s): 1650733
NSF-PAR ID: 10078584
Author(s) / Creator(s): ; ; ; ; ;
Date Published:
Journal Name: Journal of Information Processing
Volume: 25
ISSN: 1882-6652
Page Range / eLocation ID: 682–694
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Abstract: How many chess rooks or queens does it take to guard all squares of a given polyomino, the union of square tiles from a square grid? This question is a version of the art gallery problem in which the guards can “see” whichever squares the rook or queen attacks. We show that $\lfloor n/2 \rfloor$ rooks or $\lfloor n/3 \rfloor$ queens are sufficient and sometimes necessary to guard a polyomino with n tiles. We then prove that finding the minimum number of rooks or queens needed to guard a polyomino is NP-hard. These results also apply to d-dimensional rooks and queens on d-dimensional polycubes. Finally, we use bipartite matching theorems to describe sets of non-attacking rooks on polyominoes.
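As an illustration of the guarding condition discussed above (not code from the cited work), the following sketch checks whether a given set of rooks guards every square of a polyomino; the set-of-cells representation and the function name are assumptions, and queens would additionally scan the four diagonal directions.

```python
from typing import Iterable, Set, Tuple

Cell = Tuple[int, int]  # (row, column) coordinates of a unit square


def rooks_guard(polyomino: Set[Cell], rooks: Iterable[Cell]) -> bool:
    """Check whether every square of the polyomino is guarded.
    A rook guards its own square and every square it can reach by moving
    horizontally or vertically without leaving the polyomino."""
    seen: Set[Cell] = set()
    for r, c in rooks:
        seen.add((r, c))
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            rr, cc = r + dr, c + dc
            while (rr, cc) in polyomino:
                seen.add((rr, cc))
                rr, cc = rr + dr, cc + dc
    return polyomino <= seen


if __name__ == "__main__":
    # L-shaped polyomino with n = 5 squares; a single rook in the corner square
    # sees both arms, well within the floor(n/2) bound.
    l_shape = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
    print(rooks_guard(l_shape, [(2, 0)]))  # True
```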
  2. We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (that may include negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing the squared error (for k > 1) even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error ε. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest k-Subgraph problem in subexponential time (these hypotheses are used separately in proving different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best-fitting ReLU must run in time exponential in (1/ε)^2. Together with previous work regarding improperly learning a ReLU (Goel et al., COLT'17), this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks both in the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependency on ε.
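For readers who want the training objective made explicit, the following hedged sketch computes the square loss of a depth-2 ReLU network, i.e., a weighted sum of ReLUs, which is the quantity whose minimization the hardness results above concern; biases are omitted and all names are illustrative assumptions.

```python
import numpy as np


def depth2_relu_square_loss(W: np.ndarray, a: np.ndarray,
                            X: np.ndarray, y: np.ndarray) -> float:
    """Square loss of a weighted sum of ReLUs on a training set.

    W: (k, d) hidden weight vectors; a: (k,) output coefficients (may be negative);
    X: (m, d) training points; y: (m,) labels.
    The network computes f(x) = sum_j a_j * max(0, <w_j, x>).  Bias terms are
    omitted here for brevity (an assumption of this sketch, not of the paper).
    """
    hidden = np.maximum(0.0, X @ W.T)   # (m, k) ReLU activations
    predictions = hidden @ a            # (m,) network outputs
    return float(np.sum((predictions - y) ** 2))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
    W, a = rng.normal(size=(2, 3)), np.array([1.0, -0.5])  # k = 2 ReLUs
    print(depth2_relu_square_loss(W, a, X, y))
```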
  3.
    We review a study of average-case complexity through the lens of interactive puzzles: interactive games between a computationally bounded Challenger and a computationally bounded Solver/Attacker. Most notably, we use this treatment to review a recent result showing that if NP is hard on the average, then there exists a sampleable distribution over only true statements of an NP language for which no probabilistic polynomial-time algorithm can find witnesses. We also discuss connections to the problem of whether average-case hardness in NP implies average-case hardness in TFNP, or the existence of cryptographic one-way functions.
  4. We present algorithmic results for the parallel assembly of many micro-scale objects in two and three dimensions from tiny particles, which has been proposed in the context of programmable matter and self-assembly for building high-yield micro-factories. The underlying model has particles moving under the influence of uniform external forces until they hit an obstacle. Particles bond when forced together with another appropriate particle. Due to the physical and geometric constraints, not all shapes can be built in this manner; this gives rise to the Tilt Assembly Problem (TAP) of deciding constructibility. For simply connected polyominoes P in 2D consisting of N unit squares (“tiles”), we prove that TAP can be decided in O(N log N) time. For the optimization variant MaxTAP (in which the objective is to construct a subshape of maximum possible size), we show polyAPX-hardness: unless P = NP, MaxTAP cannot be approximated within a factor of Ω(N^{1/3}); for tree-shaped structures, we give an O(N^{1/2})-approximation algorithm. For the efficiency of the assembly process itself, we show that any constructible shape allows pipelined assembly, which produces copies of P in O(1) amortized time, i.e., N copies of P in O(N) time steps. These considerations can be extended to three-dimensional objects: for the class of polycubes P, we prove that it is NP-hard to decide whether it is possible to construct a path between two points of P; it is also NP-hard to decide constructibility of a polycube P. Moreover, it is expAPX-hard to maximize a sequentially constructible path from a given start point.
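To make the underlying tilt model concrete, here is a minimal sketch (under an assumed grid representation, not code from the cited work) of a single tilt move: a particle slides under a uniform external force until the next cell is blocked by an obstacle or an already-settled particle.

```python
from typing import Dict, Set, Tuple

Cell = Tuple[int, int]  # (row, column) on the grid
STEPS: Dict[str, Cell] = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}


def tilt(particle: Cell, direction: str, blocked: Set[Cell]) -> Cell:
    """Slide a single particle under a uniform external force until the next
    cell is blocked.  `blocked` contains obstacle cells and any particles that
    have already come to rest; the caller must ensure the slide is bounded."""
    dr, dc = STEPS[direction]
    r, c = particle
    while (r + dr, c + dc) not in blocked:
        r, c = r + dr, c + dc
    return (r, c)


if __name__ == "__main__":
    # A corridor with walls at columns 0 and 6: a particle starting at column 1
    # and tilted east comes to rest at column 5, directly against the wall.
    walls = {(0, 0), (0, 6)}
    print(tilt((0, 1), "E", walls))  # (0, 5)
```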
  5. Bojańczyk, Mikołaj; Chekuri, Chandra (Eds.)
    One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural NP-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1) First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2) Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions. 3) Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. In fact, building on recently-announced results of Ren and Santhanam [Rahul Ilango et al., 2021], we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs. 