Title: Even 1xn Edge-Matching and Jigsaw Puzzles are Really Hard
We prove the computational intractability of rotating and placing n square tiles into a 1 × n array such that adjacent tiles are compatible—either equal edge colors, as in edge-matching puzzles, or matching tab/pocket shapes, as in jigsaw puzzles. Beyond basic NP-hardness, we prove that it is NP-hard even to approximately maximize the number of placed tiles (allowing blanks), while satisfying the compatibility constraint between nonblank tiles, within a factor of 0.9999999702. (On the other hand, there is an easy (1/2)-approximation.) This is the first (correct) proof of inapproximability for edge-matching and jigsaw puzzles. Along the way, we prove NP-hardness of distinguishing, for a directed graph on n nodes, between having a Hamiltonian path (length n − 1) and having at most 0.999999284 (n − 1) edges that form a vertex-disjoint union of paths. We use this gap hardness and gap-preserving reductions to establish similar gap hardness for 1 × n jigsaw and edge-matching puzzles.
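As a rough illustration of the decision problem in question (not code from the paper), the sketch below brute-forces a 1 × n edge-matching instance: tiles are assumed to be 4-tuples of edge colors (top, right, bottom, left), and a placement is feasible when each tile's right edge color equals its right neighbor's left edge color.

```python
# Brute-force sketch of the 1 x n edge-matching decision problem (illustration
# only): can all n tiles be rotated and placed in a row so that each tile's
# right edge color equals the left edge color of the next tile?
# A tile is assumed to be a 4-tuple of edge colors (top, right, bottom, left).

def rotations(tile):
    """Yield the four rotations of a tile given as (top, right, bottom, left)."""
    t = tile
    for _ in range(4):
        yield t
        # Rotate 90 degrees clockwise: old left becomes new top, and so on.
        t = (t[3], t[0], t[1], t[2])

def solvable(tiles):
    """Exponential-time check for a full compatible 1 x n placement."""
    def extend(remaining, left_color):
        if not remaining:
            return True
        for i, tile in enumerate(remaining):
            for rot in rotations(tile):
                # The first tile is unconstrained; afterwards the left edge must match.
                if left_color is None or rot[3] == left_color:
                    if extend(remaining[:i] + remaining[i + 1:], rot[1]):
                        return True
        return False
    return extend(list(tiles), None)

# Three tiles that chain together by their right/left edge colors.
print(solvable([(0, 1, 0, 0), (0, 2, 0, 1), (0, 0, 0, 2)]))  # True
```

The exhaustive search only makes the adjacency constraint concrete; the result above says that, in general, even approximately maximizing the number of compatibly placed tiles is NP-hard.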
Award ID(s): 1650733
PAR ID: 10078584
Author(s) / Creator(s):
Date Published:
Journal Name: Journal of Information Processing
Volume: 25
ISSN: 1882-6652
Page Range / eLocation ID: 682–694
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like This
  1.
    How many chess rooks or queens does it take to guard all squares of a given polyomino, the union of square tiles from a square grid? This question is a version of the art gallery problem in which the guards can "see" whichever squares the rook or queen attacks. We show that $\lfloor n/2 \rfloor$ rooks or $\lfloor n/3 \rfloor$ queens are sufficient and sometimes necessary to guard a polyomino with n tiles. We then prove that finding the minimum number of rooks or queens needed to guard a polyomino is NP-hard. These results also apply to d-dimensional rooks and queens on d-dimensional polycubes. Finally, we use bipartite matching theorems to describe sets of non-attacking rooks on polyominoes.
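To make the guarding condition concrete, here is a small checker (my own sketch, with an assumed visibility model: a rook sees the contiguous run of polyomino cells in its row and column) that tests whether a given rook set guards a polyomino.

```python
# Sketch (not from the paper) of the guarding condition for rooks on a polyomino.
# Assumed visibility model: a rook sees its own cell plus the contiguous run of
# polyomino cells in its row and column; its line of sight stops at a missing cell.

def guarded_cells(polyomino, rook):
    """Return the set of cells a single rook sees inside the polyomino."""
    seen = {rook}
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = rook
        while (r + dr, c + dc) in polyomino:
            r, c = r + dr, c + dc
            seen.add((r, c))
    return seen

def is_guarded(polyomino, rooks):
    """True if every cell of the polyomino is seen by at least one rook."""
    covered = set()
    for rook in rooks:
        covered |= guarded_cells(polyomino, rook)
    return polyomino <= covered

# An L-shaped polyomino with n = 5 tiles: a single rook at the corner sees both
# arms, well within the floor(n/2) = 2 upper bound.
L = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
print(is_guarded(L, {(2, 0)}))  # True
```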
  2. We prove several hardness results for training depth-2 neural networks with the ReLU activation function; these networks are simply weighted sums (possibly with negative coefficients) of ReLUs. Our goal is to output a depth-2 neural network that minimizes the square loss with respect to a given training set. We prove that this problem is NP-hard already for a network with a single ReLU. We also prove NP-hardness for outputting a weighted sum of k ReLUs minimizing the squared error (for k > 1) even in the realizable setting (i.e., when the labels are consistent with an unknown depth-2 ReLU network). We are also able to obtain lower bounds on the running time in terms of the desired additive error ε. To obtain our lower bounds, we use the Gap Exponential Time Hypothesis (Gap-ETH) as well as a new hypothesis regarding the hardness of approximating the well-known Densest k-Subgraph problem in subexponential time (these hypotheses are used separately to prove different lower bounds). For example, we prove that under reasonable hardness assumptions, any proper learning algorithm for finding the best-fitting ReLU must run in time exponential in (1/ε)². Together with previous work on improperly learning a ReLU (Goel et al., COLT'17), this implies the first separation between proper and improper algorithms for learning a ReLU. We also study the problem of properly learning a depth-2 network of ReLUs with bounded weights, giving new (worst-case) upper bounds on the running time needed to learn such networks in both the realizable and agnostic settings. Our upper bounds on the running time essentially match our lower bounds in terms of the dependence on ε.
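The objective being analyzed can be stated compactly in code; the following sketch (illustrative only, names and setup assumed) evaluates the squared loss of a weighted sum of k ReLUs on a training set, the quantity whose minimization is shown NP-hard even for k = 1.

```python
# Illustrative sketch (setup assumed, not the paper's construction): the squared
# error of a depth-2 network, i.e. a weighted sum of k ReLUs, on a training set.
# Minimizing this objective over (a, W) is the problem shown NP-hard even for k = 1.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def squared_loss(a, W, X, y):
    """a: (k,) output weights, W: (k, d) hidden weights, X: (m, d) inputs, y: (m,) labels."""
    preds = relu(X @ W.T) @ a  # sum_j a_j * ReLU(<w_j, x_i>) for each example x_i
    return float(np.sum((preds - y) ** 2))

# Tiny realizable instance: labels generated by a single ReLU, so the optimum value is 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = relu(X @ w_true)
print(squared_loss(np.array([1.0]), w_true[None, :], X, y))  # 0.0
```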
  3. We present algorithmic results for the parallel assembly of many micro-scale objects in two and three dimensions from tiny particles, which has been proposed in the context of programmable matter and self-assembly for building high-yield micro-factories. The underlying model has particles moving under the influence of uniform external forces until they hit an obstacle. Particles bond when forced together with another appropriate particle. Due to the physical and geometric constraints, not all shapes can be built in this manner; this gives rise to the Tilt Assembly Problem (TAP) of deciding constructibility. For simply-connected polyominoes P in 2D consisting of N unit squares ("tiles"), we prove that TAP can be decided in O(N log N) time. For the optimization variant MaxTAP (in which the objective is to construct a subshape of maximum possible size), we show polyAPX-hardness: unless P = NP, MaxTAP cannot be approximated within a factor of Ω(N^{1/3}); for tree-shaped structures, we give an Ω(N^{1/2})-approximation algorithm. For the efficiency of the assembly process itself, we show that any constructible shape allows pipelined assembly, which produces copies of P in O(1) amortized time, i.e., N copies of P in O(N) time steps. These considerations can be extended to three-dimensional objects: for the class of polycubes P, we prove that it is NP-hard to decide whether it is possible to construct a path between two points of P; it is also NP-hard to decide constructibility of a polycube P. Moreover, it is expAPX-hard to maximize a sequentially constructible path from a given start point.
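The motion primitive underlying TAP is simple to state; the sketch below (assumed mechanics, not the paper's decision algorithm) slides a single particle in the direction of the uniform force until the next cell is occupied, which is where it would bond to the growing assembly.

```python
# Sketch of the motion primitive assumed by the tilt model (not the paper's
# O(N log N) decision algorithm): a particle slides in the direction of the
# uniform force until the next cell is blocked by an obstacle or by the
# partially built assembly, and comes to rest there.

def slide(start, direction, blocked):
    """Slide from `start` along `direction` (a unit step such as (1, 0)) until blocked.

    `blocked` is the set of occupied cells and is assumed to contain the workspace
    boundary, so the loop always terminates.  Returns the resting cell.
    """
    x, y = start
    dx, dy = direction
    while (x + dx, y + dy) not in blocked:
        x, y = x + dx, y + dy
    return (x, y)

# A wall of obstacles at x = 5 stops an eastward-sliding particle at x = 4,
# where it would bond if the adjacent wall cell is an appropriate particle.
wall = {(5, y) for y in range(-2, 3)}
print(slide((0, 0), (1, 0), wall))  # (4, 0)
```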
  4.
    We review a study of average-case complexity through the lens of interactive puzzles: interactive games between a computationally bounded Challenger and a computationally bounded Solver/Attacker. Most notably, we use this treatment to review a recent result showing that if NP is hard on average, then there exists a sampleable distribution over only true statements of an NP language for which no probabilistic polynomial-time algorithm can find witnesses. We also discuss connections to the problem of whether average-case hardness in NP implies average-case hardness in TFNP, or the existence of cryptographic one-way functions.
  5. Braverman, Mark (Ed.)
    Grothendieck’s inequality [Grothendieck, 1953] states that there is an absolute constant K > 1 such that for any n × n matrix A, ‖A‖_{∞→1} := max_{s,t ∈ {±1}ⁿ} ∑_{i,j} A[i,j]⋅s(i)⋅t(j) ≥ 1/K ⋅ max_{u_i,v_j ∈ S^{n-1}} ∑_{i,j} A[i,j]⋅⟨u_i,v_j⟩. In addition to having a tremendous impact on Banach space theory, this inequality has found applications in several unrelated fields like quantum information, regularity partitioning, communication complexity, etc. Let K_G (known as Grothendieck’s constant) denote the smallest constant K above. Grothendieck’s inequality implies that a natural semidefinite programming relaxation obtains a constant-factor approximation to ‖A‖_{∞→1}. The exact value of K_G is still unknown, with the best lower bound (1.67…) due to Reeds and the best upper bound (1.78…) due to Braverman, Makarychev, Makarychev and Naor [Braverman et al., 2013]. In contrast, the little Grothendieck inequality states that under the assumption that A is PSD, the constant K above can be improved to π/2, and moreover this is tight. The inapproximability of ‖A‖_{∞→1} has been studied in several papers, culminating in a tight UGC-based hardness result due to Raghavendra and Steurer (remarkably, they achieve this without knowing the value of K_G). Briët, Regev and Saket [Briët et al., 2015] proved tight NP-hardness of approximating the little Grothendieck problem within π/2, based on a framework by Guruswami, Raghavendra, Saket and Wu [Guruswami et al., 2016] for bypassing UGC for geometric problems. This also remained the best known NP-hardness for the general Grothendieck problem due to the nature of the Guruswami et al. framework, which utilized a projection operator onto the degree-1 Fourier coefficients of long code encodings and therefore naturally yielded a PSD matrix A. We show how to extend the above framework to go beyond the degree-1 Fourier coefficients, using the global structure of optimal solutions to the Grothendieck problem. As a result, we obtain a separation between the NP-hardness results for the two problems, obtaining an inapproximability result for the Grothendieck problem of a factor π/2 + ε₀ for a fixed constant ε₀ > 0.
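For small matrices, the combinatorial side of Grothendieck’s inequality, ‖A‖_{∞→1}, can be computed by brute force; the sketch below (illustration only) does so, using the fact that for a fixed sign vector s the optimal t is the sign pattern of the column sums.

```python
# Brute-force sketch (illustration only) of the combinatorial quantity in
# Grothendieck's inequality: ||A||_{inf->1} = max over sign vectors s, t of
# sum_{i,j} A[i,j] * s(i) * t(j).  The inequality bounds the semidefinite
# relaxation over unit vectors by K_G times this value; only the sign-vector
# side is computed here.
from itertools import product
import numpy as np

def inf_to_one_norm(A):
    """Exact ||A||_{inf->1} by exhaustive search over s (exponential in n)."""
    n = A.shape[0]
    best = -np.inf
    for s in product((-1, 1), repeat=n):
        # For fixed s, the optimal t takes the sign of each column sum,
        # so it suffices to sum absolute values of the column sums.
        col = np.asarray(s, dtype=float) @ A
        best = max(best, np.sum(np.abs(col)))
    return float(best)

A = np.array([[1.0, -1.0], [-1.0, 2.0]])
print(inf_to_one_norm(A))  # 5.0, attained at s = (-1, 1), t = (-1, 1)
```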