The SAT problem is a prototypical NP-complete problem of fundamental importance in computational complexity theory, with many applications in science and engineering; as such, it has long served as an essential benchmark for classical and quantum algorithms. This study presents numerical evidence of a quadratic speedup of the Grover Quantum Approximate Optimization Algorithm (G-QAOA) over random sampling for finding all solutions to 3-SAT problems (All-SAT) and for Max-SAT problems. G-QAOA is less resource-intensive and more adaptable for these problems than Grover's algorithm, and it surpasses conventional QAOA in its ability to sample all solutions. We demonstrate these benefits through classical simulations of many-round G-QAOA on thousands of random 3-SAT instances. We also observe G-QAOA advantages on the IonQ Aria quantum computer for small instances, finding that current hardware suffices to determine and sample all solutions. Interestingly, a single-angle-pair constraint, which reuses the same pair of angles at every G-QAOA round, greatly reduces the classical computational overhead of optimizing the G-QAOA angles while preserving the quadratic speedup. We also observe clustering of the optimized angles across problem instances. Together, the single-angle-pair protocol and this parameter clustering significantly reduce the obstacles to classical optimization of the G-QAOA angles.
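To make the single-angle-pair protocol concrete, here is a minimal numpy sketch that evolves a G-QAOA state for a toy 3-SAT instance, reusing one (gamma, beta) pair at every round. The instance, the angle values, and the round counts are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch of Grover-mixer QAOA (G-QAOA) with the single-angle-pair
# protocol: the same (gamma, beta) is reused at every round. The toy 3-SAT
# instance and angles below are illustrative assumptions.
import numpy as np

n = 4                                  # number of Boolean variables
N = 2 ** n
# Toy 3-SAT instance: each clause lists literals, +k / -k for x_k / NOT x_k.
clauses = [(+1, -2, +3), (-1, +2, +4), (+2, +3, -4)]

def satisfies(bits, clause):
    return any(bits[abs(l) - 1] == (l > 0) for l in clause)

# Mark all satisfying assignments (the All-SAT solution set).
solutions = np.array([
    all(satisfies([(i >> k) & 1 for k in range(n)], c) for c in clauses)
    for i in range(N)
])

s = np.full(N, 1 / np.sqrt(N))         # uniform superposition |s>

def g_qaoa_state(gamma, beta, rounds):
    psi = s.astype(complex)
    phase = np.where(solutions, np.exp(-1j * gamma), 1.0)   # oracle phase
    for _ in range(rounds):
        psi = phase * psi                                   # phase separator
        # Grover mixer: I - (1 - e^{-i beta}) |s><s|
        psi = psi - (1 - np.exp(-1j * beta)) * s * (s @ psi)
    return psi

gamma, beta = np.pi, np.pi             # illustrative single angle pair
for r in range(1, 5):
    p_sol = np.sum(np.abs(g_qaoa_state(gamma, beta, r)[solutions]) ** 2)
    print(f"rounds={r}: P(sample a solution) = {p_sol:.3f}")
```

With gamma = beta = pi the iteration reduces to a standard Grover step, which is one way to see why a quadratic speedup over random sampling is plausible for this protocol.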
We carefully evaluate a number of algorithms for learning in a federated environment and test their utility on a variety of image classification tasks. We consider many issues that have not been adequately addressed before: whether learning over data sets that lack diverse sets of images affects the results; whether to use a pre-trained feature-extraction "backbone"; and how to evaluate learner performance (we argue that classification accuracy is not enough), among others. Overall, across a wide variety of settings, we find that vertically decomposing a neural network seems to give the best results, outperforming more standard reconciliation-based methods.
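As an illustration of what a vertical decomposition might look like, the sketch below splits the hidden units of a one-hidden-layer classifier across clients, each of which updates only its own slice on locally skewed data. The model shape, synthetic data, and single-step update are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch of "vertical decomposition" in a federated round: hidden
# units are disjointly assigned to clients; each client updates only its
# slice on its local (non-diverse) data; the server's W1, W2 then already
# hold every client's updated slice. Sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, n_classes, n_clients = 64, 32, 10, 4

W1 = rng.normal(scale=0.1, size=(d_in, d_hidden))
W2 = rng.normal(scale=0.1, size=(d_hidden, n_classes))

# Vertical split: each client owns a disjoint block of hidden units.
blocks = np.array_split(rng.permutation(d_hidden), n_clients)

def local_step(X, y, idx, lr=0.1):
    """One gradient step on the client's sub-network (its hidden slice)."""
    H = np.maximum(X @ W1[:, idx], 0)                  # ReLU hidden slice
    logits = H @ W2[idx, :]
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    G = (P - np.eye(n_classes)[y]) / len(y)            # softmax CE gradient
    W1[:, idx] -= lr * X.T @ ((G @ W2[idx, :].T) * (H > 0))
    W2[idx, :] -= lr * H.T @ G

for client, idx in enumerate(blocks):
    # Each client sees a skewed label distribution (non-diverse images).
    y = rng.integers(client * 2, client * 2 + 2, size=32)
    X = rng.normal(size=(32, d_in)) + y[:, None] * 0.1
    local_step(X, y, idx)
```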
The general de novo solution of the crystallographic phase problem is difficult and only possible under certain conditions. This paper develops an initial pathway to a deep learning neural network approach to the phase problem in protein crystallography, based on a synthetic dataset of small fragments derived from a large, well-curated subset of solved structures in the Protein Data Bank (PDB). In particular, electron-density estimates of simple artificial systems are produced directly from corresponding Patterson maps using a convolutional neural network architecture, as a proof of concept.
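A minimal PyTorch sketch of this kind of proof-of-concept mapping is given below: a small convolutional network regresses an electron-density grid directly from a Patterson map on the same grid. The 2D grid size, channel widths, and layer count are illustrative assumptions rather than the paper's architecture.

```python
# Minimal sketch: a CNN that maps a gridded Patterson map to an
# electron-density estimate on the same grid. 2D convolutions and the
# sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class PattersonToDensity(nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, kernel_size=3, padding=1),
        )

    def forward(self, patterson):
        # Input/output: (batch, 1, H, W) maps on the same real-space grid.
        return self.net(patterson)

model = PattersonToDensity()
patterson = torch.randn(8, 1, 64, 64)    # synthetic Patterson maps
density = model(patterson)               # predicted electron density
print(density.shape)                     # torch.Size([8, 1, 64, 64])
```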
The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale, both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel, distributed training framework called GIST. GIST disjointly partitions the parameters of a GCN model into several smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, GIST (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with GIST, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of 8×, to SOTA performance on the Amazon2M dataset.
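The sketch below illustrates the core partitioning idea in numpy: the hidden feature dimensions of a two-layer GCN are disjointly assigned to workers, each of which owns (and would independently train) only its slice of the weight matrices. The random graph, shapes, and partitioning rule are illustrative assumptions.

```python
# Minimal sketch of feature-wise parameter partitioning for a two-layer
# GCN: each hidden dimension is assigned to one sub-GCN, which could be
# trained independently before its slices are written back into the
# global model. Sizes and the random graph are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d_in, d_hidden, d_out, n_workers = 100, 16, 32, 4, 4

A = (rng.random((n_nodes, n_nodes)) < 0.05).astype(float)   # random adjacency
A_hat = A / np.maximum(A.sum(1, keepdims=True), 1)          # row-normalized

W1 = rng.normal(size=(d_in, d_hidden))    # global layer-1 weights
W2 = rng.normal(size=(d_hidden, d_out))   # global layer-2 weights

# Disjointly partition the hidden dimensions among workers.
parts = np.array_split(rng.permutation(d_hidden), n_workers)

def sub_gcn_forward(X, idx):
    """Forward pass of the sub-GCN that owns hidden dimensions `idx`."""
    H = np.maximum(A_hat @ X @ W1[:, idx], 0)   # layer 1 (ReLU), sliced
    return A_hat @ H @ W2[idx, :]               # layer 2, matching slice

X = rng.normal(size=(n_nodes, d_in))
for w, idx in enumerate(parts):
    out = sub_gcn_forward(X, idx)
    # ... each worker would run local SGD on (W1[:, idx], W2[idx, :]) here,
    # then write its updated slices back into the global W1, W2.
    print(f"sub-GCN {w}: owns {len(idx)} hidden dims, output {out.shape}")
```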
Motion planning for high degree-of-freedom (DOF) robots is challenging, especially when acting in complex environments under sensing uncertainty. While there is significant work on how to plan under state uncertainty for low-DOF robots, existing methods cannot be easily translated to the high-DOF case, due to the complex geometry of the robot's body and its environment. In this paper, we present a method that enhances optimization-based motion planners to produce robust trajectories for high-DOF robots among convex obstacles. Our approach introduces robustness into planners that are based on sequential convex programming: we reformulate each convex subproblem as a robust optimization problem that "protects" the solution against deviations due to sensing uncertainty. The parameters of the robust problem are estimated by sampling from the distribution of noisy obstacles and performing a first-order approximation of the signed distance function. The original merit function is updated to account for the new costs of the robust formulation at every step. The effectiveness of our approach is demonstrated in two simulated experiments: a full-body square robot moving in randomly generated scenes, and a 7-DOF Fetch robot performing tabletop operations. The results show a nearly zero probability of collision over a reasonable range of noise parameters for Gaussian and uniform uncertainty.
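The following numpy sketch illustrates the sampling-plus-linearization step described above for a single point robot and a disc obstacle: obstacle positions are sampled from a sensing-noise distribution, the signed distance to each sample is evaluated, and a first-order model of the signed distance supplies a linearized constraint for the convex subproblem. The obstacle geometry, noise scale, and percentile-based margin are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: sample noisy obstacles, evaluate signed distances, and
# form a first-order (linearized) robust collision constraint. The disc
# obstacle, point-robot model, and noise scale are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def signed_distance(p, center, radius):
    """Signed distance from point p to a disc obstacle (negative inside)."""
    return np.linalg.norm(p - center) - radius

p = np.array([1.0, 0.5])            # current robot position (point robot)
center, radius = np.array([2.0, 0.0]), 0.6
sigma = 0.05                        # Gaussian sensing noise on the center

# Sample from the distribution of noisy obstacles.
samples = center + sigma * rng.normal(size=(500, 2))
d = np.array([signed_distance(p, c, radius) for c in samples])

# First-order approximation: gradient of the signed distance w.r.t. p at
# the nominal obstacle, used to linearize the constraint in the SCP step.
grad = (p - center) / np.linalg.norm(p - center)

# A simple robust margin: require the linearized distance to stay safe
# for, say, the worst 5% of sampled obstacles.
robust_margin = np.percentile(d, 5)
print(f"nominal d = {signed_distance(p, center, radius):.3f}, "
      f"5th-percentile d = {robust_margin:.3f}, grad = {grad}")
# Linearized constraint for the subproblem: d + grad @ (p_new - p) >= safety
```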