Abstract In this article, we exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization, where the amount of regularization varies from component to component, together with a computationally efficient numerical scheme suitable for large-scale problems. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling of the penalty functional is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so that the norm of the data fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter yields a solution that fits the noise (too weak regularization), while too large a value leads to an excessive penalization of the solution (too strong regularization). In many applications, it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, so as to allow different regularization for different components of the unknown, or for groups of them. Distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with maximum a posteriori estimation, requires no statistical tools. A clever combination of numerical linear algebra and numerical optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available. Moreover, in the case of underdetermined problems, passing through the adjoint formulation in data space may lead to a substantial reduction in computational complexity.
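As a point of reference for the ideas the abstract contrasts, the following minimal sketch shows scalar Tikhonov regularization with the parameter selected by Morozov's discrepancy principle, together with a distributed variant that uses one penalty weight per component. All names (A, b, alpha, theta, noise_norm) are illustrative assumptions, and the sketch forms the normal equations explicitly, which the article's scheme deliberately avoids for large-scale or matrix-free problems.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Scalar Tikhonov: argmin_x ||A x - b||^2 + alpha * ||x||^2, via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def tikhonov_distributed(A, b, theta):
    """Distributed Tikhonov: argmin_x ||A x - b||^2 + sum_j theta[j] * x[j]^2,
    i.e. one regularization weight per component of the unknown."""
    return np.linalg.solve(A.T @ A + np.diag(theta), A.T @ b)

def morozov_alpha(A, b, noise_norm, lo=1e-12, hi=1e6, iters=60):
    """Pick alpha so that ||A x(alpha) - b|| is approximately noise_norm.
    The residual norm increases monotonically with alpha, so a log-scale
    bisection converges."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        res = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        if res < noise_norm:   # under-regularized: the solution fits the noise
            lo = mid
        else:                  # over-regularized: the data are under-fitted
            hi = mid
    return np.sqrt(lo * hi)
```

The monotone dependence of the residual norm on the regularization parameter is what makes the discrepancy principle well posed; the distributed version replaces the single alpha by the vector theta, which is the quantity the article's scheme estimates adaptively.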
Implementation of digital MemComputing using standard electronic components
Summary Digital MemComputing machines (DMMs), which employ nonlinear dynamical systems with memory (time non-locality), have proven to be a robust and scalable unconventional computing approach for solving a wide variety of combinatorial optimization problems. However, most of the research so far has focused on numerical simulations of the equations of motion of DMMs. This inevitably subjects time to discretization, which introduces its own (numerical) issues that would otherwise be absent in actual physical systems operating in continuous time. Although hardware realizations of DMMs have been previously suggested, their implementation would require materials and devices that are not easy to integrate with traditional electronics. Addressing this, our study introduces a novel hardware design for DMMs that uses readily available electronic components. This approach not only significantly boosts computational speed compared to current models but also exhibits remarkable robustness against additive noise. Crucially, it circumvents the limitations imposed by numerical noise, ensuring enhanced stability and reliability during extended operations. This paves a new path for tackling increasingly complex problems, leveraging the inherent advantages of DMMs in a more practical and accessible framework.
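To make the discretization point concrete, here is a deliberately simple sketch (not the authors' circuit or equations of motion; the drive, the dynamics, and the step size dt are assumptions chosen for illustration) of forward-Euler integration of a toy memory element. The per-step truncation error is exactly the kind of numerical artifact that a continuous-time hardware realization does not incur.

```python
import numpy as np

def simulate_memory_element(dt=1e-3, t_end=1.0):
    """Forward-Euler integration of a toy state variable with memory:
    its rate of change depends on its own accumulated state x."""
    n = int(round(t_end / dt))
    x = 0.5                                # internal memory state, kept in (0, 1)
    trace = np.empty(n)
    for k in range(n):
        v = np.sin(2 * np.pi * k * dt)     # external drive (e.g. a voltage)
        dxdt = v * x * (1.0 - x)           # logistic-type memory dynamics
        x += dt * dxdt                     # Euler step: truncation error O(dt^2) per step
        trace[k] = x
    return trace

# Halving dt changes the computed trajectory slightly; a continuous-time
# circuit has no such step-size dependence.
```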
- Award ID(s):
- 2229880
- PAR ID:
- 10532127
- Publisher / Repository:
- Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name:
- International Journal of Circuit Theory and Applications
- ISSN:
- 0098-9886
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Sensors for the perception of multimodal stimuli, spanning the five human senses and beyond, have reached an unprecedented level of sophistication and miniaturization, raising the prospect that man-made, large-scale complex systems rivaling nature could become a reality. Artificial intelligence (AI) at the edge aims to integrate such sensors with real-time cognitive abilities enabled by recent advances in AI. This AI progress, however, has only been achieved by using massive computing power that would not be available in most distributed systems of interest. Nature has solved this problem by integrating computing, memory, and sensing functionalities in the same hardware, so that each part can learn its environment in real time and take local actions that lead to stable global functionalities. While this is a challenging task by itself, it would raise a new set of security challenges when implemented. As in nature, malicious agents can attack and commandeer the system to perform their own tasks. This article defines the types of systemic attacks that would emerge and introduces a multiscale framework for combating them. A primary thesis is that edge AI systems have to deal with unknown attack strategies that can only be countered in real time using low-touch adaptive learning systems. This article is part of the theme issue ‘Emerging technologies for future secure computing platforms’.
Modern hydrologic models have extraordinary capabilities for representing complex processes in surface-subsurface systems. These capabilities have revolutionized the way we conceptualize flow systems, but the representation of uncertainty in simulated flow systems is not as well developed. Currently, characterizing model uncertainty can be computationally expensive, in part because the techniques are appended to the numerical methods rather than seamlessly integrated. The next generation of computers, however, presents opportunities to reformulate the modeling problem so that the uncertainty components are handled more directly within the flow system simulation. Misconceptions about quantum computing abound, and quantum computers will not be a “silver bullet” for solving all complex problems, but they might be leveraged for certain kinds of highly uncertain problems, such as groundwater (GW) flow. The point of this issue paper is that the GW community could try to revise the foundations of our models so that the governing equations being solved are tailored specifically for quantum computers. The goal moving forward should not just be to accelerate the models we have, but also to address their deficiencies. Embedding uncertainty into the models by evolving distribution functions will make predictive GW modeling more complicated, but doing so places the problem in a class that is handled efficiently on quantum computing hardware. Next-generation GW models could put uncertainty into the problem at the very beginning of a simulation and leave it there throughout, providing a completely new way of simulating subsurface flows.
Abstract Simulating open quantum systems, which interact with external environments, presents significant challenges on noisy intermediate‐scale quantum (NISQ) devices due to limited qubit resources and noise. In this study, an efficient framework is proposed for simulating open quantum systems on NISQ hardware by leveraging a time‐perturbative Kraus operator representation of the system's dynamics. This approach avoids the computationally expensive Trotterization method and exploits the Lindblad master equation to represent time evolution in a compact form, particularly for systems satisfying specific commutation relations. The efficiency of this method is demonstrated by simulating quantum channels, such as the continuous‐time Pauli channel and damped harmonic oscillators, on NISQ trapped‐ion hardware, including IonQ Harmony and Quantinuum H1‐1. Additionally, hardware‐agnostic error mitigation techniques are introduced, including Pauli channel fitting and quantum depolarizing channel inversion, to enhance the fidelity of quantum simulations. These results show strong agreement between the simulations on real quantum hardware and exact solutions, highlighting the potential of Kraus‐based methods for scalable and accurate simulation of open quantum systems on NISQ devices. This framework opens pathways for simulating more complex systems under realistic conditions in the near term.
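For orientation on the Kraus-operator viewpoint mentioned above, here is a minimal numpy sketch (an illustration under simplifying assumptions, not the paper's time-perturbative construction or its hardware workflow): the Kraus map of a single-qubit depolarizing channel obtained from a simple Lindblad generator, applied to a density matrix.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(gamma, t):
    """Kraus operators of the single-qubit depolarizing channel generated by the
    Lindbladian d(rho)/dt = gamma * sum_k (sigma_k rho sigma_k - rho); the Bloch
    vector shrinks by exp(-4 * gamma * t)."""
    p = 1.0 - np.exp(-4.0 * gamma * t)
    return [np.sqrt(1 - 0.75 * p) * I2,
            np.sqrt(0.25 * p) * X,
            np.sqrt(0.25 * p) * Y,
            np.sqrt(0.25 * p) * Z]

def apply_channel(kraus_ops, rho):
    """rho -> sum_i K_i rho K_i^dagger (completely positive, trace preserving)."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
rho_t = apply_channel(depolarizing_kraus(gamma=0.5, t=1.0), rho0)
assert np.isclose(np.trace(rho_t).real, 1.0)                # trace is preserved
```

Because the channel is expressed directly by Kraus operators at time t, no time stepping (Trotterization) is needed to reach the final state, which is the compactness the abstract refers to.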
Abstract Over the past decade, the usefulness of quantum annealing hardware for combinatorial optimization has been the subject of much debate. Thus far, experimental benchmarking studies have indicated that quantum annealing hardware does not provide an irrefutable performance gain over state-of-the-art optimization methods. However, as this hardware continues to evolve, each new iteration brings improved performance and warrants further benchmarking. To that end, this work conducts an optimization performance assessment of D-Wave Systems’ Advantage Performance Update computer, which can natively solve sparse unconstrained quadratic optimization problems with over 5,000 binary decision variables and 40,000 quadratic terms. We demonstrate that classes of contrived problems exist where this quantum annealer can provide run time benefits over a collection of established classical solution methods that represent the current state-of-the-art for benchmarking quantum annealing hardware. Although this work does not present strong evidence of an irrefutable performance benefit for this emerging optimization technology, it does exhibit encouraging progress, signaling the potential impacts on practical optimization tasks in the future.
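For readers unfamiliar with the problem class named above, this small sketch spells out the unconstrained binary quadratic (QUBO) objective that such annealers target; the matrix Q and the brute-force search are illustrative assumptions, not D-Wave's interface or the benchmark instances used in the study.

```python
import itertools
import numpy as np

def qubo_value(Q, x):
    """Objective of an unconstrained binary quadratic program: x^T Q x with
    x a 0/1 vector; diagonal entries of Q act as linear terms."""
    return float(x @ Q @ x)

def brute_force_min(Q):
    """Exhaustive search, feasible only for tiny n; the hardware discussed
    above targets thousands of variables, far beyond enumeration."""
    n = Q.shape[0]
    best_val, best_x = np.inf, None
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        val = qubo_value(Q, x)
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# Tiny illustrative instance (entries are arbitrary assumptions).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
print(brute_force_min(Q))   # optimal value -2.0 at x = [1, 0, 1]
```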