

Search for: All records

Award ID contains: 2051037

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Sequences of linear systems arise in the predictor-corrector method when computing the Pareto front for multi-objective optimization. Rather than discarding information generated when solving one system, it may be advantageous to recycle information for subsequent systems. To accomplish this, we seek to reduce the overall cost of computation when solving linear systems using common recycling methods. In this work, we assessed the performance of the recycling minimum residual (RMINRES) method along with a map between coefficient matrices. For these methods to be fully integrated into the software used in Enouen et al. (2022), there must be a working version of each in both Python and PyTorch. Herein, we discuss the challenges we encountered and the solutions undertaken (some ongoing) when developing efficient Python implementations of these recycling strategies. The goal of this project was to implement RMINRES in Python and PyTorch and add it to the established Pareto front code to reduce computational cost. Additionally, we wanted to implement the sparse approximate maps code in Python and PyTorch so that it can be parallelized in future work.
  2. Knowledge bases traditionally require manual optimization to ensure reasonable performance when answering queries. We build on previous work on training a deep learning model to learn heuristics for answering queries by comparing different representations of the sentences contained in knowledge bases. We decompose the problem into issues of representation, training, and control, and propose solutions for each subproblem. We evaluate different configurations on three synthetic knowledge bases. In particular, we compare a novel representation approach, based on learning to maximize the similarity of logical atoms that unify and minimize the similarity of atoms that do not, with two vectorization strategies taken from the automated theorem proving literature: a chain-based and a 3-term-walk strategy. We also evaluate the efficacy of pruning the search by ignoring rules with scores below a threshold.
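To make the unification-similarity objective concrete (a hypothetical sketch, not the paper's model), the snippet below scores pairs of atom embeddings by cosine similarity and applies a simple contrastive loss: unifying pairs are pulled together, non-unifying pairs are pushed below a margin. A threshold-based pruning helper mirrors the rule-pruning idea.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(emb_a, emb_b, unifies, margin=0.5):
    """Hypothetical objective: maximize similarity of embeddings of
    unifying atoms, penalize similarity above (1 - margin) otherwise."""
    s = cosine(emb_a, emb_b)
    if unifies:
        return 1.0 - s                       # zero when embeddings align
    return max(0.0, s - (1.0 - margin))      # hinge for negative pairs

def prune(rules_with_scores, threshold):
    """Drop rules whose learned score falls below the threshold."""
    return [rule for rule, score in rules_with_scores if score >= threshold]

rng = np.random.default_rng(0)
a, b = rng.normal(size=8), rng.normal(size=8)
pos_loss = contrastive_loss(a, a, unifies=True)   # identical atoms
neg_loss = contrastive_loss(a, b, unifies=False)
kept = prune([("r1", 0.9), ("r2", 0.1)], threshold=0.5)
```

In a real system the embeddings would come from a trained encoder over atom syntax; here they are random vectors used only to exercise the loss.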
  3. Zero-knowledge proofs are a cryptographic technique for proving knowledge of information without revealing the information itself, thus enabling systems to optimally mix privacy, transparency, and, where needed, regulatability. Application domains include health and other enterprise data, financial systems such as central-bank digital currencies, and performance enhancement in blockchain systems. The challenge of zero-knowledge proofs is that, although they are computationally easy to verify, they are computationally hard to produce. This paper examines the scalability limits of leading zero-knowledge algorithms and addresses the use of parallel architectures to meet the performance demands of applications.
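As a minimal classical illustration of the cheap-to-verify structure of such proofs (the textbook Schnorr identification protocol, not one of the algorithms benchmarked in the paper), the sketch below proves knowledge of a discrete logarithm x with y = g^x mod p without revealing x. The tiny parameters are for demonstration only and are not secure.

```python
import secrets

# Toy Schnorr protocol over the order-q subgroup of Z_p*.
# p = 23 is a safe prime (p = 2q + 1 with q = 11); g = 2 has order 11.
p, q, g = 23, 11, 2

x = 7                    # prover's secret
y = pow(g, x, p)         # public key

def prove(challenge_fn):
    r = secrets.randbelow(q)      # one-time nonce
    t = pow(g, r, p)              # commitment
    c = challenge_fn(t)           # verifier's random challenge
    s = (r + c * x) % q           # response; reveals nothing about x alone
    return t, c, s

def verify(t, c, s):
    # Cheap check: g^s == t * y^c (mod p), i.e. g^(r + c x) == g^r * g^(c x)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, c, s = prove(lambda t: secrets.randbelow(q))
ok = verify(t, c, s)
```

Verification costs two modular exponentiations regardless of how the prover obtained the witness, which captures the asymmetry the abstract highlights; production systems replace this toy group with elliptic curves or polynomial commitments.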