This content will become publicly available on May 1, 2025
- Award ID(s):
- 2240374
- NSF-PAR ID:
- 10525893
- Publisher / Repository:
- Springer
- Date Published:
- Journal Name:
- Journal of Low Temperature Physics
- Volume:
- 215
- Issue:
- 3-4
- ISSN:
- 0022-2291
- Page Range / eLocation ID:
- 143 to 151
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Recent technological advances have contributed to the rapid increase in algorithmic complexity of applications, ranging from signal processing to autonomous systems. To control this complexity and endow heterogeneous computing systems with autonomous programming and optimization capabilities, we propose a
unified, end-to-end, programmable graph representation learning (PGL) framework that mines the complexity of high-level programs down to low-level virtual machine intermediate representation, extracts specific computational patterns, and predicts which code segments run best on a core in heterogeneous hardware. PGL extracts multifractal features from code graphs and exploits graph representation learning strategies for automatic parallelization and correct assignment to heterogeneous processors. The comprehensive evaluation of PGL on existing and emerging complex software demonstrates a 6.42x and 2.02x speedup compared to thread-based execution and state-of-the-art techniques, respectively. Our PGL framework leads to higher processing efficiency, which is crucial for future AI and high-performance computing applications such as autonomous vehicles and machine vision.
-
Traditional big data infrastructures are passive in nature, merely answering user requests to process and return data. In many applications, however, users not only need to analyze data but also to subscribe to and actively receive data of interest, based on their subscriptions. Their interest may include the incoming data's content as well as its relationships to other data. Moreover, data delivered to subscribers may need to be enriched with additional relevant and actionable information. To address this Big Active Data (BAD) challenge, we have advocated the need for building scalable BAD systems that continuously and reliably capture big data while enabling timely and automatic delivery of relevant and possibly enriched information to a large pool of subscribers. In this demo we showcase how to build an end-to-end active application using a BAD system and a standard email broker for data delivery. This includes enabling users to register their interests with the BAD system, ingesting and monitoring data, and producing customized results and delivering them to the appropriate subscribers. Through this example we demonstrate that even complex active data applications can be created easily and scaled to many users, considerably limiting the effort of application developers, if a BAD approach is taken.
-
Many high-stakes policies can be modeled as a sequence of decisions along a pipeline. We are interested in auditing such pipelines for both efficiency and equity. Our empirical focus is on policy decisions made by the New York City government. Using a dataset of over 100,000 crowdsourced resident requests for potentially hazardous tree maintenance in New York City, we observe a sequence of city government decisions about whether to inspect and work on a reported incident. At each decision point in the pipeline, we define parity definitions and tests to identify inefficient or inequitable treatment. As preliminary results, we report disparities in resource allocation and scheduling across census tracts.
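The stage-level parity tests the abstract describes can be sketched as a comparison of treatment rates between groups of census tracts. This is a hypothetical illustration, not the paper's code: the group counts, the two-proportion z-test, and the 0.05 threshold are all assumptions chosen for the example.

```python
# Hypothetical parity check for one pipeline stage: compare the rate at
# which reported incidents are inspected across two groups of census
# tracts using a two-proportion z-test. All numbers are illustrative.
from math import sqrt, erf

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: inspections out of reported incidents per group.
z, p = two_proportion_z(success_a=450, total_a=1000,
                        success_b=380, total_b=1000)
if p < 0.05:
    print(f"disparity at this stage (z={z:.2f}, p={p:.4f})")
```

Running one such test per decision point in the pipeline, per pair of groups, yields the kind of stage-by-stage disparity report the abstract mentions.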
-
We consider the end-to-end deep learning approach for phase retrieval, a central problem in scientific imaging. We highlight a fundamental difficulty for learning that previous work has neglected, likely due to the biased datasets used for training and evaluation. We propose a simple yet different formulation for phase retrieval that seems to overcome the difficulty and returns consistently better qualitative results.
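One concrete way to see why end-to-end learning of phase retrieval is fundamentally hard: in Fourier phase retrieval, several distinct signals share exactly the same measured magnitudes, so the map from measurements back to signals is not well defined. The sketch below demonstrates this ambiguity numerically; it is an illustration of the general problem setup, not the paper's formulation, and the signal length and random seed are arbitrary.

```python
# Fourier phase retrieval observes only |FFT(x)|, discarding phase.
# Distinct signals -- e.g. any circular shift of x, or its circularly
# flipped version -- produce identical magnitude measurements.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # arbitrary real test signal

mag = np.abs(np.fft.fft(x))          # the measurements: magnitudes only

# A circular shift only changes the phase of each Fourier coefficient...
x_shift = np.roll(x, 3)
# ...and circular time-reversal conjugates the spectrum of a real signal.
x_flip = np.roll(x[::-1], 1)

assert np.allclose(np.abs(np.fft.fft(x_shift)), mag)
assert np.allclose(np.abs(np.fft.fft(x_flip)), mag)
assert not np.allclose(x_shift, x) and not np.allclose(x_flip, x)
```

Because a dataset biased toward one representative of each equivalence class hides this ambiguity, a network trained end-to-end on it can appear to work while the underlying inverse problem remains ill-posed.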