Title: An O(N) algorithm for computing expectation of N-dimensional truncated multi-variate normal distribution II: computing moments and sparse grid acceleration
Award ID(s): 2012451, 1821093
PAR ID: 10416117
Author(s) / Creator(s):
Date Published:
Journal Name: Advances in Computational Mathematics
Volume: 48
Issue: 6
ISSN: 1019-7168
Format(s): Medium: X
Sponsoring Org: National Science Foundation
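
The record above describes the second paper in a series on O(N)-cost computation of expectations and moments of N-dimensional truncated multivariate normal distributions, accelerated with sparse grids. Purely as a point of reference for the quantity being computed (not the paper's algorithm), the sketch below estimates the first moment of a box-truncated multivariate normal by plain rejection sampling; the dimension, parameter values, and function name are illustrative assumptions, and the cost of this naive approach grows far faster than O(N) as the dimension increases.

```python
# A minimal sketch of the quantity the paper targets: E[X | lower <= X <= upper]
# for X ~ N(mu, cov), estimated here by rejection sampling. This is NOT the
# paper's O(N) algorithm; it only fixes notation for the truncated moment.
# All parameter values below are made up for illustration.
import numpy as np

def truncated_mvn_mean(mu, cov, lower, upper, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[X | lower <= X <= upper], X ~ N(mu, cov)."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n_samples)
    # Keep only the samples that fall inside the truncation box.
    inside = np.all((samples >= lower) & (samples <= upper), axis=1)
    accepted = samples[inside]
    if accepted.size == 0:
        raise RuntimeError("no samples fell in the truncation region")
    return accepted.mean(axis=0)

# Example: a 3-dimensional normal truncated to the positive orthant.
mu = np.zeros(3)
cov = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.3],
                [0.1, 0.3, 1.0]])
print(truncated_mvn_mean(mu, cov, lower=np.zeros(3), upper=np.full(3, np.inf)))
```

The rejection step is exactly what deterministic quadrature (such as the sparse grids named in the title) is meant to avoid; the sketch is only a baseline definition of the truncated expectation.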
More Like this
  1. Processing large amounts of data, especially in learning algorithms, poses a challenge for current embedded computing systems. Hyperdimensional computing (HDC) is a brain-inspired computing paradigm that works with high-dimensional vectors called hypervectors. HDC replaces several complex learning computations with bitwise and simpler arithmetic operations, at the expense of an increased data volume caused by mapping inputs into high-dimensional space. These hypervectors often cannot fit in memory, resulting in long data transfers from storage. In this article, we propose Store-n-Learn, an in-storage computing solution that performs HDC classification and clustering by implementing encoding, training, retraining, and inference across the flash hierarchy. To hide the latency of training and enable efficient computation, we introduce the concept of batching in HDC. We also present on-chip acceleration for HDC encoding in flash planes. This enables us to exploit the high parallelism provided by the flash hierarchy and encode multiple data points in parallel, in both batched and non-batched fashion. Store-n-Learn also implements a single top-level FPGA accelerator with novel implementations of HDC classification training, retraining, inference, and clustering on the encoded data. Our evaluation over 10 popular datasets shows that Store-n-Learn is on average 222× (543×) faster than a CPU and 10.6× (7.3×) faster than INSIDER, the state-of-the-art in-storage computing solution, for HDC classification (clustering). (A generic sketch of the HDC encode/train/classify workflow appears after this list.)
  2. We consider the future of cloud computing and ask how we might guide it towards a more coherent service we call sky computing. The barriers are more economic than technical, and we propose reciprocal peering as a key enabling step. 
  3. As a means of inquiry and expression, computing has become a literacy across many professional paths. This paper casts a vision for how a small, STEM-focused school supports this role of computing-as-literacy. We share several examples, both future visions and past experiences. We hope to prompt and join discussions that further the reach, use, and enjoyment of computing. 
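
As referenced in item 1 above, the following is a minimal, generic sketch of the HDC workflow that the Store-n-Learn abstract builds on: encode inputs into bipolar hypervectors, train per-class prototypes by bundling (summing) encoded samples, and classify by similarity search. The random-projection encoder, dimensionality, and toy data are assumptions for illustration; this is not Store-n-Learn's in-storage or FPGA implementation.

```python
# A generic hyperdimensional-computing (HDC) sketch: encode, train, classify.
# Dimensionality, encoder, and data are illustrative assumptions only.
import numpy as np

D = 10_000  # hypervector dimensionality (a common HDC choice)

def encode(x, projection):
    """Map a feature vector to a bipolar {-1, +1} hypervector.
    np.sign may leave exact zeros as 0, which is acceptable for a sketch."""
    return np.sign(projection @ x)

def train(xs, labels, projection, n_classes):
    """Bundle encoded samples into one prototype hypervector per class."""
    prototypes = np.zeros((n_classes, D))
    for x, y in zip(xs, labels):
        prototypes[y] += encode(x, projection)
    return prototypes

def classify(x, prototypes, projection):
    """Predict the class whose prototype is most similar (dot product)."""
    hv = encode(x, projection)
    return int(np.argmax(prototypes @ hv))

rng = np.random.default_rng(1)
projection = rng.standard_normal((D, 4))  # random encoder for 4 features
xs = rng.standard_normal((100, 4))        # toy inputs
labels = (xs[:, 0] > 0).astype(int)       # toy 2-class labels
prototypes = train(xs, labels, projection, n_classes=2)
print(classify(xs[0], prototypes, projection), labels[0])
```

The bundling step is why HDC training reduces to additions over encoded data, which is the property Store-n-Learn exploits when it batches and parallelizes encoding across the flash hierarchy.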