Search for: All records

Award ID contains: 2127780


  1. Introduction: Brain-inspired computing is an emerging field in which a growing number of works focus on developing algorithms that bring machine learning closer to human brains at the functional level. As one of the promising directions, Hyperdimensional Computing (HDC) is centered around the idea of having holographic, high-dimensional representations analogous to the neural activities in our brains. Such representation is the fundamental enabler for the efficiency and robustness of HDC. However, existing HDC-based algorithms suffer from limitations within the encoder: to some extent, they all rely on manually selected encoders, meaning that the resulting representation is never adapted to the task at hand. Methods: In this paper, we propose FLASH, a novel hyperdimensional learning method that incorporates an adaptive and learnable encoder design, aiming at better overall learning performance while maintaining the good properties of HDC representation. Current HDC encoders leverage Random Fourier Features (RFF) for kernel correspondence and enable locality-preserving encoding. We propose to learn the encoder matrix distribution via gradient descent, effectively adapting the kernel for a more suitable HDC encoding. Results: Our experiments on various regression datasets show that tuning the HDC encoder significantly boosts accuracy, surpassing the current HDC-based algorithm and providing faster inference than other baselines, including RFF-based kernel ridge regression. Discussion: The results indicate the importance of an adaptive encoder and customized high-dimensional representations in HDC.
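To make the encoder design concrete, here is a minimal sketch of an RFF-style HDC encoder whose projection matrix is trained by gradient descent, in the spirit of the abstract above; the class name, the dimensionality, and the joint training with a linear readout are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: a learnable Random-Fourier-Feature (RFF) HDC encoder.
# FLASHEncoder, the sizes, and the joint readout training are assumptions.
import math
import torch
import torch.nn as nn

class FLASHEncoder(nn.Module):
    """cos(Wx + b) RFF encoder with a learnable projection matrix W."""
    def __init__(self, in_features: int, hyperdim: int = 10_000, sigma: float = 1.0):
        super().__init__()
        # Gaussian init gives the usual Gaussian-kernel correspondence;
        # gradient descent then adapts W (and hence the effective kernel).
        self.W = nn.Parameter(torch.randn(hyperdim, in_features) / sigma)
        self.b = nn.Parameter(2 * math.pi * torch.rand(hyperdim), requires_grad=False)
        self.scale = math.sqrt(2.0 / hyperdim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * torch.cos(x @ self.W.T + self.b)

# Train the encoder jointly with a linear regression readout on a toy batch.
encoder, readout = FLASHEncoder(in_features=8), nn.Linear(10_000, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(readout.parameters()), lr=1e-3)
x, y = torch.randn(64, 8), torch.randn(64, 1)
for _ in range(100):
    loss = nn.functional.mse_loss(readout(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```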
  2. Abstract Machine learning (ML) models are used for in-situ defect monitoring in additive manufacturing (AM). However, sensitive information stored in ML models, such as part designs, is at risk of data leakage due to unauthorized access. To address this, differential privacy (DP) introduces noise into ML, outperforming cryptography, which is slow, and data anonymization, which does not guarantee privacy. While DP enhances privacy, it reduces the precision of defect detection. This paper proposes combining DP with Hyperdimensional Computing (HDC), a brain-inspired model that memorizes training-sample information in a large hyperspace, to optimize real-time monitoring in AM while protecting privacy. Adding DP noise to the HDC model protects sensitive information without compromising defect detection accuracy. Our studies demonstrate the effectiveness of this approach in monitoring anomalies, such as overhangs, using high-speed melt pool data analysis. With a privacy budget set at 1, our model achieved an F-score of 94.30%, surpassing traditional models such as ResNet50, DenseNet201, EfficientNet B2, and AlexNet, which reach at most 66%. Thus, the intersection of DP and HDC promises accurate defect detection and protection of sensitive information in AM. The proposed method can also be extended to other AM processes, such as fused filament fabrication.
    Free, publicly-accessible full text available November 17, 2025
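As a rough illustration of the approach in item 2, the sketch below builds HDC class prototypes by superposition and perturbs them with Laplace noise scaled by a privacy budget; the toy encoder, the per-coordinate sensitivity value, and all names are assumptions, and the paper's actual mechanism and calibration are not reproduced here.

```python
# Hedged sketch: differentially private HDC class prototypes.
# Encoder, sensitivity, and noise calibration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                   # hypervector dimensionality

def encode(x: np.ndarray, proj: np.ndarray) -> np.ndarray:
    return np.sign(np.cos(proj @ x))         # toy nonlinear random projection

def train_dp_hdc(X, y, n_classes: int, epsilon: float = 1.0, sensitivity: float = 2.0):
    proj = rng.standard_normal((D, X.shape[1]))
    prototypes = np.zeros((n_classes, D))
    for xi, yi in zip(X, y):
        prototypes[yi] += encode(xi, proj)   # bundle samples by superposition
    # Laplace noise with scale sensitivity/epsilon masks the contribution of
    # any single training sample (applied per coordinate, as an illustration).
    prototypes += rng.laplace(0.0, sensitivity / epsilon, prototypes.shape)
    return proj, prototypes

def predict(x, proj, prototypes) -> int:
    h = encode(x, proj)
    return int(np.argmax(prototypes @ h))    # similarity lookup over classes
```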
  3. Abstract Increasing part complexity and precision requirements necessitate the use of computer numerical control (CNC) manufacturing. This process involves programmed instructions to remove material from a workpiece through operations such as milling, turning, and drilling. The technique incorporates various process parameters (e.g., tools, spindle speed, feed rate, cut depth), leading to a highly complex operation. Additionally, interacting phenomena between the workpiece, tools, and environmental conditions further add to this complexity and can lead to defects and poor product quality. Two main areas are of focus for an efficient automated system: monitoring and swift quality assessment. Within these areas, the critical aspects ascertaining the quality of a CNC manufacturing operation are: 1) tool wear, the inherent deterioration of machine components caused by prolonged utilization; 2) chatter, vibration that occurs during the machining process; and 3) surface finish, the final product's surface roughness. Much research tends to focus on just one of these areas while neglecting the interconnected influences of all three. A holistic, comprehensive assessment of a manufacturing process should therefore consider overall product quality, which is what ultimately matters. The integration of CNC systems with in-situ monitoring devices such as acoustic sensors, high-speed cameras, and thermal cameras aims to capture the underlying physical aspects of the CNC machining process, including tool wear, chatter, and surface roughness. Incorporating these monitoring devices has enabled the use of artificial intelligence and machine learning (ML) in smart CNC systems, with the goals of increasing productivity, minimizing downtime, and ensuring product quality. By capturing the underlying phenomena that occur during the manufacturing process, users hope to understand the interlinking dynamics for zero-defect automated manufacturing. However, even though ML methods have yielded noteworthy results in analyzing in-situ process data for CNC manufacturing, the black-box nature of these models and their tendency to focus on single-task objectives rather than multi-task scenarios pose challenges. Real-world part creation and manufacturing scenarios often require addressing multiple interconnected tasks simultaneously, which demands models that can multitask effectively; yet many ML models designed and trained for singular objectives are limited in their applicability and efficiency in more complex, multi-faceted environments. Addressing these challenges, we introduce MTaskHD, a novel multi-task framework that leverages hyperdimensional computing (HDC) to effortlessly fuse data from various channels and process signals while characterizing quality within a multi-task manufacturing operation. Moreover, it yields interpretable outcomes, allowing users to understand the process behind predictions. In a real-world experiment conducted on a hybrid 5-axis CNC Deckel-Maho-Gildemeister machine, MTaskHD was implemented to forecast the quality of three distinct features: the left 25.4 mm counterbore diameter, the right 25.4 mm counterbore diameter, and the 2.54 mm milled radius. The model predicted the quality levels of all three features in its multi-task configuration with an F1-score of 95.3%, outperforming alternative machine learning approaches including support vector machines, Naïve Bayes, multi-layer perceptron, convolutional neural network, and time-LeNet. The inherent multi-task capability, robustness, and interpretability of HDC collectively offer a solution for comprehending intricate manufacturing dynamics and operations.
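A minimal sketch of the channel-fusion idea follows: each channel's signal is encoded, bound to a random channel key, and bundled into one record hypervector that feeds several per-task prototype tables. The binding/bundling scheme, the feature lengths, and all names are illustrative assumptions rather than MTaskHD's actual design.

```python
# Hedged sketch of multi-task HDC fusion: bind per-channel encodings to
# random keys, bundle into one record vector, and reuse it for every task.
import numpy as np

rng = np.random.default_rng(1)
D, SIG_LEN = 10_000, 128                      # hyperdim, per-channel feature length
channels = ["acoustic", "camera", "thermal"]  # illustrative channel names
tasks = ["bore_left", "bore_right", "milled_radius"]

keys = {c: rng.choice([-1, 1], size=D) for c in channels}        # channel keys
proj = {c: rng.standard_normal((D, SIG_LEN)) for c in channels}  # channel encoders
prototypes = {t: np.zeros((2, D)) for t in tasks}  # 2 quality levels (assumed)

def fuse(record: dict) -> np.ndarray:
    # bind (elementwise multiply) each encoded channel to its key, then bundle (sum)
    return np.sum([keys[c] * np.sign(np.cos(proj[c] @ s))
                   for c, s in record.items()], axis=0)

def train_sample(record: dict, labels: dict) -> None:
    h = fuse(record)
    for t in tasks:                           # one fused vector serves every task
        prototypes[t][labels[t]] += h

def predict(record: dict) -> dict:
    h = fuse(record)
    return {t: int(np.argmax(prototypes[t] @ h)) for t in tasks}
```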
  4. Abstract Although the connectivity offered by the industrial internet of things (IIoT) enables enhanced operational capabilities, the exposure of systems to significant cybersecurity risks poses critical challenges. Recently, machine learning (ML) algorithms such as feature-based support vector machines and logistic regression, together with end-to-end deep neural networks, have been implemented to detect intrusions, including command injection, denial of service, reconnaissance, and backdoor attacks, by capturing anomalous patterns. However, ML algorithms not only fall short in agile identification of intrusions from few samples but also fail to adapt to new data or environments. This paper introduces hyperdimensional computing (HDC) as a new cognitive computing paradigm that mimics brain functionality to detect intrusions in IIoT systems. HDC encodes real-time data into a high-dimensional representation, allowing for ultra-efficient learning and analysis with limited samples and a few passes. Additionally, we incorporate the concept of regenerating brain cells into hyperdimensional computing to further improve learning capability and reduce the required memory. Experimental results on the WUSTL-IIOT-2021 dataset show that HDC detects intrusions with an accuracy of 92.6%, superior to a multi-layer perceptron (40.2%), support vector machine (72.9%), logistic regression (84.2%), and Gaussian process classification (89.1%), while requiring only 300 training samples and 5 iterations.
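The "regenerating brain cells" idea in item 4 can be pictured as periodically re-randomizing encoder dimensions that contribute little to class discrimination. The sketch below selects dimensions by prototype variance; that selection rule, the toy encoder, and the sizes are assumptions, not the paper's method.

```python
# Hedged sketch: HDC intrusion detection with dimension "regeneration".
# Selection by prototype variance and all sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
D, F, C = 4_000, 41, 5             # hyperdim, flow features, attack classes (toy)
proj = rng.standard_normal((D, F))

def encode(x: np.ndarray) -> np.ndarray:
    return np.cos(proj @ x)        # nonlinear random-projection encoder (toy)

def train(X, y, passes: int = 5, regen_frac: float = 0.05) -> np.ndarray:
    protos = np.zeros((C, D))
    for p in range(passes):
        protos[:] = 0
        for xi, yi in zip(X, y):
            protos[yi] += encode(xi)          # bundle samples into class prototypes
        if p < passes - 1:
            # "regenerate" the dimensions whose prototypes vary least across
            # classes, i.e., the ones contributing least to discrimination
            dead = np.argsort(protos.var(axis=0))[: int(regen_frac * D)]
            proj[dead] = rng.standard_normal((len(dead), F))
    return protos

def predict(x: np.ndarray, protos: np.ndarray) -> int:
    return int(np.argmax(protos @ encode(x)))
```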
  5. Free, publicly-accessible full text available May 1, 2026
  6. Free, publicly-accessible full text available May 1, 2026
  7. Free, publicly-accessible full text available May 1, 2026
  8. Free, publicly-accessible full text available April 1, 2026
  9. Free, publicly-accessible full text available April 1, 2026
  10. Tor users derive anonymity in part from the size of the Tor user base, but Tor struggles to attract and support more users due to performance limitations. Previous works have proposed modifications to Tor’s path selection algorithm to enhance both performance and security, but many proposals have unintended consequences due to incorporating information related to client location. We instead propose selecting paths using a global view of the network, independent of client location, and we propose doing so with a machine learning classifier to predict the performance of a given path before building a circuit. We show through a variety of simulated and live experimental settings, across different time periods, that this approach can significantly improve performance compared to Tor’s default path selection algorithm and two previously proposed approaches. In addition to evaluating the security of our approach with traditional metrics, we propose a novel anonymity metric that captures information leakage resulting from location-aware path selection, and we show that our path selection approach leaks no more information than the default path selection algorithm. 
    Free, publicly-accessible full text available March 13, 2026
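A minimal sketch of the classifier-driven, location-independent path selection in item 10: a model scores candidate Tor paths using only network-wide relay attributes (no client-location features), and the client builds circuits over paths predicted to perform well. The feature set and the choice of a random forest are assumptions for illustration, not the paper's design.

```python
# Hedged sketch: predict path performance before building a circuit.
# Features, labels, and the random-forest model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Toy per-path features, all derivable from a global network view, e.g.
# min/mean advertised bandwidth of the 3 relays, summed uptime, measured flags.
X = rng.random((5_000, 4))
y = (X[:, 0] > 0.5).astype(int)          # toy label: 1 = "fast enough" path

clf = RandomForestClassifier(n_estimators=100).fit(X, y)

def select_path(candidates: np.ndarray) -> int:
    """Return the index of the candidate path most likely to be fast."""
    return int(np.argmax(clf.predict_proba(candidates)[:, 1]))

print(select_path(rng.random((20, 4))))  # pick among 20 candidate paths
```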