Human activity recognition (HAR) from wearable sensor data has recently been adopted in a wide range of fields. However, recognizing complex human activities such as postural and rhythmic body movements (e.g., dance, sports) is challenging due to the scarcity of domain-specific labels and the persistent variability of human movement kinematics profiles with age, sex, dexterity, and level of professional training. In this paper, we propose a deep activity recognition model that works with limited labeled data for both simple and complex human activities. To mitigate the intra- and inter-user spatio-temporal variability of movements, we propose novel data augmentation and domain normalization techniques. We present a semi-supervised technique that learns noise- and transformation-invariant feature representations from sparsely labeled data to accommodate intra- and inter-user variations in movement kinematics. We also propose a transfer learning approach that learns domain-invariant feature representations by minimizing the feature distribution distance between the source and target domains. We demonstrate the improved performance of our proposed framework, AugToAct, on a public HAR dataset. We also design our own data collection, annotation, and experimental setup for recognizing complex dance steps and kinematic movements, where we achieve higher performance metrics with limited labeled data than on simple activity recognition tasks.
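The abstract does not specify which augmentations are used; as a hedged illustration only, two standard time-series augmentations for wearable sensor windows (Gaussian jitter and per-channel amplitude scaling) could be sketched as follows. The window size, channel count, and sigma values are hypothetical.

```python
import numpy as np

def jitter(signal, sigma=0.05, rng=None):
    """Add Gaussian noise to a (time, channels) sensor window."""
    rng = rng or np.random.default_rng(0)
    return signal + rng.normal(0.0, sigma, size=signal.shape)

def scale(signal, sigma=0.1, rng=None):
    """Multiply each channel by a random factor near 1 (amplitude variation)."""
    rng = rng or np.random.default_rng(1)
    factors = rng.normal(1.0, sigma, size=(1, signal.shape[1]))
    return signal * factors

# A 128-sample, 3-axis accelerometer window (zeros, purely for shape checking).
window = np.zeros((128, 3))
augmented = scale(jitter(window))
print(augmented.shape)  # (128, 3)
```

Augmentations like these are commonly composed at training time so the model sees many perturbed views of each labeled window, which is one way to learn noise- and transformation-invariant features from sparse labels.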
This content will become publicly available on April 28, 2026
Data-Efficient Prediction of Minimum Operating Voltage via Inter- and Intra-Wafer Variation Alignment
Predicting the minimum operating voltage (Vmin) of chips is a crucial technique for enhancing the speed and reliability of the manufacturing test flow. However, existing Vmin prediction methods often overlook various sources of variation in both the training and deployment phases. In particular, ignoring wafer zone-to-zone (intra-wafer) variations and wafer-to-wafer (inter-wafer) variations diminishes the accuracy, data efficiency, and reliability of Vmin predictors. To address this challenge, we propose Restricted Bias Alignment (RBA), a novel data-efficient Vmin prediction framework that introduces a variation alignment technique to simultaneously estimate inter- and intra-wafer variations. Furthermore, we propose, for the first time, utilizing class probe data to model inter-wafer variations.
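RBA itself is not detailed in this abstract; as a hedged sketch of the general idea of separating inter- from intra-wafer variation, a simple additive mean decomposition over a toy wafer-by-zone Vmin grid (all numbers invented) might look like:

```python
import numpy as np

# Toy Vmin measurements: rows = wafers, columns = wafer zones (volts).
vmin = np.array([[0.62, 0.64, 0.66],
                 [0.58, 0.60, 0.62]])

grand_mean = vmin.mean()
wafer_offset = vmin.mean(axis=1) - grand_mean  # inter-wafer variation
zone_offset = vmin.mean(axis=0) - grand_mean   # intra-wafer (zone) variation

# Removing both offsets aligns the measurements; in this purely additive toy
# example the residual is exactly the grand mean.
aligned = vmin - wafer_offset[:, None] - zone_offset[None, :]
print(np.allclose(aligned, grand_mean))  # True
```

A real predictor would estimate such offsets jointly with a regression model rather than by simple averaging, but the decomposition shows why ignoring either axis of variation biases the remaining estimates.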
- Award ID(s):
- 1956313
- PAR ID:
- 10593476
- Publisher / Repository:
- 2025 IEEE VLSI Test Symposium
- Date Published:
- ISBN:
- 979-8-3315-2144-8
- Format(s):
- Medium: X
- Location:
- Tempe, AZ, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
-
Link prediction has been widely applied in social network analysis. Despite its importance, link prediction algorithms can be biased by disfavoring the links between individuals in particular demographic groups. In this paper, we study one particular type of bias, namely, the bias in predicting inter-group links (i.e., links across different demographic groups). First, we formalize the definition of bias in link prediction by providing quantitative measurements of accuracy disparity, which measures the difference in prediction accuracy between inter-group and intra-group links. Second, we unveil the existence of bias in six state-of-the-art link prediction algorithms through extensive empirical studies on real-world datasets. Third, we identify the imbalanced density across intra-group and inter-group links in training graphs as one of the underlying causes of bias in link prediction. Fourth, based on the identified cause, we design a pre-processing bias mitigation method named FairLP that modifies the training graph, aiming to balance the distribution of intra-group and inter-group links while preserving the network characteristics of the graph. FairLP is model-agnostic and thus compatible with any existing link prediction algorithm. Our experimental results on real-world social network graphs demonstrate that FairLP achieves a better trade-off between fairness and prediction accuracy than existing fairness-enhancing link prediction methods.
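As an illustration of the imbalanced-density diagnosis (not the FairLP algorithm itself), counting intra- versus inter-group links on a toy graph could be sketched as follows; the edge list and group labels are made up.

```python
def link_densities(edges, groups):
    """Split observed links into intra-group and inter-group counts."""
    intra = sum(1 for u, v in edges if groups[u] == groups[v])
    inter = len(edges) - intra
    return intra, inter

# Toy graph: nodes 0-3, two demographic groups "A" and "B".
edges = [(0, 1), (0, 2), (2, 3), (1, 3)]
groups = {0: "A", 1: "A", 2: "B", 3: "B"}
print(link_densities(edges, groups))  # (2, 2)
```

When the intra count greatly exceeds the inter count in the training graph, a predictor trained on it tends to under-score inter-group candidate links; a pre-processing method along FairLP's lines would rebalance these counts before training.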
-
In smart manufacturing, semiconductors play an indispensable role in collecting, processing, and analyzing data, ultimately enabling more agile and productive operations. Given the foundational importance of wafers, the purity of a wafer is essential to maintain the integrity of the overall semiconductor fabrication. This study proposes a novel automated visual inspection (AVI) framework for scrutinizing semiconductor wafers from scratch, capable of identifying defective wafers and pinpointing the location of defects through autonomous data annotation. Initially, this proposed methodology leveraged a texture analysis method known as the gray-level co-occurrence matrix (GLCM) that categorized wafer images—captured via a stroboscopic imaging system—into distinct scenarios for high- and low-resolution wafer images. GLCM approaches further allowed for a complete separation of low-resolution wafer images into defective and normal wafer images, as well as the extraction of defect images from defective low-resolution wafer images, which were used for training a convolutional neural network (CNN) model. Consequently, the CNN model excelled in localizing defects on defective low-resolution wafer images, achieving an F1 score—the harmonic mean of precision and recall metrics—exceeding 90.1%. In high-resolution wafer images, a background subtraction technique represented defects as clusters of white points. The quantity of these white points determined the defectiveness and pinpointed locations of defects on high-resolution wafer images. Lastly, the CNN implementation further enhanced performance, robustness, and consistency irrespective of variations in the ratio of white point clusters. This technique demonstrated accuracy in localizing defects on high-resolution wafer images, yielding an F1 score greater than 99.3%.
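A minimal from-scratch sketch of the GLCM texture analysis mentioned above: a 4-level co-occurrence matrix for a single horizontal pixel offset, plus a contrast statistic. The toy image and parameters are illustrative, not the paper's actual imaging setup.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy >= 0)."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

# Toy 4x4 image quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img)
# Contrast: co-occurrence counts weighted by squared gray-level difference.
contrast = sum(m[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
print(contrast)  # 7
```

Statistics like contrast, homogeneity, and energy computed from such matrices are what let a texture-based classifier separate defective from normal wafer regions before any CNN is trained.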
-
The uncertainty quantification of prediction models (e.g., neural networks) is crucial for their adoption in many robotics applications. This is arguably as important as making accurate predictions, especially for safety-critical applications such as self-driving cars. This paper proposes our approach to uncertainty quantification in the context of visual localization for autonomous driving, where we predict locations from images. Our proposed framework estimates probabilistic uncertainty by creating a sensor error model that maps an internal output of the prediction model to the uncertainty. The sensor error model is created using multiple image databases of visual localization, each with ground-truth location. We demonstrate the accuracy of our uncertainty prediction framework using the Ithaca365 dataset, which includes variations in lighting, weather (sunny, snowy, night), and alignment errors between databases. We analyze both the predicted uncertainty and its incorporation into a Kalman-based localization filter. Our results show that prediction error variations increase with poor weather and lighting conditions, leading to greater uncertainty and outliers, which can be predicted by our proposed uncertainty model. Additionally, our probabilistic error model enables the filter to remove ad hoc sensor gating, as the uncertainty automatically adjusts the model to the input data.
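As a hedged, one-dimensional sketch of how a predicted sensor variance feeds a Kalman-style update (a toy, not the paper's filter): a large predicted measurement variance automatically down-weights that measurement, which is what makes hand-tuned sensor gating unnecessary. All numbers are hypothetical.

```python
def kalman_update(x, p, z, r):
    """1-D Kalman measurement update.
    x, p: prior state estimate and its variance.
    z, r: measurement and its predicted (heteroscedastic) variance."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Prior: position 0.0 with variance 1.0. A localization fix at 2.0 arrives
# with a large predicted uncertainty (r=4.0, e.g. a snowy night scene),
# so it only nudges the estimate.
x, p = kalman_update(0.0, 1.0, 2.0, 4.0)
print(round(x, 2), round(p, 2))  # 0.4 0.8
```

With a small predicted variance (a clear, sunny scene) the same fix would pull the estimate much closer to 2.0, so the error model, rather than an ad hoc threshold, decides how much each image-based fix is trusted.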
-
Physically Unclonable Functions (PUFs) are emerging hardware security primitives that leverage random variations in the chip manufacturing process to generate unique secrets. The security level of generated PUF secrets is mainly determined by their unpredictability, which is typically evaluated using the metric of entropy bits. In this paper, we propose a novel Pairwise Distinct-Modulus (PDM) technique that significantly improves the upper bound of PUF entropy bits from the scale of log2(N!) up to O(N^2). The PDM technique boosts entropy by eliminating the correlation within PUF response bits caused by element reuse in conventional pairwise comparison. We also propose a reliability-enhancing scheme that compensates for the impact on reliability by saving a significant portion of potential reliable response bits. Experimental results based on a published large-scale RO PUF frequency dataset validate that the proposed technique significantly boosts PUF entropy bits from the scale of O(N∙log2(N)) to approach the new upper bound of O(N^2) with comparable reliability, and that the reliability-enhancing technique saves 4x more on the percentage of reliable response bits.
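For context, the conventional pairwise comparison that PDM improves upon (not the proposed PDM technique itself) can be sketched as follows; the ring-oscillator frequencies are made up.

```python
from itertools import combinations

def pairwise_response(freqs):
    """Conventional pairwise comparison: one response bit per RO pair.
    Because each oscillator is reused across many pairs, the C(N,2) bits
    are correlated, which is why their entropy is bounded by log2(N!)
    rather than C(N,2) independent bits."""
    return [int(fi > fj) for fi, fj in combinations(freqs, 2)]

# Toy frequencies (MHz) for N=4 ring oscillators.
bits = pairwise_response([103.2, 101.7, 104.9, 102.4])
print(bits)  # [1, 0, 1, 0, 0, 1]
```

For example, once the bits for (f0, f1) and (f1, f2) are known, the bit for (f0, f2) is partially constrained by transitivity; eliminating this kind of element-reuse correlation is the stated goal of the PDM construction.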
