

Title: Modeling and Simulation of Circuit-Level Nonidealities for an Analog Computing Design Approach with Application to EEG Feature Extraction
This paper presents a design approach for the modeling and simulation of ultra-low-power (ULP) analog computing machine learning (ML) circuits for seizure detection from EEG signals in wearable health-monitoring applications. We describe a new analog system modeling and simulation technique that associates power consumption, noise, linearity, and other critical performance parameters of analog circuits with the classification accuracy of a given ML network, making it possible to realize a power- and performance-optimized analog ML hardware implementation based on diverse application-specific needs. We carried out circuit simulations to obtain non-idealities, which are then mathematically modeled for an accurate mapping. We modeled noise, nonlinearity, resolution, and process variations such that the model can accurately predict the classification accuracy of the analog-computing-based seizure detection system. Noise is modeled as input-referred white noise added directly at the input. Device process and temperature variations are modeled as random fluctuations in circuit parameters such as gain and cut-off frequency. Nonlinearity is mathematically modeled as a power series. The combined system-level model is then simulated for classification accuracy assessments. The design approach helps optimize power and area during the development of tailored analog circuits for ML networks, with the ability to trade off power and performance goals while still ensuring the required classification accuracy. The simulation technique also enables determination of target specifications for each circuit block in the analog computing hardware. This is achieved by developing the ML hardware model and investigating the effect of circuit nonidealities on classification accuracy. Simulation of an analog computing EEG seizure detection block shows a classification accuracy of 91%.
The proposed modeling approach will significantly reduce the design time and complexity of large analog computing systems. Two feature extraction approaches are also compared for an analog computing architecture.
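The nonideality models described above (input-referred white noise, power-series nonlinearity, and random parameter fluctuations) can be composed into a simple behavioral block model. The sketch below is illustrative only: the gain, noise, polynomial coefficients, and variation magnitudes are hypothetical placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def nonideal_block(x, gain=10.0, noise_std=0.01,
                   poly=(0.0, 1.0, 0.0, -0.05), param_sigma=0.02):
    """Behavioral model of one analog block: input-referred noise,
    power-series nonlinearity, and random process variation of the gain."""
    # Input-referred white noise added directly at the input
    x = x + rng.normal(0.0, noise_std, size=x.shape)
    # Process/temperature variation as a random fluctuation of a parameter (gain)
    g = gain * (1.0 + rng.normal(0.0, param_sigma))
    # Nonlinearity as a power series: a0 + a1*x + a2*x^2 + a3*x^3
    a0, a1, a2, a3 = poly
    y = a0 + a1 * x + a2 * x ** 2 + a3 * x ** 3
    return g * y

t = np.linspace(0.0, 1.0, 1000)
clean = 0.1 * np.sin(2 * np.pi * 5 * t)   # stand-in for an EEG-band signal
out = nonideal_block(clean)
```

Chaining such blocks and sweeping `noise_std`, `poly`, and `param_sigma` is one way to map circuit-level specifications onto end-to-end classification accuracy, in the spirit of the approach described above.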
Award ID(s):
1812588
PAR ID:
10330276
Journal Name:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
ISSN:
0278-0070
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Hyperdimensional (HD) computing holds promise for classifying two groups of data. This paper explores seizure detection from electroencephalogram (EEG) recordings of subjects with epilepsy using HD computing based on power spectral density (PSD) features. Publicly available intra-cranial EEG (iEEG) data collected from 4 dogs and 8 human patients in the Kaggle seizure detection contest are used in this paper. Two methods for classification are explored. First, a few ranked PSD features from a small number of channels, selected from a prior classification, are used for HD classification. Second, all PSD features extracted from all channels are used as features for HD classification. It is shown that for about half the subjects the small feature set outperforms the full feature set in the context of HD classification, and for the other half the full set outperforms the small one. HD classification achieves above 95% accuracy for six of the 12 subjects, and between 85% and 95% accuracy for 4 subjects. For two subjects, the classification accuracy using HD computing is not as good as that of classical approaches such as support vector machine classifiers.
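The HD classification scheme described above can be sketched with binary hypervectors: features are encoded by binding feature-ID and quantization-level hypervectors, bundled into class hypervectors, and queries are classified by minimum Hamming distance. The encoding details below (8 features, 4 quantization levels, D = 1,000) are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1000  # hypervector dimensionality (often 10,000; reducible per the paper)

def random_hv():
    return rng.integers(0, 2, size=D, dtype=np.int8)

def bind(a, b):           # XOR binding of two binary hypervectors
    return a ^ b

def bundle(hvs):          # bitwise majority vote across hypervectors
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.int8)

def hamming(a, b):        # normalized Hamming distance
    return np.count_nonzero(a != b) / D

# Hypothetical encoding: bind each feature's ID hypervector with a
# level hypervector for its quantized PSD value, then bundle.
feature_ids = [random_hv() for _ in range(8)]
levels = [random_hv() for _ in range(4)]

def encode(quantized_feats):  # 8 PSD features quantized to levels 0..3
    return bundle([bind(feature_ids[i], levels[q])
                   for i, q in enumerate(quantized_feats)])

# "Train" class hypervectors by bundling encoded records per class,
# then classify a query by minimum Hamming distance.
seizure_hv = bundle([encode(rng.integers(2, 4, 8)) for _ in range(20)])
normal_hv = bundle([encode(rng.integers(0, 2, 8)) for _ in range(20)])

query = encode(rng.integers(2, 4, 8))
label = ("seizure" if hamming(query, seizure_hv) < hamming(query, normal_hv)
         else "normal")
```

The pair of distances computed for each query is exactly what a hypervector distance plot (query-to-class-A distance versus query-to-class-B distance) would visualize.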
  2. Hyperdimensional (HD) computing is a form of brain-inspired computing which can be applied to numerous classification problems. Past research has shown that seizures can be detected from electroencephalograms (EEG) with high accuracy using local binary pattern (LBP) encoding. This paper explores the applicability of binary HD computing to seizure detection from intra-cranial EEG (iEEG) data from the Kaggle seizure detection contest using both LBP and power spectral density (PSD) features. In the PSD method, three novel approaches to HD classification are presented for both selected features and all features. These are referred to as single classifier with long hypervector, multiple classifiers, and single classifier with short hypervector. To visualize the quality of classification of test data, a hypervector distance plot is introduced that plots the Hamming distance of the query hypervectors from one class hypervector versus that from the other. Simulation results show that: 1) the LBP method offers an average 80.9% test accuracy, 71.9% sensitivity, 81.4% specificity, and 76.6% test AUC, whereas the PSD method achieves an average 91.0% test accuracy, 81.8% sensitivity, 92.0% specificity, and 86.9% test AUC; 2) the average seizure detection latency is 2.5 s for the LBP method and 4.5 s for the PSD methods. This average latency, less than 5 s, is a relevant parameter for fast drug delivery, indicating that both the LBP and PSD methods are able to detect seizures in a timely manner. The performance using selected PSD features is better than that using all features; 3) the dimensionality of the hypervector can be reduced from 10,000 to 1,000 bits for the LBP and PSD methods. Furthermore, for some selected-feature approaches, the dimensionality of the hypervector can be reduced to 100 bits.
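The LBP encoding mentioned above turns a 1-D EEG trace into a stream of small integer codes by thresholding local sample differences. The sketch below is a minimal, hypothetical variant (6-bit codes packed from successive rise/fall decisions), not necessarily the exact LBP formulation used in the paper.

```python
import numpy as np

def lbp_codes(signal, bits=6):
    """Hypothetical 1-D local binary pattern: each output code packs
    `bits` successive rise/fall decisions from the signal into an integer."""
    diffs = (np.diff(signal) > 0).astype(np.uint8)   # 1 where the signal rises
    codes = np.zeros(len(diffs) - bits + 1, dtype=np.int32)
    for b in range(bits):
        codes |= diffs[b:b + len(codes)].astype(np.int32) << b  # pack bit b
    return codes  # integers in [0, 2**bits)

rng = np.random.default_rng(2)
sig = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.normal(size=200)
codes = lbp_codes(sig)
```

In an HD pipeline, each code value would then be mapped to a hypervector and the per-window codes bundled into a query hypervector for Hamming-distance classification.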
  3. Advances in algorithms and low-power computing hardware imply that machine learning is of potential use in off-grid medical data classification and diagnosis applications such as electrocardiogram interpretation. However, although support vector machine algorithms for electrocardiogram classification show high classification accuracy, hardware implementations for edge applications are impractical due to the complexity and substantial power consumption needed for kernel optimization when using conventional complementary metal–oxide–semiconductor circuits. Here we report reconfigurable mixed-kernel transistors based on dual-gated van der Waals heterojunctions that can generate fully tunable individual and mixed Gaussian and sigmoid functions for analogue support vector machine kernel applications. We show that the heterojunction-generated kernels can be used for arrhythmia detection from electrocardiogram signals with high classification accuracy compared with standard radial basis function kernels. The reconfigurable nature of mixed-kernel heterojunction transistors also allows for personalized detection using Bayesian optimization. A single mixed-kernel heterojunction device can generate the equivalent transfer function of a complementary metal–oxide–semiconductor circuit comprising dozens of transistors and thus provides a low-power approach for support vector machine classification applications. 
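Functionally, the tunable mixed kernel that the heterojunction device realizes in hardware can be written as a weighted combination of a Gaussian (RBF) term and a sigmoid term. The mixing form and parameter names below are an assumed illustration of that idea, not the device's measured transfer function.

```python
import numpy as np

def mixed_kernel(x, z, alpha=0.5, gamma=1.0, a=1.0, c=0.0):
    """Hypothetical mixed kernel: alpha blends a Gaussian (RBF) term with a
    sigmoid (tanh) term, mimicking the tunable device response in software."""
    rbf = np.exp(-gamma * np.sum((x - z) ** 2))  # Gaussian component
    sig = np.tanh(a * np.dot(x, z) + c)          # sigmoid component
    return alpha * rbf + (1.0 - alpha) * sig

# With alpha = 1 the mix reduces to a pure RBF kernel, which is 1 on
# identical inputs.
k_same = mixed_kernel(np.ones(3), np.ones(3), alpha=1.0)
```

Note that the sigmoid (tanh) kernel is not positive semidefinite for all parameter choices, which is one reason per-patient tuning (e.g., via Bayesian optimization, as the paper describes) matters in practice.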
  4. Embedded differential temperature sensors can be utilized to monitor the power consumption of circuits, taking advantage of the inherent on-chip electrothermal coupling. Potential applications range from hardware security to linearity, gain/bandwidth calibration, defect-oriented testing, and compensation for circuit aging effects. This paper introduces the use of on-chip differential temperature sensors as part of a wireless Internet of Things system. A new low-power differential temperature sensor circuit with chopped cascode transistors and switched-capacitor integration is described. This design approach leverages chopper stabilization in combination with a switched-capacitor integrator that acts as a low-pass filter such that the circuit provides offset and low-frequency noise mitigation. Simulation results of the proposed differential temperature sensor in a 65 nm complementary metal-oxide-semiconductor (CMOS) process show a sensitivity of 33.18 V/°C within a linear range of ±36.5 m°C and an integrated output noise of 0.862 mVrms (from 1 to 441.7 Hz) with an overall power consumption of 0.187 mW. Considering a figure of merit that involves sensitivity, linear range, noise, and power, the new temperature sensor topology demonstrates a significant improvement compared to state-of-the-art differential temperature sensors for on-chip monitoring of power dissipation.
  5. Circuit linearity calibration can represent a set of high-dimensional search problems when observability is limited. For example, linearity calibration of digital-to-time converters (DTCs), an essential building block of modern digital phase-locked loops (DPLLs), is such a high-dimensional search problem because the difficulty of measuring picosecond delays hinders prior methods that calibrate stage by stage. Moreover, a calibrated DTC can become nonlinear again due to changes in temperature (T) and power supply voltage (V). Prior work reports a deep reinforcement learning framework capable of performing DTC linearity calibration with nonlinear calibration banks; however, that work does not address maintaining calibration in the face of temperature and supply voltage variations. In this paper, we present a meta-reinforcement learning (RL) method that enables the RL agent to quickly adapt to a new environment when the temperature and/or voltage change. Inspired by Style Generative Adversarial Networks (StyleGANs), we propose to treat temperature and voltage changes as the styles of the circuits. In contrast to traditional methods employing circuit sensors to detect changes in T and V, we utilize a machine learning (ML) sensor to implicitly infer a wide range of environmental changes. The style information from the ML sensor is subsequently injected into a small portion of the policy network, modulating its weights. As a proof of concept, we first designed a 5-bit DTC at the normal voltage (1 V) and normal temperature (27 °C) corner (NVNT) as the environment. The RL agent begins its training in the NVNT environment and is then tasked with adapting to environments with different temperatures and supply voltages. Our results show that the proposed technique can reduce the integral nonlinearity (INL) to less than 0.5 LSB within 10,000 search steps in a changed environment. Compared to starting learning from a randomly initialized policy and from a trained policy, the proposed meta-RL approach takes 63% and 47% fewer steps, respectively, to complete the linearity calibration. Our method is also applicable to the calibration of many other kinds of analog and RF circuits.
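The StyleGAN-inspired idea above, where an ML-sensor-derived style vector modulates part of the policy network's weights, can be sketched as a FiLM-style per-row scaling of one layer. Everything below (layer sizes, the form of the ML sensor, the 0.1 modulation strength) is a hypothetical illustration of the mechanism, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.normal(size=(16, 8))           # base weights of one policy layer
style_proj = rng.normal(size=(2, 16))  # maps 2 style features to per-row scales

def ml_sensor(env_readings):
    """Stand-in ML sensor: infer two style features (proxies for T and V
    shifts) from raw circuit observations, here just mean and spread."""
    return np.array([env_readings.mean(), env_readings.std()])

def modulated_policy(obs, env_readings):
    style = ml_sensor(env_readings) @ style_proj   # (16,) style vector
    W_mod = W * (1.0 + 0.1 * style[:, None])       # modulate each weight row
    return np.tanh(W_mod @ obs)                    # policy layer output

obs = rng.normal(size=8)                 # DTC state observation (illustrative)
action_logits = modulated_policy(obs, rng.normal(loc=0.2, size=32))
```

Because only the small modulation pathway must respond to a new (T, V) corner while the base weights stay fixed, the agent can adapt with far fewer search steps than retraining the whole policy, which is the intuition behind the reported 63% and 47% step reductions.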