Title: Classifying Functional Brain Graphs Using Graph Hypervector Representation
Hyperdimensional computing (HDC) has drawn significant attention because its performance is comparable with traditional machine learning techniques. HDC classifiers achieve high parallelism, consume less power, and are well-suited for edge applications. Encoding approaches such as record-based encoding and N-gram-based encoding have been used to generate features from input signals and images. These features are mapped to hypervectors that serve as inputs to HDC classifiers. This paper considers the group-based classification of graphs constructed from time series. Each graph is encoded into a hypervector, and the graph hypervectors are used to train the HDC classifier. This paper applies HDC to brain graph classification using fMRI data. Both record-based encoding and GrapHD encoding are explored. Experimental results show that 1) among the HDC encoding approaches, GrapHD encoding achieves classification performance comparable to record-based encoding while requiring significantly less memory; and 2) exploiting sparsity achieves higher performance than using fully connected brain graphs. Both a threshold strategy and the minimum redundancy maximum relevance (mRMR) algorithm are employed to generate sub-graphs, and mRMR achieves higher performance for three binary classification problems: emotion vs. gambling, emotion vs. no-task, and gambling vs. no-task, with corresponding AUCs of 0.87, 0.88, and 0.88, respectively.
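To make the graph-to-hypervector step concrete, the following is a minimal GrapHD-style sketch in Python: each brain region (node) is assigned a random bipolar hypervector, every edge is encoded by binding (element-wise multiplication of) its two endpoint hypervectors, and the weight-scaled edge hypervectors are bundled and bipolarized into a single graph hypervector. The dimension, bipolar representation, and helper names (random_hv, encode_graph) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def random_hv(dim, rng):
    """Random bipolar (+1/-1) hypervector for one graph node."""
    return rng.choice([-1, 1], size=dim)

def encode_graph(adj, node_hvs):
    """GrapHD-style graph encoding: bind the endpoint hypervectors of every edge
    (element-wise product for bipolar HVs), scale by the edge weight, bundle by
    summation, and bipolarize the result into a single graph hypervector."""
    dim = node_hvs.shape[1]
    acc = np.zeros(dim)
    n = adj.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            w = adj[i, j]
            if w != 0:                      # only existing (or thresholded) edges contribute
                acc += w * (node_hvs[i] * node_hvs[j])
    return np.sign(acc)

# Toy example: a 4-node weighted functional-connectivity graph.
rng = np.random.default_rng(0)
dim = 10_000
node_hvs = np.stack([random_hv(dim, rng) for _ in range(4)])
adj = np.array([[0.0, 0.8, 0.0, 0.3],
                [0.8, 0.0, 0.5, 0.0],
                [0.0, 0.5, 0.0, 0.9],
                [0.3, 0.0, 0.9, 0.0]])
graph_hv = encode_graph(adj, node_hvs)
print(graph_hv.shape)        # (10000,)
```

Graph hypervectors produced this way could then be bundled per class (e.g., emotion vs. gambling) to form the prototypes of the HDC classifier.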
Award ID(s):
1814759
PAR ID:
10498220
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
2023 57th Asilomar Conference on Signals, Systems, and Computers
ISBN:
979-8-3503-2574-4
Page Range / eLocation ID:
275 to 279
Format(s):
Medium: X
Location:
Pacific Grove, CA, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Hyperdimensional computing (HDC) has been considered attractive for time-series classification. HDC classifiers are well suited for one- or few-shot learning, require few computational resources, and have been demonstrated to be useful in seizure detection. This paper investigates subject-specific seizure prediction using HDC from intracranial electroencephalogram (iEEG) recordings in the publicly available Kaggle dataset. In comparison to seizure detection (interictal vs. ictal), seizure prediction (interictal vs. preictal) is a more challenging problem. Two HDC-based encoding strategies are explored: local binary pattern (LBP) and power spectral density (PSD). The average performance of HDC classifiers using the two encoding approaches is computed using leave-one-seizure-out cross-validation. Experimental results show that the PSD method, using a small number of features selected by minimum redundancy maximum relevance (mRMR), achieves better seizure prediction performance than the LBP method on the training and validation data.
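As a rough illustration of the LBP encoding strategy mentioned above, the sketch below converts a signal window into local binary pattern codes from the sign of consecutive sample differences, looks up a random binary hypervector for each code, and majority-bundles them into a single window hypervector. The 6-bit code length, 10,000-dimensional hypervectors, and helper names are assumptions for illustration, not the exact pipeline used in the paper.

```python
import numpy as np

def lbp_codes(signal, bits=6):
    """Local binary pattern codes for a 1-D signal: 1 where the signal rises between
    consecutive samples, packed into sliding `bits`-bit integer codes."""
    rises = (np.diff(signal) > 0).astype(int)
    return np.array([int("".join(map(str, rises[t:t + bits])), 2)
                     for t in range(len(rises) - bits + 1)])

def encode_window(signal, code_hvs, bits=6):
    """Look up one random binary hypervector per LBP code and majority-bundle them
    into a single hypervector for the whole window."""
    codes = lbp_codes(signal, bits)
    acc = code_hvs[codes].sum(axis=0)
    return (acc > len(codes) / 2).astype(np.int8)

rng = np.random.default_rng(1)
dim, bits = 10_000, 6
code_hvs = rng.integers(0, 2, size=(2 ** bits, dim), dtype=np.int8)  # one HV per LBP code
window = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.standard_normal(400)
window_hv = encode_window(window, code_hvs, bits)
print(window_hv.shape)       # (10000,)
```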
  2. Hyperdimensional vector processing is a nascent computing approach that mimics the brain structure and offers lightweight, robust, and efficient hardware solutions for different learning and cognitive tasks. For image recognition and classification, hyperdimensional computing (HDC) utilizes the intensity values of captured images and the positions of image pixels. Traditional HDC systems represent the intensity and positions with binary hypervectors of 1K–10K dimensions. The intensity hypervectors are cross-correlated for closer values and uncorrelated for distant values in the intensity range. The position hypervectors are pseudo-random binary vectors generated iteratively for the best classification performance. In this study, we propose a radically new approach for encoding image data in HDC systems. Position hypervectors are no longer needed because pixel intensities are encoded using a deterministic approach based on quasi-random sequences. The proposed approach significantly reduces the number of operations by eliminating the position hypervectors and the multiplication operations in the HDC system. Additionally, we suggest a hybrid technique for generating hypervectors by combining two deterministic sequences, achieving higher classification accuracy. Our experimental results show up to 102× reduction in runtime and significant memory-usage savings with improved accuracy compared to a baseline HDC system with conventional hypervector encoding.
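For reference, the conventional baseline described above can be sketched roughly as follows: correlated intensity "level" hypervectors (nearby intensities share most bits) are bound (XOR, for binary hypervectors) with random position hypervectors and then majority-bundled into an image hypervector. The level count, flip schedule, and helper names are assumptions; the contribution summarized above is precisely to eliminate the position hypervectors and this binding/multiplication step via quasi-random encoding.

```python
import numpy as np

def level_hvs(levels, dim, rng):
    """Correlated intensity 'level' hypervectors: start from a random binary HV and
    flip a disjoint slice of bits per level, so nearby intensities stay similar and
    distant ones become (nearly) uncorrelated."""
    hvs = np.zeros((levels, dim), dtype=np.int8)
    hvs[0] = rng.integers(0, 2, dim)
    flips = dim // (2 * (levels - 1))
    order = rng.permutation(dim)
    for l in range(1, levels):
        hvs[l] = hvs[l - 1]
        hvs[l, order[(l - 1) * flips: l * flips]] ^= 1
    return hvs

def encode_image(img, lvl_hvs, pos_hvs, levels):
    """Conventional record-based encoding: bind (XOR) each pixel's intensity HV with
    its position HV, then majority-bundle all pixels into one image hypervector."""
    flat = img.reshape(-1)
    q = np.minimum((flat * levels).astype(int), levels - 1)   # quantize [0, 1] intensities
    bound = lvl_hvs[q] ^ pos_hvs                              # one bound HV per pixel
    return (bound.sum(axis=0) > flat.size / 2).astype(np.int8)

rng = np.random.default_rng(2)
dim, levels = 10_000, 16
img = rng.random((8, 8))                                      # toy 8x8 "image" in [0, 1)
lvl = level_hvs(levels, dim, rng)
pos = rng.integers(0, 2, size=(img.size, dim), dtype=np.int8) # one random position HV per pixel
image_hv = encode_image(img, lvl, pos, levels)
print(image_hv.shape)        # (10000,)
```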
  3. Hyperdimensional computing (HD) is an emerging brain-inspired paradigm used for machine learning classification tasks. It manipulates ultra-long vectors (hypervectors) using simple operations, which allows for fast learning, energy efficiency, noise tolerance, and a highly parallel distributed framework. HD computing has shown significant promise in the area of biological signal classification. This paper addresses group-specific premature ventricular contraction (PVC) beat detection with HD computing using data from the MIT-BIH arrhythmia database. Temporal, heart rate variability (HRV), and spectral features are extracted, and minimum redundancy maximum relevance (mRMR) is used to rank and select features for classification. Three encoding approaches are explored for mapping the features into the HD space. The HD computing classifiers achieve a PVC beat detection accuracy of 97.7%, compared to 99.4% achieved by more computationally complex methods such as convolutional neural networks (CNNs).
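A minimal sketch of the classification stage shared by such HDC pipelines: the encoded hypervectors of each class are bundled (summed and bipolarized) into a class prototype, and a query hypervector is assigned to the class whose prototype is most similar. The bipolar representation, cosine similarity, and the synthetic data below are assumptions for illustration; this does not reproduce the MIT-BIH experiments.

```python
import numpy as np

def train_prototypes(hvs, labels, n_classes):
    """Associative memory: bundle (sum) all training hypervectors of a class and
    bipolarize to obtain one prototype hypervector per class."""
    protos = np.zeros((n_classes, hvs.shape[1]))
    for c in range(n_classes):
        protos[c] = np.sign(hvs[labels == c].sum(axis=0))
    return protos

def classify(hv, protos):
    """Assign the query to the class whose prototype has the highest cosine similarity."""
    sims = protos @ hv / (np.linalg.norm(protos, axis=1) * np.linalg.norm(hv) + 1e-12)
    return int(np.argmax(sims))

# Synthetic demo: noisy bipolar hypervectors drawn around two hidden class patterns.
rng = np.random.default_rng(3)
dim, n = 10_000, 200
labels = rng.integers(0, 2, n)
base = rng.choice([-1, 1], size=(2, dim))
hvs = np.sign(base[labels] + 0.5 * rng.standard_normal((n, dim)))
protos = train_prototypes(hvs, labels, 2)
acc = np.mean([classify(h, protos) == y for h, y in zip(hvs, labels)])
print(f"training accuracy: {acc:.2f}")
```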
  4. Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors. The hypervectors are constructed as long bit-streams and form the basic building blocks of HDC systems. In HDC, hypervectors are generated from scalar values without considering bit significance. HDC is efficient and robust for various data processing applications, especially computer vision tasks. To construct HDC models for vision applications, the current state-of-the-art practice utilizes two parameters for data encoding: pixel intensity and pixel position. However, the intensity and position information embedded in high-dimensional vectors is generally not generated dynamically in HDC models. Consequently, the optimal design of hypervectors with high model accuracy requires powerful computing platforms for training. A more efficient approach is to generate hypervectors dynamically during the training phase. To this end, this work uses low-discrepancy sequences to generate intensity hypervectors while avoiding position hypervectors. Doing so eliminates the multiplication step in vector encoding, resulting in a power-efficient HDC system. For the first time in the literature, our proposed approach employs lightweight vector generators utilizing unary bit-streams for efficient data encoding instead of conventional comparator-based generators.
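To illustrate the idea of deterministic, low-discrepancy unary encoding, the sketch below sets bit i of an intensity hypervector when the normalized intensity exceeds the i-th sample of a base-2 van der Corput sequence: the fraction of ones then tracks the intensity, and nearby intensities share most bit positions. The choice of sequence, dimension, and helper names are assumptions and not necessarily the generators proposed in this work.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-2 van der Corput low-discrepancy sequence."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def intensity_hv(value, ld_seq):
    """Unary bit-stream: bit i is 1 iff the normalized intensity exceeds the i-th
    low-discrepancy sample, so the ones-density approximates the intensity value."""
    return (value > ld_seq).astype(np.int8)

dim = 1024
ld = van_der_corput(dim)
hv_a = intensity_hv(0.30, ld)
hv_b = intensity_hv(0.35, ld)
hv_c = intensity_hv(0.90, ld)
print(hv_a.mean())                 # ~0.30: ones-density tracks the intensity
print(np.mean(hv_a == hv_b))       # high: nearby intensities share most bits
print(np.mean(hv_a == hv_c))       # lower: distant intensities differ in many bits
```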