

Search for: All records

Creators/Authors contains: "Najafi, M Hassan"


  1. Word-aware sentiment analysis has posed a significant challenge over the past decade. Despite the considerable effort behind recent language models, achieving a representation lightweight enough for deployment on resource-constrained edge devices remains a crucial concern. This study proposes a novel solution that merges two emerging paradigms, the Word2Vec language model and hyperdimensional computing, into a framework named Word2HyperVec. By mapping embeddings into a binary space, the framework keeps the model small and enables low-power processing during inference (a hedged sketch of this binarized-embedding idea appears after this entry). Our solution demonstrates significant advantages, consuming only 2.2 W, up to 1.81× more efficient than alternative learning models such as support vector machines, random forests, and multi-layer perceptrons.
    Free, publicly-accessible full text available June 12, 2025
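    The abstract above does not spell out Word2HyperVec's encoding, but the core idea it describes, binarizing dense word embeddings and classifying in Hamming space, can be sketched minimally as follows. The random stand-in embeddings, the sign-threshold binarization, and the majority-vote bundling are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained Word2Vec embedding table (1000-word vocabulary,
# 10,000-dimensional dense vectors). In the real framework these would come
# from a trained language model; random values are used here for illustration.
VOCAB, DIM = 1000, 10_000
embeddings = rng.standard_normal((VOCAB, DIM))

# Binarize: map each dense embedding to a {0,1} hypervector by sign thresholding.
binary_hvs = (embeddings > 0).astype(np.uint8)

def encode_sentence(word_ids):
    """Bundle word hypervectors into one sentence hypervector by majority vote."""
    stack = binary_hvs[word_ids]
    return (stack.sum(axis=0) * 2 > len(word_ids)).astype(np.uint8)

def classify(sentence_hv, class_prototypes):
    """Return the class whose prototype is closest in Hamming distance."""
    dists = [(sentence_hv != proto).sum() for proto in class_prototypes]
    return int(np.argmin(dists))

# Toy usage: two class prototypes built from a few "training" sentences each.
pos_proto = encode_sentence([1, 2, 3, 4])
neg_proto = encode_sentence([10, 20, 30, 40])
print(classify(encode_sentence([1, 2, 30]), [pos_proto, neg_proto]))
```

    Inference here needs only XOR/popcount-style operations on binary vectors, which is what makes this style of model attractive for low-power edge hardware.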
  2. Detecting vulnerable code blocks has become a highly popular topic in computer-aided design, especially with the advancement of natural language processing (NLP). Analyzing hardware description languages (HDLs) such as Verilog involves dealing with lengthy code. This letter introduces an innovative identification of attack-vulnerable hardware through opcode processing. Because architecturally defined opcodes express every operation at the beginning of a code line, the word-processing problem is efficiently transformed into an opcode-processing one. This research converts a benchmark dataset into an intermediary code stack and then classifies secure and fragile code using NLP techniques. The results reveal a framework that achieves up to 94% accuracy when employing a sophisticated convolutional neural network (CNN) architecture with extra embedding layers (a minimal sketch of such a pipeline follows this entry). The framework thus lets users quickly check their HDL code for vulnerabilities by querying a supervised model trained on the predefined vulnerabilities, and the outcomes of a model trained on the HDL dataset support the superior efficacy of opcode-based processing in Trojan detection.
    Free, publicly-accessible full text available June 1, 2025
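    The letter's exact pipeline is not given here; the sketch below illustrates the general approach it describes, keeping only the leading opcode of each intermediate-code line and classifying the opcode sequence with a small CNN that includes an embedding layer. The opcode vocabulary, model dimensions, and PyTorch implementation are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical opcode vocabulary for an intermediate-code stack; the real
# dataset's opcode set would be defined by the target architecture.
OPCODES = {"<pad>": 0, "add": 1, "and": 2, "or": 3, "xor": 4, "mux": 5, "shl": 6}

def lines_to_opcode_ids(code_lines, max_len=64):
    """Keep only the leading opcode of each line and map it to an integer id."""
    ids = [OPCODES.get(line.split()[0], 0) for line in code_lines if line.strip()]
    ids = ids[:max_len] + [0] * (max_len - len(ids))  # pad (or truncate) to max_len
    return torch.tensor(ids)

class OpcodeCNN(nn.Module):
    """Embedding layer followed by a 1-D convolution, pooled into a binary
    secure-vs-vulnerable decision."""
    def __init__(self, vocab=len(OPCODES), emb=16, channels=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        self.head = nn.Linear(channels, 2)

    def forward(self, x):                          # x: (batch, seq_len)
        h = self.emb(x).transpose(1, 2)            # (batch, emb, seq_len)
        h = torch.relu(self.conv(h)).amax(dim=2)   # global max pooling
        return self.head(h)                        # logits: secure vs. vulnerable

model = OpcodeCNN()
sample = lines_to_opcode_ids(["add r1 r2 r3", "mux r4 r1 r5", "xor r6 r4 r2"])
print(model(sample.unsqueeze(0)))  # untrained logits, shape (1, 2)
```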
  3. Hyperdimensional computing (HDC) is a novel computational paradigm that operates on high-dimensional vectors known as hypervectors. Hypervectors are constructed as long bit-streams and form the basic building blocks of HDC systems. In HDC, hypervectors are generated from scalar values without considering bit significance. HDC is efficient and robust for various data processing applications, especially computer vision tasks. To construct HDC models for vision applications, the current state-of-the-art practice encodes data with two parameters: pixel intensity and pixel position. However, the intensity and position information embedded in the high-dimensional vectors is generally not generated dynamically in HDC models; consequently, designing hypervectors that achieve high model accuracy requires powerful computing platforms for training. A more efficient approach is to generate hypervectors dynamically during the training phase. To this end, this work uses low-discrepancy sequences to generate intensity hypervectors while avoiding position hypervectors altogether, which eliminates the multiplication step in vector encoding and yields a power-efficient HDC system. For the first time in the literature, our approach employs lightweight vector generators that utilize unary bit-streams to encode data, instead of conventional comparator-based generators (a sketch of such a generator follows this entry).
    Free, publicly-accessible full text available March 25, 2025
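    As a rough illustration of the encoding idea described above, the sketch below generates a deterministic intensity hypervector by thresholding an intensity value against a Van der Corput low-discrepancy sequence. The dimension and the thermometer-style comparison are assumptions; the paper's actual unary generator may differ.

```python
import numpy as np

def van_der_corput(n_points, base=2):
    """First n_points of the base-b Van der Corput low-discrepancy sequence."""
    seq = np.empty(n_points)
    for i in range(n_points):
        x, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, digit = divmod(k, base)
            x += digit / denom
        seq[i] = x
    return seq

DIM = 8192
thresholds = van_der_corput(DIM)  # one deterministic sequence, fixed at design time

def intensity_hv(value):
    """Deterministic intensity hypervector: bit i is 1 iff thresholds[i] < value.
    Nearby intensities share most of their set bits, giving the cross-correlation
    that HDC intensity encoding relies on, with no random number generator."""
    return (thresholds < value).astype(np.uint8)

# Close intensities differ in few bits; distant ones differ in many.
a, b, c = intensity_hv(0.50), intensity_hv(0.52), intensity_hv(0.95)
print((a != b).sum(), (a != c).sum())  # small (~160) vs. large (~3700) Hamming distance
```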
  4. Stochastic computing (SC) division circuits have gained importance in recent years relative to other arithmetic circuits because of their low complexity, obtained by trading off accuracy. Designing a division circuit is already complex in conventional binary hardware, and developing an accurate, efficient SC division circuit remains an open research problem. Prior work proposed SC division circuits built from multiplexers and JK flip-flop units, which may require correlated or uncorrelated input bit-streams. This study centers on a cost-effective and highly efficient bit-stream generator designed specifically for SC division circuits. To that end, we assess the performance of multiple bit-stream generators, analyze the impact of correlation on SC division, and compare the designs in terms of accuracy and hardware cost. Moreover, we present a low-cost, energy-efficient bit-stream generator based on powers-of-2 Van der Corput (VDC) sequences (sketched after this entry). Among the tested sequence generators, the VDC sequences produced our best results: the VDC-based design reduces the area-delay product by 15.5% and saves 18.05% in energy consumption at the same accuracy level compared to conventional bit-stream generators, and it also improves precision over the state-of-the-art. We validate the proposed architecture with an image processing case study, achieving high PSNR and structural similarity values.
    Free, publicly-accessible full text available March 7, 2025
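    For concreteness, here is a minimal sketch of a powers-of-2 VDC bit-stream generator of the kind the abstract describes: the i-th element of the base-2 VDC sequence is the bit reversal of i, and a bit-stream for a value in [0, 1] is produced by thresholding against that sequence. The stream length, bit width, and the AND-as-minimum demonstration are illustrative assumptions.

```python
def vdc2(i, bits=8):
    """i-th element of the base-2 Van der Corput sequence via bit reversal."""
    x = 0.0
    for b in range(bits):
        if i & (1 << b):
            x += 1.0 / (1 << (bits - b))
    return x

def bitstream(value, length=256, bits=8):
    """Unipolar SC bit-stream for a value in [0, 1]: bit i is 1 iff vdc2(i) < value.
    The VDC thresholds sweep the unit interval uniformly, so the running ones
    density converges to `value` much faster than with a pseudo-random generator."""
    return [1 if vdc2(i, bits) < value else 0 for i in range(length)]

# Two streams driven by the same VDC thresholds are maximally correlated.
x, y = bitstream(0.75), bitstream(0.25)
# With correlated streams, a bit-wise AND computes min(x, y), a building block
# that correlation-sensitive SC division circuits exploit.
print(sum(a & b for a, b in zip(x, y)) / len(x))  # 0.25 = min(0.75, 0.25)
```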
  5. Hyperdimensional vector processing is a nascent computing approach that mimics the structure of the brain and offers lightweight, robust, and efficient hardware solutions for learning and cognitive tasks. For image recognition and classification, hyperdimensional computing (HDC) utilizes the intensity values of captured images and the positions of image pixels. Traditional HDC systems represent intensities and positions with binary hypervectors of 1K–10K dimensions: intensity hypervectors are cross-correlated for close values and uncorrelated for distant values in the intensity range, while position hypervectors are pseudo-random binary vectors generated iteratively for the best classification performance. In this study, we propose a radically new approach to encoding image data in HDC systems. Encoding pixel intensities with a deterministic approach based on quasi-random sequences removes the need for position hypervectors, which significantly reduces the number of operations by eliminating both the position hypervectors and the multiplication operations in the HDC system (an illustrative sketch follows this entry). Additionally, we suggest a hybrid technique that combines two deterministic sequences when generating hypervectors, achieving higher classification accuracy. Our experimental results show up to a 102× reduction in runtime and significant memory-usage savings, with improved accuracy, compared to a baseline HDC system with conventional hypervector encoding.
    Free, publicly-accessible full text available December 1, 2024
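    The abstract does not detail how position information survives without position hypervectors; the sketch below is one hedged illustration of the general recipe, using deterministic quasi-random thresholds for intensity and a cyclic rotation standing in for position binding. The Sobol generator, the rotation scheme, and the majority bundling are assumptions for illustration, not necessarily the paper's mechanism.

```python
import numpy as np
from scipy.stats import qmc  # Sobol low-discrepancy generator

DIM = 4096
# One deterministic quasi-random threshold vector shared by all pixels.
sobol = qmc.Sobol(d=1, scramble=False).random(DIM).ravel()

def pixel_hv(intensity, position):
    """Encode one pixel: compare its intensity against the shared quasi-random
    thresholds, then cyclically rotate by the pixel index. The rotation is an
    illustrative stand-in for position binding; no position hypervector or
    bit-wise multiplication (XOR binding) is needed."""
    hv = (sobol < intensity).astype(np.uint8)
    return np.roll(hv, position)

def encode_image(pixels):
    """Bundle all pixel hypervectors with a majority vote."""
    acc = np.zeros(DIM, dtype=np.int32)
    for pos, inten in enumerate(pixels):
        acc += pixel_hv(inten, pos)
    return (2 * acc > len(pixels)).astype(np.uint8)

img = np.linspace(0, 1, 64)  # toy 8x8 image, flattened and normalized
print(encode_image(img)[:16])
```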
  6. Free, publicly-accessible full text available December 1, 2024
  7. Few studies have explored the complex circuit simulation of stochastic and unary computing systems, which fall under the umbrella term of bit-stream processing. In particular, the simulation of multi-level cascaded circuits with reconvergent paths has received little attention in the context of bit-stream processing systems. This study addresses that gap and proposes a contingency-table-based, reconvergent-path-aware method for fast and efficient simulation of multi-level circuits, achieving significantly better runtime and accuracy than existing simulation approaches (a toy sketch of the contingency-table idea follows this entry).
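    As a toy illustration of the contingency-table idea, the sketch below tracks the joint bit statistics of two streams in a 2×2 table and evaluates gate outputs directly from the table, so correlation introduced by a reconvergent path is captured without bit-by-bit gate simulation. The representation and gate rules here are assumptions for illustration.

```python
from collections import Counter

def contingency_table(x, y):
    """2x2 table of joint bit counts for two equal-length bit-streams."""
    return Counter(zip(x, y))  # keys: (0,0), (0,1), (1,0), (1,1)

def and_prob(table, n):
    """P(output = 1) for an AND gate, read directly from the table: only the
    (1,1) cell produces a 1. Any correlation between the inputs -- e.g. from a
    reconvergent path -- is captured exactly."""
    return table[(1, 1)] / n

def xor_prob(table, n):
    return (table[(0, 1)] + table[(1, 0)]) / n

# Reconvergent-path example: both gate inputs derive from the same stream s,
# so they are perfectly correlated and an independence assumption would fail.
s = [1, 0, 1, 1, 0, 1, 0, 1]
t = contingency_table(s, s)
print(and_prob(t, len(s)))  # 0.625, the exact value of P(s AND s) = P(s)
print(xor_prob(t, len(s)))  # 0.0, since s XOR s is always 0
```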
  8. Sorting is a fundamental function in many applications, from data processing to database systems. For high performance, hardware sorting designs are implemented with conventional binary or emerging stochastic computing (SC) approaches. Binary designs are fast and energy-efficient but costly to implement; SC-based designs are area- and power-efficient but slow and energy-hungry. Consequently, previous hardware-based sorting designs have faced scalability issues. In this work, we propose a novel, scalable, low-cost design for implementing sorting networks. We borrow the concept of SC for its area and power efficiency, but use weighted stochastic bit-streams to address the high latency and energy consumption of SC designs. A new lock-and-swap (LAS) unit is proposed to sort weighted bit-streams. The LAS-based sorting network can determine the result of comparing different input values early and then map the inputs to the corresponding outputs based on shorter weighted bit-streams (a simplified compare-and-swap sketch follows this entry). Experimental results show that the proposed approach achieves much better hardware scalability than prior work; in particular, as the number of inputs increases, it reduces energy consumption by about 3.8%–93% compared to prior binary and SC-based designs.
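    To make the compare-and-swap building block concrete: on unary (thermometer-coded) bit-streams, a bit-wise OR yields the maximum and a bit-wise AND yields the minimum, so a sorting network can be assembled from single gate pairs. The sketch below uses plain unary streams and a bubble network for clarity; the paper's LAS unit additionally uses weighted streams and early termination, which are not modeled here.

```python
def unary(value, length=16):
    """Thermometer-coded unary bit-stream: `value` ones followed by zeros."""
    return [1] * value + [0] * (length - value)

def compare_and_swap(x, y):
    """On unary streams, bit-wise OR yields max and bit-wise AND yields min,
    so one gate pair per bit position implements the compare-and-swap unit
    of a sorting network."""
    hi = [a | b for a, b in zip(x, y)]
    lo = [a & b for a, b in zip(x, y)]
    return hi, lo

def bubble_sort_network(values, length=16):
    """Fixed bubble sorting network built entirely from compare-and-swap units."""
    streams = [unary(v, length) for v in values]
    n = len(streams)
    for i in range(n):
        for j in range(n - 1 - i):
            streams[j + 1], streams[j] = compare_and_swap(streams[j], streams[j + 1])
    return [sum(s) for s in streams]  # decode each unary stream back to an integer

print(bubble_sort_network([9, 3, 14, 3, 7]))  # [3, 3, 7, 9, 14]
```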