Search for: All records

Creators/Authors contains: "Li, Ming"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Abstract: This technical brief reports an experimental investigation of the effect of feed region density on the resultant sintered density and the intermediate densities (powder bed density and green density) in the binder jetting additive manufacturing process. The feed region density was increased through compaction. The powder bed density and green density were determined by measuring mass and dimensions, and the sintered density was measured with Archimedes' method. As the relative feed region density increased from 44% to 65%, the powder bed density increased by 5.7%, the green density by 8.5%, and the sintered density by 4.5%. Statistical testing showed that these effects were significant. This study showed that compacting the powder in the feed region is an effective way to alter the density of parts made via binder jetting additive manufacturing.
    Free, publicly-accessible full text available September 1, 2023
  2. Free, publicly-accessible full text available July 15, 2023
  3. Free, publicly-accessible full text available May 1, 2023
  4. Optimally extracting the advantages available from reconfigurable intelligent surfaces (RISs) in wireless communications systems requires estimation of the channels to and from the RIS. The process of determining these channels is complicated when the RIS is composed of passive elements without any sensing or data processing capabilities, and thus the channels must be estimated indirectly by a non-colocated device, typically a controlling base station (BS). In this article, we examine channel estimation for passive RIS-based systems from a fundamental viewpoint. We study various possible channel models and the identifiability of the models as a function of the available pilot data and the behavior of the RIS during training. In particular, we consider situations with and without line-of-sight propagation, single-antenna and multi-antenna configurations for the users and BS, correlated and sparse channel models, single-carrier and wideband orthogonal frequency-division multiplexing (OFDM) scenarios, availability of direct links between the users and BS, exploitation of prior information, as well as a number of other special cases. We further conduct simulations of representative algorithms and compare their performance for various channel models using the relevant Cramér-Rao bounds.
    Free, publicly-accessible full text available May 9, 2023
  5. Free, publicly-accessible full text available August 1, 2023
  6. Free, publicly-accessible full text available April 26, 2023
  7. As virtual reality (VR) offers an experience unmatched by existing multimedia technologies, VR videos, also called 360-degree videos, have attracted considerable attention from academia and industry. How to quantify and model end users' perceived quality when watching 360-degree videos, known as quality of experience (QoE), lies at the center of high-quality provisioning of these multimedia services. In this work, we present EyeQoE, a novel QoE assessment model for 360-degree videos using ocular behaviors. Unlike prior approaches, which mostly rely on objective factors, EyeQoE leverages the new ocular sensing modality to comprehensively capture both subjective and objective impact factors for QoE modeling. We propose a novel method that models eye-based cues as graphs and develop a graph convolutional network (GCN)-based classifier that produces QoE assessments by extracting intrinsic features from the graph-structured data. We further exploit a Siamese network to eliminate the impact of heterogeneity among subjects and visual stimuli. A domain adaptation scheme named MADA is also devised to generalize our model to a vast range of unseen 360-degree videos. Extensive tests are carried out with our collected dataset. Results show that EyeQoE achieves the best prediction accuracy at 92.9%, outperforming state-of-the-art approaches. As another contribution of this work, we have made our dataset publicly available at https://github.com/MobiSec-CSE-UTA/EyeQoE_Dataset.git.
    Free, publicly-accessible full text available March 29, 2023
  8. Free, publicly-accessible full text available January 1, 2023
  9. Free, publicly-accessible full text available January 1, 2023
  10. Free, publicly-accessible full text available December 1, 2022
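The density measurements described in result 1 follow standard practice: green density from mass over measured volume, and sintered density via Archimedes' principle (the displaced fluid mass gives the part volume). A minimal sketch of that arithmetic is below; the function names, the 4.0 g / 3.0 g coupon, and the 8.0 g/cm³ theoretical density are hypothetical illustrations, not values from the brief.

```python
def archimedes_density(m_dry_g, m_submerged_g, rho_fluid_g_cm3=1.0):
    """Bulk density via Archimedes' principle.

    The buoyant mass loss (m_dry - m_submerged) divided by the fluid
    density gives the displaced volume, i.e., the part volume.
    """
    volume_cm3 = (m_dry_g - m_submerged_g) / rho_fluid_g_cm3
    return m_dry_g / volume_cm3

def relative_density(rho_g_cm3, rho_theoretical_g_cm3):
    """Fraction of full (theoretical) density, e.g., 0.44 for 44%."""
    return rho_g_cm3 / rho_theoretical_g_cm3

# Hypothetical sintered coupon: 4.0 g dry, 3.0 g submerged in water
rho = archimedes_density(4.0, 3.0)   # bulk density in g/cm^3
rel = relative_density(rho, 8.0)     # relative density as a fraction
```

The same `relative_density` helper applies to the powder bed and green states, with volume taken from the measured dimensions instead of Archimedes' method.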
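For result 4, the core difficulty is that a passive RIS cannot estimate anything itself, so the BS must vary the RIS phase configuration over pilot slots and solve for the cascaded channel. A common textbook approach, sketched below under strong simplifying assumptions (single-antenna user and BS, noiseless pilots, a DFT-based training matrix), is least-squares recovery; this is an illustration of the general idea, not the article's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8   # number of RIS elements (hypothetical)
T = N   # pilot slots; identifiability here requires T >= N

# DFT-based RIS phase configurations: row t holds the N unit-modulus
# reflection coefficients applied during pilot slot t.
Phi = np.exp(2j * np.pi * np.outer(np.arange(T), np.arange(N)) / N)

# Ground-truth cascaded user->RIS->BS channel (random, for the demo)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# One scalar received pilot per slot: y_t = phi_t^T h (noise omitted)
y = Phi @ h

# Least-squares estimate of the cascaded channel
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

Because the DFT matrix is full rank, `h_hat` recovers `h` exactly in the noiseless case; with noise, the estimator's variance can be benchmarked against the Cramér-Rao bounds discussed in the article.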
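Result 7 builds its classifier from graph convolutions over eye-based cues. A single GCN propagation step in the standard Kipf-Welling form, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} X W), can be sketched as follows; the toy 4-node graph, feature sizes, and random weights are hypothetical stand-ins, not EyeQoE's actual graph construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy undirected graph over 4 nodes (e.g., ocular events)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = rng.standard_normal((4, 3))  # 3-dim node features (hypothetical)
W = rng.standard_normal((3, 2))  # layer weights, 2 output channels

# Add self-loops, then symmetrically normalize by node degree
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))

# One graph-convolution layer with ReLU activation
H = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)
```

Stacking a few such layers and pooling the node embeddings yields a graph-level feature vector that a classifier head can map to a QoE score, which is the general pattern a GCN-based assessor follows.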