With high-dimensional representations finding widespread application across many fields, three-dimensional (3D) display techniques are increasingly used commercially for holographic-like, immersive demonstrations. However, the visual discomfort and fatigue caused by 3D head-mounted displays limit their use in marketing. The compressive light field (CLF) display can provide binocular and motion parallax by stacking multiple liquid crystal screens, without any extra accessories. It leverages optical viewpoint fusion to deliver an immersive, visually pleasing experience for viewers. Unfortunately, its practical application has been limited by processing complexity and reconstruction quality. In this paper, we propose a dual-guided, learning-based factorization for a polarization-based CLF display with depth-assisted calibration (DAC), which substantially improves the visual quality of factorization under real-time processing. Specifically, we first exploit a dual-guided network structure constrained by both the reconstructed and the viewing images. In addition, using the proposed DAC, we distribute each pixel across the displayed screens according to its real depth. Furthermore, subjective quality is improved by a Gauss-distribution-based weighting (GDBW) that concentrates reconstruction quality around the observer's angular position. Experimental results demonstrate qualitative and quantitative improvements over competing methods, and a CLF prototype is assembled to verify the practicality of our factorization.
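The GDBW idea lends itself to a short illustration. Below is a minimal, hypothetical PyTorch sketch of a Gaussian-weighted reconstruction loss for a simplified two-layer multiplicative factorization; the function names, the shift-based parallax approximation, and all parameter values are assumptions for illustration only, not the paper's actual pipeline (which uses a dual-guided network and depth-assisted calibration).

```python
# Illustrative sketch only: Gaussian-weighted loss for a simplified
# two-layer multiplicative light field factorization. Names, shapes,
# and the roll-based parallax model are assumptions, not the paper's method.
import torch

def gdbw_weights(view_angles, observer_angle, sigma=0.1):
    """Gaussian weights that concentrate reconstruction quality
    near the observer's angular position (the GDBW idea)."""
    d = view_angles - observer_angle
    w = torch.exp(-0.5 * (d / sigma) ** 2)
    return w / w.sum()

def factorize(target_views, view_angles, observer_angle,
              shape=(256, 256), steps=500, lr=0.05):
    # Two transmittance layers, parameterized in logit space so a
    # sigmoid keeps them in the physically valid range [0, 1].
    front = torch.zeros(shape, requires_grad=True)
    rear = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([front, rear], lr=lr)
    w = gdbw_weights(view_angles, observer_angle)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for k, (view, ang) in enumerate(zip(target_views, view_angles)):
            # Parallax crudely approximated by shifting the rear layer
            # per view; a real system uses calibrated per-pixel ray geometry.
            shift = int(torch.round(ang * 20))
            simulated = torch.sigmoid(front) * torch.roll(
                torch.sigmoid(rear), shifts=shift, dims=1)
            loss = loss + w[k] * torch.mean((simulated - view) ** 2)
        loss.backward()
        opt.step()
    return torch.sigmoid(front).detach(), torch.sigmoid(rear).detach()
```

Views near the observer's angle receive larger weights, so the optimizer trades off quality at peripheral viewpoints for sharper reconstruction where the viewer actually is.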
Lung and heart sound classification is challenging due to the complex nature of audio data and its dynamic properties across the time and frequency domains. It is also very difficult to detect lung or heart conditions from small, unbalanced, or highly noisy datasets, and data quality remains a considerable obstacle to improving deep learning performance. In this paper, we propose a novel feature-based fusion network, FDC-FS, for classifying heart and lung sounds. The FDC-FS framework transfers learning from three deep neural network models built on audio datasets. The novelty of the proposed transfer learning lies in transforming audio data into image vectors and fusing three specific models into a single model better suited to deep learning. We used two publicly available datasets: lung sound data from the ICBHI 2017 challenge and heart challenge data. We applied data augmentation techniques such as noise distortion, pitch shifting, and time stretching to address data issues in these datasets. Importantly, we extracted three distinct features from the audio samples: Spectrogram, MFCC, and Chromagram. Finally, we built a fusion of three optimal convolutional neural network models fed with the image feature vectors transformed from these audio features. The proposed fusion model outperforms state-of-the-art work: FDC-FS achieves its highest accuracy of 99.1% on Spectrogram-based lung sound classification and 97% on Spectrogram- and Chromagram-based heart sound classification.
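To make the preprocessing concrete, here is a minimal sketch of the named augmentations and the three features using librosa; the parameter values (noise level, pitch steps, stretch rate, n_mfcc) are illustrative assumptions, and the paper's exact settings may differ.

```python
# Illustrative sketch: the augmentations and features named in the abstract,
# implemented with librosa. Parameter values are assumptions, not the
# paper's settings.
import numpy as np
import librosa

def augment(y, sr):
    """Noise distortion, pitch shift, and time stretching."""
    noisy = y + 0.005 * np.random.randn(len(y))
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)
    stretched = librosa.effects.time_stretch(y, rate=1.1)
    return noisy, shifted, stretched

def extract_features(y, sr):
    """The three per-sample representations: Spectrogram, MFCC, Chromagram."""
    spec = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    return spec, mfcc, chroma
```

Each 2D feature array would then be resized to a fixed image resolution before being fed to its CNN, with the three model outputs combined in a late-fusion stage as the abstract describes.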
