
Search for: All records where Creators/Authors contains: "Wu, Z"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In the Proceedings of the 11th International Conference on Probabilistic Graphical Models (PGM), published as part of the PMLR series.
    Free, publicly-accessible full text available January 1, 2023
  2. Hydride-dehydride (HDH) Ti-6Al-4V powders with non-spherical particle morphology are typically not used in laser-beam powder bed fusion (LB-PBF). Here, HDH powders with two size distributions of 50-120 μm (fine) and 75-175 μm (coarse) are compared for flowability, packing density, and resultant density of the LB-PBF manufactured parts. It is shown that a suitable laser power-velocity-hatch spacing combination can produce parts with a relative density of > 99.5% in LB-PBF of HDH Ti-6Al-4V powder. The size, morphology, and spatial distribution of pores are analyzed in 2D. The boundaries of the lack-of-fusion and keyhole porosity formation regimes are assessed, showing that a build rate ratio of 1.5-2 can be attained while producing parts with a relative density of > 99.5%. Synchrotron x-ray high-speed imaging reveals the laser-powder interaction and the potential porosity formation mechanisms associated with HDH powder. The lower packing density of the coarse powder and high keyhole fluctuation are found to result in higher fractions of porosity within builds during the LB-PBF process.
    Free, publicly-accessible full text available April 1, 2023
  3. We propose VideoSSL, a semi-supervised learning approach for video classification using convolutional neural networks (CNNs). Like other computer vision tasks, existing supervised video classification methods demand a large amount of labeled data to attain good performance, but annotating a large dataset is expensive and time-consuming. To minimize the dependence on a large annotated dataset, our proposed semi-supervised method trains from a small number of labeled examples and exploits two regulatory signals from unlabeled data. The first signal is the pseudo-labels of unlabeled examples, computed from the confidences of the CNN being trained. The other is the normalized probabilities, as predicted by an image classifier CNN, that capture information about the appearance of the objects of interest in the video. We show that, under the supervision of these guiding signals from unlabeled examples, a video classification CNN can achieve impressive performance using a small fraction of annotated examples on three publicly available datasets: UCF101, HMDB51, and Kinetics.
  4. The goal of the proposed project is to transform a large transportation hub into a smart and accessible hub (SAT-Hub) with minimal infrastructure change. The societal need is significant, with especially high impact for people in great need, such as those who are blind and visually impaired (BVI) or have Autism Spectrum Disorder (ASD), as well as those unfamiliar with metropolitan areas. With our interdisciplinary background in urban systems, sensing, AI and data analytics, accessibility, and paratransit and assistive services, our solution is a human-centric system approach that integrates facility modeling, mobile navigation, and user interface design. We leverage several transportation facilities in the heart of New York City and throughout the State of New Jersey as testbeds to ensure the relevance of the research and a smooth transition to real-world applications.
  5. Recent advances in convolutional neural network (CNN) model interpretability have led to impressive progress in visualizing and understanding model predictions. In particular, gradient-based visual attention methods have driven much recent effort in using visual attention maps as a means for visual explanations. A key problem, however, is that these methods are designed for classification and categorization tasks, and their extension to explaining generative models, e.g., variational autoencoders (VAEs), is not trivial. In this work, we take a step towards bridging this crucial gap, proposing the first technique to visually explain VAEs by means of gradient-based attention. We present methods to generate visual attention from the learned latent space, and also demonstrate that such attention explanations serve more than just explaining VAE predictions. We show how these attention maps can be used to localize anomalies in images, demonstrating state-of-the-art performance on the MVTec-AD dataset. We also show how they can be infused into model training, helping bootstrap the VAE into learning improved latent space disentanglement, demonstrated on the Dsprites dataset.
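
The pseudo-label signal described in item 3 can be sketched in a few lines. This is a minimal NumPy illustration of the general idea, not VideoSSL's actual implementation; the confidence `threshold` of 0.8 is an assumed value for demonstration:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pseudo_labels(logits, threshold=0.8):
    """Assign a pseudo-label to each unlabeled example whose top-class
    confidence exceeds `threshold`; the rest are marked -1 and would be
    excluded from the unsupervised loss term."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)        # confidence of the predicted class
    labels = probs.argmax(axis=-1)   # predicted class indices
    labels[conf < threshold] = -1    # reject low-confidence predictions
    return labels, conf
```

In a training loop, the retained pseudo-labels are treated as targets for a standard cross-entropy loss on the unlabeled batch, so only confident predictions contribute gradient signal.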
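
The gradient-based attention in item 5 builds on the Grad-CAM family of methods, where an attention map is a gradient-weighted combination of feature maps. The sketch below shows only that generic combination step, assuming the feature maps and their gradients (here, hypothetical inputs) have already been computed; it is not the paper's VAE-specific procedure:

```python
import numpy as np

def gradcam_attention(feature_maps, gradients):
    """Grad-CAM-style attention map.

    feature_maps, gradients: arrays of shape (C, H, W), where gradients
    holds d(score)/d(feature_maps) for some scalar score (e.g., a latent
    variable in the VAE setting).
    """
    alpha = gradients.mean(axis=(1, 2))           # one weight per channel
    cam = np.tensordot(alpha, feature_maps, 1)    # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                    # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                     # normalize to [0, 1]
    return cam
```

For anomaly localization as described in the abstract, regions where such a map disagrees with the reconstruction objective can be thresholded to highlight anomalous pixels.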