

Search for: All records

Creators/Authors contains: "Zhu, Z."



  1. Robles, A. (Ed.)
    Although various navigation apps are available, people who are blind or have low vision (PVIB) still face challenges in locating store entrances due to missing geospatial information in existing map services. Previously, we developed a crowdsourcing platform to collect storefront accessibility and localization data to address these challenges. In this paper, we significantly improve the efficiency of data collection and user engagement in our new AI-enabled Smart DoorFront platform by designing and developing multiple important features, including a gamified credit ranking system, a volunteer contribution estimator, an AI-based pre-labeling function, and an image gallery feature. To achieve this, we integrate a specially designed deep learning model called MultiCLU into Smart DoorFront. We also introduce an online machine learning mechanism to iteratively train the MultiCLU model using newly labeled storefront accessibility objects and their locations in images. Our new DoorFront platform not only significantly improves the efficiency of storefront accessibility data collection, but also optimizes the user experience. We conducted interviews with six adults who are blind to better understand their daily travel challenges; their feedback indicated that the storefront accessibility data collected via the DoorFront platform would be very beneficial for them.
    Free, publicly-accessible full text available June 1, 2024
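The iterative (online) training mechanism described above can be sketched as a loop that folds each newly labeled example into the model as it arrives. This is a toy illustration, not the actual MultiCLU code: MultiCLU is a deep detection model, and a simple logistic-regression learner trained by SGD stands in for it here, with newly labeled storefront objects simulated as (features, label) pairs. All names and values are assumptions.

```python
# Toy sketch of an online (incremental) training loop. A plain
# logistic regression updated by SGD stands in for the MultiCLU deep
# model; the "newly labeled" examples are simulated, not real data.
import math

def sgd_update(w, x, y, lr=0.1):
    """One gradient step of logistic regression on a single example."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
    return [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]

def online_train(w, labeled_stream):
    """Fold each newly labeled (features, label) pair into the model."""
    for x, y in labeled_stream:
        w = sgd_update(w, x, y)
    return w

# Simulated batch of newly labeled storefront objects: (bias, feature).
new_labels = [([1.0, 2.0], 1), ([1.0, -1.5], 0), ([1.0, 1.8], 1)]
w = online_train([0.0, 0.0], new_labels)
```

In a real deployment the same shape of loop would wrap periodic fine-tuning of the deep model on batches of freshly validated labels rather than per-example SGD on raw features.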
  2. Santiago, J. (Ed.)
    Storefront accessibility can substantially impact the way people who are blind or visually impaired (BVI) travel in urban environments. Entrance localization is one of the biggest challenges for BVI people. In addition, improperly designed staircases and obstructive store decorations can create considerable mobility challenges for BVI people, making it more difficult for them to navigate their communities and hence reducing their desire to travel. Unfortunately, there are few approaches to acquiring this information in advance through computational tools or services. In this paper, we propose a solution for collecting large-scale accessibility data on New York City (NYC) storefronts using a crowdsourcing approach on Google Street View (GSV) panoramas. We develop a web-based crowdsourcing application, DoorFront, which enables volunteers not only to remotely label storefront accessibility data on GSV images, but also to validate the labeling results to ensure high data quality. To study the usability and user experience of our application, we conducted an informal beta test and administered a user-experience survey to volunteer testers. The user feedback was very positive and indicates the high potential and usability of the proposed application.
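The label-then-validate workflow can be illustrated with a minimal consensus check: a volunteer-supplied label for a storefront object is accepted only when enough validators agree. The vote counts and thresholds below are illustrative assumptions, not DoorFront's actual validation rules.

```python
# Minimal sketch of crowdsourced label validation by majority vote.
# Thresholds (3 votes, 2/3 agreement) are assumptions for illustration.
from collections import Counter

def accept_label(votes, min_votes=3, min_agreement=2 / 3):
    """Return the majority label if enough validators agree, else None."""
    if len(votes) < min_votes:
        return None                      # not yet validated by enough people
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None

# Example: three volunteers label the same GSV bounding box.
accept_label(["door", "door", "stairs"])   # majority "door" (2 of 3)
accept_label(["door", "stairs"])           # too few votes -> None
```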
  3. ABSTRACT

    GRB 230812B is a bright and relatively nearby (z = 0.36) long gamma-ray burst (GRB) that has generated significant interest in the community and has thus been observed over the entire electromagnetic spectrum. We report over 80 observations in X-ray, ultraviolet, optical, infrared, and submillimetre bands from the GRANDMA (Global Rapid Advanced Network for Multimessenger Addicts) network of observatories and from observational partners. Adding complementary data from the literature, we then derive essential physical parameters associated with the ejecta and external properties (i.e. the geometry and environment) of the GRB and compare with other analyses of this event. We spectroscopically confirm the presence of an associated supernova, SN2023pel, and we derive a photospheric expansion velocity of $v \sim 17 \times 10^{3}$ km s$^{-1}$. We analyse the photometric data first using empirical fits of the flux and then with full Bayesian inference. We again strongly establish the presence of a supernova in the data, with a maximum (pseudo-)bolometric luminosity of $5.75 \times 10^{42}$ erg s$^{-1}$, at $15.76^{+0.81}_{-1.21}$ d (in the observer frame) after the trigger, with a half-max time width of 22.0 d. We compare these values with those of SN1998bw, SN2006aj, and SN2013dx. Our best-fitting model favours a very low density environment ($\log _{10}({n_{\rm ISM}/{\rm cm}^{-3}}) = -2.38^{+1.45}_{-1.60}$) and small values for the jet’s core angle $\theta _{\rm core} = 1.54^{+1.02}_{-0.81} \ \rm {deg}$ and viewing angle $\theta _{\rm obs} = 0.76^{+1.29}_{-0.76} \ \rm {deg}$. GRB 230812B is thus one of the best observed afterglows with a distinctive supernova bump.

     
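As a rough illustration of what a pseudo-bolometric luminosity is, one can integrate monochromatic fluxes over the observed frequency range and scale by the luminosity distance. The sketch below uses the trapezoidal rule with made-up inputs; the paper's actual estimate comes from empirical fits and full Bayesian inference of the multi-band photometry, not from this simple quadrature.

```python
# Hedged sketch of a pseudo-bolometric luminosity estimate:
#   L = 4*pi*d_L^2 * integral(f_nu dnu)
# integrated with the trapezoidal rule over the sampled bands.
# Inputs here are placeholders, not GRB 230812B / SN2023pel data.
import math

def pseudo_bolometric(nu_hz, f_nu, d_l_cm):
    """Trapezoidal integral of f_nu over frequency, scaled by 4*pi*d_L^2."""
    integral = sum(0.5 * (f_nu[i] + f_nu[i + 1]) * (nu_hz[i + 1] - nu_hz[i])
                   for i in range(len(nu_hz) - 1))
    return 4.0 * math.pi * d_l_cm ** 2 * integral
```

Because only the observed bands are integrated, this is a lower bound on the true bolometric luminosity, which is why such estimates are labelled "pseudo-bolometric".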
  4. M. Hadwiger, M. Larsen (Ed.)
    In this work, we present Unity Point-Cloud Interactive Core, a novel interactive point-cloud rendering pipeline for the Unity Development Platform. The goal of the proposed pipeline is to expedite the development process for point-cloud applications by encapsulating the rendering process as a standalone component, while maintaining flexibility through an implementable interface. The proposed pipeline allows for rendering arbitrarily large point clouds with improved performance and visual quality. First, a novel dynamic batching scheme is proposed to address the adaptive point-sizing problem for level-of-detail (LOD) point-cloud structures. Then, an approximate rendering algorithm is proposed to reduce overdraw by minimizing the overall number of fragment operations through an intermediate occlusion-culling pass. For analysis, the visual quality of renderings is quantified by comparison against a high-quality baseline. In our experiments, the proposed pipeline maintains above 90 FPS for a 20-million-point budget while achieving greater than 90% visual quality during interaction when rendering a point cloud with more than 20 billion points.
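The point-budget idea behind LOD point-cloud rendering can be sketched as a greedy traversal: visit octree nodes from coarse to fine, preferring nodes with larger projected screen coverage, until the budget of renderable points is spent. This is an illustrative stand-in, not the pipeline's actual dynamic batching scheme; the node fields and the screen-area priority heuristic are assumptions.

```python
# Illustrative greedy LOD cut for a point-cloud octree under a point
# budget. Nodes are plain dicts; "screen_area" is an assumed priority.
import heapq

def select_nodes(root, budget):
    """Pick highest-priority nodes, coarse to fine, within the point budget."""
    selected, spent = [], 0
    heap = [(-root["screen_area"], 0, root)]  # max-heap via negated priority
    tiebreak = 1                              # keeps tuple comparison total
    while heap and spent < budget:
        _, _, node = heapq.heappop(heap)
        if spent + node["points"] > budget:
            continue                          # node does not fit; skip it
        selected.append(node["name"])
        spent += node["points"]
        for child in node.get("children", []):
            heapq.heappush(heap, (-child["screen_area"], tiebreak, child))
            tiebreak += 1
    return selected, spent

# Example: a root with two children of unequal screen coverage.
tree = {"name": "root", "points": 100, "screen_area": 10.0, "children": [
    {"name": "a", "points": 50, "screen_area": 6.0},
    {"name": "b", "points": 80, "screen_area": 4.0},
]}
cut = select_nodes(tree, budget=160)
```

A real implementation would refresh this cut every frame as the camera moves and hand each selected node to the batching and occlusion-culling stages.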