Creators/Authors contains: "Qin, Yu"


  1. Extreme wind phenomena are critical to the efficient operation of wind farms for renewable energy generation. However, existing detection methods are computationally expensive and limited to specific coordinates, while real-world applications require understanding where these phenomena occur over a large area. There is therefore significant demand for a fast and accurate approach to forecasting such events. In this paper, we propose a novel method for detecting wind phenomena using topological analysis, leveraging the gradient of wind speed or critical points in a topological framework. By extracting topological features from the wind-speed profile within a defined region, we employ a topological distance to identify extreme wind phenomena. Our results demonstrate the effectiveness of topological features derived from regional wind-speed profiles. We validate our approach on a month of high-resolution simulations from the Weather Research and Forecasting (WRF) model over the US East Coast.
    Free, publicly-accessible full text available October 22, 2024
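The abstract above describes extracting topological features from a regional wind-speed profile. As a minimal sketch of the kind of feature involved, the code below computes 0-dimensional sublevel-set persistence pairs for a sampled 1D wind-speed signal: each local minimum is paired with the height at which its component merges into an older one (the elder rule). The function name and the union-find formulation are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def sublevel_persistence_1d(values):
    """0-dimensional sublevel-set persistence of a 1D signal.

    Components of the sublevel set {x : f(x) <= t} are born at local
    minima and die when they merge at local maxima (elder rule).
    Returns (birth, death) pairs with positive persistence; the global
    minimum's component is paired with the global maximum.
    """
    n = len(values)
    order = np.argsort(values, kind="stable")
    parent = [None] * n        # None: sample not yet added
    birth = {}                 # component root index -> birth value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:            # process samples by increasing value
        i = int(i)
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri          # elder rule: younger component dies
                if values[i] > birth[rj]:    # skip zero-persistence pairs
                    pairs.append((birth[rj], values[i]))
                parent[rj] = ri
                del birth[rj]
    root = find(int(order[0]))
    pairs.append((birth[root], float(np.max(values))))
    return sorted(pairs)

profile = [5.0, 3.0, 6.0, 2.0, 8.0]   # toy wind speeds (m/s)
sublevel_persistence_1d(profile)      # -> [(2.0, 8.0), (3.0, 6.0)]
```

Diagrams extracted this way per region could then be compared with a topological distance, as the abstract proposes; that comparison step is not shown here.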
  3. This paper presents the first approach to visualizing the importance of the topological features that define classes of data. Topological features, with their ability to abstract the fundamental structure of complex data, are an integral component of visualization and analysis pipelines. However, not all topological features present in data are of equal importance, and to date the definition of feature importance has typically been assumed and fixed. This work shows how proven explainable deep learning approaches can be adapted for use in topological classification, providing the first technique that illuminates which topological structures are important in each dataset with respect to its class label. In particular, the approach uses a learned metric classifier that takes a density estimate of the points of a persistence diagram as input. The metric learns how to reweight this density so that classification accuracy is high. By extracting the learned weights, an importance field over persistence-point density can be created, giving an intuitive representation of persistence-point importance that can drive new visualizations. This work provides two examples: visualization directly on each diagram and, in the case of sublevel-set filtrations on images, directly on the images themselves. It highlights real-world examples of this approach by visualizing the important topological features in graph, 3D shape, and medical image data.
    Free, publicly-accessible full text available October 22, 2024
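As a rough sketch of the density step the abstract describes, the code below rasterizes a persistence diagram into a density grid (a persistence-image-style summary, with one Gaussian bump per (birth, persistence) point) and applies a multiplicative importance field. The names `diagram_density` and `reweighted_score`, the Gaussian estimator, and the grid parameters are all assumptions for illustration; in the paper the importance field is learned by a metric classifier, which is not reproduced here.

```python
import numpy as np

def diagram_density(diagram, grid_size=16, sigma=0.1, bounds=(0.0, 1.0)):
    """Rasterize a persistence diagram into a density grid.

    Each (birth, death) point contributes a Gaussian bump at
    (birth, persistence) on a grid_size x grid_size grid over `bounds`.
    """
    lo, hi = bounds
    xs = np.linspace(lo, hi, grid_size)
    gx, gy = np.meshgrid(xs, xs, indexing="ij")  # gx: birth axis, gy: persistence axis
    density = np.zeros((grid_size, grid_size))
    for birth, death in diagram:
        pers = death - birth
        density += np.exp(-((gx - birth) ** 2 + (gy - pers) ** 2) / (2 * sigma**2))
    return density / max(len(diagram), 1)

def reweighted_score(density, importance):
    """Apply a (here: given, in the paper: learned) importance field."""
    return float(np.sum(density * importance))
```

With a uniform importance field this reduces to the total density mass; a learned field would instead emphasize the regions of the diagram that matter for the class label.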
  4. Multipeaked supernovae with precursors, dramatic light-curve rebrightenings, and spectral transformations are rare, but are being discovered in increasing numbers by modern night-sky transient surveys like the Zwicky Transient Facility. Here, we present the observations and analysis of SN 2023aew, which showed a dramatic increase in brightness following an initial luminous (−17.4 mag), long (∼100 days), and unusual first peak (possibly a precursor). SN 2023aew was classified as a Type IIb supernova during the first peak but changed its type to resemble a stripped-envelope supernova (SESN) after the marked rebrightening. We present comparisons of SN 2023aew’s spectral evolution with SESN subtypes and argue that it is similar to SNe Ibc during its main peak. P-Cygni Balmer lines are present during the first peak but vanish during the second peak’s photospheric phase, before Hα resurfaces during the nebular phase. The nebular lines ([O I], [Ca II], Mg I], Hα) exhibit a double-peaked structure that hints at clumpy or nonspherical ejecta. We analyze the second peak in the light curve of SN 2023aew and find it to be broader than that of normal SESNe, as well as requiring a very high 56Ni mass to power the peak luminosity. We discuss the possible origins of SN 2023aew, including an eruption scenario in which part of the envelope is ejected during the first peak and also powers the second peak of the light curve through interaction of the SN with the circumstellar medium.
  5. Persistence diagrams have been widely used to quantify the underlying features of filtered topological spaces in data visualization. In many applications, computing distances between diagrams is essential, but it has remained challenging due to the computational cost. In this paper, we propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams, allowing fast computation of distances. The framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process. Instead of using standard representations, we hash diagrams into binary codes, which have natural advantages in large-scale tasks. The training of this model is domain-oblivious in that it can be performed purely on synthetic, randomly created diagrams; as a consequence, our proposed method is directly applicable to various datasets without retraining the model. These binary codes, when compared using fast Hamming distance, better maintain topological similarity between datasets than other vectorized representations. To evaluate this method, we apply our framework to the problem of diagram clustering and compare the quality and performance of our approach to the state of the art. In addition, we show the scalability of our approach on a dataset with 10k persistence diagrams, which is not possible with current techniques. Our experimental results demonstrate that our method is significantly faster, with potentially lower memory usage, while retaining comparable or better quality comparisons.
  6. The persistence diagram (PD) is an important tool in topological data analysis for encoding an abstract representation of the homology of a shape at different scales. Vectorized PD summaries are commonly used in machine learning applications; however, distances between vectorized persistence summaries may differ greatly from the distances between the original PDs. Surprisingly, no research has been carried out in this area before. In this work, we compare distances between PDs and between several commonly used vectorizations. Our results give new insights into comparing vectorized persistence summaries and can be used to design better feature-based learning models based on PDs.
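To make the mismatch studied above concrete, here is a deliberately simple, hypothetical vectorization (persistence values sorted in descending order and zero-padded) applied to a toy pair of diagrams; neither the vectorization nor the diagrams come from the paper.

```python
import numpy as np

def persistence_vector(diagram, length=8):
    """Toy PD summary: persistence values, descending, zero-padded."""
    pers = sorted((death - birth for birth, death in diagram), reverse=True)
    pers = pers[:length]
    return np.array(pers + [0.0] * (length - len(pers)))

# Two diagrams that differ by one low-persistence point:
d1 = [(0.0, 1.0), (0.2, 0.5)]
d2 = [(0.0, 1.0)]
vec_dist = float(np.linalg.norm(persistence_vector(d1) - persistence_vector(d2)))
```

Here the Euclidean distance between the vectors is 0.3, while the bottleneck distance between the PDs themselves is 0.15 (the extra point (0.2, 0.5) is matched to the diagonal at cost half its persistence under the usual L∞ ground metric), so even this toy pair shows the two notions of distance diverging.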