

Search for: All records

Creators/Authors contains: "Chen, Ping"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Understanding chaotic systems is challenging not only for theoretical research but also for many important applications. Chaotic behavior arises in many nonlinear dynamical systems, such as climate dynamics, weather, the stock market, and the space-time dynamics of virus spread. A reliable solution for these systems must handle their complex space-time dynamics and sensitive dependence on initial conditions. We develop a deep learning framework that pushes the time horizon at which reliable predictions can be made further into the future by better evaluating the consequences of local errors when modeling nonlinear systems. Our approach traces the future trajectories of initial errors out to a time horizon and models the evolution of the loss up to that point with two major components: 1) a recurrent architecture, Error Trajectory Tracing, designed to trace the trajectories of predictive errors through phase space, and 2) a training regime, Horizon Forcing, that pushes the model's focus out to a predetermined time horizon. We validate our method on classic chaotic systems and on real-world time-series prediction tasks with chaotic characteristics, and show that it outperforms current state-of-the-art methods.
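A minimal sketch of the horizon-focused training idea, assuming a PyTorch-style setup. The names (RecurrentForecaster, horizon_loss) are hypothetical and this is not the authors' Error Trajectory Tracing implementation; it only illustrates rolling a recurrent model out to a fixed horizon on its own predictions so the loss reflects how early errors propagate:

```python
# Hedged sketch of horizon-focused training (hypothetical names, not the
# paper's released code): roll the model forward H steps on its own
# predictions and take the loss over the whole rollout.
import torch
import torch.nn as nn

class RecurrentForecaster(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.cell = nn.GRUCell(dim, hidden)
        self.readout = nn.Linear(hidden, dim)

    def rollout(self, x0, horizon):
        h = torch.zeros(x0.size(0), self.cell.hidden_size)
        x, preds = x0, []
        for _ in range(horizon):
            h = self.cell(x, h)
            x = self.readout(h)          # feed the prediction back in
            preds.append(x)
        return torch.stack(preds, dim=1)  # (batch, horizon, dim)

def horizon_loss(model, x0, targets, horizon):
    # Penalize error over the rollout up to the chosen horizon,
    # rather than a single step ahead.
    preds = model.rollout(x0, horizon)
    return nn.functional.mse_loss(preds, targets[:, :horizon])

# toy usage on random data
model = RecurrentForecaster(dim=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0 = torch.randn(8, 3)
targets = torch.randn(8, 20, 3)
opt.zero_grad()
loss = horizon_loss(model, x0, targets, horizon=20)
loss.backward()
opt.step()
```

Feeding predictions back in, rather than teacher-forcing with ground truth, is what exposes the growth of small errors that chaotic dynamics amplify.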
  2. Learning sentence representations that capture rich semantic meaning is crucial for many NLP tasks. Pre-trained language models such as BERT have achieved great success in NLP, but sentence embeddings extracted directly from these models do not perform well without fine-tuning. We propose Contrastive Learning of Sentence Representations (CLSR), a novel approach that applies contrastive learning to learn universal sentence representations on top of pre-trained language models. CLSR uses the semantic similarity of two sentences to construct positive instances for contrastive learning. Semantic information already captured by the pre-trained models is preserved by extracting sentence embeddings from them with a proper pooling strategy. An encoder followed by a linear projection takes these embeddings as inputs and is trained under a contrastive objective. To evaluate CLSR, we run experiments on a range of pre-trained language models and their variants across a series of Semantic Textual Similarity tasks. Results show that CLSR achieves significant performance improvements over existing state-of-the-art language models.
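The contrastive objective can be illustrated with an in-batch InfoNCE-style loss over pooled embeddings. This is a hedged sketch, since the abstract does not specify CLSR's exact loss, and ProjectionHead / info_nce are hypothetical names; in practice emb_a and emb_b would be pooled encoder outputs (e.g., mean-pooled BERT states) for pairs of semantically similar sentences, with random tensors standing in here:

```python
# Hedged sketch of a contrastive objective over pooled sentence embeddings
# (hypothetical names; not necessarily CLSR's exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    # encoder embedding -> linear projection, trained contrastively
    def __init__(self, dim_in=768, dim_out=256):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

def info_nce(z_a, z_b, temperature=0.05):
    # (z_a[i], z_b[i]) are positive pairs; other rows in the batch
    # act as in-batch negatives.
    logits = z_a @ z_b.t() / temperature   # (batch, batch) similarities
    labels = torch.arange(z_a.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

head = ProjectionHead()
emb_a = torch.randn(16, 768)  # pooled embeddings of sentences
emb_b = torch.randn(16, 768)  # pooled embeddings of similar counterparts
loss = info_nce(head(emb_a), head(emb_b))
loss.backward()
```

Normalizing the projected vectors makes the logits cosine similarities, so the temperature directly controls how sharply the loss separates positives from in-batch negatives.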
  3. Abstract

    Using data from the Complete Nearby (redshift z_host < 0.02) sample of Type Ia Supernovae (CNIa0.02), we find a linear relation between two parameters derived from the B−V color curves of Type Ia supernovae: the color stretch s_BV and the rising color slope after the peak, s0*(B−V), and this relation applies to the full range of s_BV. The s_BV parameter is known to be tightly correlated with the peak luminosity, especially for fast decliners (dim Type Ia supernovae), and the luminosity correlation with s_BV is markedly better than with classic light-curve width parameters such as Δm15(B). Thus, our new linear relation can be used to infer peak luminosity from s0*. Unlike s_BV (or Δm15(B)), measuring s0*(B−V) does not rely on a well-determined time of light-curve peak or color maximum, making it less demanding on light-curve coverage than past approaches.
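Schematically, the reported relation has the linear form below, where α and β are fit coefficients whose values are not quoted in this abstract:

```latex
% schematic form of the reported linear relation;
% \alpha and \beta are fit coefficients (values not given here)
s_{BV} \;=\; \alpha \, s_{0}^{*}(B-V) \;+\; \beta
```

Because s0*(B−V) is a slope measured after the peak, it can be obtained without pinning down the epoch of maximum light, which is what relaxes the light-curve coverage requirement.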