

Search for: All records

Creators/Authors contains: "Li, Xin"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Local stresses in a tissue, a collective property, link cell division and dynamics. 
    Free, publicly-accessible full text available February 5, 2026
  2. Free, publicly-accessible full text available December 10, 2025
  3. Free, publicly-accessible full text available December 1, 2025
  4. Free, publicly-accessible full text available December 10, 2025
  5. Free, publicly-accessible full text available September 1, 2025
  6. Waves running up and down the beach (‘swash’) at the landward edge of the ocean can cause changes to the beach topology, can erode dunes, and can result in inland flooding. Despite the importance of swash, field observations are difficult to obtain in the thin, bubbly, and potentially sediment-laden fluid layers. Here, swash excursions along an Atlantic Ocean beach are estimated with a new framework, V-BeachNet, that uses a fully convolutional network to distinguish between sand and the moving edge of the wave in rapid sequences of images. V-BeachNet is trained with 16 randomly selected and manually segmented images of the swash zone, and is used to estimate swash excursions along 200 m of the shoreline by automatically segmenting four 1-h sequences of images that span a range of incident wave conditions. Data from a scanning lidar system are used to validate the swash estimates along a cross-shore transect within the camera field of view. V-BeachNet estimates of swash spectra, significant wave heights, and wave-driven setup (increases in the mean water level) agree with those estimated from the lidar data. 
    Free, publicly-accessible full text available September 1, 2025
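The quantities compared above (swash spectra, significant heights, and wave-driven setup) can be sketched from a runup elevation time series. The function below is a minimal illustration, not the paper's code; the function name, sample rate, and spectral normalization are assumptions.

```python
import numpy as np

def swash_stats(eta, fs=2.0):
    """Estimate wave-driven setup, the swash spectrum, and significant
    swash height from a runup elevation time series eta [m] sampled at fs [Hz]."""
    eta = np.asarray(eta, dtype=float)
    setup = eta.mean()                 # wave-driven setup: mean water level
    x = eta - setup
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    S = np.abs(X) ** 2 / (fs * n)      # one-sided periodogram [m^2/Hz]
    S[1:-1] *= 2.0                     # fold in negative frequencies
    m0 = S.sum() * (fs / n)            # zeroth moment = variance of eta
    sig_height = 4.0 * np.sqrt(m0)     # significant height = 4 * std. dev.
    return setup, f, S, sig_height
```

For example, a 0.5 m-amplitude sinusoidal runup riding on a 1 m mean level yields a setup near 1 m and a significant height near 4 · 0.5/√2 ≈ 1.41 m.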
  7. This study examines the effect of surface moisture flux on fog formation, as it is an essential factor of the water vapor distribution that supports fog formation. A one-way nested large-eddy simulation embedded in the mesoscale community Weather Research and Forecasting model is used to examine the effect of surface moisture flux on a cold fog event over the Heber Valley on January 16, 2015. Results indicate that the large-eddy simulation successfully reproduces the fog over the mountainous valley, with turbulent mixing transporting the fog aloft in the valley downward. However, the simulated fog is too dense and has higher humidity, a larger mean surface moisture flux, higher liquid water content, and longer duration relative to the observations. The sensitivity of fog simulations to surface moisture flux is then examined. Results indicate that reducing the surface moisture flux leads to fog with a shorter duration and a lower vertical extent than in the original simulation, as the decrease in surface moisture flux impairs water vapor transport from the surface. Consequently, the lower humidity combined with the cold air helps the model reproduce a realistic thin fog close to the observations. The outcomes of this study illustrate that a minor change in moisture flux can have a significant impact on the formation and evolution of fog events over complex terrain, even during the winter when moisture flux is typically very weak. 
    Free, publicly-accessible full text available May 12, 2025
  8. The goal of the trace reconstruction problem is to recover a string x ∈ {0, 1}^n given many independent traces of x, where a trace is a subsequence obtained by deleting each bit of x independently with some given probability. In this paper we consider two kinds of algorithms for the trace reconstruction problem. We first observe that the state-of-the-art result of Chase (STOC 2021), which is based on statistics of arbitrary length-k subsequences, can also be obtained by considering “k-mer statistics”, i.e., statistics regarding occurrences of contiguous k-bit strings (a.k.a. k-mers) in the initial string x, for an appropriate choice of k. Mazooji and Shomorony (ISIT 2023) show that such statistics (called the k-mer density map) can be estimated within accuracy ε from poly(n, 2^k, 1/ε) traces. We call an algorithm k-mer-based if it reconstructs x given estimates of the k-mer density map. Such algorithms essentially capture all the analyses in the worst-case and smoothed-complexity models of the trace reconstruction problem known so far. Our first, and technically more involved, result shows that any k-mer-based algorithm for trace reconstruction must use exp(Ω̃(n^{1/5})) traces, under the assumption that the estimator requires poly(2^k, 1/ε) traces, thus establishing the optimality of this number of traces. Our analysis also shows that the analysis technique used by Chase is essentially tight, and hence new techniques are needed in order to improve the worst-case upper bound. Our second, simple, result considers the performance of the Maximum Likelihood Estimator (MLE), which picks the source string that is most likely to generate the observed traces. We show that the MLE uses a nearly optimal number of traces, i.e., within a factor of n of the number of samples needed by an optimal algorithm, and show that this factor-of-n loss may be necessary under general “model estimation” settings. 
    Free, publicly-accessible full text available July 7, 2025
  9. The goal of the trace reconstruction problem is to recover a string x ∈ {0, 1}^n given many independent traces of x, where a trace is a subsequence obtained by deleting each bit of x independently with some given probability. In this paper we consider two kinds of algorithms for the trace reconstruction problem. We first observe that the state-of-the-art result of Chase (STOC 2021), which is based on statistics of arbitrary length-k subsequences, can also be obtained by considering “k-mer statistics”, i.e., statistics regarding occurrences of contiguous k-bit strings (a.k.a. k-mers) in the initial string x, for an appropriate choice of k. Mazooji and Shomorony (ISIT 2023) show that such statistics (called the k-mer density map) can be estimated within accuracy ε from poly(n, 2^k, 1/ε) traces. We call an algorithm k-mer-based if it reconstructs x given estimates of the k-mer density map. Such algorithms essentially capture all the analyses in the worst-case and smoothed-complexity models of the trace reconstruction problem known so far. Our first, and technically more involved, result shows that any k-mer-based algorithm for trace reconstruction must use exp(Ω̃(n^{1/5})) traces, under the assumption that the estimator requires poly(2^k, 1/ε) traces, thus establishing the optimality of this number of traces. Our analysis also shows that the analysis technique used by Chase is essentially tight, and hence new techniques are needed in order to improve the worst-case upper bound. Our second, simple, result considers the performance of the Maximum Likelihood Estimator (MLE), which picks the source string that is most likely to generate the observed traces. We show that the MLE uses a nearly optimal number of traces, i.e., within a factor of n of the number of samples needed by an optimal algorithm, and show that this factor-of-n loss may be necessary under general “model estimation” settings. 
    Free, publicly-accessible full text available July 7, 2025
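The deletion channel and the MLE described in the abstracts above can be illustrated with a toy sketch (not the paper's algorithm). Under i.i.d. deletions, P(trace t | x) is proportional to the number of distinct embeddings of t as a subsequence of x, so a brute-force MLE over tiny {0, 1}^n maximizes the product of embedding counts; all names here are hypothetical.

```python
import itertools
import math
import random

def trace(x, q, rng):
    """One pass of x through the deletion channel: each bit is dropped w.p. q."""
    return "".join(c for c in x if rng.random() >= q)

def embeddings(x, t):
    """Number of distinct ways t occurs as a subsequence of x (standard DP)."""
    dp = [1] + [0] * len(t)
    for c in x:
        for j in range(len(t), 0, -1):
            if t[j - 1] == c:
                dp[j] += dp[j - 1]
    return dp[len(t)]

def log_likelihood(x, traces):
    """Log-likelihood of x, up to an additive constant shared by all length-n x:
    the (1-q)^|t| q^(n-|t|) factor depends only on |t| and n, so it cancels."""
    total = 0.0
    for t in traces:
        cnt = embeddings(x, t)
        if cnt == 0:
            return float("-inf")  # x cannot produce this trace
        total += math.log(cnt)
    return total

def mle(traces, n):
    """Brute force over {0, 1}^n -- exponential time, for illustration only."""
    cands = ("".join(bits) for bits in itertools.product("01", repeat=n))
    return max(cands, key=lambda x: log_likelihood(x, traces))
```

For example, `mle([trace("101100", 0.2, rng) for _ in range(300)], 6)` searches all 64 candidates and returns the one whose embedding counts best explain the observed traces.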
  10. Free, publicly-accessible full text available September 6, 2025