
Search for: All records

Creators/Authors contains: "Huang, Xin"


  1. Spatial resolution is critical for observing and monitoring environmental phenomena. Acquiring high-resolution bathymetry data directly from satellites is not always feasible due to equipment limitations, so spatial data scientists and researchers turn to single image super-resolution (SISR) methods that use deep learning as an alternative way to increase pixel density. While super-resolution residual networks (e.g., SR-ResNet) are promising for this purpose, several challenges remain: (1) Earth data such as bathymetry is expensive to obtain and relatively limited in volume; (2) certain domain knowledge must be complied with during model training; (3) certain areas of interest require more accurate measurements than others. To address these challenges, following the transfer learning principle, we study how to leverage an existing pre-trained super-resolution deep learning model, namely SR-ResNet, for high-resolution bathymetry data generation. We further enhance the SR-ResNet model by adding loss functions based on domain knowledge. To make the model perform better in certain spatial areas, we add loss functions that increase the penalty on areas of interest. Our experiments show that our approaches achieve higher accuracy than most baseline models when evaluated with metrics including MSE, PSNR, and SSIM.
    Free, publicly-accessible full text available January 1, 2024
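The area-of-interest penalty described in item 1 can be illustrated with a small sketch. This is a minimal, hypothetical weighting scheme (the function name, the uniform per-pixel weight, and the mask-based formulation are assumptions, not the paper's actual loss):

```python
import numpy as np

def region_weighted_mse(pred, target, interest_mask, weight=4.0):
    """Mean squared error in which pixels inside the area-of-interest
    mask are penalized `weight` times more heavily than the rest.
    A stand-in for the paper's loss; the exact formulation may differ."""
    w = np.where(interest_mask, weight, 1.0)
    return float(np.sum(w * (pred - target) ** 2) / np.sum(w))
```

With this loss, an error of the same magnitude costs more inside the masked region than outside it, steering training toward accuracy where it matters most.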
  2. Domain adaptation techniques using deep neural networks have mainly been used to solve the distribution shift problem in homogeneous domains, where data usually share similar feature spaces and have the same dimensionalities. Nevertheless, real-world applications often deal with heterogeneous domains that come from completely different feature spaces with different dimensionalities. In our remote sensing application, two datasets collected by an active sensor and a passive one are heterogeneous. In particular, CALIOP actively measures each atmospheric column; in this study, 25 measured variables/features that are sensitive to cloud phase are used, and they are fully labeled. VIIRS is an imaging radiometer that collects radiometric measurements of the surface and atmosphere in the visible and infrared bands. Recent studies have shown that passive sensors may have difficulty predicting cloud/aerosol types in complicated atmospheres (e.g., overlapping cloud and aerosol layers, cloud over snow/ice surfaces, etc.). To overcome the challenge of cloud property retrieval from passive sensors, we develop a novel VAE-based approach (VDAM) that learns a domain-invariant representation capturing the spatial patterns in multiple satellite remote sensing datasets, in order to build a domain-invariant cloud property retrieval method that accurately classifies different cloud types (labels) in the passive sensing dataset. We further exploit a weight-based alignment method on the label space to learn a powerful domain adaptation technique pertinent to the remote sensing application. Experiments demonstrate that our method outperforms other state-of-the-art machine learning methods and achieves higher accuracy in cloud property retrieval on the passive satellite dataset.
    Free, publicly-accessible full text available November 1, 2023
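One ingredient of weight-based alignment on the label space, as in item 2, is deriving per-class weights from the labeled source domain. The sketch below is a generic inverse-frequency weighting, a simple stand-in only; the function name and the exact weighting rule are assumptions, not the paper's method:

```python
import numpy as np

def label_alignment_weights(source_labels, num_classes):
    """Per-class weights that up-weight classes that are rare in the
    labeled (source) domain, so that a loss reweighted by them does not
    let frequent classes dominate. Illustrative only."""
    counts = np.bincount(source_labels, minlength=num_classes).astype(float)
    counts = np.maximum(counts, 1.0)  # avoid division by zero for absent classes
    return counts.sum() / (num_classes * counts)
```

Such weights can then multiply a per-sample classification loss so that the label distributions of the two domains contribute more evenly during adaptation.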
  3. Free, publicly-accessible full text available November 6, 2023
  4. We analyze the run-time complexity of computing allocations that are both fair and maximize the utilitarian social welfare, defined as the sum of agents’ utilities. We focus on two tractable fairness concepts: envy-freeness up to one item (EF1) and proportionality up to one item (PROP1). We consider two computational problems: (1) Among the utilitarian-maximal allocations, decide whether there exists one that is also fair; (2) among the fair allocations, compute one that maximizes the utilitarian welfare. We show that both problems are strongly NP-hard when the number of agents is variable, and remain NP-hard for a fixed number of agents greater than two. For the special case of two agents, we find that problem (1) is polynomial-time solvable, while problem (2) remains NP-hard. Finally, with a fixed number of agents, we design pseudopolynomial-time algorithms for both problems. We extend our results to the stronger fairness notions envy-freeness up to any item (EFx) and proportionality up to any item (PROPx).
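The EF1 notion in item 4 is easy to check for a given allocation: agent i may envy agent j only if the envy persists even after removing i's most-valued item from j's bundle. A minimal verifier (function and variable names are my own, not from the paper):

```python
def is_ef1(utilities, allocation):
    """Check envy-freeness up to one item (EF1) for additive utilities.
    utilities[i][g] is agent i's value for item g;
    allocation[k] is the list of items held by agent k."""
    n = len(allocation)
    for i in range(n):
        u_own = sum(utilities[i][g] for g in allocation[i])
        for j in range(n):
            if i == j:
                continue
            u_other = sum(utilities[i][g] for g in allocation[j])
            if u_own >= u_other:
                continue  # no envy toward j at all
            # EF1 permits dropping the single item i values most in j's bundle
            best = max((utilities[i][g] for g in allocation[j]), default=0)
            if u_own < u_other - best:
                return False
    return True
```

Note this only verifies a candidate allocation; the hardness results in the abstract concern finding allocations that are simultaneously fair and utilitarian-maximal.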
  5. Free, publicly-accessible full text available July 1, 2023
  6. MODIS (Moderate Resolution Imaging Spectroradiometer) is a key instrument onboard NASA's Terra (launched in 1999) and Aqua (launched in 2002) satellite missions, part of the larger Earth Observation System (EOS). By measuring the reflection and emission of the Earth–atmosphere system in 36 spectral bands from the visible to the thermal infrared, with near-daily global coverage and high spatial resolution (250 m to 1 km at nadir), MODIS plays a vital role in developing validated, global, interactive Earth system models. MODIS products are processed into three levels: Level-1 (L1), Level-2 (L2), and Level-3 (L3). To move beyond the current static, "one-size-fits-all" provision of MODIS products, in this paper we propose a service-oriented, flexible, and efficient MODIS aggregation framework. With this framework, users can request aggregated MODIS L3 data tailored to their unique requirements, and the aggregation can run in parallel to achieve a speedup. The experiments show that our aggregation results are almost identical to the current MODIS L3 products and that our parallel execution with 8 computing nodes runs 88.63 times faster than serial execution on a single node.
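The parallel aggregation in item 6 relies on the fact that a mean over grid cells decomposes into per-chunk (sum, count) grids that combine by elementwise addition. A hedged sketch of that idea (function names, the NaN convention for empty cells, and the precomputed grid indices are my assumptions, not the paper's implementation):

```python
import numpy as np

def aggregate_chunk(lat_idx, lon_idx, values, grid_shape):
    """Reduce one chunk of L2 pixels into per-cell (sum, count) grids.
    Chunks can be processed independently (e.g., one per compute node);
    lat_idx/lon_idx are assumed precomputed from each pixel's lat/lon."""
    s = np.zeros(grid_shape)
    c = np.zeros(grid_shape)
    np.add.at(s, (lat_idx, lon_idx), values)  # accumulate values per cell
    np.add.at(c, (lat_idx, lon_idx), 1)       # count pixels per cell
    return s, c

def combine(parts):
    """Merge partial (sum, count) grids from all chunks and return the
    L3-style per-cell mean, with never-observed cells left as NaN."""
    total_s = sum(p[0] for p in parts)
    total_c = sum(p[1] for p in parts)
    return np.where(total_c > 0, total_s / np.maximum(total_c, 1), np.nan)
```

Because `combine` is associative and commutative over the partial grids, the chunk results can be merged in any order, which is what makes the near-linear speedup over serial execution possible.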