

Search for: All records

Creators/Authors contains: "Wu, Ying"

Note: Clicking a Digital Object Identifier (DOI) takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available June 17, 2025
  2. Free, publicly-accessible full text available June 17, 2025
  3. Free, publicly-accessible full text available December 16, 2024
  4. Free, publicly-accessible full text available December 1, 2024
  5. Free, publicly-accessible full text available October 6, 2024
  6. Abstract

    Powdery mildew fungi are obligate biotrophic pathogens that only invade plant epidermal cells. There are two epidermal surfaces in every plant leaf: the adaxial (upper) side and the abaxial (lower) side. While both leaf surfaces can be susceptible to adapted powdery mildew fungi in many plant species, there have been observations of leaf abaxial immunity in some plant species including Arabidopsis. The genetic basis of such leaf abaxial immunity remains unknown. In this study, we tested a series of Arabidopsis mutants defective in one or more known defense pathways with the adapted powdery mildew isolate Golovinomyces cichoracearum UCSC1. We found that leaf abaxial immunity was significantly compromised in mutants impaired for both the EDS1/PAD4- and PEN2/PEN3-dependent defenses. Consistently, expression of EDS1–yellow fluorescent protein and PEN2–green fluorescent protein fusions from their respective native promoters in the respective eds1-2 and pen2-1 mutant backgrounds was higher in the abaxial epidermal cells than in the adaxial epidermal cells. Altogether, our results indicate that leaf abaxial immunity against powdery mildew in Arabidopsis is at least partially due to enhanced EDS1/PAD4- and PEN2/PEN3-dependent defenses. Such transcriptionally pre-programmed defense mechanisms may underlie leaf abaxial immunity in other plant species such as hemp and may be exploited for engineering adaxial immunity against powdery mildew fungi in crop plants.

     
  7. This paper considers the active recognition scenario, in which an agent intelligently acquires observations to improve recognition. The agent typically comprises two modules, a policy and a recognizer, which select actions and predict the category, respectively. While the recognizer is supervised with ground-truth class labels, the policy is usually updated with rewards determined by the current in-training recognizer, such as whether the prediction is correct. However, this joint learning process can lead to unintended solutions, such as a collapsed policy that only visits views on which the recognizer is already sufficiently trained, so as to collect rewards, which harms generalization. We call this phenomenon lingering, depicting an agent that is reluctant to explore challenging views during training. Existing approaches to the exploration-exploitation trade-off can be ineffective here, as they usually assume reliable feedback during exploration to update the value estimates of rarely visited states; this assumption does not hold because the reward comes from a recognizer that may itself be insufficiently trained. To this end, our approach integrates an adversarial policy that constantly disturbs the recognition agent during training, forming a competing game that promotes active exploration and avoids lingering. The reinforced adversary, rewarded when recognition fails, contests the recognition agent by turning the camera toward challenging observations. Extensive experiments on two datasets validate the effectiveness of the proposed approach in terms of recognition performance, learning efficiency, and especially robustness to environmental noise.
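    The competing game described in the abstract can be illustrated with a toy reward scheme. This is a minimal sketch, not the paper's implementation: `recognizer_confidence` is a hypothetical stand-in for an in-training recognizer, and the zero-sum reward split (agent rewarded on success, adversary rewarded on failure) is the only element taken from the abstract.

```python
def recognizer_confidence(view: int, true_label: int) -> float:
    # Toy stand-in for an in-training recognizer: some views are
    # "easy" (high confidence in the true label), others are hard.
    # The set of easy views is an arbitrary assumption for illustration.
    easy_views = {0, 1}
    return 0.9 if view in easy_views else 0.3

def zero_sum_rewards(view: int, true_label: int, threshold: float = 0.5):
    """Simplified competing-game rewards: the recognition agent is
    rewarded when recognition succeeds; the adversary, which steers
    the camera toward challenging views, is rewarded when it fails."""
    correct = recognizer_confidence(view, true_label) >= threshold
    agent_reward = 1.0 if correct else 0.0
    adversary_reward = 1.0 - agent_reward
    return agent_reward, adversary_reward
```

    Under this scheme the adversary's optimal play is to select hard views, which is precisely what counteracts the lingering behaviour: the recognition agent can no longer accumulate reward by revisiting only well-learned views.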
  8. Weakly-supervised Temporal Action Localization (WTAL) aims to classify and localize action instances in untrimmed videos given only video-level labels. Existing methods typically use snippet-level RGB and optical-flow features taken directly from pre-trained extractors. Because of two limitations, the short temporal span of snippets and unsuitable initial features, these WTAL methods make poor use of temporal information and achieve limited performance. In this paper, we propose the Temporal Feature Enhancement Dilated Convolution Network (TFE-DCN) to address both limitations. TFE-DCN has an enlarged receptive field that covers a long temporal span and observes the full dynamics of action instances, enabling it to capture temporal dependencies between snippets. Furthermore, we propose a Modality Enhancement Module that enhances RGB features with the help of enhanced optical-flow features, making the overall features suitable for the WTAL task. Experiments on the THUMOS’14 and ActivityNet v1.3 datasets show that the proposed approach far outperforms state-of-the-art WTAL methods.
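    The enlarged receptive field claimed for TFE-DCN follows from a standard property of stacked dilated convolutions. As a hedged sketch (the kernel size and dilation schedule below are illustrative assumptions, not the paper's configuration), the receptive field of a stack of stride-1 dilated 1-D convolutions over snippets is:

```python
def receptive_field(kernel_size: int, dilations) -> int:
    """Receptive field (in snippets) of stacked stride-1 dilated 1-D
    convolutions: RF = 1 + sum over layers of (kernel_size - 1) * d.
    Each layer with dilation d widens the field by (k - 1) * d."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Exponentially growing dilations enlarge the field far faster than
# plain convolutions with the same depth and parameter count:
wide = receptive_field(3, [1, 2, 4, 8])   # dilated stack -> 31 snippets
plain = receptive_field(3, [1, 1, 1, 1])  # undilated stack -> 9 snippets
```

    This is why a dilated-convolution backbone can cover the full temporal extent of an action instance without pooling away snippet-level resolution.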
  9. Existing solutions to instance-level visual identification usually aim to learn faithful, discriminative feature extractors from offline training data and apply them directly to unseen online testing data. Their performance is largely limited, however, by the severe distribution shift between training and testing samples. We therefore propose a novel online group-metric adaptation model that adapts offline-learned identification models to online data by learning a series of metrics, one for each sharing-subset. Each sharing-subset is produced by the proposed frequent sharing-subset mining module and contains a group of testing samples with strong mutual visual-similarity relationships. Furthermore, to handle potentially large-scale testing sets, we introduce self-paced learning (SPL) to gradually include samples in the adaptation from easy to difficult, simulating the learning principle of humans. Unlike existing online visual identification methods, our model simultaneously considers both sample-specific discriminability and set-based visual similarity among testing samples. The method is applicable to any off-the-shelf offline-learned visual identification baseline for online performance improvement, as verified by extensive experiments on several widely used visual identification benchmarks.
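    The easy-to-difficult scheduling mentioned in the abstract is the core idea of self-paced learning. A minimal sketch of the classic hard-weighting SPL rule (the function name and loss values are illustrative; the paper's actual weighting scheme may differ): a sample is admitted into adaptation only if its current loss falls below a pace parameter, which grows over iterations to admit progressively harder samples.

```python
def spl_select(losses, lam):
    """Hard-weighting self-paced selection: include sample i iff its
    loss is below the pace parameter lam. Growing lam over iterations
    moves the curriculum from easy samples to difficult ones."""
    return [i for i, loss in enumerate(losses) if loss < lam]

# Example curriculum: as lam grows, harder samples enter adaptation.
losses = [0.1, 0.9, 0.4]
early = spl_select(losses, lam=0.5)  # easy samples only -> [0, 2]
late = spl_select(losses, lam=1.0)   # all samples -> [0, 1, 2]
```

    In the full model this selection would alternate with metric updates: adapt the metric on the currently selected subset, recompute losses, then raise lam and reselect.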