
Search results for: All records where Creators/Authors contains "Ren, Yi"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site's.

  1. Free, publicly-accessible full text available October 1, 2023
  2. Recent studies have demonstrated that control policies learned through deep reinforcement learning are vulnerable to adversarial attacks, raising concerns about the application of such models to risk-sensitive tasks such as autonomous driving. Threat models for these demonstrations are limited to (1) targeted attacks through real-time manipulation of the agent's observation, and (2) untargeted attacks through manipulation of the physical environment. The former assumes full access to the agent's states/observations at all times, while the latter has no control over attack outcomes. This paper investigates the feasibility of targeted attacks through visually learned patterns placed on physical objects in the environment, a threat model that combines the practicality and effectiveness of the existing ones. Through analysis, we demonstrate that a pre-trained policy can be hijacked within a time window, e.g., made to perform an unintended self-parking, when an adversarial object is present. To enable the attack, we adopt the assumption that the attacker can learn the dynamics of both the environment and the agent. Lastly, we empirically show the effectiveness of the proposed attack on different driving scenarios, perform a location robustness test, and study the tradeoff between the attack strength and its effectiveness. Code is available at https://github.com/ASU-APG/Targeted-Physical-Adversarial-Attacks-on-AD (a toy sketch of this attack idea appears after this list).
    Free, publicly-accessible full text available May 23, 2023
  3. Free, publicly-accessible full text available August 11, 2023
  4. Generative models are now capable of synthesizing images, speech, and videos that are hardly distinguishable from authentic content. Such capabilities raise concerns such as malicious impersonation and IP theft. This paper investigates a solution for model attribution, i.e., the classification of synthetic content by its source model via watermarks embedded in the content. Building on the past success of model attribution in the image domain, we discuss algorithmic improvements for generating user-end speech models that empirically achieve high attribution accuracy while maintaining high generation quality. We show the tradeoff between attributability and generation quality under a variety of attacks on generated speech signals that attempt to remove the watermarks, and the feasibility of learning watermarks robust against these attacks. (A minimal watermark-detection sketch appears after this list.)
  5. Motivated by the parameter identification problem of a reaction-diffusion transport model in vapor phase infiltration processes, we propose a Bayesian optimization procedure for solving the inverse problem of finding an input setting that achieves a desired functional output. The proposed algorithm improves over standard single-objective Bayesian optimization by (i) using the generalized chi-square distribution as a more appropriate predictive distribution for the squared-distance objective function of the inverse problem, and (ii) applying functional principal component analysis to reduce the dimensionality of the functional response data, which allows for efficient approximation of the predictive distribution and the subsequent computation of the expected improvement acquisition function. (An illustrative sketch of this loop appears after this list.)
  6. Neutrophils are short-lived cells of the innate immune system and the first line of defense at the site of an infection or tissue injury. Pattern recognition receptors on neutrophils recognize pathogen-associated molecular patterns or danger-associated molecular patterns, which recruit the cells to the affected site. Neutrophils are professional phagocytes with efficient granular constituents that aid in the neutralization of pathogens. In addition to phagocytosis and degranulation, neutrophils are proficient in creating neutrophil extracellular traps (NETs) that immobilize pathogens to prevent their spread. Because of the cytotoxicity of the granular proteins within NETs, immobilized microbes can be killed directly. The role of neutrophils in infection is well studied; however, less emphasis has been placed on their role in tissue injury, such as traumatic spinal cord injury. Upon the initial mechanical injury, molecules produced by the resident cells of the injured spinal cord activate the innate immune system, initiating the inflammatory cascade. This review provides an overview of the essential role of neutrophils and explores their contribution to the pathologic changes in the injured spinal cord.
  7. Growing applications of generative models have led to new threats such as malicious impersonation and digital copyright infringement. One solution to these threats is model attribution, i.e., the identification of the user-end model from which the content in question was generated. Existing studies showed the empirical feasibility of attribution through a centralized classifier trained on all user-end models. However, this approach is not scalable as the number of models grows, nor does it provide an attributability guarantee. To this end, this paper studies decentralized attribution, which relies on binary classifiers associated with each user-end model. Each binary classifier is parameterized by a user-specific key and distinguishes its associated model distribution from the authentic data distribution. We develop sufficient conditions on the keys that guarantee an attributability lower bound. Our method is validated on the MNIST, CelebA, and FFHQ datasets. We also examine the trade-off between generation quality and robustness of attribution against adversarial post-processing. (A toy sketch of the keyed-classifier idea appears after this list.)
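
Sketch for item 2: a minimal, self-contained illustration of the attack idea, in which an attacker optimizes a physical patch texture against a frozen policy by differentiating through a learned dynamics model over a short time window. Every component here (the network sizes, the additive patch compositing, the target action, the 5-step rollout) is a hypothetical toy stand-in, not the authors' implementation from the linked repository.

    # Toy targeted physical-patch attack: optimize a patch so a frozen
    # policy is steered toward a target action within a time window.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    OBS = 32          # toy observation size (flattened "image")
    ACT = 2           # toy action dim (e.g., steering, throttle)

    policy = nn.Sequential(nn.Linear(OBS, 64), nn.ReLU(), nn.Linear(64, ACT))
    dynamics = nn.Sequential(nn.Linear(OBS + ACT, 64), nn.ReLU(), nn.Linear(64, OBS))
    for p in list(policy.parameters()) + list(dynamics.parameters()):
        p.requires_grad_(False)  # the attacker only optimizes the patch

    patch = torch.zeros(OBS, requires_grad=True)     # adversarial texture
    mask = (torch.arange(OBS) < 8).float()           # where the object appears
    target_action = torch.tensor([1.0, 0.0])         # e.g., "steer hard left"

    opt = torch.optim.Adam([patch], lr=1e-2)
    eps = 0.1  # attack strength (perturbation budget)

    for step in range(200):
        obs = torch.randn(OBS)                       # initial observation
        loss = 0.0
        # Roll the learned dynamics forward over a short time window and
        # push the policy toward the target action at every step.
        for t in range(5):
            adv_obs = obs + mask * eps * torch.tanh(patch)  # composite patch
            action = policy(adv_obs)
            loss = loss + ((action - target_action) ** 2).mean()
            obs = dynamics(torch.cat([adv_obs, action]))
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("final hijack loss:", float(loss))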
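
Sketch for item 4: a toy illustration of watermark-based attribution for audio, where a user-specific key is embedded as a small additive pattern in a generated waveform and recovered by normalized correlation. This is a generic correlation watermark sketched under stated assumptions (random keys, additive embedding), not the paper's actual scheme.

    # Toy correlation watermark for speech attribution.
    import numpy as np

    rng = np.random.default_rng(1)
    N, STRENGTH = 16000, 0.05            # 1 s at 16 kHz; embedding strength

    keys = {u: rng.standard_normal(N) for u in ("user_a", "user_b")}
    audio = np.sin(2 * np.pi * 220 * np.arange(N) / 16000)  # stand-in "speech"
    watermarked = audio + STRENGTH * keys["user_a"]          # embed the key

    def attribute(signal, keys, threshold=3.0):
        # Return the user whose key correlates with the signal above a
        # z-score-like threshold, or None for unwatermarked audio.
        for user, key in keys.items():
            z = (signal @ key) / (np.linalg.norm(key) * signal.std() + 1e-12)
            if z > threshold:
                return user
        return None

    print(attribute(watermarked, keys))  # expected: user_a
    print(attribute(audio, keys))        # expected: None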
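
Sketch for item 5: an illustrative version of the inverse-problem loop, in which functional responses are compressed with functional PCA, one Gaussian process is fit per retained score, and expected improvement of the squared distance to a target curve is estimated by Monte Carlo (the exact predictive law of that distance is a generalized chi-square, which the paper approximates more efficiently). The simulator, design sizes, and target are toy stand-ins.

    # FPCA + GP surrogates + Monte Carlo expected improvement.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)

    def simulator(x, t):
        # Hypothetical stand-in for the reaction-diffusion transport model.
        return np.exp(-x[0] * t) * np.sin(x[1] * t)

    t = np.linspace(0.0, 1.0, 50)
    X = rng.uniform(0.5, 3.0, size=(15, 2))           # initial design
    Y = np.array([simulator(x, t) for x in X])        # functional outputs
    y_target = simulator(np.array([1.7, 2.2]), t)     # desired output

    # Functional PCA via SVD of centered curves; keep K leading components.
    K = 3
    mean_curve = Y.mean(axis=0)
    U, S, Vt = np.linalg.svd(Y - mean_curve, full_matrices=False)
    basis = Vt[:K]                                    # (K, len(t))
    scores = (Y - mean_curve) @ basis.T               # (n, K)
    target_scores = (y_target - mean_curve) @ basis.T

    gps = [GaussianProcessRegressor(normalize_y=True).fit(X, scores[:, k])
           for k in range(K)]

    def expected_improvement(x_cand, best, n_mc=2000):
        # MC estimate of EI for the squared distance in score space.
        preds = [gp.predict(x_cand.reshape(1, -1), return_std=True) for gp in gps]
        mu = np.array([m[0] for m, s in preds])
        sd = np.array([s[0] for m, s in preds])
        draws = mu + sd * rng.standard_normal((n_mc, K))  # posterior samples
        dist2 = ((draws - target_scores) ** 2).sum(axis=1)
        return np.maximum(best - dist2, 0.0).mean()

    best = (((scores - target_scores) ** 2).sum(axis=1)).min()
    cands = rng.uniform(0.5, 3.0, size=(200, 2))
    x_next = cands[np.argmax([expected_improvement(c, best) for c in cands])]
    print("next input to evaluate:", x_next)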
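
Sketch for item 7: a toy rendering of decentralized attribution, where each user-end model is associated with a unit key, the keys are mutually orthogonal (one sufficient condition the paper develops for guaranteeing attributability), and each key's binary classifier separates its model's outputs from authentic data. The "generator" below is a hypothetical shift along the key, not a real generative model.

    # Keyed binary classifiers for decentralized attribution.
    import numpy as np

    rng = np.random.default_rng(2)
    D, SHIFT = 16, 4.0                     # data dimension; watermark shift

    # Mutually orthogonal unit keys, built via QR decomposition.
    keys = np.linalg.qr(rng.standard_normal((D, 3)))[0].T   # (3, D) rows

    authentic = rng.standard_normal((200, D))               # "real" data

    def user_model(u, n=200):
        # Hypothetical user-end generator: authentic-like data shifted
        # along that user's key.
        return rng.standard_normal((n, D)) + SHIFT * keys[u]

    def attribute(x, margin=2.0):
        # Each key's binary classifier fires when the projection onto the
        # key exceeds the margin; an unambiguous single vote wins.
        votes = [u for u in range(len(keys)) if keys[u] @ x > margin]
        return votes[0] if len(votes) == 1 else None

    samples = user_model(1)
    acc = np.mean([attribute(x) == 1 for x in samples])
    flagged = np.mean([attribute(x) is not None for x in authentic])
    print(f"attribution accuracy: {acc:.2f}; authentic flagged: {flagged:.2f}")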