This content will become publicly available on May 7, 2025
- Award ID(s): 1822085
- PAR ID: 10534945
- Publisher / Repository: The Twelfth International Conference on Learning Representations (ICLR) 2024
- Date Published:
- Format(s): Medium: X
- Location: Vienna, Austria
- Sponsoring Org: National Science Foundation
More Like this
-
Rebeille, F.; Marechal, E. (Eds.) N-acylethanolamines (NAEs) are a group of lipid signaling molecules derived from the phospholipid precursor N-acylphosphatidylethanolamine (NAPE). NAEs can be processed by a wide range of metabolic processes, including hydrolysis by fatty acid amide hydrolase (FAAH), peroxidation by lipoxygenases (LOX), and conjugation by glycosyl- and malonyl-transferases. The diversity of NAE metabolites points to participation in multiple downstream pathways for regulation and function. NAEs with acyl chains of 18C are typically the most predominant types in vascular plants, whereas in nonvascular plants and some algae the arachidonic acid-containing NAE anandamide (a functional “endocannabinoid” in animal systems) was recently reported. A signaling role for anandamide and other NAEs is well established in vertebrates, while NAEs and their oxylipin metabolites are only recently becoming appreciated as lipid mediators in vascular plants. Here, NAE metabolism and function in plants are overviewed, with particular emphasis on processes described in vascular plants, where most attention has been focused.
-
While deep learning models have achieved unprecedented success in various domains, there is also growing concern about adversarial attacks against related applications. Recent results show that by adding a small amount of perturbation to an image (imperceptible to humans), the resulting adversarial examples can force a classifier to make targeted mistakes. So far, most existing works focus on crafting adversarial examples in the digital domain, while limited effort has been devoted to understanding attacks in the physical domain. In this work, we explore the feasibility of generating robust adversarial examples that remain effective in the physical domain. Our core idea is to use an image-to-image translation network to simulate the digital-to-physical transformation process when generating robust adversarial examples. To validate our method, we conduct a large-scale physical-domain experiment, which involves manually taking more than 3,000 physical-domain photos. The results show that our method outperforms existing ones by a large margin and demonstrates a high level of robustness and transferability.
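A minimal sketch of the idea this abstract describes: differentiate through an image-to-image translation network that simulates printing and re-photographing, so the perturbation survives the digital-to-physical transform. `TranslationNet` weights, the optimizer settings, and the loss weighting are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch (PyTorch): optimize a perturbation whose adversarial effect
# persists after a simulated digital-to-physical transformation.
import torch
import torch.nn.functional as F

def craft_physical_adv(x, y_target, classifier, d2p_net,
                       steps=200, eps=8 / 255, lr=1e-2):
    """x: clean image batch in [0, 1]; d2p_net: pretrained network that
    simulates print/photo distortions; y_target: targeted labels."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        x_phys = d2p_net(x_adv)                # simulated physical-domain image
        # Targeted attack: minimize cross-entropy toward the target class.
        loss = F.cross_entropy(classifier(x_phys), y_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)            # keep perturbation imperceptible
    return (x + delta).clamp(0, 1).detach()
```

In practice one would also randomize the simulator (lighting, viewpoint, blur) across iterations so the example transfers to many physical conditions rather than one simulated instance.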
-
Abstract: Fatty acid amide hydrolase (FAAH) is a conserved amidase that is known to modulate the levels of endogenous N-acylethanolamines (NAEs) in both plants and animals. The activity of FAAH is enhanced in vitro by synthetic phenoxyacylethanolamides, resulting in greater hydrolysis of NAEs. Previously, 3-n-pentadecylphenolethanolamide (PDP-EA) was shown to exert positive effects on the development of Arabidopsis seedlings by enhancing Arabidopsis FAAH (AtFAAH) activity. However, there is little information regarding FAAH activity and the impact of PDP-EA on the development of seedlings of other plant species. Here, we examined the effects of PDP-EA on the growth of upland cotton (Gossypium hirsutum L. cv Coker 312) seedlings, including two lines of transgenic seedlings overexpressing AtFAAH. Independent transgenic events showed accelerated true-leaf emergence compared with non-transgenic controls. Exogenous applications of PDP-EA led to increases in overall seedling growth in AtFAAH transgenic lines. These enhanced-growth phenotypes coincided with elevated FAAH activities toward NAEs and NAE oxylipins. Conversely, the endogenous contents of NAE and NAE-oxylipin species, especially linoleoylethanolamide and 9-hydroxy linoleoylethanolamide, were lower in PDP-EA-treated seedlings than in controls. Further, transcripts for endogenous cotton FAAH genes were increased following PDP-EA exposure. Collectively, our data corroborate that the enhancement of FAAH enzyme activity by PDP-EA stimulates NAE hydrolysis and that this results in enhanced growth in seedlings of a perennial crop species, extending the role of NAE metabolism in seedling development beyond the model annual plant species, Arabidopsis thaliana.
-
Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model, since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes, which provide high-confidence probabilistic robustness certificates. By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates against them. Additionally, in contrast to smoothing-based defenses against L_p and sparse attacks, our defense against patch attacks is de-randomized, yielding improved, deterministic certificates. Compared to the existing patch certification method proposed by Chiang et al. (2020), which relies on interval bound propagation, our method can be trained significantly faster, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at ImageNet scale. For example, for a 5-by-5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy). Our results effectively establish a new state of the art in certifiable defense against patch attacks on CIFAR-10 and ImageNet.
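A rough sketch of how a de-randomized, column-based certificate of this kind can work: classify many column-ablated copies of the image, take a majority vote, and certify when the vote margin exceeds the number of ablation positions a patch could possibly influence. The band width, the ablation encoding, and the exact margin rule below are assumptions for illustration; the paper's implementation may differ.

```python
# Hedged sketch: deterministic patch certification via column ablation.
# `f` is assumed to be a base classifier trained on column-ablated images.
import numpy as np

def certify_column_smoothing(f, image, num_classes, band=4, patch=5):
    """f(ablated_image) -> predicted class index; image: HxWxC array.
    Returns (predicted_class, is_certified_against_width-`patch`_patches)."""
    H, W, C = image.shape
    counts = np.zeros(num_classes, dtype=int)
    for pos in range(W):                            # one vote per band position
        ablated = np.zeros_like(image)
        cols = [(pos + i) % W for i in range(band)]
        ablated[:, cols, :] = image[:, cols, :]     # keep only a narrow band
        counts[f(ablated)] += 1
    top, runner = np.argsort(counts)[::-1][:2]
    # A patch of width `patch` can overlap at most (patch + band - 1) of the
    # W band positions, so it can change at most that many votes each way.
    affected = patch + band - 1
    certified = counts[top] - counts[runner] > 2 * affected
    return top, certified
```

The key property is that the bound is combinatorial and exact, not probabilistic: since every vote is computed deterministically, no Monte Carlo confidence interval is needed.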
-
Modern image classification systems are often built on deep neural networks, which suffer from adversarial examples: images with deliberately crafted, imperceptible noise that misleads the network's classification. To defend against adversarial examples, a plausible idea is to obfuscate the network's gradient with respect to the input image. This general idea has inspired a long line of defense methods, yet almost all of them have proven vulnerable. We revisit this seemingly flawed idea from a radically different perspective. We embrace the omnipresence of adversarial examples and the numerical procedure of crafting them, and turn this harmful attacking process into a useful defense mechanism. Our defense method is conceptually simple: before feeding an input image to the classifier, transform it by finding an adversarial example on a pre-trained external model. We evaluate our method against a wide range of possible attacks. On both the CIFAR-10 and Tiny ImageNet datasets, our method is significantly more robust than state-of-the-art methods. In particular, compared to adversarial training, our method offers lower training cost as well as stronger robustness.
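A minimal sketch of the transform this abstract outlines: run a standard untargeted PGD attack against a separate, pre-trained external model and classify the resulting image instead of the original. The step size, iteration count, and projection radius here are illustrative assumptions, not the authors' reported configuration.

```python
# Hedged sketch (PyTorch): "attack as defense" preprocessing.
import torch
import torch.nn.functional as F

def purify_then_classify(x, external_model, classifier,
                         steps=10, alpha=2 / 255, eps=8 / 255):
    """x: input batch in [0, 1]. Returns classifier logits on the
    adversarially transformed input."""
    with torch.no_grad():
        y_ext = external_model(x).argmax(dim=1)     # helper's own prediction
    x_t = x.clone().detach()
    for _ in range(steps):                          # untargeted PGD on helper
        x_t.requires_grad_(True)
        loss = F.cross_entropy(external_model(x_t), y_ext)
        grad = torch.autograd.grad(loss, x_t)[0]
        x_t = (x_t + alpha * grad.sign()).detach()  # ascend the helper's loss
        # Project back into an eps-ball around the original input.
        x_t = torch.min(torch.max(x_t, x - eps), x + eps).clamp(0, 1)
    with torch.no_grad():
        return classifier(x_t)                      # classify transformed image
```

Intuitively, the transform pushes any input, clean or attacked, through the same nonlinear perturbation process, which can disrupt the carefully tuned noise an attacker crafted against the downstream classifier.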