

Title: Adversarially trained neural representations may already be as robust as corresponding biological neural representations
Award ID(s):
1815221 1553428 2134108
NSF-PAR ID:
10348874
Journal Name:
Proceedings of the 38th International Conference on Machine Learning
Sponsoring Org:
National Science Foundation
More Like this
  1. We present a method to map 2D image observations of a scene to a persistent 3D scene representation, enabling novel view synthesis and disentangled representation of the movable and immovable components of the scene. Motivated by the bird’s-eye-view (BEV) representation commonly used in vision and robotics, we propose conditional neural groundplans, ground-aligned 2D feature grids, as persistent and memory-efficient scene representations. Our method is trained self-supervised from unlabeled multi-view observations using differentiable rendering, and learns to complete geometry and appearance of occluded regions. In addition, we show that we can leverage multi-view videos at training time to learn to separately reconstruct static and movable components of the scene from a single image at test time. The ability to separately reconstruct movable objects enables a variety of downstream tasks using simple heuristics, such as extraction of object-centric 3D representations, novel view synthesis, instance-level segmentation, 3D bounding box prediction, and scene editing. This highlights the value of neural groundplans as a backbone for efficient 3D scene understanding models. 
  2. Lee, Kyoung Mu (Ed.)
    A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound. Training and testing on aggregated supertrials derived by summing trials demonstrates that the two prior methods achieve statistically significant above-chance accuracy while the newly proposed method does not. 
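The aggregation step described above, forming "supertrials" by summing same-class trials before training and testing, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual code: the function name, the 1-D trial representation, and the `group_size` parameter are all assumptions for demonstration.

```python
from collections import defaultdict
import random

def make_supertrials(trials, labels, group_size=5, seed=0):
    """Hypothetical sketch: aggregate single trials into supertrials by
    summing groups of same-class trials element-wise.

    trials: list of trials, each a list of sample values
    labels: class label for each trial
    """
    by_class = defaultdict(list)
    for x, y in zip(trials, labels):
        by_class[y].append(x)

    rng = random.Random(seed)
    super_x, super_y = [], []
    for cls, xs in by_class.items():
        rng.shuffle(xs)
        # take non-overlapping groups; leftover trials are dropped
        for i in range(0, len(xs) - group_size + 1, group_size):
            group = xs[i:i + group_size]
            # element-wise sum across the group's samples
            summed = [sum(vals) for vals in zip(*group)]
            super_x.append(summed)
            super_y.append(cls)
    return super_x, super_y
```

Summing same-class trials boosts the signal-to-noise ratio of any class-consistent component, which is why above-chance classification of supertrials is a stronger indication of genuine class information than single-trial accuracy on confounded data.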
  3. Ermentrout, Bard (Ed.)
  4. To guide social interaction, people often rely on expectations about the traits of other people, based on markers of social group membership (i.e., stereotypes). Although the influence of stereotypes on social behavior is widespread, key questions remain about how traits inferred from social-group membership are instantiated in the brain and incorporated into neural computations that guide social behavior. Here, we show that the human lateral orbitofrontal cortex (OFC) represents the content of stereotypes about members of different social groups in the service of social decision-making. During functional MRI scanning, participants decided how to distribute resources across themselves and members of a variety of social groups in a modified Dictator Game. Behaviorally, we replicated our recent finding that inferences about others' traits, captured by a two-dimensional framework of stereotype content (warmth and competence), had dissociable effects on participants' monetary-allocation choices: recipients' warmth increased participants’ aversion to advantageous inequity (i.e., earning more than recipients), and recipients’ competence increased participants’ aversion to disadvantageous inequity (i.e., earning less than recipients). Neurally, representational similarity analysis revealed that others' traits in the two-dimensional space were represented in the temporoparietal junction and superior temporal sulcus, two regions associated with mentalizing, and in the lateral OFC, known to represent inferred features of a decision context outside the social domain. Critically, only the latter predicted individual choices, suggesting that the effect of stereotypes on behavior is mediated by inference-based decision-making processes in the OFC. 