Accurate 3D object detection in real-world environments requires a large amount of high-quality annotated data. Acquiring such data is tedious and expensive, and the effort must often be repeated when a new sensor is adopted or when the detector is deployed in a new environment. We investigate a new scenario for constructing 3D object detectors: learning from the predictions of a nearby unit equipped with an accurate detector. For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area. This setting is label-efficient, sensor-agnostic, and communication-efficient: nearby units need only share their predictions with the ego agent (e.g., the car). Naively using the received predictions as ground truths to train the ego car's detector, however, leads to inferior performance. We systematically study the problem and identify viewpoint mismatches and mislocalization (due to synchronization and GPS errors) as the main causes, which unavoidably result in false positives, false negatives, and inaccurate pseudo labels. We propose a distance-based curriculum: first learning from closer units with similar viewpoints, then improving the quality of other units' predictions via self-training. We further demonstrate that an effective pseudo-label refinement module can be trained with a handful of annotated data, largely reducing the quantity of data needed to train an object detector. We validate our approach on a recently released real-world collaborative driving dataset, using reference cars' predictions as pseudo labels for the ego car. Extensive experiments covering several scenarios (e.g., different sensors, detectors, and domains) demonstrate the effectiveness of our approach toward label-efficient learning of 3D perception from other units' predictions.
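The distance-based curriculum can be sketched as follows. This is a minimal illustration, not the paper's actual interface: the `Prediction` record, its field names, and the distance cutoffs are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    center: tuple            # (x, y, z) box center in the ego frame
    score: float             # confidence reported by the reference unit's detector
    sender_distance: float   # ego-to-sender distance in meters (assumed field)

def curriculum_stages(preds, cutoffs=(10.0, 30.0, 60.0)):
    """Partition received predictions into curriculum stages by sender
    distance: nearby units see the scene from a viewpoint similar to the
    ego car's, so their predictions are treated as more reliable and are
    learned from first. Cutoff values here are illustrative."""
    stages, lower = [], 0.0
    for upper in cutoffs:
        stages.append([p for p in preds
                       if lower <= p.sender_distance < upper])
        lower = upper
    return stages
```

In the approach described above, a detector would first be trained on the closest stage, and the resulting model would then be used to refine (via self-training) the noisier pseudo labels from farther units before those stages are added.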
Do infants and adults process others' actions differently based on others' linguistic group?
- Award ID(s):
- 2041218
- PAR ID:
- 10356910
- Date Published:
- Journal Name:
- Flux Congress
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- To guide social interaction, people often rely on expectations about the traits of other people, based on markers of social group membership (i.e., stereotypes). Although the influence of stereotypes on social behavior is widespread, key questions remain about how traits inferred from social-group membership are instantiated in the brain and incorporated into neural computations that guide social behavior. Here, we show that the human lateral orbitofrontal cortex (OFC) represents the content of stereotypes about members of different social groups in the service of social decision-making. During functional MRI scanning, participants decided how to distribute resources across themselves and members of a variety of social groups in a modified Dictator Game. Behaviorally, we replicated our recent finding that inferences about others' traits, captured by a two-dimensional framework of stereotype content (warmth and competence), had dissociable effects on participants' monetary-allocation choices: recipients' warmth increased participants' aversion to advantageous inequity (i.e., earning more than recipients), and recipients' competence increased participants' aversion to disadvantageous inequity (i.e., earning less than recipients). Neurally, representational similarity analysis revealed that others' traits in the two-dimensional space were represented in the temporoparietal junction and superior temporal sulcus, two regions associated with mentalizing, and in the lateral OFC, known to represent inferred features of a decision context outside the social domain. Critically, only the latter predicted individual choices, suggesting that the effect of stereotypes on behavior is mediated by inference-based decision-making processes in the OFC.
- We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents' actions over time, where agents' actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is against misperception depends on the richness of agents' payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents' learning environment. The key feature behind our findings is that agents' belief-updating becomes "decoupled" from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.