
Search for: All records

Creators/Authors contains: "Yuan, L."


  1. A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users' intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment: collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users' values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems.
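     The value-inference component described in this abstract can be pictured as a Bayesian update over candidate goal-importance profiles. The following is a minimal sketch, not the paper's XAI system: the candidate profiles, the noisy-rational (Boltzmann) feedback model, and the parameters `BETA` and `OFFSET` are all assumptions made for illustration.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     # Hypothetical goal-importance profiles (one weight per task goal);
     # the human's true values are assumed to be one of these candidates.
     candidates = np.array([
         [0.7, 0.2, 0.1],
         [0.2, 0.7, 0.1],
         [0.1, 0.2, 0.7],
     ])
     posterior = np.ones(len(candidates)) / len(candidates)  # uniform prior

     BETA = 8.0     # assumed rationality of the human's feedback
     OFFSET = 1 / 3  # assumed acceptance threshold on plan utility

     def p_accept(weights, plan):
         """Noisy-rational (Boltzmann) probability of accepting a plan."""
         return 1.0 / (1.0 + np.exp(-BETA * (weights @ plan - OFFSET)))

     def update(posterior, plan, accepted):
         """Bayesian update of the value posterior from one accept/reject."""
         like = p_accept(candidates, plan)
         if not accepted:
             like = 1.0 - like
         new = posterior * like
         return new / new.sum()

     # Simulate a human whose true values are candidates[0] giving feedback
     # on randomly proposed plans (goal-progress vectors summing to 1).
     true_w = candidates[0]
     for _ in range(200):
         plan = rng.dirichlet(np.ones(3))
         accepted = rng.random() < p_accept(true_w, plan)
         posterior = update(posterior, plan, accepted)

     print(posterior)  # posterior mass concentrates on the true profile
     ```

     Under the matched feedback model, the posterior concentrates on the human's true value profile as feedback accumulates, which is the sense in which the robot "listens" in the sketch.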
  2. Since its publication, the authors of Wang et al. (2021) have brought to our attention an error in their article. A grant awarded by the National Science Foundation (grant no. MCB 1817985) to author Elizabeth Vierling was omitted from the Acknowledgements section. The correct Acknowledgements section is shown below. Acknowledgements We thank Suiwen Hou (Lanzhou University) and Zhaojun Ding (Shandong University) for providing the seeds used in this study. We thank Xiaoping Gou (Lanzhou University) and Ravishankar Palanivelu (University of Arizona) for critically reading the manuscript and for suggestions regarding the article. This work was supported by grants from National Natural Science Foundation of China (31870298) to SX, the US Department of Agriculture (USDA-CSREES-NRI-001030) and the National Science Foundation (MCB 1817985) to EV, and the Youth 1000-Talent Program of China (A279021801) to LY.
  3. In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency. The teacher adjusts her teaching method for different students, and the student, after becoming familiar with the teacher's instruction mechanism, can infer the teacher's intention and learn faster. Recently, the benefits of integrating this cooperative pedagogy into machine concept learning in discrete spaces have been demonstrated by multiple works. However, how cooperative pedagogy can facilitate machine parameter learning has not been thoroughly studied. In this paper, we propose a gradient-optimization-based teacher-aware learner that can incorporate the teacher's cooperative intention into its likelihood function and learn provably faster than the naive learning algorithms used in previous machine-teaching works. We give theoretical proof that the iterative teacher-aware learning (ITAL) process leads to local and global improvements. We then validate our algorithms with extensive experiments on various tasks, including regression, classification, and inverse reinforcement learning, using synthetic and real data. We also show the advantage of modeling teacher-awareness when agents learn from human teachers.
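     The gap between a naive and a teacher-aware learner can be illustrated with a toy estimation task. This is not the paper's ITAL algorithm: the example pool, the teacher's nearest-point selection rule, and both learners' update rules are assumptions made purely for the sketch.

     ```python
     import numpy as np

     rng = np.random.default_rng(1)

     mu_star = 2.0                     # parameter the teacher wants to convey
     pool = rng.normal(0.0, 3.0, 500)  # shared pool of candidate examples

     def teach():
         """Cooperative teacher: send the pool example closest to the target."""
         return pool[np.argmin(np.abs(pool - mu_star))]

     naive_est, n = 0.0, 1   # naive learner: running mean from a prior guess of 0
     aware_est = 0.0         # teacher-aware learner

     for _ in range(10):
         x = teach()
         # The naive learner treats the example as an i.i.d. sample and averages.
         n += 1
         naive_est += (x - naive_est) / n
         # The teacher-aware learner inverts the teacher's known selection rule
         # ("you sent the pool point nearest your target"), so the example is
         # itself a near-optimal point estimate.
         aware_est = x

     print(abs(naive_est - mu_star), abs(aware_est - mu_star))
     ```

     Because the aware learner models how examples were chosen rather than treating them as random draws, its error after the same number of rounds is much smaller, mirroring the speed-up the abstract claims for ITAL.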
  4. Abstract The XENON collaboration has published stringent limits on specific dark matter–nucleon recoil spectra from dark matter recoiling on the liquid xenon detector target. In this paper, we present an approximate likelihood for the XENON1T 1 t-year nuclear recoil search that is applicable to any nuclear recoil spectrum. Alongside this paper, we publish data and code to compute upper limits using the method we present. The approximate likelihood is constructed in bins of reconstructed energy, profiled along the signal expectation in each bin, and can be used to compute an approximate likelihood, and therefore most statistical results, for any nuclear recoil spectrum. Computing results with this method is roughly three orders of magnitude faster than with the likelihood used in the original XENON1T publications, where limits were set for specific families of recoil spectra. Using the same method, we include toy Monte Carlo simulation-derived binwise likelihoods for the upcoming XENONnT experiment, which can similarly be used to assess the sensitivity to arbitrary nuclear recoil signatures in its eventual 20 t-year exposure.
    Free, publicly-accessible full text available November 1, 2023
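     The binned-likelihood idea in this abstract can be sketched as a simple Poisson likelihood scan. This is a generic illustration, not the published XENON1T likelihood (there is no profiling over per-bin signal expectations or nuisance parameters here), and the bin counts, background expectations, and signal shape are made-up numbers.

     ```python
     import numpy as np
     from math import lgamma

     # Hypothetical binned inputs (counts per reconstructed-energy bin).
     obs = np.array([3, 5, 2, 1])          # observed counts
     bkg = np.array([2.8, 4.5, 2.1, 1.2])  # expected background counts
     sig = np.array([1.0, 0.6, 0.3, 0.1])  # signal expectation at unit strength

     log_fact = np.array([lgamma(k + 1) for k in obs])

     def log_like(mu):
         """Binned Poisson log-likelihood for signal strength mu."""
         lam = bkg + mu * sig
         return float(np.sum(obs * np.log(lam) - lam - log_fact))

     # Scan the signal strength and form the likelihood-ratio statistic
     # q(mu) = -2 * (lnL(mu) - lnL(mu_hat)).
     mus = np.linspace(0.0, 20.0, 2001)
     ll = np.array([log_like(m) for m in mus])
     mu_hat = mus[ll.argmax()]
     q = -2.0 * (ll - ll.max())

     # One-sided 90% CL upper limit from the asymptotic threshold q = 2.71.
     mu_limit = mus[(mus >= mu_hat) & (q >= 2.71)][0]
     print(mu_hat, mu_limit)
     ```

     Because the likelihood is a sum over bins of a fixed signal shape, swapping in a different recoil spectrum only changes the `sig` array, which is what makes a binned construction like this reusable for arbitrary spectra.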
  5. Free, publicly-accessible full text available October 1, 2023
  6. Free, publicly-accessible full text available August 1, 2023
  7. Free, publicly-accessible full text available July 1, 2023