Search for: All records
Total Resources: 3
Counterfactual explanations of Graph Neural Networks (GNNs) offer a powerful way to understand data that can naturally be represented by a graph structure. Furthermore, in many domains, it is highly desirable to derive data-driven global explanations or rules that can better explain the high-level properties of the models and data in question. However, evaluating global counterfactual explanations is hard in real-world datasets due to a lack of human-annotated ground truth, which limits their use in areas like molecular sciences. Additionally, the increasing scale of these datasets poses a challenge for random search-based methods. In this paper, we develop RLHEX, a novel global explanation model for molecular property prediction. It aligns the counterfactual explanations with human-defined principles, making the explanations more interpretable and easy for experts to evaluate. RLHEX includes a VAE-based graph generator to generate global explanations and an adapter to adjust the latent representation space to human-defined principles. Optimized by Proximal Policy Optimization (PPO), the global explanations produced by RLHEX cover 4.12% more input graphs and reduce the distance between the counterfactual explanation set and the input set by 0.47% on average across three molecular datasets. RLHEX provides a flexible framework to incorporate different human-designed principles into the counterfactual explanation generation process, aligning these explanations with domain expertise. The code and data are released at https://github.com/dqwang122/RLHEX.
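The abstract only describes the pipeline at a high level, so the following is a minimal sketch of how an adapter over a frozen VAE latent space could be trained with a PPO-style clipped objective. `LatentAdapter`, `principle_reward`, the latent dimension, and every hyperparameter are hypothetical stand-ins; a real implementation would decode latents into molecular graphs and score them against the human-defined principles (the released repository contains the actual method).

```python
# Hypothetical sketch: an adapter proposes shifts of frozen VAE latent codes,
# and a PPO-style clipped objective pushes it toward shifts whose decoded
# graphs would score well on coverage/distance/human-defined principles.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed latent size, for illustration only

class LatentAdapter(nn.Module):
    """Policy head that proposes a Gaussian shift of a frozen VAE latent code."""
    def __init__(self, dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))
        self.log_std = nn.Parameter(torch.zeros(dim))

    def dist(self, z):
        return torch.distributions.Normal(self.net(z), self.log_std.exp())

def principle_reward(z_shifted):
    # Stand-in for: decode the shifted latent into a candidate explanation graph,
    # then score coverage of the input graphs, distance to the input set, and any
    # human-defined principles. Here it is just a toy quadratic target.
    return -(z_shifted - 1.0).pow(2).mean(dim=-1)

adapter = LatentAdapter()
optimizer = torch.optim.Adam(adapter.parameters(), lr=3e-4)
clip_eps = 0.2

for step in range(200):
    z = torch.randn(64, LATENT_DIM)             # latents from the frozen VAE encoder
    with torch.no_grad():
        old_dist = adapter.dist(z)
        shift = old_dist.sample()                # proposed adjustment of the latent
        old_logp = old_dist.log_prob(shift).sum(-1)
        reward = principle_reward(z + shift)
        advantage = reward - reward.mean()       # crude baseline instead of a critic

    new_logp = adapter.dist(z).log_prob(shift).sum(-1)
    ratio = (new_logp - old_logp).exp()
    ppo_loss = -torch.min(ratio * advantage,
                          ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantage).mean()
    optimizer.zero_grad()
    ppo_loss.backward()
    optimizer.step()
```

In this toy version the reward is a fixed quadratic, so the adapter merely learns to push latents toward a target point; the point is only to show where the coverage and distance scores would enter as the PPO reward signal.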
Zhang, Edwin; Zhao, Sadie; Wang, Tonghan; Hossain, Safwan; Gasztowtt, Henry; Zheng, Stephan; Parkes, David C.; Tambe, Milind; Chen, Yiling (Proceedings of the 41st International Conference on Machine Learning)
Li, Jiachen; Zhang, Edwin; Yin, Ming; Bai, Qinxun; Wang, Yu-Xiang; Wang, William Yang (Proceedings of Machine Learning Research). Krause, Andreas; Brunskill, Emma; Cho, Kyunghyun; Engelhardt, Barbara; Sabato, Sivan; Scarlett, Jonathan (Eds.)
Behavior-constrained policy optimization has been demonstrated to be a successful paradigm for tackling offline reinforcement learning. By exploiting historical transitions, a policy is trained to maximize a learned value function while constrained by the behavior policy to avoid a significant distributional shift. In this paper, we propose closed-form policy improvement operators. We make the novel observation that the behavior constraint naturally motivates the use of a first-order Taylor approximation, leading to a linear approximation of the policy objective. Additionally, as practical datasets are usually collected by heterogeneous policies, we model the behavior policies as a Gaussian mixture and overcome the induced optimization difficulties by leveraging the LogSumExp lower bound and Jensen's inequality, giving rise to a closed-form policy improvement operator. We instantiate both one-step and iterative offline RL algorithms with our novel policy improvement operators and empirically demonstrate their effectiveness over state-of-the-art algorithms on the standard D4RL benchmark. Our code is available at https://cfpi-icml23.github.io/.
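To make the Taylor-expansion argument concrete, here is a rough sketch of the simplest, single-Gaussian case: linearizing Q around the behavior mean turns the constrained maximization into a closed-form, variance-scaled step along the Q-gradient. The helper below, including the `tau` step size and the per-dimension variance scaling, is an illustrative assumption rather than the paper's exact operator, and the Gaussian-mixture treatment via the LogSumExp lower bound and Jensen's inequality is omitted.

```python
# Hypothetical single-Gaussian sketch of the first-order Taylor idea; not the
# paper's released CFPI implementation.
import torch

def closed_form_improvement(q_net, state, behavior_mean, behavior_std, tau=1.0):
    """One closed-form improvement step around the behavior policy's mean action.

    q_net: callable (state, action) -> per-sample Q-values
    tau:   step size controlling how far the improved action may deviate.
    """
    action = behavior_mean.clone().requires_grad_(True)
    q_sum = q_net(state, action).sum()
    (grad_q,) = torch.autograd.grad(q_sum, action)
    # First-order Taylor expansion: Q(s, a) ~= Q(s, mu) + (a - mu)^T grad_a Q(s, mu).
    # Maximizing this linear surrogate under a Gaussian behavior constraint shifts
    # the action along grad_a Q, scaled per dimension by the behavior variance, so
    # dimensions the behavior policy covers tightly deviate less.
    return behavior_mean + tau * behavior_std.pow(2) * grad_q

# Toy usage with a quadratic critic whose maximizer is at a = 0.5 in every dimension.
q_net = lambda s, a: -((a - 0.5) ** 2).sum(dim=-1)
states = torch.zeros(4, 3)
mu, sigma = torch.zeros(4, 2), 0.3 * torch.ones(4, 2)
improved = closed_form_improvement(q_net, states, mu, sigma)
print(improved)  # moves from the behavior mean (0) toward the critic's maximizer (0.5)
```

The toy quadratic critic only serves to check the behavior of the operator: the returned action moves from the behavior mean toward the critic's maximizer, with the step length governed by the behavior variance and `tau`.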