Search Results (Page 1 of 1)
Search for: All records
Total resources: 4
Filter by Author / Creator:
- Peng, Letian (4)
- Shang, Jingbo (4)
- Wang, Zihan (3)
- Zhang, Yuwei (2)
- An, Chenyang (1)
- Chen, Zhibo (1)
- First, Emily (1)
- Lerner, Sorin (1)
- Liu, Gaowen (1)
- Srinivasa, Jayanth (1)
- Wang, Zilong (1)
- Ye, Qihao (1)
- Zhang, Jiayun (1)
- Peng, Letian; Zhang, Yuwei; Wang, Zilong; Srinivasa, Jayanth; Liu, Gaowen; Wang, Zihan; Shang, Jingbo. Annual Conf. of the Association for Computational Linguistics (ACL) 2024.
- An, Chenyang; Chen, Zhibo; Ye, Qihao; First, Emily; Peng, Letian; Zhang, Jiayun; Wang, Zihan; Lerner, Sorin; Shang, Jingbo. The 62nd Annual Meeting of the Association for Computational Linguistics.
  Recent advances in automated theorem proving have shown the effectiveness of leveraging a (large) language model that generates tactics (i.e., proof steps) to search through proof states. The current model, while trained solely on successful proof paths, faces a discrepancy at the inference stage: it must sample and try various tactics at each proof state until it finds success, unlike its training, which does not incorporate learning from failed attempts. Intuitively, a tactic that leads to a failed search path indicates that similar tactics should receive less attention during subsequent trials. In this paper, we demonstrate the benefit of training models that additionally learn from failed search paths. Facing the lack of such trial-and-error data in existing open-source theorem-proving datasets, we curate a dataset of intuitionistic propositional logic theorems and formalize it in Lean, so that we can reliably check the correctness of proofs. We compare our model trained on relatively short trial-and-error information (TRIALMASTER) with models trained only on the correct paths and find that the former solves more unseen theorems with fewer trial searches.
  Free, publicly accessible full text available August 11, 2025.
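The abstract above describes sampling tactics at each proof state and letting failed search paths demote similar tactics on later trials. A minimal sketch of that search loop, using toy states and tactics rather than Lean proof states (all names here, `prove`, `apply_tactic`, `rank_tactics`, are illustrative assumptions, not the paper's actual API):

```python
def prove(state, goal, apply_tactic, rank_tactics, max_trials=100):
    """Depth-first tactic search from `state` toward `goal`.

    `failed` collects (state, tactic) pairs that led to dead ends; the
    ranker can use it to demote such tactics on later trials, which is the
    signal a trial-and-error-trained model would internalize.
    """
    failed = set()
    trials = 0

    def dfs(s, path):
        nonlocal trials
        if s == goal:
            return path
        for t in rank_tactics(s, failed):
            if trials >= max_trials:
                return None
            trials += 1
            nxt = apply_tactic(s, t)
            if nxt is None:            # tactic does not apply: a dead end
                failed.add((s, t))
                continue
            result = dfs(nxt, path + [t])
            if result is not None:
                return result
            failed.add((s, t))         # subtree exhausted: record the failure
        return None

    return dfs(state, [])


# Toy domain: states are non-negative integers, the goal state is 0.
def apply_tactic(s, t):
    if t == "sub1" and s > 0:
        return s - 1
    return None                        # "stall" (and "sub1" at 0) always fails

def rank_tactics(s, failed):
    # Try tactics not yet known to fail at this state first.
    return sorted(["stall", "sub1"], key=lambda t: (s, t) in failed)

proof = prove(3, 0, apply_tactic, rank_tactics)
```

In a real prover the ranker would be the language model's tactic distribution, and the failure set would instead be baked into its weights by training on failed paths; the explicit set here only makes the mechanism visible.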
- Peng, Letian; Wang, Zihan; Shang, Jingbo. Association for Computational Linguistics.