Search for: All records
Total Resources: 4
Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval). Some links on this page may lead to non-federal websites, whose policies may differ from this site's.
-
Free, publicly-accessible full text available May 24, 2026
-
Stoica, George; Ramesh, Pratik; Ecsedi, Boglarka; Choshen, Leshem; Hoffman, Judy (International Conference on Learning Representations (ICLR)). Free, publicly-accessible full text available April 24, 2026.
-
Cho, Aeree; Kim, Grace C; Karpekov, Alexander; Helbling, Alec; Wang, Zijie J; Lee, Seongmin; Hoover, Benjamin; Chau, Duen Horng Polo (Proceedings of the AAAI Conference on Artificial Intelligence). Transformers have revolutionized machine learning, yet their inner workings remain opaque to many. We present TRANSFORMER EXPLAINER, an interactive visualization tool designed for non-experts to learn about Transformers through the GPT-2 model. The tool helps users understand complex Transformer concepts by integrating a model overview with smooth transitions across abstraction levels of mathematical operations and model structures. It runs a live GPT-2 model locally in the user's browser, letting users experiment with their own input and observe in real time how the Transformer's internal components and parameters work together to predict the next tokens. 125,000 users have used our open-source tool at https://poloclub.github.io/transformer-explainer/. Free, publicly-accessible full text available April 11, 2026.
-
Ye, Zhifan; Wang, Zheng; Xia, Kejing; Hong, Jihoon; Li, Leshu; Whalen, Lexington; Wan, Cheng; Fu, Yonggan; Lin, Yingyan Celine; Kundu, Souvik (Association for Computational Linguistics). Free, publicly-accessible full text available January 1, 2026.