Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher.
Some full text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site.
- Free, publicly-accessible full text available December 10, 2025
- Free, publicly-accessible full text available December 10, 2025
- Free, publicly-accessible full text available July 13, 2025
- In this talk, we presented the use of large language models (LLMs) to study embodied learning. We found that LLMs are effective at summarizing videos of both embodied learning and conventional learning, and at summarizing the sentiment of the user comments on the two kinds of videos. Correlating the user comments with the video content, we found that embodied learning is effective at engaging learners in learning robotics.
  Free, publicly-accessible full text available June 12, 2025
- Free, publicly-accessible full text available July 13, 2025
- Free, publicly-accessible full text available May 30, 2025
- Free, publicly-accessible full text available July 1, 2025
- The process of training deep learning models produces a huge amount of meta-data, including but not limited to losses, hidden feature embeddings, and gradients. Model diagnosis tools have been developed to analyze losses and feature embeddings with the aim to improve the performance of these models. However, gradients, despite carrying rich information that is potentially relevant for model interpretation and data debugging, have yet to be fully explored due to their size and complexity. Each single gradient has a size as large as the number of parameters of the neural net, often measured in the tens of millions. This makes it extremely challenging to efficiently collect, store, and analyze large numbers of gradients in these models. In this work, we develop MetaStore to fill this gap. MetaStore leverages our observation that storing certain compact intermediate results produced in the back propagation process, namely, the prefix and suffix gradients, is sufficient for the exact restoration of the original gradient. These prefix and suffix gradients are much more compact than the original gradients, thus allowing us to address the gradient collection and storage challenges. Furthermore, MetaStore features a rich set of analytics operators that allow the users to analyze the gradients for data debugging or model interpretation. Rather than first having to restore the original gradients and then run analytics on top of this decompressed view, MetaStore directly executes these operators on the compact prefix and suffix structures, making gradient-based analytics efficient and scalable. Our experiments on popular deep learning models such as VGG, BERT, and ResNet and benchmark image and text datasets demonstrate that MetaStore outperforms strong baseline methods from 4 to 678x in storage costs and from 2 to 1000x in running time.
  Free, publicly-accessible full text available May 3, 2025
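The prefix/suffix factorization described in the MetaStore abstract is easiest to see for a single fully connected layer, where the per-example weight gradient is an outer product of the layer input (a "prefix") and the upstream gradient (a "suffix"). The sketch below illustrates that observation in NumPy; it is not MetaStore's actual storage format or operator API, and all names and dimensions in it are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed names, not MetaStore's API): for a linear layer
# y = W @ x, the per-example weight gradient is an outer product
#     dL/dW = outer(delta, x),  delta = dL/dy ("suffix"),  x = layer input ("prefix").
# Storing (x, delta) is enough to restore dL/dW exactly, and some analytics
# (dot products, norms) can be computed without ever materializing dL/dW.

rng = np.random.default_rng(0)
d_in, d_out = 1000, 100          # full gradient: d_out * d_in = 100,000 entries

def restore(x, delta):
    """Exact reconstruction of the per-example gradient dL/dW from the factors."""
    return np.outer(delta, x)

def grad_dot(xi, di, xj, dj):
    """<grad_i, grad_j> computed directly on the compact factors:
    <delta_i, delta_j> * <x_i, x_j>, with no decompression."""
    return (di @ dj) * (xi @ xj)

# Two fake per-example layer inputs and upstream gradients.
x1, d1 = rng.standard_normal(d_in), rng.standard_normal(d_out)
x2, d2 = rng.standard_normal(d_in), rng.standard_normal(d_out)

g1, g2 = restore(x1, d1), restore(x2, d2)

# The factored dot product and norm match the ones computed on full gradients.
assert np.allclose((g1 * g2).sum(), grad_dot(x1, d1, x2, d2))
assert np.allclose(np.linalg.norm(g1), np.linalg.norm(d1) * np.linalg.norm(x1))
print("compact factors store", d_in + d_out, "floats instead of", d_in * d_out)
```

In this toy setting the factors take 1,100 floats per example instead of 100,000, and pairwise gradient similarity never requires reconstructing the full gradient, which is the kind of saving the abstract's storage and runtime comparisons refer to.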