The cold angular rolling process (CARP) is being developed as a continuous severe plastic deformation technique that can process metal sheets of unlimited length at room temperature. CARP combines cold rolling with an equal-channel angular pressing component. The sheet thickness is unchanged before and after CARP, allowing multiple passes of the same sheet, so that the desired microstructure and mechanical properties can be achieved in the processed material. The current study aims to evaluate the capability of CARP by processing copper sheets of different widths for repeated passes. The CARP-treated sheets are examined by lab-scale and high-energy synchrotron X-ray diffraction to investigate the evolution of dislocation density, texture, and strain anisotropy, and by tensile testing to determine bulk mechanical properties. Digital image correlation is applied during tensile testing to visualize strain localization within the sample gauge and to evaluate deformation behavior from yielding through post-necking by estimating the hardening exponent and strain-hardening rate of the CARP-treated sheets. Compared with reported continuous and multi-step processes for Cu and its alloys, the present study confirms that CARP is potentially a useful sheet-processing route for strengthening ductile metals.
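As a hedged illustration of the hardening analysis mentioned above: a hardening exponent is commonly extracted by fitting the Hollomon relation σ = K·ε^n to the uniform-plastic portion of the true stress-strain curve. The sketch below uses synthetic data and hypothetical values of K and n; it shows the standard fitting idea, not the study's actual procedure.

```python
import numpy as np

def hollomon_fit(true_strain, true_stress):
    """Least-squares fit of log(sigma) = log(K) + n * log(eps)."""
    n, log_k = np.polyfit(np.log(true_strain), np.log(true_stress), 1)
    return n, np.exp(log_k)

# Synthetic stand-in for a measured curve: K = 450 MPa, n = 0.25, plus noise.
eps = np.linspace(0.01, 0.15, 50)
rng = np.random.default_rng(0)
sigma = 450.0 * eps**0.25 * (1.0 + 0.01 * rng.normal(size=eps.size))

n, K = hollomon_fit(eps, sigma)
print(f"hardening exponent n ~ {n:.3f}, strength coefficient K ~ {K:.0f} MPa")

# The strain-hardening rate d(sigma)/d(eps) then follows from the fit:
theta = n * K * eps**(n - 1.0)
```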
Mechanical metamaterials are usually designed to show desired responses to prescribed forces. In some applications, the desired force–response relationship is hard to specify exactly, but examples of forces and desired responses are easily available. Here, we propose a framework for supervised learning in thin, creased sheets that learn the desired force–response behavior by physically experiencing training examples and then, crucially, respond correctly (generalize) to previously unseen test forces. During training, we fold the sheet using training forces, prompting local crease stiffnesses to change in proportion to their experienced strain. We find that this learning process reshapes nonlinearities inherent in folding a sheet so as to show the correct response for previously unseen test forces. We show the relationship between training error, test error, and sheet size (model complexity) in learning sheets and compare them to counterparts in machine-learning algorithms. Our framework shows how the rugged energy landscape of disordered mechanical materials can be sculpted to show desired force–response behaviors by a local physical learning process.
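The local update rule described above, crease stiffnesses changing in proportion to the strain each crease experiences, can be caricatured in a few lines. The toy below assumes independent linear hinges, a random force-to-crease coupling, and a softening update; all three are simplifying assumptions for illustration, not the paper's full nonlinear elastic model of a folded sheet.

```python
import numpy as np

rng = np.random.default_rng(1)
n_creases = 20
stiffness = np.ones(n_creases)              # trainable crease stiffnesses
coupling = rng.normal(size=(n_creases, 3))  # projects a 3D force onto creases

def crease_strains(force, stiffness):
    # Independent linear hinges: strain = load / stiffness (a cartoon of
    # the sheet's real, nonlinear folding elasticity).
    return (coupling @ force) / stiffness

train_forces = rng.normal(size=(200, 3))    # physically applied training examples
test_force = rng.normal(size=3)             # held-out, previously unseen force

before = crease_strains(test_force, stiffness)

eta = 0.01
for f in train_forces:
    eps = crease_strains(f, stiffness)
    # Local rule from the abstract: stiffness changes in proportion to
    # experienced strain (softening direction assumed here).
    stiffness = np.clip(stiffness - eta * np.abs(eps), 0.1, None)

after = crease_strains(test_force, stiffness)
print("test response rescaled along trained directions:", np.round(after / before, 2))
```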
- PAR ID: 10162498
- Publisher / Repository: Proceedings of the National Academy of Sciences
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 117
- Issue: 26
- ISSN: 0027-8424
- Page Range / eLocation ID: p. 14843-14850
- Sponsoring Org: National Science Foundation
More Like this
-
Automatic short answer grading is an important research direction in the exploration of how to use artificial intelligence (AI)-based tools to improve education. Current state-of-the-art approaches use neural language models to create vectorized representations of student responses, followed by classifiers to predict the score. However, these approaches have several key limitations: i) they use pre-trained language models that are not well adapted to educational subject domains and/or student-generated text, and ii) they almost always train one model per question, ignoring linkage across questions, which results in a significant model storage problem given the size of advanced language models. In this paper, we study the problem of automatic short answer grading for students' responses to math questions and propose a novel framework for this task. First, we use MathBERT, a variant of the popular language model BERT adapted to mathematical content, as our base model and fine-tune it on the downstream task of student response grading. Second, we use an in-context learning approach that provides scoring examples as input to the language model to supply additional context and promote generalization to previously unseen questions. We evaluate our framework on a real-world dataset of student responses to open-ended math questions and show that it (often significantly) outperforms existing approaches, especially for new questions that are not seen during training.
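A minimal sketch of the two ingredients named above (a MathBERT-style base model plus in-context scoring examples), assuming a HuggingFace-style checkpoint. The model name, separator format, and four-level score scale are illustrative assumptions, not the authors' exact configuration, and the classification head would still need fine-tuning on graded responses before its predictions mean anything.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical checkpoint name; the paper's exact weights are not specified here.
tokenizer = AutoTokenizer.from_pretrained("tbs17/MathBERT")
model = AutoModelForSequenceClassification.from_pretrained(
    "tbs17/MathBERT", num_labels=4  # assumed score scale 0-3
)

def build_input(question, scored_examples, new_answer):
    """Concatenate scored example responses with the new response so the
    model sees in-context references for a possibly unseen question."""
    context = " [SEP] ".join(
        f"{ans} score: {score}" for ans, score in scored_examples
    )
    return f"{question} [SEP] {context} [SEP] {new_answer}"

text = build_input(
    "What is 3/4 + 1/8?",
    [("7/8 because the common denominator is 8", 3), ("4/12", 0)],
    "3/4 is 6/8, so 6/8 + 1/8 = 7/8",
)
enc = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**enc).logits.argmax(dim=-1).item()
print("predicted score:", pred)
```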