Abstract

The release and rapid diffusion of ChatGPT have forced teachers and researchers around the world to grapple with the consequences of artificial intelligence (AI) for education. For second language educators, AI writing tools such as ChatGPT present special challenges that must be addressed to better support learners. We propose a five-part pedagogical framework that supports second language learners by acknowledging both the immediate and long-term contexts in which we must teach students about these tools: understand, access, prompt, corroborate, and incorporate. By teaching our students how to partner effectively with AI, we can better prepare them for the changing landscape of technology use in the world beyond the classroom.
Building them up, breaking them down: Topology, vendor selection patterns, and a digital drug market’s robustness to disruption
- Award ID(s): 1729067
- PAR ID: 10057486
- Date Published:
- Journal Name: Social Networks
- Volume: 52
- Issue: C
- ISSN: 0378-8733
- Page Range / eLocation ID: 238 to 250
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Weight quantization for deep ConvNets has shown promising results for applications such as image classification and semantic segmentation, and it is especially important where memory storage is limited. However, when aiming for quantization without accuracy degradation, different tasks may end up with different bitwidths. This creates complexity for software and hardware support, and the complexity compounds under mixed-precision quantization, in which each layer's weights use a different bitwidth. Our key insight is that optimizing for the smallest bitwidth subject to no accuracy degradation is not necessarily an optimal strategy: one cannot decide between two bitwidths if one yields a smaller model while the other yields better accuracy. In this work, we take a first step toward understanding whether some weight bitwidths are better than others by aligning all candidates to the same model size using a width multiplier. Under this setting, somewhat surprisingly, we show that using a single bitwidth for the whole network can achieve better accuracy than mixed-precision quantization targeting zero accuracy degradation when both have the same model size. In particular, our results suggest that when the number of channels becomes a hyperparameter to tune, a single weight bitwidth throughout the network gives superior results for model compression.
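To make the model-size accounting concrete, here is a minimal Python sketch, assuming uniform symmetric weight quantization and that conv parameter counts scale roughly quadratically with the channel width multiplier; the function names and tensor shapes are illustrative, not taken from the paper.

```python
import numpy as np

def quantize_weights(w, bits):
    # Uniform symmetric "fake" quantization: snap weights onto a
    # 2^bits-level integer grid, then dequantize back to floats.
    # (A sketch; real schemes often use per-channel scales.)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def equal_size_width_multiplier(base_bits, target_bits):
    # Conv parameter counts grow ~quadratically with the channel
    # width multiplier m (both input and output channels scale),
    # so model size ∝ m^2 * bits. Holding size fixed while changing
    # bitwidth gives m = sqrt(base_bits / target_bits).
    return (base_bits / target_bits) ** 0.5

# Compare 8-bit and 4-bit weights at equal model size: the 4-bit
# network gets ~1.41x the channels of the 8-bit one.
w = np.random.randn(64, 64, 3, 3).astype(np.float32)  # hypothetical conv layer
w_q8 = quantize_weights(w, 8)
m = equal_size_width_multiplier(8, 4)
print(f"width multiplier for 4-bit at 8-bit model size: {m:.2f}")
print(f"8-bit quantization error (MSE): {((w - w_q8) ** 2).mean():.6f}")
```

Under this accounting, lowering the bitwidth buys extra channels at constant model size, which is what allows a fair accuracy comparison between a uniform-bitwidth network and a mixed-precision one.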