Generative artificial intelligence has become prevalent in discussions of educational technology, particularly in the context of mathematics education. These AI models can engage in human‐like conversation and generate answers to complex questions in real‐time, with education reports accentuating their potential to make teachers' work more efficient and improve student learning. This paper provides a review of the current literature on generative AI in mathematics education, focusing on four areas: generative AI for mathematics problem‐solving, generative AI for mathematics tutoring and feedback, generative AI to adapt mathematical tasks, and generative AI to assist mathematics teachers in planning. The paper discusses ethical and logistical issues that arise with the application of generative AI in mathematics education, and closes with some observations, recommendations, and future directions.
Data Scientists Discuss AI Risks and Opportunities
This is an edited summary of a virtual panel conversation that took place on December 19, 2023, concerning the risks and benefits of AI systems. The topics covered include AI's impact on education, the economy, cybercrime and warfare, autonomous vehicles, bias and fairness, and regulation, as well as the role of data scientists. The field is moving quickly, however, and some of the issues and concerns may have changed by the time this discussion is published. We note that the discussion was refereed, which led to some post hoc changes to the actual conversation; most of the participants are largely comfortable with the new points attributed to them.
- Award ID(s): 2022040
- PAR ID: 10539879
- Publisher / Repository: MIT Press
- Date Published:
- Journal Name: Harvard Data Science Review
- Volume: 6
- Issue: 3
- ISSN: 2644-2353
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Generative Artificial Intelligence has become prevalent in discussions of educational technology. These AI models can engage in human-like conversation and generate answers to complex questions in real-time, with education reports accentuating their potential to make teachers’ work more efficient and improve student learning. In this paper, I provide a review of the current literature on generative AI in mathematics education, focusing on four areas: generative AI for mathematics problem-solving, generative AI for mathematics tutoring and feedback, generative AI to adapt mathematical tasks, and generative AI to assist mathematics teachers in planning. I then discuss ethical and logistical issues that arise with the application of generative AI in mathematics education, and close with some observations, recommendations, and future directions for the field.
Controversial posts are those that split the preferences of a community, receiving both significant positive and significant negative feedback. Our inclusion of the word “community” here is deliberate: what is controversial to some audiences may not be so to others. Using data from several different communities on reddit.com, we predict the ultimate controversiality of posts, leveraging features drawn from both the textual content and the tree structure of the early comments that initiate the discussion. We find that even when only a handful of comments are available, e.g., the first 5 comments made within 15 minutes of the original post, discussion features often add predictive capacity to strong content-and-rate-only baselines. Additional experiments on domain transfer suggest that conversation-structure features often generalize to other communities better than conversation-content features do.
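As a rough illustration of this setup, the sketch below combines text features from a post with a few early-discussion structure features in a single classifier. The structural feature choices, the toy posts, and the labels are assumptions for illustration only, not the feature set or data from the study.

```python
# Minimal sketch: predict controversiality from post text plus structure of the
# first few comments. All data and feature choices here are illustrative.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

posts = [
    "Pineapple belongs on pizza and I will die on this hill",
    "Here is the banana bread recipe my grandmother used",
    "Tax policy thread: change my view",
    "Cute picture of my dog at the park",
]
# Assumed early-discussion structure features per post: number of comments in
# the first 15 minutes, max depth of the reply tree, fraction of replies-to-replies.
structure = np.array([
    [5, 3, 0.6],
    [1, 1, 0.0],
    [5, 4, 0.8],
    [2, 1, 0.0],
])
labels = np.array([1, 0, 1, 0])  # 1 = controversial in this community

vectorizer = TfidfVectorizer()
content = vectorizer.fit_transform(posts)            # content features
features = hstack([content, csr_matrix(structure)])  # content + structure
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```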
Conversational AI is a rapidly developing research field in both industry and academia. As one of the major branches of conversational AI, question answering and conversational search has attracted significant attention of researchers in the information retrieval community. It has been a long overdue feature for search engines or conversational assistants to retrieve information iteratively and interactively in a conversational manner. Previous work argues that conversational question answering (ConvQA) is a simplified but concrete setting of conversational search. In this setting, one of the major challenges is to leverage the conversation history to understand and answer the current question. In this work, we propose a novel solution for ConvQA that involves three aspects. First, we propose a positional history answer embedding method to encode conversation history with position information using BERT (Bidirectional Encoder Representations from Transformers) in a natural way. BERT is a powerful technique for text representation. Second, we design a history attention mechanism (HAM) to conduct a "soft selection" for conversation histories. This method attends to history turns with different weights based on how helpful they are on answering the current question. Third, in addition to handling conversation history, we take advantage of multi-task learning (MTL) to do answer prediction along with another essential conversation task (dialog act prediction) using a uniform model architecture. MTL is able to learn more expressive and generic representations to improve the performance of ConvQA. We demonstrate the effectiveness of our model with extensive experimental evaluations on QuAC, a large-scale ConvQA dataset. We show that position information plays an important role in conversation history modeling. We also visualize the history attention and provide new insights into conversation history understanding. The complete implementation of our model will be open-sourced.
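A minimal sketch of these three ideas is given below, assuming PyTorch and illustrative tensor shapes: per-turn token encodings of the kind BERT would produce (with positional history answer embeddings assumed to be added upstream), a soft attention over history turns, and a shared trunk with answer-span and dialog-act heads for multi-task learning. It is a sketch under those assumptions, not the authors' released implementation.

```python
# Sketch of history attention plus multi-task heads for ConvQA.
# Shapes and the dialog-act label count are assumptions for illustration.
import torch
import torch.nn as nn

class HistoryAttentionConvQA(nn.Module):
    def __init__(self, hidden_size: int = 768, num_dialog_acts: int = 10):
        super().__init__()
        # Scores how helpful each history turn is for the current question.
        self.turn_scorer = nn.Linear(hidden_size, 1)
        # Span head: start/end logits per token (extractive answer prediction).
        self.span_head = nn.Linear(hidden_size, 2)
        # Auxiliary multi-task head: dialog-act prediction per history turn.
        self.dialog_act_head = nn.Linear(hidden_size, num_dialog_acts)

    def forward(self, turn_token_states: torch.Tensor):
        # turn_token_states: (num_turns, seq_len, hidden) token encodings of the
        # current question paired with each history turn (e.g., from BERT).
        turn_repr = turn_token_states.mean(dim=1)                     # (turns, hidden)
        weights = torch.softmax(self.turn_scorer(turn_repr), dim=0)   # (turns, 1)
        # Soft selection: weighted combination of token states across turns.
        fused = (weights.unsqueeze(-1) * turn_token_states).sum(dim=0)  # (seq, hidden)
        start_logits, end_logits = self.span_head(fused).split(1, dim=-1)
        dialog_act_logits = self.dialog_act_head(turn_repr)           # (turns, acts)
        return start_logits.squeeze(-1), end_logits.squeeze(-1), dialog_act_logits

# Toy usage: 4 history turns, 32 tokens, BERT-base hidden size.
model = HistoryAttentionConvQA()
states = torch.randn(4, 32, 768)
start, end, acts = model(states)
print(start.shape, end.shape, acts.shape)
```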
The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.