In this work, we present the design and delivery plan of a quantum machine learning (QML) course for a computer science (CS) university program at the senior undergraduate / first-year graduate level. Based on our survey, there is a lack of detailed design and assessment plans for delivering a QML course. In this paper we present the QML course design, with week-by-week details of the QML concepts and hands-on activities covered, and we show how the course can be assessed from the perspective of CS program learning outcomes.
Error Management Bias in Student Design Teams
Abstract: This research examines how cognitive biases manifest in the design activities of graduate student design teams, with a particular focus on uncovering evidence of these biases through survey-based data collection. After identifying biases in design teams, this work discusses them with consideration for the intent of error management, through the lens of adaptive rationality. Data were collected in one graduate-level design course, across nine design teams, over a semester-long project. Results are shown for five types of bias: bandwagon, availability, status quo, ownership, and hindsight. The conclusions drawn are based on trends and statistical correlations from survey data, as well as course deliverables. This work serves as a starting point for highlighting the most common forms of bias in design teams, with the goal of developing ways to mitigate those biases in future work.
- Award ID(s): 2207448
- PAR ID: 10418251
- Date Published:
- Journal Name: Journal of Mechanical Design
- Volume: 145
- ISSN: 1050-0472
- Page Range / eLocation ID: 1 to 40
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Data-driven algorithms are only as good as the data they work with, yet datasets, especially social data, often fail to represent minorities adequately. Representation bias in data can arise for many reasons, ranging from historical discrimination to selection and sampling biases in data acquisition and preparation. Given that "bias in, bias out," one cannot expect AI-based solutions to produce equitable outcomes in societal applications without addressing issues such as representation bias. While fairness in machine learning models has been studied extensively, including in several review papers, bias in the data itself has received less attention. This article reviews the literature on identifying and resolving representation bias as a feature of a dataset, independent of how the data are consumed later. The scope of the survey is bounded to structured (tabular) and unstructured (e.g., image, text, graph) data. It presents taxonomies that categorize the studied techniques along multiple design dimensions and provides a side-by-side comparison of their properties. There is still a long way to go to fully address representation bias in data; the authors hope this survey motivates researchers to approach these challenges within their respective domains. (A minimal sketch of measuring representation gaps appears after this list.)
-
AI systems have been known to amplify biases in real-world data. Explanations may help human-AI teams address these biases for fairer decision-making. Typically, explanations focus on salient input features. If a model is biased against some protected group, explanations may include features that demonstrate this bias; but when biases are realized through proxy features, the relationship between the proxy feature and the protected one may be less clear to a human. In this work, we study the effect of the presence of protected and proxy features on participants' perception of model fairness and their ability to improve demographic parity over an AI alone. Further, we examine how different treatments (explanations, model bias disclosure, and proxy correlation disclosure) affect fairness perception and parity. We find that explanations help people detect direct but not indirect biases. Additionally, regardless of bias type, explanations tend to increase agreement with model biases. Disclosures can help mitigate this effect for indirect biases, improving both unfairness recognition and decision-making fairness. We hope that our findings can help guide further research into advancing explanations in support of fair human-AI decision-making. (A sketch of the demographic parity metric appears after this list.)
-
Universities have been expanding undergraduate data science programs. Involving graduate students in these new opportunities can foster their growth as data science educators. We describe two programs that employ a near-peer mentoring structure, in which graduate students mentor undergraduates, to (a) strengthen their teaching and mentoring skills and (b) provide research and learning experiences for undergraduates from diverse backgrounds. In the Data Science for Social Good program, undergraduate participants work in teams to tackle a data science project with social impact. Graduate mentors guide project work and provide just-in-time teaching and feedback. The Stanford Mentoring in Data Science course offers training in effective and inclusive mentorship strategies. In an experiential learning framework, enrolled graduate students are paired with undergraduate students from non-R1 schools, whom they mentor through weekly one-on-one remote meetings. In end-of-program surveys, mentors reported growth through both programs. Drawing from these experiences, we developed a self-paced mentor training guide that engages teaching, mentoring, and project management abilities. These initiatives and the shared materials can serve as prototypes for future programs that cultivate the mutual growth of undergraduate and graduate students in a high-touch, inclusive, and encouraging environment.
-
Abstract: In the geosciences, recent attention has been paid to the influence of uncertainty on expert decision making. When making decisions under uncertainty, people tend to employ heuristics (rules of thumb) based on experience, relying on their prior knowledge and beliefs to intuitively guide choice. Over 50 years of decision making research in cognitive psychology demonstrates that heuristics can lead to less-than-optimal decisions, collectively referred to as biases. For example, a geologist who confidently interprets ambiguous data as representative of a category familiar from their own research (e.g., strike-slip faults, for an expert in extensional domains) is exhibiting the availability bias, which occurs when people make judgments based on what is most dominant or accessible in memory. Given the important social and commercial implications of many geoscience decisions, there is a need to develop effective interventions for removing or mitigating decision bias. In this paper, we summarize key insights from decision making research about how to reduce bias and review the literature on debiasing strategies. First, we define an optimal decision, since improving decision making requires a standard to work towards. Next, we discuss the cognitive mechanisms underlying decision biases and describe three biases that have been shown to influence geoscientists' decision making (availability bias, framing bias, anchoring bias). Finally, we review existing debiasing strategies that have applicability in the geosciences, with special attention to strategies that make use of information technology and artificial intelligence (AI). We present two case studies illustrating different applications of intelligent systems for debiasing geoscientific decision making, where debiased decision making is an emergent property of the coordinated and integrated processing of human-AI collaborative teams.
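The representation-bias survey above treats representation bias as a measurable property of a dataset, independent of any downstream model. As a rough illustration of one such measurement (not code from the article), the minimal Python sketch below compares each group's share of a dataset against its share of a reference population; the function name `representation_gap`, the `group` field, and the toy numbers are all illustrative assumptions.

```python
from collections import Counter

def representation_gap(records, group_key, population_shares):
    """Gap between each group's share of the data and its share of a
    reference population; negative values indicate under-representation."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - pop_share, 3)
        for group, pop_share in population_shares.items()
    }

# Toy data: a 70/30 split in the dataset vs. a 50/50 reference population.
records = [{"group": "A"}] * 70 + [{"group": "B"}] * 30
print(representation_gap(records, "group", {"A": 0.5, "B": 0.5}))
# {'A': 0.2, 'B': -0.2} -> group B is under-represented by 20 points
```

A resolving step might then reweight or resample group B; the survey's taxonomy of techniques covers many such options.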
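The human-AI fairness study above measures participants' ability to improve demographic parity. A standard way to quantify this notion is the demographic parity difference: the gap in positive-prediction rates across groups. The sketch below is a minimal illustration of that metric under that standard definition, not code from the paper; the function name and toy data are assumptions.

```python
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across groups:
    0.0 means parity; larger values mean one group is favored."""
    totals, positives = {}, {}
    for pred, g in zip(y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if pred == 1 else 0)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: the model approves group "a" (3/4) more often than "b" (1/4).
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```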