Food insecurity is defined as an individual's or household's limited or uncertain access to the safe and nutritious food that every member of the household needs for an active, healthy life. In this research, we apply visual analytics, the integration of data analytics and interactive visualization, to support evidence-based decision-making for a local food bank, helping it better understand the people and communities in its service area and improve its reach and impact. We have identified indicators of need, rates of usage, and other factors related to the general accessibility of the food bank and its programs. Interactive dashboards were developed that allow the food bank's decision-makers to combine their field knowledge with computing power to make evidence-based, informed decisions in complex hunger-relief operations.
Advances in Human Error, Reliability, Resilience, and Performance. AHFE 2018. Advances in Intelligent Systems and Computing
In the fight against hunger, food banks must routinely make strategic distribution decisions under uncertain supply (donations) and demand. One challenge facing decision makers is that they tend to rely heavily on their prior experiences when making decisions, a phenomenon called cognitive bias. This preliminary study seeks to address cognitive bias through a visual analytics approach to the decision-making process. Using food bank data, interactive dashboards were prepared as an alternative to the customary spreadsheet format. The effectiveness of the dashboards was then evaluated, and results indicated that the dashboards reduced the amount of confirmation bias.
- Award ID(s):
- 1718672
- PAR ID:
- 10066342
- Date Published:
- Journal Name:
- A Visual Analytics Approach to Combat Confirmation Bias for a Local Food Bank
- Volume:
- 778
- Page Range / eLocation ID:
- 13-23
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract. In the geosciences, recent attention has been paid to the influence of uncertainty on expert decision-making. When making decisions under conditions of uncertainty, people tend to employ heuristics (rules of thumb) based on experience, relying on their prior knowledge and beliefs to intuitively guide choice. Over 50 years of decision-making research in cognitive psychology demonstrates that heuristics can lead to less-than-optimal decisions, collectively referred to as biases. For example, the availability bias occurs when people make judgments based on what is most dominant or accessible in memory; geoscientists who have spent the past several months studying strike-slip faults will have this terrain most readily available in their mind when interpreting new seismic data. Given the important social and commercial implications of many geoscience decisions, there is a need to develop effective interventions for removing or mitigating decision bias. In this paper, we outline the key insights from decision-making research about how to reduce bias and review the literature on debiasing strategies. First, we define an optimal decision, since improving decision-making requires having a standard to work towards. Next, we discuss the cognitive mechanisms underlying decision biases and describe three biases that have been shown to influence geoscientists' decision-making (availability bias, framing bias, anchoring bias). Finally, we review existing debiasing strategies that have applicability in the geosciences, with special attention given to strategies that make use of information technology and artificial intelligence (AI). We present two case studies illustrating different applications of intelligent systems for the debiasing of geoscientific decision-making, wherein debiased decision-making is an emergent property of the coordinated and integrated processing of human–AI collaborative teams.
-
Abstract Social decision making involves balancing conflicts between selfishness and pro-sociality. The cognitive processes underlying such decisions are not well understood, with some arguing for a single comparison process, while others argue for dual processes (one intuitive and one deliberative). Here, we propose a way to reconcile these two opposing frameworks. We argue that behavior attributed to intuition can instead be seen as a starting point bias of a sequential sampling model (SSM) process, analogous to a prior in a Bayesian framework. Using mini-dictator games in which subjects make binary decisions about how to allocate money between themselves and another participant, we find that pro-social subjects become more pro-social under time pressure and less pro-social under time delay, while selfish subjects do the opposite. Our findings help reconcile the conflicting results concerning the cognitive processes of social decision making and highlight the importance of modeling the dynamics of the choice process.
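The starting-point-bias idea in the abstract above can be illustrated with a minimal sequential sampling simulation. This is a sketch, not the authors' model: all parameters (boundary positions, starting point, drift, noise, deadline length) are hypothetical values chosen only to show the qualitative effect that a biased starting point dominates choices under time pressure, while accumulated evidence (drift) dominates when there is no deadline.

```python
import random

def simulate_choice(start=0.8, drift=-0.02, noise=0.05, deadline=None, rng=None):
    """One random-walk trial between a selfish boundary (0.0) and a
    pro-social boundary (1.0). start > 0.5 encodes a pro-social
    starting-point bias; negative drift encodes selfish-leaning evidence.
    If the deadline expires before a boundary is hit, the nearer
    boundary determines the forced choice."""
    rng = rng or random
    x = start
    steps = 0
    while 0.0 < x < 1.0:
        if deadline is not None and steps >= deadline:
            return x > 0.5          # forced choice: nearest boundary wins
        x += drift + rng.gauss(0.0, noise)
        steps += 1
    return x >= 1.0                 # True = pro-social choice

def prosocial_rate(deadline, n=2000, seed=0):
    rng = random.Random(seed)
    return sum(simulate_choice(deadline=deadline, rng=rng) for _ in range(n)) / n

pressured = prosocial_rate(deadline=5)     # time pressure: starting bias dominates
unhurried = prosocial_rate(deadline=None)  # no deadline: drift dominates
print(pressured, unhurried)
```

With these illustrative parameters, a pro-social starting point yields mostly pro-social choices under a tight deadline, while the same agent, given unlimited time, mostly follows the selfish-leaning drift, matching the qualitative pattern the abstract reports.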
-
Abstract Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, humans still make consequential decisions. While the existing literature focuses on the bias and fairness of algorithmic recommendations, an overlooked question is whether they improve human decisions. We develop a general statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the United States criminal justice system. Our analysis of the preliminary data shows that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behaviour.
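The core idea behind experimentally evaluating an algorithmic recommendation, as in the abstract above, is that randomizing who sees the recommendation makes a simple difference in means an unbiased estimate of its causal effect. The sketch below is a hedged illustration with entirely fabricated, hypothetical outcome data; it is not the authors' methodology, which is considerably more general.

```python
import random

def average_treatment_effect(outcomes_treated, outcomes_control):
    """Difference-in-means estimate of the causal effect of showing the
    recommendation. Unbiased here only because treatment (seeing the
    recommendation) was randomly assigned."""
    return (sum(outcomes_treated) / len(outcomes_treated)
            - sum(outcomes_control) / len(outcomes_control))

# Hypothetical binary decisions (1 = restrictive decision), simulated with
# identical underlying rates, i.e. a true effect of zero.
rng = random.Random(42)
treated = [1 if rng.random() < 0.30 else 0 for _ in range(5000)]  # saw recommendation
control = [1 if rng.random() < 0.30 else 0 for _ in range(5000)]  # did not

ate = average_treatment_effect(treated, control)
print(round(ate, 3))
```

Because the simulated treated and control groups share the same underlying decision rate, the estimated effect is close to zero, mirroring the "little overall impact" pattern the abstract describes.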