Abstract: Challenging goals can induce harder work but also greater stress, in turn potentially undermining goal achievement. We sought to examine how mental effort and subjective experiences thereof interact as a function of the challenge level and the size of the incentives at stake. Participants performed a task that rewarded individual units of effort investment (correctly performed Stroop trials), but only if they met a threshold number of correct trials within a fixed time interval (the challenge level). We varied this challenge level (Study 1, n = 40) and the rewards at stake (Study 2, n = 79) and measured variability in task performance and self-reported affect across task intervals. Greater challenge and higher rewards facilitated greater effort investment but also induced greater stress, whereas higher rewards (and lower challenge) simultaneously induced greater positive affect. Within intervals, we observed an initial speed-up followed by a slowdown in performance, which could reflect dynamic reconfiguration of control. Collectively, these findings further our understanding of the influence of task demands and incentives on mental effort exertion and well-being.
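The incentive structure described in this abstract can be made concrete with a small sketch: each correct Stroop trial is rewarded, but only if the interval's challenge threshold is met. This is an illustrative reconstruction, not the authors' task code; the function name, threshold, and reward values are assumptions.

```python
# Illustrative sketch of the interval payoff rule described above.
# All names and values here are placeholders, not the study's parameters.

def interval_payoff(correct_trials: int, threshold: int, reward_per_trial: float) -> float:
    """Pay per correct trial only when the challenge threshold is reached."""
    if correct_trials >= threshold:
        return correct_trials * reward_per_trial
    return 0.0

# Example: 14 correct trials in an interval, hypothetical $0.10 per trial.
print(interval_payoff(14, threshold=12, reward_per_trial=0.10))  # threshold met -> 1.4
print(interval_payoff(14, threshold=16, reward_per_trial=0.10))  # threshold missed -> 0.0
```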
Do you ever get tired of being wrong? The unique impact of feedback on subjective experiences of effort-based decision-making
To achieve a goal, people have to keep track of how much effort they are putting in (effort monitoring) and how well they are performing (performance monitoring), which can be informed by endogenous signals or by exogenous signals that provide explicit feedback about whether they have met their goal. Interventions to improve performance often focus on adjusting feedback to direct the individual on how to better invest their efforts, but is it possible that this feedback itself shapes the experience of how effortful the task feels? Here, we examine this question directly by assessing the relationship between effort monitoring and performance monitoring. Participants (N = 68) performed a task in which their goal was to squeeze a handgrip to within a target force level (neither lower nor higher) for a minimum duration. On most trials, they were given no feedback as to whether they had met their goal and were largely unable to detect how they had performed. On a subset of trials, however, we provided participants with (false) feedback indicating that they had either succeeded or failed at meeting their goal (positive vs. negative feedback blocks, respectively). Sporadically, participants rated their experience of effort exertion, fatigue, and confidence in having met the target grip force on that trial. Although the feedback was non-veridical to participants' actual performance, the type of feedback they received influenced their experience of effort: when receiving negative (vs. positive) feedback, participants fatigued faster and adjusted their grip strength more for higher target force levels. We also found that confidence gradually increased with accumulating positive feedback and decreased with accumulating negative feedback, again despite the feedback being uniformly uninformative. These results suggest differential influences of feedback on experiences related to effort and further shed light on the relationship between experiences related to performance monitoring and effort monitoring.
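As a rough illustration of the trial structure, the sketch below scores whether a sampled force trace stayed within the target window for a minimum continuous duration. It is an assumption about one plausible scoring rule, not the authors' task code; the force window, sampling rate, and hold duration are placeholders.

```python
# Illustrative sketch (assumed scoring rule): did the force trace stay within
# the target window [lower, upper] for at least min_hold_s continuous seconds?

def met_target(force_trace, lower, upper, min_hold_s, sample_rate_hz):
    """Return True if force stayed in the window long enough on this trial."""
    required_samples = int(min_hold_s * sample_rate_hz)
    run = 0  # length of the current in-window streak, in samples
    for f in force_trace:
        run = run + 1 if lower <= f <= upper else 0
        if run >= required_samples:
            return True
    return False

# Example: 2 s of samples at 100 Hz, hypothetical target window of 40-60% max grip force.
trace = [30] * 50 + [50] * 120 + [70] * 30
print(met_target(trace, lower=40, upper=60, min_hold_s=1.0, sample_rate_hz=100))  # True
```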
- Award ID(s):
- 2046111
- PAR ID:
- 10524030
- Publisher / Repository:
- PsyArXiv
- Date Published:
- Format(s):
- Medium: X
- Institution:
- PsyArXiv
- Sponsoring Org:
- National Science Foundation
More Like this
- Kamar, Ece; Luther, Kurt (Eds.): This study investigates how different forms of input elicitation obtained from crowdsourcing can be utilized to improve the quality of inferred labels for image classification tasks, where an image must be labeled as either positive or negative depending on the presence or absence of a specified object. Three types of input elicitation methods are tested: binary classification (positive or negative); level of confidence in the binary response (on a scale from 0 to 100%); and what participants believe the majority of the other participants' binary classification is. We design a crowdsourcing experiment to test the performance of the proposed input elicitation methods and use data from over 200 participants. Various existing voting and machine learning (ML) methods are applied, and others are developed, to make the best use of these inputs. To assess their performance on classification tasks of varying difficulty, a systematic synthetic image generation process is developed. Each generated image combines items from the MPEG-7 Core Experiment CE-Shape-1 Test Set into a single image using multiple parameters (e.g., density, transparency) and may or may not contain a target object. The difficulty of these images is validated by the performance of an automated image classification method. Experimental results suggest that more accurate classifications can be achieved when the average of the self-reported confidence values is used as an additional attribute for the ML algorithms, relative to what is achieved with more traditional approaches. Additionally, the results demonstrate that other performance metrics of interest, namely reduced false-negative rates, can be prioritized through special modifications of the proposed aggregation methods that leverage the variety of elicited inputs.
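As a rough illustration of how confidence ratings can be folded into label aggregation, the sketch below contrasts a plain majority vote with a confidence-weighted vote for a single image. This is a simplified, assumed variant for illustration, not the paper's specific voting or ML methods.

```python
# Simplified illustration (not the paper's exact methods): aggregating one
# image's crowd labels with and without the self-reported confidence values.

def majority_vote(labels):
    """labels: +1 = object present, -1 = object absent."""
    return 1 if sum(labels) >= 0 else -1

def confidence_weighted_vote(labels, confidences):
    """confidences: self-reported confidence in [0, 1] for each label."""
    score = sum(l * c for l, c in zip(labels, confidences))
    return 1 if score >= 0 else -1

labels = [1, -1, -1, 1, -1]
confidences = [0.9, 0.3, 0.4, 0.8, 0.2]
print(majority_vote(labels))                          # -1: plain majority says absent
print(confidence_weighted_vote(labels, confidences))  # +1: confident voters say present
```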
- Abstract: Objective. Brain–computer interfaces (BCIs) show promise as a direct line of communication between the brain and the outside world that could benefit those with impaired motor function. But the commands available for BCI operation are often limited by the ability of the decoder to differentiate between the many distinct motor or cognitive tasks that can be visualized or attempted. Simple binary command signals (e.g., right hand at rest versus movement) are therefore used due to their ability to produce large observable differences in neural recordings. At the same time, frequent command switching can impose greater demands on the subject's focus and takes time to learn. Here, we attempt to decode the degree of effort in a specific movement task to produce a graded and more flexible command signal. Approach. Fourteen healthy human subjects (nine male, five female) responded to visual cues by squeezing a hand dynamometer to different levels of predetermined force, guided by continuous visual feedback, while the electroencephalogram (EEG) and grip force were monitored. Movement-related EEG features were extracted and modeled to predict exerted force. Main results. We found that event-related desynchronization (ERD) of the 8–30 Hz mu-beta sensorimotor rhythm of the EEG is separable for different degrees of motor effort. Upon four-fold cross-validation, linear classifiers were found to predict grip force from an ERD vector with mean accuracies across subjects of 53% and 55% for the dominant and non-dominant hand, respectively. ERD amplitude increased with target force but appeared to pass through a trough that hinted at non-monotonic behavior. Significance. Our results suggest that modeling and interactive feedback based on the intended level of motor effort is feasible. The observed ERD trends suggest that different mechanisms may govern intermediate versus low and high degrees of motor effort. This may have utility in rehabilitative protocols for motor impairments.
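To make the decoding step concrete, here is a minimal sketch of four-fold cross-validation of a linear classifier on per-trial ERD feature vectors. The data are simulated placeholders, and the feature layout (one band-power change value per channel) is an assumption rather than the study's pipeline.

```python
# Minimal sketch under assumed data shapes (not the study's code): four-fold
# cross-validation of a linear classifier predicting the cued force level
# from a per-trial ERD feature vector.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_levels = 120, 16, 4
force_level = rng.integers(0, n_levels, n_trials)  # cued force level per trial (toy labels)
# Toy ERD features: stronger desynchronization (more negative) with higher target force.
erd = rng.normal(0, 1, (n_trials, n_channels)) - 0.3 * force_level[:, None]

scores = cross_val_score(LinearDiscriminantAnalysis(), erd, force_level, cv=4)
print(scores.mean())  # chance level for this toy setup is 1 / n_levels = 25%
```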
- Abstract: Women frequently feel alienated in science, technology, engineering, and mathematics (STEM) environments due to gender biases, ultimately leading them to feel less competent or leave the field altogether. This study utilizes personal statements from a subset of participants from a National Science Foundation (NSF)-funded Research Experiences for Undergraduates (REU) Site: Biomedical Engineering in Simulations, Imaging, and Modeling (BME-SIM) to investigate how confidence is shown by participants and how confidence is perceived by faculty reviewers in personal statements. This study compares feedback from faculty reviewers to perceived and self-reported confidence using lexical (i.e., word choices and use) and syntactic (i.e., structures of language segments such as sentences, phrases, and organization of words) features of these personal statements. Women received more negative feedback related to confidence compared to their male counterparts, notably in relation to modesty. Few differences were found between the writing styles of the genders in their pre- and post-program statements. Overall, writing styles did not seem to correlate with the genders' perceived or self-reported confidence; however, perception of confidence suggested a relationship between genders' pre- and post-program statements when examined by noun and adjective variation. A similar relationship was found between self-reported confidence and noun variation in men and women participants. Findings suggest that writing style perceptions and practices may be influenced by gender norms; however, without looking at the specific diction and content of personal statements, these conclusions cannot be fully established.
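One common way to operationalize the noun and adjective variation mentioned in this record is the ratio of unique noun or adjective types to total tokens. The sketch below computes that ratio with NLTK part-of-speech tags; this operationalization and the helper name are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch (assumed operationalization, not the paper's measure):
# noun and adjective variation as unique POS types divided by total tokens.
import nltk
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")  # first run only

def pos_variation(text, prefixes=("NN", "JJ")):
    """Return {tag prefix: unique-type / token ratio}; NN* = nouns, JJ* = adjectives."""
    tokens = nltk.word_tokenize(text.lower())
    tagged = nltk.pos_tag(tokens)
    out = {}
    for prefix in prefixes:
        types = {word for word, tag in tagged if tag.startswith(prefix)}
        out[prefix] = len(types) / max(len(tokens), 1)
    return out

print(pos_variation("I designed and validated a novel imaging model with careful, rigorous analysis."))
```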
- This work investigates how different forms of input elicitation obtained from crowdsourcing can be utilized to improve the quality of inferred labels for image classification tasks, where an image must be labeled as either positive or negative depending on the presence or absence of a specified object. Five types of input elicitation methods are tested: binary classification (positive or negative); the (x, y)-coordinate of the position where participants believe a target object is located; level of confidence in the binary response (on a scale from 0 to 100%); what participants believe the majority of the other participants' binary classification is; and the participant's perceived difficulty level of the task (on a discrete scale). We design two crowdsourcing studies to test the performance of a variety of input elicitation methods and utilize data from over 300 participants. Various existing voting and machine learning (ML) methods are applied to make the best use of these inputs. To assess their performance on classification tasks of varying difficulty, a systematic synthetic image generation process is developed. Each generated image combines items from the MPEG-7 Core Experiment CE-Shape-1 Test Set into a single image using multiple parameters (e.g., density, transparency) and may or may not contain a target object. The difficulty of these images is validated by the performance of an automated image classification method. Experimental results suggest that more accurate results can be achieved with smaller training datasets when both the crowdsourced binary classification labels and the average of the self-reported confidence values in these labels are used as features for the ML classifiers. Moreover, when a relatively larger, properly annotated dataset is available, in some cases augmenting these ML algorithms with the results (i.e., probability of outcome) from an automated classifier can achieve even higher performance than what can be obtained by using any one of the individual classifiers. Lastly, supplementary analysis of the collected data demonstrates that other performance metrics of interest, namely reduced false-negative rates, can be prioritized through special modifications of the proposed aggregation methods.
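As an illustration of the augmentation idea in this record, the sketch below compares a classifier trained on crowd-derived features alone with one that also receives an automated classifier's predicted probability. The feature layout and the simulated placeholder data are assumptions, not the study's dataset or pipeline.

```python
# Minimal sketch (assumed features, simulated placeholder data): does adding an
# automated classifier's probability to crowd-derived features help a final model?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 300
vote_share      = rng.uniform(0, 1, n)   # fraction of crowd "positive" labels per image
mean_confidence = rng.uniform(0, 1, n)   # average self-reported confidence per image
auto_prob       = rng.uniform(0, 1, n)   # automated classifier's P(object present)
# Toy ground truth that depends on both crowd agreement and the automated score.
y = ((vote_share + auto_prob) / 2 + rng.normal(0, 0.15, n) > 0.5).astype(int)

crowd_only = np.column_stack([vote_share, mean_confidence])
augmented  = np.column_stack([vote_share, mean_confidence, auto_prob])
for name, X in [("crowd only", crowd_only), ("crowd + automated", augmented)]:
    print(name, cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```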