Title: Comparing Self-Report Assessments and Scenario-Based Assessments of Systems Thinking Competence
Abstract: Self-report assessments are used frequently in higher education to assess a variety of constructs, including attitudes, opinions, knowledge, and competence. Systems thinking is one competence often measured using self-report assessments, in which individuals answer several questions about their perceptions of their own skills, habits, or daily decisions. In this study, we define systems thinking as the ability to see the world as a complex interconnected system where different parts can influence each other, and the interrelationships determine system outcomes. An alternative, less common assessment approach is to measure skills directly by providing a scenario about an unstructured problem and evaluating respondents’ judgment or analysis of the scenario (scenario-based assessment). This study explored the relationships between engineering students’ performance on self-report assessments and scenario-based assessments of systems thinking, finding no significant relationships between the two assessment techniques. These results suggest that there may be limitations to using self-report assessments to assess systems thinking and other competencies in educational research and evaluation, which could be addressed by incorporating alternative formats for assessing competence. Future work should explore these findings further and support the development of alternative assessment approaches.
Award ID(s):
1824594
PAR ID:
10403303
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Journal of Science Education and Technology
Volume:
32
Issue:
6
ISSN:
1059-0145
Format(s):
Medium: X
Size(s):
p. 793-813
Sponsoring Org:
National Science Foundation
More Like this
  1.
    While systems engineers rely on systems thinking skills in their work, given the increasing complexity of modern engineering problems, engineers across disciplines need to be able to engage in systems thinking, including what we term comprehensive systems thinking. Due to the inherent complexity of systems thinking, and more specifically comprehensive systems thinking, it is not easy to know how well students (and practitioners) are learning and leveraging systems thinking approaches. Thus, engineering managers and educators can benefit from systems thinking assessments. A variety of systems thinking assessments exist that are relevant to engineers, including some focused on the demonstration of systems thinking knowledge or skills and others measuring attitudes, interests, or values related to systems thinking. Starting with a collection of systems thinking assessments from a systematic literature review conducted by our team, we analyzed in depth those behavior-based assessments that included the creation of a visual representation and were open-ended, i.e., they did not presuppose or provide answers. The findings from this in-depth analysis of systems thinking behavior-based assessments identified 1) six visualization types that were leveraged, 2) dimensions of systems thinking that were assessed, and 3) tensions between the affordances of different assessments. In addition, we consider the ways assessments can be used, for example, to provide feedback to students or to determine which students are meeting defined learning goals. We draw on our findings to highlight opportunities for future comprehensive systems thinking behavior-based assessment development.
  2.
    Algorithmic impact assessments (AIAs) are an emergent form of accountability for entities that build and deploy automated decision-support systems. These are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enable institutions to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms about what constitutes impacts and harms, how potential harms are rendered as the impacts of a particular undertaking, who is responsible for conducting that assessment, and who has the authority to act on the impact assessment to demand changes to that undertaking. By examining proposals for AIAs in relation to other domains, we find that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability. To address this challenge of algorithmic accountability, and as impact assessments become a commonplace process for evaluating harms, the FAccT community should A) understand impacts as objects constructed for evaluative purposes, B) attempt to construct impacts as close as possible to actual harms, and C) recognize that accountability governance requires the input of various types of expertise and affected communities. We conclude with lessons for assembling cross-expertise consensus for the co-construction of impacts and to build robust accountability relationships. 
  3. Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has some potential to increase the assessment’s validity and reliability. The primary objective of this study is to examine the feasibility of a novel approach for stealthily assessing literacy skills using games in an intelligent tutoring system (ITS) designed for reading comprehension strategy training. We investigated the degree to which learners’ game performance and enjoyment predicted their scores on standardized reading tests. Amazon Mechanical Turk participants (n = 211) played three games in iSTART and self-reported their level of game enjoyment after each game. Participants also completed the Gates–MacGinitie Reading Test (GMRT), which includes vocabulary knowledge and reading comprehension measures. The results indicated that participants’ performance in each game as well as the combined performance across all three games predicted their literacy skills. However, the relations between game enjoyment and literacy skills varied across games. These findings suggest the potential of leveraging serious games to assess students’ literacy skills and improve the adaptivity of game-based learning environments. 
  4. With growing interest in supporting the development of computational thinking (CT) in early childhood, there is also need for new assessments that serve multiple purposes and uses. In particular, there is a need to understand the design of formative assessments that can be used during classroom instruction to provide feedback to teachers and children in real time. In this paper, we report on an empirical study and advance a new unit of observational analysis for formative assessment that we call an indicator of a knowledge refinement opportunity or, as a shorthand, a KRO indicator. We put forth a new framework for conceptualizing the design of formative assessments that builds on the Evidence Centered Design framework but centers the identification and analysis of indicators of knowledge refinement opportunities. We illustrate a number of key indicators through empirical examples drawn from video recordings of Kindergarten classroom lessons.
  5. Engineers are called to play an important role in addressing the complex problems of our global society, such as climate change and global health care. In order to adequately address these complex problems, engineers must be able to identify and incorporate into their decision making relevant aspects of systems in which their work is contextualized, a skill often referred to as systems thinking. However, within engineering, research on systems thinking tends to emphasize the ability to recognize potentially relevant constituent elements and parts of an engineering problem, rather than how these constituent elements and parts are embedded in broader economic, sociocultural, and temporal contexts and how all of these must inform decision making about problems and solutions. Additionally, some elements of systems thinking, such as an awareness of a particular sociocultural context or the coordination of work among members of a cross-disciplinary team, are not always recognized as core engineering skills, which alienates those whose strengths and passions are related to, for example, engineering systems that consider and impact social change. Studies show that women and minorities, groups underrepresented within engineering, are drawn to engineering in part for its potential to address important social issues. Emphasizing the importance of systems thinking and developing a more comprehensive definition of systems thinking that includes both constituent parts and contextual elements of a system will help students recognize the relevance and value of these other elements of engineering work and support full participation in engineering by a diverse group of students. We provide an overview of our study, in which we are examining systems thinking across a range of expertise to develop a scenario-based assessment tool that educators and researchers can use to evaluate engineering students’ systems thinking competence. 
Consistent with the aforementioned need to define and study systems thinking in a comprehensive, inclusive manner, we begin with a definition of systems thinking as a holistic approach to problem solving in which linkages and interactions of the immediate work with constituent parts, the larger sociocultural context, and potential impacts over time are identified and incorporated into decision making. In our study, we seek to address two key questions: 1) How do engineers of different levels of education and experience approach problems that require systems thinking? and 2) How do different types of life, educational, and work experiences relate to individuals’ demonstrated level of expertise in solving systems thinking problems? Our study comprises three phases. The first two phases include a semi-structured interview with engineering students and professionals about their experiences solving a problem requiring systems thinking and a think-aloud interview in which participants are asked to talk through how they would approach a given engineering scenario and later reflect on the experiences that inform their thinking. Data from these two phases will be used to develop a written assessment tool, which we will test by administering the written instrument to undergraduate and graduate engineering students in our third study phase. Our paper describes our study design and framing and includes preliminary findings from the first phase of our study.