Asynchronous online courses are popular because they offer benefits to both students and instructors. Students benefit from the convenience, flexibility, affordability, freedom of geography, and access to information. Instructors and institutions benefit from a broad geographical reach, scalability, and the cost savings of not needing a physical classroom. A challenge with asynchronous online courses is providing students with engaging, collaborative, and interactive experiences. Here, we describe how an online poster symposium can be used as a unique educational experience and assessment tool in a large-enrollment (e.g., 500 students), asynchronous, natural science, general education (GE) course. The course, Introduction to Environmental Science (ENR2100), was delivered using distance education (DE) technology over a 15-week semester. In ENR2100 students learn a variety of topics including freshwater resources, surface water, aquifers, groundwater hydrology, ecohydrology, coastal and ocean circulation, drinking water, water purification, wastewater treatment, irrigation, urban and agricultural runoff, sediment and contaminant transport, the water cycle, water policy, water pollution, and water quality. Here we present a long-term study conducted from 2017 to 2022 (before and after COVID-19) that involved 5,625 students over 8 semesters. Scaffolding was used to break the poster project into smaller, more manageable assignments, which students completed throughout the semester. Instructions, examples, how-to videos, book chapters, and rubrics were used to accommodate students' different levels of knowledge. Poster assignments were designed to teach students how to find and critically evaluate sources of information, recognize the changing nature of scientific knowledge, methods, models, and tools, understand the application of scientific data and technological developments, and evaluate the social and ethical implications of natural science discoveries. At the end of the semester students participated in an asynchronous online poster symposium. Each student delivered a 5-min poster presentation using an online learning management system and completed peer reviews of their classmates' posters using a rubric. This poster project met the learning objectives of our natural science, general education course and taught students important written, visual, and verbal communication skills. Students were surveyed to determine which parts of the course were most effective for instruction and learning. Students ranked poster assignments first, followed closely by lecture videos. Approximately 87% of students were confident that they could produce a scientific poster in the future, and 80% of students recommended virtual poster symposiums for online courses.
Alternatives to Simple Multiple-Choice Questions: Computer Scorable Questions that Reveal and Challenge Student Thinking (Abstract Only)
When creating assessments, computer science educators and researchers must balance items' cognitive complexity and authenticity against scoring efficiency. In this poster, the author reports results from an end-of-course assessment administered to over 500 high school students in an introductory block-based programming course. The poster focuses on three atypical multiple-choice items, in which students had to select all the correct responses. The items were designed to be more cognitively complex than simple multiple-choice questions while remaining easy to score. Results show that this type of item was challenging for students but was predictive of their overall performance.
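The abstract does not describe the scoring procedure itself; as a hedged illustration of why select-all-that-apply items remain computer scorable, the sketch below grades a hypothetical item under two common rules (all-or-nothing and per-option partial credit). The item, options, and scoring rules are assumptions made for illustration, not details reported in the poster.

```python
# Illustrative sketch (not from the poster): auto-scoring a hypothetical
# "select all that apply" item under two common scoring rules.

def score_all_or_nothing(selected: set[str], correct: set[str]) -> float:
    """Full credit only when the selected options exactly match the key."""
    return 1.0 if selected == correct else 0.0

def score_partial_credit(selected: set[str], correct: set[str],
                         all_options: set[str]) -> float:
    """One point per option judged correctly (selected-and-correct or
    unselected-and-incorrect), scaled to the 0-1 range."""
    right_calls = sum(
        1 for opt in all_options
        if (opt in selected) == (opt in correct)
    )
    return right_calls / len(all_options)

# Hypothetical item: options A-E, with B and D as the correct responses.
options = {"A", "B", "C", "D", "E"}
key = {"B", "D"}
student_response = {"B", "C", "D"}

print(score_all_or_nothing(student_response, key))           # 0.0
print(score_partial_credit(student_response, key, options))  # 0.8
```

Either rule is computed automatically from the response alone, which is what keeps such items cheap to score even though they demand more of the student than a single-answer question.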
- Award ID(s): 1348866
- PAR ID: 10353802
- Date Published:
- Journal Name: SIGCSE '18: Proceedings of the 49th ACM Technical Symposium on Computer Science Education
- Page Range / eLocation ID: 1078 to 1078
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Security failures in software arising from failures to practice secure programming are commonplace. Improving this situation requires that practitioners have a clear understanding of the foundational concepts in secure programming to serve as a basis for building new knowledge and responding to new challenges. We developed a Secure Programming Concept Inventory (SPCI) to measure students' understanding of foundational concepts in secure programming. The SPCI consists of thirty-five multiple-choice items targeting ten concept areas of secure programming. The SPCI was developed by establishing the content domain of secure programming, developing a pool of test items, conducting multiple rounds of testing and refining the items, and performing a final round of testing and inventory reduction to produce the final scale. Scale development began by identifying the core concepts in secure programming. A Delphi study was conducted with thirty practitioners from industry, academia, and government to establish the foundational concepts of secure programming and develop a concept map. To build a set of misconceptions in secure programming, the researchers conducted interviews with students and instructors in the field. These interviews were analyzed using content analysis, which resulted in a taxonomy of misconceptions in secure programming covering ten concept areas. An item pool of multiple-choice questions was developed, and this pool of 225 items was administered to a population of 690 students across four institutions. Item discrimination and item difficulty scores were calculated (a sketch of how such classical item statistics can be computed appears after this list), and the best-performing items were mapped to the misconception categories to create subscales for each concept area, resulting in a validated 35-item scale.
- Understanding how individual students cognitively engage while participating in small group activities in a General Chemistry class can provide insight into what factors may be influencing their level of engagement. The Interactive-Constructive-Active-Passive (ICAP) framework was used to identify individual students' level of engagement on items in multiple activities during a General Chemistry course. The effects of timing, group size, and question type on engagement were investigated. Results indicate that students' engagement varied more in the first half of the term, and that students demonstrated higher levels of engagement when working in smaller groups, or in subsets of larger groups, when these groups contained students with similar levels of knowledge. Finally, the relation between question type (algorithmic versus explanation) and engagement depended on the activity topic: in an activity on Solutions and Dilutions, algorithmic items had significantly higher occurrences of Interactive engagement. The implications of this work for teaching and research are discussed.
- Multiple-choice questions are traditionally expensive to produce. Recent advances in large language models (LLMs) have led to fine-tuned LLMs that generate questions competitive with human-authored questions. However, the relative capabilities of ChatGPT-family models have not yet been established for this task. We present a carefully controlled human evaluation of three conditions: a fine-tuned, augmented version of Macaw, instruction-tuned Bing Chat with zero-shot prompting (a hypothetical illustration of such a prompt appears after this list), and human-authored questions from a college science textbook. Our results indicate that on six of seven measures tested, the performance of both LLMs was not significantly different from human performance. Analysis of LLM errors further suggests that Macaw and Bing Chat have different failure modes for this task: Macaw tends to repeat answer options, whereas Bing Chat tends to not include the specified answer in the answer options. For Macaw, removing error items from the analysis results in performance on par with humans for all metrics; for Bing Chat, removing error items improves performance but does not reach human-level performance.
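The item analysis in the secure programming concept inventory work above rests on classical test-theory statistics. As a minimal sketch, assuming a simple 0/1 response matrix (not the authors' data or code), item difficulty can be computed as the proportion of correct responses and item discrimination as a point-biserial correlation between each item and the rest of the test:

```python
# Minimal sketch (assumed, not the SPCI authors' code) of classical item
# statistics: difficulty = proportion correct; discrimination = point-biserial
# correlation between an item and the rest-of-test score.
import numpy as np

# Hypothetical 0/1 response matrix: rows = students, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

difficulty = responses.mean(axis=0)  # proportion correct per item

def point_biserial(item_scores: np.ndarray, rest_scores: np.ndarray) -> float:
    """Pearson correlation between a dichotomous item and the rest score."""
    if item_scores.std() == 0 or rest_scores.std() == 0:
        return float("nan")  # every student answered this item the same way
    return float(np.corrcoef(item_scores, rest_scores)[0, 1])

total = responses.sum(axis=1)
discrimination = np.array([
    point_biserial(responses[:, j], total - responses[:, j])
    for j in range(responses.shape[1])
])

print("difficulty:", np.round(difficulty, 2))
print("discrimination:", np.round(discrimination, 2))
```

Items with near-zero or negative discrimination are the usual candidates for revision or removal, which is how a large pilot pool is typically reduced to a short validated scale.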
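The zero-shot prompting condition in the question-generation study above is described only at a high level. The snippet below is purely a hypothetical illustration of how such a prompt might be assembled for a chat model; the wording, parameters, and function names are assumptions, not the prompts used with Bing Chat or Macaw.

```python
# Hypothetical example only: assembling a zero-shot prompt that asks a chat
# model to write one multiple-choice question from a textbook passage.

def build_mcq_prompt(passage: str, n_options: int = 4) -> str:
    last_label = chr(ord("A") + n_options - 1)
    return (
        "You are writing assessment items for a college science course.\n"
        "Read the passage below and write ONE multiple-choice question "
        f"with {n_options} answer options.\n"
        "Exactly one option must be correct, and the correct answer must "
        "appear among the options.\n"
        f"Label the options A-{last_label} and give the correct letter on a "
        "final line beginning with 'Answer:'.\n\n"
        f"Passage:\n{passage}\n"
    )

passage = ("Osmosis is the net movement of water across a selectively "
           "permeable membrane from a region of lower solute concentration "
           "to a region of higher solute concentration.")
print(build_mcq_prompt(passage))
```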