Silent hypoxemia, or "happy hypoxia", is a puzzling phenomenon in which patients who have contracted COVID-19 exhibit very low oxygen saturation (< 80%) but do not experience discomfort in breathing. The mechanism by which this blunted response to hypoxia occurs is unknown. We have previously shown that a computational model of the respiratory neural network (Diekman et al. in J Neurophysiol 118(4):2194–2215, 2017) can be used to test hypotheses focused on changes in chemosensory inputs to the central pattern generator (CPG). We hypothesize that altered chemosensory function at the level of the carotid bodies and/or the nucleus tractus solitarii is responsible for the blunted response to hypoxia. Here, we use our model to explore this hypothesis by altering the properties of the gain function representing oxygen-sensing inputs to the CPG. We then vary other parameters in the model and show that oxygen-carrying capacity is the most salient factor for producing silent hypoxemia. We call for clinicians to measure hematocrit as a clinical index of altered physiology in response to COVID-19 infection.
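To make the gain-function idea concrete, here is a minimal sketch (in Python) of a sigmoidal oxygen-sensing drive to the CPG together with the standard arterial oxygen-content formula, which ties saturation to hemoglobin, the carrying-capacity factor highlighted above. The function names, the sigmoid form, and all parameter values are illustrative assumptions; they are not the equations or parameters of the Diekman et al. (2017) model.

```python
import numpy as np

# Hypothetical sigmoidal chemosensory gain: the drive delivered to the
# respiratory CPG rises as arterial PO2 falls below a half-activation
# point. Functional form and parameter values are illustrative only and
# are not taken from Diekman et al. (2017).
def chemosensory_gain(pao2_mmhg, g_max=1.0, p_half=85.0, slope=10.0):
    """Drive to the CPG as a decreasing sigmoid of arterial PO2 (mmHg)."""
    return g_max / (1.0 + np.exp((pao2_mmhg - p_half) / slope))

# Standard arterial oxygen-content formula: bound O2 scales with
# hemoglobin concentration (carrying capacity) times saturation, plus a
# small dissolved component.
def o2_content(sao2_frac, hb_g_dl, pao2_mmhg):
    """Arterial O2 content in mL O2 per dL blood."""
    return 1.34 * hb_g_dl * sao2_frac + 0.003 * pao2_mmhg

if __name__ == "__main__":
    # A blunted sensor (lower maximal gain, left-shifted half-activation)
    # supplies far less respiratory drive at the same hypoxemic PaO2.
    normal = chemosensory_gain(45.0, g_max=1.0, p_half=85.0)
    blunted = chemosensory_gain(45.0, g_max=0.3, p_half=60.0)
    print(f"CPG drive at PaO2 = 45 mmHg: normal {normal:.2f}, blunted {blunted:.2f}")

    # Higher hemoglobin (hematocrit) keeps O2 content up even at low
    # saturation, which is why hematocrit is flagged as a clinical index.
    for hb in (15.0, 20.0):
        print(f"Hb = {hb} g/dL, SaO2 = 0.78: CaO2 = {o2_content(0.78, hb, 45.0):.1f} mL/dL")
```

Blunting the sensor leaves the CPG with little drive even at very low arterial PO2, while raising hemoglobin preserves oxygen content at low saturation; both manipulations parallel the parameter variations described in the abstract above.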
Current science education reform efforts emphasize an equitable vision of science learning in which students contribute to knowledge-building by drawing on their rich cultural and linguistic backgrounds while engaging in the three dimensions to make sense of compelling, relevant phenomena. However, this vision will not be fully realized without coherence between curriculum, instruction, and assessment. As a majority of states have now adopted standards aligned to or adapted from the Framework, we see an urgent need for assessments that can support rather than conflict with equitable science learning. In this study, we seek to understand the current state of Framework-aligned assessment tasks. We have amassed 352 middle school tasks, originating from state-level assessment banks and assessment developers at universities or research organizations. Our preliminary findings from characterizing 104 tasks revealed that the majority of tasks target dimensions of the NGSS or Framework-based standards and include a phenomenon. However, there are challenges in framing phenomena that attend to students' interests and identities and engage students in three-dimensional sensemaking. Additionally, some phenomena are not based in real-world observations and are not authentic from students' perspectives, which makes it difficult for students to see connections of local or global relevance.
Argumentation is fundamental to science education, both as a prominent feature of scientific reasoning and as an effective mode of learning, a perspective reflected in contemporary frameworks and standards. The successful implementation of argumentation in school science, however, requires a paradigm shift in science assessment from the measurement of knowledge and understanding to the measurement of performance and knowledge in use. Performance tasks requiring argumentation must capture the many ways students can construct and evaluate arguments in science, yet such tasks are both expensive and resource-intensive to score. In this study we explore how machine learning text classification techniques can be applied to develop efficient, valid, and accurate constructed-response measures of students' competency with written scientific argumentation, aligned with a validated argumentation learning progression. Data come from 933 middle school students in the San Francisco Bay Area and are based on three sets of argumentation items in three different science contexts. The findings demonstrate that we have been able to develop computer scoring models that achieve substantial to almost perfect agreement between human-assigned and computer-predicted scores. Model performance was slightly weaker for harder items targeting higher levels of the learning progression, largely due to the linguistic complexity of these responses and the sparsity of higher-level responses in the training data set. Comparing the efficacy of different scoring approaches revealed that breaking students' arguments down into multiple components (e.g., the presence of an accurate claim or the provision of sufficient evidence), developing a computer model for each component, and combining the component scores into a holistic score produced better results than holistic scoring approaches. However, this analytic approach was differentially biased when scoring responses from English learner (EL) students as compared to responses from non-EL students on some items. Differences in severity between human and computer scores for EL students across these approaches are explored, and potential sources of bias in automated scoring are discussed.
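As a concrete illustration of the analytic-then-holistic scoring strategy and the agreement analysis described above, the sketch below trains one text classifier per argument component, sums the component predictions into a holistic score, and reports quadratic weighted kappa as a human-machine agreement metric. The component names, the TF-IDF plus logistic-regression pipeline, and the sum-based combination rule are placeholder assumptions, not the study's validated learning-progression rubric or its actual scoring models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

# Placeholder analytic components, each scored 0/1 per response. These are
# not the rubric categories used in the study.
COMPONENTS = ["accurate_claim", "sufficient_evidence", "reasoning_links_evidence"]

def train_component_models(responses, component_labels):
    """Fit one binary text classifier per analytic argument component."""
    models = {}
    for name in COMPONENTS:
        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),
            LogisticRegression(max_iter=1000),
        )
        clf.fit(responses, component_labels[name])
        models[name] = clf
    return models

def holistic_score(models, response):
    """Combine the analytic component predictions into one holistic score."""
    return sum(int(models[name].predict([response])[0]) for name in COMPONENTS)

def agreement(human_scores, machine_scores):
    """Quadratic weighted kappa between human- and computer-assigned scores."""
    return cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
```

Running the agreement function separately on responses from EL and non-EL students would surface the kind of differential severity between human and computer scores reported in the abstract.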