Background: "Frank autism," recognizable through the first minutes of an interaction, describes a behavioral presentation of a subset of autistic individuals that is closely tied to social communication challenges, and may be linked to so-called "prototypical autism." To date, there is no research on frank autism presentations of autistic adolescents and young adults, nor individuals diagnosed with autism spectrum disorder (ASD) in childhood who do not meet diagnostic criteria during or after adolescence (loss of autism diagnosis, LAD). In addition, there are currently no data on the factors that drive frank autism impressions in these adolescent groups. Methods: This study quantifies initial impressions of autistic characteristics in 24 autistic, 24 LAD and 26 neurotypical (NT) individuals ages 12 to 39 years. Graduate student and expert clinicians completed five-minute impressions, rated confidence in their own impressions, and scored the atypicality of behaviors associated with impressions; impressions were compared with current gold-standard diagnostic outcomes. Results: Overall, clinicians' impressions within the first five minutes generally matched current gold-standard diagnostic status (clinical best estimate), were highly correlated with ADOS-2 CSS, and were driven primarily by prosodic and facial cues. However, this brief observation did not detect autism in all cases. While clinicians noted some subclinical atypicalities in the LAD group, impressions of the LAD and NT groups were similar. Limitations: The brief observations in this study were conducted during clinical research, including some semi-structured assessments. While results suggest overall concordance between initial impressions and diagnoses following more thorough evaluation, findings may not generalize to less structured, informal contexts. In addition, our sample was demographically homogeneous and comprised only speaking autistic participants. They were also unmatched for sex, with more females in the non-autistic group. Future studies should recruit samples that are diverse in demographic variables and ability level to replicate these findings and explore their implications. Conclusions: Results provide insights into the behavioral characteristics that contribute to the diagnosis of adolescents and young adults and may help inform diagnostic decision making in the wake of an increase in the demand for autism evaluations later than childhood. They also substantiate claims of an absence of apparent autistic characteristics in individuals who have lost the diagnosis.
Psychometric validation and refinement of the Interoception Sensory Questionnaire (ISQ) in adolescents and adults on the autism spectrum
Background: Individuals on the autism spectrum are reported to display alterations in interoception, the sense of the internal state of the body. The Interoception Sensory Questionnaire (ISQ) is a 20-item self-report measure of interoception specifically intended to measure this construct in autistic people. The psychometrics of the ISQ, however, have not previously been evaluated in a large sample of autistic individuals. Methods: Using confirmatory factor analysis, we evaluated the latent structure of the ISQ in a large online sample of adults on the autism spectrum and found that the unidimensional model fit the data poorly. Using misspecification analysis to identify areas of local misfit and item response theory to investigate the appropriateness of the seven-point response scale, we removed redundant items and collapsed the response options to put forth a novel eight-item, five-response-choice ISQ. Results: The revised, five-response-choice ISQ (ISQ-8) showed much improved fit while maintaining high internal reliability. Differential item functioning (DIF) analyses indicated that the items of the ISQ-8 were answered in comparable ways by autistic adolescents and adults and across multiple other sociodemographic groups. Limitations: Our results were limited by the fact that we did not collect data from typically developing controls, preventing the analysis of DIF by diagnostic status. Additionally, while this study proposes a new five-response scale for the ISQ-8, our data were not collected using this format; thus, the psychometric properties of the revised instrument require further investigation. Conclusion: The ISQ-8 shows promise as a reliable and valid measure of interoception in adolescents and adults on the autism spectrum, but additional work is needed to examine its psychometrics in this population. A free online score calculator has been created to facilitate the use of ISQ-8 latent trait scores in further studies of autistic adolescents and adults (available at https://asdmeasures.shinyapps.io/ISQ_score/).
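The abstract does not spell out the mapping used to collapse the seven response options to five; the sketch below assumes a hypothetical merge of adjacent categories and shows one common internal-reliability check (Cronbach's alpha) on simulated data:

```python
# Illustrative sketch only: collapse a 7-point ISQ response scale to 5 categories
# and compute Cronbach's alpha. The actual category merging used for the ISQ-8
# is not specified here; the mapping below is a hypothetical example.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(200, 8))  # 200 respondents x 8 items, values 1..7

# Hypothetical collapse: merge the two lowest and the two highest categories
collapse = {1: 1, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 5}
collapsed = np.vectorize(collapse.get)(responses)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# With random data alpha will be near zero; the ISQ-8 is reported to show
# high internal reliability on real responses.
print(f"alpha (collapsed 5-point items): {cronbach_alpha(collapsed):.2f}")
```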
- Award ID(s): 1922697
- PAR ID: 10286321
- Date Published:
- Journal Name: Molecular Autism
- Volume: 12
- Issue: 1
- ISSN: 2040-2392
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Although theory of mind (ToM) is seen as a primary contributor to pragmatic language use in autistic individuals, less work has considered the influence of structural language. This study examines grammaticality judgements, ToM (Reading the Mind in the Eyes Test, Social Attribution Test), and pragmatic language (a de novo measure based on the Pragmatic Language Scales), and their associations, in three groups with heterogeneous abilities: current autism (n = 36); those with a history of autism spectrum disorder who no longer display symptoms ("loss of autism diagnosis", LAD; n = 32); and non-autistic (n = 36) adolescents and adults with fluent verbal skills. Results showed pragmatic difficulties in the autism group relative to both other groups, difficulties in affective ToM relative to both other groups, and difficulties in structural language relative to neurotypical controls; LAD individuals showed no impairments. While pairwise associations of structural language and matrix reasoning with pragmatic language were observed, ToM was the only unique predictor of pragmatic language when all measures were included in the models. Results suggest complex interactions among pragmatic language, structural language, and ToM, and indicate that pragmatic language improves meaningfully alongside the broad changes in autism characteristics seen when individuals lose the autism diagnosis.
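A minimal sketch of the kind of regression implied by "unique predictor" here, with hypothetical variable names and simulated data, might look like this:

```python
# Sketch of a multiple regression in which ToM, structural language, and matrix
# reasoning jointly predict pragmatic language; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 104  # roughly the combined sample size (36 + 32 + 36)
df = pd.DataFrame({
    "tom": rng.normal(size=n),
    "structural_language": rng.normal(size=n),
    "matrix_reasoning": rng.normal(size=n),
})
# Simulated outcome in which only ToM carries unique variance
df["pragmatic_language"] = 0.6 * df["tom"] + rng.normal(scale=0.5, size=n)

X = sm.add_constant(df[["tom", "structural_language", "matrix_reasoning"]])
model = sm.OLS(df["pragmatic_language"], X).fit()
print(model.summary())  # inspect which predictors have significant coefficients
```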
-
Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step for establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analyses require knowing several anchor items that are DIF-free in order to draw inferences on whether each of the rest is a DIF item, where the anchor items are used to identify the latent trait distributions. When no prior information on anchor items is available, or some anchor items are misspecified, item purification methods and regularized estimation methods can be used. The former iteratively purifies the anchor set by a stepwise model selection procedure, and the latter selects the DIF-free items by a LASSO-type regularization approach. Unfortunately, unlike the methods based on a correctly specified anchor set, these methods are not guaranteed to provide valid statistical inference (e.g., confidence intervals and p-values). In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal $L_1$ norm condition for identifying the latent trait distributions. Without requiring prior knowledge about an anchor set, it can accurately estimate the DIF effects of individual items and further draw valid statistical inferences for quantifying the uncertainty. Specifically, the inference results allow us to control the type-I error for DIF detection, which may not be possible with item purification and regularized estimation methods. We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is applied to analysing the three personality scales of the Eysenck Personality Questionnaire-Revised (EPQ-R).
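As a simplified numeric illustration of the minimal $L_1$ norm identification idea (not the authors' MIMIC estimator): for a plain intercept-shift model, choosing the latent trait shift that minimizes the total absolute DIF effect amounts to taking the median of the item-level group differences.

```python
# Simplified illustration (not the paper's MIMIC estimator): identify the latent
# trait shift between groups by minimizing the L1 norm of the item-level DIF
# effects. For an additive intercept-shift model this minimizer is the median.
import numpy as np

# Hypothetical group differences in item intercepts for 10 items;
# most items are DIF-free, two items carry genuine DIF.
d = np.array([0.52, 0.48, 0.50, 0.49, 0.51, 0.50, 0.47, 0.53, 1.40, -0.30])

# Minimal-L1 identification: choose the trait shift mu minimizing sum |d_j - mu|
mu_hat = np.median(d)
dif_effects = d - mu_hat

print(f"estimated latent trait shift: {mu_hat:.2f}")
for j, delta in enumerate(dif_effects, start=1):
    flag = "  <- possible DIF" if abs(delta) > 0.2 else ""
    print(f"item {j:2d}: DIF effect {delta:+.2f}{flag}")
```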
-
Item response theory (IRT) has become one of the most popular statistical models for psychometrics, a field of study concerned with the theory and techniques of psychological measurement. The IRT models are latent factor models tailored to the analysis, interpretation, and prediction of individuals’ behaviors in answering a set of measurement items that typically involve categorical response data. Many important questions of measurement are directly or indirectly answered through the use of IRT models, including scoring individuals’ test performances, validating a test scale, linking two tests, among others. This paper provides a review of item response theory, including its statistical framework and psychometric applications. We establish connections between item response theory and related topics in statistics, including empirical Bayes, nonparametric methods, matrix completion, regularized estimation, and sequential analysis. Possible future directions of IRT are discussed from the perspective of statistical learning.
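For readers unfamiliar with IRT, a minimal sketch of the two-parameter logistic (2PL) model and maximum-likelihood trait scoring, with illustrative item parameters, is shown below:

```python
# Minimal 2PL IRT sketch: item response function and maximum-likelihood scoring
# of one respondent's dichotomous answers. Parameters are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discrimination parameters
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])  # difficulty parameters
responses = np.array([1, 1, 1, 0, 0])       # observed correct/incorrect answers

def p_correct(theta: float) -> np.ndarray:
    """2PL item response function P(X_j = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def neg_log_likelihood(theta: float) -> float:
    p = p_correct(theta)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded")
print(f"ML estimate of theta: {result.x:.2f}")
```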
-
The Standards for educational and psychological assessment were developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (AERA et al., 2014). The Standards specify that assessment developers establish five types of validity evidence: test content, response processes, internal structure, relationship to other variables, and consequential/bias. Relevant to this proposal is consequential validity evidence that identifies the potential negative impact of testing or bias. Standard 3.1 of The Standards (2014) on fairness in testing states that “those responsible for test development, revision, and administration should design all steps of the testing process to promote valid score interpretations for intended score uses for the widest possible range of individuals and relevant sub-groups in the intended populations” (p. 63). Three types of bias include construct, method, and item bias (Boer et al., 2018). Testing for differential item functioning (DIF) is a standard analysis adopted to detect item bias against a subgroup (Boer et al., 2018). Example subgroups include gender, race/ethnic group, socioeconomic status, native language, or disability. DIF is when “equally able test takers differ in their probabilities answering a test item correctly as a function of group membership” (AERA et al., 2005, p. 51). DIF indicates systematic error as compared to real mean group differences (Camilli & Shepard, 1994). Items exhibiting significant DIF are removed, or reviewed for sources of bias to determine modifications that allow an item to be retained and tested further. The Delphi technique is an emergent systematic research method whereby expert panel members review item content through an iterative process (Yildirim & Büyüköztürk, 2018). Experts independently evaluate each item for potential sources leading to DIF, researchers group their responses, and experts then independently complete a survey to rate their level of agreement with the anonymously grouped responses. This process continues until saturation and consensus are reached among experts, as established through some criterion (e.g., median agreement rating, item quartile range, and percent agreement). The technique allows researchers to “identify, learn, and share the ideas of experts by searching for agreement among experts” (Yildirim & Büyüköztürk, 2018, p. 451). Research has illustrated this technique applied after DIF is detected, but not before administering items in the field. The current research is a methodological illustration of the Delphi technique applied in the item construction phase of assessment development as part of a five-year study to develop and test new problem-solving measures (PSM; Bostic et al., 2015, 2017) for U.S.A. grades 6-8 in a computer adaptive testing environment. As part of an iterative design-science-based methodology (Middleton et al., 2008), we illustrate the integration of the Delphi technique into the item writing process. Results from two three-person panels, each reviewing a set of 45 PSM items, are utilized to illustrate the technique. Advantages and limitations identified through a survey by participating experts and researchers are outlined to advance the method.
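A small sketch of the consensus criteria mentioned above (median agreement rating, interquartile range, percent agreement), applied to hypothetical expert ratings from a single Delphi round:

```python
# Sketch of Delphi consensus checks for one review round: median agreement
# rating, interquartile range, and percent agreement. Ratings and the
# consensus thresholds are hypothetical examples, not the study's criteria.
import numpy as np

# Expert agreement ratings (1-5 Likert) for each item in one review round
ratings = {
    "item_01": [5, 4, 5],
    "item_02": [3, 2, 4],
    "item_03": [4, 4, 5],
}

for item, r in ratings.items():
    r = np.array(r)
    median = np.median(r)
    iqr = np.percentile(r, 75) - np.percentile(r, 25)
    pct_agree = np.mean(r >= 4) * 100  # percent of experts rating 4 or higher
    consensus = median >= 4 and iqr <= 1 and pct_agree >= 66
    print(f"{item}: median={median}, IQR={iqr}, agree={pct_agree:.0f}% "
          f"-> {'consensus' if consensus else 'revisit next round'}")
```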