The Hogan Personality Inventory (HPI) and Hogan Development Survey (HDS) are among the most widely used and extensively validated personality inventories for organizational applications; however, they are rarely used in basic research. We describe the Hogan Personality Content Single-Items (HPCS) inventory, an inventory designed to measure the 74 content subscales of the HPI and HDS with a single item each. We provide evidence of the reliability and validity of the HPCS, including item-level retest reliability estimates, self-other and other-other (observer) agreement, convergent correlations with the corresponding scales from the full HPI/HDS instruments, and analyses of how similarly the HPCS and the full HPI/HDS instruments relate to other variables. We discuss situations in which administering the HPCS may offer advantages or disadvantages relative to the full HPI and HDS. We also discuss how the current findings contribute to an emerging picture of best practices for the development and use of inventories consisting of single-item scales.
- Award ID(s): 2121275
- PAR ID: 10473953
- Publisher / Repository: SAGE Publications
- Journal Name: Assessment
- Volume: 31
- Issue: 6
- ISSN: 1073-1911
- Format(s): Medium: X
- Size(s): p. 1233-1261
- Sponsoring Org: National Science Foundation
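As a rough, self-contained illustration of the kind of convergent-correlation and item-level retest-reliability evidence described in the abstract above, the sketch below simulates scores and computes both quantities with NumPy. The sample size, noise levels, and variable names are assumptions for illustration only, not the authors' data or code.

```python
# Minimal sketch (not the authors' code): convergent validity and retest
# reliability checks of the kind described in the abstract. All data here
# are simulated and the noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 300  # hypothetical number of respondents

# Simulate a "full scale" score (e.g., an HPI/HDS content subscale) and a
# single-item counterpart that shares most of its true-score variance.
true_trait = rng.normal(size=n)
full_scale = true_trait + rng.normal(scale=0.4, size=n)
single_item_t1 = true_trait + rng.normal(scale=0.8, size=n)  # time 1
single_item_t2 = true_trait + rng.normal(scale=0.8, size=n)  # time 2 (retest)

def pearson(x, y):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

convergent_r = pearson(single_item_t1, full_scale)  # single item vs. full scale
retest_r = pearson(single_item_t1, single_item_t2)  # item-level retest reliability

print(f"convergent r = {convergent_r:.2f}, retest r = {retest_r:.2f}")
```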
More Like this
-
For two days in February 2018, 17 cybersecurity educators and professionals from government and industry met in a "hackathon" to refine existing draft multiple-choice test items, and to create new ones, for a Cybersecurity Concept Inventory (CCI) and Cybersecurity Curriculum Assessment (CCA) being developed as part of the Cybersecurity Assessment Tools (CATS) Project. We report on the results of the CATS Hackathon, discussing the methods we used to develop test items, highlighting the evolution of a sample test item through this process, and offering suggestions to others who may wish to organize similar hackathons. Each test item embodies a scenario, question stem, and five answer choices. During the Hackathon, participants organized into teams to (1) generate new scenarios and question stems, (2) extend CCI items into CCA items, and generate new answer choices for new scenarios and stems, and (3) review and refine draft CCA test items. The CATS Project provides rigorous evidence-based instruments for assessing and evaluating educational practices; these instruments can help identify pedagogies and content that are effective in teaching cybersecurity. The CCI measures how well students understand basic concepts in cybersecurity—especially adversarial thinking—after a first course in the field. The CCA measures how well students understand core concepts after completing a full cybersecurity curriculum.
-
Abstract Reliable simulations of molecules in condensed phase require the combination of an accurate quantum mechanical method for the core region, and a realistic model to describe the interaction with the environment. Additionally, this combination should not significantly increase the computational cost of the calculation compared to the corresponding in vacuo case. In this review, we describe the combination of methods based on coupled cluster (CC) theory with polarizable classical models for the environment. We use the polarizable continuum model (PCM) of solvation to discuss the equations, but we also show how the same theoretical framework can be extended to polarizable force fields. The theory is developed within the perturbation theory energy and singles‐T density (PTES) scheme, where the environmental response is computed with the CC single excitation amplitudes as an approximation for the full one‐particle reduced density. The CC‐PTES combination provides the best compromise between accuracy and computational effort for CC calculations in condensed phase, because it includes the response of the environment to the correlation density at the same computational cost as in vacuo CC. We discuss a number of numerical applications for ground and excited state properties, based on the implementation of CC‐PTES with single and double excitations (CCSD‐PTES), which show the reliability and computational efficiency of the method in reproducing experimental or full‐CC data.
This article is categorized under:
Electronic Structure Theory > Ab Initio Electronic Structure Methods
Electronic Structure Theory > Combined QM/MM Methods
Software > Quantum Chemistry
-
Abstract One of the most difficult tasks facing survey researchers is balancing the imperative to keep surveys short with the need to measure important concepts accurately. Not only are long batteries prohibitively expensive, but lengthy surveys can also lead to less informative answers from respondents. Yet, scholars often wish to measure traits that require a multi-item battery. To resolve these competing constraints, we propose the use of adaptive inventories. This approach uses computerized adaptive testing methods to minimize the number of questions each respondent must answer while maximizing the accuracy of the resulting measurement. We provide evidence supporting the utility of adaptive inventories through an empirically informed simulation study, an experimental study, and a detailed case study using data from the 2016 American National Election Study (ANES) Pilot. The simulation and experiment illustrate the superior performance of adaptive inventories relative to fixed-reduced batteries in terms of precision and accuracy. The ANES analysis serves as an illustration of how adaptive inventories can be developed and fielded and also validates an adaptive inventory with a nationally representative sample. Critically, we provide extensive software tools that allow researchers to incorporate adaptive inventories into their own surveys.
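A minimal sketch of the adaptive item-selection loop that computerized adaptive testing methods rely on is shown below: after each answer, the unasked item with maximum Fisher information at the current trait estimate is administered next. The 2PL item parameters, grid-based trait estimator, and simulated respondent are illustrative assumptions, not the authors' implementation or software tools.

```python
# Minimal sketch of adaptive item selection under a 2PL IRT model.
# Item parameters and the respondent are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_items = 20
a = rng.uniform(0.8, 2.0, n_items)  # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)   # difficulty parameters

def p_correct(theta, j):
    """Probability of endorsing item j at trait level theta (2PL)."""
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def fisher_info(theta, j):
    """Fisher information of item j at theta."""
    p = p_correct(theta, j)
    return a[j] ** 2 * p * (1.0 - p)

def estimate_theta(asked, responses, grid=np.linspace(-4, 4, 161)):
    """Posterior-mean trait estimate on a grid with a standard-normal prior."""
    log_post = -0.5 * grid ** 2
    for j, y in zip(asked, responses):
        p = 1.0 / (1.0 + np.exp(-a[j] * (grid - b[j])))
        log_post += np.where(y, np.log(p), np.log(1.0 - p))
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(grid * post))

true_theta = 0.7           # hypothetical respondent
asked, responses = [], []
theta_hat = 0.0
for _ in range(6):         # administer only 6 of the 20 items
    remaining = [j for j in range(n_items) if j not in asked]
    j = max(remaining, key=lambda k: fisher_info(theta_hat, k))
    y = rng.random() < p_correct(true_theta, j)
    asked.append(j)
    responses.append(y)
    theta_hat = estimate_theta(asked, responses)

print(f"estimated trait after {len(asked)} items: {theta_hat:.2f}")
```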
-
Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that use insights from psychometric testing, and promise to surface personality traits indicative of future success based on job seekers' resumes or social media profiles. We interrogate the reliability of such systems using stability of the outputs they produce, noting that reliability is a necessary, but not a sufficient, condition for validity. We develop a methodology for an external audit of stability of algorithmic personality tests, and instantiate this methodology in an audit of two systems, Humantic AI and Crystal. Rather than challenging or affirming the assumptions made in psychometric testing --- that personality traits are meaningful and measurable constructs, and that they are indicative of future success on the job --- we frame our methodology around testing the underlying assumptions made by the vendors of the algorithmic personality tests themselves. In our audit of Humantic AI and Crystal, we find that both systems show substantial instability on key facets of measurement, and so cannot be considered valid testing instruments. For example, Crystal frequently computes different personality scores if the same resume is given in PDF vs. in raw text, violating the assumption that the output of an algorithmic personality test is stable across job-irrelevant input variations. Among other notable findings is evidence of persistent --- and often incorrect --- data linkage by Humantic AI. An open-source implementation of our auditing methodology, and of the audits of Humantic AI and Crystal, is available at https://github.com/DataResponsibly/hiring-stability-audit.
-
Abstract Automated hiring systems are among the fastest-developing of all high-stakes AI systems. Among these are algorithmic personality tests that use insights from psychometric testing, and promise to surface personality traits indicative of future success based on job seekers’ resumes or social media profiles. We interrogate the validity of such systems using stability of the outputs they produce, noting that reliability is a necessary, but not a sufficient, condition for validity. Crucially, rather than challenging or affirming the assumptions made in psychometric testing — that personality is a meaningful and measurable construct, and that personality traits are indicative of future success on the job — we frame our audit methodology around testing the underlying assumptions made by the vendors of the algorithmic personality tests themselves. Our main contribution is the development of a socio-technical framework for auditing the stability of algorithmic systems. This contribution is supplemented with an open-source software library that implements the technical components of the audit, and can be used to conduct similar stability audits of algorithmic systems. We instantiate our framework with the audit of two real-world personality prediction systems, namely, Humantic AI and Crystal. The application of our audit framework demonstrates that both these systems show substantial instability with respect to key facets of measurement, and hence cannot be considered valid testing instruments.
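A minimal sketch of the kind of facet-level stability check such an audit performs is shown below: the same resume is scored under two job-irrelevant input variations (PDF-extracted vs. raw text) and traits whose scores move more than a tolerance are flagged. Here `score_resume` is a hypothetical stand-in for a vendor scoring system, not Humantic AI's or Crystal's actual API, and the scores and tolerance are illustrative assumptions.

```python
# Minimal sketch of a facet-level stability check across job-irrelevant
# input variations. The scorer, scores, and tolerance are hypothetical.

def score_resume(text: str, variant: str) -> dict[str, float]:
    # Hypothetical scorer; a real audit would call the system under test here.
    base = {"openness": 0.62, "conscientiousness": 0.71, "extraversion": 0.45}
    jitter = 0.0 if variant == "raw_text" else 0.08  # instability to detect
    return {trait: round(score + jitter, 2) for trait, score in base.items()}

def stability_gaps(a: dict[str, float], b: dict[str, float]) -> dict[str, float]:
    """Absolute per-trait score differences across two input variations."""
    return {trait: abs(a[trait] - b[trait]) for trait in a}

resume_text = "Senior data analyst with seven years of experience ..."
raw_scores = score_resume(resume_text, "raw_text")
pdf_scores = score_resume(resume_text, "pdf_extracted")

gaps = stability_gaps(raw_scores, pdf_scores)
tolerance = 0.05
unstable = [trait for trait, gap in gaps.items() if gap > tolerance]

print("per-trait gaps:", gaps)
print("traits exceeding the stability tolerance:", unstable)
```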