We study the problem of approximating maximum Nash social welfare (NSW) when allocating m indivisible items among n asymmetric agents with submodular valuations. The NSW is a well-established notion of fairness and efficiency, defined as the weighted geometric mean of agents' valuations. For special cases of the problem with symmetric agents and additive(-like) valuation functions, approximation algorithms have been designed using approaches customized for these specific settings, and they fail to extend to more general settings. Hence, before this work, no approximation algorithm with a factor independent of m was known either for asymmetric agents with additive valuations or for symmetric agents beyond additive(-like) valuations. In this article, we extend our understanding of the NSW problem to far more general settings. Our main contribution is two approximation algorithms for asymmetric agents with additive and submodular valuations. Both algorithms are simple to understand and involve non-trivial modifications of a greedy repeated-matchings approach. In both algorithms, high-valued items are allocated separately, by un-matching certain items and re-matching them through different processes. We show that these approaches achieve approximation factors of O(n) and O(n log n) for the additive and submodular cases, respectively, independent of the number of items. For additive valuations, our algorithm outputs an allocation that also achieves the fairness property of envy-freeness up to one item (EF1). Furthermore, we show that the NSW problem under submodular valuations is strictly harder than all currently known settings, with an \(\frac{\mathrm{e}}{\mathrm{e}-1}\) factor of hardness of approximation, even for constantly many agents. For this case, we provide a different approximation algorithm that achieves a factor of \(\frac{\mathrm{e}}{\mathrm{e}-1}\), hence resolving it completely.
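The objective above can be made concrete with a short sketch. This is not the paper's algorithm, only an illustration of the quantity being maximized: the weighted geometric mean of agents' bundle values, computed in log-space for numerical stability. All names here are illustrative.

```python
import math

def nash_social_welfare(valuations, weights):
    """Weighted geometric mean of agents' valuations for a fixed allocation.

    valuations[i] is agent i's value for their assigned bundle;
    weights[i] is the agent's (asymmetric) weight.
    """
    total_w = sum(weights)
    # exp(sum_i (w_i / W) * log v_i) -- log-space avoids overflow/underflow
    return math.exp(sum(w / total_w * math.log(v)
                        for v, w in zip(valuations, weights)))
```

With equal weights this reduces to the ordinary geometric mean, which is why NSW balances fairness against efficiency: a single near-zero valuation drags the whole product down.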
This content will become publicly available on March 1, 2026
Multi-Group Regularized Gaussian Variational Estimation: Fast Detection of DIF
Abstract Data harmonization is an emerging approach to strategically combining data from multiple independent studies, enabling researchers to address new research questions that are not answerable by any single contributing study. A fundamental psychometric challenge for data harmonization is to create commensurate measures for the constructs of interest across studies. In this study, we focus on a regularized explanatory multidimensional item response theory model (re-MIRT) for establishing measurement equivalence across instruments and studies, where regularization enables the detection of items that violate measurement invariance, also known as differential item functioning (DIF). Because the MIRT model is computationally demanding, we leverage the recently developed Gaussian Variational Expectation–Maximization (GVEM) algorithm to speed up the computation. In particular, the GVEM algorithm is extended to a more complicated and improved multi-group version with categorical covariates and a Lasso penalty for re-MIRT, namely, the importance-weighted GVEM with one additional maximization step (IW-GVEMM). This study aims to provide empirical evidence to support feasible uses of IW-GVEMM for re-MIRT DIF detection, providing a useful tool for integrative data analysis. Our results show that IW-GVEMM accurately estimates the model, detects DIF items, and finds a more reasonable number of DIF items in a real-world dataset. The proposed method has been integrated into the R package VEMIRT (https://map-lab-uw.github.io/VEMIRT).
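The role of the Lasso penalty in regularized DIF detection can be sketched in a few lines. This is a simplified illustration, not the IW-GVEMM algorithm itself: the L1 penalty's proximal (soft-thresholding) operator shrinks small candidate DIF effects to exactly zero, so only items whose effects survive the shrinkage are flagged. Function and parameter names are assumptions for illustration.

```python
import numpy as np

def soft_threshold(beta, lam):
    """Proximal operator of the L1 penalty: sign(b) * max(|b| - lam, 0)."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

def flag_dif_items(dif_effects, lam):
    """Items whose penalized group effect shrinks to exactly zero are
    treated as invariant; nonzero survivors are flagged as DIF items."""
    shrunk = soft_threshold(np.asarray(dif_effects, dtype=float), lam)
    return [j for j, b in enumerate(shrunk) if b != 0.0]
```

The tuning parameter `lam` trades off sensitivity against false positives; in the regularized-estimation literature it is typically chosen by an information criterion such as BIC.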
- PAR ID: 10634737
- Publisher / Repository: Cambridge University Press
- Date Published:
- Journal Name: Psychometrika
- Volume: 90
- Issue: 1
- ISSN: 0033-3123
- Page Range / eLocation ID: 2 to 23
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Differential item functioning (DIF) screening has long been suggested to ensure assessment fairness. Traditional DIF methods typically focus on the main effects of demographic variables on item parameters, overlooking the interactions among multiple identities. Drawing on the intersectionality framework, we define intersectional DIF as deviations in item parameters that arise from the interactions among demographic variables beyond their main effects and propose a novel item response theory (IRT) approach for detecting intersectional DIF. Under our framework, fixed effects are used to account for traditional DIF, while random item effects are introduced to capture intersectional DIF. We further introduce the concept of intersectional impact, which refers to interaction effects on group-level mean ability. Depending on which item parameters are affected and whether intersectional impact is considered, we propose four models, which aim to detect intersectional uniform DIF (UDIF), intersectional UDIF with intersectional impact, intersectional non-uniform DIF (NUDIF), and intersectional NUDIF with intersectional impact, respectively. For efficient model estimation, a regularized Gaussian variational expectation-maximization algorithm is developed. Simulation studies demonstrate that our methods can effectively detect intersectional UDIF, although their detection of intersectional NUDIF is more limited.
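The decomposition described above (fixed main effects plus a random interaction effect) can be illustrated with a small simulation sketch. This is an assumption-laden toy, not the paper's model: an item's intercept across crossed demographic groups is built from fixed main effects for each variable plus a random intersectional term with standard deviation `sigma_inter`.

```python
import numpy as np

rng = np.random.default_rng(0)

def item_intercept(base, main_a, main_b, sigma_inter, n_groups_a, n_groups_b):
    """Intercept for one item over the crossed groups of two demographic
    variables: fixed main effects (traditional DIF) plus a random
    interaction effect (intersectional DIF). Illustrative names only."""
    # Random intersectional deviations, one per group combination
    inter = rng.normal(0.0, sigma_inter, size=(n_groups_a, n_groups_b))
    # Broadcasting adds the two main-effect vectors across the grid
    return base + main_a[:, None] + main_b[None, :] + inter
```

Setting `sigma_inter = 0` recovers a purely main-effects (traditional DIF) model, which is exactly the restriction the proposed models relax.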
-
Abstract Establishing the invariance property of an instrument (e.g., a questionnaire or test) is a key step for establishing its measurement validity. Measurement invariance is typically assessed by differential item functioning (DIF) analysis, i.e., detecting DIF items whose response distribution depends not only on the latent trait measured by the instrument but also on the group membership. DIF analysis is confounded by the group difference in the latent trait distributions. Many DIF analyses require knowing several anchor items that are DIF-free in order to draw inferences on whether each of the rest is a DIF item, where the anchor items are used to identify the latent trait distributions. When no prior information on anchor items is available, or some anchor items are misspecified, item purification methods and regularized estimation methods can be used. The former iteratively purifies the anchor set by a stepwise model selection procedure, and the latter selects the DIF-free items by a LASSO-type regularization approach. Unfortunately, unlike the methods based on a correctly specified anchor set, these methods are not guaranteed to provide valid statistical inference (e.g., confidence intervals and p-values). In this paper, we propose a new method for DIF analysis under a multiple indicators and multiple causes (MIMIC) model for DIF. This method adopts a minimal \(L_1\)-norm condition for identifying the latent trait distributions. Without requiring prior knowledge about an anchor set, it can accurately estimate the DIF effects of individual items and further draw valid statistical inferences for quantifying the uncertainty. Specifically, the inference results allow us to control the type-I error for DIF detection, which may not be possible with item purification and regularized estimation methods. We conduct simulation studies to evaluate the performance of the proposed method and compare it with the anchor-set-based likelihood ratio test approach and the LASSO approach. The proposed method is applied to analysing the three personality scales of the Eysenck personality questionnaire-revised (EPQ-R).
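The minimal L1-norm identification idea admits a compact sketch. This is a simplification under assumed names: raw group effects are identified only up to a common shift, and choosing the shift that minimizes the sum of absolute effects amounts to subtracting the median, so that under a sparsity assumption most items end up with exactly zero DIF effect.

```python
import statistics

def identify_dif_effects(raw_effects):
    """Resolve the shift indeterminacy of item-level group effects by the
    minimal-L1 condition: argmin_c sum_j |b_j - c| is the median of b_j."""
    c = statistics.median(raw_effects)
    return [b - c for b in raw_effects]
```

When fewer than half the items exhibit DIF, the median sits at the invariant items' common value, so no anchor set has to be specified in advance.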
-
It is well established that access to social supports is essential for engineering students' persistence, and yet access to supports varies across groups. Understanding the differential supports inherent in students' social networks, and then working to provide additional needed supports, can help the field of engineering education become more inclusive of all students. Our work contributes to this effort by examining the reliability and fairness of a social capital instrument, the Undergraduate Supports Survey (USS). We examined the extent to which two scales were reliable across ability levels (level of social capital), gender groups, and year-in-school. We conducted two item response theory (IRT) models using a graded response model and performed differential item functioning (DIF) tests to detect item differences in gender and year-in-school. Our results indicate that most items have acceptable to good item discrimination and difficulty. DIF analysis shows that multiple items report DIF across gender groups in the Expressive Support scale, in favor of women and nonbinary engineering students. DIF analysis shows that year-in-school has little to no effect on items, with only one DIF item. Therefore, engineering educators can use the USS confidently to examine expressive and instrumental social capital in undergraduates across year-in-school. Our work can be used by the engineering education research community to identify and address differences in students' access to support. We recommend that the engineering education community works to be explicit in their expressive and instrumental support. Future work will explore the measurement invariance of Expressive Support items across gender.
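The graded response model used above has a standard form that a short sketch can make concrete. This is the generic GRM, not the USS-specific fit: the probability of responding in category k or above follows a logistic curve, and category probabilities are differences of adjacent curves. Parameter names are generic.

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: P(Y >= k) = logistic(a * (theta - b_k));
    each category probability is the difference of adjacent cumulative
    curves. `thresholds` must be sorted in increasing order."""
    def logistic(x):
        return 1.0 / (1.0 + math.exp(-x))
    # Cumulative curves, padded with P(Y >= lowest) = 1 and P(Y > highest) = 0
    cum = [1.0] + [logistic(a * (theta - b)) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]
```

The discrimination parameter `a` is what the reported "item discrimination" refers to; DIF in this model means `a` or the thresholds `b_k` differ across groups after conditioning on the latent trait.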
