- Award ID(s): 2031849
- NSF-PAR ID: 10231688
- Date Published:
- Journal Name: International Conference on Machine Learning
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This research addresses the global initiative to increase diversity in the engineering workforce. The military Veteran student population was identified as one of the most diverse student groups in engineering; however, discontinuation and dismissal rates of Veteran students in engineering are significantly higher than those of traditional engineering students in the United States. Veteran students hold identifiable traits that differ from those of traditional engineering students, who are under the age of 24 and financially dependent on their parents. While great strides have been made in engineering student retention, most of this work has focused on traditional students. This research seeks to fill that gap by specifically addressing the retention of Veteran students through the concept of social responsibility. Social responsibility is generally understood as acting to benefit society. It is a common ideal promoted in the military (e.g., "service before self" among the U.S. Air Force's fundamental and enduring values). It is also embodied in the engineer's creed (i.e., engineers using their professional skills to improve human welfare) and identified in the literature as a major factor attracting many students from historically underrepresented groups into engineering. The objective of this research is therefore to explore the associations between Veteran student retention, social responsibility, and demographics. A survey instrument was developed based on a model for assessing first-year engineering students' understanding of social responsibility. The survey was updated to include demographics specific to the Veteran student cohort (e.g., military branch, prior job attributes, and university transfer credits) and questions explicitly linking military service and engineering. The survey was piloted, followed by a focus group to clarify survey questions; it was then revised and launched in October 2018 to all students who self-identify as Veterans and all first-year students in the college of engineering at a 4-year land-grant institution. Approximately 48% of the Veteran student cohort and 52% of the first-year cohort responded. This paper discusses Veteran and first-year student perceptions of social responsibility in engineering based on results from the instrument. The results will be used to design an intervention, likely in the first year, when most Veteran students discontinue or are dismissed, to increase Veteran retention in engineering programs.
-
Data sets and statistics about groups of individuals are increasingly collected and released, feeding many optimization and learning algorithms. In many cases, the released data contain sensitive information whose privacy is strictly regulated. For example, in the U.S., census data are regulated under Title 13, which requires that no individual be identified from any data released by the Census Bureau. In Europe, data release is regulated under the General Data Protection Regulation, which addresses the control and transfer of personal data. Differential privacy has emerged as the de facto standard for protecting data privacy. In a nutshell, differentially private algorithms protect an individual's data by injecting random noise into the output of a computation that involves such data. While this process ensures privacy, it also affects the quality of data analysis, and, when private data sets are used as inputs to complex machine learning or optimization tasks, they may produce results that are fundamentally different from those obtained on the original data and may even raise unintended bias and fairness concerns. In this talk, I will first focus on the challenge of releasing privacy-preserving data sets for complex data analysis tasks. I will introduce the notion of Constrained-based Differential Privacy (C-DP), which allows casting the data release problem as an optimization problem whose goal is to preserve the salient features of the original data. I will review several applications of C-DP in the context of very large hierarchical census data, data streams, energy systems, and the design of federated data-sharing protocols. Next, I will discuss how errors induced by differential privacy algorithms may propagate within a decision problem, causing bias and fairness issues. This is particularly important because privacy-preserving data are often used for critical decision processes, including the allocation of funds and benefits to states and jurisdictions, which ideally should be fair and unbiased. Finally, I will conclude with a roadmap for future work and some open questions.
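The "noise injection" described above has a standard concrete instance in the Laplace mechanism. The Python sketch below is a minimal illustration of that textbook construction, not of the C-DP approach the talk introduces; the query, sensitivity, and epsilon values are assumptions chosen for demonstration.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a differentially private version of a numeric query result.

    Adds Laplace noise with scale sensitivity / epsilon, the standard
    construction for epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count over a census-style data set.
# A counting query has sensitivity 1, since adding or removing one
# person changes the count by at most 1.
true_count = 1234  # hypothetical value
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.1)
print(private_count)
```

Smaller epsilon gives stronger privacy but noisier output, which is precisely the analysis-quality trade-off the talk examines.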
-
In this position paper, we argue for applying recent research on ensuring that sociotechnical systems are fair and non-discriminatory to the privacy protections those systems may provide. The privacy literature seldom considers whether a proposed privacy scheme protects all persons uniformly, irrespective of membership in protected classes or particular risk in the face of privacy failure. Just as algorithmic decision-making systems may have discriminatory outcomes even without explicit or deliberate discrimination, so also privacy regimes may disproportionately fail to protect vulnerable members of their target population, resulting in disparate impact with respect to the effectiveness of privacy protections. We propose a research agenda that will illuminate this issue, along with related issues at the intersection of fairness and privacy, and present case studies that show how the outcomes of this research may change existing privacy and fairness research. We believe it is important to ensure that technologies and policies intended to protect the users and subjects of information systems provide such protection in an equitable fashion.
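One way to make "disparate impact with respect to the effectiveness of privacy protections" measurable is to compare privacy-failure rates across protected groups. The Python sketch below is a hypothetical illustration, not a method from the paper: the audit records, group names, and the ratio-based check are all assumptions.

```python
from collections import Counter

# Hypothetical audit records: (group, privacy_failed) pairs, e.g. whether
# a re-identification attack succeeded against a user's released record.
audit = [
    ("group_a", False), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True),  ("group_b", True),  ("group_b", False), ("group_b", True),
]

totals, failures = Counter(), Counter()
for group, failed in audit:
    totals[group] += 1
    failures[group] += failed

rates = {g: failures[g] / totals[g] for g in totals}
# Disparate-impact style ratio: failure rate of the worst-protected group
# relative to the best-protected one (1.0 would mean uniform protection).
ratio = max(rates.values()) / min(rates.values())
print(rates, f"failure-rate ratio: {ratio:.2f}")
```

A ratio far above 1 would indicate exactly the kind of inequitable protection the paper warns about.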
-
We investigate the power of censoring techniques, first developed for learning fair representations, to address domain generalization. We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains), where each study is drawn according to a distribution on domains. The mapping is used at test time to classify instances from a new domain. In many contexts, such as medical forecasting, domain generalization from studies in populous areas (where data are plentiful) to geographically remote populations (for which no training data exist) provides fairness of a different flavor, not anticipated in previous work on algorithmic fairness. We study an adversarial loss function for k domains and precisely characterize its limiting behavior as k grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps. The limiting results are accompanied by non-asymptotic learning-theoretic bounds. Furthermore, we obtain sufficient conditions for good worst-case prediction performance of our algorithm on previously unseen domains. Finally, we decompose our mappings into two components and provide a complete characterization of invariance in terms of this decomposition. To our knowledge, our results provide the first formal guarantees of these kinds for adversarial invariant domain generalization.
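As a rough illustration of adversarial censoring, the PyTorch sketch below trains an encoder whose representation supports label classification while a gradient-reversal step penalizes domain predictability. This is a minimal sketch in the spirit of the abstract, not the paper's exact loss or architecture; all dimensions, networks, and data are toy assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the encoder is pushed to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

d_in, d_rep, n_domains = 16, 8, 5  # assumed toy dimensions
encoder = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU())
classifier = nn.Linear(d_rep, 2)              # predicts the task label
discriminator = nn.Linear(d_rep, n_domains)   # predicts the source domain

params = (list(encoder.parameters()) + list(classifier.parameters())
          + list(discriminator.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, d_in)               # a batch pooled across domains
y = torch.randint(0, 2, (32,))          # task labels
d = torch.randint(0, n_domains, (32,))  # domain indices

z = encoder(x)
# Task loss plus censoring loss: the discriminator learns to predict the
# domain, while reversed gradients train the encoder to erase domain cues.
loss = ce(classifier(z), y) + ce(discriminator(GradReverse.apply(z)), d)
opt.zero_grad()
loss.backward()
opt.step()
```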
-
Background In order to accurately accumulate delivered dose for head and neck cancer patients treated with the Adapt to Position workflow on the 1.5T magnetic resonance imaging (MRI)-linear accelerator (MR-linac), the low-resolution T2-weighted MRIs used for daily setup must be segmented to enable reconstruction of the delivered dose at each fraction.
Purpose In this pilot study, we evaluate various autosegmentation methods for head and neck organs at risk (OARs) on on-board setup MRIs from the MR-linac for off-line reconstruction of delivered dose.
Methods Seven OARs (parotid glands, submandibular glands, mandible, spinal cord, and brainstem) were contoured on 43 images by seven observers each. Ground truth contours were generated using a simultaneous truth and performance level estimation (STAPLE) algorithm. Twenty total autosegmentation methods were evaluated in ADMIRE: 1–9) atlas-based autosegmentation using a population atlas library (PAL) of 5/10/15 patients with STAPLE, patch fusion (PF), or random forest (RF) for label fusion; 10–19) autosegmentation using images from a patient's 1–4 prior fractions (individualized patient prior [IPP]) with STAPLE/PF/RF; 20) deep learning (DL) (3D ResUNet trained on 43 ground truth structure sets plus 45 contoured by one observer). Execution time was measured for each method. Autosegmented structures were compared to ground truth structures using the Dice similarity coefficient, mean surface distance (MSD), Hausdorff distance (HD), and Jaccard index (JI); a sketch of the overlap metrics appears after this abstract. For each metric and OAR, performance was compared to the inter-observer variability using Dunn's test with control. Methods were compared pairwise using the Steel-Dwass test for each metric pooled across all OARs. Further dosimetric analysis was performed on three high-performing autosegmentation methods (DL, IPP with RF and 4 fractions [IPP_RF_4], and IPP with 1 fraction [IPP_1]) and one low-performing method (PAL with STAPLE and 5 atlases [PAL_ST_5]). For five patients, delivered doses from clinical plans were recalculated on setup images with ground truth and autosegmented structure sets. Differences in maximum and mean dose to each structure between the ground truth and autosegmented structures were calculated and correlated with geometric metrics.
Results DL and IPP methods performed best overall, all significantly outperforming inter-observer variability, with no significant difference between methods in pairwise comparison. PAL methods performed worst overall; most were not significantly different from the inter-observer variability or from each other. DL was the fastest method (33 s per case) and PAL methods the slowest (3.7–13.8 min per case). Execution time increased with the number of prior fractions/atlases for IPP and PAL. For DL, IPP_1, and IPP_RF_4, the majority (95%) of dose differences were within ±250 cGy of ground truth, but outlier differences of up to 785 cGy occurred. Dose differences were much higher for PAL_ST_5, with outlier differences of up to 1920 cGy. Dose differences showed weak but significant correlations with all geometric metrics (R² between 0.030 and 0.314).
Conclusions The autosegmentation methods offering the best combination of performance and execution time are DL and IPP_1. Dose reconstruction on on-board T2-weighted MRIs is feasible with autosegmented structures, with minimal dosimetric variation from ground truth, but contours should be visually inspected prior to dose reconstruction in an end-to-end dose accumulation workflow.
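For reference, the two overlap metrics used above are simple to compute on binary masks. The Python sketch below is a generic illustration, not the ADMIRE implementation; surface-based metrics (MSD, HD) additionally require distance transforms, typically taken from libraries such as SciPy or SimpleITK.

```python
import numpy as np

def dice_and_jaccard(auto_mask: np.ndarray, truth_mask: np.ndarray):
    """Overlap between an autosegmented binary mask and ground truth.

    Dice = 2|A ∩ B| / (|A| + |B|); Jaccard = |A ∩ B| / |A ∪ B|.
    """
    a, b = auto_mask.astype(bool), truth_mask.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return dice, jaccard

# Toy 2D masks standing in for 3D OAR segmentations.
truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool)
auto[22:42, 22:42] = True
print(dice_and_jaccard(auto, truth))  # partially overlapping squares
```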