We report the discovery of six new magnetar counterpart candidates from deep near-infrared Hubble Space Telescope (HST) imaging. The new candidates are drawn from a sample of 19 magnetars for which we present HST data obtained between 2018 and 2020. We confirm the variability of previously established near-infrared counterparts, and we newly identify candidates for PSR J1622−4950, Swift J1822.3−1606, CXOU J171405.7−381031, Swift J1833−0832, Swift J1834.9−0846, and AX J1818.8−1559 based on their proximity to the X-ray localizations. We compare the new candidates with the existing counterpart population in terms of their colours, magnitudes, and near-infrared to X-ray spectral indices. The two candidates for AX J1818.8−1559 are both consistent with previously established counterparts. The other new candidates are most likely chance alignments; otherwise, the origin of their near-infrared emission differs from anything previously seen in magnetar counterparts. Further observations and studies of these candidates are needed to firmly establish their nature.
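The near-infrared to X-ray spectral indices mentioned above can be illustrated with a short sketch. The snippet below is a minimal, generic example, assuming the common two-point power-law convention F_ν ∝ ν^α between the two bands; the flux densities and frequencies used are placeholder values chosen for illustration, not measurements from this paper.

```python
import math

def spectral_index(f_nu_nir, nu_nir, f_nu_x, nu_x):
    """Two-point spectral index alpha, assuming F_nu proportional to nu**alpha.

    f_nu_nir, f_nu_x : flux densities in the same units (e.g. microjansky)
    nu_nir, nu_x     : corresponding frequencies in Hz
    """
    return math.log10(f_nu_x / f_nu_nir) / math.log10(nu_x / nu_nir)

# Placeholder example values (not from the paper): a 1 uJy detection near the
# H band (~1.8e14 Hz) and a 0.5 uJy flux density at 1 keV (~2.4e17 Hz).
alpha = spectral_index(f_nu_nir=1.0, nu_nir=1.8e14, f_nu_x=0.5, nu_x=2.4e17)
print(f"NIR-to-X-ray spectral index: {alpha:.2f}")
```

With this convention, a more negative index corresponds to a source that is relatively brighter in the near-infrared than in X-rays; the actual indices used for the comparison are those reported in the article itself.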
- Award ID(s): 1836650
- PAR ID: 10167451
- Author(s) / Creator(s):
- Date Published:
- Journal Name: SciPost Physics
- Volume: 7
- Issue: 1
- ISSN: 2542-4653
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation