A Review of Perceptual Expertise in Radiology: How it develops, how we can test it, and why humans still matter in the era of Artificial Intelligence
-
Abstract: There is a long-standing discrepancy between the observed Galactic classical nova rate of ∼10 yr⁻¹ and the predicted rate from Galactic models of ∼30–50 yr⁻¹. One explanation for this discrepancy is that many novae are hidden by interstellar extinction, but the degree to which dust can obscure novae is poorly constrained. We use newly available all-sky three-dimensional dust maps to compare the brightness and spatial distribution of known novae to that predicted from relatively simple models in which novae trace Galactic stellar mass. We find that only half (53%) of the novae are expected to be easily detectable (g ≲ 15) with current all-sky optical surveys such as the All-Sky Automated Survey for Supernovae (ASAS-SN). This fraction is much lower than previously estimated, showing that dust does substantially affect nova detection in the optical. By comparing complementary survey results from the ASAS-SN, OGLE-IV, and Palomar Gattini IR surveys using our modeling, we find a tentative Galactic nova rate of ∼30 yr⁻¹, though this could be as high as ∼40 yr⁻¹, depending on the assumed distribution of novae within the Galaxy. These preliminary estimates will be improved in future work through more sophisticated modeling of nova detection in ASAS-SN and other surveys.
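The detectability calculation described above can be illustrated with a toy Monte Carlo sketch: sample nova positions from a model that traces stellar mass, compute apparent magnitudes with a distance modulus and an extinction term, and count the fraction brighter than the survey limit. Every parameter below (disk scale length and height, peak absolute magnitude, the extinction law) is an illustrative assumption, not a value from the paper; the actual analysis uses empirical 3D dust maps rather than the crude extinction stand-in here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy Monte Carlo sketch of nova detectability. All parameters are
# illustrative assumptions; the real analysis uses 3D dust maps and a
# stellar-mass model of the Galaxy.
n_novae = 100_000
R_sun = 8.2           # kpc, assumed solar galactocentric radius
R_d, h_z = 2.6, 0.3   # kpc, assumed disk scale length and scale height

# Sample positions from an exponential disk: p(R) ∝ R exp(-R/R_d).
R = rng.gamma(shape=2.0, scale=R_d, size=n_novae)
phi = rng.uniform(0.0, 2.0 * np.pi, n_novae)
z = rng.laplace(0.0, h_z, n_novae)

# Heliocentric distance in kpc.
x = R * np.cos(phi) - R_sun
y = R * np.sin(phi)
dist = np.sqrt(x**2 + y**2 + z**2)

# Apparent peak magnitude: assumed absolute magnitude, distance modulus,
# plus a crude extinction term that grows with distance near the midplane.
M_g = -7.2                                   # assumed peak absolute magnitude
mu = 5.0 * np.log10(dist * 1e3) - 5.0        # distance modulus (dist in pc)
A_g = 0.6 * dist * np.exp(-np.abs(z) / h_z)  # crude stand-in for a dust map

g = M_g + mu + A_g
frac = np.mean(g <= 15.0)  # fraction bright enough for an ASAS-SN-like survey
```

With real dust maps in place of the toy extinction term, the same counting step yields the quoted 53% detectable fraction.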
-
The risk of overparameterized models, in particular deep neural networks, is often double-descent shaped as a function of the model size. Recently, it was shown that the risk as a function of the early-stopping time can also be double-descent shaped, and this behavior can be explained as a superposition of bias-variance tradeoffs. In this paper, we show that the risk of explicit L2-regularized models can exhibit double descent behavior as a function of the regularization strength, both in theory and practice. We find that for linear regression, a double-descent-shaped risk is caused by a superposition of bias-variance tradeoffs corresponding to different parts of the model and can be mitigated by scaling the regularization strength of each part appropriately. Motivated by this result, we study a two-layer neural network and show that double descent can be eliminated by adjusting the regularization strengths for the first and second layer. Lastly, we study a 5-layer CNN and ResNet-18 trained on CIFAR-10 with label noise, and CIFAR-100 without label noise, and demonstrate that all exhibit double descent behavior as a function of the regularization strength.
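The basic experiment behind the linear-regression result can be sketched as follows: fit an explicit L2-regularized (ridge) least-squares model on an overparameterized problem with label noise, sweep the regularization strength λ, and record the test risk at each λ. The dimensions, noise level, and λ grid below are hypothetical choices for illustration; whether the resulting curve is double-descent shaped depends on the data model, which the paper characterizes precisely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overparameterized setup (d > n) with label noise;
# the specific sizes and noise scale are illustrative, not from the paper.
n_train, n_test, d = 40, 1000, 200
w_true = rng.normal(size=d) / np.sqrt(d)

X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_true

def ridge_risk(lam):
    """Fit explicit L2-regularized least squares and return the test MSE."""
    w_hat = np.linalg.solve(
        X_train.T @ X_train + lam * np.eye(d), X_train.T @ y_train
    )
    return np.mean((X_test @ w_hat - y_test) ** 2)

# Sweep the regularization strength and record the risk curve.
lams = np.logspace(-4, 2, 25)
risks = np.array([ridge_risk(lam) for lam in lams])
```

Plotting `risks` against `lams` on a log axis gives the risk-versus-regularization curve studied in the paper; the per-part rescaling fix would replace the single `lam * np.eye(d)` with a block-diagonal penalty using a different strength for each part of the model.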

