We present a study of the potential for convolutional neural networks (CNNs) to separate astrophysical transients from image artifacts, a task known as “real–bogus” classification, without requiring a template-subtracted (or difference) image. Producing difference images is computationally expensive, as it involves image matching on small spatial scales across large volumes of data. Using data from the Dark Energy Survey, we explore the use of CNNs to (1) automate real–bogus classification and (2) reduce the computational cost of transient discovery. We compare the performance of two CNNs with similar architectures: one that takes “image triplets” (template, search, and difference images) as input, and one that takes only the template and search images. We measure the decrease in performance associated with this loss of input information, finding that the testing accuracy drops from ∼96% to ∼91.1%. We further investigate how the latter model extracts the required information from the template and search images by examining its saliency maps. Our work (1) confirms that CNNs are excellent models for real–bogus classification that rely exclusively on imaging data and require no feature engineering, and (2) demonstrates that high-accuracy (>90%) models can be built without constructing difference images, at the cost of some accuracy. Because, once trained, neural networks generate predictions at minimal computational cost, we argue that future implementations of this methodology could dramatically reduce the computational cost of transient detection in synoptic surveys such as the Rubin Observatory's Legacy Survey of Space and Time by bypassing difference image analysis entirely.
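The two-input model described above can be sketched as an ordinary small CNN whose input simply stacks the aligned template and search cutouts as channels, leaving the network to learn the differencing implicitly. Below is a minimal PyTorch illustration; the layer widths, depths, and 51×51 stamp size are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (assumed shapes/widths) of a two-input real-bogus CNN:
# template and search stamps are stacked as channels, so no difference
# image is ever constructed.
import torch
import torch.nn as nn

class RealBogusCNN(nn.Module):
    def __init__(self, in_channels: int = 2):  # 2 = template + search; 3 would add a difference image
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),  # single logit: real vs. bogus
        )

    def forward(self, x):  # x: (batch, in_channels, H, W) aligned stamp cutouts
        return self.classifier(self.features(x))

# Usage: stack the aligned template and search stamps along the channel axis.
stamps = torch.randn(8, 2, 51, 51)          # 8 example 51x51 cutout pairs
logits = RealBogusCNN(in_channels=2)(stamps)
```

A saliency map like those discussed above can then be obtained by back-propagating the output logit to the input stamps: call stamps.requires_grad_(True) before the forward pass, run logits.sum().backward(), and inspect stamps.grad.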
- Award ID(s): 2141680
- PAR ID: 10332028
- Journal Name: ACL 2022
- Page Range / eLocation ID: 819 to 862
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Motivation: Protein structure prediction has been greatly improved by deep learning, but the contribution of different kinds of information is yet to be fully understood. This article studies the impact of two such kinds of information on structure prediction: templates and multiple sequence alignment (MSA) embeddings. Templates have been used by methods such as AlphaFold2, RoseTTAFold and RaptorX, but AlphaFold2 and RoseTTAFold only used templates detected by HHsearch, which may not perform well on some targets. In addition, sequence embeddings generated by pre-trained protein language models have not been fully explored for structure prediction. In this article, we study the impact of templates (including the number of templates, template quality and how the templates are generated) on protein structure prediction accuracy, especially when the templates are detected by methods other than HHsearch. We also study the impact of sequence embeddings (generated by MSATransformer and ESM-1b) on structure prediction.
Results: We have implemented a deep learning method for protein structure prediction that may take templates and MSA embeddings as extra inputs, and we study their contribution to prediction accuracy. Our experimental results show that templates improve structure prediction on 71 of 110 CASP13 (13th Critical Assessment of Structure Prediction) targets and 47 of 91 CASP14 targets, and are particularly useful for targets with similar templates. MSA embeddings improve structure prediction on 63 of 91 CASP14 targets and 87 of 183 CAMEO (Continuous Automated Model Evaluation) targets, and are particularly useful for proteins with shallow MSAs. When both templates and MSA embeddings are used, our method predicts correct folds (TM-score > 0.5) for 16 of 23 CASP14 FM targets and 14 of 18 CAMEO targets, outperforming RoseTTAFold by 5% and 7%, respectively.
Availability and implementation: Available at https://github.com/xluo233/RaptorXFold.
Supplementary information: Supplementary data are available at Bioinformatics online.
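As a rough illustration of the input scheme described above, the sketch below fuses base pair features with optional template-derived and MSA-embedding-derived channels; the channel counts, the 1×1-convolution fusion, and the zero-filling of missing inputs are all illustrative assumptions, not RaptorXFold's published architecture.

```python
# Sketch: templates and MSA embeddings enter as optional extra input
# channels, fused with the base pair features before the main trunk.
import torch
import torch.nn as nn

class PairFeatureFusion(nn.Module):
    def __init__(self, base_c=64, tpl_c=16, msa_c=128, out_c=96):
        super().__init__()
        self.tpl_c, self.msa_c = tpl_c, msa_c
        self.fuse = nn.Conv2d(base_c + tpl_c + msa_c, out_c, kernel_size=1)

    def forward(self, base, template_feats=None, msa_embed=None):
        # base:           (B, base_c, L, L) pair features derived from the MSA
        # template_feats: (B, tpl_c,  L, L) e.g. binned distances from aligned templates
        # msa_embed:      (B, msa_c,  L, L) pairwise features from a language-model embedding
        B, _, L, _ = base.shape
        if template_feats is None:   # zero-fill when no usable template is found
            template_feats = base.new_zeros(B, self.tpl_c, L, L)
        if msa_embed is None:        # zero-fill when embeddings are not used
            msa_embed = base.new_zeros(B, self.msa_c, L, L)
        return self.fuse(torch.cat([base, template_feats, msa_embed], dim=1))
```

Zero-filling keeps the trunk's input shape fixed, so the same network can be trained and evaluated with or without either extra input, which is what an ablation of templates versus embeddings requires.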
Few-shot classification (FSC) requires training models using only a few (typically one to five) data points per class. Meta-learning has proven able to learn a parametrized model for FSC by training on various other classification tasks. In this work, we propose PLATINUM (semi-suPervised modeL Agnostic meTa-learnIng usiNg sUbmodular Mutual information), a novel semi-supervised model-agnostic meta-learning framework that uses submodular mutual information (SMI) functions to boost the performance of FSC. PLATINUM leverages unlabeled data in the inner and outer loops via SMI functions during meta-training, obtaining richer meta-learned parameterizations for meta-testing. We study the performance of PLATINUM in two scenarios: (1) the unlabeled data points belong to the same set of classes as the labeled set of a given episode, and (2) there exist out-of-distribution classes that do not belong to the labeled set. We evaluate our method in various settings on the miniImageNet, tieredImageNet and Fewshot-CIFAR100 datasets. Our experiments show that PLATINUM outperforms MAML and semi-supervised approaches such as pseudo-labeling for semi-supervised FSC, especially for small ratios of labeled examples per class.
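To make the SMI-based selection concrete, here is a minimal sketch of a greedy, facility-location-style submodular mutual information criterion for picking unlabeled points that best cover a labeled query set. It illustrates the principle PLATINUM builds on; the cosine kernel and this particular objective are assumptions, not PLATINUM's exact instantiation.

```python
# Greedy maximization of a facility-location-style SMI surrogate:
# each step adds the unlabeled point with the largest marginal gain
# in coverage of the labeled (query) set under a similarity kernel.
import numpy as np

def smi_select(unlabeled_feats, labeled_feats, k):
    """Pick k unlabeled points most mutually informative with the labeled set."""
    U = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)
    Q = labeled_feats / np.linalg.norm(labeled_feats, axis=1, keepdims=True)
    sim = U @ Q.T                      # (n_unlabeled, n_labeled) cosine kernel
    selected, best_cover = [], np.zeros(sim.shape[1])
    for _ in range(k):
        # marginal gain of each candidate: added coverage of the queries
        gains = np.maximum(sim, best_cover).sum(axis=1) - best_cover.sum()
        gains[selected] = -np.inf      # do not re-pick a selected point
        i = int(np.argmax(gains))
        selected.append(i)
        best_cover = np.maximum(best_cover, sim[i])
    return selected
```

Because the objective is monotone submodular, this greedy loop comes with the usual (1 - 1/e) approximation guarantee, which is what makes SMI functions practical inside a meta-training loop.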
We present Prompt Diffusion, a framework for enabling in-context learning in diffusion-based generative models. Given a pair of task-specific example images, such as depth from/to image and scribble from/to image, and a text guidance, our model automatically understands the underlying task and performs the same task on a new query image following the text guidance. To achieve this, we propose a vision-language prompt that can model a wide range of vision-language tasks, and a diffusion model that takes it as input. The diffusion model is trained jointly on six different tasks using these prompts. The resulting Prompt Diffusion model is the first diffusion-based vision-language foundation model capable of in-context learning. It demonstrates high-quality in-context generation for the trained tasks and effectively generalizes to new, unseen vision tasks using their respective prompts. Our model also shows compelling text-guided image editing results. Our framework aims to facilitate research into in-context learning for computer vision. We share our code and pre-trained models at https://github.com/Zhendong-Wang/Prompt-Diffusion.
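For intuition, the sketch below shows one plausible way to bundle the components of such a vision-language prompt; the field names and the channel-wise concatenation are assumptions for illustration, not the paper's actual API.

```python
# Sketch of the prompt structure: an example (source, target) image pair
# that demonstrates the task, a text instruction, and the query image.
from dataclasses import dataclass
import numpy as np

@dataclass
class VisionLanguagePrompt:
    example_source: np.ndarray   # e.g. a depth map from the example pair
    example_target: np.ndarray   # the corresponding image for that depth map
    text_guidance: str           # free-form instruction for the output
    query: np.ndarray            # new image the same task is applied to

    def as_condition(self):
        """Concatenate the image components channel-wise; the text is
        encoded separately (e.g. by a frozen text encoder)."""
        images = np.concatenate(
            [self.example_source, self.example_target, self.query], axis=-1)
        return images, self.text_guidance
```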
Charles, Cyril (Ed.)
Manually collecting landmarks for quantifying complex morphological phenotypes can be laborious and subject to intra- and interobserver errors. However, most automated landmarking methods, despite their efficiency and consistency, fall short on highly variable samples because of the bias introduced by the use of a single template. We introduce a fast, open-source automated landmarking pipeline (MALPACA) that utilizes multiple templates to accommodate large-scale variation. We also introduce a K-means method for choosing the templates that can be used in conjunction with MALPACA when no prior information for selecting templates is available. Our results confirm that MALPACA significantly outperforms single-template methods in landmarking both single- and multi-species samples. K-means-based template selection also avoids choosing the worst set of templates, compared with random template selection. We further offer an example of a post-hoc quality check for each individual template for further refinement. In summary, MALPACA is an efficient and reproducible method that can accommodate large morphological variability, such as that commonly found in evolutionary studies. To support the research community, we have developed open-source, user-friendly software tools for performing K-means multi-template selection and MALPACA.
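As an illustration of the K-means template-selection idea, the following sketch clusters specimens in a generic feature space and takes the specimen nearest each cluster centroid as a template, so the template set spans the sample's morphological variation. The feature representation and function names are assumptions, not MALPACA's actual implementation.

```python
# Sketch: pick n_templates specimens by K-means, taking the specimen
# closest to each centroid so templates cover the shape variation.
import numpy as np
from sklearn.cluster import KMeans

def select_templates(features, n_templates):
    """features: (n_specimens, n_dims) array; returns template indices."""
    km = KMeans(n_clusters=n_templates, n_init=10, random_state=0).fit(features)
    templates = []
    for c in range(n_templates):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        templates.append(int(members[np.argmin(dists)]))  # nearest real specimen
    return templates
```

Choosing a real specimen per cluster, rather than the centroid itself, guarantees every template is an actual, landmarkable image while still spreading the templates across the clusters of variation.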