
Title: Detecting outliers in astronomical images with deep generative networks
ABSTRACT With the advent of future big-data surveys, automated tools for unsupervised discovery are becoming ever more necessary. In this work, we explore the ability of deep generative networks to detect outliers in astronomical imaging data sets. The main advantage of such generative models is that they learn complex representations directly from pixel space. These methods therefore enable us to look for subtle morphological deviations that are typically missed by more traditional moment-based approaches. We use a generative model to learn a representation of the expected data, defined by the training set, and then flag deviations from that representation by searching for the best reconstruction the model can produce for a given object. In this first proof-of-concept work, we apply our method to two different test cases. We first show that, from a set of simulated galaxies, we are able to detect ${\sim}90{{\ \rm per\ cent}}$ of merging galaxies if we train our network only with a sample of isolated ones. We then explore how the presented approach can be used to compare observations and hydrodynamic simulations by identifying observed galaxies that are not well represented in the models. The code used in this work is available at https://github.com/carlamb/astronomical-outliers-WGAN.
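The repository name suggests a WGAN-style implementation. As a hedged illustration of the reconstruction-based scoring described above (this is not the authors' code), the sketch below assumes a generator already trained on "normal" galaxies and optimizes a latent vector so that its output best matches a test image; the residual that remains serves as the outlier score. The generator interface, latent dimensionality, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch of reconstruction-based outlier scoring with a trained
# generative network (illustrative only; not the authors' implementation).
# Assumes `generator` was trained on "normal" galaxy images and maps a
# latent vector z to an image tensor with the same shape as `image`.

import torch


def outlier_score(generator, image, latent_dim=100, steps=500, lr=1e-2):
    """Optimize a latent code so G(z) best reconstructs `image`; the
    residual that remains is used as the outlier score."""
    generator.eval()
    z = torch.randn(1, latent_dim, requires_grad=True)  # random starting point
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        residual = torch.mean((generator(z) - image) ** 2)  # pixel-space mismatch
        residual.backward()
        optimizer.step()

    with torch.no_grad():
        return torch.mean((generator(z) - image) ** 2).item()


# Hypothetical usage: objects like those in the training set (isolated galaxies)
# reconstruct well, while unseen morphologies (e.g. mergers) leave a large residual.
# scores = [outlier_score(G, img.unsqueeze(0)) for img in test_images]
```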
Award ID(s):
1816330
Publication Date:
NSF-PAR ID:
10234815
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
496
Issue:
2
Page Range or eLocation-ID:
2346 to 2361
ISSN:
0035-8711
Sponsoring Org:
National Science Foundation
More Like this
  1. There are significant disparities between the conferring of science, technology, engineering, and mathematics (STEM) bachelor’s degrees to minoritized groups and the number of STEM faculty that represent minoritized groups at four-year predominantly White institutions (PWIs). Studies show that as of 2019, African American faculty at PWIs have increased by only 2.3% in the last 20 years. This study explores the ways in which this imbalance affects minoritized students in engineering majors. Our research objective is to describe the ways in which African American students navigate their way to success in an engineering program at a PWI where the minoritized faculty representation is less than 10%. In this study, we define success as completion of an undergraduate degree and matriculation into a Ph.D. program. Research shows that African American students struggle with feeling like the “outsider within” in graduate programs and that the engineering culture can permeate from undergraduate to graduate programs. We address our research objective by conducting interviews using navigational capital as our theoretical framework, which can be defined as resilience, academic invulnerability, and skills. These three concepts come together to denote the journey of an individual as they achieve success in an environment not created with them in mind. Navigational capital has been applied in education contexts to study minoritized groups, and specifically in engineering education to study the persistence of students of color. Research on navigational capital often focuses on how participants acquire resources from others. There is a limited focus on the experience of the student as the individual agent exercising their own navigational capital. Drawing from and adapting the framework of navigational capital, this study provides rich descriptions of the lived experiences of African American students in an engineering program at a PWI as they navigated their way to academic success in a system that was not designed with them in mind. This pilot study took place at a research-intensive, land grant PWI in the southeastern United States. We recruited two students who identify as African American and are in the first year of their Ph.D. program in an engineering major. Our interview protocol was adapted from a related study about student motivation, identity, and sense of belonging in engineering. After transcribing interviews with these participants, we began our qualitative analysis with a priori coding, drawing from the framework of navigational capital, to identify the experiences, connections, involvement, and resources the participants tapped into as they maneuvered their way to success in an undergraduate engineering program at a PWI. To identify other aspects of the participants’ experiences that were not reflected in that framework, we also used open coding. The results showed that the participants tapped into their navigational capital when they used experiences, connections, involvement, and resources to be resilient, academically invulnerable, and skillful. They learned from experiences (theirs or others’), capitalized on their connections, positioned themselves through involvement, and used their resources to achieve success in their engineering program. The participants identified their experiences, connections, and involvement. For example, one participant who came from a blended family (African American and White) drew from the experiences she had with her blended family.
Her experiences helped her to understand the cultures of Black and White people. She was able to turn that into a skill to connect with others at her PWI. The point at which she took her familial experiences to use as a skill to maneuver her way to success at a PWI was an example of her navigational capital. Another participant capitalized on his connections to develop academic invulnerability. He was able to build his connections by making meaningful relationships with his classmates. He knew the importance of having reliable people to be there for him when he encountered a topic he did not understand. He cultivated an environment through relationships with classmates that set him up to achieve academic invulnerability in his classes. The participants spoke least about how they used their resources. The few mentions of resources were not distinct enough to make any substantial connection to the factors that denote navigational capital. The participants spoke explicitly about the PWI culture in their engineering department. From open coding, we identified the theme that participants did not expect to have role models in their major who looked like them and went into their undergraduate experience with the understanding that they would be the distinct minority in their classes. They did not make notable mention of how a lack of minority faculty affected their success. Upon acceptance, they took on the challenge of being a racial minority in exchange for a well-recognized degree they felt would have more value compared to engineering programs at other universities. They identified ways they maneuvered around their expectation that they would not have representative role models through their use of navigational capital. Integrating knowledge from the framework of navigational capital and its existing applications in engineering and education allows us the opportunity to learn from African American students who have succeeded in engineering programs with low minority faculty representation. The future directions of this work are to outline strategies that could enhance the path of minoritized engineering students towards success and to lay a foundation for understanding the use of navigational capital by minoritized students in engineering at PWIs. Students at PWIs can benefit from understanding their own navigational capital to help them identify ways to successfully navigate educational institutions. Students’ awareness of their capacity to maintain high levels of achievement, their connections to networks that facilitate navigation, and their ability to draw from experiences to enhance resilience provide them with the agency to unleash the invisible factors of their potential to be innovators in their collegiate and work environments.
  2. ABSTRACT Astronomers have typically set out to solve supervised machine learning problems by creating their own representations from scratch. We show that deep learning models trained to answer every Galaxy Zoo DECaLS question learn meaningful semantic representations of galaxies that are useful for new tasks on which the models were never trained. We exploit these representations to outperform several recent approaches at practical tasks crucial for investigating large galaxy samples. The first task is identifying galaxies of similar morphology to a query galaxy. Given a single galaxy assigned a free text tag by humans (e.g. ‘#diffuse’), we can find galaxies matching that tag for most tags. The second task is identifying the most interesting anomalies to a particular researcher. Our approach is 100 per cent accurate at identifying the most interesting 100 anomalies (as judged by Galaxy Zoo 2 volunteers). The third task is adapting a model to solve a new task using only a small number of newly labelled galaxies. Models fine-tuned from our representation are better able to identify ring galaxies than models fine-tuned from terrestrial images (ImageNet) or trained from scratch. We solve each task with very few new labels; either one (for the similarity search) or several hundred (for anomaly detection or fine-tuning). This challenges the longstanding view that deep supervised methods require new large labelled data sets for practical use in astronomy. To help the community benefit from our pretrained models, we release our fine-tuning code zoobot. Zoobot is accessible to researchers with no prior experience in deep learning.
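As a loose illustration of the similarity-search task described in the record above (this is not the zoobot API; the array names and the cosine-similarity choice are assumptions), one can rank galaxies by how close their precomputed representation vectors lie to the query galaxy's vector:

```python
# Minimal sketch of representation-based similarity search (illustrative;
# not the zoobot API). Assumes `features` is an (N, D) array of galaxy
# representation vectors produced by a pretrained model, and `query_idx`
# indexes the single labelled query galaxy.

import numpy as np


def most_similar(features, query_idx, k=10):
    """Return indices of the k galaxies whose representations are closest
    (by cosine similarity) to the query galaxy's representation."""
    unit = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = unit @ unit[query_idx]           # cosine similarity to the query
    ranked = np.argsort(-sims)              # most similar first
    return ranked[ranked != query_idx][:k]  # drop the query itself
```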
  3. Abstract Motivation

    Modeling the structural plasticity of protein molecules remains challenging. Most research has focused on obtaining one biologically active structure. This includes the recent AlphaFold2 that has been hailed as a breakthrough for protein modeling. Computing one structure does not suffice to understand how proteins modulate their interactions and even evade our immune system. Revealing the structure space available to a protein remains challenging. Data-driven approaches that learn to generate tertiary structures are increasingly garnering attention. These approaches exploit the ability to represent tertiary structures as contact or distance maps and make direct analogies with images to harness convolution-based generative adversarial frameworks from computer vision. Since such opportunistic analogies do not allow capturing highly structured data, current deep models struggle to generate physically realistic tertiary structures.

    Results

    We present novel deep generative models that build upon the graph variational autoencoder framework. In contrast to existing literature, we represent tertiary structures as ‘contact’ graphs, which allow us to leverage graph-generative deep learning. Our models are able to capture rich, local and distal constraints and additionally compute disentangled latent representations that reveal the impact of individual latent factors. This elucidates what the factors control and makes our models more interpretable. Rigorous comparative evaluation along various metrics shows that the models we propose advance the state of the art. While there is still much ground to cover, the work presented here is an important first step, and graph-generative frameworks promise to get us to our goal of unraveling the exquisite structural complexity of protein molecules.

    Availability and implementation

    Code is available at https://github.com/anonymous1025/CO-VAE.

    Supplementary information

    Supplementary data are available at Bioinformatics Advances online.

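The record above represents tertiary structures as 'contact' graphs. As a hedged sketch of such a representation (not the authors' preprocessing; the 8 Å Cα cutoff is a common convention assumed here, not taken from the paper), an adjacency matrix can be derived directly from backbone coordinates:

```python
# Minimal sketch of building a 'contact' graph from a protein's C-alpha
# coordinates (illustrative preprocessing; not the authors' pipeline).
# `coords` is an (L, 3) array of C-alpha positions; the 8 angstrom cutoff
# is a common convention, assumed here.

import numpy as np


def contact_graph(coords, cutoff=8.0):
    """Return a boolean (L, L) adjacency matrix: residues i and j are in
    contact if their C-alpha atoms lie within `cutoff` angstroms."""
    diff = coords[:, None, :] - coords[None, :, :]   # pairwise displacements
    dist = np.linalg.norm(diff, axis=-1)             # (L, L) distance map
    adjacency = (dist < cutoff) & ~np.eye(len(coords), dtype=bool)
    return adjacency
```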
  4. Obeid, I.; Selesnik, I.; Picone, J. (Eds.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors ranging from low-end consumer grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems, and routinely employ multiple GPUs to accelerate the training process. Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment – performance metrics such as error rates should be identical and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) The same job run on the same processor should produce the same results each time it is run. (2) A job run on a CPU and GPU should produce identical results. (3) A job should produce comparable results if the data is presented in a different order. System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is a difficult task to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation. They can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research are dependent on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, it becomes a challenging problem to compare two algorithms since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can be used to mitigate these effects, but this is also computationally expensive. These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining an historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
The overall impact of all of these issues described above is significant as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but that is expensive since you need to do multiple runs over the data, which further taxes a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]. Large-scale experiments are simply not feasible without using GPUs. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, this adds an element of randomness into the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA’s cuDNN implementation provides algorithms that increase the performance and help the model train more quickly, but they are non-deterministic algorithms [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment because it gives us a hint of how our model is performing per experiment, and whether the changes we make are effective. In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default. TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution for this is seeding all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment. Other variables can affect the outcome of the experiment such as training using GPUs, allowing multi-threading on CPUs, using certain layers, etc. To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data from the last experiment to make sure the newer experiment follows the same order. If we allow the data to be shuffled, it can affect the performance due to how the model was exposed to the data. We also specify the float data type to be 32-bit since Python defaults to 64-bit. We try to avoid using 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision somewhat reduces differences due to computational noise even though technically it increases the amount of computational noise. We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
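As a minimal sketch of the seeding steps this record describes (illustrative only; the poster targets TensorFlow 1.x, where `tf.set_random_seed` plays the role of `tf.random.set_seed`, and the deterministic-ops environment variable is a commonly used option in newer TensorFlow releases rather than something taken from the poster), the setup might look like:

```python
# Minimal sketch of seeding the RNGs TensorFlow may consult and pinning
# 32-bit floats, as described in the record above (illustrative only).

import os
import random

import numpy as np
import tensorflow as tf


def make_reproducible(seed=1337):
    """Seed Python, NumPy, and TensorFlow RNGs, pin float32 precision,
    and request deterministic GPU kernels where the framework supports it."""
    os.environ["PYTHONHASHSEED"] = str(seed)   # Python hash randomization
    os.environ["TF_DETERMINISTIC_OPS"] = "1"   # ask for deterministic cuDNN kernels
                                               # (recent TF releases; an assumption here)
    random.seed(seed)                          # Python's built-in RNG
    np.random.seed(seed)                       # NumPy RNG (e.g. data shuffling, init)
    tf.random.set_seed(seed)                   # TensorFlow op-level RNG
    tf.keras.backend.set_floatx("float32")     # pin 32-bit floats explicitly
```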
  5. Xu, Jinbo (Ed.)
    Abstract Motivation

    Expanding our knowledge of small molecules beyond what is known in nature or designed in wet laboratories promises to significantly advance cheminformatics, drug discovery, biotechnology and material science. In silico molecular design remains challenging, primarily due to the complexity of the chemical space and the non-trivial relationship between chemical structures and biological properties. Deep generative models that learn directly from data are intriguing, but they have yet to demonstrate interpretability in the learned representation, so that we can learn more about the relationship between the chemical and biological space. In this article, we advance research on disentangled representation learning for small molecule generation. We build on recent work by us and others on deep graph generative frameworks, which capture atomic interactions via a graph-based representation of a small molecule. The methodological novelty is how we leverage the concept of disentanglement in the graph variational autoencoder framework both to generate biologically relevant small molecules and to enhance model interpretability.

    Results

    Extensive qualitative and quantitative experimental evaluation in comparison with state-of-the-art models demonstrates the superiority of our disentanglement framework. We believe this work is an important step to address key challenges in small molecule generation with deep generative frameworks.

    Availability and implementation

    Training and generated data are made available at https://ieee-dataport.org/documents/dataset-disentangled-representation-learning-interpretable-molecule-generation. All code is made available at https://anonymous.4open.science/r/D-MolVAE-2799/.

    Supplementary information

    Supplementary data are available at Bioinformatics online.
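As a hedged illustration of the graph-based molecule representation this record builds on (RDKit is an assumed dependency here, not necessarily the authors' tooling), a SMILES string can be turned into node features plus an adjacency matrix as follows:

```python
# Minimal sketch of a graph representation for a small molecule
# (illustrative; RDKit is an assumed dependency). Nodes carry atomic
# numbers, edges follow the molecule's bonds.

import numpy as np
from rdkit import Chem


def molecule_to_graph(smiles):
    """Return (node_features, adjacency) for a SMILES string, where
    node_features[i] is atom i's atomic number and adjacency is the
    bond-connectivity matrix."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles!r}")
    node_features = np.array([atom.GetAtomicNum() for atom in mol.GetAtoms()])
    adjacency = Chem.GetAdjacencyMatrix(mol)
    return node_features, adjacency


# Hypothetical usage: features, adj = molecule_to_graph("CCO")  # ethanol
```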