
Title: FETCH: A deep-learning based classifier for fast transient classification
ABSTRACT With the upcoming commensal surveys for Fast Radio Bursts (FRBs), and their high candidate rate, the use of machine learning algorithms for candidate classification is a necessity. Such algorithms will also play a pivotal role in sending real-time triggers for prompt follow-ups with other instruments. In this paper, we have used the technique of transfer learning to train state-of-the-art deep neural networks for classification of FRB and Radio Frequency Interference (RFI) candidates. These are convolutional neural networks which take radio frequency-time and dispersion measure-time images as inputs. We trained these networks using simulated FRBs and real RFI candidates from telescopes at the Green Bank Observatory. We present 11 deep learning models, each with an accuracy and recall above 99.5 per cent on our test data set comprising real RFI and pulsar candidates. As we demonstrate, these algorithms are telescope and frequency agnostic and are able to detect all FRBs with signal-to-noise ratios above 10 in ASKAP and Parkes data. We also provide an open-source Python package, fetch (Fast Extragalactic Transient Candidate Hunter), for classification of candidates using our models. Using fetch, these models can be deployed alongside any commensal search pipeline for real-time candidate classification.
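The frequency-time and dispersion measure-time inputs described above can be illustrated with a minimal dedispersion sketch. This is illustrative only, not fetch's actual preprocessing; the band edges, array shapes, and DM trials below are assumptions:

```python
import numpy as np

def dedisperse(data, dm, freqs_ghz, tsamp_s):
    """Shift each frequency channel by the cold-plasma dispersion delay.

    data: (n_chan, n_samp) frequency-time array
    dm: dispersion measure in pc cm^-3
    freqs_ghz: channel centre frequencies in GHz
    tsamp_s: sampling time in seconds
    """
    f_ref = freqs_ghz.max()
    # Delay relative to the highest frequency: dt = 4.15 ms * DM * (f^-2 - f_ref^-2)
    delays_s = 4.15e-3 * dm * (freqs_ghz**-2 - f_ref**-2)
    shifts = np.round(delays_s / tsamp_s).astype(int)
    out = np.empty_like(data)
    for i, s in enumerate(shifts):
        out[i] = np.roll(data[i], -s)   # align channel i to the reference channel
    return out

def dm_time(data, dm_trials, freqs_ghz, tsamp_s):
    """Build a DM-time image: one dedispersed, frequency-summed row per DM trial."""
    return np.array([dedisperse(data, dm, freqs_ghz, tsamp_s).sum(axis=0)
                     for dm in dm_trials])
```

A quick sanity check: injecting a dispersed pulse at some true DM and building the DM-time image, the brightest row falls at the trial DM closest to the true one, which is the structure the classifier exploits.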
Authors:
Award ID(s):
1714897
Publication Date:
NSF-PAR ID:
10191190
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
497
Issue:
2
Page Range or eLocation-ID:
1661 to 1674
ISSN:
0035-8711
Sponsoring Org:
National Science Foundation
More Like this
  1. ABSTRACT We conducted a drift-scan observation campaign using the 305-m Arecibo telescope in 2020 January and March when the observatory was temporarily closed during the intense earthquakes and the initial outbreak of the COVID-19 pandemic, respectively. The primary objective of the survey was to search for fast radio transients, including fast radio bursts (FRBs) and rotating radio transients (RRATs). We used the seven-beam ALFA receiver to observe different sections of the sky within the declination region ∼(10°–20°) on 23 nights and collected 160 h of data in total. We searched our data for single-pulse transients, covering dispersion measures up to a maximum of 11 000 pc cm⁻³, at which the dispersion delay across the entire bandwidth is equal to the 13-s transit length of our observations. The analysis produced more than 18 million candidates. Machine learning techniques sorted the radio frequency interference and possibly astrophysical candidates, allowing us to visually inspect and confirm the candidate transients. We found no evidence for new astrophysical transients in our data. We also searched for emission from repeated transient signals, but found no evidence for such sources. We detected single pulses from two known pulsars in our observations, and their measured flux densities are consistent with the expected values. Based on our observations and sensitivity, we estimated the upper limit for the FRB rate to be <2.8 × 10⁵ sky⁻¹ d⁻¹ above a fluence of 0.16 Jy ms at 1.4 GHz, which is consistent with the rates from other telescopes and surveys.
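The survey's maximum DM follows from equating the band-crossing dispersion delay to the 13-s transit length. A minimal sketch using the standard cold-plasma delay formula; the band edges of 1.225 and 1.525 GHz are an assumption (typical ALFA values), not stated in the abstract:

```python
def dispersion_delay_s(dm, f_lo_ghz, f_hi_ghz):
    """Cold-plasma dispersion delay across the band, in seconds:
    dt ~= 4.15 ms * DM * (f_lo^-2 - f_hi^-2), f in GHz, DM in pc cm^-3."""
    return 4.15e-3 * dm * (f_lo_ghz**-2 - f_hi_ghz**-2)

def max_dm_for_transit(transit_s, f_lo_ghz, f_hi_ghz):
    """Largest DM whose band-crossing delay fits within the transit length."""
    return transit_s / (4.15e-3 * (f_lo_ghz**-2 - f_hi_ghz**-2))
```

By construction the two functions are inverses: the delay at `max_dm_for_transit(13.0, ...)` is exactly 13 s, which is the criterion the survey describes.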
  2. Raynal, Ann M.; Ranney, Kenneth I. (Eds.)
    Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise concerns over privacy and are ineffective in the dark. In contrast, RF sensors are non-contact, non-invasive and do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor which transmits a frequency-modulated continuous wave (FMCW) waveform. RF sensors can acquire a unique source of information that is inaccessible to optical or wearable devices: namely, a visual representation of the kinematic patterns of motion via the micro-Doppler signature. Micro-Doppler refers to frequency modulations that appear about the central Doppler shift, which are caused by rotational or vibrational motions that deviate from the principal translational motion. In prior work, we showed that fractal complexity computed from RF data could be used to discriminate signing from daily activities and that RF data could reveal linguistic properties, such as coarticulation.
We have also shown that machine learning can be used to discriminate with 99% accuracy the signing of native Deaf ASL users from that of copysigning (or imitation signing) by hearing individuals. Therefore, imitation signing data is not effective for directly training deep models. But adversarial learning can be used to transform imitation signing to resemble native signing, or, alternatively, physics-aware generative models can be used to synthesize ASL micro-Doppler signatures for training deep neural networks. With such approaches, we have achieved over 90% recognition accuracy on 20 ASL signs. In natural environments, however, near real-time implementations of classification algorithms are required, as well as an ability to process data streams in a continuous and sequential fashion. In this work, we focus on extensions of our prior work towards this aim, and compare the efficacy of various approaches for embedding deep neural networks (DNNs) on platforms such as a Raspberry Pi or Jetson board. We examine methods for optimizing the size and computational complexity of DNNs for embedded micro-Doppler analysis, methods for network compression, and their resulting sequential ASL recognition performance.
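The micro-Doppler signature described above is, in essence, the short-time Fourier transform of the slow-time radar return: a central Doppler line from bulk motion with sidebands from vibrational or rotational micro-motion. A minimal sketch; every number here (sample rate, Doppler shift, modulation) is invented for illustration, not taken from the paper:

```python
import numpy as np

def micro_doppler_signature(x, win=128, hop=32):
    """Magnitude short-time Fourier transform of a slow-time radar return.
    Rows are Doppler bins (zero-centred via fftshift), columns are frames."""
    window = np.hanning(win)
    frames = [np.fft.fftshift(np.fft.fft(x[s:s + win] * window))
              for s in range(0, len(x) - win + 1, hop)]
    return np.abs(np.array(frames)).T

# Illustrative return: a constant radial velocity (central Doppler shift)
# plus a small sinusoidal limb motion (micro-Doppler phase modulation).
fs = 1000.0                                   # slow-time sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_body = 60.0                                 # central Doppler shift, Hz (assumed)
micro = 1.0 * np.sin(2 * np.pi * 3.0 * t)     # vibrational phase modulation
x = np.exp(1j * (2 * np.pi * f_body * t + micro))
sig = micro_doppler_signature(x)
```

The resulting image, with its central line and modulation structure, is the kind of two-dimensional input the classification networks consume.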
  3. ABSTRACT

    We present four new fast radio bursts discovered in a search of the Parkes 70-cm pulsar survey data archive for dispersed single pulses and bursts. We searched dispersion measures (DMs) ranging between 0 and 5000 pc cm⁻³ with the HEIMDALL and FETCH detection and classification algorithms. All four of the fast radio bursts (FRBs) discovered have significantly larger widths (>50 ms) than almost all of the FRBs detected and catalogued to date. The large pulse widths are not dominated by interstellar scattering or dispersive smearing within channels. One of the FRBs has a DM of 3338 pc cm⁻³, the largest measured for any FRB to date. These are also the earliest FRBs recorded by any radio telescope so far, predating the Lorimer Burst by almost a decade. Our results suggest that pulsar survey archives remain important sources of previously undetected FRBs and that searches for FRBs on time-scales extending beyond ∼100 ms may reveal the presence of a larger population of wide-pulse FRBs.
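The claim that the large widths are not dominated by intra-channel dispersive smearing can be checked against the standard smearing formula. A minimal sketch; the channel width and centre frequency used in the test are placeholders, not the survey's actual configuration:

```python
def channel_smearing_ms(dm, chan_width_mhz, centre_freq_ghz):
    """Intra-channel dispersive smearing time, in ms:
    t_smear ~= 8.3e-3 ms * DM * dnu_MHz / f_GHz**3, with DM in pc cm^-3."""
    return 8.3e-3 * dm * chan_width_mhz / centre_freq_ghz**3
```

Smearing grows linearly with DM and channel width and steeply with decreasing frequency, which is why low-frequency, high-DM archival searches like this one must be channelized finely for narrow pulses to survive.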

  4. Abstract Radio Frequency Interference (RFI) is an ever-present limiting factor among radio telescopes, even in the most remote observing locations. When looking to retain the maximum amount of sensitivity and reduce contamination for Epoch of Reionization studies, the identification and removal of RFI is especially important. In addition to improved RFI identification, we must also take into account the computational efficiency of the RFI-identification algorithm as radio interferometer arrays such as the Hydrogen Epoch of Reionization Array grow larger in number of receivers. To address this, we present a Deep Fully Convolutional Neural Network (DFCN) that is comprehensive in its use of interferometric data, where both amplitude and phase information are used jointly for identifying RFI. We train the network using simulated HERA visibilities containing mock RFI, yielding a known “ground truth” dataset for evaluating the accuracy of various RFI algorithms. Evaluation of the DFCN model is performed on observations from the 67-dish build-out, HERA-67, and achieves a data throughput of 1.6 × 10⁵ HERA time-ordered 1024-channel visibilities per hour per GPU. We determine that, relative to an amplitude-only network, including visibility phase adds important adjacent time-frequency context which increases discrimination between RFI and non-RFI. The inclusion of phase when predicting achieves a Recall of 0.81, a Precision of 0.58, and an F2 score of 0.75 as applied to our HERA-67 observations.
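The joint use of amplitude and phase can be pictured as stacking both quantities as input channels of a convolutional network. A minimal sketch; the array shapes and the log compression are choices of this illustration, not details taken from the paper:

```python
import numpy as np

def vis_to_network_input(vis):
    """Turn complex visibilities (n_time, n_freq) into a two-channel real
    array (2, n_time, n_freq): channel 0 is log-compressed amplitude,
    channel 1 is phase in radians."""
    amp = np.log1p(np.abs(vis))     # dynamic-range compression (a choice)
    phase = np.angle(vis)           # phase in (-pi, pi]
    return np.stack([amp, phase], axis=0)
```

Feeding both channels lets convolutional filters see phase discontinuities adjacent in time and frequency, which is the extra context the abstract credits for the improved RFI/non-RFI discrimination.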
  5. Previous literature shows that deep learning is an effective tool to decode the motor intent from neural signals obtained from different parts of the nervous system. However, deep neural networks are often computationally complex and not feasible to work in real-time. Here we investigate the advantages and disadvantages of different approaches to enhance the efficiency of the deep learning-based motor decoding paradigm and inform its future implementation in real-time. Our data are recorded from an amputee's residual peripheral nerves. While the primary analysis is offline, the nerve data are cut using a sliding window to create a “pseudo-online” dataset that resembles the conditions in a real-time paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the input data dimensionality, which later helps substantially lower the motor decoder's complexity, making it feasible for translation to a real-time paradigm. Next, we investigate two different strategies for deploying deep learning models: a one-step (1S) approach when big input data are available and a two-step (2S) approach when input data are limited. This research predicts five individual finger movements and four combinations of the fingers. The 1S approach, using a recurrent neural network (RNN) to concurrently predict all fingers' trajectories, generally gives better prediction results than all the machine learning algorithms that do the same task. This result reaffirms that deep learning is more advantageous than classic machine learning methods for handling a large dataset. However, when training on a smaller input data set in the 2S approach, which includes a classification stage to identify active fingers before predicting their trajectories, machine learning techniques offer a simpler implementation while ensuring comparably good decoding outcomes to the deep learning ones.
In the classification step, both machine learning and deep learning models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models yield mean squared error (MSE) and variance accounted for (VAF) scores comparable to those of the 1S approach. Our study outlines the trade-offs to inform the future implementation of a real-time, low-latency, and high-accuracy deep learning-based motor decoder for clinical applications.
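The sliding-window construction of the "pseudo-online" dataset described above can be sketched as follows; the window and step sizes are placeholders, since the abstract does not give the paper's actual values:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Cut a continuous multichannel recording (n_samples, n_channels) into
    overlapping windows (n_windows, win, n_channels), mimicking the buffers
    a real-time decoder would receive one at a time."""
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])
```

Evaluating a decoder window by window, in order and without access to future samples, is what makes an offline analysis a fair proxy for real-time, low-latency operation.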