Title: Pulsar Double Lensing Sheds Light on the Origin of Extreme Scattering Events
Abstract

In extreme scattering events, the brightness of a compact radio source drops significantly, as light is refracted out of the line of sight by foreground plasma lenses. Despite recent efforts, the nature of these lenses has remained a puzzle, because any roughly round lens would be so highly overpressurized relative to the interstellar medium that it could only exist for about a year. This, combined with a lack of constraints on distances and velocities, has led to a plethora of theoretical models. We present observations of a dramatic double-lensing event in pulsar PSR B0834+06 and use a novel phase-retrieval technique to show that the data can be reproduced remarkably well with a two-screen model: one screen with many small lenses and another with a single, strong one. We further show that the latter lens is so strong that it would inevitably cause extreme scattering events. Our observations show that the lens moves slowly and is highly elongated on the sky. If similarly elongated along the line of sight, as would arise naturally from a sheet of plasma viewed nearly edge-on, no large overpressure is required and hence the lens could be long-lived.
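For orientation, a standard plasma-lensing relation (a textbook scaling, not a result specific to this paper) makes the overpressure argument concrete: the refractive bending angle is chromatic and is set by the transverse gradient of the free-electron column density $N_e$,

$$\alpha(\lambda) \simeq \frac{\lambda^{2} r_e}{2\pi}\left|\nabla_{\perp} N_e\right|,$$

where $r_e$ is the classical electron radius. A measured bending angle therefore constrains the column $N_e = \int n_e\,\mathrm{d}l$ rather than the density $n_e$ itself, so if the lens is elongated along the line of sight by a factor $\eta$, the required $n_e$, and hence the plasma pressure, drops by roughly that same factor; this is why an edge-on sheet can avoid the overpressure problem faced by a roughly round lens.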

 
Award ID(s): 2009759
NSF-PAR ID: 10422354
Author(s) / Creator(s):
Publisher / Repository: DOI PREFIX: 10.3847
Date Published:
Journal Name: The Astrophysical Journal
Volume: 950
Issue: 2
ISSN: 0004-637X
Format(s): Medium: X; Size: Article No. 109
Sponsoring Org: National Science Foundation
More Like this
  1. ABSTRACT

    We describe how gravitational lensing of fast radio bursts (FRBs) is affected by a plasma screen in the vicinity of the lens or somewhere between the source and the observer. Wave passage through a turbulent medium affects gravitational image magnification, lensing probability (particularly for strong magnification events), and the time delay between images. The magnification is suppressed because of the broadening of the angular size of the source due to scattering by the plasma. The time delay between images is modified as a result of different dispersion measures (DM) along the photon trajectories of different images. Each of the image light curves is also broadened by wave scattering, so the images could have distinct temporal profiles. The first two effects are most severe for stellar- and sub-stellar-mass lenses, and the last one (scatter broadening) for lenses and plasma screens at cosmological distances from the source/observer. This could limit the use of FRBs to measure the cosmic abundance of such lenses. On the other hand, when the time delay between images is large, such that the light curve of a transient source has two or more well-separated peaks, the different DMs along the wave paths of different images can probe density fluctuations in the IGM on scales ≲ 10⁻⁶ rad and explore the patchy reionization history of the universe using lensed FRBs at high redshifts. Different rotation measures (RM) along the two image paths can convert linearly polarized radiation from a source to partial circular polarization.
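As a rough guide to the size of the dispersive contribution (the standard cold-plasma relation, not a result derived in this paper), a ray with dispersion measure DM is delayed by

$$\Delta t_{\rm DM} \approx 4.15\ {\rm ms}\left(\frac{\rm DM}{\rm pc\,cm^{-3}}\right)\left(\frac{\nu}{\rm GHz}\right)^{-2},$$

so two lensed images whose paths differ by $\Delta\mathrm{DM}$ acquire a frequency-dependent relative delay of this form, in addition to the usual geometric and Shapiro delays of gravitational lensing.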

     
  2. ABSTRACT

    We report on the low Galactic latitude (b = 4.3°) quasar 2005+403, the second active galactic nucleus in which we have detected the rare phenomenon of multiple imaging induced by refractive-dominated scattering. The manifestation of this propagation effect is revealed at different frequencies (≲ 8 GHz) and epochs of Very Long Baseline Array (VLBA) observations. The pattern formed by anisotropic scattering is stretched out along the line of constant Galactic latitude at a local position angle PA ≈ 40°, showing 1–2 sub-images, often on either side of the core. Analysing the multifrequency VLBA data ranging from 1.4 to 43.2 GHz, we found that both the angular size of the apparent core component and the separation between the primary and secondary core images follow a wavelength-squared dependence, providing convincing evidence for a plasma-scattering origin of the multiple imaging. Based on the Owens Valley Radio Observatory long-term monitoring data at 15 GHz obtained for 2005+403, we identified characteristic flux density excursions that occurred in 2019 April and May and attributed them to an extreme scattering event (ESE) associated with the passage of a plasma lens across the line of sight. Modelling the ESE, we determined that the angular size of the screen is 0.4 mas and that it drifts with a proper motion of 4.4 mas yr⁻¹. Assuming that the scattering screen is located in the highly turbulent Cygnus region, the transverse linear size and speed of the lens with respect to the observer are 0.7 au and 37 km s⁻¹, respectively.
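A quick back-of-the-envelope check (not taken from the paper) shows how the quoted angular quantities map onto the linear size and speed once a screen distance is assumed; the value of ~1.75 kpc used here is an illustrative choice consistent with the Cygnus region:

```python
# Convert the fitted angular size and proper motion of the plasma lens into a
# linear size and transverse speed for an assumed screen distance.
theta_mas = 0.4        # angular size of the lens (mas)
mu_mas_per_yr = 4.4    # proper motion of the lens (mas/yr)
distance_pc = 1750.0   # assumed screen distance (pc), roughly the Cygnus region

# Small-angle relation: 1 arcsec at 1 pc subtends 1 au.
size_au = theta_mas * 1e-3 * distance_pc

# 1 au/yr = 4.74 km/s, so v [km/s] = 4.74 * mu [arcsec/yr] * D [pc].
speed_km_s = 4.74 * mu_mas_per_yr * 1e-3 * distance_pc

print(f"linear size ~ {size_au:.1f} au, transverse speed ~ {speed_km_s:.0f} km/s")
# -> ~0.7 au and ~37 km/s, matching the values quoted above
```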

     
  3. Abstract Strong gravitational lensing of gravitational wave sources offers a novel probe of both the lens galaxy and the binary source population. In particular, the strong lensing event rate and the time-delay distribution of multiply imaged gravitational-wave binary coalescence events can be used to constrain the mass distribution of the lenses as well as the intrinsic properties of the source population. We calculate the strong lensing event rate for a range of second- (2G) and third-generation (3G) detectors, including Advanced LIGO/Virgo, A+, Einstein Telescope (ET), and Cosmic Explorer (CE). For 3G detectors, we find that ∼0.1% of observed events are expected to be strongly lensed. We predict detections of ∼1 lensing pair per year with A+, and ∼50 pairs per year with ET/CE. These rates are highly sensitive to the characteristic galaxy velocity dispersion, σ*, implying that observations of the rates will be a sensitive probe of lens properties. We explore using the time-delay distribution between multiply imaged gravitational-wave sources to constrain properties of the lenses. We find that 3G detectors would constrain σ* to ∼21% after 5 yr. Finally, we show that the presence or absence of strong lensing within the detected population provides useful insights into the source redshift and mass distribution out to redshifts beyond the peak of the star formation rate, which can be used to constrain formation channels and their relation to the star formation rate and delay-time distributions for these systems.
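The steep dependence of the rates on σ* follows from a standard strong-lensing scaling (a textbook relation rather than a derivation from this paper): for a singular isothermal sphere the Einstein radius is

$$\theta_E = 4\pi\,\frac{\sigma^{2}}{c^{2}}\,\frac{D_{ls}}{D_s},$$

so the lensing cross-section scales as $\theta_E^{2} \propto \sigma^{4}$, and the optical depth obtained by integrating over a Schechter-like velocity-dispersion function is correspondingly a steep function of the characteristic dispersion σ*.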
  4. Obeid, Iyad ; Selesnick, Ivan ; Picone, Joseph (Ed.)
    The Temple University Hospital Seizure Detection Corpus (TUSZ) [1] has been in distribution since April 2017. It is a subset of the TUH EEG Corpus (TUEG) [2] and the most frequently requested corpus from our 3,000+ subscribers. It was recently featured as the challenge task in the Neureka 2020 Epilepsy Challenge [3]. A summary of the development of the corpus is shown below in Table 1. The TUSZ Corpus is a fully annotated corpus, which means every seizure event that occurs within its files has been annotated. The data is selected from TUEG using a screening process that identifies files most likely to contain seizures [1]. Approximately 7% of the TUEG data contains a seizure event, so it is important we triage TUEG for high-yield data. One hour of EEG data requires approximately one hour of human labor to complete annotation using the pipeline described below, so it is important from a financial standpoint that we accurately triage data. A summary of the labels being used to annotate the data is shown in Table 2. Certain standards are put into place to optimize the annotation process while not sacrificing consistency. Due to the nature of EEG recordings, some records start off with a segment of calibration. This portion of the EEG is instantly recognizable and transitions from what resembles lead artifact to a flat line on all the channels. For the sake of seizure annotation, the calibration is ignored, and no time is wasted on it. During the identification of seizure events, a hard “3 second rule” is used to determine whether two events should be combined into a single larger event. This greatly reduces the time that it takes to annotate a file with multiple events occurring in succession. In addition to the required minimum 3 second gap between seizures, part of our standard dictates that no seizure shorter than 3 seconds be annotated. Although there is no universally accepted definition for how long a seizure must be, we find that it is difficult to distinguish with confidence between burst suppression and other morphologically similar impressions when the event is only a couple of seconds long. This is due to several reasons, the most notable being the lack of evolution, which is oftentimes crucial for the determination of a seizure. After the EEG files have been triaged, a team of annotators at NEDC is provided with the files to begin data annotation. An example of an annotation is shown in Figure 1. A summary of the workflow for our annotation process is shown in Figure 2. Several passes are performed over the data to ensure the annotations are accurate. Each file undergoes three passes to ensure that no seizures were missed or misidentified. The first pass of TUSZ involves identifying which files contain seizures and annotating them using our annotation tool. The time it takes to fully annotate a file can vary drastically depending on the specific characteristics of each file; however, on average a file containing multiple seizures takes 7 minutes to fully annotate. This includes the time that it takes to read the patient report as well as traverse through the entire file. Once an event has been identified, the start and stop time for the seizure is stored in our annotation tool. This is done on a channel-by-channel basis, resulting in an accurate representation of the seizure spreading across different parts of the brain. Files that do not contain any seizures take approximately 3 minutes to complete.
Even though there is no annotation being made, the file is still carefully examined to make sure that nothing was overlooked. In addition to solely scrolling through a file from start to finish, a file is often examined through different lenses. Depending on the situation, low-pass filters are used, as well as increasing the amplitude of certain channels. These techniques are never used in isolation and are meant to further increase our confidence that nothing was missed. Once each file in a given set has been looked at once, the annotators start the review process. The reviewer checks a file and comments on any changes that they recommend. This takes about 3 minutes per seizure-containing file, which is significantly less time than the first pass. After each file has been commented on, the third pass commences. This step takes about 5 minutes per seizure file and requires the reviewer to accept or reject the changes that the second reviewer suggested. Since tangible changes are made to the annotation using the annotation tool, this step takes a bit longer than the previous one. Assuming 18% of the files contain seizures, a set of 1,000 files takes roughly 127 work hours to annotate. Before an annotator contributes to the data interpretation pipeline, they are trained for several weeks on previous datasets. A new annotator can be trained using data that resembles what they would see under normal circumstances. An additional benefit of using released data to train is that it serves as a means of constantly checking our work. If a trainee stumbles across an event that was not previously annotated, it is promptly added, and the data release is updated. It takes about three months to train an annotator to a point where their annotations can be trusted. Even though we carefully screen potential annotators during the hiring process, only about 25% of the annotators we hire survive more than one year doing this work. To ensure that the annotators are consistent in their annotations, the team conducts an interrater agreement evaluation periodically to ensure that there is a consensus within the team. The annotation standards are discussed in Ochal et al. [4]. An extended discussion of interrater agreement can be found in Shah et al. [5]. The most recent release of TUSZ, v1.5.2, represents our efforts to review the quality of the annotations for two upcoming challenges we hosted: an internal deep learning challenge at IBM [6] and the Neureka 2020 Epilepsy Challenge [3]. One of the biggest changes that was made to the annotations was the imposition of a stricter standard for determining the start and stop time of a seizure. Although evolution is still included in the annotations, the start times were altered to start when the spike-wave pattern becomes distinct as opposed to merely when the signal starts to shift from background. This cuts down on background that was mislabeled as a seizure. For seizure end times, all post-ictal slowing that was included was removed. The recent release of v1.5.2 did not include any additional data files, apart from two EEG files that were corrupted in v1.5.1 but could be recovered and added for the latest release. The progression from v1.5.0 to v1.5.1 and later to v1.5.2 included the re-annotation of all of the EEG files in order to develop a confident dataset regarding seizure identification. Starting with v1.4.0, we have also developed a blind evaluation set that is withheld for use in competitions.
The annotation team is currently working on the next release for TUSZ, v1.6.0, which is expected to occur in August 2020. It will include new data from 2016 to mid-2019. This release will contain 2,296 files from 2016 as well as several thousand files representing the remaining data through mid-2019. In addition to files that were obtained with our standard triaging process, a part of this release consists of EEG files that do not have associated patient reports. Since actual seizure events are in short supply, we are mining a large chunk of data for which we have EEG recordings but no reports. Some of this data contains interesting seizure events collected during long-term EEG sessions or data collected from patients with a history of frequent seizures. It is being mined to increase the number of files in the corpus that have at least one seizure event. We expect v1.6.0 to be released before IEEE SPMB 2020. The TUAR Corpus is an open-source database that is currently available for use by any registered member of our consortium. To register and receive access, please follow the instructions provided at this web page: https://www.isip.piconepress.com/projects/tuh_eeg/html/downloads.shtml. The data is located here: https://www.isip.piconepress.com/projects/tuh_eeg/downloads/tuh_eeg_artifact/v2.0.0/. 
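The “3 second rule” described above amounts to two simple interval operations, merging nearby events and discarding very short ones. A minimal sketch of that post-processing logic in Python (a hypothetical helper, not part of the NEDC annotation tools):

```python
def apply_three_second_rule(events, min_gap=3.0, min_dur=3.0):
    """Merge seizure events separated by less than min_gap seconds and
    drop events shorter than min_dur seconds.

    events: list of (start, stop) times in seconds, sorted by start time.
    """
    merged = []
    for start, stop in events:
        if merged and start - merged[-1][1] < min_gap:
            # Gap shorter than 3 s: combine with the previous event.
            merged[-1] = (merged[-1][0], max(merged[-1][1], stop))
        else:
            merged.append((start, stop))
    # Discard any event shorter than 3 s after merging.
    return [(s, e) for s, e in merged if e - s >= min_dur]

# Example: two bursts 2 s apart are merged; an isolated 1.5 s burst is dropped.
print(apply_three_second_rule([(10.0, 15.0), (17.0, 25.0), (40.0, 41.5)]))
# -> [(10.0, 25.0)]
```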
  5.
    ABSTRACT A promising route for revealing the existence of dark matter structures on mass scales smaller than the faintest galaxies is through their effect on strong gravitational lenses. We examine the role of local, lens-proximate clustering in boosting the lensing probability relative to contributions from substructure and unclustered line-of-sight (LOS) haloes. Using two cosmological simulations that can resolve halo masses of Mhalo ≃ 10⁹ M⊙ (in a simulation box of length $L_{\rm box}{\sim }100\, {\rm Mpc}$) and 10⁷ M⊙ ($L_{\rm box}\sim 20\, {\rm Mpc}$), we demonstrate that clustering in the vicinity of the lens host produces a clear enhancement relative to an assumption of unclustered haloes that persists to $\gt 20\, R_{\rm vir}$. This enhancement exceeds estimates that use a two-halo term to account for clustering, particularly within $2-5\, R_{\rm vir}$. We provide an analytic expression for this excess, clustered contribution. We find that local clustering boosts the expected count of 10⁹ M⊙ perturbing haloes by $\sim \! 35{{\ \rm per\ cent}}$ compared to substructure alone, a result that will significantly enhance expected signals for low-redshift (z_l ≃ 0.2) lenses, where substructure contributes substantially compared to LOS haloes. We also find that the orientation of the lens with respect to the line of sight (e.g. whether the line of sight passes through the major axis of the lens) can have a significant effect on the lensing signal, boosting counts by an additional $\sim 50{{\ \rm per\ cent}}$ compared to random orientations. This could be important if discovered lenses are biased to be oriented along their principal axis.
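For context, the "two-halo term" baseline that the measured enhancement exceeds is the standard clustering prediction (written here in its usual textbook form, not the paper's own analytic expression): the expected number density of perturbing haloes at distance r from the lens host is

$$n(r) = \bar{n}\left[1 + b_{\rm host}\,b_{\rm pert}\,\xi_{\rm lin}(r)\right],$$

where $\bar{n}$ is the mean field-halo abundance, the $b$ factors are linear biases, and $\xi_{\rm lin}$ is the linear matter correlation function; the simulations show that the actual boost within a few virial radii of the host exceeds this prediction.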